
Title: Forging Uncharted Horizons: Navigating the Risks of AI

Author(s): EPW Engage

Source: Economic and Political Weekly (Engage)

ISSN (Online): 2349-8846

Published by: Economic and Political Weekly (Engage)

Article URL: https://www.epw.in/engage/article/forging-uncharted-horizons-navigating-risks-ai

Author(s) Affiliation: Curated by Tiya Singh [tiya@epw.in]

Articles published in EPW Engage are web exclusive.

Forging Uncharted Horizons: Navigating the Risks of AI
EPW Engage

Abstract: With its problem-solving capabilities, Artificial Intelligence (AI) could very well be the unrivalled solution to the most intractable problems of our time. However, AI may not yet be the icing on the cake we were looking for: there are significant constraints that prevent it from being a perfect solution to every possible problem, and the debate on the ethics of artificial intelligence remains ridden with complexities.

Artificial Intelligence (AI) has the potential to radically alter almost all spheres of human life. It can process vast amounts of data quickly and make predictions or decisions that would otherwise be all but impossible for humans to reach in a reasonable amount of time. AI thus holds the potential to substantially improve efficiency, accuracy, and safety in many fields, beyond what human capabilities alone allow. But is that enough?

AI also poses several dangers, such as the potential for bias and discrimination if the data used to train AI systems, data produced by humans themselves, is not diverse or representative of all populations. There is also the risk of job loss as AI takes over tasks previously performed by humans. Additionally, there are widespread concerns about AI being used for malicious purposes, such as cyber attacks, weapons development, and the spread of misinformation.
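
To make the training-data mechanism concrete, here is a minimal, hypothetical sketch in Python (not from the article; every name and number in it is an illustrative assumption). A crude score-threshold classifier is fitted on examples drawn from one population only, and its accuracy then falls for a second population whose scores relate to outcomes differently:

import random

random.seed(0)

def sample(group, label, n=200):
    # Toy data: the score level that signals a positive outcome differs
    # across groups, so a rule fitted on group A alone misreads group B.
    centre = {("A", 1): 8.0, ("A", 0): 2.0,
              ("B", 1): 5.0, ("B", 0): 1.0}[(group, label)]
    return [(random.gauss(centre, 1.0), label) for _ in range(n)]

# Unrepresentative training set: examples drawn from group A only.
train = sample("A", 1) + sample("A", 0)

# Fit the simplest possible rule: scores above the training mean count as positive.
threshold = sum(score for score, _ in train) / len(train)

def accuracy(data):
    return sum((score > threshold) == bool(label) for score, label in data) / len(data)

print("accuracy on group A:", accuracy(sample("A", 1) + sample("A", 0)))  # ~0.99
print("accuracy on group B:", accuracy(sample("B", 1) + sample("B", 0)))  # ~0.75

The point of the sketch is not the classifier, which is deliberately simple, but the data: no amount of modelling sophistication recovers patterns in a population that the training sample never represented.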

To maximise the benefits of AI while minimising the risks, it is important for stakeholders such as business leaders, technologists, and policymakers, and for society as a whole, to consider the ethical implications of AI and to ensure that it is developed and used responsibly.

AI and human intelligence: Are they the same?

Artificial intelligence and human intelligence are two distinct types of intelligence, each with its own strengths and limitations. One of the key differences between artificial and human intelligence is the way they learn: AI systems learn statistical patterns from large volumes of training data, whereas humans learn from relatively few examples, drawing on context, experience, and social understanding. AI systems attempt to replicate forms of human intelligence, but even though AI can process vast amounts of information quickly and accurately, it comes with limitations of its own.

Risks posed by AI

Fears that AI will take over all human jobs can be misleading. As Adiraj Chakraborty noted in his 2019 letter to EPW, ‘Artificial vs Human Intelligence’: “Most of the jobs that we can think of require a multiplicity of skills. From technical expertise to intuitive mastery, jobs generally require a host of cognitive and socio-behavioural skills that are interconnected.”

An EPW Engage article titled ‘Interrogating the AI Hype: A Situated Politics of Machine Learning in Indian Healthcare’, authored by Radhika Radhakrishnan, highlights the need to prioritise a more human-centred approach to technology development and deployment, so that the benefits of technological advancements are equitable and accessible. She warns that AI can exacerbate existing inequalities or perpetuate the profit-driven interests of private corporations at the expense of marginalised communities. She suggests that “instead of asking, ‘How can AI solve this problem?’ it is most worthwhile to ask ‘What problems can AI solve?’ This has at least two advantages: one, it makes us focus on using AI-based technologies to solve only those problems that are within the scope of technology to solve, and two, it incentivises us to find need-based solutions as opposed to market-driven capitalist solutions to identified problems.”

Others question the permeation of AI even into creative spheres such as art. Neerej Dev and Vipula P C, in their 2023 letter ‘Artificial Intelligence and Art’, show that there is great scope to philosophically investigate and “question the ethics of using artificial intelligence for commercial purposes without giving proper credit to the original creators.”

Suchana Seth, in her 2017 article ‘Machine Learning and Artificial Intelligence Interactions with the Right to Privacy’, explains, through the example of algorithms being used “increasingly by state and corporate actors for surveillance that falls well outside the domain of law enforcement,” the very real challenges posed by artificial intelligence to human privacy. In a similar vein, Gaurav Pathak, in his 2022 letter titled ‘Clearview AI: Dangers to Privacy from the Private Sector’, exemplifies this in the context of Clearview AI, a United States (US)-based company that downloaded over 10 billion facial images from the internet. He writes, “a citizen can challenge the use of Clearview AI or any other facial recognition technology by the state, but what remedies does one have against the existence of a private database such as the Clearview AI? The Supreme Court in the Pegasus case (Manohar Lal Sharma v Union of India and Others, Writ Petition (Crl) No 314 of 2021, Order dated 27 October 2021) has said ‘the right to privacy is directly infringed when there is surveillance or spying done on an individual, either by the State or by any external agency.’ It even cautioned by saying, ‘it is undeniable that surveillance and the knowledge that one is under the threat of being spied on can affect the way an individual decides to exercise his or her rights. Such a scenario might result in self-censorship.’”

On similar lines, Tripti Bhushan, in her 2023 letter titled ‘Negative Impact of ChatGPT in Research’, wrote about the dangers posed by the newly introduced technology of generative AI, such as ChatGPT. She contends, “ChatGPT is capable of generating highly convincing responses to questions and prompts, which could make it difficult for users to discern between accurate and inaccurate information.” Further, ChatGPT can disrupt the creation of independent and original scholarship and give more space for plagiarism to arise. Bhushan adds, “The [ChatGPT] model has the ability to generate responses that are highly convincing and could be used to manipulate individuals or spread propaganda. This could have serious implications for democracy and could potentially harm individuals and communities. For example, if a political organisation were to use ChatGPT to generate responses that were designed to influence voters, this could lead to a breach of trust and could potentially harm the democratic process.”

Additionally, large language models such as ChatGPT could exacerbate the concentration of information in the hands of the few with the resources to develop and deploy such models. This trend goes against the principle of decentralisation, under which power and access to information are distributed across broader networks.

Conclusion

These examples point to the need to tread forward with caution. Artificial intelligence needs to be imagined in a way that does not override ethical norms; it should be more inclusive; and we should aspire to make data and algorithms more accountable by enabling transparency and instituting safeguards in their creation and deployment.

References:

Bhushan, Tripti (2023): ‘Negative Impact of ChatGPT in Research’, Economic and Political Weekly.

Chakraborty, Adiraj (2019): ‘Artificial vs Human Intelligence’, Economic and Political Weekly.

Dev, Neerej and Vipula P C (2023): ‘Artificial Intelligence and Art’, Economic and Political Weekly.

Pathak, Gaurav (2022): ‘Clearview AI: Dangers to Privacy from the Private Sector’, Economic and Political Weekly.

Radhakrishnan, Radhika: ‘Interrogating the AI Hype: A Situated Politics of Machine Learning in Indian Healthcare’, EPW Engage.

Seth, Suchana (2017): ‘Machine Learning and Artificial Intelligence Interactions with the Right to Privacy’, Economic and Political Weekly.

Read more:

Artificial Intelligence: Uncritical View | R Narasimhan, 1979

Who Is Responsible When Technology Fails the Marginalised? | Sakina Dhorajiwala, 2020
Technology or Ideology? | A Amarender Reddy, 2021

Do Machine Learning Techniques Provide Better Macroeconomic Forecasts? | Sabyasachi Kar, Amaani Bashir and Mayank Jain, 2022
