Artificial Intelligence


The biggest threat of AI is a matter of debate, but some of the most commonly cited concerns include:

 Job loss: As AI becomes more sophisticated, it is likely to automate many tasks that are
currently performed by humans. This could lead to widespread job losses, particularly in low-
skill and repetitive jobs.
 Bias and discrimination: AI systems can be biased, either due to the data they are trained on
or the way they are programmed. This could lead to discrimination against certain groups of
people on the basis of race, gender, or sexual orientation.
 Privacy: AI systems can collect and analyze vast amounts of data about people. This data could
be used to track people's movements, monitor their online activity, or even predict their future
behavior. This could raise serious privacy concerns.
 Security: AI systems could be hacked or used to launch cyberattacks. This could have a
devastating impact on critical infrastructure, financial systems, or even national security.
 Weaponization: AI could be used to develop autonomous weapons that could kill without
human intervention. This raises the specter of a future war in which machines fight machines,
with potentially catastrophic consequences for humanity.
These are just some of the potential threats posed by AI. It is important to note that these threats are not
inevitable. With careful planning and responsible development, we can mitigate many of these risks.
However, it is also important to be aware of the potential dangers of AI so that we can take steps to
prevent them from becoming a reality.

In addition to the threats listed above, there are several other potential risks associated with AI, such as:

 Lack of transparency: AI systems can be complex and difficult to understand. This lack of
transparency could make it difficult to assess the risks of AI systems and hold them
accountable for their actions.
 Existential risk: Some experts believe that AI could eventually become so intelligent that it
poses an existential threat to humanity. This is a very speculative risk, but it should not be
ignored.
The future of AI is uncertain, but it is clear that this technology has the potential to both benefit and harm
humanity. It is important to be aware of the potential risks of AI so that we can take steps to mitigate
them and ensure that AI is used for good.

From morning to night, going about our everyday routines, AI technology drives much of what we do.
When we wake, many of us reach for our mobile phone or laptop to start our day. Doing so has become
automatic, and integral to how we function in terms of our decision-making, planning and information-
seeking.

Artificial Intelligence (AI) has become a transformative force across various
industries, revolutionizing the way we live and work. From improving
healthcare to optimizing transportation, AI's capabilities are awe-inspiring.
However, amid the advancements, it is crucial to recognize and comprehend
the potential dangers that accompany this powerful technology. This blog
sheds light on the importance of understanding AI risks and explores five
potential dangers of artificial intelligence.

Before we dive further in, let us understand what Artificial Intelligence
(AI) is.

What is Artificial Intelligence (AI)?


Artificial Intelligence (AI) refers to the simulation of human intelligence in
machines that are programmed to think, learn, and perform tasks that typically
require human intelligence. It involves the development of algorithms and
models that enable computers to analyse data, recognize patterns, and make
decisions based on the information processed. AI encompasses a wide range
of technologies, including machine learning, natural language processing,
computer vision, and robotics. The goal of AI is to create systems that can
perform complex tasks autonomously, adapt to new situations, and
continuously improve their performance over time. From virtual assistants and
recommendation systems to autonomous vehicles and medical diagnoses, AI
is transforming various industries and impacting our daily lives in remarkable
ways. As research and development in AI continue to advance, the
possibilities for its applications are vast and ever-expanding.
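To make the idea of "learning from data" concrete, here is a minimal, hypothetical sketch in Python (using scikit-learn and invented toy data) of a model that learns a pattern from labeled examples and then applies it to new cases; it is an illustration of the concept, not any particular production system:

```python
# A minimal sketch of the "learn patterns from data" idea described above,
# using scikit-learn and made-up toy data (hours studied, hours slept -> pass/fail).
from sklearn.linear_model import LogisticRegression

# Toy training data: [hours_studied, hours_slept] and whether the student passed.
X_train = [[1, 4], [2, 8], [6, 7], [8, 5], [3, 6], [9, 8]]
y_train = [0, 0, 1, 1, 0, 1]

model = LogisticRegression()
model.fit(X_train, y_train)          # "learning": fit parameters to the examples

# "Prediction": apply the learned pattern to unseen cases.
print(model.predict([[7, 6], [1, 9]]))   # e.g. [1 0]
```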

Learn About the 5 Potential Dangers of Artificial Intelligence
1. Security Threats and Privacy Concerns
As AI evolves, it brings unprecedented capabilities, but these advancements also
introduce new challenges, particularly in terms of security and privacy. The
integration of AI often involves processing vast amounts of personal and sensitive
data, raising concerns about data breaches, unauthorized access, and potential
misuse. The sophistication of AI systems can make them vulnerable to
cyberattacks, where attackers exploit vulnerabilities to compromise data integrity
and confidentiality. Ensuring robust cybersecurity measures, stringent data
protection protocols, and constant monitoring are imperative to safeguard against
AI-driven security threats and uphold individual privacy in our increasingly
interconnected digital landscape.
2. Lack of Accountability and Transparency
AI models can be complex and challenging to interpret, leading to a lack of
transparency in decision-making processes. This "black box" nature of AI can
make it difficult to hold AI systems accountable for their actions, especially in
critical applications like autonomous vehicles or healthcare diagnosis. Researchers
and policymakers must work together to develop explainable AI techniques to
enhance transparency and accountability in AI systems.
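As one illustration of such an explainability technique, the sketch below estimates permutation importance: shuffle each input feature in turn and measure how much the model's accuracy drops. The data and model are placeholders invented for this example:

```python
# Illustrative sketch of permutation importance, one simple explainability
# technique: shuffle one input feature at a time and see how much accuracy drops.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # placeholder data, 3 features
y = (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)  # feature 0 matters most by design

model = RandomForestClassifier(random_state=0).fit(X, y)
baseline = model.score(X, y)

for j in range(X.shape[1]):
    X_perm = X.copy()
    rng.shuffle(X_perm[:, j])                  # destroy feature j's information
    drop = baseline - model.score(X_perm, y)
    print(f"feature {j}: accuracy drop {drop:.3f}")  # bigger drop = more important
```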
3. Bias and Discrimination
Artificial Intelligence (AI) ushers in innovation, enabling machines to learn, reason,
and decide. Yet bias and discrimination within AI systems pose a significant
challenge.
AI learns from data, recognizing patterns and making predictions. But data carries
biases - historical, systemic, skewed - affecting AI outcomes. For instance, facial
recognition algorithms show racial and gender biases, with far-reaching
consequences.
Addressing bias requires curated data, diverse teams, and transparent AI models.
Ethical guidelines from developers, policymakers, and stakeholders are essential.
Transparency in AI decisions, ongoing monitoring, and bias rectification are vital.
Unchecked bias in AI could perpetuate inequality. As we tap AI's potential,
combating bias isn't just ethical; it's essential for AI to truly benefit humanity.
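As a concrete, hedged example of the monitoring mentioned above, the following sketch computes one simple fairness metric, the demographic parity gap, on invented predictions; real audits use richer data and several metrics:

```python
# Illustrative fairness check: compare a model's positive-outcome rate across
# groups (demographic parity). Predictions and group labels are placeholders.
import numpy as np

preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # model decisions (1 = approve)
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

for g in np.unique(group):
    rate = preds[group == g].mean()
    print(f"group {g}: positive rate {rate:.2f}")

# A large gap between groups is a red flag worth investigating, though which
# metric matters (parity, equalized odds, ...) depends on the application.
gap = abs(preds[group == "A"].mean() - preds[group == "B"].mean())
print(f"demographic parity gap: {gap:.2f}")
```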
4. Unintended Consequences and Ethical Challenges
The advancement of AI raises profound ethical dilemmas. As AI systems make
autonomous decisions, they may lead to unintended consequences with serious
implications. For example, self-driving cars may face ethical dilemmas in life-or-
death situations. Addressing these challenges requires a collective effort involving
technologists, ethicists, policymakers, and society as a whole.
5. AI Superintelligence and Regulation Issues
While we are still exploring the innumerable ways in which AI can aid
today's fast-paced world, concerns about superintelligent AI, capable of
surpassing human intelligence, have been raised.
develop, the risk of losing control over them becomes a significant concern.
Thoughtful research, policies, and ethical guidelines must be established now to
address these hypothetical scenarios and prevent any future AI regulatory
problems.
Conclusion
As AI continues to reshape the world, understanding its potential risks
becomes a fundamental responsibility. By acknowledging and addressing
these dangers, we can harness AI's potential for the greater good while
minimizing the negative impacts. Emphasizing ethics, accountability,
transparency, and inclusivity in AI development and deployment will be crucial
to creating a safer and more beneficial AI-powered future.

Remember, AI is a tool that humans wield, and it is essential to use this tool
responsibly for the collective well-being of humanity.
Major threats from AI misuse
The researchers described three ways in which AI could threaten human existence.

First, AI could increase opportunities for the manipulation of people. In China and 75 other countries,
governments are expanding AI-based surveillance. AI rapidly cleans, organizes, and analyses
massive amounts of personal data, including video content captured by cameras deployed in public
places. While this could serve good uses, such as countering acts of terrorism, on the downside this
data could contribute to the rise in polarization and extremist views.

AI could help create an enormous and powerful personalized marketing infrastructure to manipulate
consumer behavior and generate commercial revenue for social media. Experimental evidence
suggests that in the 2016 United States presidential election, political parties used AI tools to
manipulate political beliefs and voter behavior.

China's Social Credit System analyses people's financial transactions, police records, and social
relationships to produce evaluations of individual behavior, and on that basis automatically denies
them access to banking and insurance services, levies fines, and prevents them from traveling or
sending their children to schools.

AI has wide applications in military and defense systems, and advancements in the area of Lethal
Autonomous Weapon Systems (LAWS) raise a second threat. These autonomous weapons could
locate, visually recognize, and 'aim at' human targets with no human control over them, which
makes them novel and lethal weapons of mass destruction.

Disruptive elements, like terrorist organizations, could cheaply mass-produce LAWS, which come
in all sizes and forms, and set them up to kill at mass scale. For instance, it is feasible to equip
millions of quadcopter drones, small, mobile devices, with explosives, visual identification, and
autonomous navigation abilities, and program them to kill without human supervision.

Thirdly, the extensive deployment of AI technology-driven tools could result in the loss of tens to
hundreds of millions of jobs in the coming decades. However, the widespread deployment of AI tools would
depend largely on policy decisions by governments and society and the pace of development of AI,
robotics, and other complementing technologies.
Nonetheless, the impact of AI-driven automation would be worst for people engaged in lower-skilled
jobs in low and middle-income countries (LMICs). Eventually, it would not spare the upper segments
of the skill-ladder of the global workforce, including those inhabiting high-income countries.


For many decades, humans have envisioned and pursued machines that are more intelligent,
conscious, and powerful than ourselves. This pursuit has led to the concept of artificial general
intelligence (AGI): theoretical AI-based machines that learn, intuitively improve their own code,
and start developing their own purposes.

Once such machines are connected to the real world, e.g., via robots, weapons, vehicles, or digital
systems, it becomes difficult to envision or predict the effects and outcomes of AGI with any
certainty. Yet, deliberately or not, these machines could harm and subjugate humans. Accordingly,
in a recent survey conducted among members of the AI community, 18% of the participants raised
concerns that AGI development could be existentially catastrophic.

Furthermore, the authors highlighted that while AI holds the potential to revolutionize healthcare by
improving diagnostics and helping to develop new treatments, some of its applications could be
detrimental. For instance, most AI systems are trained on datasets in which populations subject to
discrimination are under-represented.

Due to this incomplete and biased dataset, an AI-driven pulse oximeter overestimated blood oxygen
levels in patients with darker skin. Similarly, facial recognition systems misclassify the gender of
darker-skinned subjects.
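A simple way to surface this kind of failure is to report accuracy per subgroup rather than only overall, as the pulse-oximeter example suggests. The sketch below uses invented numbers purely for illustration:

```python
# Illustrative audit of per-group error: a model can look acceptable overall
# while failing one subgroup badly. All numbers below are invented.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
skin_tone = np.array(["light"] * 5 + ["dark"] * 5)

print("overall accuracy:", (y_true == y_pred).mean())
for g in np.unique(skin_tone):
    mask = skin_tone == g
    acc = (y_true[mask] == y_pred[mask]).mean()
    print(f"{g}: accuracy {acc:.2f}")   # here the 'dark' subgroup fares far worse
```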

Conclusions
The medical and public health community should raise the alarm about the risks and threats posed
by AI, similar to how the International Physicians for the Prevention of Nuclear War presented
evidence-based arguments about the threat of nuclear war.
Another possible intervention could be to ensure adequate checks and balances for AI, which
requires strengthening public-interest organizations and democracy. With such safeguards in place,
AI could fulfill its promise to benefit humanity and society.

Most importantly, there is a need for evidence-based advocacy for a radical makeover of social and
economic policies over the coming decades, so that we can begin preparing future generations to
live in a world where human labor is no longer needed for the production of goods and services
because AI will have dramatically changed the world of work and employment.

Artificial intelligence (AI) is poised to enhance productivity and innovation around the
world. The expected benefits promise to be transformative, but the negative
repercussions could be magnified in developing countries, where the livelihoods of
many people are precarious and social institutions can be fragile.

AI’s influence will be widespread because it can be integrated with other technologies
and applied to almost any activity that involves information and communication
technologies. In an effort to improve understanding of how to ethically and equitably
implement AI in the development context, IDRC has published the white paper Artificial
intelligence and human development. It outlines the potential benefits and risks of this
new technology and presents a proactive research agenda to address challenges posed
by AI that are of particular concern in the developing world.

Learn more about AI and read concrete research recommendations in the full IDRC
white paper: Artificial intelligence and human development: Toward a research
agenda (PDF, 36.6MB).

Read more on our work in AI in a featured UNESCO blog post.

AI and the potential for development


AI is an area of computer science dedicated to creating software that can be taught to
perform complex procedures. What makes AI “intelligent” is that it can learn new
behaviours, improve performance as more experience is gained, and make decisions
and predictions based on available data. The algorithms at the core of some AI systems
are trained using the large datasets that are now available thanks to the “big data”
revolution. It is the intelligent capabilities of AI systems that allow for the automation of
tasks that until now required human judgement to deliver.
There is enormous potential for how AI can benefit the developing world and what it can
contribute towards achieving the UN’s Sustainable Development Goals:

Healthcare
AI can play a crucial role in augmenting healthcare capacity by filling gaps in human
expertise, increasing productivity, and enhancing disease surveillance.

Agriculture
AI applications can provide critical insights and solutions to improve the efficiency and
quality of agricultural activities. AI is already being used to support water management
in the Middle East and drought monitoring in Africa. Farmers in Uganda have access to
mobile phone-based tools for the automated identification and tracking of crop
infestations.

Economic development
AI has the potential to drive growth through innovation, increased productivity, and the
optimization of business processes. Start-ups in several African countries are building a
variety of innovative businesses based on AI systems. Other companies are using AI-
enabled mobile phone platforms to provide access to financial services for hundreds of
millions of Africans who either do not or cannot access these services through
traditional banks.

Education and training


AI techniques can be used to support the roles of teachers, tutors, and administrators
by providing personalized learning opportunities at scale, such as the intelligent tutoring
systems currently under development in India.

Greater government efficiency and transparency


The delivery of government services and information could also be improved using AI
systems — ideally to maximize social returns while minimizing financial cost. AI systems
can provide automated access in multiple languages and dialects, and high-level
decision-making could be enhanced by automating complex assessments that
incorporate a range of technical, organizational, and social factors.

Recognizing AI’s risks and challenges


As with any widely adopted technology — especially one as powerful and potentially
pervasive as AI — the benefits come with risks that must be managed and mitigated.
Many examples have already been documented of these risks playing out in real-world
applications.

Exacerbating societal biases


AI systems can reflect societal biases and assumptions held by their designers or
inherent in the datasets on which their core algorithms are trained. Using biased
systems to automate decision-making processes could amplify the impact of these
biases by systematically producing results that disadvantage particular individuals and
groups, especially those who are marginalized. As an example, a computer program
used in the U.S. to assess the risk of re-offense by individuals in the criminal justice
system was shown to flag black defendants as high risk nearly twice as often as white
defendants.

Threatening privacy
The use of AI algorithms can supercharge the capacity for surveillance, and thereby
threaten privacy. For example, by integrating AI-powered facial recognition software,
closed-circuit TV systems can track individuals as they move through the urban
landscape. Such blanket monitoring is concerning and could be exploited both socially
and politically. Unchecked surveillance has the potential to erode privacy, which is key
to other fundamental rights such as freedom of expression and association.

Loss of jobs
With the growing use of machine learning and AI systems in nearly all sectors of the
economy, widespread automation will extend beyond manufacturing to impact
knowledge-based roles. Many of these positions can be partly or entirely automated,
reducing the need for human workers. Estimates of the extent of job losses due to
automation vary greatly, but it is expected that the pace of change will be rapid, giving
societies and governments limited time to adjust.

Fake news and misinformation


In a highly connected world reliant on online sources of information, misinformation is a
genuine and growing threat to stability and democracy. By capitalizing on the vast
amounts of personal data collected via social media, AI applications can facilitate and
automate far-reaching propaganda and behavioural manipulation campaigns. The 2016
U.S. presidential election has become a notorious example of the role of targeted
misinformation.

As the above examples illustrate, there is potential for AI to increase inequality and
generate economic disruption, social unrest, and even political instability in the
developing world. While these risks are only beginning to be addressed by even the
most advanced countries and economies, the developing world faces particular
contextual challenges. The governance and regulation of technology are rudimentary in
many countries in the Global South. Communication and digital services infrastructure is
lacking in lower-income and rural areas, and there is a general skills shortage for the
development and deployment of AI applications. Furthermore, a significant
comprehension gap between social scientists, policymakers, NGOs, and those with
technical understanding of AI will make addressing these challenges problematic.
Big names including Steve Wozniak and Elon Musk warn that the race to develop AI systems is out of control, and urge a
six-month pause

A group of artificial intelligence (AI) experts and executives have banded together to urge a six-month pause in
developing systems more advanced than OpenAI's newly launched GPT-4.
The call came in an open letter which cited the risks to society, and big name tech luminaries have added their
signature to the letter.
This includes Steve Wozniak, co-founder of Apple; Elon Musk, CEO of SpaceX, Tesla and Twitter; researchers at
DeepMind; AI heavyweight Yoshua Bengio (often referred to as one of the “godfathers of AI”); and Professor Stuart
Russell, a pioneer of research in the field.

AI pause
However, there were some notable omissions from the letter's signatories, including Alphabet
CEO Sundar Pichai; Microsoft CEO Satya Nadella; and OpenAI chief executive Sam Altman, who earlier this
month publicly stated he was a “little bit scared” of artificial intelligence technology and how it could affect the
workforce, elections and the spread of disinformation.
Interest in AI has surged recently thanks to the popularity of AI chatbots like OpenAI’s ChatGPT, and Google’s Bard,
driving oversight concerns about the tech.

Earlier this month the US Chamber of Commerce called for the regulation of artificial intelligence technology.
And this week the UK government set out its own plans for ‘adaptable’ regulations to govern AI systems.
The open letter makes clear the alarm about advanced AI felt among high-level experts, academics and
executives.
“AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by
extensive research and acknowledged by top AI labs,” the letter states.

“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask
ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate
away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually
outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”
“Such decisions must not be delegated to unelected tech leaders,” the letter states. “Powerful AI systems should be
developed only once we are confident that their effects will be positive and their risks will be manageable.”
“Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful
than GPT-4,” the letter stated. “This pause should be public and verifiable, and include all key actors. If such a pause
cannot be enacted quickly, governments should step in and institute a moratorium.”



“Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now
enjoy an ‘AI summer’ in which we reap the rewards, engineer these systems for the clear benefit of all, and give
society a chance to adapt,” said the letter.

“Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let’s
enjoy a long AI summer, not rush unprepared into a fall.”

Previous warnings
Elon Musk and Steve Wozniak, along with others including the late Professor Stephen Hawking, have warned about the
dangers of AI in previous years.
Indeed Professor Hawking warned artificial intelligence could spell the end of life as we know it on Planet Earth.
Professor Hawking also predicted that humanity has just 100 years left before the machines take over.
Musk meanwhile was a co-founder of OpenAI – though he resigned from the board of the organisation years ago.

However Musk previously stated he believes AI poses a real threat to humans if unchecked, and in 2014 tweeted that
artificial intelligence could evolve to be “potentially more dangerous than nukes”.

In 2015 Musk donated $10 million to the Future of Life Institute (FLI) – a non-profit organisation dedicated to
weighing up the potential of AI technology to benefit humanity.
Likewise Apple’s Steve Wozniak predicted eight years ago that in the future, the world will be controlled by AI and
that robots will treat humans as their pets.
Cat already out of the bag?
Some experts have questioned whether it is realistic to halt the development of even more advanced AI systems and
technology.

“Although the development of generative AI is going faster and has a broader impact than expected, it’s naive to
believe the development of generative AI can be stopped or even paused,” noted Frederik Mennes, director product
management & business strategy at cybersecurity specialist OneSpan.

“There is now a geopolitical element,” said Mennes. “If development would be stopped in the US, other regions will
simply catch up and try to take the lead.”
“If there is a need for regulation, it can be developed in parallel,” Mennes concluded. “But the development of the
technology is not going to wait until regulation has caught up. It’s just a reality that technology develops and
regulation catches up.”

Two risks
Another expert highlighted what he feels are the two main risks associated with advanced AI technology, but also
feels the cat is already out of the bag.

“The interesting thing about this letter is how diverse the signers and their motivations are,” noted Dan Shiebler, head
of machine learning at cloud security specialist Abnormal Security.

“Elon Musk has been pretty vocal that he believes AGI (computers figuring out how to make themselves better and
therefore exploding in capability) to be an imminent danger, whereas AI sceptics like Gary Marcus are clearly coming
to this letter from a different angle,” said Shiebler.

“In my mind, technologies like LLMs (large language models) present two types of serious and immediate risks,” said
Shiebler.

“The first is that the models themselves are powerful tools for spammers to flood the internet with low quality content
or for criminals to uplevel their social engineering scams,” said Shiebler. “At Abnormal we have designed our
cyberattack detection systems to be resilient to these kinds of next-generation commoditised attacks.”

“The second is that the models are too powerful for businesses to not use, but too unpolished for businesses to use
safely,” said Shiebler. “The tendency of these models to hallucinate false information or fail to calibrate their own
certainty poses a major risk of misinformation.”

“Furthermore, businesses that employ these models risk cyberattackers injecting malicious instructions into their
prompts,” said Shiebler. “This is a major new security risk that most businesses are not prepared to deal with.”

“Personally, I don’t think this letter will achieve much,” Shiebler concluded. “The cat is out of the bag on these
models. The limiting factor in generating them is money and time, and both of these will fall rapidly. We need to
prepare businesses to use these models safely and securely, not try to stop the clock on their development.”
As the debate on whether Artificial Intelligence (AI) is more of a threat than a technological
advancement came to a head, US President Joe Biden issued a wide-ranging Executive
Order on October 30 clearly warning against the dangers posed by AI, particularly in the
areas of the development of devastating weapons and cyber attacks.
The order went on to define the framework of strong intervention by the Federal
government — should it become necessary for ensuring AI Safety, Security and Trust — by
way of monitoring companies in the business of developing AI products and taking even
coercive action under the old Defense Production Act to compel abandonment of a given
initiative of the enterprise. This move has been made also in the context of the government
being a key funder of scientific research in the US.
The major threats associated with AI range from breach of data privacy to interference in
elections, from destruction of critical infrastructure particularly in the energy sector to
pandemic preparedness and from misuse of biological material to creating instruments of
repression. American administration officials have talked of Chinese-sponsored research
devoted to exploring ways of disrupting critical communications infrastructure between the
US and Asia.
It was clear, judging from the fact that social media had already become an instrument of
combat, that AI, representing the ultimate advance of IT, would tempt countries and players
to develop it for newer versions of 'proxy wars' as well as for weapon automatisation.
Interestingly, AI-based deepfakes – designed, among other things, to influence the
electorate, defame leaders and dignitaries and create public discord – are already
operating in the Indian context.
Algorithmic bias, caused by the misuse of data, can create socio-economic inequality,
market volatility and communal friction.
Geoffrey Hinton, known for his work on machine learning and neural networks, has reportedly
talked of dangers from AI, while Elon Musk, along with many other tech leaders, has warned
in an open letter that uncontrolled AI experiments could pose 'profound risks to society and
humanity'.
Sam Altman, CEO of OpenAI, wanted policymakers to ensure that regulations 'promoted
innovation and did not strangle it'.
India is at the forefront of global efforts to facilitate the legitimate growth of AI for the
larger good of the world without letting it produce any destructive fallout. In the run-up to
the G20 Summit in Delhi last September, the security establishment was able to bank on
the Facial Recognition System relying on AI-based cameras. AI was an important talk point
at the Delhi G20 meet.
The potential of generative AI in making an economic impact on the world was examined,
and some concerns were raised about possible job losses, especially in white-collar jobs,
resulting from AI applications.
It can be argued, however, that businesses can use AI to improve efficiency and
productivity without cutting jobs – efficiency being one measure of productivity.
The need for global AI regulations will be an important matter on the agenda of the next
G20 in Brazil. Anti-competitive policies and big tech monopolies are other points that would
come up there.
The Delhi G20 summit discussed how to harness AI for economic development while
protecting human rights and suggested that there is a need for global oversight of this
rapidly evolving technology.
Prime Minister Narendra Modi and US President Biden had bilateral talks on the eve of the
G20 summit at the Prime Minister’s residence on September 8 to expand cooperation
between the two countries in emerging domains like AI and Space. This has brought Indo-
US relations to an entirely new level of strategic friendship.
Earlier in July this year, the Indo-US Science and Technology Forum (IUSSTF) designed a
programme of Indo-US engagement for Technology Partnership for the Future with a new
direction and investment of energy in AI and with an endowment of $2 million. The
programme will work for the joint development and commercialisation of AI.
India believes that it can steer clear of the risks and use the programme to promote social
well-being in areas such as health care, agriculture and climate change.
More recently, AI figured prominently in the talks between US President Biden and President
Xi Jinping of China held in San Francisco on November 15 during the APEC conference.
Just as IT applications made human processes more efficient with computerisation, AI
makes IT applications smarter – being smart means being able to produce more per unit of
resource, whether the resource is money, manpower or time.
Indian industries are well prepared to use AI – technological services, finance and retail are
switching over to an AI regime faster than sectors like media & communications, education
and transport, but India on the whole is moving in that direction more speedily than many
other nations.
AI is the ability of a system or a program to think and learn from experience. Machine
learning and deep learning techniques are based on vast volumes of data being analysed
instantly for making intelligent decisions or solving specific problems. AI technology helps to
improve relationships with customers and their loyalty to a brand. For this, the browsing
history, preferences and interests of customers are all collated and analysed through AI tools.
Natural Language Processing is used to make conversation as human and personal as
possible.
Chatbots improve the user experience, and learning material can be accessed through
voice assistants like Alexa and Siri. Robots powered by AI use real-time updates to carry
goods, clean offices and manage inventory.
For financial companies, AI is helpful in reducing credit card fraud by examining the user's
patterns. AI can identify fake reviews. People leverage the strength of AI because the work
they need to carry out on a daily basis is increasing all the time.
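As a rough sketch of how such fraud screening can work (the model choice and data here are assumptions for illustration, not a description of any deployed system), an anomaly detector can flag transactions that deviate from a user's usual pattern:

```python
# Sketch of the fraud-detection idea mentioned above: flag transactions that
# deviate from a user's usual pattern. Model choice and data are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Typical transactions: [amount, hour of day], clustered around daily habits.
normal = np.column_stack([rng.normal(40, 10, 200), rng.normal(14, 2, 200)])
detector = IsolationForest(contamination=0.02, random_state=1).fit(normal)

new_txns = np.array([[45, 15],      # ordinary purchase
                     [900, 3]])     # large amount at 3 a.m. -> suspicious
print(detector.predict(new_txns))   # 1 = looks normal, -1 = flagged as anomaly
```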
Mention should be made of AI helping to create a rich learning experience by generating
audio and video summaries and integrated lesson plans. AI can help analyse chronic conditions
and lab data to ensure early diagnosis.
Apart from personal usage, facial recognition is now a widely used AI application in high-
security areas in several industries. AI identifies illegal access. Machine learning allows for
the detection of even minor abnormalities and can point to anything that is wrong with the
system.
AI, however, still has to advance a lot in areas such as language processing, creativity,
problem-solving, comprehension of subtleties and evaluation of human output. AI is
essentially a computer process – it is basically adding layers of 1s and 0s amazingly fast.
Binary calculations do not create a 'soul', and as an analyst said, the 'within' of the
computer and the human is not the same.
Analytical thinking, human consciousness and emotional intelligence place the human brain
on a different footing, one not available to computers.
At the end of the day, AI works on pattern reading and word recognition and is governed by
the input-output principle.
AI can improve business growth and expansion, advance the cause of education and health,
reform governance with particular reference to the implementation of welfare schemes,
make organisational working more transparent and ethical, and help to enrich
everybody's lifestyle.
Exercise of human judgement will be necessary in crucial areas ranging from dispensing
justice to the responsibility of pressing the nuclear button.
Incidentally, nuclear deterrence is based on the principle that whereas the launch of a
nuclear missile would be a conscious decision, the response ensuring total annihilation of
the opponent could be fully automated.
In the final analysis, it is to be realised that any technology, because of its public profile,
can be used for the good of humanity but is also available to malcontents ranging
from non-state actors like terrorists to ruthless dictators who think of nothing
beyond their 'personal' power.
President Biden's initiative in issuing an executive order for the benefit of national and
global security has come at the right time, with India fully sharing the hopes and concerns
that AI presents to the world.
Prime Minister Modi has called for expanding 'ethical' AI and stipulated that, even though AI
has 'associated risks', it has proven to be an enabler of the digital and innovation
ecosystem.
India recognises the need for actively formulating regulations rooted in a ‘risk-based user
harm’ approach. It envisages AI as a catalyst for economic growth, job creation and
industrial transformation. India acknowledges the importance of safeguarding citizens’
interests and wants AI benefits to become accessible to all segments of society in an
equitable manner. We should favour a collaborative approach to AI development, involve
government bodies, industry stakeholders, researchers, civil society and subject experts and
opt for an adaptive regulatory framework.
NITI Aayog, the premier policy think-tank of India, is actively engaged in evolving India's
strategy for AI development, giving special attention to key areas like education, workforce
development, data usage, security and research.
DC Pathak

As the world witnesses unprecedented growth in artificial intelligence (AI) technologies, it's
essential to consider the potential risks and challenges associated with their widespread
adoption.

AI does present some significant dangers — from job displacement to security and privacy
concerns — and encouraging awareness of these issues helps us engage in conversations about
AI's legal, ethical, and societal implications.

Here are the biggest risks of artificial intelligence:

1. Lack of Transparency
Lack of transparency in AI systems, particularly in deep learning models that can be
complex and difficult to interpret, is a pressing issue. This opaqueness obscures the
decision-making processes and underlying logic of these technologies.

When people can’t comprehend how an AI system arrives at its conclusions, it can lead to
distrust and resistance to adopting these technologies.
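One way practitioners probe such opaque models is to fit a small, human-readable surrogate model to the black box's own predictions and then read the surrogate's rules. The following sketch is illustrative only, with invented data and models:

```python
# Sketch of one way to probe a "black box": fit a small, readable surrogate
# model to the black box's own predictions. Illustrative data and models.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 2))
y = (X[:, 0] > 0).astype(int)

black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Train a depth-2 tree to mimic the black box, then read its rules.
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=["f0", "f1"]))
```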

2. Bias and Discrimination


AI systems can inadvertently perpetuate or amplify societal biases due to biased training
data or algorithmic design. To minimize discrimination and ensure fairness, it is crucial to
invest in the development of unbiased algorithms and diverse training data sets.

3. Privacy Concerns
AI technologies often collect and analyze large amounts of personal data, raising issues
related to data privacy and security. To mitigate privacy risks, we must advocate for strict
data protection regulations and safe data handling practices.
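One elementary safe-handling practice is to pseudonymize direct identifiers before data is analysed. The sketch below, using only Python's standard library, is illustrative; production systems also need proper salt and key management, access controls, and legal review:

```python
# Sketch of one basic data-handling safeguard mentioned above: pseudonymize
# direct identifiers before records are used for analysis. Illustrative only;
# real deployments also need salt/key management, access control, and more.
import hashlib

SALT = b"replace-with-a-secret-salt"   # assumption: kept out of the dataset

def pseudonymize(user_id: str) -> str:
    return hashlib.sha256(SALT + user_id.encode()).hexdigest()[:16]

record = {"user_id": "alice@example.com", "age": 34, "purchases": 12}
record["user_id"] = pseudonymize(record["user_id"])
print(record)   # analysis can proceed without the raw identifier
```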

4. Ethical Dilemmas
Instilling moral and ethical values in AI systems, especially in decision-making contexts with
significant consequences, presents a considerable challenge. Researchers and developers must
prioritize the ethical implications of AI technologies to avoid negative societal impacts.
5. Security Risks
As AI technologies become increasingly sophisticated, the security risks associated with their
use and the potential for misuse also increase. Hackers and malicious actors can harness the
power of AI to develop more advanced cyberattacks, bypass security measures, and exploit
vulnerabilities in systems.

The rise of AI-driven autonomous weaponry also raises concerns about the dangers of rogue
states or non-state actors using this technology — especially when we consider the potential loss
of human control in critical decision-making processes. To mitigate these security risks,
governments and organizations need to develop best practices for secure AI development and
deployment and foster international cooperation to establish global norms and regulations that
protect against AI security threats.

6. Concentration of Power
The risk of AI development being dominated by a small number of large corporations and
governments could exacerbate inequality and limit diversity in AI applications. Encouraging
decentralized and collaborative AI development is key to avoiding a concentration of power.

7. Dependence on AI
Overreliance on AI systems may lead to a loss of creativity, critical thinking skills, and human
intuition. Striking a balance between AI-assisted decision-making and human input is vital to
preserving our cognitive abilities.

8. Job Displacement
AI-driven automation has the potential to lead to job losses across various industries, particularly
for low-skilled workers (although there is evidence that AI and other emerging technologies
will create more jobs than they eliminate).

As AI technologies continue to develop and become more efficient, the workforce must adapt
and acquire new skills to remain relevant in the changing landscape. This is especially true for
lower-skilled workers in the current labor force.

9. Economic Inequality
AI has the potential to contribute to economic inequality by disproportionately benefiting wealthy
individuals and corporations. As we talked about above, job losses due to AI-driven automation
are more likely to affect low-skilled workers, leading to a growing income gap and reduced
opportunities for social mobility.

The concentration of AI development and ownership within a small number of large corporations
and governments can exacerbate this inequality as they accumulate wealth and power while
smaller businesses struggle to compete. Policies and initiatives that promote economic equity —
like reskilling programs, social safety nets, and inclusive AI development that ensures a more
balanced distribution of opportunities — can help combat economic inequality.

10. Legal and Regulatory Challenges


It’s crucial to develop new legal frameworks and regulations to address the unique issues arising
from AI technologies, including liability and intellectual property rights. Legal systems must
evolve to keep pace with technological advancements and protect the rights of everyone.

11. AI Arms Race


The risk of countries engaging in an AI arms race could lead to the rapid development of AI
technologies with potentially harmful consequences.

Recently, more than a thousand technology researchers and leaders, including Apple co-founder
Steve Wozniak, have urged AI labs to pause the development of advanced AI systems.
The letter states that AI tools present “profound risks to society and humanity.”

In the letter, the leaders said:

"Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI
systems, we can now enjoy an 'AI summer' in which we reap the rewards, engineer these systems
for the clear benefit of all, and give society a chance to adapt."

12. Loss of Human Connection


Increasing reliance on AI-driven communication and interactions could lead to diminished
empathy, social skills, and human connections. To preserve the essence of our social nature, we
must strive to maintain a balance between technology and human interaction.

13. Misinformation and Manipulation


AI-generated content, such as deepfakes, contributes to the spread of false information and the
manipulation of public opinion. Efforts to detect and combat AI-generated misinformation are
critical in preserving the integrity of information in the digital age.

In a Stanford University study on the most pressing dangers of AI, researchers said:

“AI systems are being used in the service of disinformation on the internet, giving them the
potential to become a threat to democracy and a tool for fascism. From deepfake videos to online
bots manipulating public discourse by feigning consensus and spreading fake news, there is the
danger of AI systems undermining social trust. The technology can be co-opted by criminals,
rogue states, ideological extremists, or simply special interest groups, to manipulate people for
economic gain or political advantage.”
14. Unintended Consequences
AI systems, due to their complexity and lack of human oversight, might exhibit unexpected
behaviors or make decisions with unforeseen consequences. This unpredictability can result in
outcomes that negatively impact individuals, businesses, or society as a whole.

Robust testing, validation, and monitoring processes can help developers and researchers identify
and fix these types of issues before they escalate.
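One piece of such a monitoring process is drift detection: comparing live inputs against the training distribution before a shift causes silent failures. The sketch below uses a simple standardized mean shift with placeholder data and a placeholder threshold; teams often use PSI or Kolmogorov-Smirnov tests instead:

```python
# Sketch of the monitoring idea above: compare live input data against the
# training distribution and alert on drift. Data and thresholds are placeholders.
import numpy as np

rng = np.random.default_rng(2)
train_feature = rng.normal(0.0, 1.0, 10_000)    # distribution seen at training
live_feature = rng.normal(0.8, 1.0, 1_000)      # live data has shifted

def drift_score(train, live):
    # Standardized mean shift; PSI or KS tests are common alternatives.
    return abs(live.mean() - train.mean()) / train.std()

score = drift_score(train_feature, live_feature)
print(f"drift score: {score:.2f}")
if score > 0.5:                                  # placeholder threshold
    print("ALERT: input distribution has drifted; review model behaviour")
```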

15. Existential Risks


The development of artificial general intelligence (AGI) that surpasses human intelligence raises
long-term concerns for humanity. The prospect of AGI could lead to unintended and potentially
catastrophic consequences, as these advanced AI systems may not be aligned with human values
or priorities.

To mitigate these risks, the AI research community needs to actively engage in safety research,
collaborate on ethical guidelines, and promote transparency in AGI development. Ensuring that
AGI serves the best interests of humanity and does not pose a threat to our existence is
paramount.
