Kazim Ali Muhit - Tech Article Writing - DRMC - 01 - Unraveling The Abyss - The Dark Side of Artificial Intelligence
Artificial Intelligence
Introduction
Predictive policing illustrates how AI can entrench discrimination. In the Netherlands, for example, the ‘Top 600’ list attempts to ‘predict’ which young people will commit certain crimes. One in three people on the ‘Top 600’, many of whom have reported being followed and harassed by police, are of Moroccan descent. In Italy, a predictive system used by
police called Delia includes ethnicity data to profile and ‘predict’ people’s future criminality. Other
systems seek to ‘predict’ where crime will be committed, repeatedly targeting areas with high
populations of racialized people or more deprived communities. Crime data is a record of police
and criminal justice authorities' activities, revealing structural biases and inequalities based on
race, class, and gender. It is used to justify decisions, such as arrests and prosecutions, and is
increasingly shared with other authorities, affecting decisions on immigration, housing, benefits,
child custody, and school punishment. In the UK, for example, black people are policed and criminalized disproportionately compared with white people on every measure: stop and search, arrest, prosecution, pre-trial detention, imprisonment, and more.
Militarization of AI
The development of autonomous weapons systems, AI-powered weapons that can identify and
engage targets without human intervention, raises ethical concerns about the delegation of
lethal force to non-human entities. These systems may operate without adequate human
oversight, leading to unintended consequences or escalations of conflicts. The arms race and
proliferation of AI-powered weapons could fuel tension, mistrust, and instability on the global
stage. Ethical decision-making is crucial, but implementing these principles is challenging.
Privacy and surveillance concerns arise, as mass surveillance capabilities could infringe on
individuals' rights. AI systems trained on biased data may perpetuate biases in military
operations. The Chinese government has declared its intention to catch up with the US in AI technology development by 2025 and to lead the world by 2030. According to a paper published by the Brookings Institution, given China's stated goal of being a world leader in AI, one can assume that it is doing more research and development on "intelligent" weapons than is publicly acknowledged.
In 2016, the Carnegie Endowment outlined the importance of lethal autonomous weapons systems (LAWS) in India's evolving defense environment. In April 2018, Prime Minister Narendra Modi said at Defence Expo 2018, a biennial arms fair hosted by the Ministry of Defence, that LAWS would be crucial in building offensive and defensive military capabilities. He highlighted that India is already a world leader in information technology and can thus lead the global trend of AI applications in weapons.
Existential Risks and Superintelligence
The human brain's unique capabilities give our species dominance over all others, but AI could surpass human intelligence and become superintelligent, making it difficult or impossible to control. Whether humanity avoids this fate depends on the goals such systems are given, on whether superintelligence is feasible, and on whether practical AI-takeover scenarios exist. An AGI more intelligent than humans may have objectives or drives that are inexplicable to us, and there is a risk of existential catastrophe if those objectives clash with human welfare: an AGI might try to reorganize the world to suit its goals, with disastrous effects on mankind.
Technological Singularity
By enhancing its own design, an AGI could produce exponentially accelerating technical advances. This leads to a situation known as the technological singularity, in which technology progresses at a rate humans can no longer keep up with or comprehend.
Economic Disruption
As AI replaces labor across many industries, it could cause large-scale job losses. In the absence of careful planning and retraining initiatives, this may worsen socioeconomic disparities and spark instability and civil unrest.
Concerns about superintelligence have been voiced by leading computer scientists and tech
CEOs such as Geoffrey Hinton,[7] Yoshua Bengio,[8] Alan Turing,[a] Elon Musk,[11] and
OpenAI CEO Sam Altman.[12] In 2022, a survey of AI researchers with a 17% response rate
found that the majority of respondents believed there is a 10 percent or greater chance that our
inability to control AI will cause an existential catastrophe.[13][14] In 2023, hundreds of AI
experts and other notable figures signed a statement that "Mitigating the risk of extinction from
AI should be a global priority alongside other societal-scale risks such as pandemics and
nuclear war".[15] Following increased concern over AI risks, government leaders such as United
Kingdom prime minister Rishi Sunak[16] and United Nations Secretary-General António
Guterres[17] called for an increased focus on global AI regulation.
Conclusion
AI has numerous applications, but it's crucial to address its negative aspects, such as biases,
privacy dangers, employment displacement, and existential threats. AI algorithms can
perpetuate biases, leading to discrimination and societal inequalities. Privacy and surveillance
threats are also significant. Automation and job displacement may lead to socioeconomic
disparities. The militarization of AI presents ethical risks, including autonomous weapons and
potential destabilization. The pursuit of artificial superintelligence raises existential risks.
Collaboration between policymakers, researchers, and industry stakeholders is essential to
address these challenges and harness AI's potential for societal benefit.
Presented by Kazim Ali Muhit