AI
AI'S DANGERS AND CHALLENGES
by (Name)
Professor (Tutor)
The Name of the School (University)
The City and State where it is located
The Date
Introduction
According to Castro and New (2016, pg. 33), Artificial Intelligence (AI) is a branch of computer science that aims to create computers and systems that can learn and make decisions in the same way that humans do. Castro and New (2016, pg. 34) say that the Association for the Advancement of Artificial Intelligence characterizes AI as "the scientific knowledge of the principles underlying mind and intelligent behavior and its implementation in computers." The term "AI" does not imply human-level intelligence in any particular implementation of the technology.
A Look at AI's Dangers and Challenges
According to Bill Joy (Joy, 2000, pg. 240), "Our most powerful 21st-century technologies—robotics, genetic engineering, and nanotechnology—threaten to make humans an endangered species." Joy argues that rather than relying on humans to solve increasingly complex societal issues, people will delegate ever more decision-making authority to machines as those machines grow more intelligent and societal problems grow more complex. As per Makridakis (2017, pg. 50), this will ultimately lead to robots taking control of all significant decisions, with humans relying on them and afraid to make their own judgments. Joy and other scientists and philosophers believe that Kurzweil and his supporters underestimate the magnitude of the challenge and the potential dangers of thinking machines and intelligent robots. A utopian world where machines and robots do all the work may relegate humans to second-class status.
The Increasing Use of Artificial Intelligence (AI) Systems Has Its Own Set of Risks
Overuse and Underuse of Artificial Intelligence
Suppose the EU fails to make use of AI's potential. In that case, it might jeopardize the EU's ability to compete with other regions of the globe and lead to economic stagnation and fewer opportunities for individuals. AI's machine learning relies on data, which may explain why it is underused. Artificial Intelligence (AI) is a powerful tool, but it may also be misused if applied to tasks to which it is not well suited, such as explaining complicated social problems. Without accountability, there may be no incentive for the manufacturer to offer high-quality products or services, which might harm people's confidence in the technology; conversely, laws may be overly rigid and discourage innovation (Saha, Sengupta & Das, 2020, n.p).
The Dangers of Artificial Intelligence to Democracy and Human Rights
As per Saha, Sengupta & Das (2020, n.p), the outcomes of AI are influenced by the architecture of the AI and the data it utilizes. There might be bias in both the design and the data. Crucial parts of a problem may be left out of the algorithm, or the algorithm may be built to reflect and reproduce systemic prejudices. At the same time, using numbers to represent a complex social reality can make the AI appear factual and precise when it is not. This practice is sometimes known as "mathwashing." When misused, artificial intelligence (AI) might lead to employment and firing choices based on race, gender, and age, as well as biased loan offers and criminal prosecutions.
In the future, artificial intelligence (AI) may have significant ramifications for the right to privacy and data security. Its use in face recognition equipment or in online tracking and profiling is just one example. It is possible, thanks to AI, to combine bits and pieces of information that a person has provided to create whole new data. As per Saha, Sengupta & Das (2020, n.p), it can be a danger to democracy, as AI is often accused of creating online echo chambers where people only see what they want to see online, rather than fostering a
climate conducive to free and open debate among all members of society.
This technology may even be used to make compelling fake video, audio, and pictures, known as "deep fakes," which pose financial hazards and tarnish reputations. Many tasks can now be automated thanks to the use of AI, and machines are also able to complete a more significant number of tasks than before.
The changes in demand for labor could therefore harm the overall distribution of income. Szczepański (2019, n.p) says that, due to market imperfections, a faster pace of change may result in more negative outcomes. Artificial Intelligence (AI) can theoretically increase productivity and income growth but also raise inequality. While society as a whole would be vastly more prosperous, many individuals, communities, and regions may be worse off due to technological advancement. Concerns have been expressed that AI will only make current trends toward an unequal distribution of national income even worse by increasing the concentration of wealth in a few "superstar" companies and sectors.
Competition and Transparency
Competition could be distorted if companies with more information gain an advantage and can effectively eliminate their rivals because of it. There is a risk that criminals could exploit information imbalances. AI can be used to predict a person's willingness to pay, and a political campaign can tailor its message based on a person's online behavior or other data. Additionally, people may not always know whether they are interacting with AI or a human being (Saha, Sengupta & Das, 2020, n.p).
Conclusion
Poorly designed, misused, or compromised AI applications that directly contact humans or are built into the human body pose a safety risk. We could lose control of dangerous weapons if the use of artificial intelligence in weapons is not adequately regulated. Misuse of AI might lead to polarization in the public realm and influence the outcome of elections. Individuals associated with specific opinions or acts might be tracked and profiled by AI, undermining free assembly and protest.
References
Castro, D. and New, J., 2016. The promise of artificial intelligence. Center for Data Innovation, 115(10), pp.32-35.
Joy, B., 2000. Why the future doesn't need us. Wired, 8(4), pp.238-262.
Makridakis, S., 2017. The forthcoming Artificial Intelligence (AI) revolution: Its impact on society and firms. Futures, 90, pp.46-60.
Saha, M., Sengupta, A. and Das, A., 2020. Cyber Threats in Artificial Intelligence.
Szczepański, M., 2019. Economic impacts of arti cial intelligence (AI) [pdf].
European Parliamentary Think Tank. Retrieved from
https://www.europarl.europa.eu/thinktank/en/document.html?
reference=EPRS_BRI(2019)637967