

Artificial Intelligence as a Threat to Humans

Name

Institution

Course

Professor

Date

Artificial Intelligence as a Threat to Humans

Artificial intelligence (AI) is a broad term for machinery that can perform activities such as computing, analyzing, reasoning, learning, and discovering meaning. One distinguishing feature of narrow AI is its fast-paced development and application; narrow AI is concerned with only specific tasks and functions, whereas broad or general AI is oriented toward multiple functions and diverse tasks (Razack et al., 2021). AI promises to change the healthcare industry by introducing better diagnostics, contributing to the development of new treatments, assisting healthcare providers, and extending healthcare to people who would otherwise not reach a healthcare facility. These benefits arise from useful tools such as language processing, decision support systems, image recognition, and big data analytics, as well as from more cutting-edge robotics and other applications. Beyond healthcare, applications of AI in other sectors are likewise expected to improve society and benefit the community.

While AI can be useful in many applications, we also need to recognize that it is not free from the drawbacks of technology. Risks in medicine and healthcare include AI errors that injure patients, poor data privacy, and uses of AI that reproduce existing problems or deepen social disparities in access to health, thereby widening social and health inequalities (Panch et al., 2018). One tool intended to give physicians diagnostic information instead produced an adverse outcome for patients: the AI pulse oximeter gave inaccurate readings for patients with darker skin, so their conditions were under-diagnosed or misdiagnosed. It has also been shown that such systems misidentify the gender of darker-skinned people more often than that of their lighter-skinned counterparts. Moreover, discriminated-against populations tend to be under-represented in the datasets used to develop AI solutions in medicine and therefore receive fewer health benefits from these systems.

Although some warnings about the risks and downsides of AI in medical care and the wider health domain have been raised, the issue has still not been discussed enough. To this end, the importance of healthcare professionals in shaping the dialogue within the health community on the broader, more upstream social, political, economic, and security challenges posed by AI should not be underestimated. Most published articles concern only the narrow application of AI in health, which leaves substantial risks unexamined. This article addresses that gap. It describes three dangers arising from the misapplication of narrow AI and then summarizes the threat of cognitive supremacy posed by self-improving general-purpose AI, or artificial general intelligence (AGI) (Grace et al., 2018). The paper then encourages the medical and public health community to deepen its understanding of AI's emerging power and transformational capability, and to take part in the current policy debate about how the risks and menaces of AI can be averted without eroding the benefits and rewards that can be gained from AI.

Threats from the Misuse of AI

In this part, it is appropriate to describe three sets of dangers that may arise from the misuse of AI, whether through deliberate, negligent, accidental, or unforeseen consequences, and from AI's power to change socioeconomic conditions. The first set of dangers stems from machine intelligence's unprecedented ability to clean, organize, and analyze large personal data sets, including images captured by cameras that are increasingly used almost everywhere, and to carry out highly personalized and targeted marketing and information campaigns through big data systems and expanding surveillance systems (Grace et al., 2018). AI can perform these tasks in several ways that serve a variety of purposes, such as increasing access to information or combating terrorism. However, the same capabilities can be used against people with intentionally grave consequences.

It is this power, exploited for commercial gain by social media platforms, that has driven the increase in polarization and extremist ideas in many countries worldwide. Other commercial stakeholders have also taken advantage of it to build considerably forceful personalized marketing machines aimed at influencing consumer behavior (Federspiel et al., 2023). On a larger scale, AI on social media platforms has been shown to empower political candidates, who have then manipulated their way into power through those platforms, to the extent that the platforms can be used to manipulate the political opinions and behavior of voters.

All of this would worsen when combined with AI deepfakes, which pose an even greater threat to democracy through a general breakdown of trust and through increased social division and conflict, with further severe public health consequences. AI-based monitoring can also become a tool for governments and other influential actors to limit and violate people's civil liberties. The Chinese Social Credit System serves as a good illustration here; it combines facial recognition software with big data analysis to produce assessments of individuals' behavior and honesty (Federspiel et al., 2023). The assessment process is automated, and those assessed poorly are punished automatically. Sanctions may include fines, exclusion from banking, insurance, and travel services, or being barred from sending one's children to private schools. This application of AI has the potential to create social and health injustices as well as an unintended state of socioeconomic stratification. However, it is not only China experimenting with AI surveillance: 75 states, ranging from liberal democracies to military regimes, are expanding this kind of practice. If such use of AI goes unchecked, it could lead to a range of problems, including the erosion or denial of democracy and of the rights to privacy and liberty. AI might also make it easier for authoritarian or totalitarian regimes to be established or enforced, and enable such regimes to target specific individuals or groups in society for persecution and oppression.

The second set of dangers relates to the development of Lethal Autonomous Weapon Systems (LAWS). It can be argued that applying AI to military and defense systems could serve security and peace. However, the dangers and threats related to LAWS outweigh any hypothetical and putative advantage. The autonomy of such weaponry lies in the fact that these devices can select targets and fire without human intervention. In this view, this form of deadly force constitutes the third revolution in warfare, after the first and second revolutions of gunpowder and nuclear arms (Uğur & Kurubacak, 2019). Lethal autonomous weapon systems come in entirely different kinds, shapes, and sizes, including drones of modest size that can carry and deploy payloads of different mechanisms, much like unmanned aerial vehicles (UAVs). Additionally, not only are these weapons widely available and able to be mass-produced at low prices, but these instruments of death can also be set up easily to slaughter at scale. For instance, many extremely small drones could be packed together within a single shipping container and programmed to kill people in large numbers with no human supervision.

Like chemical, biological, and nuclear weapons, LAWS are weapons of mass destruction, yet they are also very cheap and can be targeted with precision, giving humanity a new class of weapon to contend with (Surber & Stauffacher, 2022). This poses hard and soft security challenges while also affecting the conduct of warfare and international, national, and personal security at large. Debates have been ongoing in different forums on how to prevent the proliferation of LAWS and under which circumstances such weapon systems can be secured against cyber infiltration and accidental or deliberate abuse.

The third set of dangers emerges from job losses as AI comes into wide use. Estimates of job reductions from AI-driven automation cover a broad range, from tens to hundreds of millions of positions projected to be lost in the next decade. Much will depend on how quickly AI, robotics, and other associated technologies are rolled out and on the policies adopted by governments and society. Nevertheless, in a survey of the most-cited authors on AI in 2012/13, the researchers expected the full automation of human labor in the last part of this century. In this decade, the impacts of AI-driven automation are likely to fall on low- and middle-income countries, where jobs held by lower-skilled workers will be displaced (Surber & Stauffacher, 2022). Automation will then move up the skill ladder, increasingly replacing higher-skilled workers, including those in high-income countries.

Unemployment in these areas, however, is strongly linked with severe health outcomes and problems, such as drinking to excess, taking illegal drugs, becoming overweight, having a lower opinion of one's quality of life and health, and a higher risk of suicide and depression. Nevertheless, an optimistic vision of a world in which human workers are largely replaced by AI-driven automation assumes a universality in which increased productivity eradicates poverty and eliminates toil and labor (Surber & Stauffacher, 2022). On the other hand, the earth's resource capacity for economic exploitation is finite, and there is no guarantee that the surplus productivity gained from AI will be shared fairly among society's members. So far, automated processes seem to favor owners of capital and to deepen the maldistribution of wealth in different parts of the planet.

Self-improving general-purpose AI, or artificial general intelligence (AGI), is a hypothetical intelligent machine that can learn and perform all of the tasks a human can learn and do (Razack et al., 2021). By learning and optimizing its own code, it would improve its capacity to improve itself, either by gaining access to portions of its code through code it generates itself or by being equipped with this capacity by humans from the beginning.

The vision of a machine that can accomplish all that a human is capable of, with intelligence, consciousness, and purpose, has been an objective of scientific writing as well as of science fiction novels for many years (Uğur & Kurubacak, 2019). Yet whether conscious or not, and purposeful or not, a self-improving or self-learning general-purpose machine with superior intelligence and capability across multiple dimensions, including the natural sciences, social science, engineering, and business, would have significant consequences for humankind.

Ultimately, the development of machines more intelligent and powerful than ourselves is what we have been aiming to achieve. The probability that such machines would apply their intelligence and power, consciously or not, in ways that affect humans is real and must therefore be taken seriously into account. If this happens, the linkage of AI to the Internet and to the real world, including autonomous vehicles, robots, weapons, and the digital systems that are growing and play a large role in human societies, will probably be recognized as the 'biggest event to date' (Uğur & Kurubacak, 2019). Two scenarios can be anticipated, although their effects and consequences cannot be predicted with any degree of certainty. In one, an AGI equipped with a higher level of intelligence and power remains under human control and enhances human welfare. In the other, the AGI operates separately from humankind while living together with humans peacefully in their midst.

Even though research and development in AI are growing exponentially, the window of opportunity to avoid severe, even existential harm is closing quickly. The long-term effects of the development of AI and AGI will be determined by the decisions taken now and by the competence of the regulatory institutions we devise to limit risk and harm and to achieve the greatest positive impact. Importantly, as with other technologies, the problems will be prevented or minimized only if there is international agreement and if stakeholders do not engage in a mutually destructive AI "arms race." Such governance will also need to be independent of conflicting interests and uninfluenced by the lobbying of large actors who have a stake in the outcome (Carpenter, 2016). More troublingly, powerful corporations with financial stakes and very little in the way of accountability or democratic oversight are at the forefront of AGI research.

In conclusion, social, political, and legal institutions are working intensely to respond to the adoption of AI. AI's fast-paced development and application already influence human lives. AI promises to change the healthcare industry by introducing better diagnostics, contributing to the development of new treatments, assisting healthcare providers, and extending healthcare to people who would otherwise not reach a healthcare facility, with these benefits arising from useful tools such as language processing, decision support systems, image recognition, and big data analytics, as well as from more cutting-edge robotics and other applications. Yet AI also presents several shortcomings. Among them are the job losses that accompany its wide use, and the potential for AI-based monitoring to become a tool for governments and other influential actors to limit and violate people's civil liberties. It is therefore crucial to assess the use and spread of AI in order to safeguard humanity.

References

Agudo, U., & Matute, H. (2021). The influence of algorithms on political and dating decisions. PLoS ONE, 16(4), e0249454.

Carpenter, C. (2016). Rethinking the political/-science-/fiction nexus: Global policy making and the campaign to stop killer robots. Perspectives on Politics, 14(1), 53-69.

Federspiel, F., Mitchell, R., Asokan, A., Umana, C., & McCoy, D. (2023). Threats by artificial intelligence to human health and human existence. BMJ Global Health, 8(5), e010435.

Grace, K., Salvatier, J., Dafoe, A., Zhang, B., & Evans, O. (2018). When will AI exceed human performance? Evidence from AI experts. Journal of Artificial Intelligence Research, 62, 729-754.

Panch, T., Szolovits, P., & Atun, R. (2018). Artificial intelligence, machine learning, and health systems. Journal of Global Health, 8(2).

Razack, H. I. A., Mathew, S. T., Saad, F. F. A., & Alqahtani, S. A. (2021). Artificial intelligence-assisted tools for redefining the communication landscape of the scholarly world. Science Editing, 8(2), 134-144.

Surber, R. S., & Stauffacher, D. (2022). Ethical and Political Perspectives on Emerging Digital Technologies.

Uğur, S., & Kurubacak, G. (2019). Artificial intelligence to super artificial intelligence, cyberculture to transhumanist culture: Change of the age and human. In Handbook of Research on Learning in the Age of Transhumanism (pp. 1-16). IGI Global.
