
V36861Z6 1

Artificial Intelligence: A Prosperous Future or a Hidden Curse?

Time and time again, tech experts, scientists, and CEOs have voiced concerns about the ethical issues raised by the advancement of artificial intelligence. In 2016, Microsoft's Twitter chatbot "Tay" was released and quickly removed from the platform after it began spewing derogatory racial comments and pro-Nazi sentiment (P. Lee). In another instance, software bots on Wikipedia were found arguing with one another, carrying on digital feuds that lasted for years (Tsvetkova et al.). These occurrences may not

seem like major threats, but when Russian arms manufacturer Kalashnikov announced its autonomous combat robots, weapons systems built specifically to target and fire at humans, the potential misuse of these technologies became apparent. According to Milena Tsvetkova, Ruth García-Gavilanes, Luciano Floridi, and Taha Yasseri of the University of Oxford Internet Institute, an artificial intelligence bot, or software agent, is defined as "a computer program that is persistent, autonomous, and reactive" (Tsvetkova et al.). These bots are characterized by programming code that runs continuously and can be

activated by itself. They can execute decisions without human intervention, perceive the surrounding

context they are in, and adapt to changes in their environment. In the wrong hands, AI can do serious damage. In 2017, a group of 1,677 AI and robotics researchers and experts met at a conference to draft the 23 Asilomar AI Principles. These principles call for the

production of beneficial and humane intelligence to minimize the chances of exploitation; they were

formulated to prevent situations in which ethics and moral values are lost or ignored. The United States

government should strive to implement the 23 Asilomar principles into federal and international law to

prevent legal, moral, and ethical dilemmas when it comes to AI ("Asilomar AI Principles").

Nations are currently investing heavily in artificial intelligence technology, fueling an "AI arms race." The global leaders in autonomous weapons development include China, South Korea, Russia, the United States, and the European Union (Haner and Garcia). According to a study by

Justin Haner, a doctoral candidate in the Department of Political Science at Northeastern University, and

Denise Garcia, a professor at Northeastern, “Oversight of increased autonomy in warfare is critically



important because this deadly technology is likely to proliferate rapidly, enhance terrorist tactics,

empower authoritarian rulers, undermine democratic peace, and is vulnerable to bias, hacking, and

malfunction” (Haner and Garcia). Experts agree that regulating autonomous machinery is necessary to

prevent its misuse by bad actors. The creation of this technology needs to be controlled and

monitored to avoid privacy and cybersecurity issues as well. For instance, the Chinese government is

persecuting millions of Uighur Muslims and using AI‐based surveillance technology with facial

recognition software to monitor them in the camps of Xinjiang province ("'Eradicating Ideological Viruses'"). Similar software is being used in China to consolidate power and surveil citizens via

smartphone apps like Fengcai and IJOP (Samuel 1). Implementing the Asilomar Principles into US

federal law would force tech giants to show transparency when introducing AI.

Artificial intelligence has created a resource that people across the globe can use of their own accord. If signed into federal law, the 23 Asilomar Principles would ensure that its ongoing development proceeds in a safe and ethical way. These principles prioritize transparency, human control, and the avoidance of a global arms race. For example, Principle 11 insists people have the

“right to access, manage, and control the data they generate given AI systems’ power to analyze and

utilize that data" ("Asilomar AI Principles"). This would ensure that no political figure or company could carry out such surveillance without breaking the law. If signed into law, major tech firms

that distribute artificially intelligent technology could be held accountable when illegal data gathering or surveillance occurs. Developed nations should incorporate these principles to prevent a surveillance-oriented society in the future and guarantee transparency. Along with personal privacy, these

principles also emphasize liberty, shared benefit, and human control. Principle 13 suggests the use of AI

to analyze personal data must not "unreasonably curtail people's real or perceived liberty" ("Asilomar AI Principles"). It should, however, be used to benefit and empower society. Ultimately, humans

should choose how and whether to allow AI systems to make decisions on accomplishing human-chosen

objectives. Founder of Sertain Research, Barry Chudakov, states that “If we are fortunate, we will follow

the 23 Asilomar AI Principles … and work toward ‘not undirected intelligence but beneficial

intelligence.’ Akin to nuclear deterrence stemming from mutually assured destruction, AI and related

technology systems constitute a force for a moral renaissance" (Rainie et al. 1). In 2018, the California State Assembly passed a resolution adopting the 23 Asilomar AI Principles in the state as "guiding values for the development of artificial intelligence and related public policy"

(California State, Legislature, Assembly). These principles, which have been endorsed by Elon Musk and

Stephen Hawking, should be implemented into federal and international law ("Asilomar AI Principles").

Mark Esper, former US Secretary of Defense, explained at a public conference on artificial intelligence that "whichever nation harnesses AI first will have a decisive advantage on the battlefield for many, many years ... Because AI can observe, orient, decide and act at multiples of human ability, it will become irresponsible to send a human pilot into battle without an AI co-pilot" (Allison and Y). In the US

alone, $17.5 billion has been allocated for drone spending (Haner and Garcia). It is no surprise that the United States is the world's largest producer of lethal autonomous weapons, with a military budget that exceeds China's, Russia's, South Korea's, and the combined spending of all 28 EU member states.

A report done by Zachary L. Morris, a Major in the United States Army, recommends that “the military

conduct extensive testing, experimentation, iterative learning, and certification before transitioning

between each phase of autonomous weapons development" (Morris 4). Principle 16 of the Asilomar Principles supports Morris's recommendation: humans should decide how and whether to delegate decisions to AI systems in order to achieve human-defined goals. Not only does this require testing and

experimentation, but also an intensive analysis of how the machine functions. In developing nations such as Nigeria, Yemen, and Syria, terrorist organizations have been seen using autonomous weaponry to carry out attacks. ISIS, Boko Haram, and Houthi rebels in Yemen have used modified drones as improvised explosive devices (Haner and Garcia). These principles must be implemented and enforced, obligating nations to secure the distribution of AI weapons.



On the other hand, 28 countries have demanded that destructive robots be outlawed completely,

including many in Africa and South America (Haner and Garcia). Weapons restrictions have proven effective in preventing the use of nuclear weapons, biological arms, and landmines, which explains

why many nations are calling for a complete ban. UN Secretary-General António Guterres urged artificial

intelligence experts to work towards banning AI weaponry, calling them “morally repugnant and

politically unacceptable" (Stauffer). Although it seems doubtful that global superpowers will halt the production of autonomous weapons anytime soon, France, Germany, and others have advocated for developing guiding principles as a code of conduct to keep autonomous weapon development in line with existing international law (Haner and Garcia). A complete ban on the development of autonomous weaponry is an unlikely outcome, though it remains an option. While AI weapons present a harsh reality

we must face, artificially intelligent technology provides many benefits for consumers, meaning regulatory legislation is required to protect human rights. The rise of military weaponry should not be a reason

to completely outlaw AI technology.

Word Count: 1268

Works Cited

Allison, Graham T., and Y. "The Clash of AI Superpowers." The National Interest, no. 165, Jan.-Feb. 2020, p. 11+. Gale General OneFile, link.gale.com/apps/doc/A610852268/ITOF?u=nysl_li_jhsch&sid=ITOF&xid=17274671. Accessed 17 Sept. 2020.

"Asilomar AI Principles." Future of Life Institute, 8 Jan. 2017, futureoflife.org/ai-principles/. Accessed 30 Oct. 2020.

California State, Legislature, Assembly. ACR-215 Asilomar Principles. leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180ACR215. Accessed 30 Oct. 2020. 2018 Legislature, Assembly Bill 215, section 206, passed 7 Sept. 2018.

"'Eradicating Ideological Viruses': China's Campaign of Repression against Xinjiang's Muslims." Human Rights Watch, 9 Sept. 2018, www.hrw.org/report/2018/09/09/eradicating-ideological-viruses/chinas-campaign-repression-against-xinjiangs. Accessed 18 Oct. 2020.

Haner, Justin, and Denise Garcia. "The Artificial Intelligence Arms Race: Trends and World Leaders in Autonomous Weapons Development." Global Policy, vol. 10, no. 3, 2019, pp. 331-337, doi:10.1111/1758-5899.12713.

Lee, Kai-Fu. AI Superpowers. Houghton Mifflin Harcourt, 2018.

Lee, Peter. "Learning from Tay's Introduction." Official Microsoft Blog, Microsoft, 25 Mar. 2016, blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/. Accessed 5 Oct. 2020.

Morris, Zachary L., Maj. "A Four-Phase Approach to Developing Ethically Permissible Autonomous Weapons." Army Press Online Journal, May 2018, pp. 1-8, www.armyupress.army.mil/Portals/7/Army-Press-Online-Journal/documents/Morris-v2.pdf. Accessed 5 Oct. 2020.

Rainie, Lee, et al. Artificial Intelligence and the Future of Humans. Pew Research Center, 10 Dec. 2018, www.pewresearch.org/internet/2018/12/10/artificial-intelligence-and-the-future-of-humans/. Accessed 30 Oct. 2020.

Samuel, Sigal. "China Is Installing a Secret Surveillance App on Tourists' Phones." Vox, 3 July 2019, www.vox.com/future-perfect/2019/7/3/20681258/china-uighur-surveillance-app-tourist-phone. Accessed 12 Nov. 2020.

Stauffer, Brian. "Stopping Killer Robots: Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control." Human Rights Watch, 10 Aug. 2020, www.hrw.org/report/2020/08/10/stopping-killer-robots/country-positions-banning-fully-autonomous-weapons-and#. Accessed 12 Nov. 2020.

Tsvetkova, Milena, et al. "Even Good Bots Fight: The Case of Wikipedia." PLOS ONE, vol. 12, no. 2, 23 Feb. 2017, e0171774, doi:10.1371/journal.pone.0171774. Accessed 21 Sept. 2020.
