
The Terminator Dilemma

Introduction

The use of weapon systems and force must comply with ethical principles and international law. Terminators are robots controlled by Skynet intelligence systems that can hunt, select, and attack targets with high precision. These systems can protect vehicles, counter enemy vehicles, defend human bases, operate under human supervision, and work alongside humans co-located with them. They provide speed and accuracy in command and control across a complex battlespace, and they can operate in war zones where communications have been lost. They give a nation a military advantage over an adversary because they make lethal decisions faster than humans can. This discussion argues that lethal-force decision-making should not be left in the hands of robots.

However, these systems carry legal, safety, and moral risks, as well as operational-control challenges. Their risks and operational-control failures can lead to accidents, loss of life, and damage to property. Scharre et al. (2018) found that machine intelligence is brittle, inflexible, and capable only of narrow tasks, and that it becomes more prone to failure when pushed outside its operational limits, leading to large-scale accidents and unintended conflict escalation. Such systems may fail due to errors and strike unintended targets, causing accidents and destruction (Steinhoff, 2023). Unlike humans, machines are not legal agents bound by the laws of war to make ethical decisions about the lawfulness of attacks.

The ethical dilemma is whether humanity's principles and public morals should allow decisions on the use of force to be replaced by computerized processes, delegating life-and-death decisions to machines. International law already limits the use of weapons of mass destruction. According to Stückelberger & Duggal (2018), international humanitarian law opposes the adoption of autonomous weapon systems. The laws of war require that weapons allow for distinction between targets, proportionality, and precaution in attack. Moreover, it is essential to keep humans in decisions to take destructive action, because placing such decisions in the hands of robotic terminators would lead to substantial catastrophes (Stückelberger & Duggal, 2018). Thus, nations should act decisively to establish laws that limit autonomy in the use of weapons.

The best approach is to develop these technologies while exploiting their vulnerabilities for human defense. The decision to kill should not be left to terminator robots; terminators must work alongside humans, who can make flexible and robust judgments. This approach will minimize large-scale accidents, runaway guns, the replication of failures across multiple systems, and conflict escalation. More importantly, it is necessary to adhere to the ethical rules of war to avoid unleashing robots we cannot control on humanity.

References

Scharre, P., Work, R., Selva, P., & Kendall, F. (2018). Autonomous weapons and the future of war (pp. 2–14). CNAS. Retrieved May 24, 2023, from https://nsiteam.com/social/wp-content/uploads/2018/10/Scharre-Autonomous-Weapons-Oct-2018-no-backup.pdf

Steinhoff, J. K. (2023). An ethical dilemma: Weaponization of artificial intelligence. Small Wars Journal, 1(1), 2–10. https://smallwarsjournal.com/jrnl/art/ethical-dilemma-weaponization-artificial-intelligence

Stückelberger, C. (2018). Ethics and autonomous weapons systems: An ethical basis for human control? In P. Duggal (Ed.), Cyber Ethics 4.0: Serving Humanity with Values (Ser. 17, pp. 323–360). Globethics.net.
