Download as docx, pdf, or txt
Download as docx, pdf, or txt
You are on page 1of 65

Premier Debate

January/February 2021
FREE LD Brief
PremierDebate.com

Find Us on Facebook
Introduction
Friends of Premier Debate,

This is Premier’s Free brief on the topic “Resolved: States ought to ban lethal autonomous weapons.”
This brief includes 100+ cards, including all the most relevant stock arguments on the topic for both the
AFF and NEG, a variety of philosophical positions ranging from utilitarianism to Kantianism, various DAs
and Counterplans, and blocks for both sides. Additionally, we’ve included definitions for all the key
terms in the resolution for a robust topicality debate.

We want to remind the readers about standard brief practice to get the most out of this file. Best
practice for brief use is to use it as a guide for further research. Find the articles and citations and cut
them for your own personal knowledge. You’ll find even better cards that way. If you want to use the
evidence in here in a pinch, you should at least re-tag and highlight the evidence yourself so you know
exactly what it says and how you’re going to use it. Remember, briefs can be a tremendous resource but
you need to familiarize yourself with the underlying material first.

If you like what we’re doing and these cards have been helpful to you, consider signing up for online
coaching through Premier Debate. Our coaches were elite competitors in their own right and have now
coached students to elimination rounds, earning TOC bids, and qualifying to state and national
championships. See premierdebate.com/coaching for more details on how to apply!

Finally, we’d like to thank Peter Zhang, Andrew Qin and Amy Santos for their help in assembling this
brief. These are some of the best, round-ready cards you’ll see on the topic, and we couldn’t have done
it without them.

Good luck everyone. See you ‘round!

Bob Overing & John Scoggin

Directors | Premier Debate


Aff
Stock
Alliances

LAWs could threaten U.S. alliances because allies wouldn’t perceive LAW deployment
as credible.
Leys 18 Leys, Nathan. “Autonomous Weapon Systems and International Crises.” Strategic Studies
Quarterly, vol. 12, no. 1, 2018, pp. 48–73. JSTOR, www.jstor.org/stable/26333877. [Premier]

In this case, both the United States and China would probably be less likely to escalate than if one of
their human pilots had died. Compare the public outcry over Iran’s decision in January 2016 to detain 10
US Navy personnel in the Persian Gulf to the muted public reaction to China’s seizure in December 2016
of a US Navy submersible drone.65 A failure to respond to the death of a military member could prove
politically disastrous for a US leader, but destroyed AWS do not have grieving families

On the other hand, the diminished risk to US pilots and the resulting reduction in public demands for
revenge may encourage leaders to deploy AWS when they would have otherwise chosen to not deploy
piloted air patrols. This could end up multiplying risk, because each side will to some degree be unaware
of the other’s autonomous/unmanned capabilities, and neither side can know how adversarial military
systems driven by AI will interact differently than those driven by human intelligence. Thus, AWS may
make crises involving patrols over disputed territory more likely but less dangerous.

Scenario Three: Public Pressure to Withdraw Forward-Deployed Forces

Citing potential cost savings and reduced risk to American Soldiers, a Congressional report urges DOD to
withdraw them from the Korean Demilitarized Zone (DMZ) and replace them with autonomous robotic
sentries.66 Liberal doves and conservative hawks, persuaded by the report’s assertion that AWS will be
at least as effective as the currently deployed US forces, unite in support of the proposal. The South
Korean government supports the addition of the autonomous sentries but opposes the withdrawal of
American troops.

This scenario draws on Thomas Schelling’s observation that although small forces of American troops
stationed on allied soil cannot repel a mass invasion, “bluntly, they can die. They can die heroically,
dramatically, and in a manner that guarantees that the action cannot stop there.”67 The notion,
applying James Fearon’s formulation, is that if these tripwire troops are killed in an attack on an ally’s
territory, the potential domestic political costs imposed on a US leader who chooses not to respond
guarantee an overwhelming military response, making the US commitment to South Korea more
credible. But AWS, by definition and design, cannot “die heroically.” If AWS physically displace human
soldiers from an ally’s territory, the potential domestic political costs to leaders of not responding in the
event of attack would be diminished. Hence, South Korean leaders might perceive the United States’
commitment as less credible.

One of the primary arguments for the development and deployment of AWS is that robots can remove
humans from harm’s way. This assertion runs directly counter to the DOD’s insistence that AWS will
fight alongside human soldiers, rather than displacing them.68 Whatever the Pentagon’s insistence
today and regardless of whether AWS could replace humans without impacting military effectiveness, it
is entirely plausible that placing soldiers in harm’s way will become politically untenable if AWS are seen
as a viable replacement for human soldiers. This is particularly true if the most visible permutations of
AWS are autonomous unmanned underwater, surface, and air vehicles rather than AI-enabled computer
systems designed to support military logistics and decision making. Congress and the public are more
likely to demand the replacement of human soldiers with AWS if they can picture a robot armed with
high-tech firearms storming onto the battlefield.

Designing AWS to support—rather than replace—human soldiers may make sense from a military
perspective, but it raises political risks domestically and internationally. First, public misunderstandings
of AI, steeped in science fiction archetypes of hyper-advanced robot warriors, may lead to the
overestimation of AWS’ capabilities, even as they fuel pacifist opposition to AWS’ development. This
overestimation, in turn, may convince supporters of AWS that new weapons can replace human soldiers
without reducing military effectiveness.

The second danger to the international credibility of the United States and its commitments to its allies
also flows from this potential for domestic overestimation of AWS’ capabilities. To the extent political
pressure leads to AWS geographically displacing forward-deployed forces instead of supporting them,
they may make US commitments to defend allies less credible. If South Korea is less certain of the US
defense commitment, it might choose to self-help by building up its own military, possibly including the
development of nuclear weapons or its own AWS.

This tension between the enhanced lethality of US forces and the diminished credibility of US
commitments from forward-deployed AWS could be resolved in two ways. First, military leaders could
convince civilian policy makers and the public that AWS would be ineffective unless they are directly
supporting US Soldiers. Given the parochial incentives of commanders to emphasize the capabilities and
downplay the limitations of new military systems during the appropriations process, this may be easier
said than done. But if this framing is successful, and if AWS are deployed to augment tripwire forces,
they may marginally increase the credibility of US commitments by signaling a prioritization of that ally’s
defense and increasing the fighting effectiveness of deployed land forces.69

Alternatively, the United States could adopt a doomsday device approach to make US involvement in a
conflict automatic. Such an effort would involve programming prepositioned AWS to strike North Korean
targets if a certain condition occurs (e.g., a critical mass of North Korean troops crosses the DMZ). To
make this commitment credible, the United States would have to convince South Korea that it will not
simply call off its AWS when the time comes. That would require preprogramming AWS to cut off all
communication with US commanders at the moment the system decides to strike North Korea. This
approach raises obvious concerns that a computer glitch or a large-scale military exercise could trip the
system, not only dragging the United States into a war it does not want but also starting a conflict where
a crisis might otherwise have been averted.
Arms Race

LAWs could spark an arms race.


Pedron and da Cruz 20 summarizes Stephanie Mae Pedron and Jose de Arimateia da Cruz. "The
Future of Wars: Artificial Intelligence (AI) and Lethal Autonomous Weapon Systems (L eapon Systems
(LAWS)." International Journal of Security Studies, 2(1). 2020.
https://digitalcommons.northgeorgia.edu/cgi/viewcontent.cgi?article=1020&context=ijoss. [Premier]

Another conceivable risk is that LAWS might trigger an arms race among nation-states because of their
immense tactical advantage. At present, global reactions to LAWS are divided, despite the fact that no
such weapons have been fully developed (Lewis, 2015). However, many countries currently utilize semi-
autonomous weapons and continue to devote resources to the development of fully autonomous
technology. For example, the U.S. has long repurposed unmanned systems like drones to target
members of international terrorist organizations. In these operations, a human operator always gives
the order to kill (Stone, 2013). Autonomous weapons also use comparable technology to those in the
public sector, which suggests that countries can indirectly develop tools for AWS as they support the
advancement of civilian-based AI (Schroeder, 2016).

We are in a LAW arms race.


Gubrud 16 Mark Gubrud, adjunct professor in the Curriculum in Peace, War & Defense at the
University of North Carolina. "Why Should We Ban Autonomous Weapons? To Survive." IEEE. 1 June
2016. https://spectrum.ieee.org/automaton/robotics/military-robots/why-should-we-ban-autonomous-
weapons-to-survive. [Premier]

Many statements at the CCW have endorsed human control as a guiding principle, and Altmann and I
have suggested cryptographic proof of accountable human control as a way to verify compliance with a
ban on autonomous weapons. Yet the CCW has not set a definite goal for its deliberations. And in the
meantime, the killer robot arms race has taken off.

FULL SPEED AHEAD

In 2012, the Obama administration, via then-undersecretary of defense Ashton Carter, directed the
Pentagon to begin developing, acquiring, and using “autonomous and semi-autonomous weapon
systems.” Directive 3000.09 has been widely misperceived as a policy of caution; many accounts insist
that it “requires a human in the loop.” But instead of human control, the policy sets “appropriate levels
of human judgment” as a guiding principle. It does not explain what that means, but senior officials are
required to certify that autonomous weapon systems meet this standard if they select and kill people
without human intervention. The policy clearly does not forbid such systems. Rather, it permits the
withdrawal of human judgment, control, and responsibility from points of lethal decision.

The policy has not stood in the way of programs such as the Long Range Anti-Ship Missile, slated for
deployment in 2018, which will hunt its targets over a wide expanse, relying on its own computers to
discriminate enemy ships from civilian vessels. Weapons like this are classified as merely “semi-
autonomous” and get a green light without certification, even though they will be operating fully
autonomously when they decide which pixels and signals correspond to valid targets, and attack them
with lethal force. Every technology needed to acquire, track, identify, and home in or control firing on
targets can be developed and used in “semi-autonomous weapon systems,” which can even be sent on
hunt-and-kill missions as long as the quarry has been “selected by a human operator.” (In case you’re
wondering, “target selection” is defined as “The determination that an individual target or a specific
group of targets is to be engaged.”) It’s unclear that the policy stands in the way of anything.

In reality, the directive signaled an upward inflection in the trend toward killer robots. Throughout the
military there is now open discussion about autonomy in future weapon systems; ambitious junior
officers are tying their careers to it. DARPA and the Navy are particularly active in efforts to develop
autonomous systems, but the Air Force, Army, and Marines won’t be left out. Carter, now the defense
secretary, is heavily promoting AI and robotics programs, establishing an office in Silicon Valley and a
board of advisors to be chaired by Eric Schmidt, the executive chairman of Google’s parent company
Alphabet.

The message has been received globally as well. Russia in 2013 moved to create its own versions of
DARPA and the of the U.S. Navy’s Laboratory for Autonomous Systems Research, and deputy prime
minister Dmitry Rogozin called on Russian industry to create weapons that “strike on their own,”
pointing explicitly to American developments. China, too, has been developing its own drones and
robotic weapons, mirroring the United States (but with less noise than Russia). Britain, Israel, India,
South Korea… in fact, every significant military power on Earth is looking in this direction.

Both Russia and China have engaged in aggressive actions, arms buildups, and belligerent rhetoric in
recent years, and it’s unclear whether they could be persuaded to support a ban on autonomous
weapons. But we aren’t even trying. Instead, the United States has been leading the robot arms race,
both with weapons development and with a policy that pretends to be cautious and responsible but
actually clears the way for vigorous development and early use of autonomous weapons.

Deputy defense secretary Robert Work has championed the notion of a “Third Offset” in which the
United States would leap to the next generation of military technologies ahead of its “adversaries,”
particularly Russia and China. To calm fears about robots taking over, he emphasizes “human-machine
collaboration and combat teaming” and says the military will use artificial intelligence and robotics to
augment, not replace human warfighters. Yet he worries that adversaries may field fully autonomous
weapon systems, and says the U.S. may need to “delegate authority to machines” because “humans
simply cannot operate at the same speed.”

Work admits that the United States has no monopoly on the basic enabler, information technology,
which today is driven more by commercial markets than by military needs. Both China and Russia have
strong software and cyber hacking capabilities. Their latest advanced fighters, tanks, and missiles are
said to rival ours in sophistication. Work compares the present to the “inter-war period” and urges the
U.S. to emulate Germany’s invention of blitzkrieg. Has he forgotten how that ended?

Arms races lead to uncontrollable escalation – only a ban solves.


Gubrud 16 Mark Gubrud, adjunct professor in the Curriculum in Peace, War & Defense at the
University of North Carolina. "Why Should We Ban Autonomous Weapons? To Survive." IEEE. 1 June
2016. https://spectrum.ieee.org/automaton/robotics/military-robots/why-should-we-ban-autonomous-
weapons-to-survive. [Premier]

Nobody wants war. Yet, fearing enemy aggression, we position ourselves at the brink of it. Arms races
militarize societies, inflate threat perceptions, and yield a proliferation of opportunities for accidents
and mistakes. In numerous close calls during the Cold War, it came down to the judgment of one or a
few people not to take the next step in a potentially fatal chain of events. But machines simply execute
their programs, as intended. They also behave in ways we did not intend or expect.

Our experience with the unpredictable failures and unintended interactions of complex software
systems, particularly competitive autonomous agents designed in secrecy by hostile teams, serves as a
warning that networks of autonomous weapons could accidentally ignite a war and, once it has started,
rapidly escalate it out of control. To set up such a disaster waiting to happen would be foolish, but not
unprecedented. It’s the type of risk we took during the Cold War, and it’s similar to the military planning
that drove the march to war in 1914. Arms races and confrontation push us to take this kind of risk.

Paul Scharre, one of the architects of Directive 3000.09, has suggested that the risk of autonomous
systems acting on their own could be mitigated by negotiating “rules of the road” and including humans
in battle networks as “fail-safes.” But it’s asking a lot of humans to remain calm when machines indicate
an attack underway. By the time you sort out a false alarm, autonomous weapons may actually have
started fighting. If nations can’t agree to the simple idea of a verified ban to avoid this danger, it seems
less likely that they will be able to negotiate some complicated system of rules and safeguards.

Direct authority to launch a nuclear strike may never be delegated to machines and a war between the
United States and China or Russia might not end in nuclear war, but do we want to take that risk? There
is no reason to believe we can engineer safety into a tense confrontation between networks of
autonomous weapons at the brink of war. The further we go down that road, the harder it will be to
walk back. Banning autonomous weapons and asserting the primacy of human control isn’t a complete
solution, but it is probably an essential step to ending the arms race and building true peace and
security.

BACK TO BASICS

The fundamental problem is conflict itself, which pits human against human, reason against reason and
machine against machine. We struggle to contain our conflicts, but passing them on to machines risks
finding ourselves nominally still in command yet unable to control events at superhuman speed.

We are horrified by killer robots, and we can ground their prohibition on strong a priori principles such
as human control, responsibility, dignity—and survival. Instead of endlessly debating the validity of
these human prejudices, we should take them as saving grace, and use them to stop killer robots.
Bias

AI is systematically biased against oppressed groups – LAWs lead to deadly inequality.


Ramsay-Jones 20 Hayley Ramsay-Jones, Soka Gakkai International. "Intersectionality and Racism."
Stop Killer Robots. February 2020. https://www.stopkillerrobots.org/wp-
content/uploads/2020/02/2020_Campaigners-Kit_FINAL.pdf. [Premier]

“To dismantle long-standing racism, it is important to identify and understand the colonial and historic
structures and systems that are responsible for shaping how current governments and institutions view
and target specific communities and peoples.”3

When it comes to artificial intelligence (A.I.) there is an increasing body of evidence that shows that A.I.
is not neutral and that racism operates at every level of the design process, production, implementation,
distribution and regulation. Through the commercial application of big-data we are being sorted into
categories and stereotypes. This categorization often works against people of colour when applying for
mortgages, insurance, credit, jobs, as well as decisions on bail, recidivism, custodial sentencing,
predictive policing and so on.

An example of this is the 2016 study by ProPublica, which looked at predictive recidivism and analysed
the scores of 7,000 people over two years. The study revealed software biased against
AfricanAmericans, who were given a 45% higher risk reoffending score than white offenders of the same
age, gender and criminal record.4

When we apply biased A.I. to killer robots we can see how long-standing inherent biases pose an ethical
and human rights threat, where some groups of people will be vastly more vulnerable than others. In
this regard, killer robots would not only act to further entrench already existing inequalities but could
exacerbate them and lead to deadly consequences.

FACIAL RECOGNITION

The under-representation of people of colour and other minority groups in science, technology,
engineering and mathematics (STEM) fields, means that technologies in the west are mostly developed
by white males, and thus perform better for this group. Joy Buolamwini,5 a researcher and digital
activist from MIT, revealed that facial recognition software recognizes male faces far more accurately
than female faces, especially when these faces are white. For darker-skinned people, however, the error
rates were over 19%, and unsurprisingly the systems performed especially badly when presented with
the intersection between race and gender, evidenced by a 34.4% error margin when recognizing dark-
skinned women.

Big Brother Watch, a UK based civil liberties organization, launch a report in 2018 titled “The Lawless
Growth of Facial Recognition in UK Policing”6 It exposed the London Metropolitan Police as having a
“Dangerous and inaccurate” facial recognition system. It misidentified more than 98% of people who
attended a London based carnival celebrating Caribbean music and culture.

Although companies creating these systems are aware of the biases in the training data, they continue
to sell them to state and local governments, who are now deploying them for use on members of the
public. Whether neglect is intentional or unintentional, these types of applications of new information
technology are failing people of colour intersectionally at a disturbing rate.

HISTORICAL, LATENT BIAS

Historical or latent bias is created by frequency of occurrence. For example, in 2016 an MBA student
named Rosalia7 discovered that googling “unprofessional hairstyles” yielded images of mainly black
women with afro-Caribbean hair; conversely when she searched “professional hairstyles” images of
mostly coiffed white women emerged. This is due to machine learning algorithms; it collects the most
frequently submitted entries and, as a result, reflects statistically popular racist sentiments. These learnt
biases are further strengthened, thus racism continues to be reinforced.

A more perilous example of this is in data-driven, predictive policing that uses crime statistics to identify
“high crime” areas. These areas are then subject to higher and often more aggressive levels of policing.
Crime happens everywhere. However, when an area is over-policed, which is often the case in
communities of colour, it results in more people of colour being arrested and flagged as “persons of
interest”. Thus, the cycle continues and confirmation bias occurs.8

Predictive policing has also led to increased levels of racial and ethnic profiling and the expansion of
gang databases. Racial and ethnic profiling takes place when law enforcement relies on generalizations
based on race, descent, national or ethnic origin, rather than objective evidence or individual behavior.
It then subjects targeted groups to stops, detailed searches, identity checks, surveillance and
investigations. Racial and ethnic profiling has not only proven to be ineffective and counterproductive9 ;
evidence has shown that the over-criminalization of targeted groups reinforces stereotypical
associations between crime and ethnicity10. The humiliation and stigmatization that results from this
can also lead to adverse psychological, physical and behavioral impacts, including the internalization of
negative stereotypes and diminished self-worth.

Gang databases are currently being used in a number of regions around the world including in North and
South America and Europe. These databases reinforce and exacerbate already existing discriminatory
street policing practices such as racial and ethnic profiling with discriminatory A.I.

It is not necessary to be suspected of a crime to be placed (without your knowledge or consent) in a


gang database. Yet those in gang databases face increased police surveillance, higher bail if detained,
elevated charges, increased aggression during police encounters, and if you also happen to be an
immigrant, you could face the threat of deportation.

In New York City, the police department’s gang database registered 99% people of colour11. A state
audit in California found that the “CalGang” database included 42 infants younger than one-year-old, 28
of whom had supposedly “admitted” to being gang members12 and that 90% of the 90,000 people in
the database were men of colour. In the UK, the London police force database, the “Gangs Matrix” have
almost 4000 people registered. Of those, 87% are from black, Asian and minority ethnic backgrounds,
and 78% are black. A disproportionate number, given that the police’s own figures show that only 27%
of those responsible for violent offenses are black.

The issue with racial and ethnic bias engrained in A.I. is not only that they reproduce inequalities, but
actually replicate and amplify discriminatory impact. For example, having one or several police officers
expressing racial bias leads to a certain number of discriminatory cases. Introducing A.I. technology with
a racial bias risks amplifying discriminatory instances to an unprecedented scale leading to further
exclusion and marginalization of social groups that have been historically racially and ethnically
discriminated against.13

BENEFITS VS CONSEQUENCES

A.I. is part of our daily lives and has the potential to revolutionize societies in a number of positive ways.
However, there is a long history of people of colour being experimented on for the sake of scientific
advances from which they have suffered greatly but do not benefit. An example of this is from James
Marion Sims, known as the ‘father of gynecology’ for reducing maternal death rates in the US, in the
19th century. He conducted his research by performing painful and grotesque experiments on enslaved
black women. “All of the early important reproductive health advances were devised by perfecting
experiments on black women”.14 Today, the maternal death rate for black women in the US is three
times higher than it is for white women.

This indicates that when it comes to new information technology, facial recognition systems, algorithms,
automated and interactive machine decision-making, communities of colour are often deprived of their
benefits and subjected to their consequences. This unfortunate reality where science is often inflicted
on communities of colour rather than aided by it must be addressed, especially when these technologies
are being weaponized.

LACK OF TRANSPARENCY AND ACCOUNTABILITY

Concerns are being raised about the lack of transparency behind how algorithms function. As A.I.
systems become more sophisticated, it will become even more difficult for the creators of these systems
to explain the choices the systems make. This is referred to as the “black box problem”15. Creators of
these systems are incapable of understanding the route taken to a specific conclusion. Opacity in
machine learning isoften mentioned as one of the main impediments for transparency in A.I. because
black box systems cannot be subjected to meaningful standards of accountability and transparency. This
also makes it harder to address discrimination.

Additionally, the question of who will be held accountable for human rights abuses is becoming
increasingly urgent. Holding those responsible for the unlawful killings of people of colour by law
enforcement and the military is already a huge challenge in many countries. This issue, however, would
be further impaired if the unlawful killing was committed by a killer robot. Who would be held
responsible: the programmer, manufacturer, commanding officer, or the machine itself?16 Lethal force
by these weapons would make it even easier for people of colour to be at the mercy of unlawful killings
and far more difficult to obtain justice for victims of colour and their families.

A SHIFT IN THINKING

'The nature of systemic racism means that it is embedded in all areas of society, the effects of this type
of oppression will not easily dissipate. Through the continual criminalization and stigmatization of
people of colour, systemic racism operates by creating new ways to reinvent itself. The development of
weapons that target, injure and kill based on data-inputs and pre-programmed algorithms, is a
frightening example of how colonial violence and discrimination continue to manifest in notions of racial
superiority and dominance. Automating violence in this regard could not only lead to biased killings, but
simultaneously amplify power disparities based on racial hierarchies causing irreparable harm to
targeted communities.

According to Reni Eddo-Lodge, racism perpetuates partly through malice, carelessness and ignorance; it
acts to quietly assist some, while hindering others.17 It is within this framework that we must identify
and apply an intersectional racial critique on killer robots, whilst also grapple with, and take action to,
address systemic racism and the lack of representations in our own organizations and social movements.

In order to break the culture and circles of violence prevalent in weapons systems, and in society, we
must shed light on the root causes of violence, domination and oppression wherever they may lie. We
can start by identifying the structures of power and privilege that exist in our own organizations, by
looking at whose voice is not present; and confronting misuse of power and the occupation of space. In
doing so, we can foster movements that are truly global and representative of all peoples from different
walks of life, cultures, and communities.

Development of LAWs will disproportionately benefit colonial powers – African


countries should be at the center of this debate.
Ray 19 Trisha Ray is an Associate Fellow at ORF’s Technology and Media Initiative. "The need for
African centrality in the Lethal Autonomous Weapons debate." Observer Research Foundation. 8 April
2019. https://www.orfonline.org/expert-speak/the-need-for-african-centrality-in-the-lethal-
autonomous-weapons-debate-49695/. [Premier]

However, while all eyes are on the UN GGE for any potential framework on LAWS, the Convention on
Certain Conventional Weapons (CCW) is notoriously slow and encumbered by its consensus-driven
model of decision making. Most notably, persisting inequity, remains the most problematic aspect of the
CCW process and forms the broader debate around LAWS.

Despite the fact that African countries make up 28% of UN’s total membership, its views are barely
represented outside the continent. There is an assumption that African states are preoccupied with very
different development and security concerns and that autonomous weapons are simply not a priority for
them. However, in reality, LAWS are as much a pressing security issue for African nations as they are for
the P-5. Between the Somali Civil War, communal conflicts in Nigeria, the Boko Haram insurgency, and
several other instances of armed conflict, Africa is where some of the deadliest, most persistent violent
conflicts in the world take place. Autonomous weapons, even with their promise of greater policing
power and potential to reduce casualties, would be dangerous if deployed in such conflicts, especially in
the absence of norms and legal frameworks.

This context has shaped the African response to autonomous weapons. For instance, Algeria, Egypt,
Ghana, Uganda and Zimbabwe along with The African Group represented by South Africa were some of
the first countries to have supported a ban on killer robots. Each statement reflects different priorities:
for some, the concern is human control; for others, it is the potential for a massive numbers of
casualties falling outside the purview of international humanitarian law. Ghana and Zimbabwe present
arguments that acknowledge that great powers will continue to develop LAWS for their immense
military benefits irrespective of the views of smaller powers. This last point in particular best reflects the
paradox faced by African actors calling for a ban: no legally-binding instrument on LAWS is possible with
the backing of the P-5. Moreover, while the most prominent non-governmental actors on the global
stage push for a ban, the active pursuit of the same capabilities by the militaries of the Western
countries they originate from, continues in parallel.

LAWS are thus transforming into an arena of geopolitical and strategic competition while also reflecting
the same inequalities that have characterised technology-led military innovation for decades. In the case
of LAWS, there is a strong first-mover advantage due to exponential increases in lethality provided by
even the most incremental improvements in speed and efficiency. Currently, research and investment in
AI/ML and robotics is dominated by the US, China and a handful of other major powers. In 2016, 80% of
the $26-39 billion invested by private firms in artificial intelligence was by Alibaba, Amazon, Google,
Facebook and Baidu. Similarly, some of the largest seed-funded robotics startups of 2018 were Chinese,
American and German. Thus, the great powers control the technologies as well as the institutions and
the agenda.

Most global-level institutions and platforms have a similar approach to the African problem: they have
them at the table, but rarely build and shape the debate on terms that translate to their countries.
Given the stakes state and civilian actors in Africa have in shaping the development of norms and
technologies linked to LAWS, an Africa-centered LAWS debate is much overdue. For several of these
countries, the conflicts are close to home; the distant geographies and distant futures that concern most
of the major powers are not directly relevant.

Biases and mistakes can lead to racial discrimination.


Janer and Garcia 19 Justin Haner Northeastern University Denise Garcia International Committee for
Robot Arms Control and Northeastern University. "The Artificial Intelligence Arms Race: Trends and
World Leaders in Autonomous Weapons Development." Global Policy Volume 10 . Issue 3 . September
2019. https://onlinelibrary.wiley.com/doi/pdf/10.1111/1758-5899.12713. [Premier]

Public oversight and accountability is particularly important because lethal AI-based systems are
vulnerable to bias, hacking, and computer malfunction. As many as 85 per cent of all AI projects are
expected to have errors due to either bias in the algorithm, biased programmers, or bias in the data
used to train them (Gartner, 2018). AI bias tends to be particularly discriminatory against minority
groups and could lead to over-targeting or false positives as facial recognition systems become further
integrated into the weapons of war (West et al., 2019). Beyond bias, AI systems can be hacked, have
unintended coding errors, or otherwise act in ways programmers never could have predicted. One such
coding error happened in autonomous financial trading systems in May 2010 causing a ‘flash crash’
wiping out $1 trillion worth of stock market value in just a few minutes (Pisani, 2015). Even when
correctly coded, many AI programs exploit unforeseen loopholes to achieve their programmed goals,
such as a Tetris-playing AI that would pause the game just before the last block fell so that it could never
lose, or another AI program that would delete the answer key against which it was being tested so that
it could receive a falsely inflated perfect score (Scharre, 2019). The consequences of such bias and errors
as AI is added to increasingly autonomous weapons could be devastating.
Diplomacy

Evidence from drones suggest that LAWs would harm diplomacy.


Roff 15 Heather M. Roff is Visiting Associate Professor at Josef Korbel School of International Studies at
the University of Denver and a Research Associate at the Eisenhower Center for Space and Strategic
Studies with the US Air Force Academy. "Lethal Autonomous Weapons and Jus Ad Bellum
Proportionality." Case Western Reserve Journal of International Law, 47(1). 2015.
https://scholarlycommons.law.case.edu/cgi/viewcontent.cgi?
referer=&httpsredir=1&article=1006&context=jil. [Premier]

Albert Camus once remarked that killing by using a machine as a proxy for a soldier is a danger that we
ought to shun because what is “gained in cleanliness is lost in understanding.”31 What he meant was
that sanitizing killing on one side and attempting to legitimize this supposedly clean form of violence is
myopic because one can never kill cleanly. There will undoubtedly be consequences that will entrench
the violence and perpetuate the cycle. Leaping ahead approximately sixty years, we see that Camus’
observations still hold true and are increasingly prescient for the types of autonomous killing under
discussion. Recent evidence from the United States’ use of unmanned aerial vehicles, or “drones,” in
Pakistan, Yemen, and Somalia to target members of al-Qaeda and its affiliates suggests that using this
type of weapon breeds more animosity and acts as a recruiting strategy for terrorist organizations,
thereby frustrating the U.S.’s goals. 32 Indeed, the U.S.’s adversaries paint its use of unmanned systems
as disrespectful and cowardly, and this belief, in turn, incites distrust, skepticism, and hatred in the
target population.33 What may be gained in cleanliness is lost in understanding.

While these current unmanned operations still require human combatants in the combat theater at
forward operating bases or airstrips, they are not wholly clean or without risk. AWS, on the other hand,
would permit a state to excise the human from this kind of combat situation, and may eliminate the
need for any support crew in theater. The perception of clean killing would increase. Thus, any findings
we have about a target population’s perception of unmanned drones may actually be even stronger in
the case of AWS. Even if AWS are used defensively, the message it sends to one’s adversary is that their
lives are not worth sending a human combatant to fight. This perception, along with evidence from
present drone operations, suggests that feelings of animosity may increase as a result of AWS use.

Acrimonious feelings affect the likelihood of peaceful settlement and negotiation between belligerent
states. Substantial evidence indicates that when high levels of distrust, enmity, and hatred exist
between warring parties, conflicts are prolonged and peaceful settlements are unlikely.34 Data suggests
that when belligerent parties begin to relate to each other in this negative way, conflicts assume a zero
sum characteristic, whereby they end by either total defeat or surrender.35
Emboldenment

LAWs lower the cost of war while imposing a host of other security risks.
Asaro 12 Peter Asaro is Affiliate Scholar at Stanford Law School’s Center for Internet and Society,
CoFounder and Vice-Chair of the International Committee for Robot Arms Control, and the Director of
Graduate Programs for the School of Media Studies at The New School for Public Engagement in New
York City. "On banning autonomous weapon systems: human rights, automation, and the
dehumanization of lethal decision-making." International Review of the Red Cross, 94(886). Summer
2012. https://www.cambridge.org/core/services/aop-cambridge-
core/content/view/992565190BF2912AFC5AC0657AFECF07/S1816383112000768a.pdf/on-banning-
autonomous-weapon-systems-human-rights-automation-and-the-dehumanization-of-lethal-decision-
making.pdf. [Premier]

Autonomous weapon systems raise a host of ethical and social concerns, including issues of asymmetric
warfare and risk redistribution from combatants to civilians and the potential to lower the thresholds for
nations to start wars.9 Insofar as such weapons tend to remove the combatants who operate them from
area of conflict and reduce the risks of causalities for those who possess them, they tend to also reduce
the political costs and risks of going to war. This could result in an overall lowering of the threshold of
going to war. Autonomous weapon systems also have the potential to cause regional or global instability
and insecurity, to fuel arms races, to proliferate to non-state actors, or initiate the escalation of conflicts
outside of human political intentions. Systems capable of initiating lethal force without human
supervision could do so even when political and military leadership has not deemed such action
appropriate, resulting in the unintended initiation or escalation of conflicts outside of direct human
control.10 Thus, these systems pose a serious threat to international stability and the ability of
international bodies to manage conflicts.
Crisis Instability

LAWs increase the risk of nuclear war.


Laird 20 Burgess Laird, Senior International Researcher at RAND. "The Risks of Autonomous Weapons
Systems for Crisis Stability and Conflict Escalation in Future U.S.-Russia Confrontations." RAND. 3 June
2020. https://www.rand.org/blog/2020/06/the-risks-of-autonomous-weapons-systems-for-crisis.html.
[Premier]

While holding out the promise of significant operational advantages, AWS simultaneously could increase
the potential for undermining crisis stability and fueling conflict escalation in contests between the
United States and Russia. Defined as “the degree to which mutual deterrence between dangerous
adversaries can hold in a confrontation,” as my RAND colleague Forrest Morgan explains, crisis stability
and the ways to achieve it are not about warfighting, but about “building and posturing forces in ways
that allow a state, if confronted, to avoid war without backing down” on important political or military
interests. Thus, the military capabilities developed by nuclear-armed states like the United States and
Russia and how they posture them are key determinants of whether crises between them will remain
stable or devolve into conventional armed conflict, as well as the extent to which such conflict might
escalate in intensity and scope, including to the level of nuclear use. AWS could foster crisis instability
and conflict escalation in contests between the United States and Russia in a number of ways; in this
short essay I will highlight only four.

First, a state facing an adversary with AWS capable of making decisions at machine speeds is likely to
fear the threat of sudden and potent attack, a threat that would compress the amount of time for
strategic decisionmaking. The posturing of AWS during a crisis would likely create fears that one's forces
could suffer significant, if not decisive, strikes. These fears in turn could translate into pressures to strike
first—to preempt—for fear of having to strike second from a greatly weakened position. Similarly,
within conflict, the fear of losing at machine speeds would be likely to cause a state to escalate the
intensity of the conflict possibly even to the level of nuclear use.

Second, as the speed of military action in a conflict involving the use of AWS as well as hypersonic
weapons and other advanced military capabilities begins to surpass the speed of political
decisionmaking, leaders could lose the ability to manage the crisis and with it the ability to control
escalation. With tactical and operational action taking place at speeds driven by machines, the time for
exchanging signals and communications and for assessing diplomatic options and offramps will be
significantly foreclosed. However, the advantages of operating inside the OODA loop of a state
adversary like Iraq or Serbia is one thing, while operating inside the OODA loop of a nuclear-armed
adversary is another. As the renowned scholar Alexander George emphasized (PDF), especially in
contests between nuclear armed competitors, there is a fundamental tension between the operational
effectiveness sought by military commanders and the requirements for political leaders to retain control
of events before major escalation takes place.

Third, and perhaps of greatest concern to policymakers should be the likelihood that, from the vantage
point of Russia's leaders, in U.S. hands the operational advantages of AWS are likely to be understood as
an increased U.S. capability for what Georgetown professor Caitlin Talmadge refers to as “conventional
counterforce” operations. In brief, in crises and conflicts, Moscow is likely to see the United States as
confronting it with an array of advanced conventional capabilities backstopped by an interconnected
shield of theater and homeland missile defenses. Russia will perceive such capabilities as posing both a
conventional war-winning threat and a conventional counterforce threat (PDF) poised to degrade the
use of its strategic nuclear forces. The likelihood that Russia will see them this way is reinforced by the
fact that it currently sees U.S. conventional precision capabilities precisely in this manner. As a
qualitatively new capability that promises new operational advantages, the addition of AWS to U.S.
conventional capabilities could further cement Moscow's view and in doing so increase the potential for
crisis instability and escalation in confrontations with U.S. forces.

In other words, the fielding of U.S. AWS could augment what Moscow already sees as a formidable U.S.
ability to threaten a range of important targets including its command and control networks, air
defenses, and early warning radars, all of which are unquestionably critical components of Russian
conventional forces. In many cases, however, they also serve as critical components of Russia's nuclear
force operations. As Talmadge argues, attacks on such targets, even if intended solely to weaken Russian
conventional capabilities, will likely raise Russian fears that the U.S. conventional campaign is in fact a
counterforce campaign aimed at neutering Russia's nuclear capabilities. Take for example, a
hypothetical scenario set in the Baltics in the 2030 timeframe which finds NATO forces employing
swarming AWS to suppress Russian air defense networks and key command and control nodes in
Kaliningrad as part of a larger strategy of expelling a Russian invasion force. What to NATO is a logical
part of a conventional campaign could well appear to Moscow as initial moves of a larger plan designed
to degrade the integrated air defense and command and control networks upon which Russia's strategic
nuclear arsenal relies. In turn, such fears could feed pressures for Moscow to escalate to nuclear use
while it still has the ability to do so.

Finally, even if the employment of AWS does not drive an increase in the speed and momentum of
action that forecloses the time for exchanging signals, a future conflict in which AWS are ubiquitous will
likely prove to be a poor venue both for signaling and interpreting signals. In such a conflict, instead of
interpreting a downward modulation in an adversary's operations as a possible signal of restraint or
perhaps as signaling a willingness to pause in an effort to open up space for diplomatic negotiations,
AWS programmed to exploit every tactical opportunity might read the modulation as an opportunity to
escalate offensive operations and thus gain tactical advantage. Such AWS could also misunderstand
adversary attempts to signal resolve solely as adversary preparations for imminent attack. Of course,
correctly interpreting signals sent in crisis and conflict is vexing enough when humans are making all the
decisions, but in future confrontations in which decisionmaking has willingly or unwillingly been ceded
to machines, the problem is likely only to be magnified.

High speed, unpredictability, and hacking will lead to accidents.


Cook 19 Adam Cook, Lieutenant Colonel, USAF. "Taming Killer Robots: Giving Meaning to the
“Meaningful Human Control” Standard for Lethal Autonomous Weapon Systems." JAG School Paper, No.
1, Air University Press. June 2019. https://media.defense.gov/2019/Jun/18/2002146749/-1/-
1/0/JP_001_COOK_TAMING_KILLER_ROBOTS.PDF. [Premier]

As Mr. Heyns suggests, the distinctive character of autonomous weapon systems raises a host of novel
risks and policy issues. The first is speed. As illustrated in Flash Boys, Michael Lewis’s colorful history of
the advent of highfrequency stock trading, research and development in modern computing has largely
developed into a competition to shave off tiny increments of time, sometimes measured in picoseconds.
In the race to beat their competitors to market, the traders described by Lewis, and the computer
technicians and programmers supporting them, went to almost obscene lengths to ensure their orders
would beat their competitors’ to market in a race measured in increments incomprehensible to the
human brain. In one instance, a particularly ambitious firm spent tens of millions of dollars buying land
rights through rural Pennsylvania and Ohio in order to lay their own high-speed fiber optic cable. This
dedicated cable ensured that the firm’s buy and sell orders (determined by their equally speedy
algorithms) could outrace their competitors’ orders between the New York and Chicago stock exchanges
in a race lasting less time than the blink of a human eye.38

The unprecedented speed with which autonomous systems of all types can determine and execute
decisions presents significant issues when those decisions involve the use of deadly weapons. The speed
of operations made possible by LAWS represents a paradigm shift in the conception of battle plan
execution, which has throughout history unfolded no more quickly than the speed at which a human
brain can assess information, weigh alternatives, and determine the best course of action. As Schmitt
and Thurnher write, “Many nations, including China, are already developing advanced systems with
autonomous features. Future combat may therefore occur at such a high tempo that human operators
will simply be unable to keep up. . . . [Therefore] a force that does not employ fully autonomous
weapon systems will inevitably operate outside its enemy’s ‘OODA [observe, orient, decide, act] loop,’
thereby ceding initiative on the battlefield.”39

The second distinctive feature of an autonomous weapon systems is, well, its autonomy. Never in the
history of military affairs have weapons been designed with the potential not just to assist humans in
waging wars, but to actually commence hostilities—even without a conscious human decision to do so.

This risk is compounded by the unpredictability of increasingly complex algorithms involving in some
cases millions of lines of code. The more powerful the sensors connected to the platform, and the more
factors the system’s processor is asked to weigh before deciding on a course of action, the more difficult
it is for human operators to predict how an autonomous system will react to any given sequence of real-
world events or even to understand after the fact why the system reacted the way it did.40 This risk
factor is increased dramatically by the potential concurrent employment of autonomous systems by
adversaries, each system reacting to the others’ actions—or perceived actions—at a speed beyond the
ability of the human brain to react to, or even comprehend, in real time.

The fourth and final novel risks posed by autonomous systems are hacking by adversaries and simple
coding errors, either of which could lead to unanticipated and even deadly actions by improperly
designed or secured LAWS. The US federal government’s abysmal track record of acquiring and
deploying IT systems, including the disastrous roll-out of the Affordable Care Act (Obamacare)
marketplaces, the failure of data systems at agencies ranging from the Federal Bureau of Investigation
to the Veterans Administration, and the highly publicized hacking of General Services Administration
databases by Chinese actors, does not exactly inspire confidence in the government’s ability to compile
millions of lines of error-free code with impenetrable defenses against adversarial hacking.

The real-world dangers posed by the combination of the risk factors set forth above (speed,
autonomous action, complexity, unknown errors, and hacking) are illustrated by Wall Street’s famous
“flash crash” of 6 May 2010. On that date, the Dow Jones Industrial Average suddenly lost nearly 10
percent of its total value in just minutes. As detailed by Mr. Scharre in his most recent paper on
autonomous weapons, “A U.S. Securities and Exchange Commission (SEC) report following the incident
determined that the crash was initiated by an automated stock trade (a ‘sell algorithm’) executing a
large sale unusually quickly. This caused a sale that normally would have occurred over several hours to
be executed within 20 minutes. This sell algorithm then interacted with high-frequency trading
algorithms to cause a rapid price drop.”41

Perhaps most alarming of all, other organizations which have investigated the flash crash, including the
Department of Justice and the Commodity Futures Trading Commission, have disputed the SEC’s finding,
instead attributing the flash crash to a London-based trader hacking into the autonomous trading
algorithms of other firms. Scharre’s conclusion, and the lack of certainty into the causes of the crash
even seven years later, lends credence to the fears of many in the international community that
autonomous weapon systems deployed into the wild without proper controls could act in unexpected
and disastrous ways. “What appears clear across multiple analyses of the May 2010 incident is that
automated stock trades and high-frequency trading algorithms at the very least played a role in
exacerbating the crash. This may have been due, in part, to unanticipated interactions between
adversarial trading algorithms. It is also possible that behavioral hacking of the algorithms was a
factor.”42

LAWs increase volatility and the risk of unstoppable escalation.


Pedron and da Cruz 20 summarizes Stephanie Mae Pedron and Jose de Arimateia da Cruz. "The
Future of Wars: Artificial Intelligence (AI) and Lethal Autonomous W onomous Weapon Systems (L
eapon Systems (LAWS)." International Journal of Security Studies, 2(1). 2020.
https://digitalcommons.northgeorgia.edu/cgi/viewcontent.cgi?article=1020&context=ijoss. [Premier]

Alexander Kott (2018), Chief Scientist of the Army Research Laboratory, stated that one of the many
remarkable features of AI is its ability to make things “individually and collectively more intelligent.” But
at the same time, it also makes situations more volatile. Future deployment of LAWS presents several
security challenges such as hacking of the system or unanticipated failure, particularly if the system
utilizes machine learning applications. LAWS are expected to enhance a military’s lethal force, so issues
following their deployment can have mighty consequences. Since many AI systems are first developed in
the public sphere, and then repurposed for military use, integration errors can occur once the system is
transferred to a combat environment (Sayler, 2019b). Consequences will be dependent on the type of
failure that occurs. For example, unintended escalation in a crisis may occur if LAWS engage targets
other than what the human operator intended or if adversaries deliberately introduce data that
produces an error in the system. Human mistakes are typically contained to a single individual. But
errors in complex AI systems, especially if they are deployed at scale, risk simultaneous—perhaps even
inevitable—failure (Scharre, 2017). Moreover, the danger of machines producing unconventional
outcomes that cannot be immediately terminated—if the outcome can be terminated at all—may result
in a destabilizing effect if the system spirals out of human control.
Prolif

LAWs are uniquely attractive for terrorism and there are multiple avenues for access.
Ware 19 Jacob Ware holds a master’s in security studies from Georgetown University and an MA
(Hons) in international relations and modern history from the University of St Andrews. "Terrorist
Groups, Artificial Intelligence, and Killer Drones." 24 September 2019.
https://warontherocks.com/2019/09/terrorist-groups-artificial-intelligence-and-killer-drones/. [Premier]

Terrorist groups will be interested in artificial intelligence and lethal autonomous weapons for three
reasons — cost, traceability, and effectiveness.

Firstly, killer robots are likely to be extremely cheap, while still maintaining lethality. Experts agree that
lethal autonomous weapons, once fully developed, will provide a cost-effective alternative to terrorist
groups looking to maximize damage, with Tegmark arguing that “small AI-powered killer drones are
likely to cost little more than a smartphone.” Additionally, killer robots will minimize the human
investment required for terrorist attacks, with scholars arguing that “greater degrees of autonomy
enable a greater amount of damage to be done by a single person.” Artificial intelligence could make
terrorist activity cheaper financially and in terms of human capital, lowering the organizational costs
required to commit attacks.

Secondly, using autonomous weapons will reduce the trace left by terrorists. A large number of
munitions could be launched — and a large amount of damage done — by a small number of people
operating at considerable distance from the target, reducing the signature left behind. In Tegmark’s
words, for “a terrorist wanting to assassinate a politician … all they need to do is upload their target’s
photo and address into the killer robot: it can then fly to the destination, identify and eliminate the
person, and self-destruct to ensure nobody knows who was responsible.” With autonomous weapons
technology, terrorist groups will be able to launch increasingly complex attacks, and, when they want to,
escape without detection.

Finally, killer robots could reduce, if not eliminate, the physical costs and dangers of terrorism, rendering
the operative “essentially invulnerable.” Raising the possibility of “fly and forget” missions, lethal
autonomous weapons might simply be deployed toward a target, and engage that target without
further human intervention. As P. W. Singer noted in 2012, “one [will] not have to be suicidal to carry
out attacks that previously might have required one to be so. This allows new players into the game,
making al-Qaeda 2.0 and the next-generation version of the Unabomber or Timothy McVeigh far more
lethal.” Additionally, lethal autonomous weapons could potentially reduce human aversion to killing,
making terrorism even more palatable as a tactic for political groups. According to the aforementioned
February 2018 report, “AI systems can allow the actors who would otherwise be performing the tasks to
retain their anonymity and experience a greater degree of psychological distance from the people they
impact”; this would not only improve a terrorist’s chances of escape, as mentioned, but reduce or even
eliminate the moral or psychological barriers to murder.

Terrorist Acquisition of Lethal Autonomous Weapons Is Realistic


The proliferation of artificial intelligence and killer robot technology to terrorist organizations is realistic
and likely to occur through three avenues — internal development, sales, and leaks.

Firstly, modern terrorist organizations have advanced scientific and engineering departments, and
actively seek out skilled scientists for recruitment. ISIL, for example, has appealed for scientists to trek to
the caliphate to work on drone and AI technology. The individual technologies behind swarming killer
robots — including unmanned aerial vehicles, facial recognition, and machine-to-machine
communication — already exist, and have been adapted by terrorist organizations for other means.
According to a French defense industry executive, “the technological challenge of scaling it up to swarms
and things like that doesn’t need any inventive step. It’s just a question of time and scale and I think
that’s an absolute certainty that we should worry about.”

Secondly, autonomous weapons technology will likely proliferate through sales. Because AI research is
led by private firms, advanced AI technology will be publicly sold on the open market. As Michael
Horowitz argues, “militant groups and less-capable states may already have what they need to produce
some simple autonomous weapon systems, and that capability is likely to spread even further for purely
commercial reasons.” The current framework controlling high-tech weapons proliferation — the
Wassenaar Arrangement and Missile Technology Control Regime — is voluntary, and is constantly tested
by great-power weapons development. Given interest in developing AI-guided weapons, this seems
unlikely to change. Ultimately, as AI expert Toby Walsh notes, the world’s weapons companies can, and
will, “make a killing (pun very much intended) selling autonomous weapons to all sides of every
conflict.”

Finally, autonomous weapons technology is likely to leak. Innovation in the AI field is led by the private
sector, not the military, because of the myriad commercial applications of the technology. This will make
it more difficult to contain the technology, and prevent it from proliferating to nonstate actors. Perhaps
the starkest warning has been issued by Paul Scharre, a former U.S. defense official: “We are entering a
world where the technology to build lethal autonomous weapons is available not only to nation-states
but to individuals as well. That world is not in the distant future. It’s already here.”

Prolif to non-state actors is inevitable absent intervention.


Chertoff 18 Philip Chertoff is a J.D. candidate at Harvard Law School, Young Leader in Foreign and
Security Policy at the Geneva Centre for Security Policy, where he researched autonomous weapons
systems and emerging technology issues. “Perils of Lethal Autonomous Weapons Systems Proliferation:
Preventing Non-State Acquisition.” Geneva Center for Security Policy. October 2018. Issue 2.
https://dam.gcsp.ch/files/2y10RR5E5mmEpZE4rnkLPZwUleGsxaWXTH3aoibziMaV0JJrWCxFyxXGS.
[Premier]

Academics, defence industry representatives and AI developers have all raised concerns about the
possible acquisition of LAWS by malicious actors. In the 2015 expert meeting, GCSP’s AI expert,
JeanMarc Rickli, warned about the value such systems offer to non-state actors and the risks of potential
acquisition—including their possible use for indiscriminate violence and terrorist acts.3 Before the first
GGE, Elon Musk and other AI developers issued an open letter that specifically cited the significant risk
that LAWS will become “weapons that despots and terrorists use against innocent populations”.4
Yet, during the second GGE, acquisition by non-state actors, such as terrorists, was highlighted only in a
few throwaway lines by certain representatives. This general absence was surprising, not just because of
the significant attention to malicious actors in conversations prior to the second GGE, but also because
of the significant risks posed by malicious use of LAWS. As Germany stated, in one of the few comments
on malicious actors, LAWS could “exacerbate the threat of terrorism” and expand terrorist capacity to
“indiscriminately inflict harm and inflict terror on civilians”.5

The risk of disastrous use of LAWS by non-state actors is far greater than potential misuse by state
actors. For professionalised militaries, like those of the US, UK, France, Russia, and China, command and
subordination are requirements for the introduction of any new weapons systems. Any fully-
autonomous weapons systems currently under development are likely a long way from integration into
active engagement because professional state militaries need assurances of predictability and reliability.
For malicious actors, however, concerns about predictability and reliability are less pressing, especially if
LAWS could be a force multiplier in their asymmetric conflict. Malicious non-state actors have no need
to account for proportionality or distinction in their attacks and, for terrorist groups, such indiscriminate
violence may be the goal, as such brutality would cultivate the fear and intimidation that is integral to
their missions.

4. LAWS and Proliferation

Some may believe that the acquisition of LAWS by malicious actors is a distant reality. However, such
views fail to consider both the current proliferation of increasingly autonomous weapons systems, as
well as concentrated efforts by malicious actors to acquire offset systems. The proliferation of LAWS
should not be thought of as a watershed event; instead, it is a developing process of increasing
autonomy in weapons systems. Currently, there are already weapons systems that operate with a
degree of autonomy that might be considered characteristic of LAWS. Israel has developed two semi-
autonomous drone systems, Harpy and Harop, which operate as “loitering munitions”. These systems
“loiter” around a target area, searching for targets, and engage when targets are located. Harpy and
Harop have been used, or are currently in-use, by a number of countries such as China and Azerbaijan,
and have been deployed in active conflicts.6

Autonomy is also winding its way into ground munitions and missile technology. South Korea has two
such weapons in use. The SGR-A1 is a semi-autonomous sentry gun that offers automated targeting.7
The Super aEgis II sentry turret was originally designed with fully autonomous capacity.8 Since 2005, the
British RAF has operated the Brimstone missile system, which offers an “indirect targeting” mode.9
Increasingly, autonomous systems are being implemented in active conflicts. As development continues
towards the creation of fully-autonomous systems, systems will offer ever-increasing autonomy in each
iteration.

Active pursuit of offset systems by non-state actors has always been a significant concern. Theoretically,
illicit actors need to pursue increasingly advanced technologies in order to stay ahead of their enemies.
Bruce Hoffman observed, “success for terrorists is dependent on their ability to keep one step ahead of
not only the authorities but also counterterrorist technology.” Non-state actor pursuit of autonomous
weapons is then not a question of if, but when. Hamas, Hezbollah and ISIS have already demonstrated
the deployment of armed, remote-controlled drone systems.10 The New York Times recently reported
that the Islamic State has developed a homebrewed armed drone program, using modified off-the-shelf
drone systems to drop bombs on, or kamikaze strike, Allied forces.11
It is highly unlikely that any state currently considering the development of LAWS can be persuaded to
disengage from such activity based on the risk of malicious acquisition. Recognizing this, international
efforts are largely focused on wrangling states to develop them with responsible use and ethical
considerations in mind. For all the same concerns about ethics, international law, and strategic stability,
it is as important that states take future proliferation to malicious non-state actors seriously. Nations
should consider the creation of a harmonised export control regime for military-grade LAWS, and critical
LAWS components, to reduce the risk of technology transfer to malicious actors.

Bans are an urgent and effective solution.


Ware 19 Jacob Ware holds a master’s in security studies from Georgetown University and an MA
(Hons) in international relations and modern history from the University of St Andrews. "Terrorist
Groups, Artificial Intelligence, and Killer Drones." 24 September 2019.
https://warontherocks.com/2019/09/terrorist-groups-artificial-intelligence-and-killer-drones/. [Premier]

Drones and AI provide a particularly daunting counter-terrorism challenge, simply because effective
counter-drone or anti-AI expertise does not yet exist. That said, as Daveed Gartenstein-Ross has noted,
“in recent years, we have seen multiple failures in imagination as analysts tried to discern what
terrorists will do with emerging technologies. A failure in imagination as artificial intelligence becomes
cheaper and more widely available could be even costlier.” Action is urgently needed, and for now,
counter-terrorism policies are likely to fit into two categories, each with flaws: defenses and bans.

Firstly, and most likely, Western states could strengthen their defenses against drones and weaponized
AI. This might involve strengthening current counter-drone and anti-AI capabilities, improving training
for local law enforcement, and establishing plans for mitigating drone or autonomous weapons
incidents. AI technology and systems will surely play an important role in this space, including in the
development of anti-AI tools. However, anti-AI defenses will be costly, and will need to be implemented
across countless cities throughout the entire Western world, something Michael Horton calls “a
daunting challenge that will require spending billions of dollars on electronic and kinetic
countermeasures.” Swarms, Scharre notes, will prove “devilishly hard to target,” given the number of
munitions and their ability to spread over a wide area. In addition, defenses will likely take a long time to
erect effectively and will leave citizens exposed in the meantime. Beyond defenses, AI will also be used
in counter-terrorism intelligence and online content moderation, although this will surely spark civil
liberties challenges.

Secondly, the international community could look to ban AI use in the military through an international
treaty sanctioned by the United Nations. This has been the strategy pursued by activist groups such as
the Campaign to Stop Killer Robots, while leading artificial intelligence researchers and scientific
commentators have published open letters warning of the risk of weaponized AI. That said, great
powers are not likely to refrain from AI weapons development, and a ban might outlaw positive uses of
militarized AI. The international community could also look to stigmatize, or delegitimize, weaponized AI
and lethal autonomous weapons sufficiently to deter terrorist use. Although modern terrorist groups
have proven extremely willing to improvise and innovate, and effective at doing so, there is an extensive
list of weapons — chemical weapons, biological weapons, cluster munitions, barrel bombs, and more —
accessible to terrorist organizations, but rarely used. This is partly down to the international stigma
associated with those munitions — if a norm is strong enough, terrorists might avoid using a weapon.
However, norms take a long time to develop, and are fragile and untrustworthy solutions. Evidently,
good counter-terrorism options are limited.

The U.S. government and its intelligence agencies should continue to treat AI and lethal autonomous
weapons as priorities, and identify new possible counter-terrorism measures. Fortunately, some
progress has been made: Nicholas Rasmussen, former director of the National Counterterrorism Center,
admitted at a Senate Homeland Security and Governmental Affairs Committee hearing in September
2017 that “there is a community of experts that has emerged inside the federal government that is
focused on this pretty much full time. Two years ago this was not a concern … We are trying to up our
game.”

Nonstate actors are already deploying drones to attack their enemies. Lethal autonomous weapon
systems are likely to proliferate to terrorist groups, with potentially devastating consequences. The
United States and its allies should urgently address the rising threat by preparing stronger defenses
against possible drone and swarm attacks, engaging with the defense industry and AI experts warning of
the threat, and supporting realistic international efforts to ban or stigmatize military applications of
artificial intelligence. Although the likelihood of such an event is low, a killer robot attack could cause
massive casualties, strike a devastating blow to the U.S. homeland, and cause widespread panic. The
threat is imminent, and the time has come to act.

Offensive LAWs will be proliferated to terrorists.


Janer and Garcia 19 Justin Haner Northeastern University Denise Garcia International Committee for
Robot Arms Control and Northeastern University. "The Artificial Intelligence Arms Race: Trends and
World Leaders in Autonomous Weapons Development." Global Policy Volume 10 . Issue 3 . September
2019. https://onlinelibrary.wiley.com/doi/pdf/10.1111/1758-5899.12713. [Premier]

Autonomous weapons are poised for rapid proliferation. At present, this technology is concentrated in a
few powerful, wealthy countries that have the resources required to invest heavily in advanced robotics
and AI research. However, Moore’s Law and the declining costs of production, including 3D printing,
will soon enable many states and non-state actors to procure killer robots (Scharre, 2014). At present,
quadcopter drones cost as little as $25 and a small $35 Raspberry Pi computer can run AI advanced
enough to defeat United States Air Force fighter pilots in combat simulations (Cuthbertson, 2016).
Accountability will become increasingly difficult as more international actors are able to acquire lethal
AWS. Houthi rebels are using weaponized drones and both ISIS and Boko Haram have adapted drones
for use as improvised explosive devices (Olaifa, 2018; Nissenbaum and Strobel, 2019). Proliferation to
groups using terrorist tactics is particularly worrying because perceived costs drop as AWS increases the
distance between the attacker and the target, making it less likely that they will be caught, while
simultaneously increasing perceived benefits by allowing more precise target selection in order to
maximize deadly impact and fear. Such offensive capabilities are likely to proliferate faster than lethal
AI-based defensive systems due to protective technology having higher relative costs due to increased
need to differentiate and safely engage targets.
Rapid proliferation of LAWs to small states undermines global security.
Pedron and da Cruz 20 summarizes Stephanie Mae Pedron and Jose de Arimateia da Cruz. "The
Future of Wars: Artificial Intelligence (AI) and Lethal Autonomous W onomous Weapon Systems (L
eapon Systems (LAWS)." International Journal of Security Studies, 2(1). 2020.
https://digitalcommons.northgeorgia.edu/cgi/viewcontent.cgi?article=1020&context=ijoss. [Premier]

Rushed development of LAWS may result in failures to comply with international laws of war, since
these weapons are fundamentally different from any prior form of weaponry in that they make
independent decisions about how to act. The complexity of these systems may make it impossible for
people to predict what they will do in every possible situation. LAWS therefore presents a gap in the
existing legal order by underscoring the inadequacy of the current established means of holding an
individual or state liable for actions conducted during wartime (Crootof, 2016). Additionally,
proliferation may amplify the offensive competencies of small countries—possibly even independent
actors. Rapid, disproportionate increases in the military capabilities of relatively small nation-states can
have detrimental effects on the current global state of affairs. Should these nation-states opt to hire
technically capable individuals from third parties that have the skills to gradually develop or hack LAWS
or similar sophisticated weaponry, then global security may be undermined.
Philosophy
Kant

LAWs cannot exercise practical reasons because it is incapable of moral deliberation.


Ulgen 17 Ozlem Ulgen is Reader in International Law and Ethics in the School of Law. "Kantian Ethics in
the Age of Artificial Intelligence and Robotics." Questions of International Law. 31 October 2017.
www.qil-qdi.org/kantian-ethics-age-artificial-intelligence-robotics/#:~:text=Unlike%20utilitarian
%20arguments%20which%20favour,achieving%20a%20greater%20public%20good. [Premier]

Autonomy of the will requires inner and outer development of the person to reach a state of moral
standing and be able to engage in moral conduct. This is suggestive of an innate sense of right and
wrong.[61] Can machines emulate this sort of ‘will’? Artificial intelligence in autonomous weapons may
allow machine logic to develop over time to identify correct and incorrect action, showing a limited
sense of autonomy. But the machine does not possess a ‘will’ of its own nor does it understand what
freedom is and how to go about attaining it by adopting principles that will develop inner and outer
autonomy of the will. It has no self-determining capacity that can make choices between varying
degrees of right and wrong. The human can decide to question or go against the rules but the machine
cannot, except in circumstances of malfunction and mis-programming. It has no conception of freedom
and how this could be enhanced for itself as well as humans. The machine will not be burdened by moral
dilemmas so the deliberative and reflective part of decision-making (vital for understanding
consequences of actions and ensuring proportionate responses) is completely absent. There is a limited
sense in which artificial intelligence and robotics may mimic the outer aspect of Kant’s autonomy of the
will. Robots may have a common code of interaction to promote cooperation and avoid conflict among
themselves. Autonomous weapons operating in swarms may develop principles that govern how they
interact and coordinate action to avoid collision and errors. But these are examples of functional,
machine-to-machine interaction that do not extend to human interaction, and so do not represent a
form of autonomy of the will that is capable of universalisation.

When we talk about trust in the context of using artificial intelligence and robotics what we actually
mean is reliability. Trust relates to claims and actions people make and is not an abstract thing.[62]
Machines without autonomy of the will, in the Kantian sense, and without an ability to make claims
cannot be attributed with trust. Algorithms cannot determine whether something is trustworthy or not.
So trust is used metaphorically to denote functional reliability; that the machine performs tasks for the
set purpose without error or minimal error that is acceptable. But there is also an extension of this
notion of trust connected to human agency in the development and uses to which artificial intelligence
and robotics are put. Can we trust humans involved in developing such technologies that they will do so
with ethical considerations in mind – ie limiting unnecessary suffering and harm to humans, not violating
fundamental human rights? Once the technology is developed, can we trust those who will make use of
it to do so for benevolent rather than malevolent purposes? These questions often surface in debates on
data protection and the right to privacy in relation to personal data trawling activities of technologies.
Again, this goes back to what values will be installed that reflect ethical conduct and allow the
technology to distinguish right from wrong.

5. Kantian notion of rational beings and artificial intelligence


Kant’s focus on the rational thinking capacity of humans relates to potential rather than actual
possession of rationality, taking account of deficient rationality, immoral conduct, and situations where
humans may deliberately act irrationally in order to gain some advantage over an opponent. Technology
may be deemed to have rational thinking capacity if it engages in a pattern of logical thinking from
which it rationalises and takes action. Although Kant’s concept is specifically reserved for humans who
can set up a system of rules governing moral conduct (a purely human endeavour and not one that can
be mechanically produced), the capacity aspect may be fulfilled by artificial intelligence and robotics’
potential rather than actual rational thinking. But this seems a low threshold raising concerns about
predictability and certainty of the technology in real-life scenarios. So there would need to be much
greater clarity and certainty about what sort of rationality the technology would possess and how it
would apply in human scenarios.

When we compare machines to humans there is a clear difference between the logic of a calculating
machine and the wisdom of human judgment.[63] Machines perform cost effective and speedy
peripheral processing activities based on quantitative analysis, repetitive actions, and sorting data (eg
mine clearance; and detection of improvised explosive devices). They are good at automatic reasoning
and can outperform humans in such activities. But they lack the deliberative and sentient aspects of
human reasoning necessary in human scenarios where artificial intelligence may be used. They do not
possess complex cognitive ability to appraise a given situation, exercise judgment, and refrain from
taking action or limit harm. Unlike humans who can pull back at the last minute or choose a workable
alternative, robots have no instinctive or intuitive ability to do the same. For example, during warfare
the use of discretion is important to implementing rules on preventing unnecessary suffering, taking
precautionary measures, and assessing proportionality. Such discretion is absent in robots.[64]

Machines are unable of being consistent with Kantian ethics.


Ulgen 17 Ozlem Ulgen is Reader in International Law and Ethics in the School of Law. "Kantian Ethics in
the Age of Artificial Intelligence and Robotics." Questions of International Law. 31 October 2017.
www.qil-qdi.org/kantian-ethics-age-artificial-intelligence-robotics/#:~:text=Unlike%20utilitarian
%20arguments%20which%20favour,achieving%20a%20greater%20public%20good. [Premier]

A fully autonomous rule-generating approach would mean the technology produces its own rules and
conduct without reference to or intervention from humans. After the initial design and programming by
humans, the technology makes its own decisions. This is ‘machine learning’ or ‘dynamic learning
systems’ whereby the machine relies on its own databank and experiences to generate future rules and
conduct.[55] Fully autonomous weapons systems, for example, would have independent thinking
capacity as regards acquiring, tracking, selecting, and attacking human targets in warfare based on
previous experience of military scenarios.[56] Such an approach presents challenges. There is
uncertainty and unpredictability in the rules that a fully autonomous weapons system would generate
beyond what it has been designed to do, so that it would not comply with international humanitarian
law or Kantian ethics. In the civilian sphere, fully autonomous technology may generate rules that
adversely impact on human self-worth and progress by causing human redundancies, unemployment,
and income instability and inequality. Adverse impact on human self-worth and progress, and
uncertainty and unpredictability in the rule-generating process are contrary to what is fundamentally
beneficial to humankind; such a process cannot produce rules that are inherently desirable, doable,
valuable, and capable of universalisation. A perverse ‘machine subjectivity’ or ‘machine free will’ would
exist without any constraints, similar to Kant’s ‘hypothetical imperatives’ formed by human subjective
desires.

LAWs violate human dignity which takes priority over secondary aims.
Ulgen 17 Ozlem Ulgen is Reader in International Law and Ethics in the School of Law. "Kantian Ethics in
the Age of Artificial Intelligence and Robotics." Questions of International Law. 31 October 2017.
www.qil-qdi.org/kantian-ethics-age-artificial-intelligence-robotics/#:~:text=Unlike%20utilitarian
%20arguments%20which%20favour,achieving%20a%20greater%20public%20good. [Premier]

Human dignity is accorded by recognising the rational capacity and free will of individuals to be bound
by moral rules, as well as through notions of accountability and responsibility for wrongdoing.[71] We
accept that when wrongdoing is committed someone needs to be held accountable and responsible.
How can artificial intelligence express person-to-person accountability and fulfil this aspect of human
dignity (ie accountability for wrongdoing means respecting moral agents as equal members of the moral
community)? Could we ever accept artificial intelligence as equal members? There is also the matter of
whether artificial intelligence and robotics will be able to treat humanity as an end in the Kantian sense.

In the military sphere the use of lethal autonomous weapons are arguably used for a relative end (ie the
desire to eliminate a human target in the hope of preventing harm to others). For Kant, relative ends are
lesser values capable of being replaced by an equivalent. Killing a human being in the hope that it will
prevent further harm is insufficiently morally grounded to override human dignity and may be reckless
if alternatives and consequences are not considered. Utilitarians may counter that balancing interests
involves consideration of the greater good, which in this instance is to prevent harm to others.[72]
Consequentialist thinking and the utilitarian calculus are reflected in the proportionality principle under
art 51 API, requiring assessment of whether an attack is expected to cause excessive incidental loss of
civilian life in relation to the concrete and direct military advantage anticipated. But utilitarianism
cannot overcome the problem of applying a quantitative assessment of life for prospective greater good
that treats the humans sacrificed as mere objects, and creates a hierarchy of human dignity. Unless
autonomous weapons can only be used to track and identify rather than eliminate a human target, they
would extinguish a priceless and irreplaceable objective end possessed by all rational beings; human
dignity.[73]

Using autonomous weapons to extinguish life removes the reason for having morals in the first place;
human dignity of rational beings with autonomy of the will. In doing so a relative end is given priority
over an objective end.[74] Lack of face-to-face killing creates a hierarchy of human dignity. Military
personnel, remote pilots, commanders, programmers, and engineers are immune from rational and
ethical decision-making to kill another human being and do not witness the consequences. By replacing
the human combatant with a machine the combatant’s human dignity is not only preserved but
elevated above the human target. This can also be seen as a relative end in that it selfishly protects your
own combatants from harm at all costs including violating the fundamental principle of humanity as an
objective end.[75]
LAWs erode democratic peace.
Janer and Garcia 19 Justin Haner Northeastern University Denise Garcia International Committee for
Robot Arms Control and Northeastern University. "The Artificial Intelligence Arms Race: Trends and
World Leaders in Autonomous Weapons Development." Global Policy Volume 10 . Issue 3 . September
2019. https://onlinelibrary.wiley.com/doi/pdf/10.1111/1758-5899.12713. [Premier]

Between states, full autonomy in war may undermine democratic peace (Altmann and Sauer, 2017).
Immanuel Kant’s theory of democratic peace relies upon the public not supporting unnecessary wars as
they will be the ones called upon to fight in them. When many countries transitioned from a
conscription or draft-based military to an allvolunteer force, public opposition to ongoing wars declined
(Horowitz and Levendusky, 2011). This trend will likely be exacerbated and further lower the threshold
for use of force when even fewer human soldiers are needed to fight wars (Cole, 2017; Garcia, 2014).
Semi-autonomous systems have already begun to erode the fundamental norms of international law
against the use of force (Bode and Huelss, 2018; Edmeades, 2017; Garcia, 2016).
Democracy

LAWs undermine the public deliberation that usually constrains war-making in


democracies.
Amoroso 17 Daniele Amoroso, Professor of International Law at the University of Cagliari, and
Guglielmo Tamburrin, Philosophy of Science and Technology Professor, Universita' di Napoli Federico.
"The ethical and legal case against autonomy in weapons systems." Global Jurist, 2017.
https://www.researchgate.net/publication/319985172_The_Ethical_and_Legal_Case_Against_Autonom
y_in_Weapons_Systems. [Premier]

The present argument is based on a straightforward assumption. Even if one does not fully embrace the
(admittedly controversial) democratic peace theory, one must acknowledge that, in democratic
countries, public opinion and legislative assemblies play an important role in deterring governments
from deploying their armed forces in aggressive military campaigns. In this respect, a crucial factor lies in
the risk of casualties among national military personnel. Indeed, popular outrage generally stemming
from the return of “flag-draped coffins” represents a significant incentive for representatives sitting in
parliaments to exert a meaningful control over the use of war powers by the executive. As a collateral
(and not necessarily intended) effect, this democratic dynamics may prevent States from breaching the
prohibition on the use of force. A notable case in point occurred in 2013, when the US and UK
governments renounced to wage (an arguably unlawful) war against Assad in Syria apparently in view of
the disapproval expressed by domestic public opinion and parliamentary representatives.93

A policy allowing for the use of AWS would inevitably affect this (obliquely) virtuous circle. If human
troops are replaced, say, by robots, the potential cost of the conflict in terms of human losses
significantly decreases (when it does not equate to zero) and, with it, sensitivity to the issue in the
general public. Accordingly, legislative assemblies would be less motivated to control the governmental
exercise of war powers, thereby encouraging further executive unilateralism in this field.94 As a final
result, democratically unchecked military operations will be more and more likely to occur, leading to
more frequent breaches of the prohibition on the use of force. As the Austrian delegation openly put it
at the 2014 CCW Meeting of Experts: “[p]utting soldiers’ lives at stake makes States think twice whether
to engage in armed conflict. Autonomous weapons remove such restraint from the use of military
force”. 95

Significantly enough, a strikingly similar conclusion has been reached by Rebecca Crootof, a resolute
critic of an AWS ban. In a recent analysis devoted to the influence of AWS on the distribution of war
powers in the US legal system, Crootof describes how (and why) the development of this technology
would lead to a further concentration of the war power in the Executive’s hands.96 When turning to
consider the international legal implications of this process, she notes that, as a result of this
concentration of power, US Presidents “will be more willing to engage in humanitarian interventions”.
97 At this juncture, the author acknowledges that unilateral humanitarian interventions are still
prohibited under international law, but suggests that a more frequent resort to military force by the US
could lead to the consolidation of a new exception to the prohibition on the use of force under Article 2,
para. 4 of the UN Charter.98 Whether this result could ever be achieved, however, it is highly doubtful, if
one only considers that substantial portions of the world community have treated US-led unilateral
humanitarian interventions with the utmost suspicion. What is less controversial, for the time being, is
that more frequent resort to military force will ultimately mean more frequent violations of the
prohibition on the use of force. Accordingly, Crootof’s statement quoted above is aptly turned on its
head and rephrased as follows: because of the concentration of power stemming from the use of AWS,
US Presidents “will be more willing to engage in violations of the prohibition on the use of force under
Art. 2 para. 4 UN Charter”. This amended conclusion affords a quite strong motivation for banning AWS!

Polls affirm.
Deeney 19 Chris Deeney, Senior Vice President, U.S. Ipsos Observer. "Six in Ten (61%) Respondents
Across 26 Countries Oppose the Use of Lethal Autonomous Weapons Systems." Ipsos. 21 January 2019.
https://www.ipsos.com/en-us/news-polls/human-rights-watch-six-in-ten-oppose-autonomous-
weapons. [Premier]

According to a recent online survey conducted by Ipsos on behalf of Human Rights Watch for the
Campaign to Stop Killer Robots, sixty one percent of adults across 26 countries say that they oppose the
use of lethal autonomous weapons systems, also known as fully autonomous weapons. On the other
hand, 22 percent support such use and 17 percent say that they are not sure. In a similar study
conducted by Ipsos in January 2017, 56 percent were opposed, 24 percent not opposed, and 19 percent
unsure.

Support for fully autonomous weapons is strongest in India (50%) and Israel (41%). The strongest
opposition is in Turkey (78%), South Korea (74%), and Hungary (74%).

Among those who are opposed, 66% say that they feel this way because they believe lethal autonomous
weapons systems cross a moral line as machines should not be allowed to kill. More than half (54%) of
those who are opposed also feel this way because weapons are “unaccountable.”
International Law

Limits on perception and interpretation prevent robots from distinguishing


combatants from civilians. Tech doesn’t solve.
Docherty 12 Bonnie Docherty, senior researcher in the Arms Division of Human Rights Watch and
senior clinical instructor at the International Human Rights Clinic (IHRC) at Harvard Law School. "Losing
Humanity: The Case against Killer Robots." Human Rights Watch. November 2012.
https://www.hrw.org/sites/default/files/reports/arms1112_ForUpload.pdf. [Premier]

An initial evaluation of fully autonomous weapons shows that even with the proposed compliance
mechanisms, such robots would appear to be incapable of abiding by the key principles of international
humanitarian law. They would be unable to follow the rules of distinction, proportionality, and military
necessity and might contravene the Martens Clause. Even strong proponents of fully autonomous
weapons have acknowledged that finding ways to meet those rules of international humanitarian law
are “outstanding issues” and that the challenge of distinguishing a soldier from a civilian is one of
several “daunting problems.”122 Full autonomy would strip civilians of protections from the effects of
war that are guaranteed under the law.

Distinction

The rule of distinction, which requires armed forces to distinguish between combatants and
noncombatants, poses one of the greatest obstacles to fully autonomous weapons complying with
international humanitarian law. Fully autonomous weapons would not have the ability to sense or
interpret the difference between soldiers and civilians, especially in contemporary combat
environments.

Changes in the character of armed conflict over the past several decades, from state-tostate warfare to
asymmetric conflicts characterized by urban battles fought among civilian populations, have made
distinguishing between legitimate targets and noncombatants increasingly difficult. States likely to field
autonomous weapons first—the United States, Israel, and European countries—have been fighting
predominately counterinsurgency and unconventional wars in recent years. In these conflicts,
combatants often do not wear uniforms or insignia. Instead they seek to blend in with the civilian
population and are frequently identified by their conduct, or their “direct participation in hostilities.”
Although there is no consensus on the definition of direct participation in hostilities, it can be
summarized as engaging in or directly supporting military operations.123 Armed forces may attack
individuals directly participating in hostilities, but they must spare noncombatants.124

It would seem that a question with a binary answer, such as “is an individual a combatant?” would be
easy for a robot to answer, but in fact, fully autonomous weapons would not be able to make such a
determination when combatants are not identifiable by physical markings. First, this kind of robot might
not have adequate sensors. Krishnan writes, “Distinguishing between a harmless civilian and an armed
insurgent could be beyond anything machine perception could possibly do. In any case, it would be easy
for terrorists or insurgents to trick these robots by concealing weapons or by exploiting their sensual
and behavioral limitations.”125
An even more serious problem is that fully autonomous weapons would not possess human qualities
necessary to assess an individual’s intentions, an assessment that is key to distinguishing targets .
According to philosopher Marcello Guarini and computer scientist Paul Bello, “[i]n a context where we
cannot assume that everyone present is a combatant, then we have to figure out who is a combatant
and who is not. This frequently requires the attribution of intention.”126 One way to determine
intention is to understand an individual’s emotional state, something that can only be done if the soldier
has emotions. Guarini and Bello continue, “A system without emotion … could not predict the emotions
or action of others based on its own states because it has no emotional states.”127 Roboticist Noel
Sharkey echoes this argument: “Humans understand one another in a way that machines cannot. Cues
can be very subtle, and there are an infinite number of circumstances where lethal force is
inappropriate.”128 For example, a frightened mother may run after her two children and yell at them to
stop playing with toy guns near a soldier. A human soldier could identify with the mother’s fear and the
children’s game and thus recognize their intentions as harmless, while a fully autonomous weapon
might see only a person running toward it and two armed individuals.129 The former would hold fire,
and the latter might launch an attack. Technological fixes could not give fully autonomous weapons the
ability to relate to and understand humans that is needed to pick up on such cues.

Inherent limitations make it impossible to enact a proportionate attack.


Docherty 12 Bonnie Docherty, senior researcher in the Arms Division of Human Rights Watch and
senior clinical instructor at the International Human Rights Clinic (IHRC) at Harvard Law School. "Losing
Humanity: The Case against Killer Robots." Human Rights Watch. November 2012.
https://www.hrw.org/sites/default/files/reports/arms1112_ForUpload.pdf. [Premier]

The requirement that an attack be proportionate, one of the most complex rules of international
humanitarian law, requires human judgment that a fully autonomous weapon would not have. The
proportionality test prohibits attacks if the expected civilian harm of an attack outweighs its anticipated
military advantage.130 Michael Schmitt, professor at the US Naval War College, writes, “While the rule
is easily stated, there is no question that proportionality is among the most difficult of LOIAC [law of
international armed conflict] norms to apply.”131 Peter Asaro, who has written extensively on military
robotics, describes it as “abstract, not easily quantified, and highly relative to specific contexts and
subjective estimates of value.”132

Determining the proportionality of a military operation depends heavily on context. The legally
compliant response in one situation could change considerably by slightly altering the facts. According
to the US Air Force, “[p]roportionality in attack is an inherently subjective determination that will be
resolved on a case-by-case basis.”133 It is highly unlikely that a robot could be pre-programmed to
handle the infinite number of scenarios it might face so it would have to interpret a situation in real
time. Sharkey contends that “the number of such circumstances that could occur simultaneously in
military encounters is vast and could cause chaotic robot behavior with deadly consequences.”134
Others argue that the “frame problem,” or the autonomous robot’s incomplete understanding of its
external environment resulting from software limitations, would inevitably lead to “ faulty behavior.”135
According to such experts, the robot’s problems with analyzing so many situations would interfere with
its ability to comply with the proportionality test.
Those who interpret international humanitarian law in complicated and shifting scenarios consistently
invoke human judgment, rather than the automatic decision making characteristic of a computer. The
authoritative ICRC commentary states that the proportionality test is subjective, allows for a “fairly
broad margin of judgment,” and “must above all be a question of common sense and good faith for
military commanders.”136 International courts, armed forces, and others have adopted a “reasonable
military commander” standard.137 The International Criminal Tribunal for the Former Yugoslavia, for
example, wrote, “In determining whether an attack was proportionate it is necessary to examine
whether a reasonably well-informed person in the circumstances of the actual perpetrator, making
reasonable use of the information available to him or her, could have expected excessive civilian
casualties to result from the attack.”138 The test requires more than a balancing of quantitative data,
and a robot could not be programmed to duplicate the psychological processes in human judgment that
are necessary to assess proportionality.

A scenario in which a fully autonomous aircraft identifies an emerging leadership target exemplifies the
challenges such robots would face in applying the proportionality test. The aircraft might correctly locate
an enemy leader in a populated area, but then it would have to assess whether it was lawful to fire. This
assessment could pose two problems. First, if the target were in a city, the situation would be constantly
changing and thus potentially overwhelming; civilian cars would drive to and fro and a school bus might
even enter the scene. As discussed above, experts have questioned whether a fully autonomous aircraft
could be designed to take into account every movement and adapt to an ever-evolving proportionality
calculus. Second, the aircraft would also need to weigh the anticipated advantages of attacking the
leader against the number of civilians expected to be killed. Each leader might carry a different weight
and that weight could change depending on the moment in the conflict. Furthermore, humans are
better suited to make such value judgments, which cannot be boiled down to a simple algorithm.139

Proponents might argue that fully autonomous weapons with strong AI would have the capacity to apply
reason to questions of proportionality. Such claims assume the technology is possible, but that is in
dispute as discussed above. There is also the threat that the development of robotic technology would
almost certainly outpace that of artificial intelligence. As a result, there is a strong likelihood that
advanced militaries would introduce fully autonomous weapons to the battlefield before the robotics
industry knew whether it could produce strong AI capabilities. Finally, even if a robot could reach the
required level of reason, it would fail to have other characteristics—such as the ability to understand
humans and the ability to show mercy—that are necessary to make wise legal and ethical choices
beyond the proportionality test.

LAWs fail the test of military necessity.


Docherty 12 Bonnie Docherty, senior researcher in the Arms Division of Human Rights Watch and
senior clinical instructor at the International Human Rights Clinic (IHRC) at Harvard Law School. "Losing
Humanity: The Case against Killer Robots." Human Rights Watch. November 2012.
https://www.hrw.org/sites/default/files/reports/arms1112_ForUpload.pdf. [Premier]

Like proportionality, military necessity requires a subjective analysis of a situation. It allows “military
forces in planning military actions … to take into account the practical requirements of a military
situation at any given moment and the imperatives of winning,” but those factors are limited by the
requirement of “humanity.”140 One scholar described military necessity as “a context-dependent,
value-based judgment of a commander (within certain reasonableness restraints).”141 Identifying
whether an enemy soldier has become hors de combat, for example, demands human judgment.142 A
fully autonomous robot sentry would ind it difficult to determine whether an intruder it shot once was
merely knocked to the ground by the blast, faking an injury, slightly wounded but able to be detained
with quick action, or wounded seriously enough to no longer pose a threat. It might therefore
unnecessarily shoot the individual a second time. Fully autonomous weapons are unlikely to be any
better at establishing military necessity than they are proportionality.

Military necessity is also relevant to this discussion because proponents could argue that, if fully
autonomous weapons were developed, their use itself could become a military necessity in certain
circumstances. Krishnan warns that the development of “[t]echnology can largely affect the calculation
of military necessity.”143 He writes: “Once [autonomous weapons] are widely introduced, it becomes a
matter of military necessity to use them, as they could prove far superior to any other type of
weapon.”144 He argues such a situation could lead to armed conflict dominated by machines, which he
believes could have “disastrous consequences.” Therefore, “it might be necessary to restrict, or maybe
even prohibit [autonomous weapons] from the beginning in order to prevent a dynamics that will lead
to the complete automation of war that is justified by the principle of necess ity.”145 The consequences
of applying the principle of military necessity to the use of fully autonomous weapons could be so dire
that a preemptive restriction on their use is justified.

LAWs violate the Marten’s Clause.


Docherty 12 Bonnie Docherty, senior researcher in the Arms Division of Human Rights Watch and
senior clinical instructor at the International Human Rights Clinic (IHRC) at Harvard Law School. "Losing
Humanity: The Case against Killer Robots." Human Rights Watch. November 2012.
https://www.hrw.org/sites/default/files/reports/arms1112_ForUpload.pdf. [Premier]

Fully autonomous weapons also raise serious concerns under the Martens Clause. The clause, which
encompasses rules beyond those found in treaties, requires that means of warfare be evaluated
according to the “principles of humanity” and the “dictates of public conscience. ”146 Both experts and
laypeople have an expressed a range of strong opinions about whether or not fully autonomous
machines should be given the power to deliver lethal force without human supervision. While there is
no consensus, there is certainly a large number for whom the idea is shocking and unacceptable. States
should take their perspective into account when determining the dictates of public conscience.

Ronald Arkin, who supports the development of fully autonomous weapons, helped conduct a survey
that offers a glimpse into people’s thoughts about the technology. The survey sought opinions from the
public, researchers, policymakers, and military personnel, and given the sample size it should be viewed
more as descriptive than quantitative, as Arkin noted.147 The results indicated that people believed that
the less an autonomous weapon was controlled by humans, the less acceptable it was.148 In particular,
the survey determined that “[t]aking life by an autonomous robot in both open warfare and covert
operations is unacceptable to more than half of the participants.”149 Arkin concluded, “People are
clearly concerned about the potential use of lethal autonomous robots. Despite the perceived ability to
save soldiers’ lives, there is clear concern for collateral damage, in particular civilian loss of life.”150
Even if such anecdotal evidence does not create binding law, any review of fully autonomous weapons
should recognize that for many people these weapons are unacceptable under the principles laid out in
the Martens Clause.

ALWs are questionable under international law since they fail to meet proportionality
requirements and can’t be held accountable.
Asaro 12 Peter Asaro is Affiliate Scholar at Stanford Law School’s Center for Internet and Society,
CoFounder and Vice-Chair of the International Committee for Robot Arms Control, and the Director of
Graduate Programs for the School of Media Studies at The New School for Public Engagement in New
York City. "On banning autonomous weapon systems: human rights, automation, and the
dehumanization of lethal decision-making." International Review of the Red Cross, 94(886). Summer
2012. https://www.cambridge.org/core/services/aop-cambridge-
core/content/view/992565190BF2912AFC5AC0657AFECF07/S1816383112000768a.pdf/on-banning-
autonomous-weapon-systems-human-rights-automation-and-the-dehumanization-of-lethal-decision-
making.pdf. [Premier]

In terms of the legal acceptability of these systems under existing IHL,11 the primary question appears
to be whether autonomous systems will be able to satisfy the principles of distinction and
proportionality.12 Given the complexity of these systems, and our inability to foresee how they might
act in complex operational environments, unanticipated circumstances, and ambiguous situations, there
is a further difficulty – how we can test and verify that a newly designed autonomous weapon system
meets the requirements imposed by IHL, as required by Article 36 of Additional Protocol I,13 and more
generally how to govern the increasingly rapid technological innovation of new weapons and tactics.14

There is a separate concern that such systems may not have an identifiable operator in the sense that no
human individual could be held responsible for the actions of the autonomous weapon system in a given
situation, or that the behaviour of the system could be so unpredictable that it would be unfair to hold
the operator responsible for what the system does.15 Such systems might thus eliminate the possibility
of establishing any individual criminal responsibility that requires moral agency and a determination of
mens rea. 16 In the event of an atrocity or tragedy caused by an autonomous weapon system under the
supervision or command of a human operator they may also undermine command responsibility and
the duty to supervise subordinates, thus shielding their human commanders from what might have
otherwise been considered a war crime. It is thus increasingly important to hold states accountable for
the design and use of such systems, and to regulate them at an international level

Technical constraints on LAWs prevent adherence to international law.


Asaro 12 Peter Asaro is Affiliate Scholar at Stanford Law School’s Center for Internet and Society,
CoFounder and Vice-Chair of the International Committee for Robot Arms Control, and the Director of
Graduate Programs for the School of Media Studies at The New School for Public Engagement in New
York City. "On banning autonomous weapon systems: human rights, automation, and the
dehumanization of lethal decision-making." International Review of the Red Cross, 94(886). Summer
2012. https://www.cambridge.org/core/services/aop-cambridge-
core/content/view/992565190BF2912AFC5AC0657AFECF07/S1816383112000768a.pdf/on-banning-
autonomous-weapon-systems-human-rights-automation-and-the-dehumanization-of-lethal-decision-
making.pdf. [Premier]

While there are various examples of military weapons and practices that, arguably, do not include direct
human involvement in lethal decision-making, the new wave of technological capability has raised
serious concerns and trepidation amongst both the international law community and military
professionals as to the moral and legal legitimacy of such systems. As Dr. Jakob Kellenberger, past
president of the International Committee of the Red Cross (ICRC), expressed at a conference in San
Remo, Italy, in September 2011:

A truly autonomous system would have artificial intelligence that would have to be capable of
implementing IHL. While there is considerable interest and funding for research in this area,
such systems have not yet been weaponised. Their development represents a monumental
programming challenge that may well prove impossible. The deployment of such systems would
reflect a paradigm shift and a major qualitative change in the conduct of hostilities. It would also
raise a range of fundamental legal, ethical and societal issues which need to be considered
before such systems are developed or deployed. A robot could be programmed to behave more
ethically and far more cautiously on the battlefield than a human being. But what if it is
technically impossible to reliably program an autonomous weapon system so as to ensure that
it functions in accordance with IHL under battlefield conditions? [...] [A]pplying pre-existing legal
rules to a new technology raises the question of whether the rules are sufficiently clear in light
of the technology’s specific – and perhaps unprecedented – characteristics, as well as with
regard to the foreseeable humanitarian impact it may have. In certain circumstances, states will
choose or have chosen to adopt more specific regulations.7

As Kellenberger makes clear, there are serious concerns as to whether autonomous technologies will be
technically capable of conforming to existing IHL. While many military professionals recognize the
technological movement towards greater autonomy in lethal weapons systems, most express strong
ethical concerns, including policymakers at the US Office of the Secretary of Defense:

Restraints on autonomous weapons to ensure ethical engagements are essential, but building
autonomous weapons that fail safely is the harder task. The wartime environment in which
military systems operate is messy and complicated, and autonomous systems must be capable
of operating appropriately in it. Enemy adaptation, degraded communications, environmental
hazards, civilians in the battlespace, cyber attacks, malfunctions, and ‘friction’ in war all
introduce the possibility that autonomous systems will face unanticipated situations and may
act in an unintended fashion. Because they lack a broad contextual intelligence, or common
sense, on par with humans, even relatively sophisticated algorithms are subject to failure if they
face situations outside their intended design parameters. The complexity of modern computers
complicates this problem by making it difficult to anticipate all possible glitches or emergent
behavior that may occur in a system when it is put into operation.8

Because even ‘artificially intelligent’ autonomous systems must be pre-programmed, and will have only
highly limited capabilities for learning and adaptation at best, it will be difficult or impossible to design
systems capable of dealing with the fog and friction of war. When we consider the implications of this
for protecting civilians in armed conflict, this raises several ethical and legal questions, particularly in
relation to conforming to the IHL requirements of the principles of distinction, proportionality, and
military necessity, and the difficulty of establishing responsibility and accountability for the use of lethal
force.

Robots can’t be programmed to follow i-law.


Asaro 12 Peter Asaro is Affiliate Scholar at Stanford Law School’s Center for Internet and Society,
CoFounder and Vice-Chair of the International Committee for Robot Arms Control, and the Director of
Graduate Programs for the School of Media Studies at The New School for Public Engagement in New
York City. "On banning autonomous weapon systems: human rights, automation, and the
dehumanization of lethal decision-making." International Review of the Red Cross, 94(886). Summer
2012. https://www.cambridge.org/core/services/aop-cambridge-
core/content/view/992565190BF2912AFC5AC0657AFECF07/S1816383112000768a.pdf/on-banning-
autonomous-weapon-systems-human-rights-automation-and-the-dehumanization-of-lethal-decision-
making.pdf. [Premier]

What does it really mean to say that we can program the rules of IHL into a computer? Is it simply a
matter of turning laws written to govern human actions into programmed codes to constrain the actions
of machine? Should the next additional protocol to the Geneva Conventions be written directly into
computer code? Or is there something more to IHL that cannot be programmed? It is tempting to take
an engineering approach to the issue and view the decisions and actions of a combatant as a ‘black box’,
and compare the human soldier to the robotic soldier and claim that the one that makes fewer mistakes
according to IHL is the ‘more ethical’ soldier. This has been a common argument strategy in the history
of artificial intelligence as well.

There are really two questions here, however. The empirical question is whether a computer, machine,
or automated process could make each of these decisions of life and death and achieve some
performance that is deemed acceptable. But the moral question is whether a computer, machine or
automated process ought to make these decisions of life and death at all. Unless we can prove in
principle that a machine should not make such decisions, we are left to wonder if or when some clever
programmers might be able to devise a computer system that can do these things, or at least when we
will allow machines to make such decisions

The history of artificial intelligence is instructive here, insofar as it tells us that such problems are, in
general, computationally intractable, but if we can very carefully restrict and simplify the problem, we
might have better success. We might also, however, compare the sort of problems artificial intelligence
has been successful at, such as chess, with the sort of problems encountered in applying IHL
requirements. While IHL requirements are in some sense ‘rules’, they are quite unlike the rules of chess
in that they require a great deal of interpretative judgement in order to be applied appropriately in any
given situation. Moreover, the context in which the rules are being applied, and the nature and quality
of the available information, and alternative competing or conflicting interpretations, might vary widely
from day to day, even in the same conflict, or even in the same day.

We might wish to argue that intelligence is uniquely human, but if one can define it specifically enough,
or reduce it to a concrete task, then it may be possible to program a computer to do that task better.
When we do that, we are necessarily changing the definition of intelligence by redefining a complex skill
into the performance of a specific task. Perhaps it is not so important whether we redefine intelligence
in light of developments in computing, though it certainly has social and cultural consequences. But
when it comes to morality, and the taking of human lives, do we really want to redefine what it means
to be moral in order to accommodate autonomous weapon systems? What is at stake if we allow
automated systems the authority to decide whether to kill someone? In the absence of human
judgement, how can we ensure that such killing is not arbitrary?

Automating the rules of IHL would likely undermine the role they play in regulating ethical human
conduct. It would also explain why designers have sought to keep humans-in-the-loop for the purposes
of disambiguation and moral evaluation. As Sir Brian Burridge, commander of the British Royal Air Force
in Iraq from 2003 to 2005, puts it:

Under the law of armed conflict, there remains the requirement to assess proportionality and
within this, there is an expectation that the human at the end of the delivery chain makes the
last assessment by evaluating the situation using rational judgement. Post-modern conflicts
confront us ... with ambiguous non-linear battlespaces. And thus, we cannot take the human,
the commander, the analyst, those who wrestle with ambiguity, out of the loop. The debate
about the human-in-the-loop goes wider than that.24

The very nature of IHL, which was designed to govern the conduct of humans and human organizations
in armed conflict, presupposes that combatants will be human agents. It is in this sense anthropocentric.
Despite the best efforts of its authors to be clear and precise, applying IHL requires multiple levels of
interpretation in order to be effective in a given situation. IHL supplements its rules with heuristic
guidelines for human agents to follow, explicitly requires combatants to reflexively consider the
implications of their actions, and to apply compassion and judgement in an explicit appeal to their
humanity. In doing this, the law does not impose a specific calculation, but rather, it imposes a duty on
combatants to make a deliberate consideration as to the potential cost in human lives and property of
their available courses of action.

Even if consequences matter under international law, the broader risks of LAWs
outweigh the narrow benefits.
Amoroso 17 Daniele Amoroso, Professor of International Law at the University of Cagliari, and
Guglielmo Tamburrin, Philosophy of Science and Technology Professor, Universita' di Napoli Federico.
"The ethical and legal case against autonomy in weapons systems." Global Jurist, 2017.
https://www.researchgate.net/publication/319985172_The_Ethical_and_Legal_Case_Against_Autonom
y_in_Weapons_Systems. [Premier]

Those who defend the rule that AWS should be permitted on consequentialist grounds have usually
assumed a narrow perspective on expected consequences. Roughly speaking, from this perspective
AWS development and deployment should be permitted insofar as these new conventional arms are
expected to bring about reduced casualties in one’s own and the opponents’ camp, as well as among
non-belligerents who happen to be present on the battlefield. This expectation is grounded in the belief
that AWS will be capable of performing more accurate targeting than human soldiers, and will be
programmed to adopt more conservative decisions to fire insofar as these machines can be made free
from human self-preservation concerns.80 The force of these narrow consequentialist arguments for
the future deployment of AWS depends on another crucial assumption. This is the ceteris paribus
assumption that the deployment of AWS will not have a significant impact outside battlefield scenarios.
However, the weakness of this ceteris paribus assumption has been convincingly and repeatedly
brought out.81 Indeed, one may reasonably expect that the spreading of AWS will bring about
comprehensive and long-term consequences for international security along with local and short-term
military advantages on the battlefield. A document produced by ICRAC (International Committee for
Robot Arms Control)82 summarizes various threats to international security raised by AWS
concentrating in particular on the proliferation of these weapons with oppressive regimes83 and
terrorists,84 their mass proliferation among state actors giving rise to a new arms race,85 less
disincentives to start wars,86 on account of the reduced numbers of soldiers that will be involved, and
correspondingly lowered thresholds for armed conflicts, unpredictability of interaction with friendly or
enemy AWS, their cyber vulnerability possibly leading to unintended conflicts, acceleration in the pace
of battle, in addition to continuous global battlefields brought about by AWS left behind to patrol post-
conflict zones over long time periods.

In addition to these concerns, one should carefully note that future AWS, more than many other
conventional arms, have the potential to deliver destructive attacks on nuclear objectives. Large
swarms of AWS flying at supersonic and hypersonic speeds might be capable of delivering a powerful
first strike against the opponent’s nuclear arsenals, to the extent that they may thwart the opponent’s
capability of responding with nuclear retaliation. In this scenario, nuclear deterrence based on mutually
assured destruction would no longer count as a motivation to withhold aggression and first strike
strategies would be prized instead.87

What is then a wide consequentialist appraisal of the overall expected benefits and costs flowing from
AWS deployment? Arguably, by permitting the AWS deployment, one might expect the good
consequence of reduced casualties among belligerents and non-belligerents in some local battlefield
scenarios. By taking this course of action, however, one would significantly raise at the same time the
danger of starting a new arms race leading to regional and global destabilization risks, up to and
including the weakening of traditional nuclear deterrence factors based on mutually assured
destruction. As the latter negative consequences outweigh the sum of the expected benefits flowing
from AWS deployment, the collective rule of behavior that is expected to produce the preferable set of
consequences in a global geopolitical context is that of prohibiting – rather than permitting – the
production and deployment of AWS.

4.2 Consequentialist pro-ban arguments in a legal perspective

In the framework of international law debates, consequentialist approaches have been equally pursued
by those backing or else opposing a ban on AWS. Significantly enough, the two sides of the debate are
positioned coherently with the distinction, set out above, between narrow and wide consequentialist
arguments. In fact, the consequentialist approach is commonly wielded by the antiban front, which has
argued that AWS’ deployment will ultimately result in ‘higher-than-human’ performances with respect
to adherence to IHL, because robots can become more accurate than human soldiers in targeting
military objectives and, unlike human soldiers, are utterly unconstrained by the need for self-
preservation and immune from human passions (such as anger, fear and vengefulness).

As noted above, however, this narrow appraisal only captures a fraction of the overall picture, since it is
confined to the battlefield-related effects and screens off (by the implicitly assumed ceteris paribus
clause) more pervasive effects that are likely to flow from AWS deployment. Indeed, supporters of a ban
reach opposite conclusions on the basis of a broader consideration of the consequences that one may
expect from an increased use of AWS. This enlarged perspective brings into play a distinct legal regime,
since one moves from the law regulating the conduct of hostilities (IHL, or jus in bello) to the law
pertaining to the maintenance of international peace and security (Art. 1, para. 1 UN Charter). The latter
includes, but is not limited to, the rules governing the use of force, or jus ad bellum (viz. the prohibition
on the use of force under Art. 2 para. 4 UN Charter; the right to self-defence under Art. 51 UN Charter;
and the collective security system governed by Chapter VII of the UN Charter).89

A preliminary observation on the shifting legal framework is in order here. While the potential impact of
AWS on international peace and security is often described as a matter of concern for international law,
this proposition has been rarely discussed in depth, so that is not entirely clear what is the actual legal
issue at stake. This led one author to radically rule out the relevance of jus ad bellum in this field, in the
light of the fact that the determination as to ‘[w]hether a breach of a rule of ius ad bellum has occurred
[…] is independent from the type of weapon that has been used’. 90

The latter view is not without foundation. Whether a certain use of force is contrary to the jus ad bellum
ultimately depends on the circumstances in which force is unleashed (Who? Against whom? Why?) and
not on the sorts of weapons that are employed. If, for instance, State A deploys a swarm of AWS against
State B, the legality of its conduct will be gauged, under jus ad bellum, on the basis of the following
elements: whether State A acted in self-defense or whether the use of force was authorized by the UN
Security Council. Conversely, it will be completely immaterial whether the attack was carried out
through AWS or other alternative means.

Yet, and again, this is not the whole story. The law governing the maintenance of international peace
and security cannot be reduced to a static, binary decision rule. This legal regime is not only about
determining whether a specific armed activity is lawful or not under the prohibition on the use of force.
Rather, it is about ensuring – in the words of the 1984 Declaration on the Right of Peoples to Peace –
that ‘the policies of States be directed towards the elimination of the threat of war’. 91 This claim entails
that a more comprehensive (and dynamic) appraisal must be carried out, which may well include an
evaluation of policies allowing the use of AWS, especially in connection with the question whether these
policies are conducive to more peace and security in international relations or, on the contrary,
represent a factor of instability at global and regional levels. Should the latter be the case, any such
policy would not only be undesirable as a matter of normative ethics, but also as a matter of
international law, as it would run contrary to the maintenance of international peace and security,
namely, according to a commonly shared view, it would run counter to the ‘purpose of all purposes’ of
the UN Charter.92

In the previous sub-section, a variety of nefarious wide-scale consequences were listed, which are likely
to ensue from permissive policy towards AWS. Each one of these consequences, taken individually, is
arguably sufficient to support the contention that an AWS permissive policy should be outlawed as
detrimental to the achievement of the UN goal of a world order of peace and security. Here, we limit
ourselves to underlining that a policy allowing the use of AWS would end up encouraging a more liberal
approach to the use of force by States. In their turn, such liberal approaches may bring about a higher
likelihood of violations of the prohibition on the use of force under Article 2 para. 4 of the UN Charter.
Deontological constraints take lexical precedence.
Amoroso 17 Daniele Amoroso, Professor of International Law at the University of Cagliari, and
Guglielmo Tamburrin, Philosophy of Science and Technology Professor, Universita' di Napoli Federico.
"The ethical and legal case against autonomy in weapons systems." Global Jurist, 2017.
https://www.researchgate.net/publication/319985172_The_Ethical_and_Legal_Case_Against_Autonom
y_in_Weapons_Systems. [Premier]

To begin with, let us notice that deontological arguments in normative ethics concern some agent-
relative obligations and patient-relative rights in warfare scenarios involving the use of lethal force by
means of AWS. The conclusions of deontological arguments against AWS notably flow from inviolable
foundational values (human dignity) and requirements of moral responsibility and accountability that
are deeply entrenched into IHL, IHRL and ICL. Under these bodies of law, rules concerning the use of
lethal force are categorical and cannot be derogated from in the pursuit of a greater good (as a
consequentialist analysis would suggest), except for special excusing conditions that are explicitly
envisaged by the rules themselves.

In fact, while international law is certainly permeable to the influence of consequentialist thinking, some
of its norms posit inviolable precepts, whose respect is not amenable to trade-offs against utilitarian
considerations. Reference is made to jus cogens norms, the (Kantian102) deontological noyau dur,
sitting at the top of the hierarchy of the sources of international legal order. To the extent that the basic
tenets of IHL, IHRL and ICL rank as jus cogens (indeed, a relatively unproblematic assumption)103 ,
therefore, deontological arguments based upon them take precedence over consequentialist ones as a
matter of positive international law, 104 and thus independently from any philosophical stand one may
wish to take on the “deontology vs. consequentialism” ethical tensions.

Accordingly, the joint ethical and legal reinforcement of categorical obligations motivates the following
Prioritization of Deontology rule (PD-rule): the conclusions of deontological arguments against AWS
cannot be overridden in any circumstance in which they are applicable. However, it was noticed that
there are circumstances in which deontological arguments do not provide such guidance, i.e. when the
obligations of certain sorts of agents or the rights of certain sorts of potential patients are not at stake.
In these circumstances, deontological approaches – both ethical and legal ones – are inapplicable. How
is this deontologically unregimented space of action possibilities to be dealt with? Here, the agent-
neutral (and patient-neutral) consequentialist framework can provide the required guidance, insofar as
wide consequentialist reasons agree with and are enshrined into the principles (set forth by the UN
Charter and related legal instruments) concerning peace and security. In the deontologically
unregimented space of action, wide consequentialist reasons are reinforced by a variety of legal
instruments concerning peace and security. Thus, deontological and consequentialist frameworks can be
amalgamated by assigning them to different domains, thereby avoiding intertheoretical conflicts in
normative ethics and legal conflicts between different bodies of law. In other words, one is led to added
to the PD-rule the following default rule (DEF-rule): apply consequentialist arguments against AWS
whenever deontological arguments are inapplicable.

In virtue of its merging of deontological and consequentialist frameworks, the confluence model based
on the prioritization of deontological reasons against AWS (PD model from now on) bolsters the case for
an extensive AWS prohibition over and above lethal AWS, as it includes consequenceoriented reasons
to curb some potential uses of AWS that have no direct lethal effects on human beings. A recently
discussed scenario of this kind concerns autonomous Unmanned Underwater Vehicles (UUV), which
may be used for trailing ballistic missile submarines. In a British Pugwash report by Sebastian Brixey-
Williams it is claimed that “within a decade …adaptable long-endurance or rapidly-deployable
unmanned underwater vehicles (UUV) and unmanned surface vehicles (USV), look likely to undermine
the stealth of existing submarines”. 105 Autonomy appears to be a crucial feature for the task that these
UUV are supposed to perform, insofar as remote control signals do not travel well in salted underwater
environments. And clearly, there is no direct lethal or sublethal effect on human beings produced by
autonomous UUV which compromise the stealth of ballistic missile submarines. However, implementing
this scenario may have a significant impact on peace and stability. Indeed, the difficulty of detecting
submarines fitted with acoustic quieting systems makes them survivable candidates in the case of a first
nuclear strike. By the same token, these vessels are assigned a key role in the MAD nuclear
strategy. Therefore, undermining their stealth by means of autonomous UUV may undercut a major
condition for the mutual acceptance of the MAD strategy. In these circumstances, the prioritized
deontological rule does not apply, but a request for prohibiting these uses of autonomous UUV can be
supported by appealing to the default consequentialist rule of the PD model.
Human Dignity

LAWs intrinsically diminish human dignity.


Pedron and da Cruz 20 summarizes Stephanie Mae Pedron and Jose de Arimateia da Cruz. "The
Future of Wars: Artificial Intelligence (AI) and Lethal Autonomous W onomous Weapon Systems (L
eapon Systems (LAWS)." International Journal of Security Studies, 2(1). 2020.
https://digitalcommons.northgeorgia.edu/cgi/viewcontent.cgi?article=1020&context=ijoss. [Premier]

The possibility of LAWS rendering the final decision to apply lethal force has sparked worldwide
discussion regarding fundamental issues related to the value of human life, accountability for
operations, and human dignity. Advocates of a preemptive LAWS ban posit that autonomous weapon
systems contravene every human beings inherent right to life under Article 6 § 1 of the International
Covenant on Civil and Political Rights (ICCPR), which states that, “Every human being has the inherent
right to life. This right shall be protected by law. No one shall be arbitrarily deprived of his life” (OHCHR,
n.d.). The ICCPR was adopted by the U.N. General Assembly in 1966 to ensure that its parties
acknowledge and respect the rights of individuals; it is crucial for global human rights laws. Similarly, the
Geneva Conventions and its Additional Protocols lie at the center of international humanitarian practices
by limiting the cruelty of warfare. Organizations like Human Rights Watch (2014) have argued that
abiding by the prerequisites for lawful force outlined within the Geneva Conventions articles, LAWS
would require immense data regarding an infinite number of situations. The sheer amount of possible
scenarios means that machines will likely not be able to adequately respond to every circumstance they
might face

International security researchers Anja Dahlmann and Marcel Dickow from the German Institute for
International and Security Affairs argue that, “machines do not understand what it means to kill a
human being” (2019). One of LAWS’ greatest advantages—that is, their lack of emotion—is also a
central flaw. A robot’s inability to reflect over its actions or to comprehend the value of an individual’s
life and the significance of loss essentially turns the humans targeted into little more than data points or
objects (Dahlmann & Dickow, 2019; Docherty, 2014; Purves, Jenkins, & Strawser, 2015). The value of
human life is, in essence, diminished. Naturally, this breaches the dignity of the person, which some
describe as the underlying principle of the international law of human rights (Docherty, 2014; Heyns,
2013). In the same vein, proponents of a LAWS ban argue that by their nature, machines are not moral
actors (Lucas, 2016). Therefore, they cannot be held responsible for their actions. Should a machine
perform an illegal action in combat—or, essentially, a war crime—it would be impossible to effectively
punish or deter the weapon. Unless LAWS can be developed to possess certain human qualities, then no
amount of technological improvements can remedy these issues.
Blocks
AT: Cost-Effective

Low cost enables proliferation to terrorists.


Kelly 18 Frank Nicholas Kelly. "Analyzing the Potential for Universal Disarmament of Autonomous
Weapons Systems Or How I Learned to Stop Worrying and Love the Killer Robot." Brooklyn Journal of
International Law, 44(1). 2018. https://brooklynworks.brooklaw.edu/bjil/vol44/iss1/9/. [Premier]

The low cost and minimized risk LAWS could provide, however, is considered by critics a negative
attribute of these devices. 128 The proliferation of LAWS is almost inevitable among states because they
are expected to become cheap to manufacture.129 As LAWS become cheaper and more prominent in
warfare, there is a greater possibility that these weapons could enter the hands of terrorist groups or
other actors who do not bind themselves to humanitarian laws.1 30
AT: Casualties

Lowering the cost of war increases net global casualties.


Acheson 18 Ray Acheson is the director of Reaching Critical Will, the disarmament program of the
Women’s International League for Peace and Freedom. "To Preserve Our Humanity, We Must Ban Killer
Robots." Nation. 1 October 2018. https://www.thenation.com/article/archive/to-preserve-our-
humanity-we-must-ban-killer-robots/. [Premier]

Then there is the argument that autonomous weapons will save lives. As we have seen with armed
drones, remote-controlled weapons have made war less “costly” to the user of the weapon. Operators
safely ensconced in their electronic fighting stations thousands of miles away don’t face immediate
retaliation for their acts of violence. While this is obviously attractive to advanced militaries, which don’t
have to risk the lives of their soldiers, it arguably raises the cost of war for everyone else. It lowers the
threshold for the use of force, especially in situations where the opposing side does not have equivalent
systems to deploy in response. In the near future, autonomous weapon systems are not likely to result
in an epic battle of robots, where machines fight machines. Instead, they would likely be unleashed
upon populations that might not be able to detect their imminent attack and might have no equivalent
means with which to fight back. Thus the features that might make autonomous weapons attractive to
technologically advanced countries looking to preserve the lives of their soldiers will inevitably push the
burden of risk and harm onto the rest of the world.

These features also fundamentally change the nature of war. The increasing automation of weapon
systems helps to take war and conflict outside of the view of the deploying countries’ citizenry. If its own
soldiers aren’t coming home in body bags, does will the public pay attention to what its government
abroad? Does it care about the soldiers or the civilians being killed elsewhere? From what we have seen
with the use of drones, it seems that it is easier for governments to sell narratives about terrorism and
victory if their populations can’t see or feel the consequences themselves.

LAWs just shift the burden to civilians.


Kelly 18 Frank Nicholas Kelly. "Analyzing the Potential for Universal Disarmament of Autonomous
Weapons Systems Or How I Learned to Stop Worrying and Love the Killer Robot." Brooklyn Journal of
International Law, 44(1). 2018. https://brooklynworks.brooklaw.edu/bjil/vol44/iss1/9/. [Premier]

Critics of this argument suggest that while this is beneficial to the LAWS-using state, it shifts the burden
onto the civilians in the location where the LAWS are being used.1 09 In traditional combat, combatants
engage in military activities at their own risk, but in a situation where LAWS are used in the place of
human soldiers, the local civilians in the vicinity of military activity would be placed at greater risk than
the soldiers who deploy the LAWS." 0

Machines are not inherently “better” than humans and they would lack uniquely
human features.
Zawieska 17 Karolina Zawieska, Industrial Research Institute for Automation and Measurements. "An
ethical perspective on autonomous weapon systems." UNODA Occasional Papers, No. 30. November
2017.
https://www.researchgate.net/publication/323359493_An_ethical_perspective_on_autonomous_weap
on_systems_Perspectives_on_Lethal_Autonomous_Weapon_Systems. [Premier]

This is not the only way forward. By approaching the ethical debate over autonomous weapon systems
from the perspective of humans rather than machines, it is possible to acknowledge human
distinctiveness and complexity rather than dismissing or oversimplifying it. From this perspective,
machine autonomy and machine intelligence are problematic terms, as they refer to functions that are
radically different from the human processes that we describe with similar language . Therefore, when
reflecting on the use and characteristics of autonomous weapon systems, we should use related
terminology carefully, shaping it in a way that emphasizes rather than obscures the difference between
what is human and what is only human-like.

On demands for improvement

Proponents of autonomous weapon systems often claim that such systems may equal and eventually
outperform human beings, not just at narrowly defined tasks, but also at complex processes like the
application of ethics. The implication is that human capacities and conduct would benefit from
augmentation with autonomous technologies, or through substitution with fully autonomous weapon
systems. While such an approach may be well intentioned, with aims such as minimizing harm on the
battlefield, it places in question both the core morality of humans and the relative shortcomings in their
performance (a word commonly employed to blur the distinction between human and machine
standards of achievement).17 The perception that autonomous systems can achieve perfection or at
least surpass humans at many tasks may contribute to a mistaken belief that, in areas such as ethical
behaviour, real improvement is only achievable through the use of autonomous systems.

While it is indisputable that humans do not act ethically in every circumstance, some proponents of
autonomous weapon systems have advanced their argument with the fallacious suggestion that humans
behave unethically or inadequately as a general rule. This stance does not reflect actual human moral or
ethical experience, and it would be a mistake to expect machines to be “a better version of human
beings”.18 Rather, we should acknowledge the inherently human nature of ethical values and take
responsibility for ethical conduct. The fact that such values and human rights often remain in the realm
of aspirations rather than actual conduct should not stop us from pursuing ethical principles, and hence,
a fuller expression of our own humanity
AT: Inevitable

LAWs are not inevitable and the parameters of engagement have not yet been set.
Zawieska 17 Karolina Zawieska, Industrial Research Institute for Automation and Measurements. "An
ethical perspective on autonomous weapon systems." UNODA Occasional Papers, No. 30. November
2017.
https://www.researchgate.net/publication/323359493_An_ethical_perspective_on_autonomous_weap
on_systems_Perspectives_on_Lethal_Autonomous_Weapon_Systems. [Premier]

The inevitability of these systems is not a matter of consensus, however.4,5 In fact, such military and
technological determinism deserves our firm rejection.

The decision to deploy autonomous weapon systems, in particular lethal autonomous weapon systems,
is a choice that has yet to be made, and parameters for their potential deployment have yet to be
agreed upon. Recognition of this fact leaves room to discuss not only how to use and manage weapon
systems endowed with different degrees of autonomy (for example, what functions should be subject to
meaningful human control), but also if using such weapons is justified at all.

The supposed ability of autonomous weapon systems to strengthen the application of humanitarian
principles in armed conflict is commonly cited as a justification for the development and use of such
systems, but this argument merits close scrutiny. The design and use of autonomous weapon systems to
reduce fatalities, unnecessary suffering and the risk of war crimes would, in realistic terms, make war
more palatable for its perpetrators by diminishing their sense of direct responsibility for victims. The
potential for autonomous weapon systems to further distance humans from their violent actions
constitutes one of the most compelling arguments against such systems. Furthermore, applying
humanitarian principles in the pursuit of military objectives has historically proven difficult, and the
ability of autonomous weapon systems to adhere to ethical standards of conduct is even less certain.
Because tactical superiority is the main goal of military research6 and technology plays a key role in
ensuring military competitiveness,7 designers of autonomous weapon systems may see little practical
reason to restrain their lethal potential. The use of autonomous weapons could, in other words, lead to
“inhumanely efficient” wars.8

The prospect of humans waging war on such profoundly impersonal terms underscores the need to
address the wider context of military and civilian technological progress in discussing whether to limit or
prohibit the use of autonomous weapon systems. The consequences of the debate over autonomous
weapon systems will extend far beyond the use of particular weapons, making it critical for participants
to apply underlying ethical principles in a manner that deliberately avoids defining human beings in
increasingly machine-like terms. A rigorous examination of the ethical framework for this debate may
ultimately demand steps to strengthen and even redefine existing human rights protections.

LAWs are not inevitable and even if they are, we should still ban them.
Asaro 12 Peter Asaro is Affiliate Scholar at Stanford Law School’s Center for Internet and Society,
CoFounder and Vice-Chair of the International Committee for Robot Arms Control, and the Director of
Graduate Programs for the School of Media Studies at The New School for Public Engagement in New
York City. "On banning autonomous weapon systems: human rights, automation, and the
dehumanization of lethal decision-making." International Review of the Red Cross, 94(886). Summer
2012. https://www.cambridge.org/core/services/aop-cambridge-
core/content/view/992565190BF2912AFC5AC0657AFECF07/S1816383112000768a.pdf/on-banning-
autonomous-weapon-systems-human-rights-automation-and-the-dehumanization-of-lethal-decision-
making.pdf. [Premier]

Why should we assume that autonomous weapon systems are inevitable? What might this actually
mean? As a philosopher and historian of science and technology, I often encounter claims about the
‘inevitability’ of scientific discoveries or technological innovations. The popularity of such claims is
largely due to the retrospective character of history, and applying our understanding of past
technologies to thinking about the future. That is, it seems easy for us, looking back, to say that the
invention of the light bulb, or the telephone, or whatever technology you prefer was inevitable –
because it did in fact happen. It is hard to imagine what the world would be like if it had not happened.
Yet when one looks carefully at the historical details, whether a technology succeeded technologically
was in most cases highly contingent on a variety of factors. In most cases, the adoption of the
technology was not guaranteed by the success of the innovation, and the means and manner of its
eventual use always depended upon a great variety of social and cultural forces. Indeed, when we look
at the great many technological failures, and indeed the many failed attempts to commercialize the light
bulb before it finally succeeded, what becomes clear is that very few, if any, technologies can fairly be
claimed to be ‘inevitable’. And even the successful light bulb was dependent upon the innovation and
development of electrical utilities, and a host of other electric appliances, such as toasters, for its
widespread adoption. Technologies evolve much faster now, but they are just as dynamic and
unpredictable.

Perhaps what Anderson and Waxman mean is that it seems very likely that these technologies will be
developed. This seems more plausible. Indeed, simplistic systems can already implement the essential
elements of an autonomous weapon system, though these would fail to meet the existing international
legal standards of discrimination and proportionality.32 But even ignoring the existing legal limitations,
the fact that we can build autonomous lethal technologies does not mean we will use them. Given that
various sorts of autonomous weapon systems are already possible, it might be claimed that it is their
adoption that is inevitable. But to assume this would be glossing over the important differences
between the invention of a technology and its widespread adoption in society. There are certainly some
strong motivations for adopting such technologies, including the desire to reduce the risks to military
personnel, as well as reduce the costs and number of people needed for various military operations and
capabilities.

Or more strongly, Anderson and Waxman might mean that we should assume that it is inevitable that
there will be autonomous weapon systems that are capable of meeting the requirements of some
measure of discrimination and proportionality. But this is an empirical claim, about the capabilities of
technologies that do not yet exist, being measured against a metric that does not yet exist. As a purely
empirical question, these technologies may or may not come into existence and we may not even be
able to agree upon acceptable metrics for evaluating their performance, so why should we believe that
they are inevitable?33
The crucial question here is whether these technologies can meet the requirements of international law,
and this is far from certain. The arguments claiming the ethical superiority of robotic soldiers sound
suspiciously like claims from the early days of artificial intelligence that computers would someday beat
human grandmasters at chess. And, forty years later than initial predictions, IBM’s Deep Blue did
manage to beat Gary Kasparov. But there are important differences between chess and IHL that are
worth noting. Chess is a fairly well-defined rulebased game that is susceptible to computational analysis.
Ultimately, the game of chess is not a matter of interpretation, nor is it a matter of social norms.
International law, while it has rules, is not like chess. Law always requires interpretation and judgement
in order to apply it to real world situations. These interpretations and judgements are aided by historical
precedents and established standards, but they are not strictly determined by them. The body of case
law, procedures, arguments, and appeals is able to defend old principles or establish new precedents,
and thereby establish norms and principles, even as those norms and principles continue to grow in
meaning over time.

Thus, insisting that autonomous weapon systems are inevitable is actually quite pernicious. On the one
hand, this assumption would make the establishment of a ban seem automatically impractical or
unworkable. That is, if we start from the assumption that the banned systems will exist and will be used,
then why should we bother to ban them? But of course, they do not exist and are not being used, and
even if they were being used already they could still be banned going forward. And far from being
unworkable or impractical, a ban could be quite effective in shifting innovation trajectories towards
more useful and genuinely ethical systems. It seems straightforward that we can define the class of
autonomous weapon systems clearly enough, and then debate how a treaty might apply to, or exempt,
certain borderline cases such as reactive armour, anti-ballistic missile defences, or supervisory systems.
A ban cannot be expected to prohibit each and every use of automation in armed conflict, but rather to
establish an international norm that says that it is illegitimate to use systems that make automated
lethal decisions. The international bans on landmines and cluster munitions may have not completely
eliminated landmines and cluster munitions and their use in armed conflict, but they have made it more
difficult for manufacturers to produce them profitably, and for militaries to use them without
repercussions in the international community.

Moreover, starting from an assumption of the inevitability of autonomous weapon systems appears to
make the acceptability of such systems a foregone conclusion. Yet what is ultimately at issue here is
what the international standards of acceptability will be – what the international community will
consider the norms of conduct to be. To assume the inevitability of the development and use of the
technologies in question is to close off further discussion on the wisdom and desirability of pursuing,
developing, and using these technologies. In short, the development and use of autonomous weapon
systems is not inevitable – no technology is. Yes, they are possible; if they were not then there would be
no need to ban them, but their developments still requires great investment. And even if we may not be
able to prevent the creation of certain technologies, we will always be able to assert a position on the
moral and legal acceptability of their use. It does not follow that simply because a technology exists, its
use is acceptable.
AT: Precision

Even if machines perform better in theory, they should be used as supplements to


human judgements.
Asaro 12 Peter Asaro is Affiliate Scholar at Stanford Law School’s Center for Internet and Society,
CoFounder and Vice-Chair of the International Committee for Robot Arms Control, and the Director of
Graduate Programs for the School of Media Studies at The New School for Public Engagement in New
York City. "On banning autonomous weapon systems: human rights, automation, and the
dehumanization of lethal decision-making." International Review of the Red Cross, 94(886). Summer
2012. https://www.cambridge.org/core/services/aop-cambridge-
core/content/view/992565190BF2912AFC5AC0657AFECF07/S1816383112000768a.pdf/on-banning-
autonomous-weapon-systems-human-rights-automation-and-the-dehumanization-of-lethal-decision-
making.pdf. [Premier]

Are more precise weapons more ‘moral’ than less precise weapons? It is easy enough to argue that
given the choice between attacking a military target with a precision-guided munition with low risk of
collateral damage, and attacking the same target by carpet bombing with a high risk or certainty of great
collateral damage, one ought to choose the precision-guided munition. That is the moral and legal
choice to make, all other things being equal. Of course, there is quite a bit that might be packed into the
phrase ‘all other things being equal’. Thus it is true that one should prefer a more precise weapon to a
less precise weapon when deciding how to engage a target, but the weapon is not ethically
independent of that choice. And ultimately it is the human agent who chooses to use the weapon that is
judged to be moral or not. Even the most precise weapon can be used illegally and immorally. All that
precision affords is a possibility for more ethical behaviour – it does not determine or guarantee it.

This may seem like a semantic argument, but it is a crucial distinction. We do not abrogate our moral
responsibilities by using more precise technologies. But as with other automated systems, such as cruise
control or autopilot, we still hold the operator responsible for the system they are operating, the
ultimate decision to engage the automated system or to disengage it, and the appropriateness of these
choices. Indeed, in most cases these technologies, as we have seen in the use of precision-guided
munitions and armed drones, actually increase our moral burden to ensure that targets are properly
selected and civilians are spared. And indeed, as our technologies increase in sophistication, we should
design them so as to enhance our moral conduct.

There is something profoundly odd about claiming to improve the morality of warfare by automating
humans out of it altogether, or at least by automating the decisions to use lethal force. The rhetorical
strategy of these arguments is to point out the moral shortcomings of humans in war – acts of
desperation and fear, mistakes made under stress, duress, and in the fog of war. The next move is to
appeal to a technological solution that might eliminate such mistakes. This might sound appealing,
despite the fact that the technology does not exist. It also misses two crucial points about the new kinds
of automated technologies that we are seeing. First, that by removing soldiers from the immediate risks
of war, which teleoperated systems do without automating lethal decisions, we can also avoid many of
these psychological pressures and the mistakes they cause. Second, if there were an automated system
that could outperform humans in discrimination tasks, or proportionality calculations, it could just as
easily be used as an advisory system to assist and inform human decision-makers, and need not be given
the authority to initiate lethal force independently of informed human decisions.29

Killer robots would be biased and ruthless.


Acheson 18 Ray Acheson is the director of Reaching Critical Will, the disarmament program of the
Women’s International League for Peace and Freedom. "To Preserve Our Humanity, We Must Ban Killer
Robots." Nation. 1 October 2018. https://www.thenation.com/article/archive/to-preserve-our-
humanity-we-must-ban-killer-robots/. [Premier]

Proponents of fully autonomous weapon systems argue that these weapons will be keep human soldiers
in the deploying force out of danger and that they will be more “precise.” They believe these weapons
will make calculations and decisions more quickly than humans, and that those decisions—in targeting
and in attack—will be more accurate than those of humans. They also argue that the weapons will not
have emotional responses to situations—they won’t go on a rampage out of revenge; they won’t rape.

But many tech workers, roboticists, and legal scholars believe that we will never be able to program
robots to accurately and consistently discriminate between soldiers and civilians in times of conflict .
“Although progress is likely in the development of sensory and processing capabilities, distinguishing an
active combatant from a civilian or an injured or surrendering soldier requires more than such
capabilities,” explained Bonnie Docherty of Harvard Law School and Human Rights Watch. “It also
depends on the qualitative ability to gauge human intention, which involves interpreting the meaning of
subtle clues, such as tone of voice, facial expressions, or body language, in a specific context.”

There are also widespread concerns about programming human bias into killer robots. The practice of
“signature strikes” already uses identifiers such as “military-age male” to target and execute killings—
and to justify them afterward. Imagine a machine programmed with prejudice on the basis of race, sex,
gender identity, sexual orientation, socioeconomic status, or ability. Imagine its deployment not just in
war but in policing situations.

This dehumanization of targets would be matched by dehumanization of attacks. Algorithms would


create a perfect killing machine, stripped of the empathy, conscience, or emotion that might hold a
human soldier back. Proponents of autonomous weapons have argued that this is exactly what would
make them better than human soldiers. They say machines would do a better job of complying with the
laws of war than humans do, because they would lack human emotions. But this also means they would
not possess mercy or compassion. They would not hesitate or challenge a commanding officer’s
deployment or instruction. They would simply do as they have been programmed to do—and if this
includes massacring everyone in a village, they will do so without hesitation.
AT: Red Lines

Red lines have dangerous downsides.


Etzioni and Etzioni 17 Amitai Etzioni, PhD, Allen Institute for Artificial Intelligence, and Oren Etzioni,
PhD, professor of computer science, and CEO of the Allen Institute for Artificial Intelligence. Pros and
Cons of Autonomous Weapons Systems. Military Review, May-June 2017,
https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/May-June-
2017/Pros-and-Cons-of-Autonomous-Weapons-Systems/. [Premier]

We suggest that nations might be willing to forgo this advantage of fully autonomous arms in order to
gain the assurance that once hostilities ceased, they could avoid becoming entangled in new rounds of
fighting because some bombers were still running loose and attacking the other side, or because some
bombers might malfunction and attack civilian centers. Finally, if a ban on fully autonomous weapons
were agreed upon and means of verification were developed, one could aspire to move toward limiting
weapons with a high but not full measure of autonomy.
AT: Unrealistic

LAWs ban is realistic.


Asaro 12 Peter Asaro is Affiliate Scholar at Stanford Law School’s Center for Internet and Society,
CoFounder and Vice-Chair of the International Committee for Robot Arms Control, and the Director of
Graduate Programs for the School of Media Studies at The New School for Public Engagement in New
York City. "On banning autonomous weapon systems: human rights, automation, and the
dehumanization of lethal decision-making." International Review of the Red Cross, 94(886). Summer
2012. https://www.cambridge.org/core/services/aop-cambridge-
core/content/view/992565190BF2912AFC5AC0657AFECF07/S1816383112000768a.pdf/on-banning-
autonomous-weapon-systems-human-rights-automation-and-the-dehumanization-of-lethal-decision-
making.pdf. [Premier]

How might we make sense of the claim that a ban on autonomous weapon systems would be
unrealistic? Is it that such a ban would, in practice, be difficult to implement? All arms control treaties
pose challenges in their implementation, and a ban on autonomous weapon systems should not prove
exceptionally more or less difficult than others, and therefore is not unrealistic in this sense. Or is the
claim that it would be politically difficult to find support for such a ban? In my personal experience,
there are a great many individuals, particularly among military officers and policymakers but also
among engineers and executives in the defence industry, who would support such a ban. Moreover, it is
clear, from my experiences in engaging with the public, that strong moral apprehensions about
automated weapons systems are broad-based, as is fear of the potential risks they pose. At the very
least, a ban is not unrealistic in the sense that it might likely find broad public and official support.
Neg
Stock
Warfighting

LAWs offer multiple advantages on the battlefield.


Etzioni and Etzioni 17 summarizes Amitai Etzioni, PhD, Allen Institute for Artificial Intelligence, and
Oren Etzioni, PhD, professor of computer science, and CEO of the Allen Institute for Artificial
Intelligence. Pros and Cons of Autonomous Weapons Systems. Military Review, May-June 2017,
https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/May-June-
2017/Pros-and-Cons-of-Autonomous-Weapons-Systems/. [Premier]

Military advantages. Those who call for further development and deployment of autonomous weapons
systems generally point to several military advantages. First, autonomous weapons systems act as a
force multiplier. That is, fewer warfighters are needed for a given mission, and the efficacy of each
warfighter is greater. Next, advocates credit autonomous weapons systems with expanding the
battlefield, allowing combat to reach into areas that were previously inaccessible. Finally, autonomous
weapons systems can reduce casualties by removing human warfighters from dangerous missions.1

The Department of Defense’s Unmanned Systems Roadmap: 2007-2032 provides additional reasons for
pursuing autonomous weapons systems. These include that robots are better suited than humans for
“‘dull, dirty, or dangerous’ missions.”2 An example of a dull mission is long-duration sorties. An example
of a dirty mission is one that exposes humans to potentially harmful radiological material. An example of
a dangerous mission is explosive ordnance disposal. Maj. Jeffrey S. Thurnher, U.S. Army, adds, “[lethal
autonomous robots] have the unique potential to operate at a tempo faster than humans can possibly
achieve and to lethally strike even when communications links have been severed.”3

In addition, the long-term savings that could be achieved through fielding an army of military robots
have been highlighted. In a 2013 article published in The Fiscal Times, David Francis cites Department of
Defense figures showing that “each soldier in Afghanistan costs the Pentagon roughly $850,000 per
year.”4 Some estimate the cost per year to be even higher. Conversely, according to Francis, “the TALON
robot—a small rover that can be outfitted with weapons, costs $230,000.”5 According to Defense News,
Gen. Robert Cone, former commander of the U.S. Army Training and Doctrine Command, suggested at
the 2014 Army Aviation Symposium that by relying more on “support robots,” the Army eventually
could reduce the size of a brigade from four thousand to three thousand soldiers without a concomitant
reduction in effectiveness.6

Air Force Maj. Jason S. DeSon, writing in the Air Force Law Review, notes the potential advantages of
autonomous aerial weapons systems.7 According to DeSon, the physical strain of high-G maneuvers and
the intense mental concentration and situational awareness required of fighter pilots make them very
prone to fatigue and exhaustion; robot pilots, on the other hand would not be subject to these
physiological and mental constraints. Moreover, fully autonomous planes could be programmed to take
genuinely random and unpredictable action that could confuse an opponent. More striking still, Air
Force Capt. Michael Byrnes predicts that a single unmanned aerial vehicle with machine-controlled
maneuvering and accuracy could, “with a few hundred rounds of ammunition and sufficient fuel
reserves,” take out an entire fleet of aircraft, presumably one with human pilots.8
In 2012, a report by the Defense Science Board, in support of the Office of the Under Secretary of
Defense for Acquisition, Technology and Logistics, identified “six key areas in which advances in
autonomy would have significant benefit to [an] unmanned system: perception, planning, learning,
human-robot interaction, natural language understanding, and multiagent coordination.”9 Perception,
or perceptual processing, refers to sensors and sensing. Sensors include hardware, and sensing includes
software.10

LAWs are in force, make war humane, and are cost effective.
Rabkin and Yoo 17 Jeremy Rabkin, professor of law at Antonin Scalia Law School at George Mason
University, and John Yoo, Emanuel S. Heller Professor of Law Co-Faculty Director, Korea Law Center
Director, UC Berkeley Public Law & Policy Program. "‘Killer Robots’ Can Make War Less Awful." The Wall
Street Journal. 1 September 2017. https://www.wsj.com/articles/killer-robots-can-make-war-less-awful-
1504284282. [Premier]

Mr. Musk has established himself in recent years as the world’s most visible and outspoken critic of
developments in artificial intelligence, so his views on so-called “killer robots” are no surprise. But he
and his allies are too quick to paint dire scenarios, and they fail to acknowledge the enormous potential
of these weapons to defend the U.S. while saving lives and making war both less destructive and less
likely.

In a 2014 directive, the U.S. Defense Department defined an autonomous weapons system as one that,
“once activated, can select and engage targets without further intervention by a human operator.”
Examples in current use by the U.S. include small, ultralight air and ground robots for conducting
reconnaissance and surveillance on the battlefield and behind the lines, antimissile and counter-battery
artillery, and advanced cruise missiles that select targets and evade defenses in real-time. The Pentagon
is developing autonomous aerial drones that can defeat enemy fighters and bomb targets; warships and
submarines that can operate at sea for months without any crew; and small, fast robot tanks that can
swarm a target on the ground.

Critics of these technologies suggest that they are as revolutionary—and terrifying—as nuclear
weapons. But robotics and the computing revolution will have the opposite effect of nuclear weapons.
Rather than applying monstrous, indiscriminate force, they will bring more precision and less
destruction to the battlefield. The new generation of weapons will share many of the same qualities that
have made the remote-controlled Predator and Reaper drones so powerful in finding and destroying
specific targets.

The weapons are cost-effective too. Not only can the U.S. Air Force buy 20 Predators for roughly the cost
of a single F-35 fighter, it can also operate them at a far lower cost and keep them on station for much
longer. More important, robotic warriors—whether remote-controlled or autonomous—can replace
humans in many combat situations in the years ahead, not just in the air but on the land and sea as well.
Fewer American military personnel will have to put their lives on the line in risky missions.

Critics are concerned about taking human beings out of the loop of decision-making in combat. But
direct human involvement doesn’t necessarily make warfare safer, more humane or less incendiary.
Human soldiers grow fatigued and become emotionally involved in conflict, which can result in errors of
judgment and the excessive use of force.

It's a key defensive tool.


Galliott 18 Jai Galliott is Group Leader of Values in Defence & Security Technology at the Australian
Defence Force Academy at the University of New South Wales; Non-Residential Fellow at the Modern
War Institute. "Killer robots: Why banning autonomous weapons is not a good idea." ABC News. 30
August 2018. https://www.abc.net.au/news/2018-08-31/killer-robots-weapons-banning-them-is-not-a-
good-idea/10177178. [Premier]

Public pressure for a ban has been mounting ahead of a United Nations meeting occurring this week in
Geneva, with Australian officials in attendance, and rights activists citing arguments about human
dignity, control issues and accountability.

But a ban would be a mistake.

Confusion about the means for making the world safer is not new.

It was prevalent in the anti-nuclear campaign during the Cold War period and recently renewed with the
invention of miniaturised warheads, and the campaign to ban land mines.

People kill people

While it has become a distasteful dictum in the wake of recent shootings, people kill people.

It is not the weapon that is the root of the evil.

The terrible damage inflicted on multitudes of human beings in places like Rwanda and Sudan with
knives, sticks or stones shows the need for a very different approach to the international debate on
autonomous weapons.

Lack of consensus among the 125 nations involved in the UN meetings in Geneva, including countries
with significantly advanced robotics technology such as the US, Russia, China, and Israel, has created a
vacuum in which a consortium of non-government actors led by the Campaign to Stop Killer Robots has
encouraged ill-informed countries to subscribe to a ban on lethal autonomous weapon systems.

This is despite the fact that autonomous robots are a highly effective defensive weapon for countries
like Australia.

They are always on, unafraid of getting shot, can be quickly deployed and spare soldiers' lives.

Covered by gun fire, they can create a formidable obstacle.

LAWs enable effective and humanitarian warfighting.


Pedron and da Cruz 20 summarizes Stephanie Mae Pedron and Jose de Arimateia da Cruz. "The
Future of Wars: Artificial Intelligence (AI) and Lethal Autonomous W onomous Weapon Systems (L
eapon Systems (LAWS)." International Journal of Security Studies, 2(1). 2020.
https://digitalcommons.northgeorgia.edu/cgi/viewcontent.cgi?article=1020&context=ijoss. [Premier]

Automation has been a boon to all segments of society. It has not only made lives easier, but also paved
the way for technological revolutions in both the public and private sectors. Benefits in progress related
to automation are numerous. From a national security perspective, classically automated non-lethal
systems have already had profound effects on the way the U.S. conducts war. Automation provides an
immediate force-multiplier effect because of the machine’s ability to conduct basic tasks such as
product assembly, material handling, and palletization, thereby removing the need to hire and train
personnel for those duties (Lucas, 2016). But the potential benefits of lethal automation are even
greater. During instances of armed conflict, complex technologies that employ intricate tools and
algorithms allow for the mechanization of more numerous and difficult tasks. Using a maximally
autonomous weapon in combat may also be advantageous in environments with poor or broken down
communication links, since they have the capacity to continue operating on their own.

AI are generally capable of reacting faster than humans; this would ultimately suit the quickening pace
of combat spurred by technological innovation. The quick reaction times of AWS may result in an
overwhelming advantage in the field at the beginning of a conflict (Sharkey, 2018). In certain
circumstances, AI may even supersede the decision-making processes of humans (Surber, 2018). Owing
to the absence of negative emotions related to personal gain, self-interest, and loss, AI may also make
more objective choices in times of crisis that could save lives (ICRC, 2011; Sassóli, 2014). Furthermore,
machines are not subject to the same endurance limitations as people, so LAWS would have the
potential to operate in combat settings for extended periods of time or until its termination.

Depending on the system’s design, LAWS could feasibly replace combatants, thereby eliminating the
need for human deployments in high-risk areas. In other words, it can reduce the risk to American lives
without diminishing U.S. combat capabilities. This feature of AWS, according to Nathan Leys (2018) in his
article, Autonomous Weapon Systems and International Crises, “may reduce U.S. domestic political
opposition to military interventions, especially in humanitarian contexts without an immediately
apparent U.S. national interest.” This could prove useful for long-term political strategies, although that
is based on the assumption that leaders restrain themselves from waging war only because of military
casualties or the derived social consequences that arise from it. If that is the case, then development of
LAWS might also encourage aggression (Bocherty, 2012; Wallach, 2015).
Precision

Humans make deadly mistakes.


Noone and Noone 15 Dr. Gregory P. Noone, Ph.D., J.D. is the Director of the National Security and
Intelligence Program at Fairmont State University and an Associate Professor of Political Science and
Law, and Dr. Diana C. Noone, Ph.D., J.D. is the Chair of Social Sciences in the College of Liberal Arts at
Fairmont State University and an Associate Professor of Criminal Justice. "The Debate Over Autonomous
Weapons Systems." Case Western Reserve Journal of International Law. Volume 47. Issue 1. 2015.
https://scholarlycommons.law.case.edu/cgi/viewcontent.cgi?article=1005&context=jil. [Premier]

More common ground may be found in that all parties also agree that human error exists and that we
collectively strive to eliminate the pain and suffering caused by such error. We have investigated civilian
train, ferry, and airline crashes such as the 1985 Japan Airlines that killed 520 people, caused by
improper maintenance techniques.29 We try and compensate for poor witness identification in criminal
cases that may lead to the death penalty for an accused. Every civilian law enforcement shooting is
thoroughly reviewed. Human error in the medical field results in 100-200 deaths every day in the United
States that may lead to litigation and extensive discovery.30

Likewise, in the military, human error has claimed more than its share of lives. A deadly steam fire
onboard the USS IWO JIMA killed ten sailors because the civilian maintenance crew used brass nuts
instead of steel ones on a steam valve.31 In 1987, during the Iran-Iraq war, in which the U.S. was
supporting Iraq, the USS STARK did not adequately identify a threat from an Iraqi fighter jet, that
(supposedly) misidentified the STARK as an Iranian ship, and as a result 37 sailors died when two 1,500-
pound Exocet missiles impacted the ship.32 As a result of the STARK’s under reaction error, the next
year the USS VINCENNES had an overreaction of human error and shot down an Iranian civilian Airbus
A300 in the Persian Gulf, killing all the civilian passengers and crew. The VINCENNES believed the
airplane was descending into an attack profile and was identified as a military aircraft by its “squawk”
transmission, when in reality it was ascending after takeoff en route to Dubai and was recorded with a
civilian squawk.3

Nearly all friendly fire incidents are the result of human error. The friendly fire that shot down of a pair
of U.S. Army Blackhawks by U.S. Air Force F-15’s in northern Iraq’s “No Fly Zone” in 1994 was the result
of human error by the AWACS crew as well as the F15’s that made visual contact prior to shooting.34
U.S. Army Ranger, and former NFL player, Pat Tillman was killed in Afghanistan as a result of human
error by his fellow unit members when he was misidentified as the enemy in a firefight in 2004.35 “Such
tragedies demonstrate that a man in the loop is not a panacea during situations in which it may be
difficult to distinguish civilians and civilian objects from combatants and military objectives. Those who
believe otherwise have not experienced the fog of war.”36 In short, human error causes untold deaths –
perhaps AWS can perform better

LAWs would reduce casualties – multiple warrants.


Arkin 13 Ronald C. Arkin, Mobile Robot Laboratory College of Computing Georgia Institute of
Technology. "Lethal Autonomous Systems and the Plight of the Non-combatant." AISB Quarterly, No
137, July 2013. https://www.law.upenn.edu/live/files/3880-arkinlethal-autonomous-systems-and-the-
plight-of. [Premier]

Multiple potential benefits of intelligent war machines have already been declared by the military,
including: a reduction in friendly casualties; force multiplication; expanding the battlespace; extending
the warfighter’s reach; the ability to respond faster given the pressure of an ever increasing battlefield
tempo; and greater precision due to persistent stare [constant video surveillance that enables more
time for decision making and more eyes on target]. This argues for the inevitability of development and
deployment of lethal autonomous systems from a military efficiency and economic standpoint, unless
limited by IHL.

It must be noted that past and present trends in human behavior in the battlefield regarding adhering to
legal and ethical requirements are questionable at best. Unfortunately, humanity has a rather dismal
record in ethical behavior in the battlefield. Potential explanations for the persistence of war crimes
include5 : c

You might also like