
Disclosure---R4

1AC---V1
1AC
Advantage
Lethal autonomous weapons (LAWs) are inevitable and threaten
nuclear conflict---they are the third revolution in warfare.
Kai-Fu Lee 21, founding director of Microsoft Research Asia, former president of Google
China, CEO of Sinovation Ventures and an expert in artificial intelligence, 9/11/21, The Third
Revolution in Warfare, https://www.theatlantic.com/technology/archive/2021/09/i-weapons-
are-third-revolution-warfare/620013/
On the 20th anniversary of 9/11, against the backdrop of the rushed U.S.-allied Afghanistan withdrawal, the grisly reality of armed
combat and the challenge posed by asymmetric suicide terror attacks grow harder to ignore.
But weapons technology has changed substantially over the past two decades. And thinking ahead to the
not-so-distant future, we must ask: What if these assailants were able to remove human suicide bombers or attackers from the
equation altogether? As someone who has studied and worked in artificial intelligence for the better part of four decades, I worry
about such a technology threat, born from artificial intelligence and robotics.
Autonomous weaponry is the third revolution in warfare, following gunpowder and
nuclear arms . The evolution from land mines to guided missiles was just a prelude to true
AI-enabled autonomy —the full engagement of killing: searching for, deciding to engage, and obliterating another
human life, completely without human involvement.
An example of an autonomous weapon in use today is the Israeli Harpy drone , which is
programmed to fly to a particular area, hunt for specific targets, and then destroy them using a
high-explosive warhead nicknamed “Fire and Forget.” But a far more provocative example is illustrated in the dystopian
short film Slaughterbots, which tells the story of bird-sized drones that can actively seek out a particular person and shoot a small
amount of dynamite point-blank through that person’s skull. These drones fly themselves and are too small and nimble to be easily
caught, stopped, or destroyed.
These “ slaughterbots ” are not merely the stuff of fiction . One such drone nearly killed the
president of Venezuela in 2018, and could be built today by an experienced hobbyist for
less than $1,000. All of the parts are available for purchase online , and all open-source
technologies are available for download. This is an unintended consequence of AI and robotics
becoming more accessible and inexpensive . Imagine, a $1,000 political assassin ! And this
is not a far-fetched danger for the future but a clear and present danger.
We have witnessed how quickly AI has advanced, and these advancements will accelerate
the near-term future of autonomous weapons . Not only will these killer robots become more
intelligent , more precise , faster , and cheaper ; they will also learn new capabilities ,
such as how to form swarms with teamwork and redundancy, making their missions virtually
unstoppable . A swarm of 10,000 drones that could wipe out half a city could theoretically cost as
little as $10 million.
Even so, autonomous weapons are not without benefits. Autonomous weapons can save soldiers’ lives if wars are fought by
machines. Also, in the hands of a responsible military, they can help soldiers target only combatants and avoid inadvertently killing
friendly forces, children, and civilians (similar to how an autonomous vehicle can brake for the driver when a collision is imminent).
Autonomous weapons can also be used defensively against assassins and perpetrators.
But the downsides and liabilities far outweigh these benefits. The strongest such liability is moral—
nearly all ethical and religious systems view the taking of a human life as a contentious act requiring strong justification and
scrutiny. United Nations Secretary-General António Guterres has stated, “The prospect of machines with the discretion and power to
take human life is morally repugnant.”
Autonomous weapons lower the cost to the killer. Giving one’s life for a cause—as suicide bombers do—is still
a high hurdle for anyone. But with autonomous assassins, no lives would have to be given up for killing.
Another major issue is having a clear line of accountability —knowing who is responsible in
case of an error. This is well established for soldiers on the battlefield. But when the killing is assigned to an
autonomous-weapon system, the accountability is unclear (similar to accountability ambiguity when an
autonomous vehicle runs over a pedestrian).
Such ambiguity may ultimately absolve aggressors for injustices or violations of international
humanitarian law. And this lowers the threshold of war and makes it accessible to anyone. A
further related danger is that autonomous weapons can target individuals , using facial or gait recognition, and the
tracing of phone or IoT signals. This enables not only the assassination of one person but a genocide of
any group of people. One of the stories in my new “scientific fiction” book based on realistic possible-future scenarios, AI
2041, which I co-wrote with the sci-fi writer Chen Qiufan, describes a Unabomber-like scenario in which a terrorist carries out the
targeted killing of business elites and high-profile individuals.
Greater autonomy without a deep understanding of meta issues will further boost the speed
of war (and thus casualties) and will potentially lead to disastrous escalations , including
nuclear war . AI is limited by its lack of common sense and human ability to reason
across domains. No matter how much you train an autonomous-weapon system, the limitation on domain will
keep it from fully understanding the consequences of its actions.
In 2015, the Future of Life Institute published an open letter on AI weapons, warning that “a global arms race is
virtually inevitable .” Such an escalatory dynamic represents familiar terrain, whether the Anglo-
German naval-arms race or the Soviet-American nuclear-arms race. Powerful countries have long fought for
military supremacy. Autonomous weapons offer many more ways to “ win ” (the smallest,
fastest, stealthiest, most lethal, and so on).
Pursuing military might through autonomous weaponry could also cost less , lowering the
barrier of entry to such global-scale conflicts . Smaller countries with powerful technologies,
such as Israel, have already entered the race with some of the most advanced military robots,
including some as small as flies. With the near certainty that one’s adversaries will build up
autonomous weapons, ambitious countries will feel compelled to compete.
Where will this arms race take us? Stuart Russell, a computer-science professor at UC Berkeley, says, “The capabilities of
autonomous weapons will be limited more by the laws of physics—for example, by constraints on range, speed, and payload—than
by any deficiencies in the AI systems that control them. One can expect platforms deployed in the millions, the
agility and lethality of which will leave humans utterly defenseless .” This multilateral arms
race, if allowed to run its course, will eventually become a race toward oblivion.
Nuclear weapons are an existential threat , but they’ve been kept in check and have even
helped reduce conventional warfare on account of the deterrence theory. Because a nuclear war leads to
mutually assured destruction , any country initiating a nuclear first strike likely faces
reciprocity and thus self-destruction.
But autonomous weapons are different . The deterrence theory does not apply ,
because a surprise first attack may be untraceable . As discussed earlier, autonomous-weapon
attacks can quickly trigger a response, and escalations can be very fast , potentially leading
to nuclear war . The first attack may not even be triggered by a country but by terrorists or
other non-state actors. This exacerbates the level of danger of autonomous weapons.

Autonomous weapons are inherently susceptible to accidents and mistakes---data collection is probabilistic and prone to spoofing
Kelsey D. Atherton 21, military technology journalist, 05/17/21, Autonomous war machines
could make costly mistakes on future battlefields,
https://www.popsci.com/technology/autonomous-machines-mistakes-un-institute-report/
On some future battlefield, a military robot will make a mistake . Designing autonomous
machines for war means accepting some degree of error in the future, and more vexingly, it means
not knowing exactly what that error will be.
As nations, weapon makers, and the international community work on rules about autonomous weapons, talking honestly
about the risks from data error is essential if machines are to deliver on their promise of limiting harm.
A new release from an institute within the UN tackles this conversation directly. Published today, “Known Unknowns: Data Issues
and Military Autonomous Systems” is a report from the United Nations Institute for Disarmament Research. Its intent is to help
policy makers better understand the risks inherent in autonomous machines. These risks include everything from how data
processing can fail, to how data collection can be actively gamed by hostile forces. A major
component of this risk is that data collected and used in combat is messier than data in a lab, which
will change how machines act. 
The real-world scenarios are troubling. Maybe the robot’s camera, trained for the desert
glare of White Sands Missile Range, will misinterpret a headlight’s reflection on a foggy
morning. Perhaps an algorithm that targets the robot’s machine gun will calibrate the distance
wrong , shifting a crosshair from the front of a tank to a piece of playground equipment. Maybe an autonomous
scout , reading location data off a nearby cell phone tower, is deliberately fed wrong
information by an adversary, and marks the wrong street as a safe path for soldiers.
Autonomous machines can only be autonomous because they collect data about their
environment as they move through it, and then act on that data . In training environments,
the data that autonomous systems collect is relevant, complete, accurate, and high quality. But, the report
notes, “conflict environments are harsh , dynamic and adversarial , and there will always be
more variability in the real-world data of the battlefield than the limited sample of data on
which autonomous systems are built and verified.”
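A minimal sketch of the report's lab-versus-battlefield point, in Python. Everything here is invented for illustration (the class means, noise levels, and decision threshold are assumptions, not drawn from any real sensor or weapons system); the point is only that a rule tuned under clean conditions degrades when the input distribution widens:

```python
# Toy illustration of distribution shift: a fixed decision rule tuned on
# clean "test range" data degrades when field data is noisier. All numbers
# are invented assumptions for illustration.
import random

random.seed(0)

def make_samples(n, noise):
    # One feature per sample: class 1 ("vehicle") reads high, class 0 reads low.
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        reading = (5.0 if label else 2.0) + random.gauss(0, noise)
        data.append((reading, label))
    return data

def accuracy(data, threshold=3.5):
    # The same fixed rule is applied in both settings.
    return sum((reading > threshold) == bool(label) for reading, label in data) / len(data)

train = make_samples(10_000, noise=0.5)   # controlled, low-variability conditions
field = make_samples(10_000, noise=2.5)   # fog, glare, clutter: wider spread

print(f"accuracy on training-like data: {accuracy(train):.2%}")   # ~99.9%
print(f"accuracy on shifted field data: {accuracy(field):.2%}")   # ~70-75%
```

Nothing about the rule changes between the two runs; only the variability of the inputs does, which is the gap between verification data and battlefield data that the report describes.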
One example of this kind of error comes from a camera sensor. During a presentation in October 2020, an executive
of a military sensor company showed off a targeting algorithm, boasting that the algorithm could distinguish between military and
civilian vehicles. In that same demonstration, the video marked a human walking in a parking lot and a tree as identical targets. 
When military planners build autonomous systems, they first train those systems with data in a
controlled setting . With training data, it should be possible to get a target recognition
program to tell the difference between a tree and a person. Yet even if the algorithm is correct in training,
using it in combat could mean an automated targeting program locking onto trees instead
of people , which would be militarily ineffective. Worse still,  it could lock onto people instead of trees, which could lead to
unintended casualties.
Hostile soldiers or irregulars, looking to outwit an attack from autonomous weapons, could also try to fool
the robot hunting them with false or misleading data. This is sometimes known as  spoofing , and
examples exist in peaceful contexts. For example, by using tape on a 35 mph speed limit sign to make the 3 read a little
more like an 8, a team of researchers convinced a Tesla car in self-driving mode to accelerate to
85 mph.
In another experiment, researchers were able to fool an  object-recognition algorithm into assuming an
apple was an iPod by sticking a paper label that said “iPod” onto the apple . In war, an autonomous
robot designed to clear a street of explosives might overlook an obvious booby-trapped
bomb if it has a written label that says “ soccer ball ” instead. 
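The mechanism behind both spoofing examples can be shown with a toy linear classifier. The feature names and weights below are entirely hypothetical; the sketch only demonstrates how a model that has learned to lean on one spurious input (a written label) can be flipped by changing that input alone:

```python
# Hypothetical toy model, not any real targeting system: a linear classifier
# with an over-weighted spurious feature ("what the label says") can be
# flipped by an attacker who changes only that feature.

FEATURES = ["metal_casing", "wires_visible", "round_shape", "label_says_soccer_ball"]
WEIGHTS  = [1.5, 2.0, -0.5, -4.0]   # the label outweighs the physical evidence

def threat_score(x):
    return sum(w * v for w, v in zip(WEIGHTS, x))

device         = [1, 1, 1, 0]   # booby-trapped object, no label
device_labeled = [1, 1, 1, 1]   # same object with "soccer ball" written on it

for name, x in [("unlabeled device", device), ("labeled device", device_labeled)]:
    score = threat_score(x)
    print(f"{name}: score = {score:+.1f} -> {'THREAT' if score > 0 else 'ignore'}")
```

The tape on the speed-limit sign and the paper "iPod" label work the same way: they move a single over-weighted feature, and every physical feature of the object stays exactly as it was.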
An error anywhere in the process, from collection to interpretation to communicating that
information to humans, could lead to “ cascading effects ” that result in unintended harm, says Arthur
Holland Michel, associate researcher in the Security and Technology programme at the UN Institute for Disarmament Research, and
the author of this report.
“Imagine a reconnaissance drone that, as a result of spoofing or bad data, incorrectly
categorizes a target area as having a very low probability of civilian presence ,” Holland Michel
tells Popular Science via email. “Those human soldiers who act on that system’s assessment wouldn’t
necessarily know that it was faulty , and in a very fast-paced situation they might not have
time to audit the system’s assessment and find the issue.”
If testing revealed that a targeting camera might mistake trees for civilians, the soldiers would know to look for that error in battle .
If the error is one that never appeared in testing, like an infrared sensor seeing the heat of several clustered
radiators and interpreting that as people, the soldiers would not even have reason to believe the
autonomous system was wrong until after the shooting was over .
Talking about how machines can produce errors , especially unexpected errors, is
important because otherwise people relying on a machine will likely assume it is accurate.
Compounding this problem, it is hard in the field to discern how an autonomous machine made its
decision.
“The type of AI called Deep Learning is notoriously opaque and is therefore often called a “ black
box .” It does something with probability, and often it works, but we don’t know why,” Maaike Verbruggen, a doctoral researcher
at the Vrije Universiteit Brussel, tells Popular Science via email. “But how can a soldier assess whether a machine
recommendation is the right one, if they have no idea why a machine came to that conclusion ?”
Given the uncertainty in the heat of battle , it is reasonable to expect soldiers to follow
machine recommendations, and assume they are without error. Yet error is an inevitable part
of using autonomous machines in conflicts . Trusting that the machine acted correctly does not free soldiers
from obligations under international law to avoid unintentional harm.
While there are weapons with autonomous features in use today, no nation has explicitly said it is ready to trust a machine to target
and fire on people without human involvement in the process. However, data errors can cause new problems, leaving humans
responsible for a machine behaving in an unexpected and unanticipated way. And as machines become more
autonomous , this danger is likely only to increase .
“When it comes to autonomous weapons, the devils are in the technical details ,” says Holland
Michel. “It’s all very well to say that humans should always be held accountable for the actions of autonomous weapons but if those
systems, because of their complex algorithmic architecture, have unknown failure points that nobody could have anticipated with
existing testing techniques, how do you enshrine that accountability?”
One possible use for fully autonomous weapons is only targeting other machines , like
uninhabited drones, and not targeting people or vehicles containing people. But in practice, how that
weapon collects, interprets, and uses data becomes tremendously important.
“If such a weapon fails because the relevant data that it collects about a building is
incomplete, leading
that system to target personnel, you’re back to square one,” says Holland Michel.

AWs increase first strike pressures, compress decision timelines, and unbalance counterforcing---all of which is escalatory, especially with Russia
Burgess Laird 20, senior international defense researcher at the RAND Corporation,
06/03/20, The Risks of Autonomous Weapons Systems for Crisis Stability and Conflict
Escalation in Future U.S.-Russia Confrontations, https://www.rand.org/blog/2020/06/the-risks-of-autonomous-
weapons-systems-for-crisis.html
While holding out the promise of significant operational advantages, AWS simultaneously could increase the potential
for undermining crisis stability and fueling conflict escalation in contests between the
United States and Russia. Defined as “the degree to which mutual deterrence between dangerous adversaries can hold in a
confrontation,” as my RAND colleague Forrest Morgan explains, crisis stability and the ways to achieve it are not about
warfighting, but about “building and posturing forces in ways that allow a state, if confronted, to
avoid war without backing down” on important political or military interests . Thus, the military
capabilities developed by nuclear-armed states like the United States and Russia and how they posture them are key determinants of
whether crises between them will remain stable or devolve into conventional armed conflict, as well as the extent to which such
conflict might escalate in intensity and scope, including to the level of nuclear use. AWS could foster crisis
instability and conflict escalation in contests between the United States and Russia in a
number of ways; in this short essay I will highlight only four.
First, a
state facing an adversary with AWS capable of making decisions at machine speeds
is likely to fear the threat of sudden and potent attack , a threat that would compress the
amount of time for strategic decisionmaking. The posturing of AWS during a crisis would likely create
fears that one's forces could suffer significant, if not decisive , strikes . These fears in turn
could translate into pressures to strike first —to preempt—for fear of having to strike
second from a greatly weakened position . Similarly, within conflict, the fear of losing at machine
speeds would be likely to cause a state to escalate the intensity of the conflict possibly even to
the level of nuclear use .
Second, as the speed of military action in a conflict involving the use of AWS as well as hypersonic weapons
and other advanced military capabilities begins to surpass the speed of political decisionmaking ,
leaders could lose the ability to manage the crisis and with it the ability to control
escalation . With tactical and operational action taking place at speeds driven by machines, the
time for exchanging signals and communications and for assessing diplomatic options
and offramps will be significantly foreclosed. However, the advantages of operating inside the OODA loop of a state
adversary like Iraq or Serbia is one thing, while operating inside the OODA loop of a nuclear-armed adversary is another. As the
renowned scholar Alexander George emphasized (PDF), especially in contests between nuclear armed competitors, there is a
fundamental tension between the operational effectiveness sought by military commanders and
the requirements for political leaders to retain control of events before major escalation takes
place.
Third, and perhaps of greatest concern to policymakers should be the likelihood that, from the vantage point of Russia's leaders, in
U.S. hands the operational advantages of AWS are likely to be understood as an increased
U.S. capability for what Georgetown professor Caitlin Talmadge refers to as “ conventional
counterforce ” operations. In brief, in crises and conflicts, Moscow is likely to see the United States as
confronting it with an array of advanced conventional capabilities backstopped by an interconnected
shield of theater and homeland missile defenses. Russia will perceive such capabilities as posing both a
conventional war-winning threat and a conventional counterforce threat (PDF) poised to
degrade the use of its strategic nuclear forces. The likelihood that Russia will see them this way is
reinforced by the fact that it currently sees U.S. conventional precision capabilities  precisely in this
manner. As a qualitatively new capability that promises new operational advantages, the addition of AWS to U.S.
conventional capabilities could further cement Moscow's view and in doing so increase the
potential for crisis instability and escalation in confrontations with U.S. forces.
In other words, the fielding of U.S. AWS could augment what Moscow already sees as a formidable U.S.
ability to threaten a range of important targets including its command and control networks, air
defenses, and early warning radars, all of which are unquestionably critical components of
Russian conventional forces. In many cases, however, they also serve as critical components of Russia's
nuclear force operations. As Talmadge argues, attacks on such targets, even if intended solely to weaken Russian
conventional capabilities, will likely raise Russian fears that the U.S. conventional campaign is in fact a
counterforce campaign aimed at neutering Russia's nuclear capabilities. Take for example, a hypothetical scenario set in the
Baltics in the 2030 timeframe which finds NATO forces employing swarming AWS to suppress Russian air defense networks and key
command and control nodes in Kaliningrad as part of a larger strategy of expelling a Russian invasion force. What to NATO is a
logical part of a conventional campaign could well appear to Moscow as initial moves of a larger plan designed to degrade the
integrated air defense and command and control networks upon which Russia's strategic nuclear arsenal relies. In turn, such
fears could feed pressures for Moscow to escalate to nuclear use while it still has the ability
to do so.
Finally, even if the employment of AWS does not drive an increase in the speed and momentum
of action that forecloses the time for exchanging signals, a future conflict in which AWS are
ubiquitous will likely prove to be a poor venue both for signaling and interpreting
signals . In such a conflict, instead of interpreting a downward modulation in an adversary's
operations as a possible signal of restraint or perhaps as signaling a willingness to pause in an
effort to open up space for diplomatic negotiations , AWS programmed to exploit every
tactical opportunity might read the modulation as an opportunity to escalate offensive
operations and thus gain tactical advantage. Such AWS could also misunderstand adversary
attempts to signal resolve solely as adversary preparations for imminent attack . Of
course, correctly interpreting signals sent in crisis and conflict is vexing enough when humans are making all the decisions, but in
future confrontations in which decisionmaking has willingly or unwillingly been ceded to machines,
the problem is likely only to be magnified .
Concluding Thoughts
Much attention has been paid to the operational advantages to be gained from the development of AWS. By contrast, much less
attention has been paid to the risks AWS potentially raise. There are times in which the fundamental tensions between the search for
military effectiveness and the requirements of ensuring that crises between major nuclear weapons states remain stable and
escalation does not ensue are pronounced and too consequential to ignore. The development of AWS may well be increasing the
likelihood that one day the United States and Russia could find themselves in just such a time. Now, while AWS are still in
their early development stages, it is worth the time of policymakers to carefully consider whether
the putative operational advantages from AWS are worth the potential risks of instability and
escalation they may raise.

The same is true with China---LAWs are uniquely accident prone
Ryan Fedasiuk 22, adjunct fellow in the Technology and National Security program at the
Center for a New American Security, 04/27/22, The U.S. and China Need Ground Rules for AI
Dangers, https://foreignpolicy.com/2022/04/27/us-china-artificial-intelligence-dangers/
***Edited for ableist language
The threats are bigger, the stakes are higher, and the level of trust between the United States and
China is lower today than it was in 2014, when experts from both countries first began discussing the risks posed by artificial
intelligence (AI). At a time when about 9 in 10 U.S. adults consider China to be a “competitor” or “enemy,” calls for Washington and
Beijing to cooperate on shared challenges routinely fall on deaf ears. But, as laboratories in both countries continue
to unveil dramatic capabilities for AI systems, it is more important than ever that the United States
and China take steps to mitigate existential threats posed by AI accidents.
As a technology, AI is profoundly fragile . Even with perfect information and ideal operating
circumstances, machine learning systems break easily and perform in ways contrary to their
intended function. Since 2017, the Global Partnership on AI has logged “more than 1,200 reports of
intelligent systems causing safety, fairness, or other real-world problems ,” from autonomous car
accidents to racially biased hiring decisions. When the stakes are low, the risk of an AI accident can be tolerable—such as being
presented with an uninteresting Netflix recommendation or suboptimal driving route. But in a high-pressure, low-
information military environment, both the probability and consequences of AI accidents are
bound to increase .
Weapon systems put at high alert, for instance, could mistake a routine incident for an attack—and
even automatically respond . Some of the Cold War’s most dangerous nuclear warning
malfunctions were narrowly avoided because  human judgment prevailed . For now, nuclear
command and control systems in the United States and China still require that element of human decision-
making—but, for instance, shipboard defense systems that might be involved in naval
confrontations do not .
Neither side trusts the other on this issue. Over the past six months, I have spoken on a handful of occasions with
retired Chinese military leaders about the risks involved with AI systems. They view the U.S. Defense
Department’s AI ethics principles and broader approach to “responsible AI” as bad-faith efforts to skirt
multilateral negotiations aimed at restricting the development of autonomous weapons . Meanwhile,
U.S. observers don’t believe China is serious about those negotiations, given its extraordinarily narrow
definition of lethal autonomous weapons systems. (China has called for a ban only on autonomous weapons that cannot be recalled
once initiated and which kill with indiscriminate effect.) Both militaries are developing automated  target
recognition and fire control systems based on AI, and the last substantial discussion among the United Nations
Group of Governmental Experts focused on these issues is set to conclude in mid-2022.
In Washington, a major source of trepidation is the uncertainty surrounding the testing and
evaluation of Chinese military AI systems. Relative to most countries, the United States is extremely rigorous in
terms of its software testing and development process. DoD Directive 3000.09, for example, requires AI systems to be equipped with
safeties and anti-tamper mechanisms. When pressed on the question, Chinese military leaders verbally insist that
their AI systems must also pass rigorous inspection—but no public records document what
this process might look like in practice.

Artificial intelligence possesses fundamental technical limitations---eliminating human decision making risks catastrophic nuclear conflict
***focus on nuclear weapons good
Zachary Kallenborn 22, research affiliate with the Unconventional Weapons and
Technology Division of the National Consortium for the Study of Terrorism and Responses to
Terrorism (START) and a policy fellow at the Schar School of Policy and Government, 02/01/22,
Giving an AI control of nuclear weapons: What could possibly go wrong?,
https://thebulletin.org/2022/02/giving-an-ai-control-of-nuclear-weapons-what-could-
possibly-go-wrong/
***edited for gendered language
If artificial intelligences controlled nuclear weapons, all of us could be dead .
That is no exaggeration. In 1983, Soviet Air Defense Forces Lieutenant Colonel Stanislav Petrov was
monitoring nuclear early warning systems, when the computer concluded with the   highest
confidence  that the United States had launched a nuclear war . But Petrov was doubtful :
The computer estimated only a handful of nuclear weapons were incoming, when such a surprise attack would more plausibly entail
an overwhelming first strike. He also didn’t trust the new launch detection system, and the radar system didn’t
have corroborative evidence. Petrov decided the message was a false positive and did nothing. The
computer was wrong ; Petrov was right. The false signals came from the early warning system
mistaking the sun’s reflection off the clouds for missiles . But if Petrov had been a machine ,
programmed to respond automatically when confidence was sufficiently high, that error would have started a
nuclear war.
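The contrast between Petrov and the hypothetical machine can be written as two decision rules. This is purely an illustrative sketch under stated assumptions: the thresholds and inputs are invented, and no real early-warning system is known to work this way.

```python
# Illustrative only: two ways to turn an alarm into an action. Thresholds,
# inputs, and logic are invented assumptions, not a real launch protocol.

def automatic_rule(confidence, threshold=0.95):
    # The machine Petrov was not: act whenever one system's confidence is high.
    return confidence >= threshold

def petrov_rule(confidence, radar_corroborates, attack_pattern_plausible):
    # Require independent corroboration AND a plausible attack before acting.
    return confidence >= 0.95 and radar_corroborates and attack_pattern_plausible

# The 1983 inputs as described above: highest confidence, no radar
# corroboration, and only a handful of missiles (implausible for a first strike).
confidence, radar, plausible = 0.99, False, False

print("automatic rule retaliates:", automatic_rule(confidence))                    # True
print("Petrov-style rule retaliates:", petrov_rule(confidence, radar, plausible))  # False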
Militaries are increasingly incorporating autonomous functions into weapons systems, though
as far as is publicly known, they haven’t yet turned the nuclear launch codes over to an AI system. Russia has developed a
nuclear-armed , nuclear-powered torpedo that is autonomous in some not publicly known manner,
and defense thinkers in the United States have proposed automating the launch decision
for nuclear weapons .
There is no guarantee that some military won’t put AI in charge of nuclear launches;  International law doesn’t specify that there
should always be a “Petrov” guarding the button. That’s something that should change, soon.
How autonomous nuclear weapons could go wrong. The huge problem with autonomous nuclear weapons,
and really all autonomous weapons, is error . Machine learning-based artificial
intelligences —the current AI vogue—rely on large amounts of data to perform a task.
Google’s AlphaGo program beat the world’s greatest human go players , experts at the ancient Chinese game
that’s even more complex than chess, by playing millions of games against itself to learn the game. For a
constrained game like Go, that worked well. But in the real world, data may be biased or incomplete in all
sorts of ways. For example, one hiring algorithm concluded being named Jared and playing high school lacrosse was the most
reliable indicator of job performance, probably because it picked up on human biases in the data.
In a nuclear weapons context, a
government may have little data about adversary military platforms;
existing data may be structurally biased, by, for example, relying on satellite imagery; or data may not account for obvious, expected variations such as imagery taken during foggy, rainy, or overcast weather.
The nature of nuclear conflict compounds the problem of error.
How would a nuclear weapons AI even be trained? Nuclear weapons have only been used twice in Hiroshima and Nagasaki, and
serious nuclear crises are (thankfully) infrequent. Perhaps inferences can be drawn from adversary nuclear
doctrine, plans, acquisition patterns, and operational activity, but the lack of actual
examples of nuclear conflict means judging the quality of those inferences is impossible .
While a lack of examples hinders humans too, humans have the capacity for higher-order reasoning .
Humans can create theories and identify generalities from limited information or information that is
analogous, but not equivalent.  Machines cannot .
The deeper challenge is high false positive rates in predicting rare events . There have thankfully
been only two nuclear attacks in history. An autonomous system designed to detect and retaliate against an
incoming nuclear weapon, even if highly accurate, will frequently exhibit false positives . Around
the world, in North Korea, Iran, and elsewhere, test missiles are fired into the sea and rockets
are launched into the atmosphere. And there have been many false alarms of nuclear attacks,
vastly more than actual attacks. An AI that’s right almost all the time still has a lot of
opportunity to get it wrong . Similarly, with a test that accurately diagnosed cases of a rare disease 99 percent of the time,
a positive diagnosis may mean just a 5 percent likelihood of actually having the disease, depending on assumptions about the
disease’s prevalence and false positive rates. This is because with rare diseases, the number of false positives could
vastly outweigh the number of true positives. So, if an autonomous nuclear weapon concluded with 99 percent confidence a nuclear
war is about to begin, should it fire?
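The disease analogy is a standard base-rate calculation, and the arithmetic is easy to make explicit. In this sketch the 99 percent accuracy figure comes from the text; the 1-in-2,000 prevalence and the attack base rate are illustrative assumptions (the text itself notes the answer depends on such assumptions):

```python
# Bayes' theorem: how often is an alarm a true positive for a rare event?
def posterior(prior, sensitivity, false_positive_rate):
    # P(event | alarm) = P(alarm | event) * P(event) / P(alarm)
    p_alarm = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_alarm

# Rare disease: assumed 1-in-2,000 prevalence, test right 99% of the time.
print(f"P(disease | positive test) = {posterior(1/2000, 0.99, 0.01):.1%}")    # ~4.7%

# The same logic for a far rarer event (assumed base rate): even a
# 99%-accurate detector alarming almost always means a false alarm.
print(f"P(attack | alarm)          = {posterior(1/100000, 0.99, 0.01):.3%}")  # ~0.099%
```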
In the extremely unlikely event those problems can all be solved, autonomous nuclear weapons introduce new
risks of error and opportunities for bad actors to manipulate systems . Current AI is not
only brittle; it’s easy to fool. A single pixel change is enough to convince an AI a stealth
bomber is a dog. This creates two problems. If a country actually sought a nuclear war, they could fool
the AI system first, rendering it useless. Or a well-resourced, apocalyptic terrorist organization like the
Japanese cult Aum Shinrikyo might attempt to trick an adversary’s system into starting a  catalytic
nuclear war . Both approaches can be done in quite subtle, difficult-to-detect ways: data
poisoning may manipulate the training data that feeds the AI system, or unmanned systems or
emitters could be used to trick an AI into believing a nuclear strike is incoming.
The risk of error can confound well-laid nuclear strategies and plans . If a military had to start a nuclear
war, targeting an enemy’s own nuclear systems with gigantic force would be a good way to go to limit retaliation. However, if an
AI launched a nuclear weapon in error , the decisive opening salvo may be a pittance—a single
nuclear weapon aimed at a less than ideal target. Accidentally nuking a major city might
provoke an overwhelming nuclear retaliation because the adversary would still have all its
missile silos , just not its city.
Some have nonetheless argued that autonomous weapons (not necessarily autonomous nuclear weapons) will eventually reduce the
risk of error. Machines do not need to protect themselves and can be more conservative in making decisions to use force. They do
not have emotions that cloud their judgement and do not exhibit confirmation bias—a type of bias in which people interpret data in
a way that conforms to their desires or beliefs.
While these arguments have potential merit in conventional warfare, depending on how technology evolves, they do not in nuclear
warfare. As strategic deterrents, countries have strong incentives to protect their nuclear weapons platforms, because they literally
safeguard their existence. Instead of being risk avoidant, countries have an incentive to preemptively launch under attack, because
otherwise they may lose their nuclear weapons. Some emotion should also be a part of nuclear decision-
making: the prospect of catastrophic nuclear war should be terrifying, and the decision made
extremely cautiously.
Finally, while autonomous nuclear weapons may not exhibit confirmation biases, the lack of training data and real-
world test environments mean an autonomous nuclear weapon may experience numerous
biases , which may never be discovered until after a nuclear war has started.
The decision to unleash nuclear force is the single most significant decision a leader can make. It  commits a state to an existential
conflict with millions—if not billions—of lives in the balance. Such a consequential, deeply
human decision should
never be made by a computer .
Activists against autonomous weapons have been hesitant to focus on autonomous nuclear
weapons . For example, the International Committee of the Red Cross makes no mention of autonomous nuclear weapons in its
position statement on autonomous weapons. (In fairness, the International Committee for Robot Arms Control’s 2009
statement references autonomous nuclear weapons, though it represents more of the intellectual wing of the so-called “stop killer
robots” movement.) Perhaps activists see nuclear weapons as already broadly banned or do not wish to legitimize nuclear weapons
generally, but the lack of attention is a mistake . Nuclear weapons already have broad established norms against
their use and proliferation, with numerous treaties supporting them. Banning autonomous nuclear weapons should be an easy win
to establish norms against autonomous weapons. Plus, autonomous
nuclear weapons represent perhaps the
highest-risk manifestation of autonomous weapons (an artificial superintelligence is the only potential
higher risk). Which is worse: an autonomous gun turret accidently killing a civilian, or an autonomous nuclear weapon igniting a
nuclear war that leads to catastrophic destruction and possibly the extinction of all humanity? Hint: catastrophic destruction is
vastly worse.
Where autonomous nuclear weapons stand. Some autonomy in nuclear weapons is already here, but it’s
complicated and unclear how worried we should be.
Russia’s Poseidon is an “Intercontinental Nuclear-Powered Nuclear-Armed Autonomous Torpedo”
according to US Navy documents, while the Congressional Research Service has also described it as an
“autonomous undersea vehicle.” The weapon is intended to be a second-strike weapon used in
the event of a nuclear conflict. That is, a weapon intended to ensure a state can always retaliate against a
nuclear strike, even an unexpected, so-called “ bolt from the blue .”  An unanswered question is: what
can the Poseidon do autonomously? Perhaps the torpedo just has some autonomous maneuvering ability to better reach its target—
basically, an underwater cruise missile. That’s probably not a big deal, though there may be some risk of error in
misdirecting the attack .
It is more worrisome if the torpedo is given permission to attack autonomously under
specific conditions. For example, what if, in a crisis scenario where Russian leadership fears a
possible nuclear attack, Poseidon torpedoes are launched under a loiter mode? It could be that
if the  Poseidon loses communications with its host submarine, it launches an attack . Most
worrisome: The torpedo has the ability to attack on its own , but this possibility is quite unlikely. This would require
an independent means for the Poseidon to assess whether a nuclear attack had taken place, while sitting far beneath the ocean. Of
course, given how little is known about the Poseidon, this is all speculation. But that’s part of the point: understanding how
another country’s autonomous systems operate is really hard.

Only meaningful human control can rein in LAWs---autonomous warfare is uniquely escalatory
Noel Sharkey 20, professor emeritus of artificial intelligence and robotics at the University
of Sheffield and chair of the International Committee for Robot Arms Control, 02/01/20, Fully
Autonomous Weapons Pose Unique Dangers to Humankind,
https://www.scientificamerican.com/article/fully-autonomous-weapons-pose-unique-dangers-
to-humankind/
***edited for gendered language
In September 2019 a swarm of 18 bomb-laden drones and seven cruise missiles overwhelmed Saudi Arabia's advanced air defenses
to crash into the Abqaiq and Khurais oil fields and their processing facilities. The surprisingly sophisticated attack, which Yemen's
Houthi rebels claimed responsibility for, halved the nation's output of crude oil and natural gas and forced an increase in global oil
prices. The drones were likely not fully autonomous, however: they did not communicate with one another to pick their own targets,
such as specific storage tanks or refinery buildings. Instead each drone appears to have been preprogrammed with precise
coordinates to which it navigated over hundreds of kilometers by means of a satellite positioning system.
Fully autonomous weapons systems (AWSs) may be operating in war theaters even as you read this
article, however. Turkey has announced plans to deploy a fleet of autonomous Kargu
quadcopters against Syrian forces in early 2020, and Russia is also developing aerial swarms for
that region. Once launched, an AWS finds, tracks, selects and attacks targets with violent force, all without human supervision.
Autonomous weapons are not self-aware, humanoid “Terminator” robots conspiring to take over; they are computer-
controlled tanks, planes, ships and submarines. Even so, they represent a radical change in the nature of warfare .
Humans are outsourcing the decision to kill to a machine—with no one watching to
ascertain the legitimacy of an attack before it is carried out. Since the mid-2000s, when the U.S. Department
of Defense triggered a global artificial-intelligence arms race by signaling its intent to
develop autonomous weapons for all branches of the armed forces, every major power and several lesser
ones have been striving to acquire these systems. According to U.S. Secretary of Defense Mark Esper, China is
already exporting AWSs to the Middle East.
The military attractions of autonomous weapons are manifold. For example, the U.S. Navy's X-47B, an unmanned fighter jet
that can land and take off from aircraft carriers even in windy conditions and refuel in the air, will have 10 times the reach of
piloted fighter jets. The U.S. has also developed an unmanned transoceanic warship called Sea Hunter, to
be accompanied by a flotilla of DASH (Distributed Agile Submarine Hunting) submarines. In January 2019 the Sea Hunter traveled
from San Diego to Hawaii and back, demonstrating its suitability for use in the Pacific. Russia is automating its state-of-
the-art T-14 Armata tank, presumably for deployment at the European border; meanwhile weapons manufacturer
Kalashnikov has demonstrated a fully automated combat module to be mounted on existing
weapons systems (such as artillery guns and tanks) to enable them to sense, choose and attack targets. Not to be outdone,
China is working on AI-powered tanks and warships , as well as a supersonic
autonomous air-to-air combat aircraft called Anjian, or Dark Sword, that can twist and turn so sharply
and quickly that the g-force generated would kill a human pilot.
Given such fierce competition, the focus is inexorably shifting to ever faster machines and
autonomous drone swarms , which can overwhelm enemy defenses with a massive,
multipronged and coordinated attack. Much of the push toward such weapons comes from defense
contractors eyeing the possibility of large profits , but high-ranking military commanders
nervous about falling behind in the artificial-intelligence arms race also play a significant
role. Some nations, in particular the U.S. and Russia, are looking only at the potential military advantages
of autonomous systems—a blinkered view that prevents them from considering the disturbing scenarios that can unfold
when rivals catch up.
As a roboticist, I recognized the necessity of meaningful human control over weapons systems when I first learned of the plans to
build AWSs. We are facing a new era in warfare, much like the dawn of the atomic age. It does not take a historian to realize that
once a new class of weapon is in the arsenals of military powers, its use will incrementally
expand , placing humankind at risk of conflicts that can barely be imagined today . In 2009 I and
three other academics set up the International Committee for Robot Arms Control, which later teamed up with other
nongovernmental organizations (NGOs) to form the Campaign to Stop Killer Robots. Now a coalition of 130 NGOs from 60
countries, the campaign seeks to persuade the United Nations to negotiate a legally binding treaty that would prohibit the
development, testing and production of weapons that select targets and attack them with violent force without meaningful human
control.
TIME TO THINK
The ultimate danger of systems for warfare that take humans out of the decision-making loop is illustrated by the true story of “the
man who saved the world.” In 1983 Lieutenant Colonel Stanislav Petrov was on duty at a Russian nuclear early-
warning center when his computer sounded a loud alarm and the word “ LAUNCH ” appeared
in bold red letters on his screen—indications that a U.S. nuclear missile was fast approaching. Petrov
held his nerve and waited. A second launch warning rang out, then a third and a fourth. With the fifth, the red “LAUNCH”
on his screen changed to “MISSILE STRIKE.” Time was ticking away for the U.S.S.R. to retaliate, but Petrov continued his
deliberation. “Then I made my decision,” Petrov said in a BBC interview in 2013. “I would not trust the
computer .” He reported the nuclear attack as a false alarm —even though he could not be certain. As it
turned out, the onboard computing system on the Soviet satellites had misclassified sunlight
reflecting off clouds as the engines of intercontinental ballistic missiles .
This tale illustrates the vital role of deliberative human decision-making in war: given
those inputs, an autonomous system would have decided to fire . But making the right call
takes time. A hundred years' worth of psychological research tells us that if we do not take at least a
minute to think things over, we will overlook contradictory information , neglect
ambiguity , suppress doubt , ignore the absence of confirmatory evidence , invent
causes and intentions, and conform with expectations . Alarmingly, an oft-cited rationale for
AWSs is that conflicts are unfolding too quickly for humans to be making the decisions.
“It's a lot faster than me,” Bruce Jette, a U.S. Army acquisitions officer, said last October to Defense News, referring to a targeting
system for tanks. “I can't see and think through some of the things it can calculate nearly as fast as it can.” In fact, speed is a key
reason that partially autonomous weapons are already in use for some defensive operations , which
require that the detection of, evaluation of and response to a threat be completed within seconds. These systems—variously
known as SARMO (Sense and React to Military Objects), automated and automatic weapons systems— include
Israel's Iron
Dome for protecting the country from rockets and missiles; the U.S. Phalanx cannon , mounted
on warships to guard against attacks from antiship missiles or helicopters; and the German NBS Mantis gun , used to
shoot down smaller munitions such as mortar shells. They are localized, are defensive, do not target humans and are switched on by
humans in an emergency—which is why they are not considered fully autonomous.
The distinction is admittedly fine, and weapons on the cusp of SARMO and AWS technology are already in use.
Israel's Harpy and Harop aerial drones, for instance, are explosive-laden rockets launched prior
to an air attack to clear the area of antiaircraft installations. They cruise around hunting for radar signals,
determine whether the signals come from friend or foe and, if the latter, dive-bomb on the assumption that the radar is connected to
an antiaircraft installation. In May 2019 secretive Israeli drones—according to one report, Harops—blew up
Russian-made air-defense systems in Syria.
These drones are “loitering munitions” and speed up only when attacking , but several fully
autonomous systems will range in speed from fast subsonic to supersonic to hypersonic . For
example, the U.S.'s Defense Advanced Research Projects Agency (DARPA) has tested the Falcon unmanned hypersonic aircraft at
speeds around 20 times the speed of sound—approximately 21,000 kilometers per hour.
In addition to speed, militaries
are pursuing “ force multiplication ”—increases in the destructive
capacity of a weapons system—by means of autonomous drones that cooperate like wolves in
a pack, communicating with one another to choose and hunt individual targets. A single human
can launch a swarm of hundreds (or even thousands) of armed drones into the air, on the land or water, or under
the sea. Once the AWS has been deployed, the operator becomes at best an observer who could
abort the attack—if communication links have not been broken.
To this end, the U.S. is developing swarms of fixed-wing drones such as Perdix and Gremlin, which can travel long distances with
missiles. DARPA has field-tested the coordination of swarms of aerial quadcopters (known for their high maneuverability) with
ground vehicles, and the Office of Naval Research has demonstrated a fleet of 13 boats that can “overwhelm an adversary.” The
China Electronics Technology Group, in a move that reveals the country's intentions, has (separately) tested a group of 200 fixed-
wing drones, as well as 56 small drone ships for attacking enemy warships. In contrast, Russia seems to be mainly interested in tank
swarms that can be used for coordinated attacks or be laid out to defend national borders.
GAMING THE ENEMY
The Petrov story also shows that although computers may be fast, they are often wrong . Even now, with the
incredible power and speed of modern computing and sensor processing, AI
systems can err in many
unpredictable ways . In 2012 the Department of Defense acknowledged the potential for such computer issues with
autonomous weapons and asserted the need to minimize human errors, failures in human-machine interactions, malfunctions,
degradation of communications and coding glitches in software. Apart from these self-evident safeguards, autonomous
systems would also have to be protected from subversion by adversaries via cyberattacks ,
infiltration of the industrial supply chain, jamming of signals, spoofing (misleading of positioning
systems) and deployment of decoys .
In reality, protecting against disruptions by the enemy will be extremely difficult , and the
consequences of these assaults could be dire. Jamming would block communications so that an
operator would not be able to abort attacks or redirect weapons. It could disrupt
coordination between robotic weapons in a swarm and make them run out of control. Spoofing , which sends
a strong false GPS signal, can cause devices to lose their way or be guided to crash into buildings .
Decoys are real or virtual entities that deceive sensors and targeting systems. Even the most
sophisticated artificial-intelligence systems can easily be gamed . Researchers have found that a
few dots or lines cleverly added to a sign, in such a way as to be unnoticeable to humans, can mislead a self-
driving car so that it swerves into another lane against oncoming traffic or ignores a stop sign.
Imagine the kinds of problems such tricks could create for autonomous weapons . Onboard
computer controllers could, for example, be fooled into mistaking a hot dog stand for a tank.
Most baffling, however, is the last directive on the Defense Department's list: minimizing “other enemy countermeasures or actions,
or unanticipated situations on the battlefield.” It
is impossible to minimize unanticipated situations
on the battlefield because you cannot minimize what you cannot anticipate . A conflict zone
will feature a potentially infinite number of unforeseeable circumstances ; the very essence of conflict is to
surprise the enemy. When
it comes to AWSs, there are many ways to trick sensor processing or
disrupt computer-controlled machinery.
One overwhelming computer problem the Department of Defense's directive misses, rather astonishingly, is the unpredictability of
machine-machine interactions. What happens when enemy autonomous weapons confront one another? The worrisome answer is
that no one knows or can know. Every AWS will have to be controlled by a top-secret computer
algorithm. Its combat strategy will have to be unknown to others to prevent successful enemy countermeasures.
The secrecy makes sense from a security perspective—but it dramatically reduces the predictability of the
weapons' behavior.
A clear example of algorithmic confrontation run amok was provided by two booksellers, bordeebook and profnath, on the Amazon
Web site in April 2011. Usually the out-of-print 1992 book The Making of a Fly sold for around $50 plus $3.99 shipping. But every
time bordeebook increased its price, so did profnath; that, in turn, increased bordeebook's price, and so on. Within a week
bordeebook was selling the book for $23,698,655.93 plus $3.99 shipping before anyone noticed. Two simple and highly predictable
computer algorithms went out of control because their clashing strategies were unknown to competing sellers.
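The loop is simple enough to reconstruct. Contemporary analysis of the incident estimated that profnath priced at roughly 0.9983 times bordeebook's listing while bordeebook priced at roughly 1.2706 times profnath's; treating those as fixed rules, and assuming a $50 starting price with one repricing per cycle (both assumptions made for illustration), reproduces the runaway:

```python
# Reconstruction of the two-bot pricing spiral. The multipliers are rough
# estimates reported after the incident; the start price and cycle count
# are assumptions chosen for illustration.
p_profnath = p_bordeebook = 50.00

for cycle in range(55):                      # roughly once-daily repricing for ~8 weeks
    p_profnath = 0.998300 * p_bordeebook     # undercut the rival by a hair
    p_bordeebook = 1.270589 * p_profnath     # price comfortably above the rival

print(f"bordeebook's asking price: ${p_bordeebook:,.2f}")   # on the order of $24 million
```

Each cycle multiplies bordeebook's price by about 1.268, so the listing grows exponentially until a human intervenes; two individually sensible rules produce a runaway only in interaction.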
Although this mispricing was harmless, imagine what could happen if the complex combat algorithms of
two swarms of autonomous weapons interacted at high speed. Apart from the uncertainties
introduced by gaming with adversarial images, jamming, spoofing, decoys and cyberattacks, one
must contend with the impossibility of predicting the outcome when computer algorithms
battle it out . It should be clear that these weapons represent a very dangerous alteration in the nature of warfare.
Accidental conflicts could break out so fast that commanders have no time to understand or
respond to what their weapons are doing—leaving devastation in their wake.
ON THE RUSSIAN BORDER
Imagine the following scenario, one among many nightmarish confrontations that could accidentally transpire—unless the race
toward AWSs can be stopped. It is 2040, and thousands of autonomous supertanks glisten with frost along Russia's border with
Europe. Packs of autonomous supersonic robot jets fly overhead, scouring for enemy activity. Suddenly a tank fires a missile over the
horizon, and a civilian airliner goes down in flames. It is an accident—a sensor glitch triggered a confrontation mode—but the tanks
do not know that. They rumble forward en masse toward the border. The fighter planes shift into battle formation and send alerts to
fleets of robot ships and shoals of autonomous submarines in the Black, Barents and White Seas.
After less than 10 seconds, NATO's autonomous counterweapons swoop in from the air, and attack formations begin to develop on
the ground and in the sea. Each side's combat algorithms are unknown to its enemy, so no one can predict how the opposing forces
will interact. The fighter jets avoid one another by swooping, diving and twisting with centrifugal forces that would kill any human,
and they communicate among themselves at machine speeds. Each side has many tricks up its sleeve for gaming the other. These
include disrupting each other's signals and spoofing with fake GPS coordinates to upset coordination and control.
Within three minutes hundreds of jets are fighting in the skies over Russian and European cities at near-hypersonic speed. The tanks
have burst across the border and are firing on communications infrastructure, as well as at all moving vehicles at railway stations
and on roads. Large guns on autonomous ships are pounding the land. Autonomous naval battles have broken out on and under the
seas. Military leaders on both sides are trying to make sense of the devastation that is happening around them. But what can they
do? All communications with the weapons have been jammed, and there is a complete breakdown of command-and-control
structures. Only 22 minutes have passed since the accidental shooting-down of the airliner, and swarms of tanks are fast
approaching Helsinki, Tallinn, Riga, Vilnius, Kyiv and Tbilisi.
Russian and Western leaders begin urgent discussions, but no one can work out how this started or why. Fingers are itching on
nuclear buttons as near-futile efforts are underway to evacuate the major cities. There is no precedent for this chaos, and the
militaries are befuddled. Their planning has fallen apart, and the death toll is ramping up by the millisecond. Navigation systems
have been widely spoofed, so some of the weapons are breaking from the swarms and crashing into buildings. Others have been
hacked and are going on killing sprees. False electronic signals are making weapons fire at random. The countryside is littered with
the bodies of animals and humans; cities lie in ruins.
THE IMPORTANCE OF HUMANS
A binding international treaty to prohibit the development, production and use of AWSs and to ensure meaningful human control
over weapons systems becomes more urgent every day. A human expert , with full awareness of the situation
and context and with sufficient time to deliberate on the nature, significance and legitimacy of
the targets, the necessity and appropriateness of an attack and the likely outcomes, should
determine whether or not the attack will commence . For the past six years the Campaign to Stop Killer Robots
has been trying to persuade the member states of the U.N. to agree on a treaty. We work at the U.N. Convention on Certain
Conventional Weapons (CCW), a forum of 125 nations for negotiating bans on weapons that cause undue suffering. Thousands of
scientists and leaders in the fields of computing and machine learning have joined this call, and so have many companies, such as
Google's DeepMind. At last count, 30 nations had demanded an outright ban of fully autonomous weapons, but most others want
regulations to ensure that humans are responsible for making the decision to attack (or not). Progress is being blocked, however, by
a small handful of nations led by the U.S., Russia, Israel and Australia.
At the CCW, Russia and the U.S. have made it clear that they are opposed to the term “human control.” The U.S. is striving to replace
it with “appropriate levels of human judgment”—which could mean no human control at all, if that were deemed appropriate.
Fortunately, some hope still exists. U.N. Secretary-General António Guterres informed the group of governmental experts at the
CCW that “machines with the power and discretion to take lives without human involvement are politically unacceptable, are
morally repugnant and should be prohibited by international law.” Common sense and humanity must prevail —
before it is too late.

The risk of miscalculation is real, and our theorization is accurate---artificial intelligence enhances the security dilemma, and ‘fog of war’ spirals---all of which dramatically increase the risk of inadvertent nuclear escalation
James Johnson 21, lecturer in strategic studies at the University of Aberdeen, 10/15/21,
Inadvertent escalation in the age of intelligence machines: A new model for nuclear risk in the
digital age, European Journal of International Security, 7(3), pp. 337-359
***edited for gendered and ableist language
A new model of inadvertent escalation in the digital age
In his seminal work on inadvertent escalation, Barry Posen identifies the
major causes of inadvertent
escalation as the ‘ security dilemma’ , the Clausewitzian notion of the ‘ fog of war’ , and offensively
orientated military strategy and doctrine.Footnote45 This section applies Posen's framework to examine the
effects of AI technology on the causes of inadvertent escalation. In the light of recent technological change, this section revisits
Posen's analytical framework to examine the psychological underpinnings of security dilemma theorising (misperception, cognitive bias, and heuristics) to consider how and why the novel characteristics of AI technology and the emerging digital information ecosystem may destabilize ‘ crisis stability’ and increase inadvertent escalation risk.Footnote46
The security dilemma concept
According to ‘security dilemma’Footnote47 theorising, the relative ease of carrying out and defending against
military attacks (the ‘offence-defence balance’), and the ambiguity about whether a weapon is offensive
or defensive (the ‘offence-defence distinguishability’), can cause crisis instability and deterrence
failure because these characteristics create fear and uncertainty about an adversary's
intentions. That is, where they harbour benign (that is, non-aggressive/defensive) or malign (that is, aggressive/offensive) intent
towards the other side.Footnote48 In his seminal paper on the topic, Robert Jervis defines the security dilemma as the ‘unintended
and undesired consequences of actions meant to be defensive … many of the means by which a state tries to increase its security decrease the
security of others’ – that is, one state's gain in security can inadvertently undermine the security of
others .Footnote49 Actors tend to perceive the accumulation of offensive capabilities (and attendant offensive strategies) by others as threatening and, assuming the worst, respond with countermeasures – for example, arms racing , crisis mobilization , raising the alert status, and pre-emptive war .Footnote50
According to Jervis, it is ‘the fear of being exploited’ that ‘most strongly drives the security
dilemma’.Footnote51 As we have noted, the fear (both rational and irrational ) of conventional skirmishes
crossing the nuclear Rubicon can be found in nuclear policymakers’ statements as far back as the
1950s.Footnote52 Therefore, security dilemma logic can be used to consider both peacetime spirals
of political mistrust and shifts in the military balance, crisis stability dynamics, and escalation
mechanisms once military conflict begins. The security dilemma concept is an inherently inadvertent phenomenon;
that is, weapons indistinguishability (that is, offence vs defence) allows states to inadvertently (for
rational political or military reasons) threaten others, which can spark spirals of mutual distrust,
strategic competition , and military action. Footnote53
There are several ways in which the security dilemma can act as a catalyst for inadvertent escalation
that is likely to be compounded in the digital age .Footnote54 First, while nuclear-armed states have
a shared interest in avoiding nuclear war , they also place a high value on their nuclear forces
– both vital security assets and as symbols of national prestige and status. Footnote55 As a corollary,
conventional weapons – devised by a nuclear power for defensive purposes or counterforce missions – may
nonetheless be viewed by an adversary as an offensive threat to their nuclear survivability
(discussed below).Footnote56 Second, escalatory rhetoric and other aggressive responses by a threatened state
(especially in situations of military asymmetry) may be misperceived as unprovoked malign intent and not
as a response to the initiator's behaviour, prompting action and reaction spirals of escalation .
Third, and related, the state of heightened tension and the compressed decision-making
pressures during a conventional conflict would radically increase the speed by which action and
reaction spirals of escalation unravel . In the digital age , these dynamics would be further
compounded , which would reduce the options and time for de-escalation and increase the
risks of horizontal (the scope of war) and vertical (the intensity of war) inadvertent escalation .Footnote57
Security dilemma theorising also ties in with the concept of the ‘capability/vulnerability paradox’ in International
Relations.Footnote58 That is, one state's pursuit of a new resource to compete against other states
introduces a vulnerability that is perceived as threatening.Footnote59 This paradox suggests that when a
state's military capabilities are dependent on a particular resource (that is, that may be exploited or
dominated by an adversary), both sides have incentives for pre-emptive first strikes – the resource
vulnerability, and not the new capability per se, generates these destabilizing outcomes .Footnote60 Much like
the security dilemma, therefore, the potential impact of the ‘capability/vulnerability paradox’ has
increased in the context of technological advancements associated with the ‘ information
revolution’ in military affairs (that is, the centralisation of information and dependencies on digital information to
conduct modern warfare), which now includes artificial intelligence .Footnote61 As Pentagon insider Richard
Danzig notes, ‘digital technologies … are a security paradox : even as they grant unprecedented
powers, they also make users less secure’ .Footnote62 In this sense, as a cause of inadvertent
escalation, the security dilemma connects to the broader strategic and security literature on
misperception, emotions, and cognition; various studies evince a strong qualitative connection
between human psychology and war .Footnote63 How might the confusion and uncertainty of war increase
inadvertent risk?
Clausewitzian ‘fog of war’
Inadvertent escalation risk can also be caused by the confusion and uncertainty (or ‘ fog
of war’ ) associated with gathering, analysing, and disseminating relevant information about
a crisis or conflict – which has important implications for the management, control, and termination of
war.Footnote64 Traditional nuclear deterrence, crisis stability, and strategic stability theorising work off the questionable premise that actors are rational and presume that they are therefore ‘able to understand their environment and coordinate policy instruments in ways that cut through the “fog of war”’.Footnote65 In reality, through misperception , miscalculation , accidents , and failures of communication, ‘events can readily escape control’ , and although
these escalation dynamics are often unintended , potentially catastrophic events can
result.Footnote66 With the advent of the speed , uncertainty , complexity , and cognitive
strains (see below) associated with conducting military operations on the digitised battlefield ,
the prospects that decision-making by ‘ dead reckoning’ – dependent on the experience and
sound judgement of defence-planners when information is scarce and autonomous bias
proclivities loom large – can prevent command and control failures caused by the ‘ fog’ seem
fanciful .Footnote67
The confusion and uncertainty associated with the ‘fog of war’ can increase inadvertent risk in three ways:
(1) it can complicate the task of managing and controlling military campaigns at a tactical level
(or situational battlefield awareness); (2) it can further compound the problem of offence-defence
distinguishability ; and (3) it can increase the fear of a surprise or pre-emptive attack (or a ‘bolt
from the blue’ ).Footnote68 Taken together, these mechanisms can result in unintentional (and
possibly irrevocable ) outcomes and thus obfuscate the meaning and the intended goals of
an adversary's military actions.Footnote69 In short, the dense ‘ fog of war’ increases the risk of
inadvertent escalation because ‘misperceptions, misunderstandings, poor communications, and
unauthorized or unrestrained offensive operations could reduce the ability of civilian
authorities to influence the course of war’.Footnote70
While Cold War decision-makers also faced extreme domestic and foreign pressures to decipher the adversary's intentions correctly,
these pressures
are being amplified in the current digitized information ecosystem –
reducing the timeframe for decision-makers during a crisis, increasing the cognitive and
heuristic burdens caused by information overload and complexity .Footnote71 Cognitive
pressures caused by the sheer volume of information flow (both classified and open sources) are being
compounded by the degraded reliability and politicization (or ‘weaponisation’) of information.
These pressures can, in turn, make decision-makers more susceptible to cognitive bias ,
misperceptions , and heuristics (or mental shortcuts) to approach complex problem-solving – either
directly or indirectly informing decision-making.Footnote72
While disinformation and psychological operations in deception and subversion are not a new phenomenon,Footnote73 the
introduction of new AI-enhanced tools in the emerging digital information ecosystem enables
a broader range of actors (state and non-state) with asymmetric techniques to
manipulate, confuse, and deceive.Footnote74  Disinformation operations might erode the credibility
of, and undermine public confidence in, a state’s retaliatory capabilities by targeting specific systems (that
is, command and control) or personnel (primarily via social media) who perform critical functions in maintaining these
capabilities.Footnote75 For example, cyberweapons – notably ‘ left of launch’ operations – have been
allegedly used by the United States to undermine Iranian and North Korean confidence in their nuclear
forces and technological preparedness.Footnote76
The potential utility of social media to amplify the effects of a disinformation campaign was
demonstrated during the Ukrainian Crisis in 2016 when military personnel's cell phones were the victims of Russian
information operations.Footnote77 However, the use of these novel techniques during a nuclear crisis and
how they might impact the ‘ fog of war’ is less understood and empirically untested.Footnote78 During a
nuclear crisis, a state might attempt, for instance, to influence and shape the domestic debate of an
adversary (for example, shift preferences, exacerbate domestic-political polarisation, or coerce/co-opt groups and individuals on
social media) in order to improve its bargaining hand by delegitimizing (or legitimising) the use of
nuclear weapons during an escalating situation, or bring pressure to bear on an adversary's
leadership to sue for peace or de-escalate a situation – a tactic which may, of course, dangerously
backfire.Footnote79
Moreover, to achieve its nefarious goals, a third-party (state or non-state) actor could employ active
information techniques (for example, spreading false information of a nuclear detonation, troop movement, or
missiles leaving their garrison) during a crisis between nuclear rivals (for example, India-Pakistan ,
US-China , Russia-NATO ) to incite crisis instability .Footnote80 Public pressures on decision-
makers from these crisis dynamics, and evolving at a pace that may outpace events on the
ground, might impel (especially thin-skinned and inexperienced) leaders, operating under the shadow of
the deluge of 24-hour social media feedback and public scrutiny, to take actions that they
might not otherwise have.Footnote81
In sum, the emerging AI-enhanced digitised information environment , though not fundamentally altering the premise upon which actors (de-)escalate a situation – imperfect information, uncertainty, and risk associated with the ‘fog of war’ – nonetheless introduces novel tools of misinformation and disinformation , and an abundance of information that radically alters the cognitive pressures placed on decision-makers during crisis and conflict. Moreover, decision-makers’ subjection to an abundance of disparate and often unverified information will indubitably influence actors’ (collective or individual) policy preferences and perceptions, complicating the security dilemma challenge of understanding an adversary's capabilities, intentions, doctrine, and strategic thinking, with potentially profound repercussions for escalation dynamics. How
might these pressures affect states’ regional conflicts with different military doctrine, objectives, and attitudes to risk?
Offensive military doctrine and strategy
Because of a lack of understanding between (nuclear-armed and non-nuclear-armed) adversaries about
where the new tactical possibilities offered by these capabilities figure on the Cold War-era
escalation ladder , the pursuit of new ‘strategic non-nuclear weapons’ (for example,
cyber weapons , drones , missile defense , precision munitions , counterspace
weapons ) increases the risk of misperception .Footnote82 The fusion of AI technology into
conventional weapon systems (that is, to enhance autonomous weapons , remote sensing for
reconnaissance, improving missile guidance and situational awareness) is creating new possibilities for a range of
destabilizing counterforce options targeting states’ nuclear-weapon delivery and support
systems; for example, cyber NC3 ‘kill switch’ attacks or tracking adversaries’ nuclear-armed
submarines and mobile missile launchers .Footnote83
Russia , the U nited S tates, and China are currently pursuing a range of dual-capable
(conventional and nuclear-capable) delivery systems (for example, hypersonic glide vehicles , stealth bombers , and a range of precision munitions) and advanced conventional weapons ( drones , space-based weapons , and cyber weapons Footnote84) that are capable of achieving strategic effects –
that is, without the need to use nuclear weapons.Footnote85 These lines are blurred further by the use of dual-
use command and control systems (that is, early warning, situational awareness, and surveillance) to manage
conventional and nuclear missions.Footnote86 Chinese analysts, for example, while concerned about the
vulnerabilities of their command and control systems to cyberattacks, are optimistic about the deployment of AI
augmentation (for example, ISR, intelligent munitions, and unmanned aerial vehicles) to track and target an adversary's forces and to lower the (economic and political) costs of signalling and deploying military forces.Footnote87
Advances in AI technology , in conjunction with the technologies it can enable (for example,
remote sensing , hypersonic technology , and robotics and autonomy ), increase the
speed , precision , lethality , and survivability of strategic non-nuclear weapons, thus exacerbating
the destabilizing effects of these capabilities used by nuclear-armed rivals, thereby opening new
pathways for both horizontal and vertical inadvertent escalation .Footnote88 Moreover, these
technological advances have been accompanied by destabilizing doctrinal shifts by certain
regional nuclear powers ( Russia , North Korea , and possibly China ), which indicate a willingness to countenance the limited use of nuclear weapons to deter an attack (or ‘escalate to de-escalate’), in
situations where they face a superior conventional adversary and the risk of large-scale
conventional aggression.Footnote89
Furthermore, volatility in nuclear-armed states’ civil-military relations can create internal pressures
to pursue a more aggressive nuclear force posture or doctrine.Footnote90 The assumption that new (and latent)
regional nuclear-states will act in ways that reflect their interests as rational actors to avoid nuclear war, thus enhance deterrence (or
‘rational deterrence theory’),Footnote91 and crisis stability, understates the role of influential military organisations (that is,
offensive-doctrinal bias, parochialism, and behavioural rigidities) in shaping nuclear doctrine that can lead to deterrence failure and
nuclear escalation – despite national security interests to the contrary.Footnote92 For instance, in a nuclear-armed state where the
military officers influence the nuclear strategy, the adoption of offensive doctrines may emerge, which reflect volatile civil-military
relations rather than strategic realities.Footnote93
In short, technological advancements to support states’ nuclear deterrence capabilities will
develop in existing military organizations imbued with their norms, cultures, structures, and
invariably, mutually exclusive strategic interests. China's ‘ military-civil fusion’ concept (a
dual-use integration strategy) designed to co-opt or coerce Chinese commercial entities to support the
technological development of the People's Liberation Army (PLA) to meet the needs of ‘intelligentized warfare in the
future’ – is an important test case in a civilian-led initiative designed to drive military innovation in the pursuit of broader
geostrategic objectives.Footnote94 The impact of China's ‘military-civil fusion’ on civil-military relations remains unclear,
however.Footnote95
US counterforce capabilities – to disarm an enemy without resort to nuclear weapons – used in conjunction with air and
missile defences to mop up any residual capabilities after an initial attack, will generate crisis instability and ‘ use
generate crisis instability and ‘ use
it or lose it’ pressures.Footnote96  Chinese analysts , for example, have expressed concern that US
advances in AI could overwhelm Chinese air defenses , thus reducing the time available to
commanders to respond to an imminent attack – for example, from the US autonomous AI-
enabled Long Range Anti-Ship Missile (AGM-158C) designed to target ‘high-priority
targets’.Footnote97 Furthermore, the increased optimism in states’ ability to use AI-enhancements to
find, track, and destroy others’ nuclear forces enabled by AI technology (notably when military-
capability imbalances exist) will be an inherently destabilizing phenomenon.Footnote98 What one side
views as conventional operations might be viewed by the other side as a precursor to a disabling
counterforce attack (for example, targeting dual-use command and control centres and air defences), thus increasing
inadvertent escalation risk .Footnote99
Because of the asymmetry of interest at stake in a regional crisis involving the United States, the stakes will likely favour the
defending nuclear-armed power.Footnote100 According to ‘prospect theory’, a regional power would perceive the
relative significance of a potential loss more highly than a gain.Footnote101 That is, when prospect theory is
applied to deterrence dynamics, leaders are inclined to take more risks (that is, risk-acceptant) to protect
their positions, status, and reputations, than they are to enhance their position .Footnote102 Thus,
having suffered a loss, leaders are generally predisposed to engage in excessive risk-taking
behaviour to recover lost territory – or other position or reputational damage.Footnote103 If, for instance, Chinese
or North Korean leaders are faced with the prospect of an imminent attack on Taiwan, or
Pyongyang view their regime survival at stake , they would likely countenance greater risks
to avoid this potential ( existential ) loss .Footnote104 Furthermore, the crisis instability generated by these capabilities
could also result from irrational behaviour derived from misperception, cognitive biases, or
other emotional impulses, which makes nuclear escalation more likely.Footnote105
For example, Chinese analysts tend to overestimate the US's military AI capabilities relative to
open-source reports – often citing outdated or inaccurate projections of US AI ‘warfighting’ budgets ,
development, and force posture.Footnote106 The framing of Chinese discussion on US military AI
projects is analogous to Soviet concerns about the missile gap with the US during the Cold
War;Footnote107 this risks compounding Beijing's fear that AI technology could be
strategically destabilising.Footnote108 In a world with imperfect (and asymmetric) information about
the balance of power and resolve, and incentives to misrepresent and manipulate perceptions
(exploit psychological dispositions and vulnerabilities) and emotions (strategic framing and fear appeals) of the
information ecosystem, bargaining failure, and war are more likely .Footnote109
Given the confluence of secrecy , complexity , erroneous , or ambiguous intelligence data
(especially from open-source intelligence and social media outlets),Footnote110  AI-augmentation will likely
exacerbate compressed decision-making and the inherent asymmetric nature of
cyberspace information.Footnote111 For example, using AI-enhanced cyber capabilities to degrade
or destroy a nuclear-states command and control systems – whether as part of a deliberate coercive
counterforce attack or in error as part of a limited conventional strike – may generate pre-emptive ‘ use it or lose
it’ situations .Footnote112 In a US-China conflict scenario, for instance, a US penchant for
counterforce operations targeting adversaries’ command and control, the comingled nature of
China's (nuclear and conventional) missile forces, US and Chinese preference for the pre-emptive
use of cyberweapons , and domestic- political pressures on both sides to retaliate for costly
losses (either physical/kinetic or non-physical/political), increases the dangers of inadvertent
escalation .Footnote113
These risks should give defence planners pause for thought using advanced conventional capabilities to project military power in
conflicts with regional nuclear powers. In short, conventional doctrines and operational concepts could exacerbate old (for example,
third-party interference, civil-military overconfidence, regime type, accidental or unauthorised use, or an overzealous commander
with pre-delegation authority) and create new pathways (for example, AI-enhanced ISR and precision missile
targeting and guidance, drone swarms, AI-cyberattacks, and mis/disinformation subversion ) to
uncontrollable inadvertent escalation . Missile defences and advanced conventional weapons are unlikely to
prevent these escalatory mechanisms once battlefield perception shifts and the nuclear threshold is crossed.Footnote114
The extent to which a state may succeed in limiting the damage from a nuclear exchange using technologically enhanced (including AI ) counterforce operations continues to be an issue of considerable
debate.Footnote115 However, from the perspective of inadvertent escalation, the efficacy of damage-limitation
counterforce tactics is less significant than whether an adversary views them as such . Chinese
and Russian fears that the United States is developing and deploying conventional counterforce capabilities (especially cyberattacks on NC3 systems ) to blunt their nuclear deterrents risk generating ‘crisis instability’ caused by ‘ use it or lose it’ pressures – that is,
pressures to use nuclear weapons before losing the capability to do so.Footnote116
Nuclear powers maintain different attitudes and perceptions on the escalation risk posed by cyberattacks and information warfare
more generally, however. Chinese
analysts, in particular, have expressed an acute awareness of the
potential vulnerabilities of their respective NC3 systems to cyberattacks.Footnote117 The
United States has begun to take this threat more seriously , whereas Russian strategists, despite bolstering their
cyber defences, appear more sanguine and view information warfare as a continuation of peacetime politics by other
means.Footnote118 Consequently, the risk of inadvertent escalation caused by misperception and
miscalculation will likely increase.Footnote119 In what ways might the new tools and techniques emerging in digitised
information exacerbate these dynamics?
The digitised information ecosystem, human psychology, and inadvertent risk
Misperceptions, cognitive bias, and the human psychological features of security dilemma
theorising can also be used to elucidate the escalatory dynamics that can follow from inflammatory,
emotionally charged, and other offensive public rhetoric (see, for example, fake news, disinformation, rumours,
and propaganda) used by adversaries during crisis – or saber-rattling behaviour .Footnote120 During, in
anticipation of, or to incite a crisis or conflict, a state or non-state actor (for example, clandestine digital ‘sleeper cells’)
could employ subconventional (or ‘ grey zone’ ) information warfare campaigns to amplify its impact by sowing division, eroding public confidence, and delaying an effective official response. Footnote121
The public confusion and disorder that followed a mistaken cell phone alert warning residents in Hawaii of an imminent ballistic
missile threat in 2018 serve as a worrying sign of the vulnerabilities of US civil defences against state or non-state actors seeking
asymmetric advantages vis-à-vis a superior adversary – that is, compensating for its limited nuclear
capabilities.Footnote122  North Korea , for example, might conceivably replicate incidents like the Hawaii
false alarm in 2018 in a disinformation campaign (that is, issuing false evacuation orders, issuing false nuclear alerts,
and subverting real ones via social media) to cause mass confusion .Footnote123
During a crisis in the South China Seas or South Asia, for example, when tensions are running high, state or
non-state disinformation campaigns could have an outsized impact on influencing crisis
stability (dependent on the interpretation and processing of reliable intelligence) with potentially severe escalatory
consequences. This impact would be compounded when populist decision-makers heavily
rely on social media for information-gathering and open-source intelligence and are thus more
susceptible to social media manipulation.Footnote124 In extremis, a populist leader may come to view
social media as an accurate barometer of public sentiment , eschewing official (classified and non-classified)
evidence-based intelligence sources, and regardless of the origins of this virtual voice – that is, from genuine users or fake accounts
as part of a malevolent disinformation campaign. Consequently, the agenda-setting framing narrative of decision-makers during a
crisis would instead be informed by a fragmented and politicised social media information ecosystem; amplifying rumours,
conspiracy theories, and radical polarisation, which in turn, reduces the possibility of achieving a public consensus to inform and
legitimatise decisions during a crisis. Such dynamics may also expose decision-makers to increased
‘rhetorical entrapment’ pressure whereby alternative policy options (viable or otherwise) may be
overlooked or dismissed out of hand.Footnote125
Furthermore, increased public scrutiny levels – especially coupled with disinformation and public panic – could
further increase political pressures on leaders whose electoral success determines their political
survival.Footnote126 Under crisis conditions, these dynamics may compromise diplomatic de-
escalation efforts and complicate other issues that can influence crisis stability, including maintaining a credible deterrence
and public confidence in a state's retaliatory capability and effective signalling resolve to adversaries and assurance to
allies.Footnote127 State or non-state disinformation campaigns might also be deployed in conjunction
with other AI-augmented non-kinetic/political (for example, cyberattacks , deep fake technology ,
or disinformation campaigns via social media amplified by automated bots) or kinetic/military (see, for example,
drone swarms , missile defense , anti-satellite weapons , or hypersonic weapons )
actions to distract decision-makers – thus, reducing their response time during a crisis and
conferring a tactical or operational advantage to an adversary .Footnote128
For example, in the aftermath of a terrorist attack in India 's Jammu and Kashmir in 2019, a disinformation
campaign (see, for example, fake news and false and doctored images) that spread via social media amid a heated national
election,Footnote129 inflamed emotions and domestic-political escalatory rhetoric that, in turn, promoted calls for retaliation against Pakistan and brought two nuclear-armed adversaries close to conflict .Footnote130 This
crisis provides a sobering glimpse of how information and influence campaigns between two
nuclear-armed adversaries can affect crisis stability and the concomitant risks of inadvertent
escalation. In short, the catalysing effect of costly signalling and testing the limits of an adversary's
resolve (which did not previously exist) to enhance security instead increases inadvertent escalation
risks and leaves both sides less secure.
The effect of escalatory imbued rhetoric in the information ecosystem can be a double-edged sword for inadvertent escalation risk.
On the one hand, public rhetorical escalation can mobilise domestic support and signal deterrence and resolve to an adversary –
making war less likely. On the other hand, sowing public fear, distrust (for example, confidence in the legitimacy and
reliability of NC3 systems), and threatening a leader's reputation and image (for example, the credibility of
strategic decision-makers and robustness of nuclear launch protocols) domestically can prove costly, and in turn, may
inadvertently make enemies of unresolved actors. For example, following the Hague's Permanent Court of
Arbitration ruling against China over the territorial disputes in the South China Seas in 2016, the Chinese government
had to resort to social media censorship to stem the flood of nationalism, calling for war with US ally the
Philippines.Footnote131 Furthermore, domestic public disorder and confusion – caused, for example, by a
disinformation campaign or cyberattacks – can in itself act as an escalatory force , putting
decision-makers under pressure to respond forcefully to foreign or domestic threats, to protect a
states’ legitimacy, self-image, and credibility.Footnote132
These rhetorical escalation dynamics can simultaneously reduce the possibility for face-saving
de-escalation efforts by either side – analogous to Thomas Schelling's ‘tying-hands mechanism’.Footnote133 During
heightened tensions between the United States and North Korea in 2017, for instance, the Trump
administration's heated war of words with Kim Jong Un, whether a madman's bluff or in earnest (or ‘rattle the
pots and pans’) raised the costs of Kim Jong Un backing down (that is, with regime survival at stake), thus
increasing inadvertent escalation risk , and simultaneously, complicating de-
escalation .Footnote134 Because of the fear that its nuclear (and conventional) forces are vulnerable to a
decapacitating first strike, rhetorical escalation between a conventionally inferior and superior state is
especially dangerous.Footnote135 Research would be beneficial on how the contemporary information ecosystem might
affect decision-making in different political systems.
Ultimately, states’ willingness to engage in nuclear brinkmanship will depend upon information (and mis/disinformation), cognitive
bias, and the perception of, and the value attached to, what is at stake. Thus, if one side considers the potential
consequences of not going to war as intolerable (that is, regime survival, the ‘ tying-hands’ , or
‘ use it or lose it ’ pressures), then off-ramps , firebreaks , or other de-escalation
measures will be unable to prevent crisis instability from intensifying.Footnote136 Finally, the extent to which public pressure emanating from the contemporary information environment affects
whether nuclear war remains ‘special’ or ‘ taboo’ will be critical for reducing the risk of inadvertent escalation by
achieving crisis stability during a conventional war between nuclear-armed states. Future research would be beneficial (a) on how
the digitised information ecosystem affects decision-making in different political regimes; and (b) the potential effect of asymmetry
and learning in the distribution of countries with advanced AI-capabilities and dynamics associated with its adoption. Will nuclear-
armed states with advanced AI-enabled capabilities treat less advanced nuclear peers that lack these capabilities differently? And
how might divergences in states’ synthesis and adoption of military AI contribute to misperception, miscalculation, and accidents?
Nuclear war has devastating, disparate societal impacts---BUT
current public discourse is desensitized to nuclear threats---analyzing
plausible scenarios for conflict is necessary to reinvigorate public
concern that compels governmental action towards risk reduction
Andrew Futter et al. 20, Professor of International Politics at the University of Leicester;
Samuel I. Watson, Associate Professor at the University of Warwick; Peter J. Chilton, Research
Fellow at the University of Birmingham; Richard J. Lilford, Professor of Public Health at the
University of Birmingham, “Nuclear war, public health, the COVID-19 epidemic: Lessons for
prevention, preparation, mitigation, and education,” Bulletin of the Atomic Scientists, Vol. 76,
No. 5, pg. 271-276, 2020, T&F.
It may seem tactless, even perverse, to write about other sorts of disasters that might befall our planet in the middle of a pandemic.
But write we must. For the current crisis is a harbinger of crises to come, whether humanmade or natural. While many of the
lessons to be learned from the COVID-19 outbreak are specific to communicable disease , they
may also provide insight into a broader set of challenges that the world may face if nuclear
weapons were ever to be used again.
Dealing with a pandemic is trivial compared to dealing with the aftermath of a nuclear incident or attack .
Thermal injury , followed by radiation illness , not to mention the disruption to society and the
impact on the environment , would dwarf the effect of COVID-19 . The basic infrastructure of
government, the criminal justice system, finance, telecommunications, and food supply could be severely disrupted, whereas they have remained largely intact during the current pandemic. But public concern over nuclear weapons has faded from a high point a generation ago. In part, this may be because of psychological biases
that do not properly weight the impact of an event by its probability of occurring .
Consequently, the public must once again be educated about and sensitized to nuclear risk .
The task of prevention and preparation cannot be left to governments alone. As with climate change,
the whole of society must be engaged in pushing to transform how humans think about
and manage our nuclear world. Only then will governments have the incentive to reduce
systemic risk and plan for the unthinkable.
It is paradoxical that the prevention of nuclear war , so prominent in the public mind during the 1980s, has
almost faded from view despite the continued proliferation of nuclear weapons and the means to deliver them; despite the
unraveling of the nuclear arms control edifice that has undergirded international order since the 1960s;
despite rising political tensions across the world; despite well-documented near misses resulting
from accidents and miscalculation ; and despite the risk that nuclear materials could fall into terrorist hands.
During the Cold War, governments and civil society groups planned extensively for the impact of nuclear weapons, and the general
public was encouraged to read or watch a series of “duck and cover” or “protect and survive” pamphlets and TV programs explaining
what to do in the event of a nuclear war. Today that seems strange, even slightly comical. It should not be.
A sober analysis of the risks and consequences of nuclear catastrophe reveals that they are
unacceptably high . But by learning lessons from the COVID-19 pandemic and applying them
to the nuclear realm , engaged citizens can help to reduce those risks .
The consequences of nuclear attacks
The consequences of nuclear use depend on the size, number, and types of weapons, the altitude at which the explosion occurs, and
population density. Alex Wellerstein’s NUKEMAP is an online tool that allows users to calibrate the gruesome effects of nuclear strikes of different magnitudes over any part of the world (Wellerstein 2020). As the tool makes clear, nuclear weapons destroy human life in three zones radiating out from the epicenter: the fireball ; the shock wave ; and the
area of residual radiation , whose direction depends on prevailing winds. As an example, the 455- kiloton W88 warhead
currently deployed on missiles inside US nuclear-powered submarines, if detonated above London, would kill an estimated 675,000
people and injure over a million more, not taking into account radiation damage and subsequent fallout. The Tsar Bomba, a 50-
megaton bomb released into the atmosphere by the Soviet Union in 1961 and the most powerful bomb ever to be tested, could have
killed up to 7.6 million people and injured a further 4 million if detonated over New York City. During the Cold War ,
experts estimated that the use of just 1 percent of the world’s nuclear stockpile could kill
about 56 million people and injure another 61 million (Daugherty, Levi, and Von Hippel 1986).
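Casualty estimates like these rest on how weapon effects scale with yield. As a minimal, hedged illustration of that relationship (not NUKEMAP's actual model), the sketch below uses the common rule of thumb that blast-damage radii scale with the cube root of yield; the 15-kiloton reference yield and ~1.7 km severe-blast radius are assumed round numbers for a Hiroshima-scale airburst, not authoritative figures.

```python
# Minimal sketch of cube-root yield scaling for blast-damage radii.
# Assumptions (illustrative only, not NUKEMAP's model): a ~15 kt airburst
# produces severe blast damage out to roughly 1.7 km.

REFERENCE_YIELD_KT = 15.0   # assumed Hiroshima-scale reference yield
REFERENCE_RADIUS_KM = 1.7   # assumed severe-blast radius at that yield

def severe_blast_radius_km(yield_kt: float) -> float:
    """Scale the assumed reference radius by the cube root of the yield ratio."""
    return REFERENCE_RADIUS_KM * (yield_kt / REFERENCE_YIELD_KT) ** (1 / 3)

# The W88 (455 kt) and Tsar Bomba (~50,000 kt) examples from the text above.
for label, y in [("Hiroshima-scale", 15), ("W88", 455), ("Tsar Bomba", 50_000)]:
    r = severe_blast_radius_km(y)
    print(f"{label:>15} ({y:>6,} kt): ~{r:4.1f} km radius, ~{3.1416 * r * r:6.0f} km^2")
```

The cube-root relationship is why a thousandfold increase in yield enlarges the blast radius only about tenfold (and the affected area about a hundredfold), which is consistent with the jump from the W88's hundreds of thousands of casualties to the Tsar Bomba's millions.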
The medical effects of nuclear war are summarized in a report of that title, published by the British Medical
Association’s Board of Science and Education in 1983 (British Medical Association 1983). Its conclusions derive from the
generic effects of blast , thermal , and radiation injury , as well as from observations
made following the bombings of Hiroshima and Nagasaki in 1945 and from over 2,000
nuclear tests (Simon and Bouville 2015). The fireball destroys everything at close hand, while at a greater
distance thermal radiation causes flash burns and fires . A blast wave follows. Traveling at 90 meters per
second, it wreaks havoc, crushing people in buildings, injuring them with flying debris, or choking them with dust. Survivors of
thermal and blast injury, and those at greater distance from ground zero, are exposed to nuclear radiation and
fallout . In the short term, they are at risk of radiation sickness , the main features of which are bone marrow
suppression, gastro-intestinal symptoms, and skin damage. The severity of the disease depends on the radiation dose. Longer-
term effects of radiation include reduced fertility , congenital abnormality (especially microcephaly),
and cancer (especially of the thyroid).
However, just as the impact of a pandemic does not end with health effects , the impact of a
nuclear strike would also go beyond the immediate death toll . Supply chains , including
those for food and medicine , would be severely disrupted . Law and order would probably break down
on a massive scale. There are also risks that are theoretical and controversial, but which would be cataclysmic
if they occurred. Prominent among these is the risk of a so-called nuclear winter resulting from
particles released into the high atmosphere (Sagan 1983; Scouras 2019).
Another theoretical risk is that of electromagnetic pulse disruption of electronic systems. Such
an effect caused satellites in low orbit to fail following the high-altitude Starfish Prime nuclear test, carried out by
the United States in 1962 (Plait 2012). Many writers have tried to imagine life in the aftermath of a
nuclear strike , and the descriptions make the reader wonder if those killed immediately
are not the fortunate ones (Whitcomb 2019; Witze 2020).
How might a nuclear incident arise ?
Although the major nuclear powers have reduced stockpiles from their peaks in the 1980s, there are still over 13,000 nuclear
weapons in the world today (Ploughshares Fund 2020). The bombs released in Japan in August 1945 relied entirely on fission, while
in modern warheads fission is merely the detonator for an immensely more powerful
fusion reaction . Several hundred of these weapons are held at high states of readiness for
an attack. What might trigger their deployment ? There are four main risks.
First is a planned attack. The 1945 attack on Japan is the only example to date. During the Cold War, potential belligerents were
ostensibly restrained under the condition of mutual assured destruction, which itself relies on retaliation, rationality, and
uncertainty about how the other side would act. Such gamesmanship may have been successful while there were only two actors, the
United States and the Soviet Union, but it has become more complex and arguably more fragile in a world where nine states can
deploy nuclear weapons, and where new flashpoints have emerged in East Asia, South Asia, and possibly the Middle East.
Second is miscalculation . There have been numerous nuclear near misses in our past: most
famously, the near launch from a Russian nuclear submarine during the Cuban Missile
crisis in 1962, and as a result of the NATO military exercise, code-named Able Archer , which led to a nuclear
war scare in 1983. But also, more recently during the India–Pakistan Kargil war of 1999, just a year
after both had conducted nuclear tests.
Third is an accident . It is at least conceivable that nuclear weapons could be used by
accident, possibly through a computer malfunction or human error . Perhaps the best example
of this would be the so-called “ Petrov incident ” in 1983, when scattered rays of sunlight tricked a
Soviet alert system into thinking a US nuclear attack was incoming (Lewis et al. 2015).
Fourth is by non-state actors, such as terrorist groups. The chance of a nuclear detonation by a terrorist group may be limited; but
perhaps more worrying is the possibility that by simulating an attack from one country they could provoke retaliation from another,
or from some other interference that leads to nuclear use.
Most commentators think that miscalculation or accident is the most likely progenitor of a
nuclear strike, by a considerable margin ; if that is true, then nonuse of nuclear weapons for 75
years has been the result mostly of luck rather than judgment (Pelopidas 2017).
Quantifying the risk of nuclear events
The magnitude of the risk of a nuclear event is hard to estimate. The risk of a single incident, leading to the death of, say, one million
people, might be as high as 50 percent over the next 50 years, according to one model (Barrett, Baum, and Hostetler 2013). Another
widely cited figure is a 2 percent chance per year (Hellman 2008). A survey of experts found a wide range of estimates of the
probability of nuclear war over a 10- year period; only one of the 79 respondents put the risk at zero percent, and 60 put it at over 10
percent (Lugar 2005).
The expected loss from a future event is the product of its probability and its impact, both of which could themselves be assigned
probability distributions to represent the associated uncertainties. The impact could be calibrated in disability adjusted life years or
even just life years lost. As a simple illustration, a 5 percent probability of an event with 50 million casualties results in an expected
loss of 2.5 million (0.05 x 50 m) lives.
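Expressed in code, the illustration above is a one-line computation; the figures are the text's own.

```python
# The text's illustration: expected loss = probability x impact.
probability = 0.05          # 5 percent chance of the event
casualties = 50_000_000     # 50 million casualties if it occurs

expected_loss = probability * casualties
print(f"{expected_loss:,.0f} expected lives lost")   # 2,500,000
```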
However, the skewed distribution of impact means the probability of losses that are orders of magnitude larger than this cannot be
ignored. Figure 1 provides an example of the expected life years lost from a nuclear conflict by providing probability distributions
based on estimates from the literature. In this example, the expected number of lives lost is 29 million, even though the median
probability of a nuclear conflict is “only” 10 percent and the median number of lives lost is 1 million. By way of comparison, the
World Health Organization estimated that climate change would be responsible for around 241,000 additional deaths each year to
2030 (or about 2.5 million over ten years) (World Health Organization 2014). Neither of these calculations take into account loss of
life due to indirect economic effects. Nor do they include suffering caused by chronic illness and disability. In the case of nuclear
exposure, this also includes terrible effects on unborn children. However, even without taking these considerations into account, it is
clear that both nuclear war and climate change are huge threats to public health and wellbeing. But there is little reason to conclude
that climate change is a greater hazard. The effects of nuclear war are immediate, whereas climate change provides plenty of
warning, allowing infrastructure to be preserved, even if at high cost.
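The mean-versus-median gap described above falls out of any heavily right-skewed loss distribution. The Monte Carlo sketch below only illustrates that effect; the lognormal parameters are invented for the example and are not the distributions behind Figure 1.

```python
# Hypothetical Monte Carlo sketch of a right-skewed loss distribution.
# Parameters are invented for illustration; they are not the distributions
# used to produce Figure 1 in the text.
import random

random.seed(0)
N = 500_000
P_CONFLICT = 0.10   # assumed probability of nuclear conflict over the period

# Lives lost *if* a conflict occurs: lognormal with a median near 1 million
# and a heavy right tail, so rare outcomes are orders of magnitude worse.
impacts = sorted(random.lognormvariate(13.8, 2.6) for _ in range(N))

median_impact = impacts[N // 2]
mean_impact = sum(impacts) / N
print(f"median lives lost given conflict: {median_impact:>13,.0f}")
print(f"mean   lives lost given conflict: {mean_impact:>13,.0f}")
print(f"expected loss (p x mean impact):  {P_CONFLICT * mean_impact:>13,.0f}")
```

With these assumed parameters the mean loss given conflict lands roughly thirty times the median, which is the qualitative point: the tail, not the typical case, dominates the expected loss.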
Public perceptions and social concern
A generation ago, nuclear risk was at the forefront of the public debate. Citizens across the globe were genuinely worried that a
nuclear war might break out between East and West, and this spurred huge public protests and a strong anti-nuclear movement.
However, today,
the appreciation of nuclear risk appears much lower, with far less public concern
beyond elite-level discussion and civil society activism . Notwithstanding the work of the International
Physicians for the Prevention of Nuclear War (an international federation of medical groups), the International Campaign to Abolish Nuclear Weapons, the recent Humanitarian Initiative on Nuclear Weapons, and the 2017 Nuclear Ban Treaty, nuclear risks appear to have fallen below other global societal risks , such as climate change , and, following the outbreak of COVID-19, global pandemics . Why has the risk of nuclear war almost dropped out of
popular concern when there is little or no objective reason for citizens to lower their
guard? There are four main reasons.
First is a failure to consider both the probability and magnitude of nuclear events. As the above
calculations show, probability should not be considered in isolation from the magnitude of an event if it
occurs. The expected loss should be kept in mind when assessing threats .
Second is the general public’s bandwidth for giving attention to important issues . There
appears to be a limit to the number of issues that can rise to prominence at any one time; issues must
compete for public and journalistic attention (Hilgartner and Bosk 1988). But other issues, important
as they may be, should not crowd out the nuclear risk.
Third is the availability heuristic . People are more engaged by things they have experienced
than things they must imagine. Expect public support for investment to prevent and prepare for
pandemics in the near future. However, the hidden danger is often the greater danger , in part
because it is hidden and less tangible .
Fourth is a sense of futility . Challenges such as climate change and pandemic prevention
are perceived to be more “doable” in the sense that people feel they can influence the course of
events. Such a sense of powerlessness may induce a nihilistic attitude . However , citizens are
not powerless to reduce nuclear risk.
Learning nuclear lessons from COVID-19 and preparing for the unthinkable
The current COVID-19 crisis, in addition to serving as a timely reminder of the very personal nature of global catastrophic risk , can also shine light on the ongoing nuclear challenge that global society faces.
The first objective when dealing with global catastrophic risks, such as that posed by nuclear weapons, is the importance
of prevention . It is easy to think that nuclear prevention differs from pandemic prevention in the sense that pandemics arise
from the natural world while nuclear events are entirely human made. However, pandemics involve human actions
at all levels , from the way the environment is managed (Brulliard 2020), through containment in facilities that experiment
with modification of the viral genome, and through the nations and international agencies that respond to emerging threats. Both
viral and nuclear risks can be mitigated by international co-operation . The risk of pandemics can be
reduced through international agreement covering early reporting of communicable disease outbreaks. Delayed reporting resulted in
delayed action in the case of COVID-19.
Worryingly, similar bilateral and multilateral agreements , supported by trust building , are eroding in the nuclear arena. Ensuring that the current global arms control architecture –
including the Nuclear NonProliferation Treaty agreed in 1968 and the New START agreement between the United States and Russia
that is due to expire next year – survives into a new era is essential. Likewise, continued international efforts to
reduce the risks posed by nuclear terrorism through securing nuclear facilities and accounting for all fissile materials are also vital.
Genuine political commitment to nuclear disarmament would of course be the ultimate
prevention mechanism , but whether nuclear disarmament is possible in our lifetimes is a
moot point. Indeed, global engagement with nuclear disarmament appears to be on the wane even after the high point of
agreement of the 2017 Nuclear Ban Treaty. Nevertheless, if the world cannot disarm , at least it could create a
regime where all, or the great majority, of armaments are taken off high alert and various
confidence building and risk reduction mechanisms are put in place, given the well-
documented risks of accident or miscalculation . All these measures require strengthening
international bodies that can carry out inspections and help overcome suspicion through increasing
transparency . For example, governments will be more confident to reduce the high alert status of nuclear weapons if they
can be assured that other governments are doing likewise.
If prevention is not possible, then attention must turn toward preparation. It has been argued that the world was not properly
prepared for the current pandemic, from a lack of personal protective equipment to economic planning for lockdown, meaning that
decisions had to be made on the fly. However, if governments were not prepared for the pandemic, then they are likely not prepared
for other global disasters either, the most significant of which would arguably be a nuclear disaster.
Duncan Campbell’s 1982 book War Plan UK gives an unnerving insight into the limitations of planning for life after a nuclear attack
even in an age where such an event was taken seriously (Campbell 1982). And it is not clear that much societal contingency planning
beyond the continuity of government exists in most states today (see Graff 2017). COVID-19 has highlighted the enormous pressures
on the health service, police officers, and other essential workers, and has shown that these workers can become ill or even die.
Moreover, even if just one city was attacked by a nuclear weapon, it would be necessary for other parts of the country to come to its
aid, and the government would have to step in to put emergency measures in place for the distribution of food and water, shelter,
and healthcare.
Policy makers cannot just wring their hands and say how catastrophic it would be and hope for the best. The fact that it would be
difficult to manage such a scenario is the very reason why the plans should be made. Such plans would have to involve the whole of
society, just as they did in the 1960s. Citizens need to persuade their governments to spend money and energy on difficult questions.
How to maintain food supplies? How to get money to people who need it? Who is an essential worker? Which industries or parts of
society should be prioritized? What is the correct balance between state and private industry in the response? How much should the
population be allowed to know? How far should human rights be suspended? What should the parts of the country that are
functioning do to help those that are not?
The current COVID-19 crisis also provides insight into the challenges that citizens would face in the event of a nuclear attack
(whether small or large in scale, or indeed just threatened). A nuclear
crisis is likely to create far greater levels of
panic , hoarding , and shortages of medical supplies than has COVID-19. There would be a rush to
stockpile iodine , for example, to counter the effects of radiation on the thyroid, but also of the equipment
necessary to treat burns or gain access to clean water . A nuclear attack would also almost certainly
mean the curtailment of civil liberties , as well as lockdowns and restrictions on travel
(both domestically and abroad). Rather than to prevent the spread of illness, this would be done to allow the authorities to try to
manage the crisis and prevent lawlessness. It
may even include martial law and possibly a restriction of
citizens’ ability to access reliable information . To some extent, this is easier today with 24-hour television
news reporting and myriad online resources to keep everyone up to date (assuming TV and radio transmission is still possible), but
the flip side of this is that knowing what is real or believable is difficult (Lazer et al. 2018). This also highlights the
importance of clear and unequivocal messaging on the part of trustworthy governments (another
significant challenge highlighted by the response to COVID-19).
Perhaps the most important pieces of the nuclear risk puzzle are education and
engagement . Notwithstanding the excellent work by organizations such as the Nuclear Threat Initiative, the
public is probably less familiar with the basics of nuclear weapons and nuclear risks than
at any point since the 1940s, so it is essential that more be done to educate the public about them, perhaps
in a similar way to what has happened with climate change . With respect to engagement, a nuclear
disaster, and certainly a nuclear war, would be a catastrophe that extended beyond borders, and while an immediate reaction might
be to close borders and look inward, it is clear that any response would have to be global.
A nuclear wake-up call
In 1966 the BBC docudrama The War Game depicting a hypothetical nuclear attack on the United Kingdom was deemed so
upsetting that it was initially banned from being broadcast. Two decades later, the films The Day After and Threads portrayed the
harrowing impact of nuclear attacks on towns in the US Midwest and on Sheffield, England, respectively. Upsetting as these
films may have been, they nevertheless played an important role in educating the public about
nuclear risks. A generation later, in the midst of the challenges and politics of the modern world, people seem to have
forgotten the dangers posed by nuclear weapons or are at best blissfully ignorant . It is
essential, however unpleasant it may seem, that citizens think about the unthinkable and make a
concerted effort to hopefully prevent, but in a worst-case scenario mitigate and manage, the threats posed by
nuclear weapons . The world has survived for 75 years without the use of nuclear weapons in war, but this
does not automatically mean that the same will be true in the future. That governments have
avoided catastrophe thus far is, at least in part, due to luck . There is no reason to assume that
this luck will hold out indefinitely .
There is a limit to how far governments are prepared to move without the support of their
citizens. As was the case in the abolition of the slave trade two hundred years ago or with climate change today, the causal
chain is often from citizen to government , rather than the other way around (Jennings 2013).
Citizens should hold politicians to account . It is crucially important that scientists and other experts are
humble about how much is known – or how much can be known. However, the gradual awakening to the dangers of climate change,
and more recently virulent disease, shows that the
public can absorb abstract ideas and incorporate
them in their worldview beyond just reciting empty slogans . But a societal movement
requires engagement from a broad swath of groups including the press , teachers , the
judiciary , and humanitarian and religious groups to ensure that the issue of nuclear
risk is placed at the center of the public agenda in a sober but serious way.
Plan
The United States should vest legal duties in lethal autonomous
weapon systems requiring systems to be under human control.
Solvency
The plan mandates that AI systems allow human control, clarifying
accountability rules and prohibiting AI from making decisions over
the use of lethal force.
Tucker Davey 16, author at the Future of Life Institute, 11/21/16, Who is Responsible for
Autonomous Weapons?, https://futureoflife.org/ai/peter-asaro-autonomous-weapons/
Consider the following wartime scenario: Hoping to spare the lives of soldiers, a country deploys an autonomous weapon to
wipe out an enemy force. This robot has demonstrated military capabilities that far exceed even the best soldiers, but when it hits the
ground, it
gets confused. It can’t distinguish the civilians from the enemy soldiers and begins
taking innocent lives . The military generals desperately try to stop the robot, but by the time they succeed it has already
killed dozens.
Who is responsible for this atrocity? Is it the commanders who deployed the robot, the designers and manufacturers of
the robot, or the robot itself?
Liability: Autonomous Systems 
As artificial intelligence improves, governments
may turn to autonomous weapons — like military robots —
in order to
gain the upper hand in armed conflict. These weapons can navigate environments on
their own and make their own decisions about who to kill and who to spare. While the example above may never
occur, unintended harm is inevitable. Considering these scenarios helps formulate important questions that governments and
researchers must jointly consider, namely:
How do we hold human beings accountable for the actions of autonomous systems? And how is justice served when the killer is
essentially a computer?
As it turns out, there is no straightforward answer to this dilemma. When a human soldier commits an atrocity and kills innocent
civilians, that soldier is held accountable. But when autonomous weapons do the killing, it’s difficult to
blame them for their mistakes.
An autonomous weapon’s “decision” to murder innocent civilians is like a computer’s
“decision ” to freeze the screen and delete your unsaved project. Frustrating as a frozen computer may be,
people rarely think the computer intended to complicate their lives.
Intention must be demonstrated to prosecute someone for a war crime, and while autonomous
weapons may demonstrate outward signs of decision-making and intention, they still run on a code that’s just as
impersonal as the code that glitches and freezes a computer screen . Like computers, these systems are not
legal or moral agents, and it’s not clear how to hold them accountable — or if they can be held accountable — for their mistakes.
So who assumes the blame when autonomous weapons take innocent lives? Should they even be allowed to kill at all?
Liability: from Self-Driving Cars to Autonomous Weapons 
Peter Asaro, a philosopher of science, technology, and media at The New School in New York City, has been working on addressing
these fundamental questions of responsibility and liability with all autonomous systems, not just weapons. By exploring
fundamental concepts of autonomy, agency, and liability, he intends to develop legal approaches for regulating the use of
autonomous systems and the harm they cause.
At a recent conference on the Ethics of Artificial Intelligence, Asaro discussed the liability issues surrounding the application of AI to
weapons systems. He explained, “ AI poses threats to international law itself — to the norms and standards that
we rely on to hold people accountable [and to] hold states accountable for military interventions — as [they become] able to blame systems for
malfunctioning instead of taking responsibility for their decisions.”
The legal system will need to reconsider who is held liable to ensure that justice is served when an
accident happens. Asaro argues that the moral and legal issues surrounding autonomous weapons are much different than the issues
surrounding other autonomous machines, such as self-driving cars.
Though researchers still expect the occasional fatal accident to occur with self-driving cars, these autonomous vehicles are designed
with safety in mind. One of the goals of self-driving cars is to save lives. “The fundamental difference is that with any kind of
weapon, you’re intending to do harm, so that carries a special legal and moral burden ,” Asaro
explains. “There is a moral responsibility to ensure that [the weapon is] only used in legitimate and appropriate circumstances.”
Furthermore, liability with autonomous weapons is much more ambiguous than it is with self-
driving cars and other domestic robots.
With self-driving cars, for example, bigger companies like Volvo intend to embrace strict liability – where the manufacturers assume
full responsibility for accidental harm. Although it is not clear how all manufacturers will be held accountable for autonomous
systems, strict liability and threats of class-action lawsuits incentivize manufacturers to make their product as safe as possible.
Warfare, on the other hand, is a much messier situation.
“You don’t really have liability in war,” says Asaro. “The US military could sue a supplier for a bad
product, but as a victim who was wrongly targeted by a system, you have no real legal
recourse .”
Autonomous weapons only complicate this . “These systems become more
unpredictable as they become more sophisticated, so psychologically commanders feel less responsible for
what those systems do. They don’t internalize responsibility in the same way,” Asaro explained at the Ethics of AI conference.
To ensure that commanders internalize responsibility , Asaro suggests that “the system has to allow
humans to actually exercise their moral agency .”
That is, commanders must demonstrate that they can fully control the system before they
use it in warfare. Once they demonstrate control, it can become clearer who can be held
accountable for the system’s actions.
Preparing for the Unknown 
Behind these concerns about liability, lies the overarching concern that autonomous machines might act in ways that humans never
intended. Asaro asks: “When these systems become more autonomous, can the owners really know what they’re going to do?”
Even the programmers and manufacturers may not know what their machines will do. The purpose of developing autonomous
machines is so they can make decisions themselves – without human input. And as the programming inside an autonomous system
becomes more complex, people will increasingly struggle to predict the machine’s action.
Companies and governments must be prepared to handle the legal complexities of a domestic or military robot or system causing
unintended harm. Ensuring justice for those who are harmed may not be possible without a clear framework for liability.
Asaro explains, “We need to develop policies to ensure that useful technologies continue to be developed, while
ensuring that we
manage the harms in a just way. A good start would be to prohibit automating
decisions over the use of violent and lethal force , and to focus on managing the safety risks in beneficial
autonomous systems.”

Human control solves all of our internal links


Daniele Amoroso 21, Associate Professor of International Law at the University of Cagliari;
and Guglielmo Tamburrini, professor of philosophy of science at Universita di Napoli Federico
II, 08/19/21, Toward a Normative Model of Meaningful Human Control over Weapons Systems,
Ethics and International Affairs 35(2), pp. 245-272
***MHC = meaningful human control
Conclusion
We have argued in this article for a principled, differentiated, and prudential MHC framework. Selected ethical
and legal reasons advanced in AWS debates buttress a principled stand on indefeasible fail-safe actor, accountability attractor, and
moral agency enactor roles for human controllers of weapons systems. Given the wide variety of AWS and the graded and normative
concerns they give rise to, these core requirements can be met by adopting suitably differentiated approaches, which neither rule out
all autonomy in weapons systems nor relegate human control to a nominal or perfunctory role. And differentiated control levels
must be modulated along the “what,” “where,” and “how” dimensions of weapons systems tasks and capabilities. Finally, our
framework is prudential, in that stricter forms of human control are applied by default on precautionary grounds, unless another
form is otherwise agreed on by the international community of states.
A final remark is in order, which concerns the relationship between the outlined core contents of MHC and the stability and peace
concerns raised by AWS. While ultimately motivated here by deontological reasons, the core contents of MHC would
also prove effective from a consequentialist perspective . Crucially, by enforcing the MHC
requirement in the ways unfolded here, one connects the tempo of military attacks to human cognitive
capacities and reaction times (with the notable exception of certain uses of defensive AWS), thereby mitigating
the widespread concern that autonomy in weapon systems might lead to an acceleration in the
pace of war that is incompatible with human cognitive and sensory motor
limitations.Footnote79 By preserving the MHC communication channel between humans and
weapons systems, as even required under the L3 and L4 exceptions, one indeed sets up a bulwark against
the risk of runaway interactions between AWS that can lead to unintentional
outbreaks of conflict.
Otherwise, it is impossible to hold anyone accountable for actions
taken by LAWs, leading to illegal and risky behavior
Dr Tanya Krupiy 18, Lecturer in Digital Law at Newcastle University, 09/09/2018,
Regulating a game changer: using a distributed approach to develop an accountability
framework for lethal autonomous weapon systems, Georgetown Journal of International Law,
50(1), pp. 45-112
B. Challenges to Assigning Accountability
There are challenges to imputing a war crime caused by a LAWS under existing criminal
law and international criminal law frameworks. There exist three bases for ascribing responsibility to an individual in criminal
law. These are: (1) the existence of a causal link between the act of an individual and the outcome, (2) a legal duty to act or to
abstain from an act, and (3) a moral duty to act or to abstain from an act. The weaker the element of causality between the
conduct of an individual and the wrongful outcome, the harder it will be to justify imposing criminal responsibility on an individual
on any of the three bases. Traditionally, criminal law has been concerned with blameworthy conduct where an individual made a
conscious decision to carry out or to contribute to a wrongful act.
Broadly, international criminal law requires that "for the accused to be criminally culpable his [or her]
conduct must ... have contributed to, or have had an effect on, the commission of the crime." In the
absence of a sufficiently close link between the act and the outcome, it is difficult to argue that the individual had
the necessary mental element to carry out the act in question. In the context of LAWSs , it
is challenging to trace the link between the conduct of a particular programmer , corporate
employee , or government official and the international crime of a LAWS . There may be a
lack of sufficient proximity between the politicians' decision to bring into effect a law
regulating LAWSs and the developer's decision relating to a particular LAWS design. The
hypothetical legislation discussed here sets a range of parameters that specifies reliability and design criteria for a LAWS. Since the
reliability of the LAWS is linked to its architecture, the flawed design of a particular LAWS creates an opportunity for a LAWS to
execute a war crime. Similarly, when the policies of the defense department relating to how LAWSs are to be embedded into the armed
forces are silent on how LAWSs will be designed, there is no causal link between the promulgated policy and the LAWS's architecture.
From this perspective, senior officials bear no accountability for war crimes committed by LAWSs,
because these individuals are not involved in developing and manufacturing the weapon
systems.
The existing legal frameworks , such as the doctrine of command responsibility, render it difficult to impute
accountability to officials, such as a senior defense official, for a war crime that a LAWS carries out.
Since states designed the doctrine of command responsibility with natural persons and military
organizations in mind, this doctrine is difficult to apply to the relationship between a human
superior and a LAWS. Article 28 of the Rome Statute of 1998 imposes criminal responsibility on civilian and military
superiors who have "effective command and control" or "effective authority and control" over subordinates who commit an
international crime. The International Criminal Court interpreted the test of "effective control" in a manner identical to the
customary international law definition. Under customary international law, such superiors should have had the "material
ability" to prevent and punish the commission of these offenses. For instance, the superior should have had the power to "initiate
measures leading to proceedings against the alleged perpetrators." Moreover, the superior should have failed to take "necessary
and reasonable" measures within his or her power to prevent the subordinate from committing an international crime or to punish
the subordinate. When states formulated the doctrine of command responsibility, they assumed that individuals are in a
hierarchical relationship to each other. This can be gleaned from the nature of the indicia states require to establish effective
control. Indicators that a superior possesses "effective control" over a subordinate include the fact that a superior has the ability to
issue binding orders and that the subordinate has an expectation that he or she has to obey such orders.
Hin-Yan Liu posits that the doctrine of command responsibility applies to a relationship between two human beings
and therefore does not regulate an interface between a LAWS and an individual . This assertion is well-
founded, because how LAWSs function does not fit with how subordinates interact with superiors . In
principle, a LAWS could be programmed to check that the order lies within a set of orders that it can implement. However, a
LAWS does not make decisions in the sense in which human beings think of decision-making.
Since a LAWS lacks moral agency, a LAWS cannot reflect on whether it is under an obligation
to obey an order. Neither does the threat of punishment play a role in its decision-making
process. A LAWS is guided by its software when it performs an action . The operator activates processes
in the software by inputting an instruction into a LAWS. There is a temporal and geographical gap
between a LAWS's acts and the creation of the architecture guiding how a LAWS performs.
The operator lacks a material ability to prevent a LAWS from operating in an unreliable
manner and triggering a war crime due to not being involved in its design. Furthermore, it is
impossible to employ the doctrine of command responsibility to attribute a war crime to
an individual in a corporation . An indicator of "effective control" is that the alleged superior was, "by virtue of his or
her position, senior in some sort of formal or informal hierarchy to the perpetrator." Where the corporation is a separate
organization from the defense department and where the defense staff does not have the responsibility to oversee the day-to-day
operations of contractors via a chain of command, defense officials lack "effective control " over corporate
employees. In such cases, the defense officials or a similar body can neither issue binding
orders nor expect obedience from individuals who are not subordinate to them through a
chain of command.
The fact that customary international law requires that the conduct associated with the war crime take place during an armed
conflict is not a setback for imputing accountability to individuals involved in developing and regulating LAWSs. As the
International Criminal Tribunal for the Former Yugoslavia held in the Prosecutor v. Kunarac case:
The armed conflict need not have been causal to the commission of the crime, but the existence of an armed conflict must, at a
minimum, have played a substantial part in the perpetrator's ability to commit it, his decision to commit it, the manner in which it
was committed or the purpose for which it was committed. Hence, if it can be established, as in the present case, that the
perpetrator acted in furtherance of or under the guise of the armed conflict, it would be sufficient to conclude that his acts were
closely related to the armed conflict.
Because the legislature and the individual defense ministries typically promulgate regulations relating to the armed forces,
and because the armed forces employ LAWSs in an armed conflict, there is a nexus between the conduct of the officials and the
performance of a LAWS on the battlefield. In principle, it should not matter that there is a time gap between the conduct of a
government official and the use of a LAWS during an armed conflict. This scenario is akin to a situation where an individual starts
planning to commit a war crime during peacetime but carries out the crime during an armed conflict.
On the development level, the fact that many organizations are likely to produce individual
components for a LAWS poses a challenge for assigning accountability to a particular
individual . Even when a single corporation designs and manufactures a LAWS, there
will be numerous individuals collaborating on designing a LAWS and on determining
product specifications . For instance, a team of programmers could work together on proposing
alternative blueprints for the architecture of a LAWS. The board of directors or senior managers could decide
what product specifications to give to programmers to develop. The collaborative nature of the decision-making
related to the architecture of a LAWS poses difficulty for attributing a flawed design of a
LAWS to a particular individual.
Furthermore, because the armed forces deploy the LAWS at the time it brings about a war crime, it "may not be possible
to establish the relevant intent and knowledge of a particular perpetrator " who is not part of
the armed forces. Customary international law requires that the superior knew or " had reason
to know " that the subordinate was about to commit or had committed a war crime. The
complexity of the software and hardware makes it challenging to impute even
negligence to a particular individual involved in designing a LAWS . One possible counterargument,
though, is that a programmer who knows he or she is unable to foresee the exact manner in which a LAWS will perform its mission
because of the nature of the artificial intelligence software is reckless when he or she certifies that the system is suitable for carrying
out missions involving autonomous application of lethal force. Since the mental element of the doctrine of command
responsibility encompasses recklessness, on the application of this approach the programmer would fulfill the mental element
requirement of the doctrine of command responsibility. The same argument applies to corporate directors and defense officials.
Scholars propose various strategies for addressing the accountability gap. Alex Leveringhouse posits that an individual who takes
excessive risks associated with ceding control to a LAWS or who fails to take into account the risks associated with employing a
LAWS should bear responsibility. Such responsibility stems from the fact that the individual could foresee that a LAWS could
link two variables in an inappropriate manner and carry out an unlawful act. On this basis, the operator would be accountable.
Leveringhouse's approach mirrors the U.K. Joint Doctrine Note 2/11, which places responsibility on the last person who issues
commands associated with employing a LAWS for a military activity.
Although Leveringhouse's approach merits consideration, it gives insufficient weight to the fact that
operators play no role in devising regulatory frameworks regarding the operational
restrictions placed on the employment of LAWSs and regarding the steps they should take in order to mitigate the risks
associated with employing LAWSs. As a result, Leveringhouse unduly restricts the range of individuals
who are held accountable. The better approach is found in the U.K. Joint Doctrine 2/11, which places accountability on
relevant national military or civilian authorities who authorize the employment of LAWSs. However, the U.K. Joint Doctrine 2/11
does not go far enough, because it does not extend accountability to senior politicians. Heather Roff explains that policy elites
and heads of state are the ones who truly make decisions to employ LAWSs. Because they possess full knowledge about such
systems and decide the limitations for their use, they are morally and legally responsible for the outcomes brought about by LAWSs.
Roff concludes that current legal norms make it impossible to hold political elites responsible.
Thilo Marauhn maintains that Article 25(3)(c) of the Rome Statute of 1998 could be employed to attribute responsibility to
developers and manufacturers of LAWSs. This provision criminalizes aiding, abetting, and assisting the commission of an
international crime for "the purpose of facilitating the commission of ... a crime." However, Marauhn's proposal is unworkable.
First, it requires that the aider and abettor is an accessory to a crime another individual perpetrated. Developers and
manufacturers cannot aid a LAWS to perpetrate a war crime, because a LAWS is not a
natural person . Further, developers and manufacturers are unlikely to fulfill the actus
reus requirement of carrying out acts "specifically directed to assist, encourage or lend moral
support" to the perpetration of a certain specific crime with such support having a "substantial effect upon the
perpetration of the crime." Corporations operate to earn a profit. Corporate directors and managers know that bodies, such as
the Department of Defense, will not buy a weapon system where it is clear that the system is designed to bring about war crimes.
Given that states formulated the doctrine of command responsibility with human relationships and human perpetrators in mind,
the better course of action is to formulate a legal framework to address the context of
LAWSs . This position is in line with the proposals of many scholars, such as Gwendelynn Bills, who argue that states should
adopt a new treaty to regulate LAWSs.
III. ENVISAGING LAWSS AS RELATIONAL ENTITIES
There are indications that states may endow artificial intelligence with legal personhood . Saudi Arabia
granted citizenship to the artificial intelligence system Sophia in 2017. A number of experts recommended to the EU that it
should consider recognizing autonomous robots as having a legal status of “ electronic
persons .” While it may be desirable to grant legal status to robotic systems, care should be
taken not to conflate the nature of the legal status of human beings with that of artificial
intelligence systems. A group of computer scientists explains that “[r]obots are simply not people.” It is undesirable to apply
existing categories, such as moral agency, to robotic systems, because they do not capture the nature of such systems. Rather, we
should develop a separate category for understanding the nature of LAWSs . Luciano Floridi and J.W.
Sanders propose that the current definition of moral agency is anthropocentric and that a new
definition of a moral agent should be developed to include artificial intelligence
systems. Traditionally, many legal philosophers associate moral agency with: (1) an ability to intend an action, (2) a capacity to
autonomously choose the intended action, and (3) a capability to perform an action. In order to be able to intend an action and to
autonomously elect an action, an individual needs to possess a capacity to reflect on what beliefs to hold.

Only the plan causes international follow-on---unilateral bans fail


Daniele Amoroso 21, Associate Professor of International Law at the University of Cagliari;
and Guglielmo Tamburrini, professor of philosophy of science at Universita di Napoli Federico
II, 08/19/21, Toward a Normative Model of Meaningful Human Control over Weapons Systems,
Ethics and International Affairs 35(2), pp. 245-272
The notion of meaningful human control ( MHC ) has gathered overwhelming consensus
and interest in the autonomous weapons systems (AWS) debate. By shifting the focus of this
debate to MHC , one sidesteps recalcitrant definitional issues about the autonomy of
weapons systems and profitably moves the normative discussion forward. Some delegations
participating in discussions at the Group of Governmental Experts on Lethal Autonomous Weapons Systems meetings
endorsed the notion of MHC with the proviso that one size of human control does not fit all weapons systems and
uses thereof. Building on this broad suggestion, we propose a “differentiated”—but also “principled” and “prudential”—framework
for MHC over weapons systems. The need for a differentiated approach—namely, an approach acknowledging that the extent of
normatively required human control depends on the kind of weapons systems used and contexts of their use—is supported by
highlighting major drawbacks of proposed uniform solutions. Within the wide space of differentiated MHC profiles, distinctive
ethical and legal reasons are offered for principled solutions that invariably assign to humans the following control roles: (1) “fail-
safe actor,” contributing to preventing the weapon's action from resulting in indiscriminate attacks in breach of international
humanitarian law; (2) “accountability attractor,” securing legal conditions for international criminal law (ICL) responsibility
ascriptions; and (3) “moral agency enactor,” ensuring that decisions affecting the life, physical integrity, and property of people
involved in armed conflicts be exclusively taken by moral agents, thereby alleviating the human dignity concerns associated with the
autonomous performance of targeting decisions. And the prudential character of our framework is expressed by means of a rule,
imposing by default the more stringent levels of human control on weapons targeting. The default rule is motivated by epistemic
uncertainties about the behaviors of AWS. Designated exceptions to this rule are admitted only in the framework of an international
agreement among states, which expresses the shared conviction that lower levels of human control suffice to preserve the fail-safe
actor, accountability attractor, and moral agency enactor requirements on those explicitly listed exceptions. Finally, we maintain
that this
framework affords an appropriate normative basis for both national arms review
policies and binding international regulations on human control of weapons systems .
In academic and diplomatic debates about so-called autonomous weapons systems (AWS), a watchword
has rapidly gained ground across the opinion spectrum : All weapons systems, including
autonomous ones, should remain under human control . While references to the human control element
were already present in early documents on AWS,Footnote1 the U.K.-based NGO Article 36 must be credited for putting it in the
center of the discussion by circulating, since 2013, a series of reports and policy papers making the case for
establishing meaningful human control (MHC) over individual attacks as a legal requirement under international law.Footnote2
Unlike the calls for a preemptive ban on AWS , the notion of MHC has been met with
substantial interest by a number of states. This response is explainable by a variety of
converging factors. To begin with, human control is an easily understandable  concept that “is
accessible to a broad range of governments and publics regardless of their degree of technical
knowledge”; it therefore provides the international community with a “ common language for
discussion ” on AWS.Footnote3 A second feature contributing to the success of the notion of MHC is
its “constructive ambiguity,”Footnote4 which may prove helpful in bridging the gap between
various positions expressed at the international level on the AWS issue . Third, and finally, the notion
allows one to shift the focus of the AWS debate from a recalcitrant definitional
problem , requiring one to draw precise boundaries between automation and autonomy in
weapons systems, to a normative problem , requiring one to specify what kinds and levels of
human control ought to be exerted over weapons systems in general .Footnote5 Unlike the former
definitional problem, the latter normative problem appears to be more tractable and more likely
to be successfully addressed through negotiations . In this perspective, it was correctly underlined that one
should not look at MHC necessarily as a “solution” to the ethical and legal concerns raised by autonomy in weapons
systems.Footnote6 Rather,MHC indicates the right “approach ” to cope with them.Footnote7
And indeed, growing attention to the issue of human control has emerged from diplomatic
talks that have been taking place in Geneva within the Group of Governmental Experts on Lethal
Autonomous Weapons Systems (GGE), established by the State Parties to the Convention on Certain Conventional
Weapons (CCW). In addition to those State Parties explicitly endorsing the call for an MHC
requirement ,Footnote8 most delegations taking part in the CCW proceedings underscored the
importance of this issue in both their official speeches and their working papers. The
International Committee of the Red Cross (ICRC) expressed concerns for the “loss of human
control and judgement in the use of force and weapons” in a ground-breaking position paper from May 12,
2021. These concerns motivated the ICRC's call for new legally binding rules on AWS.Footnote9
2AC---R4
Case
2AC---Overview
K
2AC---Framework
Independently, debating AI governance is vital to the future of
humanity---we prioritize the finding of good solutions, even if
imperfect.
Gracia ’19 [Eugenio; September; Senior Adviser on peace and security at the Office of the
President of the United Nations General Assembly; SSRN Papers; “The militarization of artificial
intelligence: a wake-up call for the Global South,” https://papers.ssrn.com/sol3/papers.cfm?
abstract_id=3452323]
From a long-term perspective, human civilization might plausibly be living in a ‘technological
transformation trajectory’, in which radical breakthroughs in science and technology head us in a
different direction yet to be clearly grasped.68 AI governance on closer inspection is not just about
imposing restrictions , but encouraging prevention and foresight as well. And if we hope to be
in a position to start taking consequential measures on AI policy at the international level anytime soon, it is
advisable to acknowledge that, before risks can be mitigated , ‘they must first be understood ’.69
Applying AI as a general-purpose technology in the military is not the same as weaponizing it or handing over control to machines.
Where we draw the line of what is ‘ inevitable ’ remains a choice humans must make, but one
does not need to wait for ideal , Goldilocks conditions to frame the problem in
search for a desirable outcome in a reasonable timespan . This crucial task must not be left
solely to leading tech countries . Not necessarily because new voices may bring (or not) insights that
others had failed to consider. Actually, when stakes are too high and are likely to entail worldwide
externalities, all should get involved and partake in discussions concerning our shared
future , championing inclusiveness and more diverse representation in a plurality of settings, including both geographical and
gender balance. The more people join the conversation the better , possibly with UN support to bridge the gap.
Global IR can also help to bring more diversity of views to the table and expand the opportunities for scholarly dialogue. The
relationship between AI and arms control remains understudied in the IR discipline, insofar as real-world AI applications are still
being shaped by the rapidly evolving technology landscape. But if international security will be transformed in so many ways,
Global South leaders and scholars cannot afford to stand idle while others make decisions and
allow the militarization of AI to go deeper, unimpeded, and ever more deadly. Developing countries
should not be relegated to the role of spectators , technology-takers, or (even worse) victims. It is
incumbent upon all states to step forward and find rational pathways to promote safe , beneficial, and
friendly AI, rather than dangerous, inimical, and bellicose AI .
AT: Paperson
Legal solutions can rupture the intelligibility of colonialism, create
meaningful progress, and reinterpretation of liberal rights.
Bhandar 13 – lecturer at Kent Law School and Queen Mary School of Law – her areas of
research and teaching include property law, equity and trusts, indigenous land rights, post-
colonial and feminist legal theory, multiculturalism and pluralism, critical legal theory, and
critical race theory Brenna, “Strategies of Legal Rupture: the politics of judgment”
[http://www.forensic-architecture.org/wp-content/uploads/2013/02/BHANDAR-Brenna.-
Strategies-of-Legal-Rupture.pdf] //
In this article, my aim is to consider the use of law as a political strategy of rupture in colonial and post -

colonial nation states. The question of whether and how to use law in order to transform and potentially shatter an existing political - legal order is one that continues
t o plague legal advocates in a variety of places, from Australia, to India, to Canada to Israel/Palestine. For example, the struggle for the recognition of indigenous rights in the
T he question of indigenous sovereignty is ultimately
context of colonial settler regimes has often produced pyrrhic victories. 21

quashed, and aboriginal rights are paradoxically recognised as an interest that derives from the prior occupation of the
land by aboriginal communities but is at the same time parasitic on underlying Crow n sovereignty ; an interest that can be justifiably limited in
the interests of settlement. 22 Thus, the primary and inescapable question remains: how does one utilise the law without
re - inscribing the very colonial legal order that one is attempting to break down? 23 I argue that this is an
inescapable dilemma; as critical race theorists and indigenous scholars have shown, to not avail ourselves

of the law in an effort to ameliorate social ills , and to promote and protect the rights of oppressed minorities is to
essentially abrogate one’s political responsibilities . Moreover, the reality of political struggle (particularly of the anti - colonial
variety) is that it is of a diffuse and varied nature, engaging multiple different tactics in order to achieve its ends.¶ The notion of the ruptural defence emerges from the work of
Jacques Vergès, a French advocate and subject of a film by Barbet Schroder entitled Terror’s Advocate . The film is as much a portrait of Vergès ’ life as it is a series of vignettes
of armed anti - colonial and anti - imperial struggle during the decades between the late 1940s and the 1980s. I should say at the beginning that I do not perceive Vergès as a
heroic figure or defender of the oppressed; we can see from his later decisions to defend Klaus Barbie, for instance, that his desire to reveal the violence wrought by European
tracing the development of what Vergès called the ruptural defence, the film takes us
imperial powers was pursued at any cost. But in

to the heart of the inescapable paradoxes and contradictions involved in using law as a means of
political resistance in colonial and post - colonial contexts. I want to explore the strategy of rupture as developed by Vergès but also in a broader
se nse, to consider whether there is in this defence strategy that arose in colonial, criminal law contexts, something that is generalisable, something that can be drawn out to
form a notion of legal rupture more generally.¶ To begin then, an exploration of Vergès’ ‘rupture defence’, or rendered more eloquently, a strategy of rupture. At the beginning of
the film, Vergès comments on his strategy for the trial of Djamila Bouhired, a member of the FLN, who was tried in a military court for planting a bomb in a cafe in Algiers in
1956. Vergès states the following in relation to the trial:¶ The problem wasn’t to play for sympathy as left - wing lawyers advised us to do, from the murderous fools who judged
us, but to taunt them, to provoke incidents that would reac h people in Paris, London, Brussels and Cairo...¶ The refusal to play for sympathy from those empowered to uphold
the law in a colonial legal order hints at the much more profound refusal that lies at the basis of the strategy of rupture, which we see unf old throughout the film. In refusing to
accept the characterisation of Djamila’s acts as criminal acts, Vergès challenges the very legal categories that were used to criminalise, condemn and punish anti - colonial
resistance. The refusal to make the defendan ts’ actions cognisable to and intelligible within the colonial legal framework breaks the capacity of the judges to adjudicate in at
least two senses. First, their moral authority is radically undermined by an outright rejection of the legal terms of refer ence and categories which they are appointed to uphold.
The legal strategy of rupture is a politics of refusal that calls into question the justiciability of the
purported crime by challenging the moral and political jurisdiction of the colonial legal order
itself.¶ Second, the refusal of the legal categorisation of the FLN acts of resistance as criminal brought into light the contradictions inherent in the official French position
and the reality of the Algerian context. This was not, as the official line would have it, simply a case of French criminal law being applied to French nationals. The repeated
assertion that the defendants were independent Algerian actors fighting against colonial brutality, coupled with repeated revelations of the use of torture on political prisoners
made it impossible for the contradictions to be “rationally contained” within the normal operations of criminal law. The revelation and denunciation of torture in the courtroom
not to prevent statements or admissions from being admis sable as evidence (as such violations would normally be used) but to challenge the legitimacy of the imposition of a
colonial legal order on the Algerian people made the normal operation of criminal law procedure virtually impossible . 24 And it is in this ma king impossible of the operation of
the legal order that the power of the strategy of rupture lies. ¶ In refusing to render his clients’ actions intelligible to a colonial (and later imperial) legal framework, Vergès
makes visible the obvious hypocrisy of the colonial legal order that attempts to punish resistance that employs violence, in the same spatial temporal boundaries where the brute
violence of colonial rule saturates everyday life. In doing so, this is a strategy that challenges the monopoly of le gitimate violence the state holds. Vergès aims to render visible
The ruptural defence seeks to
the false distinction between common crimes and political crimes, or more broadly, the separation of law and politics. 25

subvert the order and structure of the tr ial by re - defining the relation between accuser and accused. This
illumination of the hypocrisy of the colonial state questions the authority of its judiciary to
adjudicate. But more than this, his strategy is ruptural in two senses that are fundamental to the operation of the law in the colonial settler and post - colonial contexts.
The first is that the space of opposition within the legal confrontation is reconfigured . The
second, and related point, is that the strictures of a legal politics of recognition are shattered.¶ In relation to the first point, a space of opposition is, in the view of Fanon, missing
in certain senses, in the colonial context. A space of opposition in which a genuinely mutual struggle between coloniser and colonised can occur is de nied by spatial and legal -
political strategies of containment and segregation. While these strategies also exhibit great degre es of plasticity 26 , the control over such mobility remains to a great degree in

the hands of the colonial occupier. The legal strat egy of rupture creates a space of political
opposition in the courtroom that cannot be absorbed or appropriated by the legal order . In
Christodoulidis’ view, this lack of co - option is the crux of the strategy of rupture .¶ This strategy of

rupture also poin ts to a path that challenges the limits of a politics of recognition , often one of the key legal and political strategies
utilised by indigenous and racial minority communities in their struggles for justice. Claims for recognition in a juridical frame ine vitably involve a variety of onto -
epistemological closures. 27 Whether because of the impossible and irreconciliable relation between the need for universal norms and laws and the specificities of the particular
claims that come before the law, or because of the need to fit one’s claims within legal - political categories that are already intelligible within the legal order, legal recognition
has been critiqued, particularly in regards to colonial settler societies, on the basis that it only allows identities, legal claims, ways of being that are always - already proper to the
existing juridical order to be recognised by the law. In the Canadian context, for instance, many scholars have elucidated the ways in which the legal doctrine of aboriginal title to
land im ports Anglo - American concepts of ownership into the heart of its definition; and moreover, defines aboriginality on the basis of a fixed, static concept of cultural
The strategy of rupture elides the violence of recognition by challenging the legitimacy of
difference.

the colonial legal order itself.¶ In an article discussing Vergès’ strategy of rupture, Emilios Christodoulidis takes up a question posed to Vergès by
Foucault shortly after the publication of Vergès’ book, De La Stratégie Judiciare, as to wh ether the defence of rupture in the context of criminal law trials in the colony could be
rupture could inform
generalised more widely, or whether it was “not in fact caught up in a specific historical conjuncture.” 28 In exploring how the strategy of

practices and theory outside of the courtroom, Christodoulidis characterises the strategy of rupture as one mode of immanent critique. As
individuals and communities subjected to the force of law, the law itself becomes the object of critique , the
object that ne eds to be taken apart in order to expose its violence. To quote from Christodoulidis:¶ Immanent critique aims to generate within these institutional frameworks
contradictions that are inevitable (they can neither be displaced nor ignored), compelling (they necessitate action) and transformative in that (unlike internal critique) the
overcoming of the contradiction does not restore, but transcends, the ‘disturbed’ framework within which it arose. It pushes it to go beyond its confines and in the process, fam

ously in Marx’s words, ‘enables the world to clarify its consciousness in waking it from its dream about itself’. 29¶ Christodoulidis explores how the strategy of rupture
can be utilised as an intellectual resource for critical legal theory and more broadl y, as a point of
departure for political strategies that could cause a crisis for globalised capital. Strategies of rupture are particularly crucial when considering a system, he notes, that has been
so successful at appropriating, ingesting and making its own, political aspirations (such as freedom, to take one example) that have also been used to disrupt its most violent and
exploitative tendencies. Here Christodoulidis departs from the question of colonialism to focus on the operation of capitalism in po st - war European states. It is also this
bifurcation that I want to question, and rather than a distinction between colonialism and capitalism, to consider how the colonial (as a set of economic and political relations
that rely on ideologies of racial diff erence, and civilisational discourses that emerged during the period of European colonialism) is continually re - written and re - instantiated
through a globalised capitalism. As I elaborate in the discussion of the Salwa Judum judgment below, it is the combi nation of violent state repression of political dissent that
finds its origins (in the legal form it takes) during the colonial era, and capitalist development imperatives that implicate local and global mining corporations in the
dispossession of tribal p eoples that constitutes the legal - political conflict at issue.¶ After the Trial: From Defence to Judgment¶ In response to a question from Jean Lapeyrie (a
member o f the Action Committee for Prison - Justice) during a discussion of De La Stratégie Judiciare published as the Preface to the second edition, Vergès remarks that there
The strategy of rupture is a tactic
are actually effective judges, but that they are effective when forgetting the essence of what it is to be a judge. 31

utilised to subvert the order and structure of a trial; to re - define the very terms upon which the trial is premised. On
this view, the judge, charged with the obligation to uphold the rule o f law is of course by definition not able to do anything but sustain an unjust political order.¶ In the film
Terror’s Advocate , one is left to wonder about the specificities of the judicial responses to the strategy deployed by Vergès. (Djamila Bouhired , for instance, was sentenced to
death, but as a result of a worldwide media campaign was released from prison in 1962). While I would argue that the judicial response is clearly not what is at stake in the
Exposing a law to
ruptural defence, I want to consider the potentia lity of the judgment to be ruptural in the sense articulat ed by Christodoulidis, discussed above.

its own contradictions and violence, revealing the ways in which a law or policy contradicts and violates rights to basic political
freedoms , has clear political - legal effects and consequences. Is it possible for members of the judiciary to expose contradictions in the legal order
itself, thereby transforming it? Would the redefinition, for instance, of constitutional provisions guaranteeing r ights that come into conflict with capitalist development
the re - definition of the limitations on the guarantees of individual and group
imperatives constitute such a rupture? In my view,

freedom that are inevitably and invariably utilised to justify state repression of rights in favour of capitalist development imperatives, security, or colonial
settlement have the potential to contribute to the re - creation of political orders that could be
more just and democratic.¶ We may be reluctant to ever claim a ju dgment as ruptural out of
fear that it would contaminate the radical nature of this form of immanent critique . Is to
describe a judgment as ruptural to belie the impossibility of justice, the aporia that confronts every moment of judicial decision - making? I want to suggest that it is impossible
to maintain such a pure position in relation to law, particularly given its capacity (analogous to that of capital itself) for reinvention. Thus, I want to explore the potential for
judges to subvert state violence e ngendered by particular forms of political and economic dispossession, through the act of judgment. In my view, basic rights protected by

re - defining
constitutional guarantees (as in the Indian case) have been so compromised in the interests of big business and develo pment imperatives, that

rights to equality , dignity and security of person, and subverting the interests of the state -
corporate nexus is potentially ruptural, in the sense of causing a crisis for discrete
tentacles of global capitalism.¶ At th is juncture, we may want to explicitly account for the specific differences between criminal defence cases
and Vergès‘ basic tactic, which is to challenge the very jurisdiction of the court to adjudicate, to define the act of resistance as a criminal one, and constitutional challenges to the
While one tactic seeks to render the illegitimacy of the colonial state
violation of rights in cases such as Salwa Judum .

the other is a tactic used to re - define the terms upon which


bare in its confrontation with anti - colonial resistance,

political dissent and resistance take place within the constitutional bounds of the post - colonial state.
These two strategies appear to be each other’s opposite; one challenges the legitimacy of the state itself through refusing the ju
risdiction of the court to criminalise freedom fighters, while the other calls on the judiciary to hold the state to account for criminalising and violating the rights of its citizens to

However, the common thread that situates these strategies


engage in political acts of dissent and resistance.

within a singular political framework is the fundamental challenge they pose to


the state’s monopoly over defining the terms upon which anti - colonial and anti - capitalist
political action takes place. ¶ Here I will turn to consider a post - colonial context in which the colonial is continually being re - written, juridically

speaking, in light of neo - liberal economic imperatives unleashed from the late 1980s onwards. A recent judgment of the Indian Supreme

Court provide s an opportunity to consider a moment in which capitalist development imperatives and the
exploitation of tribal peoples by the state of Chattisgarh are put on trial by a group of three plaintiffs. The
judgment provides, amongst other things, an opportunit y to consider the strategy of the plaintiffs and also the judicial response. As I argue below, this judgment

presents an instance of rupture precisely because the fundamental freedoms of the people of Chattisgarh
are redefined by the Court in such a way as to challenge and condemn the capitalist development imperatives that
have put their lives and livelihoods at risk.¶ Salwa Judum¶ The Indian Supreme Court rendered judgment in the case of Nandini Sundar and others v the
State of Chhattisgarh on July 5th, 2011. In this case, Sundar, a professor of Sociology at the Delhi University, along with Ramachandra Guha, an eminent Indian historian, and
Mr. E.A.S. Sarma, former Secretary to Government of India and former Commissioner, Tribal Welfare, Government of And hra Pradesh, petitioned the Supreme Court of India
alleging, inter alia , that widespread violations of human rights were occurring in the State of Chattisgarh, on account of the ongoing Naxalite/Maoist insurgency and the counter
- insurgency activities of th e State government and the Union of India (or the national government). More specifically, the petitioners alleged that the State was in violation of
Articles 14 and 21 of the Indian Constitution. Article 14 guarantees equality before the law of each citiz en and freedom from discrimination on the basis of race, religion, caste,
sex or place of birth. Article 21 of the Constitution guarantees the protection of life and personal liberty.¶ While a comprehensive overview of the Naxalbari movement is beyond
the scope of this article, I provide a very brief description of the movement here, by way of explaining the political and legal background of the judgment. The Naxalites are
revolutionary communists (Maoists) who split from the Communist Party of India short ly after independence. The movement includes a social base comprised of “landless,
small peasants with marginal landholdings,” and adivasis. 32 Bhatia notes that in the state of Bihar, many people join the movement in order to pursue short - term goals, such
as better education, food and housing, and employment, with revolution a distant concept if not altogether foreign to more immediate objectives of radical change. 33 Regardless
of whether revolution is the immediate or long - term objective of members of the N axalite movement, it is clear that along with economic and social rights lies the desire for
freedom from violence and fear in a political context described by some as semi - feudal. One young Naxalite described the hangover of feudal attitudes towards lowe r castes by
stating “the landlord’s moustache has got burnt but the twirl still remains.” 34¶ With a powerful and unrelenting presence in tribal areas, arguably amongst the most
impoverished parts of the country, Naxalites have engaged in nonviolent and dir ect armed action against state and national governments intent on pursuing capitalist modes of
development at the expense of the poor, throughout nine different states in India. Specifically, the Naxals have focused on land rights, minimum wages for labour ers, common
property resources and housing rights. 35 The strategy of armed resistance has met with criticism across the political spectrum 36 , and it is difficult to gauge the level of support
for the Naxalites amongst left and progressive communities in Indi a. However, the ascription by India’s Prime Minister Manmohan Singh to the Naxals as the “singlest greatest
threat to India’s national security” 37 was the precursor to a vicious campaign of repression called Operation Greenhunt that has attracted criticism by political progressives.¶
This is the background to the petition brought by Sunder, Guha and Sarma. The petition alleged, inter alia, the widespread violation of human rights of people of Dantewada District and neighboring areas in Chattisgarh. Specifically, the petitioners alleged that the State of Chattisgarh was supporting the activities of an armed vigilante group called ‘Salwa Judum’ (‘Purification Hunt’ in the Gondi language). The State was actively promoting the Salwa Judum through the appointment of Special Police Officers. The government of Chattisgarh, along with the Union of India government, was alleged to have employed thousands of ‘special police officers’ as a part of their counter-insurgency strategies. The SPOs, a category that finds its origins in colonial policing legislation 38, are members of the tribal communities, and are often minors. The SPOs, as the Court notes, are armed by the State and given little or no training, to fight the battles against the Naxalites. At the time the Court passed down its judgment, 6500 tribal youth had been conscripted into the Salwa Judum (para 44). He writes that the “wholesale militarisation of the movement since the 1990s has culminated in a vanguard war trapped in an expanding culture of counterinsurgency.” 39¶ The State of Chattisgarh recruits SPOs (also known as Koya Commandos) under the provisions of the Chattisgarh Police Act 2007. Under this Act, the SPOs, as the Court notes, “enjoy the ‘same powers, privileges and perform same duties as coordinate constabulary and subordinate’ of the Chattisgarh Police.” 40 The Union Government of India sets the limit of the number of SPOs that each state can appoint for the purposes of reimbursement of an honorarium under the Security Related Expenditure Scheme. The State argued on its behalf that the SPOs receive two months of training covering such things as the use of arms, community policing, UAC and Yoga training, and the use of scientific and forensic aids in policing. 41 The Union Government argued that the SPOs “have played a useful role in the collection of intelligence, protection of local inhabitants and ensuring security of property in disturbed areas.” 42 Despite these attempts at a defence, the Court found in favour of the
petitioners.¶ The Court contrasts the provisions of the 2007 Act that provide for the conditions under which the Superintendent of Police may appoint “any person” as an SPO with the parallel provisions in the British-era legislation. They find that the 2007 Act, unlike its predecessor, fails to delimit the circumstances under which such appointments can be made. The circumstances, however, do include “terrorist/extremist” incidents, and the Court thus finds that the SPOs are “intended to be appointed with the responsibilities of engaging in counter-insurgency activities.” 43 The Court agrees with the allegations of the petitioners, that thousands of tribal youth are being appointed by both the State and Union governments to engage in armed conflict with the Naxalites, and that this places the lives of tribal youth in “grave danger.” 44 Given that youth being conscripted have very low levels of education and are often illiterate, and that they themselves have likely been the victims of state and Naxal violence, the Court found that they could not “under any conditions of reasonableness” assume that the youth are exercising the requisite degree of free will and volition in relation to their comprehension of the conditions of counter-insurgency and the consequences of their actions, and thus, were not viewed by the court as freely deciding to join the police force as SPOs. 45 After a very thorough analysis of the conditions under which tribal youths become SPOs and the use and abuse of the SPOs by the State, the Court found the State of Chattisgarh to be in violation of Articles 14 and 21 of the Constitution by appointing tribal youth as SPOs engaged in counter-insurgency.¶ The Court’s findings in relation to the SPOs are remarkable insofar as they account for the socio-economic conditions and lived realities of the tribal youth. In their judgment, however, they go much further than engaging a contextualised and nuanced approach to the interpretation of the rights to equality, life and personal liberty. They enquire into the causes of Naxalite violence, and in doing so, hold capitalist development imperatives to account; the constitutional rights of tribals
and others dispossessed of their lands and livelihoods are being violated in the interests of capital. Drawing on academic work critical of globalisation, the Court quotes the following:¶ ‘[T]he persistence of “Naxalism”, the Maoist revolutionary politics, in India after over six decades of parliamentary politics is a visible paradox in a democratic “socialist” India.... India has come into the twenty-first century with a decade of departure from the Nehruvian socialism to a free-market, rapidly globalizing economy, which has created new dynamics (and pockets) of deprivation along with economic growth. Thus the same set of issues, particularly those related to land, continue to fuel protest politics, violent agitator politics, as well as armed rebellion....’ 46¶ The Court recognises that the capitalist development imperatives of the state are the cause of the armed resistance when they state that the problem lies not with the people of Chattisgarh, nor those who question the conditions under which the conflict has been produced, but “[t]he problem rests in the amoral political economy that the State endorses, and the resultant revolutionary politics that it unnecessarily spawns.” 47 Quoting from a report written by an expert group appointed by the Planning Commission of India, the Court takes note of the “irreparable damage” caused to marginalised communities by the development paradigm adopted by the national government since independence. 48 The economic development pursued has “inevitably caused the displacement” of these communities and “reduced them to a sub-human existence”. 49 The Expert Group also noted their surprise at the refusal of the State to recognise the reasons for the political dissent expressed by the Naxalites, and the disruption of law and order. The Court adopts this observation, and notes that:¶ Rather than heeding such advice [to address the dehumanisation wreaked by capitalist development policies], which echoes the wisdom of our Constitution, what we have witnessed in the instant proceedings have been repeated assertions of inevitability [sic] of muscular and violent statecraft. 50¶ Following this remarkable assertion of Constitutional values that are opposed to the state violence used to repress political dissent, the Court accounts for the violence rendered by capitalist development imperatives:¶ The culture of unrestrained selfishness and greed spawned by modern neo-liberal economic ideology, and the false promises of ever increasing spirals of consumption leading to economic growth that will lift everyone, undergird this socially, politically and economically unsustainable set of circumstances in India in general, and Chattisgarh in particular.” 51¶ The exploitation of natural resources violates principles that are “fundamental to governance” and this violation eviscerates the promise of equality before the law, and dignity of life (Article 21). 52 Capitalist development imperatives and neo-liberalism “necessarily tarnish” and “violate in a primordial sense” Articles 14 and 21 of the Indian Constitution. 53 The Court thus positions the rights to equality and dignity in opposition to capitalist development imperatives, and most significantly, does not find that the violation can be justifiably limited. Economic policies that violate the spirit of the Constitution, and run counter to the “primary task of the state”, which is to provide security for all of its citizens “without violating human dignity”, cause levels of social unrest that ultimately amount to an “abdication of constitutional responsibilities”. 54 In their finding that neo-liberal ideology amongst other economic policies are the root causes of the social unrest and Naxal militancy, the Court re-values the constitutional and human rights of its citizens and asserts a radically different vision of the role of the state in promoting and protecting democracy. Based on the spirit of the Constitution as enacted at the time of independence, the Court clearly puts forth a view that the conditions for democracy begin with state protection and enhancement of human dignity and equality, education, and freedom from violence, rights and values that are contrary to the capitalist economic policies embraced by the State of Chattisgarh.¶
Constitutional rights claims, whether we are looking at state-of-the-art constitutions in Canada, South Africa or elsewhere, do not often go beyond a liberal conception of rights. And indeed, utilising human rights as a means of provoking political ruptures (as opposed to ameliorating existing conditions) surely seems like a rather bankrupt endeavor, in light of how rights to freedom and liberty have been effectively co-opted by market imperatives. 55 The ISC judgment thus seems all the more compelling, in its condemnation of developmental terrorism, and capitalist greed. The judgment finds in favour of the petitioners. In doing so, they explicitly critique the capitalist model of development that has impoverished so many millions of people. They express the view that people do not rise up in armed insurgency against the state without cause, and find that the failure of the State to affirmatively fulfill its obligation to protect the life and liberty of the SPOs is a breach and violation of the Constitution. They find, significantly, that the very economic policies pursued by the government, coupled with the treatment of tribals as nothing more than cannon fodder in the war against the Naxalites, have dehumanised those most vulnerable to poverty. The Court adopts the words of Joseph Conrad in their condemnation of the impoverishment and exploitation of the tribals by both the state and union governments. Drawing parallels with Conrad’s characterisation of the colonial exploitation of the Congo in the late 19th and early 20th centuries, the Court relates the “vilest scramble for loot that ever disfigured the history of human conscience” to the “scouring of the earth by the unquenchable thirst for natural resources by imperialist powers”. 56¶ The Court also alludes to the virulent auto-immune reaction that exists like a germ, waiting to explode, amongst the tribal youths. With no established mechanism for getting the arms back from the tribal youths, the Court predicts the possibility of these youths becoming “roving groups of armed men endangering the society, and the people in those areas as a third front.” They write that it is “entirely conceivable that those youngsters refuse to return the arms. Consequently, we would then have a large number of armed youngsters, running scared for their lives, and in violation of the law. It is entirely conceivable that they would then turn against the State, or at least defend themselves using those firearms, against the security forces themselves; for their livelihood, and subsistence...” 57¶ In finding the government responsible for the socio-economic conditions that have led to revolutionary activity, and in condemning their brutal and inhumane use of tribal youth in the armed struggle against the Naxalites (caused, in the view of the Court, by policies of privatisation that leave the state ideologically and actually incapable of dealing with the social unrest), the Court engages in an immanent critique of the political ideology and legal policies of the government. In critiquing the violence of capitalist development imperatives pursued by the state, and the inhumane violence utilised by the state to counter its natural consequences (armed unrest), the Court re-invests concepts of liberty, life, and equality with political meaning that goes beyond their usual liberal interpellations. The concept of security is
reinterpreted with the interests of the poor in mind, the market logic of efficiency condemned as a guiding principle and objective of government policy.¶ Conclusion¶ Outside of a criminal or military law context, a strategy of rupture might involve an exposure of the contradictions that inhere in colonial, capitalist legal orders that eviscerate the potentiality that rights hold to enable individuals to live lives free of fear, violence and exploitation. In considering how this rupture might occur through the act of judgment, it may be through challenging the authority of the state to engage its citizens in ways that violate political and ethical norms of freedom. In the judgment analysed here, the Court seizes the power to define the constitutional norms and, crucially, the meaning of the rights to life, personal liberty and security. They engage an act of radical re-definition of democratic rights with the lived conditions of the poorest communities at the forefront of their analysis.¶ In this judgment the Indian Supreme Court redefines the imperatives of security as a State obligation to its citizens, to “secure for our citizens conditions of social, economic, and political justice for all who live in India...”. 58 Without this, the Court notes that the State will not have achieved human dignity for its citizens. This, for the court, is an essential truth, and “policies which cause vast disaffection amongst the poor” not only exist in opposition to this truth but are also “necessarily destructive of national unity and integration”. 59 The Court thus identifies Indian democracy as what is at stake in the revaluing and reinterpretation of rights to life and security of the person.¶ In directing their analysis at the ways in which neo-liberal capitalism as a political and economic rationality “has launched a frontal assault on the fundaments of liberal democracy, displacing its basic principles of constitutionalism, legal equality, political and civil liberty, political autonomy, and universal inclusion with market criteria...”, 60 the ISC attempts to recuperate the deracinated vision of democracy that Indian corporateers and government ministers appear to have in mind. Surely, in a time when even the most basic conditions for a democracy that attends to the political, social and economic needs of the common, those with basic common needs of education, freedom of association and movement, freedom from deprivation and dispossession, are absent, charting such legal terrain opens a space for political rupture.¶ In considering whether legal judgments can be ruptural in the sense elucidated by Christodoulidis, and reflecting on Sundar et al., it is clear that an independent judiciary does have the power to disturb the monopoly of violence exercised by the government, and to transcend this disturbed framework by offering a radically different interpretation of security and freedom. Security as the insurance of egoism reflects a Benthamite definition of law’s raison d’être as nothing other than the security of private property. The law exists in order to provide security for the property-owning classes, security for their actual wealth and also feelings of security; the law provides freedom from the fear of loss. In moving far from Benthamite concerns with the protection of private property, the Court redefines security, a concept fundamental to law’s being, and more particularly, a concept too often used and abused in the interests of private corporations.¶ Although this use of the law does not refuse the authority of the court in the way that Vergès’ strategy of rupture did in the colonial-criminal law context, it most certainly redefines the ambit of what is an intelligible rights claim. By bringing the socio-economic conditions of the adivasis and tribal peoples to the forefront of their interpretations of the rights at issue, the Court opens the space for a legal consciousness that can no longer remain caught up in a fantasy about its own effectiveness in actually protecting the rights of the poorest and most vulnerable. This movement by the Indian Supreme Court charts an avenue that holds promise for the anti-colonial struggles of legal advocates elsewhere.

Ontology focus bad---imagined contingent futures change the conditions of possibility but don’t rely on linear time to produce material benefits
Jessica Hurley 17, Assistant Professor in the Humanities at the University of Chicago,
“Impossible Futures: Fictions of Risk in the Longue Durée”, Duke University Press,
https://read.dukeupress.edu/american-literature/article/89/4/761/132823/Impossible-Futures-Fictions-of-Risk-in-the-Longue
Birkerts’s dismissal of a decolonial future for North America as something “so contrary to what we
know both of the structures of power and the psychology of the oppressed that the imagination simply balks” (41)
is consistent with a much longer history of colonial rhetoric and practice that rejects Native land
claims as impossible or unrealistic. The Supreme Court’s invocation in their 2005 City of Sherrill v. Oneida Nation
decision of the “impossibility doctrine” that governs the “impracticability of returning to Indian control land that generations earlier
passed into numerous private hands” demonstrates the ongoing power of a white-defined realism to
distinguish possible from impossible actions with regard to its own practices of settler
colonialism (quoted in Rifkin 2009, 4). In this view, for the United States to abide by the terms of its
treaties with Native nations is unthinkable; it falls beyond the limits of plausibility that define
possible actions. And as Mark Rifkin has argued (ibid.), the idea of Native sovereignty is not just unrealistic but an
epistemological challenge to the real itself, to the construction of reality that maintains life as we know it in the United States. The
“sadder realism” called for by Buell must not, therefore, be taken as the neutral generic option in
dealing with risk, but rather recognized as one which relies on standards of verisimilitude and
plausibility that perpetuate the oppression of indigenous communities whether they are applied
directly to nuclear risk or to the legal standards that define the limits of Native self-determination. ¶ Redefining the
real, “draw[ing] attention to the possible by showing the contingent dimension of the actual”
(Revel 2009, 52), thus becomes a strategy of decolonization. Realism, in this context, is a self-fulfilling prophecy: the return of Native land is regarded as impossibly implausible by the United States, and so it fails to
appear in the imaginable scenarios at key moments of legal and political decision-making and does not come to pass. Consequently,
as Ward Churchill writes, all “anti-colonial fighters…accepted as their agenda a redefinition of reality in terms deemed quite
impossible within the conventional wisdom of their oppressors”; any decolonial movement will require a counter-realist political
and aesthetic strategy (1992, 174). Almanac suggests that apocalypse remains a potent force in redefining reality against colonial
norms even as the novel re-forms our traditional understanding of nuclear apocalypse. No longer sudden and total, Almanac gives us
the longue durée apocalypse of nuclear waste, an apocalypse defined not by the sudden absence of the future but rather by the
impossibility of constructing any mechanism by which we might imagine a specific future or futures. ¶ Such an apocalypse is neither
a sudden ending nor a revelation of eternal truth but rather a narratological shift that transfigures the present through a radical
futurelessness. Apocalypse stands, in Almanac, against the futurological equivalent of what Michael Bernstein (1994) has critiqued
as “backshadowing”: the historiographical tendency to construct the past backwards from the present, occluding the contingency of
the present, limiting the presents-that-could-have-been to one, and including in the historical narrative only those factors that gave rise to this specific outcome. A predetermined future, as Bernstein’s subtitle Against Apocalyptic History suggests, does exactly the same thing: it binds the present to the future with a single unfrayed rope and makes the present the necessary, unchangeable precursor to a known future. These imagined futures, despite their virtuality, have significant material effects in the present, making certain things possible and rendering others unthinkable, as we saw at the WIPP where a plausible set of future scenarios allowed the repository to open and foreclosed the possibility of shutting down nuclear manufacturing. In an indigenous context, meanwhile, the historical determinism instantiated by the imagined futures of the nuclear state has rendered Native nations paradoxically futureless, since indigenous lands and communities are by far the most damaged by the ongoing mining, processing, testing, and dumping practices of the nuclear-military-industrial complex.17¶ Apocalypse, then, becomes visible in Silko’s novel not as a model of
linear historical determinism, as in the Genesis-to-Revelation teleology that has long subtended Christian historiographies, but
rather as a narrative form that explodes such determinism to reveal the contingent nature of the present and allow
for other possibilities in both the present and the future. The epistemological challenges to human
understanding posed by the deep time of nuclear waste are taken up by Silko to reveal not simply the multiplicity of possible futures
(a conceptual leap which, as Annie McClanahan (2009) and R. John Williams (2016) have shown, was made within the nuclear-
military-industrial complex and has been profitably taken up by global corporations to deeply conservative ends), but the absolute
impossibility of imagining any specific future at all. ¶ Exploding the reservoir of probable futures that traditionally structures the
novel form (Kermode [1968] 2000) transforms the novel’s present in much the same way that, in radical historiography, telling a
different story about the past does. In contrast to the ineluctable presentism that defines the nuclear complex at the WIPP and
beyond, the Native/nuclear temporalities that the snake occupies are those of a longue durée that spirals and returns from both the
past and the future, in which “these days and years were all alive, and all these days would return again” (Almanac 247). The
qualitatively different future whose possibility is so vigorously unimagined at the WIPP is, in Almanac, an inescapable
future that is, in the novel’s non-linear timeframe, also a part of the present. Actions and objects have a different reality-effect in this light. When macaws speak of revolution, when opals bleed and grant visions, when ghosts weigh down a donkey, none of these things are unrealistic or even magically realistic. Rather, they are manifestations of deep time temporalities, simultaneously Native and nuclear, producing impossible
juxtapositions of space-time (dead riders on a live mule, the future projected on an opal screen) within the Western chronology of
the novel form. Silko, bound to damaged and damaging futures by the nuclear complex as it intersects with the other histories of
damage left in the wake of colonial modernity, uses apocalypse to transfigure the present: to see the other possibilities
that reside in it and to couple those possibilities to their own pasts and their own futures,
constructing not only a transfigured instant but wholly transfigured timelines, worlds with a
solidity of their own.
AT: Adams
2AC---AT: Human Warfare
But, we do solve the turn---humans have the capacity for empathy
Kelsey Piper 19, senior writer at Future Perfect, 06/21/19, Death by algorithm: the age of killer robots is closer than you think, https://www.vox.com/2019/6/21/18691459/killer-robots-lethal-autonomous-weapons-ai-war
“The most interesting argument for autonomous weapons,” Walsh told me, “is that robots can be more ethical.” Humans, after all, sometimes commit war crimes, deliberately targeting innocents or killing people who’ve surrendered. And humans get fatigued, stressed, and confused, and end up making mistakes. Robots, by contrast, “follow exactly their code,” Walsh said.
Pentagon defense expert and former US Army Ranger Paul Scharre explores that idea in his 2018 book, Army of None: Autonomous
Weapons and the Future of War. “Unlike human soldiers,” he points out, “machines never get angry or seek revenge.” And “it isn’t
hard to imagine future weapons that could outperform humans in distinguishing between a person holding a rifle and one holding a
rake.”
Ultimately, though, Scharre argues that this argument has a fatal flaw: “What’s legal and what’s right aren’t always the same.” He tells the story of a time his unit in Afghanistan was scouting and their presence was discovered by the Taliban. The Taliban sent out a 6-year-old girl, who unconvincingly pretended to be herding her goats while really reporting the location of the US soldiers by radio to the Taliban.
“The laws of war don’t set an age for combatants,” Scharre points out in the book. Under the laws of war, a Taliban combatant was engaging in a military operation near the US soldiers and it would be legal to shoot her. Of course, the soldiers didn’t even consider it — because killing children is wrong. But a robot programmed to follow the laws of war wouldn’t consider details like that. Sometimes soldiers do much worse than what the law permits them to do. But on other occasions, they do better — because they’re human, and bound by moral codes as well as legal ones. Robots wouldn’t be.
Emilia Javorsky, the founder of Scientists Against Inhumane Weapons, points out that there’s a much better way to use robots to prevent war crimes, if that’s really our goal. “Humans and machines make different mistakes, and if they work together, you can avoid both kinds of mistakes. You see this in medicine — diagnostic algorithms make one kind of mistake; doctors tend to make a different kind of mistake.”
So we could design weapons that are programmed to know the laws of war — and accordingly will countermand any order from a human that violates those laws — and that do not have the authority to kill without human oversight. Scientists Against Inhumane Weapons and other researchers who study LAWS have no objections to systems like those. Their argument is simply that, as a matter of international law and as a focus for weapons development and research, there should always be a human in the loop.
If this avenue is pursued, we could have the best of both worlds: robots that have automatic guardrails against making mistakes but also have human input to make sure the automatic decisions are the right ones. But right now, analysts worry that we’re moving toward full autonomy: a world where robots are making the call to kill people without human input.
What could possibly go wrong?
Fully autonomous weapons will make it easier and cheaper to kill people — a serious
problem all by itself in the wrong hands. But opponents of lethal autonomous weapons warn that the consequences could be
worse than that.
For one thing, if
LAWS development continues, eventually the weapons might be extremely
inexpensive. Already today, drones can be purchased or built by hobbyists fairly cheaply, and prices are likely to keep
falling as the technology improves. And if the US used drones on the battlefield, many of them
would no doubt be captured or scavenged. “If you create a cheap, easily proliferated weapon of mass destruction, it
will be used against Western countries,” Russell told me.
Lethal autonomous weapons also seem like they’d be disproportionately useful for ethnic cleansing and genocide; “drones that can be programmed to target a certain kind of person,”
Ariel Conn, communications director at the Future of Life Institute, told me, are one of the most straightforward applications of the
technology.
Then there are the implications for broader AI development. Right now, US machine learning and AI is the best in the world, which
means that the US military is loath to promise that it will not exploit that advantage on the battlefield. “The US military thinks it’s going to maintain a technical advantage over its opponents,” Walsh told me.
That line of reasoning, experts warn, opens us up to some of the scariest possible scenarios for AI.
Many researchers believe that advanced artificial intelligence systems have enormous potential for catastrophic failures — going wrong in ways that humanity cannot correct once we’ve developed them, and (if we screw up badly enough) potentially wiping us out.
AT: Extinction Link
Preventing extinction isn’t liberal futurity or naïve optimism.
Mobilizing against dangerous conditions is preferable to fatalism that
consigns the planet to nuclear war.
Tim Stevens 18, Lecturer in Global Security at King’s College London, “Exeunt Omnes?
Survival, Pessimism and Time in the Work of John H. Herz”, Millennium: Journal of
International Studies, Volume 46, Number 3
Herz explicitly combined, therefore, a political realism with an ethical idealism, resulting in what he termed a ‘survival ethic’.65 This
was applicable to all humankind and its propagation relied on the generation of what he termed ‘world-consciousness’.66 Herz’s
implicit recognition of an open yet linear temporality allowed him to imagine possible futures aligned with the survival ethic, whilst at the same time imagining futures in which humans become extinct. His pessimism about the latter did not preclude working towards the former. As Herz recognised, it was one thing to develop an ethics of survival but quite another to translate theory into practice. What was required was a collective, transnational and inherently interdisciplinary effort to address nuclear and environmental issues and to problematize notions of security, sustainability and survival in the context of nuclear
geopolitics and the technological transformation of society. Herz proposed various practical ways in which young people in
particular could become involved in this project. One idea floated in the 1980s, which would alarm many in today’s more
cosmopolitan and culturally-sensitive IR, was for a Peace Corps-style ‘peace and development service’, which would ‘crusade’ to
provide ‘something beneficial for people living under unspeakably sordid conditions’ in the ‘Third World’.67 He expended most of
his energy, however, from the 1980s onwards, in thinking about and formulating ‘a new subdiscipline of the social sciences’, which
he called ‘Survival Research’.68
Informed by the survival ethic outlined above, and within the overarching framework of his realist liberal internationalism, Survival
Research emerged as Herz’s solution to the shortcomings of academic research, public education and policy development in the face
of global catastrophe.69 It was also Herz’s plea to scholars to venture beyond the ivory tower and become – excusing the gendered
language of the time – ‘homme engagé, if not homme révolté’.70 His proposals for Survival Research were far from systematic but
they reiterated his life-long concerns with nuclear and environmental issues, and with the necessity to act in the face of threats to
human survival. The principal responsibilities of survival researchers were two-fold. One, to raise awareness of survival issues in the minds of policy-makers and the public, and to demonstrate the link between political inaction now and its effect on subsequent human survival. Two, to suggest and shape new attitudes more ‘appropriate to the solution of new and unfamiliar survival problems’, rather than relying on ingrained modes of thought and practice.71 The primary initial purpose, therefore, of Survival Research would be to identify scientific, sociocultural and political problems bearing on the possibilities of survival, and to begin to develop ways of overcoming these. This was, admittedly, non-specific and somewhat vague, but the central thrust of his proposal was clear: ‘In our age of global survival concerns, it should be the primary responsibility of scholars to engage in survival issues’.72 Herz considered IR an essential disciplinary contributor to this endeavour, one that should be promiscuous across the social and natural sciences. It should not be afraid to think the worst, if the worst is at all possible, and to establish the various requirements – social, economic, political – of ‘a livable world’.73
How this long-term project would translate into global policy is not specified but, consistent with his previous work, Herz identified
the need for shifts in attitudes to and awareness of global problems and solutions. Only then would it be possible for ‘a turn round
that demands leadership to persuade millions to change lifestyles and make the sacrifices needed for survival’.74
Productive pessimism and temporality
In 1976, shortly before he began compiling the ideas that would become Survival Research, Herz wrote: For the first time, we are
compelled to take the futuristic view if we want to make sure that there will be future generations at all. Acceleration of
developments in the decisive areas (demographic, ecological, strategic) has become so strong that even the egotism of après nous le
déluge might not work because the déluge may well overtake ourselves, the living.75
Of significance here is not the appeal to futurism per se, although this is important, but the suggestion that this is ‘the first time’ futurism is necessary to ensuring human survival. This is Herz the realist declaring a break with conventional realism: Herz is not bound to a cyclical vision of political or historical time in which events and processes reoccur over and again. His identification of nuclear weapons as an ‘absolute novum’ in international politics demonstrates this belief in the non-cyclical nature of humankind’s unfolding temporality.76 As Sylvest observes of Herz’s attitude to the nuclear revolution, ‘the horizons of meaning it produced installed a temporal break with the past, and simultaneously carried a promise for the future’.77
This ‘promise for the future’ was not, however, a simple liberal view of a better future consonant with human progress. His autobiography is clear that his experiences of Nazism and the Holocaust destroyed all remnants of any original belief in ‘inevitable progress’.78 His frustration at scientism, technocratic deception, and the brutal rationality of twentieth-century killing, all but demanded a rejection of the liberal dream and the inevitability of its consummation. If the ‘new age’ ushered in by nuclear weapons, he wrote, is characterised by anything, it is by its ‘indefiniteness of the age and the uncertainties of the future’; it was impossible under these conditions to draw firm conclusions about the future course of international politics.79 Instead, he recognised the contingency, precarity and fragility of international politics, and the ghastly tensions inherent to the structural core of international politics, the security dilemma.80
Herz was uneasy with both cyclical and linear-progressive ways of perceiving historical time. The former ‘closed’ temporalities are endemic to versions of realist IR, the latter to post-Enlightenment narratives feeding liberal-utopian visions of international relations and those of Marxism.81 In their own ways, each marginalises and diminishes the contingency of the social world in and through time, and the agency of political actors in effecting change. Simultaneously, each shapes the futures that may be imagined and brought into being. Herz recognised this danger. Whilst drawing attention to his own gloomy disposition, he warns that without care and attention, ‘the assumption may determine the event’.82 As a pessimist, Herz was alert to the hazard of succumbing to negativity, cynicism or resignation. E.H. Carr recognised this also, in the difference between the ‘deterministic pessimism’ of ‘pure’ realism and those realists ‘who have made their mark on history’; the latter may be pessimists but they still believe ‘human affairs can be directed and modified by human action and human thought’.83 Herz would share this anti-deterministic perspective with Carr. Moreover, the possibility of agency is a product of a temporality ‘neither temporally closed nor deterministic, neither cyclical nor linear-progressive; it is rooted in contingency’.84
Again quoting from his autobiographical account of the impact of Nazism, Herz described the relationship between his early pessimism and his
developing intellectual stance: The world became a theatre of the absurd. Suicide would probably have been the logical next move, and I considered it
from time to time. But I was still too young for such a radical step. One thing, however, emerged: a growing interest in domestic and, above all,
international politics. My complete resignation was no longer appropriate. If not from within, fascism might perhaps still be destroyed from without. To
my continuing interest in theory, therefore, was added a practical interest in action.85 Channelling the spirit of E.H. Carr, he wrote of this ‘brutal
awakening’ to the nature of power politics in the 1930s that, ‘Study could no longer be “pure” research; it had to become research committed to warn of
the deadly peril and show the way to the necessary action.’86 His commitment to active engagement was an early one, gestated during his personal
experiences of Nazism in the 1930s.87 This desire to combat Nazism from the outside was manifest in his activities for the Allies during and after World
War II but it coloured his scholarly life also. Herz recognised pessimism was a powerful force in his life but, rather than overcome or mask it, he used it
to propel his intellectual project further, and to engage with, not withdraw from, the world. He was, as van Munster and Sylvest relate, ‘[d]eeply
pessimistic yet a committed social thinker’.88
Herz was explicit about this: a realistic and consistent pessimism can clarify where we are and prepare us to do what is necessary.89 Pessimism is a necessary component of a realistic view of the world, upon which proper and reasoned action can be founded. In this sense, pessimism can be productive. It produces positive outcomes through action, rather than negative ones through inaction or resignation. These are subjective value-judgements, to be sure, but are obtained through a process of realist engagement with the world, rather than blind [mere] fumbling or ideological railroading. Survival Research was a response to Herz’s pessimism about the future, not a rejection of it. This leads us to two observations about the relevance of pessimism to the study of international relations.
The first is that pessimism does not imply disengagement from the world. If anything, the example of John Herz suggests the opposite. He was a pessimist, but his brand of pessimism was no ‘passive fatalism’.90 As he recalled a few years before he died, ‘I consider myself a realist who comes sometimes to pessimistic conclusions, but never gives up looking for solutions if ever so difficult ones’.91 Pessimism can be a spur to thought and to action and need not be a watchword for conservatism in theory or practice.92 This is not to say being a pessimist is easy. Morgenthau, for his part, ‘never flagged in efforts to use his conceptual skills to help improve the human condition’, despite his pessimism about the ability and will of people to take the long view on significant political issues.93 This required that scholars chart different paths through troublesome times and articulate alternative visions of international order, not to preclude political action but to facilitate it; not quite the conservative position realism is often assumed to occupy.94 In the face of worldly frustrations and horrors, it is this attention to the production of alternative futures that prevents ‘pessimism from turning into fatalism’.95
The second observation is that it is unhelpful and misleading to treat pessimism and optimism as oppositional.96 Pessimism and optimism are commonly regarded as antonyms but often enjoy a symbiotic relationship. In Herz, they mingle and cross-pollinate in ways that defy easy explication. Stirk claims, for instance, that Herz’s optimism about how the world could be refashioned ‘was never more than guarded’, restrained by his fierce attachment to the importance of the security dilemma.97 Puglierin notes that his ‘blatant pessimism’ (eklatanter Pessimismus) was always accompanied by some form of optimism.98 We are reminded of Gramsci’s famous statement regarding ‘pessimism of intellect, optimism of the will’ as the cognitive binary at work in the political mind.99 Even as his pessimism deepened over the course of his career, he was always wont to end his analyses with a ‘yet’ or ‘in spite of it all’.100 Importantly, as he became more pessimistic, ‘the solutions he proposed became ever more ambitious’.101 His growing pessimism was accompanied by increasing resolve to tackle the problems of the world head-on, although, as he admitted in a footnote in the 1980s, ‘Not for a moment do I have the illusion that what I have proposed is likely to happen’.102 A suitably pessimistic aside, perhaps, but it did not deter him from continuing his project for another twenty years. This drive seems not to be rooted in optimistic conviction, nor even a subtle version of hope, but in a properly pessimistic reading of the world and its possibilities, engendered as they were by the ontological temporality of perpetual change.

Existential fears need not be settler projections of demise but can be contingently appropriated to reverse indigenous erasure
Weiss 15—Ph.D. candidate, Anthropology, University of Chicago (Joseph, “UNSETTLING FUTURES: HAIDA FUTURE-
MAKING, POLITICS AND MOBILITY IN THE SETTLER COLONIAL PRESENT,” Dissertation submitted to the Faculty of the
Division of Social Sciences, Department of Anthropology, University of Chicago, December 2015, 223-232, dml)
And yet, something has changed in this landscape from the initial erasures of Native futurity we drew out in the first chapter. In the
narratives of colonial actors like Duncan Campbell Scott, it was absolutely clear that “Indians” were disappearing because their
social worlds were being superseded by more “civilized” ways of living and being, ones that these Native subjects would also,
inevitably, in the end, adopt (or failing that, perish outright). There was a future. It was simply a settler one. But the
nightmare futures that my Haida interlocutors ward against in their own future-making reach beyond Haida life alone. Environmental collapse, most dramatically, threatens the sustainability of all life; toxins in the land and the waters threaten human lives regardless of their relative indigeneity, race, or gender (e.g. Choy 2011; Crate 2011). Put another way, the impetus for non-Haida (and non-First Nations subjects more generally) to be “united against Enbridge” with their indigenous neighbours
comes in no small part because an oil spill also profoundly threatens the lives and livelihoods of non-Aboriginal coastal
residents, a fact which Masa Takei, among others, made clear in Chapter 3. Nor is the anxiety that young people might abandon their
small town to pursue economic and educational advantage in an urban context limited to reserve communities. Instead, the
compulsions of capitalist economic life compel such migrations throughout the globe. The nightmare futures that Haida people constitute alternative futures to ward against are not just futures of indigenous erasure under settler colonialism. They are erasures of settler society itself. There is thus an extraordinary political claim embedded in Haida future-making, a claim which gains its power precisely because Haida future-making as we have seen it does not (perhaps cannot) escape from the larger field of settler-colonial determination. Instead, in Haida future-making we find the implicit assertion that Haida people can make futures that address the dilemmas of Haida and settler life alike, ones that can at least “navigate,” to borrow Appadurai’s phrasing, towards possible futures that do not end in absolute erasure. If Povinelli and Byrd are correct and settler liberal governance makes itself possible and legitimate through a perpetual deferral of the problems of the present, then part of the power of Haida future-making is to expose the threatening non-futures that might emerge out of this bracketed present, to expose as lie the liberal promise of a good life always yet to come and to attempt to constitute alternatives.
AT: Alt---Pedagogy
Alt’s paradigm crushes resistance to material aspects of colonialism
by reducing all movements to a binary.
Corey Snelgrove et al. 14, University of British Columbia; Rita Kaur Dhamoon, University
of Victoria; and Jeff Corntassel, University of Victoria, 2014, “Unsettling settler colonialism: The
discourse and politics of settlers, and solidarity with Indigenous nations,” Decolonization:
Indigeneity, Education & Society, Vol. 3, No. 2, p. 1-32,
http://decolonization.org/index.php/des/article/view/21166/17970
Corey: This relational, interdependent focus is also important amongst settlers ourselves – perhaps as a
way to counter the flattening of differences that occurs amongst settlers, particularly in
solidarity work. Settlers obviously need to be doing our own work and challenging ‘our’ institutions and practices that serve to protect or further colonization. But we can’t do this if we flatten the differences and ignore the inequalities and power relationships that exist within settler society. Not only does such flattening prevent much needed alliances but flattening itself can actually work to protect certain elements of settler colonialism. For instance, white supremacy works to naturalize white settler
presence. In terms of solidarity then, I find it problematic for myself, as a white, class privileged, cis-hetero, and able bodied male (as
well as people like me) to demand other peoples to act in solidarity, while also not holding myself (and others like me) responsible
and accountable to other forms of violence that may be a contributing factor to the further reification of structures that support
settler colonialism, like the State. Now I’m not arguing for the continued eschewal of Indigenous governance and legal orders
because others experience violence, but rather, that the substantive recognition of Indigenous governance and
legal orders also requires a dismantling of other, related forms of domination . This latter dismantling I
see as necessary but also insufficient for the dismantling of settler colonialism. These sites and spaces of domination
and resistance are distinct, but also connected dialectically. This seems to be something that
settlers, white settlers specifically, have yet to articulate and take up, critique and act against . And
this is perhaps most evident in how settlers seem to be continuously waiting for instruction from
Indigenous peoples on how to act. Rita: I wonder if this relational approach is a more useful direction for settler colonial
studies, not unlike the kind of work you do Jeff, in thinking about colonialism in a global, comparative context. Jeff: And I think, the
more you can make those links, the British occupation of Maori territory is directly related to HBC’s strategy to begin treaty making
here... All those things are interrelated. They are shared, and they are seen as shared strategies. The other thing I see is this impulse
to delocalize it... it’s always that kind of Free Tibet Syndrome... the further away acts of genocide are from your location, the more
outrage expressed at these injustices. It’s a way of avoiding complicity, but it’s also a way of recasting the gaze. It’s like, ‘We’re not
going to look right here, because this appears to be fairly peaceful’ And so it’s always that sort of re-directing away from localized
responsibility, and almost magnifying impacts farther away. Rita: So what settler colonial studies does do, is help us relocate to
locality, which is helpful. You mention the HBC. I wonder what was the relationship between the Hudson Bay Company in Canada
and the East India Company or the East Africa Company? If we’re thinking about settler colonialism as a
structure, how is it related to other modalities of gendered and sexualized white supremacy? How
are the logics of State sovereignty and authority over nonwhite bodies connected? If we’re thinking about it, as non-
Indigenous peoples being ‘in solidarity’, part of that is locating, attacking the whole structure of
imperialism that is deeply gendered and homonationalist, that depends on neo-liberal projects
of prioritizing able-bodied workers who can serve capitalism . Corey: Part of this, I think, what
we’ve been discussing here, relates to what I sometimes see as the framing of ‘settler’ as event, rather than
structure – where we are perhaps overly focused on the question of ‘who’ at the expense of the
‘how’. If we don’t understand how settlers are produced we run the risk of representing settlers as some sort of transhistorical subject with transhistorical practices. So I’m worried that while in one moment the term ‘settler’ denaturalizes our – that is all non-Indigenous peoples – presence on Indigenous lands, in the next, and through this construction of the ‘settler’ as transhistorical, we renaturalize it. In short, we go from a disavowal of colonization, to its representation as inevitable. Here is where I think a historical materialist or genealogical approach to the production of settler subjects may be useful in showing how this production is conditioned by but also contingent on a number of factors – white supremacy, hetero-patriarchy, capitalism, colonization, the eschewal of Indigenous governance and legal orders,
environmental degradation, etc. Now this is also not to say that the binary of Indigenous/Settler isn’t accurate. I think its
fundamental. Rather, I think it is possible and important to recognize that there have been, and are, individuals
(or even collectives) that might be referred to as something other than settlers by Indigenous
peoples, perhaps as cousins. Or in a similar vein, that there have been and are practices by settlers that
aren’t colonial (and here is where centering Indigenous peoples’ accounts of Indigenous-
settler relations, as well as their own governance, legal and diplomatic orders is crucial). But I think it’s just as important to recognize that these relations have and do not occur despite settler colonial and imperial logics, and thus outside of the binary. Rather, such relations occur in the face of it. The binary then is fundamental, as the logics that uphold it cannot be ignored merely because possibly good relations exist; those same logics threaten such relations through the pursuit of the elimination of Indigenous
peoples. Rita: Yet, how do we act in light of these entanglements, and with, rather than overcoming differences? Corey: Tuck and
Yang (2012) had this really great article, “Decolonization is not a Metaphor.” In it, they talk about the importance of an ethics of
incommensurability – a recognition of how anti-racist and anti-capitalist struggles are incommensurable with decolonization. But
what I’ve been thinking about recently is whether these struggles are incompatible. For example, in the Indigenous resurgence
literature, there is a turn away, but it’s also not an outright rejection. It also demands settlers to change. Yet recognizing that
settlers are (re)produced, the change demanded is not just an individual transformation, but one connected to broader social, economic, and political justice. There are then, it seems, potential lines of affinity between decolonization and other, though incommensurable, struggles. And in order to sustain this compatibility in the face of incommensurability, relationships are essential in order to maintain accountability and to resist repeating colonial and other relations of domination, as well as, in very strategic terms, in supporting each other’s resistance.
2AC---LAWs Turn
LAWs outweigh and turn the links---autonomous weapons are tactics
of the robotic empire. Also proves the alt fails
Ian Shaw 17, Lecturer in Human Geography at the School of Geographical and Earth Sciences
of the University of Glasgow, 08/31/17, Robot Wars: US Empire and geopolitics in the robotic
age, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5639954/
Conclusion: A robot empire
The US military and the wider scientific community regularly imagine, construct, and script robot futurologies. These are not innocent or epiphenomenal projections: they must be thought of as performative. Robot futurologies prime and legitimize the conditions for investment, research and development, and state (in)security now. This article has itself constructed a futurology: one in which US empire, geopolitics, and warfare are fundamentally transformed by robotics. Its original contribution has been to pull a discrete set of imagined futures into the orbit of empire. In doing so, I do not assert this exact future will materialize – but, rather, a set of conditions that contain this future (and others like it) are now being imagined, researched, and materialized. Futurologies are non-linear and open to contingency. For this reason, it is vital that scholars, writers, activists, scientists, and artists intervene in the unequal geographies they mobilize.
Empires have risen and fallen with the evolution of land power, sea power, and air power. By analyzing the US defense
community’s roadmaps, predictions, and anxieties, this article has explored the ascent of robot
power and the potential futures of US warfare, geopolitics, and empire. The value of using empire across this article has been to constellate the past, present, and future, together with the spatial, the political, and the technical. Empire is not a spectacular monism: it is an uneven constellation of bodies and machines, neurons and algorithms, and the splintering warscapes of our more-than-human (in)security. Of course, robots possess a rebellious ontological quality (Walters, 2014) that disrupts easy predictions about the future. This uncertainty is compounded by the globalization of robots in militaries, militias, and terrorist groups across the world, which will transform the US-led robotic order of things. ‘This issue is particularly serious’, Kadtke and Wells (2014: 26) warn, ‘when one considers that in the future, many countries may have the ability to manufacture, relatively cheaply, whole armies of Kill Bots that could autonomously wage war.’
US empire is shifting, slowly, and never completely, away from a warfighting regime based on human augmentation – soldiers extending themselves as tool-beings – to an imperium of robotic autonomy. Swarms of autonomous robots are thus set to transform the spaces, logics, and agents of state violence. Graham Harman (2002: 220) argues we are ‘stationed unwittingly in a cryptic empire of force-against-force’. This cryptic empire of force-against-force can be understood as the material conditions of violence. From satellites, to IEDs, to Reaper drones, objects continually recondition an aleatory warscape. Empire is always immanent to this more-than-human matter-processing. The US military’s ultimate proxy war is composed of robots autogenically securing battle-sites, replacing humans as enforcers of empire. The possibility of materializing this kind of geopolitical future depends upon the US military’s ability (and resources) to install a Roboworld of artificial topologies across the planet. Contrary to the civilizing missions of 19th-century empires, US empire projects a capital-intensive vision of the future, an empire of robots nourished by the pervasive techno-colonization of the lifeworld.
This future involves the infiltration of empire into planetary technics: a robo-mesh. This would entrench a post-national and global biopolitics in which robots mediate the full spectrum of social interactions and perform a series of autogenic manhunts, collapsing separate spaces of state authority and violence. Future robotics, embedded across society, alongside ubiquitous sensing, computing, and the internet of things, would blur the boundaries between military, law enforcement, and commercial robotics, as well as everyday devices. ‘The profound ramifications of this trend are that nearly every aspect of global society could become instrumented, networked, and potentially available for control via the Internet’ (Kadtke and Wells, 2014: 46). We are only at the dawn of realizing such an expansive robo-mesh. The question, therefore, is not simply how robots will secure everyday life, but who they will secure. Robots could simply exacerbate and entrench preexisting conflicts and social inequalities.
Above all, a robotic US empire crystallizes the conditions for an unaccountable form of violence. By outsourcing the act of killing to artificial agents, violence – at least from the perspective of the US military – would transmute into an engineering exercise, like building a bridge, planned and executed by robots. In this sense, robot warfare is alienated because the act of violence functions without widespread human input. As one US Department of Defense-funded study predicts, ‘in the longer term, fully robotic soldiers may be developed and deployed, particularly by wealthier countries’ (Kadtke and Wells, 2014: 48). Such an ascendancy of robot soldiers embodies the triumph of technics over demos. As Tom Engelhardt (2012) argues, ‘think of us as moving from the citizen’s army to a roboticized, and finally robot, military…. [W]e are moving toward an ever greater outsourcing of war to things that cannot protest, cannot vote with their feet (or wings), and for whom there is no “home front” or even a home at all’. The risk here is that democracy abstains from the act of killing: on the loop, but no longer in the loop. An empire of indifference fought by legions of imperial robots.
