Robotics at War
Elinor Sloan
To cite this article: Elinor Sloan (2015) Robotics at War, Survival, 57:5, 107-120, DOI: 10.1080/00396338.2015.1090133
Elinor Sloan is Professor of International Relations at Carleton University, Ottawa and a fellow of the Canadian
Global Affairs Institute. Her most recent book is Modern Military Strategy (Routledge, 2012).
increasing speed of warfare are also driving forces. Civilian and military leaders will face the challenge of reconciling the desire to do whatever possible to reduce the risk to their warfighters with the necessity of accounting for the laws of armed conflict and broader ethical issues.
Classifying robots
In his 2009 book Wired for War, Peter Singer argued that a machine is a robot if it has three things: sensors to monitor the environment; processors or artificial intelligence to decide how to respond; and some set of tools to conduct that response.1 What is new today is a move towards greater autonomy within the second aspect, the response decision. The progression is from remote-controlled to semi-autonomous and finally to potentially fully autonomous capability. The autonomy categories are not set in stone and are better understood as points on a continuum, with what the Pentagon calls (in its Unmanned Systems Integrated Roadmap) ‘self-directed’ and ‘self-deciding’ systems at either end of the spectrum.2
Self-directed, or semi-autonomous, systems are preprogrammed to perform specific actions that they carry out independently of external influence or control. This type of capability is already in existence. The Global Hawk UAV, for example, normally operates as a remote-controlled platform ‘tethered’ to a human operator thousands of miles away, but it is also designed to operate independently of human control within a particular patrol area designated by its human operators. Likewise, army unmanned ground systems are being designed to move autonomously around the battlefield to undertake specific tasks.3 By contrast, a self-deciding, or fully autonomous, robot would be able to respond to sensed information in different ways depending on the circumstances. Rather than having a preprogrammed response or even a preprogrammed goal, a self-deciding machine would be able to seek the optimal solution in unforeseen situations. It would be able to choose the goal that is dictating its path, adapting and learning from the sensed information around it. That is to say, the robot’s actions would originate within it and reflect its ends.4 It would, in short, ‘act as the human brain does’.5
Incorporating lethality into the ‘remote-controlled–semi-autonomous–fully autonomous’ schema takes us to the heart of contemporary debate […]
Attractions of autonomy
The original driver for unmanned warfare was to assign to a machine those jobs that are dull, repetitive and dangerous. UAVs allow for persistent surveillance of territory by a drone that never gets tired or hungry, and that is controlled from a distance by a person who is not at risk. Intelligence, surveillance and reconnaissance (ISR) is the key task that remote-controlled airborne platforms perform. In future, naval versions could similarly offer persistent surveillance of territorial waters, locating submarines in place of or in conjunction with traditional, manned, anti-submarine warfare aircraft.7 Underwater robots also give reach and capability without putting a person at risk. They are used to hunt for mines and explosives, and in future may be used as small scouting submarines for tasks like port security and surveying the depths of the ocean.8 Unmanned surface vessels, the naval equivalent of UAVs, are being developed, and some countries, such as Singapore, use them to protect manned ships. Unmanned ground vehicles are used primarily to carry out the dangerous tasks of demining and searching for and destroying roadside bombs. Additional roles include patrolling and guarding military warehouses, airfields and port facilities; reconnaissance, such as entering buildings before soldiers; and logistics, aiding and complementing the mobility of soldiers by carrying gear overland.
Despite the benefits of remote-controlled warfare, there are operational shortcomings. The electromagnetic-bandwidth and satellite-connection requirements of tethered platforms present a challenge. These systems are […] own attack decisions. A future force that does not have fully autonomous systems may not be able to effectively compete with an enemy that does.12
There can, however, be military operational disadvantages to greater
autonomy in robotics in warfare. Commanders may want to maintain
control of weapons on the battlefield, staying connected by a link at all times
and disengaging a robot if the link is broken. The fear is that a machine
could somehow compromise an operation, perhaps revealing something
commanders want to keep quiet.13 In this regard, using autonomous robots
might be considered disadvantageous or unduly risky in certain situations.14
Another shortcoming is that not all robots are created equal when it comes
to mission performance. Remote-controlled military robots made their
debut in the air, a domain generally unencumbered by obstacles. But even
early UAVs were prone to technical failure when dealing, for example, with
the dust of Afghanistan. The challenge of negotiating terrain in all-weather circumstances now limits the potential of unmanned ground vehicles. In many situations, such as driving on snow-covered roads, into the sun, or in rain or dust storms, a ground robot’s performance does not reach the level of a human’s.15 Clearly, while robots might be better suited to some roles than humans, humans remain far more talented at others.16
Conduct of war
Just as last century military planners eventually integrated the new technology of manned flight into war-fighting concepts, so too is remote-controlled and robotic technology now being incorporated into thinking about the conduct of war. In the 1990s remote-controlled platforms were used almost exclusively to provide ground forces with a view of what was ‘over the next hill’. What was already being done by manned aircraft was now done in a more persistent manner by unmanned aircraft, and the role was (and is) to support ground forces by providing real-time surveillance information about the position of enemy ground forces. In the 2000s UAVs progressed from being a pure ISR platform to one that combined that function with lethal strike. Predators armed with precision munitions were used in close air support of troops on the ground, again much as manned platforms had previously done (and continue to do).
[…] machines of the First and Second World Wars, to the much smaller, more mobile and agile army units of the information era. Swarming would represent a reversal in this trend – a return to mass in warfare. Quantity – or mass – is re-emerging as critical for gaining military advantage.19
Ethical concerns
As soon as remote-controlled aerial vehicles were armed with precision-strike munitions, robotic warfare started to be considered in terms of the laws of armed conflict. Could it be just for a person thousands of miles from harm’s way to make a strike decision to kill another human being? The answer is yes when one considers that remote-controlled lethal weapons are just the latest development in the move away from face-to-face battle – from cannon, to artillery, to air-to-ground precision strike by manned aircraft. However, the decision to fire must also meet fundamental provisions of the laws of armed conflict, particularly discrimination and proportionality. Discrimination means the ability to distinguish between military objectives and civilian populations and to limit civilian casualties. Proportionality involves an assessment of whether the expected collateral damage of an action is likely to be excessive in relation to the expected gain in military advantage.
A concurrent debate was, and is, whether remote-controlled lethal force makes killing too easy. The thinking is that political leaders may more easily authorise the use of force if they know air personnel are not being put in harm’s way, and that controllers and their commanders will be more likely to pull the trigger. But anecdotal evidence suggests that warfare by committee leads to fewer, not more, strikes. Lawyers and government officials sit in operations rooms looking at video feeds, vetoing any action not considered legal. Moreover, remote-control strikers are not caught up in the rush of combat that can lead pilots at the scene of the action to make tragic decisions with imperfect information. In this vein, robots in warfare are sometimes presented as having moral advantages because they are not human. Robot soldiers will not carry out revenge attacks on civilians, rape people or panic in the heat of battle. They do not have human emotions like fear, anger and guilt, which may lead to war crimes, and they are not constrained by desire for self-preservation.
[…] a response but are not of vital enough concern to warrant a large-scale military deployment. Faced with civil strife in a war-torn nation, for example, a government will think twice about sending ground forces if the circumstance does not pose a direct threat to its interests. Yet it could send drones to help aid agencies track refugees or assist local or indigenous forces on the ground. Natural disasters are one-off situations in which drones are often deployed, but it is conceivable that a fleet of semi-autonomous non-lethal drones could monitor ongoing civil strife on a sustained basis, for example over Sudan. Airborne platforms seem best suited as a humanitarian-assistance or disaster-relief contribution, but in future robots may be similarly useful in the other dimensions of war. It is possible, for example, that units dominated by non-lethal remote-controlled or semi-autonomous ground vehicles could undertake the task of distributing humanitarian aid. In addition, one of the habitual problems of peacekeeping and stabilisation missions is insufficient troop strength. Non-lethal robots could be used to augment boots on the ground in missions that require the presence of many troops over a long period of time.26
As for lethal platforms, remote-controlled ones with a human directly in the loop should be pursued in all dimensions of warfare to enhance military effectiveness and reduce risk to friendly forces. The challenge will be for militaries to integrate such platforms into new doctrines, rather than merely adding them into established modes of operation. Military leaders will need to concurrently rethink existing force structure, taking into account remote-controlled lethal robots as an integral part of future navies, armies and air forces. The possibilities for remote-controlled lethal warfare include both robot-on-human and robot-on-robot engagements. The robot-on-human aspect is already with us (close air support of troops and striking terrorists, for example), but to date there have not been any remote-controlled robot-on-robot engagements such as air-to-air battle between unmanned combat aerial vehicles. This is perhaps the area in which most doctrinal work is needed.
* * *
Military systems that can kill without human intervention are already here. South Korea, for example, has deployed an automated turret on the demilitarised zone between North and South Korea that can identify, track and destroy a moving target, and was originally designed with an auto-firing system. At customer request, the manufacturer put the human back in the loop with a software change.27 What eludes these systems to date is the artificial intelligence for self-decision, judgement, proportionality assessment and discrimination in a given situation that would transform them into fully autonomous robots. Some believe it will never be possible to give a robot these sorts of ‘human’ capabilities, while others believe such advances will materialise in the not-too-distant future.
Lethal autonomous robots, should they appear, will occupy a unique moral ground. On the one hand, they would save lives by taking soldiers out of harm’s way. They would not cause the type of superfluous suffering associated with chemical and biological weapons that led to the ban of those weapons, nor would they have the massive and indiscriminate destructive effect of a nuclear weapon. Indeed, lethal robotic systems are more likely to be precise and cause limited collateral damage. On the other hand, the advent of such robots would be chilling for those who believe that even more threatening than a killer robot that cannot discern civilians from combatants is a robot that can make complex decisions about who it wants to kill.28 The possibility has been raised of machines with superhuman intelligence improving their own design, eventually developing weapons that humans cannot understand or control.29 Taken to the extreme, artificial intelligence is seen as an existential threat to the human race.30
Most arguments for constraining the development and use of autonomous lethal robotic systems are in fact grounded in ethical rather than physical considerations. Proponents of a prohibition point out that a machine cannot be held accountable for unlawful harm caused by its actions, at least in the sense that the robot would appreciate or perceive being ‘punished’ and therefore be deterred. Neither would it be possible to trace liability to a military commander, manufacturer or computer programmer, because the machine will have been acting autonomously.31 The predominance of ethical concerns means that the ‘acceptability’ of lethal autonomous robots will be […]
Notes
1 P.W. Singer, Wired for War: The Robotics Revolution and Conflict in the 21st Century (London: Penguin Books, 2009), p. 45.
2 US Department of Defense, ‘Unmanned Systems Integrated Roadmap FY2013–2038’ (Washington DC: US Department of Defense, undated), http://www.defense.gov/pubs/DOD-USRM-2013.pdf, pp. 66–7.
3 Jeffrey S. Thurnher, ‘Legal Implications of Fully Autonomous Targeting’, Joint Force Quarterly, vol. 67, no. 4, 2012, p. 79.
4 Canadian Army Land Warfare Centre, ‘No Man’s Land: Tech Considerations for Canada’s Future Army’ (Kingston, Ontario: Canadian Army Land Warfare Centre, 2014), pp. 2–52.
5 US Department of Defense, ‘Unmanned Systems Integrated Roadmap’, p. 67.
6 Thomas K. Adams, ‘Future Warfare and the Decline of Human Decisionmaking’, Parameters, Winter 2011–12, vol. 41, no. 1. (Article first published in Parameters in 2001–02.)
7 David Pugliese, ‘Canadian-Made Drone to be Tested for Potential Sub-Hunting Role’, Ottawa Citizen, 22 April 2015, http://ottawacitizen.com/news/politics/canadian-made-drone-to-be-tested-for-potential-sub-hunting-role.
8 Anna Mulrine, ‘New Pentagon Blueprint Sees Bigger Role for Robot Warfare’, Christian Science Monitor, 27 December 2013, http://www.csmonitor.com/World/Security-Watch/2013/1227/New-Pentagon-blueprint-sees-bigger-role-for-robot-warfare.
9 Robert O. Work and Shawn Brimley, ‘20YY: Preparing for War in the Robotic Age’ (Washington DC: Center for a New American Security, January 2014), p. 30.
10 US Air Force General Philip Breedlove, quoted in George Galdorisi, ‘Keeping Humans in the Loop’, US Naval Institute Proceedings, vol. 141, no. 1, February 2015.
11 Paul McLeary, ‘US Army Studying Replacing Thousands of Grunts with Robots’, Defense News, 20 January 2014, http://www.defensenews.com/article/20140120/DEFREG02/301200035/US-Army-Studying-Replacing-Thousands-Grunts-Robots.
12 Thurnher, ‘Legal Implications of Fully Autonomous Targeting’, p. 79.
13 Paul Cornish as paraphrased in Tom Chivers, ‘Robots: Do We Want to Give Them a Licence to Kill?’, Telegraph, 14 November 2013, http://blogs.telegraph.co.uk/news/