Lethal Autonomous Weapon Systems
and the rules governing the conduct of hostilities

Lethal Autonomous Weapons Systems (LAWS) are coming to the battlefields of the future. States
will develop and deploy them, in various stages, over the coming years. The issues raised by the
potential use of these weapons have begun to be addressed by states and NGOs at the international
level at the United Nations in Geneva, within the framework of the Convention on Certain
Conventional Weapons1. Three meetings of experts on LAWS have taken place since 2014, and in
2016 the Fifth Review Conference decided to establish a Group of Governmental Experts (GGE)
on LAWS.

Even though there is currently no international definition of LAWS, we nevertheless have an
understanding of their possible capabilities and of some of the military advantages they could
bring to the states that possess them. No existing weapon is capable of what one would call true
autonomy, but weapons such as drones or defensive systems already display a certain degree of
autonomy, and that degree appears to be increasing. Drones, for instance, bring new and more
varied options when deciding to attack a target. Already, states are less willing to send soldiers to
eliminate specific targets and are likely to use drones whenever possible. Soldiers get tired,
wounded or killed; by contrast, if an autonomous weapon is severely damaged it can be destroyed,
repaired or scrapped. Drones eliminate the short-term risks associated with sending soldiers into
the field, and the long-term issues and costs related to their possible injury or death. Autonomous
weapons cannot hate, become enraged or feel fear, and have (so far) no survival instinct. Soldiers
can, in certain circumstances, respond with inappropriate, uncontrolled or unreliable behavior,
such as rage or retaliation against civilians to avenge fallen comrades. Autonomous weapons
would not share these emotional weaknesses: machines so far lack human feelings and emotions
and are therefore unlikely to have their decisions compromised by them. They are unlikely to
retaliate to avenge the destruction of another machine, or to surrender in the face of a threat. In
such cases autonomous weapons could help reduce civilian casualties in war. The view that
machines can be programmed to behave reliably across a wide range of circumstances is gaining
ground, provided, of course, that they do not malfunction or get hacked.

Significant arguments are being put forward as to why lethal autonomous weapons make military
sense. Western societies are increasingly unwilling to accept not only significant and sustained
casualties among their troops, whether deaths or injuries, but even small ones, if other means of
waging war exist that can prevent or reduce such casualties. Autonomous weapons increase the
physical distance between the attacker and the target, in most cases eliminating the very need for
troops on the ground to carry out the same mission; and as technology evolves, commanders can
be expected to deploy autonomous weapons more and more often to carry out attacks or specific
strikes, avoiding the deployment of troops and the risk of losing men. In addition, machines do
not tire and can fight for longer hours, or wait for a specific target, more easily than troops can.
And with sufficiently capable autonomous weapons,
communication with a remote operator would no longer be required, thereby eliminating the risk
of missing a target through a delay in decision making or in communication with the operator.
With autonomous weapons, armies would not have to fear traitors, since machines cannot betray
you or change sides (at least not willingly). It is therefore logical to expect that lethal autonomous
weapons will continue to be developed for all sorts of purposes, with greater or lesser firepower
and destructive capability, and in all sorts of sizes. LAWS could conceivably be used with minimal
risk in certain environments, such as at sea or in unpopulated areas. So far, man has simply been
interposing technology step by step, replacing the physical presence of soldiers with a remote
operator to conduct and fire weapons that until recently still required a human at the location of
the attack. But as technology continues to evolve, it must be expected that even this remote human
control will gradually be eroded and give way to increasingly sophisticated controls on board the
weapons themselves, leading to weapons that identify, locate and engage targets on their own.

The main question is therefore not whether there is a military advantage in producing and
deploying LAWS, but whether such weapons will be able to comply with the rules of International
Humanitarian Law (IHL) that govern armed conflicts. Some argue that we will be able to program
LAWS to be more cautious and accurate than human beings. Others argue that machines and
computers will always lack the human judgment necessary for the lawful use of force. In fact, the
current state of technology does not yet allow us to determine where that line will fall.

Nevertheless, we can already set certain parameters. Such systems, once developed and deployed,
should be able to distinguish between legitimate targets and civilians or civilian objects, and to
function in populated areas or in unclear situations such as counterinsurgency or entangled
offensive situations. However, IHL and its application were designed for humans, and involve
elements of judgment that, so far, we believe only humans can exercise. Only humans can, for
now, deliberately decide whether or not to comply with its rules. Most humans, whether very
intelligent or not, are able to understand the main rules of IHL. They may choose not to apply
them or to disregard them, but they can generally understand them, mostly because those rules
apply to and concern other human beings and involve decisions of a moral nature that are
recognized in most societies and religions. In addition, laws have been created to deal with the
cases where humans, willingly or not, violate such rules. How machines could be made to respect
the rules of war on their own, without being given capabilities that replicate the human brain's
ability to perceive and judge, remains to be seen. In order to be deployed, autonomous weapons
would have to be bound by the rules of IHL such as distinction (Art 48, 51(2) and 52(2) of
Additional Protocol I (API)), proportionality (Art 51(5)(b) API) and precaution in attack, to name
just a few2.

Concerning distinction, an autonomous weapon will have to be able to distinguish between a
lawful and an unlawful situation. In some cases this would not seem too difficult, for instance
distinguishing military installations or vehicles from non-military ones, especially if combat is
taking place in an open environment, and even more so in a traditional international conflict where
two state armies fight each other and combatants wear uniforms. But that assumes that the
situation is clear, or that combat takes place in an open environment. The distinction between
civilians and combatants becomes much more difficult if fighting is taking place in an urban
area, for instance. In such cases, civilians are not supposed to take part in hostilities but in fact
often do. Direct participation of civilians in hostilities makes the distinction process very difficult;
distinguishing civilians from combatants is already hard for trained military personnel, and it
seems difficult to believe, at least for now, that autonomous weapons will be able to make such a
distinction in the near future. The case of civilians carrying arms without taking part in hostilities
would also pose a problem. In some cultures, weapons are common among civilians. This does
not automatically mean that they are taking part in hostilities, and carrying a gun does not
automatically make a civilian a valid military target; the context, the actions and the intent need
to be considered. At the third CCW meeting of experts on autonomous weapons, held in Geneva
in April 2016, Switzerland insisted that autonomous weapons should also be able to respect the
rules prohibiting the denial of quarter and protecting persons hors de combat: autonomous
weapons should preserve a reasonable possibility for combatants to surrender3. In any case,
according to Art 50(1) of API, if during the targeting phase there is doubt as to whether a person
is a civilian, that person must be considered a civilian and is immune from attack. These rules are
also contained in other treaties banning specific weapons, in customary law and in military
manuals. They are now customary, apply in all types of conflict (international or non-international),
and bind all states, whether party to API or not. Most autonomous weapons being developed today
are primarily aerial vehicles and do not differ much in their means of attack (bombs, missiles)
from warplanes or drones; their missions are generally simple. But autonomous machines fighting
on land would certainly face the issues described above. For now at least, there is no artificial
intelligence capable of analyzing such complex situations. No autonomous weapon today would
be able to distinguish between active combatants and those who are hors de combat, or to
determine whether someone is willing to surrender or no longer poses a threat. Such assessments
all currently require human judgment.

Considering proportionality, autonomous weapons would have to analyze whether an attack can
be carried out. The decision to attack any target must be taken after weighing the military
advantage against the possible damage to civilian objects or protected persons: the attack must
not cause damage that would be excessive in relation to the concrete and direct military advantage
anticipated. Such decisions are not black and white; they always involve a difficult and delicate
balance, often depending on a complex context. They are subjective, and human judgment is so
far the only way to assess proportionality between the military advantage and the damage
anticipated. Battlefields are changing environments with incomplete information; military
advantage and proportionality are weighed by military commanders when they decide specifically
which target will be attacked and when. Such determinations are contextual, and the lawfulness
of a decision to attack, to suspend an attack that no longer seems proportionate, or to choose a
particular target can change as facts on the ground change. It seems difficult to preprogram all
possible scenarios, so very sophisticated artificial intelligence on board the weapon itself would
be required to allow such weighing of complex facts. Some experts, such as Michael N. Schmitt,
argue that military advantage algorithms could in theory be programmed into autonomous weapon
systems4, but no artificial intelligence has yet appeared that can make these subjective assessments.
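
To make concrete what such a suggestion would involve, the following is a minimal, purely
hypothetical sketch (in Python, with invented names and numeric scales) of the proportionality
test reduced to a threshold rule. It is not a proposal or an existing implementation; its point is that
the comparison itself is trivial to code, while the inputs it depends on, the anticipated military
advantage and the expected civilian harm, are exactly the subjective, contextual assessments that
no current artificial intelligence can produce.

```python
# Purely illustrative sketch: the proportionality rule of Art 51(5)(b) API reduced
# to a naive threshold comparison. All names and scales below are invented;
# nothing here reflects an existing system.

from dataclasses import dataclass


@dataclass
class StrikeAssessment:
    # Anticipated concrete and direct military advantage (no agreed scale exists).
    military_advantage: float
    # Expected incidental civilian deaths, injuries and damage to civilian objects.
    expected_civilian_harm: float
    # How much harm is tolerated per unit of advantage before it counts as "excessive".
    excessiveness_threshold: float


def attack_is_proportionate(a: StrikeAssessment) -> bool:
    """Naive stand-in for the proportionality test: incidental harm must not be
    excessive in relation to the anticipated military advantage."""
    if a.military_advantage <= 0:
        # No military advantage: incidental civilian harm cannot be justified.
        return False
    return a.expected_civilian_harm <= a.excessiveness_threshold * a.military_advantage


# The comparison is trivial; everything contentious is hidden in how the three
# numbers would be produced, and whether they remain valid as facts on the ground change.
print(attack_is_proportionate(StrikeAssessment(
    military_advantage=5.0,
    expected_civilian_harm=2.0,
    excessiveness_threshold=0.5)))
```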


Considering precaution in attack, military commanders must decide which weapons to use based
on the target they have chosen, the weapons at their disposal at the time, and the need to minimize
civilian casualties and superfluous injury. This assessment depends on the context, and human
judgement currently remains the most reliable way to make it. One could argue, however, that this
requirement might be dealt with more easily by autonomous weapons, should the two
requirements discussed above come within the reach of artificial intelligence.

Altogether, there is considerable debate between technical and legal experts on whether LAWS
can respect IHL rules. Some organizations, such as the Campaign to Stop Killer Robots, believe
that these weapons will never be capable of respecting IHL rules and have been calling for a
preemptive ban5. But as Kenneth Anderson and Matthew Waxman note in their paper on why a
preemptive ban would not work, "the emergence of a new weapon often sparks an insistence in
some quarters that the weapon is ethically and legally abhorrent and should be prohibited by
law"6. Other experts argue that LAWS can be programmed to follow IHL rules. In addition, the
very definition of what constitutes an autonomous weapon is far from having reached consensus,
which would make a ban difficult to implement. Some organizations, like the ICRC, are not calling
for a preemptive ban and take a more pragmatic approach, although the ICRC is generally
skeptical about the capacity of LAWS to respect IHL rules. In 2013, at a seminar on fully
autonomous weapon systems at the French Permanent Mission in Geneva, Kathleen Lawand, head
of the arms unit at the ICRC, stated that "The ICRC has a number of concerns regarding the
capability of autonomous weapon systems to comply with IHL. In particular, developing the
capacity of autonomous weapon systems to fully comply with the IHL rules of distinction,
proportionality and precautions in attack appears today to be a monumental programming
challenge. Indeed, it may very well prove impossible." She did not, however, completely shut
down the idea that it might one day be possible to program an autonomous weapon system to fully
comply with the rules of distinction, proportionality and precautions in attack7. As the Permanent
Mission of the Holy See in Geneva explained in its working paper on LAWS in April 20168, it is
sometimes necessary to go beyond the "letter" of the law and interpret it according to the context
in order to preserve the "spirit" of the law. Even if autonomous weapons could be programmed to
respect the rules of IHL, it seems difficult to program them to interpret the context within the spirit
of the law and to go beyond their preprogrammed rules.

Another question arising from potential violations of IHL, should LAWS be deployed, is who
would be responsible for mistakes. International criminal law only punishes voluntary violations
of IHL and only applies to humans. Would the use of these weapons create an accountability gap?
Who could be held responsible in case of error, malfunction or hacking? This will be the subject
of a forthcoming memorandum.

1 http://www.unog.ch/80256EE600585943/(httpPages)/8FA3C2562A60FF81C1257CE600393DF6?OpenDocument
2 https://www.icrc.org/eng/assets/files/other/icrc_002_0321.pdf
3 http://www.unog.ch/80256EDD006B8954/(httpAssets)/558F0762F97E8064C1257F9B0051970A/$file/2016.04.13+LAWS+Legal+Session+(as+read).pdf
4 http://harvardnsj.org/wp-content/uploads/2013/02/Schmitt-Autonomous-Weapon-Systems-and-IHL-Final.pdf (p. 20)
5 http://www.stopkillerrobots.org/the-solution/
6 http://www.unog.ch/80256EDD006B8954/(httpAssets)/702327CF5F68E71DC1257CC2004245BE/$file/LawandEthicsforAutonomousWeaponSystems_Whyabanwontworkandhowthelawsofwarcan_Waxman+anderson.pdf (p. 8)
7 https://www.icrc.org/eng/resources/documents/statement/2013/09-03-autonomous-weapons.htm
8 http://www.unog.ch/80256EDD006B8954/(httpAssets)/752E16C02C9AECE4C1257F8F0040D05A/$file/2016_LAWSMX_CountryPaper_Holy+See.pdf
