The regulation of the use of Artificial Intelligence (AI) in warfare: between International Humanitarian Law (IHL) and Meaningful Human Control

MATEUS DE OLIVEIRA FORNASIER


Doctor of Law (UNISINOS), with a Post-Doctorate in Law and Theory from the University of Westminster (England). Professor in the Stricto Sensu Graduate Program (Master's and Doctorate) (UNIJUI).

Article received on July 6, 2020, and approved on March 10, 2021.

CONTENTS: 1 Introduction 2 Principles for lethal autonomous weapons (LAWs) 3 On the difficulties related to the regulation of the use of AI in warfare by IHL 4 Conclusion 5 References.

ABSTRACT: This article studies the proper principles for the regulation of autonomous weapons, some of which have already been incorporated into International Humanitarian Law (IHL), while others remain merely theoretical. The distinction between civilians and non-civilians, the closing of liability gaps and proportionality are fundamental principles for the regulation of the warlike use of artificial intelligence (AI), but meaningful human control of warlike AI must be added to them. Through the hypothetical-deductive procedure, with a qualitative approach and bibliographic review, it is concluded that the realization of the distinction criterion, value-sensitive design, the elimination of accountability gaps, meaningful human control and IHL must support the regulation of the use of autonomous weapon systems. However, distinction between civilians and non-civilians and proportionality are not yet technologically possible, which makes compliance with IHL still dependent on meaningful human control; and the opacity of warlike AI algorithms would make legal accountability for their use difficult.

KEYWORDS: Artificial Intelligence. Autonomous Weapons. Regulation. Responsibility. Meaningful Human Control.

Revista Jurídica da Presidência, Brasília, v. 23, n. 129, Fev./Maio 2021, p. 67-93. http://dx.doi.org/10.20499/2236-3645.RJP2021v23e129-2229



1 Introduction

Autonomous weapons result from the combination of AI and military equipment in general; they are capable of operating on land, in the air, in water and in outer space, and are categorized by the US Armed Forces as defensive or offensive (DEL MONTE, 2018, p. 95-97). Such weapons can also be non-lethal (such as surveillance equipment) or lethal (selecting and attacking targets without human intervention). They are highly worrisome, and the United Nations is still addressing the ban on lethal autonomous weapons, even though the US, China and Russia are currently deploying them (e.g. the US Navy Phalanx system, which autonomously identifies and attacks anti-ship missiles and has counterparts among the Chinese and the Russians). When applied to weapons, AI becomes very interesting militarily: it incomparably improves the speed, stealth and effectiveness of military operations, and significantly reduces spending as well, allowing personnel to devote their time to tasks that currently require real human attention (FROESE, 2018).
That is an extremely relevant topic in a scenario of technological development also directed toward military use. It is therefore up to the studies of Public International Law – mainly International Humanitarian Law (IHL) – to contribute to an update, in the light of technical normative issues and in a transdisciplinary way, seeking intersections between the Philosophies of Technology and of Law to acquire insights into the regulation of warlike AI. Given this, the following question can be asked: which principles of IHL can serve as a basis for the regulation of autonomous weapons? Hypothetically, the distinction between civilians and non-civilians, the closing of responsibility gaps, and proportionality as well, are fundamental principles for the regulation of the military use of AI, but meaningful human control of AI in war must be added to them, in a necessary update of IHL. Methodologically, a literature review was conducted, with a hypothetical-deductive procedure and a qualitative approach.
This work aimed to study the most adequate principles for the regulation of autonomous weapons, some of which are already enforced in IHL rules, while others are still only theoretical. Its development was divided into two specific objectives. In its first section, it was analyzed which principles related to the development and use of autonomous weapons have been pointed out as the most suitable for their regulation. For that, the probable consequences of the use of autonomous weapons for the future of warfare were explained; soon after, the principles of distinction, value-sensitive design, the elimination of responsibility gaps and, in particular,


meaningful human control, were studied. The second section covered the most
important IHL rules for the future regulation of the matter.

2 Principles for lethal autonomous weapons (LAWs)


AI is expected to revolutionize military operations, being useful in almost all spheres of military activity: from training to actions; in autonomous and semi-autonomous systems (which are controlled by humans or operate in human-machine interaction). States would be obliged to invest in AI research in the long term, from both a civil and a military point of view (LELE, 2019, p. 153). Defensively, AI is most relevant to decision-making (from planning to actual operations) at various levels of military action, in times of peace and war. China, Great Britain, Israel, Russia, South Korea and the USA are investing in these weapons, with applications such as:

(I) Drones (aerial, terrestrial or submarine): they can be used for automated analysis of data and images (serving as support for decisions), or for combat.

(II) Lethal autonomous weapons (LAWs): permanently installed systems that act autonomously to prevent attacks on civilian and military installations (such as dams and nuclear facilities), borders and warships.

(III) Autonomous (or partially autonomous) assistants: used to dismantle mines and bombs, evacuate the wounded from battle zones, deliver supplies and explore places inaccessible to humans. (KREUTZER; SIRRENBERG, 2020, p. 230-231).

Sinchana et al. (2020) list the main uses of AI technologies in military defense today:

(I) Ontologies: data models used to represent knowledge and the relationships between various concepts about something. Used to represent operations, intelligence and logistics, and to define entities, events, tactics, techniques and processes.

(II) Knowledge-based systems and AI capacity: software that uses methods of inference and knowledge to solve problems. Used by the Air Forces to provide technical expertise in maintenance.

(III) Autonomous weapons: weapon systems capable of selecting and acting on targets without human intervention. Drones, advanced robots, artillery, etc., designed and used to act in specific combat situations.

(IV) Terrain analysis: use of systems for site analysis before military operations. Their use underlies tactical, operational and intelligence decisions. (SINCHANA et al., 2020).


Autonomous weapons with human-level autonomy will be deployed by the main nuclear powers (USA, China and Russia) by 2050 (DEL MONTE, 2018, p. 165-166). In this new reality – in which the main nuclear powers will not face each other directly, as this would trigger total human annihilation – such weapons will pose a danger to humanity beyond their destructive potential. Their autonomy can trigger a war that diplomacy may be unable to avert and, once the great powers and some rogue States have autonomous weapons, the chances of miscalculation or misinterpretation grow – and, consequently, so does the risk of a war that would continue on autopilot until human annihilation. Israel already has the Harop, a drone that searches for and destroys radar systems in a completely autonomous way, flying until a target appears. The US anti-submarine vessel Sea Hunter sails for months at a time, looking for enemies without people on board. In addition, there are about 400 partially autonomous weapons and robotic systems under development worldwide (COKER, 2019, p. 57).
For Johnson (2019, p. 159-160), unmanned autonomous weapons are deployed both defensively and offensively, which can undermine the deterrent utility of existing defense systems. In addition, the merger of AI with early warning systems, shortening decision time and facilitating the location of high-value hidden military assets (such as submarines), may adversely affect international security and stability in the use of nuclear weapons. The rapid diffusion and dual-use resources provided by autonomous weapons will complicate the ability of States to anticipate, attribute and effectively counter future autonomous attacks. Thus, the incipient development of counter-AI will have increasing importance in national security and in strategic calculations. And the relatively slow pace of AI development in the global defense industry relative to the commercial sector will affect the balance of power and the structure of international competition, further worsening the prospects for international security.
As dependencies between the digital and physical domains increase, so do threats from cyber attacks. Machine learning will expand the scope and scale of future cyber attacks, which can overwhelm the States' fledgling cyber defenses. AI's many unexplainable features (the black box problem) will exacerbate these risks and further complicate defense planning for an uncertain and complex strategic scenario. For now, it is still unclear which capabilities AI will leverage, whether new weapons will emerge, and how this dynamic will affect the future military and strategic balance between States – and potentially between States and non-State entities as well.
The rapid US-China race to innovate in AI will have profound and destabilizing implications for future security. Since both sides are internalizing such technologies, each side is likely to conceive of them very differently. Chinese and American prejudices, preferences and other cognitive biases will be codified and inserted into autonomous weapons. Under conditions of crisis and conflict, such biases will exacerbate the underlying distrust and misperceptions between the US and China. Those technical challenges would increase the perception in Washington that Beijing intends to exploit AI to fulfill its geopolitical ambitions, and vice versa. But although the Sino-American competition for autonomous weapons is not yet a security dilemma, several concerns emerge from the situation:

(I) A security dilemma is an unstable and dangerous situation, marked by uncertainties, miscalculations and struggles for power. Although there are alarmists who compare the competition in question to an arms race, it may still be more sensible to view AI as a transformative technology with practically endless political, economic and social applications – as happened in the past with electricity, which was used for great human progress, but also brought a range of unintended consequences.

(II) Although the situation does not reach the level of a security dilemma, China and the USA are competing fiercely for the potential gains of AI, which may trigger a real security dilemma;

(III) Chinese and American policymakers should strive to understand the possible political (e.g. compromised democratic elections), economic (replacement of human workers by machines) and ethical (related to privacy, the advance of cyber war, etc.) implications of the application of AI, but as a rule one should not fear every instance of its implementation. (BROWN, 2020, p. 33-34).

Although the fast adoption of AI technologies can transform war on several fronts, it carries new risks that will have to be reconciled with the broad integration of algorithmic systems into military functions worldwide (JENSEN; WHYTE; CUOMO, 2019). Where the new technology promises to transform the characteristics of military power, it also complicates the cognitive aspects of decision and bureaucratic interactions in security institutions. The speed with which AI systems enable new forms of war also destabilizes human agency in conducting it. Technology reshapes human daily life and behavior through algorithms. Individuals gradually lose autonomy, becoming mere points in a global information grid. Every time one communicates digitally, one reconnects to the network as a transmitter and activates the brain's reward circuits (which is why digital technology is so addictive).
Thus, humanity is experiencing a change in its relationship with technology, in its agency capacity and in the brain's neural patterns, as humans start to use what is programmed into machines to manage themselves: internet search engines manage individuals by reading their thoughts, directing them to what the engines decide each individual would consider interesting. Algorithms also monitor the brain rhythms, heartbeat and eye movements of drone pilots, checking their attention and concentration at work. A pilot can be disconnected if considered at risk of becoming overly stressed, with the task consequently transferred to another person who is momentarily more capable (COKER, 2019, p. 56).
Autonomous weapons will outperform humans in moral situations, as the conditions of combat encourage the loss of self-control and many of the immoral actions that result from it (COKER, 2019, p. 58-59). Robots, by comparison, do not have the same fighting dynamics as humans: they do not harbor the same prejudices against enemies, nor would they have the propensity to favor preexisting belief patterns, or human feelings such as guilt and shame.
AI will drive war even further because of technological factors – with mankind being increasingly absorbed by them, and they by mankind, in a symbiotic and post-human situation. AI-oriented systems will be less and less considered as tools, and more as collaborators. Machines are better than humans at routine, repetitive and incessant tasks (such as risk monitoring, information gathering, data analysis and pattern discovery, reacting quickly and operating other machines). This demands a change in humans' attitude towards the machine (from master to colleague) – otherwise, the human can become excessively dependent on it and lose independent agency.
Regarding the development and use of autonomous weapons, there are positions that consider them ethically inadmissible, taking the removal of humans from control over killing as a sufficient premise for banning autonomous weapons. For opposing positions, due to the relatively low cost of this type of apparatus/system, the possibility of programming their algorithms with moral rules, and the possibility of removing humans from the front lines of combat, these technologies should be developed and used in conflicts. Furthermore, autonomous moral weapons may be the only entities capable of making genuinely ethical decisions about destroying their targets. However, there are requirements for their use:

(I) Such entities must have systems of guidance and judgment equal to or greater than the abilities of human beings;

(II) Moral programs with which all parties agree (relating, for example, to the Law of War and IHL) must be incorporated into them;

(III) Disregard of these requirements would subject those responsible to international pressure and sanctions;

(IV) New protocols on autonomous weapons must be added to international norms related to war. (UMBRELLO; TORRES; DE BELLIS, 2019, p. 8).

Thus, a design that ethically directs the development of autonomous weapons can be proposed – value-sensitive design, with principles that incorporate the values of stakeholders in a project and encourage stakeholder cooperation and coordination, thereby promoting the social acceptance of autonomous weapons as a future preferable to war involving human losses (UMBRELLO, 2019, p. 30). Technologies are always imbued with values, implicitly or explicitly. The development of international governance for autonomous weapons is therefore vital. And through the value-sensitive design approach, along with the incorporation of IHL as a basis for determining design requirements, such autonomous weapons could incorporate ethical-legal values.
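As an illustration of how this approach could operate in practice, the minimal sketch below, in Python, encodes IHL-derived values as explicit, machine-checkable design requirements that a design review could run against a proposal. All names (DesignProposal, REQUIREMENTS, review) are hypothetical and purely illustrative of the value-sensitive design idea, not a description of any real weapons-engineering process.

# Illustrative sketch only: hypothetical names, not a real weapons-engineering API.
# It shows the value-sensitive design idea of turning IHL-derived values into
# explicit, machine-checkable requirements that a design review can run.
from dataclasses import dataclass

@dataclass
class DesignProposal:
    can_distinguish_civilians: bool   # distinction capability claimed by the design
    supports_human_abort: bool        # channel for timely human intervention
    logs_decisions: bool              # audit trail for accountability

# Each requirement pairs an IHL-derived value with a concrete, testable property.
REQUIREMENTS = {
    "distinction (AP I, arts. 50-54)": lambda d: d.can_distinguish_civilians,
    "meaningful human control":        lambda d: d.supports_human_abort,
    "accountability / reviewability":  lambda d: d.logs_decisions,
}

def review(design: DesignProposal) -> list[str]:
    """Return the IHL-derived requirements the design fails to meet."""
    return [name for name, check in REQUIREMENTS.items() if not check(design)]

if __name__ == "__main__":
    proposal = DesignProposal(can_distinguish_civilians=False,
                              supports_human_abort=True,
                              logs_decisions=True)
    print(review(proposal))  # -> ['distinction (AP I, arts. 50-54)']

The point of the sketch is only that stakeholder values stop being implicit once they are expressed as requirements a reviewer can test against each design.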
Levinghaus (2016, p. 119-122) presents interesting points related to ethics in the use of autonomous weapons. Firstly, at the current stage of technology, it is impossible to design a weapon that, once programmed, fulfills with high reliability the ideally constructed premises of the criterion of distinction – the ability of machines to distinguish between legitimate and illegitimate human targets. And from the perspective of IHL, autonomous weapons that are unable to distinguish are illegal. But autonomous systems capable of distinguishing between various types of objects, which autonomously track and destroy (non-human) enemy objects, are by no means unethical or illegal, as their use is not related to human destruction.
It is also important to address responsibility gaps, which arise in situations where no one can be held liable for the use of force during armed conflicts. In the case of autonomous weapons, non-accountability cannot be admitted. Here the issue largely concerns the increase in individual risk caused by sending autonomous weapons into warfare action. In other words, one must always question how safe autonomous weapons are – as enemy hackers could invade them, for example. Thus, an interdisciplinary approach – combining normative and cognitive knowledge about autonomous weapons – must be adopted.
It is also interesting to note the temporality of the liability assessment, which, at first glance, is retrospective (relating to past facts). However, a so-called prospective responsibility can also be considered, obliging interested parties to create adequate structures to prevent accidents. Thus, adequate standards of prospective responsibility for the use and development of autonomous weapons must be guaranteed, and the risks arising from their deployment must be mitigated by the military and weapons designers – as they are the ones who must carry out regular reviews of arms technologies to ensure that projects meet the relevant standards.
Fully autonomous weapons, having a range of thousands of kilometers and the power to choose human targets, endanger responsibility for warlike actions and, at the same time, can undermine human dignity in some scenarios, even if they behave in an IHL-compatible manner (HOROWITZ, 2016, p. 33-34). While it is possible to resolve (or mitigate) this problem through training, accountability rules and restrictions on the scenarios in which such weapons may be used, this area requires further investigation.
Autonomous military systems can create significant moral dilemmas because of the total authority – not merely complementary to human judgment – that could be given to the operating system, which would make human beings ethically less relevant to war decisions. However, such systems are not yet possible, and it is unlikely that humans will give up this level of control over war. It is necessary, therefore, to rethink the theory of war in view of the use of autonomous weapons: on the one hand, they can increasingly alienate humans from the relevance of conducting war; on the other hand, war itself could become more precise, involving less unnecessary human suffering. In any case, it will be essential to ensure that the human element remains the central protagonist of war.
Efforts to minimize the risks of AI in the military field should focus not only on definitional issues related to autonomous weapons, but also on binding principles for responsible AI research (SURBER, 2018). In this sense, the use of the term autonomy for technological artifacts must be considered a synonym for the loss of human control over, and responsibility for, the results of technological processes. Guiding principles for AI research may require programmers and engineers to develop only technological artifacts whose results will remain controllable by humans, and for which humans will always be responsible. It would also be advisable to group together the principles identified by initiatives of professional organizations and representatives of the private sector on the issue, and to create an international body to oversee compliance with such standards.
Introducing software code and algorithms into a legislative process would require the creation of a permanent political-technological interface through, for example, fixed State departments for AI. A constant dialogue between technology experts and politicians within an institutional framework can limit the risk that programmers and policymakers evade responsibility for the immoral results of autonomous systems. In addition, such an idea would require source codes to be publicly accessible.
Several reasons can be listed to support the claim that autonomous weapons attack human dignity, and perhaps the main ones are (SHARKEY, 2019): (I) the suppression of human agency over decisions about other people's lives and deaths (thus totally subjugating the human to the autonomous action of machines); (II) the possibility of disrespecting meaningful human control over such systems/apparatus. There are many technological applications that affect human dignity (such as chemical, nuclear and biological weapons), in addition to the fact that human behavior itself may harm the dignity of others (by causing unnecessary suffering).
Thus, human dignity alone cannot be considered a very reliable basis for arguments against autonomous weapons when they are compared to other means and weapons of war. The risk to human dignity is only one of the reasons for banning autonomous weapons and maintaining human control over them – and it is perhaps not the most compelling. It is wiser to resort to various types of objections in arguments against autonomous weapons, not just to human dignity – not least because there is no philosophical consensus about its meaning. Three other classes of arguments must then be added to dignity:
(I) non-conformities of autonomous weapons to IHL;
(II) the need for meaningful human control in the use of the technology;
(III) the deleterious consequences of using such technology for global stability.
AI weapons would create a new arms race that would put everyone at risk, one much more widespread than the nuclear arms race, as such weapons are cheaper and much easier to develop independently. States will be better off preventing the development and implementation of such systems. Preventive governance structures should be implemented, with limits on the development of AI weapons with the potential to violate IHL (GARCIA, 2019, p. 340).
Warlike AI will make war more inhumane, because the attribution of responsibility is a necessary element for holding war criminals liable, and these weapons make such attribution much more difficult. Currently, nations have one of the greatest opportunities in history to promote a better future, developing security structures that preventively prohibit AI armament and ensure that such technology is used only for the common good of humanity.


2.1 On the importance of meaningful human control


Meaningful human control emerged in Geneva between 2014 and 2015 as the most promising starting point for a holistic approach to the regulation of autonomous weapons, and it seems to have become a contemporary trend in the debate on the matter (BHUTA; BECK; GEISS, 2016, p. 381 et seq.). There is a contradiction, however, in the possibility of meaningful human control in the context of autonomous weapons – for where there is meaningful human control, there cannot be complete autonomy for machines. Such a notion thus seems to be merely a synonym (more constructive and internationally palatable) for a ban on total autonomy over certain critical functions of an autonomous weapon system. Furthermore, although human control is seen as an ambiguous principle, the States that form the European Union have helped, within the framework of UN discussions on the topic of lethal autonomous weapons (LAWs), to enshrine it as the new organizing principle for regulating the question (BARBÉ; BADELL, 2020, p. 147).
At the most basic normative level, meaningful human control can be developed from two premises (ROFF; MOYES, 2016): (I) a machine that applies force and operates without any human control must not be accepted; (II) a human being simply pressing a start button in response to cues from a computer, without clarity or cognitive perception, is not enough for meaningful human control. Although diplomatic responses to the concept of meaningful human control tend to focus more on the term meaningful, the expression generally indicates only the need for the political community to outline which forms of human control are needed. This process may be based on general control principles, cumulatively constituting prior, concomitant and posterior control mechanisms in relation to the use of force – the following principles result from that thought (a short code sketch after the list illustrates some of them):
(I) Predictability: it must be possible to predict the consequences, within humanly understandable parameters, of the design of a technology and of the production, storage and maintenance of its apparatus, and the provision of accurate information throughout the system must be maintained;
(II) Reliability: the technology must be designed to offer reliability – and also to degrade gracefully in case of malfunction, preventing catastrophic failures (an important control in case of exogenous events or shocks);
(III) Transparency: the technology must be designed so that it becomes possible, if necessary, to interrogate the system in order to clearly, intuitively and humanly inform users/operators about the decisions, objectives and logic used by the system in carrying out its actions – and its design must be focused on its actual user, not an ideal one;
(IV) Accuracy of information: the user must have accurate information in order to understand the intended results (purposes), the technology and the processes it applies, as well as the context in which it is supposed to work;
(V) Possibility of timely human intervention: a human user must initiate the operation of a given technology while the contextual information on which they are acting is still relevant. Furthermore, although systems may be designed to operate more quickly than is humanly possible, there must be resources for timely intervention by another system, process or human being;
(VI) Existence of responsibility to a certain standard: liability for the use of a technology must be conceptually and practically linked to the potential for timely human action and intervention. In addition, it must reaffirm who holds responsibility (for processes initiated), condition the technical system (ensuring that its users/operators understand the consequences of their actions or omissions), and focus on the broader entities (organizations, for example) that produce such technological devices and systems – although it may initially fall on the human agents related to the system.
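The sketch below, in Python, is a minimal illustration of how principles (IV), (V) and (VI) might be operationalized as a human-in-the-loop authorization gate: force is authorized only on fresh contextual information and an explicit human decision, and every request is written to an audit trail. All names and the 60-second window are hypothetical assumptions for illustration, not a description of any real system.

# Minimal sketch, assuming hypothetical names: it is not a description of any
# real weapon system, only an illustration of principles (IV), (V) and (VI) --
# accurate context information, timely human intervention, and an audit trail.
import time

CONTEXT_VALIDITY_SECONDS = 60  # assumed freshness window for targeting context

audit_log = []  # principle (VI): records who authorized what, and when

def request_engagement(context_timestamp: float, operator_decision: str,
                       operator_id: str) -> bool:
    """Authorize force only under fresh context and an explicit human decision."""
    now = time.time()
    stale = (now - context_timestamp) > CONTEXT_VALIDITY_SECONDS
    approved = (operator_decision == "approve") and not stale
    audit_log.append({
        "time": now,
        "operator": operator_id,
        "decision": operator_decision,
        "context_age_s": now - context_timestamp,
        "outcome": "engage" if approved else "abort",
    })
    return approved

# A bare button press on stale information is refused (premise II above):
print(request_engagement(time.time() - 300, "approve", "op-17"))  # False: stale
print(request_engagement(time.time(), "approve", "op-17"))        # True

The design choice worth noting is that the gate refuses a bare "approve" on stale context: pressing a start button without relevant information is exactly what premise (II) above rules out as meaningful control.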
The specification of the sufficient level of control is difficult to detail, but technological categories can still be assessed against those considerations. Technological thresholds, such as automation and autonomy, may represent different capacities for predictability, reliable information about the context of use, timely intervention or the attribution of responsibility. In addition, any extension of the legal concept of what an attack is can also be contested in the face of these regulatory requirements. In fact, the meaning of the expression attack is particularly important because the texture of the law is open – which is both beneficial and harmful: beneficial because its interpretation can change as needs arise; harmful because it can also be changed arbitrarily. A mere duty of conformity is not sufficient when key terms can be interpreted so openly in the context of autonomous weapons as to empty any argument in favor of the application of human legal judgment.
The functioning of human communities also depends on knowledge of, and compliance with, social norms, as well as on the possibility of justifying their violation in favor of more important ones. It is therefore necessary that artificial agents, when becoming part of human communities, follow such norms too – contextualized, of course, according to their own peculiarities – and that the norms binding human conduct be known to them as well. But moral judgments are not based only on compliance with rules, as reasons are also needed to motivate action (SCHEUTZ; MALLE, 2020). Thus, humanly accessible transparency of the reasons that led a machine to decide on a certain standard is fundamental – which includes justifying its actions according to the applicable standards.
If machines enter society, deciding on humans' lives and deaths or assuming socially influential roles, they will have to be able to decide and act in a normatively compatible manner, expressing their knowledge of the applicable rules before acting, and offering appropriate justifications afterwards, especially in response to criticism. Thus, it is up to humans to design artificial agents and provide them with this form of normative competence, which will serve as a social safeguard, ensuring that they are able to improve the human condition.

3 On the difficulties related to the regulation of the use of AI in warfare by IHL


Arms races can be managed, targeted, or even prevented (MAAS, 2019, p. 303-304). Their regulation may be direct (by coalitions or national political regimes), elaborated downwards through norms, or built upwards through the establishment of epistemic communities – small, properly organized communities of specialists. But these approaches are limited when it comes to military AI, given the limited consensus on the ethical character of such technologies. Furthermore, the window of opportunity to coordinate such a community and institutionalize global norms on military AI may already be closing – and this is aggravated by the fact that the current debates in the nascent epistemic community have objectives that are too narrow, concerning only the insertion of human control over such systems/apparatus.
The nascent epistemic community that opposes military AI should look beyond its focus on banning LAWs solely for ethical and legal reasons. While those reasons are valid, a more effective and comprehensive control effort for military AI may require this epistemic community to reorganize and readjust itself; the community must embrace the entire portfolio of justifications that has historically driven arms control – which includes not only ethics and legality, but also strategic stability and security. This approach is more likely to affect policy, as it offers more plausible justifications to actors sympathetic to the control of military AI, which helps in the dissemination and institutionalization of standards, in addition to directing control efforts across the entire spectrum of risks presented by military AI.


With regard to non-state actors, it is also necessary to ensure that technology companies, when deciding to undertake efforts to develop autonomous weapons, educate themselves about IHL and participate in the debates focused on AI security (LEWIS, 2019). In other words, a more informed approach by business is recommended, one that benefits from the history, law, principles and practices that have long been designed to promote humanity in war. These steps can help to constructively regulate the use of AI in war.
It is hard to believe that States would somehow agree to return to a situation without autonomous weapons (ETZIONI; ETZIONI, 2017, p. 79-80). Thus, perhaps the most promising way to regulate them is, initially, to determine whether it is possible to enter into international agreements prohibiting fully autonomous weapons whose missions cannot be aborted or recalled – because if such weapons malfunction and head toward civilian centers, it will be impossible to stop them, analogously to what happens with unexploded landmines placed without marking.
Several civil society organizations are actively pushing for a ban, including the Future of Life Institute and the Campaign to Stop Killer Robots. In addition, several States have been discussing such weapons under the United Nations Convention on Certain Conventional Weapons (CCW) since 2014. A Group of Governmental Experts (GGE) on lethal autonomous weapons met for the first time in 2017, but its work has progressed very slowly so far – owing, among other reasons, to divergences in definitions, methodology and the expected scope of negotiations. There is no consensus on the desirable scope of any prohibitions to be imposed on autonomous weapons, and the military powers remain cautious about introducing severe restrictions on the use of these technologies (GARCIA, 2019). Nevertheless, the States involved agree that, despite the dubious nature and rapid development of those technologies, progress in the AI sector and its civil applications should not be hindered. Furthermore, they agree that IHL must be respected. Meaningful human control is advocated by most countries, especially with regard to target selection and attack.
There is currently no direct and specific legal restriction on the use of fully autonomous combat systems – however, the use of such weapons contradicts IHL rules. Furthermore, a comprehensive ban on the development and use of robotic technologies is unlikely to be possible. The most plausible scenario for solving the problem internationally is simply a ban on the direct use of this type of military equipment during armed conflicts. At the same time, it is necessary to outline acceptable areas for the application of robotic technologies, with medical and logistical support for military operations, military construction and bomb removal robots being the most advisable areas for their use (YURIEVICH et al., 2019).
The most likely scenario for drawing up an international agreement on the problem of lethal automatic systems is not a comprehensive ban on the development, use and distribution of technologies without meaningful human control, but a ban on the use of this type of military equipment. Moreover, considering the political and economic interests in the current global scenario, the likelihood of swift adoption of a binding document is not high. More importantly, the regulation of autonomous weapons must establish in which areas their use would be acceptable.
An international regulation for LAWs must give fundamental importance to meaningful human control – and in that sense, although the UN is perhaps the best option for international regulation of the issue, it faces challenges of different dimensions due to the exponential growth of AI (SETHU, 2019, p. 367). Firstly, the UN is the appropriate international legal framework for the regulation of AI in its warlike use, as that organization is the main bulwark for the protection of human rights and the application of IHL – and AI weapons can be used in times of war as well as in times of peace, so both human rights and IHL will have to converge. Secondly, legal parameters to solve the problems related to the use of LAWs must be precisely defined – for that, it is pertinent to define the principled foundation of meaningful human control, which will shed light on the mental element of a crime or on the relevant sphere in which a crime is committed. Nevertheless, there are inconsistencies and disagreements between States over a uniform definition of the meaning of such a foundation – therefore, the UN plays the fundamental role of building consensus.
Automated weapon systems will become commonplace in future warfare, not least because AI-based data analysis is unmatched by human capabilities. However, human intervention is crucial when considering ethical issues raised by machine judgments, and the humanization of war should be the area of concern for IHL in this sense, since the implications of the warlike use of AI will affect future global developments based on disruptive warfare technologies. The use of automated weapon systems in external warfare must remain under human control – otherwise, such systems can turn against people, causing unnecessary civilian losses. The social challenges regarding the use of such technology are critical, especially in view of the religious and cultural influences that weigh heavily on military personnel regarding the use of autonomous weapons.


The problem with agreeing to a general and preventive ban on autonomous weapon systems lies in the possibilities of evading it. Scholars continue to debate definitions of autonomy, agency, technical resources, operational aspects and appropriate supervisory human control in the context of autonomous weapons. The difficulty of finding State consensus on definitions within the UN Convention on Certain Conventional Weapons, which underlies negotiations on legal regulation, complicates the process (BODE; HUELSS, 2018). Regardless of whether or not a ban is imposed, autonomous weapons will shape understandings of what is practically appropriate and thus displace a deeper political-social debate about what is normatively acceptable about them. It is necessary to be aware of such normative consequences of autonomous weapons in order to assess the role they will play in future security policies and in international relations in general.
While some States are calling for a preventive ban on LAWs, most of the international community considers their use a possibility. However, the objective in acquiring such systems varies between political systems. Democracies, motivated by the possible reduction in the loss of human life in war activities, would need to invest significant resources to reach the high levels of human-machine interactive technology needed to make the use and development of LAWs possible. In authoritarian systems, however, LAWs could mean cost-cutting for dictators wishing to repress protesters when police or soldiers are resistant to the use of lethal force (WARREN; HILLAS, 2020, p. 840). Multilateral forums engaged in discussions on LAWs have hitherto focused more on combat scenarios than on the repression of popular movements – but for an effective global regulation capable of restraining proliferation to non-state actors, the reasons why LAWs are being adopted in each country must be addressed.
UN members are aware of the risks posed by terrorists and of how autonomous commercial technologies can be repurposed and weaponized, but there is no consensus on a multilateral response to that threat, and domestic responses would vary. In China, which seeks a civil-military integration in which the public and private sectors blur, AI already seems to be being nationalized. Democracies, on the other hand, seem more vulnerable, as it is more difficult for them to coerce transnational corporations into promoting a governmental agenda. Thus, the US response indicates that China perhaps has a system more conducive to a viable and mutually beneficial relationship with those entities.


3.1 IHL Principles as the Foundation for the Future Regulation of the Military Use
of Artificial Intelligence

In IHL, Article 36 of Additional Protocol I to the Geneva Conventions – ratified by Brazil, together with Protocol II, through Decree 849 of June 25, 1993 (BRAZIL, 1993) – determines that the study, development, acquisition and adoption of new weapons, and the determination of whether their use against an enemy is prohibited, must observe International Humanitarian Law (MATHEW; MATHEW, 2019). In addition, the following principles must be observed:
(I) Distinction (arts. 50 to 54): civilian persons or objects cannot be attacked, and attacking civilians who do not take part in any hostilities is considered a heinous crime, signaling the complete failure of the armed forces to identify the specific target. The heinous character of such a crime also extends to weapons that cannot discriminate (such as chemical and biological ones);
(II) Proportionality (art. 35): counterattacks must be proportional to the offenses that provoke them, and the weapons used must not cause damage to civilian persons or objects that is excessive in relation to the military advantage they are expected to offer;
(III) Precaution (arts. 57 and 58): it is illegal to cause excessive damage in comparison with the overall military advantage, and the leadership defines what constitutes an extreme loss. Specific precautionary measures must be taken to protect civilians, and attacks must be carried out so that warnings of their occurrence are issued whenever possible.
Responsibility gaps regarding the use of LAWs could be reduced, and respect for people could thus be shown, as even when machines make all the decisions it would be possible to align the effective development of AI with human moral commitments and, thus, comply with international war conventions (PFAFF, 2019, p. 148-150). Eliminating, or even strictly reducing, the use of LAWs is therefore out of the question. The development and use of LAWs could prevent war or reduce the damage caused by war activities as well – however, LAWs could encourage militaristic responses even when non-violent alternatives are available, resulting in atrocities for which no one would be liable, which would desensitize militaries to the killing they commit. In order to promote the former and avoid the latter, the following measures should be considered by States to ensure the ethical use of LAWs:
(I) Updating IHL, establishing normative standards that include human responsibility for the behavior of autonomous systems – also regarding the quality of the information and assessments that machines provide – ensuring that systems are used only in the conditions for which they were designed to perform ethically, and monitoring the interactions between the environment and the machine (with operations interrupted when conditions change in a manner that could make violations possible);
(II) Establishing accountability standards for employees, programmers, designers and manufacturers in case of violations – especially because the trust of commanders and operators of LAWs depends heavily on guarantees that the machine meets moral and functional standards of operation;
(III) Maintaining a reasonably high threshold for use, deploying autonomous systems/apparatus only when the conditions of jus ad vim (the just use of force short of war) are met;
(IV) Specifying conditions for the use of LAWs, in order to guarantee meaningful human control and the maintenance of trust – conditions that must be updated along with technological developments, avoiding further gaps and antinomies between the effective use of AI and moral commitments;
(V) Establishing AI proliferation standards that, at a minimum, include a commitment to employ these systems only in conflicts that meet the standards of jus ad bellum and jus in bello. Furthermore, there must be a strong presumption of denial toward prospective recipients of the technology whose past commitment to these standards has been weak;
(VI) Preserving the identity of human military personnel, with health treatment offered for possible desensitization and psychological trauma;
(VII) Developing a communication plan to explain the ethical framework for the use of AI to the public, the media and the legislature.
The UN GGE on LAWs has emphasized the need for such systems to be developed and used in accordance with IHL. However, with regard to the governance of autonomy in weapons, there is a gap in current IHL, as the challenges raised by LAWs go beyond issues of compatibility with IHL, including questions of ethics, morality and fundamental values that are critical for mankind (CHENGETA, 2020). The shortcomings of the legal regime regarding LAWs mainly concern the responsibility gaps that appear when such technology is used. Such gaps cannot be solved by ignoring their existence, by creative interpretations of the existing law, or through political statements without legal force, which makes the development of a binding instrument on LAWs essential.
The use of LAWs would undermine the whole paradigm of war, and their application in combat is inherently incapable of fulfilling the fundamental IHL principles of distinction and proportionality (SZPAK, 2020, p. 11). It is important to prohibit by treaty any development, production and use of fully autonomous weapons now, while technology has not yet produced them (although such weapons are neither dystopian nor distant). But until such a treaty is concluded, the IHL rules of armed conflict must apply to LAWs. The proposed treaty should define what an autonomous weapon is, as well as which roles and functions are allowed for their use (defensive functions, such as guarding military installations and no-fly zones, target identification, reconnaissance, mine dismantling and aid to victims). Autonomous systems are so far unable to understand subjective factors, such as intention and ambiguous behavior – and this is even more problematic for IHL when one recalls the unclear definition of who the civilians in a conflict are. Uncertainties, sudden changes and dependence on context/environment demonstrate the incompatibility of LAWs with the requirements of distinction and proportionality (BRENNEKE, 2020, p. 91).
In terms of proportionality, collateral damage could be more easily estimated with the use of AI, but the element of military advantage is too obscure, complex and subjective, given the myriad of information from various sources. The rule of proportionality in IHL is openly formulated and, ultimately, depends on morality in a way that requires battlefield experience and general awareness – characteristics that LAWs currently do not have and will not acquire technologically any time soon. However, the deployment of LAWs in permissive environments is highly unlikely to violate IHL rules. In addition, the obligation to take precautions in attack depends on the rules that establish distinction and proportionality (BRENNEKE, 2020, p. 92). It is also questionable whether such systems could adequately detect and treat injured or surrendered soldiers. The situation is further complicated by the fact that the viability of actions on the battlefield is impossible to formulate in abstract or binary terms.
For the foreseeable future, LAWs remain incompatible with the applicable rules of IHL, at least as far as the common types of present-day battlefields are concerned. But the fewer people and civilian objects there are in a given environment, the greater the likelihood that LAWs will not violate IHL. If scientific research is to achieve major advances leading to adherence to the key requirements of distinction, proportionality and precaution in attack, a considerable amount of time and effort will be required. For now, mankind must remain in meaningful control of war actions to guarantee the observance of IHL during conflicts – although this guiding principle is still received with criticism by the main global military powers (mainly Russia and the USA). Consequently, uncertainties remain about the legal and political paths to follow. States' proposals range from a legally binding treaty to a political declaration with no further action. On the other hand, technological progress should not be neglected, but used consciously to improve military capabilities and legal conduct (in the light of IHL) (BRENNEKE, 2020, p. 93).
In addition, deep machine learning can create a decision-making logic different from the one initially programmed, developed entirely by the machine and humanly incomprehensible (HUGHES, 2020, p. 127-129). In this sense, unexplainable algorithms would present a significant risk if the characteristics of the target on which the algorithm focuses are not humanly known, and such inexplicability would impair a weapons reviewer's ability to evaluate weapons that use such algorithms. A poorly understood review would prevent commanders and operators from doing their jobs. The use of opaque algorithms would also cause competence problems and, if something goes wrong, can create intractable problems for due process and accountability regimes. It is true that human mental processes, which are also complex and problematic, are mostly accepted for such purposes – however, mankind still has the possibility of choosing (or not) the inexplicability of the algorithm, while the inexplicability of the human psyche is inevitable. It is recommended, therefore, to understand deep learning as still being inherently problematic due to the depth of the algorithm's opacity.
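A toy contrast may clarify the reviewer's problem described above. In the Python sketch below, a rule-based decision function exposes a humanly statable reason for every output, while a stand-in for a learned model produces only an outcome from weights that carry no individually reviewable meaning. The numbers and names are hypothetical; the point is only the structural difference between the two kinds of decision logic.

# Illustrative contrast only (hypothetical toy numbers): why a rule-based decision
# can be interrogated by a weapons reviewer while a learned, distributed one cannot.
import math, random

def transparent_decide(radar_signature: float, emits_iff_code: bool) -> str:
    # Every branch maps to a humanly statable reason a reviewer can audit.
    if emits_iff_code:
        return "hold fire: friendly identification signal"
    if radar_signature > 0.8:
        return "engage: signature matches anti-ship missile profile"
    return "hold fire: signature inconclusive"

# An "opaque" stand-in for a deep model: many weights set by training,
# none of which individually corresponds to a reviewable concept.
random.seed(0)
WEIGHTS = [random.uniform(-1, 1) for _ in range(1000)]

def opaque_decide(features: list[float]) -> str:
    score = sum(w * f for w, f in zip(WEIGHTS, features * (1000 // len(features))))
    return "engage" if math.tanh(score) > 0 else "hold fire"

print(transparent_decide(0.9, emits_iff_code=False))  # the reason is inspectable
print(opaque_decide([0.9, 0.1, 0.3]))  # outcome only; the "why" is buried in WEIGHTS

In the first function a reviewer can point to the exact rule that produced a decision; in the second, the decision emerges from the joint effect of a thousand weights, which is the opacity the weapons-review problem turns on.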

4 Conclusion
Autonomous weapons make decisions without any human intervention, selecting and attacking targets. They are developed on the basis of computing and robotics, and may even come to be turned against humanity. This must be considered normatively; otherwise, humanity will irreversibly depend on the machine for moral decisions. The use of autonomous weapons in warfare can thus undermine human agency; however, their use could also reduce the loss of human lives in military actions – and for that reason, there are those who defend their use, as long as IHL, which would be programmed into their algorithms (value-sensitive design), is respected. In addition, fundamental norms must also be observed for assessing the ethical and legal character (according to IHL) of the criterion of distinction (the ability of such apparatus to distinguish between human and non-human targets, and between legitimate and illegitimate human targets as well), in order to overcome gaps in responsibility for the risk of such weapons. In addition, the prospective responsibility of the military and weapon designers – who must create sufficient standards and strategies to avoid negative contingencies in the use of such weapons – must also be normatively developed.
Disrespect for human dignity is inherent in the development and use of autonomous weapons – and that factor must be weighed when approaching their risks. On the other hand, this is not exclusive to such technology, which makes it fallacious to rely solely on the dignity argument to support a ban on the use of such weapons. It must therefore be added to reasons such as non-conformity with IHL, the need for meaningful human control, and the deleterious consequences of the development and use of such technology for global (political, economic, environmental, etc.) stability. From the foundation of meaningful human control, a list of principles can be outlined, such as predictability, reliability, transparency, accuracy of information, the possibility of timely human intervention, and the existence of responsibility in relation to certain standards.
The regulation of autonomous weapons on the basis of international regimes is valid and powerful, but it must be combined with the efforts of nascent epistemic communities that fight against their proliferation. Such communities must, however, draw on prior experience (mainly with nuclear weapons control), going beyond the ethical and legal study of the possibilities of human control over autonomous weapons. Strategies to sensitize political authorities to the possibilities of human control should also be promoted, seeking to demonstrate not only the risks of the use of such weapons, but also the advantages of using AI for the stability and security of States and international regimes.
There is still no specific regulation in International Law on the development and use of autonomous weapons, and it is counterproductive to expect a total ban on the use of AI in war. It would thus be advisable to prohibit its use against human lives (a prohibition already supported by IHL) and to encourage it in auxiliary areas – such as medical and logistical support, bomb disposal, the removal of victims and civilians from battlefields, and so on. The principles of distinction, proportionality and precaution, enshrined in IHL conventions, can therefore serve as initial parameters for the regulation of the military use of AI.
National and international regulations on the use of autonomous weapons must be established, and IHL must be updated to also address the peculiarities of that technology – always paying attention to the need to maintain meaningful human control. In addition, accountability must be established not only for military users, but also for the developers and manufacturers of such weapons.
At the interface between IHL and technological advancement, the principles of distinction (between civilians and non-civilians) and proportionality remain the most problematic, both practically and theoretically. They are not yet technologically achievable, as they depend heavily on context, on the environment and on a general morality that algorithms are currently incapable of learning. Therefore, if compliance with IHL in the adoption of lethal autonomous weapons is really desired, meaningful human control – even though it is subject to much theoretical, military and political criticism – is still necessary. Moreover, the impossibility of explaining the algorithms would make it extremely difficult to establish legal accountability for the risks of the military devices that use them.
Efficient regulation of the use and development of lethal autonomous weapons at the global level must consider not only warfare scenarios, but also the possibility of their use in the repression of social movements – which further complicates the issue, since the degree of democratization of each State conditions the use of such weapons in different ways. The UN is perhaps the most appropriate entity for the international regulation of the military use of AI, mainly because of its tradition in Human Rights and IHL. However, the lack of consensus on the foundation of meaningful human control – perhaps the most important one for the discussion of the normative parameters of autonomous weapons – adds a highly pertinent layer of complexity to be assessed.

5 References
BARBÉ, Esther; BADELL, Diego. The European Union and Lethal Autonomous Weapons Systems: United in Diversity? In: JOHANSSON-NOGUÉS, Elisabeth; VLASKAMP, Martijn C.; BARBÉ, Esther (ed.). European Union Contested: Foreign Policy in a New Global Context. Cham: Springer, 2020, p. 133-152. DOI: https://doi.org/10.1007/978-3-030-33238-9.

BHUTA, Nehal; BECK, Susanne; GEISS, Robin. Present futures: concluding reflections and
open questions on autonomous weapons systems. In: BHUTA, Nehal; BECK, Susanne;
GEISS, Robin; LIU, Hin-Yan; KRESS, Claus (eds.). Autonomous Weapons Systems: Law,
Ethics, Policy. Cambridge: Cambridge University Press, 2016, p. 347-383.

BODE, Ingvild; HUELSS, Hendrik. Autonomous weapons systems and changing norms
in international relations. Review of International Studies, v. 44, n. 3, p. 393-413,
2018. DOI: https://doi.org/10.1017/S0260210517000614.

BRASIL. Decreto nº 849, de 25 de junho de 1993. Promulga os Protocolos I e II de 1977 adicionais às Convenções de Genebra de 1949, adotados em 10 de junho de 1977 pela Conferência Diplomática sobre a Reafirmação e o Desenvolvimento do Direito Internacional Humanitário aplicável aos Conflitos Armados. Available at: http://www.planalto.gov.br/ccivil_03/decreto/1990-1994/D0849.htm. Access in: 19 mar. 2021.

BRENNEKE, Matthias. Lethal Autonomous Weapons Systems and their compatibility with International Humanitarian Law: a primer on the debate. In: GILL, Terry D.; GEISS, Robin; KRIEGER, Heike; PAULUSSEN, Christophe (eds.). Yearbook of International Humanitarian Law 2018. Berlin: Springer, 2020, p. 59-99. DOI: https://doi.org/10.1007/978-94-6265-343-6_3.

BROWN, Justin. An Exploration of the Nascent AI Race Dynamics Between the United States and China. MUNDI, v. 1, n. 1, p. 1-40, 2020. Available at: https://tuljournals.temple.edu/index.php/mundi/article/view/388. Access in: 7 jul. 2020.

CHENGETA, Thompson. Is existing law adequate to govern autonomous weapon systems? In: DAVEL, Marelie; BARNARD, Etienne (eds.). FAIR 2019 South African Forum for Artificial Intelligence Research, v. 2540, p. 244-251, 2020. Available at: https://eprints.soton.ac.uk/438456/. Access in: 7 jul. 2020.

COKER, Christopher. Artificial Intelligence and the Future of War. Scandinavian Journal
of Military Studies, v. 2, n. 1, p. 55-60, 2019. DOI: https://doi.org/10.31374/sjms.26.

DEL MONTE, Louis A. Genius Weapons: artificial intelligence, autonomous weaponry, and the future of warfare. New York: Prometheus Books, 2018.

ETZIONI, Amitai; ETZIONI, Oren. Pros and Cons of Autonomous Weapons Systems.
Military Review, p. 72-81, May-June, 2017. Available at: https://www.armyupress.army.
mil/Portals/7/military-review/Archives/English/pros-and-cons-of-autonomous-
weapons-systems.pdf. Access in: 7 jul. 2020.

FROESE, Erich. Artificial Intelligence in the United States Military: Exploring Lethal Autonomous Weapons. AI Pavilion Seminar: How will artificial intelligence change humanity?, December 15, 2018. Available at: https://aipavilion.github.io. Access in: 7 jul. 2020.

GARCIA, Eugenio V. AI & Global Governance: When Autonomous Weapons Meet Diplomacy. SSRN Electronic Journal, 2019. DOI: https://doi.org/10.2139/ssrn.3522616.

HOROWITZ, Michael C. The Ethics & Morality of Robotic Warfare: Assessing the
Debate over Autonomous Weapons. Daedalus, v. 145, n. 4, p. 25-36, 2016. DOI: https://
doi.org/10.1162/DAED_a_00409.

HUGHES, Joshua G. The Law of Armed Conflict Issues Created by Programming Automatic Target Recognition Systems Using Deep Learning Methods. In: GILL, Terry D.; GEISS, Robin; KRIEGER, Heike; PAULUSSEN, Christophe (eds.). Yearbook of International Humanitarian Law 2018. Berlin: Springer, 2020, p. 100-135. DOI: https://doi.org/10.1007/978-94-6265-343-6_4.

JENSEN, Benjamin M.; WHYTE, Christopher; CUOMO, Scott. Algorithms at War: The
Promise, Peril, and Limits of Artificial Intelligence. International Studies Review,
viz025, 2019. DOI: https://doi.org/10.1093/isr/viz025.

JOHNSON, James. Artificial intelligence & future warfare: implications for international security. Defense & Security Analysis, v. 35, n. 2, p. 147-169, 2019. DOI: https://doi.org/10.1080/14751798.2019.1600800.

KREUTZER, Ralf T.; SIRRENBERG, Marie. Understanding Artificial Intelligence: Fundamentals, Use Cases and Methods for a Corporate AI Journey. Cham: Springer, 2020. DOI: https://doi.org/10.1007/978-3-030-25271-7.

LELE, Ajey. Disruptive Technologies for the Militaries and Security. Singapore:
Springer, 2019. DOI: https://doi.org/10.1007/978-981-13-3384-2.

LEVINGHAUS, Alex. Ethics and Autonomous Weapons. Oxford: Palgrave Macmillan, 2016. DOI: https://doi.org/10.1057/978-1-137-52361-7.

LEWIS, Larry. Resolving the Battle over Artificial Intelligence in War. The RUSI Journal,
v. 164, n. 5-6, p. 62-71, 2019. DOI: https://doi.org/10.1080/03071847.2019.1694228.

MAAS, Matthijs M. How viable is international arms control for military artificial
intelligence? Three lessons from nuclear weapons. Contemporary Security Policy,
v. 40, n. 3, p. 285-311, 2019. DOI: https://doi.org/10.1080/13523260.2019.1576464.

MATHEW, Thomas; MATHEW, Anita. Ethical dilemma in future warfare: use of automated weapon systems. In: Seventeenth AIMS International Conference on Management, p. 1808-1812, 2019. Available at: http://www.aims-international.org/aims17/17ACD/PDF/A261-Final.pdf. Access in: 7 jul. 2020.

PFAFF, C. Anthony. The Ethics of Acquiring Disruptive Technologies. PRISM, v. 8, n. 3, p. 128-145, 2019. Available at: https://www.jstor.org/stable/26864280. Access in: 7 jul. 2020.

ROFF, Heather M.; MOYES, Richard. Meaningful human control, artificial intelligence
and autonomous weapons. In: Briefing Paper Prepared for the Informal Meeting
of Experts on Lethal Autonomous Weapons Systems, UN Convention on Certain
Conventional Weapons, Geneva, Switzerland, 2016. Available at: http://www.
article36.org/wp-content/uploads/2016/04/MHC-AI-and-AWS-FINAL.pdf. Access in:
7 jul. 2020.

SCHEUTZ, M.; MALLE, B. F. May Machines Take Lives to Save Lives? Human Perceptions of Autonomous Robots (with the Capacity to Kill). In: GALLIOTT, J. (ed.). Lethal autonomous weapons: Re-examining the law & ethics of robotic warfare. Oxford: Oxford University Press, 2020 (in press). Available at: http://research.clps.brown.edu/soccogsci/publications/publications%20copy.html. Access in: 7 jul. 2020.

SETHU, Sagee Geetha. The Inevitability of an International Regulatory Framework for Artificial Intelligence. In: 2019 International Conference on Automation, Computational and Technology Management (ICACTM). IEEE, 2019, p. 367-372. DOI: https://doi.org/10.1109/ICACTM.2019.8776819.

SHARKEY, Amanda. Autonomous weapons systems, killer robots and human dignity.
Ethics and Information Technology, v. 21, p. 75-87, 2019. DOI: https://doi.org/10.1007/
s10676-018-9494-0.

SINCHANA, C.; SINCHANA, K.; AFRIDI, S.; POOJA, M. R. Artificial Intelligence Applications in Defense. In: RANGANATHAN, G.; CHEN, J.; ROCHA, A. (eds.). Inventive Communication and Computational Technologies: Proceedings of ICICCT 2019. Singapore: Springer, 2020, p. 727-734. DOI: https://doi.org/10.1007/978-981-15-0146-3_68.

SURBER, Regina. Artificial Intelligence: Autonomous Technology (AT), Lethal Autonomous Weapons Systems (LAWS) and Peace Time Threats. ICT4Peace Foundation and the Zurich Hub for Ethics and Technology (ZHET), v. 1, p. 1-21, 2018. Available at: https://ict4peace.org/wp-content/uploads/2018/02/2018_RSurber_AI-AT-LAWS-Peace-Time-Threats_final.pdf. Access in: 19 mar. 2021.

SZPAK, Agnieszka. Legality of Use and Challenges of New Technologies in Warfare – the Use of Autonomous Weapons in Contemporary or Future Wars. European Review, v. 28, n. 1, p. 118-131, 2020. DOI: https://doi.org/10.1017/S1062798719000310.

UMBRELLO, Steven. Lethal Autonomous Weapons: Designing War Machines with Values. Delphi, v. 2, n. 1, p. 30-34, 2019. DOI: https://doi.org/10.21552/delphi/2019/1/7.

UMBRELLO, Steven; TORRES, Phil; DE BELLIS, Angelo F. The future of war: could lethal autonomous weapons make conflict more ethical? AI & Society, p. 1-10, 2019. DOI: https://doi.org/10.1007/s00146-019-00879-x.

WARREN, Aiden; HILLAS, Alek. Friend or frenemy? The role of trust in human-machine teaming and lethal autonomous weapons systems. Small Wars & Insurgencies, v. 31, n. 4, p. 822-850, 2020. DOI: https://doi.org/10.1080/09592318.2020.1743485.

YURIEVICH, Mamychev Alexey; VLADIMIROVNA, Gayvoronskaya; ANATOLIEVNA, Petrova Daria; IGOREVNA, Miroshnichenko Olga. Deadly Automatic Systems: Ethical and Legal Problems. Journal of Politics and Law, v. 12, n. 4, p. 50-55, 2019. DOI: https://doi.org/10.5539/jpl.v12n4p50.
