The Regulation of the Use of Artificial Intelligence (AI) in Warfare: Between International Humanitarian Law (IHL) and Meaningful Human Control
All content following this page was uploaded by Mateus De Oliveira Fornasier on 28 June 2021.
ABSTRACT: This article studies the proper principles for the regulation of
autonomous weapons, some of which have already been incorporated into
International Humanitarian Law (IHL), while others remain merely theoretical.
The distinction between civilians and non-civilians, the closing of liability
gaps, and proportionality are fundamental principles for the regulation of the
warlike use of artificial intelligence (AI), but meaningful human control over
warlike AI must be added to them. Through the hypothetical-deductive procedure,
with a qualitative approach and bibliographic review, it was concluded that the
realization of the distinction criterion, value-sensitive design, the
elimination of accountability gaps, meaningful human control and IHL must
support the regulation of the use of autonomous weapon systems. However, the
distinction between civilians and non-civilians and proportionality are not yet
technologically possible, which keeps compliance with IHL dependent on
meaningful human control; and the opacity of warlike AI algorithms would make
legal accountability for their use difficult.
SUMMARY: 1 Introduction 2 Principles for the use of lethal autonomous weapons (LAWs) 3 Difficulties
related to the regulation of the warlike use of AI in IHL 4 Conclusion 5 References.
1 Introduction
meaningful human control, were studied. The second section covered the most
important IHL rules for the future regulation of the matter.
(I) Drones (aerial, terrestrial or submarine): they can be used for automated
analysis of data and images (serving as decision support), or for combat.
(III) Autonomous (or partially autonomous) assistants: used to dismantle mines and
bombs, evacuate the wounded from battle zones, deliver supplies and explore
places inaccessible to humans (KREUTZER; SIRRENBERG, 2020, p. 230-231).
Sinchana et al. (2020) list the main uses of AI technologies in military defense today:
(II) Systems based on knowledge and AI capacity: software that uses
determination and knowledge methods to solve problems, used by the Air
Forces to provide technical expertise in maintenance.
(IV) Terrain analysis: systems for analyzing a site before military operations
are carried out; their use underlies tactical decisions, operations and
intelligence (SINCHANA et al., 2020).
Autonomous weapons with human-level autonomy will be deployed by the main
nuclear powers (USA, China and Russia) by 2050 (DEL MONTE, 2018, p. 165-166). In
this new reality – in which the main nuclear powers will not face each other directly, as this
would trigger total human annihilation – such weapons will pose a danger to humanity
beyond their destructive potential. Their autonomy can trigger a war that diplomacy may
be unable to avert and, when the great powers and some rogue States possess
autonomous weapons, the chances of miscalculation or misinterpretation grow
– and, consequently, of a war that would continue on autopilot until human annihilation.
Israel already has the Harop, a drone that searches for and destroys radar systems in a completely
autonomous way, loitering until a target appears. The US anti-submarine vessel Sea Hunter
sails for months at a time, looking for enemies with no one on board. In addition, there
are about 400 partially autonomous weapons and robotic systems under development
worldwide (COKER, 2019, p. 57).
For Johnson (2019, p. 159-160), unmanned autonomous weapons are deployed
both defensively and offensively, which can undermine the deterrent utility of existing
defense systems. In addition, the merger of AI with early warning systems –
shortening decision time and facilitating the location of high-value hidden military
assets (such as submarines) – may adversely affect international security and stability
in the use of nuclear weapons. The rapid diffusion and dual-use capabilities provided
by autonomous weapons will complicate States' ability to anticipate, attribute
and effectively combat future autonomous attacks. Thus, the incipient development
of counter-AI will have increasing importance in national security and in strategic
calculations. And the relatively slow pace of AI development in the global defense industry
relative to the commercial sector will affect the balance of power and the structure of
international competition, further worsening the prospects for international security.
As dependencies between the digital and physical domains increase, so do
threats from cyberattacks. Machine learning will expand the scope and scale of future
cyberattacks, which can overwhelm States' fledgling cyber defenses. AI's many
unexplainable capabilities (the black-box problem) will exacerbate these risks and further
complicate defense planning for an uncertain and complex strategic scenario. For now, it is
still unclear which capabilities AI will leverage, whether new weapons may emerge,
and how this dynamic will affect the future military and strategic balance between
States – and potentially between States and non-State entities as well.
The rapid US-China race to innovate in AI will have profound and destabilizing
implications for future security. Since both sides are internalizing such technologies,
(II) Although the situation has not yet reached the level of a security dilemma,
China and the USA are competing fiercely for the potential gains of AI,
which may trigger a real security dilemma;
addition to its agency capacity and changes in the brain's neural patterns, as humans
start to use what is programmed in machines to manage themselves: internet
search engines manage individuals by reading their thoughts, directing them to what
the engines decide each individual would find interesting. Algorithms also monitor the brain
rhythms, heartbeat and eye movements of drone pilots, checking their attention and
concentration at work. A pilot considered at risk of becoming overly stressed can be
disconnected, with the task consequently transferred to another person
who is momentarily more capable (COKER, 2019, p. 56).
Autonomous weapons will outperform humans in moral situations, since the
human condition in combat encourages the loss of self-control and many of the immoral
actions that result from it (COKER, 2019, p. 58-59). Robots, by comparison, do not
share the fighting dynamics of humans: they hold no prejudices against enemies,
no propensity to cling to preexisting belief patterns, and no human feelings
such as guilt and shame.
AI will drive war even further because of technological factors – with
mankind being increasingly absorbed by machines, and they by mankind, in a symbiotic,
post-human situation. AI-oriented systems will be considered less and less
as tools, and more as collaborators. Machines are better than humans at routine,
repetitive and incessant tasks (such as risk monitoring, information gathering,
data analysis, pattern discovery, rapid reaction and the operation of other
machines). This demands a change in humans' attitude towards the machine
(from master to colleague) – otherwise, the human can become excessively
dependent on it and lose independent agency.
Regarding the development and use of autonomous weapons, there are
positions that consider them ethically inadequate, taking the removal of humans
from control over life and death as a sufficient premise for banning them. For
others, due to the relatively low cost of this type of apparatus/system, the
possibility of programming their algorithms with moral rules, and the possibility
of removing humans from the front lines of combat, these technologies should be
developed and used in conflicts. Furthermore, moral autonomous weapons may be
the only entities capable of making genuinely ethical decisions about destroying
their targets. However, there are requirements for their use:
(I) Such entities must have systems of guidance and judgment equal to or
greater than the abilities of human beings;
(II) Moral programs with which all parties agree must be incorporated
into them (relating, for example, to the Law of War and IHL);
example, permanent State departments for AI. A constant dialogue between technology
experts and politicians, in institutional integration, can limit the risk that
programmers and policymakers will evade responsibility for the immoral
results of autonomous systems. In addition, such an idea would require the source
codes to be publicly accessible.
Several reasons can be listed to support the claim that autonomous weapons
attack human dignity, and perhaps the main ones are (SHARKEY, 2019): (I) the suppression of
human agency over decisions about other people's life and death (thus totally subjugating the
human to the autonomous action of machines); (II) the possibility of disrespecting
meaningful human control over such systems/apparatus. However, many technological
applications affect human dignity (such as chemical, nuclear and biological
weapons), and human behavior itself may harm the
dignity of others (by causing unnecessary suffering).
Thus, human dignity alone cannot be considered a very reliable basis for
arguments against autonomous weapons when compared to other means and
weapons of war. The risk to human dignity is only one of the reasons for banning
autonomous weapons and maintaining human control over them – and it
is perhaps not the most compelling. It is wiser to resort to various types
of objections in arguments against autonomous weapons, not just to human
dignity – not least because there is no philosophical consensus about its
meaning. Three other classes of arguments must then be added to dignity:
(I) non-conformity of autonomous weapons with IHL;
(II) the need for meaningful human control in the use of the technology;
(III) the deleterious consequences of using such technology for global stability.
AI weapons would create a new arms race that would put everyone at risk –
one much more widespread than the nuclear arms race, as such weapons are cheaper
and much easier to develop independently. States will be better off preventing the
development and deployment of such systems. Preventive governance structures
should be implemented, with limits on the development of AI weapons that have the
potential to violate IHL (GARCIA, 2019, p. 340).
Warlike AI will make war more inhumane: the attribution of responsibility is
a necessary element for holding war criminals liable, and these weapons make such
attribution much more difficult. Currently, nations have one of the greatest opportunities
in history to promote a better future by developing security structures that preventively
prohibit AI armament and ensure that such technology is used only for the common
good of humanity.
by the system in carrying out its actions – and its design must be focused on its
actual user, not an ideal one;
(IV) Accuracy of information: the user must have accurate information in order to
understand the intended results (purposes), the technology and the processes
it applies, as well as the context in which it is supposed to work;
(V) Possibility of timely human intervention: a human user must initiate the
operation of a given technology while the contextual information on which
they are acting is still relevant. Furthermore, although systems may be designed
to operate more quickly than is humanly possible, there must be means
for timely intervention by another system, process or human being;
(VI) Existence of responsibility under a certain standard: liability for the use of a
technology must be conceptually and practically linked to the potential for
timely human action and intervention. In addition, it must make clear who holds
it (for the processes initiated), condition the technical system (ensuring that its
users/operators understand the consequences of their actions or omissions),
and encompass the broader entities (organizations, for example) that produce such
technological devices and systems – although it may initially fall on the human
agents related to the system.
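Principle (V) above – intervention is only meaningful while the contextual information the human acts on is still valid – can be sketched as a simple decision gate. This is a hypothetical illustration only: the names, the `Proposal` structure and the validity-window mechanism are our assumptions, not any real weapon-control interface.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Proposal:
    target_id: str
    context_timestamp: float  # when the supporting sensor data was captured
    validity_window: float    # seconds during which that context stays relevant

def human_in_the_loop_gate(proposal: Proposal,
                           now: float,
                           human_decision: Optional[bool]) -> str:
    """Return 'engage' only with explicit, timely human approval."""
    context_age = now - proposal.context_timestamp
    if context_age > proposal.validity_window:
        # Principle (V): intervention is only meaningful while the
        # contextual information the human acts on is still relevant.
        return "abort: context stale"
    if human_decision is None:
        return "hold: awaiting human decision"
    return "engage" if human_decision else "abort: human veto"

# Stale context or human silence never leads to engagement.
print(human_in_the_loop_gate(Proposal("T1", 0.0, 30.0), 10.0, True))   # engage
print(human_in_the_loop_gate(Proposal("T1", 0.0, 30.0), 45.0, True))   # abort: context stale
print(human_in_the_loop_gate(Proposal("T1", 0.0, 30.0), 10.0, None))   # hold: awaiting human decision
```

The design choice worth noticing is that every path except explicit, timely approval defaults to inaction – the gate operationalizes principle (VI)'s link between liability and the potential for timely human action.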
The sufficient level of each requirement is difficult to specify in detail, but
technological categories can still be assessed against those considerations. Technological
thresholds, such as automation and autonomy, may entail different capacities for
predictability, reliable information about the context of use, timely intervention or
attribution of responsibility. In addition, any extension of the legal concept of
what an attack is can also be contested in the face of the challenges that these
regulatory requirements present. Indeed, the meaning of the expression attack is
particularly important because the texture of the law is open – which makes it both
beneficial and harmful: beneficial because its interpretation can change as needs
arise; harmful because it can also be changed arbitrarily. The mere duty
of conformity is not sufficient when key terms can be interpreted so openly in the
context of autonomous weapons, to the point of emptying any argument in favor of
the application of human legal judgment.
Since the functioning of human communities also depends on knowledge of and
compliance with social norms – and on the possibility of justifying their violation in
favor of more important ones – artificial agents, on becoming part
of human communities, must follow such norms too, contextualized, of course,
according to their peculiarities; and the norms that compel human
conduct must be known by them as well. But moral judgments are not based only
on compliance with rules, as reasons are also needed to motivate action
(SCHEUTZ; MALLE, 2020). Thus, humanly possible transparency about the reasons
that led a machine to decide on a certain course is fundamental – which includes
justifying its actions according to the applicable norms.
If machines enter society, deciding on humans' lives and deaths or assuming
socially influential roles, they will have to be able to decide and act in a normatively
compatible manner, expressing their knowledge of the applicable rules before
acting, and offering appropriate justifications – especially in response to criticism –
after acting. Thus, it is up to humans to design artificial agents and provide them with
this form of normative competence, which will serve as a social safeguard, ensuring
that they are able to improve the human condition.
logistical support for military operations, military construction, and bomb-removal
robots – the most advisable areas for their use (YURIEVICH et al., 2019).
The most likely scenario for an international agreement on the
problem of lethal automatic systems is not a comprehensive ban on the development,
use and distribution of technologies without meaningful human control, but a ban on
the use of this type of military equipment. Moreover, considering the political and
economic interests in the current global scenario, the likelihood of a swift adoption of
a binding document is not high. More importantly, the regulation of autonomous
weapons must establish in which areas their use would be acceptable.
An international regulation for LAWs must give fundamental importance to
meaningful human control – and in that sense, although the UN is perhaps the best
option for international regulation of the issue, it faces challenges on several
fronts due to the exponential growth of AI (SETHU, 2019, p. 367). Firstly,
the UN is the appropriate international legal framework for regulating the warlike
use of AI, as that organization is the main bulwark for the protection of
human rights and the application of IHL – and AI weapons can be used in times of
war as well as in times of peace, so both human rights law and IHL
will have to converge. Secondly, legal parameters for solving the problems related to
the use of LAWs must be precisely defined – for that, it is pertinent to define the
principled foundation of meaningful human control, which will shed light on
the mental element of a crime or on the conduct necessary for its commission.
Nevertheless, there are inconsistencies and disagreements between States over
a uniform definition of such a foundation – and therefore the
UN plays the fundamental role of reaching consensus.
Automated weapon systems will become commonplace in future warfare, not
least because AI-based data analysis is unmatched by human capacity.
However, human intervention is crucial when ethical issues arise from
machine judgments, and the humanization of war should be IHL's area of concern
in this sense, since the implications of the warlike use of AI will shape future
global developments based on disruptive warfare technologies. The use of automated
weapon systems in external warfare must remain under human control – otherwise,
it can turn against people, causing unnecessary civilian losses. The social challenges
regarding the use of such technology are critical, especially in view of the religious
and cultural influences that weigh heavily on military personnel regarding the use
of autonomous weapons.
3.1 IHL Principles as the Foundation for the Future Regulation of the Military Use
of Artificial Intelligence
In IHL, Article 36 of Additional Protocol I to the Geneva Conventions – ratified by
Brazil, together with Protocol II, through Decree 849 of June 25, 1993 (BRAZIL, 1993) –
determines that the study, development, acquisition or adoption of new weapons,
and the decision whether their use against an enemy is prohibited, must observe
the principles of international humanitarian law (MATHEW; MATHEW, 2019). In
particular, the following principles must be observed:
(I) Distinction (arts. 50 to 54): civilian persons or objects cannot be attacked,
and attacking civilians who do not take part in hostilities is considered
a heinous crime, implying the complete failure of the armed forces to identify
the specific target. The heinous character of such a crime also covers weapons
that cannot discriminate (such as chemical and biological ones);
(II) Proportionality (art. 35): counterattacks must be proportional to the offenses
that provoke them, and the weapons used must avoid damage to civilian persons or
objects that would be excessive compared to the expected military advantage;
(III) Precaution (arts. 57 and 58): it is illegal to cause excessive damage in
comparison with the overall military advantage, and the leadership
defines what constitutes excessive loss. Specific precautionary measures must be taken
to protect soldiers, and attacks must be carried out so that warnings of their
occurrence are issued whenever possible.
Accountability gaps regarding the use of LAWs could be reduced, and respect
for people thereby shown, since even when machines make all the decisions
it would be possible to align the effective development of AI with human moral
commitments and thus comply with international war conventions
(PFAFF, 2019, p. 148-150). Eliminating or strictly restricting the use of LAWs is
therefore out of the question. The development and use of LAWs could prevent war
or reduce the damage caused by war activities – however, LAWs could also encourage
militaristic responses even when non-violent alternatives are available, resulting in
atrocities for which no one would be liable, and this would desensitize militaries to the
killing they commit. In order to promote the former and avoid the latter, States
should consider the following measures to ensure the ethical use of LAWs:
(I) IHL update, establishing normative standards that include human
responsibility for the behavior of autonomous systems – also regarding the
quality of the information and assessments that machines provide – ensuring that
systems are used only in conditions for which they were designed to perform
ethically, and monitoring the interactions between the environment
and the machine (operations must be interrupted when conditions
change in a manner that could make violations possible);
(II) Establishment of accountability standards for employees, programmers,
designers and manufacturers in case of violations – especially because the trust
of commanders and operators of LAWs heavily depends on guarantees that the
machine meets moral and functional standards of operation;
(III) Maintenance of a reasonably high threshold for use, employing autonomous
systems/apparatus only when the conditions of jus ad vim (the just use of force
short of war) are met;
(IV) Specification of conditions for the use of LAWs, in order to guarantee
meaningful human control and the maintenance of trust – conditions that must
be updated alongside technological developments, avoiding further gaps and
antinomies between the effective use of AI and moral commitments;
(V) Establishment of AI proliferation standards that, at a minimum, include a
commitment to employ these systems only in conflicts that meet the standards
of jus ad bellum and jus in bello. Furthermore, there must be a strong presumption
of denial towards recipients of the technology who have, in the past, been weak in
their commitment to these standards;
(VI) Preservation of the identity of human military personnel, with health
treatment offered for possible desensitization and psychological trauma;
(VII) Development of a communication plan to explain the ethical framework for
the use of AI to the public, the media and the legislature.
The UN GGE on LAWs has emphasized the need for such systems to be developed
and used in accordance with IHL. However, with regard to the
governance of autonomy in weapons, there is a gap in current IHL, as the challenges
raised by LAWs go beyond issues of compatibility with IHL, including issues related
to ethics, morality and fundamental values that are critical for mankind (CHENGETA,
2020). The shortcomings of the legal regime regarding LAWs mainly concern the
accountability gaps that appear when such technology is used. Such gaps cannot
be solved by ignoring their existence, by creative interpretations of the existing
law, or even through political statements without legal force – which makes the
development of a binding instrument on LAWs essential.
The use of LAWs would undermine the whole paradigm of war, and their
application in combat is inherently incapable of fulfilling the fundamental IHL
principles of distinction and proportionality (SZPAK, 2020, p. 11). It is important to
prohibit, under treaties, any development, production and use of fully autonomous
weapons now, while technology has not yet produced them (although such weapons
are neither dystopian nor distant). Until such a treaty is concluded, the rules of
armed conflict in IHL must apply to LAWs. The proposed treaty should
define what an autonomous weapon is, as well as which roles and functions are
allowed for their use (defensive functions, such as guarding military installations
and no-fly zones, target identification, reconnaissance, mine dismantling and aid to
victims). Autonomous systems are so far unable to understand subjective factors,
such as intention and ambiguous behavior – which is even more problematic,
regarding IHL, when the unclear definition of who counts as a civilian in a conflict
is recalled. Uncertainty, sudden change and dependence on context/environment
demonstrate the incompatibility of LAWs with distinction and proportionality
(BRENNEKE, 2020, p. 91).
In terms of proportionality, collateral damage could be estimated more easily
with the use of AI, but the element of military advantage is too obscure, complex
and subjective, given the myriad of information from various sources. The rule of
proportionality in IHL is openly formulated and ultimately depends on moral
judgment in a way that requires battlefield experience and general awareness –
characteristics that LAWs currently lack and will not acquire technologically soon.
However, the deployment of LAWs in permissive environments is highly unlikely
to violate IHL rules. In addition, the obligation to take precautions in attack
depends on the rules that establish distinction and proportionality (BRENNEKE,
2020, p. 92). It is also questionable whether such systems could adequately detect
and treat injured or surrendered soldiers. The situation is further complicated by the
fact that the viability of actions on the battlefield is impossible to formulate in
abstract or binary terms.
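The asymmetry described above – collateral damage estimable by machine, military advantage not – can be made concrete in a rough sketch. The formalization is ours, for illustration only; IHL states no such numeric formula, and the function name and framing are assumptions:

```python
from typing import Optional

def proportionality_check(expected_civilian_harm: float,
                          military_advantage: Optional[float]) -> str:
    """IHL-style rule: civilian harm must not be excessive relative to advantage."""
    if military_advantage is None:
        # The crux of the passage above: this input requires battlefield
        # experience and moral awareness, and cannot be derived from data alone.
        return "undecidable without human judgment"
    if expected_civilian_harm > military_advantage:
        return "attack disproportionate"
    return "attack permissible"

print(proportionality_check(3.0, None))    # undecidable without human judgment
print(proportionality_check(3.0, 10.0))    # attack permissible
print(proportionality_check(12.0, 10.0))   # attack disproportionate
```

The point of the sketch is that once "military advantage" is supplied as a number the rule itself is trivial to automate; the intractable step is that no sensor pipeline produces that number – it remains a contextual human judgment, which is precisely why the open formulation of the rule resists delegation to LAWs.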
LAWs remain, for the foreseeable future, incompatible with the applicable rules of IHL,
at least as far as common present-day battlefields are concerned. But the
fewer the people and civilian objects in a given environment, the greater the likelihood
that LAWs will not violate IHL. Even if scientific research achieves major advances
towards adherence to the requirements of distinction, proportionality and precautions
in attack, it will require a considerable amount of time and effort. Until then, mankind
must remain in meaningful control of war actions to guarantee the observance of IHL
during conflicts – although this guiding principle is still met with criticism
by the main global military powers (mainly Russia and the USA). Consequently,
uncertainties remain about the legal and political paths to follow: States' proposals
range from a legally binding treaty to a political declaration with no further action.
On the other hand, technological progress should not be neglected, but used
consciously to improve military capabilities and legal conduct (in the light of IHL)
(BRENNEKE, 2020, p. 93).
In addition, deep machine learning can create a decision-making logic different
from the one initially programmed – developed entirely by the machine
and humanly incomprehensible (HUGHES, 2020, p. 127-129). In this sense, unexplainable
algorithms present a significant risk when the characteristics of the target on which
the algorithm focuses are not humanly known, and this inexplicability would impair
a weapons reviewer's ability to evaluate weapons using such algorithms. A poorly
understood review would prevent commanders and operators from doing their jobs.
The use of opaque algorithms would also raise problems of competence and, if
something goes wrong, could create intractable problems for due process and
accountability regimes. Human mental processes, which are also complex and
problematic, are admittedly accepted for such purposes – however, mankind can
still choose (or not) the inexplicability of the algorithm, while the
inexplicability of the human psyche is inevitable. It is recommended, therefore, to
regard deep learning as still inherently problematic because of the degree
of the algorithm's opacity.
4 Conclusion
Autonomous weapons select and attack targets, making decisions without any
human intervention. They are developed on the basis of computing and
robotics, and may come to be turned against humanity. This must be addressed
normatively; otherwise, humanity will come to depend irreversibly on machines for
moral decisions. The use of autonomous weapons in warfare can thus undermine
human agency; however, their use could also end the loss of human lives in military
actions – and for that reason, there are those who defend their use, as long as IHL is
respected and programmed into their algorithms (value-sensitive design).
In addition, fundamental norms must be observed for the ethical and legal
assessment (according to IHL) of the criterion of distinction (the ability of such
apparatus to distinguish between human and non-human targets, as well as
between legitimate and illegitimate human targets), so as to overcome gaps in
responsibility for the risks of such weapons. Moreover, the prospective responsibility
of the military and of weapon designers – who must create sufficient standards and
strategies to avoid negative contingencies in the use of such weapons – must also
be normatively developed.
Disrespect for human dignity is inherent in the development and use of
autonomous weapons – and that factor must be weighed when approaching their risks.
On the other hand, this is not exclusive to this technology, which makes it
fallacious to rely exclusively on the dignity argument to support a ban on the use
of such weapons. It must therefore be added to reasons such as non-conformity
with IHL, the need for meaningful human control, and the deleterious consequences
of the development and use of such technology for global (political, economic,
environmental, etc.) stability. From the foundation of meaningful human control,
a list of principles can be outlined, such as predictability, reliability, transparency,
accuracy of information, the possibility of timely human intervention and the
existence of responsibility under certain standards.
The regulation of autonomous weapons on the basis of international regimes is
valid and powerful, but it must be combined with the efforts of nascent epistemic
communities that fight against their proliferation. Such communities must
draw on experience (mainly in nuclear weapons control), going beyond the ethical
and legal study of the possibilities of human control over autonomous weapons.
Strategies to sensitize political authorities to the possibilities of human control
should also be promoted, seeking to demonstrate not only the risks of their use,
but also the advantages of using AI for the stability and security of States and
international regimes.
There is still no specific regulation in International Law on the development
and use of autonomous weapons. It is counterproductive to expect a total ban on
the use of AI in war. Thus, it would be interesting to prohibit its use against human
lives (which is already supported by IHL) and to encourage it in auxiliary areas – such
as medical and logistical support, bomb dismantling, removal of victims and civilians
from battlefields, etc. The principles of distinction, proportionality and precaution,
enforced in IHL conventions, can therefore initially parameterize the regulation of the
military use of AI.
National and international regulations on the use of autonomous weapons
must be established, and IHL must be updated to address also the peculiarities
of that technology – always paying attention to the need to maintain meaningful
human control.
5 References
BARBÉ, Esther; BADELL, Diego. The European Union and Lethal Autonomous Weapons
Systems: United in Diversity? In: JOHANSSON-NOGUÉS, Elisabeth; VLASKAMP,
Martijn C.; BARBÉ, Esther (ed.). European Union Contested: Foreign Policy in a New
Global Context. Cham: Springer, 2020, p. 133-152. DOI: https://doi.org/10.1007/978-
3-030-33238-9.
BHUTA, Nehal; BECK, Susanne; GEISS, Robin. Present futures: concluding reflections and
open questions on autonomous weapons systems. In: BHUTA, Nehal; BECK, Susanne;
GEISS, Robin; LIU, Hin-Yan; KRESS, Claus (eds.). Autonomous Weapons Systems: Law,
Ethics, Policy. Cambridge: Cambridge University Press, 2016, p. 347-383.
BODE, Ingvild; HUELSS, Hendrik. Autonomous weapons systems and changing norms
in international relations. Review of International Studies, v. 44, n. 3, p. 393-413,
2018. DOI: https://doi.org/10.1017/S0260210517000614.
BROWN, Justin. An Exploration of the Nascent AI Race Dynamics Between the United
States and China. MUNDI, v. 1, n. 1, p. 1-40, 2020. Available at: https://tuljournals.
temple.edu/index.php/mundi/article/view/388. Access in: 7 jul. 2020.
COKER, Christopher. Artificial Intelligence and the Future of War. Scandinavian Journal
of Military Studies, v. 2, n. 1, p. 55-60, 2019. DOI: https://doi.org/10.31374/sjms.26.
ETZIONI, Amitai; ETZIONI, Oren. Pros and Cons of Autonomous Weapons Systems.
Military Review, p. 72-81, May-June, 2017. Available at: https://www.armyupress.army.
mil/Portals/7/military-review/Archives/English/pros-and-cons-of-autonomous-
weapons-systems.pdf. Access in: 7 jul. 2020.
HOROWITZ, Michael C. The Ethics & Morality of Robotic Warfare: Assessing the
Debate over Autonomous Weapons. Daedalus, v. 145, n. 4, p. 25-36, 2016. DOI: https://
doi.org/10.1162/DAED_a_00409.
JENSEN, Benjamin M.; WHYTE, Christopher; CUOMO, Scott. Algorithms at War: The
Promise, Peril, and Limits of Artificial Intelligence. International Studies Review,
viz025, 2019. DOI: https://doi.org/10.1093/isr/viz025.
LELE, Ajey. Disruptive Technologies for the Militaries and Security. Singapore:
Springer, 2019. DOI: https://doi.org/10.1007/978-981-13-3384-2.
LEWIS, Larry. Resolving the Battle over Artificial Intelligence in War. The RUSI Journal,
v. 164, n. 5-6, p. 62-71, 2019. DOI: https://doi.org/10.1080/03071847.2019.1694228.
MAAS, Matthijs M. How viable is international arms control for military artificial
intelligence? Three lessons from nuclear weapons. Contemporary Security Policy,
v. 40, n. 3, p. 285-311, 2019. DOI: https://doi.org/10.1080/13523260.2019.1576464.
ROFF, Heather M.; MOYES, Richard. Meaningful human control, artificial intelligence
and autonomous weapons. In: Briefing Paper Prepared for the Informal Meeting
of Experts on Lethal Autonomous Weapons Systems, UN Convention on Certain
Conventional Weapons, Geneva, Switzerland, 2016. Available at: http://www.
article36.org/wp-content/uploads/2016/04/MHC-AI-and-AWS-FINAL.pdf. Access in:
7 jul. 2020.
SCHEUTZ, M.; MALLE, B. F. May Machines Take Lives to Save Lives? Human
Perceptions of Autonomous Robots (with the Capacity to Kill). In: GAILLOT, J. (ed.).
Lethal autonomous weapons: Re-examining the law & ethics of robotic warfare.
Oxford: Oxford University Press, 2020 (in press). Available at: http://research.clps.
brown.edu/soccogsci/publications/publications%20copy.html. Access in: 7 jul. 2020.
SHARKEY, Amanda. Autonomous weapons systems, killer robots and human dignity.
Ethics and Information Technology, v. 21, p. 75-87, 2019. DOI: https://doi.org/10.1007/
s10676-018-9494-0.
UMBRELLO, Steven; TORRES, Phil; DE BELLIS, Angelo F. The future of war: could
lethal autonomous weapons make conflict more ethical? AI & Society, p. 1-10, 2019.
DOI: https://doi.org/10.1007/s00146-019-00879-x.
WARREN, Aiden; HILLAS, Alek. Friend or frenemy? The role of trust in human-machine
teaming and lethal autonomous weapons systems. Small Wars & Insurgencies, v. 31,
n. 4, p. 822-850, 2020. DOI: https://doi.org/10.1080/09592318.2020.1743485.