Artificial Intelligence Decision Making
92 (2023) 1–8
Introduction
Artificial Intelligence, Decision Making and
International Law
The question of how artificial intelligence (ai), including machine learning (ml), impacts law in general, and international law in particular, has gained increasing traction in recent years. Ensuing debates have mainly homed in on the threats and opportunities this technology poses to law, and have remained at a rather abstract level.
This special issue of the Nordic Journal of International Law aims to add
granularity and depth to existing research by narrowing our focus to what we
see as a critical area. The overarching question for the special issue is how ai,
including ai-supported and automated decision making, might impact on
decisions we take in international law. This allows the special issue to track
how technologically induced practice makes its way into domestic law, and,
potentially, from there onwards into international law.
The theme and aim are based on the insight that all forms of practice rely, in one way or another, on enabling technologies, and that technological change in and of itself engenders changes in practice.1 Whether, and if so how, these
changes are relevant under domestic and international law is part of what the
special issue sets out to chart. With international law already being a field of
many disciplines, the question of ai and other emerging technologies opens
up the field to further interdisciplinary encounters. This introduction offers
reflections in response to the question and aim of the special issue, presenting
individual contributions along the way.
The general field of ai and law scholarship has grown rapidly, especially since the early 1990s.2 Yet, if international legal scholars have been following
1 L. Amoore, ‘Machine learning political orders’, 49:1 Review of International Studies (2023) pp. 20–36, at p. 21, doi: 10.1017/S0260210522000031.
2 See, e.g., the journals Artificial Intelligence and Law and Frontiers in ai: Law and Technology.
3 Relevant examples of scholarship on ai and international law include, but are not limited to: R. Adams and N. N. Loideáin, ‘Addressing indirect discrimination and gender stereotypes in ai virtual personal assistants: the role of international human rights law’, 8:2 Cambridge International Law Journal (2019) pp. 241–257, doi: 10.4337/cilj.2019.02.04; M. Arvidsson, ‘The swarm that we already are: artificially intelligent (ai) swarming “insect drones”, targeting and international humanitarian law in a posthuman ecology’, 11:1 Journal of Human Rights and the Environment (2020) pp. 114–137, doi: 10.4337/jhre.2020.01.05; E. Benvenisti, ‘ejil Foreword: Upholding Democracy Amid the Challenges of New Technology: What Role for the Law of Global Governance?’, 29:1 The European Journal of International Law (2018) pp. 9–82, doi: 10.1093/ejil/chy013; T. Burri, ‘International Law and Artificial Intelligence’, 60:1 German Yearbook of International Law (2018) pp. 91–108, doi: 10.3790/gyil.60.1.91; A. Hárs, ‘ai and international law – Legal personality and avenues for regulation’, 62:4 Hungarian Journal of Legal Studies (2022) pp. 320–344, doi: 10.1556/2052.2022.00352; M. Hildebrandt, ‘Text-Driven Jurisdiction in Cyberspace’, 2:8 Theoretical and Applied Law (2021) p. 7, doi: 10.22394/2686-7834-2021-2-6-20; F. Johns, ‘Data, Detection, and the Redistribution of the Sensible in International Law’, 111:1 American Journal of International Law (2017) pp. 57–103, doi: 10.1017/ajil.2016.4; F. Johns and C. Compton, ‘Data Jurisdictions and Rival Regimes of Algorithmic Regulation’, Regulation and Governance (2022) pp. 63–84, doi: 10.1111/rego.12296; M. Langford, ‘Taming the Digital Leviathan: Automated Decision-Making and International Human Rights’, 114 American Journal of International Law Unbound (2020) pp. 141–146, doi: 10.1017/aju.2020.31; A. Leiter and M. Petersmann, ‘Tech-based Prototypes in Climate Governance: On Scalability, Replicability, and Representation’, 33 Law & Critique (2022) pp. 319–333, doi: 10.1007/s10978-022-09331-4; M. Liljefors, G. Noll and D. Steuer, War and Algorithm (Rowman & Littlefield, New York, 2019); M. Maas, ‘International Law Does Not Compute: Artificial Intelligence and
to optimize and ‘scale up’ governance and decision-making processes, and to relieve lawyers and (other) governance professionals of mundane and repetitive tasks, ai has rapidly gained traction. Even in law, the question is seldom whether ai is part of the answer to pressing contemporary concerns, but rather how to address those concerns more efficiently through ai. Fear of missing out on strategic advantage looms large: “Even if international lawyers for governments
in the United States and Europe are sceptical about the benefits of machine
learning and big data”, Ashley Deeks warns, “they must consider the possibility
that states such as China will begin to deploy these tools in power-enhancing ways”.4 The anxious tone is familiar to international lawyers: it closely echoes the Cold War era of bomber gaps and nuclear competition. Other scholars call on lawyers not to throw international law’s slow hermeneutic overboard in efficiency-driven efforts to maximize law’s calculable output.
There are, as Laurence Diver emphasizes, good reasons for the interpretation and execution of (international) legal norms to remain a reflexive rather than a computational legal practice.5 When Louise Amoore suggests that the solutions ml can offer beget the problems they deserve, she highlights the risk that technology
cuts both ways, though, as magical capabilities are being projected onto ai as if it could be isolated from the deficits of its all-too-human enablers and embedders.11 The individual authors of this special issue steer clear of both, avoiding ‘optimist’ as much as ‘pessimist’ views on ai and associated technologies.
The overall picture emerging from the following articles is neither of cher-
ry-picked ‘salient technological failures’ nor of technological magic. Instead,
the mixed message is that ai may, but does not necessarily, solve problems in
international law. Moreover, ai certainly restructures and delimits what can
be addressed as a problem.12 It exacerbates problems that already exist in international law: its lack of a cohesive core, authority and enforcement; its dependence on state power and recognition; its history tainted by colonial violence and inequality. In addition, ai introduces new problems of which international lawyers are not yet readily aware.
This special issue of the Nordic Journal of International Law presents six full-length articles, following a trajectory from historical and methodological shifts towards particular contexts of application, and onwards to regulatory and conceptual issues.
In the first article, John Haskell takes a historical approach to trace how international law and its scholarship have been continually reworking their relation to computer-oriented technologies since at least the 1950s.13 Haskell asks how this engagement takes place, what it tells us about the state of the discipline of international law, and what the consequences are of concentrating on the phenomena of digital technologies. He is sceptical of reading an overarching logic into digital technologies that makes them a mere variation of capitalism, with the computer as the next upgrade of our suffering as humans. Haskell ends by reminding us that we are ‘children of evolutionary biology’ and summons us to be ‘cyborg international lawyers’ in the original sense, freeing ourselves from the constraints of the environment to the extent that we wish.
In the second contribution, Geoff Gordon focuses on international institu-
tions, showing how ai has shifted terrains of contestation from arguments
concerning rules – associated with traditional international legal practice and
11 On the recurrent reference to ai as ‘magic’, see S. Larsson and M. Viktorelius, ‘Reducing the contingency of the world: magic, oracles, and machine-learning technology’, ai & Society: The Journal of Human-Centred Systems and Machine Intelligence (2022), doi: 10.1007/s00146-022-01394-2.
12 Amoore, supra note 1.
13 J. Haskell, ‘International Law as Cyborg Science’, Nordic Journal of International Law (2023)
this current issue.
quantification, tracing it from 1970s US law and politics into the computerization of warfare and its effects on targeting in conformity with ihl. In a second step, they analyse the effects that the quantification process has had on contemporary international law. As these technologies spread from the US to its allies, Gunneflo and Noll show that this has concrete repercussions for the practice of states under treaty and customary ihl alike.
The fifth article, contributed by Leila Brännström, analyses the emerging
global field of data regulation, asking how the EU approach to data govern-
ance relates to an emergent international law in this field. Regulating access to
data is ultimately about regulating the data used in machine learning and in
automated forms of decision support. The conflict is staged between a position
advocating the free flow of data (as that of the US), and a position asserting
data sovereignty (such as those held by China and India). Brännström shows
that the EU position seeks, but ultimately fails, to offer an alternative to these
binary positions. As the US leverages international trade agreements to pur-
sue free trade in data, the EU remains unable to counter the resulting global
inequalities with a mitigating framework. A major factor in this failure is the
divergence between EU data protection law and the digital economic practices of the 21st century. Today, Brännström finds, the EU quest for digital sovereignty is but an attempt to climb the ladder of global digital value chains. The upshot is that the EU has little to offer in a formative phase of an important area of international law.
In the sixth and final article, Outi Korhonen, Merima Bruncevic and Matilda Arvidsson focus on international legal subjectivity and cyberspace decision making. In their analysis Korhonen et al. draw on ‘the uncanny valley’ from robotics professor Masahiro Mori’s influential 1970 essay. Mori
famously argued that human-likeness in robots evokes a sense of affinity in
humans, but only up to a point where the robot is experienced as both too
human-like and eerily non-human, thus evoking uncanniness. Korhonen et al.
apply Mori’s notion, as well as the Freudian ‘uncanny’ in psychoanalysis and
jurisprudence, to the field of a-human, non-human, and more-than-human
agents in cyberspace – a space that international law and scholarship has, for
some time now, been debating in terms of regulatory capacity and the legal subjectivity of the variety of agents dwelling there. By discussing autonomous decision-making and the co-existence of human and non-human subjects in international law’s uncanny valley, the article proposes that international law needs to cater for a larger, dialectical spectrum of non-human subjectivities and digital jurisdictions, at the same time as its traditional field is challenged by radical developments of legal pluralism.
Matilda Arvidsson
University of Gothenburg, Department of Law, Gothenburg, Sweden
matilda.arvidsson@law.gu.se
Gregor Noll
Department of Law, University of Gothenburg, Gothenburg, Sweden
gregor.noll@law.gu.se
Acknowledgements