
Nordic Journal of International Law 92 (2023) 1–8

Introduction


Artificial Intelligence, Decision Making and
International Law
The question of how artificial intelligence (ai), including machine learning (ml),
impacts on law in general, and on international law in particular, has gained
increasing traction in recent years. Ensuing debates have mainly homed
in on the threats and opportunities this technology poses to law, and have remained
at a rather abstract level.
This special issue of the Nordic Journal of International Law aims to add
granularity and depth to existing research by narrowing our focus to what we
see as a critical area. The overarching question for the special issue is how ai,
including ai-supported and automated decision making, might impact on
decisions we take in international law. This allows the special issue to track
how technologically induced practice makes its way into domestic law, and,
potentially, from there onwards into international law.
The theme and aim are based on the insight that all forms of practice rely, in
one way or another, on enabling technologies, and that technological change
in and of itself generates changes in practice.1 Whether, and if so how, these
changes are relevant under domestic and international law is part of what the
special issue sets out to chart. With international law already being a field of
many disciplines, the question of ai and other emerging technologies opens
up the field to further interdisciplinary encounters. This introduction offers
reflections in response to the question and aim of the special issue, presenting
individual contributions along the way.
The general field of ai and law scholarship has grown rapidly, especially
since the early 1990s.2 Yet, even if international legal scholars have been following
these developments, little of this has hitherto translated into international legal scholarship.

1 L. Amoore, ‘Machine learning political orders’, 49:1 Review of International Studies (2023) pp.
20–36, at p. 21, doi:10.1017/S0260210522000031.
2 See, e.g., the journals Artificial Intelligence and Law and Frontiers in ai: Law and Technology.

Published with license by Koninklijke Brill nv | doi: 10.1163/15718107-bja10060


© Matilda Arvidsson and Gregor Noll, 2023 | ISSN: 0902-7351 (print) 1571-8107 (online)
This is an open access article distributed under the terms of the CC BY 4.0 license.

Perhaps international legal scholars have found it difficult to
decipher the meanings and relevance of ai and law debates, as these are often
geared towards computation and burdened with technological language.
The material applications to the field of international law have not been self-explanatory,
and it is fair to say that the main intended audience of the early ai
and law scholarship has not been that of international law and its scholars.
Regardless of the reasons, comparatively few intra-legal scholarly conversations
have borne fruit towards critical ends. As a result, some re-inventions
of the wheel have surfaced in international legal scholarship on ai, assisted
and automated decision making, as international legal scholarship suffers
from a lack of informed interdisciplinary conversations with ai scholarship
and intra-disciplinary ones with other fields of legal scholarly expertise.
It is in this fledgling state of scholarship that the present special issue places
itself. It draws both on an increasing interest in ai in international law and
scholarship, and on an increasing use and implementation of ai in a range of
sectors of interest to, and of application for, international law.3 With a promise
to optimize and ‘scale up’ governance and decision making processes and
to relieve lawyers and (other) governance professionals from mundane and
repetitive tasks, ai has rapidly gained traction.
3 Relevant examples of scholarship in ai and international law include, but are not limited
to: R. Adams and N. N. Loideáin, ‘Addressing indirect discrimination and gender stereotypes
in ai virtual personal assistants: the role of international human rights law’ 8:2 Cambridge
International Law Journal (2019) pp. 241–257, doi: https://doi.org/10.4337/cilj.2019.02.04;
M. Arvidsson, ‘The swarm that we already are: artificially intelligent (ai) swarming “insect
drones”, targeting and international humanitarian law in a posthuman ecology’ 11:1 Journal
of Human Rights and the Environment (2020) pp. 114–137, doi: 10.4337/jhre.2020.01.05; E.
Benvenisti, ‘ejil Foreword: Upholding Democracy Amid the Challenges of New Technolo-
gy: What Role for the Law of Global Governance?’ 29:1 The European Journal of Internation-
al Law (2018) pp. 9–82, doi: https://doi.org/10.1093/ejil/chy013; T. Burri, ‘International Law
and Artificial Intelligence’ 60:1 German Yearbook of International Law (2018) pp. 91–108, doi:
https://doi.org/10.3790/gyil.60.1.91; A. Hárs, ‘ai and international law – Legal personality
and avenues for regulation’ 62:4 Hungarian Journal of Legal Studies (2022) pp. 320–344, doi:
10.1556/2052.2022.00352; M. Hildebrandt, ‘Text-Driven Jurisdiction in Cyberspace’, 2:8 Theoretical
and Applied Law (2021) p. 7, doi: 10.22394/2686-7834-2021-2-6-20; F. Johns, ‘Data,
Detection, and the Redistribution of the Sensible in International Law’, 111:1 American Journal
of International Law (2017) pp. 57–103, doi: 10.1017/ajil.2016.4; F. Johns and C. Compton,
‘Data Jurisdictions and Rival Regimes of Algorithmic Regulation’, Regulation and Governance
(2022) pp. 63–84, doi: https://doi.org/10.1111/rego.12296; M. Langford, ‘Taming the Digital Le-
viathan: Automated Decision-Making and International Human Rights’, 114 American Journal
of International Law Unbound (2020) pp. 141–146, doi:10.1017/aju.2020.31; A. Leiter, and M. Pe-
tersmann, ‘Tech-based Prototypes in Climate Governance: On Scalability, Replicability, and
Representation’, 33 Law & Critique (2022), pp. 319–333, doi: https://doi.org/10.1007/s10978-
022-09331-4; M. Liljefors, G. Noll, and D. Steuer, War and Algorithm (Rowman & Littlefield,
New York, 2019); M. Maas, ‘International Law Does Not Compute: Artificial Intelligence and


Even in law, the question is seldom
whether ai is part of the answer to pressing contemporary concerns, but rather
how to address those concerns more efficiently through ai. Fear of missing out on
strategic advantage looms large: “Even if international lawyers for governments
in the United States and Europe are sceptical about the benefits of machine
learning and big data”, Ashley Deeks warns, “they must consider the possibility
that states such as China will begin to deploy these tools in power-enhanc-
ing ways”.4 The anxious tone is well known to international lawyers, being nearly
identical to that of the cold war period of bomber gaps and nuclear competition.
Other scholars call for lawyers not to throw international law’s slow hermeneu-
tic overboard in efficiency-driven efforts to maximize law’s calculable output.
There are, as Laurence Diver emphasizes, good reasons for interpretation and
execution of (international) legal norms remaining a reflexive rather than a
computational legal practice.5 When Louise Amoore suggests that the solu-
tions ml can offer beget the problems it deserves, she highlights the risk that
technology

forecloses the multiplicity of plural solutions to a single target, and reduces
the framing of the political problem to the weighting of inputs.
Every adjustment or modification of the parameters in the deep learning
model is simultaneously an arrangement of the political problem.6

Amoore’s analysis of the foreclosure of plural solutions strongly speaks to how
questions of justice, equality and a peaceful international order risk being
reduced to computational ones.
the Development, Displacement or Destruction of the Global Legal Order’, 20 Melbourne
Journal of International Law (2019) pp. 1–29; P. Molnar, ‘Technology on the margins: ai and
global migration management from a human rights perspective’, Cambridge International
Law Journal (2019) pp. 305–330, doi:10.4337/cilj.2019.02.07; G. Sullivan, The Law of the List: UN
counterterrorism sanctions and the politics of global security law (Cambridge University Press,
Cambridge, 2020); D. Van Den Meerssche, ‘Virtual Borders: International Law and the Elusive
Inequalities of Algorithmic Association’, 33:1 European Journal of International Law (2022) pp.
171–204, doi: https://doi.org/10.1093/ejil/chac007.
4 A. Deeks, ‘High-Tech International Law’, 88:3 George Washington Law Review (2020b) pp.
574–653, at 574.
5 L. Diver, ‘Computational legalism and the affordance of delay in law’, 1:1 Cross-Disciplinary
Research in Computational Law (2020) pp. 1–15.
6 Amoore, supra note 1, p. 10.


This brings us to the issue of what questions
international legal scholarship on ai and automatic decision making can,
should and does already ask.
Until now, international legal scholarship has primarily been seized with
questions about “the application of existing international law to new facts and
scenarios posed by ai-driven technologies”, as well as “how existing [interna-
tional] law should shape the way states develop and deploy these technolo-
gies”.7 This special issue turns the gaze from calls for regulation and adaptation
to instead look at what happens to international law – its central concepts,
modes of development, and related issues that are conventionally considered
central to international law – once it becomes entangled with ai. Taking seriously
the question of how ai, including ai-supported and automated decision
making, might impact on international law and the decisions made in it as a
discipline and a profession, the special issue invites scholars to look beyond
the alluring “glitz of ai hype” to consider the “great deal of mundane work
[that] underlies the practices of doing machine learning”.8 Towards this end,
an explicit aim in this special issue is to analyze ai, including ml, in its current
form, which is to say that machine autonomy and technological singularity are
not part of the analysis.9 It is not an imagined future of superintelligent
autonomous machines, self-governing autonomous weapons systems or robo-judges
that drives the analyses of our authors. Rather, their focus is on the mundane pro-
cesses of ml and its associated practices, as these are already at work in pres-
ent conditions at both national and international levels of governance and the
implementation of law.
But how to frame ml as it impacts international law? “If we retain an
artifact-centric conception of technology”, Langford argues, we risk “reifying and
romanticizing the imaginary of a ‘human/e state’”.10

7 A. Deeks, ‘Introduction to the Symposium: How will Artificial Intelligence Affect
International Law?’ 114 American Journal of International Law Unbound (2020a) pp. 138–
140, at p. 139, doi:10.1017/aju.2020.29.
8 M.C. Elish and D. Boyd, ‘Situating methods in the magic of big data and ai’ 85:1
Communication Monographs (2018) pp. 57–80, at p. 69, doi: https://doi.org/10.1080/03637751.
2017.1375130.
9 Technological singularity, or ‘singularity’, is a concept closely associated with popular culture
– in particular science fiction. It denotes the point in technological development when
artificial machine intelligence outgrows human intelligence and gains substantial self-understanding
(self-consciousness), at which point it is imagined as irreversibly being
beyond human control.
10 Langford, supra note 3, at 145 (footnote omitted).


Reifying and romanticizing cuts both ways, though, as magical capabilities are being projected onto ai as if
it could be isolated from the deficits of its all-too-human enablers and
embedders.11 The individual authors of this special issue steer clear of both, avoiding
‘optimist’ as much as ‘pessimist’ views on ai and associated technologies.
The overall picture emerging from the following articles is neither of cher-
ry-picked ‘salient technological failures’ nor of technological magic. Instead,
the mixed message is that ai may, but does not necessarily, solve problems in
international law. Moreover, ai certainly restructures and delimits what can
be addressed as a problem.12 It exacerbates certain already existing problems
of international law, such as its lack of a cohesive core, authority and execution; its
dependence on state power and recognition; and its history tainted by colonial
violence and inequality. In addition, ai introduces new problems that international
lawyers are not yet readily aware of or attentive to.
This special issue of the Nordic Journal of International Law presents six
full-length articles, following a trajectory from historical and methodological shifts
towards particular contexts of application, and onwards to regulatory and con-
ceptual issues.
In the first article, John Haskell takes a historical approach to trace how
international law and its scholarship have been continually reworking their
relation to computer-oriented technologies since at least the 1950s.13 Haskell asks
how this engagement takes place, what it tells us about the state of the disci-
pline of international law, and the consequences of concentrating on the phe-
nomena of digital technologies. He is sceptically inclined towards reading an
overarching logic into digital technologies, making them into a mere variation
of capitalism, with the computer being the next upgrade of our suffering as
humans. Haskell ends by reminding us that we are ‘children of evolutionary
biology’ and summons us to be ‘cyborg international lawyers’ in the original
sense, freeing ourselves from the constraints of the environment to the extent
that we wish.
In the second contribution, Geoff Gordon focuses on international institutions,
showing how ai has shifted terrains of contestation from arguments
concerning rules – associated with traditional international legal practice and
argumentation – to pattern-recognition processes that determine efficient
solutions towards optimal outcomes.

11 On the recurrent reference to ai as ‘magic’, see S. Larsson and M. Viktorelius, ‘Reducing the
contingency of the world: magic, oracles, and machine-learning technology’, ai & Society:
The Journal of Human-Centred Systems and Machine Intelligence (2022), doi: https://doi.
org/10.1007/s00146-022-01394-2.
12 Amoore, supra note 1.
13 J. Haskell, ‘International Law as Cyborg Science’, Nordic Journal of International Law (2023)
this current issue.


He traces the part that the interoperation
of analogue and digital information technologies plays in this shift, to ask how
it has determined and changed what is made legible in terms of international
law. The article argues that the interoperation of digital and analogue technol-
ogies has supported an intensification of efficiency-maximizing institutional
routines, expressed in the optimization function associated with neural net-
works and ai, in a way that defies the reflective character associated with
traditional international law and legal practice. ai, and the increasing imple-
mentation of it in international institutions, Gordon argues, assumes power by
both defining problems and engendering international governance issues in
the act of addressing them.
How this plays out concretely is explored in Matilda Arvidsson and Gregor
Noll’s text, the third article in this issue. It offers an account of how the authors,
together with an industry partner – Smartr, whose Erik Lorentzen and Mattias
Sundén contribute statistical analysis to the article – and a public
partner – the Swedish Migration Agency – attempted to build a post hoc inter-
vention ‘anti-discrimination machine’. Drawing on autoethnographic method,
as well as legal and ml scholarship, their contribution seeks to answer whether
an ml-driven post hoc intervention system, such as the one they attempted to
build, reduces the overall risk of discrimination emerging from human discre-
tion in asylum law decision making. They identify discretion as being merely
shifted from the human decision maker in the asylum case to human decision
makers in the process of data wrangling. Arvidsson and Noll conclude that an
ml-driven ‘anti-discrimination machine’ will generally not reduce the overall
risk of discrimination emerging from human discretion in legal decision making
in asylum law. This is, they argue, due to the mere shifting of discretion
from one space to another, the latter being less accessible to public scrutiny
and accountability.
The fourth article, authored by Markus Gunneflo and Gregor Noll, asks what
proportionality reasoning means for decision support in international law,
both in that rendered by textbooks and in that enabled by ai and ml. Drawing
on empirical material from ihl, Gunneflo and Noll proceed in two steps. First,
they analyze the principle of proportionality as it has successively emerged
in international law, finding that it actively promotes automation of decision
taking, as it promotes quantitative modes of thought necessary to perform
the cost-benefit analysis intrinsic to proportionality. They call this process

quantification, tracing it from 1970s US law and politics onwards into
the computerization of warfare and its effects on targeting in conformity with
ihl. In a second step, they analyze the effects that the quantification process
has had on contemporary international law. Gunneflo and Noll show that, as
these technologies spread from the US to its allies, they have concrete
repercussions for the practice of states under treaty and customary ihl alike.
The fifth article, contributed by Leila Brännström, analyses the emerging
global field of data regulation, asking how the EU approach to data govern-
ance relates to an emergent international law in this field. Regulating access to
data is ultimately about regulating the data used in machine learning and in
automated forms of decision support. The conflict is staged between a position
advocating the free flow of data (such as that of the US) and a position asserting
data sovereignty (such as those held by China and India). Brännström shows
that the EU position seeks, but ultimately fails, to offer an alternative to these
binary positions. As the US leverages international trade agreements to pur-
sue free trade in data, the EU remains unable to counter the resulting global
inequalities with a mitigating framework. A major factor in this failure is the
divergence between EU data protection law and the digital economic prac-
tices of the 21st century. Today, Brännström finds, the EU quest for digital sov-
ereignty is but an attempt to climb the ladder of global digital value chains. In
consequence, the EU has little to offer in a formative phase for an important area
of international law.
In the sixth and final article, Outi Korhonen, Merima Bruncevic and Matilda
Arvidsson focus on international legal subjectivity and cyberspace decision
making. In their analysis, Korhonen et al. draw on ‘the uncanny valley’
from the influential 1970 essay by robotics professor Masahiro Mori. Mori
famously argued that human-likeness in robots evokes a sense of affinity in
humans, but only up to a point where the robot is experienced as both too
human-like and eerily non-human, thus evoking uncanniness. Korhonen et al.
apply Mori’s notion, as well as the Freudian ‘uncanny’ in psychoanalysis and
jurisprudence, to the field of a-human, non-human, and more-than-human
agents in cyberspace – a space that international law and scholarship has, for
some time now, been debating in terms of regulatory capacity and legal subjec-
tivity of the variety of agents dwelling there. By discussing autonomous decision
making and the co-existence of human and non-human subjects in
international law’s uncanny valley, the article proposes that international law
needs to cater for a larger, dialectical spectrum of non-human subjectivities
and digital jurisdictions at the same time as its traditional field is challenged
by radical developments of legal pluralism.



Matilda Arvidsson
University of Gothenburg, Department of Law, Gothenburg, Sweden
matilda.arvidsson@law.gu.se

Gregor Noll
Department of Law, University of Gothenburg, Gothenburg, Sweden
gregor.noll@law.gu.se

Acknowledgements

Gregor Noll is a professor of international law, and Matilda Arvidsson is an
associate professor of international law, both at the Department of Law,
University of Gothenburg. The work on this special issue was carried out within
the wasp-hs financed project ai, Democracy, and the Social Contract,
of which Matilda Arvidsson is the pi, and within the period during which
Gregor Noll held the position of a Torsten Söderberg Research Professor at the
School of Business, Economics and Law, Gothenburg. The editors would like to
thank assistant senior lecturer Anna Nilsson, Lund University, who assumed
the editorial functions of selecting and communicating with referees for the editors’
own contribution ‘Decision Making in Asylum Law and Machine Learning:
An Autoethnographic Account and Lessons Learned on Data Wrangling and
Human Discretion’ so as to ensure double anonymity. Thanks also to Andrea Leiter,
Jannice Käll and Fleur Johns for comments and discussions on draft versions of
the articles; to the anonymous reviewers of each article for their valuable input
and work; as well as to the editorial team at the Nordic Journal of International
Law for their enthusiasm regarding the topic of this special issue and for their
flexibility in working towards its final completion. A special thanks to associ-
ate professor Miriam Bak McKenna who was the managing editor at the time
when the special issue was proposed, as well as to assistant professor Zuzanna
Godzimirska who is the current managing editor.

