
KU LEUVEN CAMPUS BRUSSELS

FACULTY OF LAW
Academic year 2020-2021

When AI goes criminal: attributing criminal liability in case of crimes involving AI


16.282 words

Promoter: M. PANZAVOLTA

Master’s thesis, submitted by


Bjarne GALLE
as part of the final examination for the degree of
MASTER OF INTELLECTUAL PROPERTY AND ICT LAW
Plagiarism declaration

I confirm that this thesis is my own work, that all ideas contained in the thesis are expressed in my
own words and that I have not literally or quasi-literally taken anything over from other texts, except
for fragments cited between quotes for which I provided full and accurate bibliographical data.

-Bjarne Galle
Abstract

Artificial intelligence is already a reality with many beneficial applications. But AI can also be used
for malicious purposes, such as to commit crimes, and it can even commit crimes when this was never
the intention of the user or programmer. From crashes involving self-driving vehicles to drug-
purchasing shopping bots and racist chatbots, crimes involving AI are a pressing issue that will only
grow in importance in the years to come. However, AI can have characteristics that significantly
complicate criminal liability, such as autonomy, unpredictability, and a lack of
transparency. The purpose of this thesis is therefore to assess how the current general criminal law
theory can attribute criminal liability when AI is involved in crimes. To this end, the thesis focuses
on the liability of users of the AI, programmers, and AI agents themselves.

The issue at hand requires going back to the very basics of criminal liability: the requirements of
actus reus and mens rea. This is because AI can pose problems on a fundamental level and therefore
requires that the very basics of criminal law are re-examined. These two elements that
constitute criminal offences are first outlined on a theoretical level so that they can then be applied
to crimes involving AI.

When AI is used instrumentally as a means to an end to intentionally commit crimes, the current
criminal law framework around intentional crimes more or less suffices, although an appeal must
be made to some creative legal solutions when the AI can be considered as being semi- or fully
intelligent. This is because the AI system can no longer be equated to a mere inanimate tool in such
cases. When AI commits crimes that were not intended by the user or programmer, however,
significant problems do occur. An appeal must then be made to criminal negligence. In that
respect, the applicable duty of care remains very unclear and turns out to be very hard to assess
in the absence of proper legal or industry standards. Criminal negligence also requires that risks are
foreseeable, while the characteristics of AI can make it very difficult to foresee these risks. Many
issues also arise when trying to hold AI agents themselves criminally liable for the crimes they
commit. AI currently still lacks legal personality, but a bigger problem is posed by AI’s inability to
meet the basic requirements of criminal liability. A potential element of voluntariness can preclude
AI from meeting the actus reus requirement, while requirements such as having free will, being able
to make moral judgments, having true intentions, and being capable of forming awareness can all
preclude the mens rea requirement from being met. Various problems also arise in the context of
prosecuting and punishing AI.
1 Introductory chapter................................................................................................................ - 1 -
1.1 Problem statement and relevance.................................................................................. - 1 -
1.2 Research objectives and research questions .................................................................. - 2 -
1.3 Methodology ................................................................................................................... - 2 -
1.4 Limitations ....................................................................................................................... - 3 -
2 Crimes involving artificial intelligence ..................................................................................... - 5 -
2.1 Artificial intelligence........................................................................................................ - 5 -
2.1.1 Concept ..................................................................................................................... - 5 -
2.1.2 Machine learning....................................................................................................... - 7 -
2.2 Some examples of crimes involving AI ............................................................................ - 8 -
2.3 Characteristics of AI that complicate criminal liability ................................................. - 12 -
3 Relevant offenders in case of crimes involving artificial intelligence.................................... - 14 -
3.1 Purpose of this chapter ................................................................................................. - 14 -
3.2 Various relevant offenders............................................................................................ - 14 -
3.3 Focus of the thesis......................................................................................................... - 15 -
4 Two constituent elements of crimes required for criminal liability ...................................... - 17 -
4.1 Actus reus ...................................................................................................................... - 17 -
4.1.1 The scope of “acts” ................................................................................................. - 17 -
4.1.2 Voluntariness........................................................................................................... - 18 -
4.2 Mens rea........................................................................................................................ - 19 -
4.2.1 General sense: culpability or blameworthiness ...................................................... - 19 -
4.2.2 Special sense: mental states required by specific offences .................................... - 20 -
4.2.2.1 Intent................................................................................................................ - 20 -
4.2.2.2 Negligence........................................................................................................ - 21 -
4.2.2.3 Recklessness..................................................................................................... - 22 -
4.2.2.4 Exception: strict liability ................................................................................... - 23 -
5 Attributing criminal liability in case of crimes involving artificial intelligence ...................... - 25 -
5.1 Introduction .................................................................................................................. - 25 -
5.2 Intentional crimes: instrumental use of AI ................................................................... - 25 -
5.2.1 Concept ................................................................................................................... - 25 -
5.2.2 AI as an unintelligent and inanimate tool ............................................................... - 26 -
5.2.3 AI as a semi-intelligent entity .................................................................................. - 27 -
5.2.4 AI as a fully intelligent entity (strong AI) ................................................................. - 28 -
5.3 Negligence-based liability ............................................................................................. - 30 -
5.3.1 Concept ................................................................................................................... - 30 -
5.3.2 Negligence ............................................................................................................... - 30 -

5.3.2.1 Requirements................................................................................................... - 30 -
5.3.2.2 The applicable duty of care.............................................................................. - 30 -
5.3.2.3 Foreseeability of risks ...................................................................................... - 32 -
5.3.3 Recklessness ............................................................................................................ - 33 -
5.3.4 Strict liability: not a viable solution ......................................................................... - 34 -
5.3.5 The doctrine of natural and probable consequences ............................................. - 35 -
5.4 Criminal liability of AI agents ........................................................................................ - 36 -
5.4.1 Concept ................................................................................................................... - 36 -
5.4.2 Legal personality ..................................................................................................... - 36 -
5.4.3 Actus reus ................................................................................................................ - 38 -
5.4.4 Mens rea.................................................................................................................. - 40 -
5.4.4.1 General sense: culpability or blameworthiness .............................................. - 40 -
5.4.4.2 Special sense: mental states required by specific offences ............................ - 42 -
5.4.4.2.1 Intent ........................................................................................................... - 42 -
5.4.4.2.2 Negligence ................................................................................................... - 44 -
5.4.4.2.3 Recklessness ................................................................................................ - 45 -
5.4.4.2.4 Exception: strict liability .............................................................................. - 46 -
5.4.5 Prosecuting and punishing AI .................................................................................. - 46 -
5.4.5.1 Prosecuting AI in practice ................................................................................ - 46 -
5.4.5.2 Imposing criminal punishments on AI ............................................................. - 47 -
5.4.5.3 Functions of criminal law and AI ...................................................................... - 49 -
6 Conclusion .............................................................................................................................. - 53 -
7 Bibliography ........................................................................................................................... - 55 -
7.1 Legal and policy documents .......................................................................................... - 55 -
7.1.1 EU ............................................................................................................................ - 55 -
7.1.2 Other ....................................................................................................................... - 55 -
7.2 Doctrine ......................................................................................................................... - 56 -
7.2.1 Books and contributions to books .......................................................................... - 56 -
7.2.2 Articles in journals ................................................................................................... - 57 -
7.2.3 Internet sources ...................................................................................................... - 60 -
7.3 News sources ................................................................................................................ - 61 -

1 Introductory chapter

1.1 Problem statement and relevance

Artificial intelligence, or AI, is a hot topic right now. It is no longer science fiction: AI is a reality
with many existing applications that are already part of our lives.1 AI is now being used to
detect cardiac arrests based on the sound of the voice of people who call emergency services, to
improve the accuracy of radiologists, and to improve the welfare of animals at animal farms.2 The
European Economic and Social Committee (EESC) has also issued an opinion, stating that AI may
significantly benefit society by, inter alia, personalizing education, improving jurisprudence, making
society safer, and boosting the European economy.3 The EESC even went so far as to say that AI
“may even potentially help eradicate disease and poverty”.4

However, AI can also come with risks. In particular, AI can be used to commit crimes, and it can
even commit crimes when this was never the intention of the user or programmer. Criminal law
thus faces new challenges: it is traditionally anthropocentric, yet it is now confronted with
artificial entities that can themselves be considered as having some measure of
intelligence. Considering AI as a mere inanimate tool in the hands of a human perpetrator, as is the
case with a hammer or knife, may no longer be appropriate in all situations. Hence, it should be
assessed whether the current criminal law framework can adequately address the question of
criminal liability when AI is involved in the commission of crimes. Moreover, by applying existing
criminal law principles that are anthropocentric to crimes involving AI, the research also sheds new
light upon these old principles and may therefore expose problems and deficiencies of the current
criminal law framework.

1
Communication (Comm.). Artificial Intelligence for Europe, 25 April 2018, COM(2018)237 final, ch 1.
2
Ibid.
3
Opinion (EESC). Artificial intelligence: The consequences of artificial intelligence on the (digital) single market,
production, consumption, employment and society, 31 August 2017, 2017/C 288/01, art. 3.1.
4
Ibid.
1.2 Research objectives and research questions

The objective of the thesis is to assess how criminal liability can be attributed in case of crimes
involving AI. By making this assessment, the thesis will investigate whether the current criminal law
framework has the means to adequately address the liability question for such crimes.

To that end, the issue at hand will be approached from the perspective of general criminal law
theory. The main research question can further be divided into multiple preparatory sub-questions
that must first be answered. These research questions can be formulated as follows:

Main research question:

→ How can criminal liability be attributed in case of crimes involving AI, according to the current
general criminal law theory? (Chapter 5)

Sub-questions:

→ What is AI and how can AI be involved in crimes? (Chapter 2)

→ What individuals and entities are the relevant offenders in case of crimes involving AI? (Chapter
3)

→ What are the constituent elements of crimes required for criminal liability? (Chapter 4)

1.3 Methodology

As was already mentioned, the research will be conducted from the perspective of general criminal
law theory. This has two implications.

First, it means that the issue at hand will be addressed using general criminal law, rather than the
criminal law of the legal system of any particular state. This is because the thesis aims to address an
issue that is currently relevant for various legal systems around the world, and also because the
issue goes back to the very basic requirements of criminal law that all legal systems more or less
hold in common. In this context, "general" refers to the various legal systems of the Western world,
namely the common law and continental law systems. It should be mentioned, however, that the

thesis does somewhat favour common law, as the majority of the literature originates from
common law systems.

Second, the research will be conducted from the perspective of criminal law theory, rather than
from the perspective of legislation, case law, or a combination of these aspects. The reason for this
is that large parts of the thesis are very forward-looking and speculative and have yet to generate
any legislation or case law at all. Hence, an abstract, theoretical approach is best suited to
accomplish the research objectives. As a result, legislation and case law will in principle not be used,
save for illustrative purposes.

As for the type of research that will be conducted, the research will be descriptive, evaluative, and
normative. The research will be descriptive because the thesis aims to describe the existing
phenomenon of AI and its involvement in crimes, as well as the currently existing general criminal
law theory. Additionally, the eventual goal of the research will be evaluative, as the current criminal
law theory will then be applied to crimes involving AI, so that it will be made clear whether or not
the current criminal law theory has the adequate means to answer the liability question when AI is
involved in crimes. This will make the research normative as well, as various problems and
deficiencies of the current criminal law framework will be pointed out. The conclusions of the thesis
could then be used for normative purposes by serving as a basis for new criminal law legislation.

Of course, the thesis will also be somewhat interdisciplinary. As the thesis concerns crimes involving
AI, the concept of AI will have to be explained, as well as some machine learning methods and some
characteristics of AI that may complicate criminal liability. Hence, the research must delve into the
basics of computer science and computer programming. Some arguments will even involve
discussions about philosophical concepts. All of this will be reflected in the sources that are used,
as a significant portion will not, strictly speaking, be legal doctrine.

1.4 Limitations

An important limitation will be that the thesis will not address criminal liability that may arise when
AI becomes the target or victim of crime. Instead, the thesis will be limited to criminal liability
resulting from AI committing (or being used to commit) crimes.

Another limitation is that the thesis will refrain from addressing the issue of civil liability, including
civil liability resulting from criminal liability, although parallels may be drawn where this is useful.
This also means that the thesis will not concern itself with the question of whether crimes such as
these are best regulated through criminal law in the first place. The debate about whether harm
caused by AI should best be approached through either civil liability or criminal liability (or through
other means, such as ethical programming) falls outside the scope of the thesis. Instead, the
objective of the thesis is to assess whether criminal law could (rather than should) attribute liability
when AI is involved in crimes, if this were to be desired.

2 Crimes involving artificial intelligence

2.1 Artificial intelligence

2.1.1 Concept

The term "artificial intelligence" was first coined in the 1950s by John McCarthy, who is known as "the
father of artificial intelligence".5 McCarthy used the term to propose a research project in order to
“find how to make machines use language, form abstractions and concepts, solve kinds of problems
now reserved for humans, and improve themselves”.6

Yet, many decades later, AI remains a vague and broad concept that is the subject of much debate.
Different people understand AI in different ways, but on a general level, there is agreement that AI
is about the attempt to reproduce intelligence in computer systems.7 However, intelligence as a
concept is also highly debated and lacks any unanimously accepted definition.8 As a result, it is
unsurprising that the same is true for the concept of AI. There is, however, a helpful distinction that
is made by Russell and Norvig in what is considered one of the most widely used AI textbooks,9 where
they argue that definitions of AI can be grouped into four categories that relate to human thinking,
human behaviour, rational thinking, and rational behaviour respectively.10

For the purposes of this thesis, it is not necessary to select any particular definition. Not only would
picking one definition unavoidably be controversial, but there is also the argument that omnis definitio
in iure periculosa est: every definition in law is dangerous.11 By picking one particular definition, some forms
of AI would be excluded without this necessarily being the intention, and any particular definition
may not hold up in the long term, considering the very rapidly developing nature of AI and

5
V. RAJARAMAN, “John McCarthy – Father of Artificial Intelligence”, Resonance 2014, vol. 19, 198-207.
6
J. MCCARTHY, M.L. MINSKY, N. ROCHESTER and C.E. SHANNON, “A Proposal for the Dartmouth Summer Research
Project on Artificial Intelligence”, AI Magazine 2006, vol. 27(4), 12.
7
P. WANG, “What Do You Mean by ‘AI’?” in P. WANG, B. GOERTZEL and S. FRANKLIN (eds.), Artificial General Intelligence
2008: Proceedings of the First AGI Conference, Amsterdam, IOS Press, 2008, ch 1.
8
See S. LEGG and M. HUTTER, “A collection of Definitions of Intelligence”, Frontiers in Artificial Intelligence and
Applications 2007, vol. 157, 17-24.
9
Report (AI HLEG). A definition of Artificial Intelligence: main capabilities and scientific disciplines, 8 April 2019,
https://ec.europa.eu/digital-single-market/en/news/definition-artificial-intelligence-main-capabilities-and-scientific-
disciplines, 1.
10
S.J. RUSSELL and P. NORVIG, Artificial intelligence: a modern approach, Upper Saddle River, Pearson Education, 2010,
1-2; A. STEVANOVIC and Z. PAVLOVIC, "Concept, Criminal Legal Aspects of the Artificial Intelligence and Its Role in Crime
Control", Journal of Eastern-European Criminal Law 2018, vol. 2018(2), 33-34.
11
A. STEVANOVIC and Z. PAVLOVIC, "Concept, Criminal Legal Aspects of the Artificial Intelligence and Its Role in Crime
Control", Journal of Eastern-European Criminal Law 2018, vol. 2018(2), 32.
technology in general. Moreover, “by far the greatest danger of Artificial Intelligence is that people
conclude too early that they understand it”.12 Hence, it is definitely not the place of a legal scholar
to define the concept of AI in detail. Instead, the term "artificial intelligence" will be used in the
most general way, meaning computer systems that have some semblance of intelligence.
This notion of AI is, of course, very broad and encompasses many things. However, to further
delineate this notion a bit, some of the basic features and characteristics of AI will be highlighted
later on in this chapter.13 The intent there is not to comprehensively identify all features and
characteristics of AI, but rather those that can especially complicate the criminal liability issue.
Additionally, the concept of machine learning will be discussed in the next section. It should also be
noted that the term AI will not be used to refer to the scientific discipline that is dedicated to
building AI systems,14 but rather to the actual algorithms, systems, and devices that employ AI
technology and which can (be used to) commit crimes.

Further, it is also useful to make a distinction between narrow or weak AI on the one hand and
general or strong AI on the other. Narrow AI means AI that can perform one or a few specific tasks
(often exceptionally well), whereas general AI can perform most activities that humans can do.15
Currently existing AI systems are all narrow AI systems.16 Hence, current AI systems do not have
general-purpose reasoning, or "common sense",17 which is what makes humans flexible enough to perform a
broad variety of tasks quite well, rather than merely excelling at a select number of specific tasks.18 Experts in
the field do not expect general AI to come about for at least another few decades.19

12
E. YUDKOWSKY, “Artificial Intelligence as a Positive and a Negative Factor in Global Risk” in N. BOSTROM and M.M.
CIRKOVIC (eds.), Global and Catastrophic Risks, New York, Oxford University Press, 2008, 308.
13
See section 2.3.
14
Report (AI HLEG). A definition of Artificial Intelligence: main capabilities and scientific disciplines, 8 April 2019,
https://ec.europa.eu/digital-single-market/en/news/definition-artificial-intelligence-main-capabilities-and-scientific-
disciplines, 3.
15
Ibid., 5.
16
Ibid., 5.
17
P. SCHARRE and M.C. HOROWITZ, Artificial Intelligence: What Every Policymaker Needs to Know, Center for a New
American Security, 2018, https://www.cnas.org/publications/reports/artificial-intelligence-what-every-policymaker-
needs-to-know, 10.
18
Ibid., 4.
19
R. ABBOTT and A. SARCH, “Punishing Artificial Intelligence: Legal Fiction or Science Fiction”, UC Davis Law Review
2019, vol. 53, 332.
2.1.2 Machine learning

Machine learning is an important subset of AI that can especially complicate the criminal liability
issue. Once again there are various definitions, but they all seem to hold in common that a system
is able to “intelligently perform tasks beyond traditional number crunching by learning the
surrounding environment through repeated examples”.20 To put it simply, machine learning enables
a system to display intelligence by learning from data and subsequently adjusting its behaviour to
optimally achieve a given goal.21 This means that machine learning systems are not hard-coded in
the sense that they are explicitly programmed to produce particular outcomes, but rather
soft-coded, as they are able to adapt their own architecture through experience to more optimally
achieve the task at hand.22 Providing input data along with desired outcomes so that a system can
adapt itself is called training.23 Unsurprisingly, the data that a system uses to learn is of tremendous
importance: “data is the fuel that powers the engine of machine learning”.24
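By way of illustration, the following minimal sketch shows what "training" amounts to in practice: a model is given labelled examples (inputs paired with the desired outputs) and adjusts its own parameters so that it can make predictions for new inputs. The sketch assumes the Python scikit-learn library; the toy data and the choice of model are purely illustrative and are not drawn from any of the sources cited in this thesis.

```python
# A minimal, illustrative sketch of "training" a machine learning model.
# Assumes scikit-learn is installed; the data and model choice are toy examples.
from sklearn.linear_model import LogisticRegression

# Labelled training data: each input comes with the correct output (label).
X_train = [[0.1, 0.9], [0.8, 0.2], [0.2, 0.8], [0.9, 0.1]]
y_train = [1, 0, 1, 0]

model = LogisticRegression()
model.fit(X_train, y_train)           # "training": the model adapts itself to the data

print(model.predict([[0.15, 0.85]]))  # prediction for a new, unseen input
```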

A distinction can be made between several popular methods of machine learning.
Supervised learning is useful for making predictions and uses so-called labelled data, meaning that
each piece of input data that the system uses to learn also comes with the correct output relating
to the characteristic of interest.25 Unsupervised learning, on the other hand, uses unlabelled data
and is used to identify patterns within large amounts of data.26 In reinforcement learning, which is
very important in robotics,27 the AI system is free to make decisions and is then provided with a
reward signal depending on whether it makes a good or bad decision, so that the system can learn
to maximize these positive reward signals over time.28 Another interesting technique is deep

20
I. EL NAQA and M.J. MURPHY, "What Is Machine Learning?" in I. EL NAQA, R. LI and M.J. MURPHY (eds.), Machine
Learning in Radiation Oncology: Theory and Applications, Cham, Springer International Publishing, 2015, 6.
21
P. SCHARRE and M.C. HOROWITZ, Artificial Intelligence: What Every Policymaker Needs to Know, Center for a New
American Security, 2018, https://www.cnas.org/publications/reports/artificial-intelligence-what-every-policymaker-
needs-to-know, 5.
22
I. EL NAQA and M.J. MURPHY, "What Is Machine Learning?" in I. EL NAQA, R. LI and M.J. MURPHY (eds.), Machine
Learning in Radiation Oncology: Theory and Applications, Cham, Springer International Publishing, 2015, 4.
23
Ibid.
24
P. SCHARRE and M.C. HOROWITZ, Artificial Intelligence: What Every Policymaker Needs to Know, Center for a New
American Security, 2018, https://www.cnas.org/publications/reports/artificial-intelligence-what-every-policymaker-
needs-to-know, 5.
25
B. BUCHANAN and T. MILLER, Machine Learning for Policymakers: What It Is and Why It Matters, Belfer Center for
Science and International Affairs, 2017, https://www.belfercenter.org/publication/machine-learning-policymakers, 6.
26
Ibid., 9.
27
Ibid., 11.
28
Report (AI HLEG). A definition of Artificial Intelligence: main capabilities and scientific disciplines, 8 April 2019,
https://ec.europa.eu/digital-single-market/en/news/definition-artificial-intelligence-main-capabilities-and-scientific-
disciplines, 4.
learning, which consists of layers of nodes that are somewhat similar to the neurons in the human
brain.29 By having multiple layers between the input and output, this method can improve accuracy
while requiring less human guidance.30
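As a rough illustration of the reward-signal mechanism described above, the following sketch shows a trivially simple reinforcement-learning loop: an agent repeatedly chooses an action, receives a reward, and updates its estimate of each action's value so that rewarded behaviour becomes more likely over time. The two-action "environment" is invented purely for illustration and does not correspond to any real system discussed in this thesis.

```python
import random

# A toy illustration of reinforcement learning: the agent learns, from reward
# signals alone, which of two actions is worth repeating. Entirely hypothetical.
values = {"a": 0.0, "b": 0.0}   # the agent's running estimate of each action's value
alpha = 0.1                      # learning rate

def reward(action):
    return 1.0 if action == "a" else 0.0   # in this toy environment, only "a" is rewarded

for _ in range(1000):
    # occasionally explore at random; otherwise pick the action currently valued highest
    if random.random() < 0.1:
        action = random.choice(["a", "b"])
    else:
        action = max(values, key=values.get)
    values[action] += alpha * (reward(action) - values[action])

print(values)   # the estimated value of "a" converges towards its reward of 1.0
```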

2.2 Some examples of crimes involving AI

When discussing the issue of criminal liability in case of crimes involving AI, it is useful to first briefly
give some real-world examples of such crimes. This will also serve to illustrate some of the
characteristics of AI that can complicate liability, which will be discussed in more detail in the next
section. These examples are by no means exhaustive but are intended to give a general idea of what
crimes involving AI may look like. When thinking about crimes that involve AI, the science fiction
doomsday scenario of “killer robots” of course comes to mind. But in reality, crimes that involve AI
are already very relevant today, and the importance of these crimes will only increase in the future.

One of the earliest examples of a potential crime that involved AI dates back to 1981, when Japanese
factory worker Kenji Urada had to perform maintenance on a robot but had forgotten to completely
turn the robot off.31 The robot ended up smashing the unfortunate man into an adjacent machine
using its hydraulic arm, making him the first recorded victim to be killed by a robot.32

A more recent example is the deployment of an online shopping bot, named the Random Darknet
Shopper, by some Swiss artists in 2014. The goal was to let the bot make random purchases on the
dark web so that the items could be put on display at an art gallery.33 The bot purchased various
items, such as clothes and cigarettes, but also master keys used by the fire brigade, a fake Louis

29
B. BUCHANAN and T. MILLER, Machine Learning for Policymakers: What It Is and Why It Matters, Belfer Center for
Science and International Affairs, 2017, https://www.belfercenter.org/publication/machine-learning-policymakers, 14-
15.
30
Report (AI HLEG). A definition of Artificial Intelligence: main capabilities and scientific disciplines, 8 April 2019,
https://ec.europa.eu/digital-single-market/en/news/definition-artificial-intelligence-main-capabilities-and-scientific-
disciplines, 4.
31
Y.-H. WENG, C.-H. CHEN and C.-T. SUN, "Toward the Human–Robot Co-Existence Society: On Safety Intelligence for
Next Generation Robots", International Journal of Social Robotics 2009, vol. 1, 273.
32
Ibid.
33
M. POWER, “What happens when a software bot goes on a darknet shopping spree?”, The Guardian 5 December
2014, https://www.theguardian.com/technology/2014/dec/05/software-bot-darknet-shopping-spree-random-
shopper.
Vuitton handbag, and ten ecstasy pills.34 A criminal investigation was started, but in the end the
Swiss prosecutor decided to clear the artists of all charges.35

The aforementioned bot is not the only bot that has engaged in potentially criminal activities. In
2016, Microsoft deployed an AI chatbot named Tay on Twitter that could hold automated
discussions with other users by imitating their language.36 Within 24 hours the bot had to be pulled
offline by Microsoft, seeing as Tay had learned to deny the Holocaust, advocate genocide, and make
various racist and sexist remarks.37 Obviously, this was never the intention of Microsoft, which goes
to show the unpredictability of AI systems that can self-learn.38

Serious concerns are also being raised about the possible use of AI on the stock market that may
lead to market abuse and manipulation. 39 When the goal is profit maximisation, AI can learn to
manipulate the market through reinforcement learning, as manipulative behaviour that is profitable
could lead to a positive reward signal.40 This can become particularly problematic when AI is used
in high-frequency trading algorithms, seeing as these algorithmic systems can make extremely fast
trading decisions, resulting in thousands or even millions of trades per day.41 AI can play an
important role in the strategies behind these trading decisions through, inter alia, machine learning
and deep learning.42 High-frequency trading has already been linked to multiple stock market
crashes, the first being the “Flash Crash” of the DJIA index in 2010.43

Another example is traffic accidents involving self-driving cars. The mass availability of fully
autonomous vehicles is likely not far off, while plenty of cars with various self-driving features are
already commercially available (Tesla Autopilot of course comes to mind). While these cars may

34
Ibid.
35
J. KASPERKEVIC, “Swiss police release robot that bought ecstasy online”, The Guardian 22 April 2015,
https://www.theguardian.com/world/2015/apr/22/swiss-police-release-robot-random-darknet-shopper-ecstasy-
deep-web.
36
D. VICTOR, "Microsoft Created a Twitter Bot to Learn From Users. It Quickly Became a Racist Jerk.", The New York
Times 24 March 2016, https://www.nytimes.com/2016/03/25/technology/microsoft-created-a-twitter-bot-to-learn-
from-users-it-quickly-became-a-racist-jerk.html.
37
Ibid.
38
R. DREMLIUGA and N. PRISEKINA, "The Concept of Culpability in Criminal Law and AI Systems", Journal of Politics and
Law 2020, vol. 13(3), 258.
39
T.C. KING, N. AGGARWAL, M. TADDEO and L. FLORIDI, "Artificial Intelligence Crime: An Interdisciplinary Analysis of
Foreseeable Threats and Solutions", Science and Engineering Ethics 2020, vol. 26, 97-100.
40
See E. MARTINEZ-MIRANDA, P. MCBURNEY and M.J. HOWARD, Learning Unfair Trading: a Market Manipulation
Analysis From the Reinforcement Learning Perspective, 2020, https://arxiv.org/abs/1511.00740.
41
B.G. BUCHANAN, Artificial intelligence in finance, 2019, https://zenodo.org/record/2612537, 15-16.
42
Ibid., 16.
43
Ibid., 17.
drastically improve traffic safety, it seems unavoidable that accidents will still occur. In fact, already
in 2018 self-driving cars claimed their first fatal victim: in Arizona, 49-year-old Elaine Herzberg was
hit by a self-driving Volvo XC90, an Uber test vehicle that was part of a self-driving testing
programme, while crossing a road with her bicycle.44 Hence, the issue of criminal liability in this context is
no longer merely an interesting legal question,45 but has already posed difficulties in the real world.
Questions arose about possible criminal liability on Uber's part,46 but the prosecutor
ultimately charged only the safety driver, who was watching a TV episode of The Voice at the time of the
accident, with negligent homicide.47 As regards civil liability, a settlement was reached between Uber
and the victim’s family.48 An issue that was not relevant in this particular case but that might further
complicate the issue of criminal liability for similar accidents in the future is the classic trolley
problem, where a self-driving car might have to make a moral choice between two bad, but arguably
not equally bad outcomes (put simply, whom the car should crash into when a crash is unavoidable).49

Further, AI is also very relevant for military purposes, to the extent that some even speak of an “AI
arms race”.50 Apparently, killer robots might not remain a science fiction scenario for very long:
technology is making progress towards so-called lethal autonomous weapons, which would be able
to fully autonomously select and engage targets without any meaningful control by a “human-in-
the-loop”.51 Moreover, precursors already exist and are currently being used, such as the US Phalanx

44
D. WAKABAYASHI, “Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam”, The New York Times 19
March 2018, https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html; W. PAVIA, “Driverless
Uber car ‘not to blame’ for woman’s death”, The Times 21 March 2018, https://www.thetimes.co.uk/article/driverless-
uber-car-not-to-blame-for-woman-s-death-klkbt7vf0.
45
See G. HALLEVY, “Unmanned Vehicles: Subordination to Criminal Law under the Modern Concept of Criminal
Liability”, Journal of Law & Information Science 2011, vol. 21(2), 200-211.
46
A. MARSHALL, “Why Wasn't Uber Charged in a Fatal Self-Driving Car Crash?”, Wired 17 September 2020,
https://www.wired.com/story/why-not-uber-charged-fatal-self-driving-car-crash/; R. SCHMELZER, “What Happens
When Self-Driving Cars Kill People?”, Forbes 26 September 2019,
https://www.forbes.com/sites/cognitiveworld/2019/09/26/what-happens-with-self-driving-cars-kill-
people/?sh=7077c435405c.
47
R. CELLAN-JONES, “Uber's self-driving operator charged over fatal crash”, BBC News 16 September 2020,
https://www.bbc.com/news/technology-54175359; T.B. LEE, “Safety driver in 2018 Uber crash is charged with negligent
homicide”, Ars Technica 16 September 2020, https://arstechnica.com/cars/2020/09/arizona-prosecutes-uber-safety-
driver-but-not-uber-for-fatal-2018-crash/.
48
S. NEUMAN, “Uber Reaches Settlement With Family Of Arizona Woman Killed By Driverless Car”, National Public Radio
29 March 2018, https://www.npr.org/sections/thetwo-way/2018/03/29/597850303/uber-reaches-settlement-with-
family-of-arizona-woman-killed-by-driverless-car.
49
See Y. HU, “Robot Criminals”, University of Michigan Journal of Law Reform 2019, vol. 52, 496.
50
E. MOORE GEIST, “It’s already too late to stop the AI arms race–We must manage it instead”, Bulletin of the Atomic
Scientists 2016, vol. 72(5), 318-321.
51
B. DOCHERTY, Mind the Gap: The Lack of Accountability for Killer Robots, New York, Human Rights Watch, 2015, 6.
weapons defence system.52 Another example would be the plans of the Russian Navy to develop
self-learning minefields that use AI to decide when to detonate.53 In this respect, concerns are being
raised about an apparent accountability gap which could preclude criminal (and civil) liability.54

Finally, there is an extensive joint report by Trend Micro, UNICRI, and Europol’s EC3 that shows that
AI already has many malicious uses. There is evidence that AI is currently being used to, inter alia,
improve the effectiveness of malware, guess passwords, break CAPTCHAs, impersonate humans to
fool bot detection systems in order to fraudulently gain more traffic to social media pages, cheat in
online games, and aid in hacking.55 Moreover, it is likely that AI is currently also being used in other,
more obfuscated ways that have not been detected yet.56 The report also anticipates more malicious
uses of AI that are likely to take place in the near future, such as using AI to drastically improve the
efficiency of currently existing social engineering scams or to improve the success rate of “robocalls”
to individuals in phishing scams.57 A particularly serious concern is the use of AI to generate
“deepfakes”, whereby AI is used to manipulate or generate audio-visual content.58 Deepfakes could
potentially be used for a wide variety of different criminal purposes, such as defamation, fraud,
extortion, and the spreading of disinformation, although so far deepfake technology has mainly
been used to generate non-consensual pornography, with only some
exceptions.59 Lastly, the report also refers to the increasing importance of “AI-as-a-Service”,
whereby criminals without any expertise in the field of AI can pay for access to malicious AI systems
to be used to commit crimes, as a part of the emerging business model of offering Cybercrime-as-
a-Service.60

52
Ibid.
53
I. BIKEEV, P. KABANOV, I. BEGISHEV and Z. KHISAMOVA, “Criminological Risks and Legal Aspects of Artificial
Intelligence Implementation” in ASSOCIATION FOR COMPUTING MACHINERY, Proceedings of the International
Conference on Artificial Intelligence, Information Processing and Cloud Computing (AIIPCC '19), New York, Association
for Computing Machinery, 2019, ch 4, nr. 20.
54
B. DOCHERTY, Mind the Gap: The Lack of Accountability for Killer Robots, New York, Human Rights Watch, 2015, 18-
36. Others argue that such a gap does not exist: see N. REITINGER, “Algorithmic Choice and Superior Responsibility:
Closing the Gap Between Liability and Lethal Autonomy by Defining the Line Between Actors and Tools”, Gonzaga Law
Review 2015, vol. 51(1), 79-119.
55
Report (Europol). Malicious Uses and Abuses of Artificial Intelligence, 19 November 2020,
https://www.europol.europa.eu/publications-documents/malicious-uses-and-abuses-of-artificial-intelligence, 6-29.
56
Ibid., 6.
57
Ibid., 30-49.
58
Ibid., 52.
59
Ibid., 56-59.
60
Ibid., 4.
2.3 Characteristics of AI that complicate criminal liability

As should be apparent from the sections above, AI can have some specific characteristics that can
potentially complicate the question of criminal liability. These characteristics will now be
outlined in more detail.

A first characteristic is autonomy. While once again a vague concept, autonomy can be understood
as having different options to select from, coupled with the ability to select and implement a
particular option, without any external agent being able to override this decision.61 From autonomous vehicles
to weapons defence systems, it is clear that various kinds of AI systems and devices can have a high
degree of autonomy. These devices and systems can therefore no longer always be considered mere
tools that are being used instrumentally in the hands of a human being. Instead, the
involvement of any particular human being may be very limited or even non-existent. This new
element of a partial or even total lack of human control is precisely what separates AI technology
from previous technological innovations.62 Consequently, holding human beings responsible for the
actions of a highly autonomous AI system can become difficult and is perhaps not always justified.

A second characteristic is a potential lack of transparency. This can be the result of the high
complexity of the AI, which is especially problematic for deep neural networks, which are
made up of thousands of interconnected artificial neurons that all work
together to make decisions.63 Such a lack of transparency may also be the consequence of the
dimensionality of the AI, which entails that the AI uses algorithms involving geometric
relationships that are impossible for human beings to visualize.64 As a result, an AI system may be a
so-called black-box AI, meaning that it becomes impossible to trace back the reasons that the AI
used to make decisions.65 This can also be at odds with the notion of (technical) explainability: the

61
J.P. GUNDERSON and L.F. GUNDERSON, Intelligence =/= Autonomy =/= Capability, 2004,
https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.78.8279&rep=rep1&type=pdf, ch 4.2.
62
P.M. ASARO, “The Liability Problem for Autonomous Artificial Agents” in AAAI, AAAI Spring Symposium, Palo Alto, The
AAAI Press, 2016, 190-191.
63
Y. BATHAEE, “The Artificial Intelligence Black Box and the Failure of Intent and Causation”, Harvard Journal of Law &
Technology 2018, vol. 31(2), 901-903.
64
Ibid., 901-905.
65
Report (AI HLEG). A definition of Artificial Intelligence: main capabilities and scientific disciplines, 8 April 2019,
https://ec.europa.eu/digital-single-market/en/news/definition-artificial-intelligence-main-capabilities-and-scientific-
disciplines, 5.
ethical requirement that humans should be able to understand and trace back the decisions made
by AI systems.66
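To give a concrete, if deliberately tiny, impression of why such networks are opaque, the following sketch passes an input through two layers of artificial "neurons" using the Python numpy library. Even in this toy example, with weights chosen at random purely for illustration, the output depends jointly on over a hundred individual weights, so there is no single human-readable "reason" behind it; real deep networks have millions of such weights.

```python
import numpy as np

# Toy forward pass through a small layered network, for illustration only.
rng = np.random.default_rng(0)
x = rng.random(4)                      # an input with four features
W1 = rng.random((8, 4))                # weights of the first hidden layer
W2 = rng.random((8, 8))                # weights of the second hidden layer
W3 = rng.random((1, 8))                # weights of the output layer

h1 = np.tanh(W1 @ x)                   # first layer of artificial neurons
h2 = np.tanh(W2 @ h1)                  # second layer
y = W3 @ h2                            # output: depends jointly on all 104 weights
print(y)
```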

Third, artificial intelligence may also be unpredictable. Because of the complexity of AI systems and
their ability to learn from previous behaviour, it can become very hard to predict the future
behaviour of an AI system.67 This is exacerbated by the lack of common sense or “brittleness” of AI
systems, which can make them behave very stupidly in new and unfamiliar contexts.68 Further,
there is also the phenomenon sometimes referred to as emergence, where an AI system is expected
to behave in a relatively simple way, but where, after deployment, the system turns out to behave in
more sophisticated ways that were not initially expected.69 All of this can lead to unpredictability of
the AI system, making its future decisions hard to foresee.

All of the above may lead to a fourth characteristic: the potential for unaccountability.
Finding out whether the current criminal law framework is capable of assessing liability for crimes
involving AI is precisely the point of the thesis, but for now, at least, it seems clear that the above
characteristics can complicate holding anyone criminally liable, and hence, accountable. AI could then be
misused as a kind of criminal shield to avoid accountability.70 Thus, to avoid such a potential
accountability gap, it is important for criminal law to either find existing ways to attribute criminal
liability or to consider new options such as criminal liability of AI systems themselves.

66
Report (AI HLEG). Ethics Guidelines for trustworthy AI, 8 April 2019, https://ec.europa.eu/digital-single-
market/en/news/ethics-guidelines-trustworthy-ai, 18.
67
P. SCHARRE and M.C. HOROWITZ, Artificial Intelligence: What Every Policymaker Needs to Know, Center for a New
American Security, 2018, https://www.cnas.org/publications/reports/artificial-intelligence-what-every-policymaker-
needs-to-know, 11.
68
Ibid.
69
T.C. KING, N. AGGARWAL, M. TADDEO and L. FLORIDI, "Artificial Intelligence Crime: An Interdisciplinary Analysis of
Foreseeable Threats and Solutions", Science and Engineering Ethics February 2020, vol. 26, 94.
70
K.J. HAYWARD and M.M. MAAS, “Artificial intelligence and crime: A primer for criminologists”, Crime, Media, Culture
2020, vol. 00(0), https://journals.sagepub.com/doi/abs/10.1177/1741659020917434?journalCode=cmca, 9.
3 Relevant offenders in case of crimes involving artificial intelligence

3.1 Purpose of this chapter

Before analysing the constituent elements of criminal offences under the general criminal law
theory and subsequently applying this to crimes involving AI, it is useful to first briefly outline who
the actual offenders can be in case of such crimes. This will by no means be an exhaustive
enumeration, but rather the intent of this chapter is to, on the one hand, give some examples of
various offenders that may be involved in crimes such as these, and on the other hand, to illustrate
that a plethora of different individuals and entities may be involved and that realistically no
exhaustive enumeration could be given. This then allows the thesis to focus on a select few
offenders.

3.2 Various relevant offenders

The most straightforward offender when AI is involved in a crime may be the user of the AI system
or device. Examples include the driver behind the wheel of a self-driving car, somebody who pays
for AI-as-a-Service to generate deepfakes, or someone who deploys malicious high-frequency
trading algorithms to manipulate the financial market.

Of course, also relevant are the programmers behind the AI system, seeing as they wrote the code
that constitutes the AI entity. Often, programmers are also responsible for providing the AI with
training data. The programmers in question can be independent programmers or programmers who are
employees and who program the AI system in the context of an employment contract, in which case
their employer or the company they work for may also potentially be considered as offenders.
Programmers may also be working alone or they can be part of a bigger programming team working
together on a particular AI project.

Next, an interesting question is whether an AI system or device could itself be held criminally
liable. After all, AI systems can sometimes be considered as highly intelligent and autonomous
entities. In this context, it might be best to use the terminology “AI agents” to emphasize their
autonomy. Holding these AI agents criminally liable could then perhaps resolve potential
accountability issues.

Apart from these three possibilities, a multitude of other offenders should also be considered.
Anyone who falls under the broad concept of the operator of the AI may be an offender, whereby
the term “operator” refers to any individual who has some measure of control over the risks
connected with the AI and who benefits from the operation of the AI.71 This includes central back
end providers who provide essential back end support and who also benefit from the operation of
the AI.72 Manufacturers of various dangerous and malicious AI devices also come to mind, as well
as the distributors and transporters that are involved. Anyone who sells such AI devices may also
bear criminal liability, and the same is true for criminals who offer access to AI-as-a-Service for
criminal purposes. Hackers may also hack into other people’s AI systems and use these to commit
crimes. Data scientists may provide faulty or biased training data to an AI system, ultimately leading
to the AI learning to commit crimes. In the context of military AI devices, the criminal liability of
military officers becomes relevant as well. And the list goes on, as other conceivable scenarios still
remain.

3.3 Focus of the thesis

The thesis will focus on the three most straightforward offenders: users, programmers, and AI
agents themselves. Moreover, “programmers” will be used to refer to independent programmers,
as the thesis does not aim to delve into employee and corporate criminal liability. The main reason
for this focus is that the majority of the literature seems to focus on users, programmers, and AI
agents as well. Considering the limited scope of the thesis, it is also desirable to focus on a limited
number of offenders, rather than on the entire (non-exhaustive) list of offenders mentioned above.
Additionally, narrowing the focus helps to simplify the issue a bit, as taking a plethora of different
offenders into account would make things more complicated.

However, this does not mean that the arguments and reasoning set out below do not apply to any
other offenders. In fact, most of the arguments and reasoning will still be quite applicable. But
narrowing the focus of the thesis does mean that the thesis will be tailored to those three entities
and that they will be the starting point of the arguments that will be made. This also means that not

71
Report (DG JUST). Liability for Artificial Intelligence and other emerging digital technologies, 27 November 2019,
https://op.europa.eu/en/publication-detail/-/publication/1c5e30be-1197-11ea-8c1f-01aa75ed71a1/language-en, 41.
72
Ibid.
every single argument that will be made will necessarily be applicable to those other
perpetrators. If the thesis were to focus on, for example, the use of military AI devices by military
commanders, the thesis might look quite different.

4 Two constituent elements of crimes required for criminal liability

4.1 Actus reus

4.1.1 The scope of “acts”

A first requirement of criminal liability is that the perpetrator has committed the actus reus as
described in the specific offence. Hence, criminal liability requires an act, rather than mere thoughts,
beliefs, or intentions alone.73

As for what exactly constitutes an “act”, it is clear that an act should not be understood in the
traditional sense of requiring a physical, bodily (or in case of certain AI devices, mechanical)
movement of some kind. Even though most crimes do involve bodily movement, criminal liability
does not and should not have such a requirement.74 To decide otherwise would lead to an overly
stringent and rigid interpretation of criminal law. Today, computer crimes and cybercrimes are of
tremendous importance. These types of crimes are, of course, very relevant for crimes involving AI.
Because criminal law does not have a movement requirement of any kind, a potential lack of
movement does not pose a problem for such crimes. It is therefore unnecessary to make the
argument that electronic impulses in a computer system could still be considered as constituting a
form of physical movement.75

Furthermore, the act requirement does not necessarily require an active behaviour of any kind, or
a so-called commission. Instead, criminal liability may also be imposed in case of omissions: failures
to act when there is a legitimate duty to do so.76 Omissions must be distinguished from mere
inactions: if there is no legitimate duty to act, there cannot be any omission that results in criminal
liability.77 Omissions are considered “acts” in all Western jurisdictions.78 Perhaps it would therefore

73
D. LIMA, "Could AI Agents Be Held Criminally Liable? Artificial Intelligence and the Challenges for Criminal Law", South
Carolina Law Review 2018, vol. 69(3), 679.
74
R.A. DUFF, Answering for Crime: Responsibility and Liability in the Criminal Law, Oxford, Hart Publishing, 2007, 99.
75
P.M. FREITAS, F. ANDRADE and P. NOVAIS, "Criminal Liability of Autonomous Agents: from the Unthinkable to the
Plausible" in P. CASANOVAS, U. PAGALLO, M. PALMIRANI and G. SARTOR, AI Approaches to the Complexity of Legal
Systems: AICOL 2013 International Workshops, AICOL-IV@IVR, Belo Horizonte, Brazil, July 21-27, 2013 and AICOL-
V@SINTELNET-JURIX, Bologna, Italy, December 11, 2013, Revised Selected Papers, Berlin Heidelberg, Springer, 2014,
152.
76
G. HALLEVY, Liability for Crimes Involving Artificial Intelligence Systems, Cham, Springer, 2015, 60-63.
77
Ibid.
78
D. LIMA, "Could AI Agents Be Held Criminally Liable? Artificial Intelligence and the Challenges for Criminal Law", South
Carolina Law Review 2018, vol. 69(3), 680.
be better to speak of a conduct requirement rather than an act requirement. "Conduct" is the term
used in the U.S. to refer to both acts and omissions.79

4.1.2 Voluntariness

Additionally, and more controversially, various legal scholars also require that an act must be
voluntary in order to result in criminal liability.80 After all, the actus must be reus: guilty. In the U.S.,
for example, the Model Penal Code precludes liability for, inter alia, reflexes, convulsions, and even
conduct during hypnosis, seeing as these acts cannot be considered as meeting the requirement of
voluntariness.81 Other examples of involuntary conduct include muscle spasms, acts following
concussions, and physically coerced movements.82 Clearly, involuntariness is mainly relevant in case
of bodily (or mechanical) movements. Involuntary movements are sometimes also called
automatisms.83 A claim of automatism is not a defence strictly speaking, but instead precludes
criminal liability for all crimes because it undermines the actus reus requirement.84 Automatisms
require not only that a movement is uncontrolled, but also that it is uncontrollable: “The essence of
automatism lies in D's inability to control the movement (or non-movement) of his body at the
relevant time”.85

79
Ibid., 679; See AMERICAN LAW INSTUTE, Model Penal Code: Official Draft and Explanatory Notes, 1984,
https://archive.org/details/ModelPenalCode_ALI/mode/2up, art. 2.01(1).
80
D. LIMA, "Could AI Agents Be Held Criminally Liable? Artificial Intelligence and the Challenges for Criminal Law", South
Carolina Law Review 2018, vol. 69(3), 682; R. ABBOTT and A. SARCH, “Punishing Artificial Intelligence: Legal Fiction or
Science Fiction”, UC Davis Law Review 2019, vol. 53, 131; T.C. KING, N. AGGARWAL, M. TADDEO and L. FLORIDI,
"Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions", Science and
Engineering Ethics 2020, vol. 26, 95; P.M. FREITAS, F. ANDRADE and P. NOVAIS, "Criminal Liability of Autonomous
Agents: from the Unthinkable to the Plausible" in P. CASANOVAS, U. PAGALLO, M. PALMIRANI and G. SARTOR, AI
Approaches to the Complexity of Legal Systems: AICOL 2013 International Workshops, AICOL-IV@IVR, Belo Horizonte,
Brazil, July 21-27, 2013 and AICOL-V@SINTELNET-JURIX, Bologna, Italy, December 11, 2013, Revised Selected Papers,
Berlin Heidelberg, Springer, 2014, 152.
81
AMERICAN LAW INSTUTE, Model Penal Code: Official Draft and Explanatory Notes, 1984,
https://archive.org/details/ModelPenalCode_ALI/mode/2up, art. 2.01(2).
82
A. ASHWORTH and J. HORDER, Principles of Criminal Law, Oxford, Oxford University Press, 2013, 88.
83
Ibid., 86-87.
84
Ibid., 87.
85
Ibid., 89.
As to what it means for an act to be voluntary, there is no consensus in the legal doctrine.86 Often,
this leads to a discussion involving philosophy, psychology, and neurology.87 Moreover, some legal
scholars such as Hallevy, Lagioia, and Sartor disagree with the idea that, in order for the act
requirement to be met, acts should be voluntary in the first place.88 Also, the introduction of an
element of voluntariness to the act requirement might be a consequence of attempting to make
too strict of a distinction between the actus reus and mens rea elements, when they should be
considered as being inherently intertwined.89

4.2 Mens rea

4.2.1 General sense: culpability or blameworthiness

Mens rea, or the mental element in criminal law, can be understood in a general sense or a special
sense.90 In the general sense, the mens rea requirement serves to ensure that criminal liability is
only imposed when there is blameworthiness on the part of the perpetrator.91 In other words, the
perpetrator has to meet the requirement of culpability: the capacity for culpable conduct, as a
general prerequisite of criminal law.92 This requirement is also manifested in criminal law in other
ways, such as infancy and insanity defences.93

86
P.M. FREITAS, F. ANDRADE and P. NOVAIS, "Criminal Liability of Autonomous Agents: from the Unthinkable to the
Plausible" in P. CASANOVAS, U. PAGALLO, M. PALMIRANI and G. SARTOR, AI Approaches to the Complexity of Legal
Systems: AICOL 2013 International Workshops, AICOL-IV@IVR, Belo Horizonte, Brazil, July 21-27, 2013 and AICOL-
V@SINTELNET-JURIX, Bologna, Italy, December 11, 2013, Revised Selected Papers, Berlin Heidelberg, Springer, 2014,
152.
87
Ibid.
88
See G. HALLEVY, Liability for Crimes Involving Artificial Intelligence Systems, Cham, Springer, 2015, 60-63; F. LAGIOIA
and G. SARTOR, "AI Systems Under Criminal Law: a Legal Analysis and a Regulatory Perspective", Philosophy &
Technology 2020, vol. 33, ch 3.
89
R.A. DUFF, Answering for Crime: Responsibility and Liability in the Criminal Law, Oxford, Hart Publishing, 2007, 202-
203.
90
P.M. FREITAS, F. ANDRADE and P. NOVAIS, "Criminal Liability of Autonomous Agents: from the Unthinkable to the
Plausible" in P. CASANOVAS, U. PAGALLO, M. PALMIRANI and G. SARTOR, AI Approaches to the Complexity of Legal
Systems: AICOL 2013 International Workshops, AICOL-IV@IVR, Belo Horizonte, Brazil, July 21-27, 2013 and AICOL-
V@SINTELNET-JURIX, Bologna, Italy, December 11, 2013, Revised Selected Papers, Berlin Heidelberg, Springer, 2014,
153.
91
Ibid.
92
R. ABBOTT and A. SARCH, “Punishing Artificial Intelligence: Legal Fiction or Science Fiction”, UC Davis Law Review
2019, vol. 53, 350.
93
Ibid.
4.2.2 Special sense: mental states required by specific offences

Mens rea as understood in the special sense means that, in order for criminal liability to be imposed,
a certain mental state has to be present in the perpetrator, as prescribed by the specific offense.94
In this respect, different mental states can be prescribed by different offenses. The exact mental
states that are applicable vary per jurisdiction, but, in general, a distinction can be made between
intent, negligence, and recklessness.95

4.2.2.1 Intent

Intention, while perhaps a simple concept in everyday language, is a complex legal concept that
differs between jurisdictions and throughout the legal doctrine.96 However, it is commonly argued
that intent can be seen as closely relating to motives: “Acting intentionally is acting for a reason.
That is not just acting where there is a reason, even if the agent recognises that there is a reason,
but acting for a reason.”97

Further, a general distinction can be made between three types of intent, namely direct intent,
indirect or oblique intent, and dolus eventualis. In case of direct intent, an agent acts in order to
bring about a particular result.98 In other words, the prospect of producing this result was one of
the links in the chain of causes that led to the decision of the person to act.99 In case of indirect or
oblique intent, an agent acts whilst foreseeing with practical certainty that his or her behaviour will
bring about a particular result, without desiring to bring about that result.100 In the U.S., indirect
intent is referred to as the requirement of knowledge.101 Finally, dolus eventualis entails that an

94
P.M. FREITAS, F. ANDRADE and P. NOVAIS, "Criminal Liability of Autonomous Agents: from the Unthinkable to the
Plausible" in P. CASANOVAS, U. PAGALLO, M. PALMIRANI and G. SARTOR, AI Approaches to the Complexity of Legal
Systems: AICOL 2013 International Workshops, AICOL-IV@IVR, Belo Horizonte, Brazil, July 21-27, 2013 and AICOL-
V@SINTELNET-JURIX, Bologna, Italy, December 11, 2013, Revised Selected Papers, Berlin Heidelberg, Springer, 2014,
153.
95
See R. DREMLIUGA and N. PRISEKINA, "The Concept of Culpability in Criminal Law and AI Systems", Journal of Politics
and Law 2020, vol. 13(3), 257.
96
See R. VERESHA, "Criminal and legal characteristics of criminal intent", Journal of Financial Crime 2017, vol. 24(1),
118-128.
97
V. TADROS, Criminal Responsibility, Oxford, Oxford University Press, 2005, 216.
98
A. ASHWORTH and J. HORDER, Principles of Criminal Law, Oxford, Oxford University Press, 2013, 170; J. BENTHAM,
An Introduction to the Principles of Morals and Legislation, Kitchener, Batoche Books, 2000, 69-73.
99
J. BENTHAM, An Introduction to the Principles of Morals and Legislation, Kitchener, Batoche Books, 2000, 70.
100
I. KUGLER, "The definition of Oblique Intention", The Journal of Criminal Law 2004, vol. 68(1), 79.
101
AMERICAN LAW INSTITUTE, Model Penal Code: Official Draft and Explanatory Notes, 1984,
https://archive.org/details/ModelPenalCode_ALI/mode/2up, art. 2.02(2)(b).
agent wants to make a particular result occur, but foresees that in doing so, there is a possibility
that another event may also occur.102 If the agent nonetheless decides to act to achieve his desired
result, and if the other event does in fact ensue, the agent has dolus eventualis towards that other
event.103 This means that there are two requirements for dolus eventualis: the agent has to foresee
the possibility that the other event might occur, and the agent has to reconcile himself to this
possibility by accepting it when deciding to act to achieve his desired result.104

4.2.2.2 Negligence

Negligence is a lower mental state that can be required by offences. To attribute criminal liability
based on negligence, it is required that there is a duty of care and that the agent, through action or
inaction, has failed to use reasonable care in order to prevent harm where a reasonable person
would have done so.105

Negligence-based liability is especially relevant when it comes to taking risks. Seeing as a reasonable
person only takes reasonable risks, criminal liability may be imposed upon taking unreasonable
risks, whereby the reasonableness of a risk must be measured through the reasonable person
standard.106 This reasonable person is not merely a general abstract person, but an abstract person
adapted to the relevant circumstances of the specific offender.107 At the same time, taking
reasonable risks cannot lead to criminal liability.108 In addition, risks are subject to a foreseeability
requirement: in order for the reasonable person to be able to purposely avoid a risk, the risk
must be foreseeable by the reasonable person.109 Thus, taking unreasonable risks that even the
reasonable person could not foresee cannot lead to criminal liability.

102
E. KAYITANA, The Form of Intention Known as Dolus Eventualis in Criminal Law, 2008,
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1191502, 0-2; D.W. MORKEL, “On the Distinction between
Recklessness and Conscious Negligence”, The American Journal of Comparative Law 1982, vol. 30(2), 328-329.
103
Ibid.
104
Ibid.
105
N. OSMANI, "The Complexity of Criminal Liability of AI Systems", Masaryk University Journal of Law and Technology
2020, vol. 14(1), 63-64.
106
G. HALLEVY, Liability for Crimes Involving Artificial Intelligence Systems, Cham, Springer, 2015, 121-122.
107
Ibid., 126.
108
Ibid., 121-122.
109
N. OSMANI, "The Complexity of Criminal Liability of AI Systems", Masaryk University Journal of Law and Technology
2020, vol. 14(1), 65-67.
Furthermore, negligence (as opposed to recklessness) is only relevant when an agent takes
unreasonable risks while not being aware of these risks, even though he or she ought to be aware
of them (because the reasonable person would be aware of them).110

Finally, negligence-based liability also requires the capacity to act differently.111 This means that an
agent must have physical control over his or her own movements, as well as the rational capacities
required to have control over one’s own conduct.112 Hence, the question is not merely whether an
agent has failed to take due care, but instead whether an agent has failed to take due care given his
mental and physical capacities.113

4.2.2.3 Recklessness

Recklessness, the third mental state that can be required by various offenses, is very similar to
negligence, but with one key difference. As was argued above, in the case of negligence an agent
has taken an unreasonable risk while being unaware of this risk, even though they should have been
aware.114 In the case of recklessness on the other hand, an agent has taken an unreasonable risk
despite having been subjectively aware of this risk.115

Recklessness should further be distinguished from oblique intent. In case of oblique intent, an agent
has acted while knowing with practical certainty that this will bring about a particular
consequence.116 In case of recklessness, however, an agent merely acted knowing that a particular
consequence might occur.117 In other words, recklessness requires a mid-level probability that
the harm materializes, rather than practical certainty.118

110
R. DREMLIUGA and N. PRISEKINA, "The Concept of Culpability in Criminal Law and AI Systems", Journal of Politics and
Law 2020, vol. 13(3), 259.
111
R.A. DUFF, Answering for Crime: Responsibility and Liability in the Criminal Law, Oxford, Hart Publishing, 2007, 72.
112
Ibid.
113
A. ASHWORTH and J. HORDER, Principles of Criminal Law, Oxford, Oxford University Press, 2013, 182.
114
R. DREMLIUGA and N. PRISEKINA, "The Concept of Culpability in Criminal Law and AI Systems", Journal of Politics and
Law 2020, vol. 13(3), 259; G. HALLEVY, Liability for Crimes Involving Artificial Intelligence Systems, Cham, Springer, 2015,
121-122.
115
M. DSOUZA, "Don't panic: artificial intelligence and Criminal Law" in D.J. BAKER and P.H. ROBINSON (eds.), Artificial
Intelligence and the Law: Cybercrime and Criminal Liability, Oxford, Routledge, 2021, 257.
116
V. TADROS, Criminal Responsibility, Oxford, Oxford University Press, 2005, 217.
117
Ibid.
118
R. ABBOTT and A. SARCH, “Punishing Artificial Intelligence: Legal Fiction or Science Fiction”, UC Davis Law Review
2019, vol. 53, 360.
The distinction between recklessness and dolus eventualis is harder to make and would lead to a
theoretical comparative law discussion that is outside the scope of the thesis. But it can briefly be
mentioned that recklessness can be seen as a broader common law concept which encompasses
the civil law concept of dolus eventualis, while additionally also including so-called conscious
negligence or the German bewusste Fahrlässigkeit.119

4.2.2.4 Exception: strict liability

Strict liability is an exception to the mens rea requirement in the special sense. This kind of liability
is not the default position in criminal law.120 But in practice many western jurisdictions do have strict
liability offences, such as for drug possession and driving infractions.121

A distinction can be made between substantively and formally strict liability.122 Substantively strict
liability entails that no proof of moral culpability is required for a conviction, whereas formally strict
liability means that no proof of any particular mental state (intent, negligence, or recklessness) is
required.123 Antony Duff argues that only formally strict liability can be justified, as substantively
strict liability “mandates conviction without proof of fault that would justify condemning the
defendant for committing a wrong; it is flatly inconsistent with the principle that criminal conviction
should require proof of the commission of a presumptive wrong”.124 As a result, substantively strict
liability would also violate the presumption of innocence, as this requires courts to consider
defendants as innocent unless and until sufficient proof of the commission of a wrong is delivered
by the prosecution.125 Hence, blameworthiness is still required for an agent to be held criminally
liable, even when no particular mental state is required to be present in the offender. Mens rea in
the general sense therefore remains a requirement.

119
D.W. MORKEL, “On the Distinction between Recklessness and Conscious Negligence”, The American Journal of
Comparative Law 1982, vol. 30(2), 325-333. For the distinction between conscious negligence, unconscious negligence
and dolus eventualis, see ibid. and E. KAYITANA, The Form of Intention Known as Dolus Eventualis in Criminal Law, 2008,
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1191502, 3-4.
120
M. HILDEBRANDT, “Ambient Intelligence, Criminal Liability and Democracy”, Criminal Law and Philosophy 2008, vol.
2, 166.
121
D. LIMA, "Could AI Agents Be Held Criminally Liable? Artificial Intelligence and the Challenges for Criminal Law", South
Carolina Law Review 2018, vol. 69(3), 692.
122
R.A. DUFF, Answering for Crime: Responsibility and Liability in the Criminal Law, Oxford, Hart Publishing, 2007, 233-
235.
123
Ibid.
124
Ibid., 252.
125
Ibid., 195-196.
Further, strict liability results in a relative presumption that the defendant was at least negligent,
provided that the prosecution delivers proof that the defendant has committed the actus reus of
the offense.126 Strict liability then means that a conviction only requires that the prosecution
delivers proof of the actus reus element, as well as moral culpability in the general sense. However,
the defendant can refute this relative presumption, and the possibility to refute this presumption
can now even be considered as integral to strict liability in criminal law.127 But refuting this presumption
is very difficult, as there are two cumulative conditions: the defendant must prove that he acted without
intent or negligence, and additionally, he must deliver proof that he took all reasonable
measures to prevent the offense from happening.128

126
G. HALLEVY, Liability for Crimes Involving Artificial Intelligence Systems, Cham, Springer, 2015, 138-139.
127
Ibid.
128
Ibid.
5 Attributing criminal liability in case of crimes involving artificial
intelligence

5.1 Introduction

In this chapter, the three ways to attribute liability in case of crimes involving AI will be assessed.
The first two ways concern human perpetrators and address intentional crimes and criminal
negligence, respectively. The third way concerns holding AI agents themselves criminally liable. It
should be noted that these ways of attributing liability are not mutually exclusive, in the sense that multiple
possibilities can be applicable at the same time.129 For example, the criminal liability of a user who
instrumentally uses AI to manipulate the market through high-frequency trading would not
preclude any criminal negligence on the part of the programmer of the AI. Moreover, these
possibilities are also independent, meaning that the liability of, for example, an AI agent does not
depend upon any liability of the user or programmer.130 The same two principles are used in the
context of corporate criminal liability and can be applied analogously to crimes involving AI.131

5.2 Intentional crimes: instrumental use of AI

5.2.1 Concept

This section will concern the situation where an AI system or device is used as a means to an end,
namely the commission of a crime, by an individual. In other words, an individual “behind the
curtain” has the intent (and hence, the required mens rea for intentional crimes) to commit a
particular crime.132 There are different ways for criminal law to deal with this, depending on the
level of intelligence and autonomy an AI system or device possesses. Admittedly, it is difficult to
establish clear criteria to make this distinction between various levels of intelligence. Here, a general
distinction is made between unintelligent, semi-intelligent and fully intelligent AI. This is purposely

129
See E. GRUODYTE and P. CERKA, "Artificial Intelligence as a Subject of Criminal Law: A Corporate Liability Model
Perspective" in J.-S. GORDON (ed.), Smart Technologies and Fundamental Rights, Rodopi, Brill, 2020, 264; OECD ACN,
Liability of Legal Persons for Corruption in Eastern Europe and Central Asia, 2015,
https://www.oecd.org/corruption/acn-liability-of-legal-persons-2015.pdf, 27.
130
See ibid.
131
Ibid.
132
D. LIMA, "Could AI Agents Be Held Criminally Liable? Artificial Intelligence and the Challenges for Criminal Law", South
Carolina Law Review 2018, vol. 69(3), 690.
left somewhat open to interpretation, but the general idea is that unintelligent AI includes (very)
narrow forms of AI, whereas fully intelligent AI means general AI as defined in the second chapter.133
Semi-intelligent AI then means AI that falls somewhere in between and which can neither be
considered as being (very) narrow nor general. This would include the most advanced AI systems
that exist today.

It should also be mentioned that when AI is used instrumentally to commit a crime, this generally
refers to direct intent, as this seems the most applicable form of intent here. Of course, applying
indirect intent or dolus eventualis remains possible, but in those cases, there is a requirement that
the unlawful result must have been foreseen by the perpetrator.134 This might lead to some
problems which will become apparent when later discussing foreseeability in the context of
negligence-based liability. While this should be kept in mind, this issue will not be addressed here.
Instead, the focus will be on direct intent.

5.2.2 AI as an unintelligent and inanimate tool

Some measure of autonomy and unpredictability does not preclude AI from being considered a tool:
“An unpredictable, or not entirely predictable tool, is still a tool.”135 Hence, when an AI system or
device cannot be considered as having a measure of intelligence that is significant (but while still
falling within the very broad scope of the concept of AI), the AI is best considered as a mere tool in
the hands of the real perpetrator: the human using the AI to commit a crime. This is similar to using
a hammer, a gun, or a knife to intentionally commit a crime. In such cases, it would be ridiculous to
speak of “ascribing” the action of the tool to the human using the tool, but rather the action is
immediately ascribed to the human.136 Hence, both the actus reus and mens rea of intent crimes
can be ascribed directly to the human perpetrator.

Yet, when an AI system or device does have a significant measure of intelligence, equating the AI to
a mere inanimate tool does not seem to be warranted. Especially in the long run this does not seem
to be a feasible and permanent solution. The European Parliament for example has already stated

133
See section 2.1.1.
134
See section 4.2.2.1.
135
M. DSOUZA, "Don't panic: artificial intelligence and Criminal Law" in D.J. BAKER and P.H. ROBINSON (eds.), Artificial
Intelligence and the Law: Cybercrime and Criminal Liability, Oxford, Routledge, 2021, 251.
136
Ibid.
that “the more autonomous robots are, the less they can be considered to be simple tools in the
hands of other actors”.137

5.2.3 AI as a semi-intelligent entity

When AI can be considered as having a significant measure of intelligence despite not being “fully
intelligent”, considering the AI as a mere inanimate tool does not adequately reflect the reality. As
was noted already, in the current state-of-the-art programming only narrow or weak AI is possible.
Strong or general AI on the other hand has remained a fiction for now.138 Hence, even the most
advanced AI systems today do not have general-purpose reasoning or common sense.139 The
currently most advanced AI systems should therefore best be considered as semi-intelligent:
something between a mere tool and a fully capable, rational being. There are two ways criminal law
can deal with crimes involving such AI systems.

First, parallels can be drawn with using animals to commit crimes, seeing as animals do have some
measure of intelligence and autonomy and can also learn from previous behaviour. Therefore,
animals cannot simply be equated to inanimate tools such as a hammer. Unlike tools, they cannot
be controlled in an absolute sense by the human perpetrator, but they can be manipulated into
doing things by their master.140 Hence, when an animal is deliberately
trained or manipulated into committing a specific crime (such as assaulting an individual), the
animal’s master can and should be convicted for that crime.141 In such cases, it is clear that the mens
rea requirement is present on the part of the animal’s master. Moreover, while the physical act is

137
Resolution (EP) with recommendations to the Commission on Civil Law Rules on Robotics, 16 February 2017,
(2015/2103(INL)), art. AB.
138
Report (AI HLEG). A definition of Artificial Intelligence: main capabilities and scientific disciplines, 8 April 2019,
https://ec.europa.eu/digital-single-market/en/news/definition-artificial-intelligence-main-capabilities-and-scientific-
disciplines, 5.
139
P. SCHARRE and M.C. HOROWITZ, Artificial Intelligence: What Every Policymaker Needs to Know, Center for a New
American Security, 2018, https://www.cnas.org/publications/reports/artificial-intelligence-what-every-policymaker-
needs-to-know, 4 and 10.
140
D. LIMA, "Could AI Agents Be Held Criminally Liable? Artificial Intelligence and the Challenges for Criminal Law", South
Carolina Law Review 2018, vol. 69(3), 690.
141
M. DSOUZA, "Don't panic: artificial intelligence and Criminal Law" in D.J. BAKER and P.H. ROBINSON (eds.), Artificial
Intelligence and the Law: Cybercrime and Criminal Liability, Oxford, Routledge, 2021, 250.
technically being executed by the animal, the actus reus nonetheless has to be ascribed to the
human perpetrator, who is merely using an animal instrumentally to achieve his goals.142

Second, the doctrine of innocent agency might also provide a solution. Here, a comparison is made
not with animals but with human beings who cannot satisfy the mens rea requirement: “Under the
innocent agency doctrine, criminal liability attaches to a person who acts through an agent who
lacks capacity — such as a child or someone with an insanity defense”.143 Hallevy, who worked out
three criminal liability models in case of crimes involving AI,144 refers to this as perpetration-via-
another liability, where “the intermediary is regarded as a mere instrument, albeit a sophisticated
instrument, while the party orchestrating the offense (the perpetrator-via-another) is the real
perpetrator as a principal in the first degree and is held accountable for the conduct of the innocent
agent”.145 Again, there is the requirement of mens rea in the form of the intent on the part of the
perpetrator-via-another to manipulate the innocent agent into committing a crime.146 Applying this
doctrine then also allows the actus reus to be ascribed to the perpetrator-via-another either by ab
initio attributing the act of the innocent agent to the perpetrator-via-another, or by using a legal
fiction which causes the perpetrator-via-another to be deemed as the one who has acted even
though the act is initially considered as being committed by the innocent agent.147 It should also be
noted that while the “another” in perpetration-via-another does imply another human being, there
is no reason that the doctrine of innocent agency could not be applied to animals as well,148 and
hence this should not pose a problem for AI agents either.

5.2.4 AI as a fully intelligent entity (strong AI)

If one day AI were to become fully intelligent in the sense of general or strong AI, attributing criminal
liability based on using AI as an instrument would lose much of its relevance for those kinds of AI agents.

142
D. LIMA, "Could AI Agents Be Held Criminally Liable? Artificial Intelligence and the Challenges for Criminal Law", South
Carolina Law Review 2018, vol. 69(3), 690.
143
R. ABBOTT and A. SARCH, “Punishing Artificial Intelligence: Legal Fiction or Science Fiction”, UC Davis Law Review
2019, vol. 53, 369.
144
G. HALLEVY, "The Criminal Liability of Artificial Intelligence Entities – from Science Fiction to Legal Social Control",
Akron Intellectual Property Journal 2016, vol. 4(2), 171-201.
145
Ibid., 179.
146
R. ABBOTT and A. SARCH, “Punishing Artificial Intelligence: Legal Fiction or Science Fiction”, UC Davis Law Review
2019, vol. 53, 370; P. ALLDRIDGE, "The Doctrine of Innocent Agency", Criminal Law Forum 1990, vol. 2, 70-71.
147
P. ALLDRIDGE, "The Doctrine of Innocent Agency", Criminal Law Forum 1990, vol. 2, 66.
148
Ibid., 72.
Such a scenario would mean that AI gains a broad level of intelligence that is equivalent to or higher
than that of humans. Hence, it might become difficult to use an AI instrumentally in the same way
that it is difficult to use another perfectly capable, rational human being instrumentally.
Instrumental use of AI as a way of attributing criminal liability would then perhaps become obsolete.
However, two interesting points can be noted in this context.

First, the doctrine of innocent agency might still remain relevant if the (perhaps controversial)
assumption is made that even strong AI cannot ever meet the mens rea requirement in the general
sense. Whether AI does or could ever meet this requirement will be discussed in more detail when
discussing the criminal liability of AI agents. But if such a viewpoint were adopted, this would mean
that even strong AI would lack the capacity for culpable conduct, rendering the AI an innocent agent
that could perhaps be manipulated into committing a crime. This would be similar to, for example,
manipulating a person who is not considered sane for the purposes of criminal law into committing
crimes.

Second, when someone is able to use a strong AI system to commit a crime, an interesting possibility
may be to use a kind of vicarious liability model akin to crimes committed by employees, so that the
“employer”, or rather the user of the AI system may be held criminally liable.149 Hence, a strong AI
system could then be considered as an employee, or perhaps more perversely, as a slave. After all,
vicarious liability is rooted in respondeat superior,150 which goes back to Roman laws relating to
responsibility of slave masters for their slaves.151 This does, however, presuppose that the human
perpetrator is able to control the strong AI system by somehow being able to “employ” the AI to
commit crimes or by being able to coerce the AI into doing so.

That being said, strong AI remains a purely theoretical, futuristic possibility, and hence the above
possibilities are merely speculative. It is hard to imagine what strong AI would look like in reality, let
alone how it should be regulated. The advent of strong AI would be a dramatic technological
innovation that would likely call for a drastic revision of many areas of law.

149
E. GRUODYTE and P. CERKA, "Artificial Intelligence as a Subject of Criminal Law: A Corporate Liability Model
Perspective" in J.-S. GORDON (ed.), Smart Technologies and Fundamental Rights, Rodopi, Brill, 2020, 272-273.
150
Ibid.
151
P. CERKA, J. GRIGIENE and G. SIRBIKYTE, "Liability for damages caused by artificial intelligence", Computer Law &
Security Review 2015, vol. 31, 385.
5.3 Negligence-based liability

5.3.1 Concept

In case of intentional crimes, attributing criminal liability requires that an AI system is used
instrumentally by a human perpetrator to deliberately commit a crime. However, the potentially
large degree of autonomy, the ability to learn from previous behaviour, and the consequent
unpredictability of AI can all lead to an AI system committing crimes while this was never the
intention of the programmer or user. In cases such as these, criminal law must resort to a different
pathway: negligence-based liability.

5.3.2 Negligence

5.3.2.1 Requirements

As set out above, negligence-based criminal liability requires that an individual has failed to use
reasonable care to prevent harm where the reasonable person would have done so, particularly by
taking an unreasonable risk that they were not aware of (but ought to have been aware of).152 The
assessment of the reasonableness of risks follows from the applicable duty of care imposed on the
reasonable person, and these risks are further required to be foreseeable.153 These last two
elements pose some difficulties in the context of crimes involving AI and will now be highlighted.

5.3.2.2 The applicable duty of care

The unpredictability of AI does not preclude the existence of a duty of care. In fact, it is the very
unpredictability of AI that gives rise to such duties of care, in much the same way that the owner of
a zoo cannot release a tiger on the streets and then argue that tigers are wild animals that cannot

152
N. OSMANI, "The Complexity of Criminal Liability of AI Systems", Masaryk University Journal of Law and Technology
2020, vol. 14(1), 63-64.; G. HALLEVY, Liability for Crimes Involving Artificial Intelligence Systems, Cham, Springer, 2015,
121-122; R. DREMLIUGA and N. PRISEKINA, "The Concept of Culpability in Criminal Law and AI Systems", Journal of
Politics and Law 2020, vol. 13(3), 259.
153
N. OSMANI, "The Complexity of Criminal Liability of AI Systems", Masaryk University Journal of Law and Technology
2020, vol. 14(1), 65-67; G. HALLEVY, Liability for Crimes Involving Artificial Intelligence Systems, Cham, Springer, 2015,
121-122.
be controlled.154 The question at hand, however, is the standard of care that this duty of care brings
about.

In this respect, there are simply no international legal norms that determine safety standards for AI
systems that could be used to evaluate the extent of the duty of care.155 Consequently, other
standards should be considered. Reference could be made to industry standards instead, as is used
in the classic industry standard defence where a defendant proves that they acted in accordance
with the stated or unstated standards of their industry.156 However, this comes with a few problems.
Standards that do not follow from legal norms not only generally lack democratic legitimacy and
transparency, but they are also usually developed for the purposes of civil rather than criminal
liability and are therefore largely disconnected from any moral aspects that are relevant for criminal
law.157 An even bigger problem is that, at the moment, there are barely any of these industry
standards in the first place.158

Thus, it seems that a judge (or a jury) who must assess the existence of negligence on the part of
the user or programmer of an AI system must fall back on the general principle: the behaviour of
the reasonable person, who does not take unreasonable risks. Seeing as the reasonable person
standard must be adapted to the relevant circumstances of the specific offender,159 the reasonable
person in this context is the reasonable user or the reasonable programmer of an AI system. There
are two reasons this is not an ideal situation.

First, judges may not be adequately qualified to make this judgment. As was discussed above, more
complex AI systems can be highly opaque and unpredictable, and in the case of black-box AI it
may be impossible even for the programmers of an AI system to trace the reasons on which the
system based its decisions.160 Moreover, judges have little to no expertise in the area of programming
and the field of AI, which might make it impossible for them to properly evaluate the possible

154
S. GLESS, E. SILVERMAN and T. WEIGEND, "If Robots Cause Harm, Who Is to Blame? Self-Driving Cars and Criminal
Liability", New Criminal Law Review 2016, vol. 19(3), 427.
155
N. OSMANI, "The Complexity of Criminal Liability of AI Systems", Masaryk University Journal of Law and Technology
2020, vol. 14(1), 65.
156
P.M. ASARO, "Robots and Responsibility from a Legal Perspective", Proceedings of the IEEE 2007, vol. 4(14), ch II.
157
S. BECK, "Intelligent agents and criminal law – Negligence, diffusion of liability and electronic personhood", Robotics
and Autonomous Systems 2016, vol. 86, 139.
158
Ibid.
159
G. HALLEVY, Liability for Crimes Involving Artificial Intelligence Systems, Cham, Springer, 2015, 126.
160
Report (AI HLEG). A definition of Artificial Intelligence: main capabilities and scientific disciplines, 8 April 2019,
https://ec.europa.eu/digital-single-market/en/news/definition-artificial-intelligence-main-capabilities-and-scientific-
disciplines, 5.
negligence of a user or programmer. Thus, without proper legal or industry standards, and especially
in the case of more complex systems and cases that are not clear-cut, judges cannot adequately
make such a judgment. To resolve this, judges could appeal to experts in the field of computer
science, but this would likely make the court proceedings very expensive and lengthy. For simpler
and more straightforward AI systems this might be less of a problem, but such systems are less
likely to lead to negligence-based liability in the first place as they are more predictable.

Second, the assessment of what risks should be considered reasonable may be a matter that is
better suited for policy making. After all, a delicate balance must be struck between the interests of
various parties, and in particular between on the one hand allowing and promoting technological
innovation (benefiting society in the long run) and protecting society from crime on the other.
Setting the standard of care too low may obstruct any criminal liability and may therefore lead to
accountability gaps. At the same time, setting the standard of care too high may hamper the
economy and the innovation of AI by disincentivizing creators of AI systems, who would too easily
risk criminal prosecution.161 Also relevant for this assessment is the question of whether AI can be
held criminally liable. If AI could indeed be held liable, there may be less need for a higher standard
of care in order to avoid accountability gaps. But if it turns out that AI cannot bear criminal liability,
the potential for accountability gaps is much higher, and this might be an argument in favour of a
higher standard of care for programmers and users. Because of the careful balancing act that must
be done, this seems an issue that is better tackled by democratically elected lawmakers through the
creation of legal standards. Admittedly, creating such standards would be challenging, as it can be
very hard to measure novel risks that arise because of new technological developments. AI is no
exception to this and even seems to be especially problematic, considering its specific
characteristics.

5.3.2.3 Foreseeability of risks

The second requirement for someone to be criminally negligent is that the unreasonable risks that
were taken must also have been foreseeable by the reasonable person.162 This second requirement
again poses problems for crimes involving AI. In particular, the characteristic of unpredictability may

161
N. OSMANI, "The Complexity of Criminal Liability of AI Systems", Masaryk University Journal of Law and Technology
2020, vol. 14(1), 67.
162
Ibid., 65-67.
complicate things: when an AI system is unpredictable, it may be very hard for even the reasonable
user or programmer to foresee the risks the system poses. Machine learning also makes it much
harder to foresee risks, as systems are then no longer explicitly programmed to produce particular
outcomes, but instead they adapt their own architecture through experience.163 Again, it seems
unlikely that judges are adequately qualified to properly assess whether the reasonable
programmer or user would have foreseen particular risks. Appealing to an expert panel in every
case seems unfeasible. Again, legal standards or at least industry standards would be very beneficial.
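
This difference between explicitly programmed behaviour and learned behaviour can be illustrated with a deliberately simplified, purely hypothetical sketch (written here in Python, with invented names and toy data); it is not drawn from any actual system, but merely shows why the effective decision rule of a learning system cannot simply be read off its source code:

def explicit_rule(speed_kmh):
    # The programmer wrote this condition down; anyone reading the code can
    # foresee exactly when the system will decide to brake.
    return "brake" if speed_kmh > 50 else "continue"

def train_threshold(examples):
    # A toy "learning" step: the decision threshold is derived from whatever
    # example data the system happens to receive, not from an explicit instruction.
    braking_speeds = [speed for speed, label in examples if label == "brake"]
    return min(braking_speeds) if braking_speeds else float("inf")

def learned_rule(speed_kmh, threshold):
    # The behaviour now depends on the learned threshold; with richer models and
    # larger datasets, the effective rule can no longer be read off the code itself.
    return "brake" if speed_kmh >= threshold else "continue"

data = [(80.0, "brake"), (30.0, "continue"), (65.0, "brake")]
threshold = train_threshold(data)
print(explicit_rule(55.0))            # foreseeable from the code itself: "brake"
print(learned_rule(55.0, threshold))  # depends on the training data: "continue"

Even in this trivial example, the two rules already diverge for the same input; for genuinely complex, self-adapting systems, this gap between what was written and what is eventually done is precisely what makes the foreseeability assessment so difficult.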

One might, however, argue that exactly because of the unpredictable and maybe even somewhat
dangerous nature of AI, negative consequences are foreseeable by definition. But adopting such a
viewpoint would simply be detrimental to innovation in the field of AI and is therefore not desirable.
The opposite extreme viewpoint would be that, because of the autonomy and unpredictability of AI
systems, risks posed by them are unforeseeable by definition. But this would also not be
appropriate. Rather, a delicate balance must again be struck between on the one hand facilitating
the imposition of criminal liability to avoid any accountability gaps, and on the other hand
promoting technology, innovation, and the economy. Again, this is an issue that is better addressed
by policymakers.

5.3.3 Recklessness

When an AI system commits a crime while the user or programmer has taken an unreasonable risk
with a mid-level probability of materializing, despite being aware of this risk, an appeal must be
made to recklessness rather than negligence.164 Apart from this one difference, the reasoning as set
out above relating to negligence remains identical and the same concerns and problems apply. An
additional problem in the context of recklessness, however, is that judges would not only need to
be able to assess that a risk was unreasonable, but also that a risk had a mid-level probability of
materializing. Assessing the probability of risks in the context of AI would be very difficult.

163
I. EL NAQA and M.J. MURPHY, "What Is Machine Learning?" in I. EL NAQA, R. LI and M.J. MURPHY (eds.), Machine
Learning in Radiation Oncology: Theory and Applications, Cham, Springer International Publishing, 2015, 4.
164
M. DSOUZA, "Don't panic: artificial intelligence and Criminal Law" in D.J. BAKER and P.H. ROBINSON (eds.), Artificial
Intelligence and the Law: Cybercrime and Criminal Liability, Oxford, Routledge, 2021, 257; R. ABBOTT and A. SARCH,
“Punishing Artificial Intelligence: Legal Fiction or Science Fiction”, UC Davis Law Review 2019, vol. 53, 360.
5.3.4 Strict liability: not a viable solution

At first glance, some of the issues that arise in the context of negligence-based liability might be
solved by adopting a strict liability regime, akin to product liability in civil matters. However, this is
not an adequate solution.

A first reason for this is that strict liability is used as a means to incentivize certain behaviour like
taking the necessary precautions to avoid harm.165 Hence, this only makes sense when both the
harmful effects and the decision-making process of an AI system can be anticipated beforehand, so
that the AI can be programmed or used accordingly.166 As a result, strict liability assumes a measure
of control over the AI system as well as the ability to predict its behaviour, so that adequate
measures can be taken beforehand.167 But the lack of control over AI systems and the
unpredictability of AI are precisely the problems that an appeal to strict liability is supposed to
address when replacing negligence-based liability.

Another problem is that, as already explained, the possibility to refute the presumption that exists
in cases of strict liability can be considered an integral part of strict liability.168 The defendant can
do this by proving that he acted without intent or negligence and by proving that he took all reasonable
measures to prevent the offense from happening.169 This means that negligence and all its problems
in this context still remain very much relevant, while the burden of proof is being shifted to the
defendant. Perhaps it is indeed easier for users and especially programmers of AI systems to prove
that they were not negligent than it is for prosecutors to deliver proof of such negligence, but the
fact remains that this can still be a very high burden. In the case of very complex AI systems, such
proof may be very difficult if not impossible to deliver, for the same reasons discussed above.
Coupled with the additional requirement that the defendant would also have to prove he took all
reasonable measures to prevent the offense from happening, the burden that would be placed on
users and programmers would be much too high, potentially even compromising the right to a fair
trial. Moreover, in the end judges would still have to assess the arguments of the defendant as to

165
Y. BATHAEE, “The Artificial Intelligence Black Box and the Failure of Intent and Causation”, Harvard Journal of Law &
Technology 2018, vol. 31(2), 894.
166
Ibid., 931.
167
Ibid., 931.
168
G. HALLEVY, Liability for Crimes Involving Artificial Intelligence Systems, Cham, Springer, 2015, 139.
169
Ibid., 138.
why they were not negligent, so this simply does not solve the problem of judges lacking expertise
in this area.

Further, strict liability would also be an inadequate way to balance the interests at stake. While this
may help to satisfy society’s demand for accountability and to improve trust in AI, strict liability
would also significantly undermine the further technological development of AI systems, as users
and programmers would be discouraged by the high risk of being held criminally liable.170
Consequently, strict liability would then also pose a significant economic barrier to entry into the
market as only large players with substantial funds would be willing to take such high risks, resulting in
market concentration in the long run.171

5.3.5 The doctrine of natural and probable consequences

It is interesting to note that Gabriel Hallevy has addressed the scenario in which an AI system
commits a crime without this ever being the intention of the user or programmer by working out a
“natural-probable-consequence” liability model.172 This is an unusual application of the doctrine of
natural and probable consequences, as this doctrine is normally used when the defendant is an
accomplice to the crime of another.173 The doctrine then provides that “where A intentionally aided
B’s underlying crime C1 (say theft), but then B also goes on to commit a different crime C2 (say
murder), then A would be guilty of C2 as well, provided that C2 was reasonably foreseeable.”174

Appealing to this doctrine to assess criminal liability in cases of negligence, as Hallevy does,175 seems
to be inappropriate. However, he also uses this doctrine for a second type of scenario, namely when
a user or programmer intentionally used an AI system to commit a crime “but the AI entity deviated
from the plan and committed another offense, in addition to or instead of the planned offense”.176

170
D. LIMA, "Could AI Agents Be Held Criminally Liable? Artificial Intelligence and the Challenges for Criminal Law", South
Carolina Law Review 2018, vol. 69(3), 693.
171
Y. BATHAEE, “The Artificial Intelligence Black Box and the Failure of Intent and Causation”, Harvard Journal of Law &
Technology 2018, vol. 31(2), 932.
172
G. HALLEVY, "The Criminal Liability of Artificial Intelligence Entities – from Science Fiction to Legal Social Control",
Akron Intellectual Property Journal 2016, vol. 4(2), 181-186.
173
R. ABBOTT and A. SARCH, “Punishing Artificial Intelligence: Legal Fiction or Science Fiction”, UC Davis Law Review
2019, vol. 53, 370.
174
Ibid., 370-371.
175
G. HALLEVY, "The Criminal Liability of Artificial Intelligence Entities – from Science Fiction to Legal Social Control",
Akron Intellectual Property Journal 2016, vol. 4(2), 184.
176
Ibid.
The doctrine of natural and probable consequences then allows the user or programmer to be held
directly liable for the additional crime committed by the AI entity.177 This can indeed be a valuable
way of attributing criminal liability to the user or programmer in such cases, rather than having to
resort to negligence crimes. While the requirement of foreseeability may still pose problems, the
threshold here should be significantly lower, considering that the user or programmer already had
the intention to commit a crime by using unpredictable and perhaps even dangerous means.

5.4 Criminal liability of AI agents

5.4.1 Concept

The purpose of this section is to consider if AI agents themselves could be held criminally liable for
the crimes they commit. First, the issue of legal personality will be discussed, followed by an
assessment of whether AI could meet the actus reus and mens rea requirements. After that, some
problems that arise in the context of prosecuting and punishing AI will be outlined.

5.4.2 Legal personality

AI agents currently cannot be held criminally liable for the simple reason that they lack legal
personality, and legal personality is a necessary requirement to be able to be charged and convicted
of a crime.178 In other words, AI agents are mere objects of law, rather than subjects of law.179
However, it should be noted that some initiatives have already been taken on the EU level to at
least consider the possibility of granting legal personality to some AI systems. In particular, the
Committee on Legal Affairs recommended that the Commission consider the following possibility:
“creating a specific legal status for robots in the long run, so that at least the most sophisticated
autonomous robots could be established as having the status of electronic persons responsible for
making good any damage they may cause, and possibly applying electronic personality to cases

177
Ibid.
178
R. ABBOTT and A. SARCH, “Punishing Artificial Intelligence: Legal Fiction or Science Fiction”, UC Davis Law Review
2019, vol. 53, 328.
179
P. CERKA, J. GRIGIENE and G. SIRBIKYTE, "Is it possible to grant legal personality to artificial intelligence software
systems?", Computer Law & Security Review 2017, vol. 33(5), 685-699.
where robots make autonomous decisions or otherwise interact with third parties independently”.180
This was later repeated in a resolution of the European Parliament again containing
recommendations to the Commission.181 These recommendations were met with some resistance,
as illustrated by an open letter signed by many experts, arguing that the recommendation is “based
on an overvaluation of the actual capabilities of even the most advanced robots, a superficial
understanding of unpredictability and self-learning capacities and, a robot perception distorted by
Science-Fiction and a few recent sensational press announcements”.182

On a theoretical level at least, there are no legal barriers that obstruct the granting of legal
personality to AI agents.183 Legal personality ultimately means no more than being granted duties
and rights by the law: “When we say that an actor has legal personality, we mean that a legal system
addresses its rules to the actor, both to give the actor rights and to subject it to obligations. Legal
personality is not necessarily correlated with a metaphysical or ethical notion of personhood.”184
Legal personality is thus a legal fiction, granted by the law when doing so is in the interest of society,
regardless of whether an agent can really be considered a person.185 A more accurate term may
therefore be legal capacity rather than personality.186

Thus, the artificial nature of AI agents should not preclude them from being granted legal
personality when this would be beneficial to society. In fact, legal personality has already been
granted to entities other than natural persons. The most widespread example, of course, would be
corporations, whose criminal liability has even gradually been accepted in many jurisdictions,
replacing the old Roman principle of societas delinquere non potest.187 Ultimately, however,

180
Report (JURI) with recommendations to the Commission on Civil Law Rules on Robotics, 27 January 2017,
(2015/2103(INL)), art. 59(f).
181
Resolution (EP) with recommendations to the Commission on Civil Law Rules on Robotics, 16 February 2017,
(2015/2103(INL)), art. 59(f).
182
X, Open letter to the European Commission: Artificial Intelligence and Robotics, http://www.robotics-openletter.eu/.
183
R. DREMLIUGA, P. KUZNETCOV and A. MAMYCHEV, "Criteria for Recognition of AI as a Legal Person", Journal of
Politics and Law 2019, vol. 12(3), 106.
184
J.J. BRYSON, M.E. DIAMANTIS and T.D. GRANT, "Of, for, and by the people: the legal lacuna of synthetic persons",
Artificial Intelligence and Law 2017, vol. 25, 277.
185
Ibid., 277-278.
186
F. LAGIOIA and G. SARTOR, "AI Systems Under Criminal Law: a Legal Analysis and a Regulatory Perspective",
Philosophy & Technology 2020, vol. 33, ch 2.2.
187
P.M. FREITAS, F. ANDRADE and P. NOVAIS, "Criminal Liability of Autonomous Agents: from the Unthinkable to the
Plausible" in P. CASANOVAS, U. PAGALLO, M. PALMIRANI and G. SARTOR, AI Approaches to the Complexity of Legal
Systems: AICOL 2013 International Workshops, AICOL-IV@IVR, Belo Horizonte, Brazil, July 21-27, 2013 and AICOL-
V@SINTELNET-JURIX, Bologna, Italy, December 11, 2013, Revised Selected Papers, Berlin Heidelberg, Springer, 2014,
147.
corporations are still made up of human beings who operate the entity, whereas AI agents are
simply made by humans.188 But legal personality has also already been granted to Hindu idols,189 as
well as various environmental features such as national parks and rivers.190

Moreover, granting legal personality to AI agents would by no means result in such agents having
the same rights and obligations as natural persons: legal personality is divisible, and entities can be
granted more or fewer rights and obligations.191 Just as the scope of rights and obligations of
corporations does not extend to those rights and obligations that relate to traits only natural
persons have, so too would the scope of rights and obligations granted to AI agents necessarily be
limited.192 In fact, legal personality of AI could even lead to only obligations and no rights
whatsoever, although this would not be workable in practice (for example, in order to pay damages
or fines, an AI agent would first need to have property rights).193

In essence, the lack of legal personality is not a true barrier to holding AI agents criminally liable, as AI
could easily be granted legal personality for the purposes of criminal law if this were to be desired.

5.4.3 Actus reus

AI agents can quite easily be considered as being able to bring about conduct in the form of both
commissions and omissions. This also applies when there is no mechanical movement of any kind,
as this is not a requirement.194 Hence, if one does not adopt the additional requirement of
voluntariness, AI agents can perfectly meet the actus reus requirement.

188
S.M. SOLAIMAN, "Legal personality of robots, corporations, idols and chimpanzees: a quest for legitimacy", Artificial
Intelligence and Law 2017, vol. 25, ch 6.
189
Ibid., ch 4.
190
J.J. BRYSON, M.E. DIAMANTIS and T.D. GRANT, "Of, for, and by the people: the legal lacuna of synthetic persons",
Artificial Intelligence and Law 2017, vol. 25, 280; B. ROUSSEAU, "In New Zealand, Lands and Rivers Can Be People (Legally
Speaking)”, The New York Times 13 July 2016, https://www.nytimes.com/2016/07/14/world/what-in-the-world/in-
new-zealand-lands-and-rivers-can-be-people-legally-speaking.html; M. SAFI, “Ganges and Yamuna rivers granted same
legal rights as human beings”, The Guardian 21 March 2017,
https://www.theguardian.com/world/2017/mar/21/ganges-and-yamuna-rivers-granted-same-legal-rights-as-human-
beings.
191
J.J. BRYSON, M.E. DIAMANTIS and T.D. GRANT, "Of, for, and by the people: the legal lacuna of synthetic persons",
Artificial Intelligence and Law 2017, vol. 25, 280.
192
P. CERKA, J. GRIGIENE and G. SIRBIKYTE, "Is it possible to grant legal personality to artificial intelligence software
systems?", Computer Law & Security Review 2017, vol. 33(5), ch 6.
193
S. CHESTERMAN, "Artificial intelligence and the limits of legal personality", International and Comparative Law
Quarterly 2020, vol. 69(4), 824-825.
194
R.A. DUFF, Answering for Crime: Responsibility and Liability in the Criminal Law, Oxford, Hart Publishing, 2007, 99.
However, the actus reus requirement does become problematic when such an additional requirement
of voluntariness is adopted. As was stated above, while there is no consensus about the meaning of
this requirement in the legal doctrine, the arguments often involve philosophy, psychology, and
neurology.195 Voluntariness could be considered as being rooted in the capacity for judgment and free
will.196 In this respect, German criminal law theorists generally require an autonomous will before
something is considered an “act”.197 While AI agents can have a large degree of autonomy, the
question of whether or not they can be considered as having a “will”, let alone an autonomous or
free will at that, should still be answered negatively. Arguing otherwise would not be in accordance
with the current state of the art in AI programming, or would require a definition of free will that
significantly differs from its conventional meanings. The conduct of AI agents may then not amount
to true acts, but rather “mere happenings”.198 Or their conduct may be seen as automatisms that
are uncontrolled and uncontrollable, as a lack of free will can be interpreted as meaning that AI
does not have any real ability to control its own conduct.199 That being said, it is not unthinkable
that AI systems might one day be advanced enough to be considered as having an autonomous will,
although this remains science fiction for now.

All in all, the actus reus requirement may definitely cause some doctrinal problems for AI agents,
but as will now be made clear in the next section, these problems are quickly overshadowed by the
problems caused by the mens rea requirement.200

195
P.M. FREITAS, F. ANDRADE and P. NOVAIS, "Criminal Liability of Autonomous Agents: from the Unthinkable to the
Plausible" in P. CASANOVAS, U. PAGALLO, M. PALMIRANI and G. SARTOR, AI Approaches to the Complexity of Legal
Systems: AICOL 2013 International Workshops, AICOL-IV@IVR, Belo Horizonte, Brazil, July 21-27, 2013 and AICOL-
V@SINTELNET-JURIX, Bologna, Italy, December 11, 2013, Revised Selected Papers, Berlin Heidelberg, Springer, 2014,
152.
196
D. LIMA, "Could AI Agents Be Held Criminally Liable? Artificial Intelligence and the Challenges for Criminal Law", South
Carolina Law Review 2018, vol. 69(3), 682.
197
S. GLESS, E. SILVERMAN and T. WEIGEND, "If Robots Cause Harm, Who Is to Blame? Self-Driving Cars and Criminal
Liability", New Criminal Law Review 2016, vol. 19(3), 419.
198
Y. HU, “Robot Criminals”, University of Michigan Journal of Law Reform 2019, vol. 52, 518.
199
See A. ASHWORTH and J. HORDER, Principles of Criminal Law, Oxford, Oxford University Press, 2013, 86-89.
200
M. SIMMLER and N. MARKWALDER, "Guilty robots? – Rethinking the Nature of Culpability and Legal Personhood in
an Age of Artificial Intelligence", Criminal Law Forum 2019, vol. 30, 10.
5.4.4 Mens rea

5.4.4.1 General sense: culpability or blameworthiness

A first question that must be answered to assess whether AI agents can meet the mens rea
requirement is whether they possess the capacity for culpable conduct,201 or mens
rea in its general sense as blameworthiness.202

The question of blameworthiness turns on whether AI agents are capable of
performing morally wrongful actions. An act is morally wrongful when the act belongs to a category
of acts that is deemed immoral by a moral rule (which is absolute) or a moral principle (which is
relative).203 Blameworthiness then requires that an AI agent knows that a moral principle or rule is
applicable to its act and that the act violates this moral principle or rule.204 As a result, the AI must
be equipped with the moral algorithms required to be able to make such a judgment. At
present, most AI agents do not seem to be equipped with sufficiently advanced moral
algorithms. But this is not entirely unthinkable. The algorithms that deal with trolley dilemmas that
have to be incorporated into self-driving vehicles, for example, come to mind.205 However, even if
AI is equipped with the algorithms required to make moral judgments, the objection might still be
raised that these AI agents do not choose the moral algorithms upon which they act.206 Rather,
these algorithms are engineered by external forces and do not reflect any desires or beliefs of the
AI agent itself, so that there can still be no blameworthiness on the part of the AI agent.207 When AI
has the capacity to self-learn, however, this objection may be partly void, as the moral algorithms
could then be worked out by the AI itself and hence could actually reflect the desires and beliefs of
the AI agent. But that would lead the discussion to the controversial question of whether AI can

201 R. ABBOTT and A. SARCH, “Punishing Artificial Intelligence: Legal Fiction or Science Fiction”, UC Davis Law Review 2019, vol. 53, 350.
202 P.M. FREITAS, F. ANDRADE and P. NOVAIS, "Criminal Liability of Autonomous Agents: from the Unthinkable to the Plausible" in P. CASANOVAS, U. PAGALLO, M. PALMIRANI and G. SARTOR, AI Approaches to the Complexity of Legal Systems: AICOL 2013 International Workshops, AICOL-IV@IVR, Belo Horizonte, Brazil, July 21-27, 2013 and AICOL-V@SINTELNET-JURIX, Bologna, Italy, December 11, 2013, Revised Selected Papers, Berlin Heidelberg, Springer, 2014, 153.
203 Y. HU, “Robot Criminals”, University of Michigan Journal of Law Reform 2019, vol. 52, 522.
204 Ibid.
205 Ibid., 496.
206 Ibid., 523.
207 Ibid., 523.
even be considered as having true beliefs or desires, which will be addressed in the next section
relating to intentions.

Furthermore, blameworthiness not only requires the ability to make moral judgments but also the
possibility to make choices and to avoid committing the crime in question: “blameworthiness
presupposes the actor’s ability to decide between doing right and doing wrong, or in other words,
presupposes the actor’s ability to avoid committing a wrongful act”.208 Hence, there is
blameworthiness when someone behaved unlawfully although they could have acted
differently.209 This opportunity to behave differently is once again rooted in free
will.210 But when AI decides to commit a crime, this decision is not the result of a free and conscious
choice; it simply follows from the algorithms of the AI, which the AI cannot avoid following.
Without getting into what it actually means to have free will on a deeper
neurological or philosophical level, it could again be stated that, today, even the most autonomous
AI agents can hardly be considered as having free will in its usual meaning. The possible objection
that human beings, too, may not actually have free will is not relevant, as social relations and legal
norms are simply based on the (perhaps fictitious) assumption that human beings have free will.211
Such an assumption may perhaps one day be made for very strong AI systems as well, but this is simply
not the case today.

In this respect, it might also be useful to draw a comparison with human beings who can only
partially be considered psychopaths. These so-called partial psychopaths are able to recognise
which norms society enforces as a result of what it considers legally or morally wrong,
and they may comply with these norms when it is in their own interest to do so (to avoid
punishment).212 But they do not comply with these norms because of their moral merit (out of
empathy for victims and to avoid causing them harm and suffering), as they are emotionally

208 S. GLESS, E. SILVERMAN and T. WEIGEND, "If Robots Cause Harm, Who Is to Blame? Self-Driving Cars and Criminal Liability", New Criminal Law Review 2016, vol. 19(3), 420.
209 P.M. FREITAS, F. ANDRADE and P. NOVAIS, "Criminal Liability of Autonomous Agents: from the Unthinkable to the Plausible" in P. CASANOVAS, U. PAGALLO, M. PALMIRANI and G. SARTOR, AI Approaches to the Complexity of Legal Systems: AICOL 2013 International Workshops, AICOL-IV@IVR, Belo Horizonte, Brazil, July 21-27, 2013 and AICOL-V@SINTELNET-JURIX, Bologna, Italy, December 11, 2013, Revised Selected Papers, Berlin Heidelberg, Springer, 2014, 153.
210 M. SIMMLER and N. MARKWALDER, "Guilty robots? – Rethinking the Nature of Culpability and Legal Personhood in an Age of Artificial Intelligence", Criminal Law Forum 2019, vol. 30, 10.
211 S. GLESS, E. SILVERMAN and T. WEIGEND, "If Robots Cause Harm, Who Is to Blame? Self-Driving Cars and Criminal Liability", New Criminal Law Review 2016, vol. 19(3), 421.
212 F. LAGIOIA and G. SARTOR, "AI Systems Under Criminal Law: a Legal Analysis and a Regulatory Perspective", Philosophy & Technology 2020, vol. 33, ch 6.1.
incapable of fully appreciating the moral wrongfulness of their behaviour.213 Despite this, partial
psychopaths are generally still held criminally liable.214 That being said, the comparison is by no
means perfect, and holding partial psychopaths criminally liable may itself be at odds with the
requirement of blameworthiness.

5.4.4.2 Special sense: mental states required by specific offences

5.4.4.2.1 Intent
To be held criminally liable, AI agents would not only have to meet the requirement of
blameworthiness, but they would also have to be able to satisfy the various mental states
prescribed by specific offences. Intent is perhaps the most important of these.

As was stated above, acting intentionally is often considered as closely relating to motives and as
acting for a reason.215 The moral capacity of AI agents is not relevant here, as intent is not a moral
concept but a purely descriptive concept that describes the psychology of an agent.216 In a
basic sense, AI agents can indeed be considered as acting for a reason and as having motives. Gabriel
Hallevy, for example, argues that being programmed to have a purpose or aim and to take actions
to achieve that purpose constitutes (even specific) intent.217 Similarly, an AI agent can perhaps be
seen as acting intentionally when it guides its behaviour towards achieving a particular
outcome.218 The argument can also be made that AI agents can act intentionally in the same way as
corporations can: by considering the algorithms that guide an AI agent's behaviour as similar to the
internal decision structure of a corporation, acts done pursuant to these algorithms are done for the AI’s own
reasons and are therefore intentional.219

On a deeper level, however, objections can be raised to challenge the notion that AI agents are
capable of having actual intentions. In this respect, reference can be made to the controversial

213 Ibid.
214 Ibid.
215 V. TADROS, Criminal Responsibility, Oxford, Oxford University Press, 2005, 216-217.
216 Ibid., 214.
217 G. HALLEVY, "The Criminal Liability of Artificial Intelligence Entities – from Science Fiction to Legal Social Control", Akron Intellectual Property Journal 2016, vol. 4(2), 189.
218 R. ABBOTT and A. SARCH, “Punishing Artificial Intelligence: Legal Fiction or Science Fiction”, UC Davis Law Review 2019, vol. 53, 358.
219 Y. HU, “Robot Criminals”, University of Michigan Journal of Law Reform 2019, vol. 52, 521.
and much-debated Chinese Room Argument of John Searle.220 He uses this thought experiment to
argue that computers and algorithms are able to simulate understanding, but that this does not
amount to true understanding.221 Searle even goes so far as to say that “Whatever else intentionality
is, it is a biological phenomenon, and it is as likely to be as causally dependent on the specific
biochemistry of its origins as lactation, photosynthesis, or any other biological phenomena. (…)
Whatever it is that the brain does to produce intentionality, it cannot consist in instantiating a
program since no program, by itself, is sufficient for intentionality”.222 Reference can also be made
to the Turing test, which was never intended to be a test of true metaphysical computer sentience
but was merely an “Imitation Game”.223

Similarly, the reasons behind the behaviour of AI agents today cannot be seen as actual
intentions but at best as imitations of actual intentions. Considering those reasons as true motives
and intentions would require a much too flexible and lenient interpretation of those terms. This
significantly complicates both direct intent and dolus eventualis, as these forms of intent require
that an AI agent can be considered as desiring to bring about a particular result,224 and hence as
being capable of having real desires and motives that underlie its behaviour. Indirect intent, on the
other hand, only requires that the AI acts whilst foreseeing with practical
certainty that its behaviour will bring about a particular result, without desiring to bring about that
result.225 Indirect intent thus mitigates the problematic question of whether AI agents can be
considered as having real desires. However, if one adopts the viewpoint that AI agents cannot have
any true understanding, it would still not be justified to impose liability on AI even in cases of indirect
intent. This is because it would not make sense to punish AI for acting despite foreseeing certain

220 J.R. SEARLE, "Minds, brains, and programs", Behavioral and Brain Sciences 1980, vol. 3(3), 417-424; P.M. FREITAS, F. ANDRADE and P. NOVAIS, "Criminal Liability of Autonomous Agents: from the Unthinkable to the Plausible" in P. CASANOVAS, U. PAGALLO, M. PALMIRANI and G. SARTOR, AI Approaches to the Complexity of Legal Systems: AICOL 2013 International Workshops, AICOL-IV@IVR, Belo Horizonte, Brazil, July 21-27, 2013 and AICOL-V@SINTELNET-JURIX, Bologna, Italy, December 11, 2013, Revised Selected Papers, Berlin Heidelberg, Springer, 2014, 153.
221 Y. HU, “Robot Criminals”, University of Michigan Journal of Law Reform 2019, vol. 52, 519.
222 J.R. SEARLE, "Minds, brains, and programs", Behavioral and Brain Sciences 1980, vol. 3(3), 424.
223 A.M. TURING, "Computing Machinery and Intelligence", Mind 1950, vol. 59(236), 433-460; K.J. HAYWARD and M.M. MAAS, “Artificial intelligence and crime: A primer for criminologists”, Crime, Media, Culture 2020, vol. 00(0), https://journals.sagepub.com/doi/abs/10.1177/1741659020917434?journalCode=cmca, 4.
224 A. ASHWORTH and J. HORDER, Principles of Criminal Law, Oxford, Oxford University Press, 2013, 170; J. BENTHAM, An Introduction to the Principles of Morals and Legislation, Kitchener, Batoche Books, 2000, 69-73; E. KAYITANA, The Form of Intention Known as Dolus Eventualis in Criminal Law, 2008, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1191502, 0-2; D.W. MORKEL, “On the Distinction between Recklessness and Conscious Negligence”, The American Journal of Comparative Law 1982, vol. 30(2), 328-329.
225 I. KUGLER, "The definition of Oblique Intention", The Journal of Criminal Law 2004, vol. 68(1), 79-83.
consequences, when the AI cannot truly understand and appreciate whatever consequences it is
foreseeing. This is also illustrated by the fact that, in the U.S., indirect intent as “knowledge” of certain
consequences is sometimes also referred to as “awareness”,226 while AI perhaps cannot be
considered capable of having any real awareness.

All in all, AI agents today are simply not similar enough to human beings. Too many objections can be
raised, and it must be concluded that AI is currently not capable of having true intentions
but only imitations of intentions. Additionally, even if AI were capable of having true intentions, it
would again be very problematic for the prosecution to prove such intentions and for
judges to assess them.

5.4.4.2.2 Negligence
A second question that must be answered is whether AI can be considered able to meet
the requirements of criminal negligence. To reiterate, negligence-based
criminal liability requires that an agent has failed to use reasonable care to prevent harm where the
reasonable person would have done so, in particular by taking an unreasonable risk that they were
not aware of (but ought to have been aware of).227 The (un)reasonableness of risks follows from the
applicable duty of care, and risks must also be foreseeable.228

If AI agents were granted legal personality, it would be easy for the law or for contracts to impose
duties of care on such agents, so this does not pose a problem. The reasonable person standard
can also be applied to AI: seeing as the reasonable person standard must be adapted to the relevant
circumstances of the specific offender, the reasonable person in this context is simply the
reasonable AI agent of the same type.229

However, negligence-based liability again poses some significant problems when applied to AI. The
first such problem is the requirement that the agent had the capacity to act differently and had the

226 G. HALLEVY, Liability for Crimes Involving Artificial Intelligence Systems, Cham, Springer, 2015, 83.
227 N. OSMANI, "The Complexity of Criminal Liability of AI Systems", Masaryk University Journal of Law and Technology 2020, vol. 14(1), 63-64; G. HALLEVY, Liability for Crimes Involving Artificial Intelligence Systems, Cham, Springer, 2015, 121-122; R. DREMLIUGA and N. PRISEKINA, "The Concept of Culpability in Criminal Law and AI Systems", Journal of Politics and Law 2020, vol. 13(3), 259.
228 N. OSMANI, "The Complexity of Criminal Liability of AI Systems", Masaryk University Journal of Law and Technology 2020, vol. 14(1), 65-67; G. HALLEVY, Liability for Crimes Involving Artificial Intelligence Systems, Cham, Springer, 2015, 121-122.
229 G. HALLEVY, Liability for Crimes Involving Artificial Intelligence Systems, Cham, Springer, 2015, 128.
rational capacities required to have control over one’s own conduct.230 As a result, the question at
hand becomes whether an AI agent has failed to take due care given its mental and physical
capacities.231 Clearly, this closely relates to blameworthiness in the general sense as the possibility
to make choices and to avoid committing the crime in question,232 which is rooted in having free
will.233 Today, at least, considering AI agents as having true control over their own conduct given
their mental and physical capacities, or as having free will, seems inappropriate and far-fetched.

Another problem is that, in the case of negligence, an agent has taken risks without having been aware
of them, although they ought to have been aware of them.234 In other words, negligence
requires that an agent was unaware of risks despite being capable of forming such
awareness.235 If one adopts the viewpoint that AI agents can never have true understanding and
awareness, as set out above, this requirement becomes problematic.

Next, judges would again face difficulties if they were to attempt to hold AI agents accountable for
negligence. Judges are already not adequately qualified to assess whether the conduct of
programmers or users was reasonable in the light of the applicable duty of care. The problem would
be even bigger if judges had to assess the reasonableness of the behaviour of a
complex AI agent, measured through some kind of reasonable AI standard. Without widely accepted
standards or norms, or without an expensive appeal to a panel of experts, properly making
such an assessment seems all but impossible. The same problem arises when trying to determine what
risks are foreseeable by an AI agent.

5.4.4.2.3 Recklessness
As for recklessness, the same problems apply. The key difference is that, in the case of recklessness, an
AI agent has taken an unreasonable risk despite having been subjectively aware of this risk.236

230 R.A. DUFF, Answering for Crime: Responsibility and Liability in the Criminal Law, Oxford, Hart Publishing, 2007, 72.
231 A. ASHWORTH and J. HORDER, Principles of Criminal Law, Oxford, Oxford University Press, 2013, 182-183.
232 S. GLESS, E. SILVERMAN and T. WEIGEND, "If Robots Cause Harm, Who Is to Blame? Self-Driving Cars and Criminal Liability", New Criminal Law Review 2016, vol. 19(3), 420.
233 M. SIMMLER and N. MARKWALDER, "Guilty robots? – Rethinking the Nature of Culpability and Legal Personhood in an Age of Artificial Intelligence", Criminal Law Forum 2019, vol. 30, 10.
234 R. DREMLIUGA and N. PRISEKINA, "The Concept of Culpability in Criminal Law and AI Systems", Journal of Politics and Law 2020, vol. 13(3), 259.
235 G. HALLEVY, Liability for Crimes Involving Artificial Intelligence Systems, Cham, Springer, 2015, 125.
236 M. DSOUZA, "Don't panic: artificial intelligence and Criminal Law" in D.J. BAKER and P.H. ROBINSON (eds.), Artificial Intelligence and the Law: Cybercrime and Criminal Liability, Oxford, Routledge, 2021, 257.
Hence, AI agents would not only need the capacity to form true awareness; it would also have to
be proven that the AI agent was indeed subjectively aware of the risk. Such proof would be very
hard to deliver in the case of complex and non-transparent systems.

5.4.4.2.4 Exception: strict liability


One way to deal with AI agents potentially not being able to have these various mental states could
be to use strict liability. But some problems regarding strict liability were already highlighted above,
and those arguments apply analogously to AI agents.237 Moreover, when applying strict
liability to AI, two additional problems arise that make strict liability a particularly inadequate
solution. First, there is the problem that only formally strict liability can be justified, so that proof of
moral culpability would still be required.238 Thus, the requirement of blameworthiness would still
have to be met by AI agents so that the mens rea requirement in the general sense is fulfilled.
Second, it is an integral part of strict liability that the offender has the possibility to rebut the
presumption on which strict liability rests, which he can do by proving that he acted without intention
or negligence and that he took all reasonable measures to prevent the offence.239 As a result, this
form of liability can only be used for offenders with sufficient mental capability to do this,
which means that the offender needs to be capable of having intentions and of being negligent in the
first place.240 Clearly, strict liability ultimately does not solve any problems.

5.4.5 Prosecuting and punishing AI

5.4.5.1 Prosecuting AI in practice

Even if AI could be held criminally liable, many problems would still arise in the context of
prosecuting and punishing AI. A first problem is that actually prosecuting AI would simply be very
difficult in practice. This issue has barely been addressed in legal doctrine so far.
For example, Gabriel Hallevy, before considering punishment adjustments for AI agents,
states: “Let us assume an AI entity is criminally liable. Let us assume it is indicted, tried, and

237 See section 5.3.4.
238 R.A. DUFF, Answering for Crime: Responsibility and Liability in the Criminal Law, Oxford, Hart Publishing, 2007, 233-235.
239 G. HALLEVY, Liability for Crimes Involving Artificial Intelligence Systems, Cham, Springer, 2015, 138-140.
240 Ibid.
convicted”.241 But the latter is quite an assumption to make. How could AI agents possibly
stand trial today? To suggest that modern AI agents could stand before a court and argue that they are
not guilty seems quite absurd. Another solution would have to be found, such as representation by a
human being, akin to the representation of legal entities. The fact that this practical problem is still
barely discussed in the literature goes to show that, while the criminal liability of AI agents is
forward-looking and makes us re-evaluate old criminal law principles, it very much remains a
speculative discussion for now, with no practical application anytime soon.

Even if AI agents were somehow able to stand trial, significant legal changes would still be
required before such a trial could be conducted. In particular, it seems inevitable that AI agents
should then be granted at least some fundamental rights, such as the right to a fair trial and property
rights, as a trial without such rights would not make sense. Criminal law would also have to tackle
the issue of defences that might be relevant for AI agents. For example, in a number of cases relating
to cybercrime offences, defendants were able to successfully argue that their computer had been
infected with malware (the “Trojan defence”).242 It seems justified that similar defences should be
available for AI agents. In this respect, questions would have to be raised about whether AI agents
could appeal to existing defences, or, similarly, whether existing defences could perhaps obstruct
criminal conviction altogether because of the nature of AI.243 Perhaps new defences would also have
to be introduced to deal with problems specific to AI agents.

5.4.5.2 Imposing criminal punishments on AI

Before AI could be punished under the current criminal law framework, there is also the question
of whether the currently existing criminal sanctions could be imposed on AI agents. In this respect,
Hallevy argues that most existing sanctions can indeed be applied to AI agents.244
He refers to the legal technique of conversion that is used to apply sanctions to corporations,245 and

241 G. HALLEVY, "The Criminal Liability of Artificial Intelligence Entities – from Science Fiction to Legal Social Control", Akron Intellectual Property Journal 2016, vol. 4(2), 194.
242 J.K.C. KINGSTON, "Artificial Intelligence and Legal Liability" in M. BRAMER and M. PETRIDIS (eds.), Research and Development in Intelligent Systems XXXIII, s.l., Springer, 2016, 272.
243 See G. HALLEVY, Liability for Crimes Involving Artificial Intelligence Systems, Cham, Springer, 2015, 150-184; G. HALLEVY, "The Criminal Liability of Artificial Intelligence Entities – from Science Fiction to Legal Social Control", Akron Intellectual Property Journal 2016, vol. 4(2), 192-193.
244 G. HALLEVY, "The Criminal Liability of Artificial Intelligence Entities – from Science Fiction to Legal Social Control", Akron Intellectual Property Journal 2016, vol. 4(2), 199.
245 G. HALLEVY, Liability for Crimes Involving Artificial Intelligence Systems, Cham, Springer, 2015, 43-45.
he argues that existing punishments can be used and adjusted by asking the following three
questions: “(1) What is the fundamental significance of the specific punishment for a human? (2)
How does that punishment affect AI entities? (3) What practical punishments may achieve the same
significance when imposed on AI entities?”246 This does indeed seem a viable legal technique,
provided that punishments can be found that can be considered as having the same significance for
AI entities.

This might even mean that the death sentence could re-enter various legal systems, as
the deletion or destruction of an AI agent could be considered the equivalent of the death
sentence for AI agents.247 However, this could perhaps only have a comparable effect
when the AI is imbued with a will to live.248 Hallevy also argues that imprisonment can be applied
to AI agents by putting them out of use for a period of time, as a way to restrict their liberty.249 Yet
current AI agents do not experience time, and during their period of incapacitation they would not
be able to reflect on their actions, as they are not capable of self-awareness and self-reflection.250
AI agents today cannot even be considered as having any desire for liberty. Sentencing AI to
community service, where the AI agent is somehow compelled to labour for the benefit of the
community,251 also does not seem a viable option until AI has general intelligence and can perform
a wide variety of different tasks. For now, this would only be possible in the rare instances where
such an alternative way to benefit the community can actually be found.252

The most realistic and practical solution, however, would be to impose fines on AI agents. But as
was already explained, AI agents currently lack legal personality and, having no property rights, cannot
own any money. One way to deal with this would be to sentence an AI agent to perform labour and
then assign a monetary value to that labour.253

246 G. HALLEVY, "The Criminal Liability of Artificial Intelligence Entities – from Science Fiction to Legal Social Control", Akron Intellectual Property Journal 2016, vol. 4(2), 195.
247 Ibid., 196.
248 S. GLESS, E. SILVERMAN and T. WEIGEND, "If Robots Cause Harm, Who Is to Blame? Self-Driving Cars and Criminal Liability", New Criminal Law Review 2016, vol. 19(3), 424.
249 G. HALLEVY, "The Criminal Liability of Artificial Intelligence Entities – from Science Fiction to Legal Social Control", Akron Intellectual Property Journal 2016, vol. 4(2), 196-197.
250 S. GLESS, E. SILVERMAN and T. WEIGEND, "If Robots Cause Harm, Who Is to Blame? Self-Driving Cars and Criminal Liability", New Criminal Law Review 2016, vol. 19(3), 424.
251 G. HALLEVY, "The Criminal Liability of Artificial Intelligence Entities – from Science Fiction to Legal Social Control", Akron Intellectual Property Journal 2016, vol. 4(2), 198.
252 R. CHARNEY, "Can Androids Plead Automatism – A Review of When Robots Kill: Artificial Intelligence under the Criminal Law by Gabriel Hallevy", University of Toronto Faculty of Law Review 2015, vol. 73(1), 72.
253 G. HALLEVY, Liability for Crimes Involving Artificial Intelligence Systems, Cham, Springer, 2015, 226-227.
But again, this would currently only be possible in rare instances. Another way to resolve this
problem might be to pay fines from some kind of mandatory insurance regime.254 The funds for this
regime could be contributed by programmers and users and could be supplemented with some of the
financial gains obtained through the AI agent.255 In the context of civil liability, members of the
European Parliament have already advocated the creation of such a regime for autonomous
vehicles.256 However, because the unpredictable nature of AI complicates the calculation
of the level of risk, it might be difficult to determine the amount of the insurance premiums.257

5.4.5.3 Functions of criminal law and AI

Finally, even if the existing criminal sanctions could properly be imposed on AI, there would still be
the requirement that punishing AI serves the purposes of criminal law. To this end, it should be
assessed what some of the most important functions of criminal law are, as well as whether
punishing AI could contribute to those functions.

A first function of criminal law is its censuring function. This function distinguishes criminal law from
tort law, which is relatively morally neutral, and entails that criminal law is meant to send the message
that an action is morally wrong and that the agent behind it is morally blameworthy.258 In other words, the
state, through punishment and criminal convictions, officially condemns culpable conduct that
violates the core values of society.259 This function is sometimes also referred to as the expressive
function of criminal law.260 Such an official condemnation may have benefits when AI agents commit
crimes. It may, for example, lead to greater satisfaction for victims of crimes committed by AI and
may result in a greater sense of security in society.261 If AI agents were to be considered

254 Y. HU, “Robot Criminals”, University of Michigan Journal of Law Reform 2019, vol. 52, 530.
255 F. LAGIOIA and G. SARTOR, "AI Systems Under Criminal Law: a Legal Analysis and a Regulatory Perspective", Philosophy & Technology 2020, vol. 33, ch 8.4.
256 Y. HU, “Robot Criminals”, University of Michigan Journal of Law Reform 2019, vol. 52, 530; I. LIETZEN, "Robots and artificial intelligence: MEPs call for EU-wide liability rules", Press room (EP) 16 February 2017, https://www.europarl.europa.eu/news/en/press-room/20170210IPR61808/robots-and-artificial-intelligence-meps-call-for-eu-wide-liability-rules.
257 U. PAGALLO, "When Morals Ain't Enough: Robots, Ethics, and the Rules of the Law", Minds and Machines 2017, vol. 27, 631.
258 Y. HU, “Robot Criminals”, University of Michigan Journal of Law Reform 2019, vol. 52, 504.
259 R. ABBOTT and A. SARCH, “Punishing Artificial Intelligence: Legal Fiction or Science Fiction”, UC Davis Law Review 2019, vol. 53, 341.
260 Y. HU, “Robot Criminals”, University of Michigan Journal of Law Reform 2019, vol. 52, 504.
261 R. ABBOTT and A. SARCH, “Punishing Artificial Intelligence: Legal Fiction or Science Fiction”, UC Davis Law Review 2019, vol. 53, 346.
blameworthy by a large part of the population, punishing AI may also be useful to ensure that
criminal law does not lose its legitimacy.262 However, as should be clear by now, AI agents
today cannot be considered capable of bearing moral blameworthiness or engaging in culpable
conduct. Hence, any such official condemnation by the state would not be grounded in reality but
would instead be based on a fiction.

This ties in with a second function of criminal law, namely retribution. Retributivist theory seeks to
justify punishment by arguing that agents should be punished for their wrongdoing because they
deserve to be punished.263 Such punishment should be proportionate to the level of wrongdoing
and, again, is only appropriate when the agent is morally culpable.264 Leaving aside the question of
whether retributivism actually justifies punishment, which is the subject of debate,265 in the context
of AI some additional problems arise beyond the recurring problem of AI lacking moral
culpability. A first additional problem is that AI agents cannot truly be punished, as they do not
experience anything as painful or unpleasant, although this may (perhaps debatably) be resolved by
arguing that punishment only requires that interests are objectively set back, independent of their
subjective perception by the one being punished.266 Another problem is the potential for a so-called
“retribution gap”. If humans were to punish AI agents as a result of an innate desire for retribution
while AI is not an appropriate subject of retributive blame, such a gap may come into existence.267
This may lead to moral scapegoating and may be at odds with the rule of law.268 However, a recent
survey showed that, at least currently, people do not even seem to think that punishing AI agents
would satisfy any demand for retribution.269 Nevertheless, there seems to be a “punishment gap”:
people do not ascribe any mental states to AI, nor do they believe that punishment of AI would
achieve its retributive and deterrent functions (the latter will be discussed next), yet they are
still willing to punish AI agents.270 This may be the result of an innate desire, not only for retribution,
but for revenge. Revenge does not only revolve around what a wrongdoer deserves but also involves

262 Ibid., 347.
263 J. DANAHER, "Robots, Law and the Retribution Gap", Ethics and Information Technology 2016, vol. 18, 301-302.
264 Ibid., 302.
265 R. ABBOTT and A. SARCH, “Punishing Artificial Intelligence: Legal Fiction or Science Fiction”, UC Davis Law Review 2019, vol. 53, 341.
266 Ibid., 364-365.
267 J. DANAHER, "Robots, Law and the Retribution Gap", Ethics and Information Technology 2016, vol. 18, 299-309.
268 Ibid., 307-308.
269 G. LIMA, M. CHA, C. JEON and K. PARK, The Punishment Gap: The Infeasible Public Attribution of Punishment to AI and Robots, 2020, https://arxiv.org/abs/2003.06507.
270 Ibid.
the personal desires of victims.271 In this respect, Christina Mulligan argues that vengeful responses
to AI agents that have committed crimes may result in significant psychological satisfaction for victims,272
and that this is justified precisely because AI does not experience pain or shame.273 Yet, seeing as
retributivism as a justification of punishment is already the subject of debate,274 justifications based
on revenge should be even more controversial.

A third aim of criminal law is to reduce crime rates by means of deterrence. Deterrence theory
claims that threats of punishment for wrongful conduct, made credible by the consequent actual
imposition of punishment, will decrease the incidence of such conduct.275 Deterrence can be either
specific or general. In the case of specific deterrence, the punishment of an offender is meant to
deter him from reoffending later, whereas general deterrence aims to punish an offender in order
to deter other potential wrongdoers from offending.276 Specific deterrence for AI agents, while
conceivable in theory, is very unlikely to be possible currently, as this would require the AI to be able to
detect and respond to the imposition of criminal sanctions on itself.277 In principle, this may be
possible through machine learning, but more than likely current AI agents would have to be
specifically programmed to be able to do this. This would be a very convoluted way of achieving a
deterrent effect, and such effort would be better spent on directly programming the AI to refrain
from committing the crime in the first place.278 Punishing AI agents could, nevertheless, result in
general deterrence, not towards other AI agents but towards human perpetrators such as users and
programmers.279 It should be noted, however, that the question of whether deterrence actually
works is also a very controversial topic in criminology.280

One last function of criminal law that will be highlighted is the rehabilitation of the offender. From a
rehabilitation perspective, punishment is meant to reduce crime “by reducing offenders’ inclinations

271 C. MULLIGAN, "Revenge Against Robots", South Carolina Law Review 2018, vol. 69, 582.
272 Ibid., 579-595.
273 Ibid., 592.
274 R. ABBOTT and A. SARCH, “Punishing Artificial Intelligence: Legal Fiction or Science Fiction”, UC Davis Law Review 2019, vol. 53, 341.
275 M.N. BERMAN, "The justification of Punishment" in A. MARMOR (ed.), The Routledge Companion to Philosophy of Law, New York, Routledge, 2012, 145.
276 Ibid.
277 R. ABBOTT and A. SARCH, “Punishing Artificial Intelligence: Legal Fiction or Science Fiction”, UC Davis Law Review 2019, vol. 53, 345.
278 Y. HU, “Robot Criminals”, University of Michigan Journal of Law Reform 2019, vol. 52, 507.
279 R. ABBOTT and A. SARCH, “Punishing Artificial Intelligence: Legal Fiction or Science Fiction”, UC Davis Law Review 2019, vol. 53, 345.
280 M. SIMMLER and N. MARKWALDER, "Guilty robots? – Rethinking the Nature of Culpability and Legal Personhood in an Age of Artificial Intelligence", Criminal Law Forum 2019, vol. 30, 28.
to offend, by reforming their character, strengthening their attachment to moral or legal norms and
even by providing them with job-related skills”.281 Applied to AI, rehabilitation could be interpreted
as an AI agent reprogramming itself through machine learning, because of the imposition of criminal
sanctions, in order to change its “character” and further embed moral or legal norms. In this respect, the
aforementioned survey showed that most people hold positive attitudes towards the potential
of AI agents to be reformed (whereas they held negative attitudes towards punishing AI for
retributive and deterrent aims).282 This seems closely related to the ability to benefit from specific
deterrence, and again it can be argued that, while possible in theory, this idea is
simply not grounded in reality today. In other words, “it is not impossible—though
certainly unneeded, outlandish, and merely speculative under the present circumstances—that
advanced AI systems might be subject to criminal sanctions, under a deterrence and rehabilitative
rationale”.283

281 M.N. BERMAN, "The justification of Punishment" in A. MARMOR (ed.), The Routledge Companion to Philosophy of Law, New York, Routledge, 2012, 145.
282 G. LIMA, M. CHA, C. JEON and K. PARK, The Punishment Gap: The Infeasible Public Attribution of Punishment to AI and Robots, 2020, https://arxiv.org/abs/2003.06507.
283 F. LAGIOIA and G. SARTOR, "AI Systems Under Criminal Law: a Legal Analysis and a Regulatory Perspective", Philosophy & Technology 2020, vol. 33, ch 8.4.
6 Conclusion

Crimes involving AI are already very relevant today, and the importance of these crimes will only
increase in the future. But it turns out that the involvement of AI severely complicates the
attribution of criminal liability when such crimes are committed.

First, when an individual uses an AI system or device as a means to an end to intentionally commit
a crime, not many problems occur, and criminal law does more or less have the appropriate means
to address this situation. When the AI cannot be considered as having any significant measure of
intelligence and autonomy, the AI is best seen as a mere tool in the hands of the real human
perpetrator operating behind the curtain. In such cases, the AI can be equated to inanimate objects
such as a hammer or knife. However, when the AI does have a significant measure of intelligence
and autonomy, equating the AI to a mere inanimate tool is no longer appropriate. Currently, the
most advanced AI systems are best considered as semi-intelligent: somewhere between a mere tool
and a fully intelligent, rational being. Here, there are two options. Parallels can either be drawn to
an animal's master who manipulates the animal into committing a crime, or an appeal can be made
to the doctrine of innocent agency, whereby criminal liability can be imposed upon someone who
acts through an agent who does not satisfy the mens rea requirement. Further, although strong AI
will likely remain science fiction for at least the next few decades, it is plausible that one day AI will
be capable of general-purpose reasoning. Then the AI could be considered as being fully intelligent,
equalling or even surpassing the level of intelligence of human beings. It is difficult to imagine what
strong AI would look like in practice and how it should be regulated, but it is already interesting to
speculate about this. It is likely that using AI instrumentally would then lose much of its
relevance, in the same way that it is difficult to use another fully capable human being
instrumentally. But even then, criminal law may still have adequate means to address this
situation, either in the form of the doctrine of innocent agency or by using some kind
of vicarious liability model rooted in respondeat superior.

Second, AI can also commit crimes while this was never the intention of the user or programmer.
This can be the result of the large degree of autonomy, unpredictability, and self-learning capacities
of AI. In such cases, an appeal must be made to criminal negligence instead. But here, significant
problems do occur. First, the duty of care that is applicable in this context remains very unclear.
There are no legal norms and barely any industry standards that are relevant. As a result, judges
must fall back on the general principle: the behaviour of the reasonable person, who uses
reasonable care and does not take unreasonable risks. But the potential complexity,
unpredictability, and lack of transparency of AI systems can all significantly complicate this.
Moreover, judges have little to no expertise in the area of programming and the field of AI. Without
proper legal or industry standards or a necessarily lengthy and expensive appeal to a panel of
experts, judges do not seem adequately qualified to make this assessment when cases are not clear-
cut. Furthermore, determining the standard of care is also a matter better suited for
policymakers, as a delicate balance must be struck between the interests of various parties, and
between promoting technological innovation on the one hand and protecting society from crime on
the other. Another issue is that criminal negligence requires that the risks that were taken must
have been foreseeable by the reasonable person, but the characteristics of AI can make it very hard
to foresee the risks of AI systems. Moreover, when recklessness is applicable instead of negligence,
judges would also have to assess the probability of risks materializing. Again, all of this seems
very difficult and hardly feasible.

Third, an interesting possibility would be to hold AI agents themselves criminally liable for the crimes
they commit. An obstacle that currently precludes this is the lack of legal personality of AI. But
seeing as legal personality is detached from any notion of personhood and ultimately means no
more than being granted rights and obligations by the law, legal personality could easily be granted
to AI for the purposes of criminal law. However, various problems arise when trying to apply the
current criminal law theory to AI agents. First, if the additional condition is adopted that acts must
be voluntary for the actus reus requirement to be met, AI cannot meet this requirement.
Voluntariness requires free will, which current AI agents simply do not have. Second, AI agents today
cannot meet the mens rea requirement in its general sense as blameworthiness, as this requires the
ability to make moral judgments and to make free choices. Third, mens rea also poses problems in
its special sense as mental states required by specific offences. On a deeper level, AI currently
cannot be considered as having any true reasons, motives, and desires that explain its conduct.
Hence, AI cannot have any real intentions but at best only imitations of intentions. AI also cannot
meet the requirements of negligence and recklessness, as this requires the capacity to act
differently as well as the capability to form awareness of risks. Fourth, various problems also arise
in the context of prosecuting and punishing AI. Prosecuting AI is just not possible in practice today
and would require significant legal changes. Currently existing criminal sanctions can also not
adequately be imposed on AI. Moreover, punishing AI would not even serve many of the functions
of criminal law.

7 Bibliography

7.1 Legal and policy documents

7.1.1 EU

Report (JURI) with recommendations to the Commission on Civil Law Rules on Robotics, 27 January
2017, (2015/2103(INL)).

Resolution (EP) with recommendations to the Commission on Civil Law Rules on Robotics, 16
February 2017, (2015/2103(INL)).

Opinion (EESC). Artificial intelligence: The consequences of artificial intelligence on the (digital)
single market, production, consumption, employment and society, 31 August 2017, 2017/C 288/01.

Communication (Comm.). Artificial Intelligence for Europe, 25 April 2018, COM(2018)237 final.

Report (AI HLEG). A definition of Artificial Intelligence: main capabilities and scientific disciplines, 8
April 2019, https://ec.europa.eu/digital-single-market/en/news/definition-artificial-intelligence-
main-capabilities-and-scientific-disciplines.

Report (AI HLEG). Ethics Guidelines for trustworthy AI, 8 April 2019, https://ec.europa.eu/digital-
single-market/en/news/ethics-guidelines-trustworthy-ai.

Report (DG JUST). Liability for Artificial Intelligence and other emerging digital technologies, 27
November 2019, https://op.europa.eu/en/publication-detail/-/publication/1c5e30be-1197-11ea-
8c1f-01aa75ed71a1/language-en.

Report (Europol). Malicious Uses and Abuses of Artificial Intelligence, 19 November 2020,
https://www.europol.europa.eu/publications-documents/malicious-uses-and-abuses-of-artificial-
intelligence.

7.1.2 Other

AMERICAN LAW INSTITUTE, Model Penal Code: Official Draft and Explanatory Notes, 1984,
https://archive.org/details/ModelPenalCode_ALI/mode/2up, art. 2.01 and 2.02.

OECD ACN, Liability of Legal Persons for Corruption in Eastern Europe and Central Asia, 2015,
https://www.oecd.org/corruption/acn-liability-of-legal-persons-2015.pdf.
7.2 Doctrine

7.2.1 Books and contributions to books

ASARO, P. M., “The Liability Problem for Autonomous Artificial Agents” in AAAI, AAAI Spring
Symposium, Palo Alto, The AAAI Press, 2016, 190-194.

ASHWORTH, A. and HORDER, J., Principles of Criminal Law, Oxford, Oxford University Press, 2013,
510 p.

BENTHAM, J., An Introduction to the Principles of Morals and Legislation, Kitchener, Batoche Books,
2000, 248 p.

BERMAN, M.N., "The justification of Punishment" in A. MARMOR (ed.), The Routledge Companion
to Philosophy of Law, New York, Routledge, 2012, 141-156.

BIKEEV, I., KABANOV, P., BEGISHEV, I. and KHISAMOVA, Z., “Criminological Risks and Legal Aspects
of Artificial Intelligence Implementation” in ASSOCIATION FOR COMPUTING MACHINERY,
Proceedings of the International Conference on Artificial Intelligence, Information Processing and
Cloud Computing (AIIPCC '19), New York, Association for Computing Machinery, 2019, 1-7, nr. 20.

DOCHERTY, B., Mind the Gap: The Lack of Accountability for Killer Robots, New York, Human Rights
Watch, 2015, 38 p.

DSOUZA, M., "Don't panic: artificial intelligence and Criminal Law" in BAKER, D.J. and ROBINSON,
P.H. (eds.), Artificial Intelligence and the Law: Cybercrime and Criminal Liability, Oxford, Routledge,
2021, 247-264.

DUFF, R.A., Answering for Crime: Responsibility and Liability in the Criminal Law, Oxford, Hart
Publishing, 2007, 322 p.

EL NAQA, I. and MURPHY, M.J., "What Is Machine Learning?" in EL NAQA, I., LI, R. and MURPHY, M.J.
(eds.), Machine Learning in Radiation Oncology: Theory and Applications, Cham, Springer
International Publishing, 2015, 3-11.

FREITAS, P.M., ANDRADE, F. and NOVAIS, P., "Criminal Liability of Autonomous Agents: from the
Unthinkable to the Plausible" in CASANOVAS, P., PAGALLO, U., PALMIRANI, M. and SARTOR, G., AI
Approaches to the Complexity of Legal Systems: AICOL 2013 International Workshops, AICOL-
IV@IVR, Belo Horizonte, Brazil, July 21-27, 2013 and AICOL-V@SINTELNET-JURIX, Bologna, Italy,
December 11, 2013, Revised Selected Papers, Berlin Heidelberg, Springer, 2014, 145-156.

GRUODYTE, E. and CERKA, P., "Artificial Intelligence as a Subject of Criminal Law: A Corporate
Liability Model Perspective" in J.-S. GORDON (ed.), Smart Technologies and Fundamental Rights,
Rodopi, Brill, 2020, 260-281.

HALLEVY, G., Liability for Crimes Involving Artificial Intelligence Systems, Cham, Springer, 2015, 257
p.

KINGSTON, J.K.C., "Artificial Intelligence and Legal Liability" in BRAMER, M. and PETRIDIS, M. (eds.),
Research and Development in Intelligent Systems XXXIII, s.l., Springer, 2016, 272.

RUSSELL, S.J. and NORVIG, P., Artificial intelligence: a modern approach, Upper Saddle River,
Pearson Education, 2010, 1132 p.

TADROS, V., Criminal Responsibility, Oxford, Oxford University Press, 2005, 389 p.

WANG, P., “What Do You Mean by ‘AI’?” in WANG, P., GOERTZEL, B. and FRANKLIN, S. (eds.),
Artificial General Intelligence 2008: Proceedings of the First AGI Conference, Amsterdam, IOS Press,
2008, 362-373.

YUDKOWSKY, E., “Artificial Intelligence as a Positive and a Negative Factor in Global Risk” in
BOSTROM, N. and CIRKOVIC, M.M. (eds.), Global and Catastrophic Risks, New York, Oxford
University Press, 2008, 308-345.

7.2.2 Articles in journals

ABBOTT, R. and SARCH, A., “Punishing Artificial Intelligence: Legal Fiction or Science Fiction”, UC
Davis Law Review 2019, vol. 53, 323-384.

ALLDRIDGE, P., "The Doctrine of Innocent Agency", Criminal Law Forum 1990, vol. 2, 45-83.

ASARO, P.M., "Robots and Responsibility from a Legal Perspective", Proceedings of the IEEE 2007,
vol. 4(14), 20-24.

BATHAEE, Y., “The Artificial Intelligence Black Box and the Failure of Intent and Causation”, Harvard
Journal of Law & Technology 2018, vol. 31(2), 889-938.

BECK, S., "Intelligent agents and criminal law – Negligence, diffusion of liability and electronic
personhood", Robotics and Autonomous Systems 2016, vol. 86, 138-143.

BRYSON, J.J., DIAMANTIS, M.E. and GRANT, T.D., "Of, for, and by the people: the legal lacuna of
synthetic persons", Artificial Intelligence and Law 2017, vol. 25, 273-291.

CERKA, P., GRIGIENE, J. and SIRBIKYTE, G., "Liability for damages caused by artificial intelligence",
Computer Law & Security Review 2015, vol. 31, 376-389.

CERKA, P., GRIGIENE, J. and SIRBIKYTE, G., "Is it possible to grant legal personality to artificial
intelligence software systems?", Computer Law & Security Review 2017, vol. 33(5), 685-699.

CHARNEY, R., "Can Androids Plead Automatism – A Review of When Robots Kill: Artificial Intelligence
under the Criminal Law by Gabriel Hallevy", University of Toronto Faculty of Law Review 2015, vol.
73(1), 69-72.

CHESTERMAN, S., "Artificial intelligence and the limits of legal personality", International and
Comparative Law Quarterly 2020, vol. 69(4), 819-844.

DANAHER, J., "Robots, Law and the Retribution Gap", Ethics and Information Technology 2016, vol.
18, 299-309.

DREMLIUGA, R., KUZNETCOV, P. and MAMYCHEV, A., "Criteria for Recognition of AI as a Legal
Person", Journal of Politics and Law 2019, vol. 12(3), 105-112.

DREMLIUGA, R. and PRISEKINA, N., "The Concept of Culpability in Criminal Law and AI Systems",
Journal of Politics and Law 2020, vol. 13(3), 256-262.

GLESS, S., SILVERMAN, E. and WEIGEND, T., "If Robots Cause Harm, Who Is to Blame? Self-Driving
Cars and Criminal Liability", New Criminal Law Review 2016, vol. 19(3), 412-436.

KUGLER, I., "The definition of Oblique Intention", The Journal of Criminal Law 2004, vol. 68(1), 79-
83.

LIMA, D., "Could AI Agents Be Held Criminally Liable? Artificial Intelligence and the Challenges for
Criminal Law", South Carolina Law Review 2018, vol. 69(3), 677-696.

HALLEVY, G., “Unmanned Vehicles: Subordination to Criminal Law under the Modern Concept of
Criminal Liability”, Journal of Law & Information Science 2011, vol. 21(2), 200-211.

HALLEVY, G., "The Criminal Liability of Artificial Intelligence Entities – from Science Fiction to Legal
Social Control", Akron Intellectual Property Journal 2016, vol. 4(2), 171-201.

HAYWARD, K.J. and MAAS, M.M., “Artificial intelligence and crime: A primer for criminologists”,
Crime, Media, Culture 2020, vol. 00(0),
https://journals.sagepub.com/doi/abs/10.1177/1741659020917434?journalCode=cmca, 1-25.

HILDEBRANDT, M., “Ambient Intelligence, Criminal Liability and Democracy”, Criminal Law and
Philosophy 2008, vol. 2, 163-180.

HU, Y., “Robot Criminals”, University of Michigan Journal of Law Reform 2019, vol. 52, 487-531.

KING, T.C., AGGARWAL, N., TADDEO, M. and FLORIDI, L., "Artificial Intelligence Crime: An
Interdisciplinary Analysis of Foreseeable Threats and Solutions", Science and Engineering Ethics
2020, vol. 26, 89-120.

LAGIOIA, F. and SARTOR, G., "AI Systems Under Criminal Law: a Legal Analysis and a Regulatory
Perspective", Philosophy & Technology 2020, vol. 33, 433-465.

LEGG, S. and HUTTER, M., “A collection of Definitions of Intelligence”, Frontiers in Artificial Intelligence and Applications 2007, vol. 157, 17-24.

MCCARTHY, J., MINSKY, M.L., ROCHESTER, N. and SHANNON, C.E., “A Proposal for the Dartmouth
Summer Research Project on Artificial Intelligence”, AI Magazine 2006, vol. 27(4), 12-14.

MOORE GEIST, E., “It’s already too late to stop the AI arms race–We must manage it instead”,
Bulletin of the Atomic Scientists 2016, vol. 72(5), 318-321.

MORKEL, D.W., “On the Distinction between Recklessness and Conscious Negligence”, The
American Journal of Comparative Law 1982, vol. 30(2), 325-333.

MULLIGAN, C., "Revenge Against Robots", South Carolina Law Review 2018, vol. 69, 579-595.

OSMANI, N., "The Complexity of Criminal Liability of AI Systems", Masaryk University Journal of Law
and Technology 2020, vol. 14(1), 53-82.

PAGALLO, U., "When Morals Ain't Enough: Robots, Ethics, and the Rules of the Law", Minds and
Machines 2017, vol. 27, 625-638.

RAJARAMAN, V., “John McCarthy – Father of Artificial Intelligence”, Resonance 2014, vol. 19, 198-
207.
REITINGER, N., “Algorithmic Choice and Superior Responsibility: Closing the Gap Between Liability
and Lethal Autonomy by Defining the Line Between Actors and Tools”, Gonzaga Law Review 2015,
vol. 51(1), 79-119.

SEARLE, J.R., "Minds, brains, and programs", Behavioral and Brain Sciences 1980, vol. 3(3), 417-424.

SIMMLER, M. and MARKWALDER, N., "Guilty robots? – Rethinking the Nature of Culpability and
Legal Personhood in an Age of Artificial Intelligence", Criminal Law Forum 2019, vol. 30, 1-31.

SOLAIMAN, S.M., "Legal personality of robots, corporations, idols and chimpanzees: a quest for
legitimacy", Artificial Intelligence and Law 2017, vol. 25, 155-179.

STEVANOVIC, A. and PAVLOVIC, Z., "Concept, Criminal Legal Aspects of the Artificial Intelligence and
Its Role in Crime Control", Journal of Eastern-European Criminal Law 2018, vol. 2018(2), 31-45.

TURING, A.M., "Computing Machinery and Intelligence", Mind 1950, vol. 59(236), 433-460.

VERESHA, R., "Criminal and legal characteristics of criminal intent", Journal of Financial Crime 2017,
vol. 24(1), 118-128.

WENG, Y.-H., CHEN, C.-H. and SUN, C.-T., "Toward the Human–Robot Co-Existence Society: On
Safety Intelligence for Next Generation Robots", International Journal of Social Robotics 2009, vol.
1, 267-282.

7.2.3 Internet sources

BUCHANAN, B. and MILLER, T., Machine Learning for Policymakers: What It Is and Why It Matters,
Belfer Center for Science and International Affairs, 2017,
https://www.belfercenter.org/publication/machine-learning-policymakers.

BUCHANAN, B.G., Artificial intelligence in finance, 2019, https://zenodo.org/record/2612537.

GUNDERSON, J.P. and GUNDERSON, L.F., Intelligence =/= Autonomy =/= Capability, 2004,
https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.78.8279&rep=rep1&type=pdf.

KAYITANA, E., The Form of Intention Known as Dolus Eventualis in Criminal Law, 2008,
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1191502.

LIMA, G., CHA, M., JEON, C. and PARK, K., The Punishment Gap: The Infeasible Public Attribution of
Punishment to AI and Robots, 2020, https://arxiv.org/abs/2003.06507.

MARTINEZ-MIRANDA, E., MCBURNEY, P. and HOWARD, M.J., Learning Unfair Trading: a Market
Manipulation Analysis From the Reinforcement Learning Perspective, 2020,
https://arxiv.org/abs/1511.00740.

SCHARRE, P. and HOROWITZ, M.C., Artificial Intelligence: What Every Policymaker Needs to Know,
Center for a New American Security, 2018, https://www.cnas.org/publications/reports/artificial-
intelligence-what-every-policymaker-needs-to-know.

X, Open letter to the European Commission: Artificial Intelligence and Robotics, http://www.robotics-openletter.eu/.

7.3 News sources

CELLAN-JONES, R., “Uber's self-driving operator charged over fatal crash”, BBC News 16 September
2020, https://www.bbc.com/news/technology-54175359.

KASPERKEVIC, J., “Swiss police release robot that bought ecstasy online”, The Guardian 22 April
2015, https://www.theguardian.com/world/2015/apr/22/swiss-police-release-robot-random-
darknet-shopper-ecstasy-deep-web.

LEE, T.B., “Safety driver in 2018 Uber crash is charged with negligent homicide”, Ars Technica 16
September 2020, https://arstechnica.com/cars/2020/09/arizona-prosecutes-uber-safety-driver-
but-not-uber-for-fatal-2018-crash/.

LIETZEN, I., "Robots and artificial intelligence: MEPs call for EU-wide liability rules", Press room (EP)
16 February 2017, https://www.europarl.europa.eu/news/en/press-
room/20170210IPR61808/robots-and-artificial-intelligence-meps-call-for-eu-wide-liability-rules.

MARSHALL, A., “Why Wasn't Uber Charged in a Fatal Self-Driving Car Crash?”, Wired 17 September
2020, https://www.wired.com/story/why-not-uber-charged-fatal-self-driving-car-crash/.

NEUMAN, S., “Uber Reaches Settlement With Family Of Arizona Woman Killed By Driverless Car”, National Public Radio 29 March 2018, https://www.npr.org/sections/thetwo-way/2018/03/29/597850303/uber-reaches-settlement-with-family-of-arizona-woman-killed-by-driverless-car.

PAVIA, W., “Driverless Uber car ‘not to blame’ for woman’s death”, The Times 21 March 2018,
https://www.thetimes.co.uk/article/driverless-uber-car-not-to-blame-for-woman-s-death-
klkbt7vf0.

POWER, M., “What happens when a software bot goes on a darknet shopping spree?”, The Guardian
5 December 2014, https://www.theguardian.com/technology/2014/dec/05/software-bot-darknet-
shopping-spree-random-shopper.

ROUSSEAU, B., "In New Zealand, Lands and Rivers Can Be People (Legally Speaking)”, The New York
Times 13 July 2016, https://www.nytimes.com/2016/07/14/world/what-in-the-world/in-new-
zealand-lands-and-rivers-can-be-people-legally-speaking.html.

SAFI, M., “Ganges and Yamuna rivers granted same legal rights as human beings”, The Guardian 21
March 2017, https://www.theguardian.com/world/2017/mar/21/ganges-and-yamuna-rivers-
granted-same-legal-rights-as-human-beings.

SCHMELZER, R., “What Happens When Self-Driving Cars Kill People?”, Forbes 26 September 2019,
https://www.forbes.com/sites/cognitiveworld/2019/09/26/what-happens-with-self-driving-cars-
kill-people/?sh=7077c435405c.

VICTOR, D., "Microsoft Created a Twitter Bot to Learn From Users. It Quickly Became a Racist Jerk.",
The New York Times 24 March 2016, https://www.nytimes.com/2016/03/25/technology/microsoft-
created-a-twitter-bot-to-learn-from-users-it-quickly-became-a-racist-jerk.html.

WAKABAYASHI, D., “Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam”, The New
York Times 19 March 2018, https://www.nytimes.com/2018/03/19/technology/uber-driverless-
fatality.html.

