
Khadija Khan

BALLB II YEAR
Roll no. 37
Criminal law

Topic: Criminal Liability of Artificial Intelligence

Submitted to: Dr. Owais Farooqui


Acknowledgement
I would like to express my special thanks of gratitude to my teacher, Dr. Owais Farooqui, who gave
me the golden opportunity to do this project on the topic Criminal Liability of Artificial
Intelligence, which also helped me do a great deal of research and learn about many new things. I
am really thankful to him.

Secondly, I would also like to thank my parents and friends who helped me a lot in finalizing this
project within the limited time frame.
Contents
• Introduction
• Legal issues of artificial intelligence
• Essentials for the imposition of criminal liability on artificial intelligence
• Three models of criminal liability of artificial intelligence
• Coordination of the three liability models
• Conclusion
Introduction
Artificial intelligence (AI) may play an increasingly essential role in criminal acts in the future.
Criminal acts are defined here as any act or omission constituting an offence punishable under
English criminal law, without loss of generality to jurisdictions that define crime similarly.
Evidence of "AI crime" (AIC) is provided by two theoretical research experiments. In the first
one, two computational social scientists (Seymour and Tully 2016) used AI as an instrument to
convince social media users to click on phishing links in mass-produced messages. Because
each message was constructed using machine learning techniques applied to users' past
behaviours and public profiles, the content was tailored to each individual, thus camouflaging
the intention behind each message. If the potential victim had clicked on the phishing link and
filled in the subsequent web form, then (in real-world circumstances) a criminal would have
obtained personal and private information that could be used for theft and fraud. 1
The importance of AIC as a distinct phenomenon has not yet been acknowledged. The literature
on AI’s ethical and social implications focuses on regulating and controlling AI’s civil uses,
rather than considering its possible role in crime (Kerr 2004). Furthermore, the AIC research that
is available is scattered across disciplines, including socio-legal studies, computer science,
psychology, and robotics, to name just a few. This lack of research centered on AIC undermines
the scope for both projections and solutions in this new area of potential criminal activity.
The technological world is changing rapidly. Robots and computers are replacing more and more
simple human activities. As long as humanity used computers as mere tools, there was no real
difference between computers and screwdrivers, cars, or telephones. When computers became
sophisticated, we used to say that computers "think" for us. The problem began when computers
evolved from "thinking" machines (machines that were programmed to perform defined thought
processes/computing) into thinking machines (without quotation marks), or Artificial Intelligence
(AI). AI is the capability of a machine to imitate intelligent behavior. AI is the simulation of
human behavior and cognitive processes on a computer and hence is the study of the nature of the
whole space of intelligent minds. AI research began in the 1940s and early 1950s. 2 Since then, AI
entities have become an integral part of modern human life, functioning in far more
sophisticated ways than other everyday tools. Could they become dangerous? In fact, they already are, as
the experiment described above attests. In 1950, Isaac Asimov set down three fundamental laws of robotics in
his science fiction masterpiece I, Robot: (1) a robot may not injure a human being or, through
inaction, allow a human being to come to harm; (2) a robot must obey the orders given to it by
human beings, except where such orders would conflict with the First Law; (3) a robot must protect its
own existence, as long as such protection does not conflict with the First or Second Laws.

1 Greenblatt, N.A.: Self-Driving Cars and the Law. IEEE Spectrum, p. 42 (16 February 2016).
2 Dobbs, D.B.: Law of Torts. West Academic Publishing (2008).
Legal Issues of Artificial Intelligence.
The legal problems are deeper still, especially in the case of robots. A system that learns from
information it receives from the outside world can act in ways that its creators could not have
predicted, and predictability is crucial to modern legal approaches. What is more, such systems
can operate independently of their creators or operators, thus complicating the task of
determining responsibility. These characteristics pose problems of predictability and of
independent action by entities that cannot themselves readily be held responsible.

There are numerous options in terms of regulation, including regulation that is based on existing
norms and standards. For example, technologies that use artificial intelligence can be regulated as
items subject to copyright or as property. Difficulties arise here, however, if we take into account
the ability of such technologies to act autonomously, against the will of their creators, owners or
proprietors.
Essentials for the Imposition of Criminal Liability on Artificial Intelligence.
The basic question of criminal law is the question of criminal liability: whether a specific entity
(a human) bears criminal liability for a specific offense committed at a specific point in time and
space. In order to impose criminal liability upon a person, two main elements must exist. The first
is the external or factual element, the criminal conduct (actus reus), while the other is the internal or
mental element, knowledge or general intent vis-à-vis the conduct element (mens rea). If either
element is missing, no criminal liability can be imposed.
Gabriel Hallevy 3 discusses how, and whether, artificially intelligent entities might be held
criminally liable. Criminal laws normally require both an actus reus (an action) and a mens rea (a
mental intent), and Hallevy helpfully classifies laws as follows:

1. Those where the actus reus consists of an action, and those where the actus reus consists of a
failure to act.

2. Those where the mens rea requires knowledge or being informed; those where the mens rea
requires only negligence (“a reasonable person would have known”); and strict liability offences,
for which no mens rea needs to be demonstrated.

No other criteria or capabilities are required in order to impose criminal liability, neither on
humans nor on any other kind of entity, including corporations. In order to impose criminal
liability on any kind of entity, it must be proven that both of the above elements existed.

3 Hallevy, G.: The Criminal Liability of Artificial Intelligence Entities. http://ssrn.com/abstract=1564096 (15 February
2010).
Three Models of Criminal Liability of Artificial Intelligence.
Hallevy goes on to propose three legal models by which offences committed by AI systems might
be considered:

1. Perpetrator-via-another.


If an offence is committed by a mentally deficient person, a child or an animal, then the
perpetrator is held to be an innocent agent, because they lack the mental capacity to form a mens
rea (this is true even for strict liability offences). 4 However, if the innocent agent was instructed
by another person (for example, if the owner of a dog instructed his dog to attack somebody),
then the instructor is held criminally liable, as US case law shows. According to this model, an AI
program could be held to be an innocent agent, with either the software programmer or the user
being held to be the perpetrator-via-another.

2. Natural-probable-consequence.
In this model, part of an AI program that was intended for good purposes is activated
inappropriately and performs a criminal action. Hallevy gives an example in which
a Japanese employee of a motorcycle factory was killed by an artificially intelligent robot
working near him. The robot erroneously identified the employee as a threat to its mission, and
calculated that the most efficient way to eliminate this threat was by pushing him into an adjacent
operating machine. Using its very powerful hydraulic arm, the robot smashed the surprised
worker into the machine, killing him instantly, and then resumed its duties. The normal legal use
of "natural or probable consequence" liability is to prosecute accomplices to a crime. If no
conspiracy can be demonstrated, it is still possible (in US law) to find an accomplice legally
liable if the criminal acts of the perpetrator were a natural or probable consequence of a scheme
that the accomplice encouraged or aided, as long as the accomplice was aware that some criminal
scheme was under way. So users or (more probably) programmers might be held legally liable if
they knew that a criminal offence was a natural, probable consequence of their programs or of
their use of an application. The application of this principle must, however, distinguish between
AI programs that 'know' that a criminal scheme is under way (i.e. they have been programmed to
perform a criminal scheme) and those that do not (they were programmed for another purpose).
It may well be that crimes where the mens rea requires knowledge cannot be prosecuted for the
latter group of programs (but those with a 'reasonable person' mens rea, or strict liability
offences, can). 5

4 The general punishment adjustment considerations are discussed hereinafter at Part IV.

3. Direct liability.
This model attributes both actus reus and mens rea to an AI system. It is relatively simple to
attribute an actus reus to an AI system: if a system takes an action that results in a criminal act, or
fails to take an action when there is a duty to act, then the actus reus of an offence has occurred.
Assigning a mens rea is much harder, and so it is here that the three levels of mens rea become
important. For strict liability offences, where no intent to commit an offence is required, it may
indeed be possible to hold AI programs criminally liable. Consider the example of self-driving
cars: speeding is a strict liability offence, so according to Hallevy, if a self-driving car were found
to be breaking the speed limit for the road it was on, the law may well assign criminal liability to
the AI program that was driving the car at that time. This possibility raises a number of other
issues that Hallevy touches on, including defences (could a malfunctioning program
claim a defence similar to the human defence of insanity? Or, if it is affected by an electronic
virus, could it claim defences similar to coercion or intoxication?) and punishment (who or what
would be punished for an offence for which an AI system was directly liable?).

5 Hallevy: The Criminal Liability of Artificial Intelligence Entities.


Coordination of Three Liability Models.
The three liability models described above are not alternative models. They might be
applied in combination in order to create a full picture of criminal liability in the specific context
of AI entity involvement. None of the three models is mutually exclusive. Thus, the second model
may be applied as a single model for a specific offense, or as one part of a combination of two of
the legal models, or of all three of them.

When the AI entity plays the role of an innocent agent in the perpetration of a specific offense,
and the programmer is the only person who directed that perpetration, the perpetration-via-another
model (the first liability model) is the most appropriate legal model for that situation. In that same
situation, when the programmer is itself an AI entity (when an AI entity programs another AI
entity to commit a specific offense), the direct liability model (the third liability model) is most
appropriately applied to the criminal liability of the programmer of the AI entity. The third
liability model in that situation is applied in addition to the first liability model, and not in lieu
thereof. Thus, in such situations, the AI entity programmer shall be criminally liable pursuant to
a combination of the perpetration-via-another liability model and the direct liability model.

If the AI entity plays the role of the physical perpetrator of the specific offense, but that very offense
was not planned, then the application of the natural-probable-consequence liability model might
be appropriate. The programmer might be deemed negligent if no offense had been perpetrated
intentionally, or the programmer might be held fully accountable for that specific offense if
another offense had indeed been deliberately planned, but the specific offense that was
perpetrated had not been part of the original criminal scheme. Nevertheless, when the
programmer is not human, the direct liability model must be applied in addition to the
simultaneous application of the natural-probable-consequence liability model; the same is true when the
physical perpetrator is human and the planner is an AI entity.

The coordination of all three liability models creates an opaque net of criminal liability. The combined
and coordinated application of these three models reveals a new legal situation in the specific context
of AI entities and criminal law. As a result, when AI entities and humans are involved, directly or
indirectly, in the perpetration of a specific offense, it will be far more difficult to evade criminal
liability. The social benefit derived from such a legal policy is of substantial value. All entities,
whether human, legal, or AI, become subject to criminal law. AI entities have no soul, 6 but if the clearest
purpose of the imposition of criminal liability is the application of legal social control in the
specific society, then the coordinated application of all three models is necessary in the very
context of AI entities.

6 Solum, supra note 69, at 1262.


Conclusion.
It has been established that the legal liability of AI systems depends on at least three factors:

1. Whether AI is a product or a service. This is ill-defined in law; different commentators offer different
views.

2. If a criminal offence is being considered, what mens rea is required. It seems unlikely that AI programs
will contravene laws that require knowledge that a criminal act was being committed; but it is very
possible they might contravene laws for which ‘a reasonable man would have known’ that a course of
action could lead to an offence, and it is almost certain that they could contravene strict liability offences.

3. Whether the limitations of AI systems are communicated to a purchaser. Since AI systems have both
general and specific limitations, legal cases on such issues may well be based on the specific wording of
any warnings about such limitations.

There is also the question of who should be held liable. It will depend on which of Hallevy's three models
applies (perpetrator-via-another, natural-probable-consequence, or direct liability):

• In a perpetrator-via-another offence, the person who instructs the AI system – either the user or the
programmer – is likely to be found liable.

• In a natural-probable-consequence offence, liability could fall on anyone who might have foreseen
the product being used in the way it was: the programmer, the vendor (of a product), or the service
provider. The user is less likely to be blamed unless the instructions that came with the product or service
spell out the limitations of the system and the possible consequences of misuse in unusual detail.

• AI programs may also be held liable for strict liability offences, in which case the programmer is likely
to be found at fault. However, in all cases where the programmer is deemed liable, there may be further
debate over whether the fault lies with the programmer, the program designer, the expert who provided the
knowledge, or the manager who appointed the inadequate expert, program designer or programmer.
