
ABSTRACT

Artificial intelligence has made human lives easier and better equipped to counter the vagaries of life. Ever since its formal foundation in 1956, the areas of AI application have continued to multiply. The interaction between the disciplines of law and Artificial Intelligence is attracting increasing attention within the academic and commercial communities. This paper focuses on the question of determining liability for harm that arises due to the fault of Artificial Intelligence, and mainly concerns the way in which the discipline of law will affect AI. Although the legal issues are hotly debated, no satisfactory solutions have been arrived at. This article is a novel attempt to scrutinize these issues comprehensively and to suggest long-term reforms for each of them. The paper also discusses the absence of direct legal regulation of Artificial Intelligence. Lastly, the author recommends an international regime for the regulation of AI.

“By far the greatest danger of Artificial Intelligence is that people conclude too early that
they understand it”

– Eliezer Yudkowsky

INTRODUCTION

THE EMERGENCE OF THE CONCEPT OF ARTIFICIAL INTELLIGENCE

Artificial Intelligence (hereinafter ‘AI’) refers to the cognitive abilities of a machine or system that are not natural but human-induced. Alan Turing, the father of computer science, propounded the Turing Test to determine whether a machine is intelligent. According to this test, a human (called the ‘judge’) questions two entities (one a ‘machine’ and the other a ‘human’) from behind a computer terminal.1 If the answers given by the two are such that the judge cannot distinguish the human from the machine, the machine is said to be artificially intelligent. These induced capabilities have become so efficient over the years that the time is not far when human activities, from routine to complex ones, may be replaced by these machines.2 For instance, DeepMind, an AI system developed by Google, is currently used in the United Kingdom to detect and analyze health risks, resulting in early diagnosis of diseases.3
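
To make the protocol concrete, the imitation game described above can be rendered as a minimal illustrative sketch in Python. The function names, the label scheme, and the round count are the author's assumptions for exposition only, not part of Turing's formulation.

    import random

    def turing_test(judge_ask, judge_guess, human_reply, machine_reply, rounds=5):
        # Randomly hide the two respondents behind the labels 'X' and 'Y'
        # so the judge cannot rely on ordering.
        pair = [("human", human_reply), ("machine", machine_reply)]
        random.shuffle(pair)
        labels = {"X": pair[0], "Y": pair[1]}

        # The judge questions both entities from behind a terminal.
        transcript = []
        for _ in range(rounds):
            question = judge_ask(transcript)
            for label, (_, reply) in labels.items():
                transcript.append((label, question, reply(question)))

        # The machine counts as 'artificially intelligent' if the judge
        # cannot single it out from the transcript.
        machine_label = "X" if labels["X"][0] == "machine" else "Y"
        return judge_guess(transcript) != machine_label

    # Trivial stand-ins: with identical answers, the judge can only guess.
    passed = turing_test(
        judge_ask=lambda t: "What is 2 + 2?",
        judge_guess=lambda t: random.choice(["X", "Y"]),
        human_reply=lambda q: "four",
        machine_reply=lambda q: "four",
    )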

However, these benefits come with their own set of challenges. The objective of this article is to identify the legal issues that have emerged along with the advancement of AI. In pursuit of the same, the author has explored United States jurisprudence, as the US is the largest developer of AI systems in the world4 and its legal approach to AI is therefore useful for understanding the various issues. The term Artificial Intelligence covers a wide range of capabilities. Some futurists, such as Stephen Hawking and Sam Harris, fear that AI could one day pose an existential threat: a “superintelligence” that might pursue goals not aligned with the continued existence of humankind. Such fears relate to “strong” AI or

1. A.M. Turing, “On Computable Numbers, with an Application to the Entscheidungsproblem” 2 Proc. London Mathematical Soc’y 230 (1936).
2. Erica Fraser, “Computers as Inventors: Legal and Policy Implications of Artificial Intelligence on Patent Law” 13 SCRIPTed 305 (2016).
3. Sarah Bloch-Budzier, “NHS using Google technology to treat patients” BBC News (Nov. 22, 2016), available at https://www.bbc.com/news/health-38055509 (last visited on 14th January 2020).
4. Nishith Desai Associates, “The Future is Here: Artificial Intelligence and Robotics” (May 2018), available at http://www.nishithdesai.com/fileadmin/user_upload/pdfs/Research_Papers/Artificial_Intelligence_and_Robotics.pdf (last visited on 8th January 2020).

“artificial general intelligence” (AGI), which would be equivalent to human-level awareness but does not yet exist. More than sixty years after the field’s 1956 founding proposal set out to discover how machines could use language and form concepts in order to solve problems, it is predicted that by 2020 AI will drive up to $33 trillion of annual economic growth.

PATENTABILITY AND INVENTORSHIP ISSUES FOR AI-GENERATED INVENTIONS
New technology always poses the challenge of how to protect intellectual property operating in an unexplored area. The abundant growth in artificial intelligence technology and applications has led many to claim that the existing patent protection mechanism will not satisfy the new industry. Scholars have started to deal with issues of inventorship and with the obviousness standard for AI technology.
First, with inventorship, the central question is: who will own the patents for inventions created solely by artificial intelligence? The major challenge is that the U.S. patent system requires human inventiveness for inventorship. It is based on conception, which requires “the formation in the mind of the inventor, of a definite and permanent idea of the complete and operative invention.”5 Different scholars have offered different views on the inventorship issue of AI. Professor Abbott argues that treating nonhuman artificial intelligence as an inventor would incentivize the development of creative computers.6 Professors Yanisky-Ravid and Liu suggest that efforts to identify a single inventor of AI-generated inventions are futile; they instead propose a Multiplayer Model, which recognizes contributions from many players whose individual involvement is indirect and insignificant and would not meet the current threshold for inventorship.7

INTELLECTUAL PROPERTY RIGHTS FOR ARTIFICIAL INTELLIGENCE


AI has made such rapid advances in the last three decades that it can now perform a variety of works, including writing poetry, creating paintings, conducting surgical operations, etc. The developers of these AI systems demand that works generated by AI be protected

5. Sewall v. Walters, 21 F.3d 411 (Fed. Cir. 1994).
6. Ryan Abbott, “I Think, Therefore I Invent: Creative Computers and the Future of Patent Law” 57 B.C. L. Rev. 1079 (2016).
7. Shlomit Yanisky-Ravid and Xiaoqiong Liu, “When Artificial Intelligence Systems Produce Inventions: The 3A Era and an Alternative Model for Patent Law” 39 Cardozo L. Rev. (2018).

as intellectual property.8 In this light, this article takes up the issues in the protection of AI under two categories of intellectual property rights, namely patents and copyrights.

1) Patents

A patent is an exclusive right over an invention, granted to the person who invented it, provided the invention is novel, involves an inventive step, and is not obvious to a person skilled in the art.9 Analyzing this definition, we see two major issues with the grant of patents to AI: firstly, the patentability of AI and secondly, the identification of the patent holder.

A. Subject matter that can be generated by a layman with ordinary labour and skill is not patentable.10 Can the functions performed by AI be patented even though they are mere computerization of ordinary human functions? The US grants patent protection to computer software.11 Since AI is an advanced form of computer software, it can be argued that AI should be protected like computer software. However, in a 2014 judgment, the US Supreme Court held that a generic computer implementation of an abstract idea cannot make a work patent-eligible.12 In that case, the Court revived what is called the ‘mental steps doctrine’, which entails that if an activity can be performed through the mental steps of a human being, it is not patentable. Earlier, the US courts had evolved a distinction between a function performed by humans and its performance by a computer: the latter would be patentable if it had some practical application in the world, even though it could be performed by humans as well.13 In the author’s view, this distinction is important, and computerized human functions cannot be equated with those mechanically performed by humans, because of the improved accuracy, speed, and efficiency with which computers perform human functions. Further, AI has immense scope as a tool for human development and progress; if it is not accorded patent protection, its development might be stifled or even lost.
B. Secondly, a patent can be granted only to a person (whether natural or legal). Since AI, for instance a robot, is not considered a legal person, it is not possible to confer patent rights on it for the work it creates. It is understandable that the very
8. Kalin Hristov, “Artificial Intelligence and the Copyright Dilemma” Asia Pacific J. Health L. & Ethics 105 (2017).
9. Ben Hattenbach, “Rethinking the Mental Steps Doctrine and Other Barriers to Patentability of Artificial Intelligence” 19 Sci. & Tech. L. Rev. 313 (2018).
10. Id.
11. Id.
12. Alice Corp. v. CLS Bank International, 134 S. Ct. 2347 (2014).
13. Diamond v. Diehr, 450 U.S. 175 (1981).

objective of the recognition of patents is to incentivize the inventor, and in the case of AI works, an AI system is the inventor.14 Since an AI system can neither be incentivized nor appreciate the privileges of a patent, it is clear that the patent regime cannot be extended to grant patent rights to AI. The only possible alternative person on whom such rights can be conferred is the developer of the AI. However, it has been argued that a developer only develops an algorithm to enable the creation of the AI, not the ultimate work that the AI creates. Thus, the developer cannot be conferred with the patent.

Suggestions

To resolve these issues, it is suggested that nations evolve a legal framework for the protection of AI, either by way of new legislation or amendments to existing legislation.

It is also recommended that the patentability of an invention be determined by the ratio of human to AI participation in the invention. There are three possible scenarios: firstly, where a human being invents the product with the help of AI tools, so that the human-to-AI ratio is high; secondly, where an AI system invents something under the constant supervision of a human, e.g. robots working in laboratories that supply materials and mix them on the instructions of the scientist; and lastly, where the AI system itself invents something without human intervention during the process of invention. In the last case, the patent should generally not be granted, unless it is proved that the AI system’s programming was the proximate cause of the resultant invention. A purely illustrative sketch of this three-step rule follows.
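
The following Python sketch expresses the suggested rule as a decision function. It is the author's hypothetical illustration only: the contribution fractions, flags, and outcomes are assumptions for exposition, not settled legal tests.

    def patentability(human_contribution, ai_contribution,
                      constant_human_supervision,
                      programming_was_proximate_cause):
        # Toy decision rule for the three human-to-AI participation
        # scenarios suggested above. Contributions are rough fractions
        # of the inventive effort; the comparison is illustrative.
        if human_contribution > ai_contribution:
            # Scenario 1: a human invents with AI as a mere tool.
            return "patentable; patent to the human inventor"
        if constant_human_supervision:
            # Scenario 2: the AI invents under constant human supervision
            # (e.g. laboratory robots mixing materials on instruction).
            return "patentable; patent to the supervising human"
        # Scenario 3: the AI invents without human intervention.
        if programming_was_proximate_cause:
            return "patent may be granted (e.g. to the developer)"
        return "patent should not be granted"

    # Scenario 2 example: AI-led invention under constant supervision.
    print(patentability(0.3, 0.7, True, False))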

2) Copyrights

AI has now reached a stage of advancement where machines are used to author literary works, photographs, artistic works, etc.15 AI is the de facto author of a work created using its algorithm. However, copyright laws around the world include only persons (both legal and natural) within the definition of authors.16 This prevents the grant of copyright for works created using AI. However, if not the AI system itself, can we consider the programmer of the AI system as the author of the work created by the AI? In the author’s opinion, ownership of computer-generated works should be presumed to lie with the developers
14. D. Czarnitzki & A. Toole, “Patent Protection, Market Uncertainty and R&D Investment” 93 The Review of Economics and Statistics 147 (2011).
15. Amanda Levendowski, “How Copyright Law Can Fix Artificial Intelligence’s Implicit Bias Problem” 93 Wash. L. Rev. 579 (2018).
16. Id.

of the AI system. Such a presumption can be drawn on the basis of the ‘work made for hire’ doctrine incorporated in US copyright law. This doctrine states that if a work is expressly commissioned by a person for his own use, the ownership of the work shall inhere in him; and secondly, if any work is created by an employee in the course of employment, its ownership will lie with the employer.17 Applying this doctrine to computer-generated works, it is not difficult to argue that the AI system is commissioned by the developer with the purpose that any work it creates shall be a work for hire for him. Thus, there is an implicit employer-employee relationship between the developer and the AI system. This interpretation of the doctrine would enable the protection of the work of a non-human author under the law of copyright, with the developer of the AI tagged as the author for availing the benefits of such copyright.

We know that data is required to create artificial intelligence. This data may be in the form of literary, artistic, or other works that may be under the protection of copyright laws. Thus, the collection of data for training purposes by AI developers may conflict with the interests of copyright holders and may even constitute copyright infringement. However, it is still unsettled whether such data collection constitutes a ‘copy’ for the purposes of copyright laws. Even if it is a copy, we still need to know whether such copying of data for feeding into AI systems is ‘fair use’ of the data.

INDIA’S APPROACH TOWARDS ARTIFICIAL INTELLIGENCE


India is still at a nascent stage in the development of AI. However, many AI applications, such as chatbots, automated chat interfaces that provide a non-human interface with visitors, have penetrated the Indian economy deeply.18 Going a step further, in December 2018, a doctor in Gujarat performed the world’s first telestenting using a robot; in other words, the doctor instructed a robot from a remote location to conduct a percutaneous coronary intervention.19 Such sectoral expansion of the use of AI raises the legal concerns already pointed out above. However, Indian law, as it stands today, is not equipped to regulate AI systems, nor are there any judicial decisions (unlike in the US) on the matter of AI to guide AI stakeholders. Against this backdrop, in June 2018, NITI Aayog came out with a

17. Copyright Act 1976, 17 U.S.C. s. 101.
18. Jayanth Kolla, “India’s diversity can provide lots of fodder for conversational AI products” Live Mint, Dec. 31, 2018.
19. Rinchen Norbu Wangchuk, “In A First, Gujarat Doctor Uses Robots to Conduct Heart Procedure from 32 Kms Away!” The Better India (December 6, 2018), available at https://www.Thebetterindia.Com/166057/India-Surgery-Robot-Innovation-Gujarat-News/ (last visited on 21st January 2020).

discussion paper on a National Strategy for Artificial Intelligence.20 Although this discussion paper focuses on areas where AI can be harnessed, suggests public-private partnerships to accelerate innovation in AI, and recommends reskilling the workforce to adopt AI technology, it fails to address the legal vacuum in the regulation of AI and the determination of its legal status. The legal issues that need urgent redressal in the Indian context are as follows:

A. Legal Personality of AI: It is hotly debated in India whether AI can be accorded the status of a legal person on the same lines as a company. Such legal personhood would grant the AI the competency to contract under the legislation,21 to hold property in its name, and to sue or be sued; all the legal issues mentioned earlier would thus be resolved. However, such conferral of personality is problematic because AI cannot be entrusted with responsibility: fully autonomous AIs exist, unlike companies, which are autonomous only as a legal fiction. The liability of a company is ultimately discharged by its management out of the company’s assets; there are no such means to discharge the liability of an autonomous AI.
B. AI’s impact on Data Privacy: Protection of citizens’ privacy is an issue that has garnered much attention in India, especially in light of the Supreme Court’s ruling in Justice K S Puttaswamy (Retd.) & Anr. v. Union of India & Ors.,22 where it upheld the right to privacy as a manifestation of the right to life and liberty. The Parliament is also mulling over the passage of the Personal Data Protection Bill, 2018, which mandates data localization and the data subject’s consent before any data relating to him/her is processed in any manner. However, AI raises new challenges for data protection, as pointed out above. It is advisable that the Legislature first conduct an impact assessment of AI on human rights such as the right to privacy, followed by a comprehensive law regulating the same. The other issues concerning AI discussed above will also require the Legislature’s attention in the time to come, as these issues confront every country that starts on the path of AI development.

LEGAL LIABILITY ARISING OUT OF THE FAULTS OF AN AI


One cannot ignore the continuous increase in the use of artificial intelligence. A pertinent question, therefore, is how liability should be decided in the absence of personhood or any apparent agency, or when actions are entirely autonomous. This area is still blurred, and

20. Discussion Paper, National Strategy for Artificial Intelligence, NITI Aayog (June 2018), available at http://www.niti.gov.in/writereaddata/files/document_publication/nationalstrategy-for-ai-discussion-paper.pdf (last visited on 17th January 2020).
21. The Indian Contract Act, 1872, s. 10.
22. (2017) 10 SCC 1.

scholars are still trying to work out a solution. AI can cause not only physical injury and damage to property but also other types of harm, such as economic loss and privacy violations. There are two heads of possible liability, i.e. criminal and civil, under which AI can be held liable.

LIABILITY UNDER CRIMINAL LAW:


Criminal liability requires both ‘actus reus’ and ‘mens rea’, and Hallevy classifies these requirements as follows:
1. Those where the actus reus consists of an action, and those where the actus reus consists of a failure to act;
2. Those where the ‘mens rea’ requires knowledge or information; those where the ‘mens rea’ requires only negligence (“a reasonable person would have known”); and strict liability offences, for which no ‘mens rea’ needs to be demonstrated.23
There are certain circumstances in which ‘mens rea’ is not required, such as the case of a mentally deficient perpetrator, who is treated as an innocent agent because he lacks the mental capacity to form ‘mens rea’. By analogy, an AI program could be treated as an innocent agent, and in such cases the software programmer or the user can be held liable. Another circumstance is where an AI program, although intended for a useful purpose, is activated incorrectly and performs a criminal action. An example is the case in Japan where a robot in a motorcycle factory erroneously identified an employee as a threat to its mission, determined that the most efficient way to eliminate this threat was to push him into an adjacent machine, killing him instantly, and then continued its duties.24 Under the natural-or-probable-consequence doctrine, an accomplice is liable for what is a natural or probable consequence of the scheme he encouraged or aided,25 so long as he was aware that some criminal scheme was underway. On this reasoning, the user, or more likely the programmer, will bear liability if they knew that the criminal offence committed was a natural or probable consequence of their use or programming of the application. Applying this principle requires distinguishing between AI programs that know that a criminal scheme is underway and those that do not; crimes whose ‘mens rea’ requires knowledge cannot be prosecuted in the latter case.
In the context of defences against liability involving AI systems, there are numerous cases where a defendant accused of cybercrime offences has successfully offered the defence that his
23. Gabriel Hallevy, “The Criminal Liability of Artificial Intelligence Entities” (February 15, 2010), available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1564096 (last visited on 27th January 2020).
24. Weng Y.-H., Chen C.-H., et al., “Towards the Human-Robot Co-Existence Society: On Safety Intelligence for Next Generation Robots” 1 Int. J. Soc. Robot. 267-273 (2009).
25. Francis Bowes Sayre, “Criminal Responsibility for the Acts of Another” 43 Harv. L. Rev. 689 (1930).

computer had been taken over by a trojan, which committed the offences using the defendant’s computer but without the defendant’s knowledge. In one case in the United Kingdom, a computer containing indecent photographs of a child was found to contain eleven trojan programs, and the defendant’s lawyer successfully convinced the jury that the prosecution’s scenario had not been proved beyond reasonable doubt.26

LIABILITY UNDER CIVIL LAW:

NEGLIGENCE
When a party is injured as a consequence of using software, or when the software is defective, the legal action typically alleges the tort of negligence rather than criminal liability.27 Three constituent elements must be established for a claim of negligence to prevail:
1. A legal duty on the part of the party complained of to exercise due care towards the party complaining, in respect of conduct within the scope of the duty;
2. Breach of the said duty; and
3. Consequential damage.28

Regarding point 1, it is quite clear that the software vendor owes a duty of care to the customer, but it is challenging to decide the extent of the duty of care owed.
Regarding point 2, there are numerous ways in which an AI system could breach the duty, such as an inadequate knowledge base, failure to keep the knowledge base up to date, the user supplying faulty inputs, use of the program for an incorrect purpose, or errors in program function that could have been detected by the program developer.
With regard to point 3, causation is also debatable, as there are two situations: the AI system may merely recommend an action, or it may take the action itself. In the former case, a human agent intervenes between the system and the harm, which is not so in the latter case; this makes causation harder to prove in the former case.

VICARIOUS LIABILITY

26
Brenner S.W., Carrier B., et.al., “The Trojan Horse Defence in Cybercrime Cases” 21 Santa Clara High Tech.
L.J. (2004).
27
John Kingston, “Artificial intelligence and Legal Liabilities” (Mar., 1991) available at
https://arxiv.org/ftp/arxiv/papers/1802/1802.07782.pdf ( last visited on 11th January 2020)
28
Sangarsh Samiti v. Union of India, ILR (2010) 4 Del 293.

9
If the AI system is regarded as a tool, a parallel can be drawn with a party’s responsibility for the behaviour of children or employees. The following relationships are the classic examples of vicarious liability:
(a) Liability of the principal for the acts of his agent, or liability of parents for their child;

(b) Liability of the master for the acts of his servant.

This means that vicarious liability for an AI’s behaviour attaches to the person on whose behalf the AI acts or at whose disposal and under whose supervision it is, namely the AI’s users or owners.
The doctrine of vicarious liability, analysed extensively by Paula Giliker in Vicarious Liability in Tort: A Comparative Perspective,29 compares common law systems and civil law systems. The most common example arises in the workplace context, where an employer can be liable for the acts or omissions of its employees, provided it can be shown that they took place in the course of their employment. In one sense this liability is strict, in that the employer cannot escape it once those conditions are met. However, an analysis commissioned in 2004 by the European Commission showed that German law constitutes an exception: it is the only one of the reviewed legal systems30 which allows the employer to avoid liability by proving that the employee was carefully selected and controlled. However, no matter how subtle the differences between legal systems are, the basic rule of vicarious liability remains the same: it renders the defendant liable for torts committed by another, imposing liability on a person not because of his own wrongful act but because of his relationship with the tortfeasor. The robot-as-tool viewpoint therefore means that liability for the actions of AI should rest with its owners or users.

The concept of vicarious liability provides a suitable framework within which to find a solution: the intelligent software agent occupies the legal role of agent, and the software licensee governing the agent is the principal. The agent then uses its learning, mobility, and autonomous properties to accomplish specific tasks for the licensee. These actions may, however, include activities that society does not accept and that call for the imposition of liability for the mistakes of the software agent. The responsibility for such actions would fall on the principal if and only if the AI is viewed as a legal agent within

29. Paula Giliker, Vicarious Liability in Tort: A Comparative Perspective, Cambridge University Press (2010).
30. Paulius Cerka, Jurgita Grigiene & Gintare Sirbikyte, “Liability for Damages Caused by Artificial Intelligence” Computer Law & Security Review 376-389 (2015).

the relationship of principal and agent. The liability arising from the software agent’s actions can then be attributed to the principal, binding the principal to the corresponding legal duties.

STRICT LIABILITY
Strict liability designates a party, i.e. the manufacturer or owner, as liable for any damage caused regardless of fault. In other words, tort law imposes liability on defendants who are neither negligent nor guilty of intentional wrongdoing. Strict products liability is predicated on the existence of an unreasonably dangerous product whose foreseeable use has caused the injury. Whether an autonomous robot would be considered an unreasonably dangerous product remains to be determined as technology develops and more autonomous robots enter society, although military and police robots might in some circumstances meet this standard. Under some fact patterns, consumers who have been injured by defectively manufactured products may rely on a strict liability cause of action. Under the doctrine of strict products liability, a manufacturer must guarantee that its goods are suitable for their intended use when they are placed on the market for public consumption. The law of torts will hold manufacturers strictly liable for any injuries that result from placing unreasonably dangerous products into the stream of commerce, without regard to the amount of care exercised in preparing the product for sale and distribution and without regard to whether the consumer purchased the product from, or entered into a contractual relationship with, the manufacturer.31
There are three main cases where strict liability applies:

(a) injuries by wild animals; (b) products liability; and (c) abnormally dangerous activities.

• There are no grounds to equate AI with an animal, because the activities of AI are based on an algorithmic process similar to rational human thinking and only partially similar to the instincts and senses of animals. It is presumed that AI can understand the consequences of its actions, which distinguishes it from animals. This leads to the conclusion that we cannot apply the strict liability that would apply in cases where the damage is caused by an animal.

31. Woodrow Barfield, “Liability for autonomous and artificially intelligent robots” (Sept. 2018), available at https://www.degruyter.com/downloadpdf/j/pjbr.2018.9.issue-1/pjbr-2018-0018/pjbr-2018-0018.pdf (last visited on 11th January 2020).

• Also, in some cases it would be difficult to apply the products liability category, because AI is a self-learning system that learns from its experience and can take autonomous decisions. Thus, it would be difficult for the plaintiff to prove an AI product defect, and especially that the defect existed when the AI left its manufacturer’s or developer’s hands. It is hard to see how a line could be drawn between damage resulting from the AI’s will, i.e. deriving from its own decisions, and damage resulting from a product defect, unless we were to equate independent decision-making (which is a distinctive AI feature) with a defect. In that case, there is a danger that responsibility for any independent AI decision would arise for its manufacturer and ultimately for the programmer, the final element of the liability chain. The burden of responsibility would then be disproportionate for that person. Too large a burden of responsibility can make the programmer afraid to reveal his identity in public, or it can halt the progress of technology development in official markets, moving all programming work into unofficial markets. Assuming such outcomes are unacceptable, it is evident that applying product liability in the case of AI is difficult or even legally flawed.
• If AI were treated as a greater source of danger and the person on whose behalf it acted were declared its manager, that person could be held liable without fault. Thus, the question is whether AI software systems can be recognized as a greater source of danger. There are two main theories of the greater source of danger: the theory of the object and the theory of the activities. Under the theory of the object, the greater source of danger is an object of the physical world that cannot be fully controlled by a person. The theory of activities provides that the greater source of danger consists of certain types of activities associated with greater danger to others. Both theories imply the greater danger of certain objects to persons.

Therefore, a greater source of danger is defined as a specific object of the physical world that has specific properties. That is precisely what AI is: a specific object characterized by properties inherent only to it. Since AI is able to draw individual conclusions from gathered, structured, and generalized information, and to respond accordingly, it should be accepted that its activities are hazardous. Accordingly, the AI developer should be held liable for the actions of the greater source of danger, and in this case liability arises without fault. The previously discussed examples, which reveal in detail the operating principles and specifics of AI, confirm that AI software systems can be regarded as a greater source of danger. The example of Gaak, a ‘predator’ robot that escaped its enclosure during a 2002 living-robots experiment, confirms that the activities of AI are risky and that the risk may not always be prevented by means of safety precautions. For this reason, AI meets the requirements for being considered a greater source of danger, and the manager of a greater source of danger is required to assume liability for its actions by insuring the AI.

Liability without fault is based on the theory of risk. This theory rests on the fact that a person carries out activities that he or she cannot fully control; a mere requirement to comply with safety regulations would therefore not be adequate, because even if the person acted safely, the actual risk of damage would still remain. In this case, it would be useful to employ the “deep pocket” theory, which is common in the US: a person engaged in dangerous activities that are profitable and useful to society should compensate, out of the profit gained, for the damage caused to society. Whether the producer or the programmer, the person with the “deep pocket” must guarantee his hazardous activities through the requirement of compulsory insurance of his civil liability.

Another possibility is to divide responsibility among a group of persons by grafting the Common Enterprise Doctrine onto a new strict liability regime. This idea has been raised by David C. Vladeck, who argues that each entity within a set of interrelated legal persons may be held jointly and severally liable for the actions of the other entities that are part of the group. Such a liability theory does not require that the persons function jointly; it is enough that they work towards a common end, such as to design, program, and manufacture an AI and its various component parts. A common enterprise theory permits the law to impose joint liability without having to lay bare and grapple with the details of assigning every aspect of wrongdoing to one party or another; it is enough that the parties engaged in wrongdoing in pursuit of a common aim. This theory of liability, in the context of the lack of legal regulation in the AI field, requires more academic interest, discussion, and development.32

IS THE APPLICATION OF THE CONCEPT OF PRODUCT LIABILITY AN OPTION?
Product liability law appears to be the most adequate domain for the discussion of liability of AI-based systems. The product liability concept mainly comprises three components: manufacturing defects, design defects, and failure to duly instruct or warn consumers.33 The doctrine regarding manufacturing defects holds that a manufacturer will be liable for harm caused by an unintentional defect, that is, a departure from the intended manufacturing specifications. If an AI-based robot does not function as intended, this doctrine could easily be applied.

32. Supra note 29.
33. David G. Owen, “Products Liability Law Restated” 49 S.C. L. Rev. 273 (1998).

A manufacturer will be held liable for the harm caused by a product “when the foreseeable risks of harm posed by the product could have been reduced or avoided by the adoption of a reasonable alternative design by the seller or other distributors, or a predecessor in the commercial chain of distribution, and the omission of the alternative design renders the product not reasonably safe.”34 There are two approaches to determining whether a design defect has occurred. The explicit approach provided by the Restatement is the “risk-utility” test, meaning that the plaintiff must prove that an alternative design could have reduced the risk imposed by the product using preventive measures that are reasonable in relation to the harm. This is essentially an analysis of whether an alternative, cost-effective design could have reduced the risk. Note, however, that the risks and harms considered under this approach are only foreseeable risks and harms, and that the reasonableness of the design should be considered not only with respect to the specific harm done but with respect to the product’s safety at large. A toy illustration of this comparison appears below.
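
As a purely hypothetical illustration of that cost-benefit comparison (the figures and variable names are the author's assumptions, drawn from neither the Restatement nor any case), the risk-utility test can be read as asking whether a reasonable alternative design would have averted more expected harm than it costs:

    def design_is_defective(p_harm_current, p_harm_alternative,
                            harm_cost, alternative_design_cost):
        # Toy risk-utility comparison: the design is treated as defective
        # when the foreseeable expected harm avoided by the alternative
        # design exceeds the cost of adopting that design.
        expected_harm_avoided = (p_harm_current - p_harm_alternative) * harm_cost
        return expected_harm_avoided > alternative_design_cost

    # Hypothetical figures: a $1,000 safety interlock cuts the probability
    # of a $500,000 injury from 1% to 0.1% over the product's life.
    print(design_is_defective(0.01, 0.001, 500_000, 1_000))  # True -> defective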
A second approach to the design defect trigger is the “consumer expectations” approach,
which basically asks whether the dangers entailed by a product exceed those reasonably
expected by the potential consumers.
The third component applies where there are “inadequate instructions or warnings, when the foreseeable risks of harm posed by the product could have been reduced or avoided by the provision of reasonable instructions or warnings by the seller or other distributors, or a predecessor in the commercial chain of distribution, and the omission of the instructions or warnings renders the product not reasonably safe.”35
With respect to the above discussion, it is important to note that each component involves a foreseeability factor, but in the case of fully autonomous AI robots it is not possible to foresee their functioning, as there is no human intervention involved; it is purely automatic. Had we wanted a product based on a pre-defined set of rules, we would not have equipped it with full AI capabilities. The AI factor qualitatively changes the picture, as it by definition leads to unexpected results. AI-related risk is unforeseeable by nature and therefore cannot be covered under any of the underlying components of the product liability doctrine.

CONCLUSION
It is the year 2020, and AI’s implications for mankind are manifold. It is therefore necessary to evaluate the need to revamp legal systems to match the development of AI. The
34. Ibid.
35. Id.

present position of AI in the law is not very clear. As Stephen Hawking stated: “The short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.” Traditional laws are no longer adequate in this age of advancement. Once, behind every invention there was a human inventor, but we are living in a new era of machines in which no human stands behind the invention; the invention can now be the ultimate product of machines. An AI system is at once creative, rational, independent and autonomous, indeterminable, accurate, and efficient. On the basis of these features, we can say that AI systems are capable of producing inventions that were once made only by humans, which is why such inventions should be registrable as patents. Although the legal issues concerning AI raised in each jurisdiction are broadly similar, it is evident from the author’s research that not many initiatives have been taken by countries. Developing countries such as India especially lag behind in identifying the challenges posed by AI. Consequently, the development of the law relating to AI in these unaware jurisdictions is stagnating. These issues are being discussed all across the world, and new attempts are being made to work out liability. However, there are still no solutions that cover all the above concerns. The UK Parliament introduced a bill known as the Vehicle Technology and Aviation Bill, 2016, which provides that where an accident is caused by an automated vehicle, the insurer is liable for the damage, and where there is no insurance, the owner of the vehicle will be held liable. The author also suggests that liability should be based on the doctrine of the last opportunity, i.e. if the injured person had the last opportunity to prevent the injury, he should be held partially liable. It is because of this disparity among jurisdictions that the author calls for international regulation of AI, so that even those countries that have little exposure to AI get an opportunity to participate in the deliberations. With this agenda in mind, the author concludes that the development of AI must be informed by national as well as international law so that the future of mankind is certain and secure.
