
Report

Products liability law as a way to address AI harms


John Villasenor
Thursday, October 31, 2019

Editor's Note:

This report from The Brookings Institution’s Artificial Intelligence and Emerging Technology (AIET) Initiative is
part of “AI Governance,” a series that identifies key governance and norm issues related to AI and proposes
policy remedies to address the complex challenges associated with emerging technologies.

Artificial intelligence (AI) is a transformative technology that will have a profound
impact on manufacturing, robotics, transportation, agriculture, modeling and
forecasting, education, cybersecurity, and many other applications. The positive
benefits of AI are enormous. For example, AI-based systems can lead to improved safety
by reducing the risks of injuries arising from human error. AI-based systems can also make
decisions that are more objective, consistent, and reliable than those made by humans.
And AI has the power to rapidly and efficiently analyze enormous amounts of data,
identifying and making good use of correlations that would elude even the most expert
human analyst.

But AI also involves risks. Put simply, AI systems will sometimes make mistakes. A
driverless car might fail to avoid an accident that later analysis shows was preventable. An
AI-driven algorithm used to evaluate mortgage applications might make decisions that are
biased by consideration of impermissible factors, such as race. An AI-enabled robotic
surgery tool might take an action during an operation that results in avoidable harm to a
patient.

Given the volume of products and services that will incorporate AI, the laws of statistics
ensure that—even if AI does the right thing nearly all the time—there will be instances
where it fails. While some of those failures may be benign, others could result in harm to
persons or property. When that occurs, questions of attribution and remedies will arise.
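
As a rough, purely illustrative calculation (both figures below are assumptions, not numbers from this report), consider a system that makes the right call 99.99% of the time:

```python
# Back-of-the-envelope illustration with assumed numbers: even a system that is
# right 99.99% of the time produces many failures when decisions happen at scale.
decisions_per_day = 10_000_000   # assumed daily decision volume across deployed systems
error_rate = 0.0001              # assumed rate: one mistake per 10,000 decisions

expected_failures_per_day = decisions_per_day * error_rate
print(expected_failures_per_day)  # about 1,000 expected failures per day
```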

Whose fault is it if an AI algorithm makes a decision that causes harm? How should fault
be identified and apportioned? What sort of remedy should be imposed? What measures
can be taken to ensure that the same mistake will not be repeated in the future?

Answering these questions requires examining the intersection of products liability and
artificial intelligence. In this policy brief, I provide an overview of key concepts in products
liability and their application to AI. I describe the challenges of attribution for AI-induced
harms, explain why I believe that products liability frameworks are well-positioned to
adapt to address AI questions, and argue that it is important to promote consistency across
states in AI products liability approaches. If implemented with appropriate frameworks,
products liability law represents an important mechanism to mitigate possible AI harms.

Products liability in the AI context


Products liability is the area of law that addresses remedies for injuries or property
damage arising from product defects, as well as for harms arising from misrepresentations
about products. As I explained in a 2014 Brookings Institution paper:

“[Products liability] is a complex and evolving mixture of tort
law and contract law. Tort law addresses civil, as opposed to
criminal, wrongs (i.e., ‘torts’) that cause injury or harm, and
for which the victim can seek redress by filing a lawsuit
seeking an award of damages. A common tort, both in
products liability and more generally, is negligence. Contract
law is implicated by the commercial nature of product
marketing and sales, which can create explicit and implicit
warranties with respect to the quality of a product. If a
product fails to be of sufficient quality, and that failure is the
cause of an injury to a purchaser who uses the product in a
reasonable manner, the seller could be liable for breach of
warranty.”[1]

Within the broad umbrellas of tort law and contract law, there are multiple specific (and
often simultaneous) theories of liability that can be asserted in a products liability claim,
including negligence, design defects, manufacturing defects, failure to warn,
misrepresentation, and breach of warranty. All of these liability theories can arise in the
context of AI. For example, consider the tort of negligence. Manufacturers have an
obligation to make products that will be safe when used in reasonably foreseeable ways. If
an AI system is used in a foreseeable way and yet becomes a source of harm, a plaintiff
could assert that the manufacturer was negligent in not recognizing the possibility of that
outcome.

To take a more specific example, consider an AI-enabled system for automatically
identifying abnormalities in MRI images, marketed to medical professionals as a tool
for increasing their efficiency at interpreting MRIs. Suppose that the system works well for
images with resolution meeting or exceeding a particular precision level, but is unreliable
for lower-resolution images. If the maker of this system sells it without providing
information about the requisite image resolution, a misdiagnosis on a low-resolution
image could be grounds for a products liability claim citing both negligence and failure to
warn.
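
As a purely illustrative sketch of the kind of safeguard the failure-to-warn point contemplates, the snippet below refuses to analyze images that fall short of an assumed validated resolution; the function names and the 512x512 threshold are hypothetical, not details of any real product:

```python
# Hypothetical guardrail for the MRI example: refuse to run on images below the
# resolution the model was validated for. The threshold, names, and stub model
# are illustrative assumptions, not a real product's API.

MIN_VALIDATED_RESOLUTION = (512, 512)  # assumed minimum (width, height) in pixels


def run_abnormality_detector(image_pixels):
    """Stand-in for the actual model inference; returns an empty list of findings."""
    return []


def analyze_mri(image_pixels, resolution):
    """Analyze an MRI only if it meets the resolution the system was validated on."""
    width, height = resolution
    min_w, min_h = MIN_VALIDATED_RESOLUTION
    if width < min_w or height < min_h:
        # Refusing to run, with an explicit reason, is one way a manufacturer
        # might surface the known limitation to the clinician.
        raise ValueError(
            f"Image resolution {width}x{height} is below the validated minimum "
            f"{min_w}x{min_h}; results would be unreliable."
        )
    return run_abnormality_detector(image_pixels)
```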

Suppose that, based on the experience of reading thousands of MRI images, the AI system
evolves in a manner that makes it better at identifying some abnormalities but
significantly worse at identifying others. This could lead to an allegation of a design
defect, with the plaintiff arguing that the human designers of the original algorithm could
have and should have built the AI system so that it would evolve in ways that would avoid
trading off performance enhancements on some abnormalities with performance
degradation on others.

Another variant occurs when the manufacturer of the AI-based MRI reading system
informs customers that, over time, the algorithm will not only learn from its own
accumulated experience in analyzing images, but also from human radiologists who
independently review and provide conclusions regarding a randomly selected subset of the
images. Suppose further that, due to an oversight by the manufacturer, the feature for
including independent radiologist feedback is never incorporated into the AI software; as
the algorithm evolves, its accuracy improves less quickly than it otherwise would have.
This might lead to a products liability claim asserting that the manufacturer engaged in
misrepresentation and the product contains a manufacturing defect. To the extent that a
manufacturer makes assurances in the marketing and sale of the MRI reading system that
turn out not to be true, any resulting harms could give rise to a breach of warranty claim.
This would be handled under well-established approaches in accordance with the Uniform
Commercial Code, which addresses the explicit and implicit warranties that are created
through the sale of goods.

In addition to specific theories of liability based on identifying a particular source (e.g., a
manufacturing defect) of a product flaw, another feature of products liability law is strict
liability. Under strict liability, manufacturers—including those making AI products—can
be held liable for unsafe defects without requiring an inquiry as to whether the defect
arose from an identifiable failure, such as a design defect, a manufacturing defect, or
manufacturer negligence. Instead, strict liability reflects the view that consumers have a
right to expect safe products. When that expectation is not met, a consumer who suffers a
resulting harm and brings a strict liability claim would not have the burden of identifying
specifically where in the design or manufacturing process the defect was introduced.

An overarching issue that will arise with the growth of AI is an increase in the potential
for software-induced harms. In non-AI systems, post-sale software updates have long
been a standard approach to fixing product flaws. In many cases, there are no associated
harms. For instance, it is common for software security vulnerabilities to be identified and
then fixed through updates before being exploited by malicious actors. But there have also
been cases where software design issues have reportedly contributed to enormous harms,
including two crashes of the Boeing 737 Max, each of which left no survivors. Software
was also the reported cause of a terrifyingly close call in 1983 when a Soviet early-warning
system falsely reported a set of incoming U.S.-launched ICBMs. Fortunately, the officer on
duty correctly suspected a software glitch, and no retaliatory strike was ordered.

While these examples don’t involve AI, they underscore that software decisions can have
profound consequences. As AI becomes ubiquitous across transportation, defense,
manufacturing, and many other sectors, the stakes involved in decisions made by AI
software will grow. That heightens the need to design AI systems in a manner that ensures
dangerous flaws are not present when the system is initially placed into the marketplace,
and ensures that they’re not inadvertently introduced by the system as the AI algorithms
evolve over time.

The Challenge of Attribution for AI Harms


A key characteristic that distinguishes AI is its ability to learn. Stated another way, AI
systems don’t simply implement human-designed algorithms. Instead, they create their
own algorithms—sometimes by revising algorithms originally designed by humans, and
sometimes completely from scratch. This raises complex issues in relation to products
liability, which is centered on the issue of attributing responsibility for products that
cause harms. After all, if an algorithm designed largely or completely by computers makes
a mistake, whose fault is it?

The answer to this question should lie in the recognition that companies need to bear
responsibility for the AI products they create, even when those products evolve in ways
not specifically desired or foreseeable by their manufacturers. When there are multiple
companies that have had a hand in designing an AI system (or in shaping the post-sale
algorithm evolution), there can be difficult questions of how to apportion blame when
things go wrong. But a defense along the lines of “it’s the algorithm’s fault” won’t be
legitimate.

An AI company targeted by a products liability lawsuit will assert multiple defenses. First,
it will claim that the AI algorithm isn’t flawed. In addition, it will claim that even if a jury
concludes that, despite the company’s belief otherwise, an AI algorithm has evolved in a
way that introduced defects,[2] that isn’t the responsibility of the company. Instead, the
company will argue that since those flaws were introduced after the product had already
been placed into the field, blame should be placed elsewhere. The company might try to
pin blame for any algorithmic defects on one or more of: the AI itself; the data provided to
the system that was used as a basis for the AI-driven algorithm evolution; the human
users of the AI system; and/or other companies in the supply chain either downstream or
upstream from the company being sued. I consider each of these in turn.

Blaming AI: As noted above, companies should generally[3] not be able to escape
liability by blaming AI-driven evolution of algorithms that they originally designed.
If companies want to reap the benefits of intelligent algorithms, they also need to be
willing to accept the attendant risks. AI enables learning—and therefore, automated
post-sale changes to the algorithm aimed at improving its performance. But the
anticipation of those future benefits is present at the time of the original sale, and
will be reflected in marketing strategies and in product pricing. Thus, even though a
company at the time of sale would not know specifically how the AI algorithm might
evolve in the future, the fact that it will do so will be portrayed as an asset to
prospective customers. If it turns out that, against all expectations, the automatic
evolution occurs in a manner that renders the product harmful, the company needs
to bear responsibility for that as well.
Blaming the data: Companies might also assert that there is nothing wrong with the
AI system, and that the problem instead lies with bad data that caused it to evolve in
harmful ways. Though there is a potential exception (i.e., cases where a malicious
user intentionally provides a system with data intended to sabotage its performance),[4]
generally the “blame the data” argument won’t hold water. Makers of AI systems
have a responsibility to anticipate the types of data that might be provided under
reasonably foreseeable usage scenarios—and to build in appropriate guardrails so
that the resulting algorithm evolution is beneficial or neutral as opposed to
detrimental; a minimal sketch of one such guardrail appears after this list.
Blaming the users: Depending on the specific facts underlying a particular liability
claim, users might have partial or full responsibility for harms arising in association
with an AI system. Just as a purchaser of an automobile who drives it at twice the
speed limit and then gets in an accident can’t reasonably blame the manufacturer, a
user of an AI-based system who applies it in clearly inappropriate ways will bear
responsibility for resulting harms. But manufacturers cannot credibly blame users
who engage with an AI system in reasonably foreseeable ways and, in doing so,
inadvertently cause it to evolve in a manner introducing defects that cause
immediate or later harm. The human/AI interface also raises its own set of products
liability issues, as building good AI solutions will require an understanding not only
of the interactions among the different software components of a system, but also of
how humans will interact with those components. Companies that fail to fully
anticipate the assumptions and decisions that will shape human/AI interactions risk
releasing products that might behave in unintended and potentially harmful ways.
Blaming the upstream or downstream supply chain: As occurs with non-AI
products, products liability in relation to AI will often involve finger pointing at other
places in the supply chain. In the AI ecosystem, there will typically be multiple
suppliers upstream of a consumer. To start with, there is the company that sold the
product directly to the consumer. That company in turn may have purchased a
software component of the product from a separate entity. And that entity may have
built some portions of the software in-house and licensed other portions from yet
another company. Apportioning blame within the supply chain will involve not only
technical analysis regarding the sources of various aspects of the AI algorithm, but
also the legal agreements among the companies involved, including any associated
indemnification agreements.
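
As referenced in the “blaming the data” item above, here is a minimal, hypothetical sketch of a guardrail that screens post-sale feedback data before it is allowed to influence algorithm evolution; the field names, label set, and per-source cap are all assumptions made for illustration:

```python
# Hypothetical data guardrail for a learning system: keep only well-formed
# feedback records and cap the share coming from any single source, so that no
# one (possibly malicious or faulty) source dominates post-sale learning.
# The label vocabulary, field names, and thresholds are illustrative assumptions.

VALID_LABELS = {"abnormal", "normal"}   # assumed label vocabulary
MAX_FRACTION_PER_SOURCE = 0.2           # assumed cap on any single feedback source


def filter_feedback(records):
    """Return feedback records that pass basic validity checks and the per-source cap."""
    well_formed = [r for r in records
                   if r.get("label") in VALID_LABELS and r.get("source")]

    per_source_cap = max(1, int(len(well_formed) * MAX_FRACTION_PER_SOURCE))
    kept, counts = [], {}
    for record in well_formed:
        source = record["source"]
        if counts.get(source, 0) < per_source_cap:
            kept.append(record)
            counts[source] = counts.get(source, 0) + 1
    return kept


# Example: the third record is dropped because its label is not recognized.
feedback = [
    {"source": "site-a", "label": "normal"},
    {"source": "site-b", "label": "abnormal"},
    {"source": "site-b", "label": "artifact??"},
]
print(filter_feedback(feedback))
```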

Products Liability as an Adaptive Area of the Law

As the examples above make clear, the products liability questions that will arise in
relation to AI will be highly complex. But that doesn’t mean we need a whole new set of
laws to address them. As I wrote in 2014:

“Products liability has been one of the most dynamic fields of
law since the middle of the 20th century. In part, this is
because the new technologies that emerged over this period
have led courts to consider a continuing series of initially
novel products liability questions. On the whole, the courts
have generally proven quite capable of addressing these
questions.”[5]

In light of this demonstrated history of adaptation to new technologies, products liability
law is well positioned to address the questions that will arise in relation to artificial
intelligence.

There are several ways in which this adaptation will need to occur. Inquiries into fault in
AI-based systems will need to be informed by an understanding that, while the immediate
decisions resulting in alleged harms (e.g., a driverless car’s decision to make a turn at a
particular instant in time) are made by computers, those decisions can be traced upstream
to choices made by companies. That’s where the responsibility will need to be placed when
things go wrong. In addition, frameworks for applying risk-utility tests in relation to AI
will need to be developed. Risk-utility tests have long been employed in products liability
lawsuits to evaluate whether an alleged design defect could have been mitigated “through
the use of an alternative solution that would not have impaired the utility of the product
or unnecessarily increased its cost.”[6] This same test can be applied in relation to AI as
well; however, the mechanics of applying it will need to consider not only the human-
designed portions of an algorithm, but also the post-sale design decisions and alternatives
available to an AI system as it automatically updates its algorithms.

Just as occurs in other areas of products liability, AI liability will generally be handled
through state court systems and legislatures.[7] Of course, it will take many years to
develop a body of case law and statutory law specific to the intersection of AI and products
liability, and not all courts will reach the right answer in every case. Over time, though,
products liability law will adapt to address the specific questions raised by AI as it has in
relation to other emerging technologies.

One way to smooth and accelerate this process, as well as to reduce the challenges that can
arise from state-by-state variations, is through voluntary frameworks. For example, the
American Law Institute (ALI) is a respected organization that produces “scholarly work to
clarify, modernize, and otherwise improve the law.” If the ALI or a similar organization
were to develop and publish model principles of law and/or legislation specific to AI
products liability, this could help to promote greater predictability and uniformity in
state-level approaches.

The Brookings Institution is a nonprofit organization devoted to independent research and
policy solutions. Its mission is to conduct high-quality, independent research and, based on
that research, to provide innovative, practical recommendations for policymakers and the
public. The conclusions and recommendations of any Brookings publication are solely those of
its author(s), and do not reflect the views of the Institution, its management, or its other
scholars.

Microsoft provides support to The Brookings Institution’s Artificial Intelligence and Emerging
Technology (AIET) Initiative. The findings, interpretations, and conclusions in this report are
not influenced by any donation. Brookings recognizes that the value it provides is in its
absolute commitment to quality, independence, and impact. Activities supported by its donors
reflect this commitment.

Report Produced by Center for Technology Innovation

Footnotes
1. John Villasenor, “Products Liability and Driverless Cars: Issues and Guiding Principles for
Legislation” (Washington, D.C.: Brookings Institution, April 24, 2014), 7,
https://www.brookings.edu/wp-content/uploads/2016/06/Products_Liability_and_Driverless_Cars.pdf.
2. Of course, it is also possible that a plaintiff could assert that the original algorithm, before
any post-sale evolution, was already defective. Those claims would raise many (though not all) of
the same issues addressed in this paper.
3. One exception to this could occur in the event of algorithmic sabotage by a third party. Even
that exception should not be automatic, however, as companies have a responsibility to make their
algorithms resilient to reasonably foreseeable sabotage attempts.
4. Even in the case where a malicious user sabotages an AI system by intentionally introducing data
aimed at causing the AI system to behave in a harmful manner, the outcome would still not be the
fault of the data. The malicious user would bear responsibility, as potentially would the company
for failing to design the system to be more robust against this sort of manipulation.
5. Villasenor, “Products Liability and Driverless Cars,” 15.
6. Villasenor, 9.
7. While products liability is generally handled in state courts, there are some exceptions. See,
e.g., the Class Action Fairness Act of 2005, Pub. L. No. 109-2, 119 Stat. 4.
