computer law & security review 34 (2018) 1314–1332


Proving liability for highly and fully automated vehicle accidents in Australia ✩

Tom Mackie∗
TC Beirne School of Law, The University of Queensland, Australia

Keywords: Automated vehicle; Artificial intelligence; Liability; Product liability

Abstract

This article considers how liability questions will be resolved under current Australian laws for automated vehicle ('AV') accidents. In terms of the parties that are likely to be held responsible, I argue that whether the human driver remains liable depends on the degree to which the relevant AV is automated, and the degree of control the human driver had over the events leading up to the particular accident. Assuming therefore that human drivers would not be held liable for the majority of highly and fully automated vehicle accidents, plaintiffs will have to establish liability on the part of those who manufacture, maintain or contribute to the operation of AVs, under the claims available in Australia's product liability regime.

This article then turns to the problems of proof that plaintiffs are likely to face in establishing AV manufacturer liability in negligence, or in a defective goods claim under Part 3–5 of the Australian Consumer Law ('ACL'). Firstly, it may be difficult to determine the cause of the AV accident, due to the technical complexity of AVs and due to ongoing concerns as to the explainability of AI decision-making. Secondly, plaintiffs may struggle to prove fault in a negligence claim, or that the vehicle was defective for the purposes of Part 3–5 of the ACL. Essentially, under both actions, manufacturers will be held to a duty to undertake reasonable testing of their AVs. Given that it is currently impracticable to completely test for, and eliminate, all AV errors, and due to the broader social utility the technology is likely to offer, plaintiffs may face evidentiary challenges in proving that the manufacturer's testing was unreasonable.
© 2018 Tom Mackie. Published by Elsevier Ltd. All rights reserved.


✩ This article is based upon a research project submitted in fulfilment of the requirements for the LLB (Hons) at UQ. I would like to thank Associate Professor Mark Burdon for his supervision whilst teaching at UQ and for his invaluable guidance more broadly. I would also like to thank Professor Kit Barker for his comments. All remaining errors are my own.

∗ LLB, TC Beirne School of Law, The University of Queensland, Brisbane, Queensland, 4072, Australia.
E-mail address: t.mackie@uq.edu.au

https://doi.org/10.1016/j.clsr.2018.09.002
0267-3649/© 2018 Tom Mackie. Published by Elsevier Ltd. All rights reserved.
1. Introduction

Automated vehicle ('AV') technology is set to take up an ever-larger role on Australian roads. At present, there are commercially available vehicles equipped with driver assistance or partial automation technology,1 and testing initiatives for more advanced automated vehicles are currently taking place in most of Australia's states and territories.2 The technology has been touted to play an important role in addressing the significant social and economic harm that is currently caused by motor vehicle accidents. In 2016 alone, over 1300 deaths were recorded on Australian roads3 and the Australian Transport Council has estimated that vehicle accidents cost the Australian economy up to $27 billion per annum.4 Furthermore, by some estimates over 90% of road accidents are principally caused by human error.5 Automated vehicles, equipped with sensors that react faster than human drivers, constantly attentive, and able to perceive 360° around the car, therefore have the potential to reduce the frequency of these accidents.6

However, the potential benefits of AVs are not merely safety-related. The next generation of highly and fully automated vehicles,7 which constitute the focus of this article, will dispense with the need for a human driver to supervise the driving task whilst the vehicle is engaged in autonomous mode.8 These advanced self-driving cars are expected to be introduced from 2020,9 and offer the potential for individuals to reclaim commuting time for productive purposes, as well as providing mobility benefits for those who are currently unable to drive conventional cars, such as the blind, the disabled, the young, and the elderly.10

Despite these predicted far-reaching benefits, however, automated vehicles will not be entirely immune to failure. They have already caused a handful of accidents,11 and it is reasonable to assume that they will continue to do so.12 In the wake of these events, questions have arisen as to how current liability principles will apply to accidents caused by this next generation of autonomous car. Firstly, who is likely to be held liable for injury or damage that may result? Secondly, on what basis will these parties be held liable? Finally, will prospective plaintiffs struggle to successfully claim under our current liability laws in respect of loss flowing from an AV-caused accident?

In Australia, these questions have been partly considered by the National Transport Commission ('NTC'), an independent statutory body which develops land transport reform proposals for Australia's Federal and State governments. In 2015, the NTC was tasked with identifying regulatory barriers associated with the introduction of automated vehicles to Australia.13 With respect to liability, the NTC acknowledged in its 2016 policy paper14 that assigning fault for crashes involving AVs could be 'complex,' but concluded that, at this stage, in essence, only the first of the above three questions requires 'clarification,' or reform of Australian laws. Specifically, the NTC recommended that "whether human monitoring of an automated vehicle constitutes legal control of the vehicle requires clarification", and that, for the purposes of road rules and insurance schemes, the legal definition of a 'driver' should be clarified on the same basis.15 It is argued that legal reform of this sort will, in turn, clarify responsibility for a crash.16

By contrast, there is a body of literature in the United States which argues that beyond clarifying who is in control of an automated vehicle, there is an underlying need for reform of US product liability law.17 Many of these scholars suggest that due to the uniquely adaptive, and at times unforeseeable, nature of AI decision-making, existing US product liability laws are ill-equipped to protect plaintiffs who have suffered injury or damage following an AV accident, and that this might demand a 'new approach'18 to liability. Similarly, in Australia, some scholars are suggesting that a new system of liability is required, whereby the automated vehicle manufacturer is deemed to be at fault.19

In addressing the three questions posed above, and the emerging debate in the literature, this article firstly argues that whether the human driver remains liable depends on the degree to which the relevant AV is automated, and the degree of control the human driver had over the car in the events leading up to the accident. Thus, I argue that for accidents caused by highly and fully automated vehicles, plaintiffs will primarily have to bring claims against the parties which manufacture, maintain or somehow contribute to the operation of AVs. Liability would generally have to be established under the various claims available in Australia's product liability regime. Whilst contractual warranties and statutory guarantees will be available to those who have purchased an AV, those outside the contract of sale will have to bring a claim of negligence or a defective goods action under Part 3–5 of the Australian Consumer Law ('ACL')20 in respect of defects arising in the course of manufacture, design and development, or marketing of the vehicle.

In relation to the third question, I argue that prospective claimants are likely to face two barriers to establishing liability in a negligence or ACL defective goods claim. Plaintiffs may firstly struggle to determine what the cause of the defect was, and who should consequently be sued. This inability to determine the cause of an AV accident can be partly tied to the ongoing debate on the 'explainability,' or lack thereof, of AI decision-making. Secondly, it may be difficult to prove that the manufacturer was at fault in negligence, or that the automated vehicle was defective for the purposes of Part 3–5 of the ACL. Essentially, under both actions, manufacturers will be held to a duty to undertake reasonable testing of their AVs. Given that it is currently impracticable to completely test for, and eliminate, all AV errors, and due to the broader social utility the technology is likely to offer, plaintiffs may face evidentiary challenges in proving that the manufacturer's testing was unreasonable.

This situation means that a plaintiff's problem will be largely evidentiary in nature when trying to bring a negligence claim. However, with regard to the defective goods action in Part 3–5 of the ACL, I argue that these barriers reflect an inherent problem in the law if it is indeed to act as a 'strict' liability action. Concerns about victims going uncompensated for accidents, where the AV manufacturer could not reasonably have undertaken further testing, and similar concerns about liability for AI decision-making more broadly, might therefore demand reconsideration of the ongoing debate in Australia and abroad about the role of a strict product liability regime in torts, or the necessity of a specialised AV compensation scheme.

Part II of this article will provide an overview of automated vehicle technology, with a particular focus on the machine learning algorithms that enable AVs to make the decisions required to navigate the inherently complex road environment. Part III will consider the parties which may be held liable for AV accidents. Part IV will provide a descriptive overview of the tort of negligence and the ACL defective goods action, under which parties that manufacture, maintain or contribute to the operation of the automated driving systems might be held liable. In Part V, the heart of this article, I will identify the complexities that plaintiffs will likely face in negligence or Part 3–5 ACL claims, namely proving causation and fault. Finally, Part VI will offer conclusions and potential ways forward for addressing these barriers to proving liability.

2. How automated driving systems function

2.1. The decision-making process of an automated vehicle

Whilst there are inevitably some differences in automated vehicle design between manufacturers,21 general observations can be drawn about how the next generation of highly and fully automated driving systems are likely to function.

1 National Transport Commission, 'Regulatory Reforms for Automated Vehicles' (Policy Paper, National Transport Commission, November 2016) 7, 10.
2 Austroads, Trials, <https://austroads.com.au/drivers-and-vehicles/connected-and-automated-vehicles/trials>. In relation to how state and federal authorities have addressed liability in these testing initiatives, see footnote 15, below.
3 Bureau of Infrastructure, Transport and Regional Economics, 'Road Deaths Australia – December 2016' (Statistical Report, Department of Infrastructure and Regional Development, December 2016) 1. See, also, World Health Organisation, 'Global Status Report on Road Safety, 2016' (Report, World Health Organisation, 2015) xi, which estimates that the worldwide annual road death toll exceeds 1.2 million.
4 Australian Transport Council, 'National Road Safety Strategy 2011-2020' (Report, Australian Transport Council, May 2011) 23. See also Thomas Oriti, 'Government Estimates Road Crashes Costing the Australian Economy $27 Billion a Year', ABC News (online), 2 January 2017 <http://www.abc.net.au/news/2017-01-02/road-crashes-costing-australian-economy-billions/8143886>.
5 National Highway Traffic Safety Administration, 'National Motor Vehicle Crash Causation Survey' (Report to Congress, US Department of Transportation, July 2008) 23-24, noting that the critical cause of 5,096 out of 5,471 crashes analysed could be attributed to the driver.
6 Paul Cleary, 'End of the Road Toll: Driverless Cars Could Save Lives', The Australian (online), 29 December 2016 <http://www.theaustralian.com.au/life/end-of-the-road-toll-driverless-cars-could-save-lives/news-story/83a2d2ad016f2d263b7317a6a341f589>; John Markoff, 'Google Cars Drive Themselves, in Traffic', New York Times (online), 9 October 2010 <http://www.nytimes.com/2010/10/10/science/10google.html>; Mike Ramsay, 'Self-Driving Cars Could Cut Down on Accidents, Study Says', Wall Street Journal (online), 5 March 2015 <http://www.wsj.com/articles/self-driving-cars-could-cut-down-on-accidents-study-says-1425567905>.
7 This report adopts the Society of Automotive Engineers' taxonomy of automated vehicles, which categorises types of automated vehicles by degrees of automation. See, SAE International, 'Taxonomy and Definitions for Terms Related to Driving Automated Systems for On-Road Motor Vehicles' (Recommended Practice Report J3016, SAE International, September 2016).
8 Ibid 23-24.
9 National Transport Commission, above n 1, 10.
10 James Anderson et al, 'Autonomous Vehicle Technology: A Guide for Policymakers' (Research Report, RAND Corporation, 2016) 16-33.
11 See, eg, Google, 'Google Self-Driving Car Project Monthly Report: February 2016' (Report, Google, 29 February 2016) <https://static.googleusercontent.com/media/www.google.com/en//selfdrivingcar/files/reports/report-0216.pdf>; Jim Kerstetter, 'Daily Report: Tesla Driver Dies in 'Autopilot' Accident', New York Times (online), 1 July 2016 <http://www.nytimes.com/2016/07/02/technology/daily-report-tesla-driver-dies-in-autopilot-accident.html>.
12 Alexander Hevelke and Julian Nida-Rumelin, 'Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis' (2015) 21(3) Science and Engineering Ethics 619, 620.
13 National Transport Commission, above n 1, 8.
14 Ibid.
15 Ibid 59-60, see also Chapter 3 at 32-36; Chapter 5 at 43-48. On a separate note, in relation to liability issues during AV trials in Australia, the NTC has not recommended regulation on this point but has issued guidelines for testing. In the guidelines, the NTC recommends at 4.1 that to obtain a testing permit, the trialling organisation should demonstrate that they have appropriate insurance, including public liability or product liability insurance. See, National Transport Commission, 'Guidelines for Trials of Automated Vehicles in Australia' (2017) <https://www.ntc.gov.au/Media/Reports/(00F4B0A0-55E9-17E7-BF15-D70F4725A938).pdf>. This approach has been followed in AV trial legislation in Victoria (Road Safety Amendment (Automated Vehicles) Bill 2017 (Vic)) and New South Wales (Transport Legislation Amendment (Automated Vehicle Trials and Innovation) Act 2017 (NSW)), and was already the approach taken in South Australia (Motor Vehicles Act 1959 (SA), section 134H).
16 Ibid 60.
17 See, eg, Kevin Funkhouser, 'Paving the Road Ahead: Autonomous Vehicles, Products Liability, and the Need for A New Approach' [2013] (1) Utah Law Review 437; Jeffrey Gurney, 'Sue My Car Not Me: Products Liability and Accidents Involving Autonomous Vehicles' [2013] (2) University of Illinois Journal of Law, Technology and Policy 247; Andrew Garza, 'Look Ma, No Hands!: Wrinkles and Wrecks in the Age of Autonomous Vehicles' (2012) 46(3) New England Law Review 581; Sophia Duffy and Jamie Hopkins, 'Sit, Stay, Drive: The Future of Autonomous Car Liability' (2013) 16(3) SMU Science and Technology Law Review 453; David Vladeck, 'Machines Without Principals: Liability Rules and Artificial Intelligence' (2014) 89(1) Washington Law Review 117; cf, Gary Marchant and Rachel Lindor, 'The Coming Collision Between Autonomous Vehicles and the Liability System' (2012) 52(4) Santa Clara Law Review 1321.
18 Funkhouser, above n 17.
19 See, eg, Lynden Griggs, 'A Radical Solution for Solving the Liability Conundrum of Autonomous Vehicles' (2017) 25 Competition and Consumer Law Journal 151; see also, Mark Brady et al, 'Automated Vehicles and Australian Personal Injury Compensation Schemes' (2017) 24(1) Torts Law Journal 32.
20 Competition and Consumer Act 2010 (Cth) sch 2 ('Australian Consumer Law'), pt 3–5.
21 See, eg, the debate surrounding Tesla's choice to rely solely on radar and camera sensors, whereas most other manufacturers are incorporating an additional LIDAR laser sensor. Olivia Solon, 'Lidar: The Self-Driving Technology that Could Help Tesla Avoid Another Tragedy', The Guardian (online), 7 July 2016 <https://www.theguardian.com/technology/2016/jul/06/lidar-self-driving-technology-tesla-crash-elon-musk>.
Automated vehicles are installed with an array of sensors that gather data about the car's surrounding environment.22 Cameras and radar units are widely used to detect nearby objects and the car's distance from them, and Light Detection and Ranging ('LIDAR') units may also be used, which generate a 360° scanned image of the car's surrounding environment using laser rangefinders.23 Software algorithms then interpret this data and control the vehicle's course.24 The process by which the algorithms drive the car can be simplistically described as involving three stages, namely: (a) car localisation, (b) obstacle detection and behaviour prediction, and (c) path planning.25

The automated driving system firstly locates its position in the world by way of GPS and an inertial navigation system (INS), the latter using gyroscopes and accelerometers to calculate the car's position, orientation and speed.26 A number of additional techniques may also be used to locate the car's position more precisely on the streetscape. For example, both camera27 and LIDAR28 units can be used to detect lane markings and position the car appropriately. Furthermore, companies such as Ford29 and Google30 use techniques that compare this sensor data with 'prebuilt maps' to further improve the accuracy of the car's location on the road.31

In the second stage, the car's sensors detect any potential obstacles in its path, such as other cars, cyclists, pedestrians and road construction sites.32 The driving system algorithms, drawing heavily on machine learning programming techniques and probabilistic reasoning,33 categorise these objects and make predictions about how the obstacles are likely to act. For instance, Google's automated vehicle prototypes have been trained to be able to identify a cyclist through camera footage and recognise that a cyclist who holds their arm out to one side is likely signalling their intent to turn.34 However, as automated vehicle manufacturers are not able to train the algorithms to recognise every possible contingency that might take place within the driving environment, the cars must also be programmed to take into account anomalous road situations. Automated vehicles have already faced road situations as obscure as a cyclist performing a track-stand on a fixed-gear bike,35 and "a woman in a wheelchair, armed with a broom, chasing a turkey."36 In such novel circumstances, where the algorithm is unable to make a probabilistically confident decision as to what the perceived object is or how it is likely to behave, the car will proceed conservatively or potentially stop altogether, depending on the clarity of the situation.37

Finally, the driving system plans a path towards its destination. Machine learning techniques are again utilised at this stage, with the actions of the vehicle being defined by overall priorities. For instance, Audi's prototype automated vehicle plans its decisions on the basis of two overall questions, namely, (1) 'is it possible?' (safe and legal) and, (2) 'is it beneficial?' (for a more comfortable ride).38 Thus, in planning its path forwards, the vehicle avoids detected obstacles, whilst also observing road rules, such as changes of traffic lights as recognised through the camera sensor.39

2.2. Training and testing machine learning algorithms

One of the principal difficulties faced in the development of automated vehicle technology is programming the cars to be able to cope with the complex and dynamic nature of road driving.40 Plainly, it would be impossible to code explicit rules for the car to follow due to the sheer variety of road situations that an automated vehicle will encounter. Automated vehicle technology therefore "pervasively" uses machine learning techniques,41 which allow an algorithm to make a prediction or decision about something in the world without the need for human intervention.42

22 Anderson et al, above n 10, 58; Guilbert Gates et al, 'When Cars Drive Themselves', New York Times (online), 14 December 2016 <https://www.nytimes.com/interactive/2016/12/14/technology/how-self-driving-cars-work.html>.
23 Anderson et al, above n 10, 58, 60–62; see also, Sebastian Thrun et al, 'Stanley: The Robot that Won the DARPA Grand Challenge' (2006) 23(9) Journal of Field Robotics 661, 664, which provides a detailed description of some of the machine learning techniques and sensors developed by researchers during the DARPA Grand Challenge in 2005, many of which are still used in automated vehicles today. Sebastian Thrun, who was involved in the DARPA project, was later recruited to head Google's automated vehicle project.
24 Anderson et al, above n 10, 58.
25 Eric Jaffe, 'The First Look at How Google's Self-Driving Car Handles City Streets', CityLab (online), 28 April 2014 <http://www.citylab.com/tech/2014/04/first-look-how-googles-self-driving-car-handles-city-streets/8977/>; see also, Anderson et al, above n 10, 58–59.
26 Anderson et al, above n 10, 63–64.
27 See, eg, Samuel Gibbs, 'What's it Like to Drive with Tesla's Autopilot and How Does it Work?', The Guardian (online), 1 July 2016 <https://www.theguardian.com/technology/2016/jul/01/tesla-autopilot-model-s-crash-how-does-it-work>.
28 Albert Huang et al, 'Finding Multiple Lanes in Urban Road Networks with Vision and Lidar' (2009) 26(2) Autonomous Robots 103.
29 Tamara Warren, 'We Took a Ride in Ford's Self-Driving Car', The Verge (online), 13 September 2016 <http://www.theverge.com/2016/9/13/12895690/ford-self-driving-car-ride-detroit-video>.
30 Jaffe, above n 25.
31 Anderson et al, above n 10, 64.
32 Jaffe, above n 25.
33 Thrun et al, above n 23, 662.
34 Jaffe, above n 25.
35 Matt McFarland, 'How Fixed-Gear Bikes Can Confuse Google's Self-Driving Cars', The Washington Post (online), 26 August 2015 <https://www.washingtonpost.com/news/innovations/wp/2015/08/26/how-fixed-gear-bikes-can-confuse-googles-self-driving-cars/?utm_term=.11f75e05e3c7>.
36 Alex Davies, 'Google's Lame Demo Shows Us How Far Its Robo-Car Has Come', Wired (online), 10 May 2015 <https://www.wired.com/2015/10/googles-lame-demo-shows-us-far-robo-car-come/>.
37 Ibid; Jaffe, above n 25.
38 Alex Davies, 'I Rode 500 Miles in a Self-Driving Car and Saw the Future: It's Delightfully Dull', Wired (online), 1 July 2015 <https://www.wired.com/2015/01/rode-500-miles-self-driving-car-saw-future-boring/>.
39 Jaffe, above n 25.
40 Anderson et al, above n 10, 60.
41 Thrun et al, above n 23, 691.
The algorithm is designed to achieve this through its ability to learn from data, fed to it by example and through its own trial and error experience. On the basis of this learned experience, the algorithm programs its own internal decision logic as to the optimal way to complete a task.43

Once the algorithm has established an internal decision-making logic, it is able to make its own driving decisions in the inexorably changing driving environment, without human intervention. For example, at the 'obstacle detection' stage, it is by repeatedly feeding the algorithm camera footage of a pedestrian that the algorithm begins to 'learn' the logic of what a pedestrian looks like. A similar process would be used to train the driving system to make predictions as to how pedestrians are likely to behave.44 An example of machine learning techniques at the 'path planning' stage can be seen in how Google has taught its AVs to approach turns at an intersection, whereby the algorithm was presented with examples showing how human-driven cars approach a turn differently depending on whether the vehicle is approaching an intersection with a green light, or is turning from a stationary position.45 As a result, the algorithm learned its own decision logic as to how it should approach left-hand turns in the future to optimise smooth driving and passenger comfort.

However, the fundamental problem posed by machine learning techniques for the allocation of liability is that manufacturers are not, in principle, capable of fully predicting the future behaviour of the algorithms.46 This is due to the adaptive quality of the algorithms and precisely because they are able to learn from their experiences and interactions with the driving environment.47 Thus the decision logic by which they operate is not fixed at the time of production, as it is partly shaped by the experiences of the machine.48 There is therefore a possibility that automated vehicle accidents may occur as a result of the automated driving system making an unforeseeable decision, based on logic partially adopted through its own experience.

It should be noted, however, that it is inaccurate to think of these machines as exercising unfettered discretion, or as being able to set their own priorities. It is the manufacturers and software engineers who create the parameters, or 'space', in which these machines exercise discretion.49 Whilst the precise decision-making process of a machine learning algorithm is often unclear, the programmer does define the overall priorities by which it should make decisions, and has the ability to tweak the weighting of factors by which it makes decisions. One clear example can be seen in how Google has incorporated into its automated vehicle algorithm's decision-making process the premise that buses and large vehicles have a lower probability of yielding to a merging car than other types of vehicles.50

However, what needs to be 'tweaked' or modified in the algorithm may not always be obvious, or foreseeable, until that error has arisen under testing. Indeed, Matthias notes that errors are an unavoidable feature of learning algorithms.51 Consequently, algorithm testing constitutes one of the most important aspects of automated vehicle development, as it highlights to manufacturers how their automated driving systems need refining.52 The importance of testing can be observed in the emphasis that automated vehicle manufacturers place on maximising the distance their algorithms have driven both on the road and in simulations.53

However, it is not feasible to completely test the vehicles for the indeterminate number of contingencies that may arise in the driving environment.54 This is aptly summarised by the CEO of Toyota's Research Institute, who observed that even after "testing millions of miles, which car manufacturers, including us, are doing on physical cars, that doesn't get you anywhere near close coverage of possible things that could happen to all cars in the world in the course of a year."55 Thus automated vehicles are likely to operate with a small, but inherent, degree of accident risk. The question then arises: how much testing is enough? We will return to this question in further detail, and its implications for establishing liability, in Part V of this article.

3. Who will be held liable?

The majority of motor vehicle accidents on our roads today are caused by human driver error,56 and this is reflected in the definitions of Australia's various compulsory third-party ('CTP') insurance schemes.57

42 Andreas Matthias, 'The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata' (2004) 6 Ethics and Information Technology 175, 177; Michael Copeland, 'What's the Difference Between Artificial Intelligence, Machine Learning, and Deep Learning?', Nvidia, 29 July 2016 <https://blogs.nvidia.com/blog/2016/07/29/whats-difference-artificial-intelligence-machine-learning-deep-learning-ai/>.
43 Carol Reiley, 'Deep Driving', MIT Technology Review (online), 18 October 2016 <https://www.technologyreview.com/s/602600/deep-driving/>; Matthias, above n 42, 179.
44 See, Jaffe, above n 25.
45 Ibid.
46 Matthias, above n 42.
47 Emad Dahiyat, 'Intelligent Agents and Liability: Is It A Doctrinal Problem or Merely a Problem of Explanation?' (2010) 18(1) Artificial Intelligence Law 103, 106.
48 Matthias, above n 42, 177, 182.
49 Ibid 182.
50 Google, above n 11.
51 Matthias, above n 42, 182.
52 Evan Ackerman, 'Toyota's Gill Pratt on Self-Driving Cars and the Reality of Full Autonomy', IEEE Spectrum (online), 23 January 2017 <http://spectrum.ieee.org/cars-that-think/transportation/self-driving/toyota-gill-pratt-on-the-reality-of-full-autonomy#qaTopicSeven>; see also, Philip Koopman and Michael Wagner, 'Challenges in Autonomous Vehicle Testing and Validation' (2016) 4(1) SAE International Journal of Transportation Safety 15, which outlines the crucial role that vehicle testing will play in ensuring that automated vehicles are adequately safe for widespread use.
53 Ibid; Tim Higgins, 'Google's Self-Driving Car Program Odometer Reaches 2 Million Miles', Wall Street Journal (online), 5 October 2016 <http://www.wsj.com/articles/googles-self-driving-car-program-odometer-reaches-2-million-miles-1475683321>; Google, 'Google Self-Driving Car Project Monthly Report: January 2016' (Report, Google, 31 January 2016) <https://static.googleusercontent.com/media/www.google.com/en//selfdrivingcar/files/reports/report-0116.pdf>.
54 Koopman and Wagner, above n 52.
55 Ackerman, above n 52.
56 National Transport Commission, 'Regulatory Options for Automated Vehicles' (Discussion Paper, National Transport Commission
However, advanced automated vehicles are likely to change this paradigm,58 and thus the question arises: which parties are likely to be held liable for injury or damage that might result from an AV accident?

3.1. Extent of human driver liability

This question firstly turns on the degree to which the relevant car is automated and the level of control the human driver had over the events leading up to the accident.59 There will be differences in design between automated vehicle models and the extent to which they will allow the human driver to switch off from the driving task.

The Society of Automotive Engineers' taxonomy of automated driving systems60 is instructive in clarifying these differences between models, and also helps clarify when a human driver should remain liable. It is particularly necessary to distinguish between conditionally automated vehicles (Level 3) and the next generation of highly (Level 4) and fully (Level 5) automated vehicles.

Conditionally automated driving systems, such as Audi's 'Highway Pilot',61 are designed to operate autonomously on the basis that a human driver remains alert and ready to retake control of the vehicle when either: (a) there is an evident system failure, or (b) they are requested to intervene by the automated driving system.62 For example, the automated system may request the human driver to intervene when the car has approached an anomalous road situation such as a construction site or a passing emergency vehicle.63 For these level

to the manufacturer's instructions, and whether the car gave the driver enough time to respond.65

In relation to human control over conditionally automated vehicles, Australia's National Transport Commission has adopted the position that the human driver of a Level 3 AV should be presumed to be in full legal control of the vehicle, unless or until a new position is developed.66 It should be noted, though, that this position was proposed mostly with an eye to law enforcement, and the question of who should be held responsible for breaches of road rules.67 It is unclear whether the NTC's approach would extend to civil liability, as the human driver might not necessarily be at fault if, for example, the ADS makes an erroneous decision without granting the driver the opportunity to retake control of the vehicle. At this stage, the NTC has not revisited this policy position in relation to civil liability, or in relation to the next generation of highly and fully automated vehicles.

However, the focus of this article is on proving liability for accidents involving the next generation of highly and fully automated vehicles. These automated driving systems will no longer require a human driver to supervise the driving task or to remain alert and ready to intervene.68 For example, Volvo's prototype Level 4 automated driving system is marketed in a way that implies that the driver will be able to read a newspaper, work, watch television or use a mobile phone whilst the system is engaged.69 Furthermore, both Ford70 and Google71 are developing Level 4 prototypes that are not equipped with steering wheels or pedals, in which case there is no human driver control of the vehicle in any relevant sense. Instead, Level 4 and Level 5 systems will have inbuilt redundancy that is
3 automated vehicles, the human driver might still be held li- able to pilot the car to a ‘minimal risk condition,’ and will no
able for an accident if, for instance, the vehicle requests them longer rely on a fall-back human driver to prevent accidents.72
to take control of the vehicle but they negligently fail to do It should be noted though that level 4 automated vehicles,
so due to distraction.64 However, there could be a number of as distinct from level 5 automated vehicles,73 will still be lim-
variables that come into play in determining whether, and to ited in use to a particular operational domain. These opera-
what extent, a human driver ought to be held responsible in
a particular case, such as what the driver was distracted by, 65
National Transport Commission, above n 1, 96.
66
what the proper/reasonable use of the vehicle was according National Transport Commission, above n 1, 11.
67
Ibid 32-36.
68
SAE International, above n 7, 23.
sion, May 2016) 98. In addition to driver error, vehicle accidents 69
Volvo, Autonomous Driving <http://www.volvocars.com/intl/
might also be attributed to a vehicle malfunction or force majeure, about/our-innovation-brands/intellisafe/autonomous-driving>;
see, Marchant and Lindor, above n 17, 1326. Alissa Walker, ‘Volvo Is Designing the Autonomous Car
57
See, Mark Brady et al, ‘Automated Vehicles and Australian Per- That Could Eliminate All Traffic Deaths’, Gizmodo, 18
sonal Injury Compensation Schemes’ (2017) 24(1) Torts Law Journal November 2015 <http://gizmodo.com/volvo- is- designing-
32. the- autonomous- car- that- could- elimin- 1743302437>.
58 70
Ibid. See Brady et al for a detailed consideration of how Aus- Ford, Ford Targets Fully Autonomous Vehicle for Ride Shar-
tralia’s CTP regimes are unlikely to apply to automated vehicle ac- ing in 2021 <https://media.ford.com/content/fordmedia/
cidents under existing definitions. fna/us/en/news/2016/08/16/ford-targets-fully-autonomous-
59
National Transport Commission, above n 1, 60; Marchant and vehicle- for- ride- sharing- in- 2021.html>.
71
Lindor, above n 17, 1326; John Markoff, ‘Google’s Next Phase in Driverless Cars: No
60
SAE International, above n 7. Steering Wheel or Brake Pedals’, New York Times (online), 27 May
61
Audi USA, Piloted Driving <https://www.audiusa.com/ 2014 <https://www.nytimes.com/2014/05/28/technology/googles-
newsroom/topics/2016/audi- piloted- driving>; Davies, above next- phase- in- driverless- cars- no- brakes- or- steering- wheel.
n 32. html>.
62 72
SAE International, above n 7, 23. Ibid.
63 73
See, eg, the following account of a test drive in one It should be noted that arguably we’re still far away from tech-
of BMW’s level 3 automated vehicles, Tim Adams, ‘Self- nologically achieving full level 5 autonomy, defined as being able
Driving Cars: From 2020 You Will Become a Permanent to operate autonomously in any condition that a human driver
Backseat Driver’, The Guardian (online), 13 September 2015 could. See, Ackerman, above n 52; SAE International, above n
<https://www.theguardian.com/technology/2015/sep/13/ 7, 24. The Australian Driverless Vehicle Initiative has predicted
self- driving- cars- bmw- google- 2020- driving>. that level 4 autonomy is likely to be implemented by 2020-2025,
64
Marchant and Lindor, above n 17, 1327. whereas level 5 autonomy can be expected by 2026 at the ear-
1320 computer law & security review 34 (2018) 1314–1332

tional domains will likely be defined by geo-fenced areas,74 such as operation within a pre-mapped central business district or campus area,75 or on specified highways.76

The human driver might still be held liable if, for example, they fail to exercise reasonable care by engaging the vehicle’s automated driving mode against manufacturer instructions in an improper operational domain, or in inclement weather conditions, and this decision can be shown to have caused the accident.77 It is worth noting though that at this stage it appears that many of the proposed level 4 automated vehicles will not permit a human driver to engage the automated mode outside these specified operational domains, and the driving system will instead notify the driver when it is appropriate to do so.78

However, assuming a human driver (if a driver is required at all79) activates a level 4 or level 5 automated driving system at the appropriate time, they are unlikely to be held liable for an accident that occurs whilst the system is engaged.80

3.2. Other liable parties

Assuming human drivers would not be held liable for the majority of accidents involving highly (level 4) and fully (level 5) automated vehicles, the parties that manufacture, maintain or somehow contribute to the operation of the automated driving system would principally be held liable.

There are a number of parties that might potentially be involved in manufacturing and operating automated vehicles, and who might be responsible for accident-causing defects. Automated vehicle manufacturers, or any subcontractors to whom they may outsource tasks, such as the provision of software engineering and mapping services, or the manufacture of sensor components,81 might all cause an accident-causing driving system defect, and may subsequently be held liable. Similarly, providers of smart infrastructure systems, or road authorities who fail to rectify a known flaw in road lane markings or signage (upon which automated vehicles will rely in the future) might be held liable.82 Businesses might also be held liable for conduct that did not contribute to an accident-causing defect, but is nonetheless found to be negligent. For example, this could arise where an operator of an automated ride-sharing service83 fails to exercise reasonable care in selecting an outsourced automated vehicle manufacturer.

Ultimately, the relevant party for liability purposes will depend heavily on the specific facts of the case, and not all potential liability exposures could be anticipated or enumerated in this article. Thus, as a matter of scope, this article focuses predominantly on potential claims against automated vehicle manufacturers and their component and service subcontractors, as arguably, these parties will likely be held responsible most of the time.84

4. Potential product liability claims

Under current Australian law, claims against automated vehicle manufacturers and their subcontractors would primarily be handled through the various theories of liability available in Australia’s product liability regime. These include the tort of negligence, the defective goods action under Part 3–5 of the Australian Consumer Law,85 contractual warranties, and statutory guarantees under Part 3–2 of the ACL.86 As a matter of scope, this article limits its analysis to tortious liability and the ACL defective goods action. Actions on the basis of contractual warranties and statutory guarantees would only generally be available to claimants who have been party to a contract of sale of an automated vehicle.87 These actions, whilst providing significant benefits for eligible claimants due to the lack of a fault requirement, would not provide an avenue of redress for a wide range of other parties who might suffer injury or damage from automated vehicle accidents, such as third-party passengers or other road users.88

The tort of negligence is likely to be an important basis of liability in the context of automated vehicle accidents. It is accepted that manufacturers owe a duty to take reasonable care in preventing their products from causing foreseeable injury or damage to the ultimate user of that product.89 Furthermore, in the case of a defective vehicle, the scope of the manufacturer’s duty may extend beyond the driver to cover injury or damage suffered by a passenger or other third-party road users.90

Assuming that automated vehicle manufacturers will broadly owe a duty of care to road users, the question to be answered in a given case would be whether the particular automated vehicle manufacturer breached this duty in producing a defective vehicle. Under the statutory formula adopted by Australia’s Civil Liability Acts (‘CLA’), a breach is found where the risk of harm posed by the allegedly defective product was

57 See, Mark Brady et al, ‘Automated Vehicles and Australian Personal Injury Compensation Schemes’ (2017) 24(1) Torts Law Journal 32.
58 Ibid. See Brady et al for a detailed consideration of how Australia’s CTP regimes are unlikely to apply to automated vehicle accidents under existing definitions.
59 National Transport Commission, above n 1, 60; Marchant and Lindor, above n 17, 1326.
60 SAE International, above n 7.
61 Audi USA, Piloted Driving <https://www.audiusa.com/newsroom/topics/2016/audi-piloted-driving>; Davies, above n 32.
62 SAE International, above n 7, 23.
63 See, eg, the following account of a test drive in one of BMW’s level 3 automated vehicles, Tim Adams, ‘Self-Driving Cars: From 2020 You Will Become a Permanent Backseat Driver’, The Guardian (online), 13 September 2015 <https://www.theguardian.com/technology/2015/sep/13/self-driving-cars-bmw-google-2020-driving>.
64 Marchant and Lindor, above n 17, 1327.
65 National Transport Commission, above n 1, 96.
66 National Transport Commission, above n 1, 11.
67 Ibid 32-36.
68 SAE International, above n 7, 23.
69 Volvo, Autonomous Driving <http://www.volvocars.com/intl/about/our-innovation-brands/intellisafe/autonomous-driving>; Alissa Walker, ‘Volvo Is Designing the Autonomous Car That Could Eliminate All Traffic Deaths’, Gizmodo, 18 November 2015 <http://gizmodo.com/volvo-is-designing-the-autonomous-car-that-could-elimin-1743302437>.
70 Ford, Ford Targets Fully Autonomous Vehicle for Ride Sharing in 2021 <https://media.ford.com/content/fordmedia/fna/us/en/news/2016/08/16/ford-targets-fully-autonomous-vehicle-for-ride-sharing-in-2021.html>.
71 John Markoff, ‘Google’s Next Phase in Driverless Cars: No Steering Wheel or Brake Pedals’, New York Times (online), 27 May 2014 <https://www.nytimes.com/2014/05/28/technology/googles-next-phase-in-driverless-cars-no-brakes-or-steering-wheel.html>.
72 Ibid.
73 It should be noted that arguably we’re still far away from technologically achieving full level 5 autonomy, defined as being able to operate autonomously in any condition that a human driver could. See, Ackerman, above n 52; SAE International, above n 7, 24. The Australian Driverless Vehicle Initiative has predicted that level 4 autonomy is likely to be implemented by 2020-2025, whereas level 5 autonomy can be expected by 2026 at the earliest. See, ADVI, Levels of Automated Vehicles <http://advi.org.au/australia/levels-of-automation/>.
74 Ackerman, above n 52.
75 See, eg, Ford, above n 70.
76 SAE International, above n 7, 23.
77 Marchant and Lindor, above n 17, 1327.
78 See, eg, Walker, above n 69.
79 See, Ford, above n 70; Markoff, above n 71.
80 Though, one could envisage exceptional circumstances where the human driver engages in plainly negligent behaviour, such as hacking or tampering with the automated driving system. See, Steve Connor, ‘First Self-Driving Cars will be Unmarked so that Other Drivers don’t try to Bully Them’, The Guardian (online), 30 October 2016 <https://www.theguardian.com/technology/2016/oct/30/volvo-self-driving-car-autonomous>.
81 See, eg, the partnerships that Ford has developed with a LIDAR supplier, machine learning and computer vision companies, as well as a 3D mapping startup. Ford, above n 70.
82 Clayton Utz, ‘Driving into the Future: Regulating Driverless Vehicles in Australia’ (Report, Clayton Utz, August 2016), 17.
83 Adam Thierer, ‘When the Trial Lawyers Come for the Robot Cars’, Slate (online), 10 June 2016 <http://www.slate.com/articles/technology/future_tense/2016/06/if_a_driverless_car_crashes_who_is_liable.html>.
84 Marchant and Lindor, above n 17, 1327–1328.
85 Australian Consumer Law, pt 3–5.
86 Ibid pt 3-2.
87 See, Kit Barker et al, The Law of Torts in Australia (Oxford University Press, 5th ed, 2012) 633–4.
88 Ibid.
89 Donoghue v Stevenson [1932] AC 562, 599 (Lord Atkin).
90 Stennett v Hancock [1939] 2 All ER 578; Marschler v Masser’s Garage (1956) 2 DLR (2d) 484.
foreseeable and ‘not insignificant’, and a reasonable manufacturer in their position91 would have taken precautions to alleviate this risk of harm.92 Determining whether a reasonable manufacturer would have taken precautions requires the consideration of a number of factors, such as the probability of harm occurring if care were not taken, the likely seriousness of the harm, the burden of taking precautions to avoid the risk of harm and the social utility of the activity that creates the risk of harm.93 The test is essentially the same as the risk-utility calculus94 that is increasingly being employed in the United States for product liability matters,95 in that it requires a cost-benefit analysis balancing the risks inherent in the design or formulation of a product against the costs of adopting a reasonable alternative design.96 If the design of the system is not itself unreasonably risky in these terms, then the issue will go to the reasonableness of the warnings and information that accompany the purchase or use of the AV.

Automated vehicle manufacturers might alternatively be held liable under the defective goods action in Part 3–5 of the ACL. When initially passed,97 the defective goods action was intended to “introduce a strict product liability regime into Australian law”98 and was modelled on the 1985 EC Product Liability Directive,99 though Australian courts have rarely referred to European jurisprudence in their decisions.100 It was argued during the reform process that a defective goods action would be desirable to: (1) ensure that manufacturers bear the risks of defective products, as they are arguably in a better position than victims to spread the losses that flow from product defects; and (2) to reduce the costs of apportioning liability for product-related losses.101 At the time, it was perceived that the existing causes of action were unable to provide victims with appropriate remedies - in the case of negligence, due to the complexities and difficulties of proving fault and causation and the legal costs required to do so.102 However, whether these objectives animated the final version of the defective goods action (which lacks an objects clause) is not entirely clear.103 The extent to which the defective goods action indeed achieves these objectives and imposes truly ‘strict’ liability, and the implications of this for AV liability, is further discussed below.

In the context of automated vehicles, liability would be established under Part 3–5 if an AV manufacturer supplies a vehicle with a “safety defect”,104 and that defect causes loss.105 The relevant test for determining ‘defectiveness’ in a Part 3–5 action is the consumer expectations test, which stipulates that a good is defective if its “safety is not such as persons generally are entitled to expect”.106 The Part 3–5 defective goods test is again broadly analogous to the consumer expectations test that has historically been applied in the United States.107 Manufacturers may be held liable under either of these theories for defects stemming from the process of manufacture, design and development, or marketing.108 Potential examples of such claims in the AV context will be discussed below.

4.1. Manufacturing defects

A manufacturing defect, or ‘bench error’, refers to an inadvertent error in the production process which results in a flawed product. In cases where a defect in manufacture is alleged, the product typically does not meet the manufacturer’s own intended design specifications.109 Therefore, Davies observes that in such cases the acceptable standard of safety for the particular type of product is not in issue,110 as distinct from design defects.

In the automated vehicle context, a manufacturing defect might firstly arise in a ‘faulty workmanship’ style case.111 For example, this type of defect might occur where a component part, such as an AV LIDAR sensor, is carelessly wired together and a resulting malfunction causes an accident. For an action

91 In other words, “a reasonable person in the position of the defendant”, see, Graham Barclay Oysters v Ryan (2002) 211 CLR 540, [192] (Gummow and Hayne JJ).
92 Civil Law (Wrongs) Act 2002 (ACT) s 43(1); Civil Liability Act 2002 (NSW) s 5B(1); Civil Liability Act 2003 (Qld) s 9(1); Civil Liability Act 1936 (SA) s 32(1); Civil Liability Act 2002 (Tas) s 11(1); Wrongs Act 1958 (Vic) s 48(1); Civil Liability Act 2002 (WA) s 5B(1); see also, Wyong Shire Council v Shirt (1980) 146 CLR 40, 47-48 (Mason J); Martin Davies, ‘Product Liability’ in Carolyn Sappideen and Prue Vines (eds), Fleming’s The Law of Torts (Lawbook, 2011) 555, 557-558.
93 Civil Law (Wrongs) Act 2002 (ACT) s 43(2); Civil Liability Act 2002 (NSW) s 5B(2); Civil Liability Act 2003 (Qld) s 9(2); Civil Liability Act 1936 (SA) s 32(2); Civil Liability Act 2002 (Tas) s 11(2); Wrongs Act 1958 (Vic) s 48(2); see also Wyong Shire Council v Shirt (1980) 146 CLR 40, 47-48 (Mason J).
94 Under the risk-utility test, liability is established where it can be shown that the “foreseeable risks of harm posed by the product could have been reduced by the adoption of a reasonable alternative design”, see, Restatement (Third) of Torts: Products Liability §2(b) (1998).
95 See, Aaron Twerski and James Henderson, ‘Manufacturer’s Liability for Defective Product Designs: The Triumph of Risk-Utility’ (2009) 74(3) Brooklyn Law Review 1061, 1065.
96 See, above n 93; see also, Davies, above n 92, 557-558.
97 See, Trade Practices Act 1974 (Cth), Part VA. The defective goods action was retained in substance in Part 3-5 of the Australian Consumer Law, which replaced the Trade Practices Act 1974 (Cth) in 2011.
98 Marnie Hammond, ‘The Defect Test in Part VA of the Trade Practices Act 1974 (Cth): Defectively Designed?’ (1998) 6 Torts Law Journal 29, 29; citing Explanatory Memorandum to the Trade Practices Amendment Bill 1992 (Senate).
99 Jocelyn Kellam and Luke Nottage, ‘Happy 15th Birthday, Part VA TPA! Australia’s Product Liability Morass’ (2007) 15 Competition & Consumer Law Journal 26, 26-27.
100 Ibid.
101 Kellam and Nottage, above n 99, 27–28; citing, Australian Law Reform Commission, ‘Product Liability’ (Report No. 51, Australian Law Reform Commission, 1989), para 7.06.
102 Hammond, above n 98, 29; Barker et al, above n 87, 649.
103 See, Hammond, above n 98.
104 Australian Consumer Law ss 2(1); 138(1).
105 Ibid ss 139–141.
106 Ibid s 9(1).
107 Defectiveness under the US test is established where “the danger posed by the design is greater than an ordinary consumer would expect when using the product in an intended or reasonably foreseeable manner.” See, Terrence Kiely and Bruce Ottley, Understanding Products Liability Law (LexisNexis, 2006), 135; cited in Funkhouser, above n 17.
108 Davies, above n 92, 557.
109 Ibid; Barker et al, above n 87, 643.
110 Davies, above n 92, 557.
111 See, e.g. Phillips v E W Lundberg & Son (1968) 88 WN (Pt 1) (NSW) 166.
of negligence, mere proof that a person has suffered loss due to a defective LIDAR sensor would not be itself conclusive of a manufacturer’s liability. Plaintiffs will also have to prove that the relevant manufacturer or subcontractor caused the defect. Difficult questions of causation may therefore arise, as the defect may have arisen subsequent to the product leaving the manufacturer’s control, for instance from ordinary wear and tear, intervening repairs, tampering, improper handling from an intermediate party, or a range of other possible causes. This issue is considered in more depth in Section V, below.

Manufacturing defects might also arise from inadequate quality control. An example in the AV context could be an unintended software bug which impairs the operation of the driving system. For instance, Hynes notes that inadvertent mistakes such as typographical errors made by programmers could be characterised as defects arising in the course of manufacture.112 However, it is also recognised that the removal of all bugs or errors from a computer program may often be impossible,113 and thus the relevant breach that might have to be alleged in certain cases may cut to the question of whether a reasonable amount of testing was undertaken to detect errors. In this regard, the distinction between manufacturing and design defects may to some extent break down when considering software defects and errors.114

4.2. Design and development defects

A plaintiff might also look to establish that an automated vehicle manufacturer should be held liable for a design or development defect. These defects arise as a result of an error of judgment in the design or formulation process of a product.

A useful illustration of a potential design defect claim in the AV context can be seen in the well-publicised fatal accident involving a Tesla vehicle in Florida,115 and the design choices behind Tesla’s Model S ‘autopilot’ driving system.116 In the accident, the Tesla car’s camera sensors failed to distinguish the white side of a semi-trailer from the ‘brightly lit sky’ behind it, and the driving system consequently failed to brake when the semi-trailer pulled in front of the car.117 Tesla had made two design decisions which arguably were key contributing factors to the accident, namely: (1) to rely solely on camera and radar sensors, rather than incorporating a further LIDAR sensor;118 and (2) for the decision-making algorithm to treat the camera sensor as a ‘primary’ source of information, with the radar sensor’s data being merely ‘supplementary.’119 Tesla subsequently updated their algorithm to also classify the radar data as a primary source of information, which, Elon Musk later noted, would likely have prevented the accident.120 Furthermore, some commentators have claimed that LIDAR sensors might be a necessary AV design choice, in order to balance the shortcomings of camera and radar sensors and to improve the obstacle-detecting accuracy of automated driving systems overall.121

If such an accident were to occur in Australia, claimants might be more likely to succeed in alleging that the decision to treat the camera as the primary source of decision-making data constituted a design defect attracting liability in negligence or under Part 3–5 of the ACL. In this respect, it is well known in the industry that camera sensors alone have their limitations and there is a risk that they may be unable to collect useful data in certain ambient conditions, such as in fog, heavy rain, or when faced with direct and bright light.122 Furthermore, the costs of tweaking the algorithm so that the radar data was treated as an equal primary source would presumably not be great, particularly in comparison to the foreseeable and serious risks of relying solely on camera data. Finally, the requirement of such a standard of care would arguably not be so burdensome as to cripple the industry’s ability123 to provide a potentially life-saving technology.124 The fact that there was an alternative and safer design available which was relatively cost-effective to implement could overall lead to a conclusion that Tesla’s design was defective.125

A plaintiff might face more difficulty in establishing fault or defectiveness with respect to Tesla’s omission of LIDAR sensors in its driving system design. The courts can be reticent to decide questions that cut to the appropriateness of a deliberate design decision made on the basis of cost and a certain probability of risk, both for claims of negligence126 and

112 Paul Hynes, ‘Doctors, Devices and Defects: Products Liability for Defective Medical Expert Systems in Australia’ (2004) 15 Journal of Law, Information and Science 7.
113 Ibid.
114 Ibid.
115 Mike Spector and Ianthe Dugan, ‘Tesla Draws Scrutiny After Autopilot Feature Linked to a Death’, Wall Street Journal (online), 30 June 2016 <https://www.wsj.com/articles/tesla-draws-scrutiny-from-regulators-after-autopilot-feature-is-linked-to-a-death-1467319355>; Alex Davies, ‘Tesla’s Autopilot has had its First Deadly Crash’, Wired (online), 30 June 2016 <https://www.wired.com/2016/06/teslas-autopilot-first-deadly-crash/>.
116 It should be noted that Tesla’s autopilot feature is considered to provide only level 2, or partial driving automation, see Will Oremus, ‘Is Autopilot a Bad Idea?’ Slate (online), 6 July 2016 <http://www.slate.com/articles/technology/future_tense/2016/07/is_tesla_s_style_of_autopilot_a_bad_idea_volvo_google_and_others_think_so.html>; SAE International, above n 7, 23. However, in the context of liability, this effectively only raises questions as to the ability of the human driver to intervene prior to a crash, as discussed in Part IIIA, above. The example is still pertinent to the question of an alleged design defect if it is assumed the human driver could not intervene, or if one assumes a hypothetical that the same design choice was made in the design of a level 4 automated vehicle.
117 Tesla, A Tragic Loss, June 30 2016, <https://www.tesla.com/blog/tragic-loss>.
118 Solon, above n 21.
119 Harrison Weber, ‘Elon Musk unveils ‘massive’ Tesla Autopilot 8.0 update using existing radar and fleet learning’, Venture Beat (online), 11 September 2016 <http://venturebeat.com/2016/09/11/elon-musk-unveils-massive-tesla-autopilot-8-0-update-using-existing-radar-and-fleet-learning/>.
120 Ibid.
121 Solon, above n 21.
122 Anderson et al, above n 10, 61; Solon, above n 21.
123 Most automated vehicle manufacturers do not currently design their vehicles to rely primarily on a single camera sensor, see, e.g., Gates et al., above n 22 (Google); Warren, above n 29 (Ford); Adams, above n 63 (BMW).
124 See, Cleary, above n 6; Ramsay, above n 6.
125 Abouzaid v Mothercare (UK) Ltd [2000] EWCA Civ 348; see also, Barker et al, above n 87, 654.
126 Davies, above n 92, 558-559; Barker et al, above n 87, 644–645.
under the consumer expectations test127 adopted by Part 3–5 of the ACL. The concern is that such issues can be ‘polycentric’128 and that they may involve the court going beyond the limits of the particular case by effectively setting an acceptable level of risk and precaution for the activity across society. These questions of economic and social policy might condemn how a large part of an industry does business, and may ultimately deprive society of those products.129 Indeed, the LIDAR question might present such a polycentric issue, as the LIDAR sensors that Google uses can cost up to $70,000 each. Furthermore, Tesla appears to embrace a design methodology that relies on making its vehicles affordable for consumers at present, and requiring the use of LIDAR could be prohibitive to this approach.130 Their design methodology looks to maximise the amount of everyday driving to which Tesla algorithms are exposed through ‘fleet learning’, whereby the algorithms are trained using the data provided by its fleet of consumer drivers using a ‘beta’ autopilot feature.131 In doing so, it could be argued that the system will ‘learn’ quicker from this large pool of data and that the technology will progress faster than by merely testing with a small number of test drivers, and for this reason, the use of LIDAR is unnecessary.132 However, other factors may emerge as the technology develops that could influence such a determination, such as a decline in the cost of LIDAR,133 or the development of a common industry practice.134

4.3. Information defects and the duty to warn

Finally, automated vehicle manufacturers might be held liable in negligence for failing to warn consumers of the dangers associated with the use of their AVs. As was observed in Part IIIA above, many automated driving systems will only be suitable for use in a particular operational domain. In this regard, manufacturers will have a duty to warn consumers of the limitations of their automated driving systems, such as whether they ought to only be activated in certain conditions, or in certain geographical areas. Should the manufacturer’s warnings or instructions be negligently inadequate, and a human driver uses the automated mode improperly as a result, the manufacturer could be held liable for any resulting loss or damage.

It is also worth noting that an automated vehicle manufacturer’s duty to warn may extend to warning consumers of the inherent, though improbable, risk of a driving system failing due to a defect or from an incorrect decision made by the algorithm. Such a duty might be imposed for the reason that it would “at least give the user an informed choice whether to run the risk” of using the product.135 Thus, if a manufacturer does not provide adequate warnings about this irreducible (albeit potentially small) degree of risk, and a plaintiff who has suffered damage can successfully assert that they would not have used the automated vehicle, or would have used it differently had proper warnings been given,136 the manufacturer might be held liable.

5. Complexities of proof arising under Australia’s product liability regime

This section of the article seeks to assess the extent to which Australia’s product liability regime will be able to handle claims arising from AV accidents and appropriately balance the interests of plaintiffs and manufacturers when apportioning liability. Whilst these issues were not explicitly raised in the NTC’s 2016 policy report as a regulatory barrier to the introduction of AVs into Australia, the NTC recognised that this position might need to be revisited if “evidence emerges of a market failure that impedes the efficient and reliable assignment of fault.”137

I argue that plaintiffs will likely face two critical barriers to recovery following automated vehicle accidents, namely: (1) identifying the source of the defect and proving what caused the accident; and (2) actually proving fault on part of the automated vehicle manufacturer in a negligence claim, or that the AV was ‘defective’ for the purposes of Part 3–5 of the ACL, especially where an accident is caused by a decision made by the automated driving system algorithm.

5.1. Difficulties in proving causation

Determining what caused a particular AV accident may not be straightforward for plaintiffs. This could be the case due to the multiplicity of parties involved in the production, maintenance and operation of automated vehicles (all who may have caused the defect), as well as our current technological inability to analyse machine learning algorithms in order to determine how or why they have made a particular decision.

Firstly, establishing who caused an automated vehicle accident may be complex where the process of manufacturing the vehicle or relevant defective component was distributed across a number of different parties within a chain of supply. Unlike the developing doctrine in the United States,138 there is

127 Davies, above n 92, 574.
128 Lon L Fuller ‘The Forms and Limits of Adjudication’ (1978) 92(2) Harvard Law Review 353, 394–400.
129 Davies, above n 92, 558–559; Barker et al, above n 87, 644–645.
130 See, Anderson et al, above n 10, 61–64, where it is noted that most individual sensors have limitations and that different combinations of sensors offer different levels of capability, however cost is a key constraint in determining which sensors are ultimately used.
131 Hope Reese, ‘Tesla’s new Autopilot makes a big bet on radar’, TechRepublic (online), 16 September 2016 <http://www.techrepublic.com/article/teslas-new-autopilot-makes-a-big-bet-on-radar-musk-said-system-would-have-prevented-deadly-crash/>.
132 Ibid. It should be noted as well that the Tesla fatality was the first in 130 million miles of Autopilot operation, whereas the fatality rate for conventional vehicles worldwide is estimated as one fatality per 60 million miles, so Tesla’s design could well be a defensible balance of cost and risk, see, Tesla, above n 117; see also, Solon, above n 21.
133 See, Solon, above n 21.
134 See, Cross v TNT Management (1987) 46 SASR 105; Middleton v Erwin [2009] NSWSC 108.
135 Davies, above n 92, 559.
136 Chappel v Hart (1998) 195 CLR 232; Rosenberg v Percival (2001) 205 CLR 434.
137 National Transport Commission, above n 1, 62.
138 Vladeck, above n 17, 147-148; Davies, above n 92, 568.
no trend in Australian case law to hold the manufacturers of a finished product vicariously liable in negligence for defects that stemmed from subcontractors upstream in the production process.139 Thus, for example, if a plaintiff sought to prove negligence in respect of an allegedly defective LIDAR sensor, they might struggle to identify where the cause of the defect arose and, in turn, whom to sue, if the automated vehicle manufacturer outsources the production and installation of the LIDAR sensors to a number of different subcontractors.140 However, the doctrine of res ipsa loquitur ('the thing speaks for itself') may be of some assistance to plaintiffs, particularly in the 'manufacturing defect' type cases referred to above.141 The presumption is available in circumstances where, despite a lack of evidence as to how or why the defect exactly arose, a logical inference can be drawn that a particular manufacturer or subcontractor, or someone for whom they are vicariously liable, must have negligently caused the defect.142 However, this causal inference is not always available, and plaintiffs will have to exclude, on the balance of probabilities, the possibility that the malfunction arose from a different cause, such as from the faulty fitting of the LIDAR sensors by a subsequent party143 or from intervening repairs.144

A second complexity may arise where the accident is caused by an error in the decision-making process of the automated driving system's algorithms. At present, machine learning systems are incapable of explaining, or being analysed to reveal, precisely how or why they have reached a particular decision. This is because the algorithms lack an explicit representation of their decision-making process,145 an absence which is necessary so that they can make decisions without the need for human input, and which allows machine learning algorithms to quickly make sense of large volumes of data in ways that humans cannot. The former head of machine learning at Uber succinctly summarises the problem in observing that "you can't look back into it… it's almost impossible to explain why you get the outcome you get."146 Often, even the engineers who have designed a machine learning system cannot pinpoint the reasoning behind a single action taken,147 and instead the decision-making process of the automated driving system's algorithms can only be deduced by observing and interpreting the behaviour of the vehicle.148

Given that automated driving system algorithms will be weighing a multitude of different factors and sources of data in their decision-making processes, the probable cause, or causes, of an algorithm's erroneous decision might then not always be clear.149 One example of the diversity of factors that automated driving systems might be weighing up can be seen in the work of the start-up BRAIQ, which is developing an algorithm that, using in-cabin sensors, can read a passenger's emotional comfort or discomfort and adjust the driving style of the automated vehicle accordingly.150 With an automated driving system weighing up everything from the predicted behaviour of other road users to the emotional states of its occupants, it might be difficult to determine what actually caused the incorrect algorithm decision and, consequently, who should be held responsible.

However, whilst the decision-making processes of AI systems currently take place in somewhat of a 'black box', there has been a recent push across disciplines towards making AI decision-making more transparent and accountable. There has been significant work in the computer science field on researching 'explainable AI',151 for instance, systems that are capable of explaining the factors animating their actions and decisions (i.e. why the decision was made) to a human-in-the-loop, as distinct from explaining how the decision was made.152 In the legal sphere, the General Data Protection Reg-

139 Peake v Steriline Manufacturing Pty Ltd [1988] Aust Torts Reports 80. Note however that manufacturers of the finished product would still be subject to a duty to reasonably minimise the risk of injury and damage from these components, though probing tests of the components are not necessary and the standard of care required is less stringent than the testing required of the component manufacturer, see Taylor v Rover Co Ltd [1966] 2 All ER 181 (Baker J). Furthermore, the manufacturer of the overall product would be under a duty to ensure that component suppliers/specialists are competent, see Davies, above n 92, 567–568.
140 See, e.g., above n 81.
141 Barker et al, above n 87, 644.
142 Grant v Australian Knitting Mills Ltd (1935) 54 CLR 49, 62 (Lord Wright); Phillips v E W Lundberg & Son (1968) 88 WN (Pt 1) (NSW) 166; Martin v Thorn Lighting Industries Pty Ltd [1978] WAR 10; Hill v James Crowe (Cases) Ltd [1978] 1 All ER 812; see also, Kilgannon v Sharpe Bros Pty Ltd (1986) 4 NSWLR 600; Fletcher v Toppers Drinks Pty Ltd [1981] 2 NSWLR 911.
143 See, eg, Evans v Triplex Glass [1936] 1 All ER 283, whereby a shattered car windscreen could not be proven to be caused by defective fabrication on the balance of probabilities, as opposed to the faulty installation of the windscreen.
144 See, eg, Phillips v Chrysler Corp (1962) 32 DLR (2d) 347.
145 Matthias, above n 42, 178; Ackerman, above n 52; Dahiyat, above n 47, 106.
146 See, Devin Coldewey, 'Affectiva and Uber want to Brighten Your Day with Machine Learning and Emotional Intelligence', Tech Crunch (online), 13 September 2016 <https://techcrunch.com/2016/09/13/affectiva-and-uber-want-to-brighten-your-day-with-machine-learning-and-emotional-intelligence/>.
147 Will Knight, 'The Dark Secret at the Heart of AI', MIT Technology Review (online), 11 April 2017 <https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/>.
148 See, Matthias, above n 42, 178–179, 182.
149 Dahiyat, above n 47, 106.
150 Andrew Thomson, 'Emotionally intelligent computers may already have a higher EQ than you', Tech Crunch (online), 2 December 2016 <https://techcrunch.com/2016/12/02/emotionally-intelligent-computers-may-already-have-a-higher-eq-than-you/>.
151 See, eg, DARPA's work in this space, David Gunning, 'Explainable Artificial Intelligence' (Program Update, November 2017, DARPA) <http://explainablesystems.comp.nus.edu.sg/wp-content/uploads/2018/03/XAI%20for%20IUI%202018.pdf>; see for a summary, Wojciech Samek, Thomas Wiegand and Klaus-Robert Müller, 'Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models' (2017) Special Issue 1, ITU Journal 1.
152 Ibid; see also, Doshi-Velez, 'Accountability of AI Under the Law: The Role of Explanation' (Harvard Public Law Working Paper No. 18-07, 2017, Berkman Klein Center Working Group on Explanation and the Law); Emerging Technology from the arXiv, 'AI Can Be Made Legally Accountable for Its Decisions', MIT Technology Review (online), 15 November 2017 <https://www.technologyreview.com/s/609495/ai-can-be-made-legally-accountable-for-its-decisions/>.
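The 'black box' problem described above, and the behaviour-probing workaround, can be illustrated with a deliberately simplified sketch. The model, its weights and its feature names below are all invented for illustration; no real automated driving system reduces to a three-input logistic score. The point is only that, even with the fitted numbers in hand, the model states no reasons for any single output, and the influence of each input must be inferred from outside by perturbing the inputs and observing how the output moves, much as the vehicle's behaviour must be observed and interpreted.

```python
import math

# Hypothetical 'learned' parameters, standing in for weights a machine
# learning system would fit from data. Nothing here reflects a real ADS.
WEIGHTS = {"obstacle_distance_m": -0.08, "own_speed_kmh": 0.05, "rain": 0.9}
BIAS = -1.2


def brake_probability(features: dict) -> float:
    """Map sensor features to a probability of braking (a logistic score)."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))


def probe(features: dict, delta: float = 1.0) -> dict:
    """Crude post-hoc probe: nudge each input and watch the output move.

    This mirrors the article's point that a black-box system's reasoning
    can only be deduced from its observed behaviour, not read off from
    the model itself.
    """
    base = brake_probability(features)
    return {
        name: round(brake_probability({**features, name: value + delta}) - base, 4)
        for name, value in features.items()
    }


scene = {"obstacle_distance_m": 12.0, "own_speed_kmh": 40.0, "rain": 1.0}
print(brake_probability(scene))  # a single opaque number, with no stated reasons
print(probe(scene))              # per-input influence, inferred only from outside
```

A positive probe value means increasing that input pushes the system towards braking; the 'explanation' is reconstructed after the fact, which is precisely the limitation the explainable-AI research discussed above seeks to remove.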
ulation153 has introduced rights in relation to automated decision-making.154 The GDPR may also provide EU citizens with a 'right to explanation' of AI decision-making that relates to them;155 however, the requirements that the decision be 'based solely on automated processing' and that it 'significantly affect' the individual are key limitations to the scope of the right.156 Certainly for Level 3, 4 and 5 automated vehicles, many, if not all, of the decisions involved in the driving task would be made 'solely' on the basis of the ADS' automated processing. It ought to be noted however that some have questioned the extent to which a right to explanation can properly be said to exist in the text of the GDPR.157

Under Australian liability laws, the ACL defective goods action may mitigate the above two problems for plaintiffs to some extent, as a claimant would merely need to prove that the manufacturer supplied the defective vehicle, as opposed to proving that the particular manufacturer caused the defect. Furthermore, a 'manufacturer' is defined in the ACL to cover potentially anyone in the supply chain,158 including component manufacturers and importers (if the actual manufacturer is based entirely outside Australia).159 Prospective plaintiffs would therefore merely have to demonstrate that any of these parties 'supplied' the automated vehicle in trade or commerce and that it was defective.160 They would then be required to prove that the automated vehicle was 'defective', however, which is discussed below.

5.2. Difficulties in proving fault and/or defectiveness for accidents caused by a driving system decision

5.2.1. Accidents caused by an underdeveloped algorithm
The question of whether an automated vehicle manufacturer should be held liable becomes significantly more complex in respect of accidents caused by an on-road decision of the automated driving system algorithm. Plaintiffs may struggle to prove the requisite fault or defectiveness for a negligence or defective goods claim, even where it can be shown that the accident was caused by an underdeveloped algorithm.

The problem can be illustrated well by applying current product liability principles to the first accident caused by one of Google's automated vehicle prototypes ('Google AV').161 In February 2016, a Google AV was driving down the outermost lane of a busy road in Mountain View, California, when it detected sandbags on the road ahead. The sandbags were lying next to a storm water drain and were blocking the AV's path down the lane. The vehicle came to a stop while it waited for an opportunity to merge to the inner lane. After allowing some cars to pass, the Google AV incorrectly predicted that a bus travelling down the inner lane would yield for it and, whilst angling in to merge, the Google AV struck the side of the bus. In their subsequent monthly report, Google stated that they later refined their algorithm to "more deeply understand" that, as a matter of probability, large vehicles are generally less likely to yield than other types of vehicles.162

Say, for instance, this situation played out in Australia and the company which operates the damaged bus sought compensation for the damage. What is the nature of the defect that would have to be alleged by the plaintiff?

The first thing to note is that, from Google's report, it appears that the problem which caused the accident was not an unintended software bug. Rather, the algorithm functioned as designed but was not yet sophisticated enough and had not yet been trained to accurately predict the yielding patterns of large vehicles. As such, a plaintiff would be required to bring a design or development defect type claim.163

Per the statutory formula, an action of negligence would involve consideration of the foreseeability and magnitude of the risks inherent in the design,164 as well as a cost-benefit analysis that balances the risk of harm involved in the design against the costs of minimising that risk through a different design.165 As has been noted, this calculus is broadly similar to the risk utility test that is increasingly used in United States jurisdictions for design defect claims.166

The first breach that might be alleged in this instance is that Google unreasonably exposed the plaintiff to a foreseeable and "not insignificant" risk of harm by failing to take precaution in their design process with respect to the automated vehicle's lack of 'understanding' of large vehicle yielding patterns, with the reasonableness of Google's conduct requiring a balancing of the risks of harm with the costs of taking alleviating action.167

In the early stages of the technology, it might be difficult for plaintiffs to succeed in an argument that a reasonable automated vehicle manufacturer ought to have foreseen traffic risks as specific as those associated with merging in front of large vehicles, and ought to have pre-emptively refined their algorithm on this basis. This road risk is but one amongst a countless number of contingencies that automated vehicles will face on the roads. Thus, expecting too high a standard of foresight and care from automated vehicle manufacturers, too early, would likely cripple the development of the potentially life-saving technology, a factor that would run strongly

153 General Data Protection Regulation (EU) 2016/679 ("GDPR").
154 Ibid, Art. 22.
155 Ibid, Recital 71.
156 See, Bryce Goodman and Seth Flaxman, 'European Union Regulations on Algorithmic Decision-Making and a "Right to Explanation"' (2017) 38(3) AI Magazine 50.
157 Sandra Wachter, Brent Mittelstadt and Luciano Floridi, 'Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation' (2017) 7(2) International Data Privacy Law 76.
158 See, Australian Consumer Law ss 2, 7.
159 Ibid s 7(e).
160 Ibid ss 138–141.
161 Chris Ziegler, 'A Google Self-Driving Car Caused a Crash for the First Time', The Verge (online), 29 February 2016 <http://www.theverge.com/2016/2/29/11134344/google-self-driving-car-crash-report>.
162 Google, above n 11.
163 See, Sven Beiker, 'Legal Aspects of Autonomous Driving' (2010) 52(4) Santa Clara Law Review 1145, 1152: "As the vehicle navigates itself through traffic, it makes 'mission-critical' decisions, which, in a narrow range of circumstances, can and will contribute to accidents. Such an event cannot necessarily be classified as a technical failure, however, the same way as, for instance, a damaged tire."
164 See, above n 91.
165 See, above n 92–93; see also, Davies, above n 92, 557–558.
166 See, above n 94–96.
167 Ibid.
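Google's reported fix, 'more deeply understanding' that large vehicles yield less often, amounts to conditioning the yield estimate on the class of the other vehicle. The following sketch is a hypothetical reconstruction of that kind of refinement; the probabilities, class names and threshold are invented for illustration and are not Google's code or figures.

```python
# Hypothetical sketch of class-conditional yield prediction. All numbers
# and class names below are invented; this is not Google's algorithm.

NAIVE_YIELD_PRIOR = 0.8  # pre-refinement: one yield prior for every vehicle

REFINED_YIELD_PRIOR = {  # post-refinement: large vehicles yield less often
    "car": 0.8,
    "bus": 0.3,
    "truck": 0.35,
}

MERGE_THRESHOLD = 0.5  # merge only if the other vehicle will probably yield


def should_merge(vehicle_class: str, refined: bool) -> bool:
    """Decide whether to angle into the inner lane in front of this vehicle."""
    if refined:
        p_yield = REFINED_YIELD_PRIOR.get(vehicle_class, NAIVE_YIELD_PRIOR)
    else:
        p_yield = NAIVE_YIELD_PRIOR
    return p_yield >= MERGE_THRESHOLD


print(should_merge("bus", refined=False))  # True: the planner merges (the collision)
print(should_merge("bus", refined=True))   # False: the refined planner waits
```

Framed this way, the alleged 'defect' is not a coding error but the absence of one conditioning variable among many the designers might have added, which is exactly why the foreseeability analysis above is so difficult for plaintiffs.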
against an inference of negligence.168 However, it may be that this duty on manufacturers to pre-emptively refine their algorithms and eliminate risks could become stricter over time as the technology matures.

The plaintiff might instead look to allege a defect in the course of developing the algorithm, on the basis that Google failed to take reasonable steps to test their system in order to uncover that the algorithm did not account for the risk.169

The question is then: what amount of testing is reasonable? It is hard to predict, in advance of the widespread implementation of AV technology, what a reasonable standard of testing will be. However, in this scenario, Google could quite likely argue effectively that their testing procedure complied with the state of the art at the time,170 and as a consequence, that reasonable steps were taken towards discovering the algorithm deficiency. Plaintiffs may thus face evidentiary challenges during the early implementation of AVs in proving negligent fault on the part of AV manufacturers.

Over time, an accepted industry practice171 might develop and provide a benchmark which could assist in such a determination. Importantly though, it will be impossible to completely test these algorithms for the immeasurable number of contingencies that they will face on the roads,172 and thus automated driving systems will assumedly always contain a certain degree of safety imperfection in their decision-making process.

Other factors may be significant in relieving manufacturers from liability, such as the potential impracticability of testing for, and eliminating, every possible error, and that on a society-wide scale, AVs are predicted to have great utility.173 Finally, AV design defect questions could involve the court addressing the sorts of 'polycentric' issues noted above,174 and the courts may be reticent to condemn testing methods that are widely used in the industry.

In an ACL defective goods claim, a similar obstacle to recovery for plaintiffs may lie in the development risks defence. Under this defence, manufacturers are excused from liability where it can be shown that "the state of scientific or technical knowledge at the time when the goods were supplied by their manufacturer was not such as to enable that safety defect to be discovered."175 Barker et al observe however that in interpreting this provision, Australian courts have effectively imported a reasonableness requirement,176 as the defence has been held to be available where a risk was not practically discoverable,177 as opposed to being possibly discoverable in light of available knowledge at the time.178 This application of the development risks defence by Australian courts, as distinct from the position taken in other jurisdictions' applications of the EC Directive defence,179 is one aspect in which the Part 3–5 defective goods action may not operate as a truly 'strict liability' scheme,180 and in this case may only require reasonable testing of AV manufacturers.

The outcome of an ACL claim may also be uncertain for a second reason: it is not clear how the consumer expectations test would apply. Specifically, it is difficult to pinpoint what the 'consumer', as an objective standard,181 is 'entitled to expect' in relation to complex design defect cases and specialist technical questions such as algorithm training and testing approaches and their implications for ADS safety risks.182 As Kevin Funkhouser observes,183 the average consumer may have bought into the 'hype'184 surrounding AVs and may expect infallible safety standards, which could differ from the reasonable expectations of an informed mechanic or software engineer. Some have argued that, in order to make the consumer expectations test workable in complex design defect cases, the courts ultimately resort to a cost-benefit analysis185 and that the test potentially imposes a standard of liability which is effectively 'indistinguishable' from common law negligence.186 If the consumer expectations test as applied to automated driving system errors imposes a standard of liability that is indeed indistinguishable from the negligence standard, then similar obstacles might face plaintiffs relying on the Part 3–5 defective goods action to those noted with respect to negligence claims, above.

In sum, plaintiffs are likely to face a number of evidentiary problems in proving negligent fault on the part of AV manufac-

168 Davies, above n 92, 557–558.
169 See, Cuckow v Polyester Reinforced Products Pty Ltd (1970) 19 FLR 122, 143 (Fox J); Vacwell Engineering Co Ltd v BDH Chemicals Ltd [1971] 1 QB 88, 109 (Rees J); Peterson v Merck Sharpe & Dohme [2010] FCA 180.
170 See, Abalos v Australian Postal Commission (1990) 171 CLR 169, 172 (McHugh J); cf, Vacwell Engineering Co Ltd v BDH Chemicals Ltd [1971] 1 QB 88; Cuckow v Polyester Reinforced Products Pty Ltd (1970) 19 FLR 122.
171 Cross v TNT Management Pty Ltd (1987) 46 SASR 105; see also, Vladeck, above n 17, 136.
172 Koopman and Wagner, above n 52.
173 See, above n 93; see Funkhouser, above n 17, 457.
174 See, above n 126–132.
175 Australian Consumer Law s 142(c).
176 Barker et al, above n 87, 654–657.
177 Ryan v Great Lakes Council [1999] FCA 177 (Wilcox J); Peterson v Merck Sharpe & Dohme [2010] FCA 180.
178 Barker et al, above n 87, 656.
179 See, eg, European Commission v United Kingdom [1997] 3 CMLR 923; A v National Blood Authority [2001] 3 All ER 289.
180 Kellam and Nottage, above n 99.
181 Carey-Hazell v Getz Bros & Co (Aust) Pty Ltd [2004] FCA 853, [186].
182 Tsui notes relevantly with respect to alleged design defects relating to pharmaceuticals: "The idea of a consumer or a person, no matter how reasonable, determining the acceptable legal standards for goods as complex as pharmaceuticals does give rise to legitimate concerns. The test has been described as the statutory replication of a major shortcoming of negligence: being vague, indeterminate and difficult to establish prior to a court hearing… Griffiths acknowledges the rationale of consumer protection, and although the test might make sense in manufacturing defects, how would a consumer know about safety in design?... The idea of a consumer determining drug design standards is absurd, especially when manufacturers themselves are unable to reasonably foresee all side effects of their creations, due to the unique interaction with each individual patient's biology."; see, Mabel Tsui, 'An Analysis of Australia's Legal Regime for Imposing Liability on Manufacturers of Pharmaceutical Drugs' (2014) 21 JLM 700, 713–714.
183 Funkhouser, above n 17, 456.
184 See also, Vladeck, above n 17, 136.
185 Roger O'Keefe, 'Is the Concept of 'Defect' Defective?' (1994) 5 Australian Product Liability Reporter 84, 87.
186 Barker et al, above n 87, 656; see also, Hammond, above n 98; O'Keefe, above n 185; Jane Stapleton, 'Products Liability Reform – Real or Illusory?' (1986) 6(3) Oxford Journal of Legal Studies 392, 393–405.
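The impossibility of complete testing noted above is, at bottom, a combinatorial point. A back-of-the-envelope sketch makes it concrete; the factor names and counts below are invented and deliberately coarse (real driving scenes vary continuously along far more dimensions), yet even this crude discretisation outruns any exhaustive test programme.

```python
from math import prod

# Invented, deliberately coarse counts of distinguishable values per factor.
scenario_factors = {
    "weather": 6,
    "lighting": 4,
    "road_type": 8,
    "road_surface": 5,
    "other_road_user_behaviours": 50,
    "obstacle_types": 30,
    "speed_bands": 10,
    "traffic_density": 10,
    "pedestrian_behaviours": 20,
}

combinations = prod(scenario_factors.values())
print(f"{combinations:,} coarse scenario combinations")  # 2,880,000,000

# Even at one simulated scenario per second, around the clock:
years = combinations / (60 * 60 * 24 * 365)
print(f"roughly {years:.0f} years to run each combination once")
```

Adding one more ten-valued factor multiplies the total by ten, which is why the observation that automated driving systems 'will assumedly always contain a certain degree of safety imperfection' is structural, rather than a matter of engineering effort.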
turers. However, as the technology matures, and as industry standards emerge, stricter standards of care may be imposed on manufacturers regarding the degree to which they are expected to pre-emptively eliminate, and test for, algorithm defects. Furthermore, the National Transport Commission is currently undertaking extensive work in developing mandatory ADS safety standards, and whilst the substance of this work is beyond the scope of this article, the standards will likely be important in establishing a baseline standard of care for AV safety.187

5.2.2. Accidents arising from 'learned' machine logic
Plaintiffs could face further barriers to recovery where an accident is caused by an on-road decision of the automated driving system, and where this decision has stemmed from the 'learned' logic of the machine learning algorithm.

The following hypothetical may serve to demonstrate how this could occur. Say a Google AV drives over a large, water-filled pothole, which causes the vehicle to lose traction on the road and collide with a parked vehicle. Furthermore, assume the vehicle was programmed with an 'anomaly algorithm', in that it was the manufacturer's intention that if the driving system were faced with an anomalous road situation that it could not make confident predictions about, such as a pothole, the vehicle would proceed cautiously or avoid the obstacle entirely. However, in the course of its 2 million miles of operation,188 the vehicle algorithm has been exposed to a number of shallower pools of water and has 'learned' that it can, in general, safely drive across what appears to be a shallow pool of water. On the basis of this learned logic, the algorithm decided that the pothole was not an anomalous road situation, which ultimately caused the accident.

It is likely to be nearly impossible for plaintiffs to assert in this instance that a reasonable automated vehicle manufacturer ought to have known about, or foreseen, the specific latent risk that was being gradually learned by the algorithm. As the machine's decision-making logic is partly shaped by its own experience and interaction with the driving environment,189 the behaviour of the algorithm is fundamentally dynamic and not precisely foreseeable.190 Furthermore, it would be difficult to argue that the manufacturers should have been able to analyse the algorithm to forecast its behaviour, as by their nature, machine learning algorithms lack an explicit representation as to why they make a decision and are thus difficult to analyse.191 A negligence claim would therefore be unlikely to be founded on the basis of an argument that the manufacturer ought to have pre-emptively tweaked the algorithm to address the risk. Indeed, in the scenario above, Google had accounted for the road risk posed by the pothole by way of the 'anomaly algorithm', but the adaptive nature of the machine rendered this precaution ineffective in the circumstances.

It is also difficult to envisage what reasonable amount of testing could be expected to reveal such dynamic design risks. It would currently be impossible to undertake a test that could pinpoint the nature and timing of these evolving safety defects in all situations, as the algorithm logic is constantly changing on the basis of information that it itself acquires and analyses.192 Such testing would have to be undertaken continuously, even after the automated vehicle has left the control of the manufacturer, and the courts might be reticent to enforce such a high standard that may, once again, threaten the development of the technology.

These difficulties of proof are also likely to confront plaintiffs who seek to claim under Part 3–5 of the ACL. The development risks defence would likely be invoked, as manufacturers could well argue that the state of scientific or technical knowledge is not such as to enable such latent defects to be practically discovered, or that it would be impracticable to undertake testing that could identify errors in the ever-evolving decision-making logic of the AI system. The criticisms raised above as to whether the defective goods action merely imposes a negligence standard in complex design defect cases would again be relevant, and would merely hold AV manufacturers to a standard of reasonable testing.

Thus, the courts are likely to be hesitant to impose a standard of care on manufacturers that would require them to expose many of these possible 'learned' algorithm risks. To be clear, manufacturers would still be required to undertake a reasonable amount of testing. But assuming this reasonable amount of testing would not be able to practically uncover all risks that arise from the machine's adaptive nature, there will likely be a class of plaintiffs who will have no recourse under a claim of negligence or under Part 3–5 of the ACL following AV accidents.

6. Conclusions and ways forward

In summary, I have argued that the critical factor for determining who is held liable for an automated vehicle accident will be the degree of control that the relevant party has over the events leading up to the accident. Thus, for highly (level 4) and fully (level 5) automated vehicles, the focus of this article, plaintiffs will generally be required to pursue AV manufacturers for compensation, using the actions available in Australia's product liability regime.

In relation to the ongoing debate as to the adequacy of Australia's product liability laws to resolve AV accidents, I have highlighted two difficulties that plaintiffs are likely to face in a negligence action, or an ACL defective goods action. Firstly, it may be challenging to identify the source of the defect and the probable cause of the accident, particularly where an inquiry into an allegedly incorrect decision of the driving system is required. As shown in Part V above, the reasoning of machine learning based techniques currently takes place in somewhat of a 'black box', and it may be unclear whom plaintiffs should pursue in a negligence claim if a number of parties have contributed to designing and maintaining the ADS. However, there is technical and legal work being done in this space

187 See, National Transport Commission, 'Safety Assurance for Automated Driving Systems: Consultation Regulation Impact Statement' (Regulation Impact Statement, National Transport Commission, May 2018).
188 Higgins, above n 53.
189 Dahiyat, above n 47, 106; Matthias, above n 42, 177.
190 Dahiyat, above n 47, 114.
191 See, above n 145–157.
192 Vladeck, above n 17, 121; Dahiyat, above n 47, 110.
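The pothole hypothetical above can be made concrete with a toy model. Everything below is invented for illustration (the thresholds, the update rule, the depths); real anomaly detection in an ADS is far more elaborate, but the failure mode sketched here, experience quietly redefining what counts as 'normal', is the one the hypothetical turns on.

```python
class WaterHazardModel:
    """Toy 'anomaly algorithm': flag water the system cannot confidently
    treat as routine, but let experience redefine what counts as routine."""

    def __init__(self, caution_threshold_cm: float = 5.0):
        # Initially, anything deeper-looking than 5 cm is anomalous.
        self.threshold = caution_threshold_cm

    def is_anomalous(self, apparent_depth_cm: float) -> bool:
        return apparent_depth_cm > self.threshold

    def record_safe_crossing(self, apparent_depth_cm: float) -> None:
        # 'Learned' logic: every uneventful crossing nudges the threshold
        # upward, so progressively deeper pools come to look unremarkable.
        self.threshold = max(self.threshold, apparent_depth_cm * 1.1)


model = WaterHazardModel()
print(model.is_anomalous(12.0))  # True: a 12 cm pool initially triggers caution

# Many miles of progressively deeper, safely crossed puddles:
for depth_cm in [4.0, 6.0, 8.0, 10.0, 11.5]:
    model.record_safe_crossing(depth_cm)

print(model.is_anomalous(12.0))  # False: the deep pothole no longer looks anomalous
```

Note that the shipped code is unchanged throughout: the manufacturer's precaution was present at the point of supply and was eroded only by the vehicle's own experience, which is why both the foreseeability argument and the development risks defence analysis above cut against the plaintiff.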
to ensure that AI systems are explainable, and the ACL defec- fective goods action, plaintiffs who have suffered loss follow-
tive goods action may alleviate problems of proving causation ing an AV accident may face similar challenges to establishing
to some extent. liability to those described in relation to negligence, above.
Secondly, plaintiffs will likely face challenges in proving This represents an inherent problem in the law if Part 3–
fault on part of the AV manufacturer in a negligence claim, 5 of the ACL is indeed to act as a ‘strict’ liability action with
or that the automated vehicle was defective in a Part 3–5 ACL the objective of ensuring that manufacturers bear the costs of
claim. In relation to negligence actions, the problem is essen- defective products.198 Furthermore, recovery might come at a
tially an evidentiary one. Plaintiffs will be required to prove significant cost to plaintiffs due to the complexity of the litiga-
that the automated vehicle manufacturer failed to undertake tion required. The cost-benefit deliberations required to deter-
reasonable testing of their AV algorithms. However, given that mine the reasonableness of an AV manufacturer’s design, or
it is impracticable for AV manufacturers to completely test for, testing process, could require the enlistment of a several ex-
and eliminate all potential ADS errors, and that on a macro- pert witnesses such as software engineers and economists,199
scale AV technology is predicted to provide great social utility, the cost of which could be excessive for those pursuing com-
the courts may be reticent to impose too burdensome a stan- pensation for fairly minor damage to their car.
dard of care (especially during the early development of the technology). One can thus foresee similar evidentiary problems arising for plaintiffs seeking compensation for AV accidents to what has been observed in relation to liability for alleged pharmaceutical design defects.193

These evidentiary challenges are likely to be even more pronounced in relation to AV accidents that have resulted from the learned decision-making logic of the driving system. At present, no reasonable amount of testing could reveal all dynamic risks of this sort, and as manufacturers will merely be held to a reasonable standard of testing, there may be a number of plaintiffs who go uncompensated for their losses. It is important to keep in mind, however, that compensation is but one objective of the tort of negligence, and furthermore, the formulation of the negligence calculus is chiefly directed at sanctioning careless, or negligent, conduct.194

Plaintiffs are likely to face similar evidentiary challenges under a defective goods action in Part 3–5 of the ACL. As has been noted, there is concern that the consumer expectations test, when applied to complex design defect cases, can begin to resemble a reasonableness test and may impose a standard of care that is broadly similar to that of the tort of negligence. The shortcomings of the consumer expectations test in this respect have been well documented in US decisions195 and by US scholars.196 Furthermore, the wide application of the development risks defence by Australian courts will again likely hold AV manufacturers to a standard of reasonable testing in the nascent stages of the technology.197 Thus, under the de-

Whilst perhaps the policy argument for a strict defective goods action applicable across all of society is not settled,200 there may be a policy argument for a more specialised scheme applicable to AV accidents on the basis of the considerations of this article. Specifically, both Australia and other jurisdictions whose product liability regimes rest heavily on the notion of reasonable manufacturer conduct (such as the United States) should consider the merits of a strict AV liability scheme, or a specialised compensation scheme funded by state governments and/or the entities responsible for maintaining the ADS ('ADSE'). Such a scheme might not necessarily be limited to covering losses arising from AV accidents, but might also cover liability for AI decision-making more broadly. Decisions made on the basis of machine learning reasoning will become ubiquitous over the coming decades. Not only will algorithms be piloting our cars, but they are likely to be making consequential decisions such as cancer diagnoses, what insurance premium a person should pay, or whether a person is granted bail.201 And whilst the technology will likely be transformative and has the capacity to bring immense good to society, it will not be infallible.

A specialised strict liability or compensation scheme to which ADSEs contribute could well be justified on the basis that the manufacturers are in a better position to internalise losses caused by automated vehicle accidents, as they would be able to reflect the costs of insuring the vehicles through pricing.202 It could ensure that victims are compensated ef-
193 See, Claudia Newman-Martin, 'Manufacturers' Liability for Undiscoverable Design Flaws in Prescription Drugs: A Merck-Y Area of the Law' (2011) 19 Torts Law Journal 26, 31–32 ('(i) Negligence').

194 See, Rosalie Balkin and Jim Davis, Law of Torts (LexisNexis Butterworths, 3rd ed, 2004) 200.

195 See, e.g. Soule v. General Motors Corp., 882 P.2d 298 (Cal. 1994), 309–310; Montag v Honda Motor Co, 75 F.3d 1414 (10th Cir. 1996); Pruitt v General Motors Corp., 86 Cal. Rptr. 2d 4 (Ct. App. 1999), 6; Potter v Chicago Pneumatic Tool Co. 694 A.2d 1319 (Conn. 1997), 1333.

196 See, e.g., Marchant and Lindor, above n 17, 1324; Kiely and Ottley, above n 107, 139; David Owen, 'Design Defects' (2008) 73 Missouri Law Review 291, 292. Note also that the Restatement (Third) of Torts ultimately dispensed with the consumer expectations test as the prevailing test for design defects on the basis that "consumer expectations do not constitute an independent standard for judging the defectiveness of product designs," see, Restatement (Third) of Torts: Products Liability §2 cmt. g (1998).

197 See, above n 176–180.

198 In other words, a loss-spreading objective. See, Kellam and Nottage, above n 99, 27–28; citing, Australian Law Reform Commission, 'Product Liability' (Report No. 51, Australian Law Reform Commission, 1989), para 7.06.

199 Vladeck, above n 17, 138.

200 See, Barker et al, above n 87, 657–658.

201 See, eg, Agence France Presse, 'Computer learns to detect skin cancer more accurately than doctors,' The Guardian (online), 29 May 2018 <https://www.theguardian.com/society/2018/may/29/skin-cancer-computer-learns-to-detect-skin-cancer-more-accurately-than-a-doctor>; Alston Ghafourifa, 'AI and insurance: Exchanging privacy for a cheaper rate,' Venture Beat (online), 26 March 2017 <https://venturebeat.com/2017/03/26/ai-and-insurance-exchanging-privacy-for-a-cheaper-rate/>; Sam Corbett-Davies, Sharad Goel and Sandra González-Bailón, 'Even Imperfect Algorithms Can Improve the Criminal Justice System,' New York Times (online), 20 December 2017 <https://www.nytimes.com/2017/12/20/upshot/algorithms-bail-criminal-justice-system.html>.

202 See, eg, Davies, above n 92, 572; Vladeck, above n 17, 146–147.
ficiently, that the costs of erroneous AI decision-making are spread equitably, and that AV manufacturers or other companies employing AI-based decision-making for matters consequential to an individual's wellbeing are held to account and are incentivised to ensure the quality, safety and fairness of their systems.

References

A articles/books/reports

American Law Institute, Restatement (Third) of Torts: Products Liability (1998)
Anderson, James, et al, 'Autonomous Vehicle Technology: A Guide for Policymakers' (Research Report, RAND Corporation, 2016)
Australian Law Reform Commission, 'Product Liability' (Report No. 51, Australian Law Reform Commission, 1989)
Australian Transport Council, 'National Road Safety Strategy 2011–2020' (Report, Australian Transport Council, May 2011)
Balkin, Rosalie and Jim Davis, Law of Torts (LexisNexis Butterworths, 3rd ed, 2004)
Barker, Kit, et al, The Law of Torts in Australia (Oxford University Press, 5th ed, 2012)
Beiker, Sven, 'Legal Aspects of Autonomous Driving' (2010) 52(4) Santa Clara Law Review 1145
Brady, Mark, et al, 'Automated Vehicles and Australian Personal Injury Compensation Schemes' (2017) 24(1) Torts Law Journal 32
Bureau of Infrastructure, Transport and Regional Economics, 'Road Deaths Australia – December 2016' (Statistical Report, Department of Infrastructure and Regional Development, December 2016)
Clayton Utz, 'Driving into the Future: Regulating Driverless Vehicles in Australia' (Report, Clayton Utz, August 2016)
Dahiyat, Emad, 'Intelligent Agents and Liability: Is It A Doctrinal Problem or Merely a Problem of Explanation?' (2010) 18(1) Artificial Intelligence Law 103
Davies, Martin, 'Product Liability' in Carolyn Sappideen and Prue Vines (eds), Fleming's The Law of Torts (Lawbook, 2011) 555
Doshi-Velez, 'Accountability of AI Under the Law: The Role of Explanation' (Harvard Public Law Working Paper No. 18–07, 2017, Berkman Klein Center Working Group on Explanation and the Law)
Duffy, Sophia and Jamie Hopkins, 'Sit, Stay, Drive: The Future of Autonomous Car Liability' (2013) 16(3) SMU Science and Technology Law Review 453
Fuller, Lon, 'The Forms and Limits of Adjudication' (1978) 92(2) Harvard Law Review 353
Funkhouser, Kevin, 'Paving the Road Ahead: Autonomous Vehicles, Products Liability, and the Need for A New Approach' [2013] (1) Utah Law Review 437
Garza, Andrew, 'Look Ma, No Hands!: Wrinkles and Wrecks in the Age of Autonomous Vehicles' (2012) 46(3) New England Law Review 581
Goodman, Bryce and Seth Flaxman, 'European Union Regulations on Algorithmic Decision-Making and a "Right to Explanation"' (2017) 38(3) AI Magazine 50
Griggs, Lynden, 'A Radical Solution for Solving the Liability Conundrum of Autonomous Vehicles' (2017) 25 Competition and Consumer Law Journal 151
Gurney, Jeffrey, 'Sue My Car Not Me: Products Liability and Accidents Involving Autonomous Vehicles' [2013] (2) University of Illinois Journal of Law, Technology and Policy 247
Hammond, Marnie, 'The Defect Test in Part VA of the Trade Practices Act 1974 (Cth): Defectively Designed?' (1998) 6 Torts Law Journal 29
Hevelke, Alexander, and Julian Nida-Rümelin, 'Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis' (2015) 21(3) Science and Engineering Ethics 619
Huang, Albert, et al, 'Finding Multiple Lanes in Urban Road Networks with Vision and Lidar' (2009) 26(2) Autonomous Robots 103
Hynes, Paul, 'Doctors, Devices and Defects: Products Liability for Defective Medical Expert Systems in Australia' (2004) 15 Journal of Law, Information and Science 7
Kellam, Jocelyn and Luke Nottage, 'Happy 15th Birthday, Part VA TPA! Australia's Product Liability Morass' (2007) 15 Competition & Consumer Law Journal 26
Kiely, Terrence and Bruce Ottley, Understanding Products Liability Law (LexisNexis, 2006)
Koopman, Phillip, and Michael Wagner, 'Challenges in Autonomous Vehicle Testing and Validation' (2016) 4(1) SAE International Journal of Transportation Safety 15
Marchant, Gary, and Rachel Lindor, 'The Coming Collision Between Autonomous Vehicles and the Liability System' (2012) 52(4) Santa Clara Law Review 1321
Matthias, Andreas, 'The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata' (2004) 6 Ethics and Information Technology 175
National Highway Traffic Safety Administration, 'National Motor Vehicle Crash Causation Survey' (Report to Congress, US Department of Transportation, July 2008)
National Transport Commission, 'Guidelines for Trials of Automated Vehicles in Australia' (2017) <https://www.ntc.gov.au/Media/Reports/(00F4B0A0-55E9-17E7-BF15-D70F4725A938).pdf>
National Transport Commission, 'Regulatory Options for Automated Vehicles' (Discussion Paper, National Transport Commission, May 2016)
National Transport Commission, 'Regulatory Reforms for Automated Vehicles' (Policy Paper, National Transport Commission, November 2016)
National Transport Commission, 'Safety Assurance for Automated Driving Systems: Consultation Regulation Impact Statement' (Regulation Impact Statement, National Transport Commission, May 2018)
Newman-Martin, Claudia, 'Manufacturers' Liability for Undiscoverable Design Flaws in Prescription Drugs: A Merck-Y Area of the Law' (2011) 19 Torts Law Journal 26
O'Keefe, Roger, 'Is the Concept of "Defect" Defective?' (1994) 5 Australian Product Liability Reporter 84
Owen, David, 'Design Defects' (2008) 73 Missouri Law Review 291
SAE International, 'Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles' (Recommended Practice Report J3016, SAE International, September 2016)
Samek, Wojciech, Thomas Wiegand and Klaus-Robert Müller, 'Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models' (2017) Special Issue 1, ITU Journal 1
Stapleton, Jane, 'Products Liability Reform – Real or Illusory?' (1986) 6(3) Oxford Journal of Legal Studies 392
Thrun, Sebastian et al, 'Stanley: The Robot that Won the DARPA Grand Challenge' (2006) 23(9) Journal of Field Robotics 661
Tsui, Mabel, 'An Analysis of Australia's Legal Regime for Imposing Liability on Manufacturers of Pharmaceutical Drugs' (2014) 21 Journal of Law and Medicine 700
Twerski, Aaron and James Henderson, 'Manufacturer's Liability for Defective Product Designs: The Triumph of Risk-Utility' (2009) 74(3) Brooklyn Law Review 1061
Vladeck, David, 'Machines Without Principals: Liability Rules and Artificial Intelligence' (2014) 89(1) Washington Law Review 117
Wachter, Sandra, Brent Mittelstadt and Luciano Floridi, 'Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation' (2017) 7(2) International Data Privacy Law 76
World Health Organisation, 'Global Status Report on Road Safety 2015' (Report, World Health Organisation, 2015)

B case law

A v National Blood Authority [2001] 3 All ER 289
Abalos v Australian Postal Commission (1990) 171 CLR 169
Abouzaid v Mothercare (UK) Ltd [2000] EWCA Civ 348
Carey-Hazell v Getz Bros & Co (Aust) Pty Ltd [2004] FCA 853
Chappel v Hart (1998) 195 CLR 232
Cross v TNT Management (1987) 46 SASR 105
Cuckow v Polyester Reinforced Products Pty Ltd (1970) 19 FLR 122
Donoghue v Stevenson [1932] AC 562
European Commission v United Kingdom [1997] 3 CMLR 923
Evans v Triplex Glass [1936] 1 All ER 283
Fletcher v Toppers Drinks Pty Ltd [1981] 2 NSWLR 911
Graham Barclay Oysters v Ryan (2002) 211 CLR 540
Grant v Australian Knitting Mills Ltd (1935) 54 CLR 49
Hill v James Crowe (Cases) Ltd [1978] 1 All ER 812
Kilgannon v Sharpe Bros Pty Ltd (1986) 4 NSWLR 600
Marschler v Masser's Garage (1956) 2 DLR (2d) 484
Martin v Thorn Lighting Industries Pty Ltd [1978] WAR 10
Middleton v Erwin [2009] NSWSC 108
Montag v Honda Motor Co, 75 F.3d 1414 (10th Cir. 1996)
Peake v Steriline Manufacturing Pty Ltd [1988] Aust Torts Reports 80
Peterson v Merck Sharpe & Dohme [2010] FCA 180
Phillips v Chrysler Corp (1962) 32 DLR (2d) 347
Phillips v E W Lundberg & Son (1968) 88 WN (Pt 1) (NSW) 166
Potter v Chicago Pneumatic Tool Co. 694 A.2d 1319 (Conn. 1997)
Pruitt v General Motors Corp., 86 Cal. Rptr. 2d 4 (Ct. App. 1999)
Rosenberg v Percival (2001) 205 CLR 434
Soule v. General Motors Corp., 882 P.2d 298 (Cal. 1994)
Stennett v Hancock [1939] 2 All ER 578
Taylor v Rover Co Ltd [1966] 2 All ER 181
Vacwell Engineering Co Ltd v BDH Chemicals Ltd [1971] 1 QB 88
Wyong Shire Council v Shirt (1980) 146 CLR 40

C legislation

Civil Law (Wrongs) Act 2002 (ACT)
Civil Liability Act 1936 (SA)
Civil Liability Act 2002 (NSW)
Civil Liability Act 2002 (Tas)
Civil Liability Act 2002 (WA)
Civil Liability Act 2003 (Qld)
Competition and Consumer Act 2010 (Cth)
General Data Protection Regulation (EU) 2016/679
Motor Vehicles Act 1959 (SA)
Road Safety Amendment (Automated Vehicles) Bill 2017 (Vic)
Trade Practices Act 1974 (Cth)
Transport Legislation Amendment (Automated Vehicle Trials and Innovation) Act 2017 (NSW)
Wrongs Act 1958 (Vic)

D other

Ackerman, Evan, 'Toyota's Gill Pratt on Self-Driving Cars and the Reality of Full Autonomy', IEEE Spectrum (online), 23 January 2017 <http://spectrum.ieee.org/cars-that-think/transportation/self-driving/toyota-gill-pratt-on-the-reality-of-full-autonomy#qaTopicSeven>
Adams, Tim, 'Self-Driving Cars: From 2020 You Will Become a Permanent Backseat Driver', The Guardian (online), 13 September 2015 <https://www.theguardian.com/technology/2015/sep/13/self-driving-cars-bmw-google-2020-driving>
ADVI, Levels of Automated Vehicles <http://advi.org.au/australia/levels-of-automation/>
Agence France Presse, 'Computer learns to detect skin cancer more accurately than doctors,' The Guardian (online), 29 May 2018 <https://www.theguardian.com/society/2018/may/29/skin-cancer-computer-learns-to-detect-skin-cancer-more-accurately-than-a-doctor>
Audi USA, Piloted Driving <https://www.audiusa.com/newsroom/topics/2016/audi-piloted-driving>
Austroads, Trials <https://austroads.com.au/drivers-and-vehicles/connected-and-automated-vehicles/trials>
Cleary, Paul, 'End of the Road Toll: Driverless Cars Could Save Lives', The Australian (online), 29 December 2016 <http://www.theaustralian.com.au/life/end-of-the-road-toll-driverless-cars-could-save-lives/news-story/83a2d2ad016f2d263b7317a6a341f589>
Coldewey, Devin, 'Affectiva and Uber want to Brighten Your Day with Machine Learning and Emotional Intelligence', Tech Crunch (online), 13 September 2016 <https://techcrunch.com/2016/09/13/affectiva-and-uber-want-to-brighten-your-day-with-machine-learning-and-emotional-intelligence/>
Condliff, Jamie, 'New Patents Hint That Amazon and Google Each Have Plans to Compete with Uber', MIT Technology Review
(online), 18 January 2017 <https://www.technologyreview.com/s/603389/new-patents-hint-that-amazon-and-google-each-have-plans-to-compete-with-uber/>
Connor, Steve, 'First Self-Driving Cars will be Unmarked so that Other Drivers don't try to Bully Them', The Guardian (online), 30 October 2016 <https://www.theguardian.com/technology/2016/oct/30/volvo-self-driving-car-autonomous>
Copeland, Michael, 'What's the Difference Between Artificial Intelligence, Machine Learning, and Deep Learning?', Nvidia, 29 July 2016 <https://blogs.nvidia.com/blog/2016/07/29/whats-difference-artificial-intelligence-machine-learning-deep-learning-ai/>
Corbett-Davies, Sam, Sharad Goel and Sandra González-Bailón, 'Even Imperfect Algorithms Can Improve the Criminal Justice System,' New York Times (online), 20 December 2017 <https://www.nytimes.com/2017/12/20/upshot/algorithms-bail-criminal-justice-system.html>
Davies, Alex, 'Google's Lame Demo Shows Us How Far Its Robo-Car Has Come', Wired (online), 10 May 2015 <https://www.wired.com/2015/10/googles-lame-demo-shows-us-far-robo-car-come/>
Davies, Alex, 'I Rode 500 Miles in a Self-Driving Car and Saw the Future: It's Delightfully Dull', Wired (online), 1 July 2015 <https://www.wired.com/2015/01/rode-500-miles-self-driving-car-saw-future-boring/>
Davies, Alex, 'Tesla's Autopilot has had its First Deadly Crash', Wired (online), 30 June 2016 <https://www.wired.com/2016/06/teslas-autopilot-first-deadly-crash/>
Emerging Technology from the arXiv, 'AI Can Be Made Legally Accountable for Its Decisions', MIT Technology Review (online), 15 November 2017 <https://www.technologyreview.com/s/609495/ai-can-be-made-legally-accountable-for-its-decisions/>
Ford, Ford Targets Fully Autonomous Vehicle for Ride Sharing in 2021 <https://media.ford.com/content/fordmedia/fna/us/en/news/2016/08/16/ford-targets-fully-autonomous-vehicle-for-ride-sharing-in-2021.html>
Gates, Guilbert, et al, 'When Cars Drive Themselves', New York Times (online), 14 December 2016 <https://www.nytimes.com/interactive/2016/12/14/technology/how-self-driving-cars-work.html>
Ghafourifa, Alston, 'AI and insurance: Exchanging privacy for a cheaper rate,' Venture Beat (online), 26 March 2017 <https://venturebeat.com/2017/03/26/ai-and-insurance-exchanging-privacy-for-a-cheaper-rate/>
Gibbs, Samuel, 'What's it Like to Drive with Tesla's Autopilot and How Does it Work?', The Guardian (online), 1 July 2016 <https://www.theguardian.com/technology/2016/jul/01/tesla-autopilot-model-s-crash-how-does-it-work>
Google, 'Google Self-Driving Car Project Monthly Report: January 2016' (Report, Google, 31 January 2016) <https://static.googleusercontent.com/media/www.google.com/en//selfdrivingcar/files/reports/report-0116.pdf>
Google, 'Google Self-Driving Car Project Monthly Report: February 2016' (Report, Google, 29 February 2016) <https://static.googleusercontent.com/media/www.google.com/en//selfdrivingcar/files/reports/report-0216.pdf>
Gunning, David, 'Explainable Artificial Intelligence' (Program Update, November 2017, DARPA) <http://explainablesystems.comp.nus.edu.sg/wp-content/uploads/2018/03/XAI%20for%20IUI%202,018.pdf>
Higgins, Tim, 'Google's Self-Driving Car Program Odometer Reaches 2 Million Miles', Wall Street Journal (online), 5 October 2016 <http://www.wsj.com/articles/googles-self-driving-car-program-odometer-reaches-2-million-miles-1475683321>
Jaffe, Eric, 'The First Look at How Google's Self-Driving Car Handles City Streets', CityLab (online), 28 April 2014 <http://www.citylab.com/tech/2014/04/first-look-how-googles-self-driving-car-handles-city-streets/8977/>
Kerstetter, Jim, 'Daily Report: Tesla Driver Dies in "Autopilot" Accident', New York Times (online), 1 July 2016 <http://www.nytimes.com/2016/07/02/technology/daily-report-tesla-driver-dies-in-autopilot-accident.html>
Knight, Will, 'The Dark Secret at the Heart of AI', MIT Technology Review (online), 11 April 2017 <https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/>
Knight, Will, 'What to Know Before You Get In a Self-Driving Car', MIT Technology Review, 18 October 2016 <https://www.technologyreview.com/s/602492/what-to-know-before-you-get-in-a-self-driving-car/>
Markoff, John, 'Google Cars Drive Themselves, in Traffic', New York Times (online), 9 October 2010 <http://www.nytimes.com/2010/10/10/science/10google.html>
Markoff, John, 'Google's Next Phase in Driverless Cars: No Steering Wheel or Brake Pedals', New York Times (online), 27 May 2014 <https://www.nytimes.com/2014/05/28/technology/googles-next-phase-in-driverless-cars-no-brakes-or-steering-wheel.html>
McFarland, Matt, 'How Fixed-Gear Bikes Can Confuse Google's Self-Driving Cars', The Washington Post (online), 26 August 2015 <https://www.washingtonpost.com/news/innovations/wp/2015/08/26/how-fixed-gear-bikes-can-confuse-googles-self-driving-cars/?utm_term=.11f75e05e3c7>
Milford, Michael and Jonathan Roberts, 'The Winners and Losers in the Race for Driverless Cars', The Conversation (online), 29 August 2016 <https://theconversation.com/the-winners-and-losers-in-the-race-for-driverless-cars-63874>
Oremus, Will, 'Is Autopilot a Bad Idea?' Slate (online), 6 July 2016 <http://www.slate.com/articles/technology/future_tense/2016/07/is_tesla_s_style_of_autopilot_a_bad_idea_volvo_google_and_others_think_so.html>
Oriti, Thomas, 'Government Estimates Road Crashes Costing the Australian Economy $27 Billion a Year', ABC News (online), 2 January 2017 <http://www.abc.net.au/news/2017-01-02/road-crashes-costing-australian-economy-billions/8143886>
Ramsay, Mike, 'Self-Driving Cars Could Cut Down on Accidents, Study Says', Wall Street Journal (online), 5 March 2015 <http://www.wsj.com/articles/self-driving-cars-could-cut-down-on-accidents-study-says-1425567905>
Reese, Hope, 'Tesla's new Autopilot makes a big bet on radar', TechRepublic (online), 16 September 2016 <http://www.techrepublic.com/article/teslas-new-autopilot-makes-a-big-bet-on-radar-musk-said-system-would-have-prevented-deadly-crash/>
Reiley, Carol, ‘Deep Driving’, MIT Technology Review (on- Volvo, Autonomous Driving <http://www.volvocars.com/
line), 18 October 2016 <https://www.technologyreview.com/s/ intl/about/our-innovation-brands/intellisafe/
602600/deep-driving/> autonomous-driving>
Solon, Olivia, ‘Lidar: The Self-Driving Technology that Walker, Alissa, ‘Volvo Is Designing the Autonomous Car
Could Help Tesla Avoid Another Tragedy’, The Guardian (on- That Could Eliminate All Traffic Deaths’, Gizmodo, 18 Novem-
line), 7 July 2016 <https://www.theguardian.com/technology/ ber 2015 <http://gizmodo.com/volvo- is- designing- the-
2016/jul/06/lidar- self- driving- technology- tesla- crash- autonomous- car- that- could- elimin- 1743302437>
elon-musk> Warren, Tamara, ‘We Took a Ride in Ford’s Self-Driving
Spector, Mike, and Ianthe Dugan, ‘Tesla Draws Scrutiny Car’, The Verge (online), 13 September 2016 <http://www.
After Autopilot Feature Linked to a Death’, Wall Street Jour- theverge.com/2016/9/13/12895690/ford- self- driving- car- ride-
nal (online), 30 June 2016 <https://www.wsj.com/articles/ detroit-video>
tesla- draws- scrutiny- from- regulators- after- autopilot- Weber, Harrison ‘Elon Musk unveils ‘massive’ Tesla Autopi-
feature- is- linked- to- a- death- 1467319355> lot 8.0 update using existing radar and fleet learning’, Venture
Tesla, A Tragic Loss, June 30 2016 < https://www.tesla.com/ Beat (online), 11 September 2016 < http://venturebeat.com/
blog/tragic-loss> 2016/09/11/elon- musk- unveils- massive- tesla- autopilot- 8- 0-
Thierer, Adam, ‘When the Trial Lawyers Come for the Robot update- using- existing- radar- and- fleet- learning/>
Cars’, Slate (online), 10 June 2016 < http://www.slate.com/ Ziegler, Chris, ‘A Google Self-Driving Car Caused a Crash
articles/technology/future_tense/2016/06/if _a_driverless_car_ for the First Time’, The Verge (online), 29 February 2016
crashes_who_is_liable.html> <http://www.theverge.com/2016/2/29/11134344/google-self-
Thomson, Andrew, ‘Emotionally intelligent computers driving- car- crash- report>
may already have a higher EQ than you’, Tech Crunch (on-
line), 2 December 2016 <https://techcrunch.com/2016/12/
02/emotionally- intelligent- computers- may- already- have- a-
higher- eq- than- you/>
