The Good, The Bad, and The Ugly: Evaluating Tesla’s Human
Factors in the Wild West of Self-Driving Cars
Samineh Gillmore and Nathan L. Tenhundfeld
University of Alabama in Huntsville

With self-driving vehicles no longer a pipe dream of science fiction come the growing pains of this new
technology. Tesla Motors is the industry leader in implementing new self-driving technologies. Tesla has
used new technology in many ways to improve the human factors of their cars, but there are also design flaws
that represent threats to efficiency and safety. This paper details the good, the bad, and the ugly of Tesla’s
designs, not as a way to negatively impact Tesla’s reputation, but to point out the potential human factors
issues that relate to the rise of self-driving cars in general. While the future for autonomous cars has never
seemed so promising, it is abundantly clear that we are nowhere near a reality in which the human does not
need to be considered.

Introduction

Self-driving vehicles are no longer relegated to the works of science fiction. With companies like Tesla, Audi, Toyota, and Waymo investing heavily in the development of self-driving capabilities, this cutting-edge technology will soon be available in even the most economical models. However, such advances in technology come with substantial growing pains. One such growing pain has been the high-profile fatal accidents which resulted from instances of drivers seemingly becoming complacent in their supervision over the system (Abrams & Kurtz, 2016; Griggs & Wakabayashi, 2018). This complacency is not a rare occurrence. Results indicate that most drivers become complacent while tasked with monitoring a self-driving system (Banks, Eriksson, O'Donoghue, & Stanton, 2018). Part of this complacency could be explained by the simple naming of the self-driving system. Names like Tesla's 'Autopilot' result in nearly half of respondents indicating that it would be fine to keep their hands off the steering wheel, despite Tesla explicitly stating that hands must be kept on the steering wheel at all times (Teoh, 2019).

Beyond concerns of complacency have been evaluations of familiarity (Tenhundfeld, de Visser, Haring, et al., 2019), individual differences (Tenhundfeld, de Visser, Ries, Finomore, & Tossell, 2019), transfer of control (Eriksson, Banks, & Stanton, 2017; Stanton, Young, & McCaulder, 1997), training (Ebnali, Lamb, & Fathi, 2020), and situation awareness (Endsley, 2017). However, to date there has not been a critical human factors evaluation of any of these commercially available systems. Such an evaluation is of benefit to the field and industry alike. The purpose of this paper is to highlight the areas in which Tesla, the industry leader in self-driving technology, has implemented designs which range from mild inconveniences to potential catastrophes waiting to happen. The purpose is not to target Tesla, but instead to start a dialogue about the need for careful human factors evaluations when developing these cutting-edge technologies. While Tesla is far from the only manufacturer with these problems, the authors have nearly three years of combined experience driving Teslas, and thus have chosen to critically evaluate the systems based on their own experiences.

We have broken our evaluation into three distinct categories. 'The good' section will focus on the aspects of design that Tesla has done well. 'The bad' section will identify the human factors problems that are poorly designed but do not represent an obvious or immediate danger. Finally, 'the ugly' section will focus exclusively on the design choices Tesla has made which we believe pose an immediate and direct risk to users.

Review

The Good

Tesla's redesign of their vehicles has led to novel human factors choices for various aspects of the car. Some choices are idiosyncratic changes to old technologies, such as the ways in which one opens the glovebox or the doors, which have made Tesla stand out in a unique way. Other design choices relate to the new technologies and new capabilities Teslas offer.

One of the most obvious features in the 'good' category is the configurability afforded by switching user profiles. The profiles are incredibly user friendly, requiring at most 2 clicks to change profiles. Like most cars, the user profiles contain seat, mirror, and steering wheel settings, but Tesla goes one step further by saving driving preferences as well; preferences such as steering mode, braking modes, autopilot settings, and climate are tied to a user's profile. This way, users can tailor settings based on preferences that can change between profiles. This configurability allows multiple drivers to readjust to their preferred or needed settings, enhancing safety and user experience at the literal touch of a button (Lee, Wickens, Liu, & Boyle, 2017).
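As an aside on how such profile-based configuration can be organized, the sketch below illustrates the underlying idea; it is our own illustration, not Tesla's implementation, and every field name in it is hypothetical. The point is simply that bundling driving preferences with seat, mirror, and climate settings lets a single profile switch restore all of them at once.

    from dataclasses import dataclass, field

    @dataclass
    class DriverProfile:
        """Illustrative driver profile; field names are hypothetical, not Tesla's."""
        name: str
        seat: dict = field(default_factory=dict)         # e.g., {"height": 4, "recline": 12}
        mirrors: dict = field(default_factory=dict)       # e.g., {"left": -3, "right": 2}
        steering_wheel: dict = field(default_factory=dict)
        steering_mode: str = "standard"                   # comfort / standard / sport
        braking_mode: str = "standard"                    # e.g., regenerative braking level
        autopilot: dict = field(default_factory=dict)     # e.g., follow distance, speed offset
        climate: dict = field(default_factory=dict)       # e.g., {"temp_c": 21.0}

    def switch_profile(profiles: dict, name: str) -> DriverProfile:
        """A single selection restores every saved preference in the chosen profile."""
        # In a vehicle this would command seat motors, mirror actuators, and
        # software settings; here we simply return the stored bundle.
        return profiles[name]

    profiles = {
        "Driver A": DriverProfile(name="Driver A", steering_mode="comfort", braking_mode="low"),
        "Driver B": DriverProfile(name="Driver B", steering_mode="sport"),
    }
    print(switch_profile(profiles, "Driver B").steering_mode)  # -> sport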
Another redesign that has a positive benefit is the Model 3 dashboard redesign. Tesla decided to remove the traditional dashboard and relocate the information to a center console screen located to the right of the steering wheel. While there are cons to removing the dash (discussed later), one benefit is that it allows for greater visibility of the road and one's surroundings while driving the car.

With the increase in technology comes an increase in the host of sensors that allow the vehicle to detect both proximal and distal objects. The information that is subsequently presented to the driver is often done so in an intuitive manner. For instance, the Tesla displays the other vehicles it detects in the immediate vicinity, allowing the user to understand what the vehicle 'sees'. Similar projections are given for objects which are within approximately two feet, by presenting a color-coded indication on a top-down display of the vehicle. The color-coded indication is displayed in the region where the object is detected (Figure 1). This display nicely adheres to design principles recommended for likelihood alarms, which makes such presentation of information intuitive, learnable, and non-taxing (Lee et al., 2017; Sorkin, Kantowitz, & Kantowitz, 1988).

Figure 1. Color-coded indications (LivingTesla, 2018).
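The color-coded proximity display can be read as a likelihood alarm: instead of a single binary warning, the distance to a detected object is mapped onto an ordered set of colors that conveys how likely the object is to matter. The sketch below illustrates that mapping; the thresholds are ours, chosen purely for illustration, and are not Tesla's actual values.

    def proximity_color(distance_m: float) -> str:
        """Map object distance to a graded, likelihood-alarm-style color.

        Thresholds are illustrative only (roughly within two feet overall).
        """
        if distance_m < 0.2:
            return "red"      # near-certain relevance: imminent contact
        elif distance_m < 0.4:
            return "orange"   # likely relevant: very close
        elif distance_m < 0.6:
            return "yellow"   # possibly relevant: at the edge of the display range
        return "none"         # beyond the display threshold

    for d in (0.1, 0.3, 0.5, 2.0):
        print(f"{d} m -> {proximity_color(d)}")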
A final area in which Tesla should be lauded is in the redundancy of information presented to the driver. For example, when the Autopilot feature is engaged, Tesla provides both transient feedback (a chime) and persistent feedback (an image on the display). This allows users to quickly detect that the Autopilot has been engaged, if they are expecting the feedback (Lee et al., 2017). Similarly, when the Autopilot has been disengaged there is a distinct chime (different from the chime signifying it was engaged), and the persistent feedback of the image disappears.

Tesla's positive redesigns show that they have worked to enhance safety and efficacy in their vehicles through human factors. Despite the positives, there are many cases in which there seems to be significant neglect of human factors principles in design.

The Bad

With a redesign of any new system come aspects that introduce new problems. One main example of this is the relocation of the dashboard information in the Model 3. In this redesign, Tesla has moved the information from behind the steering wheel to the center console (Figure 2). While this redesign comes with the benefit of a larger out-the-window view, it also has the detrimental effect that drivers have to look farther away from the road to get speed or alert information than they otherwise would. Whereas normal dashboard displays allow for a quick shifting of the eyes, this new configuration requires substantial movement of the head. The N-SEEV model of noticing would predict that users are significantly less likely to notice relevant changes in information as a function of the increased effort required to attend to the information (Steelman-Allen, McCarley, Wickens, Sebok, & Bzostek, 2009). While the information is displayed clearly, the requirement that the user look away from the road to acquire pertinent information is a concern. This presentation of such important information on the center console is particularly concerning given the history of the console screen going blank in the middle of driving (Rishabh0428, 2019).

Figure 2. Model 3 dashboard (DeBord, 2017).
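To make the N-SEEV prediction concrete, the sketch below scores the likelihood of noticing a display change as a weighted sum of salience, effort, expectancy, and value, with effort entering negatively, in the spirit of the SEEV family of models. The weights and the numbers for the two display locations are our own illustrative assumptions, not fitted model parameters; they show only why the added head movement (effort) of the center console lowers predicted noticing.

    def seev_score(salience, effort, expectancy, value,
                   w_s=1.0, w_ef=1.0, w_ex=1.0, w_v=1.0):
        """Illustrative SEEV-style noticing score; effort reduces the score."""
        return w_s * salience - w_ef * effort + w_ex * expectancy + w_v * value

    # The same change in speed information, shown in two display locations.
    behind_wheel   = seev_score(salience=0.6, effort=0.2, expectancy=0.7, value=0.8)
    center_console = seev_score(salience=0.6, effort=0.7, expectancy=0.7, value=0.8)
    print(behind_wheel > center_console)  # True: higher effort, lower predicted noticing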
Another poor design is the system for non-emergency alerts on the car. For example, in the case that the car detects low tire pressure, or a certain amount of weight on a seat that does not have a seatbelt fastened, an alert will be displayed on the dashboard (or center console in the Tesla Model 3). These alerts will show up and can be dismissed, but leave a symbol on the dash. However, even once an alert has been dismissed, the vehicle will re-alert the driver by showing a banner on the representation of the car every few minutes until the issue has been resolved. This presents a significant concern for several reasons. First, a salient cue will redirect driver attention from the road and towards the cue itself (Meyer & Lee, 2013). This attention capture does not pose an immediate risk, but could serve to diminish situational awareness (SA) (Endsley, 2001). While the alerts do not represent false alarms, their salience indicates a level of immediate importance that is not inherent in what is being alerted to. This is likely to elicit a response in line with the 'cry wolf' effect, such that subsequent, more important information will be ignored as a function of less important information being continuously cued (Wickens et al., 2009). Finally, this repetitive alerting for a non-urgent situation can lead users to become unnecessarily annoyed and irritated. This annoyance and irritation can lead to users ignoring or turning off alarms rather than taking them seriously (Marshall, Lee, & Austria, 2007).
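One way to avoid both the attention capture and the 'cry wolf' pattern would be to let a dismissed, non-urgent alert persist only as a low-salience status icon rather than re-raising a salient banner every few minutes. The sketch below is our illustration of that alternative policy; it is not Tesla's alert logic.

    class NonUrgentAlert:
        """Illustrative policy: one salient banner, then a quiet persistent icon."""

        def __init__(self, message: str):
            self.message = message
            self.acknowledged = False
            self.resolved = False

        def render(self) -> str:
            if self.resolved:
                return ""                         # nothing shown once the issue clears
            if not self.acknowledged:
                return f"BANNER: {self.message}"  # salient cue shown a single time
            return "status icon"                  # low-salience reminder; no re-alerting

    alert = NonUrgentAlert("Low tire pressure")
    print(alert.render())   # salient banner on first detection
    alert.acknowledged = True
    print(alert.render())   # quiet icon until resolved
    alert.resolved = True
    print(alert.render())   # cleared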
In addition to the aforementioned concerns, which can affect the user in both manual control and auto-driving, there exist concerns inherent to self-driving systems. Transparency refers to the system's communication of relevant information to the operator (e.g., Kraus, Scholz, Stiegemeier, & Baumann, 2019; Mercado et al., 2016). Transparency is an important factor in the development and repair of trust in a system (de Visser, Pak, & Shaw, 2018; Matthews, Lin, Panganiban, & Long, 2019). There are undeniably cases in which the vehicle does a good job communicating with the user, such as its detection of vehicles and objects around it. However, much of the design of Teslas seems to be agnostic towards transparency. One such example is the lack of communication with regard to automation disengagement. Once the vehicle is unable to continue in self-driving mode, or is unable to complete autoparking, it simply disengages without any presentation of information as to why. This poses a significant threat to the calibration of trust, such that the user is unable to tie the disengagement to a specific environmental event, but rather attributes it to the vehicle itself. There are also instances in which communication of the vehicle's intention would be beneficial. Previous work has shown over 50% intervention rates when using the autoparking feature for the first time (Tenhundfeld, de Visser, Haring, et al., 2019; Tenhundfeld, de Visser, Ries, et al., 2019; Tomzcak et al., 2019). Part of this high intervention rate may be attributable to the idiosyncratic method by which the Tesla parks perpendicularly (backing in with a three-point turn versus pulling in nose first). Once users learn what the vehicle is doing, and how it plans to accomplish the autoparking task, they are quick to develop trust, letting the vehicle park itself. However, the lack of communication mars early experiences, and can lead to subsequently unsafe intervention behaviors from the driver.

The final 'bad' design feature is the unintuitive organization of the settings menu. For example, settings regarding emergency braking, lane deviation warnings, and blind spot collision warnings are all located within the 'Autopilot' header rather than the 'Safety & Security' header. This is particularly perplexing given that they are not part of the features that Tesla advertises as part of the Autopilot system, but instead as 'Safety Features' (see https://www.tesla.com/autopilot). Additionally, this group is housed separately from the park assist chimes feature, which is found under 'Safety Features'. Another example is the button to turn off the vehicle being located under 'Safety & Security' rather than 'Quick Controls'. Unfortunately, there is also no search feature which would otherwise allow users to overcome this unintuitive layout. This all contributes to poor learnability for the system (Lee et al., 2017).

While the examples in this section represent poor designs, the consequences of such poor design are not likely to be severe. With that said, we have identified several design flaws which we believe constitute severe risk.
The Ugly

Our final category is 'the ugly' of Tesla designs. This category represents our most serious concerns regarding the design of Tesla vehicles. This is not necessarily to say that the concerns addressed herein represent the greatest deviation from optimal design, but rather that the design flaws here pose the greatest overall risk to users. It is worth prefacing that, once again, these concerns are not limited to Tesla. Thus, the points discussed herein should be considered by all auto manufacturers.

The creation of new driving technologies comes with the reality that some features can impose dangers that are not inherent in normal driving. For instance, in vehicles with self-driving capabilities like Tesla's, there needs to be careful consideration regarding the transfer of control (TOC) between the vehicle and the human. There are two circumstances under which TOC occurs: from the human to the vehicle, and from the vehicle to the human.

In the first instance of TOC (human to vehicle), Tesla seems to have done a nice job with the design (as discussed above). However, one of the main issues with TOC revolves around disengagement caused by the vehicle itself. When the autopilot is unable to navigate a particular situation, it will spontaneously "give up". It alerts the driver with a salient beeping and a displayed image on the dashboard (or center console in the Model 3) telling the driver to regain control immediately. However, at this moment the vehicle has already relinquished control to the human. If the human is unable, or unprepared, to regain control, the vehicle has no fail-safe: the vehicle will simply continue ahead as would a normal car once the driver's hands are removed from the steering wheel. This has resulted in several close calls for both of the authors, wherein the vehicle would straighten out in the middle of a turn, directly into oncoming traffic. While the beep is jarring and would get the attention of drivers who were actively monitoring the car, the lack of a fail-safe when it comes to the autopilot is concerning for those who are not. Even for those users who are not distracted by secondary tasks (e.g., Banks et al., 2018), there remain issues of the loss of SA (Endsley, 2001), performance decrements once regaining control (Stanton et al., 1997), and even out-of-the-loop unfamiliarity (OOTLUF) over long periods of use (Sebok & Wickens, 2017), each of which could result in potentially disastrous consequences for the user or others. While a default 'fail-safe' would be potentially difficult to implement, the inclusion of driver monitoring may be a useful addition in that it would allow a self-driving car to abstain from transferring control to an individual who is clearly unable to resume control. However, when pressed about the incorporation of a driver monitoring system on Lex Fridman's Artificial Intelligence podcast, Elon Musk responded that it was 'totally moot' because Tesla is so close to a fully autonomous vehicle (Fridman, 2019). We are inclined to disagree on both points.
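The kind of driver-monitoring fail-safe we have in mind can be sketched as a readiness check that gates the hand-back: if the driver is not judged ready, the vehicle escalates (slows and stops in a controlled manner) rather than simply relinquishing control. The logic below is purely illustrative of the design argument; the readiness signals and the threshold are our assumptions and do not describe an existing system.

    from dataclasses import dataclass

    @dataclass
    class DriverState:
        """Hypothetical driver-monitoring signals."""
        hands_on_wheel: bool
        eyes_on_road: bool
        reaction_time_s: float  # estimated from recent steering/pedal inputs

    def hand_back_control(driver: DriverState) -> str:
        """Gate the vehicle-to-human transfer of control on driver readiness."""
        ready = (driver.hands_on_wheel and driver.eyes_on_road
                 and driver.reaction_time_s < 1.5)
        if ready:
            return "transfer control to driver"
        # Fail-safe path: do not simply continue straight ahead on disengagement.
        return "alert driver, reduce speed, and come to a controlled stop"

    print(hand_back_control(DriverState(True, True, 0.8)))
    print(hand_back_control(DriverState(False, False, 2.4)))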
Such a system may also help in cases wherein the driver unintentionally disengages the autopilot, by analyzing for behaviors inconsistent with full control over the vehicle. This misalignment of belief and reality about the state of the automation (i.e., engaged or not) is known as mode confusion (Sarter, Woods, & Billings, 1997). While there are both auditory and visual indications that the autopilot has been disengaged, they are relatively non-salient and unlikely to result in the driver noticing them, especially when expectancy is low (Steelman-Allen et al., 2009). There have been many instances we have noticed wherein we, or others we have observed, have unintentionally disengaged the autopilot without noticing, only to jarringly realize we had done so moments later when the vehicle did not respond to the environment.

Similarly, the method by which a user engages the autopilot on the Model S and Model X is a source of concern. On the left side of the steering wheel there is a smaller stick which is to be pulled towards the driver twice in order to engage the autopilot (Figure 3). Despite there being a very noticeable size difference between the 'cruise control lever' and the two other control sticks on the left-hand side, the action of pulling the stick towards the driver is one that does not require the driver's hand to be oriented in such a way as to notice the size difference. As such, on many occasions this has resulted in the authors (or those they observed) pulling one of the other two aforementioned sticks toward them, rather than the desired 'cruise control lever', which has caused mode confusion. A simple shape redesign of the 'cruise control lever' would help prevent such confusion (Hunt, 1953). It is worth noting, however, that in the Model 3 only, Tesla has integrated the features into a single stick. Barring a shape redesign, the actions necessary to engage the autopilot should be distinct from the actions available with the other sticks. For example, requiring that operators push the 'cruise control lever' in towards the steering column would require an action not afforded by the other sticks. Simple reliance on the transient feedback of the chime and the persistent feedback of the light is insufficient for a task which normally takes place in noisy or rich environments which are attentionally demanding (Lee et al., 2017).

Figure 3. Autopilot control stick, referred to as the 'cruise control lever' by Tesla, indicated by arrow (Amadeo, 2015).
Our final point of concern regards an idiosyncrasy afforded by the advance of technology. Because vehicles today are so reliant on computers for their functioning, enhancements to the vehicles can be made with simple software updates. Tesla has adopted a method by which they are able to update vehicles through Wi-Fi and cellular connections. While owners are able to turn off automatic updates, failure to update a vehicle's software will result in subsequent damage not being covered by the warranty (Tesla, 2017). There have been many documented cases of Tesla pushing updates to vehicles which have had unexpected consequences. One such example is the changing of default settings pertaining to things like whether the car will roll or hold its position on a hill (Brandonlive, 2019). The pushing of software updates presents a substantial concern amidst criticism that the new features are far from flawless, a clear issue in the face of user overtrust. The most recent evidence of this was the multitude of issues, including accidents, near misses, and complete failures, when using the new 'Smart Summon' feature, all resulting in '#TeslaSummonIssues' emerging on Twitter (Lekach, 2019).
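A straightforward safeguard against an update silently reverting behavior would be to treat the driver's explicit choices as overrides that survive a change of factory defaults. The merge below is a minimal sketch of that idea under our own assumptions about how settings might be stored; it is not Tesla's update mechanism.

    def apply_update(new_defaults: dict, user_overrides: dict) -> dict:
        """Merge updated factory defaults while preserving explicit user choices."""
        merged = dict(new_defaults)    # start from the post-update defaults
        merged.update(user_overrides)  # user-set values win; an update cannot revert them
        return merged

    new_defaults = {"hill_behavior": "roll", "regen_braking": "standard"}
    user_overrides = {"hill_behavior": "hold"}  # the driver had explicitly chosen HOLD
    print(apply_update(new_defaults, user_overrides))
    # -> {'hill_behavior': 'hold', 'regen_braking': 'standard'}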
Each of these aforementioned issues presents what we believe is a clear and significant risk to Tesla owners and to those in the surrounding environment. While the probability of fatal consequences may be low, the design of such systems needs to be considered carefully, with the human element in mind.
New Contribution

Elon Musk has publicly stated that a Tesla is "…not a car, it's a thing to maximize enjoyment" (Rogan, 2018). While, in our opinion, Tesla has certainly provided a highly enjoyable 'thing', there are serious human factors concerns regarding various design choices. The unfortunate truth is that users will find ways to either intentionally or unintentionally push the limits of these self-driving vehicles. From trying to trick the car into believing the driver's hands are on the wheel when they are not, to reading or playing games while the car drives itself, the human will always be the relatively unpredictable variable in human-automation interaction. Thus, as we move forward there needs to be greater consideration of the ways in which humans will struggle with imperfect designs. This paper has attempted to highlight some of the areas in which these imperfect designs pose a risk to the human operator.

Discussion

Again, many other car manufacturers have their own human factors design issues, and so this paper should not be construed to insinuate that Tesla has committed a sin of human factors neglect that has not been committed by others. Simply put, we have extensive experience with Tesla and have spent considerable amounts of time contemplating and discussing the human factors issues. Inherent in these sorts of cutting-edge technologies is the reality that many of these companies are in a race with one another to provide the first fully autonomous car on the market. While there is much to be said for the advancements made by Tesla, we would caution all car manufacturers against delivering systems to market without thorough evaluation of human factors. This is particularly relevant for tasks related to human-automation interaction and TOC, for which most users are naïve.

In conclusion, Tesla has brought to market the most cutting-edge technologies for self-driving vehicles. There are certain design choices which Tesla has made that should be lauded for their adherence to human factors principles, while other design flaws represent threats to efficiency and safety alike. We have detailed the good, the bad, and the ugly of Tesla's design of its vehicles, as it relates to what the field of human factors already knows about design principles. Our hope is that this paper serves three distinct purposes. First, we believe it is important to illustrate the shortcomings of Tesla's design, not to negatively impact perceptions of Tesla, but rather to illustrate the deficiencies in the consideration of human factors as it relates to self-driving vehicles. Secondly, we hope to inform other researchers of potential safety and experimental control issues which should be considered for research being conducted in a Tesla. Finally, we hope this paper sparks a dialogue about the best ways for the field of human factors to communicate our understanding of complex fields of study, such as human-automation interaction, to the industries which need this information the most. While the future for autonomous cars has never seemed so promising, it is abundantly clear that we are nowhere near a reality in which the human does not need to be considered.

References
Abrams, R., & Kurtz, A. (2016). Joshua Brown, Who Died in Self-Driving Accident, Tested Limits of His Tesla. The New York Times.
Amadeo, R. (2015, October 15). Driving (or kind of not driving) a Tesla Model S with Autopilot. Retrieved from https://arstechnica.com/cars/2015/10/driving-or-kind-of-not-driving-a-tesla-model-s-with-autopilot/
Banks, V. A., Eriksson, A., O’Donoghue, J., & Stanton, N. A. (2018). Is partially automated driving a bad idea? Observations from an on-road study. Applied Ergonomics, 68, 138–145.
Brandonlive. (2019). 36.2.3 reverted my HOLD mode setting.
de Visser, E. J., Pak, R., & Shaw, T. H. (2018). From “automation” to “autonomy”: The importance of trust repair in human-machine interaction. Ergonomics.
DeBord, M. (2017, July 29). The Tesla Model 3 has the most minimalistic interior I've ever seen. Retrieved from https://www.businessinsider.com/tesla-model-3-minimalistic-interior-2017-7
Ebnali, M., Lamb, R., & Fathi, R. (2020). Familiarization tours for first-time users of highly automated cars: Comparing the effects of virtual environments with different levels of interaction fidelity. Preprint, 1–28.
Endsley, M. R. (2001). Designing for situation awareness in complex systems: The challenge of the information age. Proceedings of the Second International Workshop on Symbiosis of Humans, Artifacts and Environment, 1–14.
Endsley, M. R. (2017). Autonomous driving systems: A preliminary naturalistic study of the Tesla Model S. Journal of Cognitive Engineering and Decision Making, 11(3), 225–238. https://doi.org/10.1177/1555343417695197
Eriksson, A., Banks, V. A., & Stanton, N. A. (2017). Transition to manual: Comparing simulator with on-road control transitions. Accident Analysis & Prevention, 102C, 227–234.
Fridman, L. (2019). Artificial Intelligence Podcast -- Elon Musk: Tesla Autopilot.
Griggs, T., & Wakabayashi, D. (2018). How a Self-Driving Uber Killed a Pedestrian in Arizona. The New York Times.
Hunt, D. P. (1953). The Coding of Aircraft Controls. https://doi.org/10.31826/9781463239909-006
Kraus, J., Scholz, D., Stiegemeier, D., & Baumann, M. (2019). The More You Know: Trust Dynamics and Calibration in Highly Automated Driving and the Effects of Take-Overs, System Malfunction, and System Transparency. Human Factors, 1–19. https://doi.org/10.1177/0018720819853686
Lee, J. D., Wickens, C. D., Liu, Y., & Boyle, L. N. (2017). Designing for People (3rd ed.). Charleston, SC.
Lekach, S. (2019). Tesla owners immediately tested the new Smart Summon in parking lots. Retrieved February 6, 2020, from Mashable website: https://mashable.com/article/tesla-smart-summon-roundup/
LivingTesla. (2018, April 2). Retrieved February 12, 2020, from https://www.youtube.com/watch?v=-Lhf15RYHiA
Marshall, D. C., Lee, J. D., & Austria, P. A. (2007). Alerts for in-vehicle information systems: Annoyance, urgency, and appropriateness. Human Factors, 49(1), 145–157.
Matthews, G., Lin, J., Panganiban, A. R., & Long, M. D. (2019). Individual Differences in Trust in Autonomous Robots: Implications for Transparency. IEEE Transactions on Human-Machine Systems, PP(November), 1–11. https://doi.org/10.1109/THMS.2019.2947592
Mercado, J. E., Rupp, M. A., Chen, J. Y. C., Barnes, M. J., Barber, D. J., & Procci, K. (2016). Intelligent Agent Transparency in Human-Agent Teaming for Multi-UxV Management. Human Factors, 58(3), 401–415. https://doi.org/10.1177/0018720815621206
Meyer, J., & Lee, J. D. (2013). Trust, reliance, and compliance. In A. Kirlik & J. D. Lee (Eds.), The Oxford Handbook of Cognitive Engineering (pp. 109–124). New York: Oxford University Press.
Rishabh0428. (2019). Screen keeps going blank while driving.
Rogan, J. (2018). Joe Rogan Experience #1169 - Elon Musk.
Sarter, N. B., Woods, D. D., & Billings, C. E. (1997). Automation surprises. In G. Salvendy (Ed.), Handbook of Human Factors & Ergonomics (2nd ed.).
Sebok, A., & Wickens, C. D. (2017). Implementing Lumberjacks and Black Swans into Model-Based Tools to Support Human-Automation Interaction. Human Factors, 59(2), 189–203. https://doi.org/10.1177/0018720816665201
Sorkin, R. D., Kantowitz, B. H., & Kantowitz, S. C. (1988). Likelihood alarm displays. Human Factors, 30(4), 445–459.
Stanton, N. A., Young, M. S., & McCaulder, B. (1997). Drive-by-wire: The case of driver workload and reclaiming control with adaptive cruise control. Safety Science, 27(2), 149–159.
Steelman-Allen, K. S., McCarley, J. S., Wickens, C. D., Sebok, A., & Bzostek, J. (2009). N-SEEV: A computational model of attention and noticing. Proceedings of the Human Factors and Ergonomics Society, 774–778. https://doi.org/10.1518/107118109x12524442637381
Tenhundfeld, N. L., de Visser, E. J., Haring, K. S., Ries, A. J., Finomore, V. S., & Tossell, C. C. (2019). Calibrating trust in automation through familiarity with the autoparking feature of a Tesla Model X. Journal of Cognitive Engineering and Decision Making, 13(4), 279–294. https://doi.org/10.1177/1555343419869083
Tenhundfeld, N. L., de Visser, E. J., Ries, A. J., Finomore, V. S., & Tossell, C. C. (2019). Trust and Distrust of Automated Parking in a Tesla Model X. Human Factors, 1–18.
Teoh, E. R. (2019). What’s in a name? Drivers’ perceptions of the use of five SAE Level 2 driving automation systems.
Tesla. (2017). Model X Owner’s Manual. Retrieved from https://www.tesla.com/sites/default/files/model_x_owners_manual_north_america_en.pdf
Tomzcak, K., Pelter, A., Gutierrez, C., Stretch, T., Hilf, D., Donadio, B., … Tossell, C. C. (2019). Let Tesla Park Your Tesla: Driver Trust in a Semi-Automated Car. Proceedings of the Annual Systems and Information Engineering Design Symposium (SIEDS) Conference.
Wickens, C. D., Rice, S., Keller, D., Hutchins, S., Hughes, J., & Clayton, K. (2009). False alerts in air traffic control conflict alerting system: Is there a “cry wolf” effect? Human Factors, 51(4), 446–462. https://doi.org/10.1177/0018720809344720
