
Research Article

Understanding Automation Failure

Mica R. Endsley

Journal of Cognitive Engineering and Decision Making, 2023, Vol. 0(0), 1–8
© 2023, Human Factors and Ergonomics Society
Article reuse guidelines: sagepub.com/journals-permissions
DOI: 10.1177/15553434231222059
journals.sagepub.com/home/edm

Abstract
The implementation of automation in many domains has led to well-documented accidents and incidents, resulting from reduced situation awareness that occurs when operators are out-of-the-loop (OOTL), automation confusion, and automation interaction difficulties. Wickens coined the term lumberjack effect to summarize the finding that while automation works well most of the time in typical or normal situations, the performance problems that occur in novel or unexpected situations also increase the likelihood of catastrophic errors. Skraaning and Jamieson have criticized the lumberjack effect due to a study in which they failed to find it. I show that this claim is unsupported due to a number of methodological limitations in their study and conceptual errors. They also provide a model of automation failure that fails to clearly delineate the many barriers to accidents that are available, instead emphasizing the ways in which automation can fail technically and different types of human error. An alternate automation failure model is presented that provides a broader socio-technical perspective emphasizing the design features, processes, capabilities, organizational policies, and training that support people in improving system safety when automation fails.

Keywords
errors, automation, human-automation interaction, level of automation, situation awareness

SA Technologies, Gold Canyon, AZ, USA

Corresponding Author:
Mica R. Endsley, SA Technologies, 5301 S. Superstition Mountain Drive, #104377, Gold Canyon, AZ 85118, USA.
Email: mica@satechnologies.com

Automation-Induced Failures

Severe accidents and safety-compromising incidents are a well-known side effect of automation (Wiener, 1988). These accidents arise from a number of automation problems, largely induced by the design of these systems, that affect people's ability to oversee and interact with them in order to maintain operational safety. These include (a) automation confusion, where operators have trouble understanding what the automation is doing; (b) reduced situation awareness (SA) that results from being out-of-the-loop (OOTL), leading to slow detection and response to events that the automation cannot handle; and (c) automation interaction difficulties, where people have problems directing the automation, particularly in time-critical situations.

Gawron (2019) documents 26 automation-related aircraft accidents and incidents. In reviewing this list, 27% can be categorized as involving automation confusion, 58% as OOTL/SA problems, and 36% as automation interaction problems (with some accidents having multiple problems). Contributing to these challenges were (a) 61% of cases having highly complex automation logic not performing as expected or as appropriate for the situation, (b) 38% of cases having mode errors in which the wrong mode was accidentally selected or the system was accidentally disengaged, and (c) 8% of cases involving erroneous inputs causing the automation to perform incorrectly for the situation.

Based on this history, Wickens (Onnasch et al., 2014; Sebok & Wickens, 2017) coined the term "lumberjack effect" to refer to the general finding that while automation may work well most of the time in typical or normal situations, the performance problems that occur in novel or unexpected situations also increase the likelihood of catastrophic errors. This generalization summarizes real-world findings in which automation-related accidents arise in edge-case situations that the automation may not be designed to handle, where it acts inappropriately due to inaccurate inputs, and where it adds extra complexity to the operators' job. The challenge lies not just in the robustness and appropriateness of the systems' programming, but also in many deficiencies in the design of the human-automation interaction and resultant deficiencies in SA. While not specific to just the OOTL problem, the lumberjack effect includes OOTL SA deficits within its purview.

Automation Failures

Skraaning and Jamieson (2023) believe that there is a need to better define the concept of automation failure in order to ensure that researchers more clearly understand the nature of the event, particularly in the context of automation-related human performance challenges. I agree with them that it is worthwhile to fully define what is meant by automation failure; however, their proposed taxonomy has several drawbacks.

(1) It fails to distinguish causes from effects, falling short of providing either a taxonomy or a model of automation failure. It includes elements and systemic causes of automation failure characterizing a number of different types of shortcomings that can occur within the automation (e.g., logic deficiencies, programming errors, and malfunctioning hardware) as well as inputs to the automation (such as sensor failures) and combinatorial effects from interactions between automated systems. It also includes a partial list of human-automation interaction problems, and a partial list of human/organizational slips or misconceptions. These categories are somewhat overlapping, however. For example, inadvertent activation/deactivation of automation is listed as a human/organizational slip (which may be a precipitating input to an accident), but it is also the outcome of poor human-automation interface design. An operator's incorrect mental model may be due to inadequate training (listed as a human/organizational slip) or due to overly complex and hidden automation (listed as a human-automation interaction breakdown). Yet the fact that the automation logic is highly complex in the first place does not seem to be listed as a failure of the automation design itself.

(2) It fails to provide a socio-technical view of error that can assist automation developers in avoiding negative outcomes, instead blaming the operator at the pointy end of the stick for errors (Woods et al., 2017). For example, the human and organizational slips/misconceptions all cite the operator as performing improperly in some way without identifying these issues as a consequence of automation design features or organizational actions.

(3) No evidence is provided that many of the factors in the taxonomy (i.e., different etiologies for automation failure) matter in terms of differential effects on human performance with automation. That is, while I agree that the list of potential automation problems may be a good one, and any or all of them may be at fault in a given accident, there is no evidence at present that the nature of the human response to the automation fault will differ based on the 13 categories of failure listed. In the review of automation accidents in the aviation field provided by Gawron (2019), for example, only a subset of these automation problems is observed, and there is no clear difference in the subsequent outcomes.

Prevention of Automation-Induced Performance Failures

As a remedy to these shortcomings, Figure 1 shows an alternate depiction of automation failure, clearly distinguishing inputs from outputs as well as relevant intervention factors. The model depicts a number of factors that mediate between particular automation-related events and negative outcomes of concern. This approach more clearly illustrates that accidents are not a direct result of just an automation failure, but that a series of barriers to error are available (Reason, 2000). In this light, automation accidents are not the sole result of operator error, but rather are induced by a series of precipitating events and automation features as well as failures to provide tools, capabilities, policies, and training that support people in improving system safety under these conditions. This depiction more clearly emphasizes the specific actions that system developers and organizations can take to maintain operational safety by reducing the occurrence of human performance problems in the face of failure events.

Figure 1. Anatomy of automation failure: Barriers for the prevention of human performance degradation.

OOTL, SA, and Level of Automation—Tempests in Teapots

Skraaning and Jamieson (Jamieson & Skraaning, 2020a, 2020b; Skraaning & Jamieson, 2023) criticize the lumberjack effect on the basis of their inability to find it in a study they conducted in a nuclear power simulation with an automated aid for performing a checklist. They believe the reason for this is that their study was conducted in a high-fidelity simulator with experienced power plant operators (Jamieson & Skraaning, 2018).

Wickens et al. (2020) disputed their criticism of the lumberjack effect on the basis that their findings were limited due to:

(1) The automation failure involved a malfunction in an underlying relief valve, not the automation itself,
(2) Reliance on an unvalidated measure of SA called the Important Parameter Assessment Questionnaire (IPAQ),
(3) Discounting participants' negative ratings of the high level of automation (LOA) conditions that were consistent with lumberjack predictions and highly significant,
(4) Lack of statistical power associated with a small sample size, and
(5) Lack of comparison between routine and abnormal conditions.

While I agree with Skraaning and Jamieson (2023) that the automation failure condition in their study was relevant in terms of being representative of many automation failures, disputing point 1, I largely agree with Wickens et al.'s (2020) other four points. There are a number of issues, both methodological and conceptual, that significantly limit the generalizability of the Jamieson and Skraaning (2020a) study.
Methodological Limitations

Small Sample Size. The study involved only eight teams of operators, each performing in four automation conditions, providing only eight measures of performance per condition. This is an extremely small number for detecting differences between conditions. While Skraaning and Jamieson argue that somehow the power of the test is not important, this fails to address a key limitation of their study: perhaps eight measures of an event in each condition is simply insufficient for showing statistical significance between conditions. Another study cited by Skraaning and Jamieson as contradicting the lumberjack effect is Calhoun et al. (2009). Yet that study had only six participants, an issue that its authors credited for the performance measure failing to reach statistical significance even though the trends followed the expected direction of higher LOA yielding slower performance when the automation is unreliable (p = .16). Similarly, Cummings and Mitchell (2007) had only three participants in each LOA condition, yet still found that high automation (management by exception) was worse for performance (p = .07), particularly when there were a high number of replanning events. Skraaning and Jamieson do not provide any trend data.

Limits of Testing. The Jamieson and Skraaning study involved eight teams (of three people each) performing in four scenarios. All scenarios involved a fault in which one valve remained open, varying only in terms of which valve and why it failed. After the first fault, it is highly unlikely that the operators were surprised or not expecting a fault in the remaining scenarios. The authors do not report any testing of order effects, which could easily have contributed to the lack of statistically significant differences between conditions. With four LOA conditions and only eight teams, attempts at counterbalancing for order effects can have been only partial at best.

Poor SA Measure. Skraaning and Jamieson relied on IPAQ as a measure of SA. Yet this is an unvalidated measure that has not been used in any published literature. Their only citations for it are two unpublished internal documents from an organization that no longer exists, which are not publicly available. This measure, administered at the end of the scenario, asked operators to rate the importance of eight process parameters. As a key limitation, by assessing this information only at the end of the scenario, it is likely to capture only operators' understanding at that point, well after participants may have figured out what was going on in the scenario (Endsley, 2021). Thus, it does not necessarily capture people's SA at the time of the anomalous event or any delays in ascertaining what was really happening that could lead to negative outcomes in many domains. The measure therefore may suffer from significant hindsight bias. Further, a high degree of overlap is shown in the 95% confidence intervals on this measure between the four LOA conditions, casting some doubt on the strength of their findings of SA differences between conditions.

Other Relevant Factors. There are several other factors that also could have contributed to the lack of an LOA performance effect in the Jamieson and Skraaning study. The negative effects of high LOAs have been shown to be worse for continuous control tasks and those involving advanced queueing of tasks (Endsley & Kaber, 1999; Kaber et al., 2000), and when other tasks are present that compete for people's attention (Kaber & Endsley, 2004). The Jamieson and Skraaning study, however, involved a checklist that operators were working through with different levels of automation. This type of task, particularly with no competing tasks, is less likely to fall prey to OOTL problems.

Secondly, it has been shown that display transparency can reduce or eliminate OOTL performance decrements (Bagheri & Jamieson, 2004; Bass et al., 2013; Bean et al., 2011; Dzindolet et al., 2002; Mercado et al., 2016; Selkowitz et al., 2017; Seppelt & Lee, 2007; Stowers et al., 2017) as well as SA problems (Boyce et al., 2015; Chen et al., 2014; Schmitt et al., 2018; Selkowitz et al., 2017). It is entirely possible that Jamieson and Skraaning did a good job of providing automation transparency with their automated checklist display design. While it cannot be stated categorically that either of these factors contributed to the lack of significant differences in failure detection performance between LOAs in their study, neither can they be ruled out with enough certainty to call the lumberjack effect into question.
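The small-sample-size concern raised above can be made concrete with a rough Monte Carlo sketch (standard library only). The four condition means, unit error variance, and the generously "large" standardized effect size of roughly f = 0.4 are illustrative assumptions of mine, not values taken from the Jamieson and Skraaning study; the sketch simply estimates how often a one-way ANOVA with eight observations per condition would reach significance for such an effect:

```python
import random
import statistics

def anova_f(groups):
    """One-way ANOVA F statistic for a list of groups of observations."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand = statistics.fmean(x for g in groups for x in g)
    ss_between = sum(len(g) * (statistics.fmean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - statistics.fmean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

def simulate_power(means, n_per_group, sims=4000, alpha=0.05, rng=None):
    """Estimate power by simulation: the critical F value is taken from
    null-model runs, and power is the rejection rate when the assumed
    true condition means actually differ."""
    rng = rng or random.Random(42)

    def experiment(ms):
        return [[rng.gauss(m, 1.0) for _ in range(n_per_group)] for m in ms]

    null_fs = sorted(anova_f(experiment([0.0] * len(means))) for _ in range(sims))
    critical = null_fs[int((1 - alpha) * sims)]  # empirical 95th percentile
    hits = sum(anova_f(experiment(means)) > critical for _ in range(sims))
    return hits / sims

# Four hypothetical LOA conditions whose means are spread to give a
# between-condition SD of ~0.4 against unit within-condition SD
# (Cohen's f ~ 0.4, conventionally a "large" effect).
c = 0.4 / 1.25 ** 0.5
means = [-1.5 * c, -0.5 * c, 0.5 * c, 1.5 * c]
power = simulate_power(means, n_per_group=8)
print(f"Estimated power with 8 measures per condition: {power:.2f}")
```

Even under this deliberately large assumed effect, the simulated design detects it well under half the time at alpha = .05, so a null result from eight measures per condition says little either way about the underlying effect.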
Conceptual Errors

More than OOTL. Much discussion about the lumberjack effect is built around studies of OOTL. However, it is worth pointing out that automation-related performance problems result not just from OOTL, but also from automation confusion and automation interaction errors. Wickens's description of the lumberjack effect is not limited to OOTL; rather, catastrophic events can happen due to automation problems of varying kinds.

Probabilistic versus Deterministic Models. Much of Skraaning and Jamieson's argument lies in the presumption that the "burden of proof for a prediction of human performance effects rests on those making the prediction," and that even a few studies with findings that contradict that model are enough to cast doubt on it. This viewpoint would only hold true if the lumberjack effect were deterministic rather than probabilistic. In fact, it is plainly obvious that automation in many cases works well much of the time and that people are often able to detect and correct for its shortcomings, or it would never be implemented at all. Rather, the lumberjack effect merely states that the probability of these OOTL events increases with automation. The more detailed research literature provides information on the many factors that affect that probability (e.g., LOA, competing tasks, time on task, task implementation, display transparency, and training) (Endsley, 2017b).

As for the burden of proof, not only does the wide body of research literature on this matter show a preponderance of evidence for the effect of LOA on OOTL from laboratory studies (Onnasch et al., 2014), but a large body of real-world examples (e.g., Gawron, 2019) also shows that these problems are not just the result of the artificialities of the laboratory, but are a significant problem in complex settings with experienced operators.

SA and LOA. Some confusion seems to exist on the effects of higher LOAs on SA. The research base includes studies that show increases in SA at higher LOAs (Endsley & Kaber, 1999; Ma & Kaber, 2005); however, most studies show decreases in SA at higher LOAs (Endsley & Kiris, 1995; Franz et al., 2015; Jipp & Ackerman, 2016; Kaber et al., 2000; Manzey et al., 2012; Sethumadhavan, 2009). Understanding this apparent discrepancy relies on looking more closely at the data.

High LOAs can have the advantage of freeing up cognitive resources so that people have more time to take in information, at least initially. But when secondary tasks are introduced, this potential benefit disappears (Kaber & Endsley, 2004; Ma & Kaber, 2005; Weaver & DeLucia, 2022). Engagement decreases and attention to other tasks increases over time, lowering SA on automation-related information (Carsten et al., 2012) and increasing complacency (Wickens et al., 2015). This is more likely with automation that is more reliable, which acts to increase trust (Wickens & Dixon, 2007). SA essentially can become more variable under increased automation, increasing with effort but decreasing during periods of distraction and higher workload (Endsley, 2017a). Importantly, studies have shown a significant correlation between SA and manual take-over time following an automation problem (Clark et al., 2017; Sethumadhavan, 2009), demonstrating the importance of SA at the time that an event occurs for people's ability to recover from the automation failure.

Thus, if SA actually did increase in the higher LOA conditions in Skraaning and Jamieson's studies (which is questionable given their measure of SA), it would not be particularly damning with respect to the lumberjack effect. Rather, it is somewhat consistent with a study in which participants were not subject to dual tasking.

Task Complexity. Jamieson and Skraaning's central rationale for not finding an OOTL effect centers on the fact that their study employed experienced operators in a high-fidelity simulation environment, claiming the lumberjack effect does not extend to complex environments. However, there is not much evidence to back up this claim. Two of the other studies cited as additional evidence by Jamieson and Skraaning also had a very small number of inexperienced participants, with their trends generally supporting the lumberjack effect (Calhoun et al., 2009; Cummings & Mitchell, 2007). The third study employed 20 experienced air traffic controllers using a conflict detection aid (Metzger & Parasuraman, 2005). It also supports the lumberjack effect, showing improved performance with the aid in normal situations (p = .01) and a trend toward better manual performance compared to performance with the automated aid when it was unreliable (p = .14). The authors felt this finding had practical significance in safety-critical environments, particularly since order effects were present and the less reliable condition was always presented last. SA was not measured in that study.

While many experimental research studies in this area employed either low- or medium-fidelity simulations, the large number of accidents and incidents from highly complex real-world settings with experienced operators discussed at the beginning of this paper shows that the OOTL event occurs outside of laboratory settings as well.

Conclusions

Skraaning and Jamieson's "failure to find automation failure" is far more the result of significant methodological issues, conceptual errors, and logical flaws than it is of any shortcomings in the well-documented OOTL phenomenon. There is little reason (in their words) to "throw the baby out with the bath water." The Endsley (2017b) and Onnasch et al. (2014) reviews of experimental research on this concept over the past 40 years add significant depth to our understanding of the OOTL phenomenon, which has been repeatedly demonstrated in complex, real-world settings, with sometimes catastrophic effects. Importantly, the lumberjack effect and the human-autonomy system oversight (HASO) model (Endsley, 2017b) demonstrate predictive validity, with a new crop of accidents being found for automobile automation of continuous control functions (Siddiqui & Merrill, 2023) and benefits for automation directed at improving SA (Cicchino, 2018).

This paper shows the robust effects of automation failures on human performance due to automation confusion, SA and OOTL problems, and automation interaction problems, with accidents often stemming from multiple contributing factors. Importantly, there are many interventions available to break the accident chain, as outlined in Figure 1. Automation developers and organizations should focus on improving the design of the automation and the human-automation interface, detailed testing of the combined human-automation system in difficult and novel scenarios, providing clear communications regarding system capabilities and limitations, setting appropriate policies, and providing needed training.

Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.

ORCID iD
Mica R. Endsley https://orcid.org/0000-0002-2359-947X

References
Bagheri, N., & Jamieson, G. A. (2004). The impact of context-related reliability on automation failure detection and scanning behaviour. Proceedings of the 2004 IEEE International Conference on Systems, Man and Cybernetics, The Hague, Netherlands (pp. 212–217). IEEE.
Bass, E. J., Baumgart, L. A., & Shepley, K. K. (2013). The effect of information analysis automation display content on human judgment performance in noisy environments. Journal of Cognitive Engineering and Decision Making, 7(1), 49–65. https://doi.org/10.1177/1555343412453461
Bean, N. H., Rice, S. C., & Keller, M. D. (2011). The effect of gestalt psychology on the system-wide trust strategy in automation. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Santa Monica, CA (pp. 1417–1421). Human Factors and Ergonomics Society.
Boyce, M. W., Chen, J. Y. C., Selkowitz, A. R., & Lakhmani, S. G. (2015). Effects of agent transparency on operator trust. Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, Portland, Oregon (pp. 179–180).
Calhoun, G. L., Draper, M. H., & Ruff, H. A. (2009). Effect of level of automation on unmanned aerial vehicle routing task. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Santa Monica, CA (pp. 197–201). Human Factors and Ergonomics Society.
Carsten, O., Lai, F. C. H., Barnard, Y., Jamson, A. H., & Merat, N. (2012). Control task substitution in semiautomated driving: Does it matter what aspects are automated? Human Factors, 54(5), 747–761. https://doi.org/10.1177/0018720812460246
Chen, J. Y. C., Procci, K., Boyce, M., Wright, J., Garcia, A., & Barnes, M. (2014). Situation awareness-based agent transparency (ARL-TR-6905). Army Research Laboratory.
Cicchino, J. B. (2018). Effects of blind spot monitoring systems on police-reported lane-change crashes. Traffic Injury Prevention, 19(6), 615–622. https://doi.org/10.1080/15389588.2018.1476973
Clark, H., McLaughlin, A. C., & Feng, J. (2017). Situational awareness and time to takeover: Exploring an alternative method to measure engagement with high-level automation. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Los Angeles, CA (pp. 1452–1456). Sage.
Cummings, M., & Mitchell, P. (2007). Operator scheduling strategies in supervisory control of multiple UAVs. Aerospace Science and Technology, 11(4), 339–348. https://doi.org/10.1016/j.ast.2006.10.007
Dzindolet, M. T., Pierce, L., Peterson, S., Purcell, L., & Beck, H. (2002). The influence of feedback on automation use, misuse, and disuse. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Santa Monica, CA (pp. 551–555). Human Factors and Ergonomics Society.
Endsley, M. R. (2017a). Autonomous driving systems: A preliminary naturalistic study of the Tesla Model S. Journal of Cognitive Engineering and Decision Making, 11(3), 225–238. https://doi.org/10.1177/1555343417695197
Endsley, M. R. (2017b). From here to autonomy: Lessons learned from human-automation research. Human Factors, 59(1), 5–27. https://doi.org/10.1177/0018720816681350
Endsley, M. R. (2021). A systematic review and meta-analysis of direct objective measures of situation awareness: A comparison of SAGAT and SPAM. Human Factors, 63(1), 124–150. https://doi.org/10.1177/0018720819875376
Endsley, M. R., & Kaber, D. B. (1999). Level of automation effects on performance, situation awareness and workload in a dynamic control task. Ergonomics, 42(3), 462–492. https://doi.org/10.1080/001401399185595
Endsley, M. R., & Kiris, E. O. (1995). The out-of-the-loop performance problem and level of control in automation. Human Factors, 37(2), 381–394. https://doi.org/10.1518/001872095779064555
Franz, B., Haccius, J. P., Stelzig-Krombholz, D., Pfromm, M., Kauer, M., & Abendroth, B. (2015). Evaluation of the SAGAT method for highly automated driving. Proceedings of the 19th Triennial Congress of the IEA, Melbourne, Australia (p. 14).
Gawron, V. (2019). Automation in aviation accidents: Accident analyses (MTR190013). McLean, VA: MITRE Corporation.
Jamieson, G. A., & Skraaning, G. (2018). Levels of automation in human factors models for automation design: Why we might consider throwing the baby out with the bathwater. Journal of Cognitive Engineering and Decision Making, 12(1), 42–49. https://doi.org/10.1177/1555343417732856
Jamieson, G. A., & Skraaning, G. (2020a). The absence of degree of automation trade-offs in complex work settings. Human Factors, 62(4), 516–529. https://doi.org/10.1177/0018720819842709
Jamieson, G. A., & Skraaning, G. (2020b). The harder they fall? A response to Wickens et al. (2019) regarding the generalizability of lumberjack predictions to complex work settings. Human Factors, 62(4), 535–539. https://doi.org/10.1177/0018720820904623
Jipp, M., & Ackerman, P. L. (2016). The impact of higher levels of automation on performance and situation awareness: A function of information-processing ability and working-memory capacity. Journal of Cognitive Engineering and Decision Making, 10(2), 138–166. https://doi.org/10.1177/1555343416637517
Kaber, D., Onal, E., & Endsley, M. R. (2000). Design of automation for telerobots and the effect on performance, operator situation awareness, and subjective workload. Human Factors and Ergonomics in Manufacturing, 10(4), 409–430. https://doi.org/10.1002/1520-6564(200023)10:4<409::aid-hfm4>3.0.co;2-v
Kaber, D. B., & Endsley, M. R. (2004). The effects of level of automation and adaptive automation on human performance, situation awareness and workload in a dynamic control task. Theoretical Issues in Ergonomics Science, 5(2), 113–153. https://doi.org/10.1080/1463922021000054335
Ma, R., & Kaber, D. (2005). Situation awareness and workload in driving while using adaptive cruise control and a cell phone. International Journal of Industrial Ergonomics, 35(10), 939–953. https://doi.org/10.1016/j.ergon.2005.04.002
Manzey, D., Reichenbach, J., & Onnasch, L. (2012). Human performance consequences of automated decision aids: The impact of degree of automation and system experience. Journal of Cognitive Engineering and Decision Making, 6(1), 57–87. https://doi.org/10.1177/1555343411433844
Mercado, J. E., Rupp, M. A., Chen, J. Y. C., Barnes, M. J., Barber, D., & Procci, K. (2016). Intelligent agent transparency in human–agent teaming for Multi-UxV management. Human Factors, 58(3), 401–415. https://doi.org/10.1177/0018720815621206
Metzger, U., & Parasuraman, R. (2005). Automation in future air traffic management: Effects of decision aid reliability on controller performance and mental workload. Human Factors, 47(1), 35–49. https://doi.org/10.1518/0018720053653802
Onnasch, L., Wickens, C. D., Li, H., & Manzey, D. (2014). Human performance consequences of stages and levels of automation: An integrated meta-analysis. Human Factors, 56(3), 476–488. https://doi.org/10.1177/0018720813501549
Reason, J. (2000). Human error: Models and management. BMJ, 320(7237), 768–770. https://doi.org/10.1136/bmj.320.7237.768
Schmitt, F., Roth, G., Barber, D., Chen, J. Y. C., & Schulte, A. (2018). Experimental validation of pilot situation awareness enhancement through transparency design of a scalable mixed-initiative mission planner. Proceedings of the International Conference on Intelligent Human Systems Integration, Dubai (pp. 209–215). Springer.
Sebok, A., & Wickens, C. D. (2017). Implementing lumberjacks and black swans into model-based tools to support human–automation interaction. Human Factors, 59(2), 189–203. https://doi.org/10.1177/0018720816665201
Selkowitz, A. R., Lakhmani, S. G., & Chen, J. Y. C. (2017). Using agent transparency to support situation awareness of the autonomous squad member. Cognitive Systems Research, 46(December), 13–25. https://doi.org/10.1016/j.cogsys.2017.02.003
Seppelt, B. D., & Lee, J. D. (2007). Making adaptive cruise control (ACC) limits visible. International Journal of Human-Computer Studies, 65(3), 192–205. https://doi.org/10.1016/j.ijhcs.2006.10.001
Sethumadhavan, A. (2009). Effects of automation types on air traffic controller situation awareness and performance. Proceedings of the Human Factors and Ergonomics Society 53rd Annual Meeting, Santa Monica, CA (pp. 1–5). Human Factors and Ergonomics Society.
Siddiqui, F., & Merrill, J. B. (2023, June 10). 17 fatalities, 736 crashes: The shocking toll of Tesla's Autopilot. The Washington Post. https://www.washingtonpost.com/technology/2023/06/10/tesla-autopilot-crashes-elon-musk/
Skraaning, G., & Jamieson, G. A. (2023). The failure to grasp automation failure. Journal of Cognitive Engineering and Decision Making. https://doi.org/10.1177/15553434231189375
Stowers, K., Kasdaglis, N., Rupp, M., Chen, J. Y. C., Barber, D., & Barnes, M. (2017). Insights into human-agent teaming: Intelligent agent transparency and uncertainty. In Advances in human factors in robots and unmanned systems (pp. 149–160). Springer.
Weaver, B. W., & DeLucia, P. R. (2022). A systematic review and meta-analysis of takeover performance during conditionally automated driving. Human Factors, 64(7), 1227–1260. https://doi.org/10.1177/0018720820976476
Wickens, C. D., & Dixon, S. R. (2007). The benefits of imperfect diagnostic automation: A synthesis of the literature. Theoretical Issues in Ergonomics Science, 8(3), 201–212. https://doi.org/10.1080/14639220500370105
Wickens, C. D., Onnasch, L., Sebok, A., & Manzey, D. (2020). Absence of DOA effect but no proper test of the lumberjack effect: A reply to Jamieson and Skraaning (2019). Human Factors, 62(4), 530–534. https://doi.org/10.1177/0018720820901957
Wickens, C. D., Sebok, A., Li, H., Sarter, N., & Gacy, A. M. (2015). Using modeling and simulation to predict operator performance and automation induced complacency with robotic automation: A case study and empirical validation. Human Factors, 57(6), 959–975. https://doi.org/10.1177/0018720814566454
Wiener, E. L. (1988). Cockpit automation. In E. L. Wiener & D. C. Nagel (Eds.), Human factors in aviation (pp. 433–461). Academic Press.
Woods, D., Dekker, S., Cook, R., Johannesen, L., & Sarter, N. (2017). Behind human error. CRC Press.

Dr. Endsley is president of SA Technologies and a former Chief Scientist of the U.S. Air Force. She has published over 200 articles on situation awareness and the effects of AI and automation on human performance. Dr. Endsley has a PhD in Industrial & Systems Engineering from the University of Southern California and is a fellow and former President of the Human Factors and Ergonomics Society.
