Neuroergonomics

Also by Addie Johnson


TRAINING FOR A RAPIDLY CHANGING WORKPLACE: Applications of
psychological research (ed. with Quiñones, M.).

Also by Robert W. Proctor


CONTEXTUALISM IN PSYCHOLOGICAL RESEARCH? A critical review
(with Capaldi, E. J.)
EXPERIMENTAL PSYCHOLOGY (ed. with Healy, A. F.) (1st and 2nd editions).
PSYCHOLOGY OF SCIENCE: Implicit and explicit processes (ed. with Capaldi, E. J.).
WHY SCIENCE MATTERS: Understanding the methods of psychological
research (with Capaldi, E. J.).
CULTURAL FACTORS IN SYSTEMS DESIGN: Decision making and action
(ed. with Nof, S. & Yih, Y.).
ATTENTION (ed. with Read, L. E.).
STIMULUS-RESPONSE COMPATIBILITY: An integrated perspective (ed. with
Reeve, T. G.).
HUMAN FACTORS IN SIMPLE AND COMPLEX SYSTEMS (ed. with Van Zandt, T.).
STIMULUS-RESPONSE COMPATIBILITY PRINCIPLES: Data, theory, and
application (with Vu, K.-P. L.).
HANDBOOK OF HUMAN FACTORS IN WEB DESIGN (ed. with Vu, K.-P. L.)
(1st and 2nd editions).

Also by Addie Johnson and Robert W. Proctor


ATTENTION: Theory and practice.
SKILL ACQUISITION AND HUMAN PERFORMANCE.
Neuroergonomics
A Cognitive Neuroscience Approach to
Human Factors and Ergonomics

Edited by
Addie Johnson
Psychology Department, University of Groningen, the Netherlands

and

Robert W. Proctor
Department of Psychological Sciences, Purdue University, USA
Addie Johnson and Robert W. Proctor © 2013
Individual Chapters © Contributors 2013
Softcover reprint of the hardcover 1st edition 2013 ISBN 978-0-230-29972-6
All rights reserved. No reproduction, copy or transmission of this
publication may be made without written permission.
No portion of this publication may be reproduced, copied or transmitted
save with written permission or in accordance with the provisions of the
Copyright, Designs and Patents Act 1988, or under the terms of any licence
permitting limited copying issued by the Copyright Licensing Agency,
Saffron House, 6–10 Kirby Street, London EC1N 8TS.
Any person who does any unauthorized act in relation to this publication
may be liable to criminal prosecution and civil claims for damages.
The authors have asserted their rights to be identified as the authors of this
work in accordance with the Copyright, Designs and Patents Act 1988.
First published 2013 by
PALGRAVE MACMILLAN
Palgrave Macmillan in the UK is an imprint of Macmillan Publishers Limited,
registered in England, company number 785998, of Houndmills, Basingstoke,
Hampshire RG21 6XS.
Palgrave Macmillan in the US is a division of St Martin’s Press LLC,
175 Fifth Avenue, New York, NY 10010.
Palgrave Macmillan is the global academic imprint of the above companies
and has companies and representatives throughout the world.
Palgrave® and Macmillan® are registered trademarks in the United States,
the United Kingdom, Europe and other countries.
ISBN 978-1-349-33530-5 ISBN 978-1-137-31652-3 (eBook)
DOI 10.1057/9781137316523
This book is printed on paper suitable for recycling and made from fully
managed and sustained forest sources. Logging, pulping and manufacturing
processes are expected to conform to the environmental regulations of the
country of origin.
A catalogue record for this book is available from the British Library.
A catalog record for this book is available from the Library of Congress.
10 9 8 7 6 5 4 3 2 1
22 21 20 19 18 17 16 15 14 13
Contents

List of Tables and Figures ix

Preface xiii

Acknowledgements xiv

Notes on Contributors xv

List of Abbreviations xviii

Prologue xxi

1 The Working Brain 1
Addie Johnson, Jacob Jolij, Raja Parasuraman and Paolo Toffanin
Brain structures and networks 4
A default mode of brain function 5
Assessing and influencing brain function 6
TMS and tDCS 6
fNIRS 7
EEG 8
Information processing in the brain 12
Perception 12
Working memory 13
Attention and arousal 15
Decision-making 19
Action and motor control 20
Emotion and social interaction 20
Prediction of prospective activity 22
Direct augmentation of human performance 23
Conclusion 25
2 Cognitive Neuroergonomics of Perception 26
Jacob Jolij, Addie Johnson and Robert W. Proctor
Visual processing 29
Top-down and bottom-up processing in perception 32
Learning to see 35
Auditory perception and sonification 38
Touch and the display of haptic information 40
Multimodal perception 43

Perception of space and self 46
Perceptual docking for robotic control 49
Conclusion 50
3 Visual Attention and Display Design 51
Jason S. McCarley and Kelly S. Steelman
Modes of orienting 52
Why is mental processing selective? 55
Perceptual selection 56
Central selection 57
Applications to display design 58
Visual search 58
Grouping and object displays 61
Head-up and head-mounted displays 64
Large-scale attention 65
Conclusion 68
4 Attentional Resources and Control 69
Paolo Toffanin and Addie Johnson
Quantifying and describing attention 70
Eye movements and pupil diameter 70
EEG 71
Brain networks and fMRI 76
fNIRS 77
Augmented interaction 78
Brain–computer interfaces 79
Adaptive interfaces 82
Augmenting attention and cognition 85
Enhancing attention through training 86
Using drugs to enhance attention 89
Conclusion 90
5 Performance Monitoring and Error-related
Brain Activity 91
Addie Johnson and Rasa Gulbinaite
Performance monitoring 93
Neural correlates of performance monitoring 96
Error- and feedback-related processing 99
Error prediction 101
Applications based on error- and feedback-related
neural signals 103
Maintaining attentional control 104
Learning from errors 106
Online classification of feedback processing 108
Conclusion 109
6 Neuroergonomics of Sleep and Alertness 110
Jon Tippin, Nazan Aksan, Jeffrey Dawson and Matthew Rizzo
The neurobiology of sleep and alertness 111
Sleepiness, performance and sleepiness countermeasures 111
OSA and driving 114
Effects of disordered sleep on arousal and cognition 115
Self-awareness of sleep impairments 116
Impaired sleep in OSA and PAP treatment 117
Assessing naturalistic driving behaviour in the real world 119
Case study 122
Conclusion 127
7 Affective and Social Neuroergonomics 129
Jacob Jolij and Yana Heussen
The neural basis of emotion 130
How emotion guides vision and cognition 133
Reading emotional states 135
The social brain 138
Social human–computer communication and interaction 139
Social robotics 141
Conclusion 143
8 Neuroergonomics of Individual Differences in Cognition:
Molecular Genetic Studies 144
Raja Parasuraman
Genomics 144
Why look at individual differences? 146
A theoretical framework for the molecular genetics
of cognition 150
Visual attention 152
Working memory 154
Decision-making 155
Conclusion 160
9 Validating Models of Complex, Real-life
Tasks Using fMRI 163
Jelmer P. Borst, Niels A. Taatgen and Hedderik van Rijn
Standard fMRI analysis 164
Cognitive architectures 165
Cognitive architectures and fMRI 165
Task and model 167
The task 167
The model 169
ROI analysis 170
Model-based fMRI analysis 174
Applications to task design 176
Conclusion 179

References 181

Subject Index 233

Author Index 239


List of Tables and Figures

Tables

1.1 Components of the event-related potential (ERP): their
onset, topography and the functionality they reflect 9
2.1 Guidelines for the design of auditory icons 40
6.1 Substances that affect arousal and sleep, their mechanisms
of action and side effects 113

Figures

1.1 The location of some major areas of relevance for
information processing in the brain and the areas proposed
by Posner and Rothbart (2007) to be involved in alerting,
attentional orienting and executive function 15
2.1 A spectrogram of a one-second sound generated by
The vOICe. Reprinted with permission (http://www.
seeingwithsound.com) 46
3.1 Low display proximity between vertical tape gauges (a) allows
an operator to read the value of a single gauge easily, but
increases the difficulty of comparing values across the two
gauges. High display proximity between the gauges (b) allows
for easier comparisons across the gauges but makes the task
of isolating and reading a single gauge more difficult 62
4.1 A user interface such as that used by Müller et al. (2008).
The image on the left shows the interface as it is displayed
when the user ‘moves’ the arrow to select the hexagon
containing the letter ‘I’. The image on the right shows the
interface as displayed once the original contents of the
hexagons have been replaced by the items in the previously
selected hexagon 81
5.1 Conditional accuracy functions in the Eriksen flanker task.
As illustrated, accuracy for fast responses on both compatible
and incompatible trials is at chance level, which indicates that
the influence of the flankers is strongest early in the trial and
is reduced gradually as attention is focused on the target.
[Adapted from Gratton, G., Coles, M. G., Sirevaag, E. J.,
Eriksen, C. W., & Donchin, E. (1988). Pre- and poststimulus
activation of response channels: A psychophysiological
analysis. Journal of Experimental Psychology: Human Perception
and Performance, 14, 331–344. Used with permission from
the American Psychological Association.] 97
5.2 Electroencephalography (EEG) components related to
performance monitoring [upper panel: correct-related
negativity (CRN) and error-related negativity (ERN); lower
panel: feedback-related negativity (FRN)] 101
6.1 Video and electronic data from the black box event
recorder. Cameras capture driver behaviour (upper left
panel) and forward view of the road (lower left panel;
in this case indicating approach to an intersection
where traffic is stopped at a traffic signal). GPS indicates
the location of the driver in a geospatial map (dot, upper
right panel). The graphs of the electronic data (lower right
panels) show that the driver’s speed has decreased from
almost 70 kph to approximately 15 kph over about 15 s on
the x-axis (Time) 121
6.2 (a–c) Hours spent asleep and in bed as indicated by wrist-
worn actigraphy in relation to PAP use for two PAP recipients
(OSA002 and OSA004) and a control participant (CS)
matched to OSA002. (d) Average number of awakenings per
hour of sleep per participant. The first two weeks are prior to
PAP use and the second two weeks post-PAP use 124
6.3 The number of high g events (top) and number of safety
errors per high g event (bottom) in OSA patients before and
after starting PAP relative to the control individual. 125
6.4 Measures of sleepiness (top) versus alertness (bottom) based
on video clip reviews during high g events 126
8.1 Amplitudes of the P1 component (μV) of the event-related
potential (ERP) at the Pz electrode site for attended and
unattended stimuli for 16 individual participants in a
visuospatial attention task (top panel). Group-averaged
ERPs (for 16 participants) at three midline electrode sites for
attended and unattended stimuli (bottom panel). Reprinted
from Figure 3 in Fu et al. (2008), NeuroImage, 39, 1349.
Reprinted with permission from Elsevier Inc. 149
8.2 Mean decision accuracy (in percent) in the command and
control task when carried out manually, and on reliable and
unreliable trials in the Automation 80% condition
(bars show standard errors). 158
9.1 The interface of the experiment, with the subtraction task
on the left and the text entry task on the right. For the
subtraction task, only one column is shown at a time, but
participants were trained to consider the problems as part
of a ten-column subtraction problem. The task that is not
currently performed is masked with hash marks (#): for the
text entry task, the mask marks the spot where the next
letter will appear. As soon as a participant enters a digit for
the subtraction task, this mask changes into the next letter
to be typed and the subtraction task is masked. Reprinted
from Borst et al. (2010b). The neural correlates of problem
states: Testing fMRI predictions of a computational model
of multitasking. PLoS ONE, 5, e12966 168
9.2 Example of model activity for a complete trial in each
condition of the experiment. On the y-axis the different
resources of ACT-R are shown; the x-axis represents time.
Each box indicates that a resource is active at that moment
in time. Reprinted from Borst et al. (2010b). The neural
correlates of problem states: Testing fMRI predictions of a
computational model of multitasking. PLoS ONE, 5, e12966 170
9.3 (a) Haemodynamic response function (HRF).
(b) Convolution example. (c) Model activity for the
problem–state resource and the manual resource, raw and
convolved with the HRF over the course of four trials 171
9.4 Results of the regions-of-interest analysis for (a) the
problem–state resource and (b) the manual resource. Graphs
on the left show model predictions; graphs on the right show
recorded BOLD data in the region indicated in the brain 172
9.5 Results of the model-based analysis for (a) the problem–state
resource and (b) the manual resource. On the left are the located
brain regions; significance maps were thresholded at p < 0.01
(family-wise error-corrected) and 100 contiguous voxels.
Coordinates indicate the most significant voxel in the
region. White squares show the predefined mapping of ACT-R.
The graphs on the right show the average BOLD data in the
100 most significant voxels in the region on the left 175
Preface

Neuroergonomics combines neuroscience techniques and discoveries
with ergonomics to contribute to the design of products and systems
that enhance performance and safety, broadly defined, and to place
constraints on concepts used to describe human performance in com-
plex environments. Since the coining of the term ‘neuroergonomics’
by Raja Parasuraman in 2003, special issues of journals and an edited
book have been devoted to this topic, several review chapters have been
published, major research laboratories have identified themselves as
‘neuroergonomic’ laboratories and neuroergonomics has been a featured
topic at conferences. Clearly, neuroergonomics is becoming a leading
approach to the study of work and behaviour, and there is every reason
to think that its impact will continue to increase in the future as our
knowledge of brain function related to performance of complex tasks
becomes progressively more sophisticated.
This book introduces the field of neuroergonomics and gives the
background needed to understand research in neuroergonomics. Our
goal was to produce a book that is enjoyable to read—thanks to many
examples—while providing an in-depth examination of the possibilities
(and limitations) of neuroergonomics. This book is intended for upper-
level undergraduates, graduate students, practising ergonomists who
wish to acquaint themselves with cognitive neuroscience, and cognitive
neuroscientists who wish to broaden their thinking about the range
of application of their work. We hope that the book serves to increase
interest in the exciting and growing field of neuroergonomics.

Acknowledgements

Many thanks to our colleagues and students for their helpful comments,
support, inspiration and patience while this work was being completed.

Notes on Contributors

Nazan Aksan is a Research Scientist at the Department of Neurology,
University of Iowa, USA.
Jelmer Borst works in the laboratory of Professor John Anderson at
Carnegie Mellon University, USA. His main research interest is the
connection between computational models and neuroimaging data.
Currently, he investigates how this connection can be advanced and
how it can be used to learn more about human cognition.
Jeffrey D. Dawson is a Professor of Biostatistics, as well as the Associate
Dean for Faculty Affairs, in the College of Public Health at the University
of Iowa, USA. He has more than 15 years of experience in applied and
methodological research in driver performance and safety.
Rasa Gulbinaite is a PhD student in experimental psychology at the
University of Groningen, the Netherlands. She studies cognitive mecha-
nisms underlying automatic and controlled behaviour and, in particular,
the brain activity that precedes and follows behavioural errors and the
role of individual differences in error perception on subsequent strategic
adjustments.
Yana Heussen is a PhD candidate in Social and Affective Neuroscience
at the University Hospital Schleswig-Holstein, Germany. She is inter-
ested in the neural basis of social cognition, and the conscious and
unconscious processing of facial expressions, which she studies using
neuroimaging methods, such as functional magnetic resonance imaging
and electroencephalography.
Addie Johnson is Professor of Human Performance and Ergonomics at
the University of Groningen. Her research focuses on the intersection
of memory and attention.
Jacob Jolij is an Assistant Professor at the Department of Experimental
Psychology of the University of Groningen, the Netherlands, and asso-
ciate editor of Frontiers in Emotion Science. His main research interest
is conscious visual perception and, in particular, the role of top-down
factors, such as memory, expectancy and emotion, in perception.

Waldemar Karwowski is Professor and Chair of Industrial Engineering


and Management Systems and Executive Director of the Institute for
Advanced Systems Engineering at the University of Central Florida, USA.
He is past president of the International Ergonomics Association and the
Human Factors and Ergonomics Society, and served on the Committee
on Human Factors/Human Systems Integration of the National Research
Council, USA.

Jason McCarley is currently a Professor in the School of Psychology at
Flinders University, South Australia.

Raja Parasuraman is Professor of Psychology and Director of the
Center of Excellence in Neuroergonomics, Technology, and Cognition
(CENTEC) at George Mason University, USA. His research covers human–
machine systems, particularly the role of human attention, memory
and vigilance in automated and robotic systems, and the cognitive neu-
roscience of attention. He is Fellow of the American Association for the
Advancement of Science, the American Psychological Association, the
American Psychological Society, the Human Factors and Ergonomics
Society, the International Ergonomics Association and a National
Associate of the National Academy of Sciences.

Robert W. Proctor is Distinguished Professor of Psychological Sciences,
with a courtesy appointment in Industrial Engineering, at Purdue
University. His research focuses on basic and applied aspects of human
performance, with an emphasis on stimulus–response compatibility.

Niels Taatgen is the Head of the Cognitive Modeling group within the
department of Artificial Intelligence of the University of Groningen,
the Netherlands. He has a background in both computer science and
psychology, with computational models of human cognition as his main
research focus. More specifically, he studies human multitasking, skill
acquisition and transfer, and time perception. In addition to cognitive
simulation and behavioural experiments, he pursues the goal of using
cognitive models to analyse neuroimaging data.

Jon Tippin is Clinical Professor of Neurology at the University of
Iowa Hospitals and Clinics and is director of the Sleep Disorders/EEG
Laboratory at the Iowa City Veterans Affairs Medical Center, USA. He is
a fellow of the American Academy of Neurology, the American Academy
of Sleep Medicine and the American Clinical Neurophysiology Society.

Paolo Toffanin gained his PhD in neuroscience from the University
of Groningen, the Netherlands. His research focuses on basic aspects
of attention using both electroencephalography and eye movement
recording. His interests include the neural basis of individual differences
in intelligence, working memory and perfectionism, and optimizing
human–machine interaction through adaptive support/automation and
brain–computer interfaces.

Hedderik van Rijn is Associate Professor in the Psychology Department
at the University of Groningen. His work focuses on the refinement
of psychological theories by means of formal modelling and behav-
ioural and neuroimaging-based experimentation. Current projects
include extending and refining formal models of the neurobiological
foundations of interval timing, and examining how working memory
constrains and influences strategy selection and language processing.

Matthew Rizzo is Professor of Neurology and Director of the University
of Iowa (UI) Aging Mind and Brain Initiative, USA. He is Vice-Chair
for Translational and Clinical Research, Director of the Division of
Neuroergonomics and its laboratories, a senior member of the Division
of Behavioral Neurology and Cognitive Neuroscience, and a senior
attending physician in the Memory Disorders Clinic. He has partici-
pated in the National Academy of Sciences Board on Human–Systems
Integration.

Kelly S. Steelman completed a PhD in Psychology at the University of
Illinois at Urbana–Champaign in 2011. She is currently a postdoctoral
researcher at Flinders University in South Australia. Her research focuses
on the attentional control mechanisms that drive performance in
technological environments.
List of Abbreviations

ACC anterior cingulate cortex
ADHD attention deficit/hyperactivity disorder
ANT attention network task
BAS behavioural activation system
BCI brain–computer interface
BIS behavioural inhibition system
BMI brain–machine interface
BOLD blood oxygenation level dependent
CNV contingent negative variation
CRN correct-related negativity
DA dopamine
DBH dopamine beta hydroxylase
DTI diffusion tensor imaging
EDR electrodermal response
EDS excessive daytime sleepiness
EEG electroencephalogram or electroencephalography
EPP error-preceding positivity
ERN error-related negativity
ERP event-related potential
ERP components P1, N1, C1, N1-a, ELAN, N1-v, N170, IIN, P2,
MMN/N2a, N2b, N2c, N2pc, LDAP, EDAN, ADAN,
P3/P300/P3b, P3a, P4pc, RON, P600, CNV, LRP,
ERN/Ne
FACS facial action coding system
fMRI functional magnetic resonance imaging
fNIRS functional near infrared spectroscopy
FRN feedback-related negativity
GWAS genome-wide association study
HbO2 oxy-haemoglobin
HbR deoxy-haemoglobin
HCI human–computer interaction
HMD head-mounted display
HRF haemodynamic response function
HUD head-up display
IErrPs interaction error potentials
IT inferotemporal cortex
LED light emitting diode
LGN lateral geniculate nucleus
lPFC lateral PFC
MEG magnetoencephalogram
MFC medial frontal cortex
mPFC medial PFC
MRI magnetic resonance imaging
MSLT multiple sleep latency test
NE norepinephrine
PAP positive airway pressure treatment
PET positron emission tomography
PFC prefrontal cortex
PHC parahippocampal cortex
PPA parahippocampal place area
PPC posterior parietal cortex
RA rapidly adapting (afferent)
ROI region of interest
rTMS repetitive-pulse TMS
SA1 slowly adapting type 1 (afferent)
SART Sustained Attention to Response Test
SCN suprachiasmatic nucleus
SNPs single nucleotide polymorphisms
SSEP steady-state evoked potential
STS superior temporal sulcus
tDCS transcranial direct current stimulation
TMS transcranial magnetic stimulation
UAV uninhabited air vehicle
V1, A1, S1, V2, V4, MT sensory processing regions in the brain
VWM visual working memory
WIMP Windows-Icons-Mouse-Pointer
Prologue
Neuroergonomics: A Complex
Systems Perspective
Waldemar Karwowski

Contemporary human factors and ergonomics (HF/E) focuses on the
discovery and understanding of the true nature of human–artefact
interactions, viewed from the unified perspective of science, engineer-
ing, design, technology and management. Human-compatible systems
include a variety of natural and artificial consumer products, work
processes and living environments to satisfy people’s demands and
requirements (Karwowski, 2005). The discipline of HF/E promotes a
human-centred approach to the design of work systems and technology
that considers physical, cognitive, social, organizational, environmen-
tal and other relevant factors of human–systems interactions, broadly
defined, in order to make them compatible with the needs, abilities
and limitations of people, with the ultimate goal of optimizing human
wellbeing and overall system performance (IEA, 2004).
The discipline of HF/E advocates the systematic use of knowledge of
human characteristics in order to design interactive systems of people,
machines, environments and devices of all kinds that ensure that system
goals are met (HFES, 2012). Typically, such goals include improved
system effectiveness, productivity, safety, ease of performance, and the
contribution to overall human wellbeing and quality of life (Karwowski,
2005). Furthermore, HF/E discovers and applies information about
human behaviour, abilities, limitations and other characteristics to
the design and evaluation of work systems, consumer products and
working environments in which human–machine interactions affect
human performance and product usability, including tools, machines,
systems, tasks, jobs and environments for productive, safe, comfortable
and effective human use (Helander, 1997; Sanders & McCormick, 1993).
Karwowski (2000) introduced the term human–compatible systems in
order to focus on the need for comprehensive treatment of compatibility
in HF/E.
The science of HF/E aims to understand and model how people interact
with their environments given the specific human characteristics,
including human performance capabilities and limitations. It should
be noted that contemporary ergonomics faces the problems of
increased systems’ complexity (i.e., the scaling of human factors)
and the related human- and system-based nonlinearity and fuzziness.
Nonlinear dynamics and fuzziness characterize not only the states of
the human mind resulting from neural information processing, but
also the essence of human development and existence, and are neces-
sary conditions for human learning, growth, and survival (Karwowski,
1992). According to Ashby’s (1956) law of requisite variety, a model
system or controller can only model or control something to the
extent that it has sufficient internal variety to represent it. In general,
the larger the variety of actions available to a control system, the larger
the variety of perturbations it is able to compensate for. In this context,
our ability to understand and model complex human–system interac-
tions at work will depend on our understanding of the complexity of
neural processes.
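Ashby's point can be made concrete with a toy regulator. Suppose a controller must cancel one of several possible disturbances so that the system ends up in a single desired state: perfect regulation is then possible only when the controller's repertoire of actions is at least as varied as the set of disturbances it faces. The sketch below is purely illustrative, and the function names and the modular "cancellation" outcome model are our own assumptions rather than Ashby's formalism:

```python
def can_regulate(disturbances, actions, outcome):
    """Return True if, for every disturbance, some action drives the
    outcome to the desired value (encoded here as 0)."""
    return all(
        any(outcome(d, a) == 0 for a in actions)
        for d in disturbances
    )

# Outcome model: an action regulates a disturbance only by cancelling
# it exactly (modular arithmetic stands in for the system dynamics).
outcome = lambda d, a: (d - a) % 5

# With as many actions as disturbances, regulation is achievable...
print(can_regulate(range(5), range(5), outcome))  # True

# ...but a controller with less variety than its environment fails:
print(can_regulate(range(5), range(3), outcome))  # False
```

In Ashby's terms, only variety in the controller can absorb variety in the environment: the second call fails because disturbances 3 and 4 have no cancelling action available.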

Neuroscience and neuroergonomics

Since the time of Hippocrates, scientists have sought to
understand how the human brain works. Contemporary neuroscience
applies various levels of analysis in investigating human brain activity,
including molecular neuroscience, cellular neuroscience, systems neu-
roscience, behavioural neuroscience and cognitive neuroscience (Bear
et al., 2007). Of particular interest to understanding people at work are
the last three neuroscience approaches. While systems neuroscience
explores the function of neural circuits and systems, behavioural
neuroscience applies biological principles to the study of genetic,
physiological and developmental mechanisms of behaviour. Finally,
cognitive neuroscience studies the neural substrate of mental processes
and how the activity of the brain creates what is known as the human
mind (Bear et al., 2007). These three perspectives on human brain
functioning, which are immediately relevant and necessary for advanc-
ing the study and understanding of human–systems interactions at
work, led to realization of the utility of the knowledge of neuroscience
in HF/E and vice versa, which consequently led to the emergence of
neuroergonomics. As stated by Parasuraman (2003), ‘Neuroergonomics
focuses on investigations of the neural bases of mental functions and
physical performance in relation to technology, work, leisure, transpor-
tation, health care and other settings in the real world’ (p. 5).

A systems view of neuroergonomics

Neuroergonomics, as the study of brain and behaviour at work, aims to
explore the premise of designing work to match the neural capacities and
limitations of people (Parasuraman, 2003). As such, neuroergonomics
focuses on the neural control and brain manifestations of the percep-
tual, physical, cognitive and emotional inter-relationships of human
work activities (Parasuraman & Rizzo, 2007). Experimental research in
neuroergonomics has benefited in recent years from the emergence of
many noninvasive techniques for human brain monitoring that can
be used to study various aspects of neural activity and behaviour in
relation to technology and work systems, including such domains as
mental workload, visual attention, working memory, motor control,
human–automation interaction and adaptive automation. The poten-
tial benefits of this important new branch of HF/E are far-reaching, from
advances in occupational health (Karwowski et al., 2003) and medical
therapies, to development of neuroadaptive technologies (Fafrowicz
et al., 2013), to applications of new human-centred design principles of
complex technological, industrial or service ‘systems-of-systems’.
By extension of the original classification of the discipline of HF/E
(Karwowski, 2005) one can distinguish three main paradigms of neuro-
ergonomics, namely: (i) neuroergonomics theory, (ii) neuroergonomics
abstraction and (iii) neuroergonomics design. In this context, neuro-
ergonomics theory is concerned with the ability to identify, describe and
evaluate human brain indicators and markers of human performance,
and brain–system interactions in the context of work and technology.
Neuroergonomics abstraction is concerned with the ability to use
these brain indicators and interactions to make predictions that can
be validated in the real world. Neuroergonomics design is concerned with
the ability to implement knowledge about human brain indicators and
relevant interactions, and to use them to develop systems that satisfy
human compatibility requirements from the neural processing point
of view.
From the system-of-systems perspective, the focus of neuroergonomics
is on the understanding and designing of complex, and often nonlinear,
interactions between the human brain and the artefact systems. The
neuroergonomics design process can be represented as mapping human
brain capabilities and limitations to system–technology–environment
requirements and affordances, and, ultimately, to human–brain
compatibility requirements at work. Furthermore, as a unique new discipline that
combines the scientific knowledge of ergonomics and neuroscience,
neuroergonomics should enable a significant leap forward
in the near future in our understanding of the true nature of com-
plex inter-relationships between human operators (human capacities
and limitations), technology (products, machines, devices, processes)
and broadly-defined work systems (business processes and organiza-
tional structures). Without a doubt, the current book will contribute
significantly to meeting the aforementioned challenges by bridging
ergonomics and neuroscience for the benefit of these two respective
fields of scientific endeavour, and, by doing so, will also positively
affect the welfare of the global society.
1
The Working Brain
Addie Johnson, Jacob Jolij, Raja Parasuraman and
Paolo Toffanin

Neuroergonomics has been defined as the study of the brain and
behaviour at work (Parasuraman & Rizzo, 2007). The major goal of this
new field is to use existing and emerging knowledge from the neuro-
sciences to inform understanding of human behaviour and performance
in work-relevant tasks. As such knowledge is gained, the hope is that
we can design systems and work environments that are safe, efficient
and enjoyable for their users. Reaching such a goal is made even more
important by the relentless march of new, small information technolo-
gies in the marketplace—iPhones, global positioning systems (GPS),
voice-operated devices and so forth impose new information-processing
demands on their users. These devices can impair safety if they are used
by people while they are simultaneously engaged in other activities,
such as driving or walking across a busy intersection (e.g. Strayer &
Drews, 2004).
Such technological advances are affecting not only civilian life, but the
military as well. For example, the United States Air Force Chief Scientist
recently released a report that details the expanding mismatch between
human ‘warfighters’ and the technology available to them (Chief
Scientist Air Force, 2010). The report attempts to make the case that
as technological capacity continues to increase it will become increas-
ingly important to examine the role of the human as a link in security
and other systems. It is evident that now, more than ever, the military
needs ergonomics research to enhance human–machine systems. The
Air Force Chief Scientist report goes on to identify ‘augmentation of
human performance’ as one of two key areas where substantial growth
is possible in the coming decade (p. vii) and calls specifically for ‘direct
augmentation of humans via drugs or implants to improve memory,
alertness, cognition, or visual/aural acuity’ (p. viii), as well as ‘direct
brainwave coupling between humans and machines, and screening of
individual capacities for key specialty codes via brainwave patterns and
genetic correlators’ (p. 58). In other words, cognitive science and cog-
nitive neuroscience must be combined with traditional ergonomics to
create new neuroergonomic applications.
Although much of the interest in, and funding for, neuroergonomic
research has come from the military, key areas of neuroergonomics,
such as operator–state-based adaptive automation and brain–computer
interfaces, have applications in a range of work, transportation and
leisure environments. The development of neuroergonomic applications
requires an understanding of the tasks to be performed, the cognitive
processing involved in their performance, and the existence of a set of
techniques to measure or influence cognitive processing. Many books
and articles have been devoted to task analysis and cognitive work anal-
ysis (e.g. Diaper & Stanton, 2004; Vicente, 1999). It can be argued that
neuroergonomics, in seeking to apply neuroscience to system design,
starts where cognitive task analysis leaves off. Instead of describing
cognitive processing activities as, for example, ‘memory’ or ‘decision
making’, an understanding of functional neuronal networks underlying
those activities is applied to assess the quality of information processing
and to intervene to improve system performance.
Much of what we know or hypothesize about neuronal networks
comes from single-cell studies with animal subjects (e.g. Buzsáki, 2006;
Kandel et al., 2013). Developments in neuroimaging are, however,
making it increasingly possible to test hypotheses about how informa-
tion is processed in the brain in healthy humans. Neuroimaging methods
allow us to infer neuronal activity in terms of localized changes in blood
flow or metabolism [positron emission tomography (PET)] or in terms
of changes in blood oxygenation level dependent (BOLD) responses
[functional magnetic resonance imaging (fMRI)]. Tracers that bind to
different receptors have been used in combination with PET to examine
transmitter density; pathways of activation can now be imaged using
diffusion tensor imaging (DTI), in which MRI is used to trace white matter
tracts. Lesion studies have played an important role in mapping brain
regions to function and, in addition to the study of naturally-occurring
lesions, transcranial magnetic stimulation (TMS) is being used to induce,
on a short timescale, disruptions in normal brain processing to test con-
clusions about causal relations between brain activity and behaviour.
Finally, measurement of electric [electroencephalogram (EEG)] or mag-
netic [magnetoencephalogram (MEG)] signals at the scalp can be used
to provide detailed information about the time-course of information
processing and, increasingly, its locus. Used separately or together, these
methods and others (e.g. transcranial Doppler sonography, near-infrared
spectroscopy, deep brain stimulation) form the toolkit of the neuro-
scientist studying the physiology of human brain networks.
These various neuroimaging techniques have been used extensively in
cognitive neuroscience studies in which naïve participants, typically
college undergraduates, are tested while performing simple laboratory
tasks of perception and cognition (Gazzaniga, 2009). In contrast to cog-
nitive neuroscience, one of the goals of neuroergonomics is to examine
brain function in the more complex and dynamic tasks representative
of everyday, naturalistic environments at work, in the home or in trans-
portation, and—where possible—in expert populations, such as pilots,
physicians or military personnel. From small beginnings, following the
initial call for such research (Parasuraman, 2003), a growing number of
studies have examined human brain function in work-relevant tasks.
Examples include fMRI studies of frontal and parietal cortical networks
involved in simulated driving (Just et al., 2008) and how they are altered
in intoxicated drivers (Calhoun & Pearlson, 2012); EEG investigations
of pilot mental workload during actual and simulated flight (Wilson,
2001) and the usability of hypermedia systems (Schultheis & Jameson,
2004); and functional near infrared spectroscopy (fNIRS) studies of frontal
lobe activation in experts performing simulated minimally-invasive
surgery (James et al., 2011).
Brain stimulation techniques, such as TMS and transcranial direct
current stimulation (tDCS), can supplement the use of neuroimaging
techniques in neuroergonomics. Such techniques are of interest because
of their potential for showing that brain networks that have been iden-
tified in neuroimaging studies are not only active, but are necessary
for performance of a given task. This is typically achieved by showing
that task performance is impaired when the associated brain region is
momentarily inhibited using TMS (Walsh & Pascual-Leone, 2005) or
tDCS (Jacobson et al., 2012). Of the two techniques, tDCS has some
advantages over TMS for neuroergonomic studies because of its relative
non-invasiveness, greater portability and lower cost. These methods
also allow for the possibility of enhancement of human performance
through electrical or magnetic stimulation of the brain. Again, whereas
these techniques have been used primarily in studies of basic perception
and cognition, they are making their way into neuroergonomic research.
Examples include TMS investigations of reasoning and complex decision-
making, (McKinley et al., 2012) and tDCS studies of detection of military
threats in naturalistic scenes (Clark et al., 2012).
Imaging tells us much more about the brain than simply ‘where things
are happening’ (but see, e.g., Uttal, 2011, for a dissenting view). For
example, measurements of brain activity before a stimulus is presented
can tell us how well subjects will remember a stimulus (Otten et al., 2002;
Turk-Browne et al., 2006) or are prepared for a task (Leber et al., 2008;
Toffanin et al., 2009). Posner and Rothbart (2007) argue that imaging is
just beginning to realize its potential in elucidating (a) different brain
networks, (b) neural computation in real time, (c) how assemblies
develop over the lifespan and (d) neural plasticity following brain insult
or training. As discussed in Chapter 8, a new development, the map-
ping of the human genome (Venter et al., 2001), offers great potential
for understanding the physical basis for individual differences. Molecular
genetics provides a set of methodological tools that can inform many
issues concerning human brain function. Methods such as candidate-
gene analysis and genome-wide association studies (GWAS) are being used to
relate genetic differences to individual performance in tasks involving
the network influenced by particular types of genes.

Brain structures and networks

The starting point for neuroergonomics is the brain. A comprehensive
introduction to the structure and workings of the brain is beyond the
scope of this book, but it is helpful to sketch the major structures and
processing networks involved in perceptual, cognitive, motor and emo-
tional processing (see Figure 1.1). A rough guide to the brain ascribes
executive function to the frontal lobe; motor planning and execution
to primary motor cortex (somatomotor cortex) and pre-motor areas of
the frontal lobe; the integration of sensory information from different
modalities, particularly when the spatial location of objects must be
determined, to the parietal lobe—and to the temporal lobe when objects
must be identified; visual processing to the occipital lobe; and auditory
processing to the temporal lobes. The temporal lobes are also involved
in semantic processing of both speech and vision, and the hippocampus,
located in the medial portion of the temporal lobes, is involved in
memory formation. The cerebellum plays an important role in the
integration of sensory perception and motor output. The cerebellum
interacts with the motor cortex and spinocerebellar tract (which provides
proprioception) to fine-tune equilibrium, posture and motor learning.
The brain stem (pons, medulla oblongata and midbrain) is a small
structure involved in sensation, vision, arousal, consciousness, motor
function, emotion, alertness and autonomic reflexes.
Assigning cognitive functions to brain areas has heuristic value, but
most information processing involves multiple areas of the brain and
depends on dynamic changes in the brain. Perhaps the most important
dynamic process in the brain (other than neural transmission itself) is
long-term potentiation, a long-lasting enhancement in signal transmis-
sion between two neurons that results from synchronous firing of
the neurons. Long-term potentiation enhances synaptic transmission,
improving the ability of pre- and postsynaptic neurons to communicate
with one another across a synapse, and thus contributes to synaptic
plasticity, such as that underlying learning and memory. Another
aspect of brain dynamics is synchronization of brain areas. Many recent
hypotheses of how information is transmitted between brain areas
(e.g. Donner & Siegel, 2011) suggest that it occurs via synchronization
of oscillations in different frequency bands.

A default mode of brain function

A relatively recent discovery (Raichle & Snyder, 2007; Raichle et al.,
2001) is that the brain not only increases in activity during informa-
tion processing, but also that there is a ‘default mode’ of brain function
supported by a processing system which includes the posterior cingulate
cortex and adjacent precuneus. Evidence for a default mode of brain
function comes from neuroimaging studies that show task-specific
deactivation. Many neuroimaging techniques rely on the comparison
of task and control conditions, and almost always report an increase
in activity in the task compared with the control condition. However,
subtracting control-state data from task-state data in some cases reveals
negative activity or task-specific deactivation (e.g. Petersen et al., 1998;
Raichle et al., 1994). Surprisingly, these decreases in activity have been
found even when the control condition is resting with the eyes closed
or simply keeping the eyes at fixation. In other words, even when peo-
ple are assumed to be refraining from information processing activity,
engaging in some other activity results in a reduction of brain activity.
This reduction relative to baseline is the key piece of evidence pointing
to a default mode of brain function.
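The logic of these subtraction analyses can be sketched in a few lines of code. The snippet below is a minimal NumPy illustration using made-up regional activity values (the region labels and numbers are assumptions for demonstration, not real data): regions whose task-minus-control contrast is negative are flagged as showing task-specific deactivation.

```python
import numpy as np

def task_deactivations(task, control, labels):
    """Return labels of regions whose task-minus-control contrast is
    negative, i.e. regions *less* active during the task than at rest."""
    contrast = np.asarray(task, dtype=float) - np.asarray(control, dtype=float)
    return [lab for lab, c in zip(labels, contrast) if c < 0]

# Illustrative (made-up) regional activity values, not real data:
labels = ["visual cortex", "motor cortex", "posterior cingulate", "precuneus"]
control_state = [1.0, 1.0, 1.2, 1.1]   # resting baseline
task_state    = [1.6, 1.4, 0.9, 0.8]   # goal-directed task

print(task_deactivations(task_state, control_state, labels))
# → ['posterior cingulate', 'precuneus']
```

With these illustrative numbers, the midline regions associated with the default network come out as deactivated during the task, mirroring the pattern reported in the neuroimaging studies discussed above.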
How does one characterize activity in a passive or resting condition?
Raichle et al. (2001) answer this question with regard to activity observed
during task processing. They argue that the regional decreases seen during
the performance of a task represent the presence of functionality that
is ongoing in the resting state and attenuated only when resources are
temporarily reallocated during goal-directed behaviour. Default activity
can thus be defined only in reference to task activity. The fact that the
spatially coherent, spontaneous BOLD activity that is the hallmark of
intrinsic activity is also present under anaesthesia (Vincent et al., 2007)
suggests that the activity is not associated with conscious mental activ-
ity, but rather may reflect a fundamental property of the functional
organization of the brain. An intriguing idea is that the default network
reveals the maintenance of information for interpreting, responding to
or even predicting environmental changes. In this sense, understanding
the default network may help us to understand much more about how
we adapt to, and learn from, the environment.

Assessing and influencing brain function

Of the techniques available for neuroimaging and mapping brain function,
the ones with the most direct application in neuroergonomics are
TMS, tDCS, fNIRS and EEG. However, even though techniques such
as MRI and DTI are relatively expensive to use, the development of
magnet-compatible virtual reality systems has led to fMRI studies of
complex cognition, including simulated driving (Calhoun & Pearlson,
2012) and complex spatial navigation (Maguire, 2007). Moreover, struc-
tural MRI and DTI have been used to quantify the effects of training
methods, such as emphasis change (Boot et al., 2010), video game
training (Voss et al., 2012) or working memory training (Takeuchi
et al., 2010a).

TMS and tDCS


TMS has been used since 1985 to manipulate brain function in a
non-invasive, focal manner (Barker et al., 1985). Based on the principle
of electromagnetic induction, TMS involves passing an electrical current
through a magnetic coil placed close to the head of the subject. The
magnetic field penetrates through the skull and into the outer layers
of cortical tissue where it induces electrical activity in neurons in the
targeted area. Although the technique has been used primarily to explore
the function of various brain regions, an important observation is that
exposure to TMS sometimes leads to enhancement in perceptual or
cognitive abilities. Exactly how TMS influences brain function is not
completely clear, but it is thought to work by changing the membrane
potential in neurons. When cognitive abilities are enhanced, this may
be through neuronal pre-activation or priming. According to this view,
stimulating a region of the brain pre-activates the neurons in that region,
increasing their propensity to fire. Support for this view comes from the
finding that TMS applied to an area leads to an increase in regional blood
flow (as measured by fMRI) and metabolism (George & Belmaker, 2007).
Most cognitive research has used single-pulse TMS, in which only
one magnetic pulse is delivered. Clinical work, however, has focused on
the possibility of using repetitive-pulse TMS (rTMS). Just a single pulse
of TMS can interfere momentarily with the functioning of a region or
enhance excitability in a region for a short amount of time [less than
500 ms—although these stimulations may have longer-lasting effects
on more distant cortical regions (Pascual-Leone et al., 2000)]. rTMS
often has longer-lasting effects (Walsh & Cowey, 2000). Unfortunately,
the possibilities of rTMS with respect to enhancing cognitive function
(or ameliorating the effects of a disorder) are offset by questions about the
safety of the technique and the risk of unpleasant side-effects. A related
technique, tDCS, involves passing a mild direct electrical current between
electrodes on the scalp to modify neuronal membrane resting potential.
This is done in a polarity-dependent manner, such that neuron excitabil-
ity in a given region is either elevated or lowered (Wagner et al., 2007).
Both TMS and tDCS might be used to enhance perceptual and cognitive
performance (see Chapter 3 and, for a review, Thut et al., 2011).

fNIRS
fNIRS is a relatively new technique that shows promise as a field-
deployable, non-invasive monitor of prefrontal cortex (PFC) activity.
This technology uses light to measure changes in blood oxygenation
as oxy-haemoglobin (HbO2) converts to deoxy-haemoglobin (HbR)
during neural activity (i.e. the haemodynamic response). Because the
light can be introduced at the scalp via a sort of headband containing
light emitting diodes (LEDs), the technology is portable and relatively
non-intrusive. The spatial resolution of fNIRS is about 1 cm², making
it possible to test hypotheses about changes in the use of brain regions
as a function of learning, in addition to testing general mental activ-
ity (Ayaz et al., 2012). Moreover, fNIRS can be combined with EEG to
achieve better temporal resolution (Gratton & Fabiani, 2007). In one
implementation of fNIRS (Ayaz et al., 2012), light sources and detectors
for 16 optodes are placed in a flexible sensor pad which is worn over
the forehead. Sources and detectors are separated by 2.5 cm, allowing for
approximately 1.25 cm penetration depth. The LEDs are activated one
at a time, with a temporal resolution of 500 ms per scan. The placement
of the detectors allows the monitoring of dorsal and inferior frontal
cortical areas. Changes in light absorption are analysed using spec-
troscopy for the detection of the chromophores of HbO2 and HbR.
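The conversion from light absorption to haemoglobin concentration changes rests on the modified Beer–Lambert law: the optical-density change at each wavelength is a weighted sum of ΔHbO2 and ΔHbR, so measuring at two wavelengths yields a 2 × 2 linear system. The NumPy sketch below shows only the core arithmetic; the extinction coefficients, source–detector distance and differential pathlength factor used here are illustrative placeholders, not the values of any particular fNIRS device.

```python
import numpy as np

def haemoglobin_changes(d_od, extinction, pathlength, dpf):
    """Solve the modified Beer-Lambert law for concentration changes.

    d_od       : optical-density changes at two wavelengths, shape (2,)
    extinction : 2x2 matrix of extinction coefficients,
                 rows = wavelengths, columns = (HbO2, HbR)
    pathlength : source-detector distance (cm)
    dpf        : differential pathlength factor (dimensionless)

    Returns (dHbO2, dHbR).
    """
    # Effective pathlength scales every extinction coefficient equally
    A = np.asarray(extinction, dtype=float) * pathlength * dpf
    d_hbo2, d_hbr = np.linalg.solve(A, np.asarray(d_od, dtype=float))
    return d_hbo2, d_hbr

# Illustrative coefficients only -- real values come from published tables:
eps = [[1.5, 3.8],   # ~760 nm: (HbO2, HbR)
       [2.5, 1.8]]   # ~850 nm
print(haemoglobin_changes([0.01, 0.02], eps, pathlength=2.5, dpf=6.0))
```

The design choice of two wavelengths, one on each side of the isosbestic point where the HbO2 and HbR spectra cross, is what keeps the system well conditioned.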
As discussed in Chapter 4, fNIRS shows promise as a means to measure
mental workload and changes in level of expertise.

EEG
EEG is a graph of electrical brain activity in which the vertical axis rep-
resents the difference in voltage between two different scalp locations
(as measured by electrodes attached to the scalp) and the horizontal axis
time (Fisch, 1999). The EEG is composed of three types of neural activity
(Hermann et al., 2004): (1) spontaneous activity uncorrelated with any
particular task; (2) induced activity related to the task, but unrelated
to particular events (not phase-locked); and (3) evoked activity related
to particular events (phase-locked).

A Event-related potentials
Much EEG research relies on the event-related potential (ERP; see Luck,
2005). To compute the ERP, a sample of the EEG activity is recorded
just prior to and after a discrete stimulus event. Many (usually at least
100) such samples are taken and are averaged offline, thus ‘averaging
out’ spontaneous EEG activity and resulting in an ERP waveform con-
taining activity that is phase-locked to the stimulus onset. Changes in
the amplitude and latency of the different positive and negative peaks
in the ERP are used to draw conclusions about the mental operations
associated with the task.
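The averaging procedure itself is simple enough to state in code. The following NumPy sketch (function and variable names are our own) epochs a single continuous EEG channel around the stimulus onsets, baseline-corrects each epoch on its pre-stimulus interval, and averages across trials; activity that is not phase-locked to the events tends toward zero in the average, leaving the ERP.

```python
import numpy as np

def compute_erp(eeg, event_samples, pre, post):
    """Average EEG epochs time-locked to stimulus onsets.

    eeg           : 1-D float array, one channel of continuous EEG
    event_samples : sample indices of the stimulus onsets
    pre, post     : number of samples kept before/after each onset
    """
    epochs = np.stack([eeg[s - pre:s + post] for s in event_samples])
    # Baseline-correct each trial on its pre-stimulus interval
    epochs -= epochs[:, :pre].mean(axis=1, keepdims=True)
    # Averaging attenuates activity not phase-locked to the events
    return epochs.mean(axis=0)
```

In practice at least a hundred such epochs are averaged, as noted above, so that the spontaneous EEG (which is far larger than the evoked response on any single trial) cancels sufficiently.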
A number of components of the ERP have been identified and linked
to information processing (Handy, 2004; Luck, 2005; Regan, 1989;
see Table 1.1). ERP components can be divided roughly into early,
exogenous components, which reflect the processing of stimuli, and
late-onset, endogenous components related to cognitive processing.
For example, the latency and amplitude of the P1 and N1 (where
P stands for positive polarity, N for negative polarity, and 1 for the order
of the two components in the timeline of the ERP) depend on stimulus
properties, and the amplitudes of P1 and N1 are related linearly to the
amount of attention allocated to the stimulus (Mangun & Hillyard,
1990). In addition to amplitude and latency differences as a function
of task demands, the scalp distribution of ERP components can be
informative. For example, although a P1 is observed for both visual
and auditory stimuli, the spatial distribution of P1 in these two cases is
different, with the auditory P1 being largest at frontocentral electrode
sites (i.e. perpendicular to the primary auditory areas) and the visual P1
being largest at occipito-lateral electrode sites (i.e. perpendicular to the
primary visual areas; Luck, 2005).
Table 1.1 Components of the event-related potentials (ERP), their onset, topography and the functionality they reflect

Component Onset (ms) Topography (cortex) Functionality

Early processing*
C1 65–90 Striate Visual processing
P1 80–120 Extrastriate Visual processing + attention
N1-a 80–120 Frontocentral Auditory processing + attention

Stimulus-processing related*
ELAN 100–300 Frontal (left) Violation of word category or phrase structure
N1-v 150–200 Occipito-parietal/temporal Visual processing + attention
N170 130–200 Occipito-temporal (right) Processing of faces
IIN 200–300 Posterior-ipsilateral Attention disengagement and reorienting
P2 150–275 Centro-frontal/parieto-occipital Comparison with internal representation
MMN/N2a 150–250 Primary auditory/visual Detection of change (oddball)
N2b 200–350 Anterior Response inhibition/conflict, error monitoring
N2c 200–300 Posterior Degree of attention allocated to stimulus
N2pc 200–300 Posterior-contralateral Attention allocation
LDAP −200–0† Posterior-contralateral Preparatory activation of visual cortex
EDAN 150–350 Occipital-contralateral Decoding of the attentional cue
ADAN 300–500 Frontal-contralateral Initiation of an attentional shift

Stimulus-categorization related*
P300/P3b 300–600 Parietal Stimulus evaluation and categorization
P3a 250–280 Frontocentral Attention engagement, processing of novelty

N400 250–500 Centro-parietal Semantic processing


P4pc 350–450 Posterior-contralateral Deallocation of attention
RON 400–600 Frontocentral Reorienting towards target
P600 500–1200 Centro-parietal Syntactic processing

Response related‡
CNV 260–470 Vertex Contingency between two stimuli
LRP 260–470 Centro-contralateral Response preparation
ERN/Ne 80–150 Frontocentral Error processing

* ERP components time-locked to the stimulus.
† The LDAP is measured before target onset.
‡ ERP components time-locked to the response.
B Time-frequency analysis
Whereas ERPs are computed in the time domain, time-frequency
analysis involves quantifying the power in each of the frequencies of
the EEG signal [estimated, e.g., with fast Fourier transformation (Regan,
1989); see Pfurtscheller and Lopes da Silva (1999) for a method based on
event-related desynchronization, and Samar et al. (1999) for a method
based on wavelet analysis]. Time-frequency analysis has some advan-
tages over ERP-based analysis of task performance. The most important
advantage of time-frequency analysis is that it allows the observation
of changes in cerebral activity that are not phase-locked to a particular
event (Pfurtscheller & Lopes da Silva, 1999). This property overcomes
the limitation that ERPs, because they are computed by averaging out
any activity not phase-locked to the onset of an event, lose any
induced activity that is time-locked, but not phase-locked, to that event
(see, e.g., Pfurtscheller & Lopes da Silva, 1999; Tallon-Baudry & Bertrand,
1999). Moreover, the range of frequencies that compose the EEG is
better covered with frequency analysis than ERP analysis because the
computation of ERPs requires filtering, with the result that frequencies
outside the range of the filter are lost (see, e.g., Luck 2005). However,
filtering the EEG to isolate the ERP removes movement artefacts (such
as microsaccades; see Fries et al., 2008).
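The evoked/induced distinction can be made concrete by contrasting two orders of operations: taking the power spectrum of the trial average keeps only phase-locked (evoked) activity, whereas averaging single-trial power spectra also retains induced activity. A minimal NumPy sketch, with function names of our own choosing:

```python
import numpy as np

def evoked_and_total_power(epochs):
    """Contrast two estimates of spectral power across trials.

    epochs : array of shape (n_trials, n_samples)

    'Evoked' power: FFT power of the trial average -- only activity
    phase-locked to the event survives the averaging.
    'Total' power: average of the single-trial FFT power spectra --
    retains induced activity that is time-locked but not phase-locked.
    """
    evoked = np.abs(np.fft.rfft(epochs.mean(axis=0))) ** 2
    total = (np.abs(np.fft.rfft(epochs, axis=1)) ** 2).mean(axis=0)
    return evoked, total
```

Feeding this function oscillatory trials whose phase varies randomly from trial to trial produces a clear peak in the total power spectrum that is almost absent from the evoked spectrum, which is exactly the induced activity that ERP averaging discards.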
In the earliest known documentation of the EEG signal, Berger
(1929) described a relatively slow (8–12 Hz) rhythmic oscillation,
which he termed the alpha band. Subsequently, oscillations with
a periodicity of 12–30 Hz (the beta band), 30–80 Hz (the gamma
band), less than 4 Hz (the delta band) and 4–8 Hz (the theta band)
have been described. A very general way to link the activity of the
nervous system with the cognitive demands imposed by a task is to
observe the intervals of synchronization and desynchronization in a
given band. Synchronization indexes the state of cortical rhythmicity
(i.e. that the nervous system is synchronized with a certain frequency
band), whereas desynchronization refers to the interruption of corti-
cal rhythmicity. For example, Nunez et al. (2001) showed that alpha
desynchronization correlates with mental effort, such that the alpha
rhythm decreases with increases in mental effort. In other words,
in a state of relaxation or ‘idling’ state, alpha waves are of relatively
high amplitude, or synchronized. Desynchronization is not always
associated with an increase in mental effort. For example, Nunez
et al. (2001) showed that theta tends to increase (i.e. synchronize)
with increased mental effort.
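Synchronization and desynchronization in a band are typically quantified as a percentage change in band power relative to a reference interval (after Pfurtscheller & Lopes da Silva, 1999). The NumPy sketch below uses band boundaries matching those given above (the exact cut-offs vary between authors and are illustrative here):

```python
import numpy as np

EEG_BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12),
             "beta": (12, 30), "gamma": (30, 80)}

def band_power(signal, fs, band):
    """Mean FFT power of `signal` within a frequency band (lo <= f < hi)."""
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    lo, hi = band
    return power[(freqs >= lo) & (freqs < hi)].mean()

def erd_percent(baseline, task, fs, band=EEG_BANDS["alpha"]):
    """Event-related desynchronization: the percentage drop in band power
    from a reference (baseline) interval to the task interval."""
    p_ref, p_task = band_power(baseline, fs, band), band_power(task, fs, band)
    return 100.0 * (p_ref - p_task) / p_ref
```

A positive value indexes desynchronization (e.g. the alpha decrease with mental effort described above); a negative value indexes synchronization, as reported for theta.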
Information processing in the brain

Perception
Information from the different senses is processed in dedicated brain
areas, which have a roughly similar functional organization in the case
of vision, audition and somatosensation (Amedi et al., 2005). Sensory
information arrives from the senses in a deep brain structure called
the thalamus, which may be thought of as the brain’s switchboard for
incoming information. Sensory information then propagates to the
so-called primary sensory cortices: visual information is sent to V1 in the
occipital lobe, auditory information to A1 in the temporal lobes, and
somatosensory information to S1 in the parietal lobe. These primary
sensory cortices process information at a basal level, such as brightness
in the case of vision, or tone frequency in the case of audition. The
early sensory areas are organized topographically in feature space. For example,
in the somatosensory cortex the brain cells processing input from the
index finger are next to the brain cells processing input from the middle
finger and so forth. In the primary visual area, neurons are retinotopi-
cally organized. Activity in these neurons shows a one-to-one relation
with the image projected on the retina. In the primary auditory cortex,
the organization is tonotopic: neurons are organized according to the
frequency to which they respond. After processing in the sensory cor-
tices, information is fed forward to so-called association cortices in the
temporal and parietal lobes, where sensory information is integrated in
higher-level processes, such as memory and decision-making.
Of the sensory brain systems, the visual system has been studied the
most extensively. The visual system consists of two separate pathways:
the ventral and dorsal routes (see Chapter 2). The ventral route is from
the occipital lobe to the temporal lobe. This pathway is sensitive to
objects (i.e. this pathway processes constellations of features, e.g. a face).
The dorsal route leads from the occipital lobe to the parietal lobe and is
involved in processing the location of objects, and guiding movements.
The two pathways are organized into distinct visual areas in both hier-
archical and parallel fashions. The organization is hierarchical in that
the complexity of the represented object increases as the information is
passed on from area to area, but also parallel in the sense that distinct
areas are specialized in distinct features (e.g. there are areas specializing
in colour, in motion and even specifically in recognizing faces).
The traditional view of visual perception is that visual images are
‘decomposed’ into composite features in the early visual areas and
subsequently ‘recomposed’ in higher visual areas (Marr, 1982). This view,
however, has been challenged in the past decade. Visual areas higher
up in the hierarchy of processing not only receive information from
lower areas (so-called feed-forward information processing), but also
send back information to lower visual areas, thus influencing infor-
mation processing at more basal levels (Lamme & Roelfsema, 2000).
This feedback, or recurrent processing plays an important role in many
contemporary theories of visual processing, attention and awareness
(Lamme, 2003, 2006; Roelfsema, 2006). Other sensory modalities
appear to have a similar structure, though far less is known about the
role of cortico-cortical interactions in modalities other than vision
(see Raizada & Grossberg, 2003).
Processing of sensory information is, in many respects, modality-
specific: information presented in separate modalities tends to result
in less interference than when that same information is presented
within one modality, which suggests that each of the five senses has
its own capacity limits. Information from the various senses does seem
to be subject to interference (or crosstalk) at more central stages of
processing (Spence & Driver, 2004). For example, tactile stimulation
can affect visual attention, suggesting that multimodal parietal areas
receive tactile input and project to the visual cortex, which can result
in attentional enhancement of visual signals (Macaluso et al., 2000), at
least when the visual and tactile stimuli spatially coincide (Macaluso
et al., 2002). Moreover, recent studies point to the possibility of direct
interactions between primary sensory areas. For example, direct connec-
tions from auditory integration cortex to V1 have been demonstrated
in the primate brain (Rockland & Ojima, 2003) and are believed to play
an important role in the so-called sound-induced flash illusion (Shams
et al., 2000), in which a single flash of a visual stimulus is perceived as
two flashes when it is accompanied by two auditory beeps. Moreover,
in conditions of sensory deprivation, early visual cortex adapts
dynamically to the absence of visual stimulation, coming to process
auditory information after as little as two weeks of visual deprivation (Merabet
et al., 2008), suggesting that multimodal integration may also occur at
very early levels of processing.

Working memory
One of the most important ways to enhance performance in complex
environments is to aid the operator in directing attention to relevant
elements so that this information can be selected for representation in
working memory. Working memory influences selection and holds task
goals, and thus has a key role in decision-making and action selection.
Because of the position of working memory as an interface between
external stimulation and internal states, it is sometimes considered a
process fundamental to attention (Knudsen, 2007). Working memory
holds the objects of attention (Cowan, 2005). In turn, attention is
needed to maintain representations in working memory. More evidence
of the interaction between memory and attention is that maintenance
of spatial locations in working memory biases attention to those loca-
tions (Awh & Jonides, 2001).
Many studies have been conducted to elucidate the neural under-
pinnings of working memory. The PFC, in particular, seems to play a
central role in working memory, as evidenced by selective interference
with working memory of lesions in the PFC (Duncan & Owen, 2000;
Goldman-Rakic, 1995). However, working memory is likely widely dis-
tributed in the brain, with the PFC acting as an executive controller that
engages with cortical and subcortical regions involved in the processing
of sensory, motor and internally-generated information (e.g. Miller &
Cohen, 2001). Marois et al. (2000) proposed that a frontoparieto-temporal
network is involved in raising perceived information into awareness
and working memory; synchronization between the areas involved
in this network may be especially informative (Dehaene et al., 2003;
Gross et al., 2004). Visual working memory (VWM), which supports
the maintenance of visual information for relatively short periods, is,
in particular, supported by sustained neuronal activity in a cortical
network involving frontal, parietal, occipital and temporal areas (Palva
et al., 2010). Sustained, stable and VWM-load dependent interareal
phase synchrony is found among frontoparietal and visual areas
during a VWM-retention period in alpha (10–13 Hz), beta (18–24 Hz)
and gamma (30–40 Hz) frequency ranges, consistent with the idea
that interareal synchrony has the function of sustaining object
representations in VWM.
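The band definitions quoted above lend themselves to a simple computation. As a rough illustration only (this is not the synchrony analysis used in the studies cited, and the signal is synthetic), the power of an EEG trace within a named frequency band can be read off its discrete Fourier transform; a naive DFT is used here for transparency where real pipelines would use an FFT.

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    # Sum spectral power over DFT bins whose frequency falls in [f_lo, f_hi).
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq < f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / n
    return power

fs = 250                                  # sampling rate (Hz)
t = [i / fs for i in range(fs)]           # one second of data
alpha_like = [math.sin(2 * math.pi * 11 * x) for x in t]  # 11 Hz oscillation

# An 11 Hz signal carries its power in the alpha band (10-13 Hz),
# not in the beta (18-24 Hz) or gamma (30-40 Hz) ranges.
print(band_power(alpha_like, fs, 10, 13) > band_power(alpha_like, fs, 18, 24))
```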
An interesting and important phenomenon is the ‘delay-period’
or ‘persistent’ neural correlate of working memory first described by
Fuster and Alexander (1971). Namely, neurons in the PFC of monkeys
that had been trained to remember target stimuli for a brief period of
time responded both when the target was present and in the seconds
between target disappearance and the making of a response—even
when visual distractors were shown after the target (Fuster, 1995). In
tasks such as this, brain dopamine (DA) in the PFC seems to play a
role in the stabilization of the earlier presented stimulus across the
short delay. Brain DA has further been implicated in working memory,
and cognitive control processes in general. Cools and D’Esposito (2011)
suggest that cognitive control, such as exercised by working memory,
requires a dynamic balance between cognitive stability (i.e. the 'online'
stabilization of task-relevant representations) and cognitive flexibility
(i.e. flexible updating of task representations in response to novel
information), and that these distinct components of control might call
upon the PFC and the striatum, respectively. They note that whereas
the effects of DA on cognition have often been ascribed to modulation
of the PFC, recent data suggest a complementary role for DA in the
striatum for working memory and cognitive control. In the striatum,
DA might have a qualitatively different function from that of DA
in the PFC, with striatal DA being more important for the ability to
flexibly update goal representations when new information becomes
available.
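The delay-period activity described above can be caricatured in a few lines. The toy model below is our illustration, not Fuster's: a PFC unit's firing is driven by its preferred stimulus and then maintained across the delay by self-excitation, with a hypothetical persistence parameter standing in for dopaminergic stabilization.

```python
def persistent_unit(drive, persistence=1.0):
    # Firing rate follows the stronger of the current input and the
    # self-sustained echo of the unit's previous activity.
    rate, trace = 0.0, []
    for d in drive:
        rate = max(d, persistence * rate)
        trace.append(rate)
    return trace

# Target at t=0, a weaker distractor at t=3, response cued at the end:
# with full persistence the target's representation survives the delay.
print(persistent_unit([1.0, 0, 0, 0.5, 0, 0, 0]))
# With imperfect persistence the trace decays and the distractor can
# momentarily capture the unit.
print(persistent_unit([1.0, 0, 0, 0.5, 0, 0, 0], persistence=0.7))
```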

Attention and arousal


Michael Posner, one of the leading pioneers in the neuroscience of
attention, defines attention as ‘the regulating of various brain networks
by attentional networks involved in the alert state, orienting, or regula-
tion of conflict’ (Posner & Rothbart, 2007, p. 2; see Figure 1.1; Posner &

Posterior area Motor cortex


Superior
Frontal eye field
parietal lobe
Anterior cingulate gyrus
Temporoparietal
junction
Frontal area

Thalamus
Pulvinar
Amygdala Prefrontal cortex

Hippocampus

Superior colliculus
Cerebellum
Alerting
Orienting
Executive function

Figure 1.1 The location of some major areas of relevance for information
processing in the brain and the areas proposed by Posner and Rothbart (2007) to
be involved in alerting, attentional orienting and executive function
16 Neuroergonomics

Fan, 2007). This definition emphasizes the importance of temporal


correspondence between multiple cell assemblies (Hebb, 1949;
cf. synchronization; Womelsdorf et al., 2006). Other definitions or models
of attention emphasize that control can be stimulus-driven (bottom-up)
or under voluntary control (top-down), or focus on whether attention
must be focused or divided in order to perform a task.
Alerting refers to achieving and maintaining a state of high sensitivity
to incoming stimuli. Alerting is often studied by presenting warning
signals before stimuli appear, and the effects of such warning signals
have been related to modulation of neural activity by the neurotrans-
mitter norepinephrine (Marrocco & Davidson, 1998). Alerting has been
associated with the thalamus, as well as the frontal and parietal cortex
(Fan et al., 2005). Alerting is similar to the older concept of arousal,
which refers to an individual’s level of activity, whether reflected in
general behavioural states, such as active wakefulness or sleep, or in
subjective experience, such as alertness or drowsiness. Changes in arousal
can be indexed by recording brain activity. For example, increased EEG
theta activity recorded from posterior electrode sites on the scalp is
associated with lowered arousal and poor performance on prolonged,
monotonous tasks (O’Hanlon & Beatty, 1977). Also, fMRI studies have
shown that variations in arousal are linked to activation in the brain
stem and in widespread frontal-parietal networks in the right hemisphere
(Sturm & Willmes, 2001).
Posner and Rothbart (2007) define orienting as ‘the interaction of a brain
network with sensory systems designed to improve the selected signal’
(p. 7). Orienting, in this sense, is well-captured by the model of attention
proposed by Knudsen (2007) in which attention can be controlled exter-
nally, by salient features or objects in the environment, or voluntarily, by
working memory, top-down sensitivity control and competitive selection.
In the case of visual attention, information that falls within the visual
field is processed according to its salience, with infrequent or important
stimuli being differentially responded to (e.g. Koch & Ullman, 1985).
These representations and any other activated information are selected
according to a competitive process whereby information with the highest
signal strength enters working memory (cf. Desimone & Duncan, 1995).
The results of competitive selection can influence top-down sensitiv-
ity control without the involvement of working memory, or working
memory can bias the top-down signals that modulate the sensitivity
of neural representations competing for entry into working memory
(e.g. Egeth & Yantis, 1997). Thus, voluntary attention can be described as
a recurrent loop involving working memory, top-down sensitivity control
and competitive selection. Moreover, competitive selection and working
memory directly influence eye movements, thus determining what future
input is possible, and neural discharges associated with gaze control
modulate sensitivity control.
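The recurrent loop just described can be made concrete with a toy sketch of the Knudsen-style scheme. All names and numbers below are invented for illustration: signal strength is modelled as the product of bottom-up salience and a top-down gain supplied by working memory, and the strongest representation wins entry into working memory.

```python
def competitive_selection(candidates, gain):
    # Signal strength = bottom-up salience x top-down sensitivity gain;
    # the strongest representation wins (cf. Desimone & Duncan, 1995).
    strength = lambda c: c["salience"] * gain.get(c["name"], 1.0)
    return max(candidates, key=strength)["name"]

scene = [{"name": "flashing banner", "salience": 0.9},
         {"name": "warning gauge", "salience": 0.6}]

# Bottom-up only: the salient distractor captures selection.
print(competitive_selection(scene, {}))
# Working memory holds the task goal and biases sensitivity to the gauge.
print(competitive_selection(scene, {"warning gauge": 2.0}))
```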
Orienting has been related to the frontal eye fields, the superior
parietal lobe and temporoparietal junction, and also to the pulvinar
in the thalamus and the superior colliculus. Event-related fMRI stud-
ies have linked cuing effects specifically to the superior parietal lobe
(Corbetta & Shulman, 2002), an area that is related closely to the lateral
intraparietal area in monkeys—an area that is involved in the produc-
tion of eye movements. When attention must be disengaged from one
location and moved to another, activity is seen in the temporoparietal
junction (Corbetta & Shulman, 2002).
Neuroscientific evidence that working memory, top-down sensitivity
control and competitive selection are dissociable is accumulating. For
example, whereas PFC is associated with executive control, the poste-
rior parietal cortex (PPC) seems to be more associated with top-down
sensitivity control and competitive selection. Top-down bias signals
have been observed directly in monkeys trained to discriminate between
sensory stimuli: when a given stimulus is relevant on a given trial, the
responses of neurons representing that stimulus are greater than when
the same stimulus is presented in a behaviourally-irrelevant context
(Desimone & Duncan, 1995). Sensitivity control can be described
as utilizing space-specific bias signals that improve localization
(see Chapter 3). The fact that the PPC receives inputs from the senses,
as well as movement-related corollary discharges and proprioceptive
feedback (Andersen et al., 1997), makes it a tenable candidate for the
translation of spatial information from the retinotopic frames of refer-
ence of the visual cortex to the more abstract frames of reference of
working memory. An interesting hypothesis is that attentional selec-
tion of items for representation in working memory may be associated
with oscillations in the gamma band (30–80 Hz) that are synchronized
in higher-order sensory areas, PFC and PPC (e.g. Bauer et al., 2006;
Womelsdorf et al., 2006).
Top-down, goal-directed attention is associated with prefrontal areas,
whereas bottom-up, stimulus-driven attention is associated with more
parietal activity. Voluntary spatial shifts to task-relevant locations are
directed by areas in the parietal cortex that contain representations
in topographic maps of attentional foci (Sereno et al., 2001; Silver
et al., 2005). Reorientation to a target in an unattended location relies
on a circuit that includes the right temporoparietal junction (Corbetta
et al., 2008). According to Corbetta et al., 'reorienting', or the ability
to change a current course of action to respond to potentially advanta-
geous or threatening stimuli, is supported by the coordinated action of
a right hemisphere dominant, ventral frontoparietal network that inter-
rupts and resets ongoing activity, and a dorsal frontoparietal network
specialized for selecting and linking stimuli and responses. The two
networks are distinct and internally correlated when the organism is at
rest, but, when attention is focused, the ventral network is suppressed
to prevent reorienting to distracting events.
Executive attention, such as that involved in responding to so-called
conflict tasks, in which irrelevant, to-be-ignored information causes
interference with the information to be responded to, is commonly asso-
ciated with the PFC and the anterior cingulate cortex (ACC; Botvinick
et al., 2001; see Chapter 5). Connections between the ACC and sensory
areas suggest that the ACC regulates sensory input (Crottaz-Herbette &
Menon, 2006). The ACC has large-scale connectivity to many brain
areas, which suggests that it is ideally situated to exercise cognitive
control over other brain networks. The more dorsal part of the ACC
has been associated with the regulation of cognitive tasks, whereas the
more ventral part of the ACC is involved in the regulation of emotion
(Bush et al., 2000). Dorsal ACC is connected strongly to the frontal and
parietal areas involved in cognitive processing, and is active during task
performance. For example, during a visual selection task, dorsal ACC
activity correlates with visual brain areas and during an auditory task
the activity of the dorsal ACC correlates with auditory areas (Crottaz-
Herbette & Menon, 2006).
Cognitive control is, by definition, attentional in the sense that it
involves maintaining task goals. Switching between tasks, response
selection and retrieval from long-term memory all require cognitive
control (Chun & Turk-Browne, 2007), and it is likely that all are subject
to a common processing bottleneck in lateral frontal cortex (Marois &
Ivanoff, 2005), although task-switching may be subject to additional
limitations on selection. Internal control requires that one response
or task goal be given priority above others, and therefore implies inhi-
bition of competing options. Prefrontal, parietal and basal ganglial
regions are involved in internal control (e.g. Braver et al., 2003). As will
be discussed at more length in Chapter 5, there is a close connection
between attentional control and lapses of attention. In fact, reduced
prestimulus activity in attentional control regions such as the anterior
cingulate and right PFC has been associated with attentional lapses
(Weissman et al., 2006).

Decision-making
Making decisions is an important aspect of human life. Even simple
decisions, such as deciding whether or not to take an umbrella with
you on a cloudy day, depend on many factors (e.g. the distance to be
travelled, the weight of the umbrella and the actual chance of rain).
Within the field of economics, decision-making was long dominated
by a rational-choice perspective (e.g. Leiser & Azar, 2008; Loewenstein
et al., 2008). Over the past few decades, however, growing interest in
the neural basis of decision-making has brought together economists
and psychologists in the new field of ‘neuroeconomics’ (Glimcher et al.,
2008; Loewenstein et al., 2008).
Numerous studies have shown that our decision-making is guided
not only by rational considerations, but also by emotion. For example,
what we choose is, for a large part, guided by our expectations of how
we would feel as a consequence of our actions (Kahneman & Tversky,
1990). The question of what the best option is in a particular case is
thus a complicated one. Different brain mechanisms are responsible
for weighing relatively more rational and more emotional considera-
tions in decision-making, with the dorsolateral PFC being involved in
more rational aspects of decision-making, and the orbitofrontal cortex
and the limbic system playing a role in the evaluation of the emotional
aspects of decision making (Damasio, 1996; Sanfey et al., 2006).
The concept of reward is a key factor in understanding why people
make specific decisions. Reward is most easily understood (and studied)
in terms of monetary gains, but comes in many forms. Research in
behavioural economics has shown that the value associated with a
reward is not fixed, but is strongly influenced by the temporal con-
text in which it is offered. For example, if given the choice between
receiving $100 today or $110 tomorrow, most people would opt
for receiving $100 today. If offered the choice between $100 in a year
or $110 in 13 months, more people will tend to choose the larger pay-
out (Thaler, 1981). It has been suggested that two different processes
underlie how short-term and longer-term rewards are evaluated,
and that the two processes operate fairly independently (Rilling &
Sanfey, 2011). A recent, controversial line of investigation suggests
that choices between alternatives may be more rational when the
pros and cons of the different options are not consciously deliberated
than when they are (e.g. Dijksterhuis et al., 2009). The suggestion that
distraction during decision-making can lead to better decisions points to
the need to clarify the roles of emotion and unconscious thought in the
decision-making process.
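The reversal in Thaler's example falls naturally out of hyperbolic discounting, the form most often fitted to such choices. The sketch below is illustrative only: the discount function V = A / (1 + kD) and the steep rate k are our assumptions, not values taken from the studies cited.

```python
def hyperbolic_value(amount, delay_days, k=0.2):
    # Subjective value under hyperbolic discounting: V = A / (1 + k * D).
    return amount / (1 + k * delay_days)

# $100 now is preferred to $110 tomorrow...
immediate = hyperbolic_value(100, 0) > hyperbolic_value(110, 1)
# ...yet pushing both payouts about a year into the future reverses the
# choice in favour of the larger, later reward.
delayed = hyperbolic_value(110, 395) > hyperbolic_value(100, 365)
print(immediate, delayed)  # both comparisons hold: a preference reversal
```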

Action and motor control


Any interaction of people with the physical world involves selection
and execution of action. Many cortical and subcortical brain regions
have been implicated as playing a role in action selection and execu-
tion, including the PFC, the temporal and parietal cortices, the basal
ganglia and the motor cortices (Bunge, 2004). A distinction is often
made between an action hierarchy, which represents actions in terms
of goals and the actions required to attain them, and a motor control
hierarchy for the processes that control execution of those actions,
although issues remain as to how these hierarchies can be recon-
ciled and coordinated (Uithol et al., 2012). The discovery of ‘mirror
neurons’ in the ventral premotor cortex (di Pelligrino et al., 1992) and
posterior parietal cortex of monkeys (Rizzolatti & Craighero, 2004),
which respond not only when an action is executed, but also when it
is observed being performed by another, has suggested that the neural
structures underlying execution of action may also be involved in the
understanding of action, as described in the next section.
The participation in the 2012 Olympics of Oscar Pistorius, who won
a spot in the 400 m race final on two prosthetic legs, provided a vivid
illustration of the state of modern technology allowing control of pros-
thetic devices by an amputee. Myoelectric limbs are interfaced through
electrodes with the remaining muscles of the amputee’s limb, and the
electrodes detect minute activity of the muscles and their associated
nerves, which are translated into movements of the prosthetic limb.
Considerable advances in these technologies have been made over the
past 35 years, although improvements are needed continually to match
advances in the prosthetic hardware (Parker et al., 2006). Technology
is also developing rapidly for brain–computer interfaces, which allow
communication with devices through signals recorded from the
brain (Schwartz et al., 2006). This technology has potential benefits
for individuals with acute motor disabilities, in particular, but is also
being explored as an interface for high-performance environments
(see Chapter 4). At present, research is shifting from basic studies and
individual demonstrations to translational research intended to generate
commercial products (Brunner et al., 2011b).
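The myoelectric control scheme sketched above (electrode signal, then muscle-activation estimate, then limb command) can be shown in miniature. Everything below is a schematic stand-in for real prosthesis firmware: the envelope window, the threshold and the command names are invented for the example.

```python
def emg_envelope(emg, window=4):
    # Rectify the raw electrode trace, then smooth with a moving average
    # to estimate the current level of muscle activation.
    rect = [abs(s) for s in emg]
    return [sum(rect[max(0, i - window + 1):i + 1]) / min(window, i + 1)
            for i in range(len(rect))]

def limb_command(activation, threshold=0.5):
    # Hypothetical two-state mapping: a sustained contraction opens the hand.
    return "open" if activation > threshold else "rest"

quiet = [0.02, -0.01, 0.03, -0.02]        # resting muscle
contraction = [0.9, -0.8, 1.1, -0.95]     # strong voluntary contraction

print(limb_command(emg_envelope(quiet)[-1]))        # rest
print(limb_command(emg_envelope(contraction)[-1]))  # open
```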

Emotion and social interaction


Emotions play a major role in cognitive processes, such as decision-
making (Damasio, 2001), and an even greater role in social interactions.
Our capacity to feel emotion influences the way we make decisions, as
is demonstrated by the seminal work of Bechara et al. (1997). In a now
classic experimental paradigm, the Iowa Gambling Task, participants
repeatedly draw a card from one of two stacks. The cards indicate the
win or loss of a given amount of money. The critical manipulation is
that one of the two stacks is more profitable than the other in the long
run. Healthy participants notice this quite easily, but patients with
prefrontal damage have significant difficulty in learning which stack is
the more profitable one. Learning which stack to choose is associated
with a so-called somatic marker: winning or losing evokes an emotional
response in participants, which is associated with increased autonomic
activity. In patients with prefrontal brain damage, however, this
emotional response is absent. According to Damasio’s somatic marker
hypothesis, these faint emotional feelings play an important role in regu-
lating behaviour. The somatic marker hypothesis (Damasio, 1996) has
been applied to everyday behaviours, such as driving (Lewis-Evans et al.,
2012), where it has been shown that even emotions that people are not
aware of can make people adjust their behaviour: unconscious negative
emotions, signalling danger, make people drive more slowly.
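The logic of the Iowa Gambling Task can be conveyed with a deliberately simplified simulation. The payoff schedules below are hypothetical (the actual task uses card decks with probabilistic wins and losses), and the running tally plays the part of the somatic marker that comes to favour the profitable stack.

```python
def draw(deck, trial):
    # Hypothetical two-deck schedule: deck A alternates large wins with
    # larger losses (net -25 per draw); deck B alternates modest wins with
    # small losses (net +12.5 per draw).
    if deck == "A":
        return 100 if trial % 2 == 0 else -150
    return 50 if trial % 2 == 0 else -25

# Accumulate the average outcome experienced with each deck -- a crude
# stand-in for the emotional marker attached to each option.
marker = {"A": 0.0, "B": 0.0}
trials = 100
for trial in range(trials):
    for deck in ("A", "B"):
        marker[deck] += draw(deck, trial) / trials

print(marker)  # deck B ends up carrying the positive marker
```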
Emotions can also play a role in how we perceive the world: percep-
tual, attentional and mnemonic processes are all influenced by how we
feel. A well-known phenomenon is the shift in global–local focus that
occurs as a function of positive or negative mood. People in a positive
state of mind tend to have a global focus, focusing on the gist of a visual
scene or remembering the overall outline of a story, whereas people in
a negative mood adopt a more local focus, focusing on the details of an
image or remembering the details of a story (Clore & Huntsinger, 2007).
Moreover, perception itself is altered by mood. For example, when in
a happy mood, people are quicker in reading positive words and tend
to interpret ambiguous facial expressions as happy, whereas the oppo-
site is true for people in a negative mood (Barrett & Niedenthal, 2004;
Bouhuys et al., 1995; Jolij & Meurs, 2011).
A better understanding of the emotional brain, and of how emotional
responses can be monitored, is therefore of great importance in the field
of neuroergonomics. Emotions, and physical emotional responses, are
regulated by the limbic system, in particular by the amygdala, a nucleus
that regulates emotional behaviour and processes the emotional content
of sensory input. Activity in this emotion network does not necessarily
lead to conscious emotional experience, but does alter autonomic
activity, and can prime fight or flight decisions (LeDoux, 1996).
How we feel emotions is less well understood than the physical
correlates of emotion. Studies in patients have shown that the orbito-
frontal cortex seems to play a critical role in the ability to feel emotions,
but little evidence links particular brain areas to the feeling of specific
emotions. However, there does appear to be some hemispheric localiza-
tion of emotions, with the left hemisphere seeming to mediate positive
moods and the right hemisphere being more active in a negative state
(see Chapter 7). The only emotion that has been linked to a specific
brain area is disgust, which appears to be localized in the insular cortex
(Keysers & Gazzola, 2006).
Emotions form an important aspect of social communication. Recent
studies on the perception of emotional expressions have shown that
seeing an emotion induces that emotion, possibly by triggering mimicry
of the perceived expression, as if we are trying to understand the emotional
state of someone by emulating what we see (Keysers & Gazzola, 2006; Neal
& Chartrand, 2011). The idea that mimicry is crucial to understanding the
emotions of others fits well with the aforementioned discovery of mirror
neurons, the motor-area neurons that fire when an action is observed that
would normally require those same neurons to execute (see, for a review,
Rizzolatti & Craighero, 2004). For example,
neurons that would normally trigger a movement of the arm will also fire
if an observer sees someone else moving an arm (Rizzolatti & Sinigaglia,
2010). Mirror neurons are supposed to play an important role in social
communication by allowing us to understand the intentions of others by
virtue of simulating the mental state of the other (Keysers, 2009).
In the context of neuroergonomics, the agents people interact with
may not be human agents, but computer agents. It has been shown that
humanoid robots—and even computer programs—may evoke brain
activity in observers that is consistent with such interactions being
treated as if they were social interactions. Interacting with something
that looks even remotely human activates the so-called social brain
network that includes the temporoparietal junction and the superior
temporal sulci, which is involved in the processing of social information,
such as trustworthiness and agency (Krach et al., 2008). A better
understanding of the functioning of this brain network may therefore allow
the optimization of interface design (see Chapter 7). Given that recent
research suggests that our brain is optimized for social interaction, mod-
elling human–computer interaction on social interaction may prove to
be an efficient way of optimizing human–computer interaction.

Prediction of prospective activity

From the standpoint of the ergonomist, one of the most important
questions regarding neural activity is whether we can use it to predict
how the operator will perform within a relatively short period of time.
If we see, for example, that attentiveness is flagging, a signal or even
some direct ‘refreshment’, such as a stimulating odour (Kato et al., 2012)
could be given to reorient the operator to the task. Non-intrusive, real-
time computation of the neural correlates of internal states that precede
changes in performance is a holy grail of adaptive technologies. How
can looking at the time period before an action (whether an attention
shift or the selection of a response) is performed inform us about the
action that will be taken or the quality of information processing
that will occur? One line of research geared towards answering this
question is to develop pattern classification algorithms that predict
future performance on the basis of the analysis of antecedent states
(see Chapters 4 and 5).
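A toy version of such a classifier makes the idea concrete. The sketch below applies a nearest-centroid rule to two invented 'antecedent state' features (prestimulus alpha power and anterior cingulate activity, both plausible candidates given the findings reviewed in this chapter); the data are fabricated for illustration, and real systems use far richer features and learned decision boundaries.

```python
def centroid(rows):
    # Mean feature vector of a set of training trials.
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def predict(state, centroids):
    # Assign the label whose centroid is nearest in squared Euclidean distance.
    d2 = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: d2(state, centroids[label]))

# Invented training data: [prestimulus alpha power, ACC activity] per trial.
lapse_trials = [[0.9, 0.2], [0.8, 0.1], [1.0, 0.3]]   # drowsy, low control
on_task_trials = [[0.3, 0.8], [0.2, 0.9], [0.4, 0.7]]

centroids = {"lapse": centroid(lapse_trials),
             "on-task": centroid(on_task_trials)}

# A new prestimulus reading predicts an imminent lapse -- the cue for an
# adaptive system to intervene before performance degrades.
print(predict([0.85, 0.25], centroids))  # lapse
```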
Much of the proof of concept that performance prediction is possible
comes from studies using fMRI. Because fMRI is non-invasive, it is pos-
sible to scan repeatedly in order to examine changes that occur with
learning. Generally, learning on a task is associated with a decrease in
both the number of brain regions activated and the amount of activation
in the associated networks,
although the rate of change of these networks may vary from milli-
seconds to years, depending on what is being learned (Posner, 2012),
and connectivity within networks may be enhanced with practice
(McNamara et al., 2007). The EEG signal has also been studied exten-
sively in this regard. Synchronous EEG activity has been shown to
predict enhanced visual perception (Hanslmayr et al., 2007) or to relate
to anticipatory attention when an event can be predicted (Rohenkohl &
Nobre, 2011). Moreover, various studies have shown that the EEG signal
can be used to predict participants’ actions before they initiate them
(Libet et al., 1983) and that fMRI can reveal which decision will be taken
up to ten seconds before the decision is actually made (Falk et al., 2010;
Soon et al., 2008).

Direct augmentation of human performance

Neural augmentation has its roots in research conducted in animals and
in humans with severe psychiatric disorders. An early pioneer in the
field, Jose Delgado, showed that electronic devices could be implanted
in the brain to manipulate actions or emotions by receiving signals from,
or transmitting them to, neurons (Horgan, 2005). Although his attempts
to treat disorders with brain implants were met with some success,
results were variable across patients and even for individual patients.
To date, the most reliable results have been obtained for Parkinson’s
disease patients (Weaver et al., 2009) and severe depression (Kennedy
et al., 2011), both using deep brain stimulation. Cochlear implants, in
which the auditory nerve is stimulated directly, can also be considered
brain implants. Brain implants are also being explored as input for
brain–computer interfaces for locked-in patients (see Chapter 4).
Direct augmentation of human function via drugs or brain implants
is probably the most controversial and emotionally-charged topic in
neuroergonomics. Although few would seem to begrudge Parkinson’s
disease patients the brain implant that allows them respite from
tremor, the possibility that brain implants will someday—perhaps
soon—enhance healthy function has led to heated debate. From a
neuroergonomic point of view, enhancement of normal function is
exactly what researchers hope to achieve. Farah et al. (2004) summarize
many of the arguments for and against such neurocognitive enhancement.
They note that prescription stimulants, such as methylphenidate and
dextroamphetamine, are already used by healthy high school and college
students hoping to boost test scores (Babcock & Byrne, 2000), that
drugs that target either the onset of long-term potentiation or memory
consolidation are being developed to improve memory, and that drugs
targeting the dopamine and noradrenaline neurotransmitter systems
not only improve deficient executive function, but may improve normal
executive function (although such improvements may be limited to
low-performing individuals; Elliott et al., 1997; Mehta et al., 2000).
Major ethical and practical issues are: who should decide whether or not
performance enhancing drugs will be administered, and how should the
performance of people benefiting from enhancement be evaluated? At
a more basic level, safety, coercion, distributive justice and personhood
are major concerns.
Safety is a concern with all health interventions, but neurocognitive
enhancement involves intervening in a complex, not fully understood
system. Coercion refers to explicit or implicit pressure to engage in neuro-
cognitive enhancement either because of pressure from an employer who
recognizes the benefits of a more attentive and less forgetful workforce,
or because of fears of competing against enhanced co-workers. Distributive
justice is a concern because there will likely be cost and social barriers
to neurocognitive enhancement, as there are for other benefits, such
as health care and schooling. Finally, because modifying brains affects
individuals, it is important to consider how neurocognitive enhance-
ment affects our understanding of ‘what it means to be a person, to be
healthy and whole, to do meaningful work, and to value human life in
all its imperfection’ (Farah et al., 2004, p. 424). Although it can be
argued that the quest to improve on natural endowments brings with it
the risk of ‘pathologizing’ normal function, ardent coffee drinkers can
attest to the capacity of individuals to become adapted to some kinds
of enhancement and to how widespread societal acceptance of such
enhancement is.

Conclusion

Since Parasuraman’s (2003) call for a neuroergonomic approach to


improving safety and efficiency at work the field of cognitive neuro-
science has grown by leaps and bounds. Technological advances make
new kinds of measurement of brain activity and human performance
possible, and these new technologies—together with our growing
understanding of the brain networks underlying memory, attention and
decision-making—are leading to new possibilities for monitoring and
enhancing behaviour in real-world environments. Although questions
regarding the desirability of monitoring and enhancing behaviour will
need to continue to be addressed, it seems clear that neuroscience can
be profitably applied in work contexts and that the opportunities to
do so are increasing.
2
Cognitive Neuroergonomics of Perception
Jacob Jolij, Addie Johnson and Robert W. Proctor

Perception is the process of transforming sensory input into internal
representations to guide cognition and action. Understanding this
process is vital for designers of information systems and interfaces. After
all, to understand what people do with information, it is necessary to
know what information is available to them perceptually. Traditionally,
perception, cognition and action have been treated as fairly independ-
ent processes in which the perceptual systems—located in the sensory
cortices of the occipital (vision), temporal (vision and audition) and
parietal (vision and somatosensation) lobes—carry out their respective
tasks and pass information to higher, cognitive areas located in the
frontal lobe. That way of conceptualizing perception, however, has been
challenged by developments in the cognitive neuroscience of percep-
tion, which suggest that lower-level processes in the perceptual systems
receive feedback from higher-level processes. Indeed, some have gone
so far as to propose that what we think, know and feel may change the
way we perceive the world by directly altering our perceptual processing
(e.g. Stefanucci et al., 2011). Moreover, perceptual processing is highly
dynamic: perceptual learning occurs on a continuous basis, and the
different senses show crosstalk in which vision informs audition, touch
informs vision and so forth (e.g. Grahn et al., 2011).
Neuroimaging technologies such as functional magnetic resonance
imaging (fMRI) and new algorithms to analyse electroencephalography
(EEG) data on a single-trial level allow us to view the workings of the
perceptual systems with ever-increasing accuracy. Using fMRI-decoding,
a technique for deriving mental states from looking at brain activity, for
example, it is possible to deduce what a person is viewing, thus allowing
the researcher to read the ‘mind’s eye’ (Miyawaki et al., 2008; Tong &
Pratte, 2012). Novel analysis methods are now being employed to achieve
similar things using EEG, with some degree of success (Bobrov et al.,
2011). The possibility of determining in such a direct manner what people
are processing can be expected to open up new possibilities for their
interactions with computers and other machines.
Advances in the neuroscience of perception are likely to be espe-
cially relevant in the area of human–computer interaction (HCI)
because, at the same time that theoretical and technical developments
in cognitive neuroscience further our understanding of perception,
HCI is undergoing a revolution. The past few years have seen a shift
from the commonly-used Windows–Icons–Mouse–Pointer (WIMP)
interface on desktop computers towards mobile computing devices,
such as smartphones and tablet computers, which feature touch-based
interfaces. In fact, it is predicted that in the near future the majority
of all consumer computing will be mobile computing using touch-
screen interfaces (see, e.g., Wong, 2007). Touchscreen interfaces, in
which the user touches or drags icons on the screen, can be argued
to afford a more direct way of manipulating information than WIMP
interfaces, which rely on use of a keyboard or mouse that is separated
physically from the screen. This difference may have far-reaching con-
sequences for information and interaction design, in part because of
an apparent distinction in the human visual system between ‘vision
for perception’ and ‘vision for action’. As introduced in Chapter 1,
there is much evidence that the pathway in the brain that processes
visual information for the benefit of recognizing objects is distinct from
the pathway for processing spatial information that supports touch-
ing and manipulating objects (Milner & Goodale, 2006). The WIMP
interface has seemingly been optimized for the vision-for-perception
pathway. Touchscreen-based devices, however, are arguably more
action-centred and potentially allow for more direct manipulation
using the vision-for-action pathway. Given that the two pathways have
different underlying neural mechanisms and characteristics, interface
design may need to be rethought to capitalize on the characteristics of
what is likely to be the dominant processing stream.
Other technological developments that present opportunities and
challenges for the perceptual researcher are those surrounding virtual
reality and augmented reality (in which a computer-generated virtual
image is overlapped with the natural world). Computer technology
now allows us to build virtual environments and act in these environ-
ments. Gaming is a major area of application of these technologies,
and so-called ‘serious gaming’ (virtual reality simulations, such as flight
simulators, construction equipment simulators, and combat and control
room simulations made for training purposes) is becoming increasingly
sophisticated and widely used (Dunston et al., 2012; Vincenzi et al.,
2009). Important questions remain regarding how the brain reacts
to such artificial environments and why some people are better than
others at becoming immersed in virtual realities. Recent neuroimag-
ing studies showing that there are specific brain regions for perceiving
oneself give insight into how the feeling of immersion in virtual
environments might occur. As discussed later in this chapter, these
brain regions are dynamic, so that even perception of the spatial limits
of one’s own body can be modified (Ganesh et al., 2012; Lenggenhager
et al., 2007).
The flexibility of perception underlies the success of sensory substitu-
tion. Sensory processing is multimodal; this fact can be capitalized on to
enhance perception. In blind and deaf persons, brain areas that are
ordinarily associated with the deprived visual or auditory sense are
sometimes engaged by the remaining sensory modalities (Merabet &
Pascual-Leone, 2010). Even in healthy participants, the visual cortex
will process nonvisual information if there is no visual input for a
prolonged period of time. Merabet et al. (2008), for example, demon-
strated this by showing that when participants are blindfolded for as
little as two weeks, the visual cortex responds increasingly to auditory
and tactile information. This activity has been shown to result in an
improved ability to recognize objects by their sound, and to read Braille.
Other evidence for multimodal processing comes from research on The
vOICe (Meijer, 1996; Proulx & Harder, 2008), a sensory substitution
device for the blind. As described in a later section of this chapter, The
vOICe transforms a visual image into a soundscape. Some users of
The vOICe report having visual sensations after prolonged use, suggest-
ing that capitalizing on the brain’s flexibility with such devices may, in
some cases, provide a safe and inexpensive alternative to neurosurgical
implants (Ward & Meijer, 2010).
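The core of such an image-to-sound transformation can be made concrete with a short sketch. The code below (plain Python/NumPy; the function name, sample rate and frequency range are illustrative choices, not The vOICe's actual parameters) follows the general mapping described for the device: the image is scanned column by column from left to right, each row is given a sine tone whose frequency rises with height in the image, and pixel brightness sets the tone's amplitude.

```python
import numpy as np

SAMPLE_RATE = 8000  # Hz; a modest rate keeps the example small

def image_to_soundscape(image, duration=1.0, f_lo=200.0, f_hi=2000.0):
    """Map a 2-D grayscale image (rows x cols, values in 0..1) to a mono
    waveform: columns are scanned left to right, row position sets the
    sine frequency (top row = highest), brightness sets amplitude."""
    rows, cols = image.shape
    freqs = np.geomspace(f_hi, f_lo, rows)        # one tone per row
    samples_per_col = int(SAMPLE_RATE * duration / cols)
    t = np.arange(samples_per_col) / SAMPLE_RATE
    chunks = []
    for c in range(cols):
        # Sum of row tones, each weighted by its pixel brightness.
        tones = image[:, c, None] * np.sin(2 * np.pi * freqs[:, None] * t)
        chunks.append(tones.sum(axis=0))
    wave = np.concatenate(chunks)
    peak = np.abs(wave).max()
    return wave / peak if peak > 0 else wave      # normalize to [-1, 1]

# A toy 4 x 8 'image': a single bright diagonal sweeping down-right,
# which should be heard as a descending pitch across the scan.
img = np.eye(4, 8)
wave = image_to_soundscape(img)
print(wave.shape)  # one second of audio: (8000,)
```

Played back at the chosen sample rate, such a waveform also gives a first impression of why users must learn the mapping: the soundscape is systematic but far from naturalistic.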
The emerging view of perception as a highly active and dynamic proc-
ess is only gradually affecting the field of interface design, which tends
to be driven more by technology than by an understanding of the user.
One of the main challenges of neuroergonomics in the coming decade will
be to keep up with developments in both cognitive neuroscience and
computer technology in order to achieve optimal integration of the
two in designing new user interfaces. A major goal of this chapter is to
present the basics of the neuroscience of perception in such a way as to
illustrate how a better understanding of the perceptual systems may aid
in improving user interface design.
Visual processing

Of all the senses, vision is the best understood. Neuroscientists have
been measuring responses to visual stimuli in different brain areas
since the 1950s (Barlow, 2004) and have built detailed models of how
the brain processes visual information. More than 30 distinct cortical
and subcortical areas involved in visual processing have been identified
(van Essen et al., 1992). These areas form a complex hierarchical and
parallel system of filters that decomposes an image into its
constituent features, such as colour, brightness and orientation, and,
influenced by memory, expectancy and even emotion (Schupp et al.,
2003), recomposes this image into a representation that is available for
the cognitive systems of the brain.
Sensory processing of visual information starts when light energy,
entering through the pupil, is focused onto the retina, the thin layer
of photosensitive cells (rods and cones) at the back of the eye. Whereas
the cones are specialized for highly detailed colour vision under
daylight viewing conditions, rods are specialized for high sensitivity
to light energy under night-time viewing conditions. Most cones are
concentrated around the fovea, a spot right behind the pupil. The dense
concentration of cones is, in part, responsible for our being able to
see, in great detail, the part of a visual image on which we fixate. In
the periphery of the visual field there are fewer cones, but many rods.
Owing to more convergence of information from individual receptors
in early visual processing, visual information from the periphery is less
detailed than foveally-presented information.
The photoreceptors are connected to several layers of neurons in the
retina, with the fibres of the last layer, the ganglion cells, making up the
optic nerve. Most of the fibres go to the lateral geniculate nucleus (LGN),
a part of the thalamus, which projects information to the primary visual
cortex (area V1) via an extensive web of neural fibres, called the optic
radiation. A basic feature of the visual system is that every neuron, from
the retina to the higher visual areas, has its own receptive field, that is a
region of the visual world to which it responds. These receptive fields
are relatively small for neurons in the retina, but increase in size higher
up the hierarchy. Whereas cells in the retina and early visual areas
respond to only a small part of the world, cells at the top of the hier-
archy can respond to stimuli anywhere in the visual field.
Beginning with the first neurons to which the receptors connect,
a distinction between parvocellular (P) and magnocellular (M) pathways
is evident. Colour information from the cones is transmitted by P cells
to the visual cortex, whereas brightness information from the rods is
transmitted by M cells. P-cells respond slowly, but are sensitive to detail
(i.e. have small receptive fields). The M-cells, however, respond quickly
and are sensitive to small differences in illumination, but they lack
the spatial detail of their P-cell counterparts (i.e. have larger receptive
fields). The LGN contains four layers devoted to P-cells and two devoted
to M-cells, with each layer providing a retinotopic map of the opposite
visual field (Frishman, 2005). The receptive fields of LGN cells are circu-
lar and serve mainly to code differences in intensity.
V1 also contains a retinotopic map, and it is composed of nine layers.
At V1, most cells have elongated receptive fields, meaning that the
preferred stimulus is a bar whose orientation corresponds to that of
the long axis of the receptive field. Simple cells respond only when the
oriented bar is at a specific location on the retina, whereas complex cells
respond to the particular orientation regardless of its exact position
within their receptive field. Some cells show directional and velocity sensitivity, and
many show binocular interaction. Finally, the cells in V1 are organized
in columns such that, moving perpendicularly inward from the cortical
surface, the preferred orientation is similar for all cells in the column.
Kamitani and Tong (2005) noted that the orientation columns in V1 are
much more finely spaced (at submillimetre scales) than the resolution
that can be obtained with current fMRI techniques (at millimetre scales).
However, using fMRI decoding, which examines patterns of activity
among multiple voxels (volumetric pixels, or basic units, in imaging),
they were able to obtain accurate predictions about which of several
orientation gratings a person is viewing at a given moment on the basis
of patterns of cerebral activity. Several possible mechanisms underlying
this decoding of stimulus orientation have been proposed and are the
target of ongoing research (e.g. Chaimow et al., 2011).
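The logic of fMRI decoding can be illustrated with simulated data: a 'template' activity pattern across voxels is estimated for each stimulus, and a new trial is assigned to the template it correlates with most. The sketch below is a toy, NumPy-only illustration of correlation-based pattern classification (all voxel data are invented for the example); it is not the multivariate decoder Kamitani and Tong actually used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: each of two orientation gratings evokes a characteristic
# (but noisy) activity pattern across 50 voxels. Real fMRI data would
# replace these invented patterns.
n_voxels, n_train = 50, 40
true_patterns = rng.normal(size=(2, n_voxels))   # one pattern per grating

def simulate_trials(label, n):
    """Noisy copies of the pattern evoked by one grating."""
    return true_patterns[label] + rng.normal(scale=1.0, size=(n, n_voxels))

# 'Training': the mean pattern per grating serves as its template.
templates = np.stack([simulate_trials(lab, n_train).mean(axis=0)
                      for lab in (0, 1)])

def decode(trial):
    """Assign a trial to the template it correlates with most."""
    r = [np.corrcoef(trial, tpl)[0, 1] for tpl in templates]
    return int(np.argmax(r))

test_trials = simulate_trials(1, 10)             # held-out trials of grating 1
predictions = [decode(t) for t in test_trials]
accuracy = float(np.mean(np.array(predictions) == 1))
print(f"decoding accuracy on held-out trials: {accuracy:.2f}")
```

The same template-matching logic, applied to single-trial EEG features rather than voxel patterns, underlies the EEG decoding efforts mentioned earlier in the chapter.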
The distinction between M and P pathways is the first evidence
for a dual-route architecture of the visual system (Milner & Goodale,
2006; Mishkin et al., 1983). This division seems to be maintained beyond V1, with
a division between ventral and dorsal pathways. The ventral stream
receives its input primarily from parvocellular cells and leads from the
primary visual cortex via extrastriate areas to the object-sensitive areas
in the temporal cortex. It is often called the what pathway because it
comprises visual areas that are involved in object recognition and is
believed to allow us to recognize what we see. The dorsal stream receives
its input primarily from magnocellular cells, and leads from the primary
cortex to the parietal cortex. This pathway is often referred to as the
where (or how) pathway (Milner & Goodale, 2006) and allows us to locate
objects in space relative to ourselves so that we can adjust movements
in response to visual input, such as when catching a ball or picking up
a coin.
Evidence suggests that the what and where pathways operate in paral-
lel, and, to some extent, independently (Westwood & Goodale, 2011;
but see Schenk et al., 2011 for an opposing view). For example, patient
studies show that a lesion in the what pathway impairs the ability
to recognize objects. However, such patients may be perfectly able to
perform actions (e.g. picking up, pointing, rotating) with or on objects
they cannot recognize (Milner & Goodale, 2006). In normal observers,
a similar discrepancy is observed in visual size illusions, such as the
Ebbinghaus illusion. In this illusion, a circle or cylinder surrounded
by smaller circles or cylinders appears to be larger in diameter than an
equally sized cylinder surrounded by larger cylinders. Although recent
neuroimaging evidence shows that such size illusions do, indeed, dis-
tort the cortical representations of affected objects in the visual areas
(Schwarzkopf et al., 2011), when observers are asked to pick up such
size-distorted objects, the aperture between thumb and middle fin-
ger is not affected by the illusion (Goodale & Milner, 1992). In other
words, even though observers consciously perceive such a cylinder to
be smaller or larger than it actually is, the movements required to pick
it up remain unaffected.
How information from the what and where pathways is integrated
is not well understood and remains a topic of study (e.g. Rossetti &
Revonsuo, 2000). For example, Arbib (2011, p. 12) developed a model
for the grounding of language in action, of which he says:

We stress the cooperation between the dorsal and ventral streams
in praxis [action] and language. Both have perceptual and motor
schemas but the perceptual schemas in the dorsal path are affordances
linked to specific motor schemas for detailed motor control, whereas
the ventral path supports planning and decision making.

Exactly how this cooperation of the streams is worked out will require
much more research to be understood in detail.
As mentioned in the introduction to this chapter, the distinction
between vision for perception and vision for action is becoming more
relevant to interface design as computing with touchscreens becomes
more customary. Studies on interaction design for touchscreen inter-
faces have tended to focus on physical aspects of the displays, such
as the optimal size for icons (Benko et al., 2006; Forlines et al., 2007).
The relative independence of the visual systems for perception and
action suggests that more attention should be focused on the major
difference between the WIMP interface and the touchscreen interface:
the ‘directness’ of the manipulation. It is possible that interacting with
a touchscreen will turn out to be more dependent on visual process-
ing in the where pathway than is the WIMP interface, and this may
have unforeseen consequences if not taken into account. For example,
because information processed in the dorsal stream does not necessarily
reach conscious awareness (Goodale & Milner, 1998), using a touch-
screen may give rise to a qualitatively different user experience than
using a WIMP interface. Implications of the dual-route architecture of
vision thus deserve more attention from researchers in ergonomics
and human factors.

Top-down and bottom-up processing in perception

As stated earlier in the chapter, the visual system decomposes a visual
image into its constituent features, to later reassemble it. Apart from
feedforward processing—in which information is transmitted from
lower brain areas to higher ones—there is also a feedback, or recurrent,
stream carrying information in the opposite direction (Lamme &
Roelfsema, 2000; Sillito et al., 2006). Vision scientists have studied
the contribution of this feedback stream to visual processing since the
1990s. Both physiological and modelling studies have shown that feed-
back is critical for the success of a basic visual process known as scene
segmentation. Scene segmentation is the parsing of a visual scene into
its constituent parts, such as different objects or surfaces. An important
step in this process is figure-ground segregation, that is, determining
which parts of a visual scene belong to objects (e.g. icons on a Windows
desktop), and which parts belong to the background (the desktop itself;
Lamme, 1995).
Studies in monkeys have shown that cells in the early visual cortex
do not discriminate between information that is part of an object versus
a background in the first 100 ms of information processing. However,
after this period, cells in the early visual cortex do discriminate between
objects and background, firing at a faster rate if the information in
their receptive fields belongs to an object than if it belongs to the back-
ground (Lamme et al., 1993). This finding is remarkable because the
receptive fields of early visual cortex neurons are typically too small for
the neuron to determine whether the information within the receptive
field belongs to an object or the background. The increase in firing rate
as a result of visual context outside the receptive field of a neuron seen
after about 100 ms is called contextual modulation (Zipser et al., 1996).
Feedback (or recurrent) processing has been shown to play a critical role
in contextual modulation. For example, selective lesioning of higher
visual areas interrupts feedback and also abolishes contextual modula-
tion (Hupé et al., 1998; Supèr & Lamme, 2007).
The role of feedback is believed to be one of allowing for better
processing of spatial detail. Neurons in the inferotemporal cortex, for
example, respond to faces after approximately 220 ms. However, at
that stage they respond to any face. It is only about 50 ms later that
the neural response becomes more specific, such that discrimination
between individual faces is possible (Sugase et al., 1999). A modelling
study (Jehee et al., 2007) suggests that this increased specificity is the
result of feedback processing. Although the initial, feedforward stream
of information does not yet contain information about figure-ground
organization, it does contain information about discontinuities in
the visual field, for example, where two colours meet. It appears that
higher-order neurons use this information to perform border detection,
and thus build a rough representation of objects. Higher-level neurons
subsequently send feedback to the lower-order neurons within their
receptive fields to increase their responses if the receptive field of the
lower-order neuron is on an object. In this way, all lower-level neurons
whose receptive fields are on an object increase their firing rate, which,
in turn, changes the activity and selectivity of neurons in higher areas.
This idea of increasing spatial detail by means of feedback processing
has been dubbed the reverse hierarchy theory (Hochstein & Ahissar, 2002).
According to this theory, visual perception is a two-stage process in
which information is analysed ‘at a glance’ during the first pass through
the visual hierarchy. Such ‘vision at a glance’ is sufficient for quickly
scanning a visual scene or display. It does not, however, allow for
precise discriminations; for these, one needs ‘vision with scrutiny’, a mode
of vision in which feedback processing adds spatial detail by recruiting
cells in lower visual areas with smaller receptive fields.
The reverse hierarchy theory makes the assumption that feedforward
processing is sufficient to initiate voluntary action. However, a recent
study suggests that this may not be the case. In texture-segmentation tasks,
reaction times to texture stimuli correlate strongly with the latency of feed-
back processing, but not with the latency of feedforward processing. This
suggests that overt reactions, such as button presses or verbal responses,
are guided by feedback processing. Unconscious, reflexive movements
towards a stimulus that do not require a spatial transformation, such as
quick saccades or pointing movements, can, however, be initiated on the
basis of feedforward processing (Jolij et al., 2011).
The finding that some movements can occur without feedback
processing may have implications for interaction design. Interaction
with WIMP interfaces often requires the pressing of a key or the click of
a mouse. Some such actions require a spatial transformation and, there-
fore, require feedback processing. A touchscreen, which allows for direct
manipulation of information, may allow for direct initiation of action
based on feedforward processing. Although this may suggest an advan-
tage in terms of user performance for touchscreens, using feedforward
processing may come at a cost. Neuroimaging studies suggest that the
brain cannot inhibit distracting information if it is processed only at a
glance (Jolij, 2008; Jolij & Lamme, 2005; Tsushima et al., 2006). In some
cases, then, distracting information may produce stronger interference
with user actions when present on a touchscreen than when present in
a WIMP display.
Although interaction with a WIMP interface may not be as direct as
interaction with a touchscreen, time required for spatial transforma-
tion can be minimized by maintaining spatial compatibility (Proctor &
Vu, 2006). A computer mouse and other continuous control devices
typically have spatial correspondence with the cursor that they control.
This relation between the mouse movements and the cursor movements
is highly compatible, supporting rapid positioning of the cursor.
How to maintain spatial compatibility is more of an issue with
WIMP interfaces than with touchscreens. On touchscreens, it is natural
that people swipe/scroll their fingers in the direction in which they
want the content to move. However, on WIMP interfaces, whether this
mapping is best is not so obvious. The most recent operating systems
for Macintosh computers adopt the ‘move content in the direction of
finger movement’ mapping as the default scrolling option, consistent
with touchscreens, but the default mapping for prior Macintosh and
Windows operating systems is ‘move content in the opposite direction
of finger movement’. To evaluate which mapping is more compatible for
a WIMP interface, Chen and Proctor (2012) performed a study in which
participants used the up and down arrow keys for scrolling. An array
of four digits, at the corners of an imaginary diamond shape, was pre-
sented, with the topmost or lowermost digit partially off the screen so
that it could not be identified. The task was to bring that digit into full
view as quickly as possible in order to make a subsequent nonspeeded
odd–even parity judgement. Scrolling responses were faster and more
accurate when the movement direction of the window content was
consistent with the direction of the control movement (the arrow keys)
than when it was inconsistent. In this task, at least, the best mapping
for the WIMP interface seemed to be the same as that for the natural
mapping of a swiping action on a touchscreen interface.
Feedback processing may allow for more than increasing spatial
detail, or ‘vision with scrutiny’. The abundance of connections from
higher brain areas to lower sensory areas also seems to allow for modu-
lation of sensory processing by these higher brain areas. Attention is an
obvious example of a modulatory process. So-called top-down attention
(e.g. watching out for the onset of a red light) is known to modulate
activity in perceptual areas (Desimone & Duncan, 1995; Lamme &
Roelfsema, 2000; see Chapters 1, 3 and 4). Another example of top-down
processing is context-based expectancy. Context is known to modulate
our ability to recognize objects. For example, an object in context, such
as a cow in a meadow, is recognized faster than an object out of context
(a cow in a living room; see Bar, 2003; but see Hollingworth & Henderson,
1998 for alternative explanations of this effect). Bar showed that this
ability to use context is mediated by a network consisting of areas in the
prefrontal cortex, the retrosplenial cortex (an area near the hippocampi that
seems to be involved in object memory) and the visual areas. Context
information modulates the way early visual areas respond to ambiguous
stimuli by biasing interpretations that are consistent with a specific con-
text. Because of this context-based expectancy, providing context allows
for more efficient perceptual processing in the visual system for stimuli
(or design elements) that fit into the provided context, but items that
do not fit into the context may incur a processing penalty.
In summary, perceptual processing is not simply a matter of feeding
information forward throughout the visual system. Instead, widespread
feedback connections allow higher-level information, including expect-
ancy and semantics, to influence lower-level perceptual processes.

Learning to see

Over the course of a lifetime our ability to perceive the environment
improves by processes of perceptual learning (Fahle & Poggio, 2002),
although with advanced age deficits in sensation and perception
also occur (e.g. Faubert, 2002). Some perceptual learning takes place
during infancy and childhood during so-called critical periods. If no
appropriate perceptual input is received during such a period, normal
development of the sensory system is disrupted. A common example of
this is amblyopia, a developmental deficit of the neurons in the early
visual cortex caused by abnormal visual input during a critical stage,
such as that caused by childhood strabismus (a ‘crossed eye’; Holmes &
Clarke, 2006). A consequence of strabismus is that the images falling on
the two retinas are misaligned when they reach the visual cortex. This misalignment
results in blurry, or even double, vision, thus impairing visual function.
To compensate for this impairment, the brain attenuates neural signals
coming from the strabismic eye. Although neural attenuation resolves
the problem of double vision, strabismus in early childhood, if left
untreated, results in under-stimulation and, thus, impaired develop-
ment of the neurons that are driven by the strabismic eye. That the
vision deficits associated with amblyopia are neural in nature has been
confirmed with voxel-based analysis of MRI brain images (Mendola
et al., 2005) and single-cell measurements that demonstrate that neurons
driven by amblyopic eyes have abnormal neural responses (Roelfsema
et al., 1994).
Amblyopia is a clear illustration of the fact that perception is not
a built-in feature of the central nervous system, but that it needs to
be learned. Even the processing of facial expressions, which was long
believed to be an innate ability, appears to be learned to some degree.
There are marked differences between cultures in the processing of
facial expressions, which hints at a learned component in emotion
recognition ( Jack et al., 2012). Perceptual learning can also lead to the
development of specific visual skills depending on the environment and
task demands. For example, a plane spotter or bird enthusiast learns
to identify a type of plane or a species of bird without apparent effort
(e.g. Gagné & Gibson, 1947). Another example of perceptual learning
is fingerprint reading. Interpreting fingerprints is a learned skill that
requires a great amount of practice. Busey and Vanderkolk (2005)
provided evidence that expert fingerprint readers use different brain
processes than novices. In contrast to novices, fingerprint experts show
an enhanced N170, a visual evoked potential component associated
with configural processing. Apparently, expert fingerprint examiners
have learned to process the complex pattern of ridges that makes up a
fingerprint as an integrated whole, and this expertise can be assessed by
measuring brain activity.
Improvements in visual performance as a function of experience or
training can be long-lasting. In a visual detection task, for example,
improvements lasting up to a year have been reported after intensive
training (Karni & Sagi, 1991). These improvements are associated with
faster and more pronounced brain responses to target stimuli in the
early visual areas (Casco et al., 2003; Karni & Sagi, 1991). Learning can
occur even without effort—and without attending to the stimuli to be
learned. In a random dot motion task in which the observer is to judge
the direction of movement of a coherent patch of moving dots present
in an array of randomly moving dots, for example, observers can, with
training, determine the direction of motion, even if the percentage of
coherently moving dots is as low as 5% (Watanabe et al., 2001). Such
improvements in motion discrimination also occur without attention:
if the dot displays are used as a background while participants are per-
forming another task, learning still occurs. Visual stimuli may not even
need to be present physically in order for learning to occur: simply
imagining performing a visual task can result in performance improve-
ments (Tartaglia et al., 2009).
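The stimulus logic of such a random-dot task is easy to make explicit. In the sketch below (NumPy; parameter values are chosen for illustration and not taken from Watanabe et al.), a 5% 'coherence' fraction of the dots shares a common direction while the rest move randomly; judging that common direction is what observers learn.

```python
import numpy as np

rng = np.random.default_rng(1)

def dot_directions(n_dots, coherence, signal_direction):
    """Direction (radians) for each dot: a `coherence` fraction move in
    `signal_direction`; the remainder move in random directions."""
    n_signal = int(round(n_dots * coherence))
    directions = rng.uniform(0.0, 2 * np.pi, size=n_dots)
    directions[:n_signal] = signal_direction
    return directions

def step(positions, directions, speed=1.0):
    """Advance every dot one frame along its own direction."""
    dx = speed * np.cos(directions)
    dy = speed * np.sin(directions)
    return positions + np.stack([dx, dy], axis=1)

n = 200
dirs = dot_directions(n, coherence=0.05, signal_direction=0.0)  # rightward
pos = rng.uniform(0, 100, size=(n, 2))          # dots in a 100 x 100 field
new_pos = step(pos, dirs)
print(f"{int(np.sum(dirs == 0.0))} of {n} dots carry the motion signal")
```

In actual experiments the signal dots are typically re-chosen on every frame so that no single dot can be tracked; the sketch fixes them once for clarity.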
It may be possible to improve learning strategies or even to directly
induce learning by means of neurostimulation. In fact, it has been
suggested that brain stimulation by means of transcranial magnetic
stimulation (TMS; see Chapter 1) may increase neural plasticity in early
visual cortex. As described earlier in this chapter, amblyopia is character-
ized by visual deficits that are the result of abnormal neural processing.
It has been shown that repetitive stimulation of the early visual cortex
can lead to improvement in visual contrast discrimination in amblyopic
participants, even without extensive training (Thompson et al., 2008).
This improvement suggests that TMS induces the plasticity needed to
alter neural functioning.
An impressive example of neurofeedback-induced perceptual learning
is provided by a recent demonstration of performance improvements in
an orientation discrimination task (Shibata et al., 2011). In this experi-
ment, participants first viewed Gabor patches (circular grating patterns
with blurred edges) of three slightly different orientations while brain
activity in the early visual cortex was recorded using fMRI. This record
of brain activity was used to create so-called ‘templates’ of brain activity
in response to each of the three orientations. In the second stage of the
experiment, the participants were given neurofeedback training to learn
to modulate their brain activity to resemble one of the three templates.
Visual feedback indicated the extent to which their brain activity
resembled the target template, but observers were not told which tem-
plate was being used as a referent. Even though the participants did
not know which stimulus they had learned to discriminate on the basis
of the neurofeedback, in a subsequent test session they recognized the
orientation that corresponded to the template used during training
better than the two untrained orientations, thus demonstrating the
feasibility of neurofeedback-induced perceptual learning.
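The closed loop in such decoded neurofeedback can be reduced to a schematic: the feedback shown to the participant is simply the similarity between the current activity pattern and the stored target template. The sketch below (NumPy only; all patterns are simulated, and the rescaling of correlation to a 0–1 feedback value is an illustrative choice, not the authors' pipeline) captures the idea.

```python
import numpy as np

rng = np.random.default_rng(2)

n_voxels = 100
# Stored 'template': the activity pattern recorded earlier while the
# participant viewed the target orientation (simulated here).
template = rng.normal(size=n_voxels)

def feedback_score(current_activity, template):
    """Similarity of the current pattern to the target template, rescaled
    from a correlation in [-1, 1] to a feedback value in [0, 1]."""
    r = np.corrcoef(current_activity, template)[0, 1]
    return (r + 1.0) / 2.0

# If training moves activity toward the template (modelled here as
# shrinking noise), the feedback the participant sees should rise.
noise_levels = [4.0, 2.0, 1.0, 0.25]
scores = [feedback_score(template + rng.normal(scale=s, size=n_voxels),
                         template)
          for s in noise_levels]
print([round(s, 2) for s in scores])
```

The participant never needs to know what the template encodes; only this scalar score is fed back, which is why learning in the Shibata et al. study could proceed without awareness of the trained orientation.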
A fascinating implication of the work on neurofeedback-based
learning is that it may be possible to use brain activity recorded from
experts on a specific task as the target brain activity in a neurofeed-
back setting. However, the intriguing possibility that a visual skill can
be acquired by learning to modulate brain activity in such a way that
it resembles expert performance—rather than by training on the task
itself—remains to be tested.

Auditory perception and sonification

Auditory signals form an important source of information in many
environments. Much communication depends on spoken language, but
speech is not the only auditory signal to carry information. Auditory
displays capitalize on the omnidirectionality of auditory perception and
the alerting effects of sound to present warning or other time critical
information. Processing of auditory sensory information begins when
a sound wave, created by mechanical disturbance and transmitted
through the air, enters through the outer ear and causes movements of
the tympanic membrane, or eardrum. Bones connected to the eardrum
convert its movements into waves in the fluid of the inner ear, which
cause movements of the basilar membrane, which runs the length of
the inner ear. These movements bend the cilia of hair cells on the basilar
membrane, which starts a neural signal in the auditory nerve.
Tone frequency is coded by place of stimulation. Differences in
stiffness across the length of the basilar membrane cause the place of
maximum displacement (and, hence, stimulation) to vary systemati-
cally as a function of the frequency of the sound wave. In the auditory
nerve, this is reflected in each neuron's characteristic frequency: the
frequency to which it responds most strongly, determined by the location
on the basilar membrane from which the neuron receives its input. From the auditory nerve,
the auditory pathway proceeds through many neural centres, all of
which show tonotopic organization, with the last two being the medial
geniculate nucleus in the thalamus and the primary auditory cortex in
the temporal lobe (Moore, 2005). Many of the cortical neurons respond
to complex features of auditory stimuli, such as the direction (upward
or downward) in which the frequency of a tone changes.
As is the case for the visual system, evidence suggests that functionally distinct 'what' and 'where' pathways exist for the auditory system
(Leavitt et al., 2011). From the primary auditory cortex, auditory sensory
information travels by way of a dorsal pathway to the parietal lobe for
analysis of spatial information. A ventral pathway projects to medial
Cognitive Neuroergonomics of Perception 39

and inferior temporal cortex for processing of features more closely linked to object recognition, such as the spectral content of the signal.
Thus, analysis of spatial information in the parietal lobe distinct from
processing of object recognition information in the temporal lobe
seems to be a general property of brain organization.
Increasingly, designers are using ‘earcons’ or auditory icons to convey
both simple and complex information (Hermann et al., 2011). Earcons
are brief, distinctive sounds—typically short, structured musical phrases—
used to signal specific events or to convey information (McGookin &
Brewster, 2011). Common examples of earcons are the auditory alerts
used by some electronic mail programs or the sound an operating system
makes when a computer is started or shut down. Auditory icons have
similar purposes, but are based on natural sounds (Brazil & Fernström,
2011). Like visual icons, auditory icons capitalize on relations to everyday
events to enhance interaction with a system. In other words, auditory
icons rely on analogy. For example, the act of deleting a mail message
might be indicated with a sound like the crumpling of a piece of paper.
When tasks are complex, finding understandable, distinguishable
auditory icons or earcons can be a challenge (see Table 2.1 for guidelines
for creating auditory icons). For example, in a hospital environment,
using the sound of a beating heart to convey heart rate seems a logical
choice, as does the use of a breathing sound for respiratory rate. But
what do changes in blood pressure sound like? Fitch and Kramer (1994)
showed that changing the pitch of the heart sound could effectively
convey information about systolic pressure. Even when more or less
arbitrary parameters are used to convey information, it can still be
important to take details of auditory and crossmodal perception into
account (Stanton & Edworthy, 1998; Walker, 2002). In many cases, some
pairings of concepts and sounds will be more natural and readily under-
stood than others. For example, rising pressure seems to correspond to
rising pitch. This relation may be mediated by estimations of the mag-
nitude of the dimensions of pressure and pitch (Krantz, 1972), or there
may be a conceptual relation between the dimensions (e.g. Melara &
Marks, 1990). Thus, although earcons may be chosen arbitrarily (such
as a trademark sound that identifies a certain operating system), con-
sideration of the relations between stimulus and display dimensions
can result in the design of better auditory stimuli for conveying certain
types of information. For example, if one is using rising or falling tones
to convey position information, it is worth knowing that the overall
pitch of the signal will affect perception of pitch change, and vice versa
(Walker & Ehrenstein, 2000).
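The design logic above — choosing display dimensions whose variation maps naturally onto the data — can be sketched as a minimal parameter-mapping sonification, here pairing rising systolic pressure with rising pitch, as in the heart-sound example. The pressure and frequency ranges, and the function name, are invented for illustration; they are not clinical or published values.

```python
def pressure_to_pitch(systolic_mmhg, p_lo=80.0, p_hi=180.0,
                      f_lo=220.0, f_hi=880.0):
    """Map systolic blood pressure to a tone frequency (Hz) so that
    rising pressure is heard as rising pitch. All ranges here are
    illustrative assumptions, not recommendations."""
    # Clamp to the display range, then interpolate linearly.
    p = min(max(systolic_mmhg, p_lo), p_hi)
    fraction = (p - p_lo) / (p_hi - p_lo)
    return f_lo + fraction * (f_hi - f_lo)
```

A natural refinement, given the crossmodal findings discussed above, would be to test whether listeners judge this particular pressure-to-pitch polarity as the intuitive one before fixing the mapping.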

Table 2.1 Guidelines for the design of auditory icons

1 Use short sounds with a wide bandwidth
2 Ensure that the sounds reflect the variety and meaning needed for the design problem
3 Allow observers to describe their reactions to the sounds being considered
4 Evaluate the learnability of auditory cues that are not identified immediately
5 Test possible conceptual mappings between auditory cues and display dimensions
6 Evaluate auditory icons in sets to identify potential problems with masking, discriminability and conflicting mappings
7 Test the usability of interfaces using the auditory icons

Adapted with permission from Mynatt, E. D. (1994). Designing with auditory icons. In: G. Kramer & S. Smith (Eds.), Second International Conference on Auditory Display (ICAD ’94) (pp. 109–119). Santa Fe, NM: Santa Fe Institute.

As for vision, providing context for perception in the auditory domain enhances performance, such as by speeding up reaction times
(e.g. Holcomb & Neville, 1990). Apart from performance enhancement,
providing context and thus allowing for efficient top-down processing
increases listener comfort (Başkent, 2012). Top-down modulation of
sensory processing plays an important role in speech perception. Words
in context are perceived more readily than words in isolation (Tulving
et al., 1964), and a missing or masked syllable in a spoken sentence is
typically not noticed. Rather, the missing information is filled in on the basis of semantic context (Warren, 1970), and the speech is perceived as
being continuous. Limiting top-down context also makes speech com-
prehension more effortful, even when performance itself is not affected
(Başkent, 2012).

Touch and the display of haptic information

Aside from hearing and vision, the modality that offers the most prom-
ise in neuroergonomic applications is touch. Although not as exact in
conveying spatial information as is vision, or as sensitive to temporal
information as audition, somesthesis has been argued to have the
most balanced emphasis on spatial and temporal information (Hollins,
2010). Two types of afferent neurons support touch perception: slowly
adapting type 1 (SA1) afferents and rapidly adapting (RA) afferents.
The modulation of touch-related neural activity as textures are moved
across receptive fields is greater for SA1 afferents than for RA afferents,

consistent with the view that SA1 afferents are the most important
conveyors of spatial information (Bensmaïa et al., 2006a). The RA
afferents, however, are more sensitive to vibration. Signals from the two
types of afferents may interfere with each other, as evidenced by the fact that vibrating a stimulus while it is held against the skin reduces the spatial acuity of human observers while having virtually no effect on the neural response to the stimulus (Bensmaïa et al., 2006b). This finding
suggests that relatively blurred signals from the vibration-sensitive RAs
might combine with, and degrade, the precise spatial signals carried
by SA1s. This hypothesis is supported by Bensmaïa et al.’s finding that
‘adapting out’ the RA channel with strong vibration actually improves
spatial acuity. Similarly, the localization of isolated cutaneous taps is
more accurate than the localization of vibratory stimuli, presumably
because the travelling waves produced by vibration (especially at high
frequencies) stimulate Pacinian corpuscles, which are characterized by
relatively large, diffuse receptive fields.
Texture is perceived when mechanoreceptors detect vibrations
formed by lateral stimulus movement (forming the basis for the per-
ceived roughness of fine surfaces; Hollins & Risner, 2000). Pacinian
corpuscles play an important role in the ability to sense textures, as
illustrated by the fact that the ridges of the fingerprints contribute
to the ability to sense fine textures by amplifying the vibration fre-
quencies to which the Pacinian corpuscles are sensitive (Scheibert
et al., 2009). In fact, vibrotactile signals can themselves lead to the
perception of roughness. Hollins et al. (2000) found that surrepti-
tiously vibrating a surface as the finger moved across it increased
the perception of roughness, even when the observer was unaware
of the vibration. Coarse and fine texture information are processed
differently. Whereas desensitizing vibrotactile channels with strong
(100 Hz) vibration does not affect the discriminability of coarse
surfaces (which is thought to depend more on spatial than on vibro-
tactile coding; Bensmaïa & Hollins, 2003), it has been shown to
virtually eliminate discriminability for fine surfaces, which depends
on Pacinian activation (Hollins et al., 2001).
With respect to tactile interfaces—cutaneous devices that can convey
spatial information not available visually—the perception of location
and movement are paramount. It appears that the navel and spine serve
as landmarks when a belt of tactors is worn around the waist (Cholewiak
et al., 2004). Mislocalizations of tactile stimuli depend primarily on
their distance along radii from the centre of the torso (van Erp, 2008).
Localization also suffers when two or more spatially proximate stimuli

are presented in close temporal proximity (e.g. with an interstimulus interval <0.3 s). The distance between two stimuli tends to be underestimated as the interstimulus interval between them decreases, regardless of the site of stimulation (thigh, palm or fingertip; Cholewiak, 1999).
When the spatial separation of the stimuli is smaller than the two-point
threshold (the minimum distance at which two separate sources of stim-
ulation can be distinguished), the perception of the extent of stimulation
depends only on the temporal separation of the two stimuli; only at very
large separations (e.g. 30 cm on the thigh) does temporal separation fail
to influence perceived distance.
The body midline is a distinctive feature of spatiotemporal inter-
action in touch. Take the example of cutaneous saltation (Geldard,
1982). When three taps are administered—the first two (T1 and T2)
at one location and the third (T3) at a different location—stimulation
appears to ‘hop’ from the first stimulated location to an intermediate
(unstimulated) spot and then to the location of the final tap. This
spatiotemporal interaction reflects the attraction of the second tap
toward the location of the third one. Saltation is a compelling illu-
sion and is indistinguishable from a control condition in which the
second tap is delivered to the intermediate location (Eimer et al.,
2005). Taps cannot, however, be induced to cross the midline. Eimer
et al. demonstrated that when T1 and T2 were delivered to one arm
and T3 to the other, T2 hopped along the first arm towards T3, but
did not cross arms.
Vibrotactile displays are increasingly being used to convey information
(e.g. Ferris & Sarter, 2011), cue attention (e.g. Salzer et al., 2011) and
aid spatial navigation (e.g. Kärcher et al., 2012). In a study conducted
in a driving simulator, van Erp and van Veen (2004) showed that a
vibrotactile display consisting of eight vibrating elements (tactors)
mounted in a driver’s seat, which signalled the direction of a course
change by the location of vibration and the distance by rhythm of the
pulses, was capable of reducing driver load during a navigation task
relative to a visual display. Vibrotactile displays may be of particular use
to blind persons. For example, a vibrotactile belt that continually signals
the direction of magnetic north shows promise of aiding blind persons
in way-finding tasks, such as keeping a direction over longer distances or
taking shortcuts in familiar environments (Kärcher et al., 2012). In
short, it can be argued that the tactile modality is currently underused
in ergonomic applications and that it shows great promise in terms
of relieving overload in visual and auditory modalities and providing
spatial insights to its users.
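A seat or belt display of the kind just described reduces to two mappings: bearing to tactor location and distance to pulse rhythm. The sketch below illustrates the idea with an eight-tactor layout; the tactor count matches the study described above, but the interval and distance values are invented for illustration and are not taken from van Erp and van Veen.

```python
def tactor_for_direction(bearing_deg, n_tactors=8):
    """Choose the tactor nearest to a course change's bearing, with
    tactor 0 straight ahead and indices running clockwise."""
    sector = 360.0 / n_tactors
    return int((bearing_deg % 360) / sector + 0.5) % n_tactors

def pulse_interval(distance_m, min_s=0.2, max_s=1.0, far_m=200.0):
    """Pulse faster (shorter inter-pulse interval) as the manoeuvre
    approaches; values here are illustrative assumptions."""
    fraction = min(distance_m, far_m) / far_m
    return min_s + fraction * (max_s - min_s)
```

Keeping the two codes on separate display dimensions (location for direction, rhythm for distance) follows the general advice above about matching display dimensions to the information conveyed.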

Multimodal perception

Perception in the different modalities does not occur independently. Rather, the various modalities in which we sense the world (audition,
vision, olfaction, taste and somatosensation) are integrated into a coher-
ent experience by multisensory (or multimodal) integration (Spence &
Driver, 2004). The literature on multisensory integration shows that
different sensory modalities interact, starting from the earliest stages
of processing. In vision and audition, for example, it has been demon-
strated that direct connections between the auditory and visual cortices
allow for early modulation of visual processing by auditory information
and vice versa (e.g. van der Burg et al., 2011).
Many types of interactions between auditory and visual stimuli have
been observed. For example, visual cues can help to segregate streams
of auditory information, such as two melodies. If two series of tones are
played, one with relatively high pitch and one with lower pitch, listeners
can integrate these two series into one melody or hear them as two sepa-
rate melodies (e.g. a soprano and a bass line). Visual cues can alter the
perception of the two streams of notes, biasing the perceiver to integrate
or keep separate the two streams: if a stimulus light is shown that flickers
at the same frequency as the two auditory streams combined, listeners will
tend to integrate the two streams into one melody, but if the visual cue
is presented in phase with just one of the two streams, listeners will
report hearing a polyphonic piece (Rahne et al., 2007). Audio-visual
integration can also occur at a conceptual level: simply seeing an object
appears to automatically activate a mental representation of the sound
that the object makes. These activations of sound representations are not
sufficiently strong to trigger auditory perception, but are strong enough
to be measured in an fMRI scanner. In one study, researchers could
determine with relatively high accuracy which objects participants were
looking at by examining brain activity in the auditory cortex (Meyer
et al., 2010).
Sound can also modulate vision. A well-known finding in electro-
physiological studies is that the amplitude of early components of
the evoked potential to visual stimuli is increased if these stimuli are
accompanied by a sound, suggesting attentional enhancement for
multimodal stimuli (Eimer & Driver, 2001; van der Burg et al., 2011).
TMS studies suggest that this enhancement is likely to be mediated
by direct connections from the auditory cortex to the visual cortex.
For example, it has been found that auditory stimuli enhance the
excitability of the visual cortex, such that presenting a sound makes it

easier to evoke visual sensations by stimulating the visual cortex (Romei et al., 2009). This activation of the visual cortex by sounds can even
result in illusory perception. When a single, flashed stimulus is accom-
panied by two short sounds, participants often report that they have
seen two flashes instead of one (Shams et al., 2000). Perception of such
an illusory flash is accompanied by an increase in early visual cortex
activity (Watkins et al., 2007). Such a double-flash illusion can also be
induced by touch, and the touch-induced illusion is also accompanied
by early visual cortex activity (Lange et al., 2011).
Tactile stimuli also interact with stimuli in other modalities. For example, decisions about a tactile stimulus (e.g. whether it pulses or is steady) are faster when a visual or auditory cue first draws attention to the site of
stimulation (Spence et al., 1998). When stimuli in different modalities
are presented simultaneously in different locations, it is sometimes pos-
sible to attend to one modality while ignoring the other (Martino &
Marks, 2000), but, even in this case, crossmodal interactions are present
in that the attended stimulus may be perceived as occurring slightly
before the unattended one (Spence et al., 2001). As mentioned earlier,
one function of crossmodal interactions may be to make our percep-
tual experience coherent and unitary. For example, visual and haptic
information combine when viewing and touching an object to result in
a unified perceptual experience of that object (Hollins, 2010). When
inputs are contradictory, such as when a textured surface is paired with
a less coarse (or coarser) visual texture, input from one modality should
be suppressed or neglected if the task is to process the stimuli according
to variation in the other modality. In such a case, however, an observer
may have trouble ignoring one of the modalities, even if instructed
to do so. In the case of textured objects, ignoring conflicting visual
information is easier than ignoring conflicting texture information, as
evidenced by longer classification times when classifying visual objects
according to appearance (while ignoring the feel of tactile stimuli)
as opposed to classifying the tactile stimuli according to feel, ignoring
the appearance of the visual stimuli (Guest & Spence, 2003).
An arguably more direct (i.e. less likely to be influenced by non-
sensory factors, such as distraction) example of intermodal interference
is Jousmäki and Hari’s (1998) finding that when observers listen to
the sound they make by rubbing their hands together through head-
phones that amplify or modify the sound, their reported perception
of the texture of their own skin varies (e.g. feeling rougher or drier;
see also Guest et al., 2002). Other examples of crossmodal interactions
include that vibrotactile frequency discrimination is impaired by the

simultaneous presentation of an auditory stimulus of similar frequency (Yau et al., 2009) and that judgements of the speed of movement of two tactile gratings are affected by the speed and direction of movement
of a visual grating accompanying one of the tactile stimuli (Bensmaïa
et al., 2006c).
Sensory substitution, described in the introduction to this chapter, is
another example of the interplay between the senses. When a sensory
area in the cortex no longer receives input it does not become idle,
but, instead, may start processing information from other modalities.
This has been demonstrated most often in the visual modality, where it
has been shown, for example, that the visual cortex contributes to the
processing involved in reading Braille. This has been demonstrated by
using TMS to disrupt visual cortex during Braille reading. Compared to
a baseline condition, Braille readers have significantly more difficulty
with reading when occipital cortex is disrupted (Sathian & Zangaladze,
2002). As is the case with sighted people, TMS of the visual cortex can
induce conscious percepts in blind individuals. Rather than being per-
ceived as visual percepts, these perceptions are somatosensory. Braille
readers who read with the fingers will experience these percepts as a
tactile sensation of the fingers (Ptito et al., 2008), whereas those who
use a ‘tongue display unit’ (a sensory substitution device that trans-
lates visual displays into electrotactile tongue stimulation) report the
experience of somatotopically organized tactile sensations on the tongue
(Kupers et al., 2006).
Engagement of the visual areas in nonvisual tasks appears not to be
limited to blind individuals, at least not when sighted persons have
been blindfolded for some time. Merabet et al. (2008) found that when
normally-sighted observers were blindfolded for a prolonged period
of time (e.g. two weeks), the visual cortex responded to auditory and
somatosensory stimuli, playing a role in auditory object discrimination
and Braille reading. However, within hours of the removal of the blind-
fold, auditory discrimination and Braille reading performance returned
to pre-blindfold levels as the visual cortex again took up the processing
of visual input. This experiment shows that sensory substitution is not
necessarily the result of long-term adaptations in the brain and suggests
that neural reorganization can occur within a relatively short period
of time.
The vOICe (Meijer, 1996; Proulx & Harder, 2008), introduced earlier
in this chapter, is a sensory substitution device that shows great promise
for increasing the mobility of blind persons. The vOICe uses a camera to sample visual input and then uses software to translate the snapshot

Figure 2.1 A spectrogram of a one-second sound generated by The vOICe. Reprinted with permission (http://www.seeingwithsound.com)

into a ‘soundscape’ (see Figure 2.1). Once per second the camera scans
the environment from left to right, and the translated information is
fed polyphonically to the left and right ear of the wearer respectively.
Brightness is coded as loudness, with bright objects being louder, and
frequency is used to denote the height of objects in the visual field. For
example, a bright disc on a dark background in the upper left corner of
the snapshot would be translated as a loud, high-pitched sound in the
left ear, whereas a dim disc in the lower right corner would result in a soft,
low-pitched sound in the right ear (see http://www.seeingwithsound.com
for more details). Blind and partially-sighted users who have used the
device for extended periods report that their perception of the world is
qualitatively different than it was before using the device. Some blind
users have even reported having visual sensations after prolonged use
of The vOICe. These ‘visual’ sensations are accompanied by activity in
the early visual cortex, reflecting the brain’s capacity for reorganizing
cortical processing (Merabet et al., 2009).
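The brightness-to-loudness and height-to-pitch mapping just described can be sketched as a toy image-to-soundscape renderer. This is an illustration of the general scheme, not The vOICe's actual algorithm; the sample rate, frequency range and image format are all assumptions made for the sketch.

```python
import math

def image_to_soundscape(image, rate=8000, f_lo=500.0, f_hi=3000.0):
    """Render a grayscale image (rows of brightness values in [0, 1],
    row 0 at the top, at least two rows) as a one-second mono
    'soundscape': columns are scanned left to right, higher rows map
    to higher pitch, and brighter pixels sound louder."""
    rows, cols = len(image), len(image[0])
    per_col = rate // cols            # samples devoted to each column
    freqs = [f_lo * (f_hi / f_lo) ** ((rows - 1 - r) / (rows - 1))
             for r in range(rows)]    # top row gets the highest pitch
    samples = []
    for c in range(cols):             # left-to-right scan
        for n in range(per_col):
            t = n / rate
            s = sum(image[r][c] * math.sin(2 * math.pi * freqs[r] * t)
                    for r in range(rows))
            samples.append(s / rows)  # keep amplitude within [-1, 1]
    return samples
```

Under this mapping, a bright disc in the upper left of the image is rendered as a loud, high-pitched tone at the start of the scan, matching the worked example in the text.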

Perception of space and self

Knowing where one is in space, and where objects are located in relation
to oneself, is crucial for successful interaction with the environment.

A distinction can be made between objects that are in peripersonal space (i.e. within reach) and objects that are in extrapersonal space (i.e. out of reach). Experiments
showing that targets in peripersonal space are processed faster and appear
more salient than targets that are in extrapersonal space—even when the
retinal image of the objects is similar (Li et al., 2011)—suggest that objects
in peripersonal space are given priority in processing. The attention-
getting property of peripersonal stimuli suggests that peripersonal
warnings could be more effective than warnings presented outside of
peripersonal space. According to Spence and Ho (2008), finding ways to
bring distal events occurring in extrapersonal space (e.g. a deer crossing
the road one is driving on) into the more behaviourally relevant peri-
personal space of the operator is a major challenge in warning design.
We know from research on stimulus–response compatibility effects
(see, e.g., Proctor & Vu, 2006) that there are important synergies between
the stimulation of particular body parts and specific task requirements.
Advances in the understanding of multisensory cuing (e.g. cuing the
location of a hazard with tactile and visual cues) are making it increas-
ingly possible to design warnings that both capture attention and suggest
courses of action (e.g. Oskarsson et al., 2012).
It may be possible to extend peripersonal space using a tool such as
a pointer. Some evidence for the extension of peripersonal space comes
from Iriki et al.’s (1996) study in which macaque monkeys were trained
to retrieve objects using a rake. Iriki et al. found bimodal neurons in the
caudal postcentral gyrus whose visual receptive fields were expanded
during tool use to include the rake, implying that peripersonal space
was extended. In humans, evidence consistent with the extension of
peripersonal space during tool use has been obtained using crossmodal
congruency tasks in which the interference caused by visual distractor
stimuli on tactual judgements is used as an indicator of the extent of
visual processing. In a review and reanalysis of the literature on cross-
modal congruency effects, Holmes (2012) concluded that visual distractor
stimuli presented near the tip of a tool do have a greater interfering effect
than ones presented at the middle when the participant is engaged in a
task that requires manipulation of the whole tool, but not otherwise. This
result is consistent with the view that the space surrounding the tool-
tip is integrated in the representation of peripersonal space, leading to
prioritized processing of information near the tooltip. However, Holmes
cautioned that the effect was small and, after discussing alternative
possible interpretations of the result, concluded, ‘In my view, and to my
knowledge, there has not yet been a convincing demonstration that tool
use extends peripersonal space’ (p. 281).

Knowing where objects are in space requires knowledge of where the body is in space, and the brain's representation of the body appears to
be flexible. Evidence for such flexibility comes from the ‘rubber hand
illusion’ (Botvinick & Cohen, 1998). In this illusion, a rubber hand is
placed before the participant (covered by a cloth to hide the fact that it
is not attached to an arm) and the participant’s own hand is moved out
of view. The experimenter then strokes the fingers of the rubber hand
and, in tandem, the participant’s own, hidden hand with a paintbrush.
Viewing the rubber hand leads the participant to feel as if the rubber
hand is their own.
The rubber hand illusion demonstrates that the brain is capable
of integrating alien elements into its representation of the body. In
immersive virtual reality environments, the rubber hand illusion can
be extended to the entire body. Lenggenhager et al. (2007) showed that
watching a virtual body (an avatar) being stroked on the back while
simultaneously being stroked on the back oneself can induce a mild
out-of-body illusion, in which observers feel a displacement of their
‘self’ out of the boundaries of their own body in the direction of the
avatar. Such illusions may have consequences for the design of multi-
modal information. For example, haptic body suits worn in a virtual
reality environment (Lindeman et al., 2004) could be used to induce the
out-of-body effect by simultaneous administration of tactile stimulation
of the user’s body and visual stimulation of an avatar. The practical
value of such an illusion remains to be seen, but one can imagine that
it may lead to more vivid and realistic experiences in virtual reality
environments.
Identification with an avatar does not require a completely immersive
virtual reality environment. Most online multiplayer computer games
are based on avatars that symbolize the player. Gamers sometimes report
identifying themselves more strongly with their avatars than with their
own bodies as they—through the avatars—interact with people, make
new friends and develop social-cognitive skills (Yee, 2006). A recent brain
imaging study has shown that this identification of the self with an ava-
tar in expert gamers recruits the same brain areas as self-perception: when
expert gamers see their avatar, the left inferior parietal lobe is activated
in a manner similar to when they see a picture of themselves (Ganesh
et al., 2012). This does not hold for novices: the self-identification network
for novices is activated when viewing a self-portrait, but not by viewing
one’s avatar. For expert gamers, the amount of activation in the self-
perception network induced by the avatar is correlated positively with
their ability to integrate new elements in their body schema.

Perceptual docking for robotic control

A great deal of perceptual research has focused on how to make modern surgical techniques, such as minimally-invasive surgery, more effective
and safer for the patient. A surgeon performing minimally-invasive
surgery is reliant on tools and a two-dimensional camera image of
the patient, and the complexity of manipulating the instruments places
high demands on surgeons’ manual dexterity and visual–motor control.
Increasingly, surgeons are assisted by robotics to increase dexterity
(e.g. by scaling motion with microprocessor-controlled mechanical wrists)
and protect the patient from damage due to tremor or intentional—but
mistaken—movement of surgical instruments. The use of surgical robots
highlights the need for tightly integrated control between the operator
and the robot.
In addition to focusing on how the surgeon’s movements can be
translated to robotic control, much research has focused on how to
improve the information that is passed from the robot to the surgeon.
This research has resulted in an array of devices that can be used
to translate sensory information (such as force feedback) that is lost
when tools are used (e.g. Kennedy et al., 2002; Mendoza & Laugier,
2003) and in the development of stereoscopic optics (e.g. the da Vinci system; Intuitive Surgical, CA). A new, integrative approach called perceptual
docking (Yang et al., 2008) addresses robotic control within the frame-
work of a surgeon performing his or her job. Perceptual docking relies on
a characterization of operator-specific motor and perceptual behaviour
during human–robot interaction and uses this information to enable
perceptual learning and knowledge acquisition in robotic systems.
Yang et al. describe a gaze-contingent framework in which saccadic eye
movements and ocular vergence are used to infer attentional selection
and to create three-dimensional representations of the space in which
the surgeon is operating. Perceptual docking has the goals of increasing
image understanding, feature tracking, three-dimensional perception
and the integration of different sources of visual cues by supplementing
or replacing computer vision techniques with information gained from
the human vision system by remote eye-tracking.
The attention and search strategies of a surgeon performing an operat-
ion are reflected in saccadic eye movements and fixations. Yang et al.
(2008) describe how video-oculography, a non-intrusive video-based
eye tracking method based on the corneal reflection from a fixed infra-
red light source in relation to the centre of the pupils, can be used to
extract quantitative information about ocular vergence to determine

the depth of the fixation point by tracking both eyes. Issues that are
being addressed include detecting visual salience and relevancy, and
characterizing attentional shifts. Gaze-contingent perceptual docking
uses binocular eye tracking for controlling simple instrument manoeuv-
res, such as automatic targeting and panning of the laparoscope,
extracting depth information and tracking tissue deformation, and for
modelling the intention and visual search strategies of the surgeon to
characterize the pathways within which hand–eye control occurs. Such
techniques show promise in creating a situation in which surgery can
be performed on a moving object (such as a beating heart or expanding
lung) within a static frame of reference. Dynamic tracking of the eyes
also has promise as a means of guiding movements made within the
patient. Currently, most applications use information gained before
surgery to define safe working spaces. Under such ‘active constraint’,
movements are restricted according to constraints provided by the
robot. It is hoped that extracting three-dimensional information dur-
ing an operation via binocular eye tracking will be able to be used to
dynamically update the constraints as tissue is deformed or surgical
conditions change. Using such methods will allow better information
to be transmitted to augment surgical abilities. For example, force inter-
actions could be based on the relative separation between the point of
fixation and the position of a surgical instrument to improve hand–eye
coordination (Mylonas et al., 2012).
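The core geometry of recovering fixation depth from ocular vergence can be illustrated with a simplified symmetric-fixation model: the fixation distance follows from the interpupillary distance and the vergence angle by elementary trigonometry. This sketch ignores measurement noise and asymmetric gaze, and is not the estimator Yang et al. describe; it shows only why tracking both eyes yields depth.

```python
import math

def depth_from_vergence(ipd_m, vergence_rad):
    """Distance (m) to a symmetrically fixated point, given the
    interpupillary distance and the vergence angle (the angle
    between the two eyes' gaze directions)."""
    return (ipd_m / 2.0) / math.tan(vergence_rad / 2.0)

def vergence_for_depth(ipd_m, depth_m):
    """Inverse mapping, handy for checking the geometry."""
    return 2.0 * math.atan((ipd_m / 2.0) / depth_m)
```

Because vergence is approximately inversely proportional to distance for small angles, halving the fixation distance roughly doubles the vergence angle, which is why vergence is informative mainly at near viewing distances such as a surgical console.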

Conclusion

Applying knowledge of perceptual processes has long been a concern in the field of ergonomics. Advances in neuroscience are paving the way
for new applications in perceptual enhancement, multisensory displays
and perceptual learning. The effective use of multimodal interfaces
that integrate visual, auditory and tactile feedback has the potential to
greatly enhance user performance, especially in environments that need
to be realistic, such as immersive virtual reality environments used for
serious gaming, and when control is complex, such as in microsurgery
or robotics.
A major theme in multisensory displays is that it is important to
understand interactions between dimensions within and across modali-
ties. New insights in the neural underpinnings of such interactions
and the application of neuroscience techniques to characterize these
interactions should provide the designer with better tools for deciding
on the best information displays for a given context.
3
Visual Attention and Display Design
Jason S. McCarley and Kelly S. Steelman

Human operators in aviation, process control and other high-stress
domains must monitor constantly for warnings and alerts among a rush
of visual stimulation. More prosaically, website visitors and software
users scan their computer screens for interesting or useful information
among icons, online advertisements and other visual clutter. In all of
these cases, performance hinges on the effective functioning of visual
selective attention to find and extract useful information from the
visual environment (Eriksen & Hoffman, 1973; Posner, 1980). At their
best, failures of visual attention cause slowdowns and annoyances.
At their worst, they cost lives and property. Well-designed displays that
allow the viewer to readily find and extract the information required for
the task at hand are crucial for efficient and safe performance of com-
plex human–machine systems (Johnson & Proctor, 2004; Moray, 1993;
Wickens & Hollands, 2000; Wickens & McCarley, 2008).
Although the proto-psychologist Helmholtz and early psychologists
such as Wundt, James and Titchener all studied and discussed attention,
the behaviourist movement largely tabled the topic for much of the
early twentieth century. Spurred by wartime concerns over human per-
formance, modern research on attention did not begin in earnest until
the 1950s (Broadbent, 1957, 1958; Cherry, 1953; Fitts et al., 1950). For
largely technical reasons, much (though not all; e.g. Fitts et al., 1950)
of the earliest post-war research on attention examined mechanisms of
auditory, rather than visual, selection. With Sperling’s (1960) studies of
partial report from visual iconic memory, however, vision became the
predominant modality of interest, and work since has elucidated many
of the information processing mechanisms that constitute visual atten-
tion and has linked them to underlying brain mechanisms. Along the
way, engineering psychologists have drawn from developing theoretical
knowledge of attention to inform the practice of human factors and
ergonomics. Fundamental findings about the neural and cognitive
workings of visual attention are now being applied to display design,
with the goal of enabling the ready selection of task-critical information
and filtering of irrelevant information.

Modes of orienting

Reiterating distinctions made by the early psychologist William James
(1890/1950) and the ground-breaking cognitive neuroscientist Michael
Posner (1980), Klein and colleagues (Klein et al., 1992; Klein & Shore,
2000) distinguished four modes of attentional orienting. In their 2 × 2
taxonomy of orienting, attentional control is described as top-down or
bottom-up, and overt or covert. Top-down, or knowledge-driven, atten-
tional shifts are guided by the observer’s goals, memory and attentional
settings. For instance, an observer who knows what colour a file’s
icon is (Folk et al., 1992; Wolfe, 1994) or where the icon is likely to be
located (Chun & Jiang, 1998) can use that knowledge to guide her/his
attention toward likely files on a computer desktop. Bottom-up control,
sometimes described as exogenous or stimulus-driven control, is guided
by characteristics of the observer’s environment, independently of the
observer’s goals or expectations. In vision, bottom-up shifts are driven
by stimulus salience, a signal-to-noise measure of the feature contrast
between an object and its background (Itti & Koch, 2000; Li, 2002). For
example, an object that differs strongly in brightness, colour or shape
from its surroundings is likely to be high in salience and attract visual
attention in a bottom-up manner (Theeuwes 1991, 1992).
Neurobiological data suggest that top-down and bottom-up attention
shifts are executed by a common network of cortical and subcortical
areas, including the superior frontal and posterior parietal regions (Kim
et al., 1999; Peelen et al., 2004), although the signals driving top-down
and bottom-up shifts may emerge from different loci (Buschman &
Miller, 2007; Corbetta & Shulman, 2002). Top-down control signals
evidently progress backwards from the prefrontal and superior frontal
cortex to the parietal cortex, and then to lower-level visual regions
(Bressler et al., 2008; Buschman & Miller, 2007; Saalmann et al., 2007),
whereas bottom-up signals arise in the parietal cortex and earlier visual
regions (Arcizet et al., 2011; Buschman & Miller, 2007; Burrows &
Moore, 2009; Constantinidis & Steinmetz, 2005; Mazer & Gallant, 2003;
Nothdurft et al., 1999). The frontal eye fields (Thompson & Bichot,
2005) and lateral intraparietal regions (Gottlieb et al., 1998) appear
to integrate bottom-up and top-down signals within neural ‘priority
maps’ that represent the momentary distribution of attentional or
behavioural relevance across the visual field. Computationally, salience
can be derived through operations similar to those performed by
V1 neurons (Itti & Koch, 2000; Li, 2002) and, indeed, correlates of visual
salience have been identified in V1 of the macaque cortex (Nothdurft
et al., 1999).
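This kind of computation can be caricatured as centre-surround feature contrast: a location is salient to the extent that its feature value differs from the average of its neighbours. The sketch below is a deliberately minimal toy version of that idea, not the Itti-Koch or Li model itself; the grid values and the function name are our own illustrative choices.

```python
import numpy as np

def salience_map(feature):
    """Toy centre-surround salience: |value - mean of its 8 neighbours|."""
    padded = np.pad(feature, 1, mode="edge")
    rows, cols = feature.shape
    # Sum each cell's 3x3 neighbourhood, then subtract the cell itself,
    # leaving the total of the 8 surrounding cells.
    surround = sum(padded[r:r + rows, c:c + cols]
                   for r in range(3) for c in range(3)) - feature
    return np.abs(feature - surround / 8.0)

# A dim field with a single bright element: the singleton dominates the map.
brightness = np.full((5, 5), 0.2)
brightness[2, 2] = 1.0
smap = salience_map(brightness)
r, c = np.unravel_index(np.argmax(smap), smap.shape)
print(int(r), int(c))  # the bright singleton is the most salient location: 2 2
```

A full model would build separate maps for colour, orientation and motion, normalize them and sum them into a single priority map; the toy above shows only the contrast step.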
The second dimension of Klein et al.’s (1992) taxonomy distinguishes
mechanisms of attentional orienting, one of which James (1890/1950)
described as ‘the accommodation or adjustment of the sensory organs’
(p. 434) and the second of which he described as ‘the anticipatory prepa-
ration from within of the ideational centres concerned with the object
to which the attention is paid’ (p. 434). In modern parlance, these are
known, respectively, as overt and covert attention (Posner, 1980). An
overt visual attention shift is an eye, head or body movement that re-
orients the observer’s gaze. Overt attention shifts are necessary because
detailed vision is limited to a small, central region of the visual field, the
fovea, roughly one degree of visual angle in diameter (Wandell, 1995;
see Chapter 2). Outside the fovea, visual acuity declines precipitously, in
part because of the spacing of photoreceptors in the retina and in part
because of changes in the neural wiring connecting the retina to the
visual cortex. Most often, overt shifts occur as saccades, fast, ballistic eye
movements that flick the observer’s gaze from one location to another.
Saccades, which typically last only a few tens of milliseconds, are inter-
spersed with fixations, dwells of the eye which tend to last 250–500 ms
(Moray, 1993; Rayner, 1998).
A covert attention shift, in contrast to an overt shift, reorients the
observer’s cognitive focus invisibly, without an accompanying eye or
body movement (Eriksen & Hoffman, 1973; Posner, 1980). Covert ori-
enting, in other words, is purely mental selectivity. Overt and covert
attention are linked, but asymmetrically: a covert attention shift can
occur without provoking an eye movement, but before the eyes move,
a covert attention shift precedes them to the saccade target location
(Hoffman & Subramaniam, 1995; Kowler et al., 1995). Neuroimaging
data have confirmed that covert and overt attentional movements acti-
vate overlapping networks of parietal and frontal regions in the human
brain, including the intraparietal sulcus, postcentral sulcus and the
frontal eye fields (Corbetta, 1998). Correspondingly, neurophysiological
studies in monkeys indicate that electrical stimulation of neurons within
brain regions involved in oculomotor control, even when it is too weak
to trigger an eye movement, enhances covert attentional processing at
the corresponding retinal sites and the response sensitivity of visual
neurons (Armstrong & Moore, 2007; Moore & Armstrong, 2003; Moore &
Fallah, 2001; see Chapter 1), although oculomotor and attentional proc-
esses may be dissociated at the level of single cells within these regions
(see Awh et al., 2006). Such effects accord well with so-called premotor
(Rizzolatti et al., 1987) or oculomotor readiness (Klein, 2000) theories of
attention, which hold that shifts of covert attention reflect preparatory
activation for eye movements.
Expanding on Klein’s taxonomy, we can also characterize modes of
attentional orienting along a third dimension, based on the forms of
visual representation over which they operate. Many theorists, most
notably Posner (Posner, 1980; Posner et al., 1980) and Eriksen (Eriksen &
Hoffman, 1973) have likened covert visual attention to a mental
spotlight or focus that highlights a circumscribed region of the visual
field. A related idea conceptualizes covert attention as a zoom lens that
can be focused with high resolution on a small area of the visual field
or spread broadly with lower resolution over a large area (Eriksen &
St. James, 1986). Both the spotlight and zoom lens metaphors suggest
that attentional selection is determined strictly by location: every-
thing inside the spotlight or field of view is privileged and everything
outside is not. This mode of selection has therefore been described as
space-based. Spatial attention has been shown to modulate the baseline
response level (e.g. Kastner et al., 1999; Luck et al., 1997a; O’Connor
et al., 2002) and response gain (e.g. Brefczynski & DeYoe, 1999; Heinze
et al., 1994; Hopfinger & Mangun, 1998) of neurons throughout
visual brain areas, in regions as early as the lateral geniculate nucleus
(O’Connor et al., 2002).
In an alternative mode of processing, the units of selection are not
simply regions of space but more complex representations of visual
groups or objects. Here, perceptual organization occurs before attentional
processing, grouping regions of the visual field based on properties such
as proximity, good continuation, similarity and connectedness (Palmer &
Rock, 1994; Wertheimer, 1938). Attentional processes then select a per-
ceptual group or object, processing multiple properties of the selected
item—for example, shape, colour, texture—in parallel (Duncan, 1984;
Kramer & Jacobson, 1991). This mode of selection has been described as
object-based (Duncan, 1984; Egly et al., 1994; Kramer & Jacobson, 1991).
The effects of object-based selection are that one object can be attended
independently of another, even if the two are intertwined or spatially
superimposed (Duncan, 1984; O’Craven et al., 1999); conversely, two
regions of an object can be selected together even if they are spatially
separated and disconnected within the retinal image (Behrmann et al.,
1998). Finally, multiple properties of a single object can be processed and
reported with no more effort than is necessary to process one property
(Duncan, 1984; Kramer et al., 1985). All three of these phenomena bear
on display design, as discussed later. Object-based selection manifests as
early as V1, where single-cell responses are stronger to the contours of an
attended curve than to the contours of an intertwined, but unattended,
curve (Roelfsema et al., 1998).
A third mode of operation selects neither objects nor locations per se,
but instead prioritizes specific visual features wherever they occur
within the visual field. In this feature-based mode (Maunsell & Treue,
2006), attentional processes can prioritize all stimuli of, for example, a
particular colour, orientation or direction of motion. The influence of
feature-based attention has been shown in single-cell data from regions
MT (Treue & Martinez-Trujillo, 1999) and V4 of the monkey cortex
(McAdams & Maunsell, 2000), in human neuroimaging data (Liu et al.,
2007; Serences & Boynton, 2007) and in event-related potentials (ERPs)
recorded from human subjects (Zhang & Luck, 2008).
Some final comments about space-, object- and feature-based atten-
tion are in order. First, although the three modes of attention have been
dissociated, they may operate simultaneously or hierarchically (Bichot
et al., 1999; Egly et al., 1994; Hayden & Gallant, 2005; Kramer et al., 1997).
Second, although the space-/object-/feature-based distinction applies
most obviously to covert attention, it can also characterize overt atten-
tion. As an observer’s gaze can be directed to only one location at a time,
overt orienting is always to some degree space-based. However, observers
are more likely to make saccades within an object than between objects,
evincing an object-based influence on eye-movement programming
(McCarley et al., 2002). Observers can also target their saccades selectively
to objects of a particular colour or shape (Findlay, 1997; Williams, 1967),
demonstrating a form of feature-based overt selection.

Why is mental processing selective?

Though researchers have long agreed that covert selection is possible,
they have often disagreed about why it should be. Donald Broadbent
(1958), an early attention theorist whose influence still resonates,
argued that covert attention is necessary to filter signals from the senses
before they reach the regions of the brain where detailed perception
and recognition occur. Broadbent held that the brain’s mechanisms for
perceiving and recognizing stimulus input are of limited capacity and
that without attention to regulate the flow of information from the
eyes, ears and other senses to the cortex, the brain would be constantly
overwhelmed. Because attention in this account filters away sensory
input before it reaches high-level perceptual processing, Broadbent’s
theory and others like it have been termed early selection models. Other
researchers have questioned the notion of perceptual resource limits
(Deutsch & Deutsch, 1963; Duncan, 1980), arguing that the processing
capacity of the brain is easily sufficient to recognize all incoming infor-
mation from the senses (although recognition may still be limited by
factors such as poor acuity in the peripheral retina). According to this
line of thought, attentional selection becomes necessary only after
recognition to regulate access to conscious working memory or behav-
ioural control systems. Attentional theories of this form have been called
late selection models.
In fact, neither a pure early nor pure late selection model is empiri-
cally adequate (Lavie et al., 2004; Lavie & Tsal, 1994). Consistent with
Broadbent’s early selection model, attention is, indeed, necessary for
visual object recognition (studies that claim otherwise generally have
not controlled well for inadvertent slips of attentional focus; see Lachter
et al., 2004 and Yantis & Johnston, 1990, for details; see Wood &
Cowan, 1995 for evidence that attention is also necessary for auditory
stimulus recognition). When perceptual processing demand is low,
however, attention may spill over from an observer’s target stimulus
to produce automatic recognition of distractor stimuli (Lavie, 1995).
Effects such as these suggest at least two forms of covert selection within
the visual processing stream, one at an early level and one at a later
level (cf. Johnston et al., 1995). We can call these perceptual selection and
central selection.

Perceptual selection
Neurobiological research over the past few decades has not only con-
firmed Broadbent’s (1958) view that the brain’s capacity for object
perception and recognition is limited, but has also clarified how and
why capacity is constrained. The need for perceptual selectivity appears
to arise largely in the extrastriate cortex, a region of the brain dedicated
to processing complex visual information. Although receptive fields
in the striate cortex are generally no larger than a degree or two in dia-
meter, extrastriate receptive fields can cover large regions of the visual
field (Desimone & Gross, 1979; Gattass et al., 1988; Hubel & Wiesel,
1968; Smith et al., 2001) and may, therefore, subtend multiple objects
simultaneously. As a result, a single neural response train can conflate
the properties of multiple different objects, producing an ambiguous or
degraded representation (Reynolds et al., 1999).
To achieve a disambiguated representation, visual objects compete for
control of extrastriate neural responses (Desimone & Duncan, 1995).
Competitive interactions between stimuli modulate the gain of signals
arising from the multiple objects within a set of shared receptive fields
(Ghose & Maunsell, 2008; Reynolds & Desimone, 2003; Womelsdorf
et al., 2008), allowing the winning object to dominate the neuron’s
response train. In effect, the neuron’s receptive field shrinks around the
attended stimulus (Moran & Desimone, 1985; Reynolds et al., 1999).
Limits on the number of available receptive fields restrict the perceptual
quality with which multiple objects can be represented simultaneously,
producing effects consistent with an early selection model of attention
(e.g. McCarley & Mounts, 2008). The competition can be tilted in favour
of one object or another bottom-up by stimulus salience (Itti & Koch,
2000; Reynolds & Desimone, 2003) and top-down by the observer’s
attentional settings (e.g. Reynolds & Desimone, 2003). Human neuro-
imaging and electrophysiological data have confirmed both that
interobject competition for selection degrades stimulus processing in
the extrastriate visual cortex and that attentional processes can offset
the costs of this competition (Kastner et al., 1998; Luck et al., 1997b).
Such effects confirm Broadbent’s suggestion that attention operates
early in visual representation, directly honing our percepts.

Central selection
Even stimuli that are not perceptually degraded may be filtered before
reaching conscious awareness. This is best illustrated by the phenom-
enon of inattentional blindness. In a landmark study, Mack and Rock
(1998) asked observers to focus attention narrowly on a single object
in an empty visual field in order to make a difficult perceptual judge-
ment. After observers had performed several trials of this task, a critical
trial occurred during which an unexpected probe object appeared a few
degrees away from the attended object. This was followed by a control
trial on which observers were warned to expect the probe. Remarkably,
many observers failed to notice the unexpected probe, even though
they saw the same probe easily when they were warned to anticipate it;
that is, without attending to the probe object, observers were effectively
blind to it.
At least two pieces of evidence suggest that inattentional blindness of
this form is not a result of early selection. First, the detection rates for
a probe item in an inattentional blindness task depend on the probe’s
semantic properties (Koivisto & Revonsuo, 2007). For instance, observers
are more likely to notice their own name as the probe than to notice
a different word (Mack & Rock, 1998), demonstrating that an object
can be processed to the point of recognition before it is blocked from
consciousness by inattentional blindness. Second, inattentional blind-
ness can be precipitated by nonvisual distractions. Even if it involves no
direct visual processing demands, for example, a sufficiently challenging
verbal working memory task can increase the risk of inattentional
blindness (Fougnie & Marois, 2007). This suggests that inattentional
blindness occurs at a central, post-perceptual, but preconscious, locus.
Neurophysiological data have indicated a potential neural correlate of
inattentional blindness in the prefrontal cortex (Everling et al., 2002).
Mack and Rock’s findings on the phenomenon of inattentional blind-
ness have been broadly extended. The phenomenon has been replicated
using complex, dynamic and naturalistic stimuli (Most et al., 2001),
including films of people interacting in naturalistic scenes (Simons &
Chabris, 1999), and the effect has even been demonstrated in observers
interacting in real-world environments (Chabris et al., 2011; Furley
et al., 2010; Hyman et al., 2010). The consequences of inattentional
blindness are evident in the ‘looked-but-failed-to-see’ errors responsible
for many traffic accidents (Herslund & Jørgensen, 2003).

Applications to display design

Knowledge of the neural and cognitive underpinnings of attention has
allowed engineering psychologists to better predict and manipulate the
likelihood that a viewer will select critical information within a display
for privileged processing while deprioritizing less important display com-
ponents. Here, we consider four topics in which attention theory and
application intersect: visual search, grouping and object displays, head-up
displays and large-scale attentional control.

Visual search
Many real-world tasks require an operator to search through a display
for a target object whose presence and location among nontarget objects
are uncertain. A radiologist, for example, may inspect a chest x-ray
searching for potential abnormalities, and a naval radar operator may
monitor his/her display for enemy ships. Researchers from both basic
psychology (e.g. Duncan & Humphreys, 1989; Treisman & Gelade, 1980;
Wolfe, 1994) and human factors/ergonomics (e.g. Drury, 1975; Nodine &
Kundel, 1987) have developed models of visual search to elucidate
factors that determine search efficiency. Search in these models begins
with a stage of processing known as pre-attention (Treisman & Gelade,
1980). Here, low-level visual processors operate across the field of view
to register rudimentary visual properties such as colour, brightness
and motion (Wolfe & Horowitz, 2004), assess the general layout of the
search field, and detect obvious targets or note potential targets for
more careful inspection (Kundel & Nodine, 1975; Kundel et al., 2007).
All of this occurs in parallel and within a glance.
If a target is not detected during parallel processing, the operator scans
the image by making focal attention shifts or saccadic eye movements
to scrutinize regions of interest serially. This focal attentional processing
seems necessary to resolve the details of individual stimuli and integrate
pre-attentively detected features (e.g. yellow, black, a curved line, a dia-
mond) into recognizable objects (e.g. a yellow traffic sign with a black
‘curve ahead’ symbol; Treisman & Gelade, 1980; Wolfe & Bennett,
1997). Consistent with the proposal that efficient and inefficient
search differ qualitatively, the search for low-discriminability targets
engages prefrontal and parietal neurons that very easy search does not
(Corbetta et al., 1995; Donner et al., 2002; Nobre et al., 2003). In creating
displays for visual search, the designer’s goal is to render target detection
as efficient and effortless as possible. Ideally, the target will ‘pop-out’ at
the operator during pre-attentive processing (Treisman & Gelade, 1980),
regardless of how many distracting objects the search field contains. In
the event that a display designer cannot ensure target pop-out, she/he
should, nonetheless, aim to make search as efficient as possible.
What makes visual search efficient? Researchers have identified a
number of factors. As noted, pre-attentive processes register primitive
visual features such as colour, rough shape and motion, but focused
attention is needed to integrate features into unitary, multifeature
objects. Accordingly, targets defined by a conjunction of features are
generally more difficult to detect than targets defined by a single,
unique feature (Treisman & Gelade, 1980). For instance, a tilted black
rectangle may pop out among vertical black rectangles, while a grey
tilted rectangle may be difficult to detect among black tilted and grey
vertical rectangles.
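Behaviourally, this contrast shows up in set-size functions: adding distractors leaves feature-search response times roughly flat, but increases conjunction-search times roughly linearly. A toy linear model of that pattern follows; the slope and intercept values are illustrative only, not estimates from any dataset.

```python
def search_rt(set_size, slope_ms, base_ms=400.0):
    """Approximate search RT as a base time plus a per-item scanning cost.
    A slope near 0 ms/item mimics pop-out; roughly 20-30 ms/item mimics
    inefficient conjunction search (all values illustrative)."""
    return base_ms + slope_ms * set_size

for n in (4, 8, 16):
    # Feature target (flat) vs conjunction target (linear growth).
    print(n, search_rt(n, slope_ms=0.0), search_rt(n, slope_ms=25.0))
```

The flat function is the behavioural signature a designer aims for: target detection time that does not depend on how cluttered the display is.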
The presence of a unique target feature does not guarantee efficient
search. Even for unique feature targets the ease of search varies with
target salience. As described earlier, salience is a measure of the feature
contrast between an object and its surroundings (Itti & Koch, 2000;
Li, 2002). It therefore increases with differences between a target and
distractors, and with the homogeneity of the distractors themselves
(Duncan & Humphreys, 1989). The more dissimilar the target is from
the distractors and the more similar the distractors are to one another,
the easier search will generally be. Dissimilarity along separate feature
dimensions is processed in parallel and synergistically, meaning that
a target that differs from distractors redundantly in two properties
(e.g. both colour and shape) will be more salient than a target that differs
from distractors in a single property (Krummenacher et al., 2001).
A more surprising influence on target detectability is evident in an
effect known as search asymmetry. A search asymmetry occurs when
detection of a target stimulus of Type A among distractors of Type B
is more efficient than detection of a target stimulus of Type B among
distractors of Type A (Treisman & Gormican, 1988; Wolfe, 2001). Often,
the favoured target–distractor mapping within an asymmetry produces
target pop-out, while the disfavoured mapping produces search that is
slow and inefficient (Dosher et al., 2004; Zelinsky & Sheinberg, 1997).
Vision scientists typically attribute search asymmetries to differences
in the neural signal-to-noise ratio with which stimuli within an asym-
metrical pair are encoded (Dosher et al., 2004; Rauschenberger & Yantis,
2006; Treisman & Gormican, 1988; Treisman & Souther, 1985). Li (2002)
notes that asymmetries of this sort for basic visual features are a natural
consequence of the neural coding scheme in V1, where neurons tuned
to similar features tend to inhibit one another. For example, in a dis-
play with a ‘+’ in a field of vertical lines, mutual inhibition between
the vertical segments will reduce the neural activation generated by
the distractors and the vertical segment of the ‘+’, allowing the target’s
horizontal segment, which receives no inhibition, to pop out. When the
target is a vertical line in a field of ‘+’ distractors, the vertical segments
within the distractors, in contrast, will inhibit the vertical target, dis-
allowing efficient search. Presumably, similar coding relationships at levels
of representation beyond V1 account for asymmetries between familiar
and novel objects, such as letters and mirror-reversed letters. Notably,
search asymmetries persist even within heavy visual noise (Yamani &
McCarley, 2010, 2011), implying that coding of symbology to produce
asymmetries in favour of critical information offers a potential
technique for facilitating search, even within cluttered displays.
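The iso-orientation suppression account can be caricatured in a few lines: let each bar's response be reduced in proportion to the number of other bars sharing its orientation. In this toy formulation (our own, far simpler than Li's V1 model), the horizontal bar of a ‘+’ among vertical distractors escapes inhibition, while a vertical target among ‘+’ distractors is suppressed along with everything else:

```python
def bar_responses(bars, inhibition=0.1):
    """Toy iso-orientation suppression: each bar's activity is 1 minus a
    fixed cost per other bar with the same orientation (floored at 0)."""
    acts = []
    for i, ori in enumerate(bars):
        same = sum(1 for j, o in enumerate(bars) if j != i and o == ori)
        acts.append(max(0.0, 1.0 - inhibition * same))
    return acts

# '+' target among 8 vertical distractors: the target contributes one
# vertical (V) and one horizontal (H) bar.
plus_target = bar_responses(["V", "H"] + ["V"] * 8)
# Vertical target among 8 '+' distractors (each contributes a V and an H).
vertical_target = bar_responses(["V"] + ["V", "H"] * 8)

print(plus_target[1])      # the '+' target's horizontal bar: 1.0 (uninhibited)
print(vertical_target[0])  # the vertical target: heavily suppressed
```

The asymmetry falls out directly: a strong uninhibited signal is available in one target-distractor mapping but not in the reverse mapping.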
Under circumstances in which visual clutter makes bottom-up target
pop-out impossible, top-down attentional control can facilitate target
detection. Operators who know what the target they are searching for
looks like can adopt an attentional set to guide them selectively toward
likely target objects (Egeth et al., 1984; Wolfe, 1994). A computer
user searching for a particular PDF document on her/his desktop, for
instance, can selectively target red and white icons, and ignore icons
of different colours, thereby effectively reducing the size of the display
to be searched. In the visual cortex, this top-down set manifests in an
enhancement of neural responses to stimuli that share the known target
feature (Bichot et al., 2005; Buracas & Albright, 2009). Display designers
can thus facilitate search by creating feature-coded symbology to allow
this form of selective search. For example, colour coding of aircraft by
altitude within an air traffic display allows controllers to search effi-
ciently for flight path conflicts, focusing their attention on pairs of
aircraft that are at roughly the same altitude (Remington
et al., 2000). Similarly, colour coding of different object classes within a
battlefield map (e.g. terrain features vs troops) allows operators to scan
efficiently for a target of a known class (Yeh & Wickens, 2001). Coding
of information by luminance contrast can serve as an alternative to
colour coding in the event that chromatic displays are unavailable
(Wickens et al., 2004; Yeh & Wickens, 2001).
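The benefit of such feature coding can be framed as a cut in effective set size: an operator with the right attentional set inspects only the subset of items carrying the target's code. A minimal sketch of that filtering step (the icon names and colour codes below are made up for illustration):

```python
# Items on a display, tagged with their colour code.
icons = [("flight_plan.pdf", "red"), ("notes.txt", "blue"),
         ("report.pdf", "red"), ("budget.xls", "green"),
         ("map.pdf", "red"), ("log.txt", "blue")]

# An attentional set for 'red' restricts search to the coded subset,
# shrinking the display that must actually be scanned.
candidates = [name for name, colour in icons if colour == "red"]
print(len(icons), "items displayed;", len(candidates), "need inspection")
```

If per-item inspection cost is roughly constant, halving the coded subset roughly halves the scanning component of search time.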

Grouping and object displays


Display design choices do not just determine the ease of locating criti-
cal signals, as in visual search, but also influence the ease of extracting
and interpreting information after the operator has found it. Wickens
and Carswell (1995) classified data reading tasks along a continuum
from low to high processing proximity. Tasks of low proximity require
the display reader to extract a single data value. Tasks of high processing
proximity require the reader to attend to, and integrate, multiple data
values. To recognize that a patient under anaesthesia has a dangerously
low heart rate, for example, an anaesthesiologist need only attend to
the patient’s heart rate monitor. This is a task low in processing proxi-
mity. To diagnose a condition of shock due to blood loss, in contrast,
the anaesthesiologist must attend to a pattern of multiple physiological
variables (Blike et al., 1999). This is a task high in processing proximity.
Wickens and Carswell (1995) likewise classified display elements
along a continuum from low to high display proximity. Here, display
proximity denotes not just the physical nearness of information chan-
nels, but refers more generally to the strength of perceived grouping
or cohesion between channels as determined by the Gestalt laws of
perceptual organization: spatial proximity, similarity, common motion,
good continuation (Wertheimer, 1938), common region (Palmer, 1992)
and connectedness (Palmer & Rock, 1994). For example, circular gauges
in a display panel will tend to group more strongly with one another
than they will with rectangular gauges directly adjacent to them, even
when the distance between them is relatively large. This results from
the similarity of the circular gauges and from the presence of a common
region enclosing them.
How are the concepts of processing and display proximity impor-
tant to display design? The proximity compatibility principle (Wickens &
Carswell, 1995) holds that a display will encourage good data-reading
performance to the extent that display proximity between channels
matches the processing proximity of the channels. If an operator needs
to focus attention on a single display channel, performance will be
best if that channel does not group strongly with others. If the opera-
tor needs to access and integrate information from multiple channels,
however, performance will be best if the relevant channels group
strongly with one another. Thus, in Figure 3.1(a), where vertical tape
gauges are not grouped strongly, a reader might check the value of a
single gauge easily, but have difficulty comparing values across the three
gauges. In Figure 3.1(b), where the vertical gauges are grouped
strongly, the reader might compare values across gauges easily, but be
slower to isolate and check a single gauge. (It is worth noting, though,
that manipulations of display proximity tend to produce asymmetrical
effects; whereas high-proximity displays produce small costs to focused
attentional judgements, low-proximity displays produce fairly large
costs to data-integration judgements; Bennett & Flach, 1992). The
benefits of high display proximity to divided attention are evident
in behavioural performance and in electrophysiological measures of
perceptual and attentional processing quality; ERPs elicited by a task-
irrelevant or secondary-task probe are larger if the probe is perceptually
grouped with an attended target object than if the target and probe are
perceptually segregated (Boehler et al., 2011; Kasai et al., 2011; Kramer
et al., 1985).

Figure 3.1 Low display proximity between vertical tape gauges (a) allows an
operator to read the value of a single gauge easily, but increases the difficulty
of comparing values across the gauges. High display proximity between the
gauges (b) allows for easier comparisons across the gauges but makes the task of
isolating and reading a single gauge more difficult

Wickens and Carswell (1995) noted three psychological factors under-
lying the proximity compatibility principle. First, strong grouping can
enable object-based selection of multiple channels, allowing the opera-
tor to process two or more channels in parallel (Duncan, 1984). Second,
even if parallel processing is not possible, strong grouping reduces the
difficulty of attentional scanning back and forth between channels.
Third, grouping allows for emergent features (Pomerantz et al., 1977),
easily noticeable patterns or configural properties that result from an
arrangement of constituent parts. In Figure 3.1(b), to illustrate, the
emergent feature of collinearity provides a strong perceptual cue that
the values on the vertical gauges are in roughly the same range, and
a disruption of collinearity would signal a deviation from that range.
In Figure 3.1(a), where the gauges do not produce a salient percept of collinearity
when they are aligned, a deviation from alignment will be less obvious
(Sanderson et al., 1989).
In the extreme, strong grouping between elements or regions of an
image produces the percept of a single, unified object. In an object dis-
play, the dimensions of a single object are used to represent multiple data
values. In a surgical patient monitoring display designed by Blike and
colleagues (1999), for example, heart rate and stroke volume (the amount
of blood pumped with each heartbeat) are represented by the width and
height of a rectangle. The rectangle’s area, an emergent feature, thus
represents the patient’s cardiac output, the total amount of blood being
pumped per minute. Simply by noting that the rectangle is small, a dis-
play reader can determine that cardiac output is low.
To be useful, an emergent feature within an object display (e.g. object
size or symmetry) must be mapped to a useful higher-order variable
(e.g. cardiac output) derived from a combination of simpler underlying
variables [e.g. heart rate and stroke volume; Bennett & Flach (1992)
and Sanderson et al. (1989)]. There is little point in mapping low-order
variables to the dimensions of an object if some configural property
of the object as a whole does not represent a meaningful higher-order
value. Why not display the higher-order variable directly? By encoding
low-order variables as dimensions of the object and higher-order vari-
ables as an emergent property, object displays allow the data reader
direct access to both sets of values. In the anaesthesiological object
display described earlier, for example, the aspect ratio of the heart rate ×
stroke volume rectangle indicates the cause of a patient's low cardiac
output: a tall, narrow rectangle signals a low heart rate, and a short,
wide rectangle signals low stroke volume. A well-designed object display
thus preserves low-order variables while making higher-order variables
visually salient.
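The logic of this mapping can be illustrated with a short computational sketch. The function name, units and example values below are ours for illustration and are not taken from Blike et al. (1999):

```python
# Sketch of an object display: low-order variables (heart rate, stroke
# volume) set the rectangle's width and height, so the emergent feature
# of area encodes the higher-order variable (cardiac output), while the
# aspect ratio hints at the cause of an abnormal value.

def cardiac_rectangle(heart_rate_bpm, stroke_volume_ml):
    """Map low-order variables to rectangle dimensions (illustrative)."""
    width = heart_rate_bpm           # beats per minute
    height = stroke_volume_ml        # millilitres per beat
    area = width * height            # emergent feature: cardiac output (ml/min)
    aspect = height / width          # tall/narrow vs short/wide
    return {"width": width, "height": height,
            "cardiac_output_ml_min": area, "aspect_ratio": aspect}

normal = cardiac_rectangle(70, 70)       # ~4900 ml/min, roughly square
bradycardic = cardiac_rectangle(35, 70)  # low heart rate: tall, narrow
low_stroke = cardiac_rectangle(70, 35)   # low stroke volume: short, wide
```

Both failure cases yield the same small area (low cardiac output), but the rectangle's shape disambiguates the cause, which is exactly the dual readout the text describes.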

Head-up and head-mounted displays


Head-up displays (HUDs) and head-mounted displays (HMDs) provide
another example of the application of object-based attention to display
design. In a HUD visual display components are projected onto a trans-
parent screen in a vehicle operator’s (typically a pilot’s) forward field of
view. In a HMD display components are projected onto a see-through,
helmet-mounted shield. In effect, both forms of display overlay infor-
mation onto the operator’s view of the outside world, producing what
has been described as augmented reality (Milgram et al., 1994). The
primary purpose of HUDs and HMDs is to move information from a
head-down location (e.g. embedded in the cockpit instrument panel or
displayed on a handheld device) to the forward field of view, allowing
users to read the display without need to look away from their sur-
roundings. Evidence confirms that they, indeed, serve this purpose well
(Ververs & Wickens, 1998; Wickens & Long, 1995).
HUDs and HMDs exploit object-based attention in two diametrically
opposed ways. First, object-based attention enables selective processing,
allowing the operator to process HUD or HMD symbology while filtering
away the background scene against which it is viewed. If selection were
purely spatial or spotlight-based, attention would, presumably, have
difficulty isolating elements of a HUD or HMD from the scene on which
they were superimposed.
Second, object-based attention can enable divided attention, allowing
the operator to process display symbology in parallel with the back-
ground scene. The symbology displayed in a HUD or HMD can be either
non-conformal or conformal (Wickens & Long, 1995). Non-conformal
symbology includes display elements such as gauges or alphanumeric
messages that do not directly align with or demarcate objects in the
outside world. Conformal symbology, however, conforms to, or aligns
with, objects or regions in the outside world. Runway markings and
highway-in-the-sky flight path markings (Fadden et al., 1998) are exam-
ples. Notably, their close alignment allows strong perceptual grouping
between conformal display elements and the background objects to
which they conform, encouraging the object-based spread of attention
between the display and the world. This may allow the operator to
better process the display while simultaneously monitoring the outside
scene (Foyle et al., 1995; Levy et al., 1998; Wickens & Long, 1995).
HUDs and HMDs, however, also carry at least two potential costs to
attentional performance. First, clutter produced by excessive HUD or
HMD symbology may degrade processing of information in the out-
side scene, and, conversely, clutter in the outside scene may degrade
perception of display symbology. Pilots’ ability to follow flight path
information displayed on a HUD, for example, can be hindered by
visual noise in the scene beyond the HUD (Wickens & Long, 1995).
Returning again to an earlier distinction, this is likely a failure of
perceptual attention.
Second, and perhaps more surprisingly, HUDs and HMDs may
induce inattentional blindness for unanticipated events in the outside
scene. Flying an approach to landing, for instance, pilots using a HUD
are slower and less likely than those using a head-down display to notice
that another aircraft has taxied unexpectedly onto the runway in front
of them (Fischer et al., 1980; Wickens & Long, 1995). What might
cause this effect? Psychophysical data (Jarmasz et al., 2005), as well as
electrophysiological data (Valdes-Sosa et al., 1998), indicate that object-
based selection of HUD-like symbols inhibits processing of background
information. This suppression is seen in the P1 and N1 components
of the ERP (Valdes-Sosa et al., 1998), believed to reflect relatively early
visual processing (Heinze et al., 1990; Luck et al., 1990). However, the
finding of HUD-induced blindness for background information also
parallels Mack and Rock’s (1998) laboratory finding that a narrow atten-
tional focus produced inattentional blindness for a surprise probe—a
phenomenon that appears to reflect late selection. These considerations
suggest that an attentional focus on HUD symbology might compromise
background processing at both early and late levels.

Large-scale attention

Much of the discussion to this point in this chapter has concerned atten-
tional processes that tend to operate within a single display or display
channel: visual search is frequently conducted within the bounds of
a computer monitor, and the entire point of object representations and
HUDs is to integrate multiple sources of information efficiently within
a single display. Often, though, an operator is not faced with a single
display, but is surrounded by information channels in a large and com-
plex visual workspace. A pilot or a nuclear power station operator, for
instance, may find himself immersed in meters, gauges and alerts, many
of them separated by distances requiring large eye and head movements.
What guides attention as
an operator monitors such an environment?
Models of supervisory monitoring emerging from the engineering
disciplines have long addressed this issue [Senders (1983) and Sheridan
(1970); see Moray (1986) for review], but have often analysed the mathe-
matical distribution of information across channels with little concern
for other task and stimulus characteristics that might influence attention.
To address these questions, recent work has begun to consider the influ-
ence of psychological processes in the control of scanning behaviour in
large-scale workspaces. Most notably, Wickens’ SEEV model (Wickens &
McCarley, 2008; Wickens et al., 2003, 2008) posits four sources of atten-
tional control within a large-scale environment: stimulus Salience, the
Effort required to execute a shift of attention, the operator’s Expectancy
of finding critical information within various channels as determined
by their information bandwidth, and the Value or importance of the
information contained in each channel.
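SEEV's claim is often summarized as a weighted combination in which salience, expectancy and value raise a channel's attractiveness to attention while effort lowers it. The sketch below shows that form; the coefficients, channel names and parameter values are invented for illustration, not calibrated figures from Wickens and colleagues:

```python
# A minimal sketch of SEEV's usual weighted-combination form. Scores
# are normalized into a predicted distribution of attention (dwell
# probability) over the display channels of a workspace.

def seev_score(salience, effort, expectancy, value,
               s=1.0, ef=1.0, ex=1.0, v=1.0):
    """Attractiveness of a channel: + salience, - effort, + expectancy, + value."""
    return s * salience - ef * effort + ex * expectancy + v * value

def attention_distribution(channels):
    """Normalize (non-negative) SEEV scores into dwell probabilities."""
    scores = {name: max(seev_score(**params), 0.0)
              for name, params in channels.items()}
    total = sum(scores.values())
    return {name: score / total for name, score in scores.items()}

channels = {
    "primary flight display": dict(salience=0.5, effort=0.1,
                                   expectancy=0.9, value=0.9),
    "engine gauge":           dict(salience=0.3, effort=0.6,
                                   expectancy=0.2, value=0.5),
}
print(attention_distribution(channels))
```

A high-bandwidth, high-value channel close to the current line of gaze dominates the predicted scan, which is the qualitative behaviour the model is meant to capture.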
A recent extension of the model (Steelman et al., 2011) formalized
and expanded these ideas. The model integrates elements from exist-
ing quantitative models of basic attentional processes (Bundesen, 1987,
1990; Itti & Koch, 2000; Wolfe, 1994) within the SEEV framework
to predict attentional behaviour in large-scale, dynamic workspaces.
The revised model incorporates two forms of expectancy: channel
prioritization, based on the operator’s knowledge of the bandwidth and
value of the various display channels (cf. Senders, 1983), and feature
prioritization, based on the operator’s attentional set for a given colour
(cf. Wolfe, 1994). The model also distinguishes between two forms of
visual salience: static salience, based on local feature contrast (Itti &
Koch, 2000), and dynamic salience (cf. Yantis & Jonides, 1990), based
on moment-to-moment changes of static salience. Values of static and
dynamic salience are calculated using a well-validated computational
model (Itti & Koch, 2000; Walther & Koch, 2006).
The model operates by first producing a set of base maps representing
various sources of attentional guidance. Each of these maps is then
assigned a pertinence value (Bundesen, 1990) based on the operator’s
task set. Pertinence values are used to weight various sources of attentional
guidance, allowing strategic changes of attentional policy in response
to task demands. For example, to allow attentional guidance driven
by salience in an entirely bottom-up manner, the modeller can assign
positive pertinence values to the static and dynamic salience maps,
and pertinence values of zero to the other maps. Alternatively, to allow
guidance based purely on top-down influences of channel and feature
prioritization, the modeller can assign positive pertinence values to the
two channel priority maps and values of zero to the remaining maps.
Pertinence values are assigned a priori by the modeller or a subject-
matter expert based on judgements about the usefulness of each source
of attentional guidance within a given task (see Steelman et al., 2011 for
a discussion of heuristic methods of assigning pertinence values).
Pertinence-weighted base maps are averaged to produce a master map
of attentional activation (cf. Wolfe, 1994) reflecting the combined influ-
ence of various forms of attentional guidance. A spatial filter is then
applied to the master activation map, damping down activation values
based on their distance from the current point of fixation (cf. Parkhurst
et al., 2002). This process serves to inhibit long attention shifts, simu-
lating the influence of movement effort (e.g. Ballard et al., 1995) and
peripheral acuity losses on attentional scanning. Finally, a probabilistic
choice model (Bundesen, 1987; Luce, 1959) selects the location of the
operator’s next fixation based on the relative attentional activation
levels of the various information channels within the workspace. In
effect, this choice implements a race between information channels to
be selected as the next target for a fixation, where the speed of each
channel is determined by the channel’s attentional activation level
(Bundesen, 1987).
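The pipeline just described (pertinence-weighted base maps averaged into a master activation map, distance-based damping, and a Luce-style probabilistic choice of the next fixation) can be sketched roughly as follows. All map contents and parameter values are invented for illustration; this is not Steelman et al.'s (2011) implementation:

```python
# Rough sketch of the map pipeline: weight base maps by pertinence,
# average into a master activation map, damp activation with distance
# from fixation (movement effort, peripheral acuity), then select the
# next fixation with a Luce choice rule over activation levels.
import numpy as np

rng = np.random.default_rng(0)
shape = (20, 30)                        # coarse grid over the workspace
base_maps = {                           # sources of attentional guidance
    "static_salience":  rng.random(shape),
    "dynamic_salience": rng.random(shape),
    "channel_priority": rng.random(shape),
    "feature_priority": rng.random(shape),
}
# Pertinence values weight each guidance source; here, purely top-down
pertinence = {"static_salience": 0.0, "dynamic_salience": 0.0,
              "channel_priority": 1.0, "feature_priority": 1.0}

# Pertinence-weighted average -> master activation map
master = sum(pertinence[k] * m for k, m in base_maps.items())
master /= sum(pertinence.values())

# Spatial filter: damp activation with distance from current fixation
fix = np.array([10, 15])
ys, xs = np.indices(shape)
dist = np.hypot(ys - fix[0], xs - fix[1])
master *= np.exp(-dist / 10.0)          # decay constant is arbitrary

# Luce choice rule: fixation probability proportional to activation
p = master.ravel() / master.sum()
next_fix = np.unravel_index(rng.choice(master.size, p=p), shape)
print(next_fix)
```

Setting the salience pertinences to zero, as here, reproduces the purely top-down mode of guidance described above; the bottom-up mode is obtained by reversing the weights.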
The model thus incorporates well-established psychological
mechanisms—multiple sources of bottom-up and top-down attentional
control, the possibility of context-driven changes of attentional strategy
and an inhibitory influence of movement effort on scanning behaviour—
to predict attentional behaviour. It can be used to predict both the
steady-state distribution of attention across display channels and the
time that the operator will take to notice a target event once it has
occurred in a given channel. It therefore provides a theory-grounded
method for predicting the effects of display choices on human attentional
performance early in the design process, gauging and minimizing the risk
that critical information will be overlooked, and for retrospectively
assessing display design during accident and incident investigations
(Wickens et al., 2009).
Conclusion

Attention is a fundamental component of human perception and
cognition, and a concern for attentional constraints, biases and compe-
tencies is a fundamental element of display design. Design guidelines
informed by attentional theory can improve operator performance and
alleviate operator workload, ultimately increasing system safety and
productivity.
What’s next in the study of attention and display design? Researchers
have begun to explore the possibility of displays designed not simply
to be attended by a human operator, but to attend to the operator
in turn. These attention-aware systems would monitor the operator’s
attentional state and tailor display parameters to optimize performance
and minimize workload. An attention-aware display might delay task
interruptions until moments of low attentional demand, for instance,
to prevent unnecessary workload spikes (Bailey & Konstan, 2006) or
might dynamically highlight task-critical information that it infers
the operator has not yet noticed (Molenaar & Roda, 2008; Rapp, 2006;
see Chapter 4). Inferences about the user’s attentional state would be
drawn from cognitive models working from behavioral measures such
as eye- or head-movement data (Perreira Da Silva et al., 2008) or neuro-
ergonomic indices such as electroencephalography and ERPs. Of course,
an attention-aware system is likely to draw valid conclusions about
user state only if its judgements are rooted in a sound understanding
of the mechanisms and manifestations of attentional selection (Roda,
2010). The study and design of attention-aware systems will thus be an
important front in neuroergonomic research and will provide another
avenue of application for our still-growing knowledge of basic attentional
mechanisms.
4
Attentional Resources and Control
Paolo Toffanin and Addie Johnson

How we select, modulate and keep focused on information that is
relevant to behaviour is critical to understanding human performance.
Such diverse processes as memory storage and retrieval, action selection
and decision-making cannot be fully described without consideration
of the role that attention plays in them. On the one hand, attention is
involved in the selection and modulation of incoming sensory informa-
tion or information from memory. In this sense, attention determines
the fate of selected items. Items that receive attention are processed
more quickly and remembered better than items that do not receive
attention’s ‘boost’ (Levin & Simons, 1997). On the other hand, because
competition for cognitive resources is an integral part of most activities,
attention is needed to maintain goal-directed behaviour. The role of
attention in modulating sensory processes to select locations or objects
in space was discussed in Chapter 3. In this chapter, we consider the
investment of attentional resources across time and different tasks.
Whether driving a car in the city or playing a game of Ultimate
Frisbee®, multiple stimuli and options for action compete for selection,
and attention is needed to bias behaviour to fit action goals. Tasks dif-
fer in the extent to which they call on the ability to suppress irrelevant
information, maintain focus or divide attention. Understanding and
aiding task performance thus depends on a characterization of which
of these abilities is employed at any point in time. An understanding of
the neural basis of attention sheds further light on how attention sup-
ports performance by maintaining focus, keeping task goals active and
coordinating information processing. A topic in attention particularly
relevant to neuroergonomics is how attention can be measured online
so that the operator can be aided or the task environment adapted to
augment human performance.

Quantifying and describing attention

If we are to monitor human behaviour and adapt the task environ-
ment to improve task performance, it is necessary to be able to measure
and characterize the different aspects of attention. The measurement
of attention is made complicated by the fact that attention shifts are
not always overt (i.e. not always accompanied by eye movements) and
by the fact that the task environments of real interest—such as the
cockpit or the control room—require that a range of attentional abilities
be deployed dynamically. Chapter 1 provided an overview of the
techniques most often used in basic research on human performance.
In this section, we cover how these techniques have been applied to
characterizing attention and, in particular, how they might be used in
realistic task environments.

Eye movements and pupil diameter


The eyes may not be the mirror of the soul, but they tell us much about
how attention is allocated. As discussed in Chapter 3, eye movements
and attention are tightly coupled (Findlay, 2009). For example, the
attention-capturing, abrupt onset of an object drives a saccade toward
it (Ludwig & Gilchrist, 2002). These stimulus-driven saccades are triggered
by sensory events and are automatic in the sense that they occur even
when they hurt task performance. In fact, Theeuwes (2004, 2010) has
argued that items in the visual field capture attention according to their
salience irrespective of the task at hand, and that goal-driven control
comes into play only after capture has taken place. This view is based on
the finding that when a display contains a salient singleton, attention
seems to be captured by that singleton, thus increasing reaction times
to the target (e.g. Theeuwes, 1992).
Voluntary, goal-driven saccades, such as those made in response to a
cue which indicates the possible location of a forthcoming target, serve
the purpose of moving the eye to search for relevant information or
to bring that information into focus. In any environment where eye
movements are needed to bring information into focus, the patterns
of saccades, or scanpaths, can tell us much about how attention is allo-
cated. In addition to providing information about what is looked at,
and where, scanpaths can be analysed to uncover attentional strategies.
According to one theory (Noton & Stark, 1971a, 1971b), observers scan
new stimuli during a first exposure and store the sequence of fixations
in memory as a spatial model. This spatial model is the scanpath. During
subsequent viewings of the same stimulus, the scanpath is followed,
at least in part, thus facilitating stimulus recognition (Noton & Stark,
1971a, 1971b) or search efficiency (Myers & Gray, 2010). The availabil-
ity of new software for computing and comparing scanpaths (Cristino
et al., 2010) will likely lead to new insights into how scanpaths can be
used to predict performance.
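The simplest form of such a comparison is to code each fixation by the display region it lands in and compute a normalized edit distance between the resulting sequences. The sketch below does just that; it is a toy stand-in for tools such as ScanMatch (Cristino et al., 2010), which additionally weight fixation durations and use task-specific substitution costs:

```python
# Toy scanpath comparison: code fixations by area of interest (AOI)
# and score similarity as 1 minus the normalized Levenshtein distance
# between the two AOI sequences.

def edit_distance(a, b):
    """Levenshtein distance between two AOI sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]

def scanpath_similarity(a, b):
    """1.0 = identical fixation sequences, 0.0 = maximally different."""
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))

first_view = ["altimeter", "airspeed", "horizon", "altimeter"]
second_view = ["altimeter", "horizon", "altimeter"]
print(scanpath_similarity(first_view, second_view))
```

On this illustrative pair of viewings the two scanpaths differ by one skipped fixation, so the similarity is 0.75; repeated viewings that follow a stored scanpath closely would score near 1.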
Whereas eye movements allow us to infer where attention has been
allocated, the diameter of the pupil allows us to infer the degree to
which resources are invested in a task. The degree of engagement of a
person with a task (or with another person) is reflected directly in the
diameter of the pupil (Kahneman, 1973). The relationship between
pupil diameter and mental effort was first reported in depth by Hess
and Polt (1964), who measured pupil dilation during the mental
multiplication of two numbers. When the task was relatively difficult
(e.g. 16 × 23) pupil diameter was greater than when the multiplication
was relatively easy (e.g. 7 × 8). The so-called task-evoked pupillary response
(change in pupil diameter as a function of task requirements; Beatty, 1982)
is computed from the raw pupillary record in much the same way as
an event-related potential (ERP) is computed from electroencephalo-
graphic (EEG) activity (see Chapter 1). The averaging process reveals
short-latency (onset 100–200 ms), phasic, task-evoked dilations that
terminate rapidly when processing is completed.
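The averaging step can be sketched as follows: event-locked epochs of the pupil record are baseline-corrected and averaged, exactly as EEG epochs are averaged into an ERP. The data below are synthetic stand-ins for a real pupillometer record:

```python
# Sketch of computing a task-evoked pupillary response by epoch
# averaging. Each simulated epoch starts 200 ms before the task event.
import numpy as np

rng = np.random.default_rng(1)
fs = 60                                  # samples per second (typical eye tracker)
t = np.arange(0, 3, 1 / fs)              # 3 s epochs

# Simulate 40 trials: a slow task-evoked dilation plus measurement noise
dilation = 0.3 * np.exp(-((t - 1.2) ** 2) / 0.5)    # peak ~1.2 s into the epoch
epochs = dilation + rng.normal(0, 0.2, (40, t.size))

# Baseline-correct each trial on its first 200 ms, then average
baseline = epochs[:, : int(0.2 * fs)].mean(axis=1, keepdims=True)
tepr = (epochs - baseline).mean(axis=0)  # task-evoked pupillary response

peak_time = t[np.argmax(tepr)]
print(f"peak dilation {tepr.max():.2f} at {peak_time:.2f} s")
```

As with an ERP, the noise averages toward zero across trials while the event-locked dilation survives, so the phasic response becomes visible in the mean trace.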
Beatty (1982) reviewed several decades of work based on using the
task-evoked pupillary response to reveal the degree of difficulty of per-
ceptual, short-term memory, language, reasoning and attention tasks.
For example, the amplitude of the task-evoked pupillary response is
found to be reduced across a session in a vigilance task, and the reduction
in amplitude is similar to the reduction in performance. More recently,
Kristjansson et al. (2009) applied a polynomial curve-fitting method
for quantifying parameters from single task-evoked pupillary responses.
They used a multilevel modelling framework to identify parameters
associated with long latency responses (responses for which alertness
was presumed to be low) and normal latency responses (presumably
reflecting an alert state). Pupil diameter, linear pupil dilation rate and
curvilinear pupil dilation rate were found to differ significantly between
the long latency and normal latency responses, leading Kristjansson
et al. to suggest that these parameters might be useful neurocognitive
markers of operator state in an alertness monitoring system.
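One way to read this approach computationally is to fit a low-order polynomial to each single-trial trace so that the fitted coefficients (overall diameter, linear dilation rate, curvilinear rate) become per-trial parameters for later modelling. The sketch below does this with numpy.polyfit on synthetic traces; the multilevel modelling step itself is omitted, and all trace shapes are invented:

```python
# Sketch of single-trial curve fitting: a quadratic fit yields a linear
# dilation rate and a curvilinear rate for each pupil trace.
import numpy as np

def pupil_parameters(t, trace):
    """Quadratic fit: returns (mean diameter, linear rate, curvature)."""
    curvature, rate, _ = np.polyfit(t, trace, deg=2)
    return trace.mean(), rate, curvature

rng = np.random.default_rng(2)
t = np.linspace(0, 2, 120)                               # 2 s trace
alert = 4.0 + 0.4 * t - 0.1 * t**2 + rng.normal(0, 0.02, t.size)
sluggish = 4.0 + 0.1 * t - 0.02 * t**2 + rng.normal(0, 0.02, t.size)

_, alert_rate, _ = pupil_parameters(t, alert)
_, slow_rate, _ = pupil_parameters(t, sluggish)
print(alert_rate, slow_rate)    # the alert trace dilates faster
```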

EEG
Whereas overt attentional shifts can be studied with eye movements,
covert shifts of attention (i.e. shifts in attention made without moving
the eyes, head or body; Posner, 1978) cannot. In combination with
behavioural measures, such as reaction time and accuracy in detection
or identification tasks, EEG has been used extensively to study the
covert allocation of attention.

A Event-related potentials (ERPs)


The ERP and its components were introduced in Chapter 1 (see Table 1.1).
In general, the components that occur early in the ERP, in particular
the P1 and the N1, are modulated in amplitude or latency depending
on the degree of attention given to the event used as the reference for
computing the ERP (Eimer, 1994). Later ERP components, such as the
P300 (which reflects the identification of a target object), are modul-
ated by attention in an all-or-none fashion. In the case of the P300, it
is absent for objects that must be ignored (Kok, 1999; Toffanin et al.,
2011), and is delayed as a function of increasing attentional demands
(Dell’Acqua et al., 2005; Vogel & Luck, 2002). Some ERP components
are related directly to specific mechanisms of attention rather than
being modulated by attention. Four examples of such components are
the N2 posterior-contralateral (N2pc), the reorienting negativity (RON),
the ipsilateral-invalid negativity (IIN) and the P4 posterior-contralateral
(P4pc). The N2pc is a negative peak observed about 200 ms after target
onset which reflects spatial shifts of visual attention towards the target
(Woodman & Luck, 1999), or attentional capture (Kiss et al., 2008). The
RON (Schröger & Wolff, 1998) is a negative deflection observed at fronto-
central sites between 400 and 600 ms after onset of a distracting event
and reflects the reorienting of attention towards task-relevant stimuli.
The IIN (Hopfinger, 2005) is a negative-going waveform appearing over
ipsilateral–posterior scalp sites between 200 and 300 ms after the appear-
ance of a target at an uncued location. Hopfinger suggested that the IIN
reflects disengagement from an erroneously cued location and reorient-
ing towards the target location after the onset of the target. Because the
IIN is triggered exogenously, it seems to reflect exogenous disengage-
ment, or disengagement evoked by the capture of attention by another
object. Toffanin et al. (2011) proposed that endogenous attentional
disengagement might be reflected by a positivity observed 400 ms after
target onset at posterior–contralateral sites (the P4pc). The P4pc can be
interpreted as reflecting the undoing of attentional capture as required
to prepare for the onset of a forthcoming target.
An extensively used paradigm for measuring the spatial allocation
and control of visual attention is the Posner cuing paradigm (Posner
et al., 1980). In this paradigm observers are cued to direct attention
(while keeping the eyes at a central fixation point) toward the left or
right by the appearance of a cue in the left or right hemifield respectively.
When the interval between the cue and target is short (about 100 ms)
and the cue is valid, such that the cue and target appear on the same
side, reaction time to the target is faster than when the cue is invalid
(i.e. when cue and target appear on different sides). Posner and Petersen
(1990) explained this cuing effect in terms of a sequential model accord-
ing to which attention must be disengaged from the cued location on
invalid trials before being moved to the target location. A common
finding using the Posner cuing paradigm is that the P1 component
of the ERP is enhanced for targets in the cued versus uncued location
(e.g. Hopfinger & Mangun, 1998). Additionally, three ERP components
[the early directing attention negativity (EDAN); the anterior directing
attention negativity (ADAN); and the late directing attention positivity
(LDAP); Eimer et al. (2003) and Praamstra et al. (2005)] have been identi-
fied as occurring in the time interval between the onset of the cue and
the onset of the target, and are therefore thought to reflect the orienting
of covert attention in anticipation of an expected event. The EDAN is a
negative deflection measured at occipital electrodes contralateral to the
direction indicated by the cue and is thought to reflect the decoding of
the direction indicated by the cue. The ADAN is a negativity observed
at frontal sites contralateral to the direction indicated by the cue and is
thought to reflect the initiation of an attention shift. Finally, the LDAP
is a posterior positivity contralateral to the direction indicated by the
cue and seems to reflect preparatory activation of the visual cortex in
anticipation of the onset of the target.
Components such as the EDAN, ADAN and LDAP may provide inform-
ation about where attention will be allocated or how information will be
processed (Eimer et al., 2003). However, these components are not always
found when expected. It has been suggested that the LDAP will only
appear when attention-directing cues accurately indicate when a target
will appear, which limits the usefulness of the component as a predictor
of readiness to process visual information (Green & McDonald, 2010).
In this respect, lateralized changes in alpha-band EEG oscillations
(see Chapter 1), which have also been linked to biasing of visual cortex
in anticipation of an impending target (e.g. Worden et al., 2000), may
provide a more reliable index of upcoming performance.

B EEG rhythmicity
Although ERPs have proven to be useful in the study of the temporal
dynamics of attention, they are limited in that they fail to capture
brain activity related to stimulus processing that is not time-locked to
event onset (i.e. induced activity; Tallon-Baudry & Bertrand, 1999), nor
is it possible to draw conclusions about how different brain networks
(or regions of interest) interact on the basis of ERPs alone. Both induced
activity and interactions between brain networks can be visualized
using time-frequency analysis of the EEG (Donner & Siegel, 2011; Fries,
2005). Time-frequency analysis involves quantifying the amplitude
(or power) of a certain frequency band across time. The frequency band
is determined by the experimenter and typically includes frequencies
ranging from 0.1 to 100 Hz. One way of using the resulting time-frequency
spectrograms is to compare them for different experimental conditions
much as one might compare results of functional magnetic resonance
imaging (fMRI). Another approach is to compute coherence values, that is,
frequency-specific 'correlations' across time between regions of
interest covered by the electrodes used. These coherence values reflect
whether two or more regions of interest are in communication with
each other.
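Both analyses can be sketched in a few lines with standard signal-processing routines. The signals below are synthetic stand-ins for EEG, and all parameter choices are illustrative:

```python
# Minimal sketch of (a) a time-frequency spectrogram for one channel
# and (b) spectral coherence between two channels that share a 10 Hz
# (alpha-band) component, asking whether they are 'in communication'.
import numpy as np
from scipy import signal

rng = np.random.default_rng(3)
fs = 250                                   # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
shared = np.sin(2 * np.pi * 10 * t)        # common 10 Hz component
chan_a = shared + rng.normal(0, 1, t.size)
chan_b = 0.8 * shared + rng.normal(0, 1, t.size)

# Time-frequency decomposition: power per frequency band across time
freqs, times, power = signal.spectrogram(chan_a, fs=fs, nperseg=fs)

# Coherence: frequency-resolved coupling between the two channels
f, coh = signal.coherence(chan_a, chan_b, fs=fs, nperseg=fs)
alpha_coh = coh[np.argmin(np.abs(f - 10))]
print(f"coherence at 10 Hz: {alpha_coh:.2f}")
```

Because the shared oscillation is confined to 10 Hz, coherence is high at that frequency and near chance elsewhere, which is the pattern one would take as evidence of communication within an alpha-band network.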
Time-frequency analysis has led to new insights about attention,
including how it is allocated. For example, Lakatos et al. (2009)
performed a time-frequency analysis of data recorded from primary
visual and auditory cortices in macaques performing a cross-modal
selective attention task. They observed that the amplitude of the EEG
response increased only in the modality-specific area corresponding
to the attribute of the stimulus to be attended. However, supramodal
modulation of the EEG by attention was also observed: the phases of
the oscillations in both cortices were synchronized to the onset of the
attended stimuli, regardless of the modality to be attended.
The EEG rhythm most commonly coupled to attention mechanisms
is the gamma rhythm (30–80 Hz; Fries et al., 2001). Gradual increases
in the level of gamma synchronization have been found to depend on
the degree of attention directed toward a stimulus (Kahlbrock et al.,
2012), suggesting that gamma rhythm could be used as a reliable
measure of attentional allocation. The relation between gamma-band
synchronization and attention is, however, likely to be more complex
than a simple increase in synchronization as a function of the amount
of attention allocated. In fact, gamma oscillations have been linked
to oscillations at lower frequencies, and the role of gamma oscillations
in selecting information may depend on interactions between gamma
rhythms and lower-frequency ones, such as the alpha and theta rhythms. For
example, Fries (2009) described the process of viewing natural scenes as
one of segmenting the scene and selecting the relevant segment. In this
context segmentation is served by a lower rhythm (theta or alpha) and
selection by the faster gamma rhythm. Attention to, or enhancement
of, specific objects results from iterative loops in which the visual scene
is segmented at lower rhythms and the relevant segment is selected with
the faster rhythm until the object of interest is selected and ‘in focus’.
Brain rhythms at frequencies other than gamma have been related to
the investment of resources. For example, high-amplitude alpha rhythm
is associated with a state of cortical ‘idling’ (i.e. a resting state) and lower
alpha amplitude (desynchronization) is associated with task engage-
ment (Klimesch, 1999). Frontal theta synchronization (which produces
high-amplitude theta rhythm), however, is higher when one is engaged in
a task (Gevins et al., 1998). Beta has also been related to resources
invested in a task, with increased amplitude in the beta rhythm being
related to increasing task difficulty (Brookings et al., 1996). Because
alpha, theta and beta activity all are related to task engagement, Pope
et al. (1995) proposed the engagement index, an index of the workload asso-
ciated with a task that combines information about the alpha, beta and
theta rhythms according to the formula (beta power/(alpha power +
theta power)). This index has been used experimentally in real time to
determine whether an operator is over- or under-loaded while perform-
ing a task (Freeman et al., 1999).
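As a concrete illustration, the engagement index can be computed from the spectrum of a single EEG epoch. The sketch below is a minimal Python example on synthetic data; the band boundaries (4–8, 8–13 and 13–30 Hz) and the use of a simple periodogram are illustrative assumptions, not the exact procedure of Pope et al. (1995).

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Mean spectral power of `signal` within [f_lo, f_hi) Hz (periodogram)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= f_lo) & (freqs < f_hi)
    return psd[band].mean()

def engagement_index(eeg, fs):
    """Pope et al. (1995): beta power / (alpha power + theta power)."""
    theta = band_power(eeg, fs, 4, 8)
    alpha = band_power(eeg, fs, 8, 13)
    beta = band_power(eeg, fs, 13, 30)
    return beta / (alpha + theta)

# Demo on synthetic epochs: an alpha-dominated (10 Hz, 'idling') signal
# should yield a lower index than a beta-dominated (20 Hz) signal.
fs = 250
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
noise = rng.normal(0, 0.1, t.size)
idle = np.sin(2 * np.pi * 10 * t) + noise
engaged = np.sin(2 * np.pi * 20 * t) + noise
assert engagement_index(engaged, fs) > engagement_index(idle, fs)
```

In a closed-loop system such as Freeman et al.'s, an index like this would be recomputed over a sliding window and compared against thresholds.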

C Steady-state evoked potentials


An alternative way of measuring attentional allocation can be seen as
a compromise between the use of ERPs and time-frequency analysis of
the EEG. This method involves measuring the steady-state potentials
evoked by rapidly changing, repetitive stimulation (Regan, 1989).
Regan (1966) introduced the steady-state evoked potential (SSEP) as a
means of overcoming some of the disadvantages of ERP analysis, such
as the sensitivity of the ERP signal to muscle or movement artefacts
and the difficulty of determining the spectral composition of the ERP.
In this method, repetitive stimulation (such as a flashing background
of a certain frequency) evokes an ERP-like waveform in the EEG which
is repeated for the duration of the repeated stimulus; these repetitions
increase the frequency resolution of the SSEP. The power of the SSEP is
thus concentrated within the frequency band of the stimulation and
can be extracted easily from noise.
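Because the SSEP power is concentrated at the stimulation frequency, it can be estimated from a Fourier spectrum by comparing the power in the tagged frequency bin with that of its neighbouring bins. A minimal sketch on synthetic data (the 15-Hz flicker rate, epoch length and neighbour count are arbitrary choices for illustration):

```python
import numpy as np

def ssep_snr(eeg, fs, f_tag, n_neighbours=4):
    """Power at the tagging frequency divided by the mean power of the
    neighbouring frequency bins: a crude signal-to-noise estimate."""
    spec = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), 1.0 / fs)
    k = int(np.argmin(np.abs(freqs - f_tag)))
    noise = np.r_[spec[k - n_neighbours:k], spec[k + 1:k + 1 + n_neighbours]]
    return spec[k] / noise.mean()

# Synthetic epoch: a 15-Hz steady-state response buried in noise.
fs = 250
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
eeg = 0.5 * np.sin(2 * np.pi * 15 * t) + rng.normal(0, 1, t.size)
assert ssep_snr(eeg, fs, 15) > ssep_snr(eeg, fs, 11)
```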
An aspect of the SSEP that may make it especially useful in neuro-
ergonomic applications is that it is modulated by visuospatial attention
(e.g. Morgan et al., 1996). Use of the SSEP to track attentional allocation
is sometimes referred to as ‘frequency tagging’ (Tononi et al., 1998).
For example, when two streams of visual information are presented on
backgrounds of two different frequencies, and the observer is instructed
to attend to either one stream or the other, the amplitude of the SSEP
is greater for the frequency at the location to be attended. Moreover,
the amplitude of the SSEP reflects the amount of attention received by
the object evoking the SSEP, being greater when attention is selectively
devoted to one stream of information rather than divided across two
streams of information (Toffanin et al., 2009). Although the relation-
ship between attentional allocation and the visual SSEP is robust, the
link between the auditory SSEP and attentional allocation is not as
transparent: some studies report modulation of auditory steady-state
responses by attention (e.g. Saupe et al., 2009), but others do not
(e.g. de Jong et al., 2010; Linden et al., 1987). Attending to the frequency
of the oscillation (and not just to the target stimulus) might be a neces-
sary condition for modulation of the auditory SSEP by attention (Saupe
et al., 2009). Alternatively, many of the studies that have reported null
effects of attention on auditory SSEPs may have been confounded
because attention has an effect on the power of gamma band oscil-
lations in the visual areas, but not in the auditory areas (Kahlbrock
et al., 2012), and many auditory SSEP studies have used a 40-Hz (gamma
band) oscillation. More work on the use of the SSEP as an index of
auditory attention is needed before conclusions as to the efficacy of the
measure can be reached.
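In a frequency-tagging design like the one just described, the attended stream can in principle be inferred by comparing SSEP amplitudes at the two tag frequencies. A toy sketch of this decision rule (the tag frequencies and the simulated attentional gain are invented for illustration; real systems classify noisy single-channel data):

```python
import numpy as np

def attended_tag(eeg, fs, tag_freqs):
    """Return the tag frequency with the largest spectral amplitude;
    attention enhances the SSEP evoked at the attended location."""
    spec = np.abs(np.fft.rfft(eeg))
    freqs = np.fft.rfftfreq(len(eeg), 1.0 / fs)
    amps = [spec[np.argmin(np.abs(freqs - f))] for f in tag_freqs]
    return tag_freqs[int(np.argmax(amps))]

# Two streams flicker at 12 and 15 Hz; attending the 12-Hz stream is
# simulated as a larger response at that frequency.
fs = 250
t = np.arange(0, 2, 1 / fs)
eeg = 1.0 * np.sin(2 * np.pi * 12 * t) + 0.4 * np.sin(2 * np.pi * 15 * t)
assert attended_tag(eeg, fs, [12, 15]) == 12
```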

Brain networks and fMRI


Many fMRI studies have been conducted in an attempt to pinpoint
which brain areas and networks are responsible for attentional select-
ion and attention orienting. As discussed in Chapters 1 and 2, two
processing pathways are involved in visual processing, and each of
these may be modulated by attention. The occipito-temporal, or
ventral path, carries information regarding object identity from V1
and V2 in the occipital lobes to the inferotemporal cortex (IT) and
V4 where the object is ‘recognized’, and on to ventral area 46 in the
prefrontal cortex (PFC) if the recognized object is to be maintained
in working memory (Deco & Zihl, 2006; Desimone & Duncan, 1995).
The occipito-parietal, or dorsal, path is involved in processing the
spatial location of the object. Information is carried from V1 and V2
to the posterior parietal cortex (PPC), where object location and the
spatial relationship of an object with other objects are processed. The
dorsal part of area 46 in the PFC is involved in maintaining the spatial
location of the object.
The structures of the dorsal and ventral pathways illustrate the
dependency between working memory and attention (Knudsen, 2007;
Miller & Cohen, 2001). Information about the current target is assumed
to be stored in a ‘template’ in area 46 of the PFC. The template influ-
ences the competition between stimuli in V1 and V2 by means of
recurrent loops in PPC and IT. Endogenous or top-down attention
results from interaction between PFC, PPC and IT: feedback biases the
primary visual areas to process information about identity and location
present in the template (for extensive reviews see Corbetta et al., 2008,
Corbetta & Shulman, 2002, and Kastner & Ungerleider, 2002).
fMRI studies have revealed that the presence or absence of activity
in PFC provides a measure of attention in terms of cognitive control
(Miller & Cohen, 2001). With the passing of time the execution of
a repetitive task becomes increasingly automatic, as reflected by the
withdrawal of attention from the task. In other words, a consequence
of practice is the reduction of the need for active control in coordinat-
ing the actions required to achieve a goal. This is reflected by a shift of
cerebral activity; whereas performance of a novel task is characterized
by interplay between frontal and parietal areas, automated tasks can be
performed relying on parietal areas only (Petersen et al., 1998). Miller
and Cohen proposed that this shift stems from withdrawing cognitive
control from the areas necessary for the task. PFC involvement is thus
necessary for the coordination of the different brain areas involved
in performing a task when the task is novel, but, with practice, new
connections between task-relevant areas are made, circumventing the
need for cognitive control.

fNIRS
Following its introduction in 1993 by Villringer et al., functional
near infrared spectroscopy (fNIRS) (see Chapter 1) has become an
increasingly popular measure of attention (Huppert et al., 2006). The
portability and user-friendliness of fNIRS are promoting its popularity
among neuroergonomists.
That fNIRS is a reliable index of the investment of resources in a task
was shown in a study by Ayaz et al. (2012). As mentioned in Chapter 1,
fNIRS measures changes in concentration of oxygenated and deoxygen-
ated haemoglobin. Therefore, increases in resource investment should
be reflected by a relative increase in oxygenation when comparing high
versus low task load conditions. To establish the reliability of fNIRS as
an indicator of mental workload (defined as the difference between the
resources available to the operator and the resource demand of the task),
Ayaz et al. monitored fNIRS responses during an n-back task, a standard
memory task in which participants monitor a stream of individually
presented stimuli and indicate whenever the current stimulus matches
the one before (i.e. the one on trial n − 1; the ‘1-back’ task) or the one on
trial n − 2 (the 2-back task) and so forth (Smith & Jonides, 1997), as
well as during a complex real-life task (i.e. air traffic control). Moreover,
in order to investigate whether it was possible to capture changes in
brain activity as a function of practice or developing expertise, fNIRS
was measured across 9 consecutive days during 2–3-hour sessions,
during which participants learned to manoeuvre a simulated unmanned
air vehicle.
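The target structure of the n-back task is simple to state in code: a trial requires a 'match' response whenever the current item equals the item n positions back, and increasing n increases the working-memory load. A minimal sketch (the letter stream is invented):

```python
def nback_targets(stream, n):
    """Indices at which the current item matches the item n positions
    back, i.e. the trials requiring a 'match' response."""
    return [i for i in range(n, len(stream)) if stream[i] == stream[i - n]]

letters = list("ABAACCA")
assert nback_targets(letters, 1) == [3, 5]  # 1-back: immediate repeats
assert nback_targets(letters, 2) == [2]     # 2-back: position 2 matches position 0
```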
Because the left PFC, located in the inferior frontal gyrus, reflects
working-memory-related activity, Ayaz et al. (2012) measured differ-
ences in fNIRS between high- and low-memory load conditions from a
sensor located above the left PFC, at the inferior frontal gyrus. Higher
oxygenation was observed for a 3-back task than for a 0- or 1-back task,
but not for a 2-back task. Differences in task demands in the air traf-
fic control task were measured from a different site, within the medial
PFC, or frontopolar cortex, and showed a difference in oxygenation
when the easy version of the task was compared with the hard one.
Levels of oxygenation changed with increments in task practice in line
with the general reduction in brain activity following practice that has
also been observed in imaging studies (Kelly & Garavan, 2005; Petersen
et al., 1998): Average total haemoglobin measured from the left PFC was
higher during the beginning than during the advanced phase of the
training. In summary, although subtle changes in task load may not
always be reflected in blood oxygenation differences as measured by fNIRS,
there is promise that fNIRS could be an effective tool for characterizing
task demands.

Augmented interaction

As discussed in Chapter 1, identifying neural markers of operator state
that can be used to predict performance in the short term is one of the
most important goals in neuroergonomics. With regard to attention,
much evidence for the existence of such markers comes from fMRI and
EEG studies. However, embedding EEG and fMRI in real-world settings
is beset by practical problems. In addition to cost considerations, these
problems include that (a) fMRI, in particular, is not portable, (b) both
EEG and fMRI require trained personnel for measurement and analysis,
and (c) most research has relied on averaging across many trials.
Moreover, many of the attentional measures that can be made with
fMRI and EEG require nearly total immobilization of the participant.
The susceptibility of the techniques to motion artefacts has, until now,
precluded the use of EEG and fMRI in many applied settings, but issues
of portability and embedding of EEG, in particular, are gradually being
resolved (Parasuraman, 2011b).

Brain–computer interfaces
Much of the excitement about the use of EEG and other methods to
trace correlates of attention in real time comes from research on brain–
computer interfaces (BCIs; also referred to as brain–machine interfaces,
or BMIs). BCIs rely on measurement of brain activity to interact with a
computer. A BCI aims to support, enhance or substitute human func-
tion to ‘elevate the computer to a genuine prosthetic extension of the
brain’ (Vidal, 1973, p. 158). In general, a BCI translates brain activity
into computer commands (Cecotti, 2011). Most BCI applications have
been clinical in nature. For example, Donchin and colleagues (Donchin
et al., 2000; Farwell & Donchin, 1988) describe how locked-in patients
(patients who are essentially immobilized and unable to speak) could
learn to use a ‘P300 speller’. The P300 speller works on the principle that
the appearance of an infrequent target evokes a P300. In the original
P300 speller, a 6 × 6 matrix of characters is presented. The patient is
to focus attention on just one of the 36 characters of the display (the
one they wish to ‘spell’) while the individual rows and columns of the
matrix are intensified one at a time in a rapid (e.g. 100 ms with 75 ms
between intensifications), random sequence. The probability that a row
or column containing the target is intensified is one in six. Because tar-
gets are rarely highlighted, they can be considered ‘oddballs’ and should
elicit a P300 (Donchin, 1981). The EEG of the patient is measured while
the task is performed, and the P300 is computed online and linked to
the symbol that evoked it. The interface then displays the selected let-
ter. Research on the P300 speller illustrates many characteristics of BCI
research.
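The selection logic of the P300 speller can be sketched as a small function: the character at the intersection of the row flash and the column flash that evoked the largest averaged P300 is the one selected. The matrix layout and amplitude values below are invented for illustration, and a real system must classify noisy single-trial EEG rather than taking clean averaged amplitudes as given.

```python
import numpy as np

# One possible 6 x 6 speller layout (illustrative; layouts vary).
MATRIX = ["ABCDEF", "GHIJKL", "MNOPQR", "STUVWX", "YZ0123", "456789"]

def classify_character(p300_rows, p300_cols):
    """Select the character at the intersection of the row and the column
    whose intensifications evoked the largest average P300 amplitude."""
    r = int(np.argmax(p300_rows))
    c = int(np.argmax(p300_cols))
    return MATRIX[r][c]

# Hypothetical averaged P300 amplitudes (arbitrary units) per row and
# column flash while the user attends 'P' (row 2, column 3).
rows = [0.8, 1.1, 4.9, 0.7, 1.2, 0.9]
cols = [1.0, 0.6, 0.9, 5.2, 1.1, 0.8]
assert classify_character(rows, cols) == "P"
```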
Since the introduction of the P300 speller, much work has been done
to improve the online calculation and classification of the P300, and to
improve the speed of spelling (e.g. Allison & Pineda, 2006; Cecotti,
2011; McFarland et al., 2011; Pires et al., 2012). Spelling devices have
also been based on the visual SSEP (Gao et al., 2003). SSEP-based spelling
devices use the changes in amplitude of the SSEP evoked by an object
presented on an oscillating background to determine what object or
command is receiving attention. Stimulus selection requires simply
80 Neuroergonomics

that attention be focused on the oscillating background of the desired


command. The major advantage of SSEP-based BCI in comparison with
other systems is that a lengthy calibration period (during which the user
learns to make the appropriate responses and the classifier is ‘taught’ to
recognize them) is not required: the system is ready to spell as soon as
the participant has been prepared for EEG acquisition.
SSEP-based BCIs have also been used for tasks such as map-based
navigation (Bakardjian et al., 2010), control of neuroprosthetic devices
(Muller-Putz & Pfurtscheller, 2008) and video gaming (Lalor et al.,
2006). Importantly, whereas in the P300 speller attention can be
directed toward only one command at a time, in a SSEP-based BCI
attention can be directed toward multiple commands simultaneously.
However, simultaneous execution of different commands has not yet
been introduced to SSEP-based BCIs. Moreover, it appears that part
of the activity driving the SSEP in these applications is related to
eye movements (Cecotti, 2011). Given that eye tracking also has the
potential of establishing which object the operator is focusing on—and
enjoys the advantage of being simpler to use and analyse than EEG—it
still needs to be proven that the SSEP gives more information than eye
movements alone.
Still other types of spellers are based on imagined movement as, for
example, moving the right hand or the left foot (Ramoser et al., 2000).
Motor-imagery BCI uses spatial information in the EEG that is available
because activity related to lateralized movements is also lateralized in
the brain, and hand and foot movements are represented in different
brain locations. Motor-imagery BCI is based on identifying the brain
state correlated with thinking of a lateralized movement and using this
information to send commands to the computer. For example, in the
Hex-o-spell graphical user interface (Müller et al., 2008; see Figure 4.1)
the participant attempts to control a cursor displayed on the centre of
the screen which rotates when, for example, the participant imagines
moving the right hand and stops when the user imagines moving
the left hand. The goal is to point the cursor to one of six hexagons
arranged around a circle to select a command within that hexagon.
Once a hexagon is selected, the commands within that hexagon are
distributed around the circle such that just one command is in each of
the hexagons around the circle (see Figure 4.1).
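The two-stage selection logic of an interface like Hex-o-spell can be sketched abstractly: the first motor-imagery selection picks a hexagon, whose contents are then redistributed over the six hexagons for a second selection. The layout below is invented for illustration and does not reproduce Müller et al.'s actual character arrangement.

```python
def hex_o_spell(layout, first_choice, second_choice):
    """Two-stage selection: dwell on a hexagon to select its letter group,
    then, after the group is spread over the hexagons, dwell again to
    select a single item from it."""
    group = layout[first_choice]
    return group[second_choice]

layout = [tuple("ABCDEF"), tuple("GHIJKL"), tuple("MNOPQR"),
          tuple("STUVWX"), ("Y", "Z", ".", ",", "?", "!"), ("<", " ")]
assert hex_o_spell(layout, 1, 2) == "I"  # hexagon 1 holds G-L; item 2 is 'I'
```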
BCIs based on motor imagery are active BCIs because the brain activ-
ity used by the BCI system is generated by the user independently of
external events (Zander & Kothe, 2011). SSEP- and P300-based BCIs are,
instead, reactive BCIs in the sense that a brain response is evoked by an
external stimulus (an oscillation or an oddball stimulus, respectively).

Figure 4.1 A user interface such as that used by Müller et al. (2008). The image
on the left shows the interface as it is displayed when the user ‘moves’ the arrow
to select the hexagon containing the letter ‘I’. The image on the right shows
the interface as displayed once the original contents of the hexagons have been
replaced by the items in the previously selected hexagon

In some senses active BCIs can be considered a more ‘pure’ form of
brain–machine communication because the user evokes a brain state
and the machine interprets the user’s state. Active BCIs are, however,
susceptible to BCI illiteracy (Kübler & Müller, 2007), the phenomenon
that only some users can learn how to interact with the machine.
To solve the problem of BCI illiteracy recent approaches have com-
bined different types of BCI in one, hybrid, BCI (Pfurtscheller et al.,
2010). Brunner et al. (2010), for example, created a model hybrid BCI
based on SSEP and motor imagery. The model was based on EEG data
of participants who had been shown arrows and instructed either to
imagine moving the corresponding hand (e.g. left-pointing arrow =
imagine left hand movement) or to pay attention to a set of spatially
corresponding, flickering light-emitting diodes (LEDs) (e.g. left-pointing
arrow = attend LEDs on the left side of the computer screen) or to do
both. The performance of the model was better when both the motor
imagery and SSEP signals were used than when only one of the signals
was used. Moreover, the authors showed that this was not an artefact
owing to the fact that more data was supplied to the classification
algorithm. However, in a follow-up study in which the hybrid BCI was
actually implemented online, the performance of the hybrid BCI was less
promising than in the simulation study (Brunner et al., 2011a). In fact,
the performance of the hybrid BCI was not significantly better than the
performance of an SSEP-based BCI. Moreover, participants reported that
using the hybrid BCI was more difficult than using the SSEP-based BCI,
likely a result of dual-task interference from having to imagine hand
movements while focusing attention on the LEDs.
Other researchers have proposed applications of BCI based on the
fMRI signal (Weiskopf et al., 2004a). The fMRI allows a very fine-
tuned analysis of the spatial distribution of brain activity (Haynes &
Rees, 2006; Spiers & Maguire, 2007) and therefore could potentially
be used to implement more BCI commands than an interface using
EEG. Moreover, recent developments in fMRI research suggest that the
time constraints associated with acquiring and processing an MRI image
(approximately 1 s) do not pose a significant limitation for the analysis
of the signal in real time. Weiskopf et al. (2007), for example, showed
that fMRI can be used for self-regulation of brain activity, or neurofeed-
back (Weiskopf et al., 2004b). Factors which have prevented fMRI-based
BCI from becoming more popular than EEG are that fMRI is more
expensive, less portable, requires more training and relies more heavily
on skilled personnel than does EEG.

Adaptive interfaces
Whereas BCI research, as such, has focused mostly on clinical applica-
tions or on active and reactive BCIs, research on adaptive interfaces
focuses on using information about operator state to allocate tasks
to the operator versus the machine in work environments (Sheridan,
2011). Adaptive interfaces are intended for use in any environment in
which tasks or processes are partly, or fully, automated. Such a system
requires that information about operator state can be measured and
classified in real time and that some tasks can be allocated to either
the human operator or to the machine itself. Adaptive systems have
the potential to solve problems of operator underload and overload
(Parasuraman, 2011b; Young & Stanton, 2002a, 2002b). When an
operator is underloaded, arousal levels may decrease below a desired
level or an operator may become complacent and fall ‘out of the
loop’, losing situation awareness as a result (Wiener & Curry, 1980).
Effects of overload include excessive mental workload (defined as the
difference between the processing demands imposed by a task and
the resources available to the operator at a given point in time), stress
or other costs of compensating for the need to maintain primary
task performance, or performance decrement (Matthews et al., 2000;
Sperandio, 1978). Thus, accurate, online assessment of mental work-
load has the potential to reduce human error by signalling overload
(or underload) and may provide data that can be used to modify the
task environment to match the available resources of the operator to
task demands.
Most work on adaptive interfaces takes the approach of measuring
mental workload online, and classifying load as either too low or too
high (e.g. Byrne & Parasuraman, 1996). In order to create an adaptive
system, one must therefore have a reliable indicator of load and a means
of calculating load online. In the past several decades most research
has focused on cardiovascular and EEG measures of load. For example,
the amplitude of the P300 component of the ERP, which reflects the
classification of a target object (Donchin, 1981), has been shown to be
sensitive to workload. The P300 is an attractive option in adaptive auto-
mation because it is sensitive to the momentary demands of the task. The
added value of the P300 as an indicator of workload was demonstrated
by Prinzel et al. (2003). They had participants perform a compensatory
tracking task together with an auditory oddball task. EEG was measured
and the tracking task was switched from a manual to an automated mode
based on the engagement index (Pope et al., 1995). Performance of those
for whom the adaptive automation was based on the engagement index
was better than for yoked control participants (i.e. participants who
received the same automation schedule as that of a participant in the
EEG group). ERPs were computed offline, and the P300 evoked by the
auditory oddball stimulus was found to parallel the sensitivity to task
demands of the performance and subjective measures across conditions.
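A minimal sketch of the allocation logic in such a system: a running workload estimate (e.g. an EEG-derived index) is compared against thresholds, with hysteresis so that the task is not toggled on every sample. The thresholds and the direction of switching are illustrative assumptions, not the actual policy of Prinzel et al. (2003).

```python
def allocate(workload, mode, low=0.3, high=0.7):
    """Hand the task to automation when the operator appears overloaded,
    and back to manual control once workload has dropped again; the gap
    between `low` and `high` prevents rapid mode toggling."""
    if mode == "manual" and workload > high:
        return "auto"
    if mode == "auto" and workload < low:
        return "manual"
    return mode  # inside the hysteresis band: leave the mode unchanged

mode, history = "manual", []
for estimate in [0.5, 0.75, 0.6, 0.28, 0.4]:
    mode = allocate(estimate, mode)
    history.append(mode)
assert history == ["manual", "auto", "auto", "manual", "manual"]
```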
The measurement of EEG may also provide insight into dual-task
demands. In one study of changes in workload due to dual-task demands,
Lei and Rötting (2011) measured the EEG of people who drove in a
driving simulator while performing an n-back task. The difficulties of
the driving and n-back tasks were manipulated, and modulations of the
EEG spectrum evoked by the changes in task difficulty were measured.
Lei and Rötting found that alpha power was attenuated and theta power
was increased when workload was high compared with when it was
low. Most importantly, the changes in power depended on which task
was manipulated, with changes in alpha power being more sensitive
to workload changes during the driving task and changes in theta
power being more sensitive to workload changes in the n-back task.
These results suggest that it may be possible to use combinations of
different frequency bands to determine which tasks should be supported.
The results also suggest that the use of a combination of different
frequencies, as in the engagement index (Pope et al., 1995), may provide
a more general and reliable estimate of mental workload than reliance
on any one frequency band.
Basing adaptive support on cardiovascular measures, such as heart
rate variability (which decreases as workload increases; Tattersall &
Hockey, 1995), is less intrusive and therefore potentially more widely
applicable than EEG-based adaptive support. For example, cardio-
vascular measures might be used to make the task of the ambulance
dispatcher easier. Ambulance dispatching requires that emergency situ-
ations be understood and that ambulances be dispatched to accident
sites as promptly as possible while keeping coverage of the region of
which the operator is in charge. Mulder et al. (2009) showed that it was
possible to track workload in an ambulance-dispatching task simulator
with measures such as heart rate variability. However, the task support
they provided in high-workload conditions (shading the area on a map
of the region being monitored that an ambulance could cover within
15 minutes) did not result in performance improvements. These results
point to a challenge in adaptive automation: knowing that the operator
is overloaded is not enough—there must also be a means available of
supporting the operator in a meaningful way.
A marker of resource investment discussed earlier in this chapter is
pupil dilation. Pupil dilation has been used together with eye move-
ment activity to measure workload in applied settings. When people
observe a screen or display, scanpaths are characterized by a certain
amount of randomness or entropy. One important finding is that
scanpath randomness is inversely related to workload (Harris et al., 1986):
as workload increases, scanning patterns become more stereotyped.
Whether or not performance suffers will depend on whether the relevant
information is still viewed as scanpaths become more stereotyped.
Hilburn et al. (1995) used scanpath randomness—together with pupil
dilation and heart rate variability—in an adaptive decision-aiding system
for air traffic control. When high workload was detected support was
provided by reallocating part of the human task to the machine (for
similar applications based on the relationship between eye movements
and workload see, e.g., Ahlstrom & Friedman-Berg, 2006, and Di Stasi
et al., 2010).
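Scanpath randomness can be quantified, for example, as the Shannon entropy of first-order transitions between areas of interest; a stereotyped back-and-forth scan then yields lower entropy than a widely distributed one. This is one common operationalization rather than necessarily the measure used by Harris et al., and the fixation sequences below are invented.

```python
import math
from collections import Counter

def transition_entropy(fixations):
    """Shannon entropy (bits) of first-order transitions between areas of
    interest; lower entropy indicates more stereotyped scanning."""
    pairs = list(zip(fixations, fixations[1:]))
    counts = Counter(pairs)
    total = len(pairs)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

widely_distributed = list("ABCDABDCACBD")
stereotyped = list("ABABABABABAB")
assert transition_entropy(stereotyped) < transition_entropy(widely_distributed)
```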
Multiple workload indices were also used in a study by Hwang
et al. (2008) in which the workload of operators performing the shut-
down procedure for a nuclear power plant was measured. Hwang et al.
estimated workload from a combination of measures, which included
parasympathetic/sympathetic ratio, heart rate, and diastolic and systolic
blood pressure (all of which tend to increase as workload increases),
and heart rate variability and eye blink frequency and duration
(all of which tend to decrease as a function of increasing workload).
The different indices were used as input to a neural network model, and
the model was run to determine the contribution of each parameter to
workload in the task (the procedure by which the weights were assigned
can be likened to the estimation of coefficients in a multiple regression
model). All seven of the predictors used by Hwang et al. were found to
contribute significantly to the capacity of the neural network model to
discriminate between workload states.
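The idea of combining several physiological indices into a single workload classifier can be sketched with a single-neuron 'network' trained by gradient descent on synthetic data. The seven features, their directions of change and all numbers below are invented stand-ins for Hwang et al.'s measures; their actual model was a larger neural network.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(load, n=200):
    """Synthetic observations of seven physiological indices. The first
    four (sympathovagal ratio, heart rate, diastolic and systolic blood
    pressure) rise with workload; the last three (heart rate variability,
    blink frequency, blink duration) fall. All values are made up."""
    base = np.array([1.0, 70.0, 80.0, 120.0, 50.0, 15.0, 0.30])
    shift = np.array([0.6, 10.0, 8.0, 12.0, -15.0, -5.0, -0.10]) * load
    noise = rng.normal(0.0, 1.0, (n, 7)) * [0.1, 2, 2, 3, 3, 1, 0.02]
    return base + shift + noise

X = np.vstack([simulate(0), simulate(1)])  # low- and high-workload epochs
y = np.array([0] * 200 + [1] * 200)
X = (X - X.mean(0)) / X.std(0)             # standardize each feature

w, b = np.zeros(7), 0.0                    # single-neuron classifier
for _ in range(500):                       # plain logistic-loss gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * float((p - y).mean())

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = float((pred == y).mean())
assert accuracy > 0.9
```

The learned weights play the role of the per-parameter contributions Hwang et al. estimated, analogous to coefficients in a multiple regression.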

Augmenting attention and cognition

New technologies (e.g. medical scanning technology and unmanned
military drones) are producing an unprecedented number of complex
images. Humans outperform machines in processing these images,
but are limited in the number of images that they can process in a
given amount of time. An important issue is thus how target detection
(whether of a tumour or a weapons installation) can be enhanced. It
is beginning to be evident that neuroergonomics can take up this
problem where image processing leaves off. For example, Gerson et al.
(2006) describe an EEG-based BCI that selects, from images presented
to observers in rapid serial visual presentation, those to be
re-presented for additional analysis. The BCI can classify
in real time the stereotypical spatiotemporal response associated with
targets (in this case, natural scenes containing people as opposed to
unpopulated scenes). Images identified as potentially containing targets
can then be examined in detail. Such a technique has the advantage of
allowing large numbers of images to be scanned quickly (the typical
presentation rate is 100 ms per item), leaving time to devote to the
further processing of potential targets. The accuracy of classification of
targets embedded in scenes presented in rapid serial visual presentation
may be increased by using pupil diameter measures in addi-
tion to the EEG response (Qian et al., 2009). Using classifiers to triage
images has the potential to help image analysts who must classify many
images, and promises reductions in detection time and improvements in
detection accuracy.
Another ambitious project for enhancing cognition is the attempt
to implement binoculars with image processing functionality. The
United States Department of Defense is currently working to develop
image-enhancing binoculars under the name ‘Cognitive technology
threat warning system’ (CT2WS). The aim of the CT2WS project is to
support soldiers in identifying possible threats. Real-time EEG signals,
measured via an in-helmet EEG system are subjected to algorithms to
classify the visual inputs gathered through the binoculars. The program
uses saliency maps (Koch & Ullman, 1985), first developed as a compu-
tational model of bottom-up attentional selection, as the basis for
threat-detection. The saliency map algorithm analyses the visual inform-
ation the soldier is seeing and determines which information is the
most salient by decomposing the visual information into saliency maps.
A saliency map combines elementary features such as colour, orienta-
tion, direction of movement and disparity to determine which objects
in a visual scene are salient. In the CT2WS context, the saliency map
selects potential targets from the visual scene. When the soldier views
a scene, the saliency map algorithm marks potential threats and the
EEG of the soldier is monitored to determine whether or not a threat is
perceived. For example, the saliency map presented to a soldier looking
into a forest may identify a deer or a tank—both of which have features
which distinguish them from the surrounding trees. Because the saliency
map itself cannot distinguish between objects, the EEG response evoked
in the soldier by the two ‘threats’ is used to classify objects
as friend or foe. The process of threat identification is monitored by a
learning algorithm to optimize the identification process. The algorithm
is adaptive in the sense that it learns the combination of the EEG
response and the stimulus which evoked it, thereby optimizing the
classification capacity of the threat-detection algorithm.
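The core of a saliency map is local contrast: locations that differ from their surroundings win the competition for attention. Below is a toy, single-feature, single-scale version; real models in the Koch and Ullman tradition combine colour, orientation and motion maps across multiple scales, so this is only a sketch of the principle.

```python
import numpy as np

def saliency(image):
    """Crude single-scale saliency: each pixel's absolute contrast against
    the mean of its 8-neighbourhood (edges handled by padding)."""
    img = image.astype(float)
    pad = np.pad(img, 1, mode="edge")
    neigh = np.zeros_like(img)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr or dc:
                neigh += pad[1 + dr:1 + dr + img.shape[0],
                             1 + dc:1 + dc + img.shape[1]]
    return np.abs(img - neigh / 8)

scene = np.zeros((9, 9))
scene[4, 4] = 1.0  # one conspicuous object in a uniform background
smap = saliency(scene)
assert np.unravel_index(smap.argmax(), smap.shape) == (4, 4)
```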

Enhancing attention through training


Improving attentional state through ‘brain training’ has a long and
venerable history. Neurofeedback, a form of biofeedback in which some
feature of an individual’s brain activity (e.g. alpha rhythm) is made
visible to the participant (e.g. via a ball that bounces higher as alpha
synchronizes), has been used to treat children with attention deficit/
hyperactivity disorder (ADHD) since the 1970s (e.g. Lubar & Shouse,
1977). In neurofeedback some aspect of the EEG, such as the ampli-
tude of the alpha rhythm, is used to change the state of the displayed
activity, such as the bounce of the ball. As the person learns to make
the ball bounce faster or higher, alpha amplitude is either increased or
decreased, depending on the goal of the neurofeedback (e.g. Hardt &
Kamiya, 1976; see Weiskopf et al., 2004b for an example using fMRI).
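The feedback loop itself is a simple mapping from the trained EEG feature to a display parameter. A sketch of that mapping (the baseline, gain and ball-height metaphor are illustrative assumptions):

```python
def ball_height(alpha_power, baseline, gain=1.0):
    """Map current alpha power, relative to a resting baseline, onto the
    height of a feedback ball (clipped to the display range 0..1): the
    more the participant synchronizes alpha, the higher the bounce."""
    return min(1.0, max(0.0, gain * (alpha_power - baseline) / baseline))

assert ball_height(12.0, baseline=10.0) == 0.2  # 20% above baseline
assert ball_height(8.0, baseline=10.0) == 0.0   # below baseline: ball stays down
```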
In the case of ADHD, neurofeedback has been used to increase or
decrease the amplitude of alpha, beta and theta oscillations. Children
with ADHD show spectral abnormalities in the EEG, such as increased
frontal theta amplitude, and decreased alpha and beta oscillations in
comparison with non-ADHD children (Clarke et al., 1998). Lubar et al.
(1995) showed that training children to increase the amplitude of the
upper alpha rhythm (12–15 Hz; also referred to as the sensorimotor
rhythm) and the lower beta rhythm (15–18 Hz) can enhance sustained
attention and alleviate the symptoms of ADHD. In fact, neurofeedback
training of the sensorimotor and lower beta rhythm may improve the
functioning of ADHD sufferers to the same extent as does medication
by methylphenidate (a commonly used psychostimulant; Fuchs et al.,
2003). Importantly, the effects of neurofeedback training are more
long-lasting than the administration of medication, suggesting that
brain training can have powerful effects on attention and behaviour
(Tang & Posner, 2009).
Neurofeedback training has also been shown to improve cognitive
performance in non-clinical populations. Using a mental rotation task,
for example, Hanslmayr et al. (2005) found that neurofeedback training
of the alpha rhythm improved task performance. Hanslmayr et al.
used neurofeedback to train their participants to maximally synchro-
nize upper alpha rhythm (12–15 Hz)—indicating a relaxed state—in
the interval between the task trials. This neurofeedback training led
to improvements in task performance, but only for the participants
who were successful in learning to increase their alpha response.
Enhancement with neurofeedback has also been found for memory-
task performance (Lantz & Sterman, 1988), attention tasks (Egner &
Gruzelier, 2004) and memory capacity (Vernon et al., 2003).
It may also be possible to train attention using basic cognitive tasks.
Rueda et al. (2005) devised a training module to augment executive
attention in four- and six-year-old children. The training program
involved a series of exercises, such as object tracking, Stroop-like exer-
cises, discrimination of stimuli, anticipation of events and resolution of
conflict. ERPs were measured before and after the five-session training
programme during the performance of the Attention Network Task
(ANT; Rueda et al., 2004). The ANT is a modified flanker task (i.e. a task
in which a target must be attended and distractors assigned to a com-
peting response must be ignored) that measures attention orienting,
alerting and the capacity to resolve conflict. Rueda et al. found a general
benefit of the training when comparing the performance of the group
receiving training against the group not receiving training. However,
the effect of training was limited largely to the four-year-old children,
which suggests that such training is beneficial only in early stages of
development.
Although the benefits of the type of training given by Rueda et al.
(2005) may be restricted to young children, training of attentional skill
has also been shown, in some studies, to transfer across tasks in adults.
For example, several studies have shown that playing ‘first-person
shooter’ action video games may improve performance in basic atten-
tional tasks, such as the flanker task, attentional blink task (detecting
two targets presented in rapid serial visual presentation) and an
enumeration task (e.g. Green & Bavelier, 2003). Green and Bavelier
reported that people who habitually played action video games spread
attention more widely in time and space than non-gamers. However, many
attempts to replicate findings such as these have failed (e.g. Murphy &
Spencer, 2009; see Boot et al., 2011 for a critical meta-analysis of
improved cognition after video-gaming).
There is promising work showing that attention can be enhanced.
It may also be possible to identify when one is most likely to be able
to learn new material and to capitalize on these ‘optimum learning’
moments. Our ability to remember new information changes from
moment to moment (Corkin, 2002). Yoo et al. (2012) used this fact to
select optimal learning intervals by monitoring the activity of brain
areas associated with the formation of memories. The parahippocampal
cortex (PHC), located in the medial temporal lobe, is responsible for the
successful formation of memories of scenes, as reflected by greater PHC
activation for remembered than forgotten scenes (Brewer et al., 1998).
Moreover, prestimulus activity in a particular area of the PHC, the para-
hippocampal place area (PPA), is correlated with successful memory
for scenes. Yoo and collaborators measured PPA activity in real time to
determine when good or poor time intervals for presenting information
occurred. They found that memory for scenes was significantly
better for scenes presented during ‘good’ time intervals (indicated by
low PPA activity) compared with scenes presented during ‘poor’ time
intervals (indicated by high PPA activation).
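The triggering logic of such real-time gating can be sketched as follows; the class name, threshold and smoothing window below are hypothetical choices for illustration, not details of Yoo et al.'s protocol:

```python
from collections import deque

class ROIGate:
    """Enable stimulus presentation when a running average of
    region-of-interest (ROI) activity drops below a threshold."""

    def __init__(self, threshold, window=3):
        self.threshold = threshold
        self.recent = deque(maxlen=window)   # last few activity estimates

    def update(self, roi_activity):
        """Add one new activity estimate; return True at 'good' moments."""
        self.recent.append(roi_activity)
        mean = sum(self.recent) / len(self.recent)
        return mean < self.threshold

gate = ROIGate(threshold=0.5)
samples = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]     # simulated PPA time course
decisions = [gate.update(s) for s in samples]
print(decisions)   # presentation enabled once activity settles low
```

Here presentation is suppressed while smoothed activity is high and enabled once it falls, mirroring the finding that low prestimulus PPA activity predicts successful encoding.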
Attention is intertwined with working memory in exercising control
over thoughts and action. ‘Brain training’ with video games intended
to enhance or maintain cognitive function often shows benefits for
executive function (e.g. Nouchi et al., 2012; Rabipour & Raz, 2012).
It has been suggested that working memory training actually changes
structural connectivity between brain areas involved in memory tasks.
Takeuchi et al. (2010b) performed MRI scans and carried out diffusion
tensor imaging (DTI) analysis on a group of participants before and
after they engaged in two months of training on a battery of working
memory tasks: a visuospatial task in which the location and order of
presentation of stimuli was to be remembered; a task in which sums
were memorized while classifying stimuli according to when in the trial
they last were shown; and a task in which the identity and position
of stimuli had to be maintained across a number of trials. Working
memory training was related to increased fractional anisotropy (roughly
speaking, the degree to which diffusion is confined to a single axis
rather than occurring equally in all directions) in the white matter
regions adjacent to the intraparietal sulcus
and the anterior part of the body of the corpus callosum, suggesting
that areas critical to working memory exhibit plasticity.
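Fractional anisotropy itself has a standard closed-form definition in terms of the three eigenvalues of the diffusion tensor, which makes the parenthetical gloss above concrete:

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """Standard FA formula from the diffusion-tensor eigenvalues: 0 for
    perfectly isotropic diffusion, approaching 1 when diffusion is
    confined to a single axis (as in coherent white-matter tracts)."""
    num = (l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    return math.sqrt(0.5 * num / den)

print(fractional_anisotropy(1.0, 1.0, 1.0))                 # isotropic: 0.0
print(round(fractional_anisotropy(1.7, 0.3, 0.3), 2))       # strongly directional
```

The eigenvalues used here are arbitrary illustrative values; in DTI they are estimated per voxel from the diffusion-weighted images.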

Using drugs to enhance attention


Using drugs to enhance cognitive ability may be becoming as prevalent
as doping in sports. The question is whether some cognition-enhancing
drugs have a place in healthy human performance. One drug, caffeine,
has a long and proven history of use as a performance enhancer. One
recent study using the ANT (Fan et al., 2002), for example, demon-
strated that caffeine improves the attentional functions of alerting and
executive control, although a high dose of caffeine (400 mg) impairs
orienting of attention (Brunyé et al., 2010). Although caffeine is widely
accepted in Western societies, a growing trend gives cause for concern:
individuals hoping to boost their cognitive capacities are increasingly
turning to drugs originally designed to alleviate symptoms associated
with neuropsychological impairments (see Sahakian & Morein-Zamir,
2007). Drugs used to treat ADHD, in particular, are increasingly being
used by students to improve concentration when cramming for exams
(Babcock & Byrne, 2000), and other stimulants are used by long-haul
truckers (da Silva et al., 2009) and aircrew members on military missions
(e.g. Ramsey et al., 2008). Another commonly used drug, Modafinil,
promotes wakefulness and is used by people with disturbed sleeping
patterns (e.g. due to jet lag, shiftwork or sleep apnoea; see Chapter 6),
but also by people hoping to improve their ability to concentrate
(Sahakian & Morein-Zamir, 2007).
Individuals vary in how they react to different drugs. For example,
bromocriptine, a dopamine agonist, enhances various executive
functions for low-working memory capacity individuals, but has
a detrimental effect on the performance of high-working memory
capacity individuals (Kimberg et al., 1997). The fact that different
people react differently to the same drugs makes it difficult to specify a
general protocol for the use of drugs to enhance cognitive function,
and uncertainty about the long-term effects various drugs might have,
as well as ethical considerations (see Chapter 1), put into question the
desirability of recommending performance enhancement for healthy
human operators.
Conclusion

In nearly all tasks, adequate performance depends on the availability of
attentional resources and the appropriate allocation of attention. Many
techniques to measure attention have been developed. Some of these
measures (such as pupil dilation and the EEG-based engagement index)
are nonspecific in that they reflect the degree of arousal of a person
or the overall effort being exerted, whereas others (such as the P300)
are specific to the processing of a particular stimulus. Much work has
focused on using these measures to improve basic attentional skills or
task performance. Physiological measures of mental workload, in parti-
cular, have been applied in adaptive automation. Neurofeedback and
BCIs have, to date, primarily seen clinical applications, but the tech-
niques being developed show promise for augmentation of attentional
abilities and improvements in perception and cognition. The extent to
which attention can be augmented by training, drugs or neurofeedback
remains controversial. Exciting lines of research do, however, suggest
that basic abilities can be improved upon. Even if this should turn out
not to be the case, neural and other physiological measures of attention
are already being used to enhance performance in many tasks, with data
obtained from the operator serving to modify how stimuli are presented
and processed.
5
Performance Monitoring and
Error-related Brain Activity
Addie Johnson and Rasa Gulbinaite

Everyone knows how easy it is to mistype words when under time
pressure or when emotions run high, has felt the embarrassment of a
slip of the tongue, or has had to clean up after adding water to a coffee
maker twice or forgetting to replace a filter. These so-called slips of action
are generally detected immediately and corrected without any additional
feedback for the simple reason that the outcome of a slip violates the
intention of the actor. In this respect, action slips differ from what are
termed in the human factors/ergonomics literature mistakes, and which
originate from incorrect assessment of a situation or failures to select an
appropriate goal or means to achieve it. That is, mistakes can be seen
as failures to select an appropriate plan, and slips as failures to execute
such a plan (Fedota & Parasuraman, 2010).
Paradoxically, slips of action occur because of the way skilled action
is organized and automated: such errors arise when actions are per-
formed more or less automatically, without conscious mediation or
monitoring (Reason, 1979). According to Reason, the basis for slips of
action is that well-learned behaviours can be performed in an open-
loop manner—that is, without feedback or conscious monitoring,
governed only by the appropriate motor program. Slips of action are
thus most likely to occur during the automatic execution of highly
practised, routine actions or during lapses of attention. They occur
either because the wrong action plan is maintained or because atten-
tion is switched to the wrong elements of a plan or aspects of the
environment. Action slips tend to happen at transitional points in
sequential actions or between steps needed to complete a task, at the
moment that attention is required to evaluate what has been com-
pleted so far in the context of overall task goals (Botvinick & Bylsma,
2005). Cooking provides a good real-world example: failing to check
the seasoning can be considered missing an attentional checkpoint,
and might result in an unpalatable dish. A similar failure made while
following a checklist in nursing or aviation can have more serious
consequences.
According to Norman (1981, p. 3), ‘For a slip to be started, yet
caught, means that there must exist some monitoring mechanism of
behaviour—a mechanism that is separate from that responsible for the
selection and execution of the act’. Such top-down control of action
is needed when a less familiar version of an action sequence must be
performed in place of a more familiar one or when no schema exists
for performing an action. In Norman and Shallice’s (1986) influential
model of action control, routine actions can be carried out on the basis
of contention scheduling, a more or less passive process that emerges
naturally as a result of the way schemas are learned and performed.
The contention-scheduling system directly activates and orders action
schemas that are linked to each other via inhibitory or excitatory
connections. A schema is triggered for execution when environmental
conditions match the triggering conditions incorporated in the schema,
and slips of action occur when intentions are not actively maintained
and schemas become activated simply because their triggering condi-
tions are present in the environment. Whenever a course of action
deviates from the routine, executive control over action is required.
This control is implemented by means of the supervisory attention
system, which biases selection by inhibiting some schemas and activating
others. Executive control of this sort is typically attributed to the
prefrontal cortex (PFC). For example, Miller and Cohen (2001) suggest
that cognitive control involves increasing the gain of sensory or motor
neurons that are engaged by task- or goal-relevant elements of the
external environment.
Human factors and ergonomics engineers have typically sought to
understand action slips as a function of task demands and environ-
mental conditions, and to redesign task environments or procedures
to minimize the chances that such errors will occur. Another
approach to predicting and preventing errors is to follow the endog-
enous fluctuations in brain states that precede and follow errors.
Performance fluctuations resulting from physiological variations in
vigilance, alertness and sleepiness affect performance on a relatively
long time-scale (hours to days), and are discussed in Chapter 6. In this
chapter we focus on performance fluctuations that occur on a shorter
time-scale (milliseconds to minutes) and which are marked by specific
error- or feedback-related brain activity.
Performance monitoring

Many everyday actions are performed in a more or less automatic
manner, and are both efficient and fast. Consciously controlled, attentive
behaviour, however, is effortful, resource demanding and relatively
slow. Therefore, it can be argued that control should be applied only
when needed and should be adjusted according to task demands. People
are capable of regulating the level of cognitive control exercised. Rabbitt
and Browes (see Rabbitt, 1990), for example, found that people were able
to detect risky task contexts and exert higher levels of control to avoid
fast-response errors in a perceptual motor task. Similar adjustments of
cognitive control might be needed when overthinking threatens the
execution of skills. Skilled behaviour is characterized by a reduction
in conscious monitoring: when too much attention is paid to skilled
behaviour (such as when the crowd’s eyes are on a player making a pen-
alty shot), performance can break down (Wulf, 2007). However, a high
level of cognitive control may be just what is needed if a habit or skill
must be modified (Johnson, 2013). Maintaining the balance between
automatic and controlled behaviour requires a system that ‘knows’
when control processes can be withdrawn without causing impairment
of performance and when their application is needed (Goschke, 2003).
So-called conflict tasks are used frequently in the experimental
investigation of automatic and controlled behaviour (for a review,
see Egner, 2007). Such tasks are characterized by conflict between response
alternatives, and errors in performance occur when automatic response
tendencies override controlled processing (as when non-UK residents
first look to the left rather than to the right when crossing a road in
the UK). Although these tasks seem rather simple compared with the
complexity of many work environments, as Fedota and Parasuraman
(2010) have argued, they provide a test-bed for studying the compo-
nent processes of error—pre-response conflict, errors themselves, and
post-response processes of error detection and compensation—which
are present in complex tasks. Probably the best-known conflict task is
the colour–word Stroop task (MacLeod, 1991; Stroop, 1935) in which the
colour of a colour word printed in a congruent (e.g. ‘GREEN’ presented
in green) or incongruent (e.g. ‘GREEN’ presented in red) colour is to be
named. Because word reading is relatively automatic, the tendency to
read the word interferes with the colour-naming process on incongru-
ent trials. Such interference is resolved by recruiting cognitive control
processes (Botvinick et al., 2001) either within a trial or in preparation
for an upcoming trial.
Evidence that the level of cognitive control in conflict tasks can be
adjusted dynamically—as is required if the efficiency of task perform-
ance is to be optimized—comes from studies in which the proportion
of incongruent and congruent trials in an experiment was manipulated
(e.g. Logan & Zbrodoff, 1979). If the proportion of incongruent trials
is high, congruency effects (quantified as the reaction-time difference
between incongruent and congruent trials) are smaller than when the
proportion of congruent and incongruent trials is equal. This effect of
the frequency of incongruent trials can be explained in terms of the
maintenance of a high level of control throughout the task, thereby
keeping the focus on relevant stimulus dimensions (e.g. word colour)
and reducing interference from irrelevant dimensions (the words them-
selves; Botvinick et al., 2001). When the proportion of congruent trials
is relatively high, relatively large congruency effects are found. In such
a case, responses can be based on the automatic response tendency
(read the word) on a majority of trials, making a high level of control
unnecessary and, because control takes time and effort, maladaptive.
Performance on the relatively few incongruent trials in such a condition
suffers from the low level of control, such that the difference in reaction
time to congruent and incongruent trials is large.
How and when cognitive control is exercised depends not only on
the overall frequency of incongruent and congruent trials, but also
on the recency of occurrence of incongruent trials in the history of
the task. Reduced congruency effects are observed when the directly
preceding (n − 1) trial was incongruent: after an incongruent trial reac-
tions are faster and less error prone on incongruent trials, but slower
and, often, more error prone on congruent trials (Gratton et al., 1992;
Kerns et al., 2004). This effect of trial sequence likely reflects dynamic
changes in cognitive control. Control is relaxed after congruent trials,
during which processing can be relatively automatic, and engaged after
the incongruent ones (Ullsperger et al., 2005).
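The sequential (Gratton) effect described above is straightforward to compute from a trial list: the congruency effect is calculated separately for trials preceded by incongruent versus congruent trials. A minimal sketch with invented reaction times (the data and helper names are illustrative only):

```python
import statistics

# Toy trial list: (congruency, reaction time in ms); values are invented
trials = [("inc", 520), ("con", 450), ("inc", 560), ("inc", 500),
          ("con", 480), ("con", 440), ("inc", 555), ("con", 445)]

def congruency_effect(pairs):
    """Mean RT(incongruent) minus mean RT(congruent)."""
    inc = [rt for c, rt in pairs if c == "inc"]
    con = [rt for c, rt in pairs if c == "con"]
    return statistics.mean(inc) - statistics.mean(con)

# Split trials 2..n by the congruency of the preceding (n - 1) trial
after_inc = [trials[i] for i in range(1, len(trials)) if trials[i - 1][0] == "inc"]
after_con = [trials[i] for i in range(1, len(trials)) if trials[i - 1][0] == "con"]

print(congruency_effect(after_inc))   # smaller effect after incongruent trials
print(congruency_effect(after_con))
```

With these invented data the congruency effect after incongruent trials is smaller than after congruent trials, the pattern reported by Gratton et al. (1992).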
Another source of conflict in task performance arises from the need to
balance a desire for speed with a requirement for accuracy. Compromises
between speed and accuracy characterize many real-world situations,
such as exiting to the right in busy highway traffic. Changing to a slow-
moving right-hand lane will ensure that the exit is not missed, but time
might be gained (at the possible risk of a dangerous situation or missing
one’s exit) by staying in the fast lane as long as possible. If a last-ditch
effort to reach the exit results in an accident, the accident would likely
be attributed to poor planning, or a mistake. In general, errors that arise
from speed–accuracy trade-off can be categorized as slips: people intend
to make the correct response, but fall into an incorrect response pattern.
Speed–accuracy trade-off has been studied extensively using reaction-
time tasks (for a review, see Bogacz et al., 2010). In such tasks people are
usually instructed to respond as quickly and as accurately as possible.
Of course, it is impossible to go as fast as possible without making
some mistakes, and error-free performance would, necessarily, be very
slow. One way of optimizing performance is to speed up until an error
is made and then to slow down to avoid another error. In fact, such a
pattern of relatively fast reaction times preceding errors accompanied
by significant slowing after errors has often been found (Laming, 1968,
1979; Rabbitt, 1966). Shifts along the speed–accuracy trade-off function
have been proposed to account for both pre-error speed-up and post-
error slowing (e.g. Brewer & Smith, 1984; Laming, 1968, 1979).
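Post-error slowing is commonly quantified as the difference between mean reaction time on trials that follow errors and on trials that follow correct responses (more robust pairwise variants also exist). A sketch with invented data:

```python
import statistics

def post_error_slowing(correct, rts):
    """Mean RT after errors minus mean RT after correct responses.
    correct: one boolean per trial; rts: reaction times in ms."""
    post_err = [rts[i] for i in range(1, len(rts)) if not correct[i - 1]]
    post_cor = [rts[i] for i in range(1, len(rts)) if correct[i - 1]]
    return statistics.mean(post_err) - statistics.mean(post_cor)

# Invented series: RTs speed up before the two errors, slow down afterwards
correct = [True, True, False, True, True, False, True, True]
rts =     [430,  410,  380,   470, 420,  385,   465, 425]
print(post_error_slowing(correct, rts))   # positive value = slowing
```

A positive value indicates slowing after errors; applied to windows of trials, the same measure can track how control settings drift over the course of a session.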
Explanations of pre-error speed-up and post-error slowing in terms
of a shift along a speed–accuracy trade-off function imply that levels of
control can be regulated dynamically to optimize overall performance.
According to such an account, errors trigger a more conservative setting
(higher level of control) and, because more information is accumulated
before a response is made, post-error trials should not only be slower
than post-correct trials, but also more accurate. In many cases, however,
post-error trials are not, on average, more accurate than post-correct
trials (e.g. Hajcak & Simons, 2008; Hajcak et al., 2003; Rabbitt &
Rodgers, 1977). In fact, when the interval between the response and
the following stimulus is short (200–500 ms), post-error trials are less
accurate than post-correct trials (Dudschig & Jentzsch, 2009). The
finding that post-error trials are especially error prone has led some to
propose that post-error slowing reflects an enduring effect of whatever
factors led to the error on the previous trial (e.g. Gehring et al., 1993).
Not only is the performer in an error-prone state after making an error,
the evaluation of an error is assumed to be a time-consuming process,
which might interfere with processing of subsequent stimuli.
Another alternative account of post-error slowing is that it is caused
by attentional capture due to the relative infrequency of error responses.
Indeed, when experimental conditions are manipulated so that errors are
more common than correct responses, post-correct slowing (relative to
post-error responses) is observed (Notebaert et al., 2009). Error-induced
attentional lapses have been implicated in mishaps in the airline and
healthcare industries (Reason, 1990), making an understanding of
why they occur (and what can be done to prevent them) of critical
importance. To date, however, no one account of post-error slowing is
capable of explaining the range of results that has been found. Moreover,
the different accounts are not mutually exclusive in their proposed
mechanisms (for a review, see Danielmeier & Ullsperger, 2011).
Cognitive control, whether triggered by sequential dependencies in
the task environment, the recognition that an error has been made or
a conscious decision to heighten the level of monitoring of behaviour
(such as when luggage screening slows down after the security level has
been raised) requires monitoring of one’s actions and their outcomes.
Rabbitt (1966) was the first to suggest that errors trigger such a monitoring
process. He asked people performing a choice–reaction task to make an
additional response whenever an error was made. Reaction times for both
errors and error-correction responses were faster than those for correct
responses, and error and error-correction response times were closely
related. This
led Rabbitt to suggest that a cognitive process links error identification
with a mechanism to correct the detected errors. Gratton et al. (1988)
examined in detail the time-course of the response activation that leads
to behavioural modifications of the type documented by Rabbitt. Using
the Eriksen flanker task (Eriksen & Eriksen, 1974), a conflict task in which
a response is to be made to a central target (e.g. a left or right pointing
arrow) that is flanked by either congruent (stimuli assigned to the
same response, e.g. >>>>>) or incongruent (stimuli assigned to the opposite
response, e.g. <<><<) distractors, Gratton et al. showed that the pattern of
performance in conflict tasks depends on the speed with which responses
are made. For the fastest responses to both congruent and incongruent
stimuli, accuracy is close to chance. For intermediate
response times, accuracy on incongruent trials drops to below chance,
whereas accuracy to congruent stimuli improves, and, on slow trials, accu-
racy on both types of trials approaches perfect performance. Gratton et al.
suggested that two processes underlie performance on conflict tasks: a first
stage of processing, which is rapid and allows parallel processing of stimulus
dimensions, and which does not discriminate between relevant and irrel-
evant information, and a second stage of processing in which attention is
focused on the target, thus reducing the effect of incompatible information.
Performance on congruent trials benefits from first-stage processing, and
shows no cost to the release of control implied by its operation, but when
stimuli are incongruent, accuracy suffers until second-stage processing
kicks in (see Figure 5.1).
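Conditional accuracy functions of the kind Gratton et al. analysed are obtained by binning trials on reaction time and computing accuracy within each bin. A sketch with invented incongruent-trial data that mimics the pattern described above:

```python
def conditional_accuracy(trials, edges):
    """Accuracy within successive RT bins.
    trials: (rt_ms, correct) pairs; edges: bin boundaries in ms."""
    caf = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = [ok for rt, ok in trials if lo <= rt < hi]
        caf.append(sum(in_bin) / len(in_bin) if in_bin else None)
    return caf

# Invented data: fast responses near chance, intermediate responses below
# chance (flanker capture), slow responses increasingly accurate
trials = [(150, True), (160, False), (240, False), (260, False),
          (250, True), (340, True), (360, True), (350, False)]
print(conditional_accuracy(trials, [100, 200, 300, 400]))
```

Plotting the resulting accuracies against the bin midpoints reproduces the shape of the conditional accuracy functions in Figure 5.1.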

Neural correlates of performance monitoring


An early report of electrophysiological evidence for performance
monitoring (Cooke & Diggles, 1984) showed that the onset of error-
correction responses (i.e. suppression of the muscles involved in the
[Figure: accuracy (%) plotted against RT bins (ms, 100–400) for compatible
and incompatible trials, with the chance level marked.]

Figure 5.1 Conditional accuracy functions in the Eriksen flanker task. As illus-
trated, reaction times for both fast compatible and incompatible trials are at
chance level, which indicates that influence of the flankers is strongest early in
the trial and is reduced gradually as attention is focused on the target. [Adapted
from Gratton, G., Coles, M. G., Sirevaag, E. J., Eriksen, C. W., & Donchin, E.
(1988). Pre- and poststimulus activation of response channels: A psycho-
physiological analysis. Journal of Experimental Psychology. Human Perception
and Performance, 14, 331–344. Used with permission from the American
Psychological Association.]

error movement, as measured using electromyography) was too fast
to be based on external—or even proprioceptive—feedback related to
the initial, erroneous response. Cooke and Diggles suggested that early
responses to error movements are based on central monitoring of move-
ment commands. There is now general agreement that the anterior
cingulate cortex (ACC), a structure in medial PFC (mPFC), is involved
in performance monitoring and contributes to behavioural adjustments
and increases in cognitive control via lateral PFC (lPFC; for a review, see
Ridderinkhof et al., 2004). One of the theories explaining how the need
for cognitive control is detected and implemented in the brain is conflict
monitoring theory (Botvinick et al., 2001, 2004). According to this theory,
the ACC monitors for conflicts in information processing at the level
of response selection. Following trials or events that are high in conflict
(such as incongruent trials or trials on which errors were made), activa-
tion of mPFC increases, which, in turn, engages lPFC.
Lateral PFC is assumed to be involved in the representation and main-
tenance of task goals, as well as in the suppression of task-irrelevant
information that might interfere with task goals (Kok et al., 2006;
Miller & Cohen, 2001). Conflict detection and concomitant increases
in cognitive control should thus lead to enhanced processing of task-
relevant stimulus features and inhibition of task-irrelevant information.
In support of this hypothesis, it has been found that post-error trials
are associated with increased activity in task-relevant sensory areas
(thought to be associated with enhanced processing of task-relevant
features), accompanied by decreased activity in supplementary motor
cortex (thought to be associated with the inhibition of inappropriate
responses induced by task-irrelevant information; King et al., 2010).
Conflict monitoring theory (Botvinick et al., 2001) predicts post-error
slowing and relatively high accuracy on post-error trials. That is, speed
on post-error trials is traded off for accuracy. As in speed–accuracy trade-
off accounts of post-error slowing (Rabbitt, 1966; Rabbitt & Rodgers,
1977), Botvinick et al. suggest that responding is more conservative
after an error. This conservatism is implemented as a decrease in base-
line response activation after commission of an error. Several findings
from functional magnetic resonance imaging (fMRI) studies support the
hypothesis that changes in baseline activation in motor areas correlate
with the degree of post-error slowing (e.g. Danielmeier et al., 2011;
King et al., 2010). For example, individuals who show a greater decrease
in motor activity in post-error trials relative to post-correct trials tend
to slow down more after an error than do individuals who show less
pronounced decreases in motor activity (Danielmeier et al., 2011).
It remains an open question whether increased activity in mPFC causes
a decrease in motor activity or whether decrease in motor activity is
just a general post-error effect related to, for example, the orientation of
attention to infrequent events (Notebaert et al., 2009).
Several fMRI studies using a Stroop task have demonstrated that there
is a close link between ACC and lPFC (Kerns et al., 2004; van Veen &
Carter, 2005). In these studies, activity in ACC associated with high-
conflict, incongruent trials was accompanied by increased activity in
lPFC on subsequent incongruent trials and a corresponding decrease in
reaction times compared with incongruent trials preceded by congruent
ones. The means by which the anatomically and functionally related
areas of ACC and lPFC interact may be synchronized oscillations (Varela
et al., 2001). As described in Chapter 1, different frequency bands
of the electroencephalogram (EEG) show power changes in specific
frequency bands as a function of the cognitive demands of a particular
task. In the case of error processing, theta band oscillations generated
by ACC have been linked to feedback processing and response errors
(Cohen & Ranganath, 2007; Wang et al., 2005). The increase in overall
power in the theta band following errors has been proposed to reflect
not only the activity of an action monitoring network (Luu et al., 2004;
Trujillo & Allen, 2007), but also to be a mechanism by which communi-
cation between ACC and lPFC occurs (Cavanagh et al., 2009). Analyses
of the power and phase characteristics of theta oscillatory activity on
error trials, as well as on trials preceding and following response errors,
have revealed increased oscillatory theta phase synchrony between ACC
and lateral-frontal sites combined with enhanced theta activity over
ACC in response to errors. Furthermore, a higher degree of theta
phase coherence between medial and lateral PFC has been shown to be
correlated positively with post-error slowing (Cavanagh et al., 2009).
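Phase synchrony between two recording sites is often quantified as a phase-locking value: the magnitude of the average unit vector of the phase differences across time. A sketch using simulated phase series (the theta frequency, constant lag and channel labels are illustrative assumptions; in practice phases would be extracted from filtered EEG):

```python
import numpy as np

def phase_locking_value(phase_a, phase_b):
    """|mean of exp(i * phase difference)|: 1 for a constant phase lag
    (perfect synchrony), near 0 for unrelated phases."""
    return np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))

fs, f_theta = 250, 6.0                       # sampling rate (Hz), theta frequency
t = np.arange(2 * fs) / fs                   # two seconds of 'data'
phase_mpfc = 2 * np.pi * f_theta * t         # simulated medial-frontal theta phase
phase_lpfc_sync = phase_mpfc + 0.8           # lateral site with a constant lag
rng = np.random.default_rng(1)
phase_lpfc_rand = rng.uniform(0, 2 * np.pi, t.size)   # unrelated phases

print(round(phase_locking_value(phase_mpfc, phase_lpfc_sync), 3))  # -> 1.0
print(phase_locking_value(phase_mpfc, phase_lpfc_rand) < 0.3)
```

Because the measure depends only on the consistency of the phase difference, not on amplitude, it can index coupling between ACC and lateral PFC even when their signal strengths differ.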

Error- and feedback-related processing

Most tasks are forgiving in the sense that small lapses do not result
in serious consequences. However, in time-critical tasks, tasks which
require a high level of vigilance, or in tasks for which the consequences
of error are severe, an argument can be made for the monitoring of
brain behaviour for even small lapses that might lower the optimal-
ity of performance. Moreover, neural measures of performance could
provide information on the underlying causes of human error (Fedota &
Parasuraman, 2010). Neural correlates of performance monitoring have
been sought using both electrophysiological and haemodynamic
measures. The first electrophysiological correlate of performance moni-
toring was found in two independent EEG studies (Falkenstein et al.,
1991; Gehring et al., 1990): a negative electrical potential was
observed over a medial frontal region of the brain following incorrect
responses in speeded response conflict tasks in which it was necessary
to overcome habitual response tendencies in order to make a correct
response. This negative electrical potential, which peaks 50–100 ms after
an error has been made, has been termed the error-related negativity
(ERN) (Gehring et al., 1990; see Figure 5.2) or error negativity (Ne)
(Falkenstein et al., 1991).
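The ERN is extracted by averaging response-locked EEG epochs separately for error and correct trials and comparing the waveforms shortly after the response. A sketch on simulated data (the channel, timing parameters and deflection shape below are invented for illustration, not taken from the cited studies):

```python
import numpy as np

FS = 250   # assumed sampling rate (Hz)

def response_locked_average(eeg, response_samples, pre=25, post=75):
    """Average EEG epochs from `pre` samples before to `post` samples
    after each response (100 ms before to 300 ms after, at 250 Hz)."""
    epochs = [eeg[s - pre:s + post] for s in response_samples]
    return np.mean(epochs, axis=0)

# Simulate a frontocentral channel: error trials carry an extra negative
# deflection peaking ~75 ms after the response; correct trials do not
rng = np.random.default_rng(2)
eeg = 0.5 * rng.standard_normal(FS * 60)            # one minute of noise
err_resp = np.arange(500, 14000, 1500)              # error-response samples
cor_resp = err_resp + 750                           # interleaved correct responses
ern_shape = -8.0 * np.exp(-((np.arange(75) - 19) ** 2) / 40.0)
for s in err_resp:
    eeg[s:s + 75] += ern_shape

err_avg = response_locked_average(eeg, err_resp)
cor_avg = response_locked_average(eeg, cor_resp)
window = slice(25 + 12, 25 + 25)                    # ~50-100 ms post-response
ern_amplitude = err_avg[window].min() - cor_avg[window].min()
print(ern_amplitude < -3)                           # extra negativity on errors
```

The same epoching-and-averaging logic, applied online, underlies proposals to use the ERN as a real-time index of operator error in neuroergonomic systems.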
Initially, the ERN was described as a ‘mismatch signal’, which occurs
when representations of the actual response and the required response
are not the same (Gehring et al., 1993). Findings of negativity similar
to the ERN after correct responses, however, suggest that the ERN may
reflect the comparison of the actual and required responses, rather than
the outcome of this comparison process (Falkenstein et al., 1996; Vidal
et al., 2000). This ERN-like, correct-related negativity (CRN; see Figure 5.2)
is smaller in amplitude than the ERN and more pronounced on high-conflict
correct trials on which a small amount of electromyographic
activity is observed in the muscles that would be used to make the
incorrect response (Vidal et al., 2000). Such ‘partial’ errors are charac-
terized by longer reaction times than those for correct trials and are
marked with negative waves of similar latency and scalp topography
as the full-error trials on which the incorrect response is actually
made. Independent component analysis, a technique which allows
the separation of brain sources contributing to scalp-recorded EEG, has
suggested that the ERN and CRN share a common generator in ACC,
more specifically in the rostral cingulate zone (Gentsch et al., 2009;
Roger et al., 2010). Findings of negativity similar to the ERN after a
correct response suggest that both signals reflect the activity of a single
action-monitoring system that is activated by ‘full’ errors, as well as by
high-conflict trials.
Posterior medial frontal cortex (MFC) is activated when actions
result in errors or when responding involves conflict, but also
when responses are to be made under uncertain conditions or when
choices result in unfavourable outcomes (e.g. monetary loss; for a
review, see Ridderinkhof et al., 2004). In situations involving indeter-
minate responses—such as in a probabilistic learning task in which
trial-and-error responses are made to learn which choices are associated
with the highest reward—the correctness of an action cannot be judged
by the performer and external feedback is necessary to evaluate response
outcomes. In such a case, a negative electrical potential called feedback-
related negativity (FRN; see Figure 5.2) is observed approximately
200–350 ms after the feedback is presented (Miltner et al., 1997; see, for
a review, Nieuwenhuis et al., 2004). Negative performance feedback (e.g.
monetary loss) results in FRN of greater amplitude than does positive
feedback (e.g. monetary gain). As learning progresses in a probabilistic
task, expectations regarding task–reward contingencies associated with
particular choices begin to develop. When the outcome following a
response is the expected one, FRN is of lower amplitude than when the
outcome is unexpected (Holroyd et al., 2004). Brain imaging and source
localization studies have indicated that ERN and FRN share a neural
generator in posterior MFC (Miltner et al., 1997; Müller et al., 2005).
It should be noted that the mere observation of incorrect actions or
choices made by another person or even a computer elicits a negativity
similar to the one present during choice-reaction time and probabilistic-
learning tasks (de Bruijn et al., 2009; Ferrez & del R. Millan, 2008).
Apparently, the performance-monitoring system serves a more general
function than just monitoring one’s own performance and is sensitive
not only to one’s own action errors, but also to violations of predicted
outcomes in motivationally relevant settings.

Figure 5.2 Electroencephalography (EEG) components related to performance
monitoring [upper panel: correct-related negativity (CRN) and error-related
negativity (ERN); lower panel: feedback-related negativity (FRN)]

Error prediction

The finding of event-related potential (ERP) components associated with


error commission raises the important question of whether it might
before they happen, interventions to improve safety, such as training
or task redesign, could be tailored much more efficiently to the people
who need them and to the points in time at which they are likely to be needed.
One marker of an impending error is reaction time: whereas responses
tend to be slower on post-error trials, responses before an error tend to
be relatively fast (Rabbitt, 1966; Rabbitt & Rodgers, 1977). A more direct
way to measure whether an error may be about to be committed may
be to examine the fluctuations in stimulus-related and response-related
frontocentral negativities that have been interpreted as reflecting
changes in the activity of the performance monitoring system. Can
the state of the performance monitoring system serve as a predictor
of successful performance in upcoming trials? One relevant finding
which suggests that it can is that comparison of ERPs time-locked to the
responses on trials that preceded errors with those of trials preceding
correct responses has revealed that CRN amplitude is smaller on pre-error
trials than on pre-correct trials (Hajcak et al., 2005; Ridderinkhof et al.,
2003). Ridderinkhof et al. proposed that this so-called error-preceding
positivity (EPP) may reflect fluctuations in the efficiency of the action
monitoring system and that these fluctuations may occasionally lead to
performance errors. A gradual disengagement of the performance moni-
toring system and an increase in EPP can be observed as early as five
seconds prior to the commission of an error (Hajcak et al., 2005), which
suggests that many action slips may be preceded by traceable lapses in
response monitoring (Simons, 2010).
The view that fluctuations in the efficiency of the action monitoring
system lead to performance errors suggests that errors reflect a shift from
effortful, motivated involvement in the task toward a mental state more
similar to resting conditions. It should be noted, however, that Dudschig
and Jentzsch (2009) failed to find a link between ERN amplitude and
the amount of change in post-error slowing induced by changing the
time available to process errors, which brings into question whether there
is a direct link between ERN amplitude and the degree of subsequent
performance adjustments. Dudschig and Jentzsch found that post-error
slowing increased and performance became more error prone when
the interval between the response in one trial and the presentation of
the stimulus in the next trial [the response–stimulus interval (RSI)] was
reduced. However, speed-up in pre-error trials was virtually unaffected
by RSI, suggesting that pre-error speed-up is not the result of strategic,
time-consuming control processes. Thus, whereas a gradual build-up
of disengagement from task-related activity seems to cause errors,
committing and detecting the error seems to lead to re-engagement in
the task by reducing task-irrelevant brain activity and enhancing activity
in the areas associated with effort in cognitive tasks.
Most research on error-related brain activity has relied on averaging
many trials to isolate EEG activity related to performance. Techniques
based on the averaging of many trials are, of course, not appropriate
for the real-time tracking of human performance. This fact makes the
approach of Eichele et al. (2010), who looked at the brain activity prior
to errors on a trial-by-trial basis, particularly promising. Eichele et al.
found that series of congruent, low-conflict trials seemed to result in the
gradual disengagement of the performance-monitoring system. Most of
the errors committed on incongruent trials occurred when these trials
were preceded by a series of congruent trials. Although such changes
may reflect an adaptive response to moment-to-moment changes in
the environment, decreases in cognitive control can, of course, result
in suboptimal performance in the case of sudden or novel changes in
the environment.
While the evidence for the relation between fluctuations in the
efficiency of action monitoring and error commission is intriguing, it
has also been proposed that errors in routine tasks result from appro-
priate reduction of task-related effort (Eichele et al., 2008). Eichele et al.
performed trial-by-trial analysis of fMRI data to reveal patterns of activity
that occurred 6–30 s before an error. The patterns of activity were local-
ized to task-related brain areas and co-occurred with an increase in the
activity of the so-called default-mode network (see Chapter 1) associated
with a relaxed state (Eichele et al., 2008; Li et al., 2007). In other words,
errors appeared to reflect a shift from effortful, motivated involvement
in the task toward a mental state more similar to resting conditions.

Applications based on error- and feedback-related neural signals


One area where the use of error-related neural feedback is already
being explored is brain–computer interfaces (BCIs; see Chapter 4).
A typical approach in BCI research is to train a classifier to classify EEG
from single trials online. In addition to the accuracy of the classifica-
tion, important considerations are the amount of training data needed
to achieve good classification rates and how well the classifier works
across different sessions. For example, Llera et al. (2011) developed a
classifier based on what they call ‘interaction error potentials’ (IErrPs).
These potentials fall into the class of observation error potentials (van Schie
et al., 2004) and are evoked when the device with which the user is
interacting produces incorrect or surprising performance (Ferrez, 2007).
Simulations showed that a learning algorithm based on IErrPs could
successfully solve a binary classification task. Tests with human performers
revealed mixed results, but, in at least some people, the algorithm
worked quite well. The next challenge for such classifiers will be extending
the work beyond binary classification to tasks in which the error signal
does not provide information on what the correct system output should
have been.
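The logic of error-potential-driven adaptation can be illustrated with a toy simulation. The sketch below is not the algorithm of Llera et al.: the detector accuracy, learning rate and update rule are illustrative assumptions. A noisy single-trial ‘error potential detector’ flags probable errors, and the system nudges its stimulus–output mapping away from any output that was flagged.

```python
import random

random.seed(2)

true_map = {0: 1, 1: 0}        # stimulus -> required binary output
belief = {0: 0.5, 1: 0.5}      # learned P(output should be 1 | stimulus)

def detect_error(was_error, accuracy=0.8):
    """Noisy single-trial error-potential detector (hypothetical accuracy)."""
    return was_error if random.random() < accuracy else not was_error

for _ in range(500):
    s = random.choice([0, 1])
    out = 1 if belief[s] > 0.5 else 0
    flagged = detect_error(out != true_map[s])
    # If an error potential was detected, move belief away from the output
    # just produced; otherwise reinforce that output.
    target = (1 - out) if flagged else out
    belief[s] += 0.1 * (target - belief[s])

learned = {s: (1 if belief[s] > 0.5 else 0) for s in true_map}
```

Even with an imperfect detector, the net drift of each belief is toward the correct mapping, which is why such classifiers can work despite modest single-trial accuracy.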
Lehne et al. (2009) also explored the idea that passive BCIs, that is
BCI systems that use information about the user’s brain without inter-
fering with other modes of interaction (e.g. Zander et al., 2008), could
use specific brain states associated with human error processing. In
particular, they examined the classification of machine-generated versus
self-generated errors in a tactile discrimination task. Tactile interfaces
have the potential to lower cognitive workload in other modalities and
can be used to intuitively direct a user’s attention by giving the user
a proverbial ‘tap on the shoulder’. Tactile stimuli have been shown to
elicit the P300 (Brouwer & van Erp, 2008) and error potentials (Miltner
et al., 1997). Lehne et al. presented tactile stimuli via the TNO tactile
torso display (model JHJ-3; TNO, the Netherlands), an adjustable vest
worn over the clothes and containing five rows of twelve equally spaced,
custom-built tactors (van Erp, 2007). The participant’s task was to move
the tactile ‘cursor’ (i.e. the currently vibrating tactor) by pressing a key to
‘accept’ or ‘reject’ a visually-presented direction of motion. Participants
were told that the interface might occasionally commit errors. The EEG
was recorded and differences between error and correct trials were com-
puted for both machine- and self-generated errors. Machine errors were
characterized by a positive deflection peaking at about 400 ms after the
occurrence of an error, but self-generated errors tended to show negativity
early in the ERP followed by positivity later, with single-trial classification
accuracies exceeding 50% for all participants.

Maintaining attentional control

Not all ebbs and flows of attention are driven by stimulus properties or
sequential contingencies. Attentional lapses occur throughout the day,
and even more so during the execution of highly practised, routine
tasks (Robertson et al., 1997) or when arousal levels are low. It has been
suggested that such mind-wandering episodes become more frequent as
performance becomes automated and the amount of executive resources
invested in the task at hand decreases (Smallwood & Schooler, 2006).
Although there is no general agreement on why episodes of inattention
occur, the role of frontal regions of the brain in attentional processing
is well established (Hopfinger et al., 2000). Interaction between frontal
brain regions and sensory areas ensures that behaviourally relevant stim-
ulus properties are given preferential processing (see Chapters 1 and 2).
Lapses of attention can thus lead to poorer stimulus representations
and, because of this limited quality, faulty cognitive or motor perform-
ance (Weissman et al., 2006). Fluctuations in attentional state can also
explain why the experience of ‘not seeing’ another car and hitting the
brakes just in time (or too late) is as common as it is: brief attentional
lapses might prolong stimulus-processing time (e.g. seeing another car)
and result in the delayed initiation of action inhibition.
Endogenously driven, short time-scale fluctuations in sustained and
focused attention appear to occur periodically, with periods of about 10 s
(for a review, see Sonuga-Barke & Castellanos, 2007). A paradigm used
commonly to measure failures of sustained attention is a Go/No-go
task—the Sustained Attention to Response Test (SART) (Robertson et al.,
1997), in which participants are asked to respond to a subset of randomly
presented digits. Some digits require a button press response, whereas
others should be ignored and the response should be withheld. The ‘go’
trials on which a response should be made are more frequent than the
‘no-go’ trials on which the response should be withheld. Errors in the
SART task tend to be ‘errors of commission’ in which a response is made to
a no-go stimulus and thus appear to stem from failures of sustained
attention, which, in turn, lead to failures to inhibit the prepotent motor
response (O’Connell et al., 2009a). Mazaheri et al. (2009) used the SART
task to detect error-prone brain states using magnetoencephalography
(MEG). Increased prestimulus oscillatory brain activity in the alpha
band was found to predict failures in sustained attention and to fore-
shadow errors of commission. Similar findings have been reported in
detection and discrimination tasks, in which prestimulus fluctuations
in the alpha band have been found to predict performance accuracy on
subsequent trials (Thut et al., 2006; Wyart & Tallon-Baudry, 2009).
Fluctuations in attention appear to be gradual (e.g. Macdonald et al.,
2010). Despite the gradual nature of fluctuations of attention, most
studies of error-prone states (e.g. Mazaheri et al., 2009; Weissman et al.,
2011) have relied on comparisons of the state of attentional networks on
the trials immediately preceding correct responses with those on trials
immediately preceding erroneous responses. A more general approach
was used by O’Connell et al. (2009b), who measured visual steady-state
evoked potentials (SSEPs) (a response in the EEG that corresponds to
the frequency of a flickering visual stimulus; see Chapter 4) to trace
the neural signature of lapses in sustained attention. O’Connell et al.
found that a gradual increase in alpha power (associated with a resting
state; see Chapter 1) started as early as 20 s prior to a missed target
event, and interpreted this as reflecting a lapse of sustained attention.
An important strength of the SSEP technique is that it can be used both
to track fluctuations in attention and to attract or modulate attention
directly by changing the frequency of the stimulus flicker (O’Connell
et al., 2009b; see Chapter 4).
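The core SSEP measurement—EEG power at the flicker frequency—can be sketched in a few lines. The sampling rate, flicker frequency and signal amplitudes below are arbitrary illustrative values, not parameters from O’Connell et al.

```python
import cmath
import math

def power_at(signal, freq, fs):
    """Power of `signal` at `freq` Hz, from a single DFT coefficient."""
    n = len(signal)
    coeff = sum(x * cmath.exp(-2j * math.pi * freq * i / fs)
                for i, x in enumerate(signal))
    return abs(coeff / n) ** 2

fs = 250.0          # sampling rate in Hz (arbitrary choice)
flicker = 15.0      # flicker frequency of the attended stimulus (Hz)
samples = [2.0 * math.sin(2 * math.pi * flicker * i / fs)
           + 0.5 * math.sin(2 * math.pi * 10.0 * i / fs)
           for i in range(int(2 * fs))]   # 2 s of simulated EEG

# The SSEP appears as elevated power exactly at the flicker frequency;
# a lapse of attention would show up as a drop in this value over time.
ssep_power = power_at(samples, flicker, fs)
background = power_at(samples, 10.0, fs)
```

Tracking `ssep_power` over successive short windows is, in essence, how attention to the flickering stimulus can be traced continuously.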
Sustained task performance can induce mental fatigue, and fatigue
results in performance and brain activity changes. In general, mental
fatigue results in reduced action monitoring as indexed by the ERN.
As little as two hours of performance in a conflict task are sufficient
to bring about a decrease in ERN amplitude (Boksem et al., 2005).
Boksem et al. also showed that the contingent negative variation (CNV),
associated with response preparation, is reduced as a function of fatigue.
Manipulating motivation to perform (by paying participants to continue
with the task) revealed individual differences in how people muster
resources to continue to perform. People who increased their perform-
ance accuracy showed an increase in ERN amplitude, whereas people who
increased their response speed showed an increase in CNV amplitude.

Learning from errors

Most error responses are initiated before the person making the error is
aware that an error is being made. Therefore, although encouraging more
conservative behaviour or more mindful processing might reduce the
chances of making some types of error, such interventions are unlikely
to eliminate error. On the one hand, errors are salient events that
attract attention and make the performer aware of a need to change the
response criterion and engage in more controlled behaviour in order to
keep from ‘slipping up’ again. On the other hand, in many cases the
errors themselves—when detected—provide opportunities for learning.
A detected error gives the performer feedback that performance was
less than optimal and serves as a signal to update predictions about the
outcomes of future actions to improve performance.
Holroyd and Coles (2002) introduced a theory that links ERN and,
more generally, ACC activity, to reinforcement learning processes in
the brain. According to reinforcement learning theories, a discrep-
ancy, often referred to as reward prediction error, between actual action
outcomes and expected outcomes is a driving force for learning. The
prediction error signal thus specifies how actions should change: if
an action outcome is worse than predicted (negative prediction error)
changes in future actions must be initiated; if the outcome is better than
that expected (positive prediction error) the same behaviour should be
repeated in the future. Reward prediction errors have been related to
phasic activity in mesencephalic dopamine neurons (Schultz, 1998). For
example, in a study in which the activity of dopamine cells was meas-
ured while training a monkey to associate a stimulus with the reward
of juice, Schultz found that the firing rate of the cells increased when the
juice was delivered early in the training (i.e. when it was unexpected),
fell below the baseline when the expected reward was not delivered later
on in the training (i.e. when expectations were violated) and did not
change when rewards met expectations.
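The prediction-error logic described above can be captured by a minimal delta-rule learner; the learning rate and reward coding below are illustrative assumptions, not parameters from these studies.

```python
def update_value(value, reward, alpha=0.1):
    """Delta-rule update: move the value estimate toward the outcome."""
    prediction_error = reward - value            # actual minus expected
    return prediction_error, value + alpha * prediction_error

# Early in training the reward is unexpected: large positive prediction error.
v = 0.0
delta, v = update_value(v, reward=1.0)

# After many rewarded trials the reward is expected and the error vanishes,
# mirroring the declining firing of the dopamine cells in Schultz's study.
for _ in range(100):
    delta, v = update_value(v, reward=1.0)

# Omitting an expected reward now yields a negative prediction error,
# analogous to firing dropping below baseline when reward is withheld.
omission_delta, _ = update_value(v, reward=0.0)
```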
Holroyd and Coles’s (2002) theory proposes that the performance
monitoring system uses dopaminergic prediction error signals to
learn which actions are appropriate in a given context. Scalp-recorded
ERN and FRN are thought to reflect the effect of the mesencephalic
dopaminergic system on motor neurons in ACC and themselves serve
as reinforcement learning signals to improve performance. Indeed,
FRN amplitude has been found to predict whether performers will
learn to avoid an erroneous response the next time an action has to be
performed (van der Helden et al., 2010). In Holroyd and Coles’s theory,
FRN amplitude modulations are proposed to reflect the degree of nega-
tive prediction error—that is, to what extent the actual action outcomes
deviated from expectations (for a review, see Nieuwenhuis et al., 2004).
Although projections from mesencephalic dopamine neurons to ACC
do support the hypothesis that phasic dopamine signals directly affect
the ACC, the slow time course of dopamine re-uptake in frontal areas
has led some researchers to propose that changes in dopaminergic firing
rates have only an indirect effect on ACC (Yeung et al., 2004).
The proposed relation of the ERN to reinforcement learning suggests
that as learning progresses performers will rely more on internal
representations of action values than on externally provided feedback
(Holroyd & Coles, 2002). This proposition was tested using a probabil-
istic learning task in which participants had to make speeded responses
and to learn stimulus–response mappings to maximize overall monetary
reward. Importantly, for some stimuli the mappings were fixed, whereas
for other stimuli the mapping was randomly changed on each trial.
Despite the time pressure, participants were able to select the appropriate
response in the fixed mapping condition on most trials (accuracy
increased from chance level up to 80%). Learning effects were observed
on both ERN and FRN. Early in the learning process, performance
feedback elicited high-amplitude FRN, and the response-locked ERN
was small. As learning progressed the pattern changed gradually until
the stimulus itself became a predictive cue for reward or punishment, as
reflected by increased ERN amplitude accompanied by decreases in FRN,
reflecting decreased reliance on external feedback. Further evidence for
the role of dopamine-dependent, feedback-based learning comes from
work using a molecular genetics approach (see Chapter 8). Klein et al.
(2007) showed that people grouped according to the dopamine D2 recep-
tor gene polymorphism DRD2-TAQ-IA showed differential learning in a
probabilistic learning task. Specifically, A1 allele carriers with reduced
dopamine D2 receptor densities were impaired in the ability to learn to
avoid actions with negative consequences compared with the rest of the
group. fMRI scans showed that the posterior MFC of A1 allele carriers
responded less to negative feedback than did that of other participants,
suggesting that feedback monitoring was deficient in this group.

Online classification of feedback processing


Lopez-Larraz et al. (2010) investigated whether it was possible to deter-
mine online if an observer received positive or negative feedback on a
given trial. They used a time estimation task in which the observer was
to press a button when they thought that a given target amount of
time had passed. The experiment was carried out in two sessions of 300
trials, with three weeks between the sessions. The interval for determining
whether responses were considered correct was adjusted dynamically
so that positive feedback was given on approximately half the trials
(e.g. when the task was to produce an interval lasting 1 s, the allowed
margin of error was ± 100 ms, and the observer pressed the key
after 1050 ms) and negative feedback on the other half of the trials
(e.g. when the task was to produce an interval lasting 1 s, the allowed
margin of error was ± 80 ms, and the observer pressed the key after 900 ms).
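This dynamic margin adjustment can be thought of as a simple one-up, one-down staircase, sketched below; the step size, starting margin and simulated response variability are illustrative assumptions rather than values from Lopez-Larraz et al.

```python
import random

random.seed(1)

def adjust_margin(margin, was_correct, step=5.0):
    """One-up, one-down staircase: harder after success, easier after failure."""
    if was_correct:
        return max(step, margin - step)
    return margin + step

target, margin = 1000.0, 100.0      # target interval and starting window (ms)
outcomes = []
for _ in range(2000):
    produced = random.gauss(target, 120.0)   # simulated interval production
    correct = abs(produced - target) <= margin
    outcomes.append(correct)
    margin = adjust_margin(margin, correct)

# With equal up and down steps, the staircase converges on the margin at
# which responses are judged correct about half the time.
positive_rate = sum(outcomes[-1000:]) / 1000
```

Balancing the two feedback classes in this way matters for the classifier: it prevents the decision rule from simply learning the base rate.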
The data from the first session were used to characterize the potentials
and collect enough examples to be able to train the online classifier. In the
second session, the classifier was applied to recognize positive and nega-
tive feedback signals online. Analysis of the session 1 data showed that
the potentials were different for the positive and negative feedback trials.
Lopez-Larraz et al. identified the channels that could be used to
differentiate the trials (FC1, FC2, CP1, CP2, Fz, FCz, Cz and CPz) and the
time window that contained information relevant to the classification
task (200–600 ms). The resulting feature vectors (channel information
at each sampled time point) obtained from the EEG measurements were
submitted to a classifier. After training, the classifier was applied to the
session 1 data and, online, to the session 2 data. Recognition of the type
of feedback increased throughout the first session to nearly 80% at the
end of the session. Real-time classification of session 2 trials showed a
decrease in classification accuracy with respect to the end of the first
session, with drops in accuracy of 6.33% and 11.35% for negative and
positive feedback trials respectively. Overall, single trial classification was
found to be about 71% accurate, and cross-validation with all EEG data
performed offline showed about 80% classification accuracy. Thus, it
seems that it is possible to assess whether people have received positive
or negative feedback. From feedback processing, it may be possible to
predict the amount of learning different individuals are likely to show.
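The feature-vector approach described above—concatenating samples from the informative channels and time window and training a classifier on them—can be sketched with simulated data. The channel list and the idea of a 200–600 ms window follow the text; the simulated signals and the nearest-mean classifier are illustrative assumptions (the actual classifier used is not specified here).

```python
import random

random.seed(0)

CHANNELS = ["FC1", "FC2", "CP1", "CP2", "Fz", "FCz", "Cz", "CPz"]
N_SAMPLES = 10     # samples kept per channel from the 200-600 ms window

def simulate_epoch(feedback):
    """Toy single-trial EEG: negative feedback adds a negative deflection."""
    shift = -2.0 if feedback == "negative" else 0.0
    return [shift + random.gauss(0.0, 1.0)
            for _ in range(len(CHANNELS) * N_SAMPLES)]

def class_means(epochs, labels):
    """'Training': the mean feature vector of each feedback class."""
    means = {}
    for lab in set(labels):
        rows = [e for e, l in zip(epochs, labels) if l == lab]
        means[lab] = [sum(col) / len(rows) for col in zip(*rows)]
    return means

def classify(epoch, means):
    """Assign the class whose mean feature vector is nearest."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(means, key=lambda lab: dist(epoch, means[lab]))

labels = ["negative", "positive"] * 100
training = [simulate_epoch(lab) for lab in labels]
means = class_means(training, labels)

test_labels = ["negative", "positive"] * 50
hits = sum(classify(simulate_epoch(lab), means) == lab
           for lab in test_labels)
accuracy = hits / len(test_labels)
```

The session-to-session drop in accuracy reported above is the hard part in practice: the class means estimated in one session may no longer fit EEG recorded weeks later.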

Conclusion

Slips of action and other errors are an inescapable part of human


performance. Over the past few decades a number of markers of pre- and
post-error states have been identified. Theories of performance monitor-
ing have been developed that provide the basis for predicting when
errors are likely to occur and how they will be dealt with. Behavioural
change and brain activity patterns have been identified that: (a) precede
errors and thus might serve as potential predictors of error in
real-life situations; (b) are elicited by errors and might be utilized
to improve the usability of human–machine interfaces; and (c) are
related to learning from errors and could be implemented in select-
ion and training. The first ‘proof of concept’ applications have been
developed to show the potential of performance monitoring based
on error- or feedback-related processing. As the systems underlying
the lapses in control that result in error become better understood it
will be increasingly important to study individual differences in con-
flict monitoring and feedback processing and to determine the best
possible interventions for overcoming the lapses that lead to error.
6
Neuroergonomics of Sleep and Alertness
Jon Tippin, Nazan Aksan, Jeffrey Dawson and Matthew Rizzo1

Sleepiness is a major problem in modern life. For example, more than


a third of respondents to the National Sleep Foundation 2002 Sleep
in America poll reported being so sleepy that it interfered with their daily
activities at least a few days a month. According to the United States
Bureau of Labor Statistics (2004), about 15% of the US workforce works
outside of the regular daytime work hours of 8 a.m. to 5 p.m., and these
shift workers have been shown to have shorter sleep durations and
increased sleepiness during their major wake periods than those who
regularly work during the day. Because chronic sleepiness is common
and frequently associated with impaired cognition, it should come
as no surprise that sleepy people will, at times, experience failures in
critical aspects of daily functioning that may lead to catastrophic results.
The crash of the Exxon Valdez and the Three Mile Island, Chernobyl
and Challenger disasters are dramatic and often-quoted examples of
the dangers of sleepiness, but countless other, less-celebrated, examples
are found in everyday life. Sleepiness is a public health and work policy
issue that cannot be ignored.
This chapter introduces the neural systems involved in sleep and alert-
ness, and links these neural systems to behaviour in real-world settings
(at home, in transportation and at work). Examples of applications that
take a neuroergonomics approach to sleep and arousal in real-world
settings are given, with an emphasis on naturalistic studies of behaviour
and sleep in drivers with a relatively common sleep disorder known as
obstructive sleep apnoea.

1
Support for Jon Tippin, Nazan Aksan, Jeffrey Dawson and Matthew Rizzo
was provided by NIH RO1 HL091917.


The neurobiology of sleep and alertness

Our understanding of the neurobiology of the sleep–wake cycle has


evolved dramatically since Von Economo (1930) first identified discrete
regions in the brain that promote wakefulness and sleep. The most
influential current model of the regulation of sleep and wake states was
proposed by Saper et al. (2005). The model relies on the concept of a
‘flip-flop’ switch to explain relatively rapid and largely stable transitions
from one state to the other. The flip-flop switch maintains stability
by mutual inhibition. According to the model, the arousal system,
with neuronal systems located in the brainstem, diencephalon, basal
forebrain and cerebral cortex, is inhibited by sleep-promoting neurons
originating in the anterior hypothalamus. Conversely, areas of the brain
that maintain wakefulness inhibit that part of the hypothalamus that
promotes sleep. When the drive for sleep—or alerting signals—becomes
strong enough, this increased activity in one system relative to the
other leads to a rapid flip of the switch to the new state. Orexinergic
neurons, in particular, which project only to arousal system neurons,
act like a ‘finger on the switch’ and play a major role in determining
which state the organism is in.
In addition to the wake/sleep system, there is a system that regulates
the circadian timing of sleep. The heart of this system is the suprachias-
matic nucleus (SCN), located in the anterior hypothalamus (Shirani &
St. Louis, 2009). The SCN receives input from a variety of sources, the
most important of which comes from specialized, non-vision-related
retinal ganglion cells. These cells respond primarily to short wavelength
blue light, and it is the input from these cells that allows the SCN to
become ‘entrained’ to environmental light cues. Output from the SCN
goes primarily to other areas of the hypothalamus and indirectly to the
pineal gland (the major source of melatonin).

Sleepiness, performance and sleepiness countermeasures

It has become clear that work burden, sleep restriction and circadian
factors can adversely affect alertness, performance and neural func-
tioning (Cabon et al., 1993; Goel et al., 2009), affecting safety-critical
operations across many industries. To address the adverse effects of
sleep disturbance, the military has sponsored applied research on sleep
to mitigate effects of continuous and prolonged operations, as have
civilian industries such as the US Department of Transportation. The
Federal Motor Carrier Safety Administration (responsible for safe carriage
of durable goods by commercial truckers) and medical fields (where
protracted duty may increase the risk of errors that affect patient care;
e.g. Institute of Medicine, 2009), in particular, are concerned with
the effects of prolonged wakefulness and disturbed sleep.
Treatment of sleep-impaired individuals includes interventions in and
out of the workplace. These include naps, medications or drugs, changes
in ambient lighting in operational environments, better sleep hygiene
and technical measures, such as alertness or fatigue-detection monitors
or algorithm-based alerting, warning or advisory systems. Naps can
help reduce fatigue severity during prolonged work in many workers,
including soldiers, truckers, pilots and physicians (Dinges & Broughton,
1989; Dinges et al., 1987; Institute of Medicine, 2006, 2009), although
ambient noise, physical comfort, time of day in relation to a person’s
circadian phase (Caldwell et al., 2009; Vgontzas et al., 2007) and pos-
ture may affect nap benefits. Moreover, sleeping in a chair (or upright
like an astronaut) may provide less, and poorer, sleep than sleeping
prone (Nicholson & Stone, 1987). The caffeine found in coffee, many
soft drinks and ‘energy’ drinks is another common countermeasure
to sleepiness and many drugs—legal and otherwise—are now used to
counteract sleep and enhance alertness (see Table 6.1) (for more detail
see Chowdhuri, 2012; Kelly et al., 2004; Roehrs & Roth, 2008).
A novel and potentially useful countermeasure is exposure of sleepy
individuals to short wavelength blue light. Some studies have shown
that this may lead to enhanced subjective alertness and well-being,
and improved performance on tasks of sustained attention (Chellappa
et al., 2011); these results have been corroborated by functional mag-
netic resonance imaging (fMRI) studies (Vandewalle et al., 2006). As
mentioned earlier in this chapter, this effect appears to be mediated via
stimulation of specific retinal ganglion cells not involved with vision,
which project primarily to the SCN in the anterior hypothalamus.
Whereas exposure to blue light may enhance short-term performance,
chronic exposure would be expected to produce long-term impairments
in sleep and alertness by causing a phase shift in circadian timing
(Shirani & St Louis, 2009). The practical utility of blue light exposure
awaits further study.
Technologies to mitigate sleepiness at work aim to determine opera-
tor alertness or performance during duty, or in advance of it (Basner &
Dinges, 2009). These technologies include (Balkin et al., 2011): (a) fitness-
for-duty tests to determine if operators are alert and able prior to work;
(b) real-time monitoring of operator behaviour and physiology during
work; (c) continuous tracking of operator state and behaviours that
Neuroergonomics of Sleep and Alertness 113

Table 6.1 Substances that affect arousal and sleep, their mechanisms of
action and side effects

Amphetamines: Methamphetamine (Desoxyn®), Dextroamphetamine (Dexedrine®),
Methylphenidate (Concerta®, Ritalin®)
  Mechanism of action: increased release and re-uptake blockade of
  dopamine, norepinephrine and serotonin.
  Side effects: nervousness, insomnia, restlessness, palpitations,
  dizziness, nausea, headache, diarrhoea, hypertension, tachycardia,
  psychosis (rare).

Modafinil (Provigil®) and Armodafinil (Nuvigil®; longer-acting R isomer
of Modafinil)
  Mechanism of action: exact mechanism is unknown.
  Side effects: common: headache, nausea, nervousness, rhinitis,
  diarrhoea, back pain, anxiety, insomnia, dizziness, dyspepsia.
  Less common: chest pain, hypertension, tachycardia.

Caffeine
  Mechanism of action: blockade of adenosine receptors in cholinergic
  basal forebrain neurons (activation of these receptors leads to
  decreased firing rate with subsequent decreased inhibition of GABA
  neurons in the hypothalamus).
  Side effects: common: insomnia, nervousness, restlessness, stomach
  irritation, nausea, tachycardia. Less common: headache, anxiety,
  agitation, chest pain, tinnitus, arrhythmia.

Cocaine
  Mechanism of action: inhibition of presynaptic dopamine transporters,
  increased dopamine availability.
  Side effects: nervousness, insomnia, restlessness, agitation,
  psychosis, stroke, arrhythmia, myocardial infarction, addiction.

GABA: gamma-aminobutyric acid.

can impair safety; and (d) use of secondary tasks to monitor operator
performance or enhance alertness. These technologies show promise,
particularly in combination (e.g. Dinges et al., 2005), but feasibility,
validity, reliability and acceptance by operators remain to be addressed
(Balkin et al., 2011). In a similar technological vein, mathematical
models have been developed to predict effects of fatigue on worker
performance based on duty time and scheduling, sleep quantity and
quality, circadian and time-zone information, and other variables (e.g.
Mallis et al., 2004). To be more effective these models must consider
individual variability owing to personal biology and task variables
(Dawson et al., 2011; van Dongen et al., 2004), compare model predictions
against real-world data, and better predict performance risks from
fatigue over several days (Dinges, 2004; van Dongen et al., 2004). They
must also address the dynamics of chronic sleep restriction in relation
to fatigue (e.g. McCauley et al., 2009), predict adverse events (e.g. Hursh
et al., 2008), and better inform staffing and work schedules to minimize
fatigue (Horrey et al., 2011; National Research Council, 2007).
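Such biomathematical models typically combine a homeostatic process (sleep pressure that builds with time awake) with a circadian process that waxes and wanes over the 24-hour day. A minimal sketch of that idea follows; the function name and all parameter values are illustrative assumptions for this example, not the fitted parameters of any published scheduling model.

```python
import math

def predicted_alertness(hours_awake, wake_time=7.0, tau=18.2,
                        s0=0.1, circadian_amp=0.12, acrophase=18.0):
    """Toy two-process alertness estimate: homeostatic pressure S
    saturates toward 1.0 with time awake, while a 24-h circadian
    component C peaks at `acrophase` (hour of day). Alertness is
    modelled as 1 - S + C (higher is better). Parameters are
    illustrative, not fitted to data."""
    s = 1.0 - (1.0 - s0) * math.exp(-hours_awake / tau)  # pressure build-up
    clock = (wake_time + hours_awake) % 24.0
    c = circadian_amp * math.cos(2.0 * math.pi * (clock - acrophase) / 24.0)
    return 1.0 - s + c

# Predicted alertness mid-morning versus at 03:00 after a 07:00 wake:
morning = predicted_alertness(2)
late_night = predicted_alertness(20)
```

A real scheduling model would additionally track prior sleep quantity and quality, time-zone shifts and individual variability, which is precisely what the validation work cited above calls for.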
Automobile driving is a key real-world activity that is adversely
affected by sleepiness. Driving has become an indispensable activity of
daily life, yet vehicular crashes injure millions and regularly kill over
40,000 people in Europe and North America each year, at a cost of
about US $230 billion (Economic Commission for Europe, 2011;
NHTSA, 2004). About 1.2 million people die worldwide as a result of
vehicular crashes and tens of millions are injured (Peden & Sminkey,
2004). Drowsy drivers are at particular risk for an automobile crash
(e.g. Connor et al., 2000), a situation made particularly alarming
because chronic sleepiness is so pervasive in modern society. It has been
estimated that 1.35 million drivers were involved in a drowsy-driving-
related crash in the 5 years prior to a 2002 US Gallup Poll (Royal,
2003) and, in one study, approximately 55% of 1000 drivers surveyed
indicated that they had driven while drowsy and 23% had actually
fallen asleep at the wheel (McCartt et al., 1996). The majority of drowsy
drivers are simply sleep deprived, but otherwise healthy (McCartt et al.,
1996), such as truckers and shift workers (e.g. Hakkanen & Summala,
2000). According to the Centers for Disease Control, more than a third
of Americans routinely sleep fewer than 7 hours a night; a similar
proportion have unintentionally fallen asleep during the day, and nearly
5% did so while driving (CDC, 2011). However, a sizeable subgroup is
affected by a sleep disorder that causes them to be excessively sleepy,
such as obstructive sleep apnoea (OSA), which probably places them at
even greater risk for a crash.

OSA and driving

OSA is a chronic disorder associated with repeated episodes of complete
or partial collapse of the upper airway during sleep that subsequently
affects cognition and performance of affected individuals during their
waking hours. This condition provides an enhanced opportunity for
studying the effects of poor sleep in the real world, one that can yield
more general lessons about sleep-deprived individuals without a sleep
disorder. The apnoea episodes in OSA lead to fragmented sleep and
intermittent oxygen desaturation, which results in excessive daytime
sleepiness (EDS), and increased cardiovascular morbidity and mortality.
The impact of OSA on society is substantial, as it is known to affect
at least 2–4% of middle-aged adults (Bixler et al., 2001; Young et al.,
1993). But, more importantly, because OSA is such a common cause
of EDS, it provides a useful model for studying the effects of sleep
impairment on performance and safety errors, even in operators without
the diagnosis.
Recent meta-analyses have shown that drivers with OSA have a mean
crash risk ratio of 2.72, indicating that these individuals have a 172%
greater chance of a crash relative to the general population (Sassani
et al., 2004; Tregear et al., 2009). Despite the higher crash risk as a group,
the increased risk is actually attributable to a small subset of these
drivers (Masa et al., 2004). Drivers with a history of sleepy driving, those
with severe disease and those who are obese are clearly at increased
risk (Masa et al., 2004; Pack et al., 2006; Stoohs et al., 1994), but indi-
vidual crash risk is difficult to determine for the majority of OSA drivers
based upon these factors (Teran-Santos et al., 1999). Tregear (2007), for
example, found only a weak association between measures of subjective
sleepiness and crash risk in a large meta-analysis, and objective labora-
tory measures of sleepiness, such as the Multiple Sleep Latency Test
(MSLT), which assesses how quickly a person falls asleep over multiple
trials in a controlled (clinical) setting, have not proven useful in predict-
ing crash risk in this population. Although these measures have been
shown to correlate with simulated and closed-track driving performance
(e.g. Philip et al., 2008), there are conflicting data on the association
between results from objective tests of sleepiness and real-world driving
outcomes in OSA drivers (e.g. Aldrich, 1989; Cassel et al., 1996).

Effects of disordered sleep on arousal and cognition


Dysfunction in multiple areas of cognition important for activities such
as safe driving has been found in many persons with OSA, notably in
the realms of attention, memory and executive function (e.g. Aloia
et al., 2004; Beebe & Gozal, 2002; Feuerstein et al., 1997; Tippin
et al., 2009). Deficits in ‘higher’ cognitive functions, such as memory
and executive control (including decision-making and implementa-
tion), may result from inattention caused by OSA-associated impaired
arousal (Verstraeten & Cluydts, 2004). Whether sleep fragmentation
and resultant EDS or neuronal damage due to hypoxaemia is the major
mechanism by which these cognitive deficits occur is an unsettled ques-
tion. Evidence of impaired attention has been found in patients with
mild OSA who have no clinically meaningful hypoxaemia (Redline
et al., 1997) and in sleep-deprived healthy individuals (e.g. Drummond
et al., 2000; Roge et al., 2003). In addition, Verstraeten et al. (2004) suggested,
on the one hand, that the pattern of neuropsychological impairment in
OSA more closely resembles that seen in sleep-deprived individuals than
in those with hypoxaemia due to chronic pulmonary disease. On the
other hand, several lines of evidence point to nocturnal hypoxaemia
as a major factor in the pathogenesis of the attentional deficits (Aloia
et al., 2004; Findley et al., 1986; Roehrs et al., 1995), including the finding
by Antonelli Incalzi et al. (2004) that OSA patients perform similarly
on neuropsychological tests to patients with multi-infarct dementia.
Chronic, repeated episodes of nocturnal hypoxaemia may explain
the irreversible cognitive deficits that have been found in some OSA
patients (Bedard et al., 1991; Nowak et al., 2006).

Self-awareness of sleep impairments


Individuals with OSA are not always aware of their cognitive impair-
ments or even of being drowsy (e.g. Chin et al., 2004; Engleman et al.,
1997), which may lead them to unwittingly engage in unsafe driving
behaviour. Engleman et al. (1997) studied driver ratings of sleepiness
before and after positive airway pressure treatment (PAP; the treatment
of choice for OSA which reverses upper airway collapse by delivering
pressurized air through either a nasal or oral–nasal mask sufficient to
‘splint’ open the airway, thus reducing obstructive respiratory events,
including respiratory effort-related arousals; Kushida et al., 2006) and
found that 62% of the OSA patients had underestimated their pre-
treatment degree of sleepiness when asked to re-evaluate themselves
after starting PAP. Moreover, 25% of OSA drivers recognized only after
treatment with PAP that they had previously unacknowledged trouble
driving (see also Chin et al., 2004). Furuta et al. (1999) demonstrated
a dissociation between how sleepy patients perceived themselves to be
and how sleepy they actually were, as measured by the MSLT. This
discrepancy may be analogous to the unawareness of
sleepiness and resultant deterioration in cognitive performance that has
been demonstrated in healthy, sleep-deprived individuals (van Dongen
et al., 2003). Nevertheless, several studies have shown little, or no, cor-
relation between an OSA patient’s perception of sleepiness and motor
vehicle crash history (e.g. Turkington et al., 2001; Yamamoto et al., 2000).
Perhaps because of their unawareness of sleepiness and impaired func-
tioning, these individuals may be less likely to restrict their driving,
despite being at increased risk for crashes.
The critical factors underlying driving performance errors in sleepy
drivers with OSA are not well described, and few tools are available for
detecting and alerting those drivers who are at greatest risk for a crash.
This underscores the need for development of an objective measure of
driver fitness in this population. Symptom minimization may be inten-
tional, as some drivers may fear losing driving privileges. In a recent
study of commercial truck drivers with OSA, Parks et al. (2009) found
that many drivers purposely minimized sleepiness in an occupational
medicine screening clinic, and only 1 out of 20 who were prescribed PAP
used it consistently. This is similar to what has been demonstrated in
epileptics who often under-report seizure frequency to their physicians
in order to prevent the imposition of driving restrictions (Salinsky
et al., 1992).
A further complication, consistent with the sleep/wake model dis-
cussed previously, is that the border between wakefulness and sleep
is indistinct. Gastaut and Broughton (1965) found that 2–4 minutes
of electroencephalography (EEG)-defined sleep must elapse before
more than 50% of those tested recognized that they had actually been
sleeping. Rather than being considered as a discrete occurrence, sleep
onset can be better conceived of as an evolving process characterized by
steadily decreasing arousal, lengthening response time and intermittent
response failure (Ogilvie et al., 1989). The EEG may show progression
from wakefulness to stage I sleep, or sleep onset may be preceded by
‘microsleeps’ in which the EEG shows brief episodes of alpha drop-out
and an increase in theta activity (Harrison & Horne, 1996).
Periods of approaching sleep onset have been correlated with de-
teriorating driving simulator performance (e.g. Boyle et al., 2008; Golz
et al., 2011). It has also been shown that progressive deterioration in
simulated driving performance among healthy, sleep-deprived indivi-
duals correlates with EEG evidence of drowsiness and is associated with
self-reported sleepiness (e.g. Reyner & Horne, 1998). Continuous EEG
recordings among long-haul truck drivers also show signs of drowsiness
associated with subjective sleepiness (Kecklund & Akerstedt, 1993).
Nevertheless, many drivers with OSA may continue to drive without
being aware of their declining performance.

Impaired sleep in OSA and PAP treatment


Cognition improves in many patients who are treated successfully with
PAP (e.g. Barnes et al., 2002; Engleman et al., 1994) as a result of miti-
gation of sleep fragmentation, hypoxaemia or both. In addition, crash
risk and simulated driving performance may both improve even after
two weeks of PAP treatment (George, 2001; Orth et al., 2005; Turkington
et al., 2004). A large meta-analysis showed that crash risk declines by 72%
with PAP, although it is unclear if PAP reduces the risk to normal levels
(Tregear et al., 2009).
An important aspect of treatment with PAP is that adherence to PAP
therapy at home in real-world settings can be quantified by download-
ing usage data stored in the patient’s PAP machine. It is thus possible
to observe how adherence to therapy is related to patterns of sleep,
and, ultimately, performance on cognitive tests and on real-world tasks,
such as driving. This is important because PAP remains the standard of
care, adherence to PAP is less than optimal, and the dose of PAP needed
to produce meaningful improvements in cognition, real-world EDS
and real-world behaviour remains unclear (e.g. Weaver et al., 2007).
Moreover, even a single night of PAP noncompliance can negatively
affect surrogate markers of driving safety, such as tests of vigilance and
simulated driving performance (Kribbs et al., 1993; Sforza & Lugaresi,
1995; Turkington et al., 2004).
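For example, a downloaded record of nightly usage hours can be reduced to simple adherence summaries. The sketch below computes mean nightly use and the fraction of nights meeting a usage criterion; the function name, record layout and 4-hour criterion are assumptions made for illustration, not clinical guidance.

```python
def pap_adherence(nightly_hours, min_hours=4.0):
    """Summarize PAP usage downloaded from a patient's machine.

    `nightly_hours` holds hours of mask-on use for each night.
    Returns (mean nightly use, fraction of nights with use of at
    least `min_hours`). Illustrative sketch only."""
    nights = len(nightly_hours)
    if nights == 0:
        return 0.0, 0.0
    mean_use = sum(nightly_hours) / nights
    compliant = sum(1 for h in nightly_hours if h >= min_hours) / nights
    return mean_use, compliant
```

Summaries of this kind are what make it possible to relate night-by-night adherence to next-day sleepiness and performance.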
Adherence to PAP is a problem for many patients (e.g. Weaver &
Grunstein, 2008), sleep and cognitive benefits notwithstanding, partly
because of unawareness and partly because of discomfort. In one
study, only 72% of patients who started on PAP agreed to continue
it after the first night (Rauscher et al., 1991). The patterns of use in
patients who accept PAP are highly variable: about half use it an aver-
age of about 6 hours in 90% of nights, whereas the remaining patients
use it fewer than 4 hours in 2–79% of nights (Weaver et al., 1997).
Disease severity and pretreatment subjective EDS may partially predict
PAP adherence, but it is difficult to predict individual adherence based
upon clinical features (McArdle et al., 1999; Weaver & Grunstein, 2008;
Yetkin et al., 2008).
A dose–response relationship appears to exist between hours of PAP
use and reversal of EDS, with some benefit derived at lower levels of
adherence (Weaver et al., 2007), but a minimum of six hours of nightly
use may be necessary to produce a meaningful improvement in some
aspects of cognition and objective sleepiness, as measured by the MSLT
(Weaver et al., 2007; Zimmerman et al., 2006). What is considered
‘optimal’ adherence may depend upon which outcome measure is being
used: improvement in sleepiness or performance on one test of cogni-
tion may not necessarily indicate that treated patients will perform well
on other measures or that they are functioning ‘normally’, especially
in real-world activities (Weaver & Grunstein, 2008; Zimmerman et al.,
2006). There may also be considerable variability in the individual
need for PAP, just as there is considerable variability in the tolerance to
sleep deprivation in both OSA patients and the rest of the population
(e.g. van Dongen et al., 2004). This interindividual difference in neuro-
behavioral vulnerability (which often underlies complex behaviours in
tasks of interest to neuroergonomics) appears to be trait-like, and may
preferentially affect such areas of function as awareness of sleepiness
and mood, cognitive processing capability and sustained attention
(van Dongen et al., 2004). The relation between PAP adherence and
behavioural effects is, however, further complicated by the fact that up
to 50% of patients who use PAP for at least 6 hours per night remain
sleepy (Weaver et al., 2007), whether because of individual variability
in tolerance to sleep deprivation or to factors such as frequent arousals,
continued respiratory events due to inadequate PAP titration or perma-
nent dysfunction of wake-promoting structures in the brain (e.g. Pack
et al., 2006; Rodrigues et al., 2007).

Assessing naturalistic driving behaviour in the real world

Real-world data collection is an essential method for obtaining critical
human factors data (Klauer et al., 2006; Lees et al., 2010; Rizzo et al.,
2007; Thompson et al., 2011) and is highly relevant to developing
predictive models of driver safety, fair and accurate criteria for driver
licensure, and effective injury prevention countermeasures (McGehee
et al., 2007). Current insights on vehicle use by impaired drivers typi-
cally rely on questionnaires completed by individuals who may have
defective memory and cognition, poor observational skills, unaware-
ness of their impairment (also known as anosognosia) and pragmatic
reasons (e.g. employment, economic, social) to withhold self-report of
problems. Standard cognitive tests that enable clinicians to diagnose
patients and track their progress may fail to capture the difficulties that
individuals with impaired decision-making face in real-world tasks, such
as automobile driving. Standard road tests show driver control over a
vehicle under limited circumstances, but do not address key aspects
of strategic decision-making and planning, such as a driver’s ability to
reason and to respond to altered safety contingencies. Driving simula-
tion provides a unique means to closely control the experimental road
conditions under which driver decisions are made, and can show critical
links between sleep physiology and driver control measures (Boyle et al.,
2008; Golz et al, 2011; Moller et al., 2006; Reyner & Horne, 1998).
Drivers may behave differently in the controlled setting of a driving
simulator than in real life, where life, limb and licensure are at stake
(Rizzo, 2011). Observations of real-life driving behaviour in at-risk OSA
drivers can thus provide a wealth of unique information on the causal
and contributing factors that lead to critical incidents, driver errors,
near crashes and crash types that can be related to their real-world sleep
patterns at home, with and without therapy. Naturalistic experiments
in drivers with clinical and physiological evidence of disturbed sleep
are now possible using modern instrumentation and telemetry packages
that provide direct, detailed, long-term information on driver behaviour
from a driver’s own vehicle.
Research in the neuroergonomics laboratory at the University of
Iowa is currently assessing real-world driving performance in OSA
patients by means of extended observations of real-world driver strat-
egies and tactics from a ‘black box’ developed in collaboration with
Digital Artefacts, LLC. This instrument is a video event and electronic
data recorder placed in the drivers’ own cars. Driving performance is
correlated with PAP use each day over multiple, consecutive days to
avoid problems encountered in previous studies in which an average
of nightly PAP use over several months was correlated with change
in performance from the beginning to the end of the study period.
In addition, factors that are predictive of residual impairment in real-world
driving performance in individual OSA patients who are regular users of
PAP are determined.
Figure 6.1 depicts a driver in his own vehicle with the black box event
recorders as deployed in the field. The recorders include two cameras
positioned to observe driver behaviour and a forward view of the road.
The recorders collect video information intermittently, while electronic
vehicle information—including global positioning system (GPS), speed,
and acceleration—is collected continuously.
The recorders permit the capture of driver performance in the real
world. For example, GPS can be used to quantify route choices and char-
acterize them for exposure to road risks, such as frequency of rural or
urban roads, and interstate driving. GPS can be linked with geographic
information system databases to quantify the frequency of driver’s
exposure to inclement weather. Electronic vehicle data also permit a
characterization of the lateral or longitudinal acceleration often associ-
ated with erratic vehicle control, which can be an indication of unsafe
driver behaviour. The frequency of accelerometer events greater than a
specified threshold value (e.g. 0.35 g) and the average magnitude and
range of observed g-forces are two indices of driver safety. Elevations in
accelerometer values, however, do not indicate whether the driver was
at fault or what precipitated the event. Depending on context, elevated
accelerometer values can be an indication of a safe manoeuvre, such as
swerving to avoid another vehicle, or swerving to correct lane deviation
due to inattention or a microsleep event. The black box recorders permit
the electronic vehicle data to be contextualized by collecting brief
video data (20 s) around segments of the drive when accelerometer
readings exceed a set threshold (currently set to 0.35 g). These
event-triggered clips are then subjected to a detailed review in which
each clip is quantified for unsafe behaviours, such as not obeying
traffic signs or signals, for exposure-related factors, such as weather
and road conditions (e.g. wet, icy), and for driver state, such as
distraction and sleepiness (see Wierwille & Ellsworth, 1994).

Figure 6.1 Video and electronic data from the black box event recorder.
Cameras capture driver behaviour (upper left panel) and the forward view
of the road (lower left panel; in this case indicating an approach to an
intersection where traffic is stopped at a traffic signal). GPS indicates
the location of the driver on a geospatial map (dot, upper right panel).
The graphs of the electronic data (lower right panels) show that the
driver's speed decreased from almost 70 kph to approximately 15 kph over
about 15 s on the x-axis (Time).
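The clip-triggering logic described above can be sketched as a threshold on combined lateral and longitudinal acceleration, with overlapping windows merged so a single manoeuvre yields a single clip. The data layout and function below are hypothetical illustrations, not the recorder's actual firmware.

```python
from dataclasses import dataclass

G_THRESHOLD = 0.35  # g, the study's event threshold
CLIP_SECONDS = 20   # video context captured around an event

@dataclass
class Sample:
    t: float   # seconds since trip start
    ax: float  # lateral acceleration, g
    ay: float  # longitudinal acceleration, g

def high_g_events(samples, threshold=G_THRESHOLD, clip=CLIP_SECONDS):
    """Return (start, end) clip windows, in seconds, around samples
    whose acceleration magnitude meets `threshold`; overlapping
    windows are merged into one."""
    windows = []
    for s in samples:
        if (s.ax ** 2 + s.ay ** 2) ** 0.5 >= threshold:
            lo, hi = max(0.0, s.t - clip / 2), s.t + clip / 2
            if windows and lo <= windows[-1][1]:
                windows[-1] = (windows[-1][0], hi)  # merge with previous
            else:
                windows.append((lo, hi))
    return windows
```

Each returned window corresponds to one candidate clip for the multidimensional video review described in the text.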
Multidimensional reviews permit quantification and characteriza-
tion of safety-relevant events into either appropriate responses to
an unsafe event, or one of three basic categories of inappropriate
responses. Inappropriate responses include crashes (incidents when the
vehicle makes physical contact with objects, vehicles or the roadside),
near-crashes (‘close calls’ or ‘near-misses’ that require a rapid evasive
manoeuvre without physical contact with an object such as a person,
vehicle or guardrail), or safety errors without a near-crash or crash
(e.g. running a red light). This characterization is consistent with the
iceberg analogy that characterizes variability in driver safety (Heinrich
et al., 1980; Maycock, 1997). Visible, above-the-waterline events are
driver errors that produce car crashes resulting in fatality, serious
injury, mild injury or (most often) only property damage. Below-the-
waterline events are behaviours that occur more frequently and are less
directly related to crashes, such as not turning into the proper lane and
improper acceleration at turns.

Case study
To better illustrate the neuroergonomic approach toward understanding
sleep and alertness, we now describe an ongoing study of real-world
driving performance in OSA patients over a 3.5-month period using the
black box technology. Parallel to the daily driving record obtained from
the black boxes, daily sleep patterns are tracked using both self-reports
(i.e. sleep diaries) and wrist-worn actigraphy indices, and PAP adher-
ence is measured. Participants’ subjective ratings of EDS and quality of
life are tracked on a monthly schedule and their cognitive functions
are evaluated pre- and post-PAP using standardized neuropsychological
tests of cognitive function. This dense data collection routine permits:
(a) quantification of real-world driving performance in OSA compared
with matched control drivers in greater detail than epidemiological
records (e.g. state-recorded crash statistics); (b) determination of the
dose–response relationship between PAP use and driver safety; and
(c) determination of the factors that predict residual impairments in real-
world driving performance in drivers with OSA after adjusting for levels
of PAP use.
To deal with large, disease-independent variability in driver safety and
to increase confidence that improvements in safety following initiation
of PAP reflect treatment effects, OSA drivers are compared with control
participants who are matched closely on gender, education, age, geo-
graphical area in which they typically drive, and season of driving, and
both OSA drivers and their controls are observed for two weeks prior to
the beginning of PAP treatment. The recorders are designed to permit
quantification of driver safety, exposure to road risks, and driver state
based on detailed video review in periodic baseline clips in the absence
of accelerometer-triggered event recordings. These features of the study
permit the capture and linking of individual improvements in safety
due to PAP therapy. For example, improvement in safety following
introduction of PAP that is unaccompanied by parallel improvement in
sleepiness should decrease confidence that the improvements are due
to diminished EDS.
Data from three participants illustrate changes in objective measures
of sleep/wake patterns, driving safety, sleepiness and distraction behind
the wheel over the course of four weeks, half pre-PAP and half post-PAP,
for two OSA and one control participant. Note that the 55-year-old
male, OSA002, is compliant with PAP therapy, while the 41-year-old male,
OSA004, is not compliant with PAP. The control participant (CS) is
matched to OSA002, and both were observed driving during winter
months and drove in the Iowa City and Cedar Rapids, IA, area.
Figure 6.2 illustrates that all three participants spent considerable
time in bed without being asleep, according to wrist-worn actigraphy
algorithms. The time-in-bed measure includes duration of wakefulness
both prior to and following sleep onset, in addition to duration of
sleep. Wakefulness totals about 2 hours for OSA002 (upper left) and his
control (CS, bottom left), and about 1.5 hours for OSA004 (upper right).
No clear improvement in hours asleep following PAP therapy can be
detected for either of the participants with OSA compared with pre-PAP
levels. However, part (d) shows that the OSA participant (OSA002), who
was compliant with PAP, had consistently fewer awakenings than the
noncompliant OSA participant (OSA004). Although preliminary, these
data suggest that sleep fragmentation, particularly the number of
awakenings, is significantly lower following PAP therapy (Aksan et al., 2012).
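Actigraphy-derived measures such as these can be computed from a night's epoch-by-epoch sleep/wake scoring. The sketch below assumes a record already scored as 'S' (sleep) or 'W' (wake) per one-minute epoch; real actigraphy algorithms score sleep and wake from wrist-movement counts, and the function name is hypothetical.

```python
def actigraphy_summary(epochs, epoch_minutes=1):
    """Summarize one night scored per epoch as 'S' (sleep) or 'W' (wake),
    from lights-out to rising. Returns (hours in bed, hours asleep,
    awakenings per hour of sleep), where every sleep-to-wake transition,
    including the final rise, counts as an awakening in this sketch."""
    in_bed_h = len(epochs) * epoch_minutes / 60
    asleep_h = epochs.count('S') * epoch_minutes / 60
    awakenings = 0
    prev = None
    for e in epochs:
        if e == 'W' and prev == 'S':
            awakenings += 1
        prev = e
    per_hour = awakenings / asleep_h if asleep_h else 0.0
    return in_bed_h, asleep_h, per_hour
```

The gap between the first two returned values is the time spent in bed without being asleep, the quantity plotted in Figure 6.2.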
Figure 6.2 (a–c) Hours spent asleep and in bed, as indicated by
wrist-worn actigraphy, in relation to PAP use for two PAP recipients
(OSA002 and OSA004) and a control participant (CS) matched to OSA002.
(d) Average number of awakenings per hour of sleep per participant. The
first two weeks are prior to PAP use and the second two weeks are
post-PAP use.
Figure 6.3 The number of high g events (top) and the number of safety
errors per high g event (bottom) in OSA patients before and after
starting PAP, relative to the control individual.

Figure 6.3 shows two indices of driver safety: the number of high g
events (top panel) on a per trip basis before and after PAP as derived from
electronic vehicle data and the number of safety errors on a per high g
event basis (bottom panel) before and after PAP as derived from video
review. As can be seen in the figure, there is considerable fluctuation
in the rate of high g events (set to 0.35 g or more) from week to week,
which is consistent with large individual difference variation in driver
safety. But note that, in general, the two OSA participants are less safe
than the control. In contrast, video review-based safety-error rates show
less fluctuation from week to week, suggesting a safety-error rate of one
error per high g event, which also suggests that OSA participants before
PAP treatment may be less safe than controls.
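Both indices can be computed directly from per-trip event counts and the error counts produced by video review. The record layout and function name below are assumptions made for illustration.

```python
def safety_indices(trips):
    """Compute (high g events per trip, safety errors per high g event)
    from a list of (n_high_g_events, n_safety_errors) tuples, one per
    trip, where error counts come from video review of event clips."""
    n_trips = len(trips)
    total_events = sum(events for events, _ in trips)
    total_errors = sum(errors for _, errors in trips)
    events_per_trip = total_events / n_trips if n_trips else 0.0
    errors_per_event = total_errors / total_events if total_events else 0.0
    return events_per_trip, errors_per_event
```

Aggregating by week, as in Figure 6.3, simply means applying the same computation to each week's subset of trips.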
Figure 6.4 Measures of sleepiness (top) versus alertness (bottom) based
on video clip reviews during high g events.

Figure 6.4 shows levels of sleepiness (top panel) and alertness (bottom
panel) in high g event video clips during the pre- and post-PAP phases.
It is immediately apparent that, on average, the control participant
appears less sleepy and more alert behind the wheel during high g clips
than the participants with OSA. On average, the data suggest that the
OSA participants were slightly more alert during the post-PAP phase
than they were prior to PAP. Furthermore, the OSA participant who
adhered to PAP therapy (OSA002) appears to be showing improve-
ments in sleepiness behind the wheel, whereas the OSA participant
who showed minimal compliance with PAP therapy (OSA004) seems
to show fluctuations that do not conform to any particular pattern of
improvement or decline. Another, preliminary, pattern of note from
the data concerns the degree of variability in driver-state measures. For
example, sleepiness measures appear to show higher variability than
alertness measures over time. It is possible that when driver-state evalu-
ations are based on brief clips, proxy measures (alertness as measured
by driving-related gaze movements) rather than direct measures of EDS
(sleepiness as measured by yawning, slow blinks, slack facial and bodily
muscle tone) will be more useful in detecting change over the course of
time as a function of PAP therapy.

Conclusion

Extensive scientific evidence exists on the negative effects of sleepiness
on the performance of many cognitive tasks, including those essential for
safely operating a motor vehicle. These include adverse effects of
sleepiness induced by sleep loss on maintaining wakefulness and
alertness, and on vigilance and selective attention, psychomotor and
cognitive speed, accuracy in performing a wide range of cognitive tasks,
working and executive memory, and higher cognitive functions, such as
decision-making, detection of safety threats and problem-solving,
as well as communication and mood. Sleepiness is not, however, an
all-or-none condition in which a driver is either rested with no nega-
tive effects on performance or sleepy with resultant severe negative
effects on performance. There are degrees of sleepiness and degrees of
negative effects on performance. Likewise, the effects of sleepiness on
performance can vary substantially from one driver to another,
differentially affecting driving performance and safety.
The preliminary results from the study of driver behaviour and errors
in the real world illustrate how a neuroergonomic approach can inform
public health and work policy issues on sleepiness-related impairments
and their effects on driver safety. When data collection is complete,
the study will provide unique, community-based data samples on per-
formance outcomes from unprecedented exposure data on driving in
OSA and normal comparison individuals. A better understanding of
how driving performance deteriorates in OSA is relevant to the rational
development of interventions that could be used to prevent crashes
involving other drowsy drivers. The techniques used in the study could
be adapted to develop future tools for screening, identifying, advis-
ing and alerting drivers with OSA who are at greater risk for impaired
driving due to drowsiness, cognitive dysfunction, lack of insight into
their impairment and lack of compensatory behaviours (e.g. avoidance
of driving while drowsy). Fair and accurate means of detecting drivers
with OSA who are unsafe will help mitigate the tragedy of motor vehicle
crashes caused by these impaired individuals. Perhaps most importantly
the patterns of performance in drivers with OSA should prove relevant
to understanding driving errors in normal individuals under conditions
of multitasking, stress, fatigue and drug use in our 24-hour society.
7
Affective and Social
Neuroergonomics
Jacob Jolij and Yana Heussen

Most of us have seen a cartoon or a clip on the Internet of an office
employee throwing a computer out of the window in frustration,
and many of us have experienced what we would call ‘computer rage’.
Computer rage is a phenomenon in which users experience frustration,
anger or confusion while interacting with a system. Although we may
make our frustration evident in the way we type or use an interaction
device, most of us realize that our emotional reactions are not conveyed
to the computer, and that even if they were, they would have no effect
on the system’s performance. One’s emotional state does, however,
play an important role in decision-making and even in basic percep-
tion (Damasio, 2005; Clore & Huntsinger, 2007). Emotions also colour
human–computer interaction (e.g. Johnson & Klein, 2006). Optimizing
human–machine interaction thus depends on understanding the neural
basis of emotions, how emotions affect decision-making and percep-
tion, and how emotions can be measured.
Not only do we sometimes express emotion intended for our com-
puters, many of us have experienced the feeling that our computer acts
with intention. For example, just after swearing at your computer
because the text processor seems to have malfunctioned, ruining your
layout yet again, your computer may crash, resulting in the loss of a
day’s work. Many of us will recognize the faint feeling that our computer
must have done that on purpose, and will have quickly dismissed it as a
silly feeling. However, such attributions of intentionality to inanimate
objects, such as computers, may have a basis in the neuroscience of
social interactions. Recent advances in the field of social neuroscience
have shown that the human brain tries to understand the world and,
in particular, other people, by means of simulation. In other words, if
we observe an action in the outside world, we try to understand that
action by ‘pretending’ we are doing it ourselves. This neural mimicry
enables us to understand how and why people around us behave as they
do, and may also play a role in attributing intentionality to inanimate
objects, such as computers. That, is, the attribution of intentionality to
inanimate objects may occur because our brains process interactions
with computers as social interactions (Keysers & Gazzola, 2006).
The tendency to interpret any interaction as a social interaction is
being capitalized on by corporations such as Apple. For example,
‘Siri™’, a software-based personal assistant that accepts spoken natural
language as input on the Apple iPhone, requires that the user engage
in fairly natural and meaningful dialogue with the device. Moreover,
many computer interfaces feature avatars, cartoon figures with whom
the user interacts, such as the infamous paperclip in Microsoft Word.
Even such simple avatars add a social dimension to human–computer
interaction. It should be noted, however, that such a social dimension is
not always appreciated, leading some human–computer interaction spe-
cialists to advise against making interaction too social (Shneiderman,
1998). Finally, many laboratories worldwide are involved in research on
android robots for implementation in the home (e.g. as assistance to eld-
erly persons) or in environments such as healthcare (see, e.g., European
Commission, Information Society and Media, 2008). How such systems
should be designed so that they optimally exploit the brain’s built-in
mechanisms for social communication to facilitate human–computer
interaction is a major focus of this chapter. More specifically, we review
recent advances in the fields of affective and social neuroscience, two
relatively new branches of brain research that deal with the neural cor-
relates of emotion and social cognition, respectively, and illustrate how
they may benefit the field of cognitive neuroergonomics.

The neural basis of emotion

According to most textbook definitions, emotions are evolutionarily
old behavioural patterns associated with particular psychological
states that are evoked by specific situations or interactions with
others (e.g. Gray, 2006). Emotions range from ‘simple’ emotions that
are largely biologically determined (such as joy or anger) to complex
emotions which may be learned and are culturally determined (such
as pride or shame). All emotions have in common that they are a compound
of specific cognitive and bodily states, and that they are associated with
a particular phenomenological state of ‘feeling’. There is some consensus
that there are six basic emotions, which are shared among cultures: joy,
fear, surprise, anger, disgust and sadness. These six emotions are
accompanied by distinct facial expressions that appear to be innate,
although cultural factors also shape these expressions to some extent
(Ekman & Friesen, 1986; Jack et al., 2012). According to Frijda (1986),
emotions reflect the tendency of an individual to initiate specific
behaviour, based on that individual’s needs. As such, emotions play a
vital role in our behaviour.
One hypothesis regarding the neural basis for emotion is the triune
brain hypothesis (MacLean, 1990). According to this hypothesis, our
emotions are governed by a relatively primitive part of the brain, the
so-called paleomammalian brain. According to MacLean, this part of
the brain first evolved in ancient mammals, and underlies emotion
and feelings. MacLean proposed the alternative name limbic system
for this interconnected system of brain structures, which includes the
septum, the amygdala, the hypothalamus, the hippocampal complex
and the cingulate cortex. Although the triune brain hypothesis, as
originally stated, is considered to be outdated, the idea that emotions
and emotional behaviours are governed by the limbic system is still
generally accepted.
The amygdala, in particular, has been established as playing a key role
in the brain’s emotion network. The amygdala, or rather the amygdalae, are
two almond-shaped nuclei that lie deep within the temporal lobes—
one in the left hemisphere and one in the right hemisphere. The amyg-
dala receives input from different cortical areas, including the frontal
and sensory cortices. Moreover, there are direct, subcortical connections
from the thalamus to the amygdala that relay visual input directly from
the optic nerve to the amygdala, bypassing the cortical visual system
(Tamietto & de Gelder, 2010). These subcortical connections allow the
amygdala to quickly analyse and, if necessary, respond to, information
carrying emotional content (LeDoux, 1996). The amygdala, in turn,
projects to the frontal, parietal and sensory cortices, and plays a role
in initiating fight-or-flight behaviours or guiding attention toward
potentially important information.
Although the amygdala plays a key role in the processing of emotional
signals, it cannot be considered the ‘seat of emotion’ in the sense that
activity in the amygdala causes the subjective experience of emotions.
In fact, patients with lesions of the amygdala report normal subjective
emotions and can process emotions, as evidenced by their ability to per-
form rapid discrimination of emotional stimuli (Adolphs et al., 2002).
However, these patients do show impairment in directing attention
toward emotionally salient events, such as words with a strong negative
meaning (e.g. ‘murder’ or ‘death’; Adolphs et al., 1998). Such findings
suggest that although the amygdala may be a key structure in detecting
emotions, it is involved primarily in making behavioural adjustments.
Another brain structure involved in mediating emotion is the orbitofrontal
cortex, the part of the frontal cortex just behind the eyes. A classic
case in cognitive neuropsychology, that of Phineas Gage, illustrates
the critical role of this area in decision-making. Railroad worker Phineas
Gage suffered a massive head trauma in 1848 after blasting powder he
was tamping with an iron rod exploded, launching the rod so that
it entered his skull below his left eye and proceeded through the
orbitofrontal cortex. After the injury, Gage’s behaviour and personality
were changed. Although the reports are somewhat controversial, Gage’s
physician described his behaviour as becoming more erratic and less
inhibited. Allegedly, Gage had more problems planning his day-to-day
activities and, most importantly, experienced difficulty in evaluating
the affective outcomes of his actions. More specifically, he was unable
to predict whether he would regret doing something or not. In modern
affective neuroscience theories, the ability to predict the emotional
consequences of one’s actions is thought to be one of the major functions
of the orbitofrontal cortex (Damasio, 1996).
Although many studies have attempted to pinpoint the neural gener-
ators of specific emotions, thus far these attempts have met with mixed
results, with the exception of disgust, for which neuroimaging evidence
suggests a locus in the insular cortex (Keysers & Gazzola, 2006). Positive
(e.g. joy, happiness) versus negative (e.g. anger, sadness) emotions
have been localized to some extent. In general, the right hemisphere
appears to be more strongly activated by stimuli carrying emotional
content, such as facial expressions or vocal prosody, than is the left
hemisphere. However, the left hemisphere seems to be more involved
in initiating actions (which is often associated with positive moods),
whereas the right hemisphere is more associated with inhibiting behav-
iour (which tends to be associated with negative moods). The brain
networks governing these different types of responses are aptly termed
the behavioural activation system (BAS) and the behavioural inhibition sys-
tem (BIS), and are believed to be located in the left and right hemispheres
respectively (Gray, 1981, 1982). The asymmetry between brain activity
in the two hemispheres is believed to be an accurate index of current
emotional state, although the exact underlying neural processes remain
poorly understood (Cacioppo, 2004; Coan & Allen, 2004). Nevertheless,
this asymmetry is presently being used as a neuroergonomic marker
in some user-oriented electroencephalography (EEG) systems. These
systems are outfitted with an ‘emotion-monitoring’ option based on
frontal hemispheric asymmetry (see, e.g., http://www.emotiv.com).

How emotion guides vision and cognition

Emotion is a powerful modulator of attention. Our perceptual systems
are fine-tuned for picking up signals carrying emotional meaning from
our environment, and emotional content prioritizes and may alter
perceptual processing (Vuilleumier & Huang, 2009). Several research
findings point to potential uses of emotional information. For exam-
ple, a recent study (Bocanegra & Zeelenberg, 2009) demonstrated that
presentation of a fearful expression enhances perception of low spatial
frequencies. From an evolutionary point of view, this makes sense:
the amygdala is differentially sensitive to low spatial frequencies, and
this enhancement allows for more effective detection of subsequent
emotional stimuli. The effect of emotion on processing of low spatial
frequency is present even when it interferes with subsequent perform-
ance. Although there are no practical applications of this finding yet,
it does demonstrate that emotional material may be used to induce a
specific mode of visual processing, potentially to the benefit of a user.
Another example of how emotion may alter behaviour is that of
affective priming. When an emotional face with either a happy or an
angry expression (the ‘prime’) is presented very briefly and immediately
followed by a neutral face—observers do not notice that an emotional
face has been presented. However, the emotion of the face does affect
subsequent judgements about the otherwise neutral stimuli. For exam-
ple, when participants who do not know Japanese have to judge whether
Japanese characters represent a positive or a negative word, they tend to
think the characters represent something positive when primed with a
happy face, but, when primed with an angry face, they tend to think the
characters represent something negative (Murphy & Zajonc, 1993). Such
unconscious processing of emotional primes is attributed to the earlier
mentioned subcortical pathway to the amygdala. Because the subcortical
pathway bypasses visual areas, primes can affect emotional appraisal,
even when not consciously perceived. Numerous studies in the past
decade have shown that emotional stimuli can, indeed, be processed
to considerable depth in the absence of conscious awareness (e.g. Jolij,
2008; Whalen et al., 1998). Given that unconscious information does, to
some extent, influence our actions, a challenge for neuroergonomists will
be to find ways of tapping into the potential benefits of the unconscious
processing of information for tasks such as early detection of threat.
The task of tapping the brain’s unconscious potential is made more
difficult by the elusiveness of behavioural markers of unconscious
processing. Deliberate information processing strategies appear to
repress unconsciously perceived emotional information, thus limiting
the effects of unconsciously processed emotional material (Jolij, 2008;
Jolij & Lamme, 2005). However, despite their limited effect, unconsciously
processed emotional stimuli can be used to modulate everyday
behaviour, such as driving. In a simulator study, Lewis-Evans et al.
(2012) let participants drive for three minutes. Unbeknownst to the
participants, images with either negative or neutral emotional content
were shown briefly next to the rear-view mirror. Participants who were
shown the negative images drove, on average, 3 km/h slower than
participants who were shown the neutral images. Although in absolute
terms this effect is small, it does demonstrate that there may be some
potential in using unconscious emotional material to modulate the
behaviour of drivers.
It is clear that the emotional content of a visual image can affect
perceptual processing—and may affect subsequent behaviour—even in
the absence of conscious awareness. However, when we think about
emotion, we tend to think more about our emotional state or, more
specifically, our mood. Mood has a profound effect on the way we proc-
ess information. One of the best known effects of mood on information
processing is the shift in global versus local focus as a result of mood:
in a positive mood, people tend to have a global focus and remember
the gist of materials, for example, a story; in a negative mood, how-
ever, this focus shifts to a local focus and people tend to remember
the details (Clore & Huntsinger, 2007). Interestingly, this global–local
shift is present not only in remembering verbal material, but also in
visual perception. For example, when participants are shown three
squares laid out in a triangle and subsequently asked whether the figure
resembles a square (the local elements) or a triangle (the global shape),
participants are more likely to respond ‘square’ if they are in a negative
mood, but ‘triangle’ if they are in a positive mood (Gasper & Clore,
2002). Although recent studies suggest that the actual mechanisms
behind this global–local shift are more complicated than a one-to-one
relation between positive mood and global focus, and negative mood
and local focus, the effect illustrates that emotional state can affect
perceptual processing.
Whether we interpret an arrangement of three squares as a triangle or
a square is a matter of grouping. As discussed in Chapter 3, grouping is
a basic visual process in which different elements of a visual scene are
grouped into coherent object representations. Apparently, how we feel
may influence the process of grouping. For example, Jolij and Meurs
(2011) demonstrated that participants are better at detecting schematic
happy or sad faces in noise when these faces are congruent with the
observer’s mood. Moreover, a negative mood has been shown to narrow
the so-called field-of-view in the early visual cortex such that cells in
the early visual areas become less sensitive to visual information if the
observer is in a negative mood (Schmitz et al., 2009).

Reading emotional states

Given that a user’s emotional state has a profound effect on perception
and performance, the ability to monitor and adapt to a user’s emotional
state may be a useful addition to user interfaces. Research on the auto-
mated reading of emotional states has a long tradition, dating back to
the 1920s. Probably the best known example of inferring emotional
state is embodied in the lie detector. A traditional lie detector is a poly-
graph that measures several autonomic physiological variables [heart
rate, respiratory rate, blood pressure and the electrodermal response
(EDR), which is basically a measure of how much a person sweats].
The rationale behind the lie detector is that lying will evoke a stress
response, thus triggering such symptoms as an increase in heart rate
and EDR. Although a polygraph does, indeed, register a stress response
by measuring autonomic variables (Verschure et al., 2009), lie detection
using the polygraph remains controversial. Most notably, polygraphs
are notorious for their high false-alarm rate. Although they are sensitive
to the questionee’s stress in response to questions, the stress response is
not necessarily related to the truth value of the response.
Despite their limited usefulness in the field of lie detection, autonomic
variables such as heart rate and EDR are useful for measuring subjective
mental states. Both EDR and heart rate are fairly easy to register without
too much discomfort for a user. In fact, there are several commercially
available units that are easy to use and that do not interfere with normal
user activities. Such systems can be used for continuous monitoring
of the user’s state during operation of an interface. Units that measure
autonomic variables are often advertised as ‘emotion monitors’, and,
indeed, they do capture some aspects of emotional state. However, what
is captured is merely one aspect of the emotional response, namely, the
degree of arousal of the user. Although arousal is an important compo-
nent of emotion, it does not capture the full extent of the emotional
response. For example, low arousal as measured with EDR or heart
rate may indicate that a user is relaxed. However, it may also indicate
boredom or even sadness. Likewise, high arousal may indicate anger,
fear or excitement. In other words, autonomic physiological measures
do not capture the valence of emotion. Here, measurement of EEG may
be helpful. As mentioned earlier, emotional responses are also character-
ized by behavioural activation or inhibition, mediated by the BAS and
BIS. As the BAS is located in the left hemisphere and the BIS in the right,
differences in left versus right brain activity may be informative about
a person’s present emotional state. Harmon-Jones (2004) proposed a
coding model supported by the literature on the relation between frontal
EEG asymmetry and emotional state (Cacioppo, 2004) in which relatively
high left-frontal activation indicates positive emotions and high relative
right-frontal activation indicates negative ones.
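This asymmetry coding can be made concrete. In EEG work, frontal asymmetry is typically computed from alpha-band (8–13 Hz) power at homologous frontal electrodes (e.g. F3/F4); because alpha power varies inversely with cortical activation, the index ln(right alpha) − ln(left alpha) grows with relatively greater left-frontal activation. The sketch below illustrates the computation on synthetic signals; it is a minimal illustration of the standard formula, not the algorithm of any particular commercial system:

```python
import numpy as np

def alpha_power(signal, fs, band=(8.0, 13.0)):
    """Mean power spectral density of `signal` within `band` (Hz)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def frontal_asymmetry(left_eeg, right_eeg, fs):
    """ln(right alpha) - ln(left alpha); > 0 suggests relatively
    greater left-frontal activation (tentatively, positive affect)."""
    return np.log(alpha_power(right_eeg, fs)) - np.log(alpha_power(left_eeg, fs))

# Synthetic example: a 10 Hz alpha rhythm that is stronger on the right,
# i.e. the left hemisphere is the more *active* one.
fs = 256
t = np.arange(0, 4, 1.0 / fs)
rng = np.random.default_rng(0)
left = 1.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
right = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
print(frontal_asymmetry(left, right, fs) > 0)  # True
```

In practice the index would be averaged over many artefact-free epochs and baseline-corrected per individual before being interpreted as an emotional-state marker.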
Although EEG may be considered more invasive than measuring
autonomic physiological variables, commercially available systems are
currently available that allow for easy electrode placement and data
acquisition—compared to research systems—while maintaining a fairly
good signal quality at a fraction of the cost of traditional systems. Several
usability laboratories presently use EEG to monitor the emotional
response of participants to specific situations. Although adaptive
automation on the basis of emotional state is still in its infancy, interest
in its possibilities is growing (see, e.g., http://www.emotiv.com).
Another application of reading emotions using neuroimaging technol-
ogy is neuromarketing (e.g. Ariely & Berns, 2010). In neuromarketing,
brain signals in response to advertisements or consumer decisions are
monitored in order to establish the presence of positive emotions and,
by extension, the effectiveness of marketing campaigns. Frontal EEG
asymmetries are used as a dependent measure in this type of research,
but functional magnetic resonance imaging (fMRI) is considered the
‘gold standard’ in neuromarketing research. Using fMRI, it is possible to
establish whether specific emotion-related brain regions, such as the
amygdala or other areas in the limbic system, are activated by specific
advertisements. Recently, it has been shown that noninvasive brain
imaging using functional near infrared spectroscopy (fNIRS) can be
used to decode whether one likes or dislikes a visually presented object.
This finding has obvious relevance for neuromarketers, but may also be
applied to brain–computer interfaces (BCIs) for ‘emotional communi-
cation’ (Hosseini et al., 2011).
Hosseini et al. (2011) relied on the finding that the orbitofrontal
cortex is involved in decoding valence of outcomes, the pleasantness of
sensations and subjective preferences to develop a multivariate pattern
classifier that codes short (4-second) segments of fNIRS data (from
optodes placed over the frontal and frontotemporal areas) according
to whether an observer likes or dislikes a visual image. Classification
accuracy exceeding 72% was obtained for the classification ‘positive
versus other’, and accuracy of about 68% was obtained for the
‘negative versus other’ classification. In line with the literature on brain
areas involved in valence judgements, regions located in the anterior
medial frontal regions were more important for the classification of
attractiveness than unattractiveness. Activity in the lateral frontal
regions contributed to the classification of both attractive and un-
attractive stimuli, explaining, at least in part, why classification accu-
racy was poorer for the classification of unattractiveness. Hosseini
et al.’s participants rated stimuli while viewing them. Thus, although
the classifier differentiated attractive and unattractive stimuli, it is pos-
sible that decision-related activity was used by the classifier instead of,
or in addition to, pleasantness-related activity. Future research with
different demand characteristics will have to determine whether one’s
emotional response alone can be deduced from fNIRS data.
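The general logic of such multivariate pattern classification can be sketched with a deliberately simple stand-in: a leave-one-out nearest-centroid classifier on per-segment channel features. The data below are synthetic and the feature choice (mean signal change per optode) is an assumption for illustration; Hosseini et al. used a more sophisticated classifier on real optode recordings:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic features: mean haemodynamic change per optode over a 4-s
# segment. 20 optodes; 'positive' segments get an assumed offset on the
# first eight (nominally anterior) channels.
n_optodes, n_per_class = 20, 40
other = rng.standard_normal((n_per_class, n_optodes))
positive = rng.standard_normal((n_per_class, n_optodes))
positive[:, :8] += 1.0

X = np.vstack([positive, other])
y = np.array([1] * n_per_class + [0] * n_per_class)  # 1 = positive, 0 = other

# Leave-one-out nearest-centroid classification ('positive versus other').
hits = 0
for i in range(len(y)):
    train = np.ones(len(y), bool)
    train[i] = False
    c_pos = X[train & (y == 1)].mean(axis=0)
    c_oth = X[train & (y == 0)].mean(axis=0)
    pred = 1 if np.linalg.norm(X[i] - c_pos) < np.linalg.norm(X[i] - c_oth) else 0
    hits += pred == y[i]

accuracy = hits / len(y)
print(accuracy)  # substantially above the 0.5 chance level on this synthetic data
```

The leave-one-out loop matters: reporting accuracy on the same segments used to compute the centroids would overstate how well the decoder generalizes, which is also why published decoding studies cross-validate.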
Arguably, the least intrusive way to infer a person’s emotional state
is by measuring non-verbal behaviour. For example, in spoken inter-
actions one’s tone of voice may express an emotion, such as anger or
frustration (Williams & Stevens, 1972). Such information may prove
useful in voice-driven systems, such as smartphones or car navigation
systems. Emotion recognition in vocal expression is most often achieved
by characterizing the prosody, pitch and volume of the auditory signal.
Some researchers are reporting initial successes in this area (e.g. Hoque
et al., 2006).
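A toy version of this feature extraction makes the idea concrete: volume can be summarized as root-mean-square energy, and pitch estimated from the strongest autocorrelation peak within the plausible voice range. This is an illustrative sketch only; production emotion recognizers use far more robust pitch trackers and many more prosodic features:

```python
import numpy as np

def frame_features(frame, fs, fmin=60.0, fmax=400.0):
    """RMS volume and an autocorrelation-based pitch estimate (Hz)
    for one mono audio frame sampled at `fs` Hz."""
    frame = frame - frame.mean()
    volume = np.sqrt(np.mean(frame ** 2))
    # Autocorrelation at non-negative lags; search the voice pitch range.
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return volume, fs / lag

fs = 16000
t = np.arange(0, 0.05, 1.0 / fs)          # one 50 ms frame
voice = 0.3 * np.sin(2 * np.pi * 220 * t)  # a 220 Hz 'voiced' tone
vol, pitch = frame_features(voice, fs)
print(vol, pitch)  # pitch comes out close to 220 Hz
```

Tracking how such features drift over successive frames (rising pitch and volume, for instance) is what would feed a frustration or anger detector in a voice-driven system.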
Humans display their emotions most prominently in their facial
expressions. In fact, the human face contains many muscles that have
no other function than to form facial expressions. Although there is no
match for the human visual system in terms of accuracy of emotion
detection, computer vision algorithms are becoming more sophis-
ticated in recognizing emotions. Most automated detection is based
on the Facial Action Coding System (FACS), a taxonomy of facial
expressions based on specific facial muscle movements (Cohn et al.,
2007; Ekman & Friesen, 1978). Although facial muscle movements have
traditionally been recorded by means of electromyography (measuring
electric activity of the muscle end plates of neurons), recent advances
in expression detection technologies have made it possible to reliably
detect motion in individual facial muscles, and thus to classify expressions
according to the FACS. FaceReader (Noldus BV, Wageningen, the Netherlands)
is an example of a commercially available program that ‘reads’ (via a
digital camera) the facial expressions of a user during a natural task and
classifies emotional expressions according to the FACS. Facial expressions
have been used in usability analysis to classify basic emotions, but
they are also useful for determining whether a user has understood
an instruction or is confused (e.g. Liscombe et al., 2005). Such inform-
ation about a user’s emotional state can be used in intelligent tutors or
in adapting system behaviour in order to support user performance.
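The final classification step of such a system can be caricatured as matching detected action units (AUs) against prototype AU combinations for the basic emotions. The prototypes below follow commonly cited EMFACS-style descriptions and are purely illustrative; real systems such as FaceReader work with graded AU intensities and trained statistical models rather than a lookup table:

```python
# Commonly cited AU prototypes for basic emotions (EMFACS-style).
PROTOTYPES = {
    "happiness": {6, 12},              # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},           # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  {1, 2, 5, 26},        # brow raisers + upper lid raiser + jaw drop
    "fear":      {1, 2, 4, 5, 7, 20, 26},
    "anger":     {4, 5, 7, 23},        # brow lowerer + lid tightener + lip tightener
    "disgust":   {9, 15},              # nose wrinkler + lip corner depressor
}

def classify_expression(detected_aus):
    """Return the prototype whose AU set best overlaps the detected set
    (Jaccard similarity), or 'neutral' when nothing overlaps."""
    def score(emotion):
        proto = PROTOTYPES[emotion]
        return len(proto & detected_aus) / len(proto | detected_aus)
    best = max(PROTOTYPES, key=score)
    return best if score(best) > 0 else "neutral"

print(classify_expression({6, 12}))        # happiness
print(classify_expression({1, 2, 5, 26}))  # surprise
print(classify_expression(set()))          # neutral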
Among the latest developments in the area of emotion detection
in facial features is the use of natural image statistics. Natural image
statistics are parameters that describe the global layout of a visual image,
such as its brightness or mean spatial frequency. A recent study has
found that such natural image statistics are dynamic for individual faces
and change with facial expression. These natural image statistics may
even capture very subtle changes in expression, for example changes in
facial expression that accompany lying and deception (Jolij, 2012).
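The kinds of statistics involved are straightforward to compute. The sketch below derives two of them for a grey-scale image array: mean brightness, and the centre of gravity of the amplitude spectrum as a crude summary of spatial-frequency content (higher values indicate relatively more fine detail). It is a generic illustration, not the measure set of the cited study:

```python
import numpy as np

def image_statistics(img):
    """Mean brightness and a crude spatial-frequency summary of a 2-D
    grey-scale image: the centre of gravity of its amplitude spectrum."""
    brightness = img.mean()
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img - brightness)))
    h, w = img.shape
    fy, fx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(h)),
                         np.fft.fftshift(np.fft.fftfreq(w)), indexing="ij")
    radius = np.hypot(fy, fx)  # spatial frequency in cycles/pixel
    mean_sf = (radius * spectrum).sum() / spectrum.sum()
    return brightness, mean_sf

# A coarse pattern should score lower in mean spatial frequency
# than a fine-grained one.
x = np.arange(128)
coarse = np.sin(2 * np.pi * 4 * x / 128)[None, :] * np.ones((128, 1))
fine = np.sin(2 * np.pi * 32 * x / 128)[None, :] * np.ones((128, 1))
print(image_statistics(coarse)[1] < image_statistics(fine)[1])  # True
```

Tracking how such summary statistics change from frame to frame of a face video is one way subtle expression changes could be quantified without explicit muscle tracking.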

The social brain

Humans are social beings: we depend on each other for survival.
Although social psychology has a long tradition, the neural basis of social
interaction is still poorly understood. One thing that we have learned
from the young field of social neuroscience is that our brains are wired
for social interaction. It has even been suggested that the brain network
supporting so-called default processing in the brain (i.e. the brain
areas that are active in the absence of task-related activity; Raichle &
Snyder, 2007; Raichle et al., 2001; see Chapter 1) largely overlaps with
brain areas that are responsible for social cognition (Schilbach et al.,
2008). In other words, it appears that the brain is always ready to engage
in social interaction.
Complex social processing occurs in various brain areas, including
the prefrontal cortex. Social visual stimuli appear to be given priority
in processing and are processed in specialized areas (Treves & Pizzagalli,
2002). The superior temporal sulcus (STS), in particular, appears to be a
key node in the so-called social brain network. Although the STS plays
a role in several cognitive functions, including memory, recent
neuroimaging literature seems to suggest that, when coupled with a network
of frontal and parietal areas, the STS is critical in so-called ‘theory-of-mind’
functions in which we reason about others’ intentions or beliefs
(Singer, 2008). Even non-visual areas contribute to understanding
others’ actions. In the 1990s, Rizzolatti and co-workers (see Chapter 1
and, for a review, Rizzolatti & Craighero, 2004) discovered that neurons
in the motor cortex of a monkey fire not only if the animal has to move
its own arm, but also when it sees an action performed with an arm.
These neurons have been called mirror neurons because they ‘mirror’
perceived actions as if they were executed by the observer. Keysers and
Gazzola (2006), who have found some evidence for mirror neurons in
humans using fMRI, suggest that mirror neurons play a crucial role in
social cognition.

Social human–computer communication and interaction

Communication with computers can, at first glance, hardly be called
social. Even when social information is being exchanged, the computer
is simply the medium of the exchange and not its recipient. However,
humans are eager to interpret information exchanges as having a social
character, even when these exchanges are with a machine. ELIZA, for
example, a ‘computer psychiatrist’ (and precursor of today’s ‘chatbots’
and natural language interaction systems) enjoyed some popularity in
the 1960s and 1970s. Users found ELIZA entertaining and even convincing. One
anecdote relayed by Rosson and Carroll (2003), for example, describes
how a professor was sent out of his office by his secretary because she
was discussing personal problems with ELIZA.
The ability to engage convincingly in social interaction has been put
forward as the litmus test for machine consciousness. The famous Turing
test, proposed by Alan Turing in 1950, is based on the idea that a
sentient computer should be able to carry on a conversation without the
human operator realizing that he or she is interacting with a computer.
Our tendency to interpret the actions and events we observe as being
intentional and social in nature may mean that the Turing test is easier to
pass than was originally thought: human–computer interaction may have
more in common with true social interaction than commonly thought,
which may have implications for interface and interaction design.
Thinking about events in terms of beliefs and intentions has been
dubbed the intentional stance (Dennett, 1987). Taking an intentional
stance toward events may help us in understanding them, but may also
be absurd if there is no intention behind the event. For example, when
a light suddenly goes on in a room, we will probably think that this
occurred because someone turned on the light. Although the possibil-
ity of an electrical short might enter some people’s minds, few people
would attribute intention to the lamp itself (e.g. the light turned on
because the lamp felt like shining).
140 Neuroergonomics

Whether we take an intentional stance towards the events we
experience or not has an effect on how the brain processes these events.
In a neuroimaging experiment, for example, Gallagher et al. (2002)
had participants play a game of ‘rock, paper, scissors’ against what they
believed to be human and computer opponents (in reality, the oppo-
nent in the game was most often a computer). Deceiving participants
in this way allowed the researchers to compare series of identical trials
in which a participant believed that they were playing against a human
versus a computer. The belief in a human opponent resulted in greater
activation of the paracingulate cortex, a brain area that has been associ-
ated with mentalizing and theory of mind.
From an ergonomic point of view, whether mentalizing activates a
specific brain area or not is less interesting than the question of whether
performance is affected by one’s intentional stance. Traditionally, user
interface designers have been cautioned not to make interfaces social in
nature, and, in fact, to reduce any kind of mentalizing or attributions of
intentionality to a computer (Shneiderman, 1998). However, in the past
decade, Nass and colleagues have argued that users tend to respond to
computers in the same way they do to people (Nass & Gong, 2000) and
that they appreciate the use of humour in human–computer interaction
(Morkes et al., 1999).
Evidence that the use of social cues can alter performance suggests that
social cues may deserve a more prominent place in human–computer
interfaces. For example, it is well known that spatial attention can be
modulated by cues such as arrows (e.g. Posner, 1980). More recently, it
has been shown that social cues, such as eye gaze, can also be powerful
modulators of attention (e.g. Friesen & Kingstone, 1998, 2003). A pair of
eyes looking to the left or to the right draws the attention of an observer
towards the direction of the gaze, and neuroimaging research shows that
gaze cues are processed in a specialized brain circuit (George & Conty,
2008). There is considerable evidence that infants learn from observing
adults’ eye movements by deploying visual attention to the target of
the adult’s gaze (e.g. Gredebäck et al., 2010). This mechanism appears
to play a role in word learning in infants, in particular, who couple
objects gazed at by an adult to words that are presented simultaneously
(Houston-Price et al., 2006). Such insights may be valuable in develop-
ing instructional materials for children. For example, in the television
show Dora the Explorer (Nick Jr), a show aimed at 4–5-year-olds and that
attempts to teach foreign-language vocabulary, the words to be learned
are often indicated by an arrow pointing at the object that is named
in the foreign language. Research, such as that of Houston-Price et al.,
suggests that attention could be more effectively oriented to the words
if the arrows were replaced by gaze cues.
Another area that reveals the influence of social beliefs is that of event
authorship. There is some evidence that suggests that event author-
ship, that is the belief that an external event is caused by oneself versus
someone or something else, modulates the perceived time of the event’s
onset. For example, suppose that an onscreen change is triggered by
pressing a button. In such cases observers perceive that the onscreen
change occurs earlier in time when they press the button themselves
than when they see someone else press the button (Wohlschläger et al.,
2002). Although this phenomenon has not been studied in the context
of human–computer interaction, an impressive illustration of the
principle is provided by Jacobson’s (2007) ‘undefeatable’ rock, paper,
scissors game, which exploits the illusion of event ownership. The game
works because the illusion that self-induced events occur earlier in time
than other events is malleable
(Stetson et al., 2006). For example, if a keypress is followed by the onset
of a light 30 ms later, participants will link the onset of the light to the
fact that they have pressed the key. When the light is turned on 500 ms
after the keypress, participants will still report having caused the light to
go on by having pressed the key. However, if a participant has adjusted
to an interval of 500 ms between keypress and light, a sudden decrease
of the interval to 30 ms will cause the participant to perceive the light
as having turned on before the key was pressed (Stetson et al., 2006).
The secret to the unbeatable rock, paper, scissors game illusion is that
the player is conditioned to long keypress–screen-event intervals during
a practice round, and that the interval between keypresses and screen
events is reduced once the game begins. As a result, when a participant
enters his/her response and receives a reaction from the computer
(which is actually based on the participant’s choice), he or she will expe-
rience the illusion that the computer responded first, and thus that it is
‘playing fair’ and not basing its response on the participant’s choice.
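The recalibration logic described above can be captured in a toy model; the subtraction rule and the numbers below are simplifying assumptions for illustration, not a model taken from Stetson et al. (2006).

```python
# Toy model of temporal recalibration (illustrative assumption): after
# adapting to a given keypress-to-event delay, the brain treats that delay
# as simultaneity, so the perceived delay is the actual delay minus the
# adapted delay.
def perceived_delay(actual_ms: float, adapted_ms: float) -> float:
    """Perceived keypress-to-event delay (ms) after adaptation."""
    return actual_ms - adapted_ms

# Adapted to a 30 ms delay, a 30 ms event feels simultaneous with the keypress.
baseline = perceived_delay(30, 30)   # 0 ms
# Adapted to a 500 ms delay, a 30 ms event seems to precede the keypress.
illusion = perceived_delay(30, 500)  # -470 ms: the event appears to come first
```

On this account, the unbeatable game simply inflates the adapted delay during practice, so that the computer’s (actually reactive) response is perceived as having occurred first.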

Social robotics

One of the most recent advances in human–machine interaction is the
development of social robots. It can be imagined that robots will play
an increasing role in our everyday lives in the near future. According
to some, care for an ageing population will be increasingly provided by
robots, simply because we do not have a sufficient healthcare workforce
(see, e.g., Intuitive Surgical, Inc., 2008). Such robots would assist in
day-to-day tasks, such as cleaning (think of the Roomba™ vacuum
cleaner), and play a more social role, providing companionship in
addition to assistance in the household. An important question is whether
it is necessary to make robots appear human.
As discussed earlier in this chapter, the human brain seems to require
little evidence in order to accept human–machine interactions as being
social in nature. For example, interacting with a computer during a
game activates the superior temporal sulcus (STS) of a participant, albeit
only slightly. Interaction
with a real person results in much greater activation. Interestingly,
when a participant interacted with a slightly more ‘humanized’ computer
(in which two pencils attached to servomotors pressed two buttons on
the keyboard), activity in the STS was greater than in the computer
condition, and when the participant was exposed to an android robot,
STS activity increased even more (Krach et al., 2008). These
findings suggest that accepting an interaction as being social in nature
is not an all-or-nothing process, and that it is not necessary to have a
humanoid android in order to emulate social interaction. As a matter
of fact, trying too hard to make a robot look human might make the
robot end up in what has been called the ‘uncanny valley’ (Mori, 1970):
a robot that looks too human often evokes negative emotions and makes
people feel uncomfortable. A recent neuroimaging study demonstrates
why this might be so (Saygin et al., 2012). When we look at someone’s
face during a social interaction, the brain continuously tries to predict
what our partner’s face will look like. Our brains are tuned to human
faces, and interacting with a humanoid robot with a human-like face
will trigger this predictive system. However, because the face is not
real, there will be a mismatch between the brain’s prediction and the
actual visual input. This continuous mismatch may be the source of the
uncanny feeling many people have when interacting with a human-
oid robot. Paradoxically, interaction with a less human-looking robot
actually feels more natural, presumably because we do not feel that
‘something’s wrong’ throughout the interaction.
Successful interaction requires more than a nice face. For humans
and robots to cooperate or collaborate on a task, robots need to under-
stand the intentions of their human colleagues. Recent advances in the
neuroscience of observing and understanding action have provided
valuable insights in how humans interpret collaboration (van Schie
et al., 2006). A key aspect of how humans interpret behaviour is that they
tend to understand actions in terms of action goals. For example, when
we see someone pick up a glass, our brains do not simply mimic the
associated action—as one might think on the basis of mirror neuron
theory—instead, the brain mirrors the movements necessary to achieve
the goal of the observed action (i.e. drinking the glass’s contents). Van
Schie and co-workers have formalized this idea in a neurocomputa-
tional model implemented in a robot. In one experiment, the robot
assisted a human with building a simple model from building blocks.
The robot and the human were seated on opposite sides of the table
and the human took the lead in building the model. Some blocks were
out of reach of the human and could only be picked up by the robot.
The robot’s task was to hand the blocks that were closer to the robot
than to the human to the human as they were needed. Critically, the
cognitive model driving the robot attempted to predict the goal states
of the human builder, and did so successfully enough that the robot was
experienced as natural and easy to work with (Erlhagen et al., 2006).

Conclusion

Emotional and social factors have long been overlooked in usability
engineering, and advances in the social and affective neurosciences
suggest that this should change. Although the exact neural underpinnings
of specific emotions remain elusive, recent advances in emotion science
regarding how emotion guides attention and decision-making may
inform user interface design. Emotional state can have profound effects
on perception, and this may have important consequences for design-
ing interfaces, especially when they are meant to be used in stressful
situations, such as combat situations.
Social neuroscience will likely provide useful insights that can be
applied to the development of new, social user interfaces, game design
and social robots. Recent advances in the social neurosciences have
shown that the human brain is adapted to the processing of social cues,
such as facial expressions and eye gaze, and that such information is
prioritized during processing. These findings may be used to the user
interface designer’s advantage. Using a gaze cue instead of an arrow
to guide attention, for example, may have a stronger effect, especially
in children. Moreover, the mere belief that a user is interacting with a
social agent instead of a machine may alter the perceptual processes
underlying human–machine communication. Capitalizing on the fact
that humans are emotional and social beings may help us to design
more natural, and thus more efficient, user interfaces.

8
Neuroergonomics of Individual Differences in Cognition: Molecular Genetic Studies

Raja Parasuraman

Neuroergonomics—the study of brain and behaviour at work (Parasuraman,
2003; Parasuraman & Rizzo, 2007)—has grown rapidly owing, in part, to
the development and availability of noninvasive techniques for imaging
the structure and function of the human brain (Parasuraman, 2011b;
Posner, 2012). As the use of neuroimaging and brain stimulation tech-
niques in human factors and ergonomics has advanced (e.g. Clark et al.,
2012; James et al., 2011; Just et al., 2008; Wilson, 2001), researchers have
increasingly turned to address an important, but relatively neglected,
issue in human performance: individual differences. That people differ,
both in their behaviour and in the functioning of their brains, is
well known and appreciated. But, typically, such variability between
participants is not emphasized when researchers report the results of
their studies. Yet, it can be argued that it is not only important to con-
sider such interindividual differences quantitatively, but also to examine
ways in which one can incorporate such variability in theories of neuro-
cognitive functioning.
There are clearly many factors that contribute to differences between
people, including early development, educational experiences and
training. When cognitive functions are known to be strongly heritable
(as based on twin studies), it is natural to ask whether genetic factors
contribute to normal variation in cognitive functioning. Molecular
genetics or genomics provides an avenue for exploring this question.

Genomics

Genomics is the most recent addition to the methodological toolkit of
neuroergonomics. Laypersons typically think of genetics in terms of
heritability—as when we say such things as ‘she is so tall for a woman
because she comes from a long line of lanky Scandinavians’ or ‘dyslexia
runs in families’. But heritability, while important, is only the first step
in asking questions about how genes influence our physical and mental
makeup. Given that a particular trait is heritable (and height and read-
ing disability are), we have to ask, ‘Which specific genes contribute to
that heritability?’. With the completion of the Human Genome Project
(Venter et al., 2001), we now have the tools to find out.
Genes are not only something we are born with. That genes are
primarily important for the early development of an organism was the
dominant view at the beginning of the twentieth century, when Gregor
Mendel’s pioneering work, after more than three decades of being
ignored, was finally appreciated and widely disseminated. We now know, however,
that genes are active throughout the life of an organism, and have
dynamic effects on human mental and physical function in interaction
with other genes and with environmental factors. In fact, twin studies
have shown that genetic influences on cognitive functioning are sub-
stantial, even in 80-year-old humans (McClearn et al., 1997). When
Mendel formulated his laws of inheritance, he did not know where
genes were located. We now know that they reside within the nucleus
of all living cells. But genes are not static cellular structures whose job
is done once an organism is formed, consigning their recipient to either
adverse outcomes—for example a life of poverty or crime, as Cesare
Lombroso argued—or, conversely, as Francis Galton proposed, to one
of privilege, wealth and success because of hereditary talent. Modern
research on molecular genetics has put such simple, deterministic views
to rest. The current view is that genes are dynamic cellular agents that,
when expressed in the brain, have an influence on brain structure and
function throughout development, not only through their intrinsic
actions, but also in interaction with each other and with environmen-
tal factors. Thus, criminals are neither born nor made, but both; while
one can be born to wealth and privilege, that alone does not guarantee
talent.
Modern molecular genetics has, therefore, turned aside the clichéd
debate on which is more important, nature or nurture. It is clear that
both matter. This chapter discusses how genomics, as the latest method
that can be used in neuroergonomics, can advance our understanding
of human performance at work, both by furthering our understanding
of the neural and molecular pathways of cognitive functions, and by
identifying sources of individual differences in these functions. The field
of genomics includes several different techniques, but the two main
methods that have been applied to the study of human cognition are
candidate-gene analysis (Green et al., 2008; Parasuraman, 2009; Posner
et al., 2007) and genome-wide association analysis (GWAS; Butcher et al.,
2008). A third method, a hybrid of the two in which a limited GWAS
is followed by targeted candidate-gene analyses (see Reinvang et al.,
2010), has been proposed recently, but as there are, as yet, few studies of
normal human cognition using this method, it is not considered further
in this chapter. The candidate-gene approach is a theory-based method
in which a particular gene is chosen because of its possible influence on
neurotransmitter action in the brain and hence, potentially, on cogni-
tive functioning. The GWAS approach, however, is a so-called ‘unbiased’,
atheoretical method in which the entire genome, not just a single gene,
is examined for potential associations with cognitive performance.
This chapter focuses on candidate-gene studies relevant to individual
differences in human performance in work-related perceptual and
cognitive tasks. To date, there have been relatively few studies of normal
cognition using GWAS, which has been applied mainly to the study of
neurological and psychiatric disorders.

Why look at individual differences?

Before we examine how molecular genetics can be used to study
interindividual variation, one question needs to be addressed: Why should one
consider individual differences in brain function and human perform-
ance in the first place? There are many reasons. First, many theories
and models of cognition thrive on their ability to provide good fits to
the average performance of a group of participants. Developing quan-
titative or qualitative functions describing human performance based
on the group average is important because of the need to apply such
general principles widely to problems in human factors and ergonomics.
Consider skill acquisition, which is relevant to issues of training in
the workplace. Many studies have shown that acquisition of new
perceptual motor and cognitive skills can be well modelled by a power
law function of practice (Newell & Rosenbloom, 1981). Neuroimaging
studies using functional magnetic resonance imaging (fMRI) have also
shown that posterior brain regions associated with learning a perceptual
task show systematic reductions in activation over time that parallel
such skill acquisition, thus revealing evidence of brain plasticity at the
group level (e.g. Poldrack et al., 1998). But, typically, models of group
performance or brain function apply less well to the skill acquisition
functions of individuals (Parasuraman & Giambra, 1991). In some cases,
adjusting model parameters can handle the problem. For example, slow
and fast learners could be distinguished by altering the exponent in the
same power law function of skill acquisition. But, in other cases, the
performance function for a group may not be characteristic of some, or
even many, of the individuals making up the group. Alternatives to the
power law, including exponential functions, have been found to pro-
vide better descriptors of practice effects for individuals (Heathcote
et al., 2000). Furthermore, no function may adequately describe those
rare, exceptional individuals who demonstrate extraordinarily high
levels of cognitive competence—so-called ‘cognitive superstars’—as in
the case of individuals with superlative face recognition skills (Russell
et al., 2009), superior dual-tasking ability (Watson & Strayer, 2010) or
extremely high working memory performance (Parasuraman, 2011a).
The following two examples of variability in performance and brain
function illustrate why considering individual differences is important
for both theory and practice. In a study examining the role of work-
ing memory in vigilance, a group of young adults were tested under
dual-task conditions—a visual vigilance task combined with either a
verbal or a spatial working memory task (Caggiano & Parasuraman,
2004). At the group level, performance on the vigilance task showed
the typical result in such tasks: a decline in the detection rate of targets
that appear infrequently with increasing time on task (i.e. the vigilance
decrement). The decrement could be well modelled with an exponential
function having two parameters (Giambra & Quilter, 1987), in which a
rapid decrease in detection rate was followed by a slower rate of decline.
However, only about 40% of the participants could be modelled with
the double-exponential function. The remainder either showed stable
or variable detection rates over time. Clearly, functions other than
the two-term exponential are required to model individual vigilance
decrements, and other methods are required to explain the sources of
individual differences in the basic human ability to sustain attention
over a prolonged period of time.
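The shape of such a decrement can be sketched with a two-term exponential; the parameter values below are invented for illustration and are not Giambra and Quilter's estimates.

```python
import math

# Two-term ("double") exponential vigilance decrement: a fast-decaying
# component (a1, b1) plus a slowly decaying component (a2, b2).
# Parameter values are illustrative assumptions only.
def detection_rate(t_min, a1=0.25, b1=0.5, a2=0.70, b2=0.02):
    """Probability of detecting a target after t_min minutes on task."""
    return a1 * math.exp(-b1 * t_min) + a2 * math.exp(-b2 * t_min)

# A rapid early decline is followed by a slower decline.
rates = [detection_rate(t) for t in (0, 5, 15, 30, 60)]
```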
A second example involves a study of attention not to a single, infre-
quently presented target at a fixed location, but to a target at a spatial
location (left or right visual field) that is cued prior to target presenta-
tion. Behavioural studies have shown that the cue speeds reactions and
improves the accuracy of detection of the target, even when the eyes are
fixated elsewhere—a finding that has been attributed to the facilitatory
effect of covert attention (Posner, 1980; see Chapter 3).
Electrophysiological studies have shown that voluntary covert attention
to the cued location (while maintaining central fixation) enhances the
early-latency P1 component of the event-related potential (ERP). Figure 8.1
(bottom) shows group ERP results from one such study (Fu et al., 2008),
displaying the group waveforms for the midline parietal electrode site
(Pz) for attended and unattended stimuli. P1 amplitude was greater for
attended than for unattended stimuli, an effect generally interpreted
to reflect enhancement (‘sensory gain’) by selective attention of early-
stage cortical activity (Hillyard et al., 1998; see Chapter 1). However, the
individual data for P1 amplitude, shown in the top half of Figure 8.1
for the Pz electrode, disclose a different picture. The mean attended and
unattended P1 amplitudes for each individual are shown in the solid and
open symbols, respectively. It is evident that much of the P1 attention
effect was driven by a few participants with very large attended P1
amplitudes; for most other participants, the difference in P1 amplitude
between attended and unattended stimuli was quite small. In three
participants P1 amplitude was greater for unattended than for attended
stimuli, and one of these (#16) showed a much greater P1 amplitude for
unattended stimuli. To attribute differences in a physiological measure
such as P1 primarily to stable individual differences assumes that random
effects are not large and that the measure is reliable. The relatively high
test–retest reliability of P1 and the consistency of the P1-enhancement
effect in spatial attention in large sample studies, where measurement
variance is expected to be low (Hillyard et al., 1998), support this view.
Thus, the standard model that attributes the neural effect of visuospatial
attention to an enhancement of early cortical activity needs to explain
why this effect is very large in some individuals, but moderate in many,
and why a few show a reversed effect.
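In analysis terms, the issue is that a group-level contrast can mask heterogeneous per-participant contrasts; the amplitudes below are invented for illustration and are not the values reported by Fu et al. (2008).

```python
# Hypothetical P1 amplitudes (μV) per participant (invented values).
attended = [4.1, 3.8, 5.9, 2.0, 1.1, 0.9]
unattended = [2.0, 2.1, 2.2, 1.9, 1.5, 2.4]

# Per-participant attention effect: attended minus unattended amplitude.
effects = [a - u for a, u in zip(attended, unattended)]

group_effect = sum(effects) / len(effects)     # positive at the group level
reversed_n = sum(1 for e in effects if e < 0)  # participants with a reversed effect
```

Here the group effect is positive even though two of the six hypothetical participants show a reversed effect, mirroring the pattern in Figure 8.1.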
Molecular genetic studies may help in understanding such inter-
individual variability in basic aspects of attention. Such studies may also
tell us something about ‘cognitive superstars’ (Parasuraman, 2011a). For
example, one such individual was administered both the attention tests
described earlier in this chapter: in the vigilance/working memory task
this person did not show any dual-task performance deficit and exhib-
ited no vigilance decrement over time; in the covert attention task, the
individual had a P1-attention effect that was more than three times
that of the mean of the group (Parasuraman, 2011a). Clearly, not all the
variability between individuals in cognitive performance or every case
of exceptional cognition can be attributed to genetic factors, but, given
that many basic cognitive functions have been found to be substantially
heritable in twin studies (e.g. more than 80% for executive functioning;
see Friedman et al., 2008), identifying genes associated with cognition
can help account for a significant proportion of the variance.
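Twin-study heritability estimates of the kind cited above are often computed with Falconer's formula, which doubles the difference between monozygotic (MZ) and dizygotic (DZ) twin correlations; the correlations below are invented for illustration.

```python
# Falconer's formula: h^2 = 2 * (r_MZ - r_DZ), where r_MZ and r_DZ are
# the trait correlations for monozygotic and dizygotic twin pairs.
def falconer_h2(r_mz: float, r_dz: float) -> float:
    """Heritability estimate from twin correlations."""
    return 2 * (r_mz - r_dz)

# Invented correlations (not taken from Friedman et al., 2008):
h2 = falconer_h2(r_mz=0.85, r_dz=0.45)  # roughly 0.8, i.e. 80% heritable
```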

Figure 8.1 Amplitudes of the P1 component (μV) of the event-related
potential (ERP) at the Pz electrode site for attended and unattended stimuli
for 16 individual participants in a visuospatial attention task (top panel).
Group-averaged ERPs (for 16 participants) at three midline electrode sites
(CPz, Pz and POz) for attended and unattended stimuli (bottom panel).
Reprinted from Figure 3 in Fu et al. (2008), NeuroImage, 39, 1349, with
permission from Elsevier Inc.

A theoretical framework for the molecular genetics of cognition

From its birth in the late nineteenth century, psychology has had
a major interest in the assessment of differences between people in
personal characteristics and abilities, i.e. personality and intelligence.
Psychometrics, the quantitative and statistical analysis of psychologi-
cal differences between people, was developed for the study of such
interindividual variation. Until recently, psychometrics has provided
the main tool for the study of how individual differences in various
human abilities affect performance on different tasks. For example,
tests of general intelligence, such as IQ and its principal factor g, as well
as subcomponents, such as fluid and crystallized intelligence (Cattell,
1963), have been shown to be correlated with measures of human
performance (Matthews et al., 2000; Szalma, 2009).
The rapidly expanding new field of molecular genetics provides a com-
plementary approach to examining sources of individual differences in
cognition and human performance. The candidate-gene method repre-
sents one way to examine genetic contributions to such interindividual
variation. In the following, a conceptual framework is described that
allows one to link genes to cognition in a principled, theory-based manner
(Greenwood & Parasuraman, 2003; Parasuraman & Greenwood, 2004).
The theoretical framework begins with the observation that normal
variations between people in the specific DNA sequences that make up
a gene can affect the production of proteins encoded by the gene. Such
variations are often in the form of a change in one DNA nucleotide in
a single location within the gene and hence are called single nucleotide
polymorphisms (SNPs). (Other gene variants include omissions or repeats
of DNA nucleotides.) If the SNP has an effect on protein production then
it can, in principle (if the gene is expressed in the brain), influence the
degree to which the efficiency of neurocognitive networks is modulated
by neurotransmitters. But this needs to be demonstrated empirically. Not
all normal genetic variations result in differences in protein products and
therefore in neuromodulative efficacy. When a gene variant is shown to
affect extracellular protein levels, that gene is called a functional gene.
By way of illustration, consider one such functional gene, the dopamine
beta hydroxylase (DBH) gene.
The DBH gene is found on chromosome 9 and is about 23,000 base
pairs (bp) long. One of the major SNPs of the gene is 444 G/A (rs1108580;
where ‘rs’ refers to the sequential number with which the SNP has been
entered into the database of SNPs maintained by the National Center
for Biotechnology Information). This SNP occurs in the second exon
(coding region) of the gene, 444 bp downstream from the start of the
gene, and involves a guanine (G)/adenine (A) substitution (hence the
name ‘444 G/A’). The substituted nucleotides are called alleles and
there are typically two alleles in a SNP. Genes are diploid, meaning that
they consist of double-stranded DNA, each inherited from one parent.
Consequently, at the substitution locus (base 444), an individual can
have two copies of one allele (GG), two copies of the other allele (AA)
or one of each (GA). Thus, in any sample of unrelated individuals there
will be three different genotypes associated with the DBH gene at that
locus: GG, AA and GA.
The DBH gene codes for the enzyme of the same name (dopamine
beta hydroxylase), which catalyses the conversion of dopamine (DA) to
norepinephrine (NE) in the vesicles of cortical neurons (Cubells et al.,
1998). The levels of DBH can be measured relatively
non-invasively in blood samples and, more invasively, in cerebrospinal
fluid; it has been shown that the G allele of the 444 G/A SNP is associ-
ated with about a threefold increase in DBH levels compared with the
A allele (Cubells & Zabetian, 2004). Greater levels of DBH are associated
with greater conversion of DA to NE and therefore to a higher NE to DA
ratio; conversely, lower levels of DBH are associated with a higher DA to
NE ratio (Cubells & Zabetian, 2004). Therefore, GG individuals have a
higher NE to DA ratio, whereas AA individuals have a higher DA to NE
ratio, with GA individuals falling in the middle.
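The genotype logic can be made concrete with a small sketch. The threefold G/A weighting follows the Cubells and Zabetian (2004) figure cited above, but the additive per-allele model is our simplifying assumption.

```python
# Relative DBH level contributed by each allele of the 444 G/A SNP;
# the G allele is associated with roughly threefold higher DBH levels
# than the A allele. The additive per-allele model is an illustrative
# simplification, not an empirical claim.
ALLELE_WEIGHT = {"G": 3.0, "A": 1.0}

def relative_dbh(genotype: str) -> float:
    """Mean relative DBH level for a diploid genotype such as 'GA'."""
    return sum(ALLELE_WEIGHT[allele] for allele in genotype) / 2

levels = {g: relative_dbh(g) for g in ("GG", "GA", "AA")}
# Orders the genotypes GG > GA > AA in relative DBH level.
```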
The second aspect of the theoretical framework linking genes and
cognition is to examine the human neuroimaging and the animal phar-
macological literatures. The former allows one to determine the brain
networks that are associated with a given cognitive function; the latter
can be perused to examine the relative roles that different neurotrans-
mitters play in different brain regions and in the efficiency of different
cognitive functions. When these pieces of evidence are put together, it
may be possible to hypothesize regarding the role of specific candidate
genes in the efficiency of particular cognitive functions. Of course, one’s
hypothesis can be wrong, but that is the nature of theory: if empiri-
cal findings do not support the hypothesis, an alternative one can be
framed. Thus, the major advantage of the candidate-gene approach
is that it is testable and falsifiable, which are desirable characteristics
of theory. In contrast, GWAS is ‘hypothesis-free’ because the entire
genome is scanned; the downside of this approach, however, is that
resulting findings (and often there are many, because so many genes are
tested) do not have a supporting theory to explain them.
The use of the candidate-gene theoretical framework is illustrated in
this chapter by examination of the molecular genetics of three cognitive
functions: visual attention, working memory and decision-making. Related
candidate-gene studies of other cognitive functions have also been
reported in the literature, but are not covered here. These include studies
of executive function (Posner et al., 2007), attentional effort (Espeseth
et al., 2010), episodic memory (Papassotiropoulos & de Quervain, 2011)
and other related cognitive functions. For reviews of this, and related,
work, see Green et al. (2008), Greenwood and Parasuraman (2003), and
Posner et al. (2007).

Visual attention

Directing attention to visual locations is a basic aspect of perception
that is central to many tasks. Typically, eye and head movements are
used to direct attention to a particular location, but, as described earlier
in Chapter 3, attention can also be directed covertly, while the eyes do
not move (Wright & Ward, 2008). Posner (1980) developed a simple task
of covert attention that has since been used in many research studies in
both healthy and clinical populations. Participants are asked to fixate on
a central fixation cross and to press a response key when a dot stimulus
appears in either the left or right peripheral visual field. The stimulus
is preceded by a location cue, typically a bar that appears below the
location of the stimulus or a square that is brightened temporarily. On
different trials the cue can be valid, that is, it appears at the location
of the upcoming stimulus; invalid, that is, it appears at the location
opposite the stimulus (e.g. cue on the left, stimulus on the right); or
neutral, in which case the cue appears in both locations. Posner (1980)
found that compared with a neutral cue, reaction time to the stimulus
decreased for valid cues and increased for invalid cues. The speeding up
of reaction time with valid cues was interpreted as being due to the ori-
enting of covert attention to the cued location. Conversely, the slowing
down of reaction time with invalid cues was suggested to result from
the need for participants to reorient their attention from the incorrect
to the correct target location.
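The trial structure of this cueing paradigm is easy to sketch in code. The following is a minimal illustration, not the actual experimental software; the 80% cue validity is an assumed parameter, and neutral trials (cues at both locations) are omitted for brevity.

```python
import random

def make_trial(cue_validity=0.8):
    """Generate one trial of a Posner-style covert-attention task.

    The cue and target each appear on the left or right; the cue is valid
    with probability cue_validity and invalid otherwise. Neutral trials
    (cues at both locations) are omitted for brevity.
    """
    target = random.choice(['left', 'right'])
    if random.random() < cue_validity:
        cue = target                                     # valid cue
    else:
        cue = 'left' if target == 'right' else 'right'   # invalid cue
    return {'cue': cue, 'target': target, 'valid': cue == target}

random.seed(1)
trials = [make_trial() for _ in range(1000)]
observed_validity = sum(t['valid'] for t in trials) / len(trials)
print(round(observed_validity, 2))  # close to the nominal 0.8
```

Because valid trials outnumber invalid ones, attending to the cued location pays off on most trials, which is what gives the validity manipulation its force.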
There are marked individual differences in the efficiency of covert
attention. Which candidate genes are likely to contribute to such vari-
ability? Neuroimaging studies have shown that the posterior parietal
cortex is a major locus of a distributed brain network that is involved in
covert attention tasks (Corbetta et al., 2008). At the same time, several
animal studies have implicated the neurotransmitter acetylcholine in
Neuroergonomics of Individual Differences 153

covert visuospatial attention (Phillips et al., 2000; Shirtcliff & Marrocco,
2003). Davidson and Marrocco (2000) showed that when a drug that
suppresses acetylcholine levels—scopolamine—was applied directly
to the parietal cortex of monkeys, reorienting of covert attention fol-
lowing invalid location cues was slowed. These findings implicating
the parietal cortex and acetylcholine in covert attention have been
corroborated in human neuropsychological studies. For example,
patients with damage to the parietal lobe—but not those with frontal,
midbrain or temporal lesions—are selectively slowed in the reorienting
of covert attention to a target following invalid cues when the target
is presented to the visual field contralateral to the lesion (Posner
et al., 1984). Patients with Alzheimer’s disease, which is associated with
extensive cortical depletion of acetylcholine, also show impairment in
the speed of reorienting that is proportional to the degree to which their
parietal lobe has reduced cerebral blood flow, as measured by positron
emission tomography (PET) (Parasuraman et al., 1992).
These findings suggest that genes that code for acetylcholine recep-
tors in the brain would represent possible candidates for association
with performance on covert attention tasks. Of the two major cholin-
ergic receptors, nicotinic and muscarinic, the former are thought to be
more directly involved in orienting of attention (Shirtcliff & Marrocco,
2003). Nicotinic receptors are composed of several different combina-
tions of subunits, with the α4β2 receptor being the most common one.
This receptor, which is distributed widely in the cortex, including the
parietal cortex (Kadir et al., 2006), is highly sensitive to nicotine: an in
vivo study of human smokers found that about 50% of α4β2 receptors
were activated for up to 3 hours after one or two puffs of a cigarette
(Cosgrove et al., 2009). In accordance with the theoretical framework
outlined previously, therefore, the CHRNA4 gene, which codes for the
α4β2 nicotinic receptor, represents a candidate gene that could be exam-
ined for association with performance on a visuospatial attention task.
Parasuraman et al. (2005) genotyped a sample of 89 healthy adults for
the 1545 C/T SNP (rs1044396) of the CHRNA4 gene, which involves
a C to T substitution at the 1545 bp locus. Previous work has shown
that the C allele is associated with increased risk of nicotine addiction,
while the T allele may be protective (Feng et al., 2004), further pointing
to the role of CHRNA4 in controlling nicotinic receptors. Participants
performed a modified version of the Posner (1980) task, in which they
had to make consonant–vowel discriminations of a letter presented in the
left or right visual field and which was preceded by a valid, invalid or
neutral location cue. Parasuraman et al. found that the reaction time
benefit of a valid location cue was smallest in individuals with two copies
of the T allele (TT genotype), larger in CT individuals and largest in those
with the CC genotype. Conversely, the reaction time cost of reorienting
attention following an invalid cue was greatest in the TT group, lower in
the CT group and least in the CC group. Thus, the TT group showed little
benefit of the valid cue and was particularly slowed following invalid
cues. The results were extended and replicated in a follow-up
study using the same task in a group of Norwegian participants (Espeseth
et al., 2006). Collectively, the findings suggest that T allele carriers tend
to focus their attention more closely on a cued location, so that they
have little need of the cue, but they are then disproportionately affected
on invalid cues when they have to redirect visuospatial attention to find
the target outside the focus of attention.
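The benefit and cost measures used in these studies follow directly from the mean reaction times in the three cue conditions. A small sketch (the RT values below are invented for illustration and reproduce only the qualitative pattern described above, not the published data):

```python
def cueing_effects(rt_valid, rt_neutral, rt_invalid):
    """Benefit and cost of a spatial cue relative to a neutral cue (ms)."""
    benefit = rt_neutral - rt_valid  # speeding from a valid cue
    cost = rt_invalid - rt_neutral   # slowing from reorienting after an invalid cue
    return benefit, cost

# Invented mean RTs (ms) per CHRNA4 genotype group, reproducing only the
# qualitative pattern described in the text, not the published values.
rts = {'CC': (420, 470, 500), 'CT': (435, 470, 515), 'TT': (460, 470, 540)}
for genotype, (valid, neutral, invalid) in rts.items():
    benefit, cost = cueing_effects(valid, neutral, invalid)
    print(genotype, benefit, cost)  # TT: smallest benefit, largest cost
```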
The results are also interpretable within the neurocognitive theoreti-
cal framework of the candidate-gene approach described previously. The
T allele of the CHRNA4 may be associated with changes in the affinity
of α4β2 receptors in parietal cortex that activate brain networks linked to
tightly focused visuospatial attention. Related findings on other visual
attention tasks, namely cued visual search, have provided corroborative
evidence that the CHRNA4 gene is associated with individual differ-
ences in focused attention (Greenwood et al., 2005).

Working memory

Neuroimaging studies have shown that the prefrontal cortex (PFC)
plays a critical role in working memory and executive function (Cohen
et al., 1997; Fuster, 2008). Electrophysiological studies in primates have
shown that the activity of prefrontal neurons is modulated by DA
(Abi-Dargham et al., 2002) and NE (Avery et al., 2000) during working
memory tasks. Pharmacological studies in monkeys have also linked DA
activity in a dose-dependent manner to working memory performance
(Vijayraghavan et al., 2011). Thus, one can hypothesize that genes that
code for the relative availability of DA and NE would be associated
with individual differences in spatial working memory. As described
previously, the DBH gene is thought to control the relative cortical
levels of DA and NE (Cubells & Zabetian, 2004). Accordingly, the DBH
gene provides a candidate for examination of association with working
memory.
Parasuraman et al. (2005) genotyped a group of 103 healthy individu-
als for the 444 G/A polymorphism of the DBH gene and tested them in
a working memory task which required participants to keep in mind
the location (or locations) of 1–3 dots for a period of 3 s. A single, red
test dot then appeared, either at the same location as a target dot or at
a different location, and participants had to decide whether the test dot
location matched or did not match one of the target dots. Matching
accuracy decreased as the number of locations to be maintained in work-
ing memory increased from 1 to 3, demonstrating the sensitivity of the
task to variations in memory load. Accuracy was equivalent for all three
genotypes at the lowest memory load. At the medium (two locations)
and high (three locations) memory loads, however, accuracy was higher
for the AG group than the AA group, and higher still for the GG group.
The effect of the G allele on working memory performance was particu-
larly strong at the highest memory load. Overall, the results point to a
strong association between the DBH gene and working memory.
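The logic of a single trial of this spatial delayed match-to-sample task can be sketched as follows (a simplified illustration; the 8 × 8 location grid and the 50% match rate are assumptions, not the actual task parameters):

```python
import random

def match_to_sample_trial(load, grid=8):
    """One trial of a spatial delayed match-to-sample task.

    `load` target locations are sampled from a grid; after the retention
    interval, a single test location matches one of the targets on half
    of the trials. Grid size and match rate are illustrative assumptions.
    """
    cells = range(grid * grid)
    targets = random.sample(cells, load)                     # 1-3 target dots
    if random.random() < 0.5:
        test = random.choice(targets)                        # match trial
    else:
        test = random.choice([c for c in cells if c not in targets])  # non-match
    return targets, test, test in targets

random.seed(0)
targets, test, is_match = match_to_sample_trial(load=3)
print(len(targets), is_match)
```

Raising `load` from 1 to 3 increases the number of locations that must be maintained over the delay, which is how the task manipulates memory load.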
Participants in this study were also genotyped for the CHRNA4 1545
T/C SNP examined previously in the context of visual attention, and
participants tested on the attention task were also genotyped for the
DBH 444 G/A SNP, thus allowing the examination of patterns of associa-
tion between the two genes and the two cognitive tasks. The CHRNA4
gene was found to be significantly associated with performance on the
visual attention task, but not the working memory task. Conversely,
the DBH gene was significantly associated with performance on the
working memory task, but not the attention task. Thus, there was a
double dissociation between the CHRNA4 and DBH genes and attention
and working memory. These results are consistent with the known func-
tions of these two genes in cholinergic and dopaminergic/noradrenergic
transmission respectively.

Decision-making

The genetic associations found for visual attention and working
memory indicate that it is possible to link individual differences in
basic cognitive functions to candidate genes selected on a theoretical
basis. Similar associations have been reported for other basic cognitive
functions, such as episodic memory, perceptual recognition, language
comprehension and so forth (see Green et al., 2008, for a review). From
the perspective of neuroergonomics, however, it would be desirable if
genetic associations could also be examined for more complex cognitive
tasks that are representative of work and everyday settings. One such
complex cognitive activity is dynamic decision-making, which is a key
feature of many operational settings, such as air traffic control, medical
diagnosis, and military command and control.
Decision-making in such settings is often conducted with the use of
decision support tools. Examples include automated decision aids in
radiology (Alberdi et al., 2004), commercial flight (Layton et al., 1994)
and clinical healthcare (Vashitz et al., 2009). Decision aiding can boost
efficiency and throughput in such work domains by speeding up the
decision process. Yet automation can sometimes provide faulty advice
to the user, often because of missing information (e.g. a pilot’s flight
planner that does not know about a suddenly developing bad weather
pattern) or because of unknown bugs in the software of the automation
algorithms (Leveson, 2005). If the human erroneously accepts the com-
puter’s decision in such an instance—a tendency called ‘automation
bias’ (Mosier et al., 1998)—the outcomes can be adverse and potentially
catastrophic (Parasuraman & Riley, 1997). One example of the deadly
consequences of automation bias involved military personnel errone-
ously following a decision aid’s recommendation to direct missiles to a
target, resulting in civilian casualties (Cummings, 2006). Accordingly,
it is important to understand interindividual variation in the extent
of automation bias, particularly as it seems not to be diminished with
domain expertise (Mosier et al., 2001) or by training people to be
accountable for their actions (Skitka et al., 2000).
Interindividual variation in cognitive components underlying
speeded decision-making, particularly working memory and executive
function, may contribute to effective decision-making performance
under imperfect automation. The PFC plays a critical role in working
memory and executive function, and in their contributions to effective
decision-making (Fuster, 2008; Miller & Cohen, 2001). Prefrontal
cortical activity is modulated by DA (Williams & Goldman-Rakic, 2002)
and NE (Avery et al., 2000). DA and NE activity in the PFC have been
linked to simple match-to-sample decisions in working memory tasks
(Abi-Dargham et al., 2002; Avery et al., 2000). Thus, one can hypothe-
size that the DBH gene, which codes for the relative availability of DA
and NE, would be associated with individual differences in complex
decision-making under (imperfect) computer aiding. This was tested
in a recent study at the Center of Excellence in Neuroergonomics,
Technology, and Cognition (CENTEC) at George Mason University
using a simulated command and control task involving time-stressed
decision-making.
Another important gene locus within the DBH gene is the -1021 C/T
SNP (rs1611115), which is found 1021 bp upstream in the promoter
region of the DBH gene (Cubells & Zabetian, 2004). Compared with
the 444 G/A SNP, which is associated with an approximately threefold
change in DBH levels, the -1021 C/T SNP leads to an approximately
tenfold change in DBH enzyme activity. High DBH enzyme activity is
associated with greater conversion of DA to NE in the synapse, and there-
fore to lower post-synaptic DA compared with NE levels; conversely, low
enzyme activity is associated with greater DA compared with NE levels.
Given that increased DA activity is linked to enhanced decision-making
performance (Fuster, 2008), low DBH enzyme activity should be associ-
ated with superior performance on the command and control task. The
T allele of -1021 C/T and the A allele of 444 G/A are associated with
lower DBH enzyme activity. Therefore, assuming additive effects of
the two SNPs, one can predict that individuals with two copies of the
T allele of the -1021 C/T SNP (TT) and two copies of the A allele of the
444 G/A SNP (AA) would show the lowest DBH enzyme activity and
the best decision-making performance compared with individuals with
the CC and GG genotypes on these SNPs.
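The additive prediction can be made concrete with a small scoring sketch. The cut-off used here to split low- from high-activity genotypes is hypothetical; the text specifies only that the TT/AA and CC/GG extremes should show the lowest and highest enzyme activity respectively.

```python
def dbh_activity_group(c1021_genotype, g444_genotype):
    """Classify DBH enzyme activity from the two SNPs described in the text.

    The T allele of -1021 C/T and the A allele of 444 G/A are associated
    with lower enzyme activity; assuming additive effects, we count the
    low-activity alleles across both SNPs (0-4). The cut-off used here is
    a hypothetical illustration, not the grouping rule of the actual study.
    """
    low_alleles = c1021_genotype.count('T') + g444_genotype.count('A')
    return 'low' if low_alleles >= 2 else 'high'

print(dbh_activity_group('TT', 'AA'))  # lowest enzyme activity -> 'low'
print(dbh_activity_group('CC', 'GG'))  # highest enzyme activity -> 'high'
```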
The command and control task involves not only spatial processing,
as in the simpler spatial working memory task of Parasuraman et al.
(2005), but also requires participants to make judgements about the
relative positions of ‘friendly’ and ‘enemy’ units under time pressure.
The task also includes an automated decision aid that participants can
choose to rely on or not. Imperfect decision-aiding was manipulated by
having the automated advisories be always (100%) correct or, in a sepa-
rate block of trials, 80% correct. Participants had the option of verifying
the automation recommendation before making their decision choice
by clicking on an ‘Information’ button. One hundred adults genotyped
for the -1021 C/T and 444 G/A SNPs of the DBH gene were divided into two groups,
a low-DBH enzyme activity group and a high-DBH enzyme activity
group, based on their genotypes on the two DBH SNPs.
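The imperfect-aiding manipulation amounts to an advisory that is wrong on a fixed proportion of trials. A minimal sketch (the 'engage'/'hold' choice set is an illustrative assumption, not the actual task interface):

```python
import random

def advisory(correct_choice, reliability):
    """Return the decision aid's recommendation, wrong with probability
    1 - reliability (the binary choice set is an illustrative assumption)."""
    if random.random() < reliability:
        return correct_choice
    return 'hold' if correct_choice == 'engage' else 'engage'

random.seed(2)
n_trials, wrong = 500, 0
for _ in range(n_trials):
    correct = random.choice(['engage', 'hold'])
    if advisory(correct, reliability=0.8) != correct:
        wrong += 1
print(wrong / n_trials)  # roughly 0.2: incorrect advice on about 20% of trials
```

It is precisely this minority of wrong-advice trials that diagnoses automation bias: an operator who verifies the recommendation can catch the error, whereas one who accepts it uncritically cannot.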
Decision accuracy on the simulated command and control task was
enhanced by 100% reliable automation compared with when the task
was performed manually. However, accuracy was reduced and decision
time was slowed when the automated decision aid provided advice that
was only 80% reliable, consistent with previous findings pointing to
automation bias when assisted by imperfect decision aids (Mosier et al.,
1998; Rovira et al., 2007). Furthermore, as predicted, individuals with
gene variants associated with low DBH enzyme activity (high DA
compared to NE levels) showed superior decision-making performance
compared with those with high DBH enzyme activity under imperfect
decision-aiding (Figure 8.2). Whereas there were no differences in overall
decision-making accuracy or decision time between the low- and
high-DBH enzyme activity groups in the manual, Automation 100% or
Automation 80% conditions, the low DBH enzyme activity group was
more accurate and speedier in making engagement decisions in the
Automation 80% condition on those trials when incorrect advice was
given. Thus, whereas the high enzyme activity (lower DA level) group
showed the typical automation bias effect (Mosier et al., 1998)—a 15%
reduction in decision accuracy—the low-enzyme activity (higher DA
level) group showed a significantly reduced automation bias, only a 4%
reduction.

Figure 8.2 Mean decision accuracy (in percent) in the command and control
task when carried out manually, and on reliable and unreliable trials in the
Automation 80% condition (bars show standard errors)
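Automation bias in this design can be quantified as the drop in accuracy on trials where the aid gives incorrect advice. The accuracies below are invented to reproduce the 15-point and 4-point reductions reported above, not the actual group means:

```python
def automation_bias(acc_reliable, acc_unreliable):
    """Automation bias expressed as the percentage-point drop in decision
    accuracy when the aid's advice is incorrect."""
    return acc_reliable - acc_unreliable

# Illustrative accuracies (%) consistent with the pattern reported in the text
print(automation_bias(95, 80))  # high-DBH-activity group: 15-point drop
print(automation_bias(95, 91))  # low-DBH-activity group: 4-point drop
```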
The greatly reduced automation bias for low compared with high
enzyme activity participants indicates that a gene that regulates relative
dopamine availability in PFC, namely the DBH gene, plays a role in
interindividual variation in time-stressed decision-making perform-
ance under imperfect automated aiding. Specifically, individuals with
variants of the DBH gene associated with low levels of DBH enzyme
activity, which converts DA to NE in prefrontal cortical neurons, exhibit
superior decision-making in an automated command and control task
when incorrect advice is given. Thus, the DBH gene influences the
degree to which decision-making performance is adversely affected by
biased use of decision aids.
Supporting evidence for the role of DBH was provided by the results
on information verification rates. The low DBH enzyme activity group,
which showed less susceptibility to automation bias, verified automa-
tion recommendations on unreliable trials at more than twice the rate
of the high DBH enzyme activity group. Moreover, they also reported
lower subjective trust in the automation on unreliable automation
trials. These findings are consistent with the conclusions of Bahner
et al. (2008) that objective data on verification behaviour are needed
to determine whether automation biases decision making in complex,
dynamic tasks, such as command and control, and process control. The
results for the subjective ratings of trust provided further corroborative
evidence: the low DBH enzyme activity group reported lower trust on
the unreliable trials.
Modulation of task performance by normal variation in the DBH
gene may reflect the role of executive functioning in successful
decision-making. Executive functioning is claimed to be composed of
inhibition, set shifting and updating in working memory (Friedman
et al., 2008). Of these three, updating in working memory was found
in a large twin study to be the most heritable and have the strong-
est correlation with general intelligence (Friedman et al., 2008). One
possible interpretation of the results of this study, therefore, is that
the (highly heritable) ability to rapidly update information in working
memory—which is associated with higher DA levels and variation in
the DBH gene (Parasuraman et al., 2005)—may influence the time
needed or resources available to consider automation recommenda-
tions and confirm them in complex decision-making tasks. Previous
studies have found that automation bias occurs in both novices and
in expert populations, such as pilots (Mosier et al., 2001), and while
individual differences have been noted (Parasuraman & Manzey,
2010), their basis has not been identified. Given that the DBH
gene has been linked to executive function and working memory
(Greenwood, et al., 2009a; Parasuraman et al., 2005), these findings
suggest that interindividual variations in these cognitive functions are
major contributing factors.

Conclusion

Molecular genetics is the latest addition to the set of neuroscience
methods that can be used by researchers in human factors and ergonomics.
Molecular genetics offers at least two advantages. First, it can be
used to further understanding of the molecular pathways underlying
cognition by investigating the links between genes, gene expression,
neurotransmitters and brain networks that are required for the per-
formance of a cognitive task. Our knowledge of such links remains
rudimentary at the present time, but advances in basic molecular neu-
roscience should expand our understanding in the near future. At the
same time, molecular genetics can help in identifying sources of indi-
vidual differences in cognitive performance. This second characteristic
has already yielded fruit. We can now describe how normal variation in
gene sequences between people is linked to interindividual variation
in the efficiency of cognitive functioning. The picture will undoubtedly
become more complex as more research is conducted, and it is likely
that most cognitive functions will be linked to many genes, each of
which has a small effect, so that disentangling the specific influence of
single genes will be a challenge, but the initial results are promising.
This chapter began with an argument in favour of consideration of
individual differences. It was proposed that accounting for individual
differences is important for theory and the development of general
principles that can be used in applications. We have several such
principles in human factors and ergonomics—think of Fitts’ Law,
stimulus–response compatibility, limited working memory capacity,
the vigilance decrement and so forth. Molecular genetic studies are
not going to overturn any of these generalizations, but they may
tell us why one sees variability in their expression, both in the
laboratory and in the way that different people behave at work. This
chapter also outlined a theoretical framework in which cognitive
processes are linked to brain networks innervated by neurotransmit-
ters whose functional efficiency can be associated with variability in
the expression of specific genes. Normal variations between unrelated
people in the same genes can be linked to differences between those
individuals in the efficiency of a cognitive function. Examples of the
use of the approach were given for three cognitive functions—visual
attention, working memory and decision-making. The last example
illustrates the potential of the approach for more complex cognitive
processes representative of human performance in complex systems,
as the genetic association was described for a command and control
task that has features of complex decision-making tasks in work
environments.
The field of the molecular genetics of cognition is still in its infancy.
Despite this, the results obtained to date are very promising in regard
to the ultimate goal of providing a neural and genetic basis for
characterizing individual differences in various cognitive functions. The
foundational work that has been done so far needs to be supplemented
by several new directions for research. For example, performance meas-
ures will need to be supplemented by those from neuroimaging, such as
electroencephalography (EEG) and fMRI so as to further exploit the power
of cognitive neuroscience research on the neural networks and neuro-
chemical basis of different cognitive functions. This will be challenging
given the high cost of neuroimaging studies with large samples of parti-
cipants, but the payoff could be considerable. Furthermore, to date,
single-gene associations between SNPs and specific cognitive functions
have been identified. It will be important to determine whether these
genes act independently of each other or, as is more likely, interact
with each other. For example, Espeseth et al. (2006) reported that the
CHRNA4 gene described previously interacts with a neuronal repair
gene, APOE, in its effects on individual differences in the same
visuospatial attention task used by Parasuraman et al. (2005). Another
study reported interactive effects of two cholinergic genes (CHRNA4 and
CHRM2) on individual differences in Parasuraman et al.’s attentional
task (Greenwood et al., 2009b). The coming decade is likely to witness
an explosion of these and other types of molecular genetic research that
could well revolutionize understanding of individual differences in
cognition.
The studies described in this chapter dealt, for the most part,
with establishing the theoretical framework for examining genetic
associations for basic cognitive functions. As such, the findings do not
have immediate practical applications. However, preliminary results
show that genetic associations can also be found for more complex
cognitive functions, such as decision-making in command and control
tasks. Possible practical implications of the results include develop-
ment of selection and training procedures aimed at forming teams of
human operators who can make speedy and accurate decisions that
are less biased by imperfect computerized decision aids. As more such
studies are conducted, greater potential for practical applications will
emerge, particularly if gene–environment interactions are examined
(e.g. studies of training in subgroups of individuals defined by genotype;
Parasuraman & Jiang, 2012). With regard to training, understanding
individual differences in attention and time-sharing may lead to better
procedures for training high-performance skills, such as those of military
jet pilots (Gopher et al., 1994). The effectiveness of training regimens
might be enhanced if they could be tailored to individuals with certain
genotypes. But the practical effect of improved understanding of
individual differences is not restricted to selection and training alone.
9
Validating Models of Complex,
Real-life Tasks Using fMRI
Jelmer P. Borst, Niels A. Taatgen and Hedderik van Rijn

Researchers in human factors and neuroergonomics increasingly use
neuroimaging techniques to, for example, apply adaptive automation,
investigate mental workload and develop brain–computer interfaces
(e.g. Parasuraman, 2003; Parasuraman & Wilson, 2008). For example,
Wilson and Russell (2004, 2007) describe a task in which operators were
asked to monitor the progress of four independent uninhabited air vehi-
cles (UAVs), download radar images, visually search those images and
mark targets. To assist the operators they developed an artificial neural
network that used online electroencephalography (EEG) measurements
to adapt the difficulty of the task. Especially when tailored to indi-
vidual operators, this improved performance dramatically (Wilson &
Russell, 2007).
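The closed-loop logic of such adaptive automation can be sketched as a simple threshold controller. The workload index, thresholds and discrete difficulty levels below are illustrative assumptions, not Wilson and Russell's actual neural-network classifier:

```python
def adapt_difficulty(difficulty, workload_index, low=0.3, high=0.7):
    """Adjust task difficulty from an EEG-derived workload index in [0, 1].

    A hypothetical threshold controller: shed load when the operator is
    over-loaded, add load when under-loaded. The index, thresholds and
    discrete difficulty levels are illustrative assumptions.
    """
    if workload_index > high:
        return max(1, difficulty - 1)   # over-loaded: make the task easier
    if workload_index < low:
        return difficulty + 1           # under-loaded: make the task harder
    return difficulty                   # workload in the acceptable band

print(adapt_difficulty(3, 0.9))  # 2
print(adapt_difficulty(3, 0.1))  # 4
print(adapt_difficulty(3, 0.5))  # 3
```

Tailoring the thresholds to individual operators corresponds to the individualized calibration that Wilson and Russell (2007) found to improve performance most.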
However, such complex tasks are often hard to study with techniques
from the standard neuroscience toolbox. In this chapter we describe
a possible solution for this problem: using cognitive architectures in
combination with functional magnetic resonance imaging (fMRI).
Cognitive models and cognitive architectures have been applied in
human factors research since the 1950s (e.g. Card et al., 1983; Fitts,
1954), often with great success (see, e.g. Gray, 2007, 2008; van Rijn
et al., 2011). In recent years, cognitive architectures have been linked
to neuroimaging research by mapping components of the architectures
onto brain regions (e.g. Anderson et al., 2008a, 2008b) or even by model-
ling the detailed information flow during complex tasks (e.g. Stocco
et al., 2010). On the one hand, this linkage has made the architectures
more biologically plausible and on the other hand it has made it possi-
ble to use cognitive architectures to identify neural correlates of tasks
that are too complex to study with regular fMRI analysis methods. It
seems, therefore, natural to use the techniques that were developed to
connect cognitive architectures and fMRI to analyse the real-life tasks
studied in the field of human factors.
Two techniques that are used to connect cognitive architectures to
neuroimaging data are regions-of-interest (ROI) analysis and model-
based fMRI. ROI analysis is a confirmatory technique that uses a
predefined mapping of the architecture to brain areas, whereas model-
based fMRI is an exploratory technique that shows which brain areas
fit best to components of the architecture. In this chapter, we review
the standard method of analysing fMRI data and its shortcomings, and
introduce cognitive architectures. Regions-of-interest analysis and model-based
fMRI are illustrated in the context of a relatively complex dual-task
experiment and applied to the UAV task described earlier. Finally, how
the results could be used to improve the adaptive automation method
of Wilson and Russell (2007) is discussed.

Standard fMRI analysis

As introduced in Chapter 1, in a standard fMRI analysis researchers
typically strive to find the neural correlates of a certain cognitive
process. To this end, an experimental condition is compared with a
control condition. The experimental condition places demands on the
process of interest, and the control condition is identical to the experi-
mental condition, except that it does not place demands on the process
of interest. Brain regions that show a different response in the two
conditions (either more or less activity) are assumed to be involved in
the cognitive process under investigation (e.g. Friston et al., 2007).
Although this method of cognitive subtraction works relatively well
for simple tasks (see, for extensive discussions on cognitive subtraction,
e.g. Friston et al., 1996; Logothetis, 2008), it is difficult to apply to more
complex tasks owing to the requirement that subtasks be relatively
isolated. In complex tasks it is often very difficult to reliably isolate a single
subtask or cognitive function, as many different cognitive functions
are used throughout the task. Instead of being either on or off – as a
traditional fMRI analysis assumes – these functions are used on a more
continuous scale: during some parts of an experiment they are hardly
used at all, during other parts they are used a little and during still other
parts of the experiment they are used to their maximum. This makes
it difficult to analyse complex tasks with traditional techniques that
assume, either explicitly or implicitly, binary demand functions. One
possible solution is to use cognitive architectures to generate more
detailed demand functions.
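The contrast between the binary demand assumed by cognitive subtraction and the graded demand a model can supply is easy to illustrate (all values below are hypothetical):

```python
# Hypothetical per-scan demand on one cognitive resource during a task.
# Cognitive subtraction assumes a binary (boxcar) demand function; a
# cognitive model can instead supply a graded demand function.
boxcar = [0, 1, 1, 1, 1, 0]
graded = [0.0, 0.2, 0.9, 0.8, 0.1, 0.0]

# Scans that the boxcar treats as fully demanding but in which, per the
# model, the resource is barely used:
underused = [i for i, (b, g) in enumerate(zip(boxcar, graded))
             if b == 1 and g < 0.5]
print(underused)  # -> [1, 4]
```

The boxcar treats all four 'on' scans as equally demanding, whereas the graded function distinguishes scans in which the resource is barely used from those in which it is used near its maximum.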

Cognitive architectures

In 1973, Allen Newell boldly argued that psychology focuses too much
on isolated tasks and, as a result, does not progress much beyond
solving ‘small questions’. He expressed his concerns that psychology
would never integrate the results of separate experiments into a unified
theory of human cognition (Newell, 1973). As a solution, he proposed
cognitive architectures: unified theories of cognition within which
computational processing models can be developed for a wide variety
of tasks (see also Newell, 1990). The use of computational models forces
one to specify theories at a very precise level; developing different
models within one theory ensures that models do not explain isolated
phenomena, but that basic mechanisms are shared between tasks.
Currently, there are several cognitive architectures in use, for instance
SOAR (e.g. Newell, 1990), EPIC (e.g. Meyer & Kieras, 1997), 4CAPS
(e.g. Just & Varma, 2007) and ACT-R (e.g. Anderson, 2007). In this chapter
we use the ACT-R theory (see, for the range of tasks that have been
modelled using ACT-R, http://act-r.psy.cmu.edu/) because it focuses on
cognitive mechanisms and has an explicit mapping of architectural
components onto brain areas. However, the methods that we discuss are
applicable to all cognitive architectures.
Cognitive architectures are typically used to simulate complete tasks,
including perceptual and action elements of the tasks. This makes them
especially well suited for neuroergonomics, as often the interaction
between perceptual, motor and more central cognitive resources is crucial
for explaining effects in real-life tasks (Kieras & Meyer, 1997; van Maanen
et al., 2009; van Maanen et al., 2012). It is therefore not surprising that
cognitive architectures have, indeed, been used to analyse real-life tasks,
such as driving (e.g. Salvucci, 2006), air-traffic control (Taatgen & Lee,
2003), piloting UAVs (Gluck et al., 2007) and operating the flight man-
agement system (FMS) of Boeing 777s (Taatgen et al., 2008). Furthermore,
they have been applied to understand more low-level events and tasks
that are of interest to human factors researchers, such as interruptions
(Altmann & Trafton, 2002; Salvucci et al., 2009), the perception of dura-
tion (Taatgen & van Rijn, 2011; Taatgen et al., 2007) and multitasking
(Kieras et al., 2000; Salvucci & Taatgen, 2008, 2011).

Cognitive architectures and fMRI

For decades, the field of information-processing psychology that
argued for cognitive architectures essentially ignored the brain
(Anderson, 2007). For instance, Newell himself stated in 1980 that
‘symbolic behaviour (and, essentially, rational behaviour) becomes
relatively independent of the underlying technology. Applied to
the human organism, this produces a physical basis for the appar-
ent irrelevance of the neural level to intelligent behaviour’ (Newell,
1980, p. 175). However, since the 1990s, cognitive psychologists have
recognized the importance of the system in which intelligence is real-
ized and have started to integrate findings of the neurosciences into
cognitive architectures. For example, based on fMRI results, ACT-R’s
original ‘goal’ module was split into a high-level goal component and
a component that can maintain a current representation of the state of
the world (Anderson et al., 2004; Qin et al., 2003). Whereas in the past
cognitive models were constrained only by behavioural data, such as
keypresses and eye movements, at present they can also be validated
by fMRI data.
Cognitive architectures were first linked to fMRI data by mapping regions of interest (ROIs) in the brain to components of the architecture (for a short introduction, see Anderson et al., 2008b). The basic idea of a ROI analysis is that the components of a model (e.g. vision, memory) are associated with small regions in the brain (~10 × 10 × 10 mm). Activity of a component is sup-
posed to correspond to neural activity in the associated brain region. For
instance, activity in the motor resource of ACT-R is assumed to correspond
to neural activity in a region in the motor cortex (e.g. Anderson 2005,
2007). The advantage of using ROIs in combination with a computational
model over traditional fMRI analysis is that one can investigate activity
in a brain area over the course of an experiment. That is, instead of hav-
ing to assume that a region is either used or not in a certain condition of
an experiment, one can compare the amount of activity in the region to
the model’s predictions and investigate whether the model gives a good
account of human behaviour. By comparing model and brain data, one
can, on the one hand, validate and constrain cognitive models, and, on
the other hand, give a detailed explanation of the acquired fMRI data.
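The core computation of an ROI analysis, averaging the signal in a small cube of voxels across scans, can be sketched as follows. This is a minimal illustration: the `roi_timecourse` helper and all array shapes and values are invented for the example, not taken from the studies discussed here.

```python
import numpy as np

def roi_timecourse(volume4d, center, radius=2):
    """Mean signal per scan in a cubic ROI around a voxel coordinate.
    With 2 mm isotropic voxels, radius=2 gives a cube of ~10 x 10 x 10 mm.
    volume4d has shape (x, y, z, scans)."""
    x, y, z = center
    cube = volume4d[x - radius:x + radius + 1,
                    y - radius:y + radius + 1,
                    z - radius:z + radius + 1]
    return cube.mean(axis=(0, 1, 2))

# Synthetic data: a 20 x 20 x 20 voxel volume recorded over 30 scans,
# with constant signal inside the region around voxel (10, 10, 10)
data = np.zeros((20, 20, 20, 30))
data[8:13, 8:13, 8:13, :] = 1.0
signal = roi_timecourse(data, (10, 10, 10))  # one mean value per scan
```

The resulting time course, one value per scan, is what gets compared against the model's predicted activity for the associated component.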
Recently, a different fMRI analysis method, termed model-based fMRI
(e.g. Gläscher & O’Doherty, 2010; O’Doherty et al., 2007), has been
extended to be applicable to cognitive architectures. In model-based
fMRI, predicted activation from a cognitive model is used as a regressor
in the analysis of the fMRI data. This analysis technique thus shows
regions in the brain where neural activity significantly correlates with
model activity. Model-based fMRI has been very successful in identi-
fying brain areas involved in reinforcement learning (e.g. Daw et al.,
2011; Hampton et al., 2006; Wunderlich et al., 2009) and category
Validating Models of Complex Tasks Using fMRI 167

learning (Davis et al., 2012). Recently, this technique was, for the first
time, applied to a model developed in a cognitive architecture to find
regions related to multitasking (Borst et al., 2011). This analysis showed
that model-based fMRI is a powerful technique for analysing fMRI data
of complex tasks.
Previous work has applied both the ROI analysis method (Borst et al.,
(2010a) and the model-based fMRI analysis method (Borst et al., 2011)
to data pertaining to multitasking. In the remainder of this chapter the
results of the two analysis methods are discussed and compared.

Task and model

The dataset analysed with both the ROI and model-based fMRI analysis
methods was developed to locate the neural correlates of the so-called
problem-state resource in a multitasking setting (Borst et al., 2010b).
The problem–state resource is assumed to store intermediate repre-
sentations in a task, for example 3x = 16 when solving 3x + 4 = 20.
The contents of the problem–state resource are assumed to be acces-
sible without a time cost (Anderson, 2005), unlike other elements
in working memory (e.g. McElree, 2001). It has been shown that the
problem–state resource can, at most, contain one chunk of information
and that it therefore acts as a bottleneck when it is required by multiple
tasks at the same time (Borst et al., 2010a). Given these properties,
the problem–state resource is comparable to the focus of attention in
recent working memory theories (e.g. Jonides et al., 2008; McElree,
2001; Oberauer, 2002, 2009).

The task
The interface of the experiment used to study processing bottlenecks is
shown in Figure 9.1. To localize the neural correlates of the problem–
state resource, a multitask design is used in which participants alternate
between solving ten-column subtraction problems and entering ten-
letter strings (note that although only one column of the subtraction
task was shown at a time, participants were trained to consider each
column as part of a ten-column subtraction problem). Both the subtrac-
tion task and the text entry task had two versions: an easy version that
did not require maintenance of an intermediate representation in the
problem–state resource and a hard version that did. For the subtraction
task, this meant that in the easy version the upper term was always
larger than or equal to the lower term (i.e. no borrowing was required),
whereas in the hard version participants had to borrow in six out of
the ten columns.

Figure 9.1 The interface of the experiment, with the subtraction task on the left and the text entry task on the right. For the subtraction task, only one column is shown at a time, but participants were trained to consider the problems as part of a ten-column subtraction problem. The task that is not currently performed is masked with hash marks (#): for the text entry task, the mask marks the spot where the next letter will appear. As soon as a participant enters a digit for the subtraction task, this mask changes into the next letter to be typed and the subtraction task is masked. Reprinted from Borst et al. (2010b). The neural correlates of problem states: Testing fMRI predictions of a computational model of multitasking. PLoS ONE, 5, e12966

In the easy version of the text entry task, a letter was
shown, which the participants then had to enter, followed by a new
letter and so forth. In the hard version, a complete ten-letter word
(high-frequency Dutch words) was shown once at the start of a trial, but
as soon as the participant entered the first letter, the word disappeared
and had to be entered without feedback so that the participant had to
mentally keep track of which letters had been entered.
Participants alternated between the tasks after every number or letter.
Thus, in the hard versions of the tasks they had to keep track of whether
a ‘borrow’ was in progress or what word they were entering (and the posi-
tion within the word) while giving a response on the other task. In the
easy versions of the task, sufficient information to perform the tasks was
always on the screen. Previous experimental work (e.g. Borst et al., 2010a)
suggests that participants use the problem–state resource to keep track of
borrowing and their place in the words. A trial in this task is defined as
entering a complete word and solving a complete subtraction problem,
and thus involves 20 responses. The difficulty of the two tasks is factorially
combined to create four conditions: easy subtraction – easy text entry,
easy subtraction – hard text entry, hard subtraction – easy text entry and hard
subtraction – hard text entry (for more details see Borst et al., 2010b).
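The borrowing manipulation can be made concrete with a small helper that counts borrow operations in a column-wise subtraction. This is an illustrative sketch, not the stimulus-generation code used in the actual experiment.

```python
def count_borrows(top, bottom):
    """Count the columns that require borrowing when computing top - bottom,
    processing the columns from right to left, as participants did.
    top and bottom are equal-length digit strings."""
    borrow, n = 0, 0
    for t, b in zip(reversed(top), reversed(bottom)):
        t_eff = int(t) - borrow       # an incoming borrow lowers the upper digit
        borrow = 1 if t_eff < int(b) else 0
        n += borrow
    return n

count_borrows("987654", "123321")  # easy version: upper >= lower, so 0 borrows
count_borrows("300000", "000001")  # a single borrow can ripple across columns
```

In the easy condition every column satisfies the upper-digit >= lower-digit constraint, so the count is always zero; the hard condition was constructed so that six of the ten columns require a borrow.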

The model
The typical outcome of this type of experiment is that the two difficulty
manipulations result in an over-additive decrease in performance: both
accuracy and speed are considerably worse in the hard subtraction – hard
text entry condition than in all other conditions (Borst et al., 2010a,
2010b). This pattern can be accounted for with a model that assumes
a problem–state bottleneck according to which only one task can use
the problem–state resource to store an intermediate representation at
a time. Because both tasks need to store an intermediate representa-
tion in the respective hard conditions, and as participants have to
alternate between the two tasks, this means that on each step in a trial the
problem–state resource has to be swapped out in the hard subtraction –
hard text entry condition. That is, on every alternation the problem state of
the previous task is stored in declarative memory while the problem state
of the current task is recalled from declarative memory and restored to
the problem–state resource. This is only necessary in the hard subtraction –
hard text entry condition, as in the other conditions no more than one
of the tasks needs a problem state. The model incorporating these time-
consuming and error-prone problem–state replacements provided a good
match to the interference effects in the data (Borst et al., 2010a).
The problem-state bottleneck model is implemented in the cognitive
architecture ACT-R (Anderson, 2007). ACT-R’s visual resource is used
to model perceiving the task, the manual resource to model giving
responses and ACT-R’s declarative memory to model memory processes
(e.g. retrieve “6 − 4 = 2”). These elements have been validated previ-
ously (see e.g. Anderson, 2007). Because the model performs the same
task as the human participants (i.e. it interacts with the same interface) it
generates traces of activation that can be compared directly to behav-
ioural performance and neural images of human participants.
Figure 9.2 shows model activation traces for the four different condi-
tions of the multitasking experiment. For each condition, the boxes
indicate when ACT-R’s resources were active in a typical trial. The
figure shows that there is much more resource activity in the more
difficult conditions, but also that the pattern of activity over the different
resources differs per condition. For example, the most problem–state
resource and declarative memory activity can be observed in the hard
subtraction – hard text entry condition, whereas there is hardly any activity in the easy subtraction – easy text entry condition. These traces of activity can be used to connect the computational model of the task to fMRI data collected during task performance.

Figure 9.2 Example of model activity for a complete trial in each condition of the experiment. On the y-axis the different resources of ACT-R are shown; the x-axis represents time. Each box indicates that a resource is active at that moment in time. Reprinted from Borst et al. (2010b). The neural correlates of problem states: Testing fMRI predictions of a computational model of multitasking. PLoS ONE, 5, e12966

ROI analysis
The model presented in Borst et al. (2010a) gave a good account of
behavioural data (i.e. response times and accuracy) in the subtrac-
tion/text entry task. Validation of the model requires that it can also
make a priori fMRI predictions of the results of an ROI analysis (Borst
et al., 2010b).
The blood-oxygen-level dependent (BOLD) signal that is measured with an fMRI scanner lags about 6 s behind neural activity (e.g. Friston et al., 2007). To account for this delayed response, the model's activity is
convolved with a haemodynamic response function (HRF). This process
is illustrated in Figure 9.3. Figure 9.3(a) shows the HRF: if there is a
spike of neural activity at time 0, the BOLD response rises to a peak at
around 6 s and then declines again (the HRF is typically modelled with a
gamma function or a mix of gamma functions—here the HRF from the
software package SPM is used; Friston et al., 2007). Figure 9.3(b) shows
activity of a model component as a step function in grey (cf. Figure
9.2). The black line depicts this activity convolved with the HRF—this
is the predicted BOLD response. Figure 9.3(c) shows this process applied
to the problem–state resource (above) and the manual motor resource
(below) for runs through a trial of each of the four different conditions
of the experiment.
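This convolution step can be sketched in a few lines of Python. It is a minimal illustration: the double-gamma HRF shape, the box-car trace and all parameter values here are invented for the example, not taken from SPM or from the ACT-R model.

```python
import numpy as np
from scipy.stats import gamma

dt = 0.1                                  # time resolution in seconds
t = np.arange(0, 30, dt)

# Double-gamma HRF: a peak around 5-6 s followed by a small undershoot
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
hrf /= hrf.sum()

# Box-car activity of one model resource over a 60 s stretch (1 = active)
activity = np.zeros(int(60 / dt))
activity[50:120] = 1                      # active from 5 s to 12 s
activity[300:420] = 1                     # active from 30 s to 42 s

# Predicted BOLD response: the box-car trace convolved with the HRF
bold = np.convolve(activity, hrf)[:len(activity)]
```

The resulting curve rises and peaks several seconds after each burst of model activity, mirroring the lag illustrated in Figure 9.3(b).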

Figure 9.3 (a) Haemodynamic response function (HRF). (b) Convolution example. (c) Model activity for the problem–state resource and the manual resource, raw and convolved with the HRF over the course of four trials
To validate the ACT-R model using ROI analysis, BOLD predictions were generated before running an fMRI experiment. The left panels of
Figure 9.4 show the predictions for (a) the problem–state resource and
(b) the manual resource (for other resources, see Borst et al., 2010b).
The x-axis represents time in the form of scans (1 scan = 2 s) and the
y-axis percent BOLD change. The graphs show the BOLD response over
a complete trial in the task (i.e. entering ten letters in the text entry task
and ten digits in the subtraction task).
Figure 9.4 Results of the regions-of-interest analysis for (a) the problem–state resource and (b) the manual resource. Graphs on the left show model predictions; graphs on the right show recorded BOLD data in the region indicated in the brain.

The predictions for the problem–state resource are shown in Figure 9.4(a). Because the problem–state resource is not required for either of the tasks in the easy subtraction – easy text entry condition, there
is no activity predicted in this condition (cf. Figure 9.2). In the easy
subtraction – hard text entry and hard subtraction – easy text entry conditions,
only one of the tasks needs the problem–state resource, which leads to
intermediate levels of BOLD activity. In the hard subtraction – hard text
entry condition, the problem–state resource has to be swapped out
between tasks on each step in a trial, resulting in the highest predicted
activation levels. Thus, an over-additive interaction effect of subtrac-
tion and text entry difficulty on total BOLD activity (as measured by
the area under the curve, e.g. Anderson, 2005; Stocco & Anderson,
2008) is predicted for the problem–state resource.
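The area-under-the-curve comparison amounts to a simple interaction contrast. A sketch with made-up per-condition time courses (the `auc` helper and all curves are hypothetical, not the published data) might look like:

```python
import numpy as np

def auc(bold, scan_time=2.0):
    """Approximate area under a BOLD curve sampled once per 2 s scan."""
    return float(bold.sum() * scan_time)

# Made-up BOLD time courses for the four conditions (percent signal change)
scans = np.arange(50)
shape = np.clip(np.sin(scans / 8.0), 0, None)   # a generic response shape
ee = 0.00 * shape     # easy subtraction - easy text entry
eh = 0.10 * shape     # easy subtraction - hard text entry
he = 0.10 * shape     # hard subtraction - easy text entry
hh = 0.35 * shape     # hard subtraction - hard text entry

# Over-additive interaction: the hard-hard condition exceeds the sum of
# the two single-difficulty effects relative to the easy-easy baseline
interaction = (auc(hh) - auc(he)) - (auc(eh) - auc(ee))  # > 0 if over-additive
```

A positive interaction term corresponds to the predicted over-additive pattern; a value near zero would indicate merely additive difficulty effects.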
For the manual resource (Figure 9.4b), a completely opposite pattern
is predicted: lowest activation levels for the hard subtraction – hard text
entry condition and highest levels for the easy subtraction – easy
text entry condition. Although the same number of manual actions
must be made in all four conditions, response times increased with
the difficulty of the two tasks. Because response times were longer in
the more difficult conditions, there is more time between responses
and thus more time in which BOLD activity associated with the
manual responses can decay. This results in lower predicted activity
for the more difficult conditions. However, the total amount of activity,
as indicated by the area under the curve (as shown in Figure 9.4),
is predicted to be equal in all conditions as the same number of
responses has to be made in each condition. Note that these predic-
tions are purely qualitative: the magnitude of the predicted BOLD
response can be scaled independently of the shape (Anderson, 2007;
Anderson et al., 2008b).
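Because only the shape is predicted, comparing model and data involves estimating one free scaling factor. A least-squares estimate of that scale can be written in one line (an illustrative sketch; the `fit_scale` helper and example values are not from the studies discussed):

```python
import numpy as np

def fit_scale(pred, data):
    """Least-squares scaling factor s minimizing ||s * pred - data||^2."""
    return float(pred @ data / (pred @ pred))

pred = np.array([0.0, 0.1, 0.3, 0.2, 0.05])  # predicted BOLD shape
data = 2.5 * pred                             # observed data, same shape
fit_scale(pred, data)                         # → 2.5
```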
The results of the fMRI experiment generally confirmed the model
predictions. The right panels of Figure 9.4 show the results for the
regions associated with the problem-state resource and the manual
resource (an area in the parietal cortex, MNI coordinates −24, −67, 44, and an area in the motor cortex, −42, −23, 54, respectively). The overall
pattern predicted by the model—highest activation levels for the more
difficult conditions for the problem–state resource, lowest for the
manual resource—was confirmed by the data.
In the region associated with the problem–state resource, the order of
the conditions was predicted correctly, but the magnitude of the effects
was not. While no activation was predicted for the easy subtraction –
easy text entry condition, the data show a clear task-related increase in
activation. Furthermore, activity in the hard subtraction – hard text entry
condition is not as high as predicted, resulting in a linear effect on total
activation as indicated by the area under the curve instead of the predicted
over-additive effect. The data for the manual resource, Figure 9.4(b), match the predictions nicely: both the order of the conditions and
the magnitude seem correct (as predicted, there were no effects on
the area under the curve). For more details on the analysis, see Borst
et al. (2010b).
In summary, the ROI analysis was based on a priori predictions of the
model, and the model predictions were, in general, confirmed by the
data. However, the over-additive interaction that was predicted for
the problem–state resource was not found in the predefined region. To
see if there is a region in the brain that fits better to the model predic-
tions, model-based fMRI analysis can be applied.

Model-based fMRI analysis


In model-based fMRI, model predictions are convolved with the HRF
(Figure 9.3c), and then directly regressed against fMRI data to locate
brain regions that correspond significantly to model predictions. From
an fMRI analysis perspective, this means that, instead of entering the
factorial, qualitative experimental conditions into a general linear
model, as is customary, the quantitative model predictions are entered
as regressors into the model.
Because the model data are regressed directly against the raw parti-
cipant data, it is important to have a correct time mapping between
model and data; for example, it does not make sense to compare a
fixation in the model to a keypress in the data. To this end, a model is
first run for each participant on the same stimuli that the participant
received, including all nonexperimental components, such as fixation
and feedback screens. To further improve the timing of the model, the
model’s responses are brought into line with the participant’s responses
using a linear transformation (for details, see Borst et al., 2011). The
activity of the model’s resources is then convolved with the HRF and the
results per resource are entered into the general linear model.
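The regression step then reduces to an ordinary least-squares fit with one design-matrix column per model resource. The following is a minimal sketch on synthetic data; a real analysis would add motion and drift confounds and proper inference, and the traces here are random stand-ins rather than convolved model output.

```python
import numpy as np

rng = np.random.default_rng(0)
n_scans, n_voxels = 200, 50

# Convolved activity traces of two model resources (random stand-ins here)
problem_state = rng.random(n_scans)
manual = rng.random(n_scans)

# Design matrix: one column per model resource, plus an intercept
X = np.column_stack([problem_state, manual, np.ones(n_scans)])

# Synthetic voxel time courses; voxel 7 is driven by the problem-state trace
Y = rng.normal(0.0, 0.1, size=(n_scans, n_voxels))
Y[:, 7] += 0.8 * problem_state

# Per-voxel least squares: betas[i, v] = weight of regressor i in voxel v
betas, *_ = np.linalg.lstsq(X, Y, rcond=None)
best_voxel = int(np.argmax(betas[0]))  # voxel best explained by problem state
```

Thresholding the resulting statistical map then yields the regions whose signal correlates significantly with each resource's predicted activity.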
As shown in Figure 9.5, the best-fitting region for the problem–state
resource is located in the inferior parietal lobe, around the intraparietal
sulcus (thresholded at p < 0.01, family-wise error rate-corrected, 100
contiguous voxels). This region overlaps with the predefined problem–state
region in ACT-R that Borst et al. (2010b) used for the ROI analysis (shown
as a white square in Figure 9.5). The signal in the 100 most significant vox-
els in this region is shown on the right in Figure 9.5(a). Although this is
clearly a better fit to the prediction than that found in ACT-R’s predefined
region (Figure 9.4a, right panel), there is still no over-additive interaction
effect present in the area under the curve (see Borst et al., 2011 for details). Thus, while this is the area in the brain that fits best to the model predictions, the predicted over-additive interaction was not present.

Figure 9.5 Results of the model-based analysis for (a) the problem–state resource and (b) the manual resource. On the left, the located brain regions; significance maps were thresholded at p < 0.01 (family-wise error rate-corrected) and 100 contiguous voxels. Coordinates indicate the most significant voxel in the region (problem–state: x = −31, y = −51, z = 39; manual: x = −39, y = −9, z = 57). White squares show the predefined mapping of ACT-R. The graphs on the right show the average BOLD data in the 100 most significant voxels in the region on the left.

Figure 9.5(b) shows the results for the manual resource. The best-fitting
area was located in the visual cortex (motor actions of the model were
almost always accompanied by visual actions, see Borst et al., 2011).
However, a region in the motor cortex also correlated significantly with
the model predictions (the location of the cross-hairs in Figure 9.5b),
which overlaps with ACT-R’s predefined region. Surprisingly, although the shape of the BOLD response in this region fits the model predictions best, at first sight the fit seems worse than in the predefined region (compare Figure 9.5b with Figure 9.4b). However, whereas the two hard subtraction conditions and the two easy subtraction conditions are more or less ‘grouped’ in the model predictions (Figure 9.4b, left panel), the pattern in the predefined region deviates more from this grouping than does the pattern in the region identified by the model-based analysis.
In summary, the model-based analysis showed the following: (i) there
is no region in the brain that shows the interaction effect that was pre-
dicted for the problem–state resource—the best-fitting region overlaps
with ACT-R’s predefined region; and (ii), the region that fits best to
the predictions of the manual resource is located in the visual cortex,
but the significant region in the motor cortex overlaps with ACT-R’s
predefined manual region.

Applications to task design

Given the success of the modelling approach to analysing and understanding fMRI data, one can, for instance, ask whether these methods
may provide the basis for real-time task support, such as adaptive
automation. Consider the UAV task of Wilson and Russell (2007). The
first step would be to develop a cognitive model of the task. Gluck
et al. (2007) and Dimperio et al. (2008) modelled the actual operation of
UAVs. As this task is similar to the task described by Wilson and Russell,
the models of Gluck et al. and Dimperio et al. could be used as a starting
point and combined with ROI analysis and model-based fMRI analysis.
Once a reasonably accurate model of the UAV task of Wilson and
Russell (2007) has been made, ROI analysis would be used for model
validation. Part of the UAV task was inspecting radar images to locate
targets for bombing. To create an easy and a difficult condition, image complexity was manipulated. Because inspecting the images yields very
little in the way of behavioural measures (except that more difficult
images presumably take longer to process), it is hard to validate a
model of this part of the task (e.g. does the difficult condition require
more visual processing or more complex spatial representations?).
One way to distinguish between different models would be by using
a ROI analysis to inspect brain areas associated with visual processing
and building spatial representations. Whereas choosing between more
visual processing and more complex spatial representations is relatively
trivial, and could also be done with a standard fMRI analysis, more
similar models do require a ROI analysis. For instance, assume that
there is one model that creates a spatial representation at the start of
inspecting the images and adapts that representation while scanning the images, and another model that first scans the whole image and
only then builds a spatial representation. It would be difficult to choose
between these two options with a standard fMRI analysis, as demand
functions are typically too coarse to compare these options. However,
a ROI analysis should find clear timing differences in the area associ-
ated with building spatial representations, enabling one to distinguish
between the models.
This brings us to the second application of ROI analysis: improv-
ing the interpretation of complex fMRI data (Anderson et al., 2008b).
For example, take the manual resource discussed earlier. Before model
predictions were made, differences in the motor region were not
anticipated. After all, participants had to make the same number of
responses in each condition. The model showed that a difference
should be found, and further inspection of the model directly gave
an explanation for this effect. Although this is a very straightforward
example, the principle also holds for more complex effects, such as
differences in activation levels in algebra learning (e.g. Anderson, 2007;
Anderson et al., 2008b) or, perhaps, timing differences in building
spatial representations in the UAV task. These subtle effects are hard to
find with traditional methods and almost impossible to explain with-
out a detailed model. Furthermore, very small effects in a region can be
found between conditions, even if they would not lead to a significant
difference in a traditional analysis. For example, when a traditional
fMRI analysis was applied to the Borst et al. (2010b) task described
earlier, no effects in the motor area were found (as there is no difference
in the total amount of motor activity between the conditions; Borst
et al., 2010b). However, an inspection of the data in the predefined
motor area (Figure 9.4b) revealed a clear difference between conditions.
This difference could be very important if this task is combined with
another motor task, but would not have been found without using the
ROI analysis.
One might wonder why it is important to have a detailed computa-
tional model of a task in the first place. One advantage of having such
a complex, precise model is that it enables model-based fMRI analysis.
Another, possibly more important, advantage is that it shows why a
certain task or condition is causing performance problems. This is
important for human factors because it allows for better targeted inter-
ventions and can even be used to create intelligent tutors (e.g. Anderson
et al., 1985; Ritter et al., 2007; see, for more applications, e.g. van Rijn
et al., 2011).
Once a well-defined model of a task exists, model-based fMRI can be
used to locate model components in the brain. By inspecting the activity
in the best-fitting region, one can additionally test the quality of the
model’s predictions. Model-based fMRI allows for more powerful explo-
ratory fMRI analyses than conventional fMRI analysis techniques, such
as cognitive subtraction. One could argue that the model predictions
(Figure 9.3c) are just a more precise specification of the experimental
conditions because the model contains our hypothesis of what happens
during the experiment. The difference is that the model predictions are
more precise: using model-based analysis allows for analysing within-
trial effects (model components that are active during only part of a
trial or change their activity over the course of a trial). Furthermore, it
increases the power of an analysis by predicting between-trial effects
within the same condition. This can be illustrated by comparing the very detailed temporal predictions in Figure 9.3(c) to a binary function: it is more powerful to regress the detailed pattern against the brain data than the binary pattern (see Borst et al., 2011 for evidence that such an approach
outperforms not only a traditional fMRI analysis, but also a parametric
analysis). This is most important for complex tasks, in which it is often
far from trivial to find experimental conditions that reliably isolate a
process of interest.
Applied to the UAV task, model-based fMRI could be used to locate
regions that are, for example, involved in building spatial representa-
tions or in prioritizing targets. Moreover, based on the computational
model it could be determined why users start making mistakes at certain
stages during task execution (e.g. because of working memory limita-
tions). By using model-based fMRI it is possible to locate brain regions
that are responsible for this, and this information could then be used
for adaptive automation (e.g. Parasuraman & Wilson, 2008). Wilson and
Russell (2007) used EEG signals to train an artificial neural network that
assisted users in operating UAVs. Based on online EEG measurements,
the neural network determined when the task became too difficult
for the operators to handle. In that case, the neural network would
automatically decrease the speed of the aircraft to give the operator
more time to perform the task. This led to a dramatic improvement in
performance. Because random adaptations (not based on psychophysio-
logical evidence) decreased performance, Wilson and Russell argued
that it is, indeed, the timing of the adaptations that is important. As
input for their neural network Wilson and Russell used electrodes
distributed over the skull. Their method might be improved by using
the results of a model-based fMRI analysis. For instance, instead of using
electrodes distributed over the skull as input for the neural network,
it might be more effective to use only regions corresponding to brain
components that caused mistakes during the task. In turn, this might
improve the timing of the adaptations and thereby the performance of
the users.

Conclusion

The drawback of model-based fMRI is that its accuracy depends on the quality of the model. If a model is incorrect (which is comparable to an
incorrect mapping of scans to conditions in a traditional fMRI analysis),
this will lead to either no significant results or, worse, to an incorrect
model-brain mapping. Inspecting the BOLD signal in the located area
allows one to check whether the located area shows the predicted activity
of the model (Figure 9.5, right column), but this does not help if the
predicted activity was incorrect in the first place. Another possibility
is that, while the correct region correlates significantly, another region
correlates even more strongly (as was the case for the manual resource
discussed at length in this chapter). Thus, conventional fMRI methods
(or multiple model-based analyses, as follows) are necessary to check
whether the results of model-based fMRI are plausible. Furthermore,
model-based fMRI cannot be used to validate models or to explain effects of complex tasks within a region; for this, ROI analysis can be applied.
It is tempting to rely on model-based fMRI to relate models to
neuroimaging data. When an fMRI dataset and a model are available,
it is possible to show where model constructs are located in the brain.
However, this is only possible if a model gives a good account of the
human data and if only one region correlates with the model’s activ-
ity, as discussed previously. Conventional fMRI remains necessary to
check the plausibility of the results. Furthermore, this strategy can lead
to over-fitting: model-based fMRI takes idiosyncrasies of the task and
model into account and, as a result, will find (slightly) different regions
between tasks for the same model component.
A better strategy is to conjoin multiple model-based fMRI results to
create predefined regions. Thus, instead of applying model-based fMRI
analysis to only one dataset, better results can be obtained by applying
it to many different datasets, preferably of very different tasks involv-
ing the same basic model components. The resulting component brain
mappings can be subjected to a conjunction analysis to see which
regions are due to task peculiarities and which regions truly reflect
processing of the model component. When a stable solution is found,
this results, on the one hand, in a good, general mapping between a
certain model component and a brain region, and, on the other hand,
in a well-grounded tool for ROI analyses. These can then, in turn, be
used to validate models and to explain complex fMRI data. Including
models of complex, real-life tasks, as studied in neuroergonomics, can
only improve these mappings.
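A conjunction over several model-based results reduces, in essence, to intersecting thresholded correlation maps. The sketch below assumes the per-dataset maps for one model component have already been computed in a common voxel space; the names and threshold value are illustrative, not taken from the studies discussed.

```python
import numpy as np

def conjunction_map(corr_maps, threshold=0.3):
    """Conjoin model-based fMRI results from several datasets.

    corr_maps: list of (n_voxels,) model-voxel correlation maps, one per
    dataset/task, all for the same model component. A voxel survives only
    if it exceeds the threshold in every map, which separates regions
    that truly reflect the component from task idiosyncrasies that show
    up in a single dataset.
    """
    maps = np.vstack(corr_maps)
    return (maps > threshold).all(axis=0)
```

The surviving voxels can then serve double duty: as a general component-to-region mapping, and as predefined ROIs for later analyses.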
Conjunction analysis of multiple model-based fMRI results can arrive
at stable model-brain mappings only if the different models that are
used for the multiple model-based fMRI analyses use the same under-
lying model components. This is almost exactly what Newell argued
for in 1973: use the same basic mechanisms to explain behaviour over
a wide variety of tasks. This ensures that the underlying concepts do
not capture idiosyncrasies of the tasks that are used, but reflect basic
mechanisms of human cognition. It seems, therefore, very natural to
now use cognitive architectures to investigate the relationship between
human cognition and the brain. Developing models of a wide variety of
tasks using the same basic mechanisms was an aim of cognitive archi-
tectures; now these models can be used to do multiple model-based
fMRI analyses.
Although neuroscience provides a welcome addition to human factors
methods, traditional fMRI analysis techniques are often not suited to
analysing data from the complex, real-life tasks that are of interest to
ergonomics practitioners. We argue that cognitive architectures can be
used to improve the analysis of fMRI data: ROI methods can be used to
validate models and explain complex data sets, and model-based fMRI
can be used to locate model components in the brain and do more
powerful exploratory fMRI analyses. These methods will, it is hoped,
lead to even more powerful neuroergonomics and to a better mapping
of complex cognition onto the brain.
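The ROI half of this strategy admits an equally small sketch: average the BOLD signal within a predefined model-brain region and correlate it with the model's predicted timecourse as a goodness-of-fit check. The array shapes and names below are hypothetical.

```python
import numpy as np

def roi_fit(bold, roi_mask, model_pred):
    """Check a model prediction against a predefined region of interest.

    bold:       (n_voxels, n_scans) BOLD timecourses.
    roi_mask:   (n_voxels,) boolean mask from an established
                model-brain mapping.
    model_pred: (n_scans,) HRF-convolved prediction for the component.
    Returns the Pearson correlation between the ROI-averaged signal and
    the prediction; a low value then points to a problem with the model
    rather than with the localization.
    """
    region = bold[roi_mask].mean(axis=0)
    return float(np.corrcoef(region, model_pred)[0, 1])
```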
References

Abi-Dargham, A., Mawlawi, O., Lombardo, I., Gil, R., Martinez, D., Huang, Y., et al.
(2002). Prefrontal dopamine D1 receptors and working memory in schizophrenia.
Journal of Neuroscience, 22, 3708–3719.
Adolphs, R., Tranel, D., & Damasio, A. R. (1998). The human amygdala in social
judgment. Nature, 393, 470–474.
Adolphs, R., Baron-Cohen, S., & Tranel, D. (2002). Impaired recognition of social
emotions following amygdala damage. Journal of Cognitive Neuroscience, 14,
1264–1274.
Aksan, N., Schall, M., Dawson, J., Zilli, E., Tippin, J., & Rizzo, M. (2012). Utility
of actigraphy in long-term tracking of sleep quality in patients treated with
CPAP. Presented at the Annual Meeting of the Associated Professional Sleep
Societies, Boston, MA.
Alberdi, E., Povyakalo, A., Strigini, L., & Ayton, P. (2004). Effects of incor-
rect computer-aided detection (CAD) output on human decision-making in
mammography. Academic Radiology, 11, 909–918.
Ahlstrom, U. & Friedman-Berg, F. J. (2006). Using eye movement activity as a
correlate of cognitive workload. International Journal of Industrial Ergonomics,
36, 623–636.
Aldrich, M. S. (1989). Automobile accidents in patients with sleep disorders.
Sleep, 12, 487–494.
Allison, B. Z. & Pineda, J. A. (2006). Effects of SOA and flash pattern manipula-
tions on ERPs, performance, and preference: Implications for a BCI system.
International Journal of Psychophysiology, 59, 127–140.
Aloia, M. S., Arnedt, J. T., Davis, J. D., Riggs, R. L., & Byrd, D. (2004).
Neuropsychological consequences of sleep apnea: A critical review. Journal of
the International Neuropsychological Society, 10, 772–785.
Altmann, E. M. & Trafton, J. G. (2002). Memory for goals: An activation-based
model. Cognitive Science, 26, 39–83.
Amedi, A., von Kriegstein, K., van Atteveldt, N. M., Beauchamp, M. S., & Naumer,
M. J. (2005). Functional imaging of human crossmodal identification and
object recognition. Experimental Brain Research, 166, 559–571.
Andersen, R. A., Snyder, L. H., Bradley, D. C., & Xing, J. (1997). Multimodal
representation of space in the posterior parietal cortex and its use in planning
movements. Annual Review of Neuroscience, 20, 303–330.
Anderson, J. R. (2005). Human symbol manipulation within an integrated cogni-
tive architecture. Cognitive Science, 29, 313–341.
Anderson, J. R. (2007). How can the human mind occur in the physical universe?
New York: Oxford University Press.
Anderson, J. R., Boyle, C., & Reiser, B. J. (1985). Intelligent tutoring systems.
Science, 228, 456–462.
Anderson, J. R., Qin, Y., Stenger, V. A., & Carter, C. S. (2004). The relationship of
three cortical regions to an information-processing model. Journal of Cognitive
Neuroscience, 16, 637–653.


Anderson, J. R., Carter, C. S., Fincham, J. M., Qin, Y., Ravizza, S. M., &
Rosenberg-Lee, M. (2008a). Using fMRI to test models of complex cognition.
Cognitive Science, 32, 1323–1348.
Anderson, J. R., Fincham, J. M., Qin, Y., & Stocco, A. (2008b). A central circuit of
the mind. Trends in Cognitive Science, 12, 136–143.
Antonelli Incalzi, R., Marra, C., Salvigni, B. L., Petrone, A., Gemma, A., Selvaggio,
D., & Mormile, F. (2004). Does cognitive dysfunction conform to a distinctive
pattern in obstructive sleep apnea? Journal of Sleep Research, 13, 79–86.
Arbib, M. A. (2011). Mirror system activity for action and language is embedded
in the integration of dorsal and ventral pathways. Brain & Language, 112,
12–24.
Arcizet, F., Mirpour, K., & Bisley, J. W. (2011). A pure salience response in poste-
rior parietal cortex. Cerebral Cortex, 21, 2498–2506.
Ariely, D. & Berns, G. S. (2010). Neuromarketing: the hope and hype of neuro-
imaging in business. Nature Reviews Neuroscience, 11, 284–292.
Armstrong, K. M., & Moore, T. (2007). Rapid enhancement of visual corti-
cal response discriminability by microstimulation of the frontal eye field.
Proceedings of the National Academy of Science, 104, 9499–9504.
Ashby, W. R. (1956). An introduction to cybernetics. London: Chapman & Hall.
Avery, R. A., Franowicz, J. S., Studholme, C., van Dyck, C. H., & Arnsten, A. F.
(2000). The alpha-2A-adrenoceptor agonist, guanfacine, increases regional
cerebral blood flow in dorsolateral prefrontal cortex of monkeys performing a
spatial working memory task. Neuropsychopharmacology, 23, 240–249.
Awh, E. & Jonides, J. (2001). Overlapping mechanisms of attention and working
memory. Trends in Cognitive Sciences, 5, 119–126.
Awh, E., Armstrong, K. M., & Moore, T. (2006). Visual and oculomotor selection:
links, causes and implications for spatial attention. Trends in Cognitive Sciences,
10, 124–130.
Ayaz, H., Shewokis, P. A., Bunce, S., Izzetoglu, K., Willems, B., & Onaral, B.
(2012). Optical brain monitoring for operator training and mental workload
assessment. NeuroImage, 59, 36–47.
Babcock, Q. & Byrne, T. (2000). Student perceptions of methylphenidate abuse at
a public liberal arts college. Journal of American College Health, 49, 143–145.
Bahner, E., Huper, A.-D., & Manzey, D. (2008). Misuse of automated decision
aids: Complacency, automation bias and the impact of training experience.
International Journal of Human-Computer Studies, 66, 688–699.
Bailey, B. P. & Konstan, J. A. (2006). On the need for attention aware systems:
Measuring effects of interruption on task performance, error rate, and affective
state. Computers in Human Behavior, 22, 685–708.
Bakardjian, H., Tanaka, T., & Cichocki, A. (2010). Optimization of SSVEP brain
responses with application to eight-command Brain-Computer Interface.
Neuroscience Letters, 469, 34–38.
Balkin, T. J., Horrey, W. J., Graeber, R. C., Czeisler, C. A., & Dinges, D. F. (2011).
The challenges and opportunities of technological approaches to fatigue man-
agement. Accident Analysis & Prevention 43, 565–572.
Ballard, D. H., Hayhoe, M. M., & Pelz, J. B. (1995). Memory representations in
natural tasks. Journal of Cognitive Neuroscience, 7, 66–80.
Bar, M. (2003). A cortical mechanism for triggering top-down facilitation in
visual object recognition. Journal of Cognitive Neuroscience, 15, 600–609.
Barker, A. T., Jalinous, R., & Freeston, I. L. (1985). Non-invasive magnetic
stimulation of human motor cortex. Lancet, 1, 1106–1107.
Barlow, H. (2004). The role of single-unit analysis in the past and future of
neurobiology. In: J. S. Werner & L. M. Chalupa (eds), The visual neurosciences
(Vol. 1, pp. 14–30). Cambridge, MA: MIT Press.
Barnes, M., Houston, D., Worsnop, C. J., Neill, A. M., Mykytyn, I. J., Kay, A., et al.
(2002). A randomized controlled trial of continuous positive airway pressure
in mild obstructive sleep apnea. American Journal of Respiratory and Critical Care
Medicine, 165, 773–780.
Barrett, L. F. & Niedenthal, P. M. (2004). Valence focus and the perception of facial
affect. Emotion, 4, 266–274.
Başkent, D. (2012). Effect of speech degradation on top-down repair: Phonemic
restoration with simulations of cochlear implants and combined electric-
acoustic stimulation. Journal of the Association for Research in Otolaryngology,
13, 683–692.
Basner, M. & Dinges, D. F. (2009). Dubious bargain: Trading sleep for Leno and
Letterman. Sleep, 32, 747–752.
Bauer, M., Oostenveld, R., Peeters, M., & Fries, P. (2006). Tactile spatial attention
enhances gamma band activity in somatosensory cortex and reduces low-
frequency activity in parietooccipital areas. Journal of Neuroscience, 26, 490–501.
Bear, M. F., Connors, B. W., & Paradiso, M. A. (2007). Neuroscience: Exploring the
brain. Philadelphia: Lippincott, Williams & Wilkins.
Beatty, J. (1982). Task-evoked pupillary responses, processing load, and the
structure of processing resources. Psychological Bulletin, 91, 276–292.
Bechara, A., Damasio, H., Tranel, D., & Damasio, A. R. (1997). Deciding advanta-
geously before knowing the advantageous strategy. Science, 275, 1293–1294.
Bedard, M. C., Montplaisir, J., Richer, F., Rouleau, I., & Malo, J. (1991). Obstructive
sleep apnea syndrome: Pathogenesis of neuropsychological deficits. Journal of
Clinical and Experimental Neuropsychology, 13, 950–964.
Beebe, D. & Gozal, D. (2002). Obstructive sleep apnea and the pre-frontal cortex:
Towards a comprehensive model linking nocturnal upper airway obstruction to
daytime cognitive and behavioral deficits. Journal of Sleep Research, 11, 1–16.
Behrmann, M., Zemel, R. S., & Mozer, M. C. (1998). Object-based attention
and occlusion: Evidence from normal participants and a computational
model. Journal of Experimental Psychology: Human Perception and Performance,
24, 1011–1036.
Benko, H., Wilson, A. D., & Baudisch, P. (2006). Precise selection techniques for
multi-touch screens. In: CHI 2006 proceedings: Interacting with large surfaces
(pp. 1263–1272). Montréal, Québec, Canada.
Bennett, K. B. & Flach, J. M. (1992). Graphical displays: Implications for divided
attention, focused attention, and problem solving. Human Factors, 34, 513–533.
Bensmaïa, S. J. & Hollins, M. (2003). The vibrations of texture. Somatosensory
Motor Research, 20, 33–43.
Bensmaïa, S. J., Craig, J. C., Yoshioka, T., & Johnson, K. O. (2006a). SA1 and RA
afferent responses to static and vibrating gratings. Journal of Neurophysiology,
95, 1771–1782.
Bensmaïa, S. J., Craig, J. C., & Johnson, K. O. (2006b). Temporal factors in tactile
spatial acuity: Evidence for RA interference in fine spatial processing. Journal of
Neurophysiology, 95, 1783–1791.
Bensmaïa, S. J., Killebrew, J. H., & Craig, J. C. (2006c). Influence of visual motion
on tactile motion perception. Journal of Neurophysiology, 96, 1625–1637.
Berger, H. (1929). Über das Elektrenkephalogramm des Menschen. Archiv für
Psychiatrie und Nervenkrankheiten, 87, 527–570.
Bichot, N. P., Cave, K. R., & Pashler, H. (1999). Visual selection mediated by
location: Feature-based selection of noncontiguous locations. Perception &
Psychophysics, 61, 403–423.
Bichot, N. P., Rossi, A. F., & Desimone, R. (2005). Parallel and serial neural mecha-
nisms for visual search in macaque area V4. Science, 308, 529–534.
Bixler, E. O., Vgontzas, A. N., Lin, H. M., Ten Have, T., Rein, J., Vela-Bueno, A., &
Kales, A. (2001). Prevalence of sleep-disordered breathing in women: Effects of
gender. American Journal of Respiratory and Critical Care Medicine, 163, 608–613.
Blike, G. T., Surgenor, S. D., & Whalen, K. (1999). A graphical object display
improves anesthesiologists’ performance on a simulated diagnostic task.
Journal of Clinical Monitoring and Computing, 15, 37–44.
Bobrov, P., Frolov, A., Cantor, C., Fedulova, I., Bakhnyan, M., & Zhavoronkov, A.
(2011) Brain-computer interface based on generation of visual images. PLoS
ONE, 6, e20674.
Bocanegra, B. R. & Zeelenberg, R. (2009). Emotion improves and impairs early
vision. Psychological Science, 20, 707–713.
Boehler, C. N., Schoenfeld, M. A., Heinze, H.-J., & Hopf, J.-M. (2011).
Object-based selection of irrelevant features is not confined to the attended
object. Journal of Cognitive Neuroscience, 23, 2231–2239.
Bogacz, R., Wagenmakers, E. J., Forstmann, B. U., & Nieuwenhuis, S. (2010).
The neural basis of the speed-accuracy tradeoff. Trends in Neurosciences,
33, 10–16.
Boksem, M. A., Meijman, T. F., & Lorist, M. M. (2005). Effects of mental fatigue
on attention: An ERP study. Cognitive Brain Research, 25, 107–116.
Boot, W. R., Basak, C., Erickson, K. I., Neider, M., Simons, D. J., Fabiani, M., et al.
(2010). Transfer of skill engendered by complex task training under conditions
of variable priority. Acta Psychologica, 135, 349–357.
Boot, W. R., Blakely, D. P., & Simons, D. J. (2011). Do action video games improve
perception and cognition? Frontiers in Psychology, 2, 226.
Borst, J. P., Taatgen, N. A., & van Rijn, H. (2010a). The problem state: A cognitive
bottleneck in multitasking. Journal of Experimental Psychology: Learning, Memory,
& Cognition, 36, 363–382.
Borst, J. P., Taatgen, N. A., Stocco, A., & van Rijn, H. (2010b). The neural cor-
relates of problem states: Testing fMRI predictions of a computational model
of multitasking. PLoS ONE, 5, e12966.
Borst, J. P., Taatgen, N. A., & van Rijn, H. (2011). Using a symbolic process model
as input for model-based fMRI analysis: Locating the neural correlates of prob-
lem state replacements. Neuroimage, 58, 137–147.
Botvinick, M. & Cohen, J. (1998). Rubber hands ‘feel’ touch that eyes see. Nature,
391, 756.
Botvinick, M. M., Braver, T. S., Barch, D. M., Carter, C. S., & Cohen, J. D.
(2001). Conflict monitoring and cognitive control. Psychological Review,
108, 624–652.
Botvinick, M. M., Cohen, J. D., & Carter, C. S. (2004). Conflict monitoring and
anterior cingulate cortex: An update. Trends in Cognitive Sciences, 8, 539–546.
Botvinick, M. & Bylsma, L. M. (2005). Distraction and action slips in an everyday
task: Evidence for a dynamic representation of task context. Psychonomic
Bulletin & Review, 12, 1011–1017.
Bouhuys, A. L., Bloem, G. M., & Groothuis, T. G. (1995). Induction of depressed
and elated mood by music influences the perception of facial emotional
expressions in healthy subjects. Journal of Affective Disorders, 33, 215–226.
Boyle, L., Tippin, J., Paul, A., & Rizzo, M. (2008). Driver performance in the
moments surrounding a microsleep. Transportation Research Part F: Traffic
Psychology and Behavior, 11, 126–136.
Braver, T. S., Reynolds, J. R., & Donaldson, D. I. (2003). Neural mechanisms
of transient and sustained cognitive control during task switching. Neuron,
39, 713–726.
Brazil, E. & Fernström, M. (2011). Auditory icons. In: T. Hermann, A. Hunt,
& J. G. Neuhoff (eds), The sonification handbook (pp. 325–338). Berlin: Logos
Publishing House.
Brefczynski, J. A. & DeYoe, E. A. (1999). A physiological correlate of the ‘spotlight’
of visual attention. Nature Neuroscience, 2, 370–374.
Bressler, S. L., Tang, W., Sylvester, C. M., Shulman, G. L., & Corbetta, M. (2008).
Top-down control of human visual cortex by frontal and parietal cortex in
anticipatory visual spatial attention. Journal of Neuroscience, 28, 10056–10061.
Brewer, N. & Smith, G. A. (1984). How normal and retarded individuals monitor
and regulate speed and accuracy of responding in serial choice tasks. Journal of
Experimental Psychology: General, 113, 71–93.
Brewer, J. B., Zhao, Z., Desmond, J. E., Glover, G. H., & Gabrieli, J. D. (1998).
Making memories: Brain activity that predicts how well visual experience will
be remembered. Science, 281, 1185–1187.
Broadbent, D. E. (1957). A mechanical model for human attention and immedi-
ate memory. Psychological Review, 64, 205–215.
Broadbent, D. E. (1958). Perception and communication. London: Pergamon Press.
Brookings, J. B., Wilson, G. F., & Swain, C. R. (1996). Psychophysiological
responses to changes in workload during simulated air traffic control. Biological
Psychology, 42, 361–377.
Brouwer, A. M. & van Erp, J. (2008). A tactile P300 BCI and the optimal num-
ber of tactors: effects of target probability and discriminability. Proceedings
of the 4th International BCI Workshop and Training Course. Graz, Austria: Graz
University of Technology Publishing House.
Brunner, C., Allison, B. Z., Krusienski, D. J., Kaiser, V., Muller-Putz, G. R.,
Pfurtscheller, G., & Neuper, C. (2010). Improved signal processing approaches
in an offline simulation of a hybrid brain-computer interface. Journal of
Neuroscience Methods, 188, 165–173.
Brunner, C., Allison, B. Z., Altstatter, C., & Neuper, C. (2011a). A comparison
of three brain-computer interfaces based on event-related desynchronization,
steady state visual evoked potentials, or a hybrid approach using both signals.
Journal of Neural Engineering, 8, 025010.
Brunner, P., Bianchi, L., Guger, C., Cincotti, F., & Schalk, G. (2011b). Current
trends in hardware and software for brain-computer interfaces. Journal of
Neural Engineering, 8, 025001.
Brunyé, T. T., Mahoney, C. R., Lieberman, H. R., & Taylor, H. A. (2010). Caffeine
modulates attention network function. Brain & Cognition, 72, 181–188.
Bundesen, C. (1987). Visual attention: Race models for selection from
multielement displays. Psychological Research, 49, 113–121.
Bundesen, C. (1990). A theory of visual attention. Psychological Review,
97, 523–547.
Bunge, S. A. (2004). How we use rules to select actions: A review of evidence
from cognitive neuroscience. Cognitive, Affective, & Behavioural Neuroscience,
4, 564–579.
Buracas, G. T. & Albright, T. D. (2009). Modulation of neuronal responses
during covert search for visual feature conjunctions. Proceedings of the National
Academy of Science, 106, 16853–16858.
Burrows, B. E. & Moore, T. (2009). Influence and limitations of popout in the
selection of salient visual stimuli by area V4 neurons. Journal of Neuroscience,
29, 15169–15177.
Buschman, T. J. & Miller, E. K. (2007). Top-down versus bottom-up control of atten-
tion in the prefrontral and posterior parietal cortices. Science, 315, 1860–1862.
Bush, G., Luu, P., & Posner, M. I. (2000). Cognitive and emotional influences in
the anterior cingulate cortex. Trends in Cognitive Science, 4, 215–222.
Busey, T. B. & Vanderkolk, J. R. (2005). Behavioral and electrophysiological
evidence for configural processing in fingerprint experts. Vision Research,
45, 431–448.
Butcher, L. M., Davis, O. S., Craig, I. W., & Plomin, R. (2008). Genome-wide
quantitative trait locus association scan of general cognitive ability using
pooled DNA and 500K single nucleotide polymorphism microarrays. Genes,
Brain, and Behavior, 7, 435–446.
Buzsáki, G. (2006). Rhythms of the brain. New York: Oxford University Press.
Byrne, E. & Parasuraman, R. (1996). Psychophysiology and adaptive automation.
Biological Psychology, 42, 249–268.
Cabon, P., Coblentz, A., Mollard, R., & Fouillot, J. P. (1993). Human vigilance in
railway and long-haul flight operation. Ergonomics, 36, 1019–1033.
Cacioppo, J. T. (2004). Feelings and emotions: Roles for electrophysiological
markers. Biological Psychology, 67, 235–243.
Caggiano, D. & Parasuraman, R. (2004). The role of memory representation in
the vigilance decrement. Psychonomic Bulletin and Review, 11, 932–937.
Caldwell, J. A., Mallis, M. M., Caldwell, J. L., Paul, M. A., Miller, J. C., & Neri, D. F.
(2009). Fatigue countermeasures in aviation. Aviation Space and Environmental
Medicine, 80, 29–59.
Calhoun, V. D. & Pearlson, G. D. (2012). A selective review of simulated driving
studies: Combining naturalistic and hybrid paradigms, analysis approaches,
and future directions. NeuroImage, 59, 25–35.
Card, S. K., Moran, T. P., & Newell, A. (1983). The psychology of human–computer
interaction. Hillsdale, NJ: Lawrence Erlbaum Associates.
Casco, C., Campana, G., Grieco, A., & Fuggetta, G. (2003). Perceptual learning
modulates electrophysiological and psychophysical response to visual texture
segmentation in humans. Neuroscience Letters, 371, 18–23.
Cassel, W., Ploch, T., Becker, C., Dugnus, D., Peter, J. H., & von Wichert, P. (1996).
Risk of traffic accidents in patients with sleep-disordered breathing: Reduction
with nasal CPAP. European Respiratory Journal, 9, 2606–2611.
Cattell, R. B. (1963). Theory of fluid and crystallized intelligence: a critical
experiment. Journal of Educational Psychology, 54, 1–22.

Cavanagh, J. F., Cohen, M. X., & Allen, J. J. (2009). Prelude to and resolution
of an error: EEG phase synchrony reveals cognitive control dynamics during
action monitoring. The Journal of Neuroscience, 29, 98–105.
Cecotti, H. (2011). Spelling with non-invasive Brain-Computer Interfaces—
current and future trends. Journal of Physiology Paris, 105, 106–114.
CDC (Centers for Disease Control) (2011). Unhealthy sleep-related behaviors—
12 states, 2009. Morbidity and Mortality Weekly Report, 60, 233–238.
Chabris, C. F., Weinberger, A., Fontaine, M., & Simons, D. J. (2011). You do not
talk about fight club if you do not notice fight club: Inattentional blindness
for a simulated real-world assault. i-Perception, 2, 150–153.
Chaimow, D., Yacoub, E., Ugurbil, K., & Shmuel, A. (2011). Modeling and analy-
sis of mechanisms underlying fMRI-based decoding of information conveyed
in cortical columns. Neuroimage, 56, 627–642.
Chellappa, S. L., Steiner, R., Blattner, P., Oelhafen, P., Gotz, T., & Cajochen, C.
(2011). Non-visual effects of light on melatonin, alertness and cognitive per-
formance: Can blue-enriched light keep us alert? PLoS ONE, 6: e16429.
Chen, J. & Proctor, R. W. (2012). Up or down: Directional stimulus-response
compatibility and natural scrolling. In: Proceedings of the 56th Annual Meeting of
the Human Factors and Ergonomics Society. Santa Monica, CA: HFES.
Cherry, C. (1953). Some experiments on the recognition of speech with one and
with two ears. Journal of the Acoustical Society of America, 25, 975–979.
Chief Scientist Air Force (2010). Report on Technology Horizons: A Vision for Air
Force Science & Technology During 2010-2030. Volume 1. AF/ST-TR-10-01.
Chin, K., Fukuhara, S., Takahashi, K., Sumi, K., Nakamura, T., Matsumoto, H.,
et al. (2004). Response shift in perception of sleepiness in obstructive sleep
apnea-hypopnea syndrome before and after treatment with nasal CPAP. Sleep,
27, 490–493.
Cholewiak, R. W. (1999). The perception of tactile distance: Influences of body
site, space, and time. Perception, 28, 851–875.
Cholewiak, R. W., Brill, J. C., & Schwab, A. (2004). Vibrotactile localization
on the abdomen: Effects of place and space. Perception & Psychophysics,
66, 970–987.
Chowdhuri, S. (2012). Pharmacology of sleep. In: M. S. Sawfan (ed.) Essentials of
sleep medicine (pp. 17–54). New York: Humana Press.
Chun, M. M. & Jiang, Y. (1998). Contextual cuing: Implicit learning and
memory of visual context guides spatial attention. Cognitive Psychology,
36, 28–71.
Chun, M. M. & Turk-Browne, N. B. (2007). Interactions between attention and
memory. Current Opinion in Neurobiology, 17, 177–184.
Clark, V. P., Coffman, B., Mayer, A. R., Weisend, M. P., Lane, T. D. R., Calhoun, V.,
et al. (2012). TDCS guided using fMRI significantly accelerates learning to iden-
tify concealed objects. NeuroImage, 59, 117–128.
Clarke, A. R., Barry, R. J., McCarthy, R., & Selikowitz, M. (1998). EEG analysis
in Attention-Deficit/Hyperactivity Disorder: A comparative study of two sub-
types. Psychiatry Research, 81, 19–29.
Clore, G. L. & Huntsinger, J. R. (2007). How emotions inform judgment and
regulate thought. Trends in Cognitive Science, 11, 393–399.
Coan, J. A. & Allen, J. J. B. (2004). Frontal EEG asymmetry as a moderator and
mediator of emotion. Biological Psychology, 67, 7–49.

Cohen, J. D., Perlstein, W., Braver, T., Nystrom, L. E., Noll, D. C., Jonides, J., &
Smith, E. E. (1997). Temporal dynamics of brain activation during a working
memory task. Nature, 386, 604–608.
Cohen, M. X. & Ranganath, C. (2007). Reinforcement learning signals predict
future decisions. The Journal of Neuroscience, 27, 371–378.
Cohn, J. F., Ambadar, Z., & Ekman, P. (2007). Observer-based measurement
of facial expression with the Facial Action Coding System. In: J. A. Coan &
J. J. B. Allen (eds), The handbook of emotion elicitation and assessment
(pp. 203–221). New York: Oxford University Press.
Connor, J., Whitlock, G., Norton, R., & Jackson, R. (2000). The role of driver
sleepiness in car crashes: A systematic review of epidemiological studies.
Accident Analysis and Prevention, 33, 31–41.
Constantinidis, C. & Steinmetz, M. A. (2005). Posterior parietal cortex auto-
matically encodes the location of salient stimuli. Journal of Neuroscience,
25, 233–238.
Cooke, J. D. & Diggles, V. A. (1984). Rapid error correction during human
arm movements: Evidence for central monitoring. Journal of Motor Behavior,
16, 348–363.
Cools, R. & D’Esposito, M. (2011). Inverted-U–shaped dopamine actions on human
working memory and cognitive control. Biological Psychiatry, 69, e113–e125.
Corbetta, M. (1998). Frontoparietal cortical networks for directing attention
and the eye to visual locations: Identical, independent, or overlapping neural
systems? Proceedings of the National Academy of Sciences, 95, 831–838.
Corbetta, M. & Shulman, G. L. (2002). Control of goal-directed and stimulus-
driven attention in the brain. Nature Reviews Neuroscience, 3, 201–215.
Corbetta, M., Shulman, G. L., Miezin, F. M., & Peterson, S. E. (1995). Superior
parietal cortex activation during spatial attention shifts and visual feature
conjunction. Science, 270, 802–805.
Corbetta, M., Patel, G., & Shulman, G. L. (2008). The reorienting system of the
human brain: From environment to theory of mind. Neuron, 58, 306–324.
Corkin, S. (2002). What’s new with the amnesic patient H.M.? Nature Reviews
Neuroscience, 3, 153–160.
Cosgrove, K. P., Batis, J., Bois, F., Maciejewski, P. K., Esterlis, I., Kloczynski, T., et al.
(2009). β2-nicotinic acetylcholine receptor availability during acute and prolonged
abstinence from tobacco smoking. Archives of General Psychiatry, 66, 666–676.
Cowan, N. (2005). Working memory capacity. Hove: Psychology Press.
Cristino, F., Mathôt, S., Theeuwes, J., & Gilchrist, I. D. (2010). ScanMatch:
a novel method for comparing fixation sequences. Behavioral Research Methods,
42, 692–700.
Crottaz-Herbette, S. & Menon, V. (2006). Where and when the anterior cingulate
cortex modulates attentional response: Combined fMRI and ERP evidence.
Journal of Cognitive Neuroscience, 18, 766–780.
Cubells, J. F. & Zabetian, C. P. (2004). Human genetics of plasma dopamine
ß-hydroxylase activity: Applications to research in psychiatry and neurology.
Psychopharmacology, 174, 463–476.
Cubells, J. F., van Kammen, D. P., Kelley, M. E., Anderson, G. M., O’Connor, D. T.,
Price, L. H., et al. (1998). Dopamine beta-hydroxylase: two polymorphisms in
linkage disequilibrium at the structural gene DBH associate with biochemical
phenotypic variation. Human Genetics, 102, 533–540.
Cummings, M. L. (2006). Automation and accountability in decision support
system interface design. Journal of Technology Studies, 32, 23–31.
da Silva, F. Jr., de Pinho, R., de Mello, M., de Bruin, V., & de Bruin, P. (2009).
Risk factors for depression in truck drivers. Social Psychiatry and Psychiatric
Epidemiology, 44, 125–129.
Damasio, A. R. (1996). The somatic marker hypothesis and the possible func-
tions of the prefrontal cortex. Transactions of the Royal Society (London), 351,
1413–1420.
Damasio, A. R. (2001). Emotion and the human brain. In: A. R. Damasio,
A. Harrington, J. Kagan, B. S. McEwen, H. Moss, & R. Shaikh (eds), Unity
of knowledge: The convergence of natural and human science (pp. 101–106).
New York: New York Academy of Sciences.
Damasio, A. (2005). Descartes’ error. (10th anniversary edition). New York:
Penguin Books.
Danielmeier, C. & Ullsperger, M. (2011). Post-error adjustments. Frontiers in
Psychology, 2, 233.
Danielmeier, C., Eichele, T., Forstmann, B. U., Tittgemeyer, M., & Ullsperger, M.
(2011). Posterior medial frontal cortex activity predicts post-error adaptations
in task-related visual and motor areas. Journal of Neuroscience, 31, 1780–1789.
Davidson, M. C. & Marrocco, R. T. (2000). Local infusion of scopolamine into
intraparietal cortex slows covert orienting in rhesus monkeys. Journal of
Neurophysiology, 83, 1536–1549.
Davis, T., Love, B. C., & Preston, A. R. (2012). Learning the exception to the rule:
Model-based fMRI reveals specialized representations for surprising category
members. Cerebral Cortex, 22, 260–273.
Daw, N. D., Gershman, S. J., Seymour, B., Dayan, P., & Dolan, R. J. (2011). Model-
based influences on humans’ choices and striatal prediction errors. Neuron,
69, 1204–1215.
Dawson, D., Noy, Y. I., Härmä, M., Åkerstedt, T., & Belenky, G. (2011). Modeling
fatigue and the use of fatigue models in work settings. Accident Analysis, &
Prevention 43, 549–564.
de Bruijn, E. R., de Lange, F. P., von Cramon, D. Y., & Ullsperger, M. (2009). When
errors are rewarding. Journal of Neuroscience, 29, 12183–12186.
de Jong, R., Toffanin, P., & Harbers, M. (2010). Dynamic crossmodal links revealed
by steady-state responses in auditory-visual divided attention. International
Journal of Psychophysiology, 75, 3–15.
Deco, G. & Zihl, J. (2006). The neurodynamics of visual search. Visual Cognition,
14, 1006–1024.
Dehaene, S., Sergent, C., & Changeux, J. P. (2003). A neuronal network model
linking subjective reports and objective physiological data during conscious
perception. Proceedings of the National Academy of Sciences, 100, 8520–8525.
Dell’Acqua, R., Jolicoeur, P., Vespignani, F., & Toffanin, P. (2005). Central
processing overlap modulates P3 latency. Experimental Brain Research, 165,
54–68.
Dennett, D. C. (1987). The intentional stance. Cambridge, MA: MIT Press.
Desimone, R. & Gross, C. G. (1979). Visual areas in the temporal cortex of the
macaque. Brain Research, 178, 363–380.
Desimone, R. & Duncan, J. (1995). Neural mechanisms of selective visual
attention. Annual Review of Neuroscience, 18, 193–222.
Deutsch, J. A. & Deutsch, D. (1963). Attention: Some theoretical considerations.
Psychological Review, 70, 80–90.
di Pellegrino, G., Fadiga, L., Fogassi, L., Gallese, V., & Rizzolatti, G. (1992).
Understanding motor events: A neurophysiological study. Experimental Brain
Research, 91, 176–180.
Di Stasi, L. L., Marchitto, M., Antoli, A., Baccino, T., & Canas, J. J. (2010).
Approximation of on-line mental workload index in ATC simulated multi-
tasks. Journal of Air Transport Management, 16, 330–333.
Diaper, D. & Stanton, N. (eds) (2004). The handbook of task analysis for human-
computer interaction. Mahwah, NJ: Lawrence Erlbaum Associates.
Dijksterhuis, A., Baaren, R., Bongers, K. A., Bos, M. W., Leeuwen, M., & Leij, A.
(2009). The rational unconscious: Conscious versus unconscious thought in
complex consumer choice. In: M. Wänke (ed.), Social psychology of consumer
behavior (pp. 89–107). New York: Psychology Press.
Dimperio, E., Gunzelmann, G., & Harris, J. (2008). An initial evaluation of a
cognitive model of UAV reconnaissance. In: J. Hansberger (ed.), Proceedings of
the seventeenth conference on behavior representation in modeling and simulation
(pp. 165–173). Orlando, FL: Simulation Interoperability Standards Organization.
Dinges, D. F. (2004). Critical research issues in development of biomathematical
models of fatigue and performance. Aviation, Space, and Environmental Medicine,
75, A181–A191.
Dinges, D. F. & Broughton, R. J. (1989). The significance of napping: A synthesis.
In: D. F. Dinges & R. J. Broughton (eds), Sleep and alertness: Chronobiological,
behavioral and medical aspects of napping (pp. 299–308). New York: Raven Press.
Dinges, D. F., Orne, M. T., Whitehouse, W. G., & Orne, E. C. (1987). Temporal
placement of a nap for alertness: Contributions of circadian phase and prior
wakefulness. Sleep, 10, 313–329.
Dinges, D. F., Maislin, G., Brewster, R. M., Krueger, G. P., & Carroll, R. J. (2005).
Pilot test of fatigue management technologies. Journal of the Transportation
Research Board, No. 1922: 175–182. Washington, DC: Transportation Research
Board of the National Academies.
Donchin, E. (1981). Surprise!... Surprise? Psychophysiology, 18, 493–513.
Donchin, E., Spencer, K. M., & Wijesinghe, R. (2000). The mental prosthesis:
Assessing the speed of a P300-based brain-computer interface. IEEE Transactions
on Rehabilitation Engineering, 8, 174–179.
Donner, T. H. & Siegel, M. (2011). A framework for local cortical oscillation
patterns. Trends in Cognitive Sciences, 15, 191–199.
Donner, T. H., Kettermann, A., Diesch, E., Ostendorf, F., Villringer, A., & Brandt,
S. A. (2002). Visual feature and conjunction searches of equal difficulty engage
only partially overlapping frontoparietal networks. NeuroImage, 15, 16–25.
Dosher, B. A., Han, S., & Lu, Z. L. (2004). Parallel processing in visual search asym-
metry. Journal of Experimental Psychology: Human Perception and Performance,
30, 3–27.
Drummond, S. P., Brown, G. G., Gillin, J. C., Stricker, J. L., Wong, E. C., &
Buxton, R. B. (2000). Altered brain response to verbal learning following sleep
deprivation. Nature, 403, 655–657.
Drury, C. G. (1975). Inspection of sheet materials—Model and data. Human
Factors, 17, 257–265.
Dudschig, C. & Jentzsch, I. (2009). Speeding before and slowing after errors: Is it
all just strategy? Brain Research, 1296, 56–62.
Duncan, J. (1980). The demonstration of capacity limits. Cognitive Psychology,
12, 75–96.
Duncan, J. (1984). Selective attention and the organization of visual informa-
tion. Journal of Experimental Psychology: General, 113, 501–517.
Duncan, J. & Humphreys, G. W. (1989). Visual search and stimulus similarity.
Psychological Review, 96, 433–458.
Duncan, J. & Owen, A.M. (2000). Common regions of the human frontal
lobe recruited by diverse cognitive demands. Trends in Neuroscience, 23,
475–483.
Dunston, P. S., Proctor, R. W., & Wang, X. (2012). Challenges in evaluating
skill transfer from construction equipment simulators. Theoretical Issues in
Ergonomics Science. DOI:10.1080/1463922X.2011.624647
Economic Commission for Europe. (2011). Statistics of road traffic accidents in
Europe and North America (52nd ed.). New York and Geneva: United Nations.
Available at http://www.unece.org/trans/main/wp6/publications/stats_
accidents2011.html
Egeth, H. E. & Yantis, S. (1997). Visual attention: Control, representation, and
time course. Annual Review of Psychology, 48, 269–297.
Egeth, H. E., Virzi, R. A., & Garbart, H. (1984). Searching for conjunctively
defined targets. Journal of Experimental Psychology: Human Perception and
Performance, 10, 32–39.
Egly, R., Driver, J., & Rafal, R. D. (1994). Shifting visual attention between objects
and locations: Evidence from normal and parietal lesion subjects. Journal of
Experimental Psychology: General, 123, 161–177.
Egner, T. (2007). Congruency sequence effects and cognitive control. Cognitive,
Affective & Behavioral Neuroscience, 7, 380–390.
Egner, T. & Gruzelier, J. H. (2004). EEG biofeedback of low beta band compo-
nents: Frequency-specific effects on variables of attention and event-related
brain potentials. Clinical Neurophysiology, 115, 131–139.
Eichele, H., Juvodden, H. T., Ullsperger, M., & Eichele, T. (2010). Mal-adaptation
of event-related EEG responses preceding performance errors. Frontiers in
Human Neuroscience, 4, 65.
Eichele, T., Debener, S., Calhoun, V. D., Specht, K., Engel, A. K., Hugdahl, K.,
et al. (2008). Prediction of human errors by maladaptive changes in event-
related brain networks. Proceedings of the National Academy of Sciences of the
United States of America, 105, 6173–6178.
Eimer, M. (1994). An ERP study on visual spatial priming with peripheral onsets.
Psychophysiology, 31, 154–163.
Eimer, M. & Driver, J. (2001). Crossmodal links in endogenous and exog-
enous spatial attention: Evidence from event-related brain potential studies.
Neuroscience and Biobehavioral Reviews, 25, 497–511.
Eimer, M., Forster, B., & van Velzen, J. V. (2003). Anterior and posterior atten-
tional control systems use different spatial reference frames: ERP evidence from
covert tactile-spatial orienting. Psychophysiology, 40, 924–933.
Eimer, M., Forster, B., & Vibell, J. (2005). Cutaneous saltation within and across
arms: A new measure of the saltation illusion in somatosensation. Perception &
Psychophysics, 67, 458–468.
Ekman, P. & Friesen, W. V. (1978). Facial action coding system. Palo Alto, CA:
Consulting Psychologists Press.
Ekman, P. & Friesen, W. V. (1986). A new pan-cultural facial expression of
emotion. Motivation and Emotion, 10, 159–168.
Elliott, R., Sahakian, B., Matthews, K., Bannerjea, A., Rimmer, J., & Robbins, T.
(1997). Effects of methylphenidate on spatial working memory and planning
in healthy young adults. Psychopharmacology, 131, 196–206.
Engleman, H. M., Martin, S. E., Deary, I. J., & Douglas, N. J. (1994). Effect of
continuous positive airway pressure treatment on daytime function in sleep
apnoea/hypopnoea syndrome. Lancet, 343, 572–575.
Engleman, H. M., Hirst, W. S. J., & Douglas, N. J. (1997). Under reporting of
sleepiness and driving impairment in patients with sleep apnea/hypopnea
syndrome. Journal of Sleep Research, 6, 272–275.
Eriksen, B. A. & Eriksen, C. W. (1974). Effects of noise letters upon identifica-
tion of a target letter in a nonsearch task. Perception & Psychophysics, 16,
143–149.
Eriksen, C. W. & Hoffman, J. E. (1973). The extent of processing of noise elements
during selective encoding from visual displays. Perception & Psychophysics, 14,
155–160.
Eriksen, C. W. & St. James, J. D. (1986). Visual attention within and around
the field of focal attention: A zoom lens model. Perception & Psychophysics, 40,
225–240.
Erlhagen, W., Mukovskiy, A., Bicho, E., Panin, G., Kiss, A., Knoll, A., van Schie, H. T.,
& Bekkering, H. (2006). Goal-directed imitation for robots: A bio-inspired
approach to action understanding and skill learning. Robotics and Autonomous
Systems, 54, 353–360.
Espeseth, T., Greenwood, P. M., Reinvang, I., Fjell, A. M., Walhovd, K. B.,
Westlye, E., et al. (2006). Interactive effects of APOE and CHRNA4 on attention
and white matter volume in healthy middle-aged and older adults. Cognitive,
Affective, & Behavioral Neuroscience, 6, 31–43.
Espeseth, T., Sneve, M. H., Rootwelt, H., & Laeng, B. (2010). Nicotinic recep-
tor gene CHRNA4 interacts with processing load in attention. PLoS One, 5,
e14407.
European Commission, Information Society and Media. (2008). Robotics in
medicine and healthcare. eHealth Monthly Focus, December. Available
at: http://ec.europa.eu/information_society/activities/health/docs/monthly_
focus/200812robotics.pdf (accessed 8 October 2012).
Everling, S., Tinsley, C. J., Gaffan, D., & Duncan, J. (2002). Filtering of neural sig-
nals by focused attention in the monkey prefrontal cortex. Nature Neuroscience,
5, 671–675.
Fadden, S., Ververs, P. M., & Wickens, C. D. (1998). Pathway HUDs: Are they
viable? Human Factors, 43, 173–193.
Fafrowicz, M., Marek, T., Karwowski, W., & Schmorrow, D. (eds) (2012).
Neuroadaptive systems: Theory and applications. Boca Raton, FL: CRC Press.
Fahle, M. & Poggio, T. (eds) (2002). Perceptual learning. Cambridge, MA: MIT Press.
Falk, E. B., Berkman, E. T., Mann, T., Harrison, B., & Lieberman, M. D. (2010).
Predicting persuasion-induced behavior change from the brain. Journal of
Neuroscience, 30, 8421–8424.
Falkenstein, M., Hohnsbein, J., & Hoormann, J. (1991). Effects of crossmodal
divided attention on late ERP components. II. Error processing in choice reac-
tion tasks. Electroencephalography and Clinical Neurophysiology, 78, 447–455.
Falkenstein, M., Hohnsbein, J., & Hoormann, J. (1996). Differential processing of
motor errors. In: C. Ogura, Y. Koga & M. Shimokochi (eds), Recent advances in
event-related brain potential research. Amsterdam: Elsevier.
Fan, J., McCandliss, B. D., Fossella, J., Flombaum, J. I., & Posner, M. I. (2005). The
activation of attentional networks. Neuroimage, 26, 471–479.
Farah, M. J., Illes, J., Cook-Deegan, R., Gardner, H., Kandel, E., King, P., et al.
(2004). Neurocognitive enhancement: What can we do and what should we
do? Nature Reviews Neuroscience, 5, 421–425.
Farwell, L. A. & Donchin, E. (1988). Talking off the top of your head: Toward a
mental prosthesis utilizing event-related brain potentials. Electroencephalography
and Clinical Neurophysiology, 70, 510–523.
Faubert, J. (2002). Visual perception and aging. Canadian Journal of Experimental
Psychology, 56, 164–176.
Fedota, J. R. & Parasuraman, R. (2010). Neuroergonomics and human error.
Theoretical Issues in Ergonomic Science, 11, 402–421.
Feng, Y., Niu, T., Xing, H., Xu, X., Chen, C., Peng, S., et al. (2004). A common
haplotype of the nicotinic acetylcholine receptor alpha 4 subunit gene is asso-
ciated with vulnerability to nicotine addiction in men. American Journal of
Human Genetics, 75, 112–121.
Ferrez, P. W. (2007). Error-related EEG potentials in brain–computer interfaces.
Unpublished doctoral dissertation, École Polytechnique Fédérale de Lausanne
(EPFL), no. 3928.
Ferrez, P. W. & Millán, J. d. R. (2008). Error-related EEG potentials generated
during simulated brain-computer interaction. IEEE Transactions on Biomedical
Engineering, 55, 923–929.
Ferris, T. K. & Sarter, N. (2011). Continuously informing vibrotactile displays in
support of attention management and multitasking in anesthesiology. Human
Factors, 53, 600–611.
Feuerstein, C., Naegele, B., Pepin, J. L., & Levy, P. (1997). Frontal lobe-related
cognitive functions in patients with sleep apnea syndrome before and after
treatment. Acta Neurologica Belgica, 97, 96–107.
Findlay, J. M. (1997). Saccade target selection during visual search. Vision
Research, 37, 617–631.
Findlay, J. M. (2009). Saccadic eye movement programming: Sensory and atten-
tional factors. Psychological Research, 73, 127–135.
Findley, L. J., Barth, J. T., Powers, D. C., Wilhoit, S. C., Boyd, D. G., & Suratt, P. M.
(1986). Cognitive impairment in patients with obstructive sleep apnea and
associated hypoxemia. Chest, 90, 686–690.
Fisch, B. (1999). Fisch and Spehlmann’s EEG primer: Basic principles of digital and
analog EEG. Amsterdam: Elsevier.
Fischer, E., Haines, R. F., & Price, T. A. (1980). Cognitive issues in head-up displays
(NASA Tech. Rep. No. 1711). Moffett Field, CA: National Aeronautics and Space
Administration.
Fitch, W. T. & Kramer, G. (1994). Sonifying the body electric: Superiority of an
auditory over visual display in a complex, multivariate system. In: G. Kramer
(ed.), Auditory display: Sonification, audification and auditory interfaces (pp.
307–326). Reading, MA: Addison-Wesley.
Fitts, P. M. (1954). The information capacity of the human motor system in control-
ling the amplitude of movement. Journal of Experimental Psychology, 47, 381–391.
Fitts, P. M., Jones, R. E., & Milton, J. L. (1950). Eye movements of aircraft pilots
during instrument-landing approaches. Aeronautical Engineering Review, 9,
24–29.
Folk, C. L., Remington, R. W., & Johnston, J. C. (1992). Involuntary covert ori-
enting is contingent on attentional control settings. Journal of Experimental
Psychology: Human Perception and Performance, 18, 1030–1044.
Forlines, C., Wigdor, D., Shen, C., & Balakrishnan, R. (2007). Direct-touch vs.
mouse input for tabletop displays. In CHI 2007 proceedings: Mobile interaction
techniques I (pp. 647–656). San Jose, CA.
Fougnie, D. & Marois, R. (2007). Executive working memory load induces inat-
tentional blindness. Psychonomic Bulletin & Review, 14, 142–147.
Foyle, D. C., McCann, R. S., & Shelden, S. G. (1995). Attentional issues with super-
imposed symbology: Formats for scene-linked displays. Proceedings of the Eighth
International Symposium on Aviation Psychology (pp. 98–103). Columbus, OH:
Ohio State University.
Freeman, F. G., Mikulka, P. J., Prinzel, L. J., & Scerbo, M. W. (1999). Evaluation of
an adaptive automation system using three EEG indices with a visual tracking
task. Biological Psychology, 50, 61–76.
Friedman, N. P., Miyake, A., Young, S. E., DeFries, J. C., Corley, R. P., & Hewitt,
J. K. (2008). Individual differences in executive functions are almost entirely
genetic in origin. Journal of Experimental Psychology: General, 137, 201–225.
Fries, P. (2005). A mechanism for cognitive dynamics: Neuronal communication
through neuronal coherence. Trends in Cognitive Sciences, 9, 474–480.
Fries, P. (2009). Neuronal gamma-band synchronization as a fundamental process
in cortical computation. Annual Review of Neuroscience, 32, 209–224.
Fries, P., Reynolds, J. H., Rorie, A. E., & Desimone, R. (2001). Modulation of oscil-
latory neuronal synchronization by selective visual attention. Science, 291,
1560–1563.
Fries, P., Scheeringa, R., & Oostenveld, R. (2008). Finding gamma. Neuron, 58,
303–305.
Friesen, C. K. & Kingstone, A. (1998). The eyes have it! Reflexive orienting is
triggered by nonpredictive gaze. Psychonomic Bulletin & Review, 5, 490–495.
Friesen, C. K., & Kingstone, A. (2003). Abrupt onsets and gaze direction cues trig-
ger independent reflexive attentional effects. Cognition, 87, B1–B10.
Frijda, N. H. (1986). The emotions. Cambridge: Cambridge University Press.
Frishman, L. (2005). Basic visual processes. In: E. B. Goldstein (ed.), Blackwell
handbook of sensation and perception (pp. 53–91). Malden, MA: Blackwell.
Friston, K. J., Price, C. J., Fletcher, P., Moore, C., Frackowiak, R. S., & Dolan, R. J.
(1996). The trouble with cognitive subtraction. Neuroimage, 4, 97–104.
Friston, K. J., Ashburner, J. T., Kiebel, S. J., Nichols, T. E., & Penny, W. D. (eds).
(2007). Statistical parametric mapping: The analysis of functional brain images.
London: Academic Press.
Fu, S., Zinni, M., Squire, P. N., Kumar, R., Caggiano, D. M., & Parasuraman, R. (2008).
When and where perceptual load interacts with voluntary visuospatial attention:
An event-related potential and dipole modeling study. NeuroImage, 39, 1345–1355.
Fuchs, T., Birbaumer, N., Lutzenberger, W., Gruzelier, J. H., & Kaiser, J.
(2003). Neurofeedback treatment for attention-deficit/hyperactivity disorder
in children: A comparison with methylphenidate. Applied Psychophysiology and
Biofeedback, 28, 1–12.
Furley, P., Memmert, D., & Heller, C. (2010). The dark side of visual awareness
in sport: Inattentional blindness in a real-world basketball task. Attention,
Perception & Psychophysics, 72, 1327–1337.
Furuta, H., Kaneda, R., Kosaka, K., Arai, H., Sano, J., & Koshino, Y. (1999).
Epworth sleepiness scale and sleep studies in patients with obstructive sleep
apnea syndrome. Psychiatry and Clinical Neurosciences, 53, 301–302.
Fuster, J. M. (1995). Memory in the cerebral cortex: An empirical approach to neural
networks in the human and nonhuman primate. Cambridge, MA: MIT Press.
Fuster, J. M. (2008). The prefrontal cortex (4th ed.). New York: Academic Press.
Fuster, J. M. & Alexander, G. E. (1971). Neuron activity related to short-term
memory. Science, 173, 652–654.
Gagné, R. & Gibson, J. J. (1947). Research on the recognition of aircraft. In:
J. J. Gibson (ed.), Motion picture testing and research. Washington, DC: U.S.
Government Printing Office.
Gallagher, H. L., Jack, A. I., Roepstorff, A., & Frith, C. D. (2002). Imaging the
intentional stance in a competitive game. NeuroImage, 16, 814–821.
Ganesh, S., van Schie, H. T., de Lange, F. P., Thompson, E., & Wigboldus, J.
(2012). How the human brain goes virtual: Distinct cortical regions of the
person-processing network are involved in self-identification with virtual
agents. Cerebral Cortex, 22, 1577–1585.
Gao, X., Xu, D., Cheng, M., & Gao, S. (2003). A BCI-based environmental
controller for the motion-disabled. IEEE Transactions on Neural Systems and
Rehabilitation Engineering, 11, 137–140.
Gasper, K. & Clore, G. L. (2002). Attending to the big picture: Mood and global
vs. local processing of visual information. Psychological Science, 13, 34–40.
Gastaut, H. & Broughton, R. (1965). A clinical and polygraphic study of episodic
phenomena during sleep. In: J. Wortis (ed.), Recent advances in biological psy-
chology (pp. 197–223). New York: Plenum Press.
Gattass, R., Sousa, A. P. B., & Gross, C. G. (1988). Visuotopic organization
and extent of V3 and V4 of the macaque. Journal of Neuroscience, 8,
1831–1845.
Gazzaniga, M. S. (ed.) (2009). The cognitive neurosciences (4th ed.). Cambridge,
MA: MIT Press.
Gehring, W. J., Coles, M. G. H., Meyer, D. E., & Donchin, E. (1990). The error-
related negativity: An event-related brain potential accompanying errors.
Psychophysiology, 27, S34.
Gehring, W. J., Goss, B., Coles, M. G., & Meyer, D. E. (1993). A neural system for
error detection and compensation. Psychological Science, 4, 385–390.
Geldard, F. A. (1982). Saltation in somesthesis. Psychological Bulletin, 92,
136–175.
Gentsch, A., Ullsperger, P., & Ullsperger, M. (2009). Dissociable medial frontal
negativities from a common monitoring system for self- and externally caused
failure of goal achievement. Neuroimage, 47, 2023–2030.
George, C. F. (2001). Reduction in motor vehicle collisions following treatment
of sleep apnoea with nasal CPAP. Thorax, 56, 508–512.
George, M. & Belmaker, R. (eds) (2007). Transcranial magnetic stimulation in
clinical psychiatry. Arlington, VA: American Psychiatric Publishing.
George, N. & Conty, L. (2008). Facing the gaze of others. Clinical Neurophysiology,
38, 197–207.
Gerson, A. D., Parra, L. C., & Sajda, P. (2006). Cortically-coupled computer vision
for rapid image search. IEEE Transactions on Neural Systems and Rehabilitation
Engineering, 14, 174–179.
Gevins, A., Smith, M. E., Leong, H., McEvoy, L., Whitfield, S., Du, R., & Rush, G.
(1998). Monitoring working memory load during computer-based tasks with
EEG pattern recognition methods. Human Factors, 40, 79–91.
Ghose, G. M. & Maunsell, J. H. R. (2008). Spatial summation can explain the
attentional modulations of neuronal responses to multiple stimuli in area V4.
Journal of Neuroscience, 28, 5115–5126.
Giambra, L. M. & Quilter, R.E. (1987). A two-term exponential functional descrip-
tion of the time course of sustained attention. Human Factors, 29, 635–643.
Gläscher, J. P. & O’Doherty, J. P. (2010). Model-based approaches to neuro-
imaging: Combining reinforcement learning theory with fMRI data. Wiley
Interdisciplinary Reviews: Cognitive Science, 1, 501–510.
Glimcher, P. W., Camerer, C. F., Fehr, E., & Poldrack, R. A. (eds) (2008).
Neuroeconomics: Decision making and the brain. Amsterdam: Elsevier.
Gluck, K. A., Ball, J. T., & Krusmark, M. A. (2007). Cognitive control in a compu-
tational model of the Predator pilot. In: W. D. Gray (ed.), Integrated models of
cognitive systems. New York: Oxford University Press.
Goel, N., Rao, H., Durmer, J. S., & Dinges, D. F. (2009). Neurocognitive conse-
quences of sleep deprivation. Seminars in Neurology, 29, 320–339.
Goldman-Rakic, P. S. (1995). Cellular basis of working memory. Neuron,
14, 447–485.
Golz, M., Sommer, D., Krajewski, J., Trutschel, U., & Edwards, D. (2011).
Microsleep episodes and related crashes during overnight driving simulations.
Proceedings of the Sixth International Driving Symposium on Human Factors in
Driver Assessment, Training and Vehicle Design, 2011.
Goodale, M. A., & Milner, A. D. (1992). Separate visual pathways for perception
and action. Trends in Neuroscience, 15, 20–25.
Gopher, D., Weil, M., & Baraket, T. (1994). Transfer of skill from a computer game
trainer to flight. Human Factors, 36, 387–405.
Goschke, T. (2003). Voluntary action and cognitive control from a cognitive
neuroscience perspective. In: S. Maasen, W. Prinz, & G. Roth (eds), Voluntary
action: An issue at the interface of nature and culture (pp. 49–85). Oxford: Oxford
University Press.
Gottlieb, J., Kusunoki, M., & Goldberg, M. E. (1998). The representation of visual
salience in monkey parietal cortex. Nature, 391, 481–484.
Grahn, J. A., Henry, M. J., & McAuley, J. (2011). FMRI investigation of cross-
modal interactions in beat perception: Audition primes vision, but not vice
versa. NeuroImage, 54, 1231–1243.
Gratton, G. & Fabiani, M. (2007). Optical imaging of brain function. In:
R. Parasuraman & M. Rizzo (eds), Neuroergonomics: The brain at work (pp. 65–81).
New York: Oxford University Press.
Gratton, G., Coles, M. G., Sirevaag, E. J., Eriksen, C. W., & Donchin, E. (1988). Pre-
and poststimulus activation of response channels: A psychophysiological analysis.
Journal of Experimental Psychology: Human Perception and Performance, 14, 331–344.
Gratton, G., Coles, M. G., & Donchin, E. (1992). Optimizing the use of infor-
mation: Strategic control of activation of responses. Journal of Experimental
Psychology: General, 121, 480–506.
Gray, J. A. (1981). A critique of Eysenck’s theory of personality. In: H. J. Eysenck
(ed.), A model for personality (pp. 246–276). Berlin: Springer-Verlag.
Gray, J. A. (1982). The neuropsychology of anxiety: An enquiry into the functions of the
septo-hippocampal system. New York: Oxford University Press.
Gray, P. O. (2006). Psychology (6th ed.). New York: Worth Publishers.
Gray, W. D. (ed.) (2007). Integrated models of cognitive systems. New York: Oxford
University Press.
Gray, W. D. (2008). Cognitive architectures: Choreographing the dance of mental
operations with the task environments. Human Factors, 50, 497–505.
Gredebäck, G., Fikke, L., & Melinder, A. (2010). The development of joint visual
attention: a longitudinal study of gaze following during interactions with
mothers and strangers. Developmental Science, 13, 839–848.
Green, A. E., Munafò, M., DeYoung, C., Fossella, J. A., Fan, J., & Gray, J. R. (2008).
Using genetic data in cognitive neuroscience: From growing pains to genuine
insights. Nature Reviews Neuroscience, 9, 710–720.
Green, C. S. & Bavelier, D. (2003). Action video game modifies visual selective
attention. Nature, 423, 534–537.
Green, J. J. & McDonald, J. J. (2010). The role of temporal predictability in the
anticipatory biasing of sensory cortex during visuospatial shifts of attention.
Psychophysiology, 47, 1057–1065.
Greenwood, P., & Parasuraman, R. (2003). Normal genetic variation, cognition,
and aging. Behavioral and Cognitive Neuroscience Reviews, 2, 278–306.
Greenwood, P. M., Fossella, J., & Parasuraman, R. (2005). Specificity of the effect
of a nicotinic receptor polymorphism on individual differences in visuospatial
attention. Journal of Cognitive Neuroscience, 17, 1611–1620.
Greenwood, P. M., Sundararajan, R., Lin, M.-K., Fryxell, K. J., & Parasuraman,
R. (2009a). Both a nicotinic single nucleotide polymorphism (SNP) and a
noradrenergic SNP modulate working memory performance when attention is
manipulated. Journal of Cognitive Neuroscience, 21, 2139–2153.
Greenwood, P. M., Lin, M.-K., Sundararajan, R., Fryxell, K. J., & Parasuraman, R.
(2009b). Synergistic effects of genetic variation in nicotinic and muscarinic
receptors on visual attention but not working memory. Proceedings of the
National Academy of Sciences, 106, 3633–3638.
Gross, J., Schmitz, F., Schnitzler, I., Kessler, K., Shapiro, K., Hommel, B., &
Schnitzler, A. (2004). Modulation of long-range neural synchrony reflects
temporal limitations of visual attention in humans. Proceedings of the National
Academy of Sciences, 101, 13050–13055.
Guest, S., Catmur, C., Lloyd, D., & Spence, C. (2002). Audiotactile interactions in
roughness perception. Experimental Brain Research, 146, 161–171.
Guest, S. & Spence, C. (2003). Tactile dominance in speeded discrimination of
textures. Experimental Brain Research, 150, 201–207.
Hajcak, G. & Simons, R. F. (2008). Oops!.. I did it again: An ERP and behavioral
study of double-errors. Brain and Cognition, 68, 15–21.
Hajcak, G., McDonald, N., & Simons, R. F. (2003). To err is autonomic: Error-
related brain potentials, ANS activity, and post-error compensatory behavior.
Psychophysiology, 40, 895–903.
Hajcak, G., Nieuwenhuis, S., Ridderinkhof, K. R., & Simons, R. F. (2005). Error-
Preceding brain activity: Robustness, temporal dynamics, and boundary condi-
tions. Biological Psychology, 70, 67–78.
Hakkanen, J. & Summala, H. (2000). Sleepiness at work among commercial truck
drivers. Sleep, 23, 49–57.
Hampton, A. N., Bossaerts, P., & O’Doherty, J. P. (2006). The role of the
ventromedial prefrontal cortex in abstract state-based inference during
decision making in humans. Journal of Neuroscience, 26, 8360–8367.
Handy, T. C. (ed.) (2004). Event-related potentials: A methods handbook. Cambridge,
MA: MIT Press.
Hanslmayr, S., Sauseng, P., Doppelmayr, M., Schabus, M., & Klimesch, W. (2005).
Increasing individual upper alpha power by neurofeedback improves cognitive
performance in human subjects. Applied Psychophysiology and Biofeedback, 30,
1–10.
Hanslmayr, S., Aslan, A., Staudigl, T., Klimesch, W., Herrmann, C.S., & Bäuml,
K.-H. (2007). Prestimulus oscillations predict visual perception performance
between and within subjects. NeuroImage, 37, 1465–1473.
Hardt, J. V. & Kamiya, J. (1976). Conflicting results in EEG alpha feedback stud-
ies: Why amplitude integration should replace percent time. Biofeedback and
Self-Regulation, 1, 63–75.
Harmon-Jones, E. (2004). Contributions from research on anger and cognitive
dissonance to understanding the motivational functions of asymmetrical
frontal brain activity. Biological Psychology, 67, 51–76.
Harris, R. L., Glover, B. L., & Spady, A. A. (1986). Analytic techniques of pilot scan-
ning behavior and their application (NASA Technical Paper 2525).
Harrison, Y. & Horne, J. A. (1996). Occurrence of “microsleeps” during day-
time sleep onset in normal subjects. Electroencephalography and Clinical
Neurophysiology, 98, 411–416.
Hayden, B. Y. & Gallant, J. L. (2005). Time course of attention reveals
different mechanisms for spatial and feature-based attention in V4. Neuron,
47, 637–643.
Haynes, J.-D. & Rees, G. (2006). Decoding mental states from brain activity in
humans. Nature Reviews Neuroscience, 7, 523–534.
Heathcote, A., Brown, S., & Mewhort, D. J. K. (2000). The power law repealed:
The case for an exponential law of practice. Psychonomic Bulletin & Review,
7, 185–207.
Hebb, D. O. (1949). The organization of behavior. New York: Wiley.
Heinrich, H. W., Petersen, D., Roos, N.R., Brown, J., & Hazlett, S. (1980). Industrial
accident prevention: A safety management approach. New York, NY: McGraw-
Hill.
Heinze, H. J., Luck, S. J., Mangun, G. R., & Hillyard, S. A. (1990). Visual event-
related potentials index focused attention within bilateral stimulus arrays:
I. Evidence for early selection. Electroencephalography and Clinical Neurophysiology,
75, 511–527.
Heinze, H. J., Mangun, G. R., Burchert, W., Hinrichs, H., Scholz, M., Münte, T. F.,
et al. (1994). Combined spatial and temporal imaging of brain activity during
visual selective attention in humans. Nature, 372, 543–546.
Helander, M. G. (1997). Forty years of IEA: Some reflections on the evolution of
ergonomics. Ergonomics, 40, 952–961.
Herrmann, C. S., Grigutsch, M., & Busch, N. A. (2004). EEG oscillations and wave-
let analysis. In: T. C. Handy (ed.), Event-related potentials: A methods handbook
(pp. 229–260). Cambridge, MA: MIT Press.
Hermann, T., Hunt, A., & Neuhoff, J. G. (eds.). (2011). The sonification handbook.
Berlin: Logos Publishing House.
Herslund, M. B. & Jørgensen, N. O. (2003). Looked-but-failed-to-see-errors in traf-
fic. Accident Analysis and Prevention, 35, 885–891.
Hess, E. H., & Polt, J. M. (1964). Pupil size in relation to mental activity during
simple problem-solving. Science, 143, 1190–1192.
Hilburn, B., Jorna, P., Byrne, E. A., & Parasuraman, R. (1997). The effect of adap-
tive air traffic control (ATC) decision aiding on controller mental workload. In:
M. Mouloua & J. M. Koonce (eds), Human-automation interaction: Research and
practice (pp. 84–91). Mahwah, NJ: Lawrence Erlbaum.
Hillyard, S. A., Vogel, E. K., & Luck, S. J. (1998). Sensory gain control (amplification)
as a mechanism of selective attention: Electrophysiological and neuroimaging
evidence. Philosophical Transactions of the Royal Society of London, Series B:
Biological Sciences, 353, 1257–1270.
Hochstein, S. & Ahissar, M. (2002). View from the top: Hierarchies and
reverse hierarchies in the visual system. Neuron, 36, 791–804.
Hoffman, J. E. & Subramaniam, B. (1995). The role of visual attention in saccadic
eye movements. Perception & Psychophysics, 57, 787–795.
Holcomb, P. J. & Neville, H. J. (1990). Auditory and visual semantic priming in
lexical decision: A comparison using event-related brain potentials. Language
and Cognitive Processes, 5, 281–312.
Hollingworth, A., & Henderson, J. M. (1998). Does consistent scene context
facilitate object perception? Journal of Experimental Psychology: General, 127,
398–415.
Hollins, M. (2010). Somesthetic senses. Annual Review of Psychology, 61, 243–271.
Hollins, M. & Risner, S. R. (2000). Evidence for the duplex theory of tactile
texture perception. Perception & Psychophysics, 62, 695–705.
Hollins, M., Fox, A., & Bishop, C. (2000). Imposed vibration influences perceived
tactile smoothness. Perception, 29, 1455–1465.
Hollins, M., Bensmaïa, S. J., & Washburn, S. (2001). Vibrotactile adaptation
impairs discrimination of fine, but not coarse, textures. Somatosensory and
Motor Research, 18, 253–262.
Holmes, J. M. & Clarke, M. P. (2006). Amblyopia. Lancet, 367, 1343–1351.
Holmes, N. P. (2012). Does tool use extend peripersonal space? A review and
re-analysis. Experimental Brain Research, 218, 273–282.
Holroyd, C. B. & Coles, M. G. H. (2002). The neural basis of human error
processing: Reinforcement learning, dopamine, and the error-related negativity.
Psychological Review, 109, 679–709.
Holroyd, C. B., Larsen, J. T., & Cohen, J. D. (2004). Context dependence of
the event-related brain potential associated with reward and punishment.
Psychophysiology, 41, 245–253.
Hopfinger, J. B. (2005). Electrophysiology of reflexive attention. In: L. Itti,
G. Rees, & J. Tsotsos (eds), Neurobiology of attention (pp. 219–225). San Diego:
Academic Press/Elsevier.
Hopfinger, J. B. & Mangun, G. R. (1998). Reflexive attention modulates pro-
cessing of visual stimuli in human extrastriate cortex. Psychological Science,
9, 441–447.
Hopfinger, J. B., Buonocore, M. H., & Mangun, G. R. (2000). The neural mecha-
nisms of top-down attentional control. Nature Neuroscience, 3, 284–291.
Hoque, M. E., Yeasin, M., & Louwerse, M. M. (2005). Robust recognition of emo-
tion from speech. 6th International Conference on Intelligent Virtual Agents (IVA),
Marina Del Rey, CA, August.
Horgan, J. (2005). The forgotten era of brain chips. Scientific American, 293, 66–73.
Horrey, W. J., Noy, Y. I., Folkard, S., Popkin, S. M., Howarth, H. D., & Courtney,
T. K. (2011). Research needs and opportunities for reducing the adverse safety
consequences of fatigue. Accident Analysis & Prevention, 43, 591–594.
Hosseini, H., Mano, Y., Rostami, M., Takahashi, M., Sugiura, M., & Kawashima,
R. (2011). Decoding what one likes or dislikes from single-trial fNIRS measure-
ments. Neuroreport, 22, 269–273.
Houston-Price, C., Plunkett, K., & Duffy, H. (2006). The use of social and
salience cues in early word learning. Journal of Experimental Child Psychology,
95, 27–55.
Hubel, D. H. & Wiesel, T. N. (1968). Receptive fields and functional architecture
of monkey striate cortex. Journal of Physiology, 195, 215–243.
HFES (Human Factors and Ergonomics Society) (2012). Directory and yearbook.
Santa Monica, CA: Human Factors and Ergonomics Society.
Hupé, J. M., James, A. C., Payne, B. R., Lomber, S. G., Girard, P., & Bullier, J.
(1998). Cortical feedback improves discrimination between figure and back-
ground by V1, V2 and V3 neurons. Nature, 394, 784–787.
Huppert, T. J., Hoge, R. D., Diamond, S. G., Franceschini, M. A., & Boas, D. A.
(2006). A temporal comparison of BOLD, ASL, and NIRS hemodynamic
responses to motor stimuli in adult humans. NeuroImage, 29, 368–382.
Hursh, S. R., Raslear, T. G., Kaye, A. S., & Fanzone, J. F. (2008). Validation and
calibration of a fatigue assessment tool for railroad work schedules, final report.
Washington, DC: U.S. Department of Transportation. Available at: http://www.
fra.dot.gov/downloads/research/ord0804.pdf (accessed June 2011).
Hwang, S., Yau, Y., Lin, Y., Chen, J., Huang, T., Yenn, T., & Hsu, C. (2008). Predicting
work performance in nuclear power plants. Safety Science, 46, 1115–1124.
Hyman, I. E., Boss, S. M., Wise, B. M., McKenzie, K. E., & Caggiano, J. M. (2010).
Did you see the unicycling clown? Inattentional blindness while walking and
talking on a cell phone. Applied Cognitive Psychology, 24, 597–607.
Institute of Medicine. (2006). Sleep disorders and sleep deprivation: An unmet
public health problem. H. R. Colten & B. M. Altevogt (eds), Committee on Sleep
Medicine and Research, Board on Health Sciences Policy. Washington, DC: The
National Academies Press.
Institute of Medicine. (2009). Resident duty hours: Enhancing sleep, supervision, and
safety. In: C. Ulmer, D. M. Wolman, & M. M. E. Johns (eds). Committee on
Optimizing Graduate Medical Trainee (Resident) Hours and Work Schedule to
Improve Patient Safety. Washington, DC: The National Academies Press.
IEA (International Ergonomics Association). (2004). www.iea.cc [website].
Intuitive Surgical, Inc. (2008). Robotics for healthcare: Personalising care and boosting
the quality, access and efficiency of healthcare. Available at: http://ec.europa.
eu/information_society/activities/health/docs/studies/robotics_healthcare/
robotics-in-healthcare.pdf (accessed 1 June 2012).
Iriki, A., Tanaka, M., & Iwamura, Y. (1996). Coding of modified body schema
during tool use by macaque postcentral neurons. NeuroReport, 7, 2325–2330.
Itti, L. & Koch, C. (2000). A saliency-based search mechanism for overt and
covert shifts of visual attention. Vision Research, 40, 1489–1506.
Jack, R. E., Caldara, R., & Schyns, P. G. (2012). Internal representations reveal
cultural diversity in expectations of facial expressions of emotion. Journal of
Experimental Psychology: General, 141, 19–25.
Jacobson, J. (2007). Postdictive generation of subjective intentions: New
experiments on free will and their technological applications. Paper presented
at the 11th Annual Meeting of the Association for the Scientific Study of
Consciousness, Las Vegas, June.
Jacobson, L., Koslowsky, M., & Lavidor, M. (2012). tDCS polarity effects in motor
and cognitive domains: a meta-analytical review. Experimental Brain Research,
216, 2891–2899.
James, D. R. C., Orihuela-Espina, F., Leff, D. R., Sodergren, M. H., Athanasiou, T.,
Darzi, A. W., & Yang, G.-Z. (2011). The ergonomics of natural orifice
translumenal endoscopic surgery (NOTES) navigation in terms of performance,
stress, and cognitive behavior. Surgery, 149, 525–533.
James, W. (1890/1950). The principles of psychology. New York: Dover.
Jarmasz, J., Herdman, C. M., & Johannsdottir, K. R. (2005). Object-based atten-
tion and cognitive tunneling. Journal of Experimental Psychology: Applied,
11, 3–12.
Jehee, J. F. M., Roelfsema, P. R., Deco, G., Murre, J. M. J., & Lamme, V. A. F. (2007).
Interactions between higher and lower visual areas improve shape selectivity
of higher level neurons—explaining crowding phenomena. Brain Research,
1157, 167–176.
Johnson, A. (2013). Procedural memory and skill acquisition. In A. F. Healy
& R. W. Proctor (Vol. eds), I. B. Weiner (Editor-in-Chief), Handbook of
psychology. Vol. 4. Experimental psychology (2nd ed., pp. 495–520). Hoboken,
NJ: John Wiley & Sons.
Johnson, A. & Proctor, R. W. (2004). Attention: Theory and practice. Thousand
Oaks, CA: Sage.
Johnson, A. & Klein, F. (2006). Approach! Avoid! Emotional reaction and
error handling in human–computer interaction. Dutch Journal of Ergonomics,
31, 22–28.
Johnston, J. C., McCann, R. S., & Remington, R. W. (1995). Chronometric evi-
dence for two types of attention. Psychological Science, 6, 365–369.
Jolij, J. (2008). From affective blindsight to affective blindness: When cortical
processing suppresses subcortical information. In: F. Columbus (ed.), Neural
pathways: New research (pp. 205–208). New York: Nova Science Publishers.
Jolij, J. (2012). Hot cognition and social vision: The interaction between social-
affective processes and inference in visual perception. Neuropraxis, 16, 73–78.
Jolij, J. & Lamme, V. A. (2005). Repression of unconscious information by con-
scious processing: Evidence from affective blindsight induced by transcranial
magnetic stimulation. Proceedings of the National Academy of Sciences, 102,
10747–10751.
Jolij, J. & Meurs, M. (2011). Music alters visual perception. PLoS One, 6, e18861.
Jolij, J., Scholte, H. S., van Gaal, S., Hodgson, T. L., & Lamme, V. A. F. (2011). Act
quickly, decide later: Long-latency visual processing underlies perceptual decisions
but not reflexive behavior. Journal of Cognitive Neuroscience, 23, 3734–3745.
Jonides, J., Lewis, R. L., Nee, D. E., Lustig, C. A., Berman, M. G., & Moore, K. S.
(2008). The mind and brain of short-term memory. Annual Review of Psychology,
59, 193–224.
Jousmäki, V. & Hari, R. (1998). Parchment-skin illusion: Sound-biased touch.
Current Biology, 8, R190.
Just, M. A. & Varma, S. (2007). The organization of thinking: What functional
brain imaging reveals about the neuroarchitecture of complex cognition.
Cognitive, Affective, & Behavioral Neuroscience, 7, 153–191.
Just, M., Keller, T. A., & Cynkar, J. (2008). A decrease in brain activation
associated with driving when listening to someone speak. Brain Research, 1205,
70–80.
Kadir, K., Almkvist, O., Wall, A., Långström, B., & Nordberg, A. (2006). PET
imaging of cortical 11C-nicotine binding correlates with the cognitive function
of attention in Alzheimer’s disease. Psychopharmacology, 188, 504–520.
Kahlbrock, N., Butz, M., May, E. S., & Schnitzler, A. (2012). Sustained gamma
band synchronization in early visual areas reflects the level of selective
attention. NeuroImage, 59, 673–681.
Kahneman, D. (1973). Attention and effort. Englewood Cliffs, NJ: Prentice-Hall.
Kahneman, D. & Tversky, A. (1990). Prospect theory: An analysis of decision
under risk. In: P. K. Moser (ed.), Rationality in action: Contemporary approaches
(pp. 140–170). New York: Cambridge University Press.
Kamitani, Y. & Tong, F. (2005). Decoding the visual and subjective contents of
the human brain. Nature Neuroscience, 8, 679–685.
Kandel, E., Schwartz, J. H., Jessel, T. M., Siegelbaum, S. A., & Hudspeth, A. J.
(2013). Principles of neural science (5th ed.). New York: McGraw-Hill.
Kärcher, S. M., Fenzlaff, S., Hartmann, D., Nagel, S. K., & König, P. (2012). Sensory
augmentation for the blind. Frontiers in Human Neuroscience, 6, 37.
Karni, A. & Sagi, D. (1991). Where practice makes perfect in texture discrimina-
tion: Evidence for primary visual cortex plasticity. Proceedings of the National
Academy of Science, 88, 4966–4970.
Karwowski, W. (1992). The human world of fuzziness, human entropy, and the
need for general fuzzy systems theory. Japanese Journal of Fuzzy Theory and
Systems, 4, 591–609.
Karwowski, W. (2000). Symvatology: The science of an artifact-human
compatibility. Theoretical Issues in Ergonomics Science, 1, 76–91.
Karwowski, W. (2005). Ergonomics and human factors: The paradigms for science,
engineering, design, technology, and management of human-compatible
systems. Ergonomics, 48, 436–463.
Karwowski, W., Siemionow, W., & Gielo-Perczak, K. (2003). Physical
neuroergonomics: The human brain in control of physical work activities.
Theoretical Issues in Ergonomics Science, 4, 175–199.
Kastner, S. & Ungerleider, L. G. (2000). Mechanisms of visual attention in the
human cortex. Annual Review of Neuroscience, 23, 315–341.
Kastner, S., DeWeerd, P., Desimone, R., & Ungerleider, L. G. (1998). Mechanisms
of directed attention in ventral extrastriate cortex as revealed by functional
MRI. Science, 282, 108–111.
Kastner, S., Pinsk, M. A., de Weerd, P., Desimone, R., & Ungerleider, L. G. (1999).
Increased activity in human visual cortex during directed attention in the
absence of visual stimulation. Neuron, 22, 751–761.
Kasai, T., Moriya, H., & Hirano, S. (2011). Are objects the same as groups? ERP
correlates of spatial attentional guidance by irrelevant feature similarity. Brain
Research, 1399, 49–58.
Kato, Y., Endo, H., Kobayakawa, T., Kato, K., & Kitazaki, S. (2012). Effects of inter-
mittent odours on cognitive-motor performance and brain functioning during
mental fatigue. Ergonomics, 55, 1–11.
Kecklund, G. & Akerstedt, T. (1993). Sleepiness in long distance truck driving:
An ambulatory EEG study of night driving. Ergonomics, 36, 1007–1017.
Kelly, A. & Garavan, H. (2005). Human functional neuroimaging of brain
changes associate with practice. Cerebral Cortex, 15, 1089–1102.
Kelly, E., Darke, S., & Ross J. (2004). A review of drug use and driving:
Epidemiology, impairment, risk factors and risk perceptions. Drug Alcohol
Review, 23, 319–344.
Kennedy, C. W., Hu, T., Desai, J. P., Wechsler, A. S., & Kresh, J. Y. (2002). A novel
approach to robotic cardiac surgery using haptics and vision. Cardiovascular
Engineering: An International Journal, 2, 15–21.
Kennedy, S. H., Giacobbe, P., Rizvi, S. J., Placenza, F. M., Nishikawa, Y., Mayberg,
H. S., & Lozano, A. M. (2011). Deep brain stimulation for treatment-resistant
depression: Follow-up after 3 to 6 years. The American Journal of Psychiatry, 168,
502–510.
Kerns, J. G., Cohen, J. D., MacDonald, A. W., Cho, R. Y., Stenger, V. A., & Carter,
C. S. (2004). Anterior cingulate conflict monitoring and adjustments in con-
trol. Science, 303, 1023–1026.
Keysers, C. (2009). Mirror neurons. Current Biology, 19, R971–R973.
Keysers, C. & Gazzola, V. (2006). Towards a unifying neural theory of social
cognition. Progress in Brain Research, 156, 379–401.
Kieras, D. E. & Meyer, D. E. (1997). An overview of the EPIC architecture for
cognition and performance with application to human-computer interaction.
Human-Computer Interaction, 12, 391–438.
Kieras, D. E., Meyer, D. E., Ballas, J., & Lauber, E. J. (2000). Modern computational
perspectives on executive mental processes and cognitive control: Where to
from here? In S. Monsell & J. Driver (eds), Attention and performance XVIII:
Control of cognitive processes (pp. 681–712). Cambridge, MA: MIT Press.
Kim, Y.-H., Gitelman, D. R., Nobre, A. C., Parrish, T. B., LaBar, K. S., &
Mesulam, M.-M. (1999). The large-scale neural network for spatial attention
displays multifunctional overlap but differential asymmetry. NeuroImage, 9,
269–277.
Kimberg, D. Y., D’Esposito, M., & Farah, M. J. (1997). Effects of bromocrip-
tine on human subjects depend on working memory capacity. Neuroreport,
8, 3581–3585.
King, J. A., Korb, F. M., von Cramon, D. Y., & Ullsperger, M. (2010). Post-error
behavioral adjustments are facilitated by activation and suppression of task-
relevant and task-irrelevant information processing. The Journal of Neuroscience,
30, 12759–12769.
Kiss, M., van Velzen, J., & Eimer, M. (2008). The N2pc component and its links
to attention shifts and spatially selective visual processing. Psychophysiology,
45, 240–249.
Klauer, S. G., Dingus, T. A., Neale, V. L., Sudweeks, J. D., & Ramsey, D. J. (2006).
The impact of driver inattention on near-crash/crash risk: An analysis using
the 100-car naturalistic driving study data. U.S. Department of Transportation.
Report no. DOT HS 810 59.
Klein, R. M. (2000). Inhibition of return. Trends in Cognitive Sciences, 4, 138–147.
Klein, R. M. & Shore, D. I. (2000). Relations among modes of visual orienting.
In: S. Monsell & J. Driver (eds), Attention & Performance XVIII (pp. 195–208).
Cambridge, MA: MIT Press.
Klein, R. M., Kingstone, A., & Pontefract, A. (1992). Orienting of visual attention.
In: K. Rayner (ed.), Eye Movements and Visual Cognition: Scene Perception and
Reading (pp. 46–65). New York: Springer-Verlag.
Klein, T. A., Neumann, J., Reuter, M., Hennig, J., von Cramon, D. Y., &
Ullsperger, M. (2007). Genetically determined differences in learning from
errors. Science, 318, 1642–1645.
Klimesch, W. (1999). EEG alpha and theta oscillations reflect cognitive and mem-
ory performance: A review and analysis. Brain Research Reviews, 29, 169–195.
Knudsen, E. I. (2007). Fundamental components of attention. Annual Review of
Neuroscience, 30, 57–78.
Koch, C. & Ullman, S. (1985). Shifts in selective visual attention: Towards the
underlying neural circuitry. Human Neurobiology, 4, 219–227.
Koivisto, M. & Revonsuo, A. (2007). How meaning shapes seeing. Psychological
Science, 18, 845–849.
Kok, A. (1997). Event-related-potential (ERP) reflections of mental resources:
A review and synthesis. Biological Psychology, 45, 19–56.
Kok, A., Ridderinkhof, K. R., & Ullsperger, M. (2006). The control of attention and
actions: Current research and future developments. Brain Research, 1105, 1–6.
Kowler, E., Anderson, E., Dosher, B., & Blaser, E. (1995). The role of attention in
the programming of saccades. Vision Research, 35, 1897–1916.
Krach, S., Hegel, F., Wrede, B., Sagerer, G., Binkofski, F., & Kircher, T. (2008). Can
machines think? Interaction and perspective taking with robots investigated
via fMRI. PLoS One, 3, e2597.
Kramer, A. F. & Jacobson, A. (1991). Perceptual organization and focused atten-
tion: The role of objects and proximity in visual processing. Perception &
Psychophysics, 50, 267–284.
Kramer, A. F., Wickens, C., & Donchin, E. (1985). The processing of stimulus
properties: Evidence for dual-task integrality. Journal of Experimental Psychology:
Human Perception and Performance, 11, 393–408.
Kramer, A. F., Weber, T. A., & Watson, S. E. (1997). Object-based attentional
selection: Grouped arrays or spatially-invariant representations? Journal of
Experimental Psychology: General, 126, 3–13.
Krantz, D. H. (1972). A theory of magnitude estimation and cross-modality
matching. Journal of Mathematical Psychology, 9, 168–199.
Kribbs, N. B., Pack, A. I., Kline, L. R., Getsy, J. E., Schuett, J. S., Henry, J. N., et al.
(1993). Effects of one night without nasal CPAP treatment on sleep and sleepi-
ness in patients with obstructive sleep apnea. American Review of Respiratory
Disease, 147, 1162–1168.
Kristjansson, S. D., Stern, J. A., Brown, T. B., & Rohrbaugh, J. W. (2009). Detecting
phasic lapses in alertness using pupillometric measures. Applied Ergonomics,
40, 978–986.
Krummenacher, J., Müller, H. J., & Heller, D. (2001). Visual search for dimension-
ally redundant pop-out targets: Evidence for parallel-coactive processing of
dimensions. Perception & Psychophysics, 63, 901–917.
Kübler, A. & Müller, K.-R. (2007). An introduction to brain–computer interfacing.
In: G. Dornhege, J. del R. Millán, T. Hinterberger, D. McFarland, & K.-R.
Müller (eds), Toward brain–computer interfacing (pp. 1–25). Cambridge, MA:
MIT Press.
Kundel, H. L. & Nodine, C. F. (1975). Interpreting chest radiographs without
visual search. Radiology, 116, 527–532.
Kundel, H. L., Nodine, C. F., Conant, E. F., & Weinstein, S. P. (2007). Holistic
component of image perception in mammogram interpretation: Gaze-tracking
study. Radiology, 242, 396–402.
Kupers, R., Fumal, A., de Noordhout, A. M., Gjedde, A., Schoenen, J., & Ptito, M.
(2006). Transcranial magnetic stimulation of the visual cortex induces somato-
topically organized qualia in blind subjects. Proceedings of the National Academy
of Sciences, 103, 13256–13260.
Kushida, C. A., Littner, M. R., Hirshkowitz, M., Morgenthaler, T. I., Alessi, C. A.,
Bailey, D., et al. (2006). Practice parameters for the use of continuous and
bilevel positive airway pressure devices to treat adult patients with sleep-
related breathing disorders. Sleep, 29, 375–380.
Lachter, J., Forster, K. I., & Ruthruff, E. (2004). Forty-five years after Broadbent
(1958): Still no identification without attention. Psychological Review, 111,
880–913.
Lakatos, P., O’Connell, M. N., Barczak, A., Mills, A., Javitt, D. C., & Schroeder, C. E.
(2009). The leading sense: Supramodal control of neurophysiological context
by attention. Neuron, 64, 419–430.
Lalor, E. C., Kelly, S. P., Finucane, C., Burke, R., Smith, R., Reilly, R. B., &
McDarby, G. (2005). Steady-state VEP-based brain-computer interface control
in an immersive 3D gaming environment. EURASIP Journal on Applied Signal
Processing, 19, 3156–3164.
Laming, D. R. J. (1968). Information theory of choice-reaction times. Oxford:
Academic Press.
Laming, D. R. J. (1979). Choice reaction performance following an error. Acta
Psychologica, 43, 199–224.
Lamme, V. A. (1995). The neurophysiology of figure-ground segregation in
primary visual cortex. Journal of Neuroscience, 15, 1605–1615.
Lamme, V. A. (2003). Why visual attention and awareness are different. Trends in
Cognitive Sciences, 7, 12–18.
Lamme, V. A. (2006). Towards a true neural stance on consciousness. Trends in
Cognitive Sciences, 10, 494–501.
Lamme, V. A. & Roelfsema, P. R. (2000). The distinct modes of vision offered by
feedforward and recurrent processing. Trends in Neurosciences, 23, 571–579.
Lamme, V. A., van Dijk, B. W., & Spekreijse, H. (1993). Contour from motion
processing occurs in primary visual cortex. Nature, 363, 541–543.
Lange, J., Oostenveld, R., & Fries, P. (2011). Perception of the touch-induced visual
double-flash illusion correlates with changes of rhythmic neuronal activity in
human visual and somatosensory areas. NeuroImage, 54, 1395–1405.
Lantz, D. L. & Sterman, M. B. (1988). Neuropsychological assessment of subjects with
uncontrolled epilepsy: effects of EEG feedback training. Epilepsia, 29, 163–171.
Lavie, N. (1995). Perceptual load as a necessary condition for selective atten-
tion. Journal of Experimental Psychology: Human Perception and Performance, 21,
451–468.
Lavie, N. & Tsal, Y. (1994). Perceptual load as a major determinant of the locus of
selection in visual attention. Perception & Psychophysics, 56, 183–197.
Lavie, N., Hirst, A., de Fockert, J. W., & Viding, E. (2004). Load theory of selec-
tive attention and cognitive control. Journal of Experimental Psychology: General,
133, 339–354.
Layton, C., Smith, P. J., & McCoy, C. E. (1994). Design of a cooperative problem-
solving system for en-route flight planning: An empirical evaluation. Human
Factors, 36, 94–119.
Leavitt, V. M., Molholm, S., Gomez-Ramirez, M., & Foxe, J. J. (2011). “What” and
“Where” in auditory sensory processing: a high-density electrical mapping
study of distinct neural processes underlying sound object recognition and
sound localization. Frontiers in Integrative Neuroscience, 5, 23.
Leber, A. B., Turk-Browne, N. B., & Chun, M. M. (2008). Neural predictors of
moment-to-moment fluctuations in cognitive flexibility. Proceedings of the
National Academy of Sciences, 105, 13592–13597.
LeDoux, J. (1996). The emotional brain: The mysterious underpinnings of emotional
life. New York: Simon & Schuster.
Lees, M. N., Cosman, J. D., Fricke, N., Lee, J. D., & Rizzo, M. (2010). Translating
cognitive neuroscience to the driver’s operational environment: A neuroergo-
nomics approach. American Journal of Psychology, 123, 391–411.
Lehne, M., Ihme, K., Brouwer, A. M., van Erp, J. B. F., & Zander, T. (2009). Error-
related EEG patterns during tactile human-machine interaction. Proceedings of
ACII 2009.
Lei, S. & Rötting, M. (2011). Influence of task combination on EEG spectrum
modulation for driver workload estimation. Human Factors, 53, 168–179.
Leiser, D. & Azar, O. H. (2008). Behavioral economics and decision making:
Applying insights from psychology to understand how people make economic
decisions. Journal of Economic Psychology, 29, 613–618.
Lenggenhager, B., Tadi, T., Metzinger, T., & Blanke, O. (2007). Video ergo sum:
Manipulating bodily self-consciousness. Science, 317, 1096–1099.
Leveson, N. (2005). Software challenges in achieving space safety. Journal of the
British Interplanetary Society, 62, 265–272.
Levin, D. T. & Simons, D. J. (1997). Failure to detect changes to attended objects
in motion pictures. Psychonomic Bulletin and Review, 4, 501–506.
Levy, J. L., Foyle, D. C., & McCann, R. S. (1998). Performance benefits with scene-
linked HUD symbology: An attentional phenomenon? Proceedings of the 42nd
Annual Meeting of the Human Factors and Ergonomics Society (pp. 11–15). Santa
Monica, CA: HFES.
Lewis-Evans, B., de Waard, D., Jolij, J., & Brookhuis, K. A. (2012). What you may
not see might slow you down anyway: Masked images and driving. PLoS One,
7, e29857.
Li, C. S., Yan, P., Bergquist, K. L., & Sinha, R. (2007). Greater activation of the
“default” brain regions predicts stop signal errors. NeuroImage, 38, 640–648.
Li, T., Watter, S., & Sun, H. (2011). Differential visual processing for equivalent
retinal information from near versus far space. Neuropsychologia, 49, 3863–3869.
Li, Z. (2002). A saliency map in primary visual cortex. Trends in Cognitive Sciences,
6, 9–16.
Libet, B., Gleason, C. A., Wright, E. W., & Pearl, D. K. (1983). Time of conscious
intention to act in relation to onset of cerebral activity (readiness-potential):
The unconscious initiation of a freely voluntary act. Brain, 106, 623–642.
Linden, R. D., Picton, T. W., Hamel, G., & Campbell, K. B. (1987). Human auditory
steady-state evoked potentials during selective attention. Electroencephalography
and Clinical Neurophysiology, 66, 145–159.
Lindeman, R. W., Page, R., Yanagida, Y., & Sibert, J. L. (2004). Towards
full-body haptic feedback: The design and deployment of a spatialized vibro-
tactile feedback system. Proceedings of the ACM Virtual Reality Software and
Technology VRST’04 (pp. 146–149). New York: ACM.
Liscombe, J., Hirschberg, J., & Venditti, J. J. (2005). Detecting certainness in
spoken tutorial dialogues. INTERSPEECH 2005, 1837–1840.
Liu, T., Larsson, J., & Carrasco, M. (2007). Feature-based attention modulates
orientation-selective responses in human visual cortex. Neuron, 55, 313–323.
Llera, A., van Gerven, M. A., Gómez, V., Jensen, O., & Kappen, H. J. (2011). On
the use of interaction error potentials for adaptive brain computer interfaces.
Neural Networks, 24, 1120–1127.
Loewenstein, G. F., Rick, S., & Cohen, J. D. (2008). Neuroeconomics. Annual
Review of Psychology, 59, 647–672.
Logan, G. D. & Zbrodoff, N. J. (1979). When it helps to be misled: Facilitative
effects of increasing the frequency of conflicting stimuli in a Stroop-like task.
Memory & Cognition, 7, 166–174.
Logothetis, N. (2008). What we can do and what we cannot do with fMRI.
Nature, 453, 869–878.
Lopez-Larraz, E., Iturrate, I., Montesano, L., & Minguez, J. (2010). Real-time
recognition of feedback error-related potentials during a time-estimation task.
2010 Annual International Conference of the IEEE Engineering in Medicine and
Biology Society, 2670–2673.
Lubar, J. F., & Shouse, M. N. (1976). EEG and behavioral changes in a hyper-
kinetic child concurrent with training of the sensorimotor rhythm (SMR):
A preliminary report. Biofeedback and Self-Regulation, 1, 293–306.
Lubar, J. F., Swartwood, M. O., Swartwood, J. N., & O’Donnell, P. H. (1995).
Evaluation of the effectiveness of EEG neurofeedback training for ADHD in a
clinical setting as measured by changes in T.O.V.A. scores, behavioral ratings,
and WISC-R performance. Biofeedback and Self-Regulation, 20, 83–99.
Luce, R. D. (1959). Individual choice behavior. New York: Wiley.
Luck, S. J. (2005). An introduction to the event-related potential technique. Cambridge,
MA: MIT Press.
Luck, S. J., Heinze, H. J., Mangun, G. R., & Hillyard, S. A. (1990). Visual event-
related potentials index focused attention within bilateral stimulus arrays: II.
Functional dissociation of P1 and N1 components. Electroencephalography and
Clinical Neurophysiology, 75, 528–542.
Luck, S. J., Chelazzi, L., Hillyard, S. A., & Desimone, R. (1997a). Neural mecha-
nisms of spatial selective attention in areas V1, V2, and V4 of macaque visual
cortex. Journal of Neurophysiology, 77, 24–42.
Luck, S. J., Girelli, M., McDermott, M. T., & Ford, M. A. (1997b). Bridging the
gap between monkey neurophysiology and human perception: An ambiguity
resolution theory of visual selective attention. Cognitive Psychology, 33, 64–87.
Ludwig, C. J. H. & Gilchrist, I. D. (2002). Stimulus-driven and goal-driven control
over visual selection. Journal of Experimental Psychology: Human Perception and
Performance, 28, 902–912.
Luu, P., Tucker, D. M., & Makeig, S. (2004). Frontal midline theta and the error-
related negativity: Neurophysiological mechanisms of action regulation.
Clinical Neurophysiology, 115, 1821–1835.
McAdams, C. J. & Maunsell, J. H. R. (2000). Attention to both space and feature
modulates neuronal responses in macaque area V4. Journal of Neurophysiology,
83, 1751–1755.
McArdle, N. M. C., Devereux, G., Heidarnejad, H., Engleman, H. M., Mackay, T. W., &
Douglas, N. J. (1999). Long-term use of CPAP therapy for sleep apnea/hypop-
nea syndrome. American Journal of Respiratory and Critical Care Medicine, 159,
1108–1114.
Macaluso, E., Frith, C. D., & Driver, J. (2000). Modulation of human visual cortex
by crossmodal spatial attention. Science, 289, 1206–1208.
Macaluso, E., Frith, C. D., & Driver, J. (2002). Neuron, 34, 647–658.
McCarley, J. S. & Mounts, J. R. W. (2008). On the relationship between flanker
interference and localized attentional interference. Acta Psychologica, 128,
102–109.
McCarley, J. S., Kramer, A. F., & Peterson, M. S. (2002). Overt and covert object-
based attention. Psychonomic Bulletin & Review, 9, 751–758.
McCartt, A. T., Ribner, S. A., Pack, A. I., & Hammer, M. C. (1996). The scope and
nature of the drowsy driving problem in New York State. Accident Analysis and
Prevention, 28, 511–517.
McCauley, P., Kalachev, L. V., Smith, A. D., Belenky, G., Dinges, D. F., & van
Dongen, H. P. A. (2009). A new mathematical model for the homeostatic
effects of sleep loss on neurobehavioral performance. Journal of Theoretical
Biology, 256, 227–239.
McClearn, G. E., Johansson, B., Berg, S., Pedersen, N. L., Ahern, F., Petrill, S. A., &
Plomin, R. (1997). Substantial genetic influence on cognitive abilities in twins
80 or more years old. Science, 276, 1560–1563.
Macdonald, J. S. P., Mathan, S., & Yeung, N. (2011). Trial-by-trial variations in
subjective attentional state are reflected in ongoing prestimulus EEG alpha
oscillations. Frontiers in Psychology, 2, 82.
McElree, B. (2001). Working memory and focal attention. Journal of Experimental
Psychology: Learning, Memory, and Cognition, 27, 817–835.
McFarland, D. J., Sarnacki, W. A., Townsend, G., Vaughan, T., & Wolpaw, J. R.
(2011). The P300-based brain-computer interface (BCI): Effects of stimulus rate.
Clinical Neurophysiology, 122, 731–737.
McGehee, D. V., Raby, M., Carney, C., Lee, J. D., & Reyes, M. L. (2007). Extending
parental mentoring using an event-triggered video intervention in rural teen
drivers. Journal of Safety Research, 38, 215–227.
McGookin, D. & Brewster, S. (2011). Earcons. In: T. Hermann, A. Hunt, &
J. G. Neuhoff (eds), The sonification handbook (pp. 339–361). Berlin: Logos
Publishing House.
Mack, A. & Rock, I. (1998). Inattentional blindness. Cambridge, MA: MIT Press.
McKinley, A., Bridges, N., Walters, C. M., & Nelson, J. (2012). Modulating the
brain at work using noninvasive transcranial stimulation. NeuroImage, 59,
129–137.
MacLean, P. D. (1990). The triune brain in evolution: Role in paleocerebral functions.
New York: Plenum Press.
MacLeod, C. M. (1991). Half a century of research on the Stroop effect: An inte-
grative review. Psychological Bulletin, 109, 163–203.
McNamara, A., Tegenthoff, M., Dinse, H., Büchel, C., Binkofski, F., & Ragert, P.
(2007). Increased functional connectivity is crucial for learning novel muscle
synergies. NeuroImage, 35, 1211–1218.
Maguire, E. A. (2007). Spatial navigation. In: R. Parasuraman & M. Rizzo (eds),
Neuroergonomics: The brain at work (pp. 131–145). New York: Oxford University
Press.
Mallis, M. M., Mejdal, S., Nguyen, T. T., & Dinges, D. F. (2004). Summary of
features of seven biomathematical models of human fatigue and performance.
Aviation Space and Environmental Medicine, 75, A4–A14.
Mangun, G. R. & Hillyard, S. A. (1990). Allocation of visual attention to spatial
locations: Tradeoff functions for event-related brain potentials and detection
performance. Perception & Psychophysics, 47, 532–550.
Marois, R. & Ivanoff, J. (2005). Capacity limits of information processing in the
brain. Trends in Cognitive Sciences, 9, 296–305.
Marois, R., Chun, M. M., & Gore, J. C. (2000). Neural correlates of the attentional
blink. Neuron, 28, 299–308.
Marr, D. (1982). Vision: A computational investigation into the human representation
and processing of visual information. New York: Freeman.
Marrocco, R. T. & Davidson, M. C. (1998). Neurochemistry of attention. In:
R. Parasuraman (ed.), The attentive brain (pp. 35–50). Cambridge, MA: MIT Press.
Martino, G. & Marks, L. E. (2000). Cross-modal interaction between vision and
touch: the role of synesthetic correspondence. Perception, 29, 745–754.
Masa, J. F., Rubio, M., & Findley, L. J. (2004). Habitually sleepy drivers have a
high frequency of automobile crashes associated with respiratory disorders
during sleep. American Journal of Respiratory and Critical Care Medicine, 170,
1014–1021.
Matthews, G., Davies, D. R., Westerman, S. J., & Stammers, R. B. (2000). Human
performance: Cognition, stress, and individual differences. Hove: Psychology
Press.
Maunsell, J. H. R. & Treue, S. (2006). Feature-based attention in visual cortex.
Trends in Neurosciences, 29, 317–322.
Maycock, G. (1997). Accident liability: The human perspective. In: T. Rothengatter
& V. E. Carbonell (eds), Traffic and transport psychology: Theory and application
(pp. 65–76). New York: Pergamon.
Mazaheri, A., Nieuwenhuis, I. L., van Dijk, H., & Jensen, O. (2009). Prestimulus
alpha and mu activity predicts failure to inhibit motor responses. Human Brain
Mapping, 30, 1791–1800.
Mazer, J. A. & Gallant, J. L. (2003). Goal-related activity in V4 during free view-
ing visual search: Evidence for a ventral stream visual salience map. Neuron,
40, 1241–1250.
Mehta, M., Owen, A., Sahakian, B., Mavaddat, N., Pickard, J., & Robbins, T.
(2000). Methylphenidate enhances working memory by modulating discrete
frontal and parietal lobe regions in the human brain. Journal of Neuroscience,
20, RC65.
Meijer, P. B. L. (1996). Seeing with sound: The vOICe. Available at: http://www.
artificialvision.com/ (last accessed 3 October 2012).
Melara, R. D. & Marks, L. E. (1990). Processes underlying dimensional interac-
tions: Correspondences between linguistic and nonlinguistic dimensions.
Memory & Cognition, 18, 477–495.
210 References

Mendola, J. D., Conner, I. P., Roy, A., Chan, S.-T., Schwartz, T. L., Odom, J. V., &
Kwong, K. K. (2005). Voxel-based analysis of MRI detects abnormal visual cortex
in children and adults with amblyopia. Human Brain Mapping, 25, 222–236.
Mendoza, C. & Laugier, C. (2003). Tissue cutting using finite elements and force
feedback. In: N. Ayache & H. Delingette (eds), Surgery simulation and soft tissue mod-
eling. IS4TM 2003, LNCS 2673 (pp. 175–182). Berlin Heidelberg: Springer Verlag.
Merabet, L. B. & Pascual-Leone, A. (2010). Neural reorganization following sen-
sory loss: The opportunity of change. Nature Reviews Neuroscience, 11, 44–52.
Merabet, L. B., Hamilton, R., Schlaug, G., Swisher, J. D., Kiriakopoulos, E. T.,
Pitskel, M. B., et al. (2008). Rapid and reversible recruitment of early visual
cortex for touch. PLoS ONE 3: e3046.
Merabet, L. B., Battelli, L., Obretenova, S., Maguire, S., Meijer, P., & Pascual-
Leone, A. (2009). Functional recruitment of visual cortex for sound encoded
object identification in the blind. Neuroreport, 20, 132–138.
Meyer, D. E. & Kieras, D. E. (1997). A computational theory of executive cog-
nitive processes and multiple-task performance: Part 1. Basic mechanisms.
Psychological Review, 104, 3–65.
Meyer, K., Kaplan, J. T., Essex, R., Webber, C., Damasio, H., & Damasio, A. (2010).
Predicting visual stimuli on the basis of activity in auditory cortices. Nature
Neuroscience, 13, 667–668.
Milgram, P., Takemura, H., Utsumi, A., & Kishino, F. (1994). Augmented reality:
A class of displays on the reality-virtuality continuum. SPIE, 2351, 282–292.
Miller, E. K. & Cohen, J. D. (2001). An integrative theory of prefrontal cortex
function. Annual Review of Neuroscience, 24, 167–202.
Milner, A. D. & Goodale, M. A. (2006). The visual brain in action. 2nd ed. Oxford:
Oxford University Press.
Miltner, W. H. R., Braun, C. H., & Coles, M. G. H. (1997). Event-related brain
potentials following incorrect feedback in a time-estimation task: Evidence for
a “generic” neural system for error detection. Journal of Cognitive Neuroscience,
9, 788–798.
Mishkin, M., Ungerleider, L. G., & Macko, K. A. (1983). Object vision and spatial
vision: Two cortical pathways. Trends in Neurosciences, 6, 414–417.
Miyawaki, Y., Uchida, H., Yamashita, O., Sato, M. A., Morito, Y., Tanabe, H. C.,
et al. (2008). Visual image reconstruction from human brain activity using a
combination of multiscale local image decoders. Neuron, 60, 915–929.
Molenaar, I. & Roda, C. (2008). Attention management for dynamic and adaptive
scaffolding. Pragmatics & Cognition, 16, 224–271.
Moller, H. J., Kayumov, L., Bulmash, E. L., Nhan, J., & Shapiro, C. M. (2006).
Simulator performance, microsleep episodes, and subjective sleepiness:
Normative data using convergent methodologies to assess driver drowsiness.
Journal of Psychosomatic Research, 61, 335–342.
Moore, B. C. J. (2005). Basic auditory processes. In: E. B. Goldstein (ed.), Blackwell
handbook of sensation and perception (pp. 379–407). Malden, MA: Blackwell.
Moore, T. & Armstrong, K. M. (2003). Selective gating of visual signals by
microstimulation of frontal cortex. Nature, 421, 370–373.
Moore, T. & Fallah, M. (2001). Control of eye movements and spatial attention.
Proceedings of the National Academy of Sciences, 98, 1273–1276.
Moran, J. & Desimone, R. (1985). Selective attention gates visual processing in
the extrastriate cortex. Science, 229, 782–784.
Moray, N. (1986). Monitoring behavior and supervisory control. In: K. R. Boff,
L. Kaufman, & J. P. Thomas (eds), Handbook of perception and human performance
(Vol. 2, pp. 40.41–40.51). New York: Wiley.
Moray, N. (1993). Designing for attention. In: A. Baddeley & L. Weiskrantz (eds),
Attention: Selection, awareness, and control (pp. 111–134). Oxford: Clarendon Press.
Mori, M. (1970). Bukimi no tani/The uncanny valley (K. F. MacDorman &
T. Minato, Trans.). Energy, 7, 33–35 [original in Japanese].
Morgan, S. T., Hansen, J. C., & Hillyard, S. A. (1996). Selective attention to stimu-
lus location modulates the steady-state visual evoked potential. Proceedings of
the National Academy of Sciences, 93, 4770–4774.
Morkes, J., Kernal, H. K., & Nass, C. (1999). Effects of humor in task-oriented
human-computer interaction and computer-mediated communication:
A direct test of SRCT theory. Human–Computer Interaction, 14, 395–435.
Mosier, K. L., Skitka, L. J., Heers, S., & Burdick, M. (1998). Automation bias:
Decision-making and performance in high-tech cockpits. The International
Journal of Aviation Psychology, 8, 47–63.
Mosier, K. L., Skitka, L. J., Dunbar, M., & McDonnell, L. (2001). Aircrews and
automation bias: The advantages of teamwork? International Journal of Aviation
Psychology, 11, 1–14.
Most, S. B., Simons, D. J., Scholl, B. J., Jimenez, R., Clifford, E., & Chabris, C. F.
(2001). How not to be seen: The contribution of similarity and selective ignor-
ing to sustained inattentional blindness. Psychological Science, 12, 9–17.
Mulder, L. J. M., Dijksterhuis, C., Stuiver, A., & de Waard, D. (2009). Cardiovascular
state changes during performance of a simulated ambulance dispatchers’ task:
Potential use for adaptive support. Applied Ergonomics, 40, 965–977.
Müller, S. V., Möller, J., Rodriguez-Fornells, A., & Münte, T. F. (2005). Brain poten-
tials related to self-generated and external information used for performance
monitoring. Clinical Neurophysiology, 116, 63–74.
Müller, K. R., Tangermann, M., Dornhege, G., Krauledat, M., Curio, G., &
Blankertz, B. (2008). Machine learning for real-time single-trial EEG-analysis:
From brain-computer interfacing to mental state monitoring. Journal of
Neuroscience Methods, 167, 82–90.
Muller-Putz, G. R. & Pfurtscheller, G. (2008). Control of an electrical prosthe-
sis with an SSVEP-based BCI. IEEE Transactions on Biomedical Engineering,
55, 361–364.
Murphy, S. T. & Zajonc, R. B. (1993). Affect, cognition, and awareness: Affective
priming with optimal and suboptimal stimulus exposures. Journal of Personality
and Social Psychology, 64, 723–739.
Murphy, K. & Spencer, A. (2009). Playing video games does not make for bet-
ter visual attention skills. Journal of Articles in Support of the Null Hypothesis,
6, 1–20.
Myers, C. W. & Gray, W. D. (2010). Visual scan adaptation during repeated visual
search. Journal of Vision, 10, 4.
Mylonas, G. P., Kwok, K. W., James, D. R., Leff, D., Orihuela-Espina, F., Darzi, A., &
Yang, G. Z. (2012). Gaze-contingent motor channelling, haptic constraints
and associated cognitive demand for robotic MIS. Medical Image Analysis,
16, 612–631.
Nass, C. & Gong, L. (2000). Speech interfaces from an evolutionary perspective.
Communications of the ACM, 43, 36–43.
NHTSA (National Highway Traffic Safety Administration), United States
Department of Transportation. (2004, December). The 100-car naturalistic driving
study: Phase II – Results of the 100-car field experiment. Chicago, IL: National
Safety Council.
National Research Council. (2007). Human-system integration in the system
development process: A new look (R. W. Pew & A. S. Mavor, eds). Committee
on Human-System Design Support for Changing Technology. Washington, DC:
The National Academies Press.
National Sleep Foundation (2002). 2002 Sleep in America poll. Available at: http://
www.sleepfoundation.org/sites/default/files/2002SleepInAmericaPoll.pdf
Neal, D. T., & Chartrand, T. L. (2011). Embodied emotion perception: Amplifying
and dampening facial feedback modulates emotion. Social Psychological and
Personality Science, 2, 673–678.
Newell, A. (1973). You can’t play 20 questions with nature and win: Projective
comments on the papers of this symposium. In: W. G. Chase (ed.), Visual infor-
mation processing (pp. 283–308). San Diego, CA: Academic Press.
Newell, A. (1980). Physical symbol systems. Cognitive Science, 4, 135–183.
Newell, A. (1990). Unified theories of cognition. Cambridge, MA: Harvard University
Press.
Newell, A. & Rosenbloom, P. (1981). Mechanisms of skill acquisition and the
power law of practice. In: J. Anderson (ed.), Cognitive skills and their acquisition
(pp. 1–81). Hillsdale, NJ: Lawrence Erlbaum Associates.
Nicholson, A. N. & Stone, B. M. (1987). Influence of back angle on the quality of
sleep in seats. Ergonomics, 30, 1033–1041.
Nieuwenhuis, S., Holroyd, C. B., Mol, N., & Coles, M. G. (2004). Reinforcement-
related brain potentials from medial frontal cortex: Origins and functional
significance. Neuroscience and Biobehavioral Reviews, 28, 441–448.
Nobre, A. C., Coull, J. T., Walsh, V., & Frith, C. D. (2003). Brain activations
during visual search: Contributions of search efficiency versus feature binding.
NeuroImage, 18, 91–103.
Nodine, C. F. & Kundel, H. L. (1987). Using eye movements to study visual
search and to improve tumor detection. Radiographics, 7, 1241–1250.
Norman, D. A. (1981). Categorization of action slips. Psychological Review, 88, 1–15.
Norman, D. A. & Shallice, T. (1986). Attention to action: Willed and auto-
matic control of behavior. In: R. J. Davidson, G. E. Schwartz, & D. Shapiro
(eds), Consciousness and self-regulation: Vol. 4. Advances in research and theory
(pp. 1–18). New York: Plenum Press.
Notebaert, W., Houtman, F., Opstal, F. V., Gevers, W., Fias, W., & Verguts, T.
(2009). Post-error slowing: An orienting account. Cognition, 111, 275–279.
Nothdurft, H. C., Gallant, J. L., & van Essen, D. C. (1999). Response modulation by
texture surround in primate area V1: Correlates of “popout” under anesthesia.
Visual Neuroscience, 16, 15–34.
Noton, D. & Stark, L. (1971a). Scanpaths in eye movements during pattern
perception. Science, 171, 308–311.
Noton, D. & Stark, L. (1971b). Scanpaths in saccadic eye movements while view-
ing and recognizing patterns. Vision Research, 11, 929–942.
Nouchi, R., Taki, Y., Takeuchi, H., Hashizume, H., Akitsuki, Y., Shigemune, Y.,
et al. (2012). Brain training game improves executive functions and processing
speed in the elderly. PLoS ONE, 7: e29676.
Nowak, M., Kornhuber, J., & Meyrer, R. (2006). Daytime impairment and neuro-
degeneration in OSAS. Sleep, 29, 1521–1530.
Nunez, P. L., Wingeier, B. M., & Silberstein, R. B. (2001). Spatial-temporal struc-
tures of human alpha rhythms: Theory, microcurrent sources, multiscale
measurements, and global binding of local networks. Human Brain Mapping,
13, 125–164.
O’Connell, R. G., Dockree, P. M., Bellgrove, M. A., Turin, A., Ward, S., Foxe, J. J., &
Robertson, I. H. (2009a). Two types of action error: Electrophysiological evi-
dence for separable inhibitory and sustained attention neural mechanisms
producing error on go/no-go tasks. Journal of Cognitive Neuroscience, 21, 93–104.
O’Connell, R. G., Dockree, P. M., Robertson, I. H., Bellgrove, M. A., Foxe, J. J., &
Kelly, S. P. (2009b). Uncovering the neural signature of lapsing attention:
Electrophysiological signals predict errors up to 20 s before they occur. The
Journal of Neuroscience, 29, 8604–8611.
O’Connor, D. H., Fukui, M. M., Pinsk, M. A., & Kastner, S. (2002). Attention modu-
lates responses in the human lateral geniculate nucleus. Nature Neuroscience,
5, 1203–1209.
O’Craven, K. M., Downing, P. E., & Kanwisher, N. (1999). fMRI evidence for
objects as the units of attentional selection. Nature, 401, 584–587.
O’Doherty, J. P., Hampton, A., & Kim, H. (2007). Model-based fMRI and its appli-
cation to reward learning and decision making. Annals of the New York Academy
of Sciences, 1104, 35–53.
O’Hanlon, J. F. & Beatty, J. (1977). Concurrence of electroencephalographic and
performance changes during a simulated radar watch and some implications for
the arousal theory of vigilance. In: R. R. Mackie (ed.), Vigilance: Theory, operational
performance, and physiological correlates (pp. 189–202). New York: Plenum Press.
Oberauer, K. (2002). Access to information in working memory: Exploring the
focus of attention. Journal of Experimental Psychology: Learning, Memory, and
Cognition, 28, 411–421.
Oberauer, K. (2009). Design for a working memory. In: B. H. Ross (ed.), Psychology
of learning and motivation (Vol. 51, pp. 45–100). San Diego, CA: Academic Press.
Ogilvie, R. D., Wilkinson, R. T., & Allison, S. (1989). The detection of sleep onset:
Behavioral, physiological, and subjective convergence. Sleep, 21, 458–474.
Orth, M., Duchna, H. W., Leiday, M., Widdig, W., Rasche, K., Bauer, T. T., et al.
(2005). Driving simulator and neuropsychological [corrected] testing in OSA
before and under CPAP therapy. European Respiratory Journal, 26, 898–903.
Oskarsson, P., Eriksson, L., & Carlander, O. (2012). Enhanced perception and per-
formance by multimodal threat cueing in simulated combat vehicle. Human
Factors, 54, 122–137.
Otten, L. J., Henson, R. A., & Rugg, M. D. (2002). State-related and item-related neu-
ral correlates of successful memory encoding. Nature Neuroscience, 5, 1339–1344.
Pack, A. I., Maislin, G., Staley, B., Pack, F. M., Rogers, W. C., George, C. F. P., &
Dinges, D. F. (2006). Impaired performance in commercial drivers: Role of sleep
apnea and short sleep duration. American Journal of Respiratory and Critical Care
Medicine, 174, 446–454.
Palmer, S. E. (1992). Common region: A new principle of perceptual grouping.
Cognitive Psychology, 24, 436–447.
Palmer, S. & Rock, I. (1994). Rethinking perceptual organization: The role of
uniform connectedness. Psychonomic Bulletin & Review, 1, 29–55.
Palva, J. M., Monto, S., Kulashekhar, S., & Palva, S. (2010). Neuronal synchrony
reveals working memory networks and predicts individual memory capacity.
Proceedings of the National Academy of Sciences, 107, 7580–7585.
Papassotiropoulos, A. & de Quervain, D. (2011). Genetics of human episodic
memory: Dealing with complexity. Trends in Cognitive Sciences, 15,
381–387.
Parasuraman, R. (2003). Neuroergonomics: Research and practice. Theoretical
Issues in Ergonomics Science, 4, 5–20.
Parasuraman, R. (2009). Assaying individual differences in cognition with
molecular genetics: Theory and application. Theoretical Issues in Ergonomics
Science, 10, 399–416.
Parasuraman, R. (2011a). Can behavioral, neuroimaging, and molecular genetic
studies of “cognitive superstars” tell us how to augment cognition? In:
Proceedings of the Human Factors and Ergonomics Society (pp. 192–196). Santa
Monica, CA: Human Factors and Ergonomics Society.
Parasuraman, R. (2011b). Neuroergonomics: Brain, cognition, and performance
at work. Current Directions in Psychological Science, 20, 181–186.
Parasuraman, R. & Giambra, L. (1991). Skill development in vigilance: Effects of
event rate and age. Psychology and Aging, 6, 155–169.
Parasuraman, R. & Riley, V. (1997). Humans and automation: Use, misuse, disuse,
abuse. Human Factors, 39, 230–253.
Parasuraman, R. & Greenwood, P. M. (2004). Molecular genetics of visuospatial
attention and working memory. In: M. I. Posner (ed.), Cognitive neuroscience of
attention (pp. 245–259). New York: Guilford.
Parasuraman, R. & Rizzo, M. (2007). Neuroergonomics: The brain at work.
New York: Oxford University Press.
Parasuraman, R. & Wilson, G. F. (2008). Putting the brain to work:
Neuroergonomics past, present, and future. Human Factors, 50, 468–474.
Parasuraman, R. & Manzey, D. (2010). Complacency and bias in human use of
automation: An attentional integration. Human Factors, 52, 381–410.
Parasuraman, R. & Jiang, Y. (2012). Individual differences in cognition,
affect, and performance: Behavioral, neuroimaging, and molecular genetic
approaches. NeuroImage, 59, 70–82.
Parasuraman, R., Greenwood, P. M., Haxby, J. V., & Grady, C. L. (1992). Visuospatial
attention in dementia of the Alzheimer type. Brain, 115, 711–733.
Parasuraman, R., Greenwood, P. M., Kumar, R., & Fossella, J. (2005). Beyond heri-
tability: Neurotransmitter genes differentially modulate visuospatial attention
and working memory. Psychological Science, 16, 200–207.
Parker, P., Englehart, K., & Hudgins, B. (2006). Myoelectric signal processing for
control of powered limb prostheses. Journal of Electromyography and Kinesiology,
16, 541–548.
Parkhurst, D., Law, K., & Niebur, E. (2002). Modeling the role of salience in the
allocation of overt visual attention. Vision Research, 42, 107–123.
Parks, P. D., Durand, G., Tsismenakis, A. J., Vela-Bueno, A., & Kales, S. N. (2009).
Screening for obstructive sleep apnea during commercial driver medical exami-
nations. Journal of Occupational and Environmental Medicine, 51, 275–282.
Pascual-Leone, A., Walsh, V., & Rothwell, J. (2000). Transcranial magnetic stimu-
lation in cognitive neuroscience—Virtual lesion, chronometry and functional
connectivity. Current Opinion in Neurobiology, 10, 232–237.
Peden, M. & Sminkey, L. (2004). World Health Organization dedicates World
Health Day to road safety. Injury Prevention, 10, 67.
Peelen, M. V., Heslenfeld, D. J., & Theeuwes, J. (2004). Endogenous and
exogenous attention shifts are mediated by the same large-scale neural
network. NeuroImage, 22, 822–830.
Perreira Da Silva, M. P., Courboulay, V., Prigent, A., & Estraillier, P. (2008).
Real-time face tracking for attention aware adaptive games. In: A. Gasteratos,
M. Vincze & J. K. Tsotsos (eds), Computer Vision Systems (Vol. 5008/2008,
pp. 99–108). Berlin: Springer.
Petersen, S. E., van Mier, H., Fiez, J. A., & Raichle, M. E. (1998). The effects of
practice on the functional anatomy of task performance. Proceedings of the
National Academy of Sciences, 95, 853–860.
Pfurtscheller, G. & Lopes da Silva, F. H. (1999). Event-related EEG/MEG syn-
chronization and desynchronization: Basic principles. Clinical Neurophysiology,
110, 1842–1857.
Pfurtscheller, G., Allison, B. Z., Bauernfeind, G., Brunner, C., Solis-Escalante, T.,
Scherer, R., et al. (2010). The hybrid BCI. Frontiers in Neuroscience, 4, 3.
Philip, P., Sagaspe, P., Taillard, J., Chaumet, G., Bayon, V., Coste, O., Bioulac, B.,
& Guilleminault, C. (2008). Maintenance of wakefulness test, obstructive sleep
apnea syndrome, and driving risk. Annals of Neurology, 64, 410–416.
Phillips, J. M., McAlonan, K., Robb, W. G., & Brown, V. J. (2000). Cholinergic
neurotransmission influences covert orientation of visuospatial attention in
the rat. Psychopharmacology (Berlin), 150, 112–116.
Pires, G., Nunes, U., & Castelo-Branco, M. (2012). Comparison of a row-column
speller vs. a novel lateral single-character speller: Assessment of BCI for severe
motor disabled patients. Clinical Neurophysiology, 123, 1168–1181.
Poldrack, R. A., Desmond, J. E., Glover, G. H., & Gabrieli, J. D. (1998). The neural
basis of visual skill learning: An fMRI study of mirror reading. Cerebral Cortex,
8, 1–10.
Pomerantz, J. R., Sager, L. C., & Stoever, R. J. (1977). Perception of wholes
and of their component parts: Some configural superiority effects. Journal of
Experimental Psychology: Human Perception and Performance, 3, 422–435.
Pope, A. T., Bogart, E., & Bartolome, D. (1995). Biocybernetic system evaluates
indices of operator engagement. Biological Psychology, 40, 187–196.
Posner, M. I. (1978). Chronometric explorations of mind. Hillsdale, NJ: Erlbaum.
Posner, M. I. (1980). Orienting of attention. Quarterly Journal of Experimental
Psychology, 32, 3–25.
Posner, M. I. (2012). Expanding horizons of ergonomics research. NeuroImage,
59, 149–153.
Posner, M. I. & Petersen, S. E. (1990). The attention system of the human brain.
Annual Review of Neuroscience, 13, 25–42.
Posner, M. I. & Fan, J. (2007). Attention as an organ system. In: J. Pomerantz
(ed.), Neurobiology of perception and communication: From synapse to society. De
Lange conference IV. London: Cambridge University Press.
Posner, M. I. & Rothbart, M. K. (2007). Research on attention networks as a
model for the integration of psychological science. Annual Review of Psychology,
58, 1–23.
Posner, M. I., Snyder, C. R., & Davidson, B. J. (1980). Attention and the detection
of signals. Journal of Experimental Psychology: General, 109, 160–174.
Posner, M. I., Walker, J. A., Friedrich, F. J., & Rafal, R. D. (1984). Effects of parietal
injury on covert orienting of attention. Journal of Neuroscience, 4, 1863–1874.
Posner, M. I., Rothbart, M. K., & Sheese, B. E. (2007). Attention genes.
Developmental Science, 10, 24–29.
Praamstra, P., Boutsen, L., & Humphreys, G. W. (2005). Frontoparietal control of
spatial attention and motor intention in human EEG. Journal of Neurophysiology,
94, 764–774.
Prinzel, L. J., III, Freeman, F. G., Scerbo, M. W., Mikulka, P. J., & Pope, A. T. (2003).
Effects of a psychophysiological system for adaptive automation on perfor-
mance, workload, and the event-related potential P300 component. Human
Factors, 45, 601–614.
Proctor, R. W., & Vu, K.-P. L. (2006). Stimulus-response compatibility principles:
Data, theory, and application. Boca Raton, FL: CRC Press.
Proulx, M. J. & Harder, A. (2008). Sensory substitution: Visual-to-auditory sen-
sory substitution devices for the blind. Tijdschrift voor Ergonomie (Dutch Journal
of Ergonomics), 33, 20–22.
Ptito, M., Fumal, A., de Noordhout, A. M., Schoenen, J., Gjedde, A., & Kupers, R.
(2008). TMS of the occipital cortex induces tactile sensations in the fingers of
blind Braille readers. Experimental Brain Research, 184, 193–200.
Qian, M., Aguilar, M., Zachery, K. N., Privitera, C., Klein, S., Carney, T., &
Nolte, L. W. (2009). Decision-level fusion of EEG and pupil features for single-
trial visual detection analysis. IEEE Transactions on Biomedical Engineering, 56,
1929–1937.
Qin, Y., Sohn, M. H., Anderson, J. R., Stenger, V. A., Fissell, K., Goode, A., &
Carter, C. S. (2003). Predicting the practice effects on the blood oxygenation
level-dependent (BOLD) function of fMRI in a symbolic manipulation task.
Proceedings of the National Academy of Sciences, 100, 4951–4956.
Rabbitt, P. M. A. (1966). Errors and error correction in choice-response tasks.
Journal of Experimental Psychology, 71, 264–272.
Rabbitt, P. M. A. (1990). Age, IQ and awareness, and recall of errors. Ergonomics,
33, 1291–1305.
Rabbitt, P. M. A. & Rodgers, B. (1977). What does a man do after he makes an
error? An analysis of response programming. Quarterly Journal of Experimental
Psychology, 29, 232–240.
Rabipour, S. & Raz, A. (2012). Training the brain: Fact and fad in cognitive and
behavioral remediation. Brain and Cognition, 79, 159–179.
Rahne, T., Böckmann, M., von Specht, H., & Sussman, E. S. (2007). Visual cues
can modulate integration and segregation of objects in auditory scene analysis.
Brain Research, 1144, 127–135.
Raichle, M. E. & Snyder, A. Z. (2007). A default mode of brain function: A brief
history of an evolving idea. NeuroImage, 37, 1083–1090.
Raichle, M. E., Fiez, J. A., Videen, T. O., MacLeod, A. K., Pardo, J. V., Fox, P. T., &
Petersen, S. E. (1994). Practice-related changes in human brain functional
anatomy during nonmotor learning. Cerebral Cortex 4, 8–26.
Raichle, M. E., MacLeod, A. M., Snyder, A. Z., Powers, W.J., Gusnard, D. A., &
Shulman, G. L. (2001). A default mode of brain function. Proceedings of the
National Academy of Sciences, 98, 676–682.
Raizada, R. D. & Grossberg, S. (2003). Towards a theory of the laminar architec-
ture of cerebral cortex: Computational clues from the visual system. Cerebral
Cortex, 13, 100–113.
Ramoser, H., Müller-Gerking, J., & Pfurtscheller, G. (2000). Optimal spatial
filtering of single trial EEG during imagined hand movement. IEEE Transactions
on Rehabilitation Engineering, 8, 441–446.
Ramsey, C. S., Werchan, P. M., Isdahl, W. M., Fischer, J., & Gibbons, J. A. (2008).
Acceleration tolerance at night with acute fatigue and stimulants. Aviation,
Space, and Environmental Medicine, 79, 769–773.
Rapp, D. N. (2006). The value of attention aware systems in educational settings.
Computers in Human Behavior, 22, 603–614.
Rauschenberger, R. & Yantis, S. (2006). Perceptual encoding efficiency in visual
search. Journal of Experimental Psychology: General, 135, 116–131.
Rauscher, H., Popp, W., & Wanke, T. (1991). Acceptance of CPAP therapy for sleep
apnea. Chest, 100, 1019–1023.
Rayner, K. (1998). Eye movements in reading and information processing:
20 years of research. Psychological Bulletin, 124, 372–422.
Reason, J. T. (1979). Actions not as planned: The price of automatization. In:
G. Underwood & R. Stevens (eds), Aspects of consciousness (pp. 67–89). London:
Academic Press.
Reason, J. (1990). Human error. Cambridge: Cambridge University Press.
Redline, S., Strauss, M. E., Adams, N., Winters, M., Roebuck, T., Spry, K.,
Rosenberg, C., & Adams, K. (1997). Neuropsychological function in mild sleep-
disordered breathing. Sleep, 20, 160–167.
Regan, D. (1966). An effect of stimulus colour on average steady-state potentials
evoked in man. Nature, 210, 1056–1057.
Regan, D. (1989). Human brain electrophysiology: Evoked potentials and evoked mag-
netic fields in science and medicine. Amsterdam: Elsevier Science Ltd.
Reinvang, I., Deary, I. J., Fjell, A. M., Steen, V. M., Espeseth, T., & Parasuraman, R.
(2010). Neurogenetic effects on cognition in aging brains: A window of oppor-
tunity for intervention? Frontiers in Aging Neuroscience, 2, 143.
Remington, R. W., Johnston, J. C., Ruthruff, E., Gold, M., & Romera, M. (2000).
Visual search in complex displays: Factors affecting conflict detection by air
traffic controllers. Human Factors, 42, 349–366.
Reyner, L. A. & Horne, J. A. (1998). Falling asleep whilst driving: Are drivers
aware of prior sleepiness? International Journal of Legal Medicine, 111, 120–123.
Reynolds, J. H. & Desimone, R. (2003). Interacting roles of attention and visual
salience in V4. Neuron, 37, 853–863.
Reynolds, J. H., Chelazzi, L., & Desimone, R. (1999). Competitive mechanisms subserve
attention in macaque areas V2 and V4. Journal of Neuroscience, 19, 1736–1753.
Ridderinkhof, K. R., Nieuwenhuis, S., & Bashore, T. R. (2003). Errors are fore-
shadowed in brain potentials associated with action monitoring in cingulate
cortex in humans. Neuroscience Letters, 348, 1–4.
Ridderinkhof, K. R., Ullsperger, M., Crone, E. A., & Nieuwenhuis, S. (2004). The
role of the medial frontal cortex in cognitive control. Science, 306, 443–447.
Rilling, J. K. & Sanfey, A. G. (2011). The neuroscience of social decision-making.
Annual Review Of Psychology, 62, 23–48.
Ritter, S., Anderson, J. R., Koedinger, K. R., & Corbett, A. (2007). Cognitive tutor:
Applied research in mathematics education. Psychonomic Bulletin & Review, 14,
249–255.
Rizzo, M. (2011). Impaired driving from medical conditions: A 70-year-old man
trying to decide if he should continue driving. The Journal of the American
Medical Association, 305, 1018–1026.
Rizzo, M., Robinson, S., & Neale, V. (2007). The brain in the wild. In:
R. Parasuraman & M. Rizzo (eds), Neuroergonomics: The brain at work
(pp. 113–128). Oxford: Oxford University Press.
Rizzolatti, G. & Craighero, L. (2004). The mirror-neuron system. Annual Review
of Neuroscience, 27, 169–192.
Rizzolatti, G. & Sinigaglia C. (2010). The functional role of the parieto-
frontal mirror circuit: Interpretations and misinterpretations. Nature Reviews
Neuroscience, 11, 264–274.
Rizzolatti, G., Riggio, L., Dascola, I., & Umiltà, C. (1987). Reorienting attention
across the horizontal and vertical meridians: Evidence in favor of a premotor
theory of attention. Neuropsychologia, 25, 31–40.
Robertson, I. H., Manly, T., Andrade, J., Baddeley, B. T., & Yiend, J. (1997).
‘Oops!’: Performance correlates of everyday attentional failures in traumatic
brain injured and normal subjects. Neuropsychologia, 35, 747–758.
Rockland, K. S. & Ojima, H. (2003). Multisensory convergence in calcarine visual
areas in macaque monkey. International Journal of Psychophysiology, 50, 19–26.
Roda, C. (2010). Attention support in digital environments. Nine questions to be
answered. New Ideas in Psychology, 28, 354–364.
Rodrigues, R. N., Abreu e Silva Rodrigues, A. A., Pratesi, R., Gomes, M. M.,
Vasconcelos, A. M., Erhardt, C., & Krieger, J. (2007). Outcome of sleepiness
and fatigue scores in obstructive sleep apnea syndrome patients with and
without restless legs syndrome after nasal CPAP. Arquivos de Neuro-psiquiatria,
65, 54–58.
Roehrs, T. & Roth, T. (2008). Caffeine: Sleep and daytime sleepiness. Sleep
Medicine Reviews, 12, 153–162.
Roehrs, T., Merrion, M., Pedrosi, B., Stepanski, E., Zorick, F., & Roth, T. (1995).
Neuropsychological function in obstructive sleep apnea syndrome (OSAS) com-
pared to chronic obstructive pulmonary disease (COPD). Sleep, 18, 338–388.
Roelfsema, P. R. (2006). Cortical algorithms for perceptual grouping. Annual
Review of Neuroscience, 29, 203–227.
Roelfsema, P. R., König, P., Engel, A. K., Sireteanu, R., & Singer, W. (1994).
Reduced synchronization in the visual cortex of cats with strabismic amblyo-
pia. European Journal of Neuroscience, 6, 1645–1655.
Roelfsema, P. R., Lamme, V. A. F., & Spekreijse, H. (1998). Object-based attention
in the primary visual cortex of the macaque monkey. Nature, 395, 376–381.
Roge, J., Pebayle, T., El Hannachi, S., & Muzet, A. (2003). Effect of sleep depri-
vation and driving duration on the useful visual field in younger and older
subjects during simulator driving. Vision Research, 43, 1465–1467.
Roger, C., Bénar, C. G., Vidal, F., Hasbroucq, T., & Burle, B. (2010). Rostral
cingulate zone and correct response monitoring: ICA and source localiza-
tion evidences for the unicity of correct- and error-negativities. NeuroImage,
51, 391–403.
Rohenkohl, G. & Nobre, A. (2011). Alpha oscillations related to anticipatory atten-
tion follow temporal expectations. Journal of Neuroscience, 31, 14076–14084.
Romei, V., Murray, M. M., Cappe, C., & Thut, G. (2010). Preperceptual and
stimulus-selective enhancement of low-level human visual cortex excitability
by sounds. Current Biology, 19, 1799–1805.
Rossetti, Y. & Revonsuo, A. (eds.) (2000). Beyond dissociation: Interaction between
dissociated implicit and explicit processing. Amsterdam: John Benjamins.
Rosson, M. B. & Carroll, J. M. (2003). Usability engineering: Scenario-based
development of human-computer interaction. New York: Morgan Kaufmann.
Rovira, E., McGarry, K., & Parasuraman, R. (2007). Effects of imperfect automa-
tion on decision making in a simulated command and control task. Human
Factors, 49, 76–87.
Royal, D. (2003). Volume I Findings: National Survey of Distracted and Drowsy
Driving Attitudes and Behavior. NHTSA, Technical Report 809 566.
Rueda, M. R., Fan, J., McCandliss, B. D., Halparin, J. D., Gruber, D. B., Lercari, L. P.,
& Posner, M. I. (2004). Development of attentional networks in childhood.
Neuropsychologia, 42, 1029–1040.
Rueda, M. R., Rothbart, M. K., McCandliss, B. D., Saccomanno, L., & Posner, M. I.
(2005). Training, maturation, and genetic influences on the development
of executive attention. Proceedings of the National Academy of Sciences, 102,
14931–14936.
Russell, R., Duchaine, B., & Nakayama, K. (2009). Super-recognizers: People
with extraordinary face recognition ability. Psychonomic Bulletin & Review,
16, 252–257.
Saalmann, Y. B., Pigarev, I. N., & Vidyasagar, T. R. (2007). Neural mechanisms of
visual attention: How top-down feedback highlights relevant locations. Science,
316, 1612–1615.
Sahakian, B. & Morein-Zamir, S. (2007). Professor’s little helper. Nature, 450,
1157–1159.
Salinsky, M. C., Wegener, K., & Sinnema, F. (1992). Epilepsy, driving laws, and
patient disclosure to physicians. Epilepsia, 33, 469–472.
Salvucci, D. D. (2006). Modeling driver behavior in a cognitive architecture.
Human Factors, 48, 362–380.
Salvucci, D. D. & Taatgen, N. A. (2008). Threaded cognition: An integrated theory
of concurrent multitasking. Psychological Review, 115, 101–130.
Salvucci, D. D. & Taatgen, N. A. (2011). The multitasking mind. New York: Oxford
University Press.
Salvucci, D. D., Monk, C. A., & Trafton, J. G. (2009). A process-model account
of task interruption and resumption: When does encoding of the problem
state occur? Proceedings of the human factors and ergonomics society 53rd annual
meeting (pp. 799–803). Santa Monica, CA: Human Factors and Ergonomics
Society.
Salzer, Y., Oron-Gilad, T., Ronen, A., & Parmet, Y. (2011). Vibrotactile “on-thigh”
alerting system in the cockpit. Human Factors, 53, 118–131.
Samar, V. J., Bopardikar, A., Rao, R., & Swartz, K. (1999). Wavelet analysis of neuro-
electric waveforms: A conceptual tutorial. Brain and Language, 66, 7–60.
Sanders, M. S. & McCormick, E. J. (1993). Human factors in engineering and design
(7th ed.). New York: McGraw-Hill.
Sanderson, P. M., Flach, J. M., Buttigieg, M. A., & Casey, E. J. (1989). Object dis-
plays do not always support better integrated task performance. Human Factors,
31, 183–198.
Sanfey, A. G., Loewenstein, G., McClure, S. M., & Cohen, J. D. (2006).
Neuroeconomics: Cross-currents in research on decision-making. Trends in
Cognitive Sciences, 10, 108–116.
Saper, C. B., Scammell, T. E., & Lu, J. (2005). Hypothalamic regulation of sleep and
circadian rhythms. Nature, 437, 1257–1263.
Sassani, A., Findley, L., Kryger, M., Goldlust, E., George, C., & Davidson, T.
(2004). Reducing motor-vehicle collisions, costs, and fatalities by treating
obstructive sleep apnea syndrome. Sleep, 27, 453–458.
Sathian, K. & Zangaladze, A. (2002). Feeling with the mind’s eye: contribution of
visual cortex to tactile perception. Behavioural Brain Research, 135, 127–132.
Saupe, K., Widmann, A., Bendixen, A., Müller, M. M., & Schröger, E. (2009).
Effects of intermodal attention on the auditory steady-state response and the
event-related potential. Psychophysiology, 46, 321–327.
Saygin, A. P., Chaminade, T., Ishiguro, H., Driver, J., & Frith, C. (2012). The
thing that should not be: Predictive coding and the uncanny valley in per-
ceiving human and humanoid robot actions. Social, Cognitive, and Affective
Neuroscience, 7, 413–422.
Scheibert, J., Leurent, S., Prevost, A., & Debrégeas, G. (2009). The role of finger-
prints in the coding of tactile information probed with a biomimetic sensor.
Science, 323, 1503–1506.
Schenk, T., Franz, V., & Bruno, N. (2011). Vision-for-perception and
vision-for-action: Which model is compatible with the available psychophysi-
cal and neuropsychological data? Vision Research, 51, 812–818.
Schilbach, L., Eickhoff, S. B., Rotarska-Jagiela, A., Fink, G. R., & Vogeley, K.
(2008). Minds at rest? Social cognition as the default mode of cognizing and
its putative relationship to the ‘default system’ of the brain. Consciousness and
Cognition, 17, 457–467.
Schmitz, T. W., de Rosa, E., & Anderson, A. K. (2009). Opposing influences of
affective state valence on visual cortical encoding. Journal of Neuroscience, 29,
7199–7207.
Schröger, E. & Wolff, C. (1998). Attentional orienting and reorienting is indicated
by human event-related brain potentials. NeuroReport, 9, 3355–3358.
Schultheis, H. & Jameson, A. (2004). Assessing cognitive load in adaptive
hypermedia systems: Physiological and behavioral methods. In: W. Nejdl &
P. De Bra (eds), Adaptive hypermedia and adaptive web-based systems: Proceedings
of AH 2004 (pp. 225–234). Berlin: Springer.
Schultz, W. (1998). Predictive reward signal of dopamine neurons. Journal of
Neurophysiology, 80, 1–27.
Schupp, H. T., Markus, J., Weike, A. I., & Hamm, A. O. (2003). Emotional facilita-
tion of sensory processing in the visual cortex. Psychological Science, 14, 7–13.
Schwartz, A. B., Cui, X. T., Weber, D. J., & Moran, D. W. (2006). Brain-controlled
interfaces: Movement restoration with neural prosthetics. Neuron, 52,
205–220.
Schwarzkopf, D. S., Song, C., & Rees, G. (2011). The surface area of human
V1 predicts the subjective experience of object size. Nature Neuroscience,
14, 28–30.
Senders, J. W. (1983). Visual sampling processes. Hillsdale, NJ: Erlbaum.
Serences, J. T. & Boynton, G. M. (2007). Feature-based attentional modulations in
the absence of direct visual stimulation. Neuron, 55, 301–312.
Sereno, M. I., Pitzalis, S., & Martinez, A. (2001). Mapping of contralateral space
in retinotopic coordinates by a parietal cortical area in humans. Science, 294,
1350–1354.
Sforza, E. & Lugaresi, E. (1995). Daytime sleepiness and nasal continuous positive
airway pressure therapy in obstructive sleep apnea syndrome patients: Effects
of chronic treatment and 1-night therapy withdrawal. Sleep – Europe, 18,
195–201.
Shams, L., Kamitani, Y., & Shimojo, S. (2000). Illusions: What you see is what
you hear. Nature, 408, 788.
Sheridan, T. (1970). On how often the supervisor should sample. IEEE Transactions
on Systems Science and Cybernetics, SSC-6, 140–145.
Sheridan, T. (2011). Adaptive automation, level of automation, allocation
authority, supervisory control, and adaptive control: Distinctions and modes
of adaptation. IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems
and Humans, 41, 662–667.
Shibata, K., Watanabe, T., Sasaki, Y., & Kawato, M. (2011). Perceptual learning
incepted by decoded fMRI neurofeedback without stimulus presentation.
Science, 334, 1413–1415.
Shirani, A. & St. Louis, E. K. (2009). Illuminating rationale and uses for light
therapy. Journal of Clinical Sleep Medicine, 5, 155–163.
Shirtcliff, E. A. & Marrocco, R. T. (2003). Salivary cotinine levels in human
tobacco smokers predict the attentional validity effect size during smoking
abstinence. Psychopharmacology (Berlin), 166, 11–18.
Shneiderman, B. (1998). Designing the user interface: Strategies for effective human-
computer interaction (3rd ed.). Reading, MA: Addison-Wesley.
Sillito, A. M., Cudeiro, J., & Jones, H. E. (2006). Always returning: Feedback and
sensory processing in visual cortex and thalamus. Trends in Neurosciences,
29, 307–316.
Silver, M. A., Ress, D., & Heeger, D. J. (2005). Topographic maps of visual spatial
attention in human parietal cortex. Journal of Neurophysiology, 94, 1358–1371.
Simons, R. F. (2010). The way of our errors: Theme and variations. Psychophysiology,
47, 1–14.
Simons, D. J. & Chabris, C. F. (1999). Gorillas in our midst: Sustained inatten-
tional blindness for dynamic events. Perception, 28, 1059–1074.
Singer, T. (2008). Understanding others: Brain mechanisms of theory of mind
and empathy. In: P. W. Glimcher, C. F. Camerer, E. Fehr, & R. A. Poldrack
(eds), Neuroeconomics: Decision making and the brain (pp. 251–268). Amsterdam:
Elsevier.
Skitka, L. J., Mosier, K. L., & Burdick, M. (2000). Accountability and automation
bias. International Journal of Human-Computer Studies, 52, 701–717.
Smallwood, J. & Schooler, J. W. (2006). The restless mind. Psychological Bulletin,
132, 946–958.
Smith, E. E. & Jonides, J. (1997). Working memory: A view from neuroimaging.
Cognitive Psychology, 33, 5–42.
Smith, A. T., Singh, K. D., Williams, A. L., & Greenlee, M. W. (2001). Estimating
receptive field size from fMRI data in human striate and extrastriate visual
cortex. Cerebral Cortex, 11, 1182–1190.
Sonuga-Barke, E. J. & Castellanos, F. X. (2007). Spontaneous attentional fluc-
tuations in impaired states and pathological conditions: A neurobiological
hypothesis. Neuroscience and Biobehavioral Reviews, 31, 977–986.
Soon, C. S., Brass, M., Heinze, H.-J., & Haynes, J.-D. (2008). Unconscious determi-
nants of free decisions in the human brain. Nature Neuroscience, 11, 543–545.
Spence, C. & Driver, J. (eds) (2004). Crossmodal space and crossmodal attention.
Oxford, UK: Oxford University Press.
Spence, C. & Ho, C. (2008). Multisensory warning signals for event perception
and safe driving. Theoretical Issues in Ergonomics Science, 9, 523–554.
Spence, C., Nicholls, M. E. R., Gillespie, N., & Driver, J. (1998). Cross-modal links
in exogenous covert spatial orienting between touch, audition, and vision.
Perception & Psychophysics, 60, 544–557.
Spence, C., Shore, D. I., & Klein, R. M. (2001). Multisensory prior entry. Journal
of Experimental Psychology: General, 130, 799–832.
Sperandio, J. C. (1978). The regulation of working methods as a function of work-
load among air traffic controllers. Ergonomics, 21, 195–202.
Sperling, G. (1960). The information available in brief visual presentations.
Psychological Monographs, 74(11, Whole No. 498).
Spiers, H. J. & Maguire, E. A. (2007). Decoding human brain activity during
real-world experiences. Trends in Cognitive Sciences, 11, 356–365.
Stanton, N. & Edworthy, J. (1998). Auditory affordances in the intensive
treatment unit. Applied Ergonomics, 5, 389–394.
Steelman, K. S., McCarley, J. S., & Wickens, C. D. (2011). Modeling the control of
attention in visual workspaces. Human Factors, 53, 142–153.
Stefanucci, J. K., Gagnon, K. T., & Lessard, D. A. (2011). Follow your heart:
Emotion adaptively influences perception. Social and Personality Psychology
Compass, 5, 296–308.
Stetson, C., Cui, X., Montague, P. R., & Eagleman, D. M. (2006). Motor-sensory
recalibration leads to an illusory reversal of action and sensation. Neuron, 51,
651–659.
Stocco, A. & Anderson, J. R. (2008). Endogenous control and task representation:
An fMRI study in algebraic problem-solving. Journal of Cognitive Neuroscience,
20, 1300–1314.
Stocco, A., Lebiere, C., & Anderson, J. R. (2010). Conditional routing of
information to the cortex: A model of the basal ganglia’s role in cognitive
coordination. Psychological Review, 117, 541–574.
Stoohs, R. A., Guilleminault, C., Itoi, A., & Dement, W. C. (1994). Traffic
accidents in commercial long-haul truck drivers: The influence of sleep-
disordered breathing and obesity. Sleep, 17, 619–623.
Strayer, D. L. & Drews, F. A. (2004). Profiles in driver distraction: Effects of
cell phone conversations on younger and older drivers. Human Factors, 46,
640–649.
Stroop, J. R. (1935). Studies of interference in serial verbal reactions. Journal of
Experimental Psychology, 18, 643–662.
Sturm, W. & Willmes, K. (2001). On the functional neuroanatomy of intrinsic and
phasic alertness. NeuroImage, 14, S76–S84.
Sugase, Y., Yamane, S., Ueno, S., & Kawano, K. (1999). Global and fine infor-
mation coded by single neurons in the temporal visual cortex. Nature, 400,
869–873.
Supèr, H. & Lamme, V. A. F. (2007). Altered figure-ground perception in monkeys
with an extra-striate lesion. Neuropsychologia, 45, 3329–3334.
Szalma, J. F. (2009). Individual differences in human–technology interaction:
Incorporating variation in human characteristics into human factors and ergo-
nomics research and design. Theoretical Issues in Ergonomics Science, 10, 381–397.
Taatgen, N. A. & Lee, F. J. (2003). Production compilation: A simple mechanism
to model complex skill acquisition. Human Factors, 45, 61–76.
Taatgen, N. A. & van Rijn, H. (2011). Traces of times past: Representations of
temporal intervals in memory. Memory & Cognition, 39, 1546–1560.
Taatgen, N. A., van Rijn, D. H., & Anderson, J. R. (2007). An integrated theory
of prospective time interval estimation: The role of cognition, attention, and
learning. Psychological Review, 114, 577–598.
Taatgen, N. A., Huss, D., Dickison, D., & Anderson, J. R. (2008). The acquisition
of robust and flexible cognitive skills. Journal of Experimental Psychology: General,
137, 548–565.
Takeuchi, H., Taki, Y., & Kawashima, R. (2010a). Effects of working memory train-
ing on cognitive functions and neural systems. Reviews in the Neurosciences,
21, 427–449.
Takeuchi, H., Sekiguchi, A., Taki, Y., Yokoyama, S., Yomogida, Y., Komuro, N.,
et al. (2010b). Training of working memory impacts structural connectivity.
The Journal of Neuroscience, 30, 3297–3303.
Tallon-Baudry, C. & Bertrand, O. (1999). Oscillatory gamma activity in humans
and its role in object representation. Trends in Cognitive Sciences, 3, 151–162.
Tamietto, M. & de Gelder, B. (2010). Neural bases of the non-conscious percep-
tion of emotional signals. Nature Reviews Neuroscience, 11, 697–709.
Tang, Y.-Y. & Posner, M. I. (2009). Attention training and attention state training.
Trends in Cognitive Sciences, 13, 222–227.
Tartaglia, E. M., Bamert, L., Mast, F. W., & Herzog, M. H. (2009). Human percep-
tual learning by mental imagery. Current Biology, 19, 2081–2085.
Tattersall, A. J. & Hockey, G. R. J. (1995). Level of operator control and changes
in heart rate variability during simulated flight maintenance. Human Factors,
37, 682–698.
Teran-Santos, J., Jimenez-Gomez, A., & Cordero-Guevara, J. (1999). The associa-
tion between sleep apnea and the risk of traffic accidents. New England Journal
of Medicine, 340, 847–851.
Thaler, R. H. (1981). Some empirical evidence on dynamic inconsistency.
Economic Letters, 8, 201–207.
Theeuwes, J. (1991). Cross-dimensional perceptual selectivity. Perception &
Psychophysics, 50, 184–193.
Theeuwes, J. (1992). Perceptual selectivity for color and form. Perception &
Psychophysics, 51, 599–606.
Theeuwes, J. (2004). Top-down search strategies cannot override attentional
capture. Psychonomic Bulletin & Review, 11, 65–70.
Theeuwes, J. (2010). Top-down and bottom-up control of visual selection. Acta
Psychologica, 135, 77–99.
Thompson, K. G. & Bichot, N. P. (2005). A visual salience map in the primate
frontal eye field. Progress in Brain Research, 147, 251–262.
Thompson, B., Mansouri, B., Koski, L., & Hess, R. F. (2008). Brain plasticity in
the adult: Modulation of function in amblyopia with rTMS. Current Biology,
18, 1–5.
Thompson, K., Read, K., Anderson, S., & Rizzo, M. (2011). Systematic analysis of
real-world driving behavior following focal brain lesions. Proceedings of Driving
Assessment 2011: The Sixth International Driving Symposium on Human Factors in
Driver Assessment, Training and Vehicle Design. Lake Tahoe, California.
Thut, G., Nietzel, A., Brandt, S. A., & Pascual-Leone, A. (2006). Alpha-band
electroencephalographic activity over occipital cortex indexes visuospatial
attention bias and predicts visual target detection. The Journal of Neuroscience,
26, 9494–9502.
Thut, G., Schyns, P., & Gross, J. (2011). Entrainment of perceptually relevant brain
oscillations by non-invasive rhythmic stimulation of the human brain. Frontiers
in Perception Science, 2, Number 00170. DOI: 10.3389/fpsyg.2011.00170.
Tippin, J., Sparks, J. D., & Rizzo, M. (2009). Visual vigilance in drivers with
obstructive sleep apnea. Journal of Psychosomatic Research, 67, 143–151.
Toffanin, P., de Jong, R., Johnson, A., & Martens, S. (2009). Using frequency
tagging to quantify attentional deployment in a visual divided attention task.
International Journal of Psychophysiology, 72, 289–298.
Toffanin, P., Johnson, A., & de Jong, R. (2011). The p4pc: An electrophysiological
marker of attentional disengagement? International Journal of Psychophysiology,
81, 72–81.
Tong, F. & Pratte, M. S. (2012). Decoding patterns of human brain activity.
Annual Review of Psychology, 63, 483–509.
Tononi, G., Srinivasan, R., Russell, D. P., & Edelman, G. M. (1998). Investigating
neural correlates of conscious perception by frequency-tagged neuromagnetic
responses. Proceedings of the National Academy of Sciences, 95, 3198–3203.
Tregear, S. J. (2007). Obstructive sleep apnea and commercial motor vehicle driver
safety: Evidence report. Prepared by Manila Consulting Incorporated and the
ECRI Institute for FMCSA.
Tregear, S. J., Reston, J., Schoeles, K., & Phillips, B. (2009). Obstructive sleep apnea
and risk of motor vehicle crash: Systematic review and meta-analysis. Journal of
Clinical Sleep Medicine, 5, 573–581.
Treisman, A. M. & Gelade, G. (1980). A feature-integration theory of attention.
Cognitive Psychology, 12, 97–136.
Treisman, A. & Souther, J. (1985). Search asymmetry: A diagnostic for preatten-
tive processing of separable features. Journal of Experimental Psychology: General,
114, 285–310.
Treisman, A. & Gormican, S. (1988). Feature analysis in early vision: Evidence
from search asymmetries. Psychological Review, 95, 15–48.
Treue, S. & Martinez-Trujillo, J. C. (1999). Feature-based attention influences
motion processing gain in macaque visual cortex. Nature, 399, 575–579.
Treves, A. & Pizzagalli, D. (2002). Vigilance and perception of social stimuli:
Views from ethology and social neuroscience. In: M. Bekoff, C. Allen, &
G. M. Burghardt (eds), The cognitive animal: Empirical and theoretical perspectives
on animal cognition (pp. 463–470). Cambridge, MA: MIT Press.
Trujillo, L. T. & Allen, J. J. (2007). Theta EEG dynamics of the error-related nega-
tivity. Clinical Neurophysiology, 118, 645–668.
Tsushima, Y., Sasaki, Y., & Watanabe, T. (2006). Greater disruption due to failure
of inhibitory control on an ambiguous distractor. Science, 314, 1786–1788.
Tulving, E., Mandler, G., & Baumal, R. (1964). Interaction of two sources of informa-
tion in tachistoscopic word recognition. Canadian Journal of Psychology, 18, 62–71.
Turk-Browne, N. B., Yi, D. J., & Chun, M. M. (2006). Linking implicit and explicit
memory: Common encoding factors and shared representations. Neuron, 49,
917–927.
Turkington, P. M., Sircar, M., Allgar, V., & Elliott, M. W. (2001). Relationship
between obstructive sleep apnoea, driving simulator performance, and risk of
road traffic accidents. Thorax, 56, 800–805.
Turkington, P. M., Sircar, M., Saralaya, D., & Elliott, M. W. (2004). Time course
of changes in driving simulator performance with and without treatment in
patients with sleep apnoea hypopnoea syndrome. Thorax, 59, 56–59.
Uithol, S., van Rooij, I., Bekkering, H., & Haselager, P. (2012). Hierarchies in
action and motor control. Journal of Cognitive Neuroscience, 24, 1077–1086.
Ullsperger, M., Bylsma, L. M., & Botvinick, M. M. (2005). The conflict adapta-
tion effect: It’s not just priming. Cognitive Affective & Behavioral Neuroscience,
5, 467–472.
United States Bureau of Labor Statistics (2004). Workers on flexible and shift
work schedules. Available at: http://www.bls.gov/news.release/pdf/flex.pdf
(last accessed 3 October 2012).
Uttal, W. R. (2011). Mind and brain: A critical appraisal of cognitive neuroscience.
Cambridge, MA: The MIT Press.
Valdes-Sosa, M., Bobes, M. A., Rodriguez, V., & Pinilla, T. (1998). Switching atten-
tion without shifting the spotlight: Object-based attentional modulation of
brain potentials. Journal of Cognitive Neuroscience, 10, 137–151.
van der Burg, E., Talsma, D., Olivers, C. N. L., Hickey, C. M., & Theeuwes, J.
(2011). Early multisensory interactions affect the competition among multiple
visual objects. NeuroImage, 55, 1208–1218.
van der Helden, J., Boksem, M. A., & Blom, J. H. (2010). The importance of
failure: Feedback-related negativity predicts motor learning efficiency. Cerebral
Cortex, 20, 1596–1603.
van Dongen, H. P., Maislin, G., Mullington, J. M., & Dinges, D. F. (2003). The
cumulative cost of additional wakefulness: Dose-response effects on neuro-
behavioral functions and sleep physiology from chronic sleep restriction and
total sleep deprivation. Sleep, 26, 117–126.
van Dongen, H. P., Baynard, M. D., Maislin, G., & Dinges, D. F. (2004). Systematic
interindividual differences in neurobehavioral impairment from sleep loss:
Evidence of trait-like differential vulnerability. Sleep, 27, 423–433.
van Erp, J. B. F. (2007). Tactile displays for navigation and orientation: Perception
and behaviour. Unpublished doctoral dissertation. University of Utrecht, the
Netherlands.
van Erp, J. B. F. (2008). Absolute localization of vibrotactile stimuli on the torso.
Perception & Psychophysics, 70, 1016–1023.
van Erp, J. B. F. & van Veen, H. C. (2004). Vibrotactile in-vehicle navigation
system. Transportation Research Part F: Traffic Psychology and Behaviour, 7,
247–256.
van Essen, D. C., Anderson, C. H., & Felleman, D. J. (1992). Information process-
ing in the primate visual system: An integrated systems perspective. Science,
255, 419–423.
van Maanen, L., van Rijn, H., & Borst, J. P. (2009). Stroop and picture-word
interference are two sides of the same coin. Psychonomic Bulletin & Review, 16,
987–999.
van Maanen, L., van Rijn, H., & Taatgen, N. (2012). RACE/A: An architectural
account of the interactions between learning, task control, and retrieval
dynamics. Cognitive Science, 36, 62–101.
van Rijn, H., Johnson, A., & Taatgen, N. A. (2011). Cognitive user modeling.
In: K. P. L. Vu & R. W. Proctor (eds), Handbook of human factors in web design.
(2nd ed., pp. 527–542). Mahwah, NJ: Erlbaum.
van Schie, H. T., Mars, R. B., Coles, M. G., & Bekkering, H. (2004). Modulation of
activity in medial frontal and motor cortices during error observation. Nature
Neuroscience, 7, 549–554.
van Schie, H. T., Toni, I., & Bekkering, H. (2006). Comparable mechanisms for
action and language: Neural systems behind intentions, goals, and means.
Cortex, 42, 495–498.
van Veen, V. & Carter, C. S. (2002). The timing of action-monitoring processes in
the anterior cingulate cortex. Journal of Cognitive Neuroscience, 14, 593–602.
Vandewalle, G., Balteau, E., Phillips, C., Degueldre, C., Moreau, V., Sterpenich, V.,
et al. (2006). Daytime light exposure dynamically enhances brain responses.
Current Biology, 16, 1616–1621.
Varela, F., Lachaux, J. P., Rodriguez, E., & Martinerie, J. (2001). The brainweb:
Phase synchronization and large-scale integration. Nature Reviews Neuroscience,
2, 229–239.
Vashitz, G., Meyer, J., Parmet, Y., Peleg, R., Goldfarb, D., Porath, A., & Gilutz, H.
(2009). Defining and measuring physicians’ responses to clinical reminders.
Journal of Biomedical Informatics, 42, 317–326.
Venter, J. C., Adams, M. D., Myers, E. W., Li, P. W., Mural, R. J., Sutton, G. G.,
et al. (2001). The sequence of the human genome. Science, 291, 1304–1351.
Vernon, D., Egner, T., Cooper, N., Compton, T., Neilands, C., Sheri, A., &
Gruzelier, J. (2003). The effect of training distinct neurofeedback protocols on
aspects of cognitive performance. International Journal of Psychophysiology, 47,
75–85.
Verschuere, B., Prati, V., & De Houwer, J. (2009). Cheating the lie detector: Faking
the autobiographical IAT. Psychological Science, 20, 410–413.
Verstraeten, E. & Cluydts, R. (2004). Executive control of attention in sleep
apnea patients: Theoretical concepts and methodological considerations. Sleep
Medicine Reviews, 8, 257–267.
Verstraeten, E., Cluydts, R., Pevernagie, D., & Hoffmann, G. (2004). Executive
function in sleep apnea: Controlling for attentional capacity in assessing
executive attention. Sleep, 27, 685–693.
Ververs, P. M. & Wickens, C. D. (1998). Head-up displays: Effects of clutter,
display intensity, and display location on pilot performance. The International
Journal of Aviation Psychology, 8, 377–404.
Vgontzas, A. N., Pejovic, S., Zoumakis, E., Lin, H. M., Bixler, E. O., Basta, M.,
et al. (2007). Daytime napping after a night of sleep loss decreases sleepiness,
improves performance, and causes beneficial changes in cortisol and interleu-
kin-6 secretion. American Journal of Physiology: Endocrinology and Metabolism, 55,
E253–E261.
Vicente, K. J. (1999). Cognitive work analysis: Toward safe, productive & healthy
computer-based work. Mahwah, NJ: Lawrence Erlbaum.
Vidal, F., Hasbroucq, T., Grapperon, J., & Bonnet, M. (2000). Is the ‘error
negativity’ specific to errors? Biological Psychology, 51, 109–128.
Vidal, J. J. (1973). Toward direct brain-computer communication. Annual Review
of Biophysics and Bioengineering, 2, 157–180.
Vijayraghavan, S., Wang, M., Birnbaum, S. G., Williams, G. V., & Arnsten, A. F. T.
(2011). Inverted-U dopamine D1 receptor actions on prefrontal neurons
engaged in working memory. Nature Neuroscience, 19, 376–384.
Villringer, A., Planck, J., Hock, C., Schleinkofer, L., & Dirnagl, U. (1993). Near
infrared spectroscopy (NIRS): a new tool to study hemodynamic changes dur-
ing activation of brain function in human adults. Neuroscience Letters, 154,
101–104.
Vincent, J. L., Patel, G. H., Fox, M. D., Snyder, A. Z., Baker, J. T., van Essen, D. C.,
et al. (2007). Intrinsic functional architecture in the anaesthetized monkey
brain. Nature, 447, 83–86.
Vincenzi, D. A., Wise, J. A., Mouloua, M., & Hancock, P. A. (eds) (2009). Human
factors in simulation and training. Boca Raton, FL: CRC Press.
Vogel, E. K. & Luck, S. J. (2002). Delayed working memory consolidation during
the attentional blink. Psychonomic Bulletin & Review, 9, 739–743.
Von Economo, C. (1930). Sleep as a problem of localization. Journal of Nervous
and Mental Disease, 71, 249–259.
Voss, M. W., Prakash, R., Erickson, K. I., Boot, W. R., Basak, C., Neider, M. B., et al.
(2012). Effects of training strategies implemented in a complex videogame on
functional connectivity of attentional networks. NeuroImage, 59, 138–148.
Vuilleumier, P. & Huang, Y.-M. (2009). Emotional attention: Uncovering the
mechanisms of affective biases in perception. Current Directions in Psychological
Science, 18, 148–152.
Wagner, T., Fregni, F., Fecteau, S., Grodzinsky, A., Zahn, M., & Pascual-Leone, A.
(2007). Transcranial direct current stimulation: A computer-based human
model study. NeuroImage, 35, 1113–1124.
Walker, B. N. (2002). Magnitude estimation of conceptual data dimensions for
use in sonification. Journal of Experimental Psychology: Applied, 8, 211–221.
Walker, B. N. & Ehrenstein, A. (2000). Pitch and pitch change interact in auditory
displays. Journal of Experimental Psychology: Applied, 6, 15–30.
Walsh, V. & Cowey, A. (2000). Transcranial magnetic stimulation and cognitive
neuroscience. Nature Reviews Neuroscience, 1, 73–80.
Walsh, V. & Pascual-Leone, A. (2005). Transcranial magnetic stimulation:
A neurochronometrics of mind. Cambridge, MA: MIT Press.
Walther, D. & Koch, C. (2006). Modeling attention to salient proto-objects.
Neural Networks, 19, 1395–1407.
Wandell, B. A. (1995). Foundations of vision. Sunderland, MA: Sinauer.
Wang, C., Ulbert, I., Schomer, D. L., Marinkovic, K., & Halgren, E. (2005).
Responses of human anterior cingulate cortex microdomains to error detection,
conflict monitoring, stimulus-response mapping, familiarity, and orienting.
The Journal of Neuroscience, 25, 604–613.
Ward, J. & Meijer, P. (2010). Visual experiences in the blind induced by an audi-
tory sensory substitution device. Consciousness and Cognition, 19, 492–500.
Warren, R. M. (1970) Perceptual restoration of missing speech sounds. Science,
167, 392–393.
Watanabe, T., Náñez, J. E., & Sasaki, Y. (2001). Perceptual learning without
perception. Nature, 413, 844–848.
Watkins, S., Shams, L., Josephs, O., & Rees, G. (2007). Activity in human V1
follows multisensory perception. NeuroImage, 37, 572–578.
Watson, J. M. & Strayer, D. L. (2010). Supertaskers: Profiles in extraordinary multi-
tasking ability. Psychonomic Bulletin & Review, 17, 479–485.
Weaver, T. E. & Grunstein, R. R. (2008). Adherence to continuous positive air-
way pressure therapy: The challenge to effective treatment. Proceedings of the
American Thoracic Society, 5, 173–178.
Weaver, T. E., Kribbs, N. B., Pack, A. I., Kline, L. R., Chugh, D. K., Maislin, G.,
et al. (1997). Night-to-night variability in CPAP use over the first three months
of treatment. Sleep, 20, 278–283.
Weaver, T. E., Maislin, G., Dinges, D. F., Bloxham, T., George, C. F. P., Greenberg, H.,
Kader, G., et al. (2007). Relationship between hours of CPAP use and achieving
normal levels of sleepiness and daily functioning. Sleep, 30, 711–719.
Weaver, F. M., Follett, K., Stern, M., Hur, K., Harris, C., Marks, W. R., et al. (2009).
Bilateral deep brain stimulation vs best medical therapy for patients with
advanced Parkinson disease: A randomized controlled trial. JAMA: Journal of
the American Medical Association, 301, 63–73.
Weiskopf, N., Mathiak, K., Bock, S. W., Scharnowski, F., Veit, R., Grodd, W.,
et al. (2004a). Principles of a brain-computer interface (BCI) based on real-time
functional magnetic resonance imaging (fMRI). IEEE Transactions on Biomedical
Engineering, 51, 966–970.
Weiskopf, N., Scharnowski, F., Veit, R., Goebel, R., Birbaumer, N., & Mathiak,
K. (2004b). Self-regulation of local brain activity using real-time functional
magnetic resonance imaging (fMRI). Journal of Physiology Paris, 98, 357–373.
Weiskopf, N., Sitaram, R., Josephs, O., Veit, R., Scharnowski, F., Goebel, R., et al.
(2007). Real-time functional magnetic resonance imaging: methods and appli-
cations. Magnetic Resonance Imaging, 25, 989–1003.
Weissman, D. H., Roberts, K. C., Visscher, K. M., & Woldorff, M. G. (2006).
The neural bases of momentary lapses in attention. Nature Neuroscience,
9, 971–978.
Wertheimer, M. (1938). Laws of organization in perceptual forms. In: W. D. Ellis
(ed.), A source book of Gestalt psychology. New York: Harcourt, Brace.
Westwood, D. A. & Goodale, M. A. (2011). Converging evidence for diverging
pathways: Neuropsychology and psychophysics tell the same story. Vision
Research, 51, 804–811.
Whalen, P. J., Rauch, S. L., Etcoff, N. L., McInerney, S., Lee, M. B., & Jenike, M. A.
(1998). Masked presentations of emotional facial expressions modulate amyg-
dala activity without explicit knowledge. Journal of Neuroscience, 18, 411–418.
Wickens, C. D. & Carswell, C. M. (1995). The proximity compatibility principle:
Its psychological foundations and relevance to display design. Human Factors,
37, 473–494.
Wickens, C. D. & Long, J. (1995). Object versus space-based models of
visual attention: Implications for the design of head-up displays. Journal of
Experimental Psychology: Applied, 1, 179–193.
Wickens, C. D. & Hollands, J. G. (2000). Engineering psychology and human
performance (3rd ed.). Upper Saddle River, NJ: Prentice Hall.
Wickens, C. D. & McCarley, J. S. (2008). Applied attention theory. Boca Raton,
FL: CRC Press.
Wickens, C. D., Goh, J., Helleberg, J., Horrey, W., & Talleur, D. A. (2003).
Attentional models of multi-task pilot performance using advanced display
technology. Human Factors, 45, 360–380.
Wickens, C. D., Alexander, A. L., Ambinder, M. S., & Martens, M. (2004). The role
of highlighting in visual search through maps. Spatial Vision, 17, 373–388.
Wickens, C. D., McCarley, J. S., Alexander, A. L., Thomas, L. C., Ambinder, M. S.,
& Zheng, S. (2008). Attention-situation awareness (A-SA) model of pilot error.
In: D. C. Foyle & B. L. Hooey (eds), Human performance modeling in aviation
(pp. 213–239). Boca Raton, FL: CRC Press.
Wickens, C. D., Hooey, B. L., Gore, B. F., Sebok, A., & Koenicke, C. S. (2009).
Identifying black swans in NextGen: Predicting human performance in
off-nominal conditions. Human Factors, 51, 638–651.
Wiener, E. L. & Curry, R. E. (1980). Flight deck automation: Promises and
problems. Ergonomics, 23, 995–1011.
Wierwille, W. W. & Ellsworth, L. A. (1994). Evaluation of driver drowsiness by
trained raters. Accident Analysis and Prevention, 26, 571–581.
Williams, L. G. (1967). The effects of target specification on objects fixated
during visual search. Acta Psychologica, 27, 355–360.
Williams, C. E. & Stevens, K. N. (1972). Emotions and speech: Some acoustical
correlates. Journal of the Acoustical Society of America, 52, 1238–1250.
Williams, G. V. & Goldman-Rakic, P. S. (2002). Modulation of memory fields by
dopamine D1 receptors in prefrontal cortex. Nature, 376, 572–575.
Wilson, G. F. (2001). In-flight psychophysiological monitoring. In: F. Fahrenberg &
M. Myrtek (eds), Progress in ambulatory monitoring (pp. 435–454). Seattle:
Hogrefe and Huber.
Wilson, G. F. & Russell, C. A. (2004). Psychophysiologically determined adaptive
aiding in a simulated UCAV task. In: D. A. Vincenzi, M. Mouloua, &
P. A. Hancock (eds), Human performance, situation awareness and automation:
Current research and trends (Vol. 2, pp. 200–204). Mahwah, NJ: Erlbaum.
Wilson, G. F. & Russell, C. A. (2007). Performance enhancement in an uninhab-
ited air vehicle task using psychophysiologically determined adaptive aiding.
Human Factors, 49, 1005–1018.
Wohlschläger, A., Haggard, P., Gesierich, B., & Prinz, W. (2003). The perceived onset
time of self- and other-generated actions. Psychological Science, 14, 586–591.
Wolfe, J. M. (1994). Guided Search 2.0: A revised model of visual search.
Psychonomic Bulletin & Review, 1, 202–238.
Wolfe, J. M. (2001). Asymmetries in visual search: An introduction. Perception &
Psychophysics, 63, 381–389.
Wolfe, J. M. & Bennett, S. C. (1997). Preattentive object files: Shapeless bundles
of basic features. Vision Research, 37, 25–43.
Wolfe, J. M. & Horowitz, T. S. (2004). What attributes guide the deployment of
visual attention and how do they do it? Nature Reviews Neuroscience, 5, 495–501.
Womelsdorf, T., Fries, P., Mitra, P. P., & Desimone, R. (2006). Gamma-band syn-
chronization in visual cortex predicts speed of change detection. Nature, 439,
733–736.
Womelsdorf, T., Anton-Erxleben, K., & Treue, S. (2008). Receptive field shift and
shrinkage in macaque area MT through attentional gain modulation. Journal
of Neuroscience 28, 8934–8944.
Wong, M. (2007). Touch-screen phones poised for growth. USA Today (21
June 2007). Available at: http://www.usatoday.com/tech/products/2007-06-21-
1895245927_x.htm (accessed 28 September 2012).
Wood, N. L. & Cowan, N. (1995). The cocktail party phenomenon revisited:
Attention and memory in the classic selective listening procedure of Cherry
(1953). Journal of Experimental Psychology: General, 124, 243–262.
Woodman, G. F. & Luck, S. J. (1999). Electrophysiological measurement of rapid shifts of attention during visual search. Nature, 400, 867–869.
Worden, M. S., Foxe, J. J., Wang, N., & Simpson, G. V. (2000). Anticipatory biasing
of visuospatial attention indexed by retinotopically specific alpha-band
electroencephalography increases over occipital cortex. Journal of Neuroscience,
20, RC63.
Wright, R. M. & Ward, L. (2008). Orienting of attention. New York: Oxford
University Press.
Wulf, G. (2007). Attention and motor skill learning. Champaign, IL: Human Kinetics.
Wunderlich, K., Rangel, A., & O’Doherty, J. P. (2009). Neural computations
underlying action-based decision making in the human brain. Proceedings of
the National Academy of Sciences, 106, 17199–17204.
Wyart, V. & Tallon-Baudry, C. (2009). How ongoing fluctuations in human
visual cortex predict perceptual awareness: Baseline shift versus decision bias.
The Journal of Neuroscience, 29, 8715–8725.
Yamamoto, H., Akashiba, T., Kosaka, N., Ito, D., & Horie, T. (2000). Long-term
effects of nasal continuous positive airway pressure on daytime sleepiness, mood
and traffic accidents in patients with obstructive sleep apnoea. Respiratory
Medicine, 94, 87–90.
Yamani, Y. & McCarley, J. S. (2010). Visual search asymmetries within color-
coded and intensity-coded displays. Journal of Experimental Psychology: Applied,
16, 124–132.
Yamani, Y. & McCarley, J. S. (2011). Visual search asymmetries in heavy clutter:
Implications for display design. Human Factors, 53, 299–307.
Yang, G.-Z., Mylonas, G., Kwok, K.-W., & Chung, A. (2008). Perceptual docking for
robotic control. In: T. Dohi, I. Sakuma, & H. Liao (eds), Medical imaging and aug-
mented reality. Lecture notes in computer science (pp. 21–30). Berlin: Springer.
Yantis, S. & Johnston, J. C. (1990). On the locus of visual selection: Evidence
from focused attention tasks. Journal of Experimental Psychology: Human
Perception and Performance, 16, 135–149.
Yantis, S. & Jonides, J. (1990). Abrupt visual onsets and selective attention:
Voluntary versus automatic allocation. Journal of Experimental Psychology:
Human Perception and Performance, 16, 121–134.
Yau, J. M., Olenczak, J. B., Dammann, J. F., & Bensmaïa, S. J. (2009). Temporal
frequency channels are linked across audition and touch. Current Biology,
19, 561–566.
Yee, N. (2006). Motivations for play in online games. Cyberpsychology & Behavior,
9, 772–774.
Yeh, M. & Wickens, C. D. (2001). Attentional filtering in the design of electronic
map displays: A comparison of color coding, intensity coding, and decluttering
techniques. Human Factors, 43, 543–562.
Yetkin, O., Kunter, E., & Gunen, H. (2008). CPAP compliance in patients with
obstructive sleep apnea syndrome. Sleep and Breathing, 12, 365–367.
Yeung, N., Botvinick, M. M., & Cohen, J. D. (2004). The neural basis of error
detection: Conflict monitoring and the error-related negativity. Psychological
Review, 111, 931–959.
Yoo, J. J., Hinds, O., Ofen, N., Thompson, T. W., Whitfield-Gabrieli, S.,
Triantafyllou, C., & Gabrieli, J. D. E. (2012). When the brain is prepared to learn:
Enhancing human learning using real-time fMRI. NeuroImage, 59, 846–852.
Young, M. S. & Stanton, N. A. (2002a). Commentary. It's all relative: Defining mental workload in the light of Annett's paper. Ergonomics, 45, 1018–1020.
Young, M. & Stanton, N. (2002b). Malleable attentional resources theory: A new
explanation for the effects of mental underload on performance. Human
Factors, 44, 365–375.
Young, T., Palta, M., Dempsey, J., Skatrud, J., Weber, S., & Badr, S. (1993).
The occurrence of sleep disordered breathing among middle-aged adults.
New England Journal of Medicine, 328, 1230–1235.
Zander, T. O. & Kothe, C. (2011). Towards passive brain-computer interfaces:
Applying brain-computer interface technology to human-machine systems in
general. Journal of Neural Engineering, 8, 025005.
Zander, T. O., Kothe, C., Welke, S., & Rötting, M. (2008). Enhancing human-
machine systems with secondary input from passive brain-computer inter-
faces. In: Proceedings of the 4th International BCI Workshop & Training Course.
Graz, Austria: Graz University.
Zelinsky, G. J. & Sheinberg, D. L. (1997). Eye movements during parallel-
serial visual search. Journal of Experimental Psychology: Human Perception and
Performance, 23, 244–262.
Zhang, W. & Luck, S. J. (2009). Feature-based attention modulates feedforward
visual processing. Nature Neuroscience, 12, 24–25.
Zimmerman, M. E., Arnedt, J. T., Stanchina, M., Millman, R. P., & Aloia, M. S.
(2006). Normalization of memory performance and positive airway pressure
adherence in memory-impaired patients with obstructive sleep apnea. Chest,
130, 1772–1778.
Zipser, K., Lamme, V. F., & Schiller, P. H. (1996). Contextual modulation in
primary visual cortex. The Journal of Neuroscience, 16, 7376–7389.
Subject Index

action control 20, 69, 80, 92, 98, 100, 102, 105–7
  neural substrate of 4, 14, 31
action monitoring network 99
  see also performance monitoring
action slips 91–2, 102
affective priming 133
alerting 15–16, 38, 87, 89, 111
  see also arousal
anterior cingulate cortex (ACC)
  in cognitive control 18
  in emotion 18
  in performance monitoring 97–100
  in reinforcement learning 106–7
arousal 4, 15–16, 23, 82, 104–5, 135–6
  and sleep 115, 117
  neurobiology of 111
  substances that affect 113
  see also alerting
attention
  early selection models 56–7
  large-scale 65–7
  late selection models 56, 65
  neural substrates of 9–10, 15–18, 76–7
  object-based 64–5
  spotlight model 54, 64
  supervisory attention system 92
  zoom lens model 54
  see also attentional control, attentional cuing, attentional lapses, attentional orienting, attentional selection
attention deficit/hyperactivity disorder (ADHD), treatment of 86–7, 89
Attention Network Task (ANT) 87, 89
attentional control, training of 86–9
attentional cuing 13, 42, 44, 47, 104
  sensory gain 148
  social cues 140–1
attentional lapses 18, 95, 99, 102, 104–6
attentional orienting 15–18, 52–5, 98
  and eye movements 70–1
  covert 52–4, 71–2, 147
  overt 52–4
attentional selection 51
  central selection 56–8
  feature-based selection 55
  object-based selection 54
  perceptual selection 56
auditory
  attention 9–10, 56, 74, 76, 83
  perception 8, 12–13, 18, 24, 38–40, 43–5
auditory icons 39, 40
augmentation of performance see performance augmentation
augmented interaction 78–85
  see also brain-computer interfaces
augmented reality 27
automatic processing 77
automation 2, 83–4, 136, 156–9
  attention-aware systems 68
brain networks 4–6, 14, 15, 23, 35, 52–3, 74, 76–7, 150–2, 154
  emotion 21, 131–2
  social 22, 48, 138
  structural connectivity 6, 88
  synchronization 5, 11, 14, 16–7, 74–5, 87, 98
brain plasticity 5, 37, 89, 146
brain rhythms 11
  alpha 11, 14, 73–5, 83, 86–7, 105–6, 117
  beta 11, 14, 75, 86–7
  delta 11
  gamma 11, 14, 17, 74–6
  sensorimotor 87
  theta 11, 16, 74–5, 83, 86, 98–9, 117
  see also engagement index


brain-computer interfaces (BCIs) 79–82, 85, 103–4, 136
  active 80–1
  based on fMRI 82
  based on motor imagery 80–1
  based on the interaction error potential (IErrP) 103–4
  based on the P300 79–80, 83, 104
  based on the steady-state evoked potential (SSEP) 79–81
  BCI illiteracy 81
  hybrid 81–2
  passive 104
  reactive 80, 82
  tactile 104
brain training 86–8
caffeine 89, 112, 113
candidate-gene analysis 4, 146, 150–5
  and command and control 155–9
  and visual attention 152–4
  and working memory 154–5
cardiovascular measures 83–4, 135
circadian timing 111–12
cognitive control see action control, performance monitoring
computational modelling
  and action control 166, 171, 173, 175
  and fMRI 165–7
  cognitive architectures 163–7, 180
  cognitive models 163, 166, 176
  problem-state resource 167–9, 171–6
conflict monitoring theory 97–8
contention scheduling 92
controlled processing 77, 93, 106
crossmodal processing 13, 39, 44, 47, 74
decision making 3, 13, 19–20, 23, 31, 84, 115, 119, 132, 136–7, 155–9, 161
default mode of brain function 5–6, 103, 138
diffusion tensor imaging (DTI) 2, 6, 88
display design 31–2, 34, 38–42, 45, 51–2, 58–68
dorsal and ventral processing 12, 30–2, 38–9, 76–7
driving 3, 6, 21, 42, 83, 114, 134, 165
  and sleepiness 114–18
  real-world assessment of 119–22
drugs 1, 24, 113
  and performance enhancement 89, 112
  and sleepiness 113
dual-task performance 83, 147–8, 164, 167
earcons 39
electroencephalography (EEG) 2–3, 6, 8–11, 26, 71–6, 117, 132, 161
  applications 16, 23, 26–7, 78–86, 98, 103–4, 108–9, 136, 163, 178
  see also brain-computer interfaces, brain rhythms, event-related desynchronization, event-related potential, steady-state evoked potentials
emotion 4, 18–22, 36, 129–38, 142
  lateralization of 22, 132–3
  neural basis of 130–3
  recognition of 135–8
engagement index 75, 83
error-related processing
  post-error slowing 95–9, 102
  pre-error speed-up 95, 102
  see also event-related potential
event-related desynchronization 11, 75
event-related potential (ERP) 8–11, 9–10
  anterior directing attention negativity (ADAN) 73
  correct-related negativity (CRN) 99–102
  early directing attention negativity (EDAN) 73
  error-preceding positivity (EPP) 102
  error-related negativity (ERN) 99–101, 106–8

  feedback-related negativity (FRN) 100–1, 107–8
  ipsilateral invalid negativity (IIN) 72
  late directing attention positivity (LDAP) 73
  N1 8, 65, 72
  N2pc 72
  P1 8, 65, 72–3, 147–9
  P300 72, 79–80, 83, 104
  P4pc 72
  reorienting negativity (RON) 72
executive function 4, 14–15, 17–18, 24, 88–9, 92, 115, 148, 152, 154, 156, 159
eye movements 17, 49–50, 52–5, 59, 70–1, 80, 84, 140
functional magnetic resonance imaging (fMRI)
  applications 3, 6, 16–17, 23, 37, 43, 76–9, 82, 86, 98, 103, 108, 112, 136, 139, 146
  description of 2, 26, 30, 163–4, 171
  model-based fMRI 164, 166–7, 174–8
  regions-of-interest analysis 164, 166–7, 170–4, 177
functional near infrared spectroscopy (fNIRS) 3, 6, 7–8, 77–8, 136–7
genome-wide association 4, 146
genomics 144–6
  see also candidate-gene analysis, genome-wide association
grouping see perceptual organization
head-mounted displays 64–5
head-up displays 64–5
human-computer interaction 22, 27–8, 34, 48, 129–30, 139–42
  see also BCI
inattentional blindness 57–8, 65
individual differences 106, 113, 118–9, 125–6, 144–9
  and genetic variation 150–2
  in attention 152–4
  in decision making 155–9
  in memory 89, 147, 154–5
learning 23, 88
  feedback-related neural signals see event-related potential
  long-term potentiation 5, 24
  probabilistic learning 100, 107–8
  reinforcement and ACC 106–7
  see also perceptual learning, training
lie detection 35
memory 1, 24, 29, 52, 56, 58, 69, 70–1, 78, 89, 115, 154–6, 159, 167, 169
  and dopamine 14, 154–5
  neural correlates of 4–5, 13–15, 16–18, 35, 76–7
  predicting performance 4, 88
  training of 6, 87–9
  see also working memory
mental workload 3, 11
  measurement of 11, 71, 77, 82–3
minimally-invasive surgery 3, 49–50
mirror neurons 20, 22, 139, 142–3
molecular genetics see genomics
monitoring operator state 7–8, 86, 88, 99, 112–13, 132–3, 135–6
  error prediction 101–3
mood 134–5
  effects on cognition 21, 134–5
motor control see action control
multimodal perception 13, 28, 43–6, 48
Multiple Sleep Latency Test (MSLT) 165–6, 118
neurocognitive enhancement 3, 24–5, 87, 89
  ethical concerns in 24–5
neuroergonomics, definition and goals 1–3, 78, 144

neuroimaging 2–3, 6–11
  see also electroencephalography, functional magnetic resonance imaging (fMRI), functional near infrared spectroscopy (fNIRS)
neuromarketing 136–7
obstructive sleep apnea (OSA) 114–17
  see also positive airway pressure (PAP) treatment
perceptual docking 49–50
perceptual illusions 13, 31, 42, 44, 48, 141
perceptual learning 26, 35–7
perceptual organization 54, 61–2
performance augmentation 1–2, 23–5, 50, 78–82, 85–9
performance monitoring 92–8, 103
  and feedback monitoring 108
  neural correlates of 9, 18, 96–104
positive airway pressure (PAP) treatment 116
  adherence to 117–9
prefrontal cortex (PFC)
  in decision making 158
  in executive control 17–8, 77, 92
  in performance monitoring 97–8
  in working memory 14–15, 17, 76–8, 154, 156
prosthetic devices 20, 79
proximity compatibility principle 62–4
  display proximity 61
  processing proximity 61
pupil diameter 71, 85
  and arousal 84
  task-evoked pupillary response 71
robots, interaction with 22, 49–50, 130, 141–3
saliency maps (priority maps) 53, 67, 86
scanpaths 70–1, 84
self-awareness of impairments 116–17
self-perception 48
sensory deprivation 13
sensory substitution 28, 45
simulators 3, 27–8, 78, 84, 156–7
  driving 3, 6, 42, 83, 115, 117–19, 134
situation awareness 82
skill 36, 77, 87–9, 91, 93, 146–7
  power law of practice 147
sleep, neurobiology of 111
sleepiness 111–14
  countermeasures 89, 112–14, 113
  see also self-awareness of impairments
slips see action slips
social interaction 20–2, 129–30
  and human-computer interaction 130, 139–43
  neural systems of 21, 138–9
somesthesis see touch perception
sonification 38–40
space perception 47
spatial navigation 6–42
speech perception 40
speed–accuracy trade-off 94–5
steady-state evoked potentials (SSEPs) 75–6, 79–82, 105–6
  see also brain-computer interfaces
stimulus–response compatibility 47
supervisory monitoring 63, 66, 84, 163
  SEEV model 66–7
Sustained Attention to Response Test (SART) 105
tactile interfaces 41–2, 45, 48, 104
  see also brain-computer interfaces
time-frequency analysis 11, 74–5
touch perception 40–2
training 78, 146, 156, 161–2
  attention 86–9
  memory 6, 88–9
  perception 36–7
  variable-emphasis 6
  video games and 6, 88
  see also learning
transcranial direct current stimulation (tDCS) 3, 6–7
transcranial magnetic stimulation (TMS) 2–3, 6–7, 37, 43, 45

valence judgements 136–7
ventral processing route see dorsal and ventral processing
vigilance 99, 118, 147–8
virtual reality 6, 27–8, 48
visual perception 4, 8, 12–13, 17, 18, 23, 26–38, 43–8, 52–7
  recurrent processing in 13, 32–3, 77
  see also perceptual organization, perceptual illusions, perceptual learning
visual search 58–61, 66
working memory 6, 13–17, 56, 58, 76–8, 88–9, 147–8, 154–7, 159, 167, 178
  see also memory
Author Index

Abi-Dargham, A., 154
Adolphs, R., 131–2
Ahissar, M., 33
Ahlstrom, U., 84
Akerstedt, T., 117
Aksan, N., 110, 123
Albright, T. D., 61
Aldrich, M. S., 115
Alexander, G. E., 14
Allen, J. J., 99
Allen, J. J. B., 132
Allison, B. Z., 79
Aloia, M. S., 115–16
Altmann, E. M., 165
Amedi, A., 12
Anderson, J. R., 163, 165–7, 173, 177
Andersen, R. A., 17
Antonelli Incalzi, R., 116
Arbib, M. A., 31
Arcizet, F., 52
Ariely, D., 136
Armstrong, K. M., 54
Ashby, W. R. (1956), xxii
Avery, R. A., 154, 156
Awh, E., 14, 54
Ayaz, H., 7, 77–8
Azar, O. H., 19
Babcock, Q., 24, 89
Bahner, E., 159
Bailey, B. P., 68
Bakardjian, H., 80
Balkin, T. J., 112–13
Ballard, D. H., 67
Bar, M., 35
Barker, A. T., 6
Barlow, H., 29
Barnes, M., 117
Barrett, L. F., 21
Başkent, D., 40
Basner, M., 112
Bauer, M., 17
Bear, M. F., xxii
Beatty, J., 16, 71
Bechara, A., 20
Bedard, M. C., 116
Beebe, D., 115
Behrmann, M., 55
Belmaker, R., 7
Benko, H., 31
Bennett, K. B., 63
Bennett, S. C., 59
Bensmaïa, S. J., 41, 45
Berger, H., 11
Berns, G. S., 136
Bertrand, O., 11, 74
Bichot, N. P., 52, 55, 61
Bixler, E. O., 115
Blike, G. T., 63
Bobrov, P., 27
Bocanegra, B. R., 133
Boehler, C. N., 63
Bogacz, R., 95
Boksem, M. A., 106
Boot, W. R., 6, 88
Borst, J. P., 167–70, 172, 174–5, 177–8
Botvinick, M. M., 18, 48, 91, 93–4, 98
Boynton, G. M., 55
Bylsma, L. M., 91
Bouhuys, A. L., 21
Boyle, L., 117, 119
Borst, J. P., 163
Braver, T. S., 18
Brazil, E., 39
Brefczyncski, J. A., 54
Bressler, S. L., 52
Brewer, J. B., 88
Brewer, N., 95
Brewster, S., 39
Broadbent, D. E., 51, 55–7
Brookings, J. B., 75
Broughton, R. J., 112, 117
Brouwer, A. M., 104
Brunner, C., 20, 81
Brunyé, T. T., 89
Bundesen, C., 66–7


Bunge, S. A., 20
Buracas, G. T., 61
Bureau of Labor Statistics, 110
Burrows, B. E., 52
Buschman, T. J., 52
Bush, G., 18
Busey, T. B., 36
Butcher, L. M., 146
Buzsáki, G., 2
Byrne, E., 83
Byrne, T., 24, 89
Cabon, P., 111
Cacioppo, J. T., 132, 136
Caggiano, D., 147
Caldwell, J. A., 112
Calhoun, V. D., 3, 6
Card, S. K., 163
Carswell, C. M., 61–3
Carter, C. S., 98
Casco, C., 36
Cassel, W., 115
Castellanos, F. X., 105
Carroll, J. M., 139
Cattell, R. B., 150
Cavanagh, J. F., 99
Cecotti, H., 79–80
Centers for Disease Control, CDC, 114
Chabris, C. F., 58
Chaimow, D., 30
Chartrand, T. L., 22
Chellappa, 112
Chen, J., 34
Cherry, C., 51
Chief Scientist Air Force, 1
Chin, K., 116
Cholewiak, R. W., 41–2
Chowdhuri, S., 112
Chun, M. M., 18, 52
Clark, V. P., 3, 144
Clarke, A. R., 86
Clarke, M. P., 36
Clore, G. L., 21, 129, 134
Cluydts, R., 115
Coan, J. A., 132
Cohen, J. D., 14, 48, 77, 92, 98, 154, 156
Cohen, M. X., 99
Cohn, J. F., 137
Coles, M. G., 96, 106–7
Connor, J., 114
Constantinidis, C., 52
Conty, L., 140
Cooke, J. D., 96–7
Cools, R., 14
Corbetta, M., 17–8, 52–3, 59, 77, 152
Corkin, S., 88
Cosgrove, K. P., 153
Cowan, N., 14, 56
Cowey, A., 7
Craighero, L., 20, 22, 139
Cristino, F., 71
Crottaz-Herbette, S., 18
Cubells, J. F., 151, 154, 156
Cummings, M. L., 156
Curry, R. E., 82
da Silva, F. Jr., 89
Damasio, A. R., 19–21, 129, 132
Danielmeier, C., 96, 98
Davidson, M. C., 16, 153
Davis, T., 167
Daw, N. D., 166
Dawson, D., 114
Dawson, J., 110
de Bruijn, E. R., 100
de Gelder, B., 131
de Jong, R., 76
de Quervain, D., 152
D'Esposito, M., 14
Deco, G., 76
Dehaene, S., 14
Dell'Acqua, R., 72
Dennett, D. C., 139
Desimone, R., 16–7, 35, 56–7, 76
Deutsch, D., 56
Deutsch, J. A., 56
DeYoe, E. A., 54
di Pelligrino, G., 20
Di Stasi, L. L., 84
Diaper, D., 2
Diggles, V. A., 96–7
Dimperio, E., 176
Donchin, E., 79, 83, 97
Donner, T. H., 5, 59, 74

Dosher, B. A., 60
Drews, F. A., 147
Driver, J., 13, 43
Drummond, S. P., 116
Drury, C. G., 58
Dudschig, C., 95, 102
Duncan, J., 14, 16–7, 35, 54–9, 63, 76
Dunston, P. S., 28
Edworthy, J., 39
Egeth, H. E., 16, 60
Egly, R., 54–5
Egner, T., 87, 93
Ehrenstein, A., 39
Eichele, H., 103
Eimer, M., 42–3, 72–3
Ekman, P., 131, 137
Elliott, R., 24
Ellsworth, L. A., 122
Engleman, H. M., 116–17
Eriksen, B. A., 96
Eriksen, C. W., 51, 53–4, 96–7
Erlhagen, W., 143
Espeseth, T., 152, 154, 161
Everling, S., 58
Fabiani, M., 7
Fadden, S., 65
Fafrowicz, M., xxiii
Fahle, M., 35
Falk, E. B., 23
Falkenstein, M., 99
Fallah, M., 54
Fan, J., 16, 89
Farah, M. J., 24
Farwell, L. A., 79
Faubert, J., 35
Fedota, J. R., 91, 93, 99
Feng, Y., 153
Fernström, M., 39
Ferrez, P. W., 100, 103
Ferris, T. K., 42
Feuerstein, C., 115
Findlay, J. M., 55, 70
Findley, L. J., 116
Fisch, B., 8
Fischer, E., 65
Fitch, W. T., 39
Fitts, P. M., 51, 160, 163
Flach, J. M., 63
Folk, C. L., 52
Forlines, C., 31
Fougnie, D., 58
Foyle, D. C., 65
Freeman, F. G., 75
Friedman, N. P., 148, 159
Friedman-Berg, F. J., 84
Fries, P., 11, 74
Friesen, W. V., 131, 137, 140
Frijda, N. H., 131
Frishman, L., 30
Friston, K. J., 164, 171
Fu, S., 148–9
Fuchs, T., 87
Furley, P., 58
Furuta, H., 116
Fuster, J. M., 14, 154, 156–7
Gagné, R., 36
Gallagher, H. L., 140
Gallant, J. L., 52, 55
Ganesh, S., 28, 48
Gao, X., 79
Garavan, H., 78
Gasper, K., 134
Gastaut, H., 117
Gatass, R., 56
Gazzaniga, M. S., 3
Gazzola, V., 22, 130, 132, 139
Gehring, W. J., 95, 99
Gelade, G., 58–9
Geldard, F. A., 42
Gentsch, A., 100
George, C. F., 118
George, M., 7
George, N., 140
Gerson, A. D., 85
Gevins, A., 75
Ghose, G. M., 57
Giambra, L. M., 146–7
Gilchrist, I. D., 70
Gläscher, J. P., 166
Glimcher, P. W., 19
Gluck, K. A., 165, 176
Goel, N., 111
Goldman-Rakic, P. S., 14, 156
Gong, L., 140
Goodale, M. A., 27, 30–2

Gopher, D., 162
Gormican, S., 60
Goschke, T., 93
Gottlieb, J., 52
Gozal, D., 115
Grahn, J. A., 26
Gratton, G., 7, 94, 96–7
Gray, J. A., 132
Gray, P. O., 130
Gray, W. D., 71, 163
Gredebäck, G., 140
Green, A. E., 146, 152, 155
Green, C. S., 88
Green, J. J., 73
Greenwood, P. M., 150, 152, 154, 159, 161
Gross, J., 14
Gross, C. G., 56
Grossberg, S., 13
Grunstein, R. R., 118
Gruzelier, J. H., 87
Guest, S., 44
Gulbinaite, R., 91
Hajcak, G., 95, 102
Hakkanen, J., 114
Hampton, A. N., 166
Handy, T. C., 8
Hanslmayr, S., 23, 87
Harder, A., 28, 45
Hardt, J. V., 86
Hari, R., 44
Harmon-Jones, E., 136
Harris, R. L., 84
Harrison, Y., 117
Hayden, B. Y., 55
Haynes, J.-D., 82
Heathcote, A., 147
Hebb, D. O., 16
Heinrich, H. W., 122
Heinze, H. J., 54, 65
Helander, M. G., xxi
Henderson, J. M., 35
Hermann, C. S., 8
Hermann, T., 39
Herslund, M. B., 58
Hess, E. H., 71
Heussen, Y., 129
Hilburn, B., 84
Hillyard, S. A., 8, 148
Ho, C., 47
Hochstein, S., 33
Hockey, G. R. J., 84
Hoffman, J. E., 51, 53–4
Holcomb, P. J., 40
Holland, J. G., 51
Hollingworth, A., 35
Hollins, M., 40–1, 41
Holmes, J. M., 36
Holmes, N. P., 47
Holroyd, C. B., 100, 106–7
Hopfinger, J. B., 54, 72–3, 105
Hoque, M. E., 137
Horgan, J., 23
Horne, J. A., 117–19
Horowitz, T. S., 59
Horrey, W. J., 114
Hosseini, H., 136–7
Houston-Price, C., 140
Huang, Y.-M., 133
Hubel, D. H., 56
Human Factors and Ergonomics Society (2012), xvi
Humphreys, G. W., 58, 60
Huntsinger, J. R., 21, 129, 134
Hupé, J. M., 33
Hursh, S. R., 114
Hwang, S., 84–5
Hyman, I. E., 58
IEA (International Ergonomics Association), xxi
Institute of Medicine, 112
Intuitive Surgical, Inc., 49, 141
Iriki, A., 47
Itti, L., 52–3, 57, 59, 66
Ivanoff, J., 18
Jack, R. E., 36, 131
Jacobson, A., 54
Jacobson, J., 141
Jacobson, L., 3
James, D. R. C., 3, 144
James, W., 51–3
Jameson, A., 3
Jarmasz, J., 65
Jehee, J. F. M., 33
Jentzsch, I., 95, 102

Jiang, Y., 161
Johnson, A., 1, 26, 51, 69, 91, 93, 129
Johnston, J. C., 56
Jolij, J., 1, 21, 26, 34, 129, 133–5, 138
Jonides, J., 14, 66, 78, 167
Jørgensen, N. O., 58
Jousmäki, V., 44
Just, M. A., 3, 144, 165
Kadir, K., 153
Kahlbrock, N., 74
Kahneman, D., 19, 71
Kamiya, J., 86
Kandel, E., 2
Kärcher, S. M., 42
Karni, A., 36
Karwowski, W., xxi–xxiii
Kastner, S., 54, 57, 77
Kato, Y., 23
Kasai, T., 63
Kecklund, G., 117
Kelly, A., 78
Kelly, E., 112
Kennedy, C. W., 49
Kennedy, S. H., 24
Kerns, J. G., 94
Keysers, C., 22, 130, 132, 139
Kieras, D. E., 165
Kim, Y.-H., 52
Kimberg, D. Y., 89
King, J. A., 98
Kingstone, A., 140
Kiss, M., 72
Klauer, S. G., 119
Klein, F., 129
Klein, R. M., 52–4
Klein, T. A., 108
Klimesch, W., 75
Knudsen, E. I., 14, 16, 77
Koch, C., 16, 52–3, 57, 59, 66, 86
Koivisto, M., 58
Kok, A., 72, 98
Kothe, C., 80
Kowler, E., 53
Krach, S., 22, 142
Kramer, A. F., 54–5, 63
Kramer, G., 39–40
Krantz, D. H., 39
Kribbs, N. B., 118
Kristjansson, S. D., 71
Krummenacher, J., 60
Kundel, H. L., 58–9
Kupers, R., 45
Kushida, C. A., 116
Lachter, J., 56
Lakatos, P., 74
Lalor, E. C., 80
Laming, D. R. J., 95
Lamme, V. A., 13, 32–5, 134
Lange, J., 44
Lantz, D. L., 87
Laugier, C., 49
Lavie, N., 56
Layton, C., 156
Leavitt, V. M., 38
Leber, A. B., 4
LeDoux, J., 21, 131
Lee, F. J., 165
Lees, M. N., 119
Lehne, M., 104
Lei, S., 83
Leiser, D., 19
Lenggenhager, B., 28, 48
Leveson, N., 156
Levin, D. T., 69
Levy, J. L., 65
Lewis-Evans, B., 21, 134
Li, C. S., 103
Li, T., 47
Li, Z., 52–3, 59–60
Libet, B., 23
Lindeman, R. W., 48
Linden, R. D., 76
Liscombe, J., 138
Liu, T., 55
Llera, A., 103
Loewenstein, G. F., 19
Logan, G. D., 94
Logothetis, N., 164
Long, J., 64–5
Lopes da Silva, F. H. L., 11
Lopez-Larraz, E., 108
Lubar, J. F., 86
Luce, R. D., 67
Luck, S. J., 8, 11, 54–5, 57, 65, 72
Ludwig, C. J. H., 70

Lugaresi, E., 118
Luu, P., 99
McAdams, C. J., 55
McArdle, N. M. C., 118
McCarley, J. S., 51, 55, 57, 60, 66
McCartt, A. T., 114
McCauley, P., 114
McClearn, G. E., 145
McCormick, E. J., xxi
McDonald, J. J., 73
Macdonald, J. S., 106
McElree, B., 167
McFarland, D. J., 79
McGehee, D. V., 119
McGookin, D., 39
Mack, A., 57–8, 65
McKinley, A., 3
MacLean, P. D., 131
MacLeod, C. M., 93
McNamara, A., 23
Maguire, E. A., 6, 82
Mallis, M. M., 113
Mangun, G. R., 8, 54, 73
Manzey, D., 159
Marks, L. E., 39, 44
Marois, R., 14, 58
Marr, D., 12
Marrocco, R. T., 16, 153
Martinez-Trujillo, J. C., 55
Martino, G., 44
Masa, J. F., 115
Matthews, G., 82, 150
Maunsell, J. H. R., 55, 57
Maycock, G., 122
Mazaheri, A., 105
Mazer, J. A., 52
Mehta, M., 24
Meijer, P. B. L., 28, 45
Melara, R. D., 39
Mendola, J. D., 36
Mendoza, C., 49
Menon, V., 18
Merabet, L. B., 13, 28, 45–6
Meurs, M., 21, 135
Meyer, D. E., 165
Meyer, K., 43
Milgram, P., 64
Millán, J. d. R., 100
Miller, E. K., 14, 52, 77, 92, 98, 156
Milner, A. D., 27, 30–2
Miltner, W. H. R., 100, 104
Mishkin, M., 30
Miyawaki, Y., 26
Molenaar, I., 68
Moller, H. J., 119
Moore, B. C. J., 38
Moore, T., 52, 54
Moran, J., 57
Moray, N., 51, 53, 66
Morein-Zamir, S., 89
Mori, M., 142
Morgan, S. T., 75
Morkes, J., 140
Mosier, K. L., 156–9
Most, S. B., 58
Mounts, J. R. W., 57
Mulder, L. J. M., 84
Müller, S. V., 100
Muller-Putz, G. R., 80
Murphy, K., 88
Murphy, S. T., 133
Myers, C. W., 71
Mylonas, G. P., 50
Nass, C., 140
National Highway Traffic Safety Administration (NHTSA), 114
National Research Council, 114
National Sleep Foundation, 110
Neal, D. T., 22
Neville, H. J., 40
Newell, A., 146, 165–6, 80
Nicholson, A. N., 112
Niedenthal, P. M., 21
Nieuwenhuis, S., 100, 107
Nobre, A., 23
Nodine, C. F., 58–9
Norman, D. A., 92
Notebaert, W., 95, 98
Nothdurft, H. C., 52–3
Noton, D., 70–1
Nouchi, R., 88
Nowak, M., 116
Nunez, P. L., 11
Oberauer, K., 167
O'Connell, R. G., 105–6

O’Connor, D. H., 54 Rabbitt, P. M. A., 93, 95–6, 98, 102


O’Craven, K. M., 54 Rabipour, S., 88
O’Doherty, J. P., 166 Rahne, T., 43
Ogilvie, R. D., 117 Raichle, M. E., 5, 138
O’Hanlon, J. F., 16 Raizada, R. D., 13
Ojima, H., 13 Ramoser, H., 80
Orth, M., 118 Ramsey, C. S., 89
Oskarsson, P., 47 Ranganath, C., 99
Otten, L. J., 4 Rapp, D. N., 68
Owen, A.M., 14 Rauschenberger, R., 60
Rauscher, H., 118
Pack, A. I., 115–19 Rayner, K., 53
Palmer, S. E., 54, 61 Reason, J. T., 91, 95
Palva, J. M., 14 Redline, S., 116
Papassotiropoulos, A., 152 Rees, G., 82
Parasuraman, R., xiii, xxii–iv, 1, 3, 25, Regan, D., 11, 75
79, 82–3, 91, 93, 99, 144, 146–8, 150, Reinvang, I., 146
152–4, 156–7, 159, 161, 163, 178 Remington, R. W., 61
Parker, P., 20 Revonsuo, A., 31, 58
Parkhurst, D., 67 Reyner, L. A., 117, 119
Parks, P. D., 117 Reynolds, J. H., 57
Pascual-Leone, A, 3, 7, 28 Ridderinkhof, K. R., 97, 100, 102
Pearlson, G. D., 3, 6 Riley, V., 156
Peden, M., 114 Risner, S. R., 41
Peelen, M. V., 52 Ritter, S., 177
Perreira Da Silva, M. P., 68 Rizzo, M., xxiii, 1, 110, 119–20,
Petersen, S. E., 5, 73, 77–8 144
Pfurtscheller, G., 11, 80–1 Rizzolatti, G., 20, 22, 54, 138–9
Philip, P., 115 Robertson, I. H., 104–5
Phillips, J. M., 153 Rock, I., 54, 57–8, 61, 65
Pineda, J. A., 79 Rockland, K. S., 13
Pires, G., 79 Roda, C., 68
Pizzagalli, D., 138 Rodgers, B., 95, 98, 102
Poggio, T., 35 Rodrigues, R. N., 119
Poldrack, R. A., 146 Roehrs, T., 112, 116
Polt, J. M., 71 Roelfsema, P. R., 13, 32, 35–6, 55
Pomerantz, J. R., 63 Roge, J., 116
Pope, A. T., 75, 83 Roger, C., 100
Posner, M. I., 4, 15–6, 23, 51–4, 72–3, Rohenkohl, G., 23
87, 140, 144, 146–7, 152–3 Romei, V., 44
Praamstra, P., 73 Rosenbloom, P., 146
Pratte, M. S., 26 Rosetti, Y., 31
Proctor, R. W., 26, 34, 47, 51 Rosson, M.-B., 139
Proulx, M. J., 28, 45 Roth, T., 112
Ptito, M., 45 Rothbart, M. K., 4, 15–6
Rötting, M., 83
Qian, M., 85 Rovira, E., 157
Qin, Y., 166 Royal, D., 114
Quilter, R. E., 147 Rueda, M. R., 87

Russell, C. A., 163–4, 176, 178
Russell, R., 147
Saalman, Y. B., 52
Sagi, D., 36
Sahakian, B., 89
St. James, J. D., 54
St. Louis, E. K., 111–12
Salinsky, M. C., 117
Salvucci, D. D., 165
Salzer, Y., 42
Samar, V. J., 11
Sanders, M. M., xxi
Sanderson, P. M., 63
Sanfey, A. G., 19
Saper, C. B., 111
Sarter, N., 42
Sassani, A., 115
Sathian, K., 45
Saupe, K., 76
Saygin, A. P., 142
Scheibert, J., 41
Schenk, T., 31
Schmitz, T. W., 220
Schooler, J. W., 104
Schröger, E., 72
Schultheis, H., 3
Schultz, W., 107
Schupp, H. T., 29
Schwartz, A. B., 20
Schwarzkopf, D. S., 31
Senders, J. W., 66
Serences, J. T., 55
Sereno, M. I., 17
Sforza, E., 118
Shallice, T., 92
Sheinberg, D. L., 60
Shams, L., 13, 44
Sheridan, T., 66, 82
Shibata, K., 37
Shirani, A., 111–12
Shirtcliff, E. A., 153
Shneiderman, B., 130–40
Shore, D. I., 52
Shouse, M. N., 86
Shulman, G. L., 17, 52, 77
Siegel, M., 59, 74
Sillito, A. M., 32
Silver, M. A., 17
Simons, D. J., 58, 69
Simons, R. F., 95, 102
Singer, T., 138
Sinigaglia, C., 22
Sirevaag, E. J., 97
Skitka, L. J., 156
Smallwood, J., 104
Sminkey, L., 114
Smith, A. T., 56
Smith, E. E., 78
Smith, G. A., 95
Smith, S., 40
Snyder, A. Z., 5, 138
Sonuga-Barke, E. J., 105
Soon, C. S., 23
Souther, J., 60
Spence, C., 13, 43–4, 47
Spencer, A., 88
Sperandio, J. C., 82
Sperling, G., 51
Spiers, H. J., 82
Stanton, N., 2, 39, 82
Stark, L., 70–1
Steelman, K. S., 51, 66–7
Steinmetz, M. A., 52
Stefanucci, J. K., 26
Sterman, M. B., 87
Stetson, C., 141
Stevens, K. N., 137
Stocco, A., 163, 173
Stone, B. M., 112
Stoohs, R. A., 115
Strayer, D. L., 1, 147
Stroop, J. R., 93
Sturm, W., 16
Subramaniam, B., 53
Sugase, Y., 33
Summala, H., 114
Supèr, H., 33
Szalma, J. F., 150
Taatgen, N. A., 163, 165
Takeuchi, H., 6, 88
Tallon-Baudry, C., 11, 74, 105
Tamietto, M., 131
Tang, Y.-Y., 87
Tartaglia, E. M., 37
Tattersall, A. J., 84
Teran-Santos, J., 115

Thaler, R. H., 19
Theeuwes, J., 52, 70
Thompson, B., 37
Thompson, K., 119
Thompson, K. G., 52
Thut, G., 7, 105
Tippin, J., 110, 115
Toffanin, P., 1, 4, 69, 72, 76
Tong, F., 26, 30
Tononi, G., 75
Trafton, 165
Tregear, S. J., 115, 118
Treisman, A. M., 58–60
Treue, S., 55
Treves, A., 138
Trujillo, L. T., 99
Tsal, Y., 56
Tsushima, Y., 34
Tulving, E., 40
Turk-Browne, N. B., 4, 18
Turkington, P. M., 116, 118
Tversky, A., 19
Uithol, S., 20
Ullman, S., 16, 86
Ullsperger, M., 94, 96
Ungerleider, L. G., 77
Uttal, W. R., 4
Valdes-Sosa, M., 65
Vanderkolk, J. R., 36
van Dongen, H. P., 114, 116, 119
van Erp, J. B. F., 41–2, 104
van Essen, D. C., 29
van Maanen, L., 165
van Rijn, H., 163, 165, 177
van Schie, H. T., 103, 142–3
van Veen, H. C., 42, 98
van der Burg, E., 43
van der Helden, J., 107
Vandewalle, G., 112
Varela, F., 98
Varma, S., 165
Vashitz, G., 156
Venter, J. C., 4, 145
Vernon, D., 87
Verschure, B., 135
Verstraeten, E., 115–6
Ververs, P. M., 64
Vgontzas, A. N., 112
Vicente, K. J., 2
Vidal, F., 99–100
Vidal, J. J., 79
Vijayraghavan, S., 154
Villringer, A., 77
Vincent, J. L., 6
Vincenzi, D. A., 28
Vogel, E. K., 72
Von Economo, C., 11
Voss, M. W., 6
Vu, K.-P. L., 34, 47
Vuilleumier, P., 133
Wagner, T., 7
Walker, B. N., 39
Walsh, V., 3, 7
Walther, D., 66
Wandell, B. A., 53
Wang, C., 99
Ward, J., 28
Ward, L., 152
Warren, R. M., 40
Watanabe, T., 37
Watkins, S., 44
Watson, J. M., 147
Weaver, F. M., 24
Weaver, T. E., 118–19
Weiskopf, N., 82, 86
Weissman, D. H., 18, 105
Wertheimer, M., 54, 61
Westwood, D. A., 31
Whalen, P. J., 133
Wickens, C. D., 51, 61–6
Wiener, E. L., 82
Wiesel, T. N., 56
Wierwille, W. W., 122
Williams, C. E., 137
Williams, G. V., 156
Williams, L. G., 55
Wilmes, K., 16
Wilson, G. F., 3, 144, 163–4, 176, 178
Wohlschläger, A., 141
Wolfe, J. M., 52, 58–60, 66–7
Wolff, C., 72
Womelsdorf, T., 16–17, 57
Wong, M., 27
Wood, N. L., 56

Woodman, G. F., 72
Worden, M. S., 73
Wright, R. M., 152
Wulf, G., 93
Wunderlich, K., 166
Wyart, V., 105
Yamamoto, H., 116
Yamani, Y., 60
Yang, G.-Z., 49
Yantis, S., 16, 56, 60, 66
Yau, J. M., 45
Yeh, M., 61
Yetkin, O., 118
Yeung, N., 107
Yoo, J. J., 88
Young, M. S., 82
Young, T., 115
Zabetian, C. P., 151, 154, 156
Zajonc, R. B., 133
Zander, T. O., 80, 104
Zangaladze, A., 45
Zbrodoff, N. J., 94
Zelinsky, G. J., 60
Zhang, W., 55
Zihl, J., 76
Zimmerman, M. E., 118
