
Handbook of Human-Computer Interaction

M. Helander (ed.)
© Elsevier Science Publishers B.V. (North-Holland), 1988

Chapter 1

Cognitive Systems Engineering

D.D. Woods & E.M. Roth¹
Westinghouse Research & Development Center
Pittsburgh, Pennsylvania

¹ D. D. Woods is now at the Department of Industrial and Systems Engineering, Ohio State University, and E. M. Roth is at the Department of Engineering and Public Policy, Carnegie-Mellon University.

Contents

1.1 Introduction
1.2 What is Cognitive Engineering
1.3 The Cognitive System Triad
    Demand Characteristics of Problem Solving Habitats
    Mismatches in the Cognitive System Triad: Getting Lost
1.4 A Sample of Critical Issues in Cognitive Engineering
    What is Expertise and Skill
    Exploration Training
    Human Error and Person-Machine Mismatches
    Brittle Problem Solvers and Unexpected Variability
1.5 Towards Effective Decision Support
    What is Good Advice?
    Cognitive Tools
    Conceptualization Aids
1.6 External Representations and Human Problem Solving
    Fixed and Adaptive Collections
    Analogical Representations
    Integral Displays
    Multiple Representations
    A Case in Representation Design
1.7 Summary
1.8 References

1.1 Introduction

Why is there talk of a cognitive engineering? Most simply put, both opportunity and need have contributed. But what is cognitive engineering? What should it be? What can it do? What problems does it confront? We will explore these questions in this chapter. As with any nascent and interdisciplinary field, there can be very different perspectives about what it is and how it will develop over time. You will find other perspectives in some of the other chapters in this handbook. The most complete and coherent expression of the possible content and scope of a field of cognitive engineering is Jens Rasmussen's Information Processing and Human-Machine Interaction: An Approach to Cognitive Engineering. While it is only a single perspective, its comprehensive treatment of a large range of important issues - knowledge representation, cognitive analysis, human error, and aiding problem solving - makes it the best place to begin to understand the field. Another single volume that encompasses a large amount of the territory of cognitive engineering is Norman & Draper's User Centered System Design.

Interestingly, the same phenomenon has produced both the opportunity and the need for a cognitive engineering. With the rapid advances and dramatic reductions in the cost of computational power, computers have become ubiquitous in modern life. In addition to traditional office applications (e.g., word-processing; accounting; information systems), computers increasingly dominate a broad range of work environments (e.g., industrial process control; air traffic control; hospital emergency rooms; robotic factories). The need for a cognitive engineering occurs because the introduction of computerization often radically changes the work environment and the cognitive demands placed on the worker. For example, increased automation in process control applications has resulted in a shift in the human role from a controller to a supervisor, who monitors and manages semi-autonomous resources. While this change reduces people's physical workload, mental load often increases as the human role emphasizes monitoring and compensating for failures. Thus, computerization creates a larger and larger world of cognitive tasks to be performed. More and more, we create or design cognitive environments.

The opportunity for a cognitive engineering arises because computational technology also offers new kinds and degrees of machine power that greatly expand the potential to assist and augment human cognitive activities in complex problem solving worlds, e.g., monitoring, problem formulation, plan generation and adaptation, fault management. This is a highly creative time where people are exploring and testing what can be created with the new machine power - displays with multiple windows and even "rooms" (Henderson and Card, 1986). The new capabilities have led to large amounts of activity on building new and more powerful tools - how to build better performing machine problem solvers. The question that we continue to face is how we should deploy the power available through new capabilities for tool building to assist human performance, i.e., the problem of how to provide intelligent, or more properly, effective decision support (IDS). The application of these tools creates new challenges about how to "couple" human intelligence and machine power in a single integrated system that maximizes overall performance.

The capability to build more powerful machines does not in itself guarantee effective performance, as witnessed by early attempts to develop computerized alarm systems in process control (Pope, 1978) and attempts to convert paper-based procedures to a computerized form (Elm & Woods, 1985). The conditions under which the machine will be exercised and the human's role in problem solving affect the quality of performance. This means that factors related to tool usage can and should affect the very nature of the tools to be used. This observation is not new - in actual work contexts, performance breakdowns have been observed repeatedly with support systems, constructed in a variety of media and technologies including current AI tools, when issues of tool use were not considered (cf., Roth, Bennett & Woods, 1987). This is the dark side: the capability to do more amplifies the potential magnitude of both our successes and our failures. Careful examination of past shifts in technology reveals that new difficulties (new types of errors or accidents) are created when the shift in machine power has changed the entire human-machine system in unforeseen ways (e.g., Hoogovens Report, 1976; Noble, 1984; Hirschhorn, 1984; Wiener, 1985).

The problem of providing effective decision support will hinge on how the designer decides what will be useful in a particular application, that is, a problem-driven, rather than a technology-driven, approach where the requirements and bottlenecks in cognitive task performance drive the development of tools to support the human problem solver. Can researchers provide designers with concepts and techniques to determine what will be useful, or are we condemned to simply build what can be built practically and wait for the judgement of experience? Is principle-driven design possible?

While our ability to build more powerful machine cognitive systems has grown and promulgated rapidly, our ability to understand how to use these capabilities has not kept pace. Today we can describe cognitive tools in terms of the tool building technologies (e.g., tiled or overlapping windows). The impediment to systematic provision of effective decision support is the lack of an adequate cognitive language of description (Rasmussen, 1986a; Clancey, 1985). What are the cognitive implications of some application's task demands and of the aids and interfaces available to the people in that domain? How do people behave/perform in the cognitive situations defined by these demands and tools? Because this independent cognitive description has been missing, an uneasy mixture of other types of description of a complex situation has been substituted - descriptions in terms of the application itself (e.g., internal medicine or power plant thermodynamics), descriptions in terms of the implementation technology of the interfaces/aids, descriptions in terms of the user's physical activities or user psychometrics. This view of cognitive technology as complementary to computational technology is in stark contrast to another view which defines the former as the handmaiden of the latter - acquiring the knowledge fuel necessary to run the computational engines of today and tomorrow.

Too often, research questions are posed and generalizations attempted in terms of the language of the technology of the day. Examples include the countless studies that compare menu vs. command language interfaces, or the more recent debate over the relative merits of tiled vs. overlapping window systems (e.g., Bly & Rosenberg, 1986). Any guidance generated by this approach is bound to be outpaced by advances in technology and is consequently doomed to obsolescence. For example, context free comparisons of menus vs. command languages provide little insight or guidance on the use of "soft keys" or windows.
One of the main tenets of cognitive systems engineering is that for broad applicability, generalizations need to be expressed at the level of a cognitive language (Hollnagel & Woods, 1983). Tasks are analyzed with respect to cognitive processing requirements (Card, Moran, & Newell, 1983; Kieras & Polson, 1985; Johnson & Payne, 1985). For example, the question of menus versus command languages or different menu systems is reformulated into questions about memory requirements and questions about competition between cognitive activity devoted to option selection versus cognitive activity devoted to carrying out domain tasks. Can the person see all currently available or meaningful options, or must he/she recall them or where to find them from memory? Are options and domain task information presented in parallel, or do options replace domain data? This allows conclusions to be drawn that have direct applicability to the evaluation and design of systems that employ different implementation technology to perform the same cognitive function.
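As a rough illustration of this reformulation (ours, not the chapter's; the type and field names below are hypothetical), the same implementation-independent questions can be asked of a menu system, a command language, or a soft-key design:

```python
# Hypothetical sketch: reframing "menus vs. command languages" as questions
# about the cognitive function a design serves, independent of its technology.
from dataclasses import dataclass

@dataclass
class DesignDescription:
    options_visible_in_context: bool    # can currently meaningful options be seen, or must they be recalled?
    options_replace_domain_data: bool   # does presenting options displace domain task information?

def cognitive_requirements(design: DesignDescription) -> list[str]:
    """List the memory/attention demands a design places on the practitioner."""
    demands = []
    if not design.options_visible_in_context:
        demands.append("recall available options, or where to find them, from memory")
    if design.options_replace_domain_data:
        demands.append("hold domain data in memory while selecting among options")
    return demands

# The same analysis applies whatever the implementation technology.
command_language = DesignDescription(options_visible_in_context=False,
                                     options_replace_domain_data=False)
full_screen_menu = DesignDescription(options_visible_in_context=True,
                                     options_replace_domain_data=True)
print(cognitive_requirements(command_language))
print(cognitive_requirements(full_screen_menu))
```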
1.2 What is Cognitive Engineering

There has been growing recognition of this need to develop an applied cognitive science that draws on knowledge and techniques of cognitive psychology and related disciplines to provide the basis for principle-driven design (Norman, 1981b; Newell & Card, 1985; Brown & Newman, 1985). In this section, we will examine some of the characteristics of cognitive technologies or cognitive engineering or whatever moniker you prefer (cognitive factors, cognitive ergonomics, knowledge engineering). The perspective for this exposition is that of cognitive systems engineering (Hollnagel & Woods, 1983; Woods, 1986).

Cognitive engineering is about human behavior in complex worlds. Studying human behavior in complex worlds (and designing support systems) is one case of people engaged in problem solving in a complex world, analogous to the task of other human problem solvers (e.g., operators, troubleshooters) who confront complexity in the course of their daily tasks. Not surprisingly, the strategies researchers and designers use to cope with complexity are similar as well. For example, bounding the world to be considered is a standard tactic to manage complexity. Thus, one might address only a single time slice of a dynamic process or only a subset of the interconnections between parts. This strategy is limited because it is not clear whether the relevant aspects of the whole have been captured. First, parts of the problem solving process may be missed or their importance underestimated and, second, some aspects of problem solving may only emerge when more complex situations are directly examined. For example, the role of problem formulation and re-formulation in effective performance is often overlooked. Reducing the complexity of design or research questions by bounding the world to be considered merely displaces the complexity to the person in the operational world, rather than providing a strategy to cope with the true complexity of the actual problem solving context. It is one major source of failure in IDS. For example, the designer of a machine problem solver may assume that only one failure is possible in order to be able to completely enumerate possible solutions and to make use of classification problem solving techniques (Clancey, 1985); however, the actual problem solver must cope with the possibility of multiple failures, misleading signals, interacting disturbances (e.g., Pople, 1985; Woods & Roth, 1986). The result is that we need, particularly in this time of advancing machine power, to understand human behavior in complex situations. What makes problem solving complex? How does complexity affect the performance of human and of machine problem solvers? How can problem solving performance in complex worlds be improved and deficiencies avoided? (See Dorner, 1983; Selfridge et al., 1984; Montmollin & De Keyser, 1985; Rasmussen, 1986a; Fischhoff et al., 1986; Klein et al., 1986 for other discussions on the nature of complexity in problem solving.) Understanding the factors that produce complexity, the cognitive demands that they create, and some of the cognitive failure forms that emerge when these demands are not met is essential if advances in machine power are to lead to new cognitive tools that actually enhance problem solving performance.
Cognitive engineering is ecological. It is about multidimensional, open worlds and not about the artificially bounded closed worlds typical of the laboratory or the engineer's desktop (e.g., Funder, 1987)². An example of the ecological perspective is the need to study humans solving problems with tools (i.e., support systems) as opposed to laboratory research which continues, for the most part, to examine human performance stripped of any tools. Cognitive engineering is the sine qua non for this - how to put effective cognitive tools into the hands of practitioners. From this viewpoint, quite a lot could be learned from examining the nature of the tools that people spontaneously create to work more effectively in some problem solving environment, or examining how pre-existing mechanisms are adapted to serve as tools as occurred in Roth et al. (1987), or examining how tools provided for a practitioner are really put to use by practitioners. The studies by De Keyser (1986) are extremely unique with respect to the latter.

² Of course virtually all of the worlds that we might be interested in are man-made. The point is that these worlds encompass more than the design intent - they exist in the world.

In reducing the target world to a tractable laboratory or desktop world in search of precise results, we run the risk of eliminating the critical features of the world that drive behavior. This creates the problem of deciding what counts as an effective stimulus (as Gibson has pointed out in ecological perception) or, to use an alternative terminology, deciding what counts for a symbol. To decide this question, Gibson (1979) and Dennett (1982), among others, have pointed out the need for a semantic and pragmatic analysis of environment-cognitive agent relationships with respect to the goals/resources of the agent and the demands/constraints in the environment. As a result, one has to pay very close attention to what people actually do in a problem solving world, given the actual demands that they face (Woods, Roth & Pople, 1987). Principle-driven design of support systems begins with understanding what are the difficult aspects of a problem solving situation (e.g., Rasmussen, 1986a; Woods & Hollnagel, 1987).

A corollary to the above is that cognitive engineering must address the contents or semantics of a domain (e.g., Coombs, 1986). Purely syntactic and exclusively tool-driven approaches to develop support systems are vulnerable to the error of the third kind - solving the wrong problem. The danger is to fall into the psychologist's fallacy of William James, where the psychologist's reality is confused with the psychological reality of the human practitioner in his or her problem solving world. To guard against this danger, the psychologist or cognitive engineer must start with the working assumption that practitioner behavior is reasonable and attempt to understand how this behavior copes with the demands and constraints imposed by the problem solving world in question. For example, the initial introduction of computerized alarm systems into power plant control rooms inadvertently undermined the strategies operational personnel used to cope with some problem solving demands so badly that it had to be removed and the previous kind of alarm system restored. The question is not why people failed to accept a useful technology, but rather how the original alarm system supported and the new system failed to support operator strategies to cope with the world's problem solving demands. This is not to say that the current strategies are optimal or produce acceptable levels of performance, but only that understanding how they function in the current cognitive environment is a starting point to develop truly effective support systems.

Semantic approaches, on the other hand, are vulnerable to myopia. If each world is seen as completely unique and must be investigated 'tabula rasa,' then cognitive engineering can be no more than a set of techniques that are used to investigate every world anew. If this were the case, it would impose strong practical constraints on principle-driven development of support systems, restricting it to cases where the consequences of poor performance are extremely high. To achieve relevance to specific worlds and generalizability across worlds, the cognitive language must be able to escape the language of particular worlds, as well as the language of particular computational mechanisms, and identify pragmatic reasoning situations, after Cheng & Holyoak (1985) and Cheng et al. (1986). These reasoning situations are abstract relative to the language of the particular application in question and therefore transportable across worlds, but they are also pragmatic because the reasoning involves knowledge of the things being reasoned about. More ambitious are attempts to build a formal cognitive language, for example, Coombs & Hartley (1987) through their work on coherence in model generative reasoning.

Cognitive engineering is not just about the contents of a world; it is about changing behavior/performance in that world. This is both a practical consideration - improving performance or reducing errors justifies the investment from the point of view of the world in question - and a theoretical consideration - the ability to produce practical changes in performance is the criterion for demonstrating an understanding of the factors involved. Basic concepts are only confirmed when they generate treatments (aiding either online or offline) that make a difference in the target world. Cheng's concepts about human deductive reasoning (Cheng & Holyoak, 1985; Cheng et al., 1986) generated treatments that produced very large performance changes both absolutely and relative to the history of rather ineffectual alternative treatments to human biases in deductive reasoning.
Cognitive engineering is about systems. One source of tremendous confusion has been an inability to clearly define the "systems" of interest. From one point of view the computer program being executed is the end application of concern. In this case, one often speaks of the interface, the tasks performed within the syntax of the interface, and human users of the interface. Notice that the application world (what the interface is used for) is de-emphasized. The bulk of work on human-computer interaction takes this perspective. Issues of concern include designing for learnability (e.g., Brown & Newman, 1985; Carroll & Carrithers, 1984; Kieras & Polson, 1985), and designing for ease and pleasurableness of use (Norman, 1983; Malone, 1983; Shneiderman, 1984; 1986). Many of the chapters in this volume are directed at these concerns.

A second perspective is to distinguish the interface from the application world (Miyata & Norman, 1986; Stefik, Foster, Bobrow, Kahn, Lanning, & Suchman, 1985; Rasmussen, 1986a; Hollnagel, Mancini & Woods, 1986; Mancini, Woods & Hollnagel, in press). For example, text editing tasks are performed only in some larger context such as transcription, data entry, or composition; or, one must know about air travel in order to design or understand an airline reservation interface. The interface is an external representation of an application world, that is, a medium through which agents come to know and act on the world - troubleshooting electronic devices (Davis, 1983), logistic maintenance systems, managing data communication networks, managing power distribution networks, medical diagnosis (Gadd & Pople, 1987; Cohen et al., 1987), aircraft and helicopter flightdecks (Pew et al., 1986), air traffic control systems, process control accident response (Woods, Roth & Pople, 1987), command and control of a battlefield (Fischhoff et al., 1986). Tasks are properties of the world in question, although performance of these fundamental tasks (i.e., demands) is affected by the design of the external representation (e.g., Mitchell & Saisi, 1987). The human is not a passive user of a computer program, but is an active problem solver in some world. Therefore, we will generally refer to people as domain agents or actors or problem solvers and not as users. In this view, one can generalize across application worlds via the basic cognitive demands. For example, Rasmussen (1986b) brings a single set of cognitive engineering concepts (from Rasmussen, 1986a) to bear on five different worlds - process control, emergency management, CAD/CAM, office systems, library systems.

In part, the difference in the above two views can be traced to differences in the cognitive complexity of the domain task being supported. Research on person-computer interaction has typically dealt with office applications (e.g., word-processors for document preparation or copying machines for duplicating material) where the goals to be accomplished (e.g., replace word 1 with word 2) and the steps required to accomplish them are relatively straightforward. These applications fall at one extreme of the cognitive complexity space. In contrast, there are many decision making and supervisory environments (e.g., military situation assessment; medical diagnosis) where problem formulation, situation assessment, goal definition, plan generation, and plan monitoring and adaptation are significantly more complex. It is in designing interfaces and aids for these applications where it is essential to distinguish the world to be acted on, from the interface or window on the world (how one comes to know that world), and from agents who can act directly or indirectly on the world.

A large number of the worlds that cognitive engineering should be able to address contain multiple agents who can act on the world in question (e.g., command and control, process control, data communication networks). Not only do we need to be clear about where systemic boundaries are drawn with respect to the application world and interfaces to or representations of the world, we also need to be clear about the different agents who can act directly or indirectly on the world. Cognitive engineering must be able to address systems with multiple cognitive agents. This applies to multiple human cognitive systems (often called distributed decision making, e.g., Schum, 1980; Einhorn & Hogarth, 1985; Fischhoff et al., 1986; Fischhoff, 1986; March & Weisinger-Baylon, 1986). It also applies to joint human-machine cognitive systems because changes in computational power have increased the frequency of partially autonomous machine agents (Woods, 1986). When a system includes these machine agents, the human role is not eliminated, but shifted. This means that changes in automation are changes in the joint human-machine cognitive system. Design and research then should center on what are effective architectures of multiple cognitive agents (Sheridan & Hennessy, 1984; Sorkin & Woods, 1985; Woods, Roth & Bennett, in press; Sorkin et al., in press; Sheridan, 1988).
Cognitive engineering is problem-driven, tool-constrained. This means cognitive engineering must be able to analyze a problem solving context and understand the sources of both good and poor performance, i.e., the cognitive problems to be solved or challenges to be met (e.g., Rasmussen, 1986a; Woods & Hollnagel, 1987). The results from this analysis are used to define the kind of solutions that are needed to enhance successful performance - to meet cognitive demands of the world, to help the human function more expertly, to eliminate or mitigate error prone points in the total cognitive system (demand-resource mismatches). The results of this process then can be deployed in many possible ways as constrained by tool building limitations and tool building possibilities - exploration training worlds, new information, representation aids, advisory systems, or machine problem solvers. The cognitive system approach can address existing cognitive systems in order to identify deficiencies that cognitive system redesign can correct, and prospective cognitive systems as a design tool during the allocation of cognitive tasks and the development of an effective joint architecture (e.g., Pew et al., 1986).

1.3 The Cognitive System Triad

One can think of problem solving situations in terms of interactions among a set of three mutually constrained factors (Figure 1): the world to be acted on, the agent or agents who act on the world, and the external representations through which the agent experiences that world. Each of these factors contributes to the performance of the agent in the relevant domain (cf., von Winterfeldt & Edwards, 1986, pp. 669-677 for a particularly engaging discussion of the interaction of these factors). When we better understand the interactions among these factors in a broad sample of natural problem solving habitats, we will be better able to support and improve performance, i.e., establish a flourishing cognitive engineering.

Starting from the apex of the world itself, the characteristics of that world contribute various kinds of cognitive demands that must be handled to adequately perform domain tasks. There are four dimensions of problem solving worlds that define their cognitive demands: dynamism, the number of parts and the extensiveness of interconnections between the parts or variables, uncertainty, and risk (Woods, 1988). The position of a domain along these dimensions determines the cognitive demands and the cognitive situations that problem solvers can face in the world in question. These demands can strongly affect what are effective or sensible reasoning strategies to adopt and the cognitive failure forms that will occur. The difficulty of meeting these demands varies depending on the processing characteristics of the relevant cognitive agents, the architecture of the various cognitive agents (Sorkin & Woods, 1985; Fischhoff et al., 1986; Roth et al., in press; Muir, in press) and on the representation of the world that is provided to the problem solving agent (Rasmussen & Lind, 1981; Rasmussen, 1986).

The external representation apex captures the fact that a particular problem representation affects problem solving by making certain information or manipulations of information explicit at the expense of other information or manipulations which are pushed into the background (e.g., Janvier, 1987). A simple example is notational systems such as numeral systems - try multiplication and division in the Roman numeral system. The effect of a representation then depends on the cognitive demands imposed by the world and on the processing characteristics of the relevant cognitive agent.
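To make the numeral-system example concrete, here is a minimal sketch (ours, not the chapter's): the product is trivial to compute in Arabic notation, while the Roman representation pushes the needed manipulations into the background, so the practical strategy is to translate into a representation that makes them explicit, operate, and translate back.

```python
# Minimal sketch: multiplying Roman numerals by detouring through a
# representation (integers) in which the manipulation is easy.
ROMAN = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"), (100, "C"),
         (90, "XC"), (50, "L"), (40, "XL"), (10, "X"), (9, "IX"),
         (5, "V"), (4, "IV"), (1, "I")]

def to_roman(n: int) -> str:
    out = []
    for value, numeral in ROMAN:
        while n >= value:
            out.append(numeral)
            n -= value
    return "".join(out)

def from_roman(s: str) -> int:
    n, i = 0, 0
    for value, numeral in ROMAN:
        while s.startswith(numeral, i):
            n += value
            i += len(numeral)
    return n

def roman_multiply(a: str, b: str) -> str:
    # The representation forces the detour: convert, multiply, convert back.
    return to_roman(from_roman(a) * from_roman(b))

print(roman_multiply("XIV", "XII"))   # CLXVIII  (14 * 12 = 168)
```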
The third apex addresses the problem solver or solvers themselves - their processing resources and the architecture of multiple agents. What mechanisms and resources affect monitoring the behavior of the world over time, recognizing undesirable situations, explaining unexpected behavior, selecting or adapting responses to cope with these situations (cf., Woods et al., 1987)?

One example of the interaction of the three apices is Fischhoff et al. (1986), who discuss how risk interacts strongly with multi-agent architectures to influence the level of risk acceptance of the joint cognitive system and therefore the quality of problem solving behavior. Furthermore, they show how changes in the power and scope of centralized information systems (a change on the representation apex) can change the distributed decision architecture and strongly affect risk taking behavior, i.e., the decision criteria on the willingness to innovate or depart from doctrine at various levels of a hierarchical organization.

Errors emerge when there is a mismatch among the elements of the triad in a kind of demand-resource relation. For example, a narrow field of attention (low resources) can lead to errors only if the world in question produces situations where a wide field of view is needed for timely detection of important system behaviors.

Demand Characteristics of Problem Solving Habitats

To briefly illustrate the cognitive triad as a heuristic to help establish an independent cognitive description of a world, let us consider the factors that modulate a world's cognitive demands (cf., Woods, Roth & Pople, 1987 for a description of an AI-based modeling environment where variations along the three apices of the cognitive triad can be simulated to investigate the potential for cognitive failures in particular worlds).
[Figure 1: Factors that contribute to the complexity and difficulty of problem solving (from Woods, 1988). The figure depicts the cognitive system triad as three apices: WORLD (dynamism; many, highly interconnected parts; uncertainty; risk), REPRESENTATION (collection: fixed or adaptive; integration: computational or analogical), and AGENT (multiple agents; joint cognitive systems).]

When a world is dynamic, problem solving incidents unfold in time and are event-driven, that is, events can happen at indeterminate times and the nature of the problem to be solved can change (e.g., multiple failures). The result is cognitive demands associated with anticipation or prediction of the behavior of the world and the need to be able to revise one's assessment of the state of the world and therefore one's tactical or strategic response (cf., Montmollin & De Keyser, 1985 for one treatment of how dynamism affects cognitive demands). Failures to revise are one source of fixation errors (Woods et al., 1987).

When a world is made up of a large number of highly interconnected parts (cf., Perrow, 1984; Rasmussen, 1986; Dorner, 1983; Woods & Hollnagel, 1987), one failure can have multiple consequences (produce multiple disturbances); a disturbance could be due to multiple potential causes and can have multiple potential fixes; there can be multiple relevant goals which can compete with or constrain each other; there can be multiple ongoing tasks at different time spans. In addition, the parts of the world can be complex objects in their own right. One typical error form is failures to consider side effects, requirements, or postconditions (Dorner, 1983). Another problem solving error that occurs when a world is high on this dimension of complexity (particularly when it is simplified to cope with the complexity) is to mistake one factor related to the state of the world as the single explanation for that state - premature localization. This becomes an error when the attribution to a single factor delays or prevents identification of the set of factors that actually contribute to the observed situation (Bechtel, 1982).

When data are uncertain, an inferential process is needed to go from data to answers about the state of the world. When uncertainty is high, some data always fail to fit together into the correct assessment due to red herrings, sensor failures, human reported data, perceptual judgements, irrelevant factors, or multiple failures. Not only does the inferential value of different data vary, the inferential value of a single set of data can vary with context. Furthermore, data gathering to reduce uncertainty can become necessary and can interact with effort and risk. A typical cognitive failure form is over-reliance on familiar signs (Rasmussen, 1986). See Cohen et al. (1987), Coombs & Hartley (1987), Schum (1980), Garbolino (1987) and Dubois (1987) for some treatments of different aspects of uncertainty and its consequences for problem solving demands and activities.
When there is risk, possible outcomes of choices can have large costs. The presence of risk means that one must be concerned with the rare but catastrophic situations as well as with more frequent but less costly situations. When uncertainty is coupled with risk, situations of choice under uncertainty and risk arise. Current knowledge about human performance in risky decision making indicates that generalizations from research with non-risky tasks to risky tasks must be made very cautiously.

To illustrate how interactions among these dimensions affect cognitive demands, consider a world that has high evidential uncertainty. When a world is both uncertain and dynamic, strategies for acquiring data become important. All evidence is not available at once because it comes in over time or because it must be actively acquired with associated costs (effort and risk). As a result, situation assessment and evidence gathering interact, e.g., the information value of the data to be collected can interact with the effort that must be expended in order to collect it (e.g., Moray, 1984; Johnson & Payne, 1985), especially when there is a high workload. Uncertainty and risk interact when data gathering can be an action in the world of interest with attendant consequences for parts of the world. For example, medical tests can have negative consequences that must be balanced with gain of information (Cohen et al., 1987), or obtaining data from a location involves danger such as radiation exposure or fire (Klein et al., 1986). A common strategy in research and in building machine experts (Mycin) is to consider situation assessment/diagnosis independent from data gathering. However, this simplification obscures the interaction between these two and the cognitive demands generated by that interaction when evidence comes in over time or has associated costs. It is one example of failing to take the perspective of the problem solver in the situation when mapping cognitive demands.

The need to know when and where to look for evidence results in the demand for control of attention - the need to focus in on the significant subset of data for the current problem context, given a large amount of potentially relevant data (Woods & Roth, 1986; Woods, Roth & Pople, 1987). In this case, the problem solver must decide what data are relevant to consider in determining a solution; thus, it is part of problem formulation. A large category of errors in worlds high on the dimensions of complexity can be described as failures of attention (Woods, 1984a), in that, from hindsight, data from which the solution could have been extracted were available but were not attended to or looked for at the right time or in conjunction with the appropriate set of data, given an erroneous assessment of the situation or an erroneous approach to the utilization of evidence. From the point of view of problem formulation, these attention failures are seen as errors of solving the wrong problem. Examples of breakdowns in this cognitive demand include disturbance management in process control accidents with conventional alarm systems (e.g., Lees, 1983; Woods, Elm & Easter, 1986) and intelligence failures in military history (e.g., Shlaim, 1976).

Mismatches in the Cognitive System Triad: Getting Lost

Mismatches among the elements of the triad produce global human-machine performance problems even when the "knobs and dials" level of human factors is well handled. One example of this in human-computer interaction is the getting lost phenomenon in large display networks (Woods, 1984b).

In a particularly pointed case, a data-base system was designed to computerize paper-based instructions for nuclear power plant emergency operation (Elm & Woods, 1985). The system was built based on a human-computer interface "shell"; that is, it was possible to create the computerized system simply by entering domain knowledge (i.e., the current instructions as implemented for the paper medium) into the interface and network data base framework provided by the shell.

The system contained two kinds of frames: menu frames that served to point to other frames, and content frames that contained instructions from the paper procedures (generally one procedure step per frame). In preliminary tests of the system, it was found that people uniformly failed to complete recovery tasks with procedures computerized in this way. They became disoriented or "lost": unable to keep procedure steps in pace with plant behavior, unable to determine where they were in the network of frames, unable to decide where to go next, or unable even to find places where they knew they should be (i.e., they diagnosed the situation, knew the appropriate responses as trained operators, yet could not find the relevant procedural steps in the network). These results were found with people experienced with the paper-based procedures and plant operations as well as with people knowledgeable in the frame-network software package and how the procedures were implemented within it.
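A minimal sketch of the kind of frame network described above (the structure, names, and step text are invented, not the actual system): with a high ratio of menu frames to content frames, every procedure step sits behind a chain of menu traversals, and nothing in the structure itself records where the operator has been or what remains pending.

```python
# Hypothetical sketch of a menu-frame / content-frame network.
from dataclasses import dataclass, field

@dataclass
class Frame:
    name: str
    kind: str                                        # "menu" or "content"
    step_text: str = ""                              # one procedure step, for content frames
    links: list[str] = field(default_factory=list)   # frames a menu frame points to

network = {
    "top":             Frame("top", "menu", links=["loss-of-coolant", "loss-of-power"]),
    "loss-of-coolant": Frame("loss-of-coolant", "menu", links=["step-1", "step-2"]),
    "loss-of-power":   Frame("loss-of-power", "menu", links=[]),
    "step-1":          Frame("step-1", "content", step_text="Verify safety systems actuated."),
    "step-2":          Frame("step-2", "content", step_text="Check containment conditions."),
}

def hops_to(target: str, start: str = "top") -> int:
    """Count the menu traversals needed to reach a frame (breadth-first search)."""
    frontier, depth = [start], 0
    while frontier:
        if target in frontier:
            return depth
        frontier = [link for name in frontier for link in network[name].links]
        depth += 1
    raise ValueError(f"{target} not reachable")

print(hops_to("step-2"))   # every content frame is reached only through menu frames
```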
What was the source of the disorientation problem? It resulted from a failure to analyze the cognitive demands associated with using procedures in an externally-paced world. For example, in using the procedures the operator often is required to interrupt one activity and transition to another step in the procedure or to a different procedure depending on plant conditions and plant responses to operator actions. As a result, operators need to be able to rapidly transition across procedure boundaries and to return back to incomplete steps. Because of the size of a frame, there was a very high proportion of menu frames relative to content frames, and the content frames provided a narrow window on the world. This structure made it difficult to read ahead to anticipate instructions, to mark steps pending completion and return to them easily, to see the organization of the steps, or to mark a "trail" of activities carried out during the recovery. Many activities that are inherently easy to perform in a physical book turned out to be very difficult to carry out, e.g., read ahead. The result was a mismatch between user information processing activities during domain tasks and the structure of the interface as a representation of the world of recovery from abnormalities.

These results triggered a full design cycle that began with a cognitive analysis to determine the user information handling activities needed to effectively accomplish recovery tasks in emergency situations. Following procedures was not simply a matter of linear, step by step execution of instructions; rather it required the ability to maintain a broad context of the purpose and relationships among the elements in the procedure (cf. Brown et al., 1982; Roth et al., 1987). Operators needed to maintain awareness of the global context (i.e., how a given step fits into the overall plan), to anticipate the need for actions by looking ahead, and to monitor for changes in plant state that would require adaptation of the current response plan.

A variety of cognitive engineering techniques were utilized in a new interface design to support the demands. First, a spatial metaphor was used to make the system more like a physical book. Second, display selection/movement options were presented in parallel, rather than sequentially, with procedural information (defining two types of windows in the interface). Transition options were presented at several grains of analysis to support moves from step to step as easily as moves across larger units in the structure of the procedure system. In addition, incomplete steps were automatically tracked and those steps were made directly accessible (e.g., electronic bookmarks or placeholders).

To provide the global context within which the current procedure step occurs, the step of interest is presented in detail, and it is embedded in a skeletal structure of the larger response plan of which it is a part (Woods, 1984b; Furnas, 1986). Context sensitivity was supported by displaying the rules for possible adaptation or shifts in the current response plan that are relevant to the current context, in parallel with current relevant options and the current region of interest in the procedures (a third window). Note how the cognitive analysis of the domain defined what types of data needed to be seen effectively in parallel, which then determined the number of windows required. Also note that the cognitive engineering re-design was, with a few exceptions, directly implementable within the base capabilities of the interface shell.
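One way to picture the "electronic bookmarks" part of the redesign (a hypothetical sketch under our own naming, not the actual implementation): the interface state carries the current region of interest plus an explicit list of interrupted steps, so an incomplete step can be returned to directly instead of being re-found through menu navigation.

```python
# Hypothetical sketch: tracking incomplete steps so they stay directly accessible.
from dataclasses import dataclass, field

@dataclass
class ProcedureSession:
    current_step: str
    pending: list[str] = field(default_factory=list)   # interrupted, not yet complete

    def interrupt_and_go_to(self, new_step: str) -> None:
        # Plant conditions force a transition; remember the unfinished step.
        self.pending.append(self.current_step)
        self.current_step = new_step

    def resume_pending(self) -> None:
        # Jump straight back to the most recently interrupted step.
        if self.pending:
            self.current_step = self.pending.pop()

session = ProcedureSession(current_step="Procedure A, step 4")
session.interrupt_and_go_to("Procedure B, step 1")   # transition across procedure boundaries
session.resume_pending()                             # return to the incomplete step
print(session.current_step)                          # Procedure A, step 4
```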
As the example above saliently demonstrates, there can be severe penalties for failing to adequately map the cognitive demands of the environment. However, if we understand the cognitive requirements imposed by the domain, then a variety of techniques can be employed to build support systems for those functions. The remainder of this chapter will address a sample of critical issues in cognitive engineering. The topics include what is expertise and skill, the problem of human error, the problem of brittle problem solvers, what is good advice, cognitive tools, and representation aiding. The topics are not meant to be exhaustive of the scope of cognitive engineering; rather they represent a range of the research activities that are relevant to the problem of improving human performance.

1.4 A Sample of Critical Issues in Cognitive Engineering

What is Expertise and Skill

One of the major challenges in cognitive engineering is being able to specify the basis for expert performance (Gitomer & Glaser, 1987). First, there is a need for a theory of cognitive task performance that illuminates the cognitive skills and the nature of the internal representation of the domain which contribute to proficient performance (Gitomer & Glaser, 1987). Second, there is a need to specify how best to deliver that knowledge in an operational environment. Knowledge can be deployed internal to people (inherent capabilities, education, training, experience) or in external form (encoded in different media and organized for retrieval in different ways), and the knowledge can be available during task performance (via memory, another person, or a knowledge delivery system) or on call (e.g., a human specialist). Third, there is a need for pedagogical theory for the development of expert knowledge and skill.
There is an extensive body of research in cognitive science that examines differences between experts and novices (cf., Lesgold, 1984; Chi, Glaser & Farr, 1986). Studies have been conducted in a wide variety of domains including chess playing (Chase & Simon, 1973; DeGroot, 1965), physics problem-solving (Chi, Feltovich & Glaser, 1981; Chi, Glaser & Rees, 1982; Larkin, McDermott, & Simon, 1980; McCloskey, Caramazza, & Green, 1980), medical and clinical diagnosis (Lesgold, Rubinson, Feltovich, Glaser, Klopfer, & Wang, 1986; Kuipers & Kassirer, 1984; Cantor, Smith, French & Mezzich, 1980), computer programming (Adelson, 1984), electronics (Egan & Schwartz, 1979) and process control. The results of this research have uniformly shown that experts perform better due to differences in the nature and organization of specific domain knowledge as opposed to more global differences in problem solving skills.

Experts possess richer domain knowledge with more structure and interconnections. Their knowledge tends to be organized both taxonomically, in terms of classes and subclasses of events (e.g., kinds of diseases), and in relation to theories and causal models within the domain. This allows them to identify a productive problem representation more readily, leading to more efficient solution. For example, physicists classify mechanics problems according to the underlying laws and principles that apply, whereas physics students focus on surface similarities of problems (Chi, Glaser & Rees, 1982). Similar results are obtained with programmers (Adelson, 1981) and electronics technicians (Egan & Schwartz, 1979).

The different organization of knowledge in experts not only leads to more efficient solution of routine problems, but also provides for more flexibility in problem representation and solution strategy. One of the hallmarks of skilled performance is flexibility, e.g., being able to handle special cases or exceptions and to use reasoning shortcuts or more thorough processing in different contexts (Clancey, 1985; Lesgold et al., 1986). Part of cognitive skill is the ability to recognize when assumptions for routine processing are violated and when to switch to other more elaborated processing, e.g., take more factors into account.

A particularly clear example of this in machine cognitive agents is the evolution of the "black teeth" rule from the Mycin system for medical problem solving. In the original system the knowledge is encoded only as a routine shortcut (implemented just as an if-then rule). In the Neomycin system (Clancey, 1983) background knowledge about the basis for this reasoning shortcut (the if-then rule) is available as an "explanation," if requested by the human problem solver. Finally, Langlotz et al. (1986) articulate the complete cognitive situation and place the if-then rule in this context. The original Mycin rule is a reasoning shortcut, based on frequency of occurrence, through a choice under uncertainty and risk cognitive situation. In this representation both efficiency and sensitivity to exceptions can be achieved.

Human experts seem able to use shortcut rules to achieve cognitive economy and to be sensitive to situations that suggest utilizing more knowledge or more elaborate strategies (or acquiring these from other related experts). One of the challenges facing cognitive engineering, and cognitive science in general, is to account for the cognitive flexibility displayed by experts in shifting from routine shortcuts to more elaborated knowledge. Most of the psychological theories that address expert performance have emphasized how knowledge organization changes over time through processes such as "automatization" or "compilation" of knowledge as routine problems are solved more efficiently with increasing expertise (Anderson, 1983; Schneider & Fisk, 1983). There has yet to be an adequate theoretical account for the cognitive flexibility that allows experts to "switch frameworks" to deal with exceptional cases or to revise interpretation in light of new information (or the failure to revise - fixation errors).
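The Mycin-to-Neomycin evolution described above can be pictured roughly as follows (a hypothetical sketch; the rule content is invented and is not Mycin's): a shortcut rule that also carries its justification and the conditions under which the shortcut should be abandoned in favor of more elaborate reasoning.

```python
# Hypothetical sketch: a reasoning shortcut that carries its own justification
# and its exception conditions, rather than existing as a bare if-then rule.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ShortcutRule:
    condition: Callable[[dict], bool]     # the "if" part
    conclusion: str                        # the "then" part
    justification: str                     # background knowledge (Neomycin-style explanation)
    exceptions: Callable[[dict], bool]     # signs that more elaborate reasoning is needed

    def apply(self, case: dict) -> Optional[str]:
        if self.exceptions(case):
            return None                    # fall back to fuller, knowledge-intensive reasoning
        if self.condition(case):
            return self.conclusion
        return None

rule = ShortcutRule(
    condition=lambda c: c.get("finding") == "sign-X",
    conclusion="suspect condition Y",
    justification="sign-X most frequently co-occurs with Y; the shortcut trades coverage for speed",
    exceptions=lambda c: c.get("multiple_findings", False),
)

print(rule.apply({"finding": "sign-X"}))                              # suspect condition Y
print(rule.apply({"finding": "sign-X", "multiple_findings": True}))   # None -> elaborate
```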

A related issue is the question of how knowledge is activated and utilized in the actual problem solving environment (e.g., Cheng et al., 1986; Bransford, Sherwood, Vye, and Rieser, 1986). Studies of education and training often show that students successfully acquire knowledge that is potentially relevant to solving domain problems, but that they often fail to exhibit skilled performance, e.g., differences in solving mathematical exercises versus word problems (cf., Gentner & Stevens, 1983 for examples). The fact that people possess relevant knowledge does not guarantee that that knowledge will be activated and utilized when needed in the actual problem solving environment. Does the problem solver know what is the relationship between two domain entities? Does he/she know that it is relevant to the problem at hand? Does he/she know how to utilize this knowledge in problem solving? This is the issue of expression of knowledge. Education and training tends to assume that, if a person can be shown to possess a piece of knowledge in any circumstance, then this knowledge should be accessible under all conditions where it might be useful. In contrast, a variety of research has revealed dissociation effects where knowledge accessed in one context remains inert in another (Cheng et al., 1986; Bransford et al., 1986). For example, Gick and Holyoak (1980) found that, unless explicitly prompted, people will fail to apply a recently learned problem solution strategy to an isomorphic problem (cf. also, Kotovsky et al., 1985). Thus, the fact that people possess relevant knowledge does not guarantee that this knowledge will be activated when needed. The critical question is not to show that the problem solver possesses domain knowledge, but rather the more stringent criterion that situation relevant knowledge is accessible under the conditions in which the task is performed. This has been called the problem of inert knowledge - knowledge that is accessed only in a restricted set of contexts.

The general conclusion of studies on the problem of inert knowledge is that teaching the relevant domain knowledge or strategies by themselves is not sufficient to ensure that this knowledge will be accessed in new contexts. Offline training experiences need to promote an understanding of how the concepts and procedures can function as tools for solving relevant problems (Bransford et al., 1986; Brown, Bransford, Ferrara & Campione, 1983; Brown & Campione, 1986). Training has to be about more than just student knowledge acquisition; it must also enhance the expression of knowledge by conditionalizing knowledge to its use via information about "triggering conditions" and constraints (Glaser, 1984).

Similarly, online representations of the world can help or hinder problem solvers recognize what information or strategies are relevant to the problem at hand. For example, Fischhoff et al. (1978) and Kruglanski et al. (1984) found that judgemental biases (e.g., representativeness) were greatly reduced or eliminated when aspects of the situation cued the relevance of statistical information and reasoning. Thus, one dimension along which representations vary is their ability to provide prompts to the knowledge relevant in a given context.

The challenge for cognitive engineering is to study and develop ways to enhance the expression of knowledge and to avoid inert knowledge. What training content and experiences are necessary to develop conditionalized knowledge - knowledge that includes information about "triggering conditions" and constraints on its use (Anderson, 1983; Glaser, 1984; Sternberg & Caruso, 1985)? This last question is particularly important for worlds where it is difficult to anticipate all of the situations to be faced by domain actors, for example, highly variable worlds or worlds where rare events are important (high risk).

Exploration Training

Practice is essential to achieve conditionalized knowledge, but practice alone is not enough. One possibility to enhance the expression of knowledge is exploration training, the most notable example of which is the STEAMER system (Hollan, Hutchins, & Weitzman, 1984)³. Exploratory learning utilizes a manipulable representation of the subject matter (this usually means at least a partially graphic or analogical representation of the domain, interactive capabilities, and an underlying simulation of the behavior of the domain) to support student explorations of how the world in question works. One part of exploration training is the ability to experiment with possible situations and options. Another part of exploration training is the ability to visualize the abstract or what is normally uninspectable. The student is able to directly manipulate the model world (for example, systematically changing one variable while holding other variables constant and running the simulation to see the resulting changes in the behavior of the system, or exploring the relationship between two variables that interact by simultaneously varying both). The student also must be able to see the effects of manipulations on system behavior. This can be done via representations that reveal aspects of the semantics of the world (see the section on external representations and problem solving). Wiecha and Henrion (1986) contains a case where an interactive modeling capability was underutilized until a graphic representation of domain semantics was provided. Thus, exploration training relies heavily on analogical representation (Woods, 1986) and direct manipulation techniques (Hutchins, Hollan & Norman, 1985).

³ Exploration training is the offline application of conceptualization aiding; cf. the section on conceptualization aids.

Lesh (1987) developed an exploration learning environment which was combined with an instructional strategy about how to guide student explorations. He compared the performance of students taught with this exploration learning environment and students taught in a conventional instructional process for the same material. Students who used the exploration learning environment significantly outperformed the other group on a later test. Furthermore, some of the exploration learning students (but none of the other group) discovered domain principles that applied across problems which were not explicitly addressed in the material or in any single exercise or problem.
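A toy sketch of the kind of manipulable model world described above (the model and variable names are invented and are not STEAMER's): the student holds all but one variable constant, reruns the underlying simulation, and inspects how the behavior of the system changes.

```python
# Toy sketch of an explorable model world: vary one parameter, hold the rest
# constant, and observe the resulting behavior over time.
def simulate(flow_in: float, valve_opening: float, steps: int = 10) -> list[float]:
    """Return the tank level over time for the given settings (invented dynamics)."""
    level, levels = 0.0, []
    for _ in range(steps):
        level += flow_in - valve_opening * level   # inflow minus level-dependent outflow
        levels.append(round(level, 2))
    return levels

# Exploration: systematically change valve_opening while holding flow_in constant.
for valve_opening in (0.1, 0.3, 0.5):
    print(valve_opening, simulate(flow_in=1.0, valve_opening=valve_opening))
```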

The exploration approach to training raises several questions that have only begun to be investigated. What aspects of a world should be explorable or made visible to enhance skill acquisition? It is critical to understand where and why performance bottlenecks occur in order to select what aspects of the domain need to be captured in the exploratory world. What instructional strategies apply to exploratory learning contexts (e.g., should the student develop his/her own experiments; can they discover all of the relevant space of possible situations on their own)? What role does coaching play in exploratory learning environments (should coaching consist of illustrations followed by free exploration; should coaching include ordering experiments in some way; does the role of coaching vary with types of learners)? Will all learners benefit from this approach (some evidence suggests that exploratory training environments will not be effective for poor learners without coaching, e.g., Hasselbring, Goin & Bransford, 1985 and Gopher, Weil, & Siegel, 1986; in press)?

How is progress measured? If performance is poor, what remedial or corrective strategies apply (what experiences or representations of the world will remedy bugs in a student's knowledge)? There is also the problem of how to develop capabilities for skilled discriminations in a problem solving world: how to enhance differentiation of relevant from irrelevant parts of a problem situation (Gaeth & Shanteau, 1984; Littlefield & Rieser, 1985). Note that this applies to problem formulation stages in many worlds, not just perceptual worlds such as radiography (Lesgold et al., 1986).

There is the need to develop new instructional strategies that are coherent with the exploration approach. For example, what is advice during online task performance can be coaching offline. A coach can affect the experience base the student receives by choosing or introducing circumstances that force the student into difficult areas. Coaches can teach response strategies explicitly via the exploratory trainer by showing students why a situation is difficult or undesirable, showing how to avoid getting into that situation given other constraints, and showing how to navigate or escape from difficult situations. For example, Lesh (1987) initially used an exploration learning environment to step students through problem solutions. Second, students were shown cases and asked to critique performance. In the third phase the students explored situations on their own. The result is the kind of informed training suggested by Bransford et al. (1986; cf. Brown & Campione, 1986) where students are helped to understand when and why to use various strategies and where students are provided opportunities to practice strategies and monitor their effects. This approach may help students "conditionalize" their knowledge and develop knowledge organizations that increase the probability that relevant information will be accessed when needed.

There are many other important questions about the acquisition of expertise. Developmental psychologists (and more recently studies of how information processing changes with aging) emphasize that cognitive processes are not givens but change as a function of age, experience, training. Thus, simple categorizations of novice and expert may not be very useful to cognitive engineers interested in enhancing the acquisition of expertise. Generally, training focuses on identifying knowledge and processing strategies utilized by experts and then directly exposing novices to these strategies (e.g., Clancey, 1986). However, we may need to understand the evolution from lesser to greater degrees of expertise and then assist the learner through these stages or steps in the evolution of their own expertise (Lesgold, 1984; White & Frederiksen, 1986).

Human Error and Person-Machine Mismatches

Questions about human error - what it is, what factors produce it, what forms does it take, how to measure the potential for error, how to predict its form or frequency or timing, where is a task vulnerable to error - are essential topics for a cognitive engineering if it is to impact on performance. The level of effort devoted to these questions has accelerated recently (e.g., Moray & Senders, in preparation; Rasmussen, Duncan & Leplat, 1987).

There are several common themes in the recent work on human error (e.g., Rasmussen, 1986a; Rasmussen et al., 1987; Reason & Embrey, 1985). First, there is a distinction between error forms, the likelihood of different kinds of errors given that an error occurs, and error emission patterns, the statistics of how often inadequate performance will occur. Researchers have generally focused on the systematic error forms because these suggest potentially identifiable and correctable underlying factors.
This has been called an assumption of imperfect rationality or a cognitive existence theorem - there is some cognitive system which can be postulated which would rationally, i.e., within its knowledge and information processing capability, exhibit the behavior that has been observed.

A third theme derives from studies of actual incidents (e.g., Pew et al., 1981; Reason & Mycielska, 1982; Woods, 1982; Perrow, 1984; Montmollin & De Keyser, 1985; Wagenaar & Groeneweg, 1987). These studies reveal that there are a set of individually necessary but only jointly sufficient conditions for a disaster to occur; therefore, labeling any one the cause requires additional information to distinguish it from the background causal field. For example, in an actual train derailment leading to a two-train crash, one can refer to a large number of conditions but for which the accident would not have occurred - such as the information processing demands placed on the driver, the change in habits due to the end of the holiday period schedule, the speed of the train, the proximity of another track at the site of the derailment, the timing that another train was on this other track at nearly the same time, etc.

This observation has led to general agreement among the research community (but not necessarily in particular application communities) that it is more fruitful to think about person-machine system flaws than human errors (e.g., Perrow, 1983; Rasmussen, 1986). The label "human error" is often used as a residual category; it tends to imply responsibility and blame (punishment); it focuses changes on local, incident-specific responses (different people, better motivation). Error attribution is an exercise in hindsight judgement (a two-state discrete dimension), whereas prior to the outcome there are only degrees of performance (a continuous dimension) and factors that make it more or less difficult to perform well. In the train crash example, all the trains that day went too fast through the section of the track where the derailment occurred. Thus, once performance deviates from its target, it has the potential to be labelled erroneous. Whether it will lead to negative consequences depends on the presence of other necessary factors.

A more productive alternative is to focus on the factors that produce the behaviors underlying the disaster (as Perrow, 1983, puts it, what forced the erroneous behaviors). For example, an airplane crash into a mountain on a clear day was originally labeled "crew error." However, further investigation revealed that this seemingly irrational incident was actually the product of the interaction of a set of factors - the flight computer course had been changed without notifying the pilot, the new course was visually quite similar to the old, and a dry air whiteout made the mountain invisible to the pilots (Mahon, 1981). Identifying how a set of factors converge in the genesis of a disaster leads to an emphasis on demand/resource mismatches, the performance of the total person-machine system, the interaction of multiple contributors to an accident, multiple ways to improve the systems involved in the accident, and failures as potential learning experiences. From this perspective we need to understand the relationship between human processing mechanisms (e.g., von Winterfeldt & Edwards, 1986) and the demands and resources actually present in complex problem solving situations such as flightdeck operations, emergency response in nuclear power plants, or marine safety. For example, one position is that human processing consists of relatively simple but normally quite effective mechanisms. Performance failures occur when designers unintentionally create excessively harsh cognitive environments due to the demands of the world itself and due to the primitive representations available (Perrow, 1984; Reason, 1986; 1987).

One path to approach questions about human error is to examine criteria for skilled or expert performance. One often noted aspect of skill or expertise is the ability to adapt one's response in light of changing circumstances in pursuit of a goal (Brown et al., 1982; Bainbridge, 1984; Woods & Roth, 1986). Adaptability depends on abilities to predict or anticipate the behavior of the world and to be sensitive to what might happen next (e.g., the field of attention). This definition emphasizes the ability of the skilled performer to compensate for environmental variability or disturbances. Error then becomes a breakdown in one's resistance to variability or disturbances, either failures to recognize the need to adapt (behavior persists in one path in the face of changing circumstances that demanded a shift in response) or erroneous adaptation (the need for adaptation was recognized but the attempted adaptation was inadequate due to incomplete knowledge). This view emphasizes the active role of any cognitive agent or set of cognitive agents in controlling a world and the need to treat performance failures through changes in the capabilities, resources and architecture within the cognitive systems triad. This suggests that human participation often prevents the propagation of errors or usually works around error-prone points (flaws) due to the flexibility, adaptability, and "intelligence" of the human in the loop.
For example, it is ironic that decision automation is often justified on the grounds of human incompetence; however, it is the same person who must compensate for the "brittleness" typically exhibited by machine agents in the face of unexpected situations (Roth et al., 1987). Note the stark contrast between this view and another point of view that sees the human as an independent source or contributor of errors, where system failures in which human actions played a role should be treated by increasing the role of machine cognitive agents in order to eliminate or reduce the human's role.

One source of mismatches within the cognitive system triad has been purely technology-driven deployment of new automation capabilities. When this occurs, there are frequently unintended and unforeseen negative consequences. Examples of this include:

• cases of shifts from manual to supervisory control in process control where productivity actually fell from previous levels when there was a failure to support the new supervisory control demands (Hoogovens Report, 1976);

• cases of automation related disasters in aviation (e.g., Wiener, 1985);

• a shift in power plant control rooms from tile annunciator alarm systems to computer based alarm systems that eventually collapsed and forced a return to the older technology because strategies to meet the cognitive demands of fault management that were implicitly supported by the old representation were undermined in the new representation (Pope, 1978);

• shifts from paper based procedures to computerized procedures that have also collapsed due to disorientation problems, again as a result of a failure to anticipate the cognitive reverberations of technological changes (Elm & Woods, 1985).

Consider a simple example of automation. You have a portable cassette tape recorder to record letters. In the manual recorder you must switch from one side of the tape to the other when you have filled one side. Now you purchase a new, more technologically sophisticated recorder that has an automatic reverse - when you reach the end of one side, the machine automatically begins to record on the other side of the tape. You no longer have to take out the tape, reverse it, and reload it. Automation has produced a positive impact, especially if you frequently record while your hands are busy at some other activity like driving a car. However, there are other consequences or effects of the new automation that introduce possibilities for new kinds of errors/inadequate outcomes if not addressed. In our automated recorder example, consider what happens if on one day you record enough to fill the first side of a tape and part of the second. On the next day you want to add additional material to the tape; in other words, you want to resume recording where you left off the previous day. Notice what can happen. You do not remember where you left off - on side A or on side B - and there is no cue in the configuration of the device to tell you which (because the tape is always in the machine on the same side - the machine switches for you). Thus there is the possibility that you will simply initiate the record mode and begin to speak without the second command needed to start on side B. As a result, a portion of the material you had recorded the previous day will be erased and recorded over. Note that this is a new error form/negative outcome; it is impossible with the manual machine - one always begins to record where one left off previously.

The point is that new automation aids can have multiple effects within the cognitive triad and that some of these effects can be negative. In other words, there are post-conditions associated with new automation that must be met, otherwise errors/negative outcomes can occur - in this example, some way to keep track of which side of the tape is unrecorded (a display or memory aid) or for the machine to be capable of resuming to record from the place where previous recording stopped (further automation). The point is not that new technology should be avoided - because the automation does make possible significant improvements. Similarly, the point is not that new technology is always beneficial - because there are post-conditions associated with its introduction that must be satisfied in order for the potential to be achieved and for undesirable consequences to be avoided or mitigated. Furthermore, these post-conditions can strongly influence the way the technology is used (or even the type of technology used). One example of this has occurred in marine transportation (Perrow, 1984). Technology-only driven increases in the machine power devoted to navigation and collision avoidance did not significantly reduce marine accidents (Gaffney, 1982). This spurred the U.S. marine community to continue to increase levels of automation. However, the existence of automation related accidents triggered the European community to look at what we have called here the cognitive system and to use machine power to enhance total system performance (e.g., to reduce fixation errors by encouraging communication of task relevant information across cognitive agents).
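To make the recorder example concrete, the following is a minimal sketch in Python; the class, method, and parameter names are invented for illustration and are not taken from any system discussed in this chapter. The automated device no longer shows which side recording stopped on, so the user's guess decides where recording starts, and a wrong guess produces the new overwrite error; adding a side indicator - one of the post-conditions mentioned above - restores the missing cue.

    class AutoReverseRecorder:
        CAPACITY = 30  # minutes per side of the toy cassette

        def __init__(self, side_indicator=False):
            self.minutes_used = {"A": 0, "B": 0}
            self.last_side = "A"                   # where recording actually stopped
            self.side_indicator = side_indicator   # post-condition: a display/memory aid

        def resume_recording(self, minutes, side_chosen_by_user):
            # With an indicator the device shows the correct side; without it the
            # user's guess determines where recording begins.
            side = self.last_side if self.side_indicator else side_chosen_by_user
            if side == self.last_side:
                # Correct side: new dictation is appended after yesterday's material.
                self.minutes_used[side] = min(self.CAPACITY, self.minutes_used[side] + minutes)
                overwritten = 0
            else:
                # Wrong side: recording starts over material already on that side -
                # the new error form that the manual machine cannot produce.
                overwritten = min(minutes, self.minutes_used[side])
                self.minutes_used[side] = max(self.minutes_used[side], min(minutes, self.CAPACITY))
            self.last_side = side
            return overwritten

    # Yesterday: side A was filled and part of side B used; today the user guesses "A".
    recorder = AutoReverseRecorder(side_indicator=False)
    recorder.minutes_used = {"A": 30, "B": 10}
    recorder.last_side = "B"
    print(recorder.resume_recording(5, side_chosen_by_user="A"))  # 5 minutes recorded over

With side_indicator=True the same call returns 0 and the new material is appended to side B - exactly the effect of meeting the post-condition with a display or memory aid.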
There are several specific recent bodies of research on human error. One rather well developed example addresses slips in the execution of highly skilled routines, or what have often been referred to as capture or substitution errors (Norman, 1981; Norman & Shallice, 1980; Reason & Mycielska, 1982; Broadbent et al., 1982). In slips the intention to act is correct; the breakdown occurs in carrying out the intention. Note that this body of research only addresses highly practiced tasks.

Other bodies of work are focused on errors in intention formation. One of these - buggy models - attempts to account for errors by inexperienced domain problem solvers (Brown & Burton, 1978; Brown & VanLehn, 1980; Caramazza, McCloskey & Green, 1981; McCloskey, 1983). Buggy models provide a specification of the knowledge structures (e.g., incomplete or erroneous knowledge) and processes that are postulated to produce the pattern of errors and correct responses that characterize the performance of particular individuals. These buggy models are typically embodied as a computer program, and constitute a "theory" of what these individuals "know" (including misconceptions) and how they utilize the knowledge. They are a good example of the focus on limited rationality, and of the emphasis on analyzing and accounting for systematic error patterns, that characterize recent trends in error research. This contrasts with previous approaches to human performance that focused on more global measures (e.g., percent correct). By providing a micro-analysis of the misconceptions that lead to errorful performance, analyses of "buggy" knowledge open the door for much more pointed achievement testing (Gitomer & Glaser, in press) and more highly tailored instruction than was previously possible. Buggy models are often incorporated as part of intelligent tutoring systems (Sleeman & Brown, 1982; Anderson, Boyle & Reiser, 1985). The major limitation of this approach has been that it requires a large amount of very domain specific investigation to identify the range and distribution of "buggy" models that arise in the domain of interest (e.g., McCloskey, 1983). Further, the results have tended to remain highly context bound, with little transfer of insight from one domain to the next.

Brittle Problem Solvers and Unexpected Variability

One way that people behave in managing complex technologies can be termed coordination by pre-planned routines. In this mode of problem solving, knowledge is packaged in terms of what situations are to be recognized, what evidence signals that these situations have arisen, and what response should follow (Rasmussen's rule based behavior, e.g., Rasmussen, 1986a). This knowledge is delivered in the form of education about the routines, training (exercise and drill) in carrying out the recognitions and responses, and online systems to help prompt and retrieve the guidance (either paper or computer based). When human behavior is to be guided by pre-planned routines, human performance aiding is to teach the person about the knowledge encoded, especially the background for the routines (Brown, Moran & Williams, 1982), to practice the person on the routines, and to provide retrieval aids so that the person uses the correct guidance for the current situation.

However, coordination by pre-planned routines is inherently "brittle" (e.g., Brown et al., 1982; Roth et al., 1987; Herry, 1986; Fischhoff et al., 1986). Because of both pragmatic and theoretical constraints, it is difficult to build mechanisms into pre-planned routines that cope with novel situations, adapt to special conditions or contexts, or recover from plan breakdowns. When the pre-planned routines are rotely invoked and followed, performance breaks down in the face of underspecified instructions, violations of boundary conditions, or impasses (where the plan's assumptions about the world are not true), human execution errors, bugs in the plan, multiple failures, and novel situations (incidents not planned for). This is the problem of unanticipated variability.

While some types of worlds or parts of worlds are more prone to these factors than others, and the quality of the pre-planning affects the frequency with which these arise in particular worlds, research in a variety of worlds consistently has shown that these factors can never be completely eliminated by expanding and refining pre-planned routines (for both pragmatic and theoretical reasons) and that these factors are ubiquitous in actual serious accidents in risky technologies (Pew et al., 1981; Rasmussen et al., 1987; Hirschhorn, 1984; Perrow, 1984). The potential for unanticipated variability in a problem solving world is higher for worlds that are highly dynamic and highly coupled, when the degree of uncertainty is high and when there are significant consequences to potential outcomes. In worlds where all of these factors are high, such as nuclear power plant control rooms, commentators like Perrow (1984) and Reason (1987) have likened the cognitive environment to a "minefield of human error."

Usually, people think that the only alternative to coordination by pre-planned routines is creative problem solving, but practitioners are rarely selected for capabilities in this area. As Reason (1987) points out, people are rarely good or reliable enough at creative problem solving for this to be an acceptable mode of problem solving in risky worlds.
However, there is an alternative type of problem solving behavior which effective practitioners employ. In coordination by resource management, an actor on the scene does more than deploy pre-planned strategies; he or she also monitors and adapts plan execution. The world's responses are monitored to see if it is behaving as expected (e.g., are the relevant goals met? has an execution error occurred? have systems performed as demanded?). Departures from plan signal that more knowledge needs to be brought to bear to adapt or fill in pre-planned responses given the unusual situation. What happens is that there is a gradual unfolding of more and more of the background knowledge behind the pre-planned material or specialist knowledge about the unusual situation. In some cases, the relevant knowledge is potentially available if a delivery path is known, i.e., the practitioner must know enough to ask the right question and they must know where or who can provide relevant information or answers. There have been many incidents where, with hindsight, the appropriate guidance was available although not activated in the context of the actual incident (Woods, 1984a). In other cases, knowledge about how to adapt the plan must be generated. This involves conceptual knowledge about the domain (Hiebert & Lefevre, 1986), e.g., knowledge about relevant goals and means and about constraints (side effects, pre-conditions, post-conditions), and often the interaction of several sources of knowledge or points of view (e.g., Coombs & Hartley's examples of model generative reasoning).

Aiding human performance in coordination by resource management addresses recognizing departures from plan, deciding what additional knowledge is needed, and how to call on that knowledge - plan monitoring, plan adaptation, plan repair. It is interesting that Roth et al. (1987) found that the human's actual role is to amplify the machine expert's ability to cope with unanticipated variety in the world and in the problem solving process.

In virtually every significant disaster or near disaster in risky technologies, there has been some point where knowledge beyond the pre-planned routines was needed. Furthermore, this point often involves multiple people in multiple locations with only partial information about the state of the system. This means the need to think about the problem solving organization, about how knowledge is tunneled to the operational staff, about who has responsibility and authority for what decisions (see van Creveld, 1985 and Fischhoff et al., 1986 for examples of failures in problem solving organizations in military command and control).

Both kinds of coordination are necessary for effective human performance at the operational front lines: expanding and refining available guidance expands the amount of routine problem solving; supporting coordination by resource management addresses situations that have not been or cannot be completely anticipated - a kind of cognitive defense in depth. This is a critical challenge to cognitive engineering - how to avoid brittleness in the face of unanticipated variability. Design to manage trouble as well as design to prevent trouble is needed for effective problem solving systems (Ackoff, 1981; Brown & Newman, 1985). Reason (1987) points out the difficulties of this challenge, especially in risky technologies: significant events in natural systems almost always involve novel or unexpected features; how then do we prepare or support practitioners to recognize and cope with these unanticipated situations?
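A minimal sketch - with invented class and method names, since the chapter describes the behavior only in prose - of the monitoring loop that coordination by resource management implies: carry out the pre-planned routine, check the world's response against expectations, and bring additional knowledge to bear (or escalate) when a departure from plan is detected.

    def carry_out(plan, world, knowledge_sources):
        while plan.has_next_step():
            step = plan.next_step()
            step.execute(world)
            response = world.observe(step.monitored_variables)
            if step.expectation_met(response):
                continue                      # routine problem solving: the plan is working
            # Departure from plan: unfold more of the background knowledge rather
            # than persisting rotely with the pre-planned routine.
            for source in knowledge_sources:  # plan rationale, specialists, other agents
                revision = source.consult(step, response)
                if revision is not None:
                    plan = revision.apply(plan)   # adapt, repair, or replace remaining steps
                    break
            else:
                return "unanticipated situation: beyond the knowledge that can be delivered"
        return "goals met"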
1.5 Towards Effective Decision Support

Given that one can characterize agent-environment relationships in problem solving worlds, the question for cognitive engineering is how to enhance overall performance. In principle, there are degrees of freedom in adjusting any of the three elements in the cognitive system triad to produce a better demand-resource match. The problem solving world itself can be re-designed. For example, the degree of coupling in a highly coupled world can be reduced if errors that arise from this dimension (e.g., missing side effects) are a problem. Or a world with multiple modes can be re-designed (since people are prone to mode errors) so that there is only a single mode. But usually, the designer has the greatest latitude to enhance performance by varying the online or offline representations by which the agents come to know the problem solving world or by varying the presence and architecture of multiple, especially machine, cognitive agents.

How does one use machine resources to enhance overall performance? In the next sections we will examine some of the research on this question, first, from the point of view of decision aids as semi-autonomous cognitive agents - the problem of interfacing human and machine cognitive agents, or joint human-machine cognitive systems (Woods, 1986; Woods et al., in press). This includes questions about what is good advice between cooperative problem solving agents and what are effective cognitive tools. In the subsequent sections, we will examine work on another approach to improve human problem solving - representation aids. Because of the expansions in computational powers, the machine element can be thought of as a partially autonomous cognitive agent in its own right. But this is a case with which we have little experience, and it raises the problem of how to build a cognitive system that combines both human and machine cognitive systems or, in other words, joint cognitive systems (Hollnagel, Mancini & Woods, 1986; Mancini, Woods & Hollnagel, in press).

One metaphor that is often invoked to guide our thinking about how to utilize intelligent machine resources is based on an analogy to human-human cooperative problem solving. Because the machine element can be considered as a semi-autonomous cognitive system, we can begin to understand human interaction with a machine cognitive agent by transposing how two people interact during problem solving. Following this metaphor leads Muir (1987) to raise the question of the role of "trust" between man and machine in effective performance. For example, how does one come to know another's range of competence? This metaphor also suggests that machine and human cognitive agents might successfully interact by paralleling how one person advises another during problem solving.

A second metaphor that is frequently invoked is supervisory control (Sheridan & Hennessy, 1984; Rasmussen, 1986a; Sheridan, 1988). Again, the machine element is thought of as a semi-autonomous cognitive system, but in this case, it is a lower-order subordinate, albeit partially autonomous. The human supervisor generally has a wider range of responsibility, and he or she possesses ultimate responsibility and authority. Boy (1987) uses this metaphor to guide the development of assistant systems built from AI technology. In order for a supervisory control architecture between human and machine agents to function effectively there are several requirements to be met which, as Woods (1986) has pointed out, are often overlooked when tool-driven constraints dominate design. First, the supervisor must be able to re-direct the lower order machine cognitive system. Second, in order to be able to supervise another agent, there is need for a common or shared representation of the state of the world and of the state of the problem solving process. Otherwise, communication between the agents will break down (e.g., Suchman, 1987).

A third metaphor is to consider the new machine capabilities as extensions and expansions along a dimension of machine power. In this metaphor, machines are tools; people are tool builders and tool users. Technological development has moved from physical tools (tools that magnify man's capacity for physical work), to perceptual tools (extensions to man's perceptual apparatus such as medical imaging), and now, with the arrival of AI technology, cognitive tools (although this type of tool has a much longer history, e.g., aide-memoires or decision analysis, AI has certainly increased the interest and capability to provide cognitive tools).

What is Good Advice?

Human-human advisory encounters have been examined in a number of different contexts including face-to-face advice on using a local computer system (Alty & Coombs, 1980; Coombs & Alty, 1980; McKendree & Carroll, 1986) and radio talk show "call in" advice on finance management (Pollack et al., 1982). The results uniformly indicate that good advice is more than recommending a solution. Good advisory interactions involve cooperative problem solving. The advisor does not merely respond to the immediate request of the user; rather he/she aids the user in problem formulation and plan generation (especially with regard to obstacles, side effects, interactions and tradeoffs). The advisory encounter aids the user to determine the right questions to ask; how to look for or evaluate possible answers; and how to develop or debug a plan of action to achieve his goals (Jackson & Lefrere, 1984, p. 63).

For example, Alty and Coombs (1980; Coombs & Alty, 1980), who examined consulting interactions on the use of a local computer, found that in more successful advisory interactions, two partial experts (an experienced computer user with a domain task to be accomplished and a specialist in the local computer system) cooperated in the problem solving process. Control of the interaction was shared in the process of identifying the important facts and using them to better define the problem. In this process each participant stepped outside of his own domain to help develop a better understanding of the problem and, as a consequence, appropriate solution methods. In another case, Perkins & Martin (1986) found that one third of the errors made by student programmers were corrected with general hints that did not require knowledge of the solution to the problem or the specific bug.

In contrast, Alty and Coombs found that unsatisfactory human-human advisory encounters were strongly controlled by the advisor. The advisor asked the user to supply some specific information, mulled over the situation, and offered a solution with little feedback about how the problem was solved.
While a problem was usually solved, it was often some proximal form of the user's real problem (i.e., the advisor was guilty of a form of solving the wrong problem: solving a proximal case of the user's fundamental problem). The advisor provided little help in problem definition.

Analyses of human-human advisory interaction clearly reveal the elements of good advice, and point to the features required for effective joint cognitive systems. Machine decision support must be more than a solution plus justification; it must be structured around the problem solving process and involve close cooperation with the user in formulating the problem, and identifying and evaluating solution paths (Perkins & Martin, 1986). The function of an "advisor" (man or machine) is to broaden the user's horizons; to raise and help answer questions like (e.g., Woods et al., in press; Woods & Hollnagel, 1987): What would happen if? Are there side effects to this response? How do x and y interact? What are the pre-conditions (requirements) and post-conditions for a particular response (given the response, what consequences must be handled)? Is the world heading towards undesirable states? How can these states be avoided? How can one escape from an undesirable state without triggering negative outcomes?

    The common assumption is that reflection generates dialogue, when, in fact, it is dialogue that generates reflection. Very often when people engage in dialogue with one another, they are compelled to reflect, to concentrate, to consider alternatives, to listen closely, to give careful attention to definitions and meanings, to recognize previously unthought of options, ... (Lipman, Sharp & Oscanyan, 1980, p. 22)

Studies of human-human advisory interactions reveal another important characteristic of joint cognitive systems: the relationship between the kinds of skills and knowledge represented in the human and in the machine. The human user of decision tools is rarely incompetent in the problem domain. Even when the design intent is to reduce human skill and knowledge requirements, significant amounts of domain knowledge are required for minimal capabilities to understand and execute machine instructions (Roth et al., 1987). The relation of human to machine is not one of novice to expert; instead, the human and machine elements contain partial and overlapping expertise that, if integrated, can result in better joint system performance than is possible by either element alone. Today, no one expert in any field can keep up with the amount or rate of change of information. The result is the generalist-specialist problem. To cope with this complexity an individual brings a single point of view on the actual domain or bounds the domain. One major source of unanticipated variability arises when the different points of view must be combined or when the bounded domain confronts the actual context. Thus, naturally occurring problems more and more require the integration of different specialities, each of which contributes a unique point of view (e.g., Hawkins, 1983; Coombs & Alty, 1984; Coombs, 1986; Gadd & Pople, 1987). One kind of expertise is the ability to evaluate and integrate specialist knowledge in some real problem context. As a result, the designer of a joint cognitive system must address the question of what is, or what should be, the relationship between machine expertise and human expertise in a given domain: is the person a generalist who manages and integrates input from various machine implementations of specialist knowledge? is the human one specialist interacting with the knowledge of other specialists to deal with a problem at the junction of two fields? Effective interaction between these kinds of partial, overlapping experts (which we suspect will be a prevalent type of joint cognitive system) requires that knowledge from different viewpoints is integrated during the decision process, including problem formulation and plan monitoring/adaptation. For example, Pople (1985) models an IDS system on the interactions between multiple medical specialists that occur, often informally, when one of them encounters a difficult problem.

How can the IDS designer package and deliver "effective advice" to domain practitioners? Too often, and in contrast to the above results, designers of consultant systems implicitly assume that "advice" is synonymous with outputting an answer, that is, solving the problem for the practitioner. For example, Zisner & Henneman (1988) and Resnick et al. (1987) contain examples of laboratory based machine advisors that simply recommended a solution to the human problem solver and were ignored.

There are several challenges in formulating and delivering advice. What is the appropriate level of context-sensitivity and specificity to aim for in generating advice? Should the system attempt to generate advice under all conditions or only for those situations where the appropriateness of the advice can be assured? Should the system generate highly specific micro-responses or should it generate more global information, leaving the operator degrees of freedom in deciding how to instantiate the advice? While good micro-advice may be the ideal, there are a host of associated problems in achieving this.
Accurate judgment of the most appropriate specific response is often very difficult because the "best" response depends on extensive context-specific information and because of the danger of providing inappropriate advice. Micro-advice is antithetical to the formulation of flexible, adaptive responses to unanticipated variability that characterizes skilled performance, because it provides no context or rationale for the advice provided (what process state is the advice a response to? what is the expected outcome of this response?). This applies both with respect to inhibiting the ability of the practitioner to formulate an adaptive response to an unanticipated situation at a given point in time and with respect to the ability of the practitioner to develop adaptive skill over time. Micro-advice is also deficient when the practitioner's role demands parallel processing of multiple tasks. In order to resolve goal competition that can arise in this kind of situation, good performance requires formulating global control strategies or plans of action and changing level of abstraction during plan adaptation and repair (Rasmussen, 1986a; Alterman, 1986). The global strategic response to whole situations is missing from micro-advice.

In contrast, global advice is likely to be more robust, in the sense that the advice will be correct, but it risks the danger of being vague or stating the obvious. The problem is how to define a level of advice that is specific enough to be useful without being over-specific and risking the danger of brittleness - incorrect advice due to a failure to consider all the relevant contextual variables.

In dynamic worlds one can take advantage of the fact that as the situation worsens the degree of potential ambiguity about what are appropriate responses decreases. Thus, the machine advisor can be quiescent when the system is relatively stable or normal and gradually output stronger and more specific response advice as the situation worsens. This approach to interfacing advisor and domain actor balances several constraints on what is good advice: deliver advice in those situations where one can generate clearly appropriate information, but interject advice only when it is needed (which implies an understanding of the state and likely future course of the problem solving process and which tends to be an inverse function of how easy it is to generate relevant advice), and preserve the operator's initiative and flexibility in responding to situations beyond the capacity of the advisor (cf., Roth et al., in press).

Another challenge in characterizing an advisory system is defining the scope of the advice - what is the breadth of problems for which advice can be provided. There are strong pressures to bound the problem narrowly to ensure completeness of coverage (to get a machine advisor to generate any kind of output). However, in trying to keep the scope bounded there is a danger of taking an overly narrow perspective (e.g., focusing on single fault problems in troubleshooting domains or on a very narrow subspecialty in medical diagnosis) and producing a support system that is not useful to practitioners. Narrowly conceived advisors typically handle too little of the range of challenging cases actually encountered, or the advice offered is unrealistic, a nuisance or erroneous.4 The bounding problem is particularly challenging in the case of highly coupled worlds.

4 A common pitfall of intelligent systems is that the range of problems it can solve is nearly isomorphic to the range of problems the target user is able to solve for him or herself.

Advice should be presented in the context of the assessment of current (and future) state of the world to create a shared frame of reference for the person and machine (Woods and Roth, 1988). By having access to the same information as the machine expert, the practitioner is in a better position to evaluate the soundness of the state assessment that served as input to the machine expert's decisions. This, combined with explicit statement of intended objectives of recommended responses, allows the practitioner to follow the line of reasoning and evaluate the quality of the advice. This common frame of reference concept for explaining the behavior of intelligent machines is designed to avoid the ubiquitous problems associated with opaque advice (Roth et al., 1987; Suchman, 1987).
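A minimal sketch of the delivery strategy described above - quiescent while the monitored process is near normal, escalating to more specific advice as the situation worsens, and always packaged with the advisor's own state assessment and the intended objective of the response so that person and machine share a frame of reference. The function, field names, and severity thresholds are invented for illustration.

    def advise(assessed_state):
        # assessed_state: a dict with 'severity' in [0, 1], a short 'summary' of the
        # situation, and, when available, a 'recommended_action' and 'intended_objective'.
        severity = assessed_state["severity"]
        if severity < 0.3:
            return None                               # quiescent: no advice while stable
        advice = {"state_assessment": assessed_state["summary"]}  # shared frame of reference
        if severity < 0.7 or "recommended_action" not in assessed_state:
            advice["level"] = "global"
            advice["message"] = "Conditions are degrading; review which goals are threatened."
        else:
            advice["level"] = "specific"
            advice["message"] = assessed_state["recommended_action"]
            advice["intended_objective"] = assessed_state.get("intended_objective", "not stated")
        return advice

    print(advise({"severity": 0.2, "summary": "all goals satisfied"}))   # -> None
    print(advise({"severity": 0.9,
                  "summary": "coolant flow falling",
                  "recommended_action": "start the standby pump",
                  "intended_objective": "restore flow before the temperature limit is reached"}))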
Cognitive Tools

What are cognitive tools and how should they be deployed? One approach often adopted implicitly or explicitly is to design IDS systems as prostheses - a replacement or remedy for a deficiency. In one sense all tool use implies a human deficiency with respect to some goal - if we had hard hands, we would not need hammers. Most current expert systems are cognitive tools in this sense. In the cognitive tool as prosthesis paradigm, the primary design focus is to apply computational technology to develop a stand-alone machine expert that offers some form of problem solution. The "technical" performance of this system is judged in terms of whether the solutions offered are "usually" correct (e.g., Yu et al., 1979). This paradigm emphasizes tool building over tool use. Questions about how to interface human to machine are secondary to the primary design task of building a machine that usually produces correct decisions. A typical encounter with an "intelligent" computer consultant developed in this fashion consists of the following scenario: the user initiates a session; the machine controls data gathering; the machine offers a solution; the user may ask for "explanation" if some capability exists; the user accepts (acts on) or overrides the machine's solution (Woods, 1986).

What is the joint cognitive system architecture implicit in this paradigm? The primary focus is to apply computational technology to develop the machine expert. In practice, putting the machine expert to work requires communication with the environment - data must be gathered and decisions implemented. Rather than automate these activities, they are typically left for the human (Figure 2). Thus, questions about support for the human portion of the ensemble become not so much how to interface the machine to the human, but rather, how to use the human as an interface between the machine and its environment. As a result, locus of control resides with the machine portion of the ensemble, and human-machine interface design focuses on features to aid the person's role as data gatherer (e.g., Mulsant et al., 1983) and on features to help the user accept the machine's solution.

[Figure 2: The architecture of the joint human-machine cognitive system implicit in the prosthesis approach to cognitive tools - the machine expert is in control, while the human serves as data gatherer and solution filter between the machine and its environment. (From Woods, 1986.)]

The passive role assigned to the human in the machine-as-prosthesis paradigm echoes the description of unsatisfactory advisory encounters found in empirical studies of human to human interactions described earlier. The results of that empirical literature foreshadow the critical flaws in the joint cognitive system implicit in the prosthesis approach to decision aiding (Woods, 1986; Roth et al., 1987), i.e., breakdown in the face of unanticipated variability.

An alternative approach to IDS is to conceive of decision aids as instruments, that is, a means for effecting something in the hands of a competent practitioner (cf., von Winterfeldt & Edwards, 1986). People are a generalized species - we work fairly well in a wide range of environments (almost all man-made today). It is the ability to develop, adapt and utilize tools that supports specialization in any particular environment - if we had hard hands, we might make good carpenters, but we would be poor pianists. From this vantage point, our tool building/using skills are resource utilization in the pursuit of a goal. For example, when I am working to repair my car and I see that I need to manipulate something around a corner where my hand cannot reach, I then look for an instrument that affords manipulation around an angle - perhaps a flexible manipulator is nearby or perhaps I can bend an extra pair of needle nose pliers. The point is that tool use involves an active role for the tool wielder in some instrumental context, whereas the cognitive tool as prosthesis view leaves the tool user with a passive role. Anyone who has worked with practitioners in some field has seen the tools that the practitioners fashion to help them carry out their jobs, even tools to help them work around the hindrances often created by the "tools" that they are given or to make these "aids" actually useful.

Fundamentally, the difference between the cognitive tools as prosthesis approach and the cognitive tools as instrument approach to joint cognitive system design is a difference in the answer to the question "what is a consultant." One definition of a consultant is someone (or something) called in to solve a problem for another, on the assumption that the problem was beyond the skill of the original agent. Given this definition, the important issues for building decision aids are to build better automated problem solvers and to get people to call on these automated problem solvers (the acceptance problem). The instrumental perspective, on the other hand, defines a consultant as a reference or source of information for the problem solver.
The problem solver is in charge; the consultant functions more as a staff member. As a result, this view of joint cognitive systems stresses the need to use computational technology to aid the user in the process of solving his problem. The human's role is primarily to achieve total system performance objectives as a manager of knowledge resources that can vary in kind and amount of "intelligence" or power.

In order to put more knowledge in the hands of the human problem solver, first, raw domain data can be massaged into forms that more directly answer the questions the human problem solver must face as problems unfold. Machine resources can compute such information as: when is a domain goal violated, when is one alternative preferred over others, are there alternative response strategies, what data are potentially relevant, what are the post-conditions that follow from a strategy, what are the preconditions that must be met before a strategy can be implemented (e.g., Woods & Hollnagel, 1987). This means that knowledge about what the computations carried out by the machine signify about the domain (i.e., the domain's semantic structure) must be used explicitly or implicitly to structure machine power for human utilization. Second, the human problem solver must have capabilities to utilize and direct these computational resources. To build human-machine cognitive systems of this type, there is a need for techniques and concepts to determine what knowledge resources are needed by a domain problem solver and how to integrate and communicate these results as appropriate during the problem solving encounter.

The instrumental view of tools emphasizes the role of adaptability in intelligent behavior. Research on human performance reveals that skill is the capability to adapt behavior to changing circumstances in pursuit of a goal (e.g., Woods, 1984a), and some philosophers argue that rationality is intellectual adaptability (Toulmin, 1972). This means that the power or intelligence of an IDS system is related to its ability to increase the human problem solver's adaptability to the kinds of problems that could arise in the pursuit of domain goals. While a large amount of research on intelligent interfaces focuses on building interfaces that adapt to different users independent of a problem world, for example, adapting to different user styles or to different stages of learning, the above suggests that research should focus more on increasing the range of problems solvable by the human-machine ensemble.

Psychologists are fond of discovering biases in human decision making. One judgemental bias is the overconfidence bias, where people at all levels of expertise overestimate how much they know (e.g., Wagenaar, 1986). However, we sometimes forget that these biases can apply to the designers of machines as well as to the users of machines. This means that the designer of an IDS system is likely to overestimate his/her ability to capture all relevant aspects of the actual problem solving situation in the behavior of the machine expert. This has often happened with other forms of automation and support systems - the designer of the system fails to appreciate all of the variability and complexities of the operational setting, so that the system fails to support or even hinders the human problem solver in achieving his/her goals. For example, operators quickly learn the signatures of automatic controllers - when they are so unreliable that the task must be performed manually, when they are highly sensitive to disruptions and must be very closely monitored, when they are adequate to the task and require little supervision (e.g., Roth and Woods, 1988). The result, in accordance with Ashby's law of requisite variety, is that as automation increases, the human's role is increasingly to amplify the machine's ability to cope with the unanticipated variety in the world or in the problem solving process. (Note the irony that increased automation is often justified on grounds of human incompetence, yet in practice it is often that same person who must now help the machine to cope with disturbances beyond its design range; cf., e.g., NUREG-1154 for one real world example from nuclear power plant emergencies; the Hoogovens Report for a case in the automatization of a rolling mill; Hirschhorn, 1984 for examples from manufacturing technologies; Noble, 1984 for examples from numerical control; and Perrow, 1984.)

A study reported in Roth, Bennett & Woods (1987) makes salient the need for adaptive problem solving and the brittleness of response inherent in the prosthesis approach to decision aiding. The study examined actual technicians attempting to troubleshoot real devices with the aid of an expert system built within the decision-aid-as-prosthesis paradigm. The researchers were able to observe human-machine interaction in the face of unanticipated variability because novel situations arose (problems outside the machine's competence), adaptation to special conditions was required, instructions were underspecified, and recovery from human or machine errors was demanded.

The expert system was designed under the assumption that the user would be assigned a passive role of basically following instruction.
In fact, technicians varied in the degree to which they were willing to fit that mold, with some asserting a more active role in the problem solving process. The opportunity to observe both passive and active human problem solvers interact with a machine expert in the face of unanticipated variability revealed how the prosthesis approach to decision aiding leads to an impoverished joint cognitive system, and its consequences for joint system performance. Contrary to the assumptions underlying the prosthesis approach to decision aiding, troubleshooters actively and substantially contributed to the diagnostic process. The more the human functioned as a passive data gatherer for the machine, the more joint system performance was degraded. Those who passively followed the directives of the machine expert dwelled on unproductive paths and reached dead-ends more often than participants who took a more active role. Furthermore, because comprehension of instructions depends on active participation (Bransford & Johnson, 1983; Britton & Black, 1985; Smith & Goodman, 1984), the passive troubleshooters made more errors that contributed to unproductive paths.

In contrast, active human participation in the problem solving process led to more successful and rapid solutions. The active technicians attempted to cope with unexpected conditions, to monitor machine behavior, to recognize unproductive directions, and to redirect the machine to more productive paths (within the limited means available) based on activities and judgements formed outside of the machine's direction. However, the prosthesis design of the machine expert not only failed to support an active human role, it actually retarded technicians from taking or carrying out an active role. The machine expert provided few cues as to its intentions in pursuing a line of diagnosis and few possibilities for redirecting its resources. This inhibited the human problem solver in his role as a manager of a semi-autonomous resource - to monitor the performance of the subordinate agent in order to detect when it is off track and to redirect it (again, the same result has occurred with the introduction of other forms of automation when the human's supervisory role has been ignored, e.g., the Hoogovens Report, 1976).

When the technician reached a point where the line of questioning seemed to be focused in an unproductive direction or the system generated a hypothesis that was suspect, the technician had to decide whether the machine expert was still on track or not. The technician did not have access to the machine's perceptions of current world state nor to its troubleshooting strategy. Consequently, he had no way of knowing whether the system was systematically working through hypotheses and would eventually reach the correct one, whether the machine expert misperceived the state of the device, possibly because of an input error on the technician's part, or whether the system was beyond its boundary of competence and responding erratically. Failing to find an input error, the only basis for evaluating whether the system was on track was to judge the plausibility of the line of reasoning based on his own perceptions of device state and his assessment of plausible lines for diagnosis and tenable hypotheses under those conditions. Thus, he was forced to work through the diagnosis process independently and in parallel, rather than build on top of the information processing work carried out by the machine.

These results illustrate many of the issues that arise when machine power is deployed in actual work contexts. The capability to build more powerful machines does not in itself guarantee effective performance. The conditions under which the machine will be exercised and the human's role in problem solving affect the quality of performance. This means that factors related to tool usage can and should affect the very nature of the tools to be used.

The breakdowns observed in human interaction with an "intelligent" machine do not reflect problems unique to dealing with "intelligent" systems. They are ubiquitous to systems in a variety of media and technologies built according to the machine-as-prosthesis philosophy (e.g., Smith & Goodman, 1984; Hoogovens Report, 1976; Suchman, 1987). One of the main sources of problems is the reliance on pre-planned solution paths, which inevitably fail in the face of unanticipated variability (e.g., Fischhoff et al., 1986; Brown et al., 1982).

Domain actors do more than deploy pre-planned strategies; they also monitor the response for violations of expectation (e.g., are the relevant goals met?). Departures from plan (from expectation) signal that more knowledge needs to be brought to bear on the problem. Computational power can be used to gather and deliver the resources needed to recognize cases beyond the boundaries of pre-planned material, and to unfold or deliver additional knowledge to modify plans in light of changing circumstances.

There are several simple steps that can be taken to convert the power of the machine expert into a more instrumental form. One is to make the machine's knowledge about the state of the device, viable hypotheses, and diagnostic directions available to the human (Roth, Butterworth, & Loftus, 1985; Fitter & Sime, 1980). This means that these aspects of machine knowledge must be represented explicitly (e.g., the shift from Mycin to Neomycin, cf., Clancey, 1983; cf. also, Gruber & Cohen, 1987).
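A minimal sketch of what such an explicit representation might expose to the practitioner; the data structure and field names are invented for illustration and are not drawn from the systems cited above. The idea is simply that the machine's current state assessment, the hypotheses it still considers viable, and the line of diagnosis it intends to pursue are made inspectable, so the human can judge whether the machine is on track and where to redirect it.

    from dataclasses import dataclass

    @dataclass
    class DiagnosticStatus:
        state_assessment: dict        # the machine's view of the device state
        viable_hypotheses: list       # (hypothesis, weight) pairs still under consideration
        current_direction: str        # the line of diagnosis being pursued
        next_check: str               # what the machine intends to examine next

        def report(self):
            lines = ["Device state: " + ", ".join(f"{k}={v}" for k, v in self.state_assessment.items())]
            lines += ["Viable hypotheses:"]
            lines += [f"  {h} (weight {w:.2f})" for h, w in self.viable_hypotheses]
            lines += ["Pursuing: " + self.current_direction, "Next check: " + self.next_check]
            return "\n".join(lines)

    status = DiagnosticStatus(
        state_assessment={"supply_voltage": "nominal", "output_signal": "absent"},
        viable_hypotheses=[("open fuse F2", 0.55), ("faulty relay K1", 0.30)],
        current_direction="trace the output stage",
        next_check="continuity across F2")
    print(status.report())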
Another step is to provide more capabilities for human control of the machine's reasoning. This includes mechanisms for the human problem solver to add to or change the information or knowledge that the machine is using about the state of the device and regions where the fault may lie (Roth, Elias, Mauldin & Ramage, 1985). More ambitious use of machine power can provide the human with facilities to explicitly manipulate the attention of the machine expert. For example, Pople (1985) has developed mechanisms to allow users to direct the diagnostic path pursued by the Caduceus expert system for internal medicine.

Conceptualization Aids

How can we avoid the deficiencies of the prosthesis approach? Consider what kinds of power cognitive tools could provide. Physical tools magnify the user's capacity for physical work; perceptual tools extend the user's perceptual range. For knowledge environments, tools can provide calculation power (the original use of computer technology), search and deductive inference power5 (support for more complete utilization of available evidence), economy of representation through frame-based representation technology (e.g., Walker & Stevens, 1986), and an extended ability to conceive of both the problem to be solved and possible paths to solutions, i.e., conceptualization power.

Both the designer of IDS systems (really any machine) and the target users of IDS systems face a complex, ill defined problem solving task: how to cope with unanticipated variability and avoid brittleness? The best help we can provide to people engaged in these tasks may be conceptualization power through

• enhancing their ability to experiment with possible worlds or possible strategies (e.g., Pew et al., 1986; Woods et al., 1987; Wiecha and Henrion, 1987),

• enhancing their ability to visualize, or make concrete, the abstract or uninspectable (analogous to perceptual tools) in order to:

  - better see the implications of a concept or change in the world (Hutchins, Hollan & Norman, 1985; Mitchell & Saisi, 1987),

  - help one restructure his view of the problem (e.g., Pople, 1985; Coombs & Hartley, 1987),

• enhancing error tolerance by providing better feedback about the effects/results of actions (not that errors are not made, but that error detection and correction are enhanced so that errors do not propagate; e.g., Rasmussen, 1986a; Perkins & Martin, 1986; Rizzo et al., in press).

5 But there are other forms of reasoning that people may and should be using, e.g., Cheng et al., 1986; as Mill commented, "Logic neither observes, nor invents, nor discovers."

The most important contribution of AI technology to decision support in the long run may well be in cognitive tools that amplify human powers of conceptualization.

The critical importance of conceptualization power (which could be amplified by appropriate tools) in effective problem solving performance is often overlooked because the part of the problem solving process that it most crucially affects, problem formulation, is often left out of studies of problem solving. We study diagnosis independent of evidence gathering, or plan generation independent of plan monitoring and adaptation, or diagnosis only when possible categories have been completely enumerated (converting the problem to one of classification; e.g., Clancey, 1985; Nickles, 1980). The problem formulation aspect can be neatly finessed in the design world by establishing bounding constraints on the problem. For example, the designer of a machine problem solver may assume that only one failure is possible in order to completely enumerate possible solutions and, therefore, use classification problem solving techniques. However, the actual problem solver must cope with the possibility of multiple failures, misleading signals, interacting disturbances (e.g., Pople, 1985; Woods & Roth, 1986). In this example note that the difficulty arises, in part, because the designer defines the design problem to be solved in terms of the tools available to him or her. The cost of underemphasizing the role of problem formulation and reformulation is particularly high if more adaptive behavior is required to improve performance, for example how to adapt a plan given non-success on an element of the original plan (Alterman, 1986) or how to revise one's situation assessment in a dynamic, event-driven world (Woods & Roth, 1986).

Conceptualization power as an aid to problem formulation and as a means to cope with unanticipated variability suggests several kinds of IDS systems. One approach is to deploy machine power in the form of an agent who comments on or critiques human problem solving (e.g., Coombs & Alty, 1984; Langlotz & Shortliffe, 1983; Miller, 1983). Systems built in this mold consist of a core automated problem solver plus other modules that use knowledge about the automated problem solver's solution and solution path, knowledge of the state of the user-computer interaction, and
26 CHAPTER I. COGNITIVE SYSTEMS ENGINEERING

knowledge of the user's plans/goals in order to warn the user, to remind him of potentially relevant data, and to suggest alternatives.
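As a deliberately simplified illustration of this architecture (a sketch only, not a description of any of the systems cited above), the following code wires the output of a hypothetical core problem solver to a critiquing module; all class names, rules, and data fields are invented for the example.

    # Illustrative sketch of a critiquing module layered on a core problem solver.
    # All names and rules are hypothetical; fielded critiquing systems encode far
    # richer domain and dialogue knowledge than this toy example.

    from dataclasses import dataclass, field

    @dataclass
    class MachineSolution:
        hypothesis: str                       # the core solver's current best hypothesis
        solution_path: list                   # ordered record of how it got there
        relevant_data: set = field(default_factory=set)

    @dataclass
    class UserState:
        stated_goal: str
        data_examined: set = field(default_factory=set)
        planned_action: str = ""

    def critique(machine: MachineSolution, user: UserState) -> list:
        """Compare the user's activity with the machine's solution and return
        warnings, reminders, and alternative suggestions."""
        comments = []
        # Remind the user of potentially relevant data not yet examined.
        overlooked = machine.relevant_data - user.data_examined
        if overlooked:
            comments.append(f"Reminder: data not yet examined: {sorted(overlooked)}")
        # Warn when the planned action conflicts with the machine's hypothesis.
        if user.planned_action.startswith("isolate") and "multiple-failure" in machine.hypothesis:
            comments.append("Warning: planned isolation assumes a single failure, "
                            "but the current hypothesis involves multiple failures.")
        # Offer the machine's own line of reasoning as an alternative view.
        comments.append(f"Alternative: machine hypothesis '{machine.hypothesis}' "
                        f"reached via {', '.join(machine.solution_path)}")
        return comments

    if __name__ == "__main__":
        m = MachineSolution("multiple-failure: pump A + sensor drift",
                            ["rule-17", "rule-42"], {"pump A flow", "sensor S3 trend"})
        u = UserState("restore flow", {"pump A flow"}, "isolate pump A")
        for line in critique(m, u):
            print(line)

The point of the sketch is only that the critic needs explicit representations of the machine's solution path, the interaction state, and the user's goals in order to generate its comments.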
A similar approach is to help the human problem solver construct and evaluate alternative world views that could explain the available data (Pople, 1985). The problem becomes better understood and a solution emerges as the user directs the machine's attention to different subsets of the data, different domain issues, and different hypotheses much as one person with a difficult problem will bounce data, interpretations, and issues off of colleagues to help refine his understanding of the problem. Coombs (1986) and Coombs & Hartley (1987) are working to develop model generative reasoning techniques for machine problem solving where the machine itself builds alternative models based on available evidence and then manipulates and evaluates the models to generate possible solutions.

Another path is representation aids that use direct manipulation and graphic techniques to help the human problem solver find the relevant data in a dynamic environment, visualize the semantics of the domain (that is, make concrete the abstract), and restructure his view of the problem. In this approach, machine power is used to create and manipulate representations of the target world, rather than to create autonomous machine problem solvers.

1.6 External Representations and Human Problem Solving

A basic fact in cognitive science is that the representation of the world provided to a problem solver can affect his/her/its problem solving performance (see Mayer, 1983, Fischhoff, 1978, or Lenat & Brown, 1984, which is a particularly poignant example of this in machine problem solving).

A compelling example of how different representations of a problem can affect user performance is provided by Cole (1986). The example is a classic problem in diagnostic inference (Casscells, Schoenberger, and Grayboys, 1978):

    As part of a general physical examination, an asymptomatic 35 year old woman is given a test for breast cancer and the test comes back positive. This test has certain known characteristics, namely that people with the disease have a 95% probability of a positive test result and people without the disease have a 90% probability of a negative result. The prevalence of breast cancer in 35 year old women is one in 1000. Based solely on this information, how likely is it that the woman does in fact have breast cancer?

                     SYMPTOM
    DISEASE           S         no S
      D              .95         .05          1
      no D          99.90      899.10        999
                   100.85      899.15       1000

Figure 3: Data for a case in cancer risk assessment in contingency table form. From Cole (1986) and reproduced by permission.

The most common response by both laymen and physicians is 95%, yet the normative answer is less than 1%. The rationale for such a low probability becomes apparent if elements of the problem are presented in the form of a contingency table enumerating how 1000 people would fall into the four possible logical categories (Figure 3). In Figure 3, the test's sensitivity is .95 (95 of 100 people with the disease will test positive); its specificity is .90 (10 of 100 people without the disease will test positive); the cancer probability for a patient of this age is .001 (the base rate or prevalence is 1 in 1000 people).
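For readers who want the normative result spelled out, the short calculation below (a sketch in present-day notation, not part of Cole's 1986 study) derives it from the three quantities just given and reproduces the cell counts of the contingency table.

    # Worked base-rate calculation for the diagnostic inference problem above:
    # P(disease | positive test) from sensitivity, specificity, and prevalence.

    sensitivity = 0.95    # P(positive | disease)
    specificity = 0.90    # P(negative | no disease)
    prevalence  = 0.001   # P(disease), i.e., 1 in 1000

    p_pos_given_disease    = sensitivity
    p_pos_given_no_disease = 1.0 - specificity          # false-alarm rate = 0.10

    # Expected counts per 1000 people (the cells of the contingency table).
    n = 1000
    hits         = prevalence * p_pos_given_disease * n           # 0.95
    false_alarms = (1 - prevalence) * p_pos_given_no_disease * n  # 99.90

    posterior = hits / (hits + false_alarms)   # Bayes' rule applied to the counts
    print(f"hits per 1000: {hits:.2f}, false alarms per 1000: {false_alarms:.2f}")
    print(f"P(disease | positive) = {posterior:.3f}")   # about 0.009, i.e., under 1%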
This representation of the problem makes more salient the importance of the base rate information; specifically, it highlights that of those individuals showing the symptom, there will be 100 (99.9) women without the disease (false alarms) for every one (.95) woman with the disease. As Cole points out, the contingency table representation is superior to the linguistic description as an aid to problem formulation in several respects. Most importantly, it provides a model for structuring the problem. The four slots in the contingency table provide a template for the sorts of calculations needed to think the problem through. In addition, by filling in the slots in the table, it unburdens the user from having to make those calculations, and provides an external memory aid for the results of the partial calculations. Performance can be improved even more dramatically by the use of an analogical representation that makes visually apparent the relationship between false alarms (test results positive without the disease - 100 in this case, coded by cells marked with an S) and hits (test results positive with the disease - 1 in this case, coded by darkened cells) (Figure 4).

The representation makes visually apparent the relationship between false alarms (test results positive without the disease, coded by cells marked with an S) and hits (test results positive with the disease, coded by darkened cells). This representation of the problem visually highlights the fact that of those individuals showing the symptom, there will be 100 (99.90) without the disease (false alarms) for every one (.95) with the disease.

Figure 4: Data for the case shown in Figure 3, represented as a probability map (sensitivity = .95, specificity = .90, prevalence = .001). From Cole (1986) and reproduced by permission.

As this example illustrates, one can re-interpret questions about the display of data through a medium as questions about how types of representations vary in their effect on the problem solver's information processing activities and problem solving performance. Shifting questions about display/interface design to questions about representation design allows one to have independent but inter-related descriptions of the displays/interface in terms of implementation technology - hardwired control boards, semi-graphic display technology, full graphic technology, or bit-mapped graphics; full screen displays (no windows), tiled windows, or overlapping windows - and in terms of dimensions of problem representations.

The perspective of representation design is important because there are many cases in the history of person-machine systems where technology choices contained serendipitous relationships between the form and content of a representation (serendipitous in the sense that the relationships were not intentional design choices) that were noticed and utilized by the human problem solver. For example, in a process control application, the position of a device usually controlled by an automatic system was indicated via digital hardwired counters. These counters happened to make clearly audible clicks when device position changed. Operators were able to make use of the clicking sounds to monitor this system because the clicks and click rate contained information about the behavior of the automatic controller. Similarly, there are many cases where changes in technology hindered user performance because these serendipitous relationships disappeared and no functional alternative was provided. One example is tile annunciator based alarm systems in process control plants and initial attempts to shift to computer based chronological lists. Recent studies have shown that the spatially distributed characteristic of the tile system supports operator extraction of many forms of information about normal and abnormal plant conditions (Lees, 1983; Kragt & Bonten, 1983). When the tile technology was replaced with chronological listing of the same data on abnormal conditions, the shift from a spatial to a temporal organization that occurred removed the serendipitous benefits provided by the spatial organization inherent in the tile medium. Because of this, the change in media collapsed, forcing a return to the tile system (Pope, 1978).

The question for cognitive engineering is what are the important dimensions of problem representations. We will examine several issues in this area: fixed and adaptive collection of basic data elements, analogical representations, object displays, and multiple representations.

Fixed and Adaptive Collections

In complex worlds, a large amount of data is made available through some medium (e.g., hardwired instruments, sets of VDU displays) to people who are responsible to control, troubleshoot and supervise these worlds. The base from which representations of a world can be constructed ultimately consists of directly measured quantities. Subsets of these elemental data must be gathered and processed to answer questions during the problem solving process such as situation assessment, disturbance/fault identification, goal setting, response planning (cf., Woods & Roth, 1986).

The available data can be thought of as the evidence the problem solver uses to answer questions, and usually there is a many to many mapping between the available evidence and the questions to be answered.

Traditionally, human-computer interaction has focused on how to design for data availability - is the available data legible and accessible. It is the domain actor's task to use the available data to answer questions and detect and solve problems. Take the case of a datum about a thermodynamic system which indicates that valve x is closed. Most simply the message signals a component status. If the domain actor knows that valve x should be open in the current context, then the datum signifies a misaligned component. Or the datum could signify that, with valve x closed, the capability to supply material to reservoir H via path A is compromised. Or given additional knowledge, it could signify that the process to supply material to reservoir H is disturbed (low flow or no flow). Furthermore, the significance of any of the above interpretations depends upon the state of other parts of the world (is an alternative available? is reservoir H inventory important in the current context?) and the person's expectations about the world. To determine the significance of the incoming datum, the problem solver must collect and integrate other data. The perspective of representation design shifts emphasis away from the display of available data as signals to be interpreted towards a focus on communicating what is signified by the data. There are no a priori neutral representations in this process; how data are collected and organized can have significant positive and negative performance consequences.
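A minimal sketch can make this context dependence concrete; the context flags and rules below are hypothetical stand-ins for the kind of domain knowledge at issue, not an actual plant model.

    # Sketch: what a single datum ("valve x is closed") signifies depends on context.
    # The context fields and rules are illustrative only.

    def interpret_valve_closed(context: dict) -> list:
        """Return the interpretations supported by the current context."""
        findings = ["component status: valve x closed"]
        if context.get("valve_x_should_be_open"):
            findings.append("misaligned component: valve x should be open now")
        if context.get("path_A_is_only_supply_route"):
            findings.append("capability lost: cannot supply reservoir H via path A")
        if context.get("reservoir_H_flow_demanded"):
            findings.append("process disturbance: supply to reservoir H is low/no flow")
        if not context.get("alternative_supply_available", True):
            findings.append("high significance: no alternative supply path exists")
        return findings

    if __name__ == "__main__":
        ctx = {"valve_x_should_be_open": True,
               "path_A_is_only_supply_route": True,
               "reservoir_H_flow_demanded": True,
               "alternative_supply_available": False}
        for finding in interpret_valve_closed(ctx):
            print(finding)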
Data will be collected into groups given the characteristics of the implementation technology (e.g., screens or windows within screens) and some type of criterion such as device geography, physical system boundaries, time (e.g., chronological lists of messages) and occasionally domain goals to be achieved or the function performed by parts of the device. The collection of data into groups affects user information processing depending on whether the collection boundaries are fixed or adaptive, on the criterion used to set the boundaries, and on the characteristics of the problems the world poses. [Note that there also can be collections of collections. As a result, users often have an information processing task at the level of collecting collections of data as when the user has available two or more independent VDUs or can assign data to multiple independent windows. Moray (1986) contains an example where a significant error occurred in domain task performance due to the user's decision about how to collect data collections.]

When the representation designer pre-selects fixed collections of data, the effect of a particular definition of collection boundaries on user performance depends on the amount of across-predefined boundary search that results. This quantity is a function of three factors: (1) the degree to which the designer's choice of criterion for collecting data anticipates the questions that must be answered to perform domain tasks (a cognitive demand-representation interaction factor), (2) the characteristics of the representation in which the search occurs that affect search performance across the predefined boundaries, such as the issues associated with parallel versus serial display (a representation-cognitive agent interaction factor), and (3) the degree to which the links between data sets and domain questions vary with context for a particular world and for a particular collection criterion (a cognitive demand-representation interaction factor).

The first factor is the degree to which the designer's choice of criterion for collecting data anticipates the questions that must be answered to perform domain tasks. An example is that collecting data along physical system boundaries will not support collecting data to answer questions about the status of a function when the domain contains a many-to-many mapping between systems and functions. There are two paths that can be followed to match collection boundaries to meaningful questions. One is an empirical path, as in Henderson & Card (1986), where you look for patterns in the data search behavior of people performing the task and then use those patterns (a "locality principle") to define collections (in Henderson & Card's system this is rooms, i.e., collections of windows). An alternative path is to analyze the domain problem solving demands to determine what issues must be addressed or what questions must be answered and then to collect the data that is relevant to that issue or question (e.g., Goodstein, 1983; Mitchell & Miller, 1986; Woods, 1986).

The second factor is the characteristics of the representation in which the search occurs that affects search performance across the predefined boundaries. An example of this is the changes in search performance (e.g., getting lost, keyhole effects, cognitive tunnel vision) that occur depending on the availability of spatial cues to provide a link between data collections (Woods, 1984b; Mitchell, 1985 and personal communication; Elm & Woods, 1985). This is the issue of parallel versus serial display as it is called in dynamic worlds (and is one of the important cognitive issues behind the medium question of window management

tactics such as tiled versus overlapping windows). The Analogical Representations


question is, given that there is too much data to be
A second dimension along which representations can
seen at once (note that physically parallel display does
vary is whether the information is presented as de­
not mean all the data can be taken in one glance),
scription (i.e., linguistic or digital form) or as depic­
how can the representation design be manipulated to
tion (graphic or analog form). Visualization can be
help the problem solver find the right set of data at
a powerful aid to comprehension and conceptualiza­
the right time. Woods (1984b) extensively discusses
tion. The priority of space as an organizing princi­
this issue and proposes a variety of techniques to aid
ple is so compelling that non-spatial data are often
search across fixed collection boundaries (cf. also,
given a spatial representation to improve comprehen­
K. Norman et al., 1986), and empirical results that
sion (e.g. Haber, 1981). For example, people use
indicate the utility of these techniques include Elm
Venn diagram representations to assist syllogistic rea­
& Woods (1985), Mitchell (1985, personal communi­
soning (Johnson-Laird, 1983), and the implications of
cation), Mitchell & Saisi (1987); Wiecha & Henrion
a set of data are found more easily in a cartesian plot
(1987a); Vicente (1987).
or other graphic form than in a data table (Cole, 1986).
The third factor is the degree to which the links be­ Similarly, showing how some device or system works
tween data sets and questions vary with context for (not how it looks) can aid a person to control or trou-
a particular world and for a particular collection cri­ bleshoot that device (e.g., Lieberman, 1985).
terion. An example of this factor occurs when the Based on this observation one approach is to build
data relevant to answer a question changes depending representations that directly encode higher order se­
on the state of the world (such as, the data used to mantic properties of the domain, that is, there is some
answer the question is there adequate material inven­ degree of analogical relation between the representa­
tory inside a container may depend upon whether the tion and the underlying world. In analogical repre­
material is in a liquid state or a gas/liquid mixture). sentation the structure and behavior of the representa­
This is one factor that leads to the need for adaptive tion (symbol) is related to the structure and behavior
collection. of what is represented (referent). This means that per­
Frequently, what data are relevant depends on the ceptions about the form of representation correspond to
state of the domain or the intentions of the problem judgements about the underlying semantics, for exam­
solver. In these cases, effective decision support can ple, a relationship between two elements of the repre­
be based on adaptive collection. In adaptive collec­ sentation corresponds to a relational semantic property
tion, how data are collected into groups is a parameter of the world.
that changes as a function of the state of the world Because the symbol and referent are not related on
in question or as a function of inferred user intent. all dimensions, the difficult and critical design prob­
Machine power is applied to manipulate the represen­ lem with this kind of integration is what is a useful
tation as a function of the semantics of the world. Ex­ partial isomorphism between the form and the content
amples include automatic blowup/suppression of detail of a representation. To accomplish this the designer
(e.g., Furnas, 1986), dynamic menus, adaptive collec­ first must determine what constraints between the sym­
tion of relevant sets of data as a function of device bol and what is symbolized should be established, and
state such as adaptive presentation of a subset of the what techniques for analyzing the domain semantics
possible side effects of a course of action (e.g., Miller, can specify these constraints (e.g., Miller & Mitchell,
1983) and adaptive collection of relevant sets of data 1986; Woods & Hollnagel, 1987). A second design
as a function of inferred user domain intent such as the problem is how to map the chosen aspects of domain
RABBIT system for data base query (Williams, 1985). structure and behavior into characteristics of the rep­
Adaptive collection marks one kind of application resentation so that the domain semantics are directly
of machine power behind the representation. This can visible to the problem solver. This includes issues of
be algorithmic machine power where the designer de­ selecting, scaling and assigning lower order data to
termines in advance the conditional relationships be­ characteristics of the representation. Finally, there is
tween data collections and the states of the world or the question of what amount and type of training and
"intelligent" machine power where the designer pro­ experience with the representation as window on the
vides a description of the functional possibilities which domain is needed.
the machine uses to infer the particular data subset to Analogical representation produces economies of
gather in a particular situation (e.g., Miller, 1983 or processing in problem solving by reducing memory,
RABBIT). referencing and computational demands:
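A minimal sketch of the "algorithmic" form of adaptive collection described above follows; the state names, data channels, and rules are invented for illustration and are not taken from any of the systems cited.

    # Sketch of algorithmic adaptive collection: the data gathered into one view is
    # a function of the state of the world. State names, channels, and rules are
    # illustrative only, not from any fielded system.

    COLLECTION_RULES = {
        # world state          -> data channels to gather into one view
        "liquid_inventory":      ["level_sensor", "mass_flow_in", "mass_flow_out"],
        "two_phase_inventory":   ["pressure", "temperature", "level_sensor",
                                  "saturation_margin"],
    }

    def collect(world_state: str, available_data: dict) -> dict:
        """Return the subset of available data relevant to the current state."""
        wanted = COLLECTION_RULES.get(world_state, sorted(available_data))
        return {name: available_data[name] for name in wanted if name in available_data}

    if __name__ == "__main__":
        data = {"level_sensor": 0.72, "mass_flow_in": 3.1, "mass_flow_out": 2.9,
                "pressure": 15.2, "temperature": 210.0, "saturation_margin": 4.5}
        print(collect("liquid_inventory", data))
        print(collect("two_phase_inventory", data))

The "intelligent" variant would replace the fixed table with inference over a description of the domain's functional possibilities, but the designer-specified conditional relationships are already enough to change what is grouped together as the world changes.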

• If A is an analogy of S, then relationships within visual model of the world being acted upon, direct ma­
S are represented in A without being explicitly nipulation interfaces provide a concrete framework on
named in A. Thus, modifying a diagram also which the user can operate. As example, in the Pin-
and automatically changes the relationship be- ball Construction Set program (Budge, 1983) the com-
tween the modified object and all other objects ponents for "building" a Pinball machine are visu-
in the picture. The effects or consequences of the ally displayed and can be manipulated at will to create
change do not have to be calculated; they can be different Pinball games. As a result viable configura­
directly observed. tions can be more quickly conceived and tested, and
impossible configurations become more readily appar­
• Using a map one can get to all of the relation­ ent.
ships involving a certain place through a single
access point; by contrast, each part of the region
would have to be referred to many times, in a Integral Displays
large number of statements, if the same data set
was expressed as a description (where the struc­ One of the strong forms of analogical representation
ture of the representation is independent of the is the use of integral or object graphic forms. Inte­
structure of what is represented). As a result, it gral displays have been developed for static worlds
is easier with a map to access any part of a mu­ such as statistics to represent the results of multivari-
tually relevant data set given any one element of ate analyses (e.g., Chernoff, 1973; Kleiner & Harti-
the set. gan, 1981; Becker & Cleveland, 1984) as well as
dynamic worlds such as flightdecks (e.g., contact ana­
• In good analogical representations, non-useful log displays; Roscoe & Eisele, 1976), in medicine to
possibilities are inherently difficult to follow and aid patient monitoring (e.g., Siegel et al., 1971), and
useful possibilities are easy to follow (for exam­ in process control to aid monitoring of system status
ple, it is harder to invent a drawing of an impos­ (e.g., Woods et al., 1981; Goodstein, 1981).
sible object than it is to describe an impossible In integral displays higher order properties are repre­
object) because constraints in the semantics of the sented by depicting the relationships among the lower
domain (including causal and consequential rela­ order data that define the property. Instead of directly
tions) are expressed as constraints in the syntax noting the value of the higher order property, it is
and allowable transformations of the representa- represented as an emergent feature of the structure and
tion. behavior of the representation of the lower order data.
Figure 5 contains an example of one kind of integral
• All of the above help reduce the dependence of
display (a geometric pattern format originally proposed
performance on user working memory capacity
by Coekin, 1969) that was developed to depict the
because the analogical representation acts as an
overall safety status of a nuclear reactor (Woods et al.,
external memory or provides the contextual re-
1981). The different spokes are dynamically scaled
trieval cues (e.g., Norman & Bobrow, 1979).
so that a regular polygon always represents normal
The weak form of analogical representation is to conditions, while distortions in the figure represent a
provide structure among the elements of a collection. developing abnormality. This integral display was de­
Not any structure counts (if the representation is to veloped in 1980 and is in use today in several power
be more than a pictograph; Berlin, 1974), the visual plant control rooms.
structure must be related to domain semantics. The Integral displays have many potential advantages for
strong form of analogical representation uses machine improved user information extraction. Wickens (1984)
power to manipulate the behavior of the representation contains an overview of the psychological basis for
in ways that directly correspond to domain semantics this philosophy for the display of data, and Jacob et
(e.g., browsers, direct manipulation interfaces). Note al. (1976), Goldsmith & Schvaneveldt (1984), Wick­
that in strong analogical integration, as in adaptive ens et al., (1984), and Carswell & Wickens (1984)
collections, machine power is used to manipulate the are empirical studies that confirm the theoretical anal­
representation. ysis. For example, Wickens et al. (1984) examined
The power of analogical representation undoubtedly the role of integral display of data in user informa­
contributes significantly to the appeal and effectiveness tion processing via a fault detection task on a simple
of direct manipulation interfaces (Hutchins, Hollan & dynamic system. The data available on system perfor­
Norman, 1986; Shneiderman, 1984). By presenting a mance was interrelated through the dynamics of the

system, and this data needed to be integrated in or­ visual format such as a bar chart can be used in many
der to determine if the system was disturbed and the representational forms.
source of the disturbance. They found that there was a The design challenge for creating effective repre­
significant advantage with an integral display (a geo­ sentations, such as integral displays, is to set up this
metric pattern format similar to Figure 5) over separate mapping so that the task-meaningful semantics are di­
display of the data (individual bars). rectly visible to the observer (what Becker & Cleve­
Different representational forms can vary in what land, 1984 call the decoding problem, i.e., domain data
information (and how much information or "richness") may be cleverly encoded into the attributes of a visual
can be directly read out, how directly this information form, but unless observers can effectively decoded the
specifies actions in the world, how broad or narrow representation to extract relevant information the rep­
are the uses that the directly available information can resentation will fail to support the user). In the case of
be put to. The evidence from several studies suggests integral displays, this requires the designer to confront
that properly designed integral displays can support the question of scaling the chosen data so that the ob­
human performance by providing more direct read out server can see the desired patterns. This includes prob­
of more useful information. lems of making disparate data comparable (e.g., how
Representations can support effective performance to integrate data from two different physical dimen­
in more ways than just the direct read out of signifi­ sions or how to combine a continuous and a discrete
cant information - how easily can one read out of the dimension), setting scale resolution, and, most criti­
representation other, not directly available, informa­ cally, identifying the reference normal and boundary
tion (i.e., how well it supports the observer in going conditions as a function of the context. For example,
beyond the information given). General attributes of the display in Figure 5 can be seen as a dynamic likeli­
skill can be affected as well. For example, an essential hood alarm that shows continuous variations between
attribute of skill is the flexibility to adapt strategies and target and limit conditions with respect to the health
resources to a changing world in the pursuit of goals. of a nuclear reactor (Sorkin et al., in press).
Some representations may support the flexibility of The designer must also decide how to assign data to
skilled performance better than others, even when both the characteristics of the object format selected (e.g.,
directly provide useful information. Lesh (1987) sug­ to the spokes of the geometric figure). Careful assign­
gests (and illustrates) that the power of a representation ment of data is necessary because there are inherent
is a function of the ability of agents using it to go be­ relationships in the graphic format that must be related
yond the information given in the sense of discovering to (or at least not conflict with) the data relationships
new relationships or generating significant new ques­ to be portrayed. For example, Brown (1985) found
tions or solving new sophisticated problems (i.e., those that in the faces object display form the data mapped
which require more than simple read out of the infor­ into the mouth feature dominated the data represented
mation available in the representation). This attribute on the other facial features. In Figure 5, the pres­
is particularly important for problem solving in worlds sure and temperature scales together define subcooling
where not all cases can be enumerated in advance and (temperature at high limit marker and pressure at low
in using representations offline to enhance skill acqui­ limit marker); placing these data on adjacent scales
sition. Many people suspect that integral display is a emphasizes this underlying relationship.
powerful type of representation in this sense. These problems of selecting, scaling, and assign­
Given that integral displays can support improved ing data to the visual form are critical for constructing
performance, how does one create an effective inte­ an effective example of the class of integral displays
gral display or, more generally, an effective represen­ because they affect not just how the domain seman­
tation for a domain? Representational form is defined tics are mapped into the the graphic form, but also
in terms of how data on the state and semantics of the how visible they are to the user. For example, Kleiner
domain is mapped into the syntax and dynamics of and Hartigan (1981, and the accompanying commen­
visual forms in order to produce information transfer tary by Wainer) contains three different solutions to
to the agent using the representation. It is indiffer­ these problems for a single integral format (faces) that
ent to the visual form per se. Bar charts, trends and visibly change the salience of a single underlying data
other visual forms can be used to create the same rep­ pattern. On the other hand, irrelevant characteristics or
resentational form (or variants on a representational behavior of the graphic form (irrelevant to the designer
theme). Conversely, the visual appearence alone does in the sense that domain information is not mapped
not define the representational form so that a single into these characteristics or behaviors of the graphic

[Figure 5 (two panels) is a polygon display whose spokes carry scales labeled CORE EXIT, SUBCOOL, WID RCS PRESS, STARTUP, PRZR LEV, CNTMT PRESS, RV LEV, RAD CNTMT, and WID SG LEV.]

Figure 5: One example of integral displays. A regular polygon represents normal conditions (Panel A), while distortions in the polygon represent a developing abnormality (Panel B). (Copyright 1980 Westinghouse Electric Corporation, and reproduced by permission.)

form) can be interpreted as carrying domain informa­ ometric pattern display in Figure 5. It was frequently
tion by the user and interfere with extraction (decod­ described by operational people who monitored simu­
ing) of the relevant domain information. This over- lated plant conditions as providing a "heartbeat" for the
interpretation is one kind of penalty that can occur with plant. Furthermore, plant behavior that was reflected
analogical representations in general (Naveh-Benjamin in characteristic behavior (not a single state) of the
& Pachella, 1982; Halasz & Moran, 1982), e.g., the graphic form was immediately apparent to operational
unintentional from the designer's perspective domi­ personel (e.g., the occurrence of a reactor shutdown
nance of the mouth feature in the faces object display and whether it is a normal shutdown or whether some
(Brown, 1985). Additional challenges arise in design­ underlying disturbance persists). Another example of
ing integral displays for dynamic applications. Not the power of dynamic symbolization is Cleveland's
only must the designer be concerned with the relation­ interactive environment (Becker & Cleveland, 1984)
ship between domain semantics and the characteristics - brushing the scatterplot matrix - where an integral
of the integral display form he must also take into ac­ graphic for analyzing multi-variate data is made in­
count how the behavior of the domain issue of interest teractive. As a result, some statistical analyses become
maps into the dynamic behavior of the graphic form. visible dynamic patterns (e.g., autocorrelation).
Very little attention has been paid to dynamic symbol-
referent relationships to provide a source of guidance
Multiple Representations
for designers. However, studies of movement per­
ception show that a great deal of information unavail­ One possibility is that effective problem solving per­
able (or available only with great difficulty) from static formance is best achieved by providing multiple rep­
representations is immediately extracted from dynamic resentations. Rasmussen (1986a) explicitly character­
stimuli (Johansson, 1975). The power of the dynamic izes different levels of representation and calls for de-
element was often apparent in the work with the ge- cision support/interface design that provides multiple

representations for the human problem solver such as the user's data queries are formulated in terms that
views that contain information about the physical de­ are meaningful relative to his/her domain tasks rather
vice and views that contain information about the func­ than in terms unique to the interface itself. Because
tions performed by the different physical parts (cf. perspectives act as filters about what is relevant or ir­
also, Davis, 1983 who addresses the role of multi­ relevant given a vantage point, they provide implicit
ple representations in machine problem solving). Sys­ control over the portion of the database available to
tems have begun to emerge that provide the user with the user at any time (from the data base point of view,
multiple views of a domain such as PECAN (Reiss, the problem of how much and what kind of data to
1985). Goodstein (1983) and Woods (1986) describe present to the user of large, heterogeneous data bases
programs underway to define what would be useful or, from a psychological viewpoint, the problem of
multiple perspectives in process control applications specifying the user's normative field of attention in
based on analyses of the semantic structure of these varying contexts, Moray, 1981).
types of worlds. Specific multiple representation in­
terfaces based on these concepts are being developed
and tested. The question for cognitive engineering is A Case in Representation Design
not how to provide computational facilities that allow
The most complete extant case of representation de­
multiple representations but to study how multiple rep­
sign (including both a new design and experimental in­
resentations can be used to aid performance.
vestigation of the results) is Mitchell & Saisi (1987).
For example, the significance of a given data point The application world was NASA satellite commu­
varies with the observer's point of view, and a per­ nications ground control centers. This is a complex
spective is a description of an event or device from a problem solving world (cf., Woods, 1988) in that it
particular viewpoint. A representation system can be is dynamic (event based), there are significant inter­
organized around these semantically relevant perspec­ actions between parts of the world, it is large enough
tives rather than around implementation concepts of that workload can play a significant role, and avail­
screens or windows (Williams, 1984; Woods, 1986). able evidence is uncertain. In the existing computer
The concept of multiple perspectives is based on a based display and control system (what is in use in
metaphor that examining a data base is like a person actual centers) design for data availability dominated:
standing at some point in a topology. The portion of whatever is measurable is available to be displayed
the data topology that is viewable from a user specified (e.g., data block counts), for the most part what can
vantage point corresponds to the data of high potential be examined is raw data, and the collection principle
relevance given the user's interest (as expressed by is primarily by data type. Three VDU screens were
the requested vantage point). This field of attention available to the users to call up display pages of data
is constructed and manipulated through horizon lim­ to inspect in parallel.
its, landmarks, height of vantage point (level of sum­ Mitchell & Saisi developed a new design that re­
mation), granularity of the view. The use of multiple versed the emphasis from the display of data as signals
perspectives depends on the analysis of domain objects to be processed by the operators to display organized
and relations in order to define what perspectives are to enhance communication of what is signified by the
meaningful and what portion of the topology is of high available data. To accomplish this, they began with a
potential relevance in that context, e.g., from the per­ formal analysis of the domain semantics to identify the
spective of some goal of interest, data about goal sat­ demands on and activities of the operator as domain
isfaction, inter-goal constraints, process performance, problem solver (cf., Mitchell & Miller, 1986;). Based
process availability form data subsets of high poten­ on these results a new control center was designed.
tial relevance. The user can then recognize interesting First, data were collected based on operator functions
or important conditions within this data field to focus and information processing activities. Second, adap­
on or pursue. The display of the potentially relevant tive collection was introduced through automatic dis­
data set helps the user progressively refine and focus play of relevant data (via window management) given
in on the important data for the current context (e.g., the operator's expression of an area of interest and
Norman & Bobrow, 1979). the current state of the world (based on the seman­
Display selection now becomes a process of select­ tic analysis). This was accomplished through modi­
ing an object and selecting a perspective or context. fications to an off the shelf window manager. Note
Conceiving of data selection in terms of objects and that the design developed by Mitchell & Saisi was not
perspectives helps build interface transparency because directed by computational mechanisms (windows or

no windows, the three VDUs in the original system 1.7 Summary


constituted "windows;" or which window manager to
choose); instead, the design was directed by the se­ To build a cognitive description of a problem solving
mantic analysis and then by a search for implemen­ world one must understand how representations of the
tation techniques that could accomplish the cognitive world interact with different cognitive demands im­
objectives constrained by cost, availability, etc. Third, posed by the application world in question and with
analogical representations were introduced through in­ characteristics of the cognitive agents, both for exist­
tegral display graphics to communicate the state of ing and prospective changes in the world. Building a
higher order domain properties directly. Note that this cognitive description is part of a problem driven ap­
means that the contents of collections were sometimes proach to the application of computational power. In
analogically represented integrations of raw data rather tool driven approaches, knowledge acquisition focuses
than raw data itself. on describing domain knowledge in terms of the syn­
tax of computational mechanisms, i.e., the language of
implementation is used as a cognitive language. Se­
Mitchell & Saisi investigated the performance of
mantic questions are displaced either to whomever se­
moderately trained subjects (five and one half hours)
lects the computational mechanisms or to the domain
on both the original support system and the new de­
expert who enters knowledge. The alternative is to
sign. Performance with the new design showed large provide an umbrella structure of domain semantics that
and significant improvements over a wide range of per­ organizes and makes explicit what particular pieces of
formance criteria, including workload, task execution, knowledge mean about problem solving in the domain.
fault compensation, plan revision given new events in
the world (e.g., unscheduled satellite transmissions). There is a clear trend in this direction in order
to achieve more effective decision support and less
brittle machine problem solvers (e.g., Clancey, 1985;
One objection to "strong" representations such as Coombs, 1986; Gruber & Cohen, 1987). Acquiring
above is that they require an analytical or empirical and using a domain semantics is essential to be ca­
model of problem demands and user information pro­ pable of avoiding potential errors and specifying per­
cessing activities. Inadequacies in this model, as the formance boundaries when building "intelligent" ma­
argument goes, will introduce new difficulties. One chines (Roth et al., 1987). Techniques for analyzing
way to deal with this objection has to do with robust cognitive demands not only help characterize a partic­
techniques for mapping cognitive demands and activ­ ular world, they also help to build a repertoire of gen­
ities in complex worlds - one major topic for a viable eral cognitive situations, that is, abstract and therefore
cognitive engineering. Second, the mapping from data transportable reasoning situations which are domain
to what is signified by the data has to be done in order relevant because they use knowledge of the world be­
to carry out domain tasks. This can be done on the fly ing reasoned about. A cognitive description character­
with no support for the domain actor, or offline anal­ izes what knowledge of the world is needed and how
ysis and investigation of the problem domain-current this knowledge can be used to achieve effective perfor­
representation-cognitive agents triad can be done in mance. It helps to identify places where performance
advance to identify ways to produce a better match. is more likely to go wrong (e.g., situations where state
Third, representational techniques such as adaptive dependent side effects become relevant or where a rea­
collection and analogical representation are effective soning shortcut coded in a rule is not valid). Both of
in part because they are robust in that they leave the these types of information about problem solving in
domain actor in control (cf., Woods, 1986). For exam­ a domain can be used to specify what cognitive tools
ple in the Mitchell & Saisi design, adaptive collection are needed by cognitive agents and what kinds of com­
is based on the user's expression of a region of inter­ putational mechanisms are capable of providing such
est. Given an area or topic of interest, constraints in tools.
the semantics of the domain are used to narrow the set
of potentially relevant data. Fourth, the results should
always be subject to testing. In Mitchell & Saisi's ex­
periment, deficiencies in the model were identified in a 1.8 References
few places. Using a model of cognitive demands and
activities helps identify model inadequacies because it Ackoff, R. L. The art and science of mess manage­
makes explicit the representation design. ment. Interfaces, 1981, 11, 20-26.

Adelson, B. When novices surpass experts: the diffi­ Boy, G. A. Operator Assistant Systems. International
culty of a task may increase with expertise. Journal Journal of Man-Machine Studies , 27, 1987. (Also in
of Experimental Psychology: Learning, Memory and G. Mancini, D. Woods & E. Hollnagel (Eds.) Cogni­
Cognition, 483-495, 10(3), 1984. tive Engineering in Dynamic Worlds, London: Aca­
demic Press.
Adelson, B. Problem solving and the development of
abstract categories in programming languages. Mem­ Bransford, J. D. & Johnson, M. K. Considerations of
ory and Cognition, 1981, 9, 422-433. some problems of comprehension. In Chase, W. G.
(Ed.), Visual Information Processing. Hillsdale, N. J.:
Alterman, R. An adaptive planner. In Proceedings of
Lawrence Erlbaum Assoc, 1983.
theAAAI. AAAI, 1986.
Alty, J. L. & Coombs, M. J. Face-to-face guidance of Bransford, J., Sherwood, R., Vye, N. & Rieser, J.
university computer users-I: A study of advisory ser­ Teaching and problem solving: Research foundations.
vices. International Journal of Man-Machine Studies, American Psychologist, 1986, 41, 1078-1089.
1980, 12, 390-406. Britton, B. K. & Black, J. B. Understanding Exposi-
Amendola, A., Bersini, IL, Cacciabue, P. & Manicni, tory Text. Hillsdale, N. J.: Lawrence Erlbaum Asso­
G. Modelling Operators in Accident Conditions: Ad­ ciates, 1985.
vances and Perspectives on a Cognitive Model. Inter­
Broadbent, D.E., Cooper, P. E., Fitzgerald, P. &
national Journal of Man-Machine Studies, 27, 1982.
Parkes, K. R. Cognitive Failures Questionnaire (CFQ)
(Also in G. Mancini, D. Woods & E. Hollnagel (Eds.)
and its Correlates. British Journal of Clinical Psychol-
Cognitive Engineering in Dynamic Worlds, London:
ogy, 1982, 21, 1-16.
Academic Press.
Anderson, J.R. The Architecture of Cognition. Cam­ Brown, A.L., Bransford, J. D., Ferrara, R. A. &
bridge, MA: Harvard University Press, 1983. Campione, J. C. Learning, remembering, and unde-
standing. In Flavell, J. H., & Markman, E. M. (Ed.),
Anderson, J. R. , Boyle, C. F. & Reiser, B. J. In­ CarmichaeVs manual of child psychology. New York:
telligent Tutoring Systems. Science, April 1985, 228, Wiley, 1983.
456-457.
Brown, A. L. & Campione, J. C. Psychological the­
Bainbridge, L. Diagnostic skill in process operation. ory and the study of learning disabilities. American
In M. L. Matthews & R. D. G. Webb (Eds.), Pro­ Psychologist, 1986, 41(10), 1059-1068.
ceedings of International Conference on Occupational
Ergonomics. Toronto, Canada: , 1984. Brown, J. S. & Burton, R. R. Diagnostic models for
procedural bugs in basic mathematical skills. Cognitive
Bechtel, W. Functional localization assumptions as im­ Science, 1978, 2, 155-192.
pediments to clinical research. In L. R. Troncale (Ed.),
A General Survey of Systems Methodolgy, Vol. 2. Mon­ Brown, J. S., Moran, T. P. & Williams, M. D. The
terey, CA: Intersystems Publications, 1982. Semantics of Procedures (Tech. Rep.). Xerox Palo Alto
Research Center, 1982.
Becker, R. A. & Cleveland, W. S. Brushing the Scat-
terplot Matrix: High-Interaction Graphical Methods Brown, J. S. & Newman, S. E. Issues in cognitive
for Analyzing Multidimensional Data (Tech. Rep.). and social ergonomics: from our house to bauhaus.
AT&T Bell Laboratories Technical Memorandum, Human-Computer Interaction, 1985, 1, 359-391.
1984.
Brown, J.S., & VanLehn, K. Repair theory: A gen­
Bly, S. A. & Rosenberg, J. K. A comparison of tiled erative theory of bugs in procedural skills. Cognitive
and overlapping windows. In M. Man tei & P. Orbe- Science, 1980, 4, 379-426.
ton (Ed.), Human Factors in Computing Systems. New
York: Association for Computing Machinery, 1986. Brown, R. L. Methods for graphic representation of
(CHT86). systems of simulated data. Ergonomics, 1985, 28,
1439-1454.
Bobrow, D. G. (ed.). Qualitative Reasoning About
Physical Systems. Cambridge, Mass.: The MIT Press, Budge, B. Pinball construction set. (Computer pro-
1985. gram). San Mateo, CA: Electronic Arts, 1983.

Cantor, N., Smith, E. E., French, R., & Mezzich, Clancey, W. J. The epistemology of a rule-based ex­
J. Psychiatric diagnosis as prototype categorization. pert system - a framework for explanation. Artificial
Journal of Abnormal Psychology, 1980, 89, 181-193. Intelligence, 1983, 20, 215-251.
Caramazza, A., McCloskey, M., & Green, B. F. Naive Clancey, W. J. Heuristic Classification. Artificial In­
beliefs in sophisticated subjects: Misconceptions about telligence, 1985, 27, 289-350.
trajectories of objects. Cognition, 1981, 9, 117-128.
Cleveland, W. S. The Elements of Graphing Data.
Card, S. K., Moran, T. P. & Newell, A. The Psychol­
Monterey, California: Wadsworth Advanced Books
ogy of Human-Computer Interaction. Hillsdale, N.J.:
and Software, 1985.
Lawrence Erlbaum Associates, 1983.

Carroll, J. M. & Carrithers, C. Training wheels in a Cleveland, W. S. & McGill, R. Graphical perception:
user interface. Communications oftheACM, 1984, 27, Theory, experimentation, and application to the devel­
800-806. opment of graphical methods. Journal of the American
Statistical Association, 1984, 79, 531-554.
Carswell, C. M. & Wickens, C. D. Stimulus integral­
ity in displays of system input-output relationships: a Coekin, J. A. A versatile presentation of parameters for
failure detection study. In Proceedings of the Human rapid recognition of total state. In International Sym­
Factors Society. 28th Annual Meeting, 1984. posium on Man-Machine Systems. IEEE Conference
Record 69, 1969. (58-MMS 4).
Casscells, W., Shoenberger, A. & Grayboys, T. Inter­
pretation by physicians of clinical laboratory results. Cohen, P. et al. Management of uncertainty in
New England Journal of Medicine, 1978, 299, 999- medicine. In (Ed.), IEEE Conference on Computers
1000. and Communications. IEEE, 1987. (Also University
of Massachusetts COINS Technical Report 86-12).
Chambers, J. M., Cleveland, W. S., Kleiner, P. A. &
Tukey, P. A. Graphical Methods for Data Analysis. Cole, W. G. Medical Cognitive Graphics. In Chi'86
Belmont, CA: Wads worth, 1983. Conference Proceedings. ACM, 1986.
Chase, W. & Simon, H. A. Perception in chess. Cog­
Coombs, M. J. Artificial Intelligence and Cognitive
nitive Psychology, 1973, 4, 55-81.
Technology: Foundations and Perspectives. In Holl-
Cheng, P. W. & Holyoak, K. J. Pragmatic reasoning nagel, E., Mancini, G. & Woods, D. D. (Eds.), Intelli­
schémas. Cognitive Psychology, 1985, 17, 391-416. gent Decision Support in Process Environments. New
York: Springer-Verlag, 1986.
Cheng, P. W., Holyoak, K., Nisbett, R. & Oliver,
L. Pragmatic versus syntactic approaches to training Coombs, M. J. & Alty, J. L. Face-to-face guidance of
deductive reasoning. Cognitive Psychology, 1986, 18, university computer users-H: Characterising advisory
293-328. interactions. International Journal of Man-Machine
Studies, 1980, 12, 407-429.
Chernoff, H. The use of faces to represent points in
k-dimensional space graphically. Journal of the Amer­ Coombs, M. J. & Alty, J. L. Expert systems: An
ican Statistical Association, 1973, 68, 361-368. alternative paradigm. International Journal of Man-
Chi, M. T. H., Feltovich, P. J. & Glaser, R. Cate- Machine Studies, 1984, 20, 21-43.
gorizaton and representation of physics knowledge by
Coombs, M. J. & Hartley, R. T. The MGR Algo­
experts and novices. Cognitive Science, 1981, 5, 121-
rithm and Its Application to the Generation of Ex­
152.
planations for Novel Events. International Journal of
Chi, M. T., Glaser, R. and Rees, E. Expertise in Man-Machine Studies, 27, 1987. (Also in G. Mancini,
problem solving. In R. Sternberg (Ed.), Advances in D. Woods & E. Hollnagel (Eds.) Cognitive Engineer­
the Psychology of Human Intelligence. Hillsdale, N.J.: ing in Dynamic Worlds, London: Academic Press.
Lawrence Erlbaum Assoc, 1982.
Davis, R. Reasoning from first principles in elec­
Chi, M., Glaser, R., & Farr. The Nature of Expertise. tronic troubleshooting. International Journal of Man-
Hillsdale, NJ: Erlbaum, 1986. Machine Studies, 1983, 19, 403-423.

De Keyser, V. Les Interactions Hommes-Machine: Caractéristiques et utilisations des différents supports d'information par les opérateurs. Rapport Politique Scientifique/FAST n° 8: Psychologie du Travail, Université de l'Etat à Liège, 1986.
de Groot, A. Thought and Choice in Chess. The Hague: Mouton, 1965.
Dörner, D. Heuristics and cognition in complex systems. In R. Groner, M. Groner & W. F. Bischof (Eds.), Methods of Heuristics. Erlbaum, 1983.
Dubois, D. & Prade, H. A Tentative Comparison of Numerical Approximate Reasoning Methodologies. International Journal of Man-Machine Studies, 27, 1987. (Also in G. Mancini, D. Woods & E. Hollnagel (Eds.), Cognitive Engineering in Dynamic Worlds, London: Academic Press.)
Egan, D. & Schwartz, B. Chunking in recall of symbolic drawings. Memory and Cognition, 1979, 7, 149-158.
Ehn, P. & Kyng, M. A tool perspective on design of interactive computer support for skilled workers. Unpublished manuscript, Swedish Center for Working Life, Stockholm, 1984.
Einhorn, H. J. & Hogarth, R. M. Ambiguity and Uncertainty in Probabilistic Inference. Psychological Review, 1985, 92, 433-461.
Elm, W. C. & Woods, D. D. Getting lost: A case study in interface design. In Proceedings of the Human Factors Society, 29th Annual Meeting, 1985.
Fischhoff, B. Decision Making in Complex Systems. In Hollnagel, E., Mancini, G. & Woods, D. D. (Eds.), Intelligent Decision Support. Germany: Springer-Verlag, 1986.
Fischhoff, B., Slovic, P. & Lichtenstein, S. Fault trees: Sensitivity of estimated failure probabilities to problem representation. Journal of Experimental Psychology: Human Perception and Performance, 1978, 4, 330-344.
Fischhoff, B., Slovic, P. & Lichtenstein, S. Improving intuitive judgement by subjective sensitivity analysis. Organizational Behavior and Human Performance, 1979, 23, 339-359.
Fischhoff, B., Lanir, Z. & Johnson, S. Military Risk Taking and Modern C3I (Tech. Rep. 86-2). Portland, Oregon: Decision Research, 1986.
Fitter, M. J. & Sime, M. E. Responsibility and shared decision making. In H. T. Smith & T. R. G. Green (Eds.), Human Interaction with Computers. London: Academic Press, 1980.
Funder, D. C. Errors and mistakes: Evaluating the accuracy of social judgements. Psychological Bulletin, 1987, 101, 75-90.
Furnas, G. W. Generalized Fisheye Views. In CHI'86 Conference Proceedings. ACM, 1986.
Gadd, C. S. & Pople, H. E. An interpretation synthesis model of medical teaching rounds discourse: Implications for expert system interaction. International Journal of Educational Research, 1, 1987.
Gaeth, G. J. & Shanteau, J. Reducing the influence of irrelevant information on experienced decision makers. Organizational Behavior and Human Performance, 1984, 33, 263-282.
Gaffney, M. E. Bridge simulation: Trends and comparisons. Washington, D.C.: Maritime Transportation Research Board, National Academy of Sciences, 1982. Unpublished report.
Garbolino, P. Bayesian Theory and Artificial Intelligence: The Quarrelsome Marriage. International Journal of Man-Machine Studies, 27, 1987. (Also in G. Mancini, D. Woods & E. Hollnagel (Eds.), Cognitive Engineering in Dynamic Worlds, London: Academic Press.)
Gentner, D. & Stevens, A. L. (Eds.). Mental Models. Hillsdale, NJ: Lawrence Erlbaum Associates, 1983.
Gibson, J. J. The Ecological Approach to Visual Perception. Boston: Houghton Mifflin, 1979.
Gick, M. L. & Holyoak, K. J. Schema induction and analogical transfer. Cognitive Psychology, 1983, 15, 1-38.
Gitomer, D. H. & Glaser, R. Knowledge, self-regulation, and instruction: If you don't know it, work on it. In R. E. Snow & M. J. Farr (Eds.), Aptitude, Learning and Instruction (Vol. 3). Hillsdale, NJ: Erlbaum, 1987.
Glaser, R. Education and thinking: The role of knowledge. American Psychologist, 1984, 39, 93-104.
Goldsmith, T. E. & Schvaneveldt, R. W. Facilitating Multiple-Cue Judgements with Integral Information Displays. In J. C. Thomas & M. L. Schneider (Eds.), Human Factors in Computer Systems. Norwood, NJ: Ablex Publishing, 1984.
Goodstein, L. Discriminative Display Support for Process Operators. In J. Rasmussen & W. B. Rouse (Eds.), Human Detection and Diagnosis of System Failures. New York: Plenum Press, 1981.
Goodstein, L. An integrated display set for process operators. In G. Johannsen & J. E. Rijnsdorp (Eds.), Analysis, Design and Evaluation of Man-Machine Systems. Pergamon Press, 1983.
Gopher, D., Weil, M. & Siegel, D. Is it only a game? Using videogames as surrogate instructors for the training of complex skills. In 1986 IEEE Proceedings. IEEE, 1986.
Gopher, D., Weil, M. & Siegel, D. Practice under changing priorities: An approach to training of complex skills (Tech. Rep.), in press.
Gruber, T. & Cohen, P. Design for Acquisition: Principles of knowledge system design to facilitate knowledge acquisition. International Journal of Man-Machine Studies, 1987, 26, 143-159. (Special Issue on Knowledge Acquisition for Knowledge Based Systems.)
Haber, R. N. The power of visual perceiving. Journal of Mental Imagery, 1981, 5, 1-40.
Halasz, F. & Moran, T. Analogy considered harmful. In Human Factors in Computer Systems. Gaithersburg, MD: CHI '82, 1982.
Hasselbring, T., Goin, L. & Bransford, J. Dynamic assessment and mathematics learning. Paper presented at the 109th Annual Meeting of the American Association of Mental Deficiency, Philadelphia, PA, 1985.
Hawkins, D. An analysis of expert thinking. International Journal of Man-Machine Studies, 1983, 18, 1-47.
Henderson, A. & Card, S. Rooms: The use of multiple virtual workspaces to reduce space contention in a window-based graphical user interface (Tech. Rep.). Xerox PARC, 1986.
Herry, N. Errors in the execution of prescribed instructions. In J. Rasmussen, K. Duncan & J. Leplat (Eds.), New Technology and Human Error. Chichester: John Wiley & Sons, 1987.
Hiebert, J. & Lefevre, P. Conceptual and procedural knowledge in mathematics. In J. Hiebert (Ed.), Conceptual and Procedural Knowledge: The Case of Mathematics. Hillsdale, NJ: Erlbaum, 1986.
Hirschhorn, L. Beyond Mechanization: Work and Technology in a Postindustrial Age. Cambridge, MA: MIT Press, 1984.
Hollan, J., Hutchins, E. & Weitzman, L. Steamer: An interactive inspectable simulation based training system. AI Magazine, 1984, 4, 15-27.
Hollnagel, E. & Woods, D. D. Cognitive Systems Engineering: New wine in new bottles. International Journal of Man-Machine Studies, 1983, 18, 583-600.
Hollnagel, E., Mancini, G. & Woods, D. D. (Eds.). Intelligent Decision Support in Process Environments. New York: Springer-Verlag, 1986.
Hoogovens Report. Human factors evaluation: Hoogovens No. 2 hot strip mill (Tech. Rep. FR251). London: British Steel Corporation/Hoogovens, 1976.
Hutchins, E., Hollan, J. & Norman, D. A. Direct Manipulation Interfaces. Human-Computer Interaction, 1988, 1, 311-338.
Jackson, P. & Lefrere, P. On the application of rule-based techniques to the design of advice giving systems. International Journal of Man-Machine Studies, 1984, 20, 63-86.
Jacob, R., Egeth, H. & Bevan, W. The face as a data display. Human Factors, 1976, 18, 189-200.
Janvier, C. (Ed.). Problems of Representation in the Teaching and Learning of Mathematics. Hillsdale, NJ: Erlbaum, 1987.
Johansson, G. Visual motion perception. Scientific American, June 1975, pp. 76-88.
Johnson, E. & Payne, J. W. Effort and accuracy in choice. Management Science, 1985, 31, 395-414.
Johnson-Laird, P. N. Mental Models: Towards a cognitive science of language, inference, and consciousness. Cambridge, MA: Harvard University Press, 1983.
Kieras, D. E. & Polson, P. G. An approach to the formal analysis of user complexity. International Journal of Man-Machine Studies, 1985, 22, 365-394.
Klein, G. A., Calderwood, R. & Clinton-Cirocco, A. Rapid decision making on the fire ground. In Proceedings of the Human Factors Society, 30th Annual Meeting, 1986.
Kleiner, B. & Hartigan, J. A. Representing points in many dimensions by trees and castles. Journal of the American Statistical Association, 1981, 76, 260-269.
Kleinmuntz, D. N. Cognitive heuristics and feedback in a dynamic decision environment. Management Science, 1985, 31, 680-702.
Kotovsky, K., Hayes, J. R. & Simon, H. A. Why are some problems hard? Evidence from Tower of Hanoi. Cognitive Psychology, 1985, 17, 248-294.
Kragt, H. & Bonten, J. Evaluation of a conventional process-alarm system in a fertilizer plant. IEEE Transactions on Systems, Man, and Cybernetics, 1983, SMC-13, 586-600.
Kruglanski, A., Friedland, N. & Farkash, E. Lay persons' sensitivity to statistical information: The case of high perceived applicability. Journal of Personality and Social Psychology, 1984, 46, 503-518.
Kuipers, B. & Kassirer, J. P. Causal reasoning in medicine: Analysis of a protocol. Cognitive Science, 1984, 8, 363-385.
Langlotz, C. P. & Shortliffe, E. H. Adapting a consultation system to critique user plans. International Journal of Man-Machine Studies, 1983, 19, 479-496.
Langlotz, C., Shortliffe, E. & Fagan, L. Using decision theory to justify heuristics. In Proceedings of AAAI. AAAI, 1986.
Larkin, J., McDermott, J., Simon, D. P. & Simon, H. A. Expert and novice performance in solving physics problems. Science, 1980, 208, 1335-1342.
Lees, F. P. Process computer alarm and disturbance analysis: Review of the state of the art. Computers and Chemical Engineering, 1983, 7, 669-694.
Lenat, D. B. & Brown, J. S. Why AM and Eurisko appear to work. Artificial Intelligence, 1984, 23, 269-294.
Lesgold, A. M. Acquiring expertise. In Anderson, J. R. & Kosslyn, S. M. (Eds.), Tutorials in Learning and Memory: Essays in Honor of Gordon Bower. San Francisco: W. H. Freeman, 1984.
Lesgold, A. M., Rubinson, H., Feltovich, P., Glaser, R., Klopfer, D. & Wang, Y. Expertise in a complex skill: Diagnosing X-ray pictures. In Chi, M. T. H., Glaser, R. & Farr, M. (Eds.), The Nature of Expertise. Hillsdale, NJ: Erlbaum, 1986.
Lesh, R. The evolution of problem representations in the presence of powerful conceptual amplifiers. In C. Janvier (Ed.), Problems of Representation in the Teaching and Learning of Mathematics. Hillsdale, NJ: Erlbaum, 1987.
Lieberman, H. Seeing what your programs are doing. International Journal of Man-Machine Studies, 1984, 21, 311-331.
Mahon, P. T. Report of the Royal Commission to Inquire into the Crash on Mount Erebus, Antarctica of a DC-10 Aircraft Operated by Air New Zealand. Wellington, New Zealand: Government Printers, 1981.
Malone, T. W. How do people organize their desks: Implications for designing office automation systems. ACM Transactions on Office Information Systems, 1983, 1, 99-112.
Malone, T. W., Grant, K. R. & Turbak, F. A. The Information Lens: An Intelligent System for Information Sharing in Organizations. In Marilyn Mantei and Peter Orbeton (Eds.), Human Factors in Computing Systems: CHI'86 Conference Proceedings. ACM/SIGCHI, 1986.
Mancini, G., Woods, D. D. & Hollnagel, E. (Eds.). Cognitive Engineering in Dynamic Worlds. London: Academic Press, in press. (Special Issue of International Journal of Man-Machine Studies.)
March, J. G. & Weissinger-Baylon, R. (Eds.). Ambiguity and Command. Marshfield, MA: Pitman Publishing, 1986.
McCloskey, M., Caramazza, A. & Green, B. F. Curvilinear motion in the absence of external forces: Naive beliefs about the motion of objects. Science, 1980, 210, 1139-1141.
McCloskey, M. Naive Theories of Motion. In Gentner, D. & Stevens, A. L. (Eds.), Mental Models. Hillsdale, NJ: Erlbaum, 1983.
McKendree, J. & Carroll, J. M. Advising roles of a computer consultant. In Marilyn Mantei and Peter Orbeton (Eds.), Human Factors in Computing Systems: CHI'86 Conference Proceedings. ACM/SIGCHI, 1986.
Miller, P. L. ATTENDING: Critiquing a physician's management plan. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1983, PAMI-5, 449-461.
Mitchell, C. & Miller, R. A. A discrete control model of operator function: A methodology for information display design. IEEE Transactions on Systems, Man, and Cybernetics, 1986, SMC-16, 343-357.
Mitchell, C. & Saisi, D. Use of model-based qualitative icons and adaptive windows in workstations for supervisory control systems. IEEE Transactions on Systems, Man, and Cybernetics, 1987.
Miyata, Y. & Norman, D. A. Psychological Issues in Support of Multiple Activities. In Norman, D. A. & Draper, S. W. (Eds.), User Centered System Design: New Perspectives on Human-Computer Interaction. Hillsdale, NJ: Lawrence Erlbaum Associates, 1986.
Montmollin, M. de & De Keyser, V. Expert Logic vs. Operator Logic. In G. Johannsen, G. Mancini & L. Martensson (Eds.), Analysis, Design, and Evaluation of Man-Machine Systems. CEC-JRC Ispra, Italy: IFAC, 1985.
Moray, N. The Role of Attention in the Detection of Errors and the Diagnosis of Failures in Man-Machine Systems. In J. Rasmussen & W. B. Rouse (Eds.), Human Detection and Diagnosis of System Failures. New York: Plenum Press, 1981.
Moray, N. Attention to dynamic visual displays in man-machine systems. In Parasuraman, R. & Davies, D. R. (Eds.), Varieties of Attention. Academic Press, 1984.
Moray, N. Modelling Cognitive Activities: Human Limitations in Relation to Computer Aids. In Hollnagel, E., Mancini, G. & Woods, D. D. (Eds.), Intelligent Decision Support in Process Environments. New York: Springer-Verlag, 1986.
Moray, N. & Senders, J. (Eds.). Human Error, in preparation.
Muir, B. Trust between Humans and Machines. International Journal of Man-Machine Studies, 1987. (Also in G. Mancini, D. Woods & E. Hollnagel (Eds.), Cognitive Engineering in Dynamic Worlds, London: Academic Press.)
Mulsant, B. & Servan-Schreiber, D. Knowledge Engineering: A daily activity on a hospital ward (Tech. Rep. STAN-CS-82-998). Stanford University, 1983.
Naveh-Benjamin, M. & Pachella, R. G. The effect of complexity on interpreting 'Chernoff' faces. Human Factors, 1982, 24, 11-18.
Newell, A. The knowledge level. Artificial Intelligence, 1982, 18, 87-127.
Newell, A. & Card, S. K. The prospects for psychological science in human-computer interaction. Human-Computer Interaction, 1985, 1, 209-242.
Nickles, T. Scientific discovery and the future of philosophy of science. In T. Nickles (Ed.), Scientific Discovery, Logic, and Rationality. Dordrecht, Holland: D. Reidel, 1980.
Noble, D. F. Forces of Production: A Social History of Industrial Automation. New York: Alfred A. Knopf, 1984.
Norman, D. A. Categorization of action slips. Psychological Review, 1981, 88, 1-15. (a)
Norman, D. A. Steps towards a cognitive engineering (Tech. Rep.). University of California at San Diego, Program in Cognitive Science, 1981. (b)
Norman, D. A. Design rules based on analyses of human error. Communications of the ACM, 1983, 26, 254-258.
Norman, D. A. & Draper, S. W. (Eds.). User Centered System Design: New Perspectives on Human-Computer Interaction. Hillsdale, NJ: Lawrence Erlbaum Associates, 1986.
Norman, D. A. & Bobrow, D. G. Descriptions: An intermediate stage in memory retrieval. Cognitive Psychology, 1979, 11, 107-123.
Norman, D. A. & Shallice, T. Attention to Action: Willed and Automatic Control of Behavior (Tech. Rep. 8006). University of California, San Diego, 1980.
Norman, K. L., Weldon, L. & Shneiderman, B. Cognitive layouts of windows and multiple screens for user interfaces. International Journal of Man-Machine Studies, 1986, 25, 229-248.
Perkins, D. & Martin, F. Fragile Knowledge and Neglected Strategies in Novice Programmers. In E. Soloway & S. Iyengar (Eds.), Empirical Studies of Programmers. Norwood, NJ: Ablex, 1986.
Perrow, C. The organizational context of human factors engineering. Administrative Science Quarterly, 1983, 28, 521-541.
Perrow, C. Normal Accidents. New York: Basic Books, 1984.
Pew, R. W. et al. Cockpit Automation Technology (Tech. Rep. 6133). BBN Laboratories Incorporated, 1986.
Pew, R. W., Miller, D. C. & Feehrer, C. E. Evaluation of Proposed Control Room Improvements Through Analysis of Critical Operator Decisions. Palo Alto, CA: Electric Power Research Institute, 1981. (NP-1982)
Pollack, M. E., Hirschberg, J. & Webber, B. User participation in the reasoning processes of expert systems. In Proceedings of the National Conference on Artificial Intelligence, 1982.
Pope, R. H. Power station control room and desk design: alarm system and experience in the use of CRT displays. In International Symposium on Nuclear Power Plant Control and Instrumentation. Cannes, France, 1978.
Pople, H., Jr. Evolution of an Expert System: From Internist to Caduceus. In I. De Lotto & M. Stefanelli (Eds.), Artificial Intelligence in Medicine. Elsevier Science Publishers B. V. (North-Holland), 1985.
Rasmussen, J. Information Processing and Human-Machine Interaction: An Approach to Cognitive Engineering. New York: North-Holland, 1986a.
Rasmussen, J. A cognitive engineering approach to the modelling of decision making (Tech. Rep. Riso-M-2589). Riso National Laboratory, 1986b.
Rasmussen, J. & Lind, M. Coping with complexity. In H. G. Stassen (Ed.), First European Annual Conference on Human Decision Making and Manual Control. New York: Plenum Press, 1981.
Rasmussen, J., Duncan, K. & Leplat, J. (Eds.). New Technology and Human Error. Chichester: John Wiley & Sons, 1987.
Reason, J. & Embrey, D. E. Human Factors Principles Relevant to the Modelling of Human Errors in Abnormal Conditions (Tech. Rep. EC1-1164-B7221-84-UK). European Atomic Energy Community, 1985.
Reason, J. Recurrent errors in process environments: Some implications for the design of intelligent decision support systems. In Hollnagel, E., Mancini, G. & Woods, D. D. (Eds.), Intelligent Decision Support. Germany: Springer-Verlag, 1986.
Reason, J. Cognitive under-specification: Its varieties and consequences. In B. Baars (Ed.), The Psychology of Error: A Window on the Mind. New York: Plenum, in press.
Reason, J. Cognitive Aids in Process Environments: Prostheses or Tools? International Journal of Man-Machine Studies, 27, 1987. (Also in G. Mancini, D. Woods & E. Hollnagel (Eds.), Cognitive Engineering in Dynamic Worlds, London: Academic Press.)
Reason, J. & Mycielska, K. Absent Minded? The Psychology of Mental Lapses and Everyday Errors. Englewood Cliffs, NJ: Prentice-Hall, 1982.
Resnick, D. E., Mitchell, C. M. & Govindaraj, T. An embedded computer-based training system for rhino robot operators. IEEE Control Systems Magazine, May 1987, pp. 3-8.
Rizzo, A., Bagnara, S. & Visciola, M. Human Error Detection Processes. International Journal of Man-Machine Studies, 27, 1987. (Also in G. Mancini, D. Woods & E. Hollnagel (Eds.), Cognitive Engineering in Dynamic Worlds, London: Academic Press.)
Robertson, G., McCracken, D. & Newell, A. The ZOG approach to man-machine communication. International Journal of Man-Machine Studies, 1981, 14, 461-488.
Roscoe, S. N. & Eisele, J. E. Integrated computer-generated cockpit displays. In T. B. Sheridan & G. Johannsen (Eds.), Monitoring Behavior and Supervisory Control. New York: Plenum Press, 1976.
Roth, E. M., Butterworth, G., III & Loftus, M. J. The Problem of Explanation: Placing Computer Generated Answers in Context. In Proceedings of the Human Factors Society, 29th Annual Meeting, Vol. II, 1985.
Roth, E. M., Elias, G. S., Mauldin, M. L. & Ramage, W. W. Toward Joint Person-Machine Cognitive Systems: A Prototype Expert System for Electronics Troubleshooting. In Proceedings of the Human Factors Society, 29th Annual Meeting, Vol. I, 1985.
Roth, E., Bennett, K. & Woods, D. D. Human Interaction with an 'Intelligent' Machine. International Journal of Man-Machine Studies, 27, 1987. (Also in G. Mancini, D. Woods & E. Hollnagel (Eds.), Cognitive Engineering in Dynamic Worlds, London: Academic Press.)
Roth, E. & Woods, D. D. Aiding human performance: I. Cognitive analysis. Le Travail Humain, 1988, 51(1), 39-64.
Schneider, W. & Fisk, A. D. Attention theory and mechanisms for skilled performance. In Magill, R. A. (Ed.), Memory and Control of Motor Behavior. Amsterdam: North Holland, 1983.
Schum, D. A. Current Developments in Research on Cascaded Inference. In T. S. Wallsten (Ed.), Cognitive Processes in Decision and Choice Behavior. Hillsdale, NJ: Lawrence Erlbaum Associates, 1980.
Selfridge, O. G., Rissland, E. L. & Arbib, M. A. Adaptive Control of Ill-Defined Systems. New York: Plenum Press, 1984.
Sheridan, T. Human and Computer Roles in Supervisory Control and Telerobotics. In L. P. Goodstein, H. B. Andersen & S. E. Olsen (Eds.), Mental Models, Tasks and Errors. London: Taylor & Francis, 1980.
Sheridan, T. & Hennessy, R. (Eds.). Research and Modeling of Supervisory Control Behavior. Washington, D.C.: National Academy Press, 1984.
Shlaim, A. Failures in national intelligence estimates: The case of the Yom Kippur War. World Politics, 1976, 28, 348-380.
Shneiderman, B. The future of interactive systems and the emergence of direct manipulation. In Y. Vassiliou (Ed.), Human Factors and Interactive Computer Systems. Norwood, NJ: Ablex, 1984.
Shneiderman, B. Seven plus or minus two central issues in human-computer interaction. In Mantei, M. & Orbeton, P. (Eds.), Human Factors in Computing Systems: CHI'86 Conference Proceedings. Boston: ACM/SIGCHI, 1986.
Siegel, T., Goldwyn, R. & Friedman, H. Pattern and process of the evolution of human septic shock. Surgery, 1971, 70, 232-245.
Sleeman, D. & Brown, J. S. (Eds.). Intelligent Tutoring Systems. NY: Academic Press, 1982.
Smith, E. E. & Goodman, L. Understanding written instructions: The role of an explanatory schema. Cognition and Instruction, 1984, 1, 359-396.
Sorkin, R. D. & Woods, D. D. Systems with human monitors: A signal detection analysis. Human-Computer Interaction, 1985, 1, 49-75.
Sorkin, R. D., Kantowitz, B. & Kantowitz, S. Likelihood alarm displays. Human Factors, in press.
Stefik, M., Foster, G., Bobrow, D., Kahn, K., Lanning, S. & Suchman, L. Beyond the chalkboard: Using computers to support collaboration and problem solving in meetings (Tech. Rep.). Intelligent Systems Laboratory, Xerox Palo Alto Research Center, September 1985.
Sternberg, R. J. & Caruso, D. R. Practical modes of knowing. In Eisner, E. & Rehage, K. J. (Eds.), Learning and Teaching: The Ways of Knowing. Chicago: University of Chicago Press, 1985.
Suchman, L. A. Plans and situated actions: The problem of human-machine communication. Cambridge, England: Cambridge University Press, 1987.
Toulmin, S. Human Understanding. Princeton, NJ: Princeton University Press, 1972.
U. S. Nuclear Regulatory Commission. Loss of Main and Auxiliary Feedwater at the Davis-Besse Plant on June 9, 1985. Springfield, VA: National Technical Information Service, 1985. (NUREG-1154)
van Creveld, M. Command in War. Cambridge, MA: Harvard University Press, 1985.
von Winterfeldt, D. & Edwards, W. Decision Analysis and Behavioral Research. New York: Cambridge University Press, 1986.
Wagenaar, W. A. Does the Expert Know? The Reliability of Predictions and Confidence Ratings of Experts. In Hollnagel, E., Mancini, G. & Woods, D. D. (Eds.), Intelligent Decision Support in Process Environments. New York: Springer-Verlag, 1986.
Wagenaar, W. & Groeneweg, J. Accidents at Sea: Multiple Causes and Impossible Consequences. International Journal of Man-Machine Studies, 27, 1987. (Also in G. Mancini, D. Woods & E. Hollnagel (Eds.), Cognitive Engineering in Dynamic Worlds, London: Academic Press.)
Walker, E. & Stevens, A. Human and Machine Knowledge in Intelligent Systems. In Hollnagel, E., Mancini, G. & Woods, D. D. (Eds.), Intelligent Decision Support in Process Environments. New York: Springer-Verlag, 1986.
White, B. Y. & Frederiksen, J. R. Intelligent tutoring systems based upon qualitative model evolutions. In AAAI-86: Proceedings of the Fifth National Conference on Artificial Intelligence. American Association for Artificial Intelligence, 1986.
Wickens, C. D. Engineering Psychology and Human Performance. Columbus, OH: Charles E. Merrill, 1984.
Wickens, C. The object display: Principles and a review of experimental findings (Tech. Rep. CPL 86-6). Army Research Institute, 1986.
Wickens, C. D., Boles, D., Tsang, P. & Carswell, M. The limits of multiple resource theory in display formatting: Effects of task integration. Springfield, VA: National Technical Information Service, 1984. (AD-P003 321)
Wiecha, C. & Henrion, M. Linking multiple program views using a visual cache. In Proceedings of Interact '87. Stockholm, Sweden, 1987.
Wiecha, C. & Henrion, M. A graphical tool for structuring and understanding quantitative decision models. In Proceedings of the Workshop on Visual Languages. IEEE Computer Society, 1987.
Wiener, E. Beyond the sterile cockpit. Human Factors, 1985, 27, 75-90.
Williams, M. D. What makes RABBIT run? International Journal of Man-Machine Studies, 1984, 21, 333-352.
Williams, M. D., Hollan, J. D. & Stevens, A. L. Human reasoning about a simple physical system: A first pass. In D. Gentner & A. L. Stevens (Eds.), Mental Models. Hillsdale, NJ: Lawrence Erlbaum Associates, 1983.
Woods, D. D. Operator decision making behavior during the steam generator tube rupture at the Ginna nuclear power station. In W. Brown & R. Wyrick (Eds.), Analysis of Steam Generator Tube Rupture Events at Oconee and Ginna. Institute of Nuclear Power Operations, 1982. (Also Westinghouse Research and Development Center Report 82-1C57-CONRM-R2.)
Woods, D. D. Some results on operator performance in emergency events. In D. Whitfield (Ed.), Ergonomic Problems in Process Operations. Inst. Chem. Eng. Symp. Ser. 90, 1984.
Woods, D. D. Visual Momentum: A concept to improve the cognitive coupling of person and computer. International Journal of Man-Machine Studies, 1984, 21, 229-244.
Woods, D. D. Paradigms for Intelligent Decision Support. In Hollnagel, E., Mancini, G. & Woods, D. D. (Eds.), Intelligent Decision Support in Process Environments. New York: Springer-Verlag, 1986.
Woods, D. D. Coping with Complexity: The Psychology of Human Behavior in Complex Systems. In L. P. Goodstein, H. B. Andersen & S. E. Olsen (Eds.), Mental Models, Tasks and Errors. London: Taylor & Francis, 1988.
Woods, D. D., Elm, W. C. & Easter, J. R. The Disturbance Board Concept for Intelligent Support of Fault Management Tasks. In Proceedings of the International Topical Meeting on Advances in Human Factors in Nuclear Power, 1986.
Woods, D. D. & Hollnagel, E. Mapping Cognitive Demands in Complex Problem Solving Worlds. International Journal of Man-Machine Studies, 1987, 26. (Special Issue on Knowledge Acquisition for Knowledge Based Systems.)
Woods, D. D. & Roth, E. Models of Cognitive Behavior in Nuclear Power Plant Personnel. Washington, D.C.: U. S. Nuclear Regulatory Commission, in preparation. (NUREG-CR-4532)
Woods, D. D. & Roth, E. Aiding human performance: II. From cognitive analysis to support systems. Le Travail Humain, 1988, 51(2), 139-171.
Woods, D. D., Roth, E. & Pople, H. Cognitive Environment Simulation: An Artificial Intelligence System for Human Performance Assessment. Washington, D.C.: U. S. Nuclear Regulatory Commission, 1987. (NUREG-CR-4862)
Woods, D. D., Roth, E. M. & Bennett, K. B. Explorations in Joint Human-Machine Cognitive Systems. In W. Zachary, S. Robertson & J. Black (Eds.), Cognition, Computing and Cooperation. Norwood, NJ: Ablex Publishing, in press.
Woods, D. D., Wise, J. A. & Hanes, L. F. An evaluation of nuclear power plant safety parameter display systems. In Proceedings of the Human Factors Society, 25th Annual Meeting, 1981.
Young, R. M. The machine inside the machine: Users' models of pocket calculators. International Journal of Man-Machine Studies, 1981, 15, 51-85.
Yu, V. et al. Antimicrobial selection by a computer. Journal of the American Medical Association, 1979, 242, 1279-1282.
Zachary, W. A cognitively based functional taxonomy of decision support techniques. Human-Computer Interaction, 1986, 2, 25-63.
Zinser, K. & Henneman, R. L. Development and evaluation of a model of human performance in a large-scale system. IEEE Transactions on Systems, Man, and Cybernetics, 1988, in press.
