Human-Computer Interaction
Publication details, including instructions for authors and subscription information:
http://www.informaworld.com/smpp/title~content=t775653648
To cite this article: Seifert, Colleen M., and Hutchins, Edwin L. (1992). 'Error as Opportunity: Learning in a Cooperative Task', Human-Computer Interaction, 7:4, 409-435.
To link to this Article: DOI: 10.1207/s15327051hci0704_3
URL: http://dx.doi.org/10.1207/s15327051hci0704_3
HUMAN-COMPUTER INTERACTION, 1992, Volume 7, pp. 409-435
Copyright © 1992, Lawrence Erlbaum Associates, Inc.
Error as Opportunity: Learning in a Cooperative Task

Colleen M. Seifert
University of Michigan
Edwin L. Hutchins
University of California, San Diego
ABSTRACT
CONTENTS
1. INTRODUCTION
2. NAVIGATION DOMAIN
2.1. Task Description
2.2. Physical Setting
2.3. Computational Tasks
2.4. Training and Instruction
3. DATA COLLECTION
4. SITUATED LEARNING
5. LEARNING FROM ERROR
Downloaded By: [University of Michigan] At: 20:09 19 February 2009
1. INTRODUCTION
Learning is a critical issue for systems that support cooperative work. Such
a system must produce the intended output and at the same time reproduce
itself. In particular, change will occur among the human participants in a
distributed system over time, as relatively expert personnel are replaced by
more novice participants. Another constraint on cooperative systems is that
they may require training within the context of the situated activity
(Suchman, 1987). Some of the skills may be learned through instruction;
however, the interactions characteristic of cooperative work can generally
only be learned within the functioning distributed system. Because simulating
an interactive work setting may not be possible or cost effective, there will
always be new participants who are experiencing the demands of the
distributed task setting as novices.
Thus, in cooperative tasks, learning on the job is inevitable, and where
there is learning, there is potential for error. Most studies of error focus on its
reduction or elimination; however, errors will occur in the distributed task
setting due to the need for on-the-job training. Can learning (with its
consequent error) be combined successfully with overall accurate perfor-
mance in cooperative systems of work?
ERROR AS OPPORTUNITY 411
In this article, we present a case study of an existing situated task, the team
navigation of a U.S. Navy ship. This distributed task has developed under
several constraints: specifically, maintaining a working system in a state of
readiness, operating in adverse circumstances, and training replacements for
team members. These characteristics are shared by many real-world settings
of cooperative work, where task performance must carry on despite changes
in personnel, function, or technology (Rogoff & Lave, 1984).
In examining this successful task system, we focus on the role of learning
and error. How does the cooperative system foster learning while ensuring
that the overall results of the system are accurate? We begin by providing
information about the navigation domain and the specific tasks it involves,
along with the training procedures currently employed. Next, we describe the
observations and protocols collected. We then describe results pertaining to
the role of errors in training and system properties that enable on-the-job
learning. Finally, we propose design principles for robust cooperative systems
and support technology.
2. NAVIGATION DOMAIN
At all times when a naval vessel is underway, a graphical plot of its past and
projected movements is maintained. This information, gathered and pro-
cessed by the navigation team, supports the decisions of the officer who is
responsible for the ship's movements. Day and night, whenever a ship is
neither tied to a pier nor at anchor, navigation computations are performed.
The technologies currently available include instruments for celestial, radar,
satellite, and visual-bearing observation. Most of the time while at sea, the
work of navigation is performed by a single crew member, using a variety of
these methods; however, when a ship leaves or enters port, or operates in any
other environment where maneuverability is potentially restricted, visual-
bearing observation is performed by a team of crew members working
together in a distributed manner. Given extremely tight time constraints,
limited channel size for safe conduct, the large lag in ship response to steering,
and the high cost of accidents, the performance of the entire navigation team
is critical to the safe passage of the ship. In the next sections, we describe the
task requirements and performance characteristics of team navigation.
The team navigation task is cooperative in the sense that the result of the
system performance cannot be attributed to any individual participant. The
navigation activity requires interaction among human and technological components, such as task-specific tools and representational devices.
¹ The vessel observed was a combat ship on which female crew members cannot serve tours of duty.
Figure 1. Map of the locations of navigation team members inside and outside of the pilot house on the bridge. [Figure omitted: the map labels the pilot house, its exits, the phone circuit, and the fathometer station.]
Figure 2. Navigational chart of a portion of San Diego Harbor. The small triangle drawn onto the chart locates the current
position of the ship within a margin of observation error. The track marked to the upper right is the dead reckoning of
projected position at later times.
one can constrain the position of the observer to a line extending from the
landmark in the reciprocal direction of the bearing. Given two observations,
one can constrain the location of the ship uniquely by computing where the
two lines of position intersect. Typically, three lines of position are located
and plotted for each fix (determination of position); that way, a small triangle
representing the ship's likely position appears on the chart (see Figure 2). If
the observations are accurate, the size of the triangle, representing error,
should be small.
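The two-bearing construction can be sketched numerically. This is an illustrative example, not from the article: the coordinate frame (east/north chart units), the function names, and the linear-algebra approach are all assumptions.

```python
import math

def los_direction(bearing_deg):
    """Unit vector (east, north) pointing from the landmark toward the
    observer: the reciprocal of the observed bearing."""
    rec = math.radians(bearing_deg + 180.0)
    return (math.sin(rec), math.cos(rec))

def fix_from_two_bearings(mark1, brg1, mark2, brg2):
    """Intersect two lines of position to constrain the ship's location.
    mark1, mark2: (east, north) chart coordinates of the landmarks.
    brg1, brg2: bearings in degrees, clockwise from north, as observed
    from the ship. Returns the (east, north) crossing point."""
    (x1, y1), (x2, y2) = mark1, mark2
    ux1, uy1 = los_direction(brg1)
    ux2, uy2 = los_direction(brg2)
    # Solve mark1 + t1*u1 = mark2 + t2*u2 (a 2x2 linear system).
    det = -ux1 * uy2 + ux2 * uy1
    if abs(det) < 1e-12:
        raise ValueError("bearings are parallel; no unique fix")
    dx, dy = x2 - x1, y2 - y1
    t1 = (-dx * uy2 + ux2 * dy) / det
    return (x1 + t1 * ux1, y1 + t1 * uy1)
```

With a third bearing, the three pairwise crossings trace the small fix triangle described in the text; its size indicates the observation error.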
The second major computation, dead reckoning, involves computing where
the ship will be at a later time if it proceeds at a particular bearing and speed.
This is computed by drawing a tracking line on the chart from the current
known position and extending it in the direction of travel. Then, speed and
time interval are utilized to project the distance to be covered. This projection
is marked on the chart (see Figure 2) and compared to the position fix
computed at the next interval.
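The dead-reckoning projection is simple arithmetic: distance equals speed times time, advanced along the course line. A minimal sketch (the function name and the east/north convention are assumptions):

```python
import math

def dead_reckon(pos, course_deg, speed_knots, minutes):
    """Advance a known fix along the ship's course to project where the
    ship will be after the given interval.
    pos: (east, north) in nautical miles; course in degrees clockwise
    from north; speed in knots; interval in minutes."""
    distance_nm = speed_knots * minutes / 60.0   # speed x time
    theta = math.radians(course_deg)
    east, north = pos
    return (east + distance_nm * math.sin(theta),
            north + distance_nm * math.cos(theta))
```

For example, a fix at the origin with course 090 at 12 knots over 30 min projects a position 6 nm due east; that projection is what gets compared to the fix computed at the next interval.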
Thus, a continuous cycle of activity occurs in team navigation. After fixing
current position and dead reckoning for future position, preparations are
made to repeat the sequence. Because the time interval between fixes can be
less than 1 min while underway, performance pressures are considerable.
In discussing learning in this setting, we focus on the performance of the
most novice participants, the bearing takers. Taking bearings is the first job
assigned to novice team members and, therefore, is a likely site of on-the-job
learning. The following sequence describes the steps involved in taking
bearings as part of the repeated cycle of navigation activity.
Plotter chooses three landmarks, assigns them for the next fix.
Timer/recorder communicates landmark assignments to the takers.
Timer/recorder assigns time interval before next bearings needed.
Takers coordinate who will locate which landmarks.
Takers locate the landmarks in their field of view.
Timer/recorder issues a "stand-by" warning to the takers.
Timer/recorder issues a "mark" command to take the bearings.
Takers observe the landmark bearings.
Takers coordinate their series of three separate reports.
Takers report the bearings over phone line to timer/recorder.
Timer/recorder reports bearings verbally and records in log book.
Plotter uses the bearings to fix the position.
Plotter uses position, ship's direction, and speed to dead reckon.
Plotter uses future position to select next set of landmarks.
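The cycle above can be compressed into a schematic sketch. This is hypothetical code, not from the article: the classes, the round-robin assignment standing in for the takers' on-the-fly coordination, and the plot_fix callback are all illustrative.

```python
from dataclasses import dataclass

@dataclass
class Taker:
    name: str
    readings: dict  # landmark -> bearing this taker would read off the scale

    def observe(self, landmark):
        return self.readings[landmark]

def fix_cycle(landmarks, takers, plot_fix):
    """One pass through the bearing-taking cycle.
    landmarks: the three marks chosen by the plotter for the next fix.
    takers: the bearing takers on the wings.
    plot_fix: the plotter's routine mapping bearings to a position."""
    # Timer/recorder relays the assignments; the takers split the landmarks
    # (round-robin here, where real takers negotiate on the fly).
    assignments = {lm: takers[i % len(takers)] for i, lm in enumerate(landmarks)}
    # "Stand by ... mark": each taker reads the bearings assigned to him.
    bearings = {lm: taker.observe(lm) for lm, taker in assignments.items()}
    # Timer/recorder reports and logs; the plotter fixes, then dead reckons.
    return plot_fix(bearings)
```

The sketch makes the filter structure visible: every bearing passes through one relay point before reaching the plotter, a property the article returns to when discussing error diagnosis.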
The bearing takers (located outside on each wing of the ship) listen for the
assignment of particular landmarks to find (after studying maps of landmarks
in target harbors in advance); upon hearing a timing signal from the timer/recorder, they observe and report each assigned landmark's bearing.
Figure 3. Landmarks are sighted for the purpose of reading bearings for lines of
position. The figure shows the display overlaid on a landmark with the scale
indicating the bearing to be read.
to by the navigation officer, who noted that the plotter had become the most
expert navigator solely through on-the-job training.
Although all team members learn all the jobs, the roles assigned for the
high-pressure team navigation task include a match of individual ability to
role. For example, the timer/recorder position requires simultaneously writing, talking, and listening, with selective attention to conversation in the pilot
house and over the phone circuit. Even when novices reach a point of
competency in individual task performance, they may yet fail as a team
member under the demands of the cooperative task.
3. DATA COLLECTION
The main data collection took place aboard a U.S. naval vessel by Colleen
M. Seifert. This involved a total of 48 hr of observation. Most of this time
involved observing solo navigation at sea with a novice apprentice. Under the
team navigation configuration, two separate entrances and exits from the
harbor were observed, with on the order of 40 fix cycles each. Observations
and protocols were collected, along with interviews of each team member
within the cooperative system. The resulting data consisted of structured field
notes detailing example sequences, with special attention to error processes. A
single team of six crew members was observed, with all participants appearing
in the same roles for team navigation (their standard positions). This crew had
been working together for 6 months, and individual tenures ranged from 1
year as bearing taker to a total of 18 years in all roles by the senior navigator.
To provide some comparison across different teams, these observations were
supplemented by videotapes and transcriptions from two studies by Edwin L.
Hutchins aboard other ships (Hutchins, 1989, in press).
4. SITUATED LEARNING
One function that the distributed system appears to serve is the detection,
diagnosis, and correction of errors during task performance. This robustness
feature is quite adaptive for the system, given that it must function with
novice participants who may be more prone to error. In the next sections, we
examine evidence of errors and discuss how the design of the distributed task
system offers the potential to learn from error while producing accurate
results in the final stages of team performance.
The number of errors produced, and caught, is much larger than might be
expected given the low overall rate of navigational accidents. Although
complete counts are not available for the main dataset, coverage of error types
was recorded. For example, in one exit from harbor, every team member
committed at least one error that was detected and corrected by another team
member. Two of these four error types, finding landmarks and reading
compasses, are exactly the type of individual task performance errors one
would expect from novices in an on-the-job training context. A third type, monitoring errors, comprises individual errors that humans are particularly prone to, given limitations of attention and vigilance. Finally, coordination errors
are a direct result of the need to cooperate with other team members; for
example, bearing takers make decisions on the fly about turn taking in
reporting, based on not only their own task information but also on
knowledge of the other's current task constraints (who has the beam bearing,
who has two locations to shoot).
Although not all observed recoveries from error were instructional in intent
or consequence, some error recovery strategies involved the detection of
error, diagnosis of the source of the error, and perhaps explicit demonstration
of the correct solution in order to benefit the learner. Thus, errors may serve
to focus attention on whichever parts of the task are misunderstood by the
novice. In the next sections, we present specific examples of error detection,
diagnosis, and correction, and we also explore the distributed system
properties that they illuminate.
The most novice members of the navigation crew are first assigned to the
task of taking bearings. Thus, our analysis focused on errors made by the
bearing takers. The results showed that almost every step of the bearing-
taking task included some observed error. The observations provide evidence
of four main error types arising from subtasks in taking bearings. Examples
of each of these error types follow (BT = bearing taker, T/R = timer/recorder, P = plotter).
Within the distributed system, errors can be very useful in guiding the
instruction given while learning on the job. Rather than trying to inform
about all possible knowledge needed, novices can be encouraged to attempt
tasks and be given instruction only on the aspects of the task that result in
error. In this way, errors can serve to guide instruction in the learning
process, indicating in very specific ways exactly what information or skill is
missing in the current knowledge state of the novice. Consider this example
of a senior navigator interacting with a novice plotter.
A novice navigator was asked, "How far west shall we go to get back to
harbor at 1600?" The ship was directly west of the harbor entrance (time
= 1200). He paused, then measured the distance the ship could go in an
hour and marked it with a divider; he then marked four hour-lengths
from the harbor entrance on the chart. This position lay far west of the
ship's current position. A more experienced crew member was monitoring and said, "If he wants to be back by 1600, he's not going to go
west from now until he hits this point!"
One solution procedure for the problem would be to measure the current
distance to the harbor, compute its travel time, and then subtract that time
from the total 4 hr. Any time remaining could then be divided by two, such
that equal remaining time is spent continuing on and returning to the current
position. The novice appears to begin part of this solution procedure (marking
the distance traveled with a length-preserving tool, a divider). However,
instead of determining the distance to shore, the novice computed the ability
of the ship to travel a particular distance within the time interval. It appears
the novice did not know how to adapt the more familiar procedure for finding
travel distances to the course recommendation question.
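The solution procedure just described can be written out as arithmetic. A hedged sketch (the function name and units are assumptions; the halving of spare time is the procedure stated in the text):

```python
def westward_turnaround(distance_to_harbor_nm, speed_knots, hours_available):
    """How much farther west can the ship go and still reach the harbor
    entrance on time? Subtract the transit time owed for the current
    distance, then split the spare time equally between continuing on
    and returning to this position."""
    transit_hours = distance_to_harbor_nm / speed_knots
    spare_hours = hours_available - transit_hours
    if spare_hours < 0:
        raise ValueError("cannot reach the harbor in the time available")
    return speed_knots * spare_hours / 2.0
```

For instance, a ship 24 nm from the entrance at 12 knots with 4 hr available owes 2 hr for the transit, leaving 2 hr of spare time and thus 12 nm of additional westward travel. The novice's answer, marking four full hour-lengths from the harbor entrance, ignored the return leg entirely.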
Responding to error appropriately depends on understanding how the error
may have been generated, perhaps through modeling the reasoning processes
of the person who committed the error. Through such understanding, the
expert can gear the presentation of a solution in a way that is appropriate to
the novice's current knowledge state. The instructor may also benefit, as
diagnosing the cause of an error may produce a new insight about the task
processes. This is true whether the error was committed by the learner or
observed in someone else. Novices in particular benefit from the opportunity
to observe the diagnosis of others' errors and the explication of the error
maker's missing knowledge. Every witnessed instance of error provides an
opportunity for learning or the confirmation of knowledge, potentially saving
the system from future errors.
This is especially important with concepts that must be inferred from
examples rather than explicitly stated. Because relevant information for a
decision may not be directly observable or explicable by an expert, novices
may have to infer information from experience in a variety of situations,
guided by error correction on specific failures. Where there is a solution space
to be explored, feedback can guide the discovery of the concepts underlying
solutions. Consequently, a novice who makes few errors may actually learn
less about the task through training than one who, as a result of making
errors, has explored more information within the domain. The implicit nature
of domain knowledge in the navigation task requires learning directed by
error. Novices are allowed to do their best and are provided with correction
and instruction on the particular errors they make. This provides a more
efficient system for training, because costly explicit instruction is minimized.
Interestingly, the content of feedback while under team configuration was
extremely sparse. The observed feedback was frequently stated in very
general terms, such as "tighten up out there." Such limited feedback is not as
helpful as a more complete demonstration or instruction; for example, a
bearing taker will not know whether his reading is off because he selected the
wrong landmark, the wrong portion of the landmark, read the scale incor-
rectly, or reported it incorrectly. At least, however, this general feedback
serves to focus novices' attention to the errors they make, presumably
motivating attempts toward improvement. Such limited feedback may be the
only response the error detector can provide during the ongoing task. Because
errors are corrected during performance of the on-line task, and the detector
is also involved in his own subtask, there may be insufficient time, processing
resources, or communication channels for the composition and delivery of
appropriate instruction. Yet novices appeared to benefit even from this
limited feedback, without the added cost of unnecessary duplication of team
members in the role of "trainers."
Additional learning opportunities are provided in the distributed system
through the errors that others make. When an error is detected and corrected
Except for one subtask (evaluate, which is performed only in the final step of
the cycle), each cognitive subtask is performed in more than one task and by
more than one team member.
This task analysis proved useful for detecting commonalities across tasks and
identifying potential alternate configurations for task distribution. Next, the
specific distribution of these cognitive subtasks within the observed navigation team was determined. Figure 4 presents each of the identified subtasks, grouped by team member, within the distributed navigation task.
The task analysis shows that the needed tools (e.g., compasses or charts) are
adequately arranged so that individuals have control of the tool as needed to
perform tasks. In addition, the analysis corresponds well to divisions within
team members, such that similar repeated tasks appear grouped within
individuals rather than distributed across them (the plotter records all three
lines of position on the chart). However, the interactions required by this task
design (shown with arrows) indicate potential difficulty based on the unor-
dered sequence of activity on shared channels (particularly the phone circuit).
Although the tasks themselves have a standardized order (illustrated vertically
on the page), the team interactions are left unscheduled and may be
considered chaotic in comparison. This open-ended communication sequence
requires additional work by the crew members to coordinate their actions.
Figure 4. Distribution of cognitive subtasks across the navigation team. [Figure omitted: the diagram groups the subtasks (parse, align, coordinate, compute, monitor, evaluate, report) under Bearing Taker 1, Bearing Taker 2, the timer/recorder, the plotter, and the deck log recorder, with arrows marking interactions and the log book and deck log as shared records.]
one has access to an error, one also often has knowledge of the processes that
may have generated it. This is because one has already performed, at an earlier stage in learning, all those subtasks that one now observes in others'
performances. The overlap of access and knowledge that results from the
alignment of career path and data path is not a necessary feature of this
system, but it does give rise to especially favorable conditions for the detection
of error.
The cooperative task environment results in many errors being detected by
team members who are simply monitoring the actions of those around them.
Because each actor is not constantly occupied, there is ample processing time
available to observe the activities of others on the team. Indeed, the team
distribution is set up both physically (spatially and through the phone circuit)
and conceptually so that activities of team members are often conducted
where they can be observed by others. This incidental monitoring is an
important factor in error detection. Consider this example (DL = Deck Log
recorder):
P: One, two, three. Same two. Ballast Point, Bravo. And the next
one . . .
T/R: (On phone to takers) Time 40 should be, Ballast Point. . . .
P: Front Range, Bravo.
T/R: (On phone to takers) And Bravo. . . .
DL: He may not be able to see Front Range from here.
T/R: Yeah.
In this example, the deck log recorder, who is not involved in taking
bearings (he answers questions about past course information and changes),
monitors and notices that one landmark (Front Range) will be difficult for the
taker to see from his current location. This type of monitoring was frequently
observed, especially during periods of low work demand in tasks within the
observation area. Because detection requires attention, however, one of the
costs of increasing current workloads within the team members may be the
reduction of resources available for monitoring and correcting errors.
An additional monitoring source that can assist with errors is the
fathometer operator. This operator is in the position to overhear any
conversations on the phone circuit. Consider this example, where the
relatively inexperienced timer/recorder has selected the last set of landmarks
to be observed (FO = Fathometer Operator):
In this example, the fathometer operator detected the poor pattern of lines
of position from the bearings reported by the takers, and, with a question, he
communicated a problem with the three sightings (with three sightings close
to horizontal, the triangle formed by the intersecting lines is harder to chart
and less accurate than three more varied bearings). The fathometer operator's
interaction with the team is normally limited, but in some situations this extra
monitor is in a position to facilitate error detection in the sensing loop of the
navigation team. Thus, shared knowledge distributed over the team, although
irrelevant to one's own task, may assist in monitoring for error as participants
learn by doing.
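The fathometer operator's objection has a simple geometric basis that can be quantified: the crossing of two lines of position is well conditioned near 90 degrees and degrades roughly as one over the sine of the crossing angle. An illustrative sketch (this rule of thumb is a standard navigation approximation, not taken from the article):

```python
import math

def fix_sensitivity(brg1, brg2):
    """Approximate multiplier relating bearing error to position error
    for the intersection of two lines of position: small crossing angles
    (nearly parallel bearings) inflate any observation error."""
    crossing = math.radians(abs(brg1 - brg2) % 180)
    return 1.0 / max(math.sin(crossing), 1e-9)
```

Bearings 90 degrees apart give a multiplier of 1.0, while bearings only 10 degrees apart inflate the same observation error nearly sixfold, which is why three well-spread sightings chart a tighter, more trustworthy triangle than three nearly parallel ones.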
The protocol suggests that this error was generated in the verbal report of information by the timer/recorder, not in the reading process by the bearing taker. Given the distributed task structure, however, there is no reliable way for the plotter to distinguish errors that may have been generated by the takers from those generated by the timer/recorder. Consequently, the diagnosis of
the error source may frequently be impossible or inaccurate. Figure 5 depicts
a schematic flow of information from the takers through the plotter and
evaluator.
As shown in the diagram, the timer/recorder serves as a filter between the
information-gathering tasks (the bearing takers) and the information compu-
tation tasks (the plotter). This filter point makes the diagnosis of responsibility
for error within the information-gathering loop very difficult from outside the
loop. Most often, errors generated in the information-gathering loop are
simply assumed (often incorrectly) by the plotter to be the consequence of
error by the bearing takers.
This property of the system facilitates observing errors by others; however,
an unlimited horizon of observation might prove too distracting as individuals
perform their own tasks. The particular arrangement within the navigation
system may be the result of the technologically limited means of communi-
cation between the takers and the rest of the team. Thus, one improvement to
the system might include some recording technology that would supplement
the timer/recorder as the information filter in the bearing-collection task.
Some improvement in the current system may facilitate error correction
across team members by enlarging the horizon of observation.
The social and motivational aspects of cooperative tasks may greatly affect
performance. In this setting, the group identity appeared to provide motiva-
tion for individuals to participate in minimizing error. Each member of the
team is taught to feel responsible not only for his own job but also for all parts
of the process to which he can contribute. Likewise, when an error is made,
the entire team accepts responsibility. For example, one bearing taker said of
another that, "I have to watch out for him, 'cause he goofs it up a lot."
Similarly, the senior navigator assessed the team's strength in terms of the
whole, rather than based on individual competencies: "There are a few that
will need some help, but every one of them looks out for the others and makes
sure the job gets done." Therefore, the social context of the group task
facilitates attention to error as a feature of the distributed task environment.
Distributed tasks also serve needs for individual group members (Hackman
& Morris, 1975), and membership in the navigation team was important to
the crew members. Social motivation among the team was observed in the
form of competitive comments by team members about other crews; for
example, one navigator remarked that his team was "the best crew aboard this
ship. Ask anyone; they will tell you that the navigation crew never screws up."
This team's reputation for error-free performance testifies to the effectiveness
of the error correction mechanisms within the system. In addition, the
navigation crew tended to spend off-duty hours on the bridge, observing other
team members at work and studying together. These social factors presum-
ably play an important role in motivation, learning, and performance.
minimize the causes of error, make it possible to "undo" errorful actions, and
make it easier to discover and correct errors. These same goals are appropriate
for designers of cooperative work systems; in addition, our results suggest a
goal to include mechanisms that help participants learn and benefit from the
errors that will inevitably occur. The advantage of designing for error is that
the cooperative system can turn occasions of error into opportunities.
A question remains about whether the function of providing opportunities
for learning was an intended or fortuitous feature of this distributed system
design. Even though learning may be a constant goal for any system, training
and education often take a back seat to an emphasis on production and
performance. That is, the main source of pressure on the team is simply to
produce accurate and timely navigational information; their internal needs
for learning and training are secondary. Second, even if adapting the system
to promote learning were the explicit goal, it is much more difficult to design
for learning than for system performance. With performance, feedback on
success and failure is immediate and clearly defined; for learning, however,
it is difficult to measure whether improvement not directly reflected by
performance measures (such as deeper understanding) occurs. Consequently,
the performance feedback provided may serve to guide novices to learn to
perform their task, rather than learning about navigation more generally.
However, the errors and corrections observed appear to provide both
learning opportunities and improved performance by the distributed task
team. At the same time that the system design facilitated on-the-job
instruction, the built-in functions for correcting errors ensured that final
system output was relatively error free. This coupling of learning through
error with a robust network for catching errors before final output is the
central strength of this task design. The cooperative system in this case study
meets the goals of accurate performance along with tolerance for errors as
novices participate in the task system.
The improvement in error detection, diagnosis, and correction capabilities
within a system may have other consequences for system performance.
Design decisions will involve trading off enhanced capabilities in one area
against less optimal processing elsewhere in the system. For example, the
multiple perspectives of crew members were observed to affect the ability to
detect errors within the system. Separating evaluation from computation in the task division sacrifices the advantage of including that participant in the computational processing. The gain from this separation of
evaluation is that possible bias from having participated in the computation
process is avoided. A multiple-perspective system may avoid the confirmatory
bias prevalent in individual reasoning by supporting multiple solution paths
simultaneously.
Other design tradeoffs include shared knowledge, which improves diagnosis of
error by including expertise in redundant units within the system but
There is evidence from other cooperative tasks that the design criteria
identified here are potentially very important to distributed systems. For
example, aviation research on the cause of jet transport accidents world-wide
found 60 cases that were due to breakdowns of cooperative crew performance
(Cooper, White, & Lauber, 1979). Foushee and Helmreich (1987) and
Hackman and Morris (1975) found that process loss is inherent in any
cooperative task; however, the redundancy of expertise in such groups did
produce a gain in performance. These studies suggest that increased famil-
iarity with interactions during team performance improves the overall task
performance. In addition, they suggest that training should occur in high-
error-rate scenarios (at least in simulation), where error detection, diagnosis,
and recovery skills can be practiced. The commonalities between airline crew
systems of cooperative work and the case study presented here suggest that
other types of distributed systems could benefit by planning and designing for
the features found to be critical to learning and performance in the
navigational task.
In any system where individual performance is mediated by group activity,
learning on the job and consequent errors will occur. Technological innova-
tions suggest possibilities for methods to support learning and correcting
errors. For example, interfaces that support error-prone tasks such as
monitoring and computation could be added into the distributed setting. In
addition, programs that monitor subtask output and detect errors could be
added, which might include a model of the desired relationship between the
task parts. Simple recording devices could improve the ability to detect where
errors are occurring and provide the ability to review performance for
instruction. Finally, computer-supported cooperative systems could be uti-
lized to train and test performance by learners in a simulated task environ-
ment. With a high-quality simulation of the situated task setting, novices
could be provided with relevant experience before having to meet perfor-
mance requirements on the job.
8. CONCLUSION
REFERENCES
HCI Editorial Record. First manuscript received April 15, 1991. Revision
received May 1, 1992. Accepted by Robert Kraut. Final manuscript received June 12,
1992. -Editor