Computers in Human Behavior 15 (1999) 441–462

Training team problem solving skills: an event-based approach☆
R.L. Oser, J.W. Gualtieri, J.A. Cannon-Bowers, E. Salas*
Naval Air Warfare Center Training Systems Division, 12350 Research Parkway, Orlando, FL 32826-3275, USA

* Corresponding author.
☆ The views herein are those of the authors and may not reflect the opinion, policy or views of the Department of Defense.

Abstract
Training problem solving teams presents a unique set of challenges to instructional designers. Given the criticality of teams to the performance of many organizations, it is important to develop training systems to support coordination and problem solving. While recent technological advancements, such as computer-based performance assessment, hold considerable promise for training, the introduction of technology alone does not guarantee that the training will be effective. This article focuses on three important questions that must be addressed when developing coordination and problem solving training: (1) How can technology best be used to provide an environment in which learning can take place? (2) What knowledge, skills, and attitudes need to be learned and trained to facilitate expert problem solving teams? and (3) How can the development of problem solving expertise be fostered through a systematic instructional strategy? A naturalistic decision making paradigm will be used to provide a theoretical framework for describing the problem solving task domain. An event-based approach to training will then be offered as a practical application for training teams to perform in naturalistic environments. Finally, conclusions are drawn regarding the provided framework for team learning. Published by Elsevier Science Ltd.
Keywords: Teams; Team training; Problem solving; Instructional strategies; Scenario design

1. Introduction

The necessity for training problem solving skills has been recognized for some
time (Bettenhausen & Murnighan, 1985; Cannon-Bowers, Salas, & Converse, 1993).
These skills are crucial for both individuals and teams to be able to cope with the dynamic environments of today's world (Baxter & Glaser, 1997; O'Neil, Allred, & Baker, 1997; Orasanu & Connolly, 1995). For example, the trend in modern warfare is toward an increasing reliance on distributed systems. Concepts such as 'network-centric' warfare suggest that future missions will be accomplished by cross-domain teams who are physically dispersed. Given the criticality of these teams' missions, it is
important to develop training systems and programs to support coordination and
problem solving in these environments. To respond to this training need, it is
necessary to answer three questions: (1) To what extent can technology provide an
environment in which learning can take place? (2) What knowledge, skills, and atti-
tudes need to be learned and trained to facilitate expert problem solving teams? and
(3) How can the development of problem solving expertise be fostered through a
systematic instructional strategy?
Prior to a discussion of how and what to train to develop problem solving expertise in teams, it is helpful to first define what is meant by the construct. While decision making may simply be defined as the selection of an alternative from a larger set (Kerr, 1981; Stasser & Davis, 1981), a more appropriate definition would include the processes of problem identification and evaluation as well as choice (Huber, 1980). Problem solving moves beyond even this expanded definition of decision making to include the execution and monitoring of the selected alternative (Cannon-Bowers & Bell, 1997; Hirokawa & Johnston, 1989; Means, Salas, Crandell, & Jacobs, 1995).
The above discussion provides a brief overview of the importance of training problem solving teams. The tenets of naturalistic decision making (NDM) (Orasanu & Connolly, 1995) will be used to develop a theoretical framework for examining what needs to be trained to establish effective problem solving teams. An event-based approach to training (EBAT) (Oser, Cannon-Bowers, Dwyer, & Salas, 1997), which can be described in terms of the naturalistic decision making paradigm, will serve as a practical application for training teams. The following three sections address these issues in greater detail. First, the use of simulation for training will be examined. Next, the naturalistic decision making paradigm will be used to argue for the necessity of simulations to train team problem solving. Third, the utility of an EBAT for training problem solving skills will be explored (Oser, Cannon-Bowers, Dwyer, & Miller, 1997). Finally, conclusions based on the literature will be drawn.

2. Simulations and training

Emerging technological advances have considerable potential for creating virtual training environments (Bell, 1996). Through the use of these technologies (e.g. models, simulation, networking) it is possible to create shared problem spaces that imitate naturalistic environments. These technologies are expected to provide enhanced training capabilities and significantly reduce the requirement for training resources (e.g. human, computer processing). Although these advancements have the potential for enhancing training, technology alone does not ensure that effective learning will result (Salas & Cannon-Bowers, 1997). In fact, few efforts have focused on how to best use these systems to establish effective environments to support learning (Dwyer, Fowlkes, Oser, Salas, & Lane, 1997). In the absence of a sound learning methodology, a training system may not fully achieve its intended objectives.
Effective training environments are systems that facilitate the ability of the participants to develop and maintain the competencies (i.e. knowledge, skills, attitudes) necessary to perform required tasks. Learning environments must enable the employment of systematic, deliberate approaches that address the critical task requirements of the training participants. These approaches need to support: (1) all phases of training development (e.g. planning, preparation, execution, analysis); (2) performance measurement; and (3) feedback.
One training technology that has the potential for establishing effective learning environments for the enhancement of team problem solving expertise is simulation (Cannon-Bowers & Bell, 1997; Means et al., 1995). Simulations are ideally suited for training problem solving through exposure of teams to the types of environments and problems that may be confronted in a naturalistic setting. In addition, the use of computers to develop and present simulations allows for a high level of stimulus control and structured repetition (Oser, Cannon-Bowers, Dwyer, & Miller, 1997). Simulations allow the instructional developer to manipulate and pre-specify the presentation of, and relationship between, critical environmental features. Within a simulated environment, it is also possible for the team to conduct problem solving in a realistic manner. For example, a problem solving team could implement alternatives as well as monitor their execution (Gualtieri, Parker, & Zaccaro, 1995). However, for this training environment to be effective, it requires not only a high fidelity representation of the problem solving domain, but a systematic instructional methodology as well (Oser, Cannon-Bowers, Dwyer, & Salas, 1997).

3. Theoretical framework for training problem solving

While the above discussion focused on the use of simulations as a tool for establishing learning environments, the current section will focus on what competencies (i.e. knowledge, skills, abilities) to train. The characteristics of effective problem solvers delineated by Cannon-Bowers and Bell (1997) (i.e. flexibility, quickness, resilience, adaptability, risk taking, accuracy) provide an end state for training. In essence, these characteristics describe the capabilities that problem solving teams are expected to possess to be effective in real world situations (Cannon-Bowers & Bell, 1997). While these characteristics are useful at a global level, they need to be more explicitly defined to be directly trainable. Cannon-Bowers and Bell (1997) identified six cognitive process and skill areas considered critical for problem solving; four of these (i.e. information processing, situation assessment, reasoning, monitoring) will be examined in detail in the following sections, as they are most applicable to a simulation setting.

3.1. Information processing skills

One of the key features that differentiates experts from novices is their ability to accurately encode and categorize information (Glaser, 1986). This better categorization of information leads to a better organization of that knowledge for storage (Glaser, 1989). Similarly, Druckman and Bjork (1991) noted that experts not only possess more knowledge than novices but also organize it more efficiently. Typically, novices' knowledge organization is incomplete or inadequate for the task domain (Glaser, 1986). An expert's knowledge organization is characterized by complex knowledge structures, process and goal states, and solution templates (Glaser & Chi, 1988).
The presence of these highly organized knowledge structures enables experts to easily encode new information into their internal framework (Glaser, 1986), rapidly retrieve knowledge when required (Druckman & Bjork, 1991), and implement a specific solution to the problem (Klein, 1989). The development of an organized knowledge base that enables decisions to be made quickly by the team is important for teams that must perform in naturalistic decision making environments (Cannon-Bowers & Bell, 1997).
Of particular relevance to performing in naturalistic decision making environments is the use of knowledge templates (Noble, Grosz, & Boehm-Davis, 1987). These templates are initiated in response to the recognition of situational cues by individuals. According to Noble et al. (1987), people recognize cues in a new situation and match them back to previously encountered situations; the assumption is that actions that were successful in previous situations will also be successful in the current situation. This process will be discussed further in the next section (i.e. situation assessment). Noble (1995) contends that these templates contain an objective feature (i.e. a specific goal state to be achieved), action features (i.e. a particular process to achieve that goal), and environmental features (i.e. criteria that are used to bound the problem).
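For illustration, a template of this kind can be thought of as a simple record plus a cue-matching rule. The sketch below is our own minimal rendering of Noble's (1995) three components under assumed field names; it is not Noble's notation or implementation.

    from dataclasses import dataclass

    @dataclass
    class KnowledgeTemplate:
        # Hypothetical encoding of Noble's (1995) template components.
        objective: str      # goal state to be achieved
        actions: list       # process steps for achieving that goal
        environment: dict   # criteria that bound the problem

    def match_template(cues, templates):
        """Pick the stored template whose bounding criteria best match the
        observed situational cues (cf. Noble et al., 1987)."""
        def overlap(t):
            return sum(1 for k, v in t.environment.items() if cues.get(k) == v)
        return max(templates, key=overlap)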
The development of problem templates is particularly important in teams. If team members possess different templates, then there are likely to be coordination problems during task execution (Cannon-Bowers, Salas, & Converse, 1990). However, if an EBAT framework is adopted, then the individuals are able to consider not only their own tasks but also the tasks of the other team members in a systematic way (Oser, Cannon-Bowers, Dwyer, & Salas, 1997). The ability of individual team members to call upon the same template is to some extent dependent on the plan, which was developed prior to task execution. Therefore, planning is a crucial skill for team problem solving that will be discussed in the reasoning skills subsection.

3.2. Situation assessment

The aim of situation assessment is to understand the causes of the cues observed (Rasmussen, 1983). The ability to recognize meaningful patterns within the problem space is another key ability that experts possess (Glaser, 1986). These patterns reduce the expert's workload and facilitate information processing, knowledge encoding, and rapid responses (Means et al., 1995).
In a review of the naturalistic decision making literature, Lipshitz (1995) found that all of the theories that he examined included some element of situation assessment. In addition, several of these theories indicate that experts perform situation assessment more quickly and accurately than novices (Beach & Mitchell, 1987; Klein, 1989; Noble, 1989; Rasmussen, 1983). Lipshitz (1995) proposed that the improved situation assessment capabilities of experts account for much of their ability to solve problems more quickly and with greater accuracy than novices.
Based on this research, Cannon-Bowers and Bell (1997) identified two skills necessary for situation assessment: cue recognition and pattern recognition. Cues are those stimuli within the environment that provide information to the problem solver regarding the current or future state of the system (Noble et al., 1987). Researchers have suggested that experts recognize problem solving cues differently than novices (Cannon-Bowers & Bell, 1997). For example, novices attend to the surface features of a problem, while experts attend to the underlying deep structure (Chi, Feltovitch, & Glaser, 1981).
Other researchers have discovered that experts can detect important features in the environment more readily than novices (e.g. Druckman & Bjork, 1991; Means et al., 1995). Experts seek additional information to validate their problem representation, while novices rely only on the information that is readily available (Cellier, Eyrolle, & Marine, 1997). Therefore, experts tend to take more time to build their problem representation than do novices (Dorner & Schaub, 1994). Finally, Orasanu and Connolly (1995) stated that experts seem to be better able to frame problems, which enables them to better detect the problem's underlying cues and structure. These findings suggest that cue recognition contributes significantly to the skill of situation assessment.
While the identification of individual cues is important, research involving performance in naturalistic decision making environments has suggested that it is not just single isolated cues that are important, but rather patterns of cues (Klein, 1995). Earlier, Wickens (1984) noted that the underlying patterns that are learned through training or experience lead to an internal representation of the environment. This network allows experts to develop causal structures for predicting the future state of the system (Wickens, 1984).
More recently, Cellier et al. (1997) have proposed that because of interactions among the cues, understanding requires the construction of relationship patterns that are sufficiently detailed to construct causal relationships. While patterns may be nothing more than the 'chunking' of individual cues (Chase & Simon, 1973), this ability to aggregate cues greatly increases the speed and accuracy of expert decisions (Means et al., 1995). Druckman and Bjork (1991) also reported that experts were able to recognize patterns of cues more rapidly than novices. Klein (1995) has stated that the ability to separate relevant cue patterns from the background noise is what allows experienced problem solvers to assess complex situations quickly. The ability to perform the initial steps of the problem solving
process is critical in naturalistic environments where time is often limited (Orasanu & Connolly, 1995).
The chief method by which individuals develop these patterns is through practice. Simulation can provide an excellent environment in which to obtain that practice. Simulation can be used to present problems that are designed to highlight specific patterns that are common and useful for expert reasoning (Means et al., 1995). That is, simulations are useful in that they can be utilized to impart goals to the trainee and thus partially constrain the solution space that must be searched (Cellier et al., 1997). Simulations also have the capability to provide many more trials than would occur normally (Means et al., 1995).

3.3. Reasoning skills

In naturalistic environments, temporal constraints and dynamic feedback loops frequently force problem solving teams to make a decision before a full diagnosis can be made (Cellier et al., 1997). This uncertainty requires that team members develop skill sets that enable them to cope with this complex and dynamic environment. Cannon-Bowers and Bell (1997) identified a set of skills that are critical for problem solving in these naturalistic environments. They have characterized this set as "reasoning skills" (i.e. domain specific problem solving skills and planning). Given that naturalistic environments are complex and dynamic (Orasanu & Connolly, 1995), it is reasonable to expect that if teams are to be proficient in this environment, they need to be adaptable and flexible in their effort to generate and evaluate solutions. This requires not general reasoning skills, but rather skills that are specific to the domain (Dorner & Schaub, 1994; Patel & Groen, 1991).
Previous research has demonstrated that experts have more specific knowledge regarding a domain and that their knowledge is better organized than that of novices (Druckman & Bjork, 1991; Glaser & Chi, 1988; Schvaneveldt et al., 1985). However, this knowledge is not simply a repository of information but a framework for rapidly utilizing that knowledge to assess the situational environment and solve problems (Cellier et al., 1997; Klein, 1995; Means et al., 1995). Because of their familiarity with the topic domain, experts are able to apply problem solving strategies that are specific to that domain and take advantage of peculiarities of the problem space (Dorner & Schaub, 1994; Glaser, 1986; Patel & Groen, 1991). Summarizing these and other findings, experts and novices appear to differ significantly in the processes by which they arrive at a solution to a problem (Cellier et al., 1997).
Orasanu and Connolly (1995) have investigated the impact of high stakes and temporal pressures associated with task performance, both of which are characteristic of naturalistic decision making environments. There are several strategies that could be adopted by the team to deal with this pressure and uncertainty. Experts, when faced with a set of alternative assumptions, err on the side of caution and choose the most serious one (Bisseret, 1981). This strategy maximizes safety rather than accuracy and is reasonable given the high cost of failure in naturalistic environments. A second strategy adopted by experts to develop a successful solution is contingency planning. Amalberti and Deblon (1992) found that pilots define their flight path and choose procedures that will mitigate the consequences of potential incidents during the mission. Due to differences in experience, the flight plans of novices and experts differed significantly for the same mission. Similarly, Woods (1988) found that experts utilized mental simulations to project the consequences of a plan and to ensure coordination of action. Klein (1995) stated that experts utilized mental simulations to ensure that an option was viable.
By providing teams an opportunity to practice planning in simulated environments, proficiency can be accelerated by exposing the training participants to a greater quantity and variety of problems than would be available without the simulation. A systematic approach to training requires the presence of specific events within a simulation period that provide opportunities for the problem solving team to demonstrate and receive feedback on their process and performance (Oser, Cannon-Bowers, Dwyer, & Salas, 1997). These events should be pre-planned and reflect situations that could possibly occur in naturalistic decision making environments. This ensures that the inserted training events are transparent to the team and that the team members are immersed in the simulation.

3.4. Monitoring

Monitoring deals with the control of a system and the detection of system faults (Cellier et al., 1997). To effectively monitor a system, the team must be familiar with the normal state of the system as well as potential faulty system states. As was noted earlier, Cannon-Bowers and Bell (1997) emphasized the importance of flexibility and adaptability for successful problem solving. Lipshitz (1995) also addressed this issue in his comparisons of emerging naturalistic decision making theories and classical decision theories. According to Lipshitz (1995), no single set of processes or procedures could accurately represent the decision making process in naturalistic environments. Similarly, Mintzberg, Raisinghani, and Theoret (1976) found that problem solving processes were highly "unstructured" and not amenable to a single procedural framework.
Because of the fluid nature of naturalistic domains, the problem solving team must dynamically assess and regulate their solution strategy (Cannon-Bowers & Bell, 1997). That is, teams must continuously collect information from the system to ensure that the goals specified in the initial decision are still being met. One strategy for guaranteeing that these goals are reached is for the individual team members to attempt to adopt a metacognitive perspective. Metacognition is a skill that contributes to an expert's ability to remain flexible during the execution of a decision (Cohen, Freeman, Wolf, & Militello, 1995).
Several theorists have suggested that experts are better able than novices to monitor their own processes during problem solving (Chi, Bassock, Lewis, Reinmann, & Glaser, 1989; Druckman & Bjork, 1991). Glaser (1986) has suggested that novices need to be trained in self-regulatory and self-management skills during the training of problem solving skills. He goes on to assert that these skills are primarily acquired through experience. Experts are able to monitor their own learning so that
they can manage their experiences (Glaser, 1986). This self-regulation capability fosters improved knowledge acquisition and encoding. Similarly, Orasanu (1990) noted that effective metacognitive skills allow problem solvers to better manage resources because they have a more accurate picture of their own strengths and weaknesses.
Simulation can be an effective means to enhance monitoring performance by presenting trainees with a wide variety of situations that may be encountered in real-world settings. Because differing system states can be demonstrated using simulation, trainees can be presented with conditions that are characteristic of normal and abnormal operations. This can provide trainees with opportunities to practice dynamic assessment of critical environmental cues. Simulation can also facilitate the development of metacognitive (i.e. self-regulatory) behaviors associated with expert performance. The recording and replay capabilities of simulations can be used to demonstrate how an individual managed their problem solving resources. This can provide important information for the development of self-regulatory behavior.

3.4.1. Summary
While the above discussion highlights some of the most important knowledge, skills, and abilities for training in problem solving, it is certainly not exhaustive. A number of other articles provide discussions of additional knowledge, skills, and abilities that could aid in the training of problem solving teams performing in naturalistic decision making environments (Cannon-Bowers & Bell, 1997; Cannon-Bowers, Tannenbaum, Salas, & Volpe, 1995). Table 1 provides a summary of implications for training problem solving teams via simulation for each of the skills discussed in the previous section.

4. Event-based approach to training (EBAT)

To ensure that learning takes place, it is necessary to enact a systematic approach that presents knowledge, demonstrates concepts, initiates practice, and provides feedback (Dwyer, Oser, & Fowlkes, 1995; Oser, Cannon-Bowers, Dwyer, & Miller, 1997; Oser, Cannon-Bowers, Dwyer, & Salas, 1997). Recent work in a number of applied domains has demonstrated that an EBAT can provide such an approach. The objectives of an EBAT are to ensure that: (1) training opportunities are structured using appropriate methods, strategies, and tools; (2) learning objectives, exercise design, critical tasks, performance measurement, and feedback are tightly linked; and (3) training results in improved team performance (Oser, Cannon-Bowers, Dwyer, & Miller, 1997).
The EBAT builds upon traditional approaches to training system design (i.e. Instructional Systems Design, ISD) and personnel selection (i.e. assessment centers). ISD is a formalized set of approaches for developing a training system that is derived from learning theory (Dipboye, 1997; Meister, 1985). The ISD methodology provides procedures for developing training materials based on comprehensive

Table 1
Implications of simulation for training team problem solving skills

Information processing skills (encoding, storage, retrieval)
Implications for training:
- Simulation can provide a shared environment which fosters similar templates
- Similar experiences within a simulation can enable team members to have consistent knowledge organizations that lead to the development of common goals
Sources: Cannon-Bowers & Bell, 1997; Cannon-Bowers et al., 1990; Oser, Cannon-Bowers, Dwyer, & Salas, 1997

Situation awareness (cue recognition, template recognition)
Implications for training:
- Simulation is an excellent environment in which to receive practice
- Simulation can provide many more trials than would be possible in a natural setting
- Simulation can highlight specific patterns
Sources: Cannon-Bowers & Bell, 1997; Klein, 1995; Means et al., 1995; Noble et al., 1987

Problem solving skills (domain specific skills)
Implications for training:
- Simulation can provide a safe setting to practice problem solving in complex dynamic environments
- By providing multiple practice opportunities, simulation can accelerate team member proficiency
- Opportunities for feedback can be established in simulation environments
Sources: Cannon-Bowers & Bell, 1997; Dorner & Schaub, 1994; Means et al., 1995; Oser, Cannon-Bowers, Dwyer, & Miller, 1997

Monitoring (detecting faults, metacognition)
Implications for training:
- Simulation can be used to provide examples of normal system states
- Team members are able to practice self-regulating behaviors within a simulation environment
Sources: Cannon-Bowers & Bell, 1997; Cohen et al., 1995; Orasanu, 1990

analyses of training needs (Goldstein, 1993; Wong & Raulerson, 1974). In a review of ISD, Whitmore (1981) stated that it focuses on: (1) developing effective performance; (2) learning required skills; and (3) identifying what the trainee must do to learn a skill. ISD is primarily behavioral in orientation; that is, ISD focuses on learning skills rather than the development of knowledge (Wong & Raulerson, 1974).
Assessment centers, in contrast, were developed for the purpose of observing and evaluating complex human behaviors, like leadership ability (McKinnon, 1975), primarily for selection. This methodology presents individuals with situational tests designed to provide an opportunity to demonstrate a specific skill. A review by Klimoski and Brickner (1987) found that assessment centers are highly flexible and provide valuable information for evaluating performance in a variety of domains. Due to these and other characteristics (e.g. high predictive validity), assessment centers are used to make selection decisions, determine individual training needs, and facilitate human resource planning (Casio, 1991).
Unfortunately, assessment centers often lack a formal structure on which to base judgments. Gaugler and Thorton (1989) found that raters utilized only a small subset of the information available to them to make the ratings. A second potential problem with assessment centers is that their standardization procedures focus on evaluating individual rather than team behavior. Perhaps the greatest difficulty with using an assessment center approach as a training intervention is that the predominant goal of the approach has been selection rather than training.
The EBAT builds on the success of these two prior methodologies, but still differs from these more traditional approaches in two important ways. First, the EBAT specifically focuses on scenario-based training design in simulated environments, whereas the other approaches to training design were intentionally described as frameworks that can be applied in any training setting. While the generality of these frameworks allows application to a wide range of training environments, the frameworks do not address the unique challenges and opportunities found in simulation-based training environments (Oser, Cannon-Bowers, Dwyer, & Miller, 1997). This enables the EBAT to address the unique aspects of training in an environment where a scenario is the curriculum. This is in contrast to most traditional training settings where a set of instructions (e.g. lessons, lectures, computer-based training) is the curriculum.
Second, the EBAT was originally designed for application in training environments where team competencies are required for effective performance. According to Cannon-Bowers et al. (1995), a team competency is the: (1) requisite knowledge, principles, and concepts underlying effective team performance; (2) repertoire of required skills and behaviors necessary to perform the team task effectively; and (3) appropriate attitudes on the part of team members that foster effective team performance. Cannon-Bowers et al. (1995) have developed a conceptual framework for specifying team competencies and have reviewed the literature to identify team competencies on the basis of the framework. This theoretical work formed an important foundation for the development of the EBAT.
EBAT methods and tools have been researched and tailored to meet the specific needs found in team training environments (e.g. process measurement, collective feedback). In comparison, most other frameworks have been primarily developed and applied to individual training environments. Training programs with components of the EBAT have resulted in psychometrically sound measures and improved problem solving performance across a variety of training participants and environments (e.g. Combat Information Centers: Johnston, Smith-Jentsch, & Cannon-Bowers, 1997; Aircrew Coordination Training: Fowlkes, Dwyer, Oser, & Salas, 1999; multi-service distributed teams: Dwyer et al., 1995).
The following section will briefly describe each of the components of the EBAT: (1) learning objective specification; (2) trigger event development; (3) measures of effectiveness (MOEs) specification; (4) scenario generation; (5) exercise management; (6) MOEs data collection; (7) after action review generation and conduct; and (8) database management and archival. The sketch below previews how these components can be linked in practice.
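To make the linkage among these components concrete, the following sketch models an exercise as a chain of records in which every performance observation points back, through its trigger event, to a learning objective. This is a minimal illustration under assumed names (LearningObjective, TriggerEvent, Observation); it is not a specification drawn from the EBAT literature.

    from dataclasses import dataclass

    @dataclass
    class LearningObjective:
        statement: str                # what is to be acquired during training

    @dataclass
    class TriggerEvent:
        objective: LearningObjective  # the objective this event exercises
        difficulty: int               # relative difficulty (1 = easy)
        minute: int                   # when the event fires in the exercise

    @dataclass
    class Observation:
        event: TriggerEvent           # the event this measure is tied to
        process_notes: str            # how the team worked the problem
        outcome_met: bool             # whether the intended outcome was achieved

    def feedback_topics(observations):
        """Group observations by learning objective so that after-action
        review topics stay traceable to objectives via events."""
        topics = {}
        for obs in observations:
            topics.setdefault(obs.event.objective.statement, []).append(obs)
        return topics

Because each record holds a reference to the record one step upstream, a deficiency observed during the exercise can be traced back to the objective it reflects, which is the internal consistency the EBAT linkage is meant to guarantee.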

4.1. Learning objective specification

Effective training begins with a clear understanding of the needs of the training participants. The EBAT starts with the specification of learning objectives associated with domain relevant tasks, conditions, and standards. For purposes of this discussion, a learning objective is defined as a statement that clearly describes what is to be acquired during a training opportunity. Depending on the training participants and task requirements, learning objectives can be associated with a specific task (e.g. select a construction site for a new distribution center) or general competencies required across a number of tasks (e.g. problem solving, planning, communication).
The capabilities described below are the end goals of training and provide a basis for specifying the knowledge, skills, and abilities that need to be trained and supported within a simulation. Technically, the capabilities discussed in this section are what an instructional designer would want to specify as learning objectives. These capabilities provide guidance for development of an integrated framework for an EBAT system (e.g. scenario events, measurement tools, feedback format).
To train problem solving teams it is first necessary to determine the characteristics of naturalistic decision making environments (Cannon-Bowers & Bell, 1997). Orasanu and Connolly (1995) listed eight factors that are important to consider in naturalistic settings: (1) ill-structured problems; (2) dynamic environments; (3) shifting or competing goals; (4) feedback loops; (5) time stress; (6) high stakes; (7) multiple players; and (8) overriding organizational goals. These factors delineate the conditions under which a problem solving team must perform. Using the factors that Orasanu and Connolly (1995) identified as characteristics of naturalistic environments, Cannon-Bowers and Bell (1997) identified a set of capabilities that expert problem solvers would be expected to possess (i.e. flexibility, quickness, resilience, adaptability, risk taking, and accuracy).
Expert problem solvers need to be flexible to cope with the complex, dynamic, and ill-structured environment in which they find themselves (Orasanu & Connolly, 1995). As the environment changes, goals shift, potentially requiring changes in strategy (Kauffman, 1995). The need for flexibility is heightened further by the need to address the perspectives of each of the team members in the implementation of a solution (Vroom & Yetton, 1973).
In addition to being flexible, problem solvers need to be quick. Time pressure is one of the primary factors in naturalistic environments and limits the ability of the team to conduct analytical procedures (e.g. multi-attribute utility analysis) (Zakay & Wooler, 1984). The presence of feedback loops and dynamic environments also impacts the tempo at which a decision must be made (Orasanu & Connolly, 1995).
Because of severe time pressure and the potentially severe consequences of selecting the wrong alternative, problem solvers must be resilient (Cannon-Bowers & Bell, 1997). These stressors, along with the necessity of handling ill-structured problems in dynamic environments, require that the problem solver be capable of operating under these conditions without degradation in performance (Means et al., 1995).
Each team must optimize organizational goals as well as personal requirements (Hackman, 1987). These factors make adaptation particularly difficult (Orasanu & Connolly, 1995). On top of the structural characteristics of the team (i.e. organizational goals, multiple players) that obligate adaptation are the task characteristics (i.e. dynamic environment, shifting goals, feedback loops) that place demands on the problem solver (Cannon-Bowers & Bell, 1997). Adaptation by the problem solver is necessary to ensure survival as well as to expand the problem solver's behavioral repertoire (Holland, 1993).
Because of the nature of the tasks that a problem solver is required to perform, a certain level of risk taking is required (Cannon-Bowers & Bell, 1997). Orasanu and Connolly (1995) have suggested that problem solvers mitigate some of this risk by drawing on their considerable knowledge of the problem domain. This knowledge base allows the expert problem solver to categorize ill-structured problems more efficiently than a novice in the domain (Glaser, Lesgold, & Gott, 1991).
The final capability that Cannon-Bowers and Bell (1997) identified as being critical to problem solving in naturalistic environments is accuracy. Because of the high stakes involved, problem solvers do not become experts unless they are able to accurately predict future states of the system. Performance on any given situational trial impacts the ability of the problem solver to learn, which in turn impacts performance on future trials (Holland, 1993).
A summary of the relationships between the problem solving capabilities identified by Cannon-Bowers and Bell (1997) and the eight characteristics of naturalistic environments specified by Orasanu and Connolly (1995) is presented in Table 2.

4.1.1. Trigger event development


The EBAT approach then requires that episodes within the simulation be either identified or developed for each learning objective. The events create specific opportunities for the participants to practice critical tasks and competencies associated with learning objectives. The events allow the participants to demonstrate their proficiencies and deficiencies for the purpose of performance measurement and feedback. Typically, a number of events are created for each learning objective that: (1) vary in difficulty and (2) occur at different points of an exercise. Specific learning objective events are necessary opportunities to train and receive feedback on critical competencies (Cannon-Bowers & Salas, 1997); without such deliberately inserted opportunities, exercises can fail to develop effective procedures, in much the same way that unstructured on-the-job training can (Goldstein, 1993).

Table 2
Impact of naturalistic decision making characteristics on problem solving capabilities

                         Flexibility  Quickness  Resilience  Adaptability  Risk taking  Accuracy
Ill-structured problems       +                      +                          +
Dynamic environments          +           +          +             +
Shifting goals                +                                    +
Feedback loops                            +                        +
Time stress                               +          +
High stakes                                          +                          +            +
Multiple players              +                                    +
Organizational goals                                               +
A systematic approach to training requires the presence of specific, pre-planned opportunities for the training participants to demonstrate and receive feedback on critical competencies. The deliberate introduction of these opportunities must be transparent and believable to the training participants. Otherwise, the training participants may not perform in a realistic manner.
Just as a naturalistic decision making paradigm can inform the development of learning objectives, it can also inform the development of scenario events. If the scenario is to adequately train the team in problem solving, it should possess the factors that Orasanu and Connolly (1995) listed as characterizing naturalistic environments (i.e. ill-structured, dynamic, time stress). These factors delineate the conditions under which a problem solving team must perform. The taxonomy provided by a naturalistic decision making paradigm can also be used to assess the level of difficulty of scenario events. For example, a relatively easy scenario event for an oil spill management team may involve the team developing a distribution plan for the deployment of resources (e.g. delivery of oil spill booming equipment to a single location). The difficulty of this event may be increased in a variety of ways, including changing the number or location of the required delivery sites, intensifying the consequences of non-delivery, or placing temporal pressures on the delivery.
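To illustrate, the sketch below parameterizes this hypothetical delivery event along the three dimensions just named (number of sites, consequence severity, time pressure). The class, field names, and scoring rule are our own assumptions for illustration; they are not taken from the EBAT literature.

    from dataclasses import dataclass

    @dataclass
    class DeliveryEvent:
        # Hypothetical oil spill scenario event.
        delivery_sites: int     # number of locations needing booming equipment
        penalty_weight: float   # severity of the consequences of non-delivery
        time_limit_min: int     # minutes allowed, i.e. temporal pressure

        def difficulty(self):
            """Crude composite score: more sites, higher stakes, and a
            tighter deadline all make the event harder."""
            return self.delivery_sites * self.penalty_weight * (60.0 / self.time_limit_min)

    easy = DeliveryEvent(delivery_sites=1, penalty_weight=1.0, time_limit_min=120)
    hard = DeliveryEvent(delivery_sites=4, penalty_weight=3.0, time_limit_min=30)
    print(easy.difficulty(), hard.difficulty())  # 0.5 vs 24.0

A scenario designer could then assemble events for a learning objective so that their difficulty scores span the desired range of conditions.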

4.1.2. Measure of effectiveness (MOE) specification


The EBAT approach prescribes the development of performance strategies and measures to collect data associated with the inserted training events. Within this context, the strategies and measurement tools enable: (1) examination of performance trends during the exercise and (2) the development of diagnostic performance feedback. Measuring performance at several events for a specific learning objective enables the development of profiles of how well a team performs on that objective over a range of conditions. Based on a systematic assessment of performance, feedback can be designed and provided to focus specifically on competencies necessary for effective performance. Without effective performance measurement and feedback, there is no way of knowing or ensuring, with any degree of certainty, that the training will have its intended effect (Cannon-Bowers & Salas, 1997).
Measures must allow the systematic assessment of performance (i.e. proficiencies and deficiencies) associated with critical tasks and competencies. Depending on the specific characteristics of the learning environment, a variety of measurement approaches may be required. For example, the measurement of competencies that are unique to a single task, given a specific set of conditions (e.g. determine budgetary requirements for company ABC for 1998), will vary from the measurement of competencies that generalize across a variety of tasks and conditions (e.g. demonstrate an ability to perform strategic planning).
An important characteristic of the multi-faceted approach to measurement is the collection of data involving both outcomes (e.g. was the right decision made?) and processes (e.g. was the decision made right?) (Dwyer et al., 1997; Johnston et al., 1997). While outcome measures do provide important information regarding overall performance, process measures are required for diagnosing specific deficiencies associated with how a given outcome was reached. The nature of performance in naturalistic decision making environments further increases the need to measure processes: because the team interacts with the dynamic and ill-structured environment in which it is embedded, it is necessary to have measures that enable observers to record the team's behaviors.
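One simple way to instrument event-based process measurement, sketched below under invented event and behavior names, is an event-keyed checklist on which observers mark whether each expected process behavior was exhibited when its trigger event fired. This is an illustration of the idea, not the authors' measurement instrument.

    # Hypothetical checklist keyed to a trigger event.
    checklist = {
        "boom_delivery_delayed": [
            "announced delay to the team",
            "reassessed delivery priorities",
            "requested status from affected site",
        ],
    }

    def process_score(event_id, observed_behaviors):
        """Fraction of expected process behaviors actually observed
        when the given trigger event occurred."""
        expected = checklist[event_id]
        hits = sum(1 for b in expected if b in observed_behaviors)
        return hits / len(expected)

    print(process_score("boom_delivery_delayed",
                        {"announced delay to the team",
                         "reassessed delivery priorities"}))  # ~0.67

Scoring the same learning objective at several such events, of varying difficulty, yields the kind of performance profile described above.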
A second impact of the naturalistic decision making paradigm on the specification of MOEs relates to the types of tools that are utilized to assess team processes and performance. While structured environments allow for highly proceduralized checklists, the very nature of naturalistic environments limits the extent to which performance can be assessed against standard operating procedures (i.e. different methods can result in effective performance). This has implications for training observers to evaluate the construct of interest. Finally, because naturalistic decision making environments involve shifting goals and feedback loops, there is a need for longitudinal observation.

4.1.3. Scenario generation


Given the task requirements, learning objectives, events, and performance measures, a scenario is then developed. Scenarios must permit the training participants to interact in realistic situations that will facilitate learning. Scenarios can use a wide range of constructive, virtual, synthetic, and live resources. Regardless of the specific resources used to create the training environment, the scenario must support the learning objectives, enable the required events to be presented to the participants, and facilitate the utilization of the performance measures for feedback.
While a naturalistic decision making paradigm can provide an overarching framework for establishing scenario events, it also furnishes guidance on how those events should be organized in a realistic manner. For example, since a characteristic of naturalistic environments is ill-structured situations, scenarios need to be generated that ensure the cues and features underlying a particular problem are not obvious to the team. Similarly, the developer needs to ensure that there are feedback loops within the simulation (i.e. trainee inputs impact future scenario events). For that reason, scenarios that do not provide mechanisms for real-time modification may inhibit the ability of problem solving teams to learn.
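A minimal sketch of such a feedback loop, under assumed event names, is an event queue that the team's own actions can modify while the exercise runs: scripted events fire on schedule, but an inadequate response spawns a harder follow-on event.

    import heapq

    def run_exercise(schedule, respond):
        """schedule: list of (minute, event) pairs; respond: function mapping
        an event to the team's action. Trainee inputs feed back into the
        queue, so the scenario is modified in real time."""
        heapq.heapify(schedule)
        while schedule:
            minute, event = heapq.heappop(schedule)
            action = respond(event)
            # Hypothetical branching rule: a missed containment action
            # escalates the problem fifteen minutes later.
            if event == "slick_approaches_shore" and action != "deploy_boom":
                heapq.heappush(schedule, (minute + 15, "shoreline_contamination"))

    run_exercise([(10, "slick_approaches_shore")], lambda e: "monitor_only")

A scenario with no such branching would present the same events regardless of what the team does, which is precisely the lack of real-time modification the paragraph above warns against.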

4.1.4. Exercise management


After the scenario is generated and tested, the participants can interact with the training environment. Obviously, exercise management and control of exercise flow are critical aspects of this process. While training participants must be permitted to make their own decisions and fight the simulated battle in a manner consistent with doctrine, exercise managers must ensure that the right types of opportunities are presented, in a controlled manner, to meet the intended objectives.
The very same features that produce a realistic scenario also inhibit the trainers' ability to control it. For example, the presence of dynamic environments and feedback loops suggests that changes to a single scenario event can potentially set off a chain of events that necessitates changes to numerous future events as well as to the data collection plan. As simulation scenarios become more
sophisticated, it will be necessary either to increase the number of people operating the system or to develop software to aid in simulation control and execution.

4.1.5. Data collection


The participants perform the scenario and measurement data are collected to support feedback. Specifically, when an event occurs, performance related to that event is assessed. However, when training in naturalistic environments, this seemingly simple task becomes highly complex. Because the environment can change so rapidly, and because the presence of multiple players complicates the data collection process, some form of automated data collection (e.g. key stroke capture, video/audio tape, eye tracking) is necessary so that an audit trail can be constructed after the scenario's completion.
In addition to the speed with which behaviors occur in naturalistic problem solving environments, the fact that problems are ill-structured allows for multiple solution strategies. This implies that any data collection tools used must possess features that can capture these various solution methods. For example, a joint commander may choose to quell a food riot by: (1) using tear gas to disperse the crowd; (2) sending in troops to control the crowd; or (3) delivering additional food. Each of these options would require that a different set of actions be set in motion and coordinated. Data collection tools require the flexibility to cope with this level of variability.
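A minimal sketch of such an audit trail, using invented actor and action names, is an append-only log that timestamps every input so that whichever of the solution paths above the team takes can be reconstructed after the exercise.

    import json, time

    class AuditTrail:
        """Append-only record of trainee inputs for post-exercise replay."""
        def __init__(self, path):
            self.f = open(path, "a")

        def record(self, actor, action, **details):
            entry = {"t": time.time(), "actor": actor,
                     "action": action, "details": details}
            self.f.write(json.dumps(entry) + "\n")
            self.f.flush()

    log = AuditTrail("exercise_07.jsonl")
    # Two different solution paths to the riot problem are captured
    # in exactly the same way.
    log.record("joint_commander", "order", unit="MP_company", task="crowd_control")
    log.record("logistics_officer", "dispatch", cargo="food", destination="district_4")

Because every entry carries a timestamp and an actor, the log can later be filtered by trigger event window or by team member, which eases the multi-player data collection problem noted above.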
Multiple players and organizational goals further complicate the data collection process. It is not unreasonable to expect that team members from different backgrounds (e.g. a Joint Command Post) will utilize different criteria to assess an alternative's suitability. This presence of multiple criteria greatly expands the demands that are placed on the observers. Are data to be collected against all criteria? Are multiple raters needed for each perspective? How will the data from these differing perspectives be integrated into a single perspective for feedback? This last question provides a transition to the next section.

4.1.6. After-action review (AAR) generation and conduct


Performance is documented, analyzed, and packaged to highlight critical teaching points for subsequent feedback. AARs need to focus on critical events having a direct influence on mission outcomes (Lederman, 1992; Meliza, Bessemer, & Hiller, 1994; Rankin, Gentner, & Crissey, 1995). The systematic linkage in the EBAT continues by tying feedback topics to the performance measures, which in turn are linked to the events and learning objectives. This approach provides structure and control to training and ensures internal consistency throughout an exercise.
The necessity for such an approach when training teams in a simulated environment becomes obvious when one considers that complex dynamic environments frequently do not provide clear or timely feedback (Means et al., 1995). It is necessary that the trainer establish, a priori, criteria with which to measure problem solving skills. This will allow the trainer to compare a particular team's results against an expert team's solution. The challenge is to provide meaningful feedback to the entire team on their performance and not just on their individual task areas.
Despite the challenges of identifying criteria on which to provide feedback in naturalistic environments, many simulations provide features that have the potential to enhance the ability of the trainer to provide feedback. Using simulations it is possible to speed up time (Schneider, 1985), which makes it possible for the trainees to observe the impact of their decisions sooner and thereby receive feedback more rapidly. Because all aspects of the environment are encoded in the computer, it is possible to decompose the team's task and provide explicit feedback on their process and performance by replaying their actions during the debrief. The digitization of the scenario also makes it possible to highlight and augment those portions of the scenario that are necessary to provide feedback, while discounting those sections which provide little or no learning. The rapid analysis of team performance data is also possible if on-line measures (e.g. key strokes, communications, software artifacts) are used for feedback purposes.

4.1.7. Database management and archival


Following the completion of the exercise, appropriate data are stored and archived in a meaningful manner that supports the development of lessons learned and future exercises. The question is not so much what to archive but how to archive data; it is possible in most modern simulation systems to record every key stroke. One of the most useful pieces of data to record is likely to be a listing of the scenario events to which the team was exposed during the simulation. Linked to these events should be some assessment of the team's problem solving performance. As data are collected across exercises, a database can be developed which records individual and team responses to the scenario events. Over time, as the archives grow, normative patterns will emerge and performance for a given team can be compared against the 'norm'.
Just as naturalistic decision making provides a framework for considering the specification of learning objectives and required team capabilities, it may also provide a structure for archiving data. Specifically, scenario segments could be categorized into one or more of the characteristics (e.g. high stakes, time pressure, shifting goals) of naturalistic problem solving environments identified by Orasanu and Connolly (1995). The characteristics could be used as meta-tags to develop an architectural framework for the storage of data. Such a framework would enable relatively easy storage and retrieval.
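The sketch below illustrates this kind of meta-tagged archive. The tag vocabulary follows Orasanu and Connolly's (1995) characteristics, but the storage layout, function names, and scores are invented for illustration.

    # Hypothetical archive of scenario segments keyed by NDM characteristics.
    NDM_TAGS = {"ill_structured", "dynamic", "shifting_goals", "feedback_loops",
                "time_stress", "high_stakes", "multiple_players", "org_goals"}

    archive = []

    def store_segment(segment_id, tags, team_score):
        """File one scenario segment with its characteristic tags and an
        assessment of the team's problem solving performance."""
        assert tags <= NDM_TAGS, "unknown tag"
        archive.append({"segment": segment_id, "tags": tags, "score": team_score})

    def norm_for(tag):
        """Normative performance across all archived segments sharing a tag."""
        scores = [r["score"] for r in archive if tag in r["tags"]]
        return sum(scores) / len(scores)

    store_segment("ex07_riot", {"high_stakes", "time_stress"}, 0.72)
    store_segment("ex09_spill", {"time_stress", "multiple_players"}, 0.64)
    print(norm_for("time_stress"))  # 0.68: the emerging 'norm' for this condition

As more exercises are filed, norm_for approximates the normative patterns against which a given team's performance can be compared.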

4.1.8. Summary
Because of the linkages in the EBAT, performance can be traced directly back to specific learning objectives via events and performance measures. Performance related to a given objective can then be assessed and fed back to the training participants. The linkages are critical, and therefore must not be viewed as a set of options, but rather as a list of requirements for effective training. If properly implemented, the EBAT has the potential to establish an effective learning environment for team training in simulated environments. Table 3 summarizes the implications of an EBAT for training the team problem solving skills required in naturalistic decision making environments.
Table 3
Implications of event-based approach to training (EBAT) and naturalistic decision making for training problem solving teams

Learning objectives
- Trainers should utilize expert problem solver capabilities to set learning objectives
- Simulation should try to mimic the naturalistic environment and exercise team problem solving skills
Sources: Cannon-Bowers & Bell, 1997; Oser, Cannon-Bowers, Dwyer, & Miller, 1997; Oser, Cannon-Bowers, Dwyer, & Salas, 1997

Trigger events
- A naturalistic decision making framework provides a means for manipulating scenario events
- Scenario designers should use the characteristics of naturalistic decision making environments to ensure that the number and type of trigger events selected accurately reflect real world situations
Sources: Cannon-Bowers & Bell, 1997; Orasanu & Connolly, 1995; Oser, Cannon-Bowers, Dwyer, & Miller, 1997; Oser, Cannon-Bowers, Dwyer, & Salas, 1997

MOE (measure of effectiveness) specification
- Care should be taken in developing measures for naturalistic environments because such environments are characteristically ill-structured
- Naturalistic environments limit the utility of highly procedural measurement tools, which therefore should not be used
Sources: Dwyer et al., 1997; Johnston et al., 1997; Oser, Cannon-Bowers, Dwyer, & Miller, 1997; Oser, Cannon-Bowers, Dwyer, & Salas, 1997

Scenario generation
- When the learning objectives relate to problem solving, greater attention should be paid to issues of simulator fidelity than for more behavioral learning objectives
- Scenario designers should use a naturalistic decision making framework to classify and evaluate the difficulty of individual events
Sources: Cannon-Bowers & Bell, 1997; Oser, Cannon-Bowers, Dwyer, & Miller, 1997; Oser, Cannon-Bowers, Dwyer, & Salas, 1997

Exercise management
- Greater attention needs to be given to exercise management when the scenario possesses the characteristics of a naturalistic environment
- Due to their complexity, simulations that attempt to represent naturalistic environments should possess software to aid in exercise control
Sources: Oser, Cannon-Bowers, Dwyer, & Miller, 1997; Oser, Cannon-Bowers, Dwyer, & Salas, 1997

Data collection
- Observers that collect data within a naturalistic environment should possess tools that help them maintain an awareness of the big picture
- Simulation should possess some form of automatic data capture
Sources: Dwyer et al., 1997; Johnston et al., 1997; Oser, Cannon-Bowers, Dwyer, & Miller, 1997; Oser, Cannon-Bowers, Dwyer, & Salas, 1997

AAR (after-action review) generation
- Because naturalistic decision making environments are so complex, feedback should be provided in as timely a fashion as possible to foster learning
- Simulation should be utilized during feedback sessions to help the trainee visualize the event
Sources: Means et al., 1995; Schneider, 1985

Data management and archival
- Archival tools should employ meta-tags that categorize scenario events using a naturalistic decision making framework
- Archived databases should include prior performance information to help determine normative values for scenario events
Sources: Oser, Cannon-Bowers, Dwyer, & Miller, 1997; Oser, Cannon-Bowers, Dwyer, & Salas, 1997

5. Conclusions

As the cost of preparing teams for the workplace increases, the use of simulations for training is likely to become more prevalent. For many domains (e.g. military, emergency response teams, commercial airlines, nuclear power plants), simulation will become the training environment of choice. While technological improvements to training systems are likely to continue, they alone cannot ensure that trainees will possess improved competencies (i.e. knowledge, skills, abilities) after completing training. Additional work in the development of learning strategies, methods, and tools is needed to maximize the benefit of using simulation for training.
This article has sought to provide a framework with which to develop these learning strategies, methods, and tools for the training of problem solving teams. It should be noted, however, that most of the research that served as the basis for this article is theoretical, and empirical assessment of the propositions is still required. While both a naturalistic decision making and an EBAT framework have tremendous potential for impacting the training of problem solving teams, until an evaluation of their tenets is conducted, the conclusions derived from them are conjectural.
In addition to providing empirical support for the use of EBAT to train problem solving in naturalistic decision making environments, further theoretical work remains on the identification of learning objectives, the development of scenario events and exercises, the measurement of performance, and the application of appropriate feedback within a training simulation. Other improvements could be achieved by addressing training needs during the technical development of simulations, to ensure that the overall architecture, as well as the models, protocols, and algorithms, all support learning. The establishment of effective learning environments is an important first step toward fully utilizing simulation to support training.
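As one illustration of how these pieces might be linked inside a training simulation, the sketch below uses invented names and is not an architecture proposed by this article or validated empirically. It ties each performance measure to the learning objective and scenario event it serves, so that after-action review items are generated only for behaviors that were missed, in the spirit of the TARGETs-style checklists cited above (Dwyer et al., 1997).

from dataclasses import dataclass

@dataclass
class Objective:
    """A learning objective that scenario events are designed to exercise."""
    objective_id: str
    statement: str

@dataclass
class Measure:
    """A checklist-style observation tied to one event and one objective."""
    event_id: str
    objective_id: str
    observed: bool  # True if the targeted behavior was seen during the event

def aar_items(measures: list, objectives: dict) -> list:
    """Generate after-action review items for missed, objective-linked behaviors."""
    return [
        f"Revisit: {objectives[m.objective_id].statement} (event {m.event_id})"
        for m in measures
        if not m.observed
    ]

Keeping the objective-to-event-to-measure links explicit in the data, rather than only in the scenario designer's head, is what would allow feedback to be generated quickly enough to foster learning (cf. Means et al., 1995; Schneider, 1985).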
In summary, while emerging technologies have considerable potential for training, technology itself will not result in effective learning environments. Effective learning environments require the integration of technologies with theoretically sound and empirically demonstrated learning strategies, tools, and methods.

References
Amalberti, R., & Deblon, F. (1992). Cognitive modeling of fighter aircraft's process control: A step towards an intelligent on-board assistance system. International Journal of Man–Machine Studies, 36, 639–671.
Baxter, G. P., & Glaser, R. (1997). An approach to analyzing the cognitive complexity of science performance assessments (CSE Technical Report 452). Los Angeles, CA: University of California, Graduate School of Education and Information Studies.
Beach, L. R., & Mitchell, T. R. (1987). Image theory: Principles, goals and plans. Acta Psychologica, 66, 201–220.
Bell, H. H. (1996). The engineering of a training network. Proceedings of the 7th International Training Equipment Conference (pp. 365–370). Arlington, VA: ITED Ltd.
Bettenhausen, K., & Murnighan, J. K. (1985). The emergence of norms in competitive decision-making groups. Administrative Science Quarterly, 30, 350–372.
Bisseret, A. (1981). Application of signal detection theory to decision making in supervisory control: the effect of the operator's expertise. Ergonomics, 24(2), 81–94.
Cannon-Bowers, J. A., & Bell, H. H. (1997). Training decision makers for complex environments: implications of the naturalistic decision making perspective. In C. Zsambok, & G. Klein, Naturalistic decision making (pp. 99–110). Mahwah, NJ: Erlbaum.
Cannon-Bowers, J. A., & Salas, E. (1997). A framework for developing team performance measures in training. In M. Brannick, E. Salas, & C. Prince, Team performance assessment and measurement: theory, research and applications (pp. 15–62). Mahwah, NJ: Erlbaum.
Cannon-Bowers, J. A., Salas, E., & Converse, S. (1990). Cognitive psychology and team training: training shared mental models of complex systems. Human Factors Society Bulletin, 33(12), 1–4.
Cannon-Bowers, J. A., Salas, E., & Converse, S. (1993). Shared mental models in expert team decision making. In N. Castellan, Individual and group decision making (pp. 221–246). Hillsdale, NJ: Erlbaum.
Cannon-Bowers, J. A., Tannenbaum, S. I., Salas, E., & Volpe, C. E. (1995). Defining team competencies and establishing team training requirements. In R. Guzzo, & E. Salas, Team effectiveness and decision making in organizations (pp. 333–380). San Francisco, CA: Jossey-Bass.
Cascio, W. F. (1991). Applied psychology in personnel management. Englewood Cliffs, NJ: Prentice Hall.
Cellier, J. M., Eyrolle, H., & Marine, C. (1997). Expertise in dynamic environments. Ergonomics, 40(1), 28–50.
Chase, W. G., & Simon, H. A. (1973). Perception in chess. Cognitive Psychology, 4, 55–81.
Chi, M., Bassok, M., Lewis, M., Reimann, P., & Glaser, R. (1989). Self-explanations: how students study and use examples in learning to solve problems. Cognitive Science, 13, 145–182.
Chi, M., Feltovich, P. J., & Glaser, R. (1981). Categorization and representation of physics problems by experts and novices. Cognitive Science, 5, 121–152.
Cohen, M. S., Freeman, J., Wolf, S., & Militello, L. (1995). Training metacognitive skills in Naval combat decision making. Arlington, VA: Cognitive Technologies, Inc.
Dipboye, R. L. (1997). Organizational barriers to implementing a rational model of training. In M. A. Quinones, & A. Ehrenstein, Training for a rapidly changing workplace: applications of psychological research (pp. 31–60). Washington, DC: American Psychological Association.
Dörner, D., & Schaub, H. (1994). Errors in planning and decision making and the nature of human information processing. Applied Psychology: An International Review, 43(4), 433–453.
Druckman, D., & Bjork, R. A. (1991). Modeling expertise. In D. Druckman, & R. Bjork, In the mind's eye: enhancing human performance (pp. 57–79). Washington, DC: National Academy Press.
Dwyer, D. J., Fowlkes, J. E., Oser, R. L., Salas, E., & Lane, N. E. (1997). Team performance measurement in distributed environments: the TARGETs methodology. In M. Brannick, E. Salas, & C. Prince, Team performance assessment and measurement: theory, research and applications (pp. 137–153). Mahwah, NJ: Erlbaum.
Dwyer, D. J., Oser, R. L., & Fowlkes, J. E. (1995). A case study of distributed training and training performance. In Proceedings of the Human Factors and Ergonomics Society 39th annual meeting (Vol. 4, pp. 1316–1320). Santa Monica, CA: Human Factors and Ergonomics Society.
Fowlkes, J. E., Dwyer, D. J., Oser, R. L., & Salas, E. (1999). Event-based approach to training (EBAT). International Journal of Aviation Psychology (in press).
Gaugler, B. B., & Thornton, G. C. (1989). Number of assessment center dimensions as a determinant of assessor accuracy. Journal of Applied Psychology, 74, 611–618.
Glaser, R. (1986). Training expert apprentices. In I. Goldstein, R. Gagne, R. Glaser, J. Royer, T. Shuell, & D. Payne, Learning research laboratory: proposed research issues (AFHRL-TP-85-54). Brooks Air Force Base, TX: Manpower and Personnel Division, Air Force Research Laboratory.
Glaser, R. (1989). Expertise and learning: how do we think about instructional processes now that we have discovered knowledge structure? In D. Klahr, & K. Kotovsky, Complex information processing: the impact of Herbert A. Simon (pp. 269–282). Hillsdale, NJ: Erlbaum.
Glaser, R., & Chi, M. T. (1988). Overview of expertise. In M. Chi, R. Glaser, & M. Farr, The nature of expertise. Hillsdale, NJ: Erlbaum.
Glaser, R., Lesgold, A., & Gott, S. (1991). Implications of cognitive psychology for measuring job performance. In A. Wigdor, & B. Green, Performance assessment for the workplace II (pp. 1–26). Washington, DC: National Academy Press.
Goldstein, I. L. (1993). Training in organizations (3rd ed.). Pacific Grove, CA: Brooks/Cole Publishing.
Gualtieri, J. W., Parker, C., & Zaccaro, S. (1995). Group decision-making: an examination of decision processes and performance. Paper presented at the Annual Meeting of the Society for Industrial and Organizational Psychology, Orlando, FL.
Hackman, J. R. (1987). Design of work teams. In J. Lorsch, Handbook of organizational behavior. Englewood Cliffs, NJ: Prentice-Hall.
Hirokawa, R. Y., & Johnston, D. D. (1989). Toward a general theory of group decision making: development of an integrated model. Small Group Behavior, 20(4), 500–523.
Holland, J. H. (1993). Adaptation in natural and artificial systems. Cambridge, MA: MIT Press.
Huber, G. P. (1980). Managerial decision making. Glenview, IL: Scott Foresman.
Johnston, J. H., Smith-Jentsch, K. A., & Cannon-Bowers, J. A. (1997). Performance measurement tools for enhancing team decision-making training. In M. Brannick, E. Salas, & C. Prince, Team performance assessment and measurement: theory, research and applications. Mahwah, NJ: Erlbaum.
Kauffman, S. (1995). At home in the universe: the search for the laws of self-organization and complexity. New York: Oxford University Press.
Kerr, N. L. (1981). Social transition schemes: charting the group's road to agreement. Journal of Personality and Social Psychology, 41(4), 684–702.
Klein, G. A. (1989). Recognition-primed decisions. In W. Rouse, Advances in man-machine systems research (Vol. 5, pp. 47–92). Greenwich, CT: JAI Press.
Klein, G. A. (1995). A recognition-primed decision (RPD) model of rapid decision making. In G. Klein, J. Orasanu, R. Calderwood, & C. Zsambok, Decision making in action: models and methods (pp. 138–147). Norwood, NJ: Ablex.
Klimoski, R., & Brickner, M. (1987). Why do assessment centers work? The puzzle of assessment center validity. Personnel Psychology, 40, 243–260.
Lederman, L. C. (1992). Debriefing: toward a systematic assessment of theory and practice. Simulation and Gaming, 23(2), 145–160.
Lipshitz, R. (1995). Converging themes in the study of decision making in realistic settings. In G. Klein, J. Orasanu, R. Calderwood, & C. Zsambok, Decision making in action: models and methods (pp. 103–137). Norwood, NJ: Ablex.
MacKinnon, D. W. (1975). Assessment centers then and now. Assessment and Development, 2, 8–9.
Means, B., Salas, E., Crandall, B., & Jacobs, T. O. (1995). Training decision makers for the real world. In G. Klein, J. Orasanu, R. Calderwood, & C. Zsambok, Decision making in action: models and methods (pp. 306–326). Norwood, NJ: Ablex.
Meister, D. (1985). Behavioral analysis and measurement methods. New York: Wiley.
Meliza, L. L., Bessemer, D. W., & Hiller, J. H. (1994). Providing unit training feedback in the distributed interactive simulation environment. In R. F. Holz, J. H. Hiller, & H. H. McFann, Determinants of effective unit performance: research on measuring and managing unit training readiness (pp. 257–280). Alexandria, VA: US Army Research Institute for the Behavioral and Social Sciences.
Mintzberg, H., Raisinghani, D., & Theoret, A. (1976). The structure of "unstructured" decision processes. Administrative Science Quarterly, 21, 246–275.
Noble, D. F. (1989). Applications of a theory of cognition to situation assessment. Vienna, VA: Engineering Research Associates.
Noble, D. F. (1995). A model to support development of situation assessment aids. In G. Klein, J. Orasanu, R. Calderwood, & C. Zsambok, Decision making in action: models and methods (pp. 287–305). Norwood, NJ: Ablex.
Noble, D. F., Grosz, C., & Boehm-Davis, D. (1987). Rules, schema and decision making. Vienna, VA: Engineering Research Associates.
O'Neil, H. F., Jr., Allred, K., & Baker, E. L. (1997). Review of workforce readiness theoretical frameworks. In H. F. O'Neil, Jr., Workforce readiness: competencies and assessment. Mahwah, NJ: Lawrence Erlbaum.
Orasanu, J. (1990). Shared mental models and crew decision making (Technical Report No. 46). Princeton, NJ: Princeton University, Cognitive Sciences Laboratory.
Orasanu, J., & Connolly, T. (1995). The reinvention of decision making. In G. Klein, J. Orasanu, R. Calderwood, & C. Zsambok, Decision making in action: models and methods. Norwood, NJ: Ablex.
Oser, R. L., Cannon-Bowers, J. A., Dwyer, D. J., & Miller, H. (1997). An event based approach for training: enhancing the utility of joint service simulations. Paper presented at the 65th Military Operations Research Society Symposium. Quantico, VA: Office of Naval Research.
Oser, R. L., Cannon-Bowers, J. A., Dwyer, D. J., & Salas, E. (1997). Establishing a learning environment for JSIMS: challenges and considerations. In Proceedings of the 19th interservice/industry training systems and education conference (pp. 141–155). Orlando, FL: National Security Industrial Association.
Patel, V. L., & Groen, G. L. (1991). The general and specific nature of medical expertise: a critical look. In A. Ericsson, & J. Smith, Toward a general theory of expertise (pp. 93–125). Cambridge: Cambridge University Press.
Rankin, W. J., Gentner, F. C., & Crissey, M. J. (1995). After action review and debriefing methods: technique and technology [CD-ROM]. In Proceedings of the 17th interservice/industry training systems and education conference (pp. 252–261). Orlando, FL: National Security Industrial Association.
Rasmussen, J. (1983). Skills, rules, knowledge, signals, signs, symbols and other distinctions in human performance modeling. IEEE Transactions on Systems, Man and Cybernetics, 13(3), 257–267.
Salas, E., & Cannon-Bowers, J. A. (1997). Methods, tools and techniques for team training. In M. Quinones, & A. Ehrenstein, Training for a rapidly changing workplace: applications of psychological research (pp. 249–279). Washington, DC: APA Press.
Schneider, W. (1985). Training high performance skills: fallacies and guidelines. Human Factors, 27(3), 285–300.
Schvaneveldt, R., Durso, F., Goldsmith, T., Breen, T., Cooke, N., Tucker, R., & DeMaio, J. (1985). Measuring the structure of expertise. International Journal of Man–Machine Studies, 23, 699–728.
Stasser, G., & Davis, J. H. (1981). Group decision making and social influence: a social interaction sequence model. Psychological Review, 88(6), 523–551.
Vroom, V., & Yetton, P. W. (1973). Leadership and decision-making. Pittsburgh, PA: University of Pittsburgh Press.
Whitmore, P. G. (1981). The "whys" and "hows" of modern instructional technology. National Society for Performance and Instruction Journal, 9–13.
Wickens, C. D. (1984). Engineering psychology and human performance. Columbus, OH: Merrill.
Wong, M. R., & Raulerson, J. D. (1974). A guide to systematic instructional design. Englewood Cliffs, NJ: Educational Technology Publications.
Woods, D. D. (1988). Coping with complexity: the psychology of human behavior in complex systems. In L. Goodstein, H. Anderson, & S. Olsen, Tasks, errors and mental models. London: Taylor and Francis.
Zakay, D., & Wooler, S. (1984). Time pressure, training, and decision effectiveness. Ergonomics, 27, 273–284.
