
Battle of Cognition

Praeger Security International Advisory Board
Board Cochairs
Loch K. Johnson, Regents Professor of Public and International Affairs, School of Public and
International Affairs, University of Georgia (USA)
Paul Wilkinson, Professor of International Relations and Chairman of the Advisory Board,
Centre for the Study of Terrorism and Political Violence, University of St. Andrews (UK)
Members
Anthony H. Cordesman, Arleigh A. Burke Chair in Strategy, Center for Strategic and
International Studies (USA)
Thérèse Delpech, Director of Strategic Affairs, Atomic Energy Commission, and Senior
Research Fellow, CERI (Fondation Nationale des Sciences Politiques), Paris (France)
Sir Michael Howard, former Chichele Professor of the History of War and Regis Professor of
Modern History, Oxford University, and Robert A. Lovett Professor of Military and Naval
History, Yale University (UK)
Lt. Gen. Claudia J. Kennedy, USA (Ret.), former Deputy Chief of Staff for Intelligence,
Department of the Army (USA)
Paul M. Kennedy, J. Richardson Dilworth Professor of History and Director, International
Security Studies, Yale University (USA)
Robert J. O’Neill, former Chichele Professor of the History of War, All Souls College, Oxford
University (Australia)
Shibley Telhami, Anwar Sadat Chair for Peace and Development, Department of Govern-
ment and Politics, University of Maryland (USA)
Fareed Zakaria, Editor, Newsweek International (USA)

Battle of Cognition

The Future Information-Rich Warfare and the Mind of the Commander

Edited by
Alexander Kott

PRAEGER SECURITY INTERNATIONAL


Westport, Connecticut • London

Library of Congress Cataloging-in-Publication Data
Battle of cognition : the future information-rich warfare and the mind of the commander /
edited by Alexander Kott.
p. cm.
Includes bibliographical references and index.
ISBN 978–0–313–34995–9 (alk. paper)
1. Command and control systems. 2. Situational awareness. 3. Command of
troops. I. Kott, Alexander.
UB212.B37 2008
355.3'3041—dc22 2007037551
British Library Cataloguing in Publication Data is available.
Copyright © 2008 by Greenwood Publishing Group
All rights reserved. No portion of this book may be
reproduced, by any process or technique, without the
express written consent of the publisher.
Library of Congress Catalog Card Number: 2007037551
ISBN-13: 978–0–313–34995–9
First published in 2008
Praeger Security International, 88 Post Road West, Westport, CT 06881
An imprint of Greenwood Publishing Group, Inc.
www.praeger.com
Printed in the United States of America

The paper used in this book complies with the Permanent Paper Standard issued by the National Information Standards Organization (Z39.48–1984).
10 9 8 7 6 5 4 3 2 1
When referring to the future, all names, characters, organizations, places and incidents featured in
this publication are either the product of authors’ imagination or used fictitiously. Any resem-
blance to actual persons (living or dead), events, institutions or locales is coincidental.

Contents

Introduction 1
Alexander Kott
1 Variables and Constants: How the Battle Command of Tomorrow Will Differ (or Not) from Today’s 10
Richard Hart Sinnreich
The Timeless Conditions of Battle 13
The Changing Context of Command 15
Key Command Tasks 18
Recurring Command Dilemmas 24
New Command Challenges 29
Enhancing Future Battle Command 33

2 A Journey into the Mind of Command: How DARPA and the Army Experimented with Command in Future Warfare 37
Alexander Kott, Douglas J. Peters, and Stephen Riese
A Battle of 2018 37
Network-Enabled Warfare 41
The History of the MDC2 Program 44
Experimental Testbed 46
The Blue Command 48
The Commander Support Environment 50
A Typical Experiment 53
A Typical Battle History 59
Information Processing, Situation Awareness, and Battle Command 61


3 New Tools of Command: A Detailed Look at the Technology That Helps Manage the Fog of War 64
Richard J. Bormann Jr.
The Architecture of the BCSE 65
Warfighter’s Command Functions and Tools within CSE 75
An Illustrative Scenario 87
The Decision Support Framework 90

4 Situation Awareness: A Key Cognitive Factor in Effectiveness of Battle Command 95
Mica R. Endsley
Challenges for SA in Command and Control 98
System Design for SA in Command and Control 105
SA Requirements Analysis 107
SA-Oriented Design Principles 109
SA Design Evaluation 110
Shared SA in Team Operations 112
SA in Distributed and Ad Hoc Teams 117

5 The Hunt for Clues: How to Collect and Analyze Situation Awareness Data 120
Douglas J. Peters, Stephen Riese, Gary Sauer, and Thomas Wilk
Data Collection 120
Situation Awareness—Technical 124
Sensor Coverage 128
Situation Awareness—Cognitive 130
Battle Tempo 135
Collaborative Events 136

6 Making Sense of the Battlefield: Even with Powerful Tools, the Task Remains Difficult 140
Stephen Riese, Douglas J. Peters, and Stephen Kirin
Information Advantage Rules 140
SA Is Hard to Maintain 143
Gaps and Misinterpretations 149
Common but Not Shared 157
The Cognitive Load 162
Experimental Design and Analysis 165

7 Enabling Collaboration: Realizing the Collaborative Potential of Network-Enabled Command 167
Gary L. Klein, Leonard Adelman, and Alexander Kott
The Collaboration Evaluation Framework (CEF) 168
Three Points of Impact 169
Task Transmissions 170
Hierarchical Level and Information Abstraction 171
Collaboration and Levels of Situation Awareness 174
Types of Coordination and Collaborative Behaviors 176
Types of Task Processes 179
Type of Interdependence 180
Task Environment 180
Concept of Operations 181
Applying the Collaboration Evaluation Framework 183
Impact on Mission-Oriented Thinking 184
High Cognitive Costs of Mutual Adjustments 189
Collaboration in a Disrupted Command 190

8 The Time to Decide: How Awareness and Collaboration Affect the Command Decision Making 194
Douglas J. Peters, LeRoy A. Jackson, Jennifer K. Phillips, and Karol G. Ross
Collecting the Data about Decision Making 197
The Heavy Price of Information 203
Addiction to Information 205
The Dark Side of Collaboration 206
Automation of Decisions 208
The Forest and the Trees 210

Concluding Thoughts 212
Alexander Kott
The Tools of Network-Enabled Command 213
The Challenges of Network-Enabled Command 217

Appendix: Terms, Acronyms, and Abbreviations 223
Acknowledgments 233
Notes 237
Index 247
About the Contributors 257
Introduction
Alexander Kott

The impact of the information revolution on our society has been sudden,
profound, and indisputable. The last couple of decades have seen a dramatic
rise of new, powerful economic sectors dedicated to machines and processes
for generation, transformation, distribution, and utilization of informational
products. Computers and software, wired and wireless communication net-
works, autonomous machines, the proliferation of highly capable sensors—all
these elements have transformed both daily lives and worldwide economies
to an extent that would have been difficult to fathom merely a generation ago.
Warfare, inevitably, is among the human endeavors that have experienced
the massive impact of the information revolution. Historically, warfare has
been particularly dependent on, and influenced by, technology. From iron
and bronze weapons to horse breeding and riding to sails and gunpowder to
motor power and so on—the history of warfare is largely the story of some
people creatively adapting (and some failing to adapt) their military cultures,
institutions, and tactics to new waves of technology.1 Not surprisingly, since
the beginning of the information revolution, military thinkers in the United
States and elsewhere have been both analyzing and implementing the changes
enabled and necessitated by the rapidly advancing information technologies.2
While some of these adjustments have rapidly entered military practice,
others remain elusive even after long anticipation.
Examples of military transformations engendered by the information revo-
lution include some that are relatively inexpensive and benign.3 Others are
ambitious, enormously expensive, and therefore often controversial. One
effort in the latter category is the Future Combat System of the U.S. Army, a
colossal program intended to build a highly networked system of new battle

vehicles, combat robots, and human warriors.4 Thriving on its ability to
obtain, communicate, understand, and use huge volumes of battle-related
information, this system of humans and machines is to be highly transportable,
agile, survivable, and lethal to the enemy, as compared to its industrial age
ancestors.
Disturbingly, the history of warfare also offers numerous lessons of coun-
terproductive military adaptation to new technologies and even notorious
cases of technological dead ends.5 Can an enormously ambitious transfor-
mational undertaking like the Future Combat System work as intended? It
is a $160 billion question—that’s approximately how much taxpayer money
the program is currently slated to consume.6 The program’s critics and advo-
cates argue about the readiness of the intended technologies, whether such a
force can help with counterinsurgency wars bedeviling the U.S. military, and
how vulnerable (or not) the future light-armored combat vehicles might be in
comparison with today’s predominant battle machines like the Abrams tank.
Yet while many ponder the weighty matters of armor, few seem to worry
about the gray matter—the mind of the commander, the place where all the
information power of the new age is supposed to converge and to yield its
mighty dividends. Consider that it is the human mind, particularly the minds
of military commanders and their staffs, which remains the pinnacle and the
ultimate consumer of all these enormously expanded flows of information.
What if the true weak link of the information age force is not the hardware
of machines, but the software of the human mind? And if so, could it be that
the entire conceptual structure of the information revolution, at least as it
applies to military affairs, is built on sand, on the notorious fickleness of
human cognition?
These are the questions that this book strives to examine. Looking at the
command and control of information-rich warfare, we explore its potential
new processes, techniques, and organizational structures. As we do so, we find
reasons for both optimism and concerns about the limitations of human cog-
nition and supporting technologies in commanding information age battles.
Naturally, much of this book is about the technology that may enable new
paradigms of battle command. Without such technology, as the reader will soon
see, the new methods of battle command are neither feasible nor desirable.
To underpin the new technology, this book must also address the science—
theoretical and empirical—of the key processes occurring in battle command.
Finally, because new ideas are hard to either explain or motivate without explor-
ing their genesis, this book is also about the history of how we developed the
concepts and technologies of the new battle command.
In part, the roots of this work are in two programs conducted by the De-
fense Advanced Research Projects Agency (DARPA), the central research
organization of the U.S. Department of Defense.7 DARPA is widely regarded
as the world’s most ambitious, risk-taking, and largely effective research
organization dedicated to military technologies. In the words of one writer,
“America’s secret weapon today is not the stealth airplane or the Predator
but the agency that was responsible for their development (and much else
besides)—The Defense Advanced Research Projects Agency.”8
Whatever the accolades, it was DARPA that by the year 2000 became
increasingly concerned about the challenges of battle command in an
information-rich, network-enabled military force. Urged by the energetic
and visionary Lieutenant Colonel Gary Sauer, DARPA and the U.S. Army
formed a joint development program, initially called the Future Combat Sys-
tem Command and Control (FCS C2).9 Gary Sauer joined DARPA, became
the program manager of the program, and convinced two other talented mili-
tary technologists—Maureen Molz, a senior engineering manager with the
U.S. Army, and Lieutenant Colonel Robert Rasch—to join him. Together,
they led the program through most of its life. Around 2003, the program was
renamed Multicell and Dismounted Command and Control (MDC2)10 and
continued into 2007.
The products of the program included an unconventional, innovative
approach and technology for battle command. While not representing the
position of either the U.S. Army or DARPA, this book is based partly on the
ideas, experiments, and lessons of that program.11

DRIVING FORCES
Major innovations do not occur without both a push and a pull. A push is a
set of factors that make a change possible. Often, the push is technological—a
new invention or an advance in technology or a combination of new tech-
nologies that makes possible a capability that was previously unachievable.
In the world of warfare, such a push implies that the potential opposing
force may also avail itself of such technologies and capabilities, and therefore
some counteraction must be considered. A pull is a set of factors that make a
change desirable and even necessary. Commonly, such factors result from an
evolution in the environments and opponents that the military is likely to face
in the near future.
Today’s shifts in battle command paradigms are enabled by several push
factors such as smart precision weapons, unmanned platforms and sensors,
ubiquitous networking, and intelligent decision aids. There are also powerful
pull factors: the continuous trend toward the dispersion of forces, the need
for lighter forces that can defeat a heavier opponent without entering into
his direct fire range, and the dramatic increase in the volume of battlespace
information. Later in this book, we discuss these topics in detail, but let us
preview them briefly here.
For example, the recent emergence and the rapid progress of unmanned
platforms are nothing short of revolutionary. In our generation, we are
observing the entry into the battlespace of an entirely new class of warriors:
unmanned automated sensors, stationary and mobile, ground based and air-
borne; unmanned fire platforms, for both direct and indirect fires, capable
of operating in the air and on the ground, large and small. It is difficult to
compare this development with anything else that has ever occurred in the
history of human warfare.
These artificial warriors possess unique strengths and weaknesses. They
can obtain and process far more information than a human being and yet
are generally much less intelligent in accomplishing even seemingly simple
tasks. They possess inhuman endurance, precision, strength, and “courage,”
thereby offering the commander a yet unexplored range of new tactics. On
the other hand, the unmanned, robotic platforms impose on their human
commanders a great burden: monitoring and controlling the assets that for
the foreseeable future will remain remarkably unintelligent as compared to
human warriors.
At the same time, affordable networking and computerization have brought
an unprecedented ability to exchange large volumes of information at great
speeds between both human and artificial warriors, both horizontally between
peers and vertically between echelons. The implications of this development
for the battle command are also drastic: the information flow rates as well
as distances and node-to-node connectivity have grown by many orders of
magnitude as compared to any time in the history of warfare. Many tradi-
tional limitations that used to constrain and shape the nature of the battle
command, such as the hierarchical flow of information and control, are now
open to rethinking.
In addition to greatly improving the flow of information, the technology
also helps make better use of the information. The ubiquitous presence of
computers in the battlespace at all levels of command became a norm in the
last 10–15 years and opened the door to the emergence and acceptance of
various computerized aids: visualization of the situation, exchange and inte-
gration of information, course-of-action planning, logistics, and maneuver
execution control. These aids are simultaneously multiplying the need for
information flows and enabling the decision maker to deal effectively with the
proliferation of information. They also bring new challenges by both reduc-
ing and increasing, in different ways, the fog of war.
Unlike the push factors, the pull has more to do with the changing nature
of the operational environment and opposing forces, although these are also
driven to a large extent by technological and economic forces. For example,
as recently as the 1980s, the U.S. military operated in a bipolar world, with
a clearly defined primary potential opponent. The geographic places and the
modes of likely confrontations were well understood. But then the collapse of
the Soviet Union—partly due to the information revolution—shattered the
clarity of threats faced by the U.S. military. Instead came a bewildering array
of often unpredictable conflicts scattered worldwide.
Without the well-defined expectations of where the next war may occur,
the cold war approach of prepositioning U.S. forces at strategically located
bases becomes impractical. Besides, without a massive and highly visible enemy
like the Soviet Union, the U.S. public is less willing to pay for a large number
of military personnel. This creates the need for ways to shuffle the limited
number of U.S. forces around the globe, from one conflagration to another,
rapidly and efficiently. The force has to become lighter and easier to deploy
to far-flung places. Its design, platforms, and weapons, and its command and
control have to adapt to the new realities.
In addition to helping undermine the Soviet Union and release a plethora
of other evildoers, the information revolution has produced other unexpected
ramifications. Advances in communications enable millions of Americans back
home to watch the wars—and their often gory outcomes—literally as they
unfold. The brutal emotional impact of real-time video beamed across the
world from the battlefront has no precedents in human history. Besides, the
new precision weapons, also engendered in part by advances in information
technologies, lead the public to expect surgical strikes without unnecessary
civilian deaths. With such images and expectations, the public’s tolerance for
casualties among our own troops as well as among enemy civilians has dimin-
ished dramatically. Today, the so-called “CNN effect” imposes new pressures
on a military commander: images of civilian casualties caused by a single stray
bomb can produce enormous, strategically significant outrage around the
world and in his12 own country. Somehow, the commander has to accomplish
his mission under the constraint of a public demand for low-casualty warfare.
In combination, these diverse driving forces have made a trend toward a new
battle command both feasible and inevitable.

THE TRIANGLE OF COGNITIVE PROCESSES


Like any other real-world process, the battle command is driven by objec-
tive relations and quantitative dependencies within its process. To invent a
new battle command, we had to attempt to understand, measure, and ana-
lyze such dependencies. Among the multiple interconnected phenomena of
battle command, particularly salient are three processes: situation awareness,
collaboration, and decision making (Figure 0.1).
Decision making produces the ultimate product of the battle command:
decisions. Depending on the echelon and role of the battle command element,
a decision can range from applying a particular weapon to a particular target
to a scheme of maneuver for an entire campaign. Accuracy and timeliness of
decisions are the key measures of the quality of battle command. This quality
is critically dependent on the extent to which the decision makers understand
the situation and on the degree of cognitive load they experience.
The situation awareness here is the process by which the decision makers
absorb the available information (inevitably incomplete, delayed, and often
erroneous) and attempt to form a correct picture of the events and forces in
the battle. Better understanding leads to making better decisions. Unfortu-
nately, arriving at a better situation awareness is an expensive process. It con-
sumes the attention of the decision makers and imposes a greater cognitive
load on them, which can in turn decrease the quality and timeliness of their
decisions.

Figure 0.1. Situation awareness, collaboration, and decision making are key phenomena of battle command.

Collaboration is a process in which decision makers exchange their decision


options and their understanding of the situation in order to produce a better,
more complete, consistent, and correct understanding of the situation and the
ultimate decisions. Collaboration usually improves the situation awareness
and the decision quality. However, when the cognitive load on the decision
makers is high, as is so often the case in battle command, the process of colla-
boration can also become a fatal distraction.
Thus, the three processes are intricately interconnected. They both sup-
port and impede each other. In this book, much of the discussion is organized
around these three fundamental and closely linked processes.

THE ROAD MAP OF THE BOOK


We begin by exploring the nature of battle command in its historical con-
text. Whether practiced by a tribal chieftain of antiquity or a captain of
tomorrow who is surrounded by technology, most elements comprising the
battle command are timeless and remain substantially invariant, as argued in
chapter 1. The tasks of battle command, such as making decisions and com-
municating them effectively, and its profound dilemmas, such as prioritizing
goals or dealing with noncombatants, are stubbornly resistant to changes in
the technological and political environment of warfare. And yet, one must
not overlook the profound ways in which these permanent elements of battle
command manifest themselves and affect commanders differently depending
on the technological evolution of war. In particular, the context of battle com-
mand does change in important ways, such as the continuing drastic increases
of the last two centuries in the physical scope of the battlespace as well as in
the structural complexity of military forces and systems. Together, contex-
tual and technological changes introduce new command challenges, such as
greater agility and simultaneity of actions, as well as the demand for more
precise, surgical operations. However, we stress, the most important constant
of battle command is the commander himself, and a technological advance in
this field can succeed only by matching the new technology to the intricate
strengths and weaknesses of the human mind.
Yet, it is a very tall order to match technology and the human mind. The
complexities are immense, and the only effective techniques to deal with
them are experimental—essentially trial-and-error methods. In chapter 2, we
describe our approach to solving these challenges: the series of experiments
that comprised the core of the MDC2 program, in which we explored various
arrangements of human command cells and computer-based tools. This is
the place where the setting of the military scenarios and the physical arrange-
ments of our experiments are introduced. We describe the history of the pro-
gram, the typical battles portrayed in the experiments, and the tools we built
to help commanders fight the battles.
Having briefly introduced the battle command tools constructed and exp-
lored in the course of our experiments, in chapter 3, we offer the technically
minded reader a detour to explore the nuts and bolts of the tools. The chap-
ter starts with the overarching architecture, continues into a mapping of the
command functions and corresponding tools and then shows how they work
together in an illustrative scenario, and concludes by explaining the underly-
ing technology of the tools used in the experimental battle command support
environment.
With the profusion of functions, tools, and ramifications of battle com-
mand, one aspect—situation awareness—stands out as uniquely pervasive
and influential. That is why the next several chapters focus almost exclusively
on this all-important underpinning of battle command. Chapter 4 introduces
the fundamentals of situation awareness, beginning with the definitions of
situation awareness at several distinct levels and its exceptional significance
to battle command. To provide value to commanders, the design of a battle
command tool must pay careful attention to its ability to deliver situation
awareness. The chapter discusses specific recommendations on how to meet
such design objectives: the approach to requirements analysis, design, and
evaluation of systems that support situation awareness. It also points out the
serious limitations of our current understanding of situation awareness, especially as it applies to command teams.
Continuing the discussion of the theoretical foundations of situation aware-
ness, in chapter 5 we describe our experimental approach to measuring and
analyzing the processes by which warfighters develop situation awareness,
the role of situation awareness in effective decision making, and its ultimate
impact on the battle outcome. Our experimental findings highlighted situa-
tion awareness as the linchpin of the command process, as a key factor that
determined the efficacy of all its elements—from sensor and asset control to
decision quality and battle outcome. We explain how we gradually developed
the methods of collecting the relevant data and measuring both the so-called
technical situation awareness (what the technical systems make available to
the human mind) and the cognitive situation awareness (what the human
mind actually absorbs).
The actual findings resulting from such data collection and analysis are the
topic of chapter 6. We illustrate them with both data and examples. Some of
the findings serve as possibly the first-ever quantitative validation of long-
standing intuitive expectations of military practitioners. For example, we see
quantitative evidence that the difference between situation awareness levels
of the opponents is among the most influential factors governing the battle
outcome. Other findings are far from obvious, perhaps counterintuitive, and
even somewhat disturbing, such as the surprisingly large gap between the
information made available to the command cell and its actual—and very
often incorrect—perception of the situation.
When a single mind has difficulties understanding a situation, the time-
honored solution is to bring more minds to bear on the problem; collabo-
ration always helps. Yes and no, says chapter 7. In fact, collaboration can
be a double-edged sword; it can both help and hurt. Our findings on the
role and value of computer-assisted collaboration on battle command are
decidedly mixed. On one hand, we find that network-enabled command
tools can result in remarkably effective cooperation of far dispersed forces,
when they share information and resources in ways not imaginable without
such networks and tools. On the other hand, we discover that common and
well-accepted approaches to computer-assisted collaboration can be quite
ineffective and even counterproductive. We also find that under some con-
ditions, collaboration can reinforce incorrect conclusions and be outright
dangerous.
Having formed situation awareness, with or without the benefits of col-
laboration, the commander must make decisions. Although in our experi-
ments we see numerous positive contributions of battle command tools to
the commanders’ decision processes, in chapter 8 we elect to focus on the
shortcomings. After all, the main objective of our experiments was to seek
and iteratively correct such shortcomings. We found, not unexpectedly, that
the major increase in information flows available to the commander comes at
a price. Too often, for example, the richness and detail of the available infor-
mation led a commander to chase relatively insignificant immediate actions
while abdicating his responsibility for managing the big picture of the battle.
The same abundance of easily accessed information often seduced a com-
mander into delaying a decision while collecting yet more precise and reliable
information.
In the final chapter, we offer the reader our conclusions. There is a wide
variation in our confidence in these conclusions: some are well supported
by our research while others are rather conjectural. There are also several
distinct perspectives covered in the conclusions. Some are focused on tech-
nical aspects of building and testing battle command technologies; others
offer observations on the nature and practice of network-enabled warfare. Yet
others talk about the ways to command a battle in such warfare.
In all cases, the reader should be mindful that this volume is a work of
multiple authors, and not every author agrees with every opinion expressed
in this book. And, of course, the authors’ opinions do not represent those of
their employers, DARPA, U.S. Army, or any agency of the U.S. government.
CHAPTER 1
Variables and Constants:
How the Battle Command of
Tomorrow Will Differ (or Not)
from Today’s
Richard Hart Sinnreich

To be a successful soldier, you must read history . . . Weapons change, but man, who uses them, changes not at all. To win battles, you do not beat the weapons—you beat the man.
—George S. Patton

In May 1916, off Denmark’s Jutland Peninsula, a naval battle took place for
which Great Britain’s Royal Navy had been preparing for more than a decade
and which it had sought in vain to bring about since the beginning of
World War I. For a day and a night, British admiral Sir John Jellicoe’s Grand
Fleet sparred in a roar of guns and hiss of torpedoes with German admiral
Reinhard Sheer’s High Seas Fleet.
In risking battle against his numerically superior adversary, Sheer’s inten-
tion was to lure a detachment of British warships out of port and destroy it
in detail, whittling away the Royal Navy’s tonnage advantage. Threatening
British shipping navigating the straits between Denmark and Norway, he
hoped, would entice Jellicoe’s fast but lightly armored battlecruisers into an
ambush by the more powerful battleships of the High Seas Fleet.
Instead, warned by intelligence of the intended German sortie, Jellicoe
took his entire fleet to sea even before Sheer weighed anchor. Misled by a
misunderstood radio intercept, however, Jellicoe, like Sheer, expected to face
only his enemy’s battle cruisers.
Accordingly, on the afternoon of May 31, distant from the rest of the British
fleet by more than 50 miles, Jellicoe’s battlecruisers commanded by Vice Ad-
miral David Beatty found themselves engaging their German counterparts on
a course that unchanged would have taken them directly under the guns of
Sheer’s battleships. Only a last-minute warning by one of Beatty’s light cruis-
ers alerted him in time to reverse course.


What the goddess of fortune gave with one hand, however, she took back
with the other. Thanks to signaling problems and its commander’s reluctance
to act without orders, Beatty’s most powerful squadron lagged behind, depriv-
ing him of its firepower for crucial minutes. That and poor gunnery cost the
British two battle cruisers. Fortunately, Beatty’s detached Third Battlecruiser
Squadron arrived just in time to even the odds and allow Jellicoe to deploy his
battleships into fighting formation before the arrival of Scheer’s main body.
What should have followed was the decisive clash of battle fleets for which
both navies had been built. Instead, astonished to find himself confronting
not just battle cruisers, but rather the entire Grand Fleet, Scheer turned his
ships on their heels and fled, a maneuver that the British at first failed to
detect, then failed to exploit.
Thirty minutes later, however, Scheer unaccountably reversed course once
again, in the process exposing his ships in column to the fire of Jellicoe’s battle
line. Awakening to his error, he then turned back a second time, covered by
his battle cruisers and torpedo attacks. Again Jellicoe failed to pursue, and
with night falling, the two fleets separated.
Both fleets now altered course, Jellicoe hoping to intercept the Germans
at daybreak, Scheer seeking only to evade further action and return to port. In
the darkness their courses converged, the Germans actually passing through
the rear of the British fleet. The British warships detecting them, however,
neither engaged them nor informed Jellicoe, who thus remained ignorant of
their proximity. At dawn the fleets were miles apart, and the Royal Navy had
lost its golden opportunity to destroy its German rival once and for all.1
Tactically, honors were nearly even. The British lost three modern battle
cruisers and three older cruisers, the Germans one battle cruiser and four
light cruisers, both in addition to smaller vessels. Psychologically, however,
Jutland was an immense disappointment to Britain. Unchallenged at sea for
more than a century, the vaunted Royal Navy had failed in a head-to-head
encounter to destroy the smaller fleet of what amounted to an upstart naval
power.
In a penetrating examination of the evolution of the Royal Navy between
Trafalgar and Jutland, British historian Andrew Gordon traced the factors
that led to that embarrassing result. The most important was a pervasive
change in the Royal Navy’s approach to battle command, reflecting above all
the impact on the Navy’s institutional culture and leadership of revolutionary
technological change during a century without major naval conflict.2
In the process, the initiative and audacity that had won Britain command of
the sea surrendered to a centralized and mechanical battle-command system
that proved slow to recognize opportunity and unable to exploit it. At Jutland
as it has so often in the history of war, numerical superiority alone proved
unable to compensate for that deficiency.
A recent U.S. Army paper defines battle command succinctly as “the art
and science of applying leadership and decision making to achieve mission
success.”3 The elements of this definition deserve attention.
To begin with, the definition asserts that battle command is both an art
and a science. The former is a creative activity not susceptible to objective
confirmation or prediction, the latter a process of systematic discovery that
ultimately must satisfy both requirements. By the definition, battle command
somehow must reconcile these incompatible qualities.
Second, the definition implies a predetermined military objective. Battle
command seeks mission success. But just what that entails must be specified
elsewhere. So described, battle command differs markedly from strategic and
even operational direction, in which whether and why to accept or decline
battle is a preliminary and often difficult decision.
Finally, battle command is asserted to comprise two separate, albeit re-
lated functions—leadership and decision making. Both are ultimately soli-
tary activities. They differ in that respect from control, which significantly
appears nowhere in the definition. Control, the application of regulation
and correction, can be and typically is a corporate process, and, as with
defining mission success, apparently is something distinguishable from com-
mand, a view tacitly reflected in the common pairing of the two in military
terminology.
Together, these elements of the definition describe an idiosyncratic but
nonetheless reproducible activity. Reduced to its essentials, the definition
portrays battle command as a creative process constrained to a predefined
objective and conforming, in some measure at least, to confirmable principles
the application of which can produce predictable results. Indeed, that is the
way battle command is taught in most professional military schools.
History tells a rather different story. Examining the achievements of suc-
cessful battle captains, one can’t avoid concluding that much more is going on
than just the application, however artful, of reliable principles and practices.
Successful combat commanders display an almost uncanny ability to sense the
battlespace, anticipate their enemies’ behavior, and create and exploit oppor-
tunities where none previously were visible.
The late Air Force colonel John Boyd tried to capture that special talent
in his now-famous “OODA Loop”—Observe, Orient, Decide, Act.4 Like
the fighter pilots from whom Boyd derived his theory, successful battle com-
manders routinely execute that cycle more rapidly and effectively than their
adversaries, gaining a progressively greater advantage with each successive
engagement.
As Boyd himself recognized, however, the problem confronting the battle
commander differs in several crucial respects from that facing the fighter pilot.
The difference affects all four elements of the OODA loop, but especially the
last. For, whereas only his own reflexes and tolerances, the capabilities of his
aircraft, and the laws of physics constrain the fighter pilot’s ability to act, the
battle commander must act through others. The translation from decision to
action thus is much less straightforward, more vulnerable to miscommunica-
tion or misperception, and above all, more sensitive to human error and plain
bad luck.

THE TIMELESS CONDITIONS OF BATTLE


That is true to some degree of any collective human enterprise, but especially
of war. No one understood that better than Carl von Clausewitz, who high-
lighted the powerful impact on commanders of what he called the “realms” of
war: danger, physical exertion and suffering, uncertainty, and chance.5 As the
Battle of Jutland confirmed, those conditions are by no means unique to war
on land. But it is in land combat that they tend to appear in their most varied
and visible forms.

Danger
Danger affects battle command on several levels. At the most basic, it
requires those at the sharp edge to suppress every instinct of self-preservation
for purposes that rarely will be as visible to them as to their leaders. Through-
out history, a focal purpose of military socialization and discipline has simply
been to inculcate resistance to fear.6
Even among trained and disciplined soldiers, however, that resistance has
limits. As every experienced commander knows, the well of courage isn’t
bottomless. Today, when democratic societies, at least, no longer will tolerate
the harsh discipline that stiffened Frederick’s lines at Leuthen or Wellington’s
squares at Waterloo, fighting men and women must be convinced in other
ways to expose themselves voluntarily to death or serious injury.7 As some
commanders relearned painfully during the Vietnam War, nothing can more
easily shatter that conviction than suspicion that their leaders don’t know
what they’re doing. Battle command thus directly affects the willingness of
soldiers to fight.
Danger also affects human perception. Modern sensors have by no means
diminished soldiers’ propensity under threat to misperceive, exaggerate, and
fantasize. Clausewitz’s notorious “fog” of war is much less often the product
of an outright lack of information than of misreading the information avail-
able. Like desert heat, danger tends to distort the vision and generate false
images. In the confusion of battle, Clausewitz commented, “it is the excep-
tional man who keeps his powers of quick decision intact.”8 More information
alone therefore is no guarantee of effective battle command. Instead, what
matters more is the judgment through which that information is filtered and
translated into knowledge.
Finally, danger affects the commander directly, less often in terms of physi-
cal risk than through the dilemmas it poses. Choices that may seem obvious
in hindsight rarely present themselves so clearly at the moment of decision.
In battle, every choice is fraught with peril. At Jutland, as Winston Churchill
justly acknowledged, Jellicoe was “the only man on either side who could lose
the war in an afternoon.”9 For Jellicoe, therefore, the perceived cost of defeat
more than counterbalanced the will to win. No battle-command system can
relieve the commander of the moral burden such dilemmas entail.

Physical Exertion and Suffering
The psychological and moral effects of danger only are aggravated by the
sheer physical demands on soldiers and leaders alike. Ground combat is ardu-
ous and exhausting. The ground itself is an unremitting obstacle, never mind
weather and the enemy. Brain and muscles tire, food and sleep are erratic and
often insufficient, and every successive casualty compounds the psychological
pressure on the survivors.
Machines ease but by no means eliminate these hardships. In return, main-
taining the machines adds its own burdens. In the nature of ground combat,
circumstances only rarely allow soldiers to turn their fighting platforms over to
others to arm, fuel, and maintain while they themselves recuperate. The con-
dition of weapons alone, therefore, is no guarantee of the condition of those
manning them, and the commander who measures the combat effectiveness of
his unit solely by the preparedness of his weapons is asking for trouble.

Uncertainty
In 1927, German physicist Werner Heisenberg proposed to his colleagues
that perfect knowledge on a quantum scale was unattainable. Skeptics objected
that this merely reflected the limits of measurement. But Heisenberg was able
to demonstrate that imperfect knowledge is built into the very fabric of the
subatomic universe.
A century earlier, Clausewitz reached a similar conclusion about war. “War,”
he wrote, “is the realm of uncertainty; three-quarters of the factors on which
action in war is based are wrapped in a fog of greater or lesser uncertainty.”10
As in physics, that uncertainty is in great measure insensitive to the means by
which information is acquired and transmitted. At Jutland, the same intel-
ligence system both alerted Jellicoe and misinformed him.
Modern technology certainly has vastly improved armies’ abilities to acquire
and share information. Netted communications, global positioning, enhanced
sensors, and overhead platforms all have significantly increased the informa-
tion available to commanders.
And yet, as recent conflicts reveal only too clearly, uncertainty persists.
Enemy and friendly units appear where they have no business being. Tar-
gets turn out to be not what they seemed. And commanders with abundant
communications still manage to misread the battlespace, the enemy, and each
other.11 Such difficulties have plagued armies and navies since the dawn of
organized warfare. While technology may be able to diminish them, there is
no convincing evidence to date that it ever will banish them entirely.

Chance
Finally, chance or, more broadly, what Clausewitz called “friction,” domi-
nates every battlefield. “Action in war,” he wrote, “is like movement in a resis-
tant element. Just as the simplest and most natural of movements, walking,
cannot easily be performed in water, so in war it is difficult for normal efforts
to achieve even moderate results.”12
In military terms, friction denotes the unforeseeable accidents, incidents,
delays, errors, misunderstandings, misjudgments, and freaks of nature that
perversely intervene between plan and execution. It is famously reflected in
the proverb that begins “For want of a nail, a shoe was lost” and ends with
“A kingdom was lost, and all for want of a nail.” At Jutland, friction bedeviled
both sides from the battle’s beginning to its end.
Military and naval operations are by no means uniquely susceptible to this
problem. In a recent book, surgeon and medical columnist Atul Gawande
insightfully described the myriad ways in which unforeseen problems can
complicate even routine surgical procedures.13
The most careful planning cannot altogether prevent such complications.
Accordingly, for the battle commander as for the surgeon, coping with the
unforeseen is an unwelcome but also inescapable requirement.

THE CHANGING CONTEXT OF COMMAND


While the preceding challenges to effective battle command are timeless,
others have changed as war itself has changed. Developments in technology,
military organization, and the battlefield environment all have significantly
altered the exercise of command. In land warfare, three developments espe-
cially have influenced that evolution.

Battlefield Enlargement
The first is the progressive enlargement of the battlefield and the increas-
ing number and dispersal of fighting formations. Until very recently in his-
torical terms, army commanders could and did exercise tactical command
directly, basing their decisions on what they could see with their own eyes,
transmitting orders verbally or at worst by messenger, and exerting leadership
by personal example.
To an Alexander, Julius Caesar, or Gustavus Adolphus, the modern injunc-
tion to “lead from the front” would have been superfluous. Throughout
much of military history, effective command could be exercised nowhere else.
The annals of warfare are replete with examples of battles won through the
commander’s direct personal supervision or lost through his incapacitation,
capture, or flight.
In the mid-nineteenth century, that began to change. At Waterloo in 1815,
Wellington and Napoleon still could exercise direct tactical command, observ-
ing virtually the entire battlefield and moving their units like chess pieces.
Nothing more vividly reveals the dependence of both armies on that personal
involvement than the errors committed by Bonaparte’s subordinates during
his brief infirmity in midbattle and Wellington’s dramatic personal interven-
tion to mass his musketry against the Old Guard at the battle’s climax.
Fifty years later in the Wilderness, neither Ulysses S. Grant nor Robert E.
Lee could even begin to exert similar personal direction. Not just the diffi-
culty of the terrain but also the sheer scale of the battlefield and the dispersal
of units made continuous observation and influence virtually impossible. Both
commanders could and did intervene at a few crucial moments. In large mea-
sure, however, having brought their forces to battle, they were compelled to
leave its tactical direction in the hands of their subordinates.14

Organizational Complexity
During the next half-century, as weapons diversified and their lethal reach
expanded, the enlargement of the battlefield was paralleled by a similar
increase in organizational complexity. The late nineteenth century saw the
multiplication of command echelons, emergence of battle staffs, accelerat-
ing functional specialization, and the introduction of new technologies from
motorization to electronic communications.
Reflecting on these developments, General Alfred von Schlieffen, chief of
the German general staff from 1891 to 1906, predicted that future command-
ers endowed with modern communications no longer would lead from the
front, but instead would direct operations by telephone and telegraph from
distant headquarters where, “seated in a comfortable chair, in front of a large
desk, the Modern Alexander will have the entire battlefield under his eyes, on
a map.”15
His vision proved far too sanguine. When World War I erupted in 1914,
senior commanders found themselves little more able to exert direct influence
on the battle than their predecessors of half a century earlier. Instead, wedded
to centralized direction on battlefields the size and complexity of which far
outstripped commanders’ abilities to sense, assess, and communicate, armies
and their commanders collided like rudderless ships.
In the end, the much criticized linearity characteristic of so many World
War I battles was not simply a product of dim-witted leadership, but instead
reflected as much or more the sheer difficulty of reconciling central tacti-
cal direction with decentralized execution by commanders and subordinates
equally unprepared by doctrine or training for its demands. Only toward the
end of the war did the Germans, acknowledging the problem, at last begin to
develop tactics relying on more decentralized command arrangements.16
Meanwhile, the twentieth century also saw a steep increase in the number,
types, and effects of weapons, and with it, problems in harmonizing their
employment. Until the 1904 Russo-Japanese War, for example, artillery usually was positioned forward and fired over open sights. Even then, coordination
with supported infantry was anything but perfect, as the failure of Confederate artillery to suppress Federal defenses on Gettysburg’s final day revealed.
The withdrawal of artillery into defilade at the beginning of the twentieth
century merely compounded the problem. Throughout World War I, tele-
phonic communications routinely proved inadequate to coordinate fire and
movement. Unable to communicate reliably in real time, commanders were
compelled to fall back on elaborately planned and tightly scheduled bombard-
ments that once begun could be modified only with great difficulty. Such fires
sometimes proved nearly as deadly to supported infantry as to the enemy.
So too the integration of cavalry with infantry, a long-standing problem
that continued to plague commanders as late as the Boer War.17 The tank,
when it eventually replaced horse cavalry, proved no easier to tame. Absent
portable communications, tank-infantry coordination depended on visual sig-
nals. The result was to slow armor to the pace of its accompanying infantry,
forfeiting even the modest increase in tempo offered by early tank models.
Still more was that true of early air operations, coordination of which with
ground operations remained impractical until the development and fielding
of reliable air–ground radio communications.
By World War II, many of these technical obstacles had been overcome,
but only at the price of a quantum increase in the complexity besetting tactical
commanders. Among the major belligerents, only Germany really had begun
to address this problem when the war began.18 Other armies, including our own, learned more expensively on the battlefield. Even then, combined arms
integration remained uneven throughout the war.

Multiplication of Domains
The final key development affecting battle command, barely glimpsed in
World War I but emerging full-blown in World War II, was the transmuta-
tion of a two-dimensional battlefield into a multidimensional battlespace, in
which maneuver, fires, aircraft, and electronics all found themselves com-
peting for command attention. As the complexity of the command problem
increased, the very decentralization essential to cope with battle’s increased
scale and fluidity found itself in competition with the need to synchro-
nize domains, preclude their mutual interference, and achieve economy of
force.
Throughout World War II, and in both major theaters, tactical integration
of land, sea, and especially air capabilities prompted repeated disputes, some
of which persist today. For example, the Fire Support Coordination Line,
until recently the focus of bitter doctrinal debate between U.S. air and ground
forces, originated as the Bomb Line, a safety measure established in mid-1944
in response to a much-publicized episode of air–ground fratricide.
Conflicts since then, including our own recent engagements in Afghani-
stan and Iraq, reveal that this dilemma has by no means been resolved. On
the contrary, the proliferation of long-range missiles, fixed and rotary wing
aircraft, and unmanned aerial vehicles only has aggravated it. Combined with
an enlarged irregular threat and battlefield transparency that increasingly
subjects even minor tactical miscues to immediate public scrutiny, the result
has been to impose unprecedented pressures on tactical commanders at every
level.

KEY COMMAND TASKS


More than a century ago, Ulysses S. Grant insisted that “The art of war is
simple enough. Find out where your enemy is. Get at him as soon as you can.
Strike him as hard as you can, and keep moving.”19 Much later, British Field
Marshal William Slim described the battle commander’s challenge in almost
identical language. Quoting a former NCO instructor, he wrote, “There’s only
one principle of war, and that’s this: Hit the other fellow as quick as you can
and as hard as you can, where it hurts him most, when he ain’t looking.”20
Powerful as these injunctions are, they also conceal some practical dilem-
mas. Finding out where the enemy is takes time and so also does deciding
what will hurt him the most. Both compete with getting at him as quickly
as possible “when he ain’t looking.” Today far more than in Grant’s day or
even Slim’s, striking him hard may require bringing to bear capabilities dis-
tant from the fight and which often are not under the immediate direction
of the commander seeking to apply them. Ditto for finishing the fight and
moving on.
Above all, each of these tasks increasingly must be accomplished by smaller
and more widely dispersed units. Reconciling the increased complexity of
battle command with that devolution of responsibility downward is perhaps
today’s preeminent conceptual as well as technical military problem. As a
recent U.S. Army capstone concept declared, “The conduct of simultaneous,
high-tempo, non-contiguous operations executed by Future Force formations
at varying levels of modernization and distributed broadly across the area of
operations will place very high demands on Future Force leaders with respect
to both the art and science of command.”21
It was precisely to help address that requirement that DARPA and the
U.S. Army launched the Multicell and Dismounted Command and Control
(MDC2) project. Before examining the project’s conduct and insights in the
chapters that follow, it may be worth reviewing the recurring tasks confront-
ing any tactical commander and their susceptibility to assistance through net-
worked automation.

Diagnosing
The first and in some ways most important requirement of effective
battle command is accurately reading the battlefield. Terrain appreciation,
knowledge of friendly unit locations and conditions, and intelligence about
the enemy all contribute. But all are essentially static, whereas battle itself is
dynamic. War, Clausewitz reminds us, “is not the action of a living force upon
a lifeless mass . . . but always the collision of two living forces.”22
Accordingly, successful battle command presumes the ability to infer
dynamic patterns from fragmentary and inevitably incomplete information,
sense them as they shift, and project them forward in time. How far forward
and in what detail will vary with the situation and level of command, but not
the basic requirement.
Moreover, because such inferred patterns inherently are hypothetical, they
must be tested repeatedly, to the point where the commander may have to
take risk just to confirm them. The extent of that risk will vary, but it rarely
will be wholly absent. Not the least difficult command dilemma is deciding
how much risk to accept simply in order to learn.
Networked automation by itself can’t eliminate that dilemma, but it does
promise to ease it. By displaying tactical information in ways that assist com-
manders to infer patterns, associating reports that might otherwise seem
unrelated and alerting commanders in a timely way to information tending
to alter or invalidate their forecasts, automation can enhance and accelerate
diagnosis, and with it the entire decision-making process.

Planning
Given accurate diagnosis, battle planning largely is a matter of problem
solving. For the tactical commander unlike his superiors, the mission typically
is prescribed, along with the constraints within which it must be pursued and
the assets expected to be available to accomplish it. While that makes tacti-
cal planning simpler than strategic or operational planning in one respect, in
another it is more complicated, for battles tend to be much more volatile than
the strategic and operational conditions that prompt them.
Prussian general Helmuth von Moltke’s much quoted warning that “no
plan extends with any degree of assurance beyond the first encounter with
the enemy’s main force” acknowledged this volatility.23 For the tactical com-
mander, therefore, planning is less a matter of devising a template with which
to guide the entire conduct of the battle than of arranging resources to begin
it advantageously and retain that advantage as it evolves. In the end, the litmus
test of battle planning is not how perfectly it anticipates events, but instead
how well it promotes rapid and effective adjustment to them as they occur.
Obviously, the more complete and reliable the information on which to
base planning, the better. But because battle is a two-sided contest in which
time plays no favorites, deferring engagement in the hope of acquiring better
information easily can become self-defeating. One of the central challenges
of battle command is deciding when such delay is more likely to increase than
to diminish uncertainty and its associated risks.
Above all, like diagnosis, planning to be useful must be continuous. At higher
levels of command, additional personnel are available for that purpose. Smaller
formations enjoy no such luxury and must plan with the same personnel who
execute. Enabling such units to do so more rapidly and effectively is among the
more important potential contributions of networked automation.

Deciding
If battle planning is largely problem solving, decisions are the mechanisms
through which solutions are translated into intentions. In battle against a
competent enemy, however, solutions rarely are self-evident and even less
often final. Moltke exaggerated only slightly in remarking to his staff that
given three possible courses of action, the enemy almost invariably could be
counted on to choose the fourth.24
A great deal has been written about the decision-making process, but in
the end much about it remains obscure, which is one reason efforts to date
to automate it have made relatively modest progress. In the military, early
attempts to apply artificial intelligence to tactical decision making have not
fared well. TacFire, for example, the U.S. Army’s first true automated artillery
fire direction system, became so notorious for misdirecting fires and dimin-
ishing artillery’s responsiveness that gunners eventually began turning the
system off after using it to help generate their initial fire plans. If automation
performed so poorly in a relatively quantifiable matter such as fire distribu-
tion, how well would it be likely to satisfy the significantly more complex
decision-making requirements of battle command?
Instead, automation is much more likely to be successful in assisting decision
making than in replicating it. First, in addition to facilitating more rapid and
effective diagnosis, automation may help the commander judge more accu-
rately the time-space implications of choosing a particular course of action.
Since the enemy has a vote, that estimate always will be imperfect. But given
accurate information on the terrain and friendly capabilities, automation can
help reduce the variables with which tactical decision making must deal.
Second, automation can help trigger decision making by alerting the com-
mander in a timely way to the occurrence of events likely to require a modi-
fication of his intentions. As with projecting courses of action, such triggers
are likely to be imperfect and a prudent commander will avoid becoming
overreliant on them. But used judiciously, they can enhance the commander’s
sensitivity to changing circumstances.
Finally, automation may allow some decisions to be prespecified. It thus
may allow a more prompt reaction to certain events. For example, detection
of an air-defense threat to a critical airborne sensor might automatically trig-
ger counterfire or the movement of the sensor out of the threatened airspace.
Or the detection of an enemy force on an open flank might automatically
generate a warning to the nearest friendly unit. As TacFire’s example revealed,
such a capability must be used with caution. But the potential is there.
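Such prespecified responses amount, in effect, to simple condition–action rules scanned against each incoming report. A minimal sketch of the idea follows; every event field, threshold, and response in it is notional, invented for illustration rather than drawn from any fielded system:

```python
# Notional condition-action triggers for a battle-command aid.
# All event fields, thresholds, and responses here are illustrative only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Trigger:
    condition: Callable[[dict], bool]  # test applied to each incoming report
    response: Callable[[dict], str]    # prespecified alert or action

triggers = [
    # Air-defense threat near a critical airborne sensor: move the sensor.
    Trigger(lambda e: e["type"] == "air_defense" and e["range_km"] < 10,
            lambda e: f"Move sensor out of threatened airspace near {e['location']}"),
    # Enemy force detected on an open flank: warn the nearest friendly unit.
    Trigger(lambda e: e["type"] == "enemy_contact" and bool(e.get("flank_open")),
            lambda e: f"Warn nearest unit: enemy contact at {e['location']}"),
]

def process(event: dict) -> list[str]:
    """Return the prespecified responses fired by a single incoming report."""
    return [t.response(event) for t in triggers if t.condition(event)]

print(process({"type": "air_defense", "range_km": 6, "location": "NV1234"}))
# -> ['Move sensor out of threatened airspace near NV1234']
```

As TacFire’s example counsels, the value of such a mechanism lies less in autonomy than in transparency: a short, legible rule list that the commander can inspect, override, and, when necessary, turn off.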

Delegating
Many years ago, a respected senior U.S. Army officer became widely
known for the admonition that “Only those things are done well that the
boss checks.” Whatever the case for that viewpoint in preparing for battle,
it invites failure once the fight begins. To command effectively in modern
battle is to delegate, and delegation without discretion is meaningless. In
Vietnam, the attempt of progressively senior commanders to dictate the
actions of the same engaged unit from command helicopters orbiting over
the unit and each other proved as dysfunctional as it would be infeasible in a
more distributed battle.25
Unfortunately, as communications expand and battlefield transparency
increases, the temptation to centralize direction can be overwhelming. Its
invariable result is to submerge the commander in tactical detail while obscur-
ing the battle’s overall pattern. History is strewn with the bones of armies
whose commanders allowed themselves to become fixated on only one piece
of the battle.
Avoiding that and delegating effectively requires above all mutual confi-
dence between commander and subordinate in the former’s intentions and the
latter’s discretion. Some of that can be established in planning, but doctrine and
training are far more important. Such mutual confidence is the more essential
because commander and subordinates rarely will perceive the battle identi-
cally. Even with reliable communications and technical aids such as Blue force
tracking systems, each soldier’s and leader’s view will be conditioned by his or
her immediate surroundings and psychological makeup. The same informa-
tion thus rarely will be processed mentally in exactly the same way.
Instead, the battle commander must be able to rely on subordinates’ under-
standing of his or her intentions and their ability and willingness to act as
the commander would in their place in the circumstances confronting them.
Reflecting precisely that experience during Operation Iraqi Freedom, one
rifle company commander later recalled “At one point, I had five separate ele-
ments, with four of them in contact, and I thought I was losing control . . . In
slow motion, I started to realize that every single element was doing exactly
what I would have told them to do if I was standing there next to them.”26
Here too, networked automation can help, provided it is used with self-
discipline to refine and enhance communication of the commander’s inten-
tions rather than to hamstring subordinates with detailed orders that may be
utterly inappropriate to the conditions they face. In this area more than any
other, networked automation easily can be a two-edged sword, and how it is
used finally will determine whether it enhances or hinders its users’ tactical
performance.

Synchronizing
Army Field Manual 3-0, Operations, defines synchronizing as the process of
“arranging activities in time, space, and purpose to mass maximum relative
combat power at a decisive place and time.”27 The need to synchronize goes
back a very long way, ultimately to the first moment when hand and projec-
tile weapons appeared together on the battlefield. The Roman javelin, for
example, temporarily could deprive its targets of the use of their shields, but
unless the unit after casting its javelins followed up quickly with hand-to-hand
engagement, that momentary advantage easily could be lost.
Since then, synchronizing has become more complicated with every
improvement in weapons technology and, more recently, the expansion of
battle’s domains. In combining arms and services, the most valuable benefits
accrue from complementary rather than just additive effects. As with combin-
ing javelin and sword, however, complementarity is sensitive to timing. Fires
delivered too early may forfeit their utility in allowing unhindered maneuver.
Sensors shifted too late may deprive a moving formation of the early warn-
ing that its deployment assumed. The retention of a mobile reserve may be
pointless if the routes by which it might have to be committed weren’t earlier
reconnoitered and cleared.
In this area more than any other, dispersal and its accompanying devolution
of tactical responsibility downward have radically increased the burden on
commanders. Platoons and companies routinely must be able to synchronize
activities and effects previously managed by their parent formations. While
that burden occasionally may be lessened by the direct intervention of the
higher commander, such intervention desirably should be infrequent.
By allowing the virtual rehearsal of activities before the fight begins and
the adjustment of their relative timing as it proceeds, automation can ease
the synchronization burden. In the best case, by enhancing common situa-
tion awareness among the combining activities, networked automation may
increase subordinates’ ability to self-synchronize, diminishing the need—and
also the temptation—for the commander to intervene directly.

Communicating
About the importance to battle command of reliable communications, lit-
tle need be said. Whether to receive and disseminate information, transmit
orders and intentions, or synchronize activities, the commander must com-
municate. At Jutland, erratic communications contributed materially to the
failure to force the German fleet to battle.
But even perfect connectivity is no guarantee of effective communication.
At Balaclava in the Crimea on October 25, 1854, a scribbled order to the Brit-
ish light cavalry brigade to “prevent the enemy carrying away the guns” pro-
duced one of history’s most celebrated blunders. To the sender on the heights,
the order was perfectly clear. To its recipient in the valley below, it was utterly
opaque.28 Military history is replete with such episodes.
Automation is no panacea, but it can help make such disconnects less likely.
Simply the ability to transmit graphics quickly and clearly can diminish if not
prevent altogether the sort of perceptual divergence that sacrificed men so
uselessly albeit gallantly at Balaclava.
At the same time, in communicating as in delegating, automation misused
can turn on its owners. As bandwidth increases, so too does the potential to
communicate too much information. During the invasion of Iraq in March
2003, a senior U.S. intelligence officer complained wryly that his headquarters
was awash in information it was utterly unable to process. The more compre-
hensively automation is networked downward, the greater the risk of similar
information overload on smaller units even less capable of coping with it.
Variables and Constants 23

Enhancing communications through networking thus requires not only
accelerating the flow of information, but also refining what information is
communicated, to whom, at what rate, and in what form. In the end, the acid
test of effective communication, like that of a legal contract, is a meeting of
minds, and networked automation’s contribution to battle command will be
determined largely by whether it facilitates or impedes that encounter.

Motivating
One of Bill Mauldin’s wonderful World War II cartoons has Willie and Joe
crouching behind a bush while a general standing near their outpost blithely
surveys the battlefield. “Sir,” grumbles Willie, “do ya hafta draw fire while yer
inspirin’ us?”29
Front line humor aside, not least of the command challenges associated
with battlefield enlargement is how much harder it has become for com-
manders to make their personal presence felt. At Waterloo, Wellington
could stand in his stirrups and wave his hat and the gesture be seen by half
the men under his command. During the recent fighting in Fallujah, a batta-
lion commander would be lucky to connect directly with more than a few of
his men.
Some may argue that in modern war, such direct personal influence is over-
rated. An incident during Operation Iraqi Freedom suggests otherwise. As
the 101st Airborne attacked through An Najaf, several senior officers includ-
ing the corps and division commanders assembled near a road intersection to
confer. Ignoring nearby incoming mortar rounds, they continued their dis-
cussion until interrupted by small arms fire, upon which they immediately
moved toward the source of the firing. No one was hurt, but word of their
leaders’ coolness and audacity spread quickly among the troops, boosting
morale throughout the corps.30
Moreover, motivating involves more than just inspiring. On a dispersed
battlefield, the ability of any subordinate unit to gauge the impact of its
behavior on the fight overall is intrinsically limited. When minutes count, one
unit’s failure to act promptly by reason of wariness or weariness may mean
the difference between victory and defeat. Injecting a sense of urgency when
necessary is a vital command obligation, and sometimes only the command-
er’s personal presence will suffice. Commanders from George Washington to
George Patton were renowned for appearing, unlooked for, at critical times
and places.
While the commander can’t be everywhere, in short, he must be able at need
to go wherever his personal presence can make a difference without losing his
grip on the battle overall. One of the most important prospective benefits of
networked automation is to unleash the commander from his headquarters
without depriving him of its information resources, thus helping to diminish
the tension between the commander’s need to maintain his perspective on the
battle and his ability to exert personal influence where necessary.

RECURRING COMMAND DILEMMAS


“Everything in war is very simple,” Clausewitz commented, “but the sim-
plest thing is difficult.”31 While battle command certainly becomes more
complicated with increasing responsibility, some recurring dilemmas are
almost insensitive to seniority. Any command system will be judged at least
partly by how well it assists commanders in resolving them.

Prioritizing Requirements
Like other enterprises, war is subject to economic imperatives, requiring
commanders to allocate finite resources among competing requirements.32
Given battle’s uncertainties, however, doing so is especially difficult for the
tactical commander. In effect, it requires him despite incomplete and transi-
tory information to prejudge which resource commitments will prove most
important in accomplishing the mission.
Between friction and the enemy, that forecast rarely will be perfect. Priori-
tization thus requires balancing economy of force with elasticity. Ignoring the
first risks wasting assets. Ignoring the second risks inability to recover from
surprise. The smaller the unit, the harder it is to reconcile these competing
requirements.
Modern mobility systems together with more flexible tactical organizations
have greatly improved commanders’ ability to shift assets around the battle-
field. Even so, reallocation can’t always be counted on to correct an error in
prioritization.33 In battle much more than in other activities, retasking tends
to be difficult and dangerous. Every major readjustment risks confusion and
delay, especially inasmuch as it is likely to be needed most urgently precisely
when least convenient to execute.
One solution always has been to withhold some assets for later commit-
ment without disturbing those already committed. While appropriate for
larger formations, however, retaining such a reserve is less feasible the smaller
the unit. And while it is less essential when adjacent units are physically close
enough to furnish each other assistance at need, the more dispersed the force,
clearly, the less reliable and responsive such mutual support.
Networked automation can help diminish the prioritization dilemma in
two ways. First, by involving subordinate units directly and concurrently in
examining the implications of prioritization alternatives, it may surface gaps
in those arrangements that otherwise would be missed. At worst, by helping
forecast contingencies that might require retasking, it can enable the affected
units to consider in advance how they would mutually adjust.
Second, by helping to track the shifting spatial relationships among subor-
dinate units, networked automation can alert the commander to a develop-
ing situation that may obviate that contingency planning, allowing him in
a timely way either to alter his intentions or seek additional support from
higher echelons. The latter may be especially important, since, like internal
retasking, obtaining additional support from higher may take time.

Judging Timing
“War,” Clausewitz declared, “is nothing but a duel on a larger scale.”34 It
was a peculiarly apt analogy, evoking the flickering weave of blades as duelists
feint, attack, parry, and riposte, each seeking to preempt the adversary’s cor-
responding action. For the fencer, though, timing merely is a matter of alert-
ness and reflexes. Applying the analogy to battle, one should try to visualize
the same encounter taking place on the uneven bottom of a cloudy pond, in
which, moreover, each combatant is wired to a block of concrete just small
enough to be dragged with difficulty.
Deciding when to engage, change direction, commit a reserve, reallocate
assets, call for additional support, or attempt to disengage are among the bat-
tle commander’s toughest questions. “Ask me for anything but time,” Napo-
leon enjoined a subordinate, and with reason. In battle, minutes may mean
the difference between success and disaster. At Jutland, for example, only a
few minutes’ delay in reversing course might well have cost Beatty the rest
of his battle cruisers or Scheer his entire fleet.
Describing his brigade’s attack on Objective SAINTS south of Baghdad on
April 3, 2003, the commander of Third Infantry Division’s Second Brigade
Combat Team recalled, “At 3 in the morning, there was only one battalion
ready. [I] made the decision to go without the entire brigade consolidated.
The intelligence we had received said the Hammurabi [Division] was reposi-
tioning south to take SAINTS and the airport ahead of us so we didn’t have
the freedom to wait. It was a classic commander’s dilemma.”35
Absent perfect intelligence and equally perfect foresight, nothing can elim-
inate such dilemmas, but there are ways to mitigate them. Perhaps the most
important is simply to pay close attention to the locations and conditions of
key assets. The more this can be managed without distracting subordinates
with repetitive reporting requirements, the better. Blue force tracking already
has contributed materially to this process, and more comprehensive network-
ing can further enhance it.
Tracking enemy movements and strengths is much harder, and the smaller
and less readily identifiable the enemy forces of interest, the greater the chal-
lenge. Discussing his efforts to template Iraqi forces defending the Karbala
Gap, one brigade intelligence officer recalled “huge disconnects between the
CFLCC C2, the corps G2, and the division G2 on the enemy picture. One
level had two battalions in the gap, while another level had one battalion in
the west and two battalions east of the Euphrates . . . One echelon assessed a
maneuver defense from Karbala with one battalion in the gap, while another
had the enemy defending from its garrison and controlling bridges, and a
third echelon had the enemy defending bridges from the eastern side.”36
Intelligence was even less able to detect and track the Iraqi irregulars who
proved the most persistent threat to coalition forces.
Networked automation won’t solve that problem, but it certainly can help
narrow the intelligence disconnects among successive echelons and alert com-
manders in real time to developing patterns of enemy activity that otherwise
might go unremarked or be noticed too late to take effective preemptive
action.
An even better way to mitigate the timing dilemma is to limit its impact.
In war as in mechanics, tight tolerances increase sensitivity to friction. Plans
that depend on the perfect sequencing of activities, that incorporate no fall-
backs for the loss of key capabilities, or that leave no room for unexpected
enemy actions are an invitation to Murphy. In the Huertgen Forest in
October 1944, for example, the entire main effort of a corps attack stalled
for a day when one U.S. infantry platoon became pinned down by a German
machine gun.37
Networked automation can help avoid such problems by enabling multiple
command levels to test in advance the sensitivity of plans to an unforeseen
delay in events or the loss or temporary unavailability of key assets. In the
process, it can help identify where added robustness is desirable or how the
plan itself should be modified. In the worst event, it can help commanders
decide when an otherwise attractive course of action should be jettisoned in
favor of one less ambitious but also less sensitive to timing. At very least, it can
make the risk of not doing so more visible.

Managing Logistical Risk


Commenting on the shifting tides of battle during 1942’s back-and-forth
contest in the Libyan desert between German general Erwin Rommel and
his British opponents, a student at the army’s School of Advanced Military
Studies once likened the situation of both armies to being tied to rubber
bands. As each attacked in turn, the rubber band would stretch until at last
it snapped back, forcing the army to recoil and surrendering the initiative to
the enemy.38
The rubber band in question, of course, was logistics. Commanders always
have had to reconcile their ambitions with logistical risk, if only the risk of
starvation. In what remains among the supreme feats in military history, Alex-
ander the Great selected his routes, scheduled his marches, and even fought
battles based heavily on logistical imperatives.39
Since that time, and especially since the mechanization of warfare, opera-
tions and logistics have become increasingly indivisible. On an enlarged and
distributed battlefield, no commander however small his unit can ignore
this interdependence. As British field marshal Sir Archibald Wavell rightly
insisted, “A real knowledge of supply and movement factors must be the basis
of every leader’s plan; only then can he know how and when to take risks with
those factors, and battles are won only by taking risks.”40
Because logistics deals fundamentally with quantities changing in time and
space, it understandably has been a priority focus of military automation.
Nevertheless, efforts through automation to tighten the integration of opera-
tions with logistics have not been as successful to date as many had hoped, as
the logistical problems encountered during the 2003 invasion of Iraq attest.
Those problems reflected as much as anything difficulties in reconciling the
logistical information available at different levels of command.41
Meanwhile, logistical inventories continue to expand and diversify. More
than 20 years ago at a high-level U.S. Army war game, corps commanders
uniformly declared their most intractable logistical problem to be managing
the explosive increase of critical but noninterchangeable electronic compo-
nents.42 Modern combat platforms built around advanced electronics can’t be
jury-rigged when those components fail, nor can the latter easily be repaired
in the field. Replacing a faulty component therefore may be just as important
as refueling or rearming, and corps commanders aren’t the only ones who
need be concerned.
Networked automation obviously has a role to play in diminishing these
difficulties, but assuring more comprehensive logistical tracking and better
information commonality alone won’t solve tactical commanders’ logistical
challenges. By itself, neither will move a single bullet or circuit board. At least
as important, especially to the small unit commander, is judging the effect of
the unit’s current and projected logistical condition on its ability to complete
its mission. That requires translating quantitative into qualitative informa-
tion, and it is in this effort that networked automation’s potential actually may
be greatest.
This is most visible in relation to managing supplies of fuel, water, and
ammunition, the bulkiest of commodities, hence the most difficult and time
consuming to replenish. By tracking time since replenishment, distances
traveled, and munitions expended, networked automation can alert the com-
mander well before supplies of these commodities become critical, enabling
timely request for resupply or, at worst, modification of the commander’s
intentions to avert unexpected culmination at a tactically unfavorable time
and place.
Automation also can help determine what assets may be fungible in relation
to a given requirement, hence able to substitute for one another more rapidly
than either can be replenished. For example, it can help decide whether the
loss of a sensor can be covered by adjusting other sensor assets rather than
suspending action to await repair or replacement.
Finally, networked automation may eventually be able to forecast failure
conditions in men and machines alike, and to alert commanders when those
conditions approach. Work already is in progress to develop such self-alerting
prognostics for the maintenance status of vehicles and weapons, and may one
day extend to monitoring the physical condition of the soldiers who employ
them. Such tools would significantly assist in managing logistical risk.

Coping with Noncombatants


“War is hell on civilians,” an anonymous wag observed in reply to com-
plaints about World War II’s wartime rationing. His refinement of Sherman’s
famous adage was no less accurate for being cynical, and no more so than
in the irregular warfare characterizing nearly every recent conflict, in which
enemy combatants unregulated by culture or convention deliberately embed
themselves in civilian populations. Between that and the increased media visi-
bility of military operations, coping with noncombatants even while fighting
has become a pervasive problem for commanders at every level.
Commanders always have had to deal with their own and enemy casual-
ties, prisoners of war, and refugees. During the Crimean War, mishandling of
battlefield casualties produced public outrage in Britain; and just a few years
ago, claims that a U.S. unit retreating from the North Korean onslaught in
the summer of 1950 deliberately fired on refugees believed to harbor enemy
infiltrators prompted controversy and a formal if belated investigation.43
Today, in contrast, scarcely a combat operation of any magnitude can be
mounted without considering its likely impact on civilian lives and property
and vice versa. That concern will affect everything from the selection of mili-
tary objectives to the use of tactics and weapons to the number and location
of forward logistical and medical facilities.
The major obstacle to collateral damage avoidance is target identification,
particularly in operations against irregular forces that deliberately conceal
themselves in and among civilian populations. In Afghanistan and Iraq as in
Vietnam, distinguishing insurgents from noncombatants has been a persistent
tactical problem and has affected both the conduct of combat operations and
their larger political consequences.
Especially in the urban warfare that has typified recent conflicts, that same
problem also increasingly has restricted use of the area weapons and muni-
tions on which ground troops have depended for more than a century. Unless
offset by the use of precision weapons, such limitations thus invite an equally
unpalatable increase in friendly casualties. Achieving that greater precision
without loss of tactical effectiveness, however, will depend heavily on the
responsiveness and reliability of the command-and-control mechanisms link-
ing weapons to their targets and their ability to adapt to the targeting envi-
ronment.
The potential contribution of networked automation to managing this
problem resides above all in its ability to assemble and correlate informa-
tion from multiple sources in real time and display it in a useful way. That
information never will be perfect, especially where the population is hostile
or, if friendly or neutral, vulnerable to coercion by the enemy. Some collateral
damage thus is unavoidable however disciplined the friendly force and rigor-
ous its rules of engagement. But networked information can help diminish
avoidable civilian damage and casualties. Merely facilitating the rapid imposition
and removal of control measures such as no-fire areas and weapons restric-
tions, for example, can assist materially in reducing collateral damage while
minimizing the associated tactical penalties.
Automation also can help manage the logistical burdens likely to be pre-
sented by noncombatants, whether enemy prisoners of war or civilians. During
Operation Desert Storm in 1991, the unexpectedly rapid accumulation of Iraqi
prisoners of war threatened to overwhelm the units to which they surrendered.
The better the information on the numbers and locations of noncombatant
clusters, the faster assets can be marshaled to secure and sustain them.
Finally, not least of the challenges associated with noncombatants on the
battlefield is ensuring their condition is accurately reported, both up the chain
of command and by news media. In an era when every alleged lapse in humane
treatment, real or imaginary, tends to prompt a political outcry, commanders
have a vested interest in disseminating truthful and rebutting false informa-
tion concerning noncombatants, if only to preclude unwelcome restrictions
on their tactical freedom of action. Networked automation clearly has a part
to play in this effort.

NEW COMMAND CHALLENGES


While most challenges of battle command have varied over time in degree
more than in kind, a few today are changing so profoundly that they warrant
special attention. Some already have been noted in passing. Others reflect
emerging requirements and technologies.

Time Compression
As a general proposition, tactical engagements today tend to begin more
precipitately, transpire more rapidly, and terminate more abruptly than they
have for centuries. To gauge the impact of this foreshortening on battle com-
mand, consider how tactical headquarters have attempted until very recently
to monitor battlefield events.
Reports and queries arrived in a crowded tent or command vehicle over
multiple and invariably congested radio nets, often so thickly as to be audibly
indistinguishable. Tired and dirty soldiers recorded those reports on what-
ever paper was at hand, and then, provided the reports hadn’t been misplaced,
transferred them by marking pen or grease pencil to an acetate map overlay
that became harder to read with every erasure and remarking.
Orders were received and transmitted in the same way, occasionally accom-
panied by hastily drawn graphics that made their own contribution to the
diminishing readability of the recipient’s map. Combine that with dated infor-
mation that failed to be deleted and new information that somehow failed to
be recorded, and it isn’t hard to see how quickly any correspondence between
the displayed and actual tactical situation could evaporate, even assuming the
original information was accurate and complete.
Such a process, needless to say, is incongruous in the military of a nation
whose children play routinely with devices that store, display, manipulate, and
communicate information far more rapidly and reliably. More important, it
increasingly has become unable to keep pace with the information flow with
which today’s commanders must cope. That alone is enough to justify current
efforts to apply networked automation to battle command.

Simultaneity
Not only do events on today’s battlefield happen more quickly, but they also
involve more diverse activities in more places at the same time. In part, that
simply reflects increased dispersal. But in recent U.S. doctrine, it also reflects
a deliberate intention to confront the enemy simultaneously with more prob-
lems than his own command apparatus can handle. Its aim is to produce
confusion and dislocation that deprives the enemy of the ability to respond
effectively and thus accelerates his mental and psychological collapse.
The presumption, of course, is that such multiple simultaneous activities
won’t prove more disruptive to the perpetrator than to their intended vic-
tim, and also—and perhaps more important—that such efforts won’t result in
piecemealing assets to the point where they no longer contribute effectively
to a coherent overall purpose.
Assuring both is a challenge even at the operational level. For the bat-
tle commander, it is much greater. To the usual obligation to synchronize
maneuver and integrate combined arms, it adds the requirement to orches-
trate concurrent but independent activities aimed at separate and spatially
disconnected objectives.
Simply keeping track of those activities to ensure they don’t mutually
interfere will test the commander’s information systems, never mind managing any
unforeseen adjustments. In addition, simultaneity only multiplies the tactical
and logistical dilemmas associated with any single operation. Effective dele-
gation thus becomes even more critical, and with it continuous review and
refinement of the commander’s estimates and intentions.
There certainly is historical precedent for the success of such simultane-
ous operations when directed by genius, but even more for their failure in its
absence. And since, as Clausewitz pointed out, genius is a scarce commodity
even in the finest military, a doctrinal commitment to simultaneity implies
some more uniformly accessible command resource. Apart from fostering
subordinate initiative, that resource resides in improved information systems
if it resides anywhere, and that too argues for enhancing battle command with
networked automation, and with soldiers and leaders schooled to employ it
effectively.

Lethality
It would be a mistake to suggest, as some have, that the lethality of ground
warfare overall has increased. At Cold Harbor in 1864, attacking federal
forces lost nearly 7,000 men in less than an hour, and at the Somme in 1916,
58,000 British troops were killed or wounded on the first day, roughly 3,000
per kilometer of front and more than 10 percent of the committed force. No
recent conflict has seen anything like those numbers.44
What is true, however, is that the reach and precision of both surface and
air weapons and their associated target acquisition means have increased
dramatically during the past several decades. The continuing dispersal of
ground combat formations discussed earlier directly reflects that change, and
also helps explain why increased weapons lethality hasn’t automatically trans-
lated into heavier aggregate casualties.
The net effect of these developments, however, has been to place signif-
icantly more firepower at the disposal of progressively smaller units while
simultaneously making their own exposure more hazardous. Together with
increased political reluctance to accept high casualty rates, that has prompted
the United States and other modern military forces to seek new ways to sepa-
rate tactical sensors from shooters and reduce the dependence of both on the
physical proximity, hence exposure, of human operators.
Efforts currently in progress range from developing means of coopera-
tive engagement, by which one platform—a tank, say—can engage a target
acquired by another that thus remains unexposed, to precision engagement
from beyond line of sight of targets detected by robotic sensors, to engage-
ment from beyond the enemy’s reach altogether by remotely controlled plat-
forms such as armed unmanned aerial vehicles.45 Further down the road,
some envision fully autonomous combat platforms, able based on automated
instructions to detect, identify, maneuver to, and engage targets without addi-
tional human intervention.
All these efforts today are embryonic, especially in the ground combat
arena. Difficult technical obstacles in areas ranging from sensor behavior to
communications reliability and robustness remain to be overcome before even
semiautonomous ground combat platforms can be fielded with confidence in
their battlefield performance.
But the more difficult challenge concerns how the battle-command system
will handle these emerging capabilities. Precision weapons are by that very
quality intolerant of error. They will do exactly what they’re told. The burden
on the teller increases commensurately. Similarly, while remotely controlled
or autonomous systems may help diminish the impact of danger and discom-
fort by reducing human exposure, they also magnify the sensitivity of maneu-
ver and engagement to uncertainty and friction.
A central focus of MDC2’s experiments has been to probe these interac-
tions at the lowest tactical level and at high resolution, against an adversary
free within his capabilities to operate in whatever manner he chooses. As the
chapters that follow will confirm, we still are far from resolving many of the
tensions introduced by emerging automated capabilities. But the experiments
also suggest that they are solvable, and indicate the directions, both techno-
logical and in terms of new soldier skills, in which those solutions may be
found.

Tactical Agility
The greater the physical dispersal of tactical units, the less confidently
they can count on timely reorganization or reinforcement to accommodate
unexpected changes in missions or the conditions in which they must be
accomplished. That reality already has prompted the U.S. Army significantly
to alter its basic tactical design, pushing organic combined arms capabilities
down from the division to the brigade.
The implication, however, is that more of the combat power that brigades
and their subordinate units will bring to bear in the future will have to be
furnished by U.S. Army and joint assets at higher command levels. Until
recently, that would have meant significantly expanding supported units’
organic or attached battle-command facilities and personnel, increasing tacti-
cal headquarters’ footprints and diminishing their ability to move rapidly and
safely around the battlefield.
Networked automation promises to diminish that requirement. Merely the
ability to share threat information, friendly locations, and tactical graphics in
real time inherently reduces the coordination burden, while digital connec-
tivity similarly reduces the load on voice communications. Both should result
in a need for fewer headquarters personnel, although to what extent isn’t yet
clear. At any manning level, however, they promise more rapid and reliable
integration of capabilities and effects.
That enhancement is the more important to permit prolonged operations
without loss of tactical continuity. Historically, headquarters battle rhythms
have been dictated as much by the inability of command systems to operate
at a uniform level of effectiveness around the clock as by the need of their
personnel to rest. Networked automation won’t diminish that need. But it can
make battle handover much faster and smoother, and reduce the likelihood
that vital information will be lost or inadvertently neglected in the process.
Finally, provided it is not abused, networked information promises
improved resistance to catastrophic failure from the loss of any single com-
mand element. Merely the presence of or rapid access to the same information
at multiple locations provides inherent command-and-control redundancy.
Similarly, networked automation can allow more rapid assumption of com-
mand by an alternate, higher or subordinate headquarters without confusion
and delay. Naval ship design long has incorporated such redundancy, with
steering, fire control, and other key command functions duplicated at more
than one physical location. Networked automation offers ground combat
units similar robustness.

Transparency
On February 13, 1991, a U.S. airstrike destroyed the Al Firdos bunker in
downtown Baghdad, killing scores of Iraqi civilians who had taken refuge
beneath its reinforced concrete. Intelligence had confirmed the bunker’s use
as a military command post, whereas no information suggested its occupation
by noncombatants. Nevertheless, media reaction to the attack, ably exploited
by the Iraqis, resulted in a precipitate decision to suspend any further attacks
on war-supporting targets in and around Baghdad.46
Variables and Constants 33

More recently, as this was written, debate swirled in the United States and
international press in reaction to the use of white phosphorus against Iraqi
insurgents.47 White phosphorus has been a standard artillery and mortar
munition since World War II, used both to destroy inflammable materiel and
attack personnel protected from high explosive fragmentation. Until now, its
use for either purpose has never been challenged.
Not the least of the ironies of modern war is that the same technologies
promising to enhance the commander’s access to and use of information are
equally available—in some cases more available—to media with interest in
but no accountability for the conduct of military operations. Increasingly, the
video camera hangs over the battle commander like an electronic Sword of
Damocles, able almost instantaneously to dispute his appreciations, judge his
decisions, and criticize their consequences.
Commanders always have had to contend with history’s judgment, but
never before have so many been subjected to such immediate and pervasive
public scrutiny. It would be asking too much of human nature to imagine their
behavior remaining unaffected by it. Its danger, of course, is the inculcation of
overcaution and an aversion to the risk taking without which, as Field Marshal
Wavell rightly argued, no success in battle is possible.
There is little the battle-command system can or should do to diminish
transparency or to immunize commanders against its effects. What it can
do, given the right tools, is help avoid the internal delays and confusion that
too often make the commander the last to know about an incident or deci-
sion likely to prompt unwelcome media attention. Almost invariably, the best
defense against false, distorted, or incomplete reportage is rapid and accurate
dissemination of the truth. And if that truth is awkward, it is even more essen-
tial that the commander be aware of it and be perceived to be aware of it.
Beyond that is moral courage and the acceptance of responsibility, for which
no battle-command system can substitute.

ENHANCING FUTURE BATTLE COMMAND


No single chapter can hope to deal with all the complexities of battle com-
mand, about which, in any case, histories, memoirs, and analyses abound.
Nor has this chapter sought to do so. Rather, its purpose has been to identify
some of the continuities and changes that have influenced and will continue
to influence the conduct of battle command, and to suggest where in that pro-
cess emerging networked automation technology may offer the most promise
and present the greatest risk.
When all is said and done, achieving the former and avoiding the latter will
reflect how well technology designers and their military clients satisfy three
requirements: reconciling the art and science of battle command, understand-
ing the limitations as well as the virtues of networked automation technol-
ogy in furthering that effort, and assuring that both the technology and the
way it is used continue to reflect the inescapably human dimensions of battle.
To complete this discussion, a few thoughts follow concerning these requirements.

Reconciling Art with Science


Reflecting on emerging military information systems, one thoughtful offi-
cer wrote some years ago, “As more powerful technological tools intrude into
the process of command, they bring with them the risk that a generation of
officers will be more inclined by instinct to turn to a computer screen than
to survey the battlefield, and that the use of precise operational terms will be
displaced by computertalk. If that happens, we may have lost more than we
have gained.”48
Of course he was right, and events since he wrote already have surfaced
indications of that problem. During early combat operations in Afghanistan,
for example, some ground commanders reportedly became so mesmerized
by the televised feed from unmanned aerial vehicles that other headquarters
business virtually came to a halt. As subsequent chapters will reveal, similar
episodes occurred during the experiments described in this book.
The invariable and quite correct response to such problems is that prevent-
ing them is a matter of training. But that training to be successful must begin
by acknowledging the limits of technology in satisfying the creative and thus at
least partly intuitive requirements of battle command. Networked information
systems can assemble, filter, collate, disseminate, and display information. They
can alert the commander to attend to it. But they can’t force him to attend to it,
nor make sense of it for him, nor decide what elements of it deserve the most
urgent attention, nor envision what form that attention should take.
Just as the pilot too focused on his instruments can collide with the moun-
tain in front of his face, so a commander too tightly wedded to his information
systems may fail to sense indicators no less important for being perceptible
only through his own educated instincts. Likewise, while technology can
enhance the commander’s ability to judge opportunity and risk, it can nei-
ther reconcile them for him nor determine which to favor and in what way.
In short, in training as in actual operations, information technology must be
treated as the servant, not the master, recognizing that in battle command
perhaps more than in any other human endeavor, the ultimate value of science
is to facilitate the practice of art.

Recognizing System Limitations


A recent paper advances six principles on which to base future battle-
command system development. The first, argued to be prerequisite to the
rest, proposes that “The Future Battle Command and Control System [must
be] an integrated system of systems that meets the needs of commanders and
staff at every level, for all BFAs [battlefield operating systems], and across
the Services.”49 The Army Transformation Roadmap quoted earlier goes even
further, envisioning that “The same system that controls wartime operations
will regulate activities in garrison and in training.”50
These are heroic ambitions. They also are very likely unrealizable, perhaps
fortunately, for their premise is that all command requirements can be reduced
to the same ingredients. But if history is any guide, command, especially in
battle, is far too idiosyncratic a process to tolerate so procrustean a solution.
Instead, as one historian concluded after carefully examining several success-
ful and less successful command systems, “Command being so intimately
bound up with other factors that shape war, the pronunciation of one or more
‘master principles’ that should govern its structure and the way it operates is
impossible.”51
In reality, no single system of command, however robust, is likely to satisfy
every military requirement. Indeed, such a system, even were it technically
feasible to devise, would tend almost inevitably to reproduce the very sort of
command rigidity that contributed so heavily to the Royal Navy’s embarrass-
ment at Jutland.
Instead, a more reasonable expectation is that emerging networking tech-
nology will allow information to be shared in a timely way by different orga-
nizations without imposing undue restrictions on the way it is manipulated,
displayed, and employed. As will be seen in the chapters that follow, a central
objective of the MDC2 program has been to examine how commanders at
different levels choose what information to attend to and how.
Still more is that true of command direction, which must not only be framed
by the commander’s intentions, but also adapted to the intended recipient and
the conditions in which it is received. Both can be expected to vary constantly,
and a command system unable to accommodate those changes will impede
the commander more than assist him.

Allowing for the Human Dimension


No one has ever described the human dimensions of battle command more
eloquently than poet Stephen Vincent Benet who, in his narrative poem about
the Civil War, describes generals trying in vain to move their men around
the battlefield like blocks on maps, only to discover that men linger, straggle,
and die “In unstrategic defiance of martial law, Because still used to just being
men, not block-parts.”52
The difficulty with commanding through icons is that it is all too easy to lose
sight of what the icons represent. Soldiers—and their leaders—are human, and
being human are fallible. Discipline, training, cohesion, and pride all help limit
that fallibility, but battle is a crucible that can test even the strongest to failure.
A battle-command system that conceals or discounts the human factors
affecting every tactical evolution does its users no service. Its depiction of real-
ity will be incomplete and its utility for decision making will be crippled. Most
of all, it invites surprise and defeat the more devastating for being unnecessary.
In August 1914, Von Schlieffen’s Olympian vision of command failed the test
of battle not only for lack of adequate technology, but also because it largely
ignored the human limitations of the army to which it was applied. Some of
today’s command and control concepts risk making the same mistake.
Networked automation can’t be made responsible for the professional com-
petence of the commanders who employ it, and we wouldn’t want it to be. But
it can be designed in a way that no flat map can to alert commanders to the
creeks and gullies that may hamper their wooden squares, remind them how
long it has been since those squares were rested or resupplied, warn them of
dangers those squares may be unable to sense, and in the crunch, help bring
to bear in a timely way the resources to insure that their block-men don’t die
unnecessarily “because still used to just being men, not block-parts.”
In the effort to insure that battle command remains sensitive to war’s ines-
capably human character, in short, the system designer shares responsibility
with the commander. Only if, in addition to assisting him to apply capabili-
ties, networked automation also assists him in protecting and preserving the
soldiers on which those capabilities ultimately depend, will such a system truly
deserve to be called a battle-command system.

CHAPTER 2

A Journey into the Mind of Command: How DARPA and the Army Experimented with Command in Future Warfare

Alexander Kott, Douglas J. Peters, and Stephen Riese

A BATTLE OF 2018
His small robotic spy planes, tens of kilometers away, faithfully scanned the
battlespace. The composite image—flat, swampy land dotted with hamlets,
lakes, and untidy small forests—slowly scrolled on the computer screen. Cap-
tain Johnson1 glanced at the calendar and weather predictor tucked into a
corner of the display. On this August 17, 2018, it was going to be hot and
muggy all day. “Lousy visibility,” he thought, “the UAVs are going to miss a
lot of enemy.”
A few months ago, in early 2018, a faction within the Azerbaijan military
suddenly offered its support to a long-lingering dissident movement. Tradi-
tionally, the dissidents’ influence rarely extended outside of the southeastern
portion of the country, mostly south of the Kura River in the Kura Depres-
sion Region. Now, however, things were unfolding differently. By April 2018,
the Azeri Islamic Brotherhood (AIB), a coalition of antigovernment factions,
subverted the bulk of an Azeri Motorized Rifle Brigade, the well-trained and
formerly reliable Kura Brigade that mutinied to realign with this faction.
In a surprise action, a battalion from the Kura Brigade (10th MRB) seized
control of a historically significant district in the capital of Baku. A desperate
week-long defense by loyalist government forces against the attacks of the
10th MRB managed to secure the center of government within the capital
city. Still, the AIB succeeded in halting the session of the national assembly
when members fled to their home territories. The president, along with his
prime minister and council of ministers, remained in Baku and continued to
direct the government and remaining loyalist military forces in the city and
along the Apsheron Peninsula.

These events induced an inevitable round of desultory diplomacy by
the international community, which predictably failed. At that point, Rus-
sia proposed a coalition of U.S. and Russian forces to restore order within
Azerbaijan and to stabilize its government. The United States agreed to the
proposal, and by July 2018, U.S. forces began staging in coalition bases along
the Georgia-Azerbaijan border. Later that month, Coalition forces conducted
rapid movement across the border to clear lines of communication and to
establish a forward operating base to be used as a staging area by Coalition
forces. Captain Johnson and his Combined Arms Unit (CAU), a total of about
30 human fighters and 20 robotic warriors, were a part of the Coalition. From
this point on, we will refer to this coalition as the Blue force.
Facing Captain Johnson and the rest of the Blue force was a rather for-
midable foe. The Red forces included the mutinous Kura Brigade joined
by several other key units of the Azeri military (see Figure 2.1), a powerful
uniformed motorized force—the best tanks, armored personnel carriers, and
self-propelled artillery that the oil-rich Azerbaijan Ministry of Defense was
able to buy in the early 2010s.
This well-trained traditional military enjoyed close cooperation with insur-
gent forces—some of them experienced foreign guerilla fighters, and others
local, untrained, but enthusiastic militiamen of AIB. The dismounted insur-
gent forces could disperse throughout the area of operations to provide the

Figure 2.1. Organization and equipment of Kura Brigade. The Appendix describes the equipment.

Red commander with early warning of Blue force movement, and to serve as
additional dismounted infantry to confront the Blue force. Further compli-
cating the battlespace picture, the armed members of the Nagorno-Karabakh
Internal Liberation Organization (NKILO), a militia that elected to remain
neutral in this conflict, operated throughout the area. The neutral NKILO
members dressed as civilians but often carried weapons like members of the
insurgent AIB forces. From this point on, we will refer to these enemy units
as the Red force.
The raw numbers of personnel and weapon systems in Captain Johnson’s
area of responsibility—Red versus Blue—were certainly stacked against him.
A common rule of thumb used to say that the attacking force must be about
three times larger than the defending force. Johnson’s CAU, however, had
about one-third the number of platforms and many fewer troops than his Red
counterpart. Yet his orders were to attack! By turn-of-the-century measures,
Johnson’s force was an order of magnitude smaller than it should have been.
It was hard to count on a significant difference in training and motivation:
the Red force was known to be brave, motivated, and knowledgeable in their
own tactics. He knew, however, that in terms of technology his force was a
generation ahead of the enemy and that this fact could be worth more than a
10-fold numerical advantage.
The CAU’s fighting platforms (see Figure 2.2) were light and fast. Most
of them were unmanned, robotic vehicles that did not have to carry heavy
armor to protect any human riders and instead carried more weapons and
fuel. Less encumbered by weight and dependence on supply trains, their
maneuver could be far-reaching and more agile than their opponent’s.
Besides, if necessary for longer distances, most of them could be carried by
helicopters.2
Johnson’s unit was also rich in aerial and ground sensors. His robotic recon-
naissance assets—aerial and ground—ranged far ahead of the CAU’s main
forces. With their diverse sensors and the semiautomatic ability to detect sus-
picious objects—potential Red vehicles or infantry—they provided the Blue
force with crucial information about the locations and intent of the Red force,
long before coming in contact with the enemy’s weapons. The captain usually
knew much more about his enemy than the enemy knew about him.
Granted, it was of limited value to know much more about the enemy with-
out being able to impact him. Fortunately, the CAU included plenty of capa-
ble shooters. His long-range artillery and precision missiles—most of them
carried by unmanned vehicles—allowed Johnson to attack the Red force at a
distance once his sensors found the targets.
Still, all these assets would be worthless without the network that tied them
all together, a network that allowed Johnson to receive voluminous informa-
tion about the battlespace and send detailed commands to his forces. United
by the network, the CAU’s assets could fight in a widely distributed, dis-
persed fashion without losing synchronization and mutual support. Beyond
the CAU’s own assets, it could rely, if necessary, on those of a sister unit: the
Figure 2.2. Organization and equipment of CAU. The Appendix describes the
equipment.

network enabled them to support each other with both information and fires
even when separated by tens of kilometers.
Finally, and perhaps most importantly, the CAU’s small command cell—
Johnson and his three battle managers (see Figure 2.3) riding in their Com-
mand and Control Vehicle (C2V)—wielded a powerful weapon for battle
command, the Commander Support Environment (CSE). A collection of com-
puter tools, the CSE fused the massive amount of information arriving from
the CAU’s manned and unmanned platforms; assisted with the recognition of
enemy targets; advised on the courses of action available for maneuver, fires,
and intelligence collection; translated the battle manager’s terse commands
into detailed instructions to robotic warriors; and even, if necessary, autono-
mously planned and executed the fires and intelligence collections tasks.
Johnson looked at his battle managers. On his right was Sergeant Rahim,
the intelligence manager, about 15 years older than Johnson. Trying to opti-
mize aerial sensor availability for battle damage assessment tasks, he was
tasking the CSE to calculate plans to potentially use Class I sensors only;
Class I and II sensors only; and Class I, II, and III sensors, with Class III
sensor platforms only used after the maneuver force had reached Phase Line
GOLD. Seated in front of Johnson, Specialist Chu, the maneuver manager,
worked the CSE to finalize several alternative routes for relatively clumsy
robotic ground vehicles. The fourth member of the CAU command cell, the
effects manager, Sergeant Manzetti, sat in the front-right corner of the C2V
and busied herself with entering into CSE the no-fire rules for the politically
touchy villages controlled by hopefully neutral NKILO militias.

Figure 2.3. The Blue command cell—commander and three battle managers—ride
in a C2V.

“It’s weird,” the captain thought, “when Rahim joined the military, I was
barely out of kindergarten, and all this stuff—robotic guns, unmanned sensors,
smart computers everywhere—was considered mostly science fiction. Now it
is just normal, totally normal.”

NETWORK-ENABLED WARFARE
In fact, the efforts to make all this totally normal stuff for Captain Johnson
started well before he went to kindergarten. The style of warfare practiced by
his CAU is called network centric or network enabled (we prefer to use the latter
term in this book), and, like most other revolutions in warfare, it sprang from
a confluence of several technological and political developments.
A good place to start unraveling this chain of developments is the personal
computer revolution of the 1980s. Suddenly, anyone could afford to buy a signi-
ficant amount of computing power. Digital information became ubiquitous—
it was easy to generate such information, to capture, to reproduce, and to
distribute. One unexpected outcome of this development was its impact on the
seemingly invincible Evil Empire, the Soviet Union. Long reliant on keeping
information away from its citizens, the Communist power was faced with a
choice: technological obsolescence or relaxation of its information control.
The Soviet Union wisely elected the latter, promptly collapsed (for a number
of reasons, not just the information revolution), and released in the wake of its
collapse a tsunami wave of religious and ethnic wars around the world. These
conflicts changed many of the equations for the U.S. military, forcing it to
look for such things as rapid deployment, small wars, counterinsurgency, and
highly distributed operations.
In another branch within this network of developments, personal computers
made networking both highly feasible and highly desirable. In the early 1990s,
people started to notice a mysterious slogan in the marketing literature of Sun
Microsystems, a then-popular maker of high-end computer workstations.
“The network is the computer,” went the slogan. Sun’s leaders argued that
platform-centric computing was a thing of the past, and the future was with
marvels, such as the Internet, that arise from the network-centric computing.
The Internet went on to have a glorious life of its own, including changing
the ways that the U.S. military communicates. Meanwhile, the term net-
work centric and its broader underlying ideas appealed to a visionary duo—
an Air Force officer, John Garstka, and a Navy aviator, Admiral Arthur
K. Cebrowski—who proceeded to apply it to things military.3 If all mili-
tary assets—warfighters, tanks, ships, airplanes—were to be connected by
powerful information networks, they could cooperate, make synchronized
decisions, and fight in a more agile, effective fashion. They could be tailored
to a specific mission. They could be geographically distributed. Their deploy-
ment and logistics could be faster and more flexible. They could use differ-
ent ways to organize themselves, perhaps even self-organize. They could also
provide a more-efficient environment for employment of a slightly older
development—precision weapons.
The ideas fit perfectly. Finally, here was a coherent, elegant vision of how
the indisputable information revolution could revolutionize military affairs.
Network-centric warfare became a popular concept within the U.S. Depart-
ment of Defense. The Office of Force Transformation became the official
home of the concept, and Admiral Cebrowski, John Garstka, and Dr. David
Alberts issued a steady stream of influential publications.4
Naturally, every service within the U.S. military developed its own perspec-
tive on the network-centric idea. By the late 1990s, the U.S. Army was eye-
ing a number of challenges. The typical ponderous deployment of the army’s
heavy forces was seen as a liability in the post-Soviet age: faster deployment
by fixed-wing airlift seemed necessary, but the army’s equipment was too
heavy. The expense of large numbers of men and women in army uniforms
was becoming difficult to justify. Its main fighting platforms—the Abrams
tank and the Bradley personnel carrier—were starting to approach their
obsolescence horizon and called for replacements. Emerging technologies—
computers, sensors, laser-guided weapons, robots, unmanned aerial vehicles
(UAVs)—all seemed interesting but difficult to accommodate in the army’s
current conceptual structure.
Enter network-centric warfare. In one fell swoop, it offered a holistic solu-
tion, a unifying framework for all of the above-mentioned concerns. The army
named this synthesis of ideas the Future Combat System (FCS).5 Computer
networks would permeate the FCS, delivering the information from farseeing
advanced sensors (such as those carried on UAVs) to shooter platforms (many
of them robotic) that fire precision weapons at a faraway enemy beyond the
horizon. By detecting and engaging hostile forces at a distance, the FCS force
could avoid the enemy’s direct fires and allow the army platforms to carry
more modest armor. This would reduce the weight of the platforms and make
them suitable for rapid air delivery to multiple trouble spots around the world.
These FCS platforms, elaborately rich in information but prudently frugal in
armor, would be procured to replace the aging Abrams and Bradleys. All this
was a perfect fit.
Of course, there were critics of the idea. Some argued that certain key tech-
nologies, such as robotic vehicles, remained underdeveloped and not ready
for prime time, and other technologies, such as the wireless mobile networks,
remained too vulnerable to enemy attacks.6 The need for rapid deployment
by air may have been overestimated; there was no pressing need to dispense
with heavy armor,7 and besides, FCS would take almost as long to deploy as
a conventional force.8 Others argued that the FCS system was too expen-
sive,9 that a light-armored force was too vulnerable for direct contact with
the enemy,10 and that emphasis on network-centric warfare would lead our
military to neglect the need for more boots on the ground.11
Not so, responded the advocates of the program. Most critical technolo-
gies were already mature, and others were in well-managed development.
The architecture and characteristics of the system were carefully optimized to
balance its deployability, survivability, and lethality in a broad range of future
conflicts. The overall costs would be much lower than any practical alterna-
tive approaches, including an attempt to modernize the current conventional
platforms. With many tasks automated, and many platforms standardized,
the need for support personnel and its associated costs would be significantly
reduced.12 The new network-enabled force would provide more boots on the
ground, significantly faster deployed into difficult hot spots, defeating either
conventional or unconventional enemies with fewer risks and costs.
We do not intend to diminish either the weightiness of all these consider-
ations, or the contenders’ sincerity and competence. You, the reader, may find
in this book grounds for support for both sides of the argument. Our findings
and observations both confirm the potential of a network-enabled force and
highlight risks of such systems as currently conceived. Still, these are not the
arguments we wish to pursue on the pages of this book.
Rather, we argue that the issues of armor and information do not need to
be coupled. Most of our findings indicate that these are orthogonal issues:
the cognitive challenges of information-rich, network-enabled warfare do not
depend on the thickness of the armor. Network-enabled warfare will deliver
its value (or will fail to deliver, if the challenges of information-rich battle
command are not solved properly) even with heavily armored platforms.
Conversely, heavy armor neither obviates the need for networked informa-
tion, nor precludes it.

Regardless of the decisions on the right thickness of the armor, or the right
number of boots on the ground, the already-present elements of network-
enabled warfare call for serious attention to how warfighters can deal with the
explosion of information. The strengths (or weaknesses, as the case might be)
of the collective human-machine cognition will be at least as important as the
right combination of the platforms’ characteristics.
While at the time of this writing the FCS program continues as a strong,
innovative, ambitious, and expensive effort, network-enabled warfare does
not wait. It enters the military by guerilla marketing methods, far outpacing
conventional military procurement. Enterprising warfighters buy laptops and
wireless devices; rig databases, blogs, and chat sites; and establish their own
procedures and techniques that are clearly reminiscent of network-enabled
ideas. UAVs and even ground robots find growing acceptance among the
warfighters, regardless of the inevitable immaturities of the technologies.
With or without the muscle of military acquisition, network-enabled warfare
is entering real-world military operations.
And this brings a major concern that began to emerge even in the late 1990s:
with drastic proliferation of information flows impacting the warfighter, and
with so many new devices requiring the warfighter’s attention, what will hap-
pen with the human cognitive mechanisms? To put it differently, network-
enabled warfare will unleash a flood of information on the warfighter. Will
the flood overwhelm the cognitive abilities of the warfighter? Particularly
important, will the warfighter be able to manage the battle command?

THE HISTORY OF THE MDC2 PROGRAM


At least two organizations recognized the challenges of battle command in
network-enabled warfare. One was the Defense Advanced Research Projects
Agency (DARPA), a legendary and occasionally controversial powerhouse of
American research and development since the late 1950s, the central R&D
organization of the U.S. Department of Defense. Another was the U.S. Army
Training and Doctrine Command (TRADOC), a key brain trust of the army,
responsible in particular for the development of new concepts and techniques
of warfare. By the year 2000, the network-enabled developments, particularly
the FCS, were charging forward. But what about battle command in a network-
enabled environment? How would it be performed and by what means?
Recognizing the seriousness of these challenges, in October 2000, DARPA
and TRADOC initiated the Future Combat Systems Command and Control
(FCS C2) program. Its objective was to explore the unique command and
control challenges that would face the future force envisioned at the time by
the U.S. Army.13 The program was a brainchild of Gary Sauer, a visionary
lieutenant colonel of the U.S. Army, whose energy and ideas impressed the
army’s top leadership.
Initially, the program focused on the smallest notional future unit with a
significant range of weapon systems—the hypothetical CAU. The weapons,
capabilities, and organization of a CAU were derived from the then-current
Operational and Organizational Plan for FCS.14 Smaller than a conventional
army company, the CAU was equipped with an impressive array of sensors
and shooters, both unmanned (robotic) and manned, all connected by a data
network. The unit was able to acquire and process detailed information about
the faraway enemy, maneuver rapidly over distances of tens of kilometers, and
defeat the enemy with precise fire, both direct and indirect.15
For its survival and effectiveness against a larger and more heavily armored
enemy, the unit relied on large volumes of information collected by its numer-
ous sensors, such as UAVs, and delivered to the unit’s decision makers via its
ubiquitous network. The commander and his assistants had to process this
massive volume of incoming information and to command the far-flung assets
of the unit, many of them robotic and therefore reliant on precise and detailed
commands. To absorb the vast quantity of incoming information, and to issue
a high volume of detailed commands in a high-tempo battle, the commander
required a new type of a command and control tool.
Therefore, the program had to accomplish several things: (a) build a proto-
type command and control tool called the CSE and construct the associated
command organization, techniques, and procedures; (b) devise an experimental
way to measure the performance of decision makers in such an environment
and identify the key factors influencing the performance; and (c) perform a
series of experiments—simulated battles in which the decision makers and the
new tools display their capabilities.
The initial program consisted of a series of four experiments. The first
three experiments focused on developing the basic CSE application and
control mechanisms. The fourth and final experiment of this program was
conducted in two phases and designed as a discovery experiment. It explored
the implications of robotics, information superiority, and networked fires on
a commander’s ability to develop situation awareness and coordinate recon-
naissance and surveillance, maneuver, and fires of his organic assets.
While FCS C2 was seen as a successful program, it explored a new com-
mand and control capability for one unit only. It did not answer questions
about battle command in a multiunit and multiechelon environment, parti-
cularly about the challenges of collaboration between peer units and across
multiple echelons. Therefore, in 2003, DARPA and the U.S. Army TRADOC
decided to extend the initial effort via a successor program—the Multicell and
Dismounted Command and Control (MDC2),16 which continued the FCS
C2 experimental series. Both programs were created and led by Gary Sauer
(who retired from the U.S. Army and became a program manager at DARPA)
and Maureen Molz, a senior civilian manager in the U.S. Army’s research
and development system. For the sake of brevity, in this book we refer to the
combination of the two programs simply as MDC2.
A key focus of the program’s continuation was on extending the CSE suite
of command tools to explore the command and control required by several
echelons of a future force. The extended CSE was designed to collect relevant
battlespace information provided by sensors across the multiechelon, multiunit
force, fuse this information into a consistent representation of the battlespace,
and present it in such a way that timely and effective decisions could be made
at echelons ranging from the individual combatant through the battalion
commander.

Figure 2.4. The progression of experiments in the FCS C2 and MDC2 programs.
Continuing the series, the fifth experiment expanded the CAU to include
dismounted infantry soldiers and to explore the information and control
requirements of these lower echelons. Inspired by the then-current suite of
the U.S. Army scenarios, the so-called Caspian Sea scenarios,17 the simulated
battlefield also moved to a more complex geopolitical setting. The final two
experiments in the MDC2 program added a fully functional higher echelon
and a sister unit to explore collaboration, assets sharing, and information
flows between echelons and between peer commanders.

EXPERIMENTAL TESTBED
To experiment with the ways in which the future Captain Johnson and his
battle managers might execute their network-enabled battles, we constructed
the MDC2 experimental laboratory. There we built mock-up C2 vehicles
not unlike the one Captain Johnson might ride into a battle and populated
them with teams of live officers and staff members. Each such team, called a
command cell, commanded a force of artificial warriors and platforms, such as
the CAU we described earlier, simulated by the U.S. Army’s premier simula-
tion system called OneSAF Testbed (OTB).18 The opposing force, the Kura
Brigade and their insurgent allies, were also simulated but were commanded
by a live and very capable Red command cell. The Red and Blue command
cells did not know the locations and plans of the opponent’s forces, except as
they were able to determine during the battle. Both were allowed to conduct
the battle as they desired, in a so-called free-play fashion, without following a
prescribed script, although within prescribed rules of engagement.
The battles unfolded with realistic speed, in other words in real time. The
information about the battle events received by a command cell via computer
monitors and radio channels was also fairly realistic, allowing the command
cell to interact with the environment as if in the midst of a real battle. All of
the important functions of Johnson’s CAU were represented: the ability to
maneuver the forces, direct lethal and suppressive fires on the enemy, direct
and collect ISR effects, and conduct a small measure of logistics. To put all
this more formally, the MDC2 experimental program was conducted in a
simulation-supported, interactive, real-time, free-play, human-in-the-loop
laboratory environment.
Although all eyes were on the live command cells, the experiments could
not be performed unless the commanders had somebody to command. Thus,
OTB was the critical basis of the experimental environment. OTB simulated
Blue and Red forces at the entity level, meaning that each warfighter or
tank or other entity in the battlespace was simulated individually. An entity
received a command from the command cell and then computer programs
(called behaviors) took over and controlled the detailed actions of the
entity—for example, the way in which the soldier ran or fired his weapon.
This simulation approach is called entity-level semiautomated or computer-
generated forces. The capabilities and characteristics of such entities were
strictly managed: we maintained a set of Red and Blue equipment manu-
als that provided the detailed characteristics of Red and Blue platforms,
weapons, and sensors.
It helped that the OTB software had an open architecture with source code
that allowed for modifications to meet specific requirements. For example, we
developed and added several dismounted infantry behaviors to the OTB soft-
ware in order to support the MDC2 experiments. These unit-level (squad and
fireteam) behaviors added tactically realistic functionality that reduced opera-
tor workload. For example, the React to Contact behavior provided intel-
ligent rule-based behaviors when a dismounted infantry unit came in contact
with enemy forces or indirect fire. The behavior resulted in one of several
potential outcomes, including advancing on the threat, withdrawing from the
threat, or pausing to survey the threat.
In a typical experiment, the OTB-simulated Blue force was commanded
by three Blue command cells. Two of the cells commanded a CAU each—a
force of roughly company strength. The two CAUs were parts of a Com-
bined Arms Team (CAT), a unit of about a battalion strength that was com-
manded by the third cell. In addition to these three cells, a notional brigade
commander provided input and course correction to help ensure experi-
mental objectives were being met. This brigade commander formed the
necessary link between the friendly forces and the experimental control cell
(described later).
The Red command cell included a commander and several staff mem-
bers. The commander was separated from his staff and could communicate
through radio calls. Because each radio call carried the possibility of being
detected by a friendly force sensor, communication was used sparingly. The
Red staff members interacted with the OTB simulation and had access to all
information gathered by their units. The enemy commander, however, only
had access to an infrequently updated display of the battlespace. The intent
of this display was to represent an approximation of the operational picture
available to an enemy commander in 2018.
Neutral forces, also simulated by OTB, acted independently of both the
friendly and enemy forces and added much complexity to the battlespace.
They included buses with predefined routes, trucks, and civilians in populated
areas.
Unlike the Blue and Red command cells that could see only the informa-
tion that their forces would acquire in the battle, the experimental control
cell had displays that showed all true locations and actions of Blue and Red
forces—the full ground truth. The control cell also listened to radio con-
versations between the Red commander and his staff or between the Blue
command-cell operators.
The members of the control cell did not interact directly with the simu-
lation unless there was a system problem but were responsible for ensur-
ing that the experimental objectives were being met and that the systems
were performing as expected. Furthermore, to maintain the integrity of the
experiments, the control cell did not interface directly with the Blue com-
mand cells. Instead, required communications to Blue commanders were
accomplished through the notional brigade commander using conventional
military protocol.
Observers and analysts were located throughout the laboratory and tracked
the action directly from the experimental control cell and the enemy com-
mander’s cell. These analyst observers were privy to all discussions within the
experimental control cell and between the enemy commander and his staff.
The analyst observers responsible for recording the Blue command opera-
tions were physically removed from the Blue cell operators they were observ-
ing but had access to everything the Blue commanders saw and heard. To
guard against any experiment-disrupting influences, these analysts were not
allowed to interact with the Blue staff.

THE BLUE COMMAND


The all-important command cells, the focus of our experiments, deserve a
more detailed discussion. Recall that later experiments included several Blue
command cells. One cell commanded the CAT (approximately a battalion-
strength unit). Subordinate to the CAT cell there were two CAU command
cells (each CAU being about company strength). In addition to commanding
the two CAUs, the CAT cell also commanded its own organic battlespace
assets as illustrated in Figure 2.12.
To provide greater realism in how the CAT and CAU cells would interface
with higher and lower echelons, we also included one brigade-level command
element, the superior of the CAT cell, and two platoon leaders, subordinate
to the CAU cells. However, these echelons—brigade and platoon—were not
a focus of our experiment, so we modeled them less accurately than the CAT
and CAU cells.
CAU and CAT command cells were similar in their internal organization.
Each cell consisted of a commander and three staff members called battle
managers. Although we allowed the cells to self-organize their responsibili-
ties, the battle managers usually assumed the conventional staff roles of intel-
ligence, maneuver, and effects.
In these roles, the intelligence manager (also called information manager)
was responsible for developing the picture of the battlespace by controlling
sensors and examining and classifying sensor images. The battlespace man-
ager (also called maneuver manager) was responsible for coordinating the
movement and synchronization of the maneuver elements, such as combat
robots and infantry carriers. The effects manager (also called fires manager)
was responsible for identifying and engaging targets. In the experimental
battles, the roles of commanders and battle managers were played by army
officers and noncommissioned officers, some retired, most active duty.
This allocation of responsibilities was far from fixed. The commander often
reallocated the tasks depending on the requirements of the situation and on
the skills of the battle managers. For example, the responsibility for battle
damage assessment tasks often floated between the effects manager and the
intelligence manager. Management of the critical collection assets, such as
UAVs, often devolved to the commander. The tasks of moving Non-Line of
Sight (NLOS) assets sometimes changed hands between the battlespace man-
ager and the effects manager. Because each cell member accessed the same
data set and had a unified reconfigurable command system, every member
could perform every task. This allowed for a variety of creative approaches to
the organization and procedures of a command cell.
In addition to the four members of a command cell, a typical C2V also car-
ried its driver and a gunner. The CAU C2V was a fully developed mock-up
that was enclosed (isolated from the room in which it resided) and attempted
to represent the space and lighting conditions expected in a future C2V (see
Figure 2.3). The driver “drove” the vehicle while a computer simulated the
appropriate location and the realistic scenery in the driver’s front view (a com-
puter screen). The cell members in this C2V did not experience any vehicle
motion, but vehicle noise did interfere with their communications. Within
the vehicle, the cell members had internal audio communications using head-
sets. While communicating to another command cell (located naturally in
another vehicle), a cell member talked to his functional counterparts over
their own radio channel (e.g., the intelligence manager of CAU-1 would talk
to the intelligence manager of CAU-2).

Figure 2.5. The relations and internal organizations of CAT and CAU command cells. Here CMD = commander; BM = battle manager.

THE COMMANDER SUPPORT ENVIRONMENT


Rich in sensors and networks, the Blue force provided the command cell,
such as Captain Johnson and his battle managers, with an enormous quantity
of information. This flood of incoming information could be both a blessing
and a curse.
It was a blessing because it did help Captain Johnson to know where his
forces, and the enemy forces, were located and what they were doing. He was
able to avoid threats and to destroy them before the enemy even detected the
presence of friendly units. On the other hand, the massive inflow of infor-
mation could be a curse, an overwhelming, disorienting cognitive burden, if
Johnson did not have effective tools to make sense of this information.
Similarly, the high proportion of autonomous, unmanned, robotic assets
within Johnson’s CAU was a double-edged sword. The autonomous sensors,
such as UAVs, were able to collect, prefilter, and communicate vast amounts of
useful information about the enemy. The robotic shooters, such as unmanned
mobile cannons, fired rapidly and accurately at their designated targets. The
human warriors stayed further from harm’s way and dedicated themselves to
less mechanical, more creative tasks. However, robotic platforms also required
extensive amounts of information—accurate, highly detailed orders—from
Johnson’s battle managers. Here was another potential curse: the command
cell had to feed their efficient but relatively brainless robotic warriors with
an exorbitant amount of command information. Without a powerful tool,
the command cell would not be able to generate such a complex, voluminous
output.
Fortunately, our Blue command cells had a suite of helpful tools—the CSE,
a key product of the MDC2 program (Figures 2.6 and 2.7). It was the CSE
that processed the flood of incoming information and reduced it to a man-
ageable set and presented it to the cell members in an easily understandable
manner. And it was also the CSE that translated high-level guidance and the
commands of Captain Johnson and his battle managers into highly detailed,
precise instructions to the robotic warriors.
In Figure 2.3, you see two displays in front of each command-cell member.
These screens, interfaces to the CSE, could be reconfigured and personalized
according to the cell member’s tasks and personal preferences. Typically, the
primary content of the screens was the visualization of the common operating
picture, an automatically updated and integrated (fused) picture of friendly,
neutral, and enemy forces. The information used to populate the common
operating picture came from the unit’s organic and higher-echelon sensors.
Because all displays were networked and drew the underlying data from a
shared database, an update made at one display immediately appeared on
all other displays.
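A minimal publish-subscribe sketch (in Python, with invented class names; not the CSE implementation) shows why a shared data store gives this update-once, see-everywhere property: every display subscribes to the same track store, so a single sensor update is pushed to all screens at once.

```python
from typing import Callable, Dict, List, Tuple

class TrackStore:
    """A shared database of battlespace tracks; displays subscribe to changes."""

    def __init__(self) -> None:
        self._tracks: Dict[str, Tuple[float, float]] = {}  # track id -> (x, y)
        self._subscribers: List[Callable[[str, Tuple[float, float]], None]] = []

    def subscribe(self, callback: Callable[[str, Tuple[float, float]], None]) -> None:
        self._subscribers.append(callback)

    def update(self, track_id: str, position: Tuple[float, float]) -> None:
        # One update, pushed immediately to every networked display.
        self._tracks[track_id] = position
        for notify in self._subscribers:
            notify(track_id, position)

class Display:
    """One command-cell screen; mirrors the shared picture locally."""

    def __init__(self, store: TrackStore) -> None:
        self.picture: Dict[str, Tuple[float, float]] = {}
        store.subscribe(self._on_update)

    def _on_update(self, track_id: str, position: Tuple[float, float]) -> None:
        self.picture[track_id] = position

# One sensor report updates every display at once.
store = TrackStore()
intel_screen, maneuver_screen = Display(store), Display(store)
store.update("enemy-T72-07", (48.2, 39.9))
assert intel_screen.picture == maneuver_screen.picture
```

The design choice being illustrated is that the displays hold no private state of their own: everything they show is derived from the one shared database, which is what keeps the common operating picture consistent across the cell.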
In addition to integrating and displaying the available information about the
conditions of the battle, the expert system and intelligent agents within the
CSE reasoned about the incoming intelligence reports, producing
assessments that correlated and fused detailed information. This
included such considerations as the enemy
status (e.g., fuel, ammo status, and health), alerts, planning versus execution
comparisons, current tasking, and more. In particular, once Captain John-
son or Sergeant Rahim entered configurable alerts into the CSE, the system
would notify them when, for example, an enemy force within an area of inter-
est exceeded a certain size. Other types of critical events, derived from the
Commander’s Critical Information Requirements, were handled similarly. To
help keep track of the Blue forces, the CSE’s monitoring tools provided feed-
back about the status of individual assets and of the echelon as a whole.
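The logic of such a configurable alert can be illustrated as follows. The class names, the rectangular area-of-interest model, and the threshold semantics are our own simplifications, not the CSE’s actual alert syntax:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class EnemyTrack:
    position: Tuple[float, float]  # (x, y) in kilometers

@dataclass
class AreaAlert:
    """A warfighter-configured alert: fire when the enemy strength inside
    an area of interest exceeds a threshold (illustrative, not the CSE API)."""
    name: str
    corner_min: Tuple[float, float]  # south-west corner of the area
    corner_max: Tuple[float, float]  # north-east corner of the area
    threshold: int                   # alert when the count exceeds this

    def _inside(self, pos: Tuple[float, float]) -> bool:
        return (self.corner_min[0] <= pos[0] <= self.corner_max[0]
                and self.corner_min[1] <= pos[1] <= self.corner_max[1])

    def check(self, tracks: List[EnemyTrack]) -> bool:
        count = sum(1 for t in tracks if self._inside(t.position))
        return count > self.threshold

# Two enemy tracks inside the named area: still at the threshold, no alert yet.
alert = AreaAlert("NAI-3 armor buildup", (10.0, 10.0), (20.0, 20.0), threshold=2)
tracks = [EnemyTrack((12.0, 15.0)), EnemyTrack((18.0, 11.0)), EnemyTrack((45.0, 2.0))]
```

A Commander’s Critical Information Requirement would map onto one or more such configured conditions, evaluated continuously as new tracks arrive.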
The CSE’s expert system also allowed Captain Johnson and his staff to task
and control their organic assets or groups of assets and to perform maneuver,
sensing, or shooting functions. A cell member communicated his intent to the
CSE using a set of warfighter-configurable rules. When the configured set of
conditions was met, CSE executed the predefined actions or recommended
actions to the designated cell member. When the cell member approved one
of the suggested actions—usually with a single click—the CSE translated
the chosen action into detailed, specific instructions to a specific manned or
unmanned asset—to fire a missile, to perform reconnaissance of a specific
target, and so on. Sometimes, the actions were set to execute automatically,
without staff interaction, when a set of predefined conditions was met. This
enabled the warfighters to take advantage of fleeting opportunities and to
minimize the risks of suddenly emerging threats in the battlespace. We will
talk about the CSE in great detail in chapter 3, and about implications of the
use of the CSE throughout this book.

Figure 2.6a. Some of the tools of the CSE.

Figure 2.6b. Key functions of the CSE.
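The condition-action mechanism described above can be sketched in deliberately simplified Python; the rule representation and the auto-execute flag are illustrative assumptions rather than the CSE’s real rule language:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class Rule:
    """A warfighter-configurable condition-action rule (names invented;
    the text does not describe the CSE's actual rule syntax)."""
    condition: Callable[[Dict], bool]  # predicate over the current picture
    action: str                        # detailed instruction for an asset
    auto_execute: bool = False         # fire without staff approval?

def evaluate(rules: List[Rule], picture: Dict) -> Tuple[List[str], List[str]]:
    """Return (auto-executed actions, actions recommended for approval)."""
    executed: List[str] = []
    recommended: List[str] = []
    for rule in rules:
        if rule.condition(picture):
            (executed if rule.auto_execute else recommended).append(rule.action)
    return executed, recommended

rules = [
    Rule(lambda p: p["enemy_in_nai_3"] >= 3,
         "retask UAV-2: reconnoiter NAI 3", auto_execute=True),
    Rule(lambda p: p["enemy_in_nai_3"] >= 3,
         "NLOS battery: prepare fire mission on NAI 3"),  # needs one click
]
executed, recommended = evaluate(rules, {"enemy_in_nai_3": 4})
```

The single-click approval described in the text corresponds here to an operator consuming the recommended list, while the auto-execute rules model the actions set to fire without staff interaction.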

A TYPICAL EXPERIMENT
Over the course of several years, we performed a total of eight experiments.
Each experiment took multiple months to prepare and weeks to execute. An
experiment involved multiple battles (we called them runs), each taking sev-
eral hours to complete. Although most runs were based on a common terrain,
force structure, and general situation, the specifics of Blue and Red
missions and dispositions were unique in each scenario.
Each scenario was designed to address multiple experimental objectives and
forced the commanders into different tactical dilemmas. At the beginning of
each scenario, the commanders and their staff members would go through
detailed collaborative and individual planning. Several scenarios included
fragmentary orders part way into the runs to force dynamic replanning to
meet the new objectives.

Figure 2.7. A typical CSE screen in front of a command-cell member.
In general, the Red force was significantly larger and more heavily armored,
although their vehicles were slower, their weapons and sensors had shorter
ranges, and their C2 capabilities were less sophisticated than the Blue’s. De-
feating such an enemy with a smaller, more lightly armored, but more agile
and better informed force was the common dilemma of the Blue forces.
In designing the experiments, we made an early decision to look at this
experimental program as one of discovery instead of hypothesis testing. The
most significant implication of this decision was that instead of focusing the
analysis on determining if a particular hypothesis was true or false, we instead
explored significant factors and their relations—for example, the information
requirements of the cell members, or which CSE features were most effective,
and why. The results of our experimental analysis influenced enhancements
to the experimental tools and often led to generalized findings that would be
pertinent to a range of future battle-command approaches and tools.
To focus our data collection and guide the direction of subsequent analysis,
we developed a core set of essential elements of analysis—the key questions
the experiments were to answer. An analytic decomposition of these elements
enabled us to carefully construct each experiment to ensure that the required
data elements were collected. To provide a valid operational context, we
based our scenario design on the U.S. Army TRADOC Caspian Sea scenario
and staffed the command cells mainly with active-duty military officers and
soldiers.
Each scenario in an experiment was based on a common “road to war”
story, which provided context for the upcoming battle and used the same basic
force structures. Let us discuss these conditions in some detail.

Mission
The commanders and their staff planned, rehearsed, and executed one or
two tactical missions each day. The scenario was derived from a collection of
unclassified Caspian Sea scenarios set in the country of Azerbaijan circa 2018
and was designed to force analytically significant dilemmas and command
decision making. The precise mission for each run varied, but the friendly
force was consistently on the offensive and had a terrain-oriented mission (i.e.,
secure an area, clear a path, etc.). The enemy mission varied more substan-
tively: in some runs the Red force was to exfiltrate across the border to Iran,
in some they defended a region, and in others their priority was to destroy the
Blue forces. Neither side was aware of the specific mission of its opponent.

Enemy
The Red forces, operating independently and unconstrained tactically, were
a mixture of Azeri army regulars, special-purpose teams, and insurgents. The
militants of the AIB made up the insurgent forces that attempted to
overthrow the pro-Western government. The AIB subverted control of the
Kura Brigade—a unit of the Azerbaijan army—from the Azeri government.
This brigade was composed of four motorized rifle battalions (MRB), one
self-propelled artillery battalion, and a surface-to-air missile (SAM) battery.
At the beginning of each scenario run, these elements were at approximately
50 percent strength. Two of the MRBs were outside the Kura River
Depression and were not represented in the simulation.
Special Purpose Forces (SPF) from the AIB worked with the Red
commander in the region and operated in four-man dismounted teams.
Additional fighters in the area belonged to the NKILO, which was not
directly aligned with either the AIB or Azeri regulars. The NKILO forces
carried guns but were not combatants, and they did not report to either Red
or Blue forces. Allegiance of NKILO was suspect and highly dependent upon
clan alliances. Figure 2.1 shows the force structure of the Kura Motorized
Rifle Brigade.

Figure 2.8. Essential elements of analysis—the key questions the experiments were to answer.

Terrain
The area where the battles took place—the so-called terrain box—was
typically located in the Kura River Depression in present-day Azerbaijan
(see Figures 2.9 and 2.10). The terrain box varied in size from experiment to
experiment based on the size of the force involved. The largest terrain box was
approximately 100 kilometers from north to south and 100 kilometers from
east to west, with the Kura River running west to east through the center of
the region.
The Kura River Depression region of Azerbaijan is remarkably flat with
elevation variations of –5 to +15 meters from sea level. The region is mostly
covered by sandy or hard-packed soil and is primarily an agricultural area for
grains and native crops. Large, thickly wooded areas are dispersed throughout
the area, and the region includes a large swamp in the south-central region of
the depression.
In order to increase the complexity of the experimental battles, the real-
world terrain was modified. Enhancements included over a hundred built-up
areas, dozens of mosques, 11 cemeteries, 36 national monuments, and 4 dis-
placed persons camps. To stay within the constraints of the simulation system,
the size of most of the built-up areas—very modest hamlets—was intention-
ally limited to 6 buildings per area. See Figure 2.10 for a graphical representa-
tion of the experiment terrain. Different colors indicate terrain features such
as mountainous areas, marshy areas, farmland, lakes and rivers, water, and
impassable swampland. The legend describes the cultural features, and the
numbered areas are the towns and small built-up areas.

Troops
The organization of Blue troops used in the experiments varied according
to the context of the experiment. In early experiments, a single CAU was
represented. In Experiment 5, we added a subordinate platoon with two squads
of infantry. In Experiments 6 and 7, we added a second CAU and a higher
headquarters (CAT).

Figure 2.9. The terrain box used in the experiment was set in a Caspian Sea region.

Figure 2.10. To increase the complexity of the environment, the terrain box included a variety of additional, fictitious features.
For the sake of realism, the CAT commander reported to a Brigade Com-
bat Team (BCT) commander and was one of three CATs represented in the
mission orders that came from the BCT commander. The BCT commander
controlled several sensor assets and High Mobility Artillery Rocket Systems
(HIMARS), had intermittent access to joint assets such as F-117A fighters,
and received information from a two-man Special Operations Force
team operating in the BCT Area of Responsibility. Each CAU included a
platoon led by a platoon leader who controlled two nine-man virtual squads.
See Figures 2.2, 2.11, and 2.12.

Figure 2.11. Overall organization of the Blue force. Gray shading shows the forces
represented with live personnel. Squads were computer simulated. Other forces were
not represented in the experiments. The Appendix describes the equipment.

Figure 2.12. Organization and equipment of BCT and CAT. The Appendix describes the equipment.

Civilians on the Battlespace


To add complexity to the decision-making environment, and to increase
the challenge of understanding the battlespace, we placed dozens of civil-
ians on the battlefield. They drove trucks similar to the enemy vehicles and
moved about in the towns alongside the insurgent forces. The Blue force had
to interact with the civilians in accordance with the rules of engagement and
in general had to avoid harming and antagonizing civilians.

Time
Mission planning was allotted two hours; mission execution was up to four
hours. In most cases, the Blue force was ordered to accomplish its objectives
by a specified time limit.

A TYPICAL BATTLE HISTORY


As an example, consider a fairly typical experimental battle, Run 6 of Experi-
ment 6. The Red force is positioned south of the Kura river, with the bulk of its
assets in the western part of the area shown in Figure 2.13a. They expect that
the Blue force is about to attack from the positions north of the river. The Red
commander’s plan is to delay the Blue force while exfiltrating his forces across
the international boundary (southern part of Figure 2.13a) into friendly Iran.
The Blue force plans to prevent the Red exfiltration. To this end, one CAT
(not explicitly modeled in the experiment) will secure the western part of the
international boundary while CAT-2 (the force modeled in our experiment)
will take key objectives (MEAD and SHERMAN) in the central and eastern
part of the area, thereby enveloping the Red force. CAT-2’s sensor assets will
be the first to cross the river, to probe the Red positions, and to set the nec-
essary conditions for the subordinate units to begin maneuver. CAU-2 will
then initiate the main effort toward the objective SHERMAN (in southeast)
while CAU-1 will execute the supporting effort and take objective MEAD
(center).

Figure 2.13a. An example of an experimental battle: Blue initial plan. See Appendix
for explanation of abbreviations.

Figure 2.13b. An example of an experimental battle: Blue change of plan.


Initially, the battle unfolds approximately as expected. By H+16 (i.e., 16
minutes after the start of the operation), CAT’s aerial reconnaissance ele-
ments are crossing the river, and shortly thereafter, at H+37, CAU-2 secures
the river crossing site. At H+42 CAU-1 also exits its assembly area heading
toward the crossing site. At that point, at H+46, the brigade headquarters
sends CAT-2 a major redirection: because CAT-1 failed to secure the south-
ern boundary and block the Red force, the enemy is presently escaping south,
and CAT-2 is ordered to attack into the western part of the area, to pur-
sue and destroy as much of the fleeing Red force as possible (Figure 2.13b).
Remarkably, using their respective CSE facilities, within the next 10 minutes,
all three command cells—CAT-2, CAU-1 and CAU-2—manage to formulate
and coordinate a new plan.
By H+73 CAU-2 has already turned onto a new axis in the south. How-
ever, its knowledge of the enemy situation in the western part of the box is
rather limited—until this time, all reconnaissance assets were focused on the
central and eastern areas. Now they have to reorient their reconnaissance
assets—especially UAVs—to the western area. The Red command notices
the change in the UAV routes and by H+85 already notifies the Red forces—
correctly—that the Blue has reoriented toward the southwest. Lacking situ-
ation awareness and sweeping with UAVs ahead of their main forces, CAU-1
and CAU-2 advance cautiously, even though in reality most of the Red force
has already retreated further south. By H+162, the experiment control cell
announces the end of the experiment. The bulk of the Red force successfully
escapes.

INFORMATION PROCESSING, SITUATION AWARENESS, AND BATTLE COMMAND
Such experiments brought a rich harvest of findings. Among them, the
role of one factor emerged as the most dominant and pervasive—situation
awareness. In our experiments, it was situation awareness and its impact on
the commander’s decisions that often determined the outcome of a battle.
The fate of a unit was largely dependent upon how the commander and staff
deployed sensors and forces to fight for information and exploit the unit’s
advantages, if any, in situation awareness.
Although the MDC2 program was probably the first systematic experimen-
tal study to obtain extensive quantitative data on the role of situation aware-
ness in battle command, the qualitative recognition of this role has a long
history and is well established in military doctrine (Figure 2.14). Informa-
tion, as an element of combat power, “enhances leadership and magnifies the
effects of maneuver, protection and firepower.”19 Discovering and measuring
quantitatively how information influences a commander’s situation awareness
and impacts his decision making has been a key objective of this program’s
experiments.
62 Battle of Cognition

Figure 2.14. Combat power is critically dependent on leaders' situation awareness (after U.S. Army Field Manual 3.0, Operations, p. 4-4).

The dominant role of situation awareness (i.e., the ability to obtain the
necessary information about the situation in which a military force operates)
should not come as a surprise. In fact, it has long been argued that the
very nature of command organizations has been historically driven by the
need for situation awareness.
In the world of warfare, an influential historian argued that “the history
of command can be understood in terms of a race between the demand for
information and the ability of command systems to meet it.”20 It also should
not be surprising that the solution to the problem, in all ages, had much to do
with technological innovations in information processing.
Consider the Napoleonic revolution in battle command. In the large-scale
operations of the Napoleonic age, the enormously enlarged and geographi-
cally dispersed armies engendered massively increased flows of information.
The emperor could no longer be in person with every corps; he needed detailed
reports. The task of transforming these formidable inflows of information into
adequate situation awareness was too difficult even for a genius of Napoleon’s
caliber. To solve the problem, he introduced a system of remarkable innova-
tions, technological in nature even if the technology was based on humans
and paper.21
He devised sophisticated databases—a system of formalized reports, specialized
summaries, and custom-designed cabinets for efficient storage and retrieval
of such information. He institutionalized the use of a relatively recent tech-
nological development—accurately triangulated and mass-produced maps—
as a medium for time-space modeling and analysis of strategic movements.22
A Journey into the Mind of Command 63

To process the incoming reports and outgoing orders with greater speed and
accuracy, he devised a system of specialized human information processors—
staff officers—responsible for formally decomposed and allocated sets of
functional tasks. This suite of technological innovations—based on paper
databases and human information processors—was at the core of Napoleonic
battle-command revolution.
In the world of industrial management, it was also long recognized that the
structure and processes of an effective organization are driven by the need
to transform large volumes of information into useable forms—situation
awareness and decisions. The ability of a decision-making organization to
produce successful performance is largely a function of avoiding information-
processing overload,23 not unlike what we saw in the Napoleonic invention of
a new battle command. Thus, in the 1990s, globalization and computerization
drove massive changes in industrial and commercial management—reduction
in layers of management, just-in-time operations, and networked structure of
enterprises.
In short, the chain of influences works as follows. New conditions of war-
fare (such as Napoleon’s large, distributed corps) both engender and demand
more information. More information challenges the ability of the old com-
mand system to transform it into actionable situation awareness. To resolve the
challenge, a capable military develops or adopts new information-processing
technologies (such as Napoleon’s paper databases and specialized human
processors), with suitable organizations and procedures—a new battle-command
system.
Captain Johnson’s command cell was a product of a similar chain. Given
the unusual degree of Blue forces dispersion and their physical separation
from the enemy, and with the confusing flood of information produced by
multiple sensors and networks, can Captain Johnson and his battle managers
maintain an effective level of situation awareness? A key hypothesis of the
MDC2 program was that they could, if provided with an appropriate suite of
tools, such as the CSE.
CHAPTER 3
New Tools of Command:
A Detailed Look at the
Technology That Helps
Manage the Fog of War
Richard J. Bormann Jr.

Skip this chapter if computer terminology bores you. The subsequent chapters
are quite understandable without the heavy technical content of this one. On
the other hand, for a technically minded reader, this is a great place to learn
about the nuts and bolts of the network-enabled battle-command tools actu-
ally built and tested in the MDC2 program.
Let’s begin by introducing two important abbreviations. The Battle Command
Support Environment (BCSE) is the overall system we built to perform the
experiments with battle command within the MDC2 program. It includes
many diverse components with a range of capabilities. A large part of these
capabilities are the functions that actually support a command cell like Cap-
tain Johnson’s, with his battle managers. That set of functions is called the Com-
mander Support Environment (CSE). In this chapter we will describe the
entire BCSE; in other chapters, we focus almost exclusively on the CSE.
The BCSE is an execution-centric command and control (C2) decision
support system for cross-functional, collaborative mission planning and exe-
cution. It provides a common operating picture (COP) for enhanced, real-
time situation awareness (SA). It supports multiple echelons, from battalion
down to the individual mounted and dismounted warfighter level, including
both manned and unmanned platforms. Using the BCSE, command cells
control manned and robotic assets in a network-enabled, cross-functional
environment in response to rapidly changing battlespace conditions and digi-
tally share the changes across organizations in real time.
In developing BCSE, we pursued a number of objectives centered around a
network-enabled approach to battle teamwork. All assets within the command
cell are able to collaborate and share their view of the world. The information
is shared by every human and every robotic system within the commander’s
control, so that they can each operate under the same assumptions. Even
though, with inevitable periodic losses of communications, perfect instanta-
neous and continuous sharing of information may not be always possible, the
system does its best to keep the information stores as up to date as possible.
In its current form, the BCSE is based on a three-tier C2 architecture with
integrated Battlefield Functional Areas (BFA). The BFAs handled directly
within the BCSE include Maneuver, Intelligence, Effects, and Logistics. The
three-tier architecture provides decision support (1) at the warfighter graphi-
cal user interface, (2) among multiple networked assets, and (3) at the indi-
vidual asset level. The decision support capability is distributed across the
command cell while providing redundancy of the information model. The
latter is important because it ensures that a loss of any one asset does not
result in the loss of existing information and keeps the information store from
becoming a central point of failure.
The integrated BFAs allow every member of the command cell, regardless
of his functional specialization (e.g., military intelligence or logistics), the
ability to pitch in and share the workload, with the same tools and functions
available to every member. If
the information manager is overwhelmed while the effects manager has spare
cognitive cycles, the effects manager can pitch in to help with intelligence
tasks without switching computer screens or moving to other workstations.
The architecture is also tailorable—it allows the users to tailor the system
interface to their specific preferences and warfighting needs.
In the following section, we begin by describing the architecture of BCSE
in some detail. We then continue by highlighting some of the tools and fea-
tures that are available to the warfighter and finish by describing the decision
support system framework that is at the heart of the BCSE.

THE ARCHITECTURE OF THE BCSE


Our main approach to the architecture design is to distribute C2 intelli-
gence across the network as a set of intelligent agents that assist the members
of the command cells. The term intelligent agent means many things to many people.
For the purposes of this chapter we use the definition by Gheorghe Tecuci:
“An intelligent agent is a knowledge-based system that perceives its environ-
ment (which may be the physical world, a user via a graphical user interface,
a collection of other agents, the Internet or other complex environment);
reasons to interpret perceptions, draw inferences, solve problems, and deter-
mine actions; and acts upon that environment to realize a set of goals or tasks
for which it was designed.”1 Unlike a conventional program, an intelligent
agent should be able to do more than merely obey commands. Instead, it may
be able to ask for clarifications and modify or even refuse some requests. It
should employ a degree of knowledge of the user’s goals and needs.
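Tecuci's definition can be illustrated with a minimal perceive-reason-act sketch. All class, rule, and goal names below are our own illustration, not part of the BCSE:

```python
# A minimal perceive-reason-act agent loop, illustrating Tecuci's definition.
# All names here are illustrative; the BCSE's actual interfaces differ.

class IntelligentAgent:
    def __init__(self, goals):
        self.goals = set(goals)          # tasks the agent was designed to achieve
        self.beliefs = {}                # the agent's model of its environment

    def perceive(self, observations):
        """Fold new observations into the agent's beliefs."""
        self.beliefs.update(observations)

    def reason(self):
        """Draw inferences and select an action serving the goals."""
        if self.beliefs.get("threat_nearby"):
            return "evade"
        if self.goals:
            return f"pursue:{sorted(self.goals)[0]}"
        return "idle"

    def act(self, request):
        """Unlike a conventional program, the agent may question a request
        that conflicts with its knowledge of the user's goals."""
        if request == "halt" and "recon_area_x" in self.goals:
            return "clarify: halting would abandon goal recon_area_x"
        return self.reason()

agent = IntelligentAgent(goals=["recon_area_x"])
agent.perceive({"threat_nearby": False})
print(agent.act("proceed"))   # pursue:recon_area_x
print(agent.act("halt"))      # asks for clarification instead of blindly obeying
```

The point of the sketch is the last call: a request is weighed against the agent's goals rather than obeyed unconditionally.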

In the BCSE, there are three types of agents:

• Commander and staff agents provide the commander and his staff with intelligent
information that helps them understand the battlespace conditions and occurrences
as well as giving them the ability to control the assets under their command.
• Collective agents reside at nodes within the network and handle the coordination of
multiple assets on behalf of the commander and his staff. Examples of cross-network
agents include the Attack Guidance Matrix (AGM) and agents that maximize
intelligence gathering through the coordination of multiple sensors based on the
commander’s intent.
• Asset agents enable each asset to understand how to carry out the commander’s
intent and directives within the scope of the overall mission.

The BCSE contains automated functions that allow the commander to
efficiently control his assets so that he can effectively focus on his mission.
Figure 3.1 shows some of these automated functions provided by the BCSE.
An example of an agent that provides employment of fires and effects is the
AGM—it helps the command-cell members with a combination of threat
analysis, survivability estimates, and weapon-to-target pairing. It can also
automatically execute fires by issuing a command, for example, to an automated
unmanned mortar to fire at a particular target, automatically or semi-
automatically, as instructed by a command-cell member.

Figure 3.1. The main automated functional areas of the BCSE.
Such agents are usually distributed across the network. In Figure 3.2, for
example, each of the so-called brains represents an intelligent agent that among
other things contains a knowledge base and a reasoning engine. The knowl-
edge base contains factual and heuristic knowledge about a specific domain.
The knowledge base is made up of a data model representing the worldview
as well as the rules that define the problem-solving paradigm. The reasoning
engine, or reasoner, is responsible for applying the set of rules against the cur-
rent set of knowledge to reach a result, develop a recommendation, or estab-
lish a new fact. Agents are widely distributed over the network. The most
important reason for such a distribution is that the agent can benefit from the
proximity to the object that it reasons about, or to the user whom it assists.
For example, if the agent resides directly with a robotic asset like a ducted
fan unmanned air vehicle (UAV), the agent can make decisions on how to
best perform a reconnaissance task taking into consideration the current bat-
tlespace conditions in order to determine how to best maneuver, what amount
of risk is acceptable for the particular mission, how to position itself to cap-
ture the best imagery, how to avoid threat and react when threatened, and
how to control its sensors. Knowing the current state of the battlespace and
keeping the decision-making process close to the UAV enables it to make
informed and educated decisions about itself without the need for a human to
micromanage the UAV.
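The knowledge-base-plus-reasoner pairing described above can be sketched as a tiny forward-chaining engine: rules are applied against the current set of facts until nothing new can be derived. The rules shown are invented examples, not the BCSE's actual knowledge content:

```python
# A tiny forward-chaining reasoner: rules are applied against the current
# set of facts until no new fact can be derived. The rules below are
# invented illustrations, not the BCSE's actual rule set.

def forward_chain(facts, rules):
    """facts: set of strings; rules: list of (premises, conclusion) pairs."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)       # derive a new fact
                changed = True
    return facts

rules = [
    ({"tracked_vehicle_detected", "in_enemy_sector"}, "possible_enemy_tank"),
    ({"possible_enemy_tank", "within_mortar_range"}, "recommend_fire_mission"),
]

derived = forward_chain({"tracked_vehicle_detected", "in_enemy_sector",
                         "within_mortar_range"}, rules)
print("recommend_fire_mission" in derived)   # True
```

Chaining matters here: the second rule fires only because the first rule has already established "possible_enemy_tank" as a new fact.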
Another advantage of distributed agent architecture is redundancy of the
data model. While a copy of the data model resides within each agent in the network,
communication disruptions and delays may cause the contents of the copies
to differ. The system tries to keep the information as consistent across
agents as possible. If an agent fails, its data model can be recovered from
another agent’s data model. Distributed agent architecture also eliminates a
central point of failure because the knowledge base is not centrally located. A
loss of a knowledge base instance is fully recoverable and is not catastrophic.
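The recovery argument can be sketched as follows; the per-item version counters and the "take the freshest copy" rule are our simplification of what a real replicated store would need:

```python
# Sketch of data-model redundancy: every agent holds a copy keyed by
# per-item version counters, and a failed agent rebuilds its copy from the
# freshest versions held by surviving peers. A deliberate simplification.

class AgentStore:
    def __init__(self):
        self.data = {}      # key -> (version, value)

    def put(self, key, value):
        version = self.data.get(key, (0, None))[0] + 1
        self.data[key] = (version, value)

    def recover_from(self, peers):
        """Rebuild this store from the newest versions across the peers."""
        for peer in peers:
            for key, (version, value) in peer.data.items():
                if self.data.get(key, (0, None))[0] < version:
                    self.data[key] = (version, value)

a, b, c = AgentStore(), AgentStore(), AgentStore()
a.put("target_7", "located")
b.data.update(a.data)            # normal synchronization between agents
b.put("target_7", "destroyed")   # b learns a newer fact
c.recover_from([a, b])           # c failed and rebuilds from its peers
print(c.data["target_7"][1])     # destroyed
```

Because every agent carries a copy, the recovering agent ends up with the newest known state even though one peer still holds stale information.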
On the other hand, distributing the agents across the network leads to a
challenging question: who is in charge? If the agents are all making deci-
sions on their own then there can be competition or a duplication of effort in
achieving the mission or accomplishing a task. In the BCSE the commander
and staff are always in charge and have control of every asset. They have the
ability to micromanage the assets at any time; however, this is not the ideal
situation to be in. As we noted when describing the types of agents, the Col-
lective Agents handle the coordination of multiple assets on behalf of the
commander and staff. Through the use of the Collective Agents, much of the
coordination and micromanagement is automated, leaving the command cell
to focus on the mission and not the minute details of asset control.
A command-cell member can adjust the degree of autonomy given to the
agents as he feels necessary. For example, if the command cell wants to give
the AGM the authority to execute automatic fires against specified target
types immediately upon detection of the target, they can instruct the AGM to
do so. Conversely, the command cell can limit control, setting the AGM to
provide recommendations only. In effect, a cell member has the ability to pro-
gram and control the agents during the operation by modifying the rules of
engagement, altering threat assessment and targetability criteria, and setting
weapon-to-target preferences through the use of their user interface. This
user interface is known as the CSE and will be explained in more detail in a
moment. In the chapters that follow, we will use the term CSE to refer to all
the functions that support the command-cell members, in order to exclude
other parts of the BCSE that support command-unrelated functions, such as
simulation.
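The adjustable autonomy just described can be sketched as a mode switch on the agent. The mode names and target types below are illustrative, not the AGM's actual configuration vocabulary:

```python
# Sketch of adjustable autonomy for an attack-guidance agent: in
# "automatic" mode it fires on matching target types; otherwise it only
# recommends. Mode names and target types are our own illustration.

class AttackGuidance:
    def __init__(self, mode="recommend", auto_fire_types=()):
        self.mode = mode                          # "recommend" or "automatic"
        self.auto_fire_types = set(auto_fire_types)

    def on_detection(self, target_type):
        if self.mode == "automatic" and target_type in self.auto_fire_types:
            return ("fire", target_type)          # execute without asking
        return ("recommend", target_type)         # propose, human decides

agm = AttackGuidance(mode="recommend")
print(agm.on_detection("enemy_tank"))        # ('recommend', 'enemy_tank')

agm.mode = "automatic"                       # operator grants more autonomy
agm.auto_fire_types = {"enemy_tank"}
print(agm.on_detection("enemy_tank"))        # ('fire', 'enemy_tank')
print(agm.on_detection("unknown_vehicle"))   # ('recommend', 'unknown_vehicle')
```

The operator changes the agent's behavior at runtime simply by adjusting its mode and criteria, which mirrors how a cell member "programs" the AGM through the user interface.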
Now let us consider each of the three tiers of our architecture (Figure 3.2).
Tier 1 of the BCSE provides decision support within the CSE.
The CSE is the primary interface between the command-cell members and
the rest of the system—it delivers the battle information to the cell operators,
and receives the battle commands from the operators. The CSE assimilates a
flood of digital battlespace information coming from intelligence reports and
sensory information of assets reporting from the battlespace into a graphical
picture so that the commander and his staff can quickly and easily understand
what is happening. This graphical picture is known as the common operating
picture (COP) (Figure 2.7 of Chapter 2). The CSE also provides information
filters that a user can customize to control the type and quantity of infor-
mation being received based on his information needs. The CSE includes a
suite of tools that operators use to control and coordinate assets, tools like
Task Synchronization Matrix, Threat Manager, AGM, Alert Tracker, and
tools for reconnaissance guidance, intelligence and picture management, and
Commander’s Critical Information Requirements (CCIR).

Figure 3.2. The BCSE’s three-tier approach provides a distributed C2 decision support architecture. See Appendix for explanation of abbreviations.
In particular, the CSE includes a tool that visually presents to the cell
members the COP (Figure 3.3). It uses both 2D and 3D maps and a military
standard set of graphics, called MIL-STD-2525B, which includes, among
other things, icons and graphical control measures. Real-time updates to the
COP display friendly movement, detonations, missile fires, enemy positions,
noncombatants, detections of unknown (possibly hostile) assets, battle dam-
age assessment, and the logistical status (ammunition, fuel, and health) of
friendly forces. We discuss this and other tools in detail later.
One of the most intriguing features of the CSE is its integration of mul-
tiple BFAs. It allows every user to task and retask battlespace assets; monitor
execution; and facilitate maneuver, reconnaissance, and effects management
through a single graphical user interface. The BFA integration means that the
role of the command-cell member does not have to be rigorously restricted to
being only an information manager or only an effects manager, each with his
own unique set of tools. Rather, each member of the command cell can con-
figure the system to his needs, possibly fulfilling several functions (i.e., serving
more than one BFA). Each member can determine threats, view intelligence
information, classify and identify targets, maneuver, assign reconnaissance,
perform BDA, and fire all from the same screen.
This integration allows a reduced staff to operate according to need, not
function. If a cell member is overwhelmed with a task or becomes incapaci-
tated, others can step in and help out without having to change stations or
computer screens. In fact, we observed in some of the MDC2 experiments
how command-cell members used the BCSE to assign themselves roles that

were well outside traditional doctrine. In one such experiment, for example,
the cell members divided their roles so that one maintained responsibility
for maneuver, intelligence, and fires for the close fight while another handled
all these functions for the deep fight.

Figure 3.3. The CSE utilizes a knowledge base to integrate the static, temporal, and spatial information into a coherent set of knowledge for the commander and staff to operate on across echelons.
The CSE provides the following:

• Visualization of Information: the real-time display and focal point for situation
monitoring and understanding in forms such as 2D and 3D maps, graphics, icons,
tables, and reports.
• Collection management and logistical displays: the commander’s portal to the
current status of each of his assets in the battlespace.
• Command Center: the operators’ main control center for task creation and
modification, mission planning, mission coordination, and collaboration.
• Task Decomposition: the ability to break down a complex task into a set of individual
tasks.
• Terrain Analysis: the ability to understand the terrain, how it can be used, and how
to maneuver across it.
• CCIR Management and Display: the ability to specify and receive alerts, cues, and
notifications in response to topics of critical interest to the commander and his
staff.

Figure 3.4. The BCSE helps the command cell in both planning and execution.

Tier 2 contains agents that meet the commander’s and staff’s specific
decision support needs across the assets in their control. These agents are
known as collective agents because they treat the assets within their
control as a dedicated network focused on combined goals.
The collective agents function as the commander’s assistant by directing,
coordinating, and synchronizing the assets to achieve mission goals. They
also provide recommendations and assimilate disparate information into the
COP. Each of these agents can be hosted on any asset equipped with the
appropriate hardware anywhere within the command cell’s control and can
move from one asset to another asset if required. The movement from one
asset to another may occur for a number of reasons including the destruction
or critical failure of its host asset.
Each of the collective agents fulfills a specific need of the cell members, and
therefore, there are many different types, which include those that manage
schedules; provide guidance such as attack, BDA, and reconnaissance; report
and manage threat information; fuse and synchronize data into information
that the command-cell members can easily and quickly comprehend; provide
critical alerts, such as potential fratricide or violations in rules of engage-
ment; provide memory management, which is important in view of the large
volumes of battle-relevant information in the network-enabled environment;
and collect data for postbattle analysis.
Tier 3 is a collection of resident agents—one at every asset controlled by
the command cell. The idea is to provide a networked environment and
communication mechanism such that each asset keeps the entire community
aware of its current state while in turn being kept informed of the environ-
ment and its surroundings. As a result, each asset’s knowledge base is kept
as synchronized as possible to the full state of the battlespace. This way, it
can reason on how to maneuver, control its sensors, react to threat, and take
initiative to accomplish the goals of the command cell. An important point
to mention is that while each asset may know how to maneuver on its own
(humans as well as sophisticated robotic platform have built-in reasoners for
knowing how to move and avoid obstacles), the resident agents provide an
understanding of how to maneuver in respect to the current command cell’s
goals. They understand the mission, not just the specific task.
The asset should be able to act for the good of the mission and not just
for the good of itself; the asset must also understand the world around it.
Besides merely breaking down the asset’s task into the necessary atomic
actions, it must understand its purpose and goal as well as every other asset’s
purpose and goal, and even the commander’s intent. This also enables the
asset to carry out its mission when communications with the commander
are lost, and to react to threats in a way that is intelligent and meaningful,
according to mission parameters. In order to react appropriately, it must
also understand its role in the mission, what it can and cannot sacrifice for
the sake of the mission, and how to avoid causing harm to other participants
in the mission.

The resident agents are aware of parts of the environment that may lie outside
the field of view of the asset’s own sensors and can therefore help
provide a better recommendation for how to maneuver. From that recom-
mendation, the human or robot can use his brain or its onboard navigation
system to work out the details. This is important because resident agents are not
merely artifacts of a simulation. In fact, they provide as much to a live envi-
ronment as they do to a simulated one.
Let us clarify this with an example. Suppose a UAV is
assigned to reconnoiter a potential target, it must understand the level of risk
it can endure to reach the target and accomplish the task. The detection of
an air-defense system might normally cause a reaction to flee, but if the mis-
sion dictates that mission success outweighs the risk of losing the platform,
then the system will decide to continue the mission despite the risk—fleeing
is not an option. In other words, the UAV knows what it needs to accom-
plish, what risk it can take, and when to call off the task. The UAV must
also ensure that when it does react, it doesn’t react in a way that may bring
enemy attention to others—like fleeing back to base.
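The UAV's risk reasoning can be sketched as a comparison of mission priority against assessed risk. The 0-to-1 scales and the covert-withdrawal rule are invented for illustration:

```python
# Sketch of the UAV's risk trade-off: continue when mission priority
# outweighs the assessed risk, otherwise withdraw in a way that does not
# reveal friendly positions. The scales are invented for illustration.

def react_to_threat(mission_priority, platform_risk):
    """Both arguments on a 0..1 scale; higher risk means likelier loss."""
    if mission_priority >= platform_risk:
        return "continue_mission"      # fleeing is not an option
    return "withdraw_covertly"         # never flee straight back to base

print(react_to_threat(mission_priority=0.9, platform_risk=0.6))  # continue_mission
print(react_to_threat(mission_priority=0.3, platform_risk=0.6))  # withdraw_covertly
```

The key design point is the second branch: even when the agent decides to break off the task, the reaction itself is constrained by the mission.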
The Tier 3 resident agents are currently provided in two forms, which are
referred to as the Platform Support Environment (PSE) and the Soldier/
vehicle Support Environment (SSE).
The PSE is used in unmanned (robotic) assets to provide system guid-
ance and control. Using a knowledge base, with its set of rules, it translates
mission-related tasks into directives understood by the robotic system’s con-
trol software. For example, the commander requests a particular area to be
searched for enemy. The collective agents assign asset X to perform the task
due to its availability and proximity to the area. A route and set of sensor
controls to accomplish the task are determined by the collective agents and
passed to the resident agent on asset X. Assuming asset X only understands
directives in a specific message format of segments and speeds, the agent
formats the appropriate messages and sends them to the robotic system’s
control software. As the asset moves out on its task, an enemy indirect fire
system is detected by another asset on the network. The resident agent on
asset X realizes that the enemy asset’s attack range intersects X’s route. As a
result, the resident agent generates a new route around the danger area and
notifies all the other agents on the network (including the command-cell
members) of its course change and expected time of task completion. Again
the agent formats the appropriate messages and sends them to the robotic
system’s control software. The robot reroutes and completes its task without
the cell member’s intervention. Without the resident agent, the asset would
never have known about the danger.
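The rerouting behavior in this example can be sketched geometrically: check whether any leg of the route passes within the threat's attack radius, and if so insert a detour waypoint. The flat 2D geometry below is a simplification; a real planner would account for terrain and proper geodesy:

```python
# Sketch of the resident agent's reroute check: if any leg of the route
# passes within a threat's attack radius, insert a detour waypoint offset
# away from the threat. Flat 2D geometry; a deliberate simplification.
import math

def dist_point_segment(p, a, b):
    """Shortest distance from point p to segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def reroute(route, threat_pos, threat_radius):
    """Return (new_route, changed); new_route avoids the threat circle."""
    new_route, changed = [route[0]], False
    for a, b in zip(route, route[1:]):
        if dist_point_segment(threat_pos, a, b) < threat_radius:
            # Detour: push the leg's midpoint away from the threat,
            # well clear of the attack radius.
            mx, my = (a[0] + b[0]) / 2, (a[1] + b[1]) / 2
            vx, vy = mx - threat_pos[0], my - threat_pos[1]
            norm = math.hypot(vx, vy) or 1.0
            scale = (threat_radius * 1.5) / norm
            new_route.append((threat_pos[0] + vx * scale,
                              threat_pos[1] + vy * scale))
            changed = True
        new_route.append(b)
    return new_route, changed

route = [(0, 0), (10, 0)]
detoured, changed = reroute(route, threat_pos=(5, 1), threat_radius=3)
print(changed)   # True: the original leg passed within the threat's reach
```

After the detour is generated, a real resident agent would, as the text describes, reformat the new route into the platform's message format and notify the other agents on the network.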
The SSE provides assistance to the soldier by providing recommendations
on reconnaissance and fire support, alerts and cues, and situation awareness
through the development and display of the COP (Figure 3.5).
Figure 3.5a. The SSE provides C2 and decision support to the dismounted warfighter.

Generally, an agent has a way to display its output to the humans. The
human warfighter has his own brain to make decisions and to break down a
task into subtasks. The warfighter can only make decisions on what is actually
known to him (unless he is guessing). The knowledge base knows the overall
known state of the battlespace. It knows the global situation and what every
asset is currently doing and is expected to do. Decision tools, using the knowledge
base, can keep track of information that is important to the warfighter, figure
out when situations are occurring that are important to the warfighter, and
make recommendations to the warfighter—all based on detailed information
that may be difficult or tedious to keep track of when your focus is on sur-
vival. So, there needs to be a good way to interface the output of the decision
tools to the warfighter so that he can use his brain to accomplish the task at
hand. The soldier support environment uses a visualization display to show
the COP and alert the commander to critical events. The agents interface to
the display.
Now, having discussed the BCSE system, the humans that use it to command
the assets, and the assets themselves, let us not forget about the all-important
real world within which all this operates. A system like BCSE, along with
its commanders and assets, would normally exist in a real battlespace popu-
lated by real terrain features, enemies, neutrals, other friendly entities, and so
forth. However, in the MDC2 program, we did not have the luxury of experi-
menting with BCSE in a real battle or even a field exercise. Instead, the real
world was simulated by an advanced battle simulation system called OneSAF
Testbed (OTB).2 It was OTB that simulated such physical events as move-
ments of assets, their fires, and effects of fires. The overall simulation suite
also included sensor effects servers (these simulated, for example, whether a
particular UAV would be visible to a radar system under the given conditions)
and imaging servers that generated simulated imagery, for example, how an
enemy tank would look to a UAV’s video camera. Because the world (includ-
ing the assets) was simulated, each PSE resided not on a real asset but was
instead connected to its simulated asset in OTB.

Figure 3.5b. Another view of the SSE for the dismounted warfighter.
Figure 3.6 depicts the architecture used in one of the MDC2 experiments.
The circular part of the diagram on the right shows how multiple cells com-
municate—they are linked together on a shared C2 Internet. Recall that a cell
consists of a commander and his staff. In this example configuration, there
are 10 cells that include one higher headquarters (HHQ), one battalion (Bn),
two companies (Co), two platoons (PL), and four squads (SQ). Each cell also
owns the items represented in the callout on the left. Following the three-
tier architecture, the cell contains C2 agents that handle platform support at
the robotic level (PSE), soldier support for the dismounted warfighter (SSE),
vehicle support (VSE) for mounted warfighters, commander support for the
commander and his staff (CSE), and collective agents providing support across
the assets (CA). The architecture distributes the C2 elements at three levels
or tiers: the commander and staff (CSE), across the networked components
(CA), and at the individual assets (PSE, VSE, SSE).

Figure 3.6. The BCSE architecture and the decision support components used in Experiment 7.
The reader, we are afraid, may still be uncertain of how all this works
together. We will explain this later via an illustrative example. But before we
do so, let us take a closer look at the functions and tools within the most
important part of the system—the CSE.

WARFIGHTER’S COMMAND FUNCTIONS AND TOOLS WITHIN CSE
Planning Missions and Courses of Action (COAs)
The command-cell member (let us call him an operator for the purposes
of this section—he operates the functions of CSE) may create a new mission
definition working individually or in a collaboration session simultaneously
with his commander, peers, subordinates, or with higher headquarters. Each
mission can contain one or more COAs, which include the military assets, in
the form of units and platforms, associated with the COA, the organization of
those assets, their set of tasks, military graphics, terrain overlays, and more.
Having completed the analysis of a mission, the operator selects a COA to
be downloaded to all the decision support entities within the command cell.
Every mission can be saved and reloaded at any time. The saved mission is
stored in an XML format. A mission wizard is provided to aid the operator in
the process of mission creation.
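Saving and reloading a mission in XML might look like the following minimal sketch. The element and attribute names are invented; the BCSE's actual schema is not published here:

```python
# Minimal sketch of saving and reloading a mission as XML. Element and
# attribute names are invented; the BCSE's actual schema differs.
import xml.etree.ElementTree as ET

def mission_to_xml(name, coas):
    """Serialize a mission with its COAs and their assets to an XML string."""
    mission = ET.Element("mission", name=name)
    for coa in coas:
        coa_el = ET.SubElement(mission, "coa", name=coa["name"])
        for asset in coa["assets"]:
            ET.SubElement(coa_el, "asset", id=asset)
    return ET.tostring(mission, encoding="unicode")

def xml_to_mission(text):
    """Reload a mission saved by mission_to_xml."""
    root = ET.fromstring(text)
    return root.get("name"), [
        {"name": c.get("name"),
         "assets": [a.get("id") for a in c.findall("asset")]}
        for c in root.findall("coa")
    ]

saved = mission_to_xml("Seize_Objective", [
    {"name": "COA-1", "assets": ["UAV-2", "MCS-1"]},
])
name, coas = xml_to_mission(saved)
print(name, coas[0]["assets"])   # Seize_Objective ['UAV-2', 'MCS-1']
```

An XML representation like this is what makes "saved and reloaded at any time" cheap: the mission is an ordinary text document that any cell can store, transmit, and parse.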
The functionality related to mission analysis resides in the Mission Work-
space. From the workspace, the commander and staff can dynamically task
organize and reorganize the assets during planning and execution. Its tree
structure allows users to see the hierarchy of assets and ownership of Graphi-
cal Control Measures (GCMs). The Mission Workspace is directly tied to
the map, so that a click on an asset, GCM, or route highlights it on
the map. During execution, the workspace is automatically updated with new
detections, munitions, target recognition results and asset status. Additionally,
the Mission Workspace allows the users to access a suite of tools for creating
and maintaining GCMs, creating tasks, and managing an array of maps and
geographic overlays.

Visualization
The CSE shows all operators a complete and up to date view of what
is known about the battlespace, during both planning and execution of the
mission. Individual operators can access, view, configure, and tune their
view, workspace, and processes in ways that support their thinking. Icons
and markers allow the operators to quickly see the shape and status of the
battlespace.

• With just a single glance at the map, the operator can see new detections. A detec-
tion is a report stating that a sensor has detected a suspicious object (e.g., a possible
enemy tank), with details on when, where, by what sensor, and any available imagery
of the object.
• With another glance, the operator can see enemy targets, the target status, includ-
ing identification (e.g., is it a tank or an infantry team), engagement status (whether
and who fired at the target), and battle damage assessment (is it destroyed or dam-
aged, and to what extent).
• The operator can also see the locations of every asset, its tasks, routes, and sensor
coverage.
• Each platform on the map also has a tooltip that shows its fuel consumption, speed,
location, heading, and other pertinent information.

In general, data in the system is supplied to the operator in graphical, textual,
or mixed views based on the operator’s needs and preferences. While the
map of the battlespace, with its graphic representation of graphical control
measures and enemy, friendly, neutral, and unknown assets, is used mostly
for general situation awareness, the other views may present more detail. The
system provides multiple ways to see the same data. For example, the operator
may see the fuel left on a platform by moving the mouse over the platform
on the map to bring up a tooltip, by bringing up the Unit Viewer and click-
ing on the platform, by looking in the combat power status or by looking in
the resource allocation tool. Each icon is decorated with special adornments.
These decorators indicate additional information about the object. Decora-
tors for an enemy asset, for example, include symbology to show engagement
status, BDA status, image availability, the sensor type that last detected the
asset, its direction of movement, and more.

Customization of User Interfaces


The CSE offers the operator multiple methods of customizing his work-
space. The operator can change and save his personal settings of tools, tool-
bars, views of information, zooms, task-organization settings, alerts, attack
guidance, and BDA guidance.
In addition, the system provides the operator a series of data filters. This is
important because all data, with its great volume, is available to all authorized
operators, and can readily overwhelm any given operator. Filters also help the
operator to create an optimized view of the battlespace that meets his specific
data needs. For example, the operator may show or hide assets, routes, or
GCMs either individually or as a group. The operator may also elect to fade
assets that are dead and munitions that have detonated.
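A minimal sketch of such display filtering, with invented field names (the actual CSE filter implementation is not given in the text): hidden groups are dropped from the view entirely, while dead assets and detonated munitions are faded rather than removed.

```python
# Illustrative sketch (not the actual BCSE code) of applying an operator's
# display filters to the battlespace picture: hide selected groups and
# fade assets that are dead. Field names and opacity values are invented.
def apply_filters(objects, hidden_groups=(), fade_dead=True):
    visible = []
    for obj in objects:
        if obj["group"] in hidden_groups:
            continue                      # hidden entirely from this view
        style = dict(obj)
        style["opacity"] = 0.3 if (fade_dead and obj.get("dead")) else 1.0
        visible.append(style)             # faded items stay on the map
    return visible

picture = [
    {"id": "T-1", "group": "enemy", "dead": True},
    {"id": "R-9", "group": "routes"},
    {"id": "A-321", "group": "friendly"},
]
shown = apply_filters(picture, hidden_groups={"routes"})
```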
Such customized views (also called map sets) also provide the operator with
geospatial references and tools for terrain measurement and visualization. For
example, the Line of Sight (LOS) suite of tools helps in intervisibility analysis.
With another tool, the Geographic Intelligence Overlays, the operator can
specify geographic regions with associated information (such as political, mil-
itary, economic, social, and infrastructure characteristics) about the region.
Later, this information may be retrieved and displayed as a tooltip.

Briefing
Operators use the Briefing Tool to view and share mission-planning data
and intent. It is a “whiteboard” shared by multiple operators during planning
and execution in order to exchange information. This capability is particularly
critical for allowing the commander to share his vision or view of the current
or future battle with other commanders, staff, and subordinates. The opera-
tors share overlays related to mission plans, change OPORD and situation
templates, and save the briefing layers. Individual function-specific plans may
be merged into a single plan, changed during execution, and shared at any
time with selected personnel. This feature also allows the operators to enter
the mission statement, commander’s intent, and task and purpose statements.
To avoid the confusion between multiple operators of the graphic space, the
operators can add personalized graphics and icons and color code their pointer
and graphics.
There are two additional tools to aid in communication and coordination
between commanders, and between a commander and his staff. The ViewSync
tool allows an operator to synchronize his view of the battlespace with another
operator’s view. The heads-up display tool allows an operator to project his
screen on a shared monitor visible to all members of a cell.

Situation Awareness
The Threat Manager shown in Figure 3.7 provides the operator with all
identified threats in a tabular display. This information is determined by the
intelligence information that has been correlated and fused by the command-
cell members and the fusion agents. It is their perception of the threat and
does not represent ground truth. The Threat Manager includes the following
information:

• Threat Name—user specified name and military type of threat such as SA-13 or
Draega.
• Unit Status—indicates the level of knowledge about the threat (e.g., suspected,
identified, and targetable).
• Threat level—qualitative level of threat ranging from low to high.
• Damage status—indicates the current perceived health of the threat, which includes
information such as destroyed, mobility-kill (e.g., a tank cannot move but can func-
tion otherwise), firepower-kill (e.g., a tank cannot fire but otherwise functions),
unknown damage, and more.

Figure 3.7. The Threat Manager identifies enemy threats and provides access to the
AGM, BDAGM, and Intel Viewer.
• Threat type—indicates type of threat such as air defense, indirect fire, direct fire,
and so forth.
• Friendly assets within range—lists the friendly assets that are being threatened.
When an operator selects a threat by clicking on it, the map shows lines extending
from the threat to the friendly assets being threatened.
• BDA status—presents information about the last known reconnaissance against the
threat since its last attack and whether the reconnaissance is scheduled, in progress,
or complete. When the operator clicks on the BDA Status, the BDA Guidance
Matrix recommends an asset to perform further reconnaissance on the threat.
• Show Image—indicates the time when the most recent image of the threat was taken
and a marker if the image has not been viewed by an operator. By clicking on this
field, the Intel Viewer is displayed allowing the user to view and classify the image.
• Engagement Status—indicates the last known engagement (attack) information on
the target and its status indicating whether the engagement is scheduled, in prog-
ress, or complete. When the operator clicks the Engagement Status field, the AGM
recommends an asset to perform another attack based on the command staff’s AGM
settings.
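The fields above can be summarized as a single threat record per row of the tabular display. The sketch below mirrors the Threat Manager columns in code; the field names and sample values are illustrative, not the actual BCSE data model.

```python
# Illustrative sketch of one Threat Manager row, following the fields the
# text lists. Names, value vocabularies, and defaults are invented.
from dataclasses import dataclass, field

@dataclass
class ThreatRecord:
    name: str                        # user-specified name / military type, e.g., "SA-13"
    unit_status: str                 # level of knowledge: suspected / identified / targetable
    threat_level: str                # qualitative, low ... high
    damage_status: str               # destroyed, mobility-kill, firepower-kill, unknown, ...
    threat_type: str                 # air defense, indirect fire, direct fire, ...
    friendly_in_range: list = field(default_factory=list)  # threatened friendly assets
    engagement_status: str = "none"  # scheduled / in progress / complete
    bda_status: str = "none"         # scheduled / in progress / complete
    last_image_time: str = ""        # empty until an image of the threat arrives

t = ThreatRecord("SA-13", "identified", "high", "no damage",
                 "air defense", friendly_in_range=["A-321", "A-322"])
```

Remember that such a record reflects the command cell's fused perception of the threat, not ground truth.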

The Resource Availability tool is a textual, tabular display that gives the
operator the following information on all friendly assets: name, damage status,
fuel remaining, sensor being used, percent of task completed, speed, heading,
altitude, and location. Double clicking on the name of an asset centers the
map on the asset and highlights it.
The Collection Management tool is a textual, tabular display that gives task
information for friendly assets: asset name, task, target (for a fire or reconnais-
sance task), start time, end time, purpose, percent complete, and task status. It
is a quick way to check a platform, see all tasks assigned to that platform, and
the status of each task. Double clicking on the name of an asset centers the
map on the asset and highlights it.

Tasking
To issue a task to an asset, the operator clicks the right mouse button on
an asset shown on the map, or on the Execution Synchronization Matrix (this
will be described a little later in this section), or on the mission workspace.
The click brings a context sensitive menu that shows the possible tasks that
can be issued based on the current situation. Having selected a task, the oper-
ator is presented with a tasking window where the operator can specify the
information and parameters that are specific to the execution of the task. This
includes specifying intent, waypoints and schemes of maneuver, task duration,
dependencies for start or completion of the task, use and operation of sen-
sors and weapons, and terms of task completion. All platform tasks give the
operator an option to be notified upon task completion. During planning,
the operator often animates the tasks and sees how assets move on the map.
During execution, operators add, delete, or modify tasks as the battle situa-
tion changes.
Fortunately, many of the tasks are high level tasks for which the operator
needs to input very limited information—only the intent. The system then
automatically generates the rest of the tasking information. For example, to
reconnoiter an area, the operator selects only the platform, the area to recon,
the flight area, the sensor, and altitude. The system then determines the best
route for the best coverage. A task that requires a ground maneuver invokes
the terrain analysis components to automatically generate the best route to
meet the operator’s intent (fastest, shortest, and most concealed) and to avoid
terrain obstacles.
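The idea of expanding an intent-level task into a full route can be sketched as follows. The simple lawnmower sweep below is only a stand-in for the BCSE's terrain-reasoning route tools, and all names and units are invented for illustration.

```python
# Sketch of decomposing a high-level "Reconnoiter Area" task into a route.
# The operator supplies only intent-level inputs (platform, area, sensor,
# altitude); a coverage route is generated automatically.
def plan_area_recon(platform, area_corners, sensor, altitude):
    xs = sorted(p[0] for p in area_corners)
    ys = sorted(p[1] for p in area_corners)
    route, y, step, direction = [], ys[0], 2.0, 1
    while y <= ys[-1]:
        leg = [(xs[0], y), (xs[-1], y)]           # one sweep across the area
        route.extend(leg if direction > 0 else leg[::-1])
        y += step                                 # spacing set by sensor footprint
        direction = -direction                    # alternate sweep direction
    return {"platform": platform, "sensor": sensor,
            "altitude": altitude, "route": route}

task = plan_area_recon("A-321", [(0, 0), (10, 0), (10, 4), (0, 4)],
                       sensor="IR", altitude=300)
```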
Individual platforms usually receive tasks related to reconnaissance, maneu-
ver, or fires. The platform task menu is context sensitive, which means that
only those tasks suitable for the platform are represented in the menu. For
example, the currently available movement tasks for a reconnaissance, surveil-
lance, and target acquisition (RSTA) unmanned ground vehicle (UGV) are
Move, Halt-Resume, Overwatch, Route Reconnaissance, Area Reconnaissance,
Auto Reconnaissance, Locations Reconnaissance, Targets Reconnaissance,
Follow, and Pursue.
When multiple assets are involved, an operator can create groups of assets
that are to work together for tasks such as maneuver or reconnaissance. The
group may also be tasked as a formation, moving the vehicles in a formation
pattern established by the operator. With this technique, the operator spec-
ifies the route, a stand-off distance, and a pattern, such as column, wedge,
herringbone, line, echelon left, or echelon right. The operator can change the
formation later during the execution.
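Conceptually, a formation task turns the operator's pattern choice and stand-off distance into per-vehicle positions relative to the lead vehicle. The geometry below is an illustrative guess, not the BCSE's actual formation logic.

```python
# Sketch of turning a formation pattern choice into (right, back) offsets,
# in meters, behind a lead vehicle. Pattern geometry is invented here.
def formation_offsets(pattern, n, standoff):
    offsets = []
    for i in range(n):
        if pattern == "column":
            offsets.append((0.0, i * standoff))            # single file
        elif pattern == "line":
            offsets.append((i * standoff, 0.0))            # abreast
        elif pattern == "wedge":                           # lead at the apex
            side = 1 if i % 2 else -1
            rank = (i + 1) // 2
            offsets.append((side * rank * standoff, rank * standoff))
        elif pattern == "echelon right":
            offsets.append((i * standoff, i * standoff))   # staggered right
        else:
            raise ValueError(f"unknown pattern: {pattern}")
    return offsets

wedge = formation_offsets("wedge", 3, standoff=50.0)
```

Changing the formation during execution would simply recompute the offsets and retask the vehicles.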
Clicking on an asset able to fire weapons brings a menu with appropriate
choices. The Quick Fire tool brings up the appropriate fire options for the
selected asset, while the Prohibit Fire tool allows the operator to mark this
asset as a “Do Not Fire” asset. If a fire task is assigned to this asset, the system
issues a warning, which the operator may choose to override.
The sensor control allows the operator to manually turn sensors on and off.
However, some opportunistic reconnaissance tasks override the settings. Sen-
sor direction can also be changed using the 360 degree reference. This works
well for stationary platforms that use sensors such as GSR.
The Execution Synchronization Matrix is a customizable user interface
module that represents the COA tasks in a Gantt chart format. The matrix
graphically shows each task’s start time, end time, and duration, the status of
each task (planned, completed, in progress, off schedule), and the interdepen-
dencies between tasks. Each task can be further examined by double clicking
on the graphic representation and displaying the task details. For example,
a Targets Reconnaissance task may take 25 minutes to complete. Examining
the task further, the operator can see that it is made up of three Target
Reconnaissance tasks: the first is to reconnoiter a Garm and will take
15 minutes to complete, followed by a nearby truck, which will take 2 minutes
to complete, and finally a Draega, which will take 3 minutes to complete. The
remaining time is for the platform to position itself out of harm’s way.

Automation of Fires
The AGM is a tool that automatically monitors the enemy targets that
become known to the BCSE and, following the operator-specified rules, gen-
erates and issues commands (or recommendations) to fire at the targets. The
AGM integrates fires and effects with intelligence, maneuver, and logistics.
It does this as follows: it tracks movement and location information about
potential enemy assets, reasons about an enemy asset’s capability to hurt the
friendly forces, reasons about the currently available friendly assets and ammu-
nition, determines the friendly assets that are being threatened by the enemy
asset, determines whether the enemy asset is a valid target using rules supplied by
military intelligence experts, and pairs (allocates) the friendly weapon systems
and munitions to the targets. The AGM is aware of the friendly forces in the
area, No Fire Zones, and No Fire Lines when computing the firing solution.
The operator uses the AGM tool in the CSE to enter the criteria that
will later be used to determine if an enemy asset is targetable. He does this by
setting criteria such as how sure we are that the target is the type we believe it
is (identification confidence) and how precisely we know where it is (CEP,
circular error probable). The operator can also specify the priority order in which
munition types should be selected to attack a given target, the number of munitions
to use against a target, and much more. Figure 3.8 shows the AGM Tool. Underlying the
AGM tool in the CSE is a collective agent called the AGM agent. This agent
takes the inputs made in the CSE and modifies the rule parameters in its
knowledge base to provide recommendations and perform actions based on
the operator’s specification. There is typically one AGM per command cell.
The AGM agent is capable of coordinating its guidance with AGM agents of
other cells. The extent of coordination between AGMs belonging to different
cells can also be controlled through the CSE. Additionally, there are load-
balancing rules that can be set to control the types and amount of munitions
that can be used by the AGM. An important aspect of the AGM is that the
operator can activate a different AGM set of criteria at any time before and
during a mission. The operator can also create new AGM criteria sets, change
criteria sets (both active and inactive), and share criteria sets with other opera-
tors in and out of the cell.
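The kind of rule the AGM applies can be sketched as follows. All thresholds, field names, and munition types are invented for this sketch; the actual rule base is supplied by military intelligence experts, as noted above.

```python
# Illustrative sketch of an AGM rule: an enemy asset is targetable only if
# we are sure enough what it is (identification confidence) and where it is
# (CEP), and a munition is then paired by the operator's priority order.
def agm_decision(target, criteria, munitions_on_hand):
    if target["id_confidence"] < criteria["min_id_confidence"]:
        return None                    # not sure enough what it is
    if target["cep_m"] > criteria["max_cep_m"]:
        return None                    # not sure enough where it is
    for munition in criteria["munition_priority"]:
        if munitions_on_hand.get(munition, 0) >= criteria["rounds_per_target"]:
            return {"target": target["name"], "munition": munition,
                    "rounds": criteria["rounds_per_target"]}
    return None                        # valid target, but nothing to fire

criteria = {"min_id_confidence": 0.8, "max_cep_m": 50,
            "munition_priority": ["PAM", "LAM"], "rounds_per_target": 2}
order = agm_decision({"name": "SA-13", "id_confidence": 0.9, "cep_m": 20},
                     criteria, munitions_on_hand={"PAM": 1, "LAM": 6})
```

In the real system such rules also account for friendly positions, No Fire Zones and Lines, and cross-cell load balancing, as described above.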

Manual Execution of Fires


In addition to using the AGM, the operator can manually issue a command
to fire at an enemy target using the Quick Fire tool. The Quick Fire tool
allows the tasking of any armed, direct or indirect fire asset in the cell. Each
platform in the cell capable of accepting the task to fire has a tab in the tool
with the current and allocated munitions count. The operator selects a target
or location and clicks the button to fire.

Figure 3.8. The CSE provides the command-cell member with an interface
to the AGM for controlling the rules for automated and recommended fires.
Any warfighter can make requests for fire during mission execution. For
example, a Long Range Surveillance (LRS) soldier can issue a call for fire
on a target. Or a CAT commander may call in joint fires from an F-117A.
The Request for Fire tool displays the request and allows it to be accepted or
denied by the organization that controls the fire assets.

Intelligence Management
The CSE offers a suite of tools to help the operator organize and act on
incoming detections and intelligence. The Picture Viewer allows the operator
to customize a presentation of images provided by various sensors by speci-
fying the sensor-carrying assets he wishes to monitor. As new pictures (IR,
DVO, SAR) taken by that asset come in, they are added to the presentation.
By selecting the history for the asset, the operator can see all pictures taken
by that asset.
With the Intel Viewer, the operator can examine images of battlespace
objects that have been detected, as well as the history of intelligence infor-
mation, such as the object’s type, affiliation (e.g., enemy, neutral, unknown),
damage state (e.g., destroyed, firepower-kill, unknown), how sure we know
what it is (e.g., suspected, identified, targetable), and its classification (e.g.,
air defense, heavy tracked, wheeled) (Figure 3.9). The term battlespace object
is used here to refer to a physical object of interest in the battlespace. This
includes enemy, neutral, and unknown assets as well as people, bunkers,
buildings, bridges, and more. The intelligence information that is displayed
comes from several sources, including automated sensor fusion, information
correlation, and updates based on previous operator interaction with the
Intel Viewer. Previous operator interaction refers to the operator’s ability to
use the Intel Viewer to update the intelligence information listed above. The
Intel Viewer adheres to the principle of integrated Battlefield Functional
Areas since the operator can request recommendations from the system for
tasks of reconnaissance or fires and then issue the command to carry out the
tasks. This integrates intelligence, operations, and fires into one tool.
The Unit Viewer integrates information from several sources into one small
pop-up window. The tool works for both friendly and enemy platforms and
pulls all available information into one screen. The window can be docked on
the screen, and whenever an asset is selected, its information is presented. For
a friendly asset, it shows the asset status, such as speed, location, altitude, head-
ing, available munitions, fuel level, its current tasks status, the current use of its
sensors, and a list of enemy assets that are currently threatening it. From the
Unit Viewer, there is an option to display, on the map, the friendly platform’s
route and its current task’s sensor coverage. For a nonfriendly asset, the Unit
Viewer shows a list of all the friendly assets threatened by that asset and the
nonfriendly’s speed, direction, status (e.g., suspected, identified, targetable),
and movement tracks derived from previous detections. The Unit Viewer fur-
ther provides a link to the Intel Viewer to view available images of that asset.
The Detection Catalog is a tabular and textual tool that lets the operator
see all detections made by the system. It is organized by detection and shows
who detected it, when, and by which sensor.
The intelligence estimate is a real-time enemy situation template that helps
the operator template the enemy within an area of responsibility. As infor-
mation is gathered within the area of responsibility, the system updates the
estimate with information on what was actually identified, destroyed, immo-
bilized, and so forth, in that area.

Automation of BDA
Similar to the AGM, the Battle Damage Assessment Guidance Matrix
(BDAGM) monitors the friendly fires at the enemy targets and automatically
issues optimized commands (or recommendations) to send the available sensors
to perform BDA. This tool integrates information involving maneuver,
intelligence, and fires.

Figure 3.9. The Intel Viewer and Picture Viewer offer methods to view images and
provide information on the objects viewed.

The BDAGM allows the operator the opportunity to
enter his intent in conducting automatic BDA missions (referred to as his
BDA plan) into the knowledge-base system. Multiple BDA plans can exist for
different approaches to BDA at different times in the battle. The operator can
modify the currently active BDA plan, create a new one, activate a saved one,
and share his BDA plan information with a peer, superior, or subordinate.
When setting up a BDAGM, the operator selects from a list of assets in his
command that have sensors that can generate imagery or are humans who
can visually assess damage. Note that no sensors modeled in our experiments
can automatically assess damage. Therefore, we rely on images viewed by a
human using the Intel Viewer to determine the level of damage. That is, the
sensors send back images and humans determine the level of damage. For
each asset under the group’s command, the operator may disable automatic
tasking of the asset, allow the asset to be automatically tasked by the system,
and/or dedicate the asset to the system for use as a BDA collection asset. He can
further require the asset to avoid performing BDA on certain target types or
limit its collection to specific geographic areas. When an enemy is fired at, the
collective agents in charge of BDA will monitor the attack and recommend or
automatically assign (based on the operator’s plan) the best suited asset to per-
form the reconnaissance (BDA) of the target based on the active BDA plan. In
addition to the BDAGM, the collective agents will monitor movement, radar,
fires, and communications coming from enemy units marked as damaged to
determine if there are any signs of life (such as movement or communications)
and then report that information back to the CSE where it is displayed in the
BDA Report tab of the Intel Viewer.
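The BDAGM's pairing step, choosing the best-suited imaging asset permitted by the operator's BDA plan, might look roughly like this. The scoring rule and plan fields are invented for this sketch.

```python
# Illustrative sketch of BDA asset selection: honor the operator's plan
# (disabled assets, avoided target types, dedicated assets), then prefer
# dedicated assets and, among those, the nearest one.
def pick_bda_asset(target, assets, plan):
    candidates = []
    for a in assets:
        if a["id"] in plan.get("disabled", ()):
            continue                                  # operator forbids tasking
        if target["type"] in plan.get("avoid_types", {}).get(a["id"], ()):
            continue                                  # asset avoids this target type
        dist = abs(a["x"] - target["x"]) + abs(a["y"] - target["y"])
        dedicated = a["id"] in plan.get("dedicated", ())
        candidates.append((not dedicated, dist, a["id"]))
    return min(candidates)[2] if candidates else None

assets = [{"id": "A-322", "x": 2, "y": 2}, {"id": "RSTA-1", "x": 1, "y": 1}]
plan = {"dedicated": {"A-322"}}
chosen = pick_bda_asset({"type": "SA-13", "x": 0, "y": 0}, assets, plan)
```

Depending on the active plan, the result would be issued either as a recommendation to the operator or as an automatic tasking.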

CCIR
The system can alert an operator about a number of situations based on
the development of Priority Intelligence Requirements and Friendly Force
Information Requirements. Alert selections are operator specific and can be
saved for the operator across multiple missions. The alert functions help a
commander determine when CCIR criteria are met.
The system also includes a planning audit tool that walks through a list
of mission planning tasks that are either automatically validated by a collec-
tive agent as complete or posed as a yes-or-no question to the operator. The
planning audit serves as an operation preparation check list for use prior to
execution of a plan.

Communications
Even with the wealth of information available through the BCSE displays,
verbal communication remains important. When the BCSE is used in a
simulation environment like the MDC2 program, the ASTi simulated radio
and communications system can be used. In the real world, the BCSE would
integrate military-grade radios such as SINCGARS. The BCSE integrates the
radio channel and volume controls through a built-in interface on
the CSE that keeps the operator’s interface to the radio the same regardless of
whether he is using it in a simulated or live environment.
The operators also use the CSE collaboration mode in order to share the cur-
rent plan with higher headquarters, peers, or subordinates. The operator may
also choose to drop out of collaboration and work independently on his part of
the mission plan and then rejoin the collaboration session at a later time.
In order to allow the command cells the most flexibility in sending and
receiving data, the bandwidth management function lets each cell set trans-
mit and receive rates for the following: heartbeats (blue asset’s state, such
as location and health), sensor measurements (as an example, moving target
measurements include azimuth, azimuth variance, elevation, elevation vari-
ance, range rate, and range rate variance), and spot reports—fused informa-
tion about battlespace objects, which includes an ID, type, list of all possible
objects considered with each one’s estimated probability, location with its
probability of error, speed, and sensor information that was used in deter-
mining the spot report. The operators can modify the customized bandwidth
settings at any time during execution.
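A minimal sketch of such per-cell bandwidth settings and the throttling they imply, with invented rates and field names:

```python
# Illustrative per-cell transmit/receive rates for the three message
# classes the text names; the rates here are made up for the sketch.
bandwidth_settings = {
    "heartbeats":   {"transmit_hz": 1.0,  "receive_hz": 1.0},
    "measurements": {"transmit_hz": 10.0, "receive_hz": 10.0},
    "spot_reports": {"transmit_hz": 0.2,  "receive_hz": 0.2},
}

def throttle(msg_class, last_sent_t, now, settings):
    """True if a message of this class may be sent now (times in seconds)."""
    min_interval = 1.0 / settings[msg_class]["transmit_hz"]
    return (now - last_sent_t) >= min_interval

ok = throttle("spot_reports", last_sent_t=0.0, now=6.0,
              settings=bandwidth_settings)
```

Raising a rate (as the RSTA vehicle does for a tracking partner in the scenario below) is then just a matter of updating the relevant entry during execution.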
A unique and important feature of the CSE is command succession. If a
command asset is destroyed, like the Command and Control Vehicle (C2V),
the networked components sense its loss. An alert is triggered and the remain-
ing assets are notified that the system suspects that the particular command
vehicle has been destroyed. A commander in another cell can investigate, and
if the loss is confirmed, he may reassign assets to one or more cells, assign a
new commander to the cell, or a mixture of the two.
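The loss-detection step of command succession can be sketched as a heartbeat timeout check; the timeout value and data structures here are invented for illustration.

```python
# Illustrative sketch of sensing the loss of a command node: if a node's
# heartbeats stop arriving for longer than a timeout, flag it as suspected
# destroyed so another commander can investigate and reassign assets.
def check_command_nodes(last_heartbeat, now, timeout=30.0):
    """last_heartbeat: node id -> time of last heartbeat, in seconds."""
    return [node for node, t in last_heartbeat.items() if now - t > timeout]

suspected = check_command_nodes({"C2V-1": 100.0, "C2V-2": 10.0}, now=120.0)
```

An alert on a suspected node is only a trigger for human confirmation; the actual reassignment of assets and command remains a commander's decision.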
The CSE supports both individual chat sessions and group chat sessions.
The operator is shown the active members and may select their chat partner.
Operators may also set up a named group (one or more) for chat. Individuals
are invited to the chat and may elect to join.

Logistics
The Combat Power tool tells the operator about the health, fuel, and
ammunition status of assets during the battle. The operator may tailor the
data to his specific interests. For example, the operator may request to show
only certain assets and change such settings as the threshold at which a low
level of fuel is to be reported. During execution, it shows a trend analysis for
fuel, munitions, and health.
The Munitions-on-Hand is a tabular, textual tool that lists all the avail-
able ammunition and the allocated and spent ammunition counts for each
asset. During execution, the counts are continuously updated by the system as
rounds are fired, detonated, resupplied, or allocated by a plan.

Maps and Terrain Analysis


Any battle-command system relies on good terrain analysis and visualiza-
tion tools. The BCSE uses the Commercial Joint Mapping Toolkit (CJMTK)
to provide its mapping and terrain products. The CJMTK is the standard
geospatial exploitation tool for Department of Defense command, control,
and intelligence systems.3
Using CJMTK makes it easy to load georeferenced satellite imagery and
maps, and to add layers that describe features such as depression areas, con-
tour lines, hydrology, roads, restricted areas, and buildings. Depending on the
warfighter’s needs, overlays can show dynamic terrain updates; diplomatic,
economic, and political reference locations; overwatch positions; terrain com-
partments; enemy sniper or ambush positions; and more.
The 2D map helps operators understand the overall situation. The 3D
map offers multiple perspectives of the terrain (third person, first person,
and bird’s-eye views) along with the COP overlay. The 3D viewer also allows
the user to view the terrain while “flying through” the virtual environment
as well as attaching the view to an asset to see the environment from the
asset’s perspective. The CSE can display location information in two formats:
the military grid reference system (MGRS), which provides
a means of locating any point on the earth with a 2- to 10-character code,
and the geodetic coordinate system, which uses the angular measurements of
latitude and longitude to mark a location. The status bar shows the map scale
(for the current zoom), position, and elevation as the mouse moves across
the map.
Another key technology is the set of route generation products that are part of the
Battlespace Terrain Reasoning and Awareness (BTRA) system,4 developed by
the U.S. Army Engineer Research and Development Center (ERDC). BTRA
has been integrated into the BCSE to identify and show areas of observa-
tion, cover and concealment, mobility, key terrain and avenues of approach,
positions of advantage, and advanced mobility analysis. For example, it can
be used to answer questions like, Where are all the feasible locations that an
enemy tank platoon could have moved to since its last detection 25 minutes
ago?
The tool for automatic route generation based on commander guidance
was found to be a very useful function in the BCSE. In particular, the intel-
ligent agents such as the PSE take advantage of the route generation tools
when they decompose high level tasks like “Reconnoiter Area X” into specific
routes needed to accomplish the mission.

AN ILLUSTRATIVE SCENARIO
To demonstrate how all the moving parts work together, and how they imple-
ment a network-enabled approach to battle command, let us walk through the
following scenario with the help of Captain Johnson, the commander we met
in chapter 2.
In this scenario, Captain Johnson’s force is responsible for providing recon-
naissance and for clearing the enemy from an important area of the battlefield.
Following Johnson’s instructions, the maneuver manager, Specialist Chu, uses
GCMs—polygonal areas that he sketches on the map—to mark a specific area
of interest. Chu names the area DOG. The assets under Johnson’s control
include five unmanned (robotic) assets:

• An RSTA UGV is tasked to perform surveillance of area DOG by ground surveil-
lance radar (GSR). GSR is commonly used to detect moving targets.
• A ducted-fan Class II unmanned air vehicle (UAV) is code-named A-321. It carries
a fixed camera that provides daylight and infrared imagery. A-321 also carries an
onboard system that performs target detection but not target identification. In
other words, it can find a suspicious-looking thing in the battlefield but cannot
determine what the thing is. A-321 is tasked to automatically perform reconnais-
sance of area DOG.
• Another Class II UAV is named A-322. It is mounted with the same camera and has
the same capabilities as A-321. The commander assigned A-322 the task of obtain-
ing imagery for the purposes of battle damage assessment. When the Blue force
fires a precision weapon at a target within area DOG, A-322 will move into position
to take a picture in order to evaluate the success of the attack. It does this with the
help of the BDAGM.
• A ground robotic vehicle carries precision attack munitions for Non-Line of Sight
(NLOS) fires and stands by for orders to move and fire.
• A LOS vehicle carrying a Multi-Role Armament and Ammunition System (MRAAS)
weapon is standing by for orders.

The RSTA detects a moving object—let’s call it ObjX—with its GSR. The
PSE on the RSTA sends a stream of sensor measurement information to the
Collective Intelligence Module where it is fused with other available informa-
tion. The result is a spot report indicating that ObjX is a target of unknown
type moving east at 40 km/h. Since its type is unknown, the system marks it
with a low confidence level. The detection is broadcast to all the compo-
nents in all three tiers on the network. The visualization components, the
CSE, SSE, and VSE, place an icon indicating a detection of an unknown type
on the map at the last detected location.
A-321’s PSE, its onboard expert system, gets the information, sees that the
confidence level of the target information is low, and recognizes that the target
is within the boundaries of area DOG. It immediately moves into reconnais-
sance action while alerting Johnson and his staff that it is beginning a new task
against the unknown moving object ObjX. A-321’s PSE uses the knowledge of
its own capabilities, the terrain information, and location prediction algorithms
to determine a good location to snap a picture of the target. It formulates a task
for itself called “Target Reconnaissance” that includes movement tasks and a
picture-taking task. The PSE broadcasts this information across the three tiers
so that other assets understand its intent. Specialist Chu monitors the route
closely on his CSE to ensure that the right decisions are being made.
The RSTA vehicle, still tracking the object, receives A-321’s information,
realizes that A-321 needs a more rapid feed of information in order to accu-
rately track the moving object, and increases its transmission rate of the object’s
location information. Instead of broadcasting the information throughout the
three tiers, the message is directed from the RSTA vehicle to A-321 since only
A-321 needs such detailed information. This helps reduce network traffic by
transmitting high-rate updates between communicating vehicles (using net-
work relay points if appropriate) only when necessary.
With the increased rate of incoming information about the location of the
moving target, A-321 uses its PSE to analyze the situation, terrain, and other
environmental and logistical information to track down the target and then
snaps an image. The analysis of the image, however, has to be performed
elsewhere. The Collective Intelligence Module receives the image, fuses it
with the previous information about ObjX, and generates a new spot report,
indicating an image is available. The spot report is broadcast to all three tiers.
The GUI layer updates the visualization of ObjX with an image marker, adds
the image to the picture viewer, and updates the status in all tables.
Within the command cell, Sergeant Rahim, the intelligence manager, is
alerted to the incoming image and uses his Intel Viewer on the CSE to dis-
play the image. He recognizes that ObjX is clearly an SA-13 air-defense sys-
tem that does not appear to have experienced any damage. With a few mouse
clicks, he designates the target as an enemy SA-13 with no damage. As soon
as the identification is entered into the system, an update message is trans-
mitted to other decision support components in the network. Each system
that is displaying the unknown ObjX immediately gets an update with the
appropriate symbol of an SA-13. This event, in turn, prompts several reason-
ing processes across the system.
Using its onboard intelligence, the A-321 understands a fire mission is
planned in its area and that it is in danger of friendly fire. It immediately
uses its self-protection rules in the PSE and analyzes the terrain for a good
place to seek cover. Fortunately, A-321 is small and went undetected by the
enemy, but it still takes every precaution to survive now that its task has been
completed.
The CIM’s threat manager agent classifies the SA-13 as a high threat based
on the criteria set up earlier within the AMF by the effects manager, Ser-
geant Manzetti. The updated situation awareness is broadcast to all tiers and
platforms. Each user can see the friendly assets that the SA-13 is currently
threatening.
Captain Johnson knows what is about to happen and watches the situa-
tion extremely closely. Here is the moment when the automated tasking of
vehicles and the human decision making must come together. In an instant he
gets an alert that A-321 is in danger of friendly fire and is now seeking cover.
Watching his CSE, Johnson confirms that A-321 is well on its way to take
cover behind a nearby hill. In this case, no other friendly or neutral assets are
within the vicinity of the enemy SA-13.
Concurrent with the threat identification, based on preset criteria, the
AGM calculates several attack recommendations, prioritizes them, and sends
them to the command cell. The first choice on the list is a recommendation to fire
a precision attack munition from the NLOS vehicle. The second choice is to
fire the MRAAS from the LOS vehicle. Manzetti positively acknowledges
the NLOS recommendation, and a fire request is sent to the NLOS vehicle.
The NLOS vehicle accepts the request and carries out the attack—fires the
missile. This update is disseminated to everyone in the network.
Some minutes later, the missile detonates. The BDAGM agent infers that the
missile has detonated from the munition's distance, trajectory, and speed.
The detonation event triggers additional automatic
behaviors. UAV A-322 dedicated to BDA is automatically tasked by the
BDAGM agent to take a picture of the SA-13. A-322’s PSE uses the same
terrain analysis tools that A-321 did. However, in this case, the PSE applies
concealment criteria to the route generation because its self-protection rules
indicate that the SA-13 may not have been destroyed and could shoot it down.
This results in a concealed route that stays out of the line of sight of the SA-13,
and other known enemy assets, for as long as possible before popping up and
taking a picture.
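The concealment criterion amounts to a route-selection rule: among candidate routes to the pop-up point, prefer the one with the fewest waypoints visible to the threat. A minimal sketch follows, with a toy line-of-sight predicate standing in for the real terrain analysis (all names and the terrain model are hypothetical):

```python
def exposed_cells(route, threat, los):
    """Count waypoints on the route that are visible to the threat."""
    return sum(1 for p in route if los(threat, p))

def pick_concealed_route(routes, threat, los):
    """Choose the candidate route with the least exposure to the threat."""
    return min(routes, key=lambda r: exposed_cells(r, threat, los))

# Toy terrain: a ridge at x == 2 masks everything west of it from the
# threat, so only waypoints with x > 2 are in its line of sight.
threat = (5, 0)
def los(t, p):
    return p[0] > 2

direct = [(0, 0), (2, 0), (4, 0), (5, 1)]     # crosses open ground early
concealed = [(0, 0), (1, 2), (2, 2), (5, 1)]  # hugs the ridge, pops up last
best = pick_concealed_route([direct, concealed], threat, los)
```

In this toy case the concealed route is selected because only its final pop-up waypoint is exposed, which matches the behavior A-322's PSE exhibits in the vignette.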
Soon, A-322 arrives at the target and takes the picture. The information
is communicated across the tiers to the other components. Sergeant Rahim
receives the image, views it, and then updates the damage state of the SA-13
to “Damaged.” The new state of the enemy target is broadcast across the
tiers. Johnson leans back and announces to his team, “Great job, guys!”
To summarize: working in partnership with the BCSE system, the opera-
tor sets criteria for automatic behaviors, responds to visual cues, and updates
identifications. The system shares this information across the tiers and initi-
ates appropriate automatic tasking of assets. The system keeps the commander
well informed and enables him to focus on the high-level management of the
battle rather than the control of the assets.

THE DECISION SUPPORT FRAMEWORK


Although fairly distinct in their functions, most of the agents comprising
the BCSE—CSE, PSE, SSE, and so forth—are built from the same under-
lying technology framework. The Viecore Decision Support Framework
(VDSF)5 has been applied to a range of C2 software applications to support
army echelons from corps down to individual warfighter-borne platforms and
autonomous robots.
The VDSF framework is a generalized, shell-like decision support system
that can be delivered as a service or an Enterprise JavaBean in a service-based
architecture. To construct this framework, we identified functions
and data objects that are common to automated decision support, factored
them out, tested and validated them, and combined them into the gener-
alized framework. With a framework as a starting point, the development
task is simplified—the development team can focus on those pieces that are
unique to the problem being addressed. The VDSF framework employs an
expert system rule-based approach that separates the communication and
data exchange formats from domain specific rules, or knowledge, and the
processing engine. Another key feature provided as part of the framework is
automated code generation (Figure 3.10). The code generation tool reads
the project metadata and generates well-formed source code in accordance
with a specific set of design patterns. Typically, about 50 percent of the soft-
ware code in the resulting decision support application can be automatically
generated.
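The idea of metadata-driven generation can be illustrated with a toy generator that emits a class from a schema entry. This is only a sketch of the pattern — the real VDSF tool, its metadata format, and its design patterns are not shown here:

```python
def generate_class(name, fields):
    """Hypothetical sketch of schema-driven code generation: emit the
    source of a class with slots and an __init__ from a metadata entry."""
    lines = [
        f"class {name}:",
        f"    __slots__ = {tuple(fields)!r}",
        f"    def __init__(self, {', '.join(fields)}):",
    ]
    lines += [f"        self.{f} = {f}" for f in fields]
    return "\n".join(lines)


# A toy schema entry; the actual C2 data schema is far richer.
schema = {"SpotReport": ["object_id", "location", "confidence"]}

namespace = {}
for cls_name, fields in schema.items():
    exec(generate_class(cls_name, fields), namespace)

report = namespace["SpotReport"]("ObjX", (31.2, 45.8), "low")
```

Because every generated class follows the same pattern, changing the schema regenerates consistent code — the property that lets roughly half of a VDSF application's source be produced automatically.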
The VDSF helps developers change system behavior rapidly as needs
and processes evolve. By introducing a new set of rules, the decision sup-
port software can be quickly adapted to create new solutions. Rules operate
on the data that represent the battlespace and reflect the decision support
software’s ability to detect and respond to relevant changes in that data.
Systems that have successfully used the VDSF include Future Combat Sys-
tems (FCS) C2, MDC2, Future Force Warrior (FFW), Vetronics Technol-
ogy Integration Program (VTI), and Collaborative Technology Alliances
(CTA).
The VDSF is built around a production rules engine based on the Rete Algo-
rithm.6 The use of a production rules engine benefits the development and
knowledge acquisition process because rules can be acquired, modified, and re-
moved without having to reexamine the entire rules base after a change is made.
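The independence of production rules can be seen even in a naive forward-chaining loop (sketched below; a real Rete engine avoids re-matching every rule on every cycle, which is the point of the algorithm). Each rule is a self-contained condition/conclusion pair, so rules can be added or removed without reworking the others. The rules here are invented for illustration:

```python
def run_rules(facts, rules):
    """Naive forward chaining: fire any rule whose condition matches the
    working memory until no rule adds a new fact. (Not Rete itself, which
    caches partial matches to avoid this repeated scanning.)"""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            if condition(facts) and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts


rules = [
    # Each entry stands alone; adding or removing one does not require
    # reexamining the rest of the rule base.
    (lambda f: "SA-13 identified" in f, "high threat"),
    (lambda f: "high threat" in f, "recommend NLOS fire mission"),
]
out = run_rules({"SA-13 identified"}, rules)
```

Chaining happens automatically: identifying the SA-13 derives "high threat", which in turn derives the fire-mission recommendation, without any rule referring to another rule.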

Figure 3.10. The use of automated code generation minimizes development time and
maximizes code reuse.
We have been able to eliminate much of the extensive and error-prone analysis
and potential problems concerning rule interaction that often reveal themselves
in complex applications. The system can be adapted as the development team
learns about the domain and the application through experimentation.
The reasoning engine in the VDSF can be provided either by a framework-
specific engine or by one of two third-party products that we can currently
leverage: HaleyRules7 and Clips/R2.8
Using the VDSF, a new system is built by adding a new set of application-
specific rules and augmenting some of the system components rather than
by creating a new architecture from scratch. By using a common architec-
ture, we are able to exploit commonalities across applications. This approach
has proven to be robust and scalable and has resulted in significant cost
savings.
The key challenge from a software development perspective is to create the
appropriate data models, symbolic framework, and an efficient way of detect-
ing changes in battlespace state when the number of state change events can
be very large. To mitigate the challenge, the core VDSF architecture provides
means for the following:

• Collecting the data needed to represent the environment (in our case, the battle-
space).
• Reasoning about the battlespace data.
• Detecting and responding to relevant changes in that data.

In addition to the data that characterize the battlespace, the framework is
used to model assets (Who), activities (What), operational graphics and terrain
(Where), absolute and relative time (When), and the purpose or intent
behind the activities (Why). To accomplish this, the concepts of named enti-
ties, activities, places, objects, and time are represented in the data model (i.e.,
the lexicon, semantics, and ontology required to symbolically reference and
associate the five Ws are predefined).
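A five-Ws entry of this kind might look like the following sketch; the field names are assumptions made for illustration, not the actual VDSF data schema:

```python
from dataclasses import dataclass


@dataclass
class Activity:
    """Hypothetical five-Ws data model entry in the spirit of the text:
    each field answers one of the predefined questions."""
    who: str            # named entity performing the activity
    what: str           # the activity itself
    where: tuple        # place, e.g., a coordinate on operational graphics
    when: float         # absolute time, seconds into the mission
    why: str            # purpose or intent behind the activity
    relative_to: str = ""   # optional anchor for relative time ("after X")


recce = Activity(who="A-321", what="target reconnaissance",
                 where=(31.2, 45.8), when=1830.0,
                 why="confirm identity of ObjX")
```

Predefining the lexicon this way lets rules reference and associate the five Ws symbolically rather than parsing free text.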
Figure 3.11 depicts the main architectural components of a VDSF-based
Decision Support System (DSS) and their relationships. The entities at the
top of the diagram are external systems that help place the DSS in context.
They will not be discussed. Others include the following:

• The Device Dependent Interface (DDI)/Device Translator (DX) layer
communicates with, and translates the information from, devices external to the DSS. By
device we mean any software or hardware system that can exchange information
with the DSS. For example, a hardware device could be a GPS and a software device
could be another C2 system or a warfighter’s computer screen. Most often the
exchange of information is performed via specific message formats. The bottom
line is that there are messages that need to be received from external devices to
get information into the DSS. The DDI/DX layer provides the connection to the
Figure 3.11. The architecture of the VDSF underlies most of the agents within the
BCSE.

device as well as the translation of the information from the device to the DSS.
The DDI/DX provides a separation of the DSS system from a device—a failure of
one device will not affect the DSS. It also ensures that the DSS proper does not
have to provide a capability to communicate or provide protocol interaction with
any device.
• A component within DDI/DX, the DDI is a device driver built to conform to and
support the communication protocol of a specific supported external device or system
(e.g., a GPS or Terrain Reasoner).
• Another component of DDI/DX is the DX. It converts data received from the DDI
into a normalized format for use by the Decision Support System Processor (DSSP).
Likewise, the DX converts data received from the DSSP to device-specific formats
and passes it through to the device-specific DDI layer.
• The Device Independent Interface (DII) presents a single messaging interface to
the DSSP, in effect hiding all device-specific data.
• The Transaction Processor manages all messages between the DX and the DSS
Reasoner (described next).
• The DSS Reasoner contains the knowledge base containing the world state and the
reasoning engine known as the Rules Engine.
• The Rules Engine applies the rules against the knowledge base (called running
the rules), which leads to new inferences. Ultimately the Rule Trigger Method,
described below, receives notification of a change in world state and takes the
appropriate action.
• Rules Helper Methods call functions registered with the rules engine during rule
evaluation. For example, if an enemy asset moves, a rule may call the Terrain Rea-
soner to help determine if the enemy can see a given position. In this example, the
Terrain Reasoner helps the rule determine if a threat exists.
• A Rule Trigger Method is a mechanism to take an action as a result of a monitored
change in the knowledge base’s world state. A trigger method can be registered to
be invoked when a fact (data element) is added, modified, or deleted in the world
state. When the fact is changed, the trigger method is invoked, allowing additional
actions to occur such as alerting outside components or making additional changes
to the world state.
• The C2 Data Model is represented in the decision support application by a set of
software objects instantiated in memory. These objects are typically stored in a map
structure and accessed by a unique identifier. The classes of the objects are defined
by the data schema which is a model of the various data elements used to represent
the world state (e.g., mission, course of action, task organization). The data schema
is fed into a code generation utility that automatically generates the classes.
Instantiation of these classes automatically associates them with the rules engine library
via a framework. These instantiated objects are then represented in the working
memory.
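The trigger mechanism in the last bullet can be sketched as a small observer pattern over the world state (names hypothetical, not the VDSF API): callbacks registered against a fact fire when that fact is added, modified, or deleted.

```python
class WorldState:
    """Hypothetical sketch of the Rule Trigger Method: facts live in a map
    keyed by unique identifier, and registered callbacks fire on change."""

    def __init__(self):
        self.facts = {}       # unique id -> fact value
        self.triggers = {}    # fact id -> [callback(event, value)]

    def register_trigger(self, fact_id, callback):
        self.triggers.setdefault(fact_id, []).append(callback)

    def _fire(self, fact_id, event, value):
        for cb in self.triggers.get(fact_id, []):
            cb(event, value)

    def assert_fact(self, fact_id, value):
        event = "modified" if fact_id in self.facts else "added"
        self.facts[fact_id] = value
        self._fire(fact_id, event, value)

    def retract_fact(self, fact_id):
        value = self.facts.pop(fact_id)
        self._fire(fact_id, "deleted", value)


events = []
ws = WorldState()
ws.register_trigger("ObjX", lambda e, v: events.append((e, v)))
ws.assert_fact("ObjX", "unknown")          # fires "added"
ws.assert_fact("ObjX", "enemy SA-13")      # fires "modified"
ws.retract_fact("ObjX")                    # fires "deleted"
```

A trigger body could alert an outside component or assert further facts, producing the cascade of automatic behaviors described earlier in the chapter.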

The VDSF framework can be applied to a new application by adding a new
set of application-specific rules, augmenting the C2 data model, and adding
additional DDI/DX components as needed. The architectural
framework can be reused along with much of the tested and validated code
base.
All the machinery we described in this chapter assists the command-cell
members in numerous ways. They use it to organize and integrate the incom-
ing information from multiple sensors within the unit, to interpret and under-
stand the unfolding situation, to project what may happen in the battle in the
future, to plan and decide how to achieve the objectives of the mission, to
formulate and issue commands to subordinate assets accurately and efficiently,
and to collaborate with other command cells. Among all these important
tasks and their purposes, however, one stands out as exceptionally influen-
tial—comprehension of what is happening in the battlespace. Scientists call
this situation awareness.
CHAPTER 4
Situation Awareness:
A Key Cognitive Factor
in Effectiveness of Battle
Command
Mica R. Endsley

Situation awareness (SA) is critical to successful battle-command operations.


Warfighters have always paid attention to determining the critical factors of
the situation—the location and capabilities of enemy forces, the lay of land,
and the effect of weather and terrain on their own forces and operations. This
knowledge is critical to effective decision making, both during the planning of
operations and during execution of the battle. Situation awareness is defined
as “the perception of the elements in the environment, within a volume of
time and space, the comprehension of their meaning, and the projection of
their status in the near future” (Endsley 1988). Building SA therefore involves
perceiving critical factors in the environment (Level-1 SA); comprehend-
ing or understanding what those factors mean, particularly when integrated
together in relation to the warfighter’s goals (Level-2 SA); and at the highest
level, projecting what will happen in the near future (Level-3 SA). The higher
levels of SA are critical for timely, effective decision making. The three levels
are depicted in Figure 4.1.
Level-1 SA—Perception of the elements in the environment. The first step in
building SA is to perceive the status, attributes, and dynamics of relevant
elements in the environment. This includes important elements such as
enemy, civilian, and friendly position and actions; terrain features; obstacles;
and weather. Inherent in military operations is the difficulty associated with
determining all the needed aspects of the situation due to obscured vision,
noise, smoke, confusion, and the dynamics of a rapidly changing situation.
In addition, the enemy works hard to deny information regarding its troops
and operations, or to intentionally provide misleading information. Table
4.1 lists some of the key elements of Level-1 SA for command and control
operations.


Figure 4.1. Situation awareness levels and the decision-action process.

Level-2 SA—Comprehension of the current situation. Comprehension of the
situation is based on a synthesis of disjointed Level-1 elements. Level-2 SA
goes beyond being aware of the elements that are present in a situation—it
adds an understanding of the significance of those elements in light of the
warfighter’s goals. This level of SA is sometimes called situation understand-
ing. The warfighter assimilates Level-1 data to form a holistic picture of the
environment, including a comprehension of the significance of objects and
events. For example, an intelligence officer may need to assimilate the data
from multiple sensors and reports in order to determine enemy intent or
the impact of friendly operations on the degree to which an enemy asset can
shoot, move, and communicate. Typically Level-1 SA (perceived data) must
be interpreted (with reference to goals or plans) in order to have meaning as
Level-2 SA. For example, a commander must understand the impact of dis-
covering a new enemy asset on the conduct of his mission operation so that
he can rapidly make the necessary adjustments. Table 4.2 shows examples of
typical factors that are relevant to situation comprehension or understanding
in command and control.
Level-3 SA—Projection of future status. The third and highest level of SA
is the ability to project the future actions in the environment, at least in the
near term. This is achieved through knowledge of the status and dynamics of
the elements and a comprehension of the situation (both Level-1 and Level-2
SA). Commanders who possess a strong Level-3 SA are able to project, for
example, where and when the enemy will strike or how much time they have
until reinforcements arrive. This gives them the knowledge and time neces-
sary to decide on the most favorable course of action to meet their objectives.
Typical Level-3 SA elements are shown in Table 4.3.
This highest level of SA has received limited attention, yet is one of the most
important. During both planning and operations, the possible actions and
outcomes of relevant actors must be considered, in multiple combinations, to
form a wide range of possible expectations regarding what could happen. This
process of projection and planning for contingencies (also called branches and
sequels in military-planning parlance) is fundamental to successful command
Table 4.1
Examples of Level-1 SA in Command and Control

Enemy Friendly Situation Weather


Recent enemy activity Current friendly Icing Wind
Location activity Projected weather Direction
Time of activity Location Inversion Magnitude
Enemy composition Time of activity Temperature Surface winds:
Organization Friendly Barometric Aloft winds
structure composition pressure Visibility
Leadership Unit type Precipitation Illumination
Unit type Equipment Thunderstorms Fog
Equipment Experience level Hurricanes Day/night
Transmission types Morale/ Tornadoes Ambient noise
Weaponry commitment Monsoons/flash Moon phases
Experience level Fatigue/load flooding Sand storms
Morale/ Vehicle Tides
commitment Capabilities/skills/ Cloud ceiling
Vehicle training Lightning
Capabilities/skills/ Recent action
training Friendly disposition Terrain
Enemy pattern of Operational Elevation
movements readiness Type of terrain
Location of ammo/ Comms types Flat
supplies Location Urban
Movement of Dispersion Hilly
weapons Numbers Mountainous
Enemy center of Weapons Rocky/jagged
gravity Troop psychology Conditions
Enemy disposition Ammo/supplies Mud
Location Troop doctrine Land mines
Dispersion Past behavior/ Rubble
Numbers actions Sand
Weapons Religious/political Drainage
Ammo/supplies beliefs Slope bank
Objective City plan
Enemy psychology Civilian Situation Map
Past behavior/ Disposition Features
actions Location Vegetation
Religious/political Number Hydrology
beliefs Refugee flow Swamps
Perception of friendly Known terrorists Wetlands
forces Americans Rivers
Enemy history Media Obstacles
Enemy doctrine NGO/IGO Infrastructures
Past COAs Living condition Cellular net
Past behavior/ Clans present Telecom net
actions Ethnicities Roads

Previous types of Culture Buildings


occurrence Languages spoken Usage
Previous time of Level of organization Materials
occurrence Mood of crowd Location
Enemy assets Religious/political
Location beliefs
Number by location Agitators present
Type Threatening actions
Coverage Weapons
Morales/
Mission commitment
Task Training/skills
Purpose Intent
Commander’s COAs Politics
Potential terrorists

and control. Unfortunately, it is also very difficult to do well, due partly to
uncertainty in possible future events, but also because of limited cognitive
resources available for considering many possible permutations of future
events. As shown in Figure 4.2, at any time the warfighter may know only
a subset of that which is knowable. As he seeks to make projections as to
what will happen in the future, that percentage which is known may decrease
further.

CHALLENGES FOR SA IN COMMAND AND CONTROL


SA is rarely, if ever, perfect. It must be gathered from multiple, sometimes
contradictory sources. At times, needed information is unavailable. At other
times, multiple changes occur too rapidly, and too many sources compete for
the warfighter’s limited attention. All of this leads to what has been termed
the fog of war.
SA can be derived from a variety of sources (Figure 4.3), including direct
observations, communications with others (through radio or direct face-to-
face contact), and increasingly through computerized command and control
(C2) systems and enhanced sensors that are becoming a routine part of mili-
tary operations. It is important to point out that no C2 system can convey all
that is needed for SA. Warfighters will still find it necessary to integrate this
information with information gathered directly from the environment as well
as from others. It is also important to point out that information from each
of these sources will be associated with different levels of reliability. A critical
part of Level-1 SA is confidence in information (based on the sensor, organi-
zation, or individual providing it), in addition to the information itself.
Table 4.2
Examples of Level-2 SA in Command and Control

Possible engagement areas Advantages/disadvantages of COA


Possible avenues of approach Risk of mission failure/success with COA
Areas of cover and concealment Risk of casualties/loss of assets with
Choke points COA
Friendly limitations/advantages due to Weaknesses of COA
terrain Ability to counteract potential enemy
Friendly limitations/advantages due to actions
weather Ability of friendly forces to carry out
Friendly limitations/advantages due to COA
infrastructures Number/severity of undesirable potential
outcomes
Enemy cover and concealment Ability to mitigate risk in COA
Enemy ability to shoot Ability to make needed changes
Enemy ability to obscure the friendlies Ability of friendly forces to execute plan
Enemy objectives/intent Flexibility of COA
Freedom of maneuver of COA
Relative costs/benefits of potential COAs Ability to take advantage of opportunities
to enemy Ability to respond to unexpected events
Advantages of COAs Level of exposure to enemy
Disadvantages of COAs
Probability of friendly COAs success Impact of failures on execution of plan
Probability of enemy COAs success Ahead/behind schedule for task
accomplishment
Effect of terrain on enemy time to Deviations from expectations/plan
maneuver Impact of deviations on mission
Effect of terrain on enemy’s choice of
avenue of movement Distance to supply points
Enemy limitations/advantages due to Mobility requirements
terrain Deviation between items needed and
Enemy limitations/advantages due to when we can deliver them
weather Deviation between effectiveness level and
Possible enemy locations reconstitution criteria
Enemy ability to obscure identity Time and distance between location of
Enemy projected time to maneuver supplies and unit
Deviation between planned and actual
Ability to support the plan with intel, timing of events
recon, and surveillance Distance to supply routes
Ability to get supplies to assets Impact of not meeting request on
Time needed for placement mission effectiveness
Transportation needed to move the asset Impact of meeting request on future
Level of risk to the assets in the area supply plans
Ability to move the assets stealthily Deviation between request and
Security needed to protect asset availability
Available force protection Impact of early arrival
Time needed to collect the information Ability to combine shipments

(continued)
100 Battle of Cognition

Table 4.2
Examples of Level-2 SA in Command and Control (continued)

Accessibility of the area Weapon effectiveness


Criticality of information needed Prioritization of targets
Priority of assigned information Best points for use of smoke
requirements Capability of troops to execute actions
Gaps in organic assets Best areas for engagement
Gaps in coverage Ability/time requirements for movement
Items where more information is needed to engagement
Credibility of source Ability of terrain to support vehicle/
Confidence level in information troop movements
Difference between current and desired
confidence in info

As Figure 4.3 shows, there may be a significant gap between ideal SA (per-
fect knowledge) and that which is currently “known” by the system from all of
its available sensors and other inputs. By system knowledge, we mean not only
the information residing in an individual technical system (such as a radar or
command and control software), but that in the sum total of the technical
systems, people, processes, and operations that together form the basis for
command and control.
There may also be a gap between this level of system information that the
warfighter might possibly obtain and that which can be derived from the system
interfaces (available information). This gap may exist because some system infor-
mation may not be passed to the warfighter through the system interfaces—due
to limited network bandwidth, for example, or a failure of a subordinate to pass
on a needed report—or because the warfighter must take additional actions to
derive the information from the system (paging through menus and windows
to find information that may be obscured). An important goal for the develop-
ment of command and control systems is not only to raise the level of system
information, but also to minimize the gap between system information and
available interface information through effective system design.
Finally, a gap can occur between the amount of information available at
the system interface and the SA that is finally formed in the mind of the indi-
vidual. There are a number of cognitive limitations that often act to limit SA,
as well as a number of external factors that can act to make situation awareness
difficult to attain.

Individual Limitations
People have a limited amount of attention they can direct toward gather-
ing needed information and a limited amount of working memory that can
be used to combine and process perceived information to form the higher
levels of SA. Unless they are experienced and dealing with learned classes of
situations (which helps them develop mental models and schema that allow
Table 4.3
Examples of Level-3 SA in Command and Control
Projected enemy COAs Projected availability of friendly forces
Expected COA Projected ease of implementation of
Most dangerous COA COA
Projected effect of weather on enemy COA Projected availability of resources
Projected effect of weather on enemy Projected ability to minimize troop risk
equipment Projected impact on enemy
Projected enemy unit size/actions Projected availability of resources
Projected enemy decision points Projected effect of COA on enemy
Projected effect of COAs on enemy plans/mission
vulnerabilities Projected effect of COA on enemy
Projected impact of friendly COAs on workload
enemy COAs Projected effect of COA on enemy
capabilities/ability to fight
Predicted reaction of population to Projected time required to carry out COA
friendly COAs Projected ability of plan to disrupt/
Projected civilian behavior counter enemy intentions
Projected effect of fires on enemy/civilians Projected risk associated with friendly
COA
Projected effect of weather on friendly Projected time on route
COAs Projected safety on route
Projected effect of weather on equipment Projected safety of shipments
Projected effect of weather on terrain Projected reliability of transportation
Projected effect of weather on personnel mode
Projected effect of weather on Projected time required to get item to
infrastructures site
Projected impact of weather on visibility
Projected impact of weather on Projected ability to get to location on
trafficability time
Projected impact of weather on visibility Projected ability to sustain the assets
Projected impact of weather on ability to Projected ability of enemy to counterat-
get air support tack asset
Projected timing of weather inversions Projected ability of assets to collect
Projected impact of terrain on trafficability needed information
Projected impact of terrain on visibility Projected availability of assigned assets
Projected impact of terrain/weather on Projected ability to support units with
systems operations COA
Projected impact of terrain/weather on Projected usage of each item over time
comm capabilities Projected location of unit over time
Projected impact of terrain/weather on Projected usage of each item over time
ability to get intel Projected safety of units and logistics
Projected impact of terrain/weather on team
ability to get air support Projected availability of resources
Projected time and ability to get items
Projected safety of deployment for assets to units
Projected effect of infrastructures on Projected ability to achieve new supply
friendly COAs plan

(continued )
102 Battle of Cognition

Table 4.3
Examples of Level-3 SA in Command and Control (continued)

Projected impact of intel information on Projected need for more intel


COAs information
Predicted effect of enemy assets on Projected impact of missing
friendly COAs information on operations
Predicted enemy asset deployment
Projected time to carry out COAs

for rapid pattern matching to recognized classes of situations), the level of
SA achieved in demanding real-time environments, such as C2, is necessarily
limited (Endsley 1995c).
In addition, many task and environmental factors can seriously challenge
the ability of the warfighter to maintain a high level of SA. This includes fea-
tures of the environment (e.g., noise, heat, rugged terrain) and the warfight-
er’s condition (e.g., fatigue and physical or mental stress). These factors can
be greatly influenced by the enemy, which can alter the tempo of the battle
and affect the conditions under which a battle is fought.

Figure 4.2. The extent of what is known to the warfighter decreases the further he projects into the future.
Figure 4.3. Sources of SA—after Endsley (2006).

Perceptual Constraints
In today’s practice, much of command and control occurs in a relatively
stationary command post or tactical operations center (TOC). In the future,
however, the military is planning on a much more mobile command and con-
trol, an on-the-move concept that distributes C2 activities and places them in
conditions that are intertwined with activities in the battlespace.
Under many battlespace conditions, the warfighter must traverse widely
disparate terrain and deal with highly varied environmental conditions. Obsta-
cles, noise, poor weather, visibility, and smoke may reduce the warfighter’s
ability to perceive the information he needs. Due to enemy actions, even
directly viewing a critical area may be impossible. Gathering the needed
information across a widely dispersed operation is a challenging activity that
takes considerable effort, particularly when the enemy may actively work to
conceal critical information or provide misinformation. These factors work
to directly limit Level-1 SA, and thus the higher levels of SA (comprehension
and projection), due to incomplete or inaccurate perceptions of environmen-
tal cues.

Stressors
Several types of stress factors omnipresent in C2 operations may nega-
tively affect SA. These include (a) physical stressors—noise, vibration, heat
and cold, lighting, atmospheric conditions, boredom, fatigue—and (b) social

and psychological stressors—fear or anxiety, uncertainty, importance or con-


sequences of events, self-esteem, career advancement, mental load, and time
pressure (Hockey 1986; Sharit and Salvendy 1982). Natural anxiety occurs
due to the dangers inherent in military operations. In addition, the physical
and mental condition of the individual can also affect SA. Fatigue (due to lack
of sleep or rest, or simply prolonged mental or physical exertion—all of which
often arise in combat) may negatively affect the warfighter’s individual capa-
bilities to derive SA from the environment. The tempo and time pressures of
combat operations can make maintaining SA in the face of rapid change very
difficult.
A certain amount of stress may actually improve performance by increas-
ing attention to important aspects of the situation (e.g., sniper fire or booby
traps). A greater amount of stress can have negative consequences, however, as
accompanying increases in autonomic functioning and aspects of the stressors
can demand a portion of the warfighter’s limited attentional capacity (Hockey
1986). Stressors can affect SA in various ways, including attentional narrow-
ing, reduction of information intake, and reductions in working memory
capacity. This is a critical problem for SA, leading to the neglect of certain
aspects of the situation in favor of others. In many cases, especially emergency
situations, it is those factors outside the person’s perceived central task that
can be lethal.
Under stress, the warfighter also may have fewer processing resources for
combining information into a meaningful picture and making decisions. It
may also be harder to retain detailed information that is essential. In tasks
where achieving SA involves a high-working memory load (such as a com-
mander managing the information flow in a fast-paced operation), a signifi-
cant impact on SA Levels 2 and 3 (given the same Level-1 SA) is also expected.
If, however, long-term memory stores are available to support SA, as in more
practiced situations, there may be a less negative impact of stress on SA.

Overload and Underload


If the volume of information and number of tasks are too great, SA may
suffer because only a subset of information can be considered. The warfighter
may work actively to achieve SA, yet suffer from erroneous or incomplete per-
ception and integration of information. Conversely, poor SA can also occur
under low workload. In this case, the warfighter may have little idea of what
is going on and not be actively working to find out due to inattentiveness or
lack of vigilance. This may occur during periods of waiting, night operations,
and extended duty situations.
Developing and maintaining SA during C2 operations is difficult. Unfamiliar
conditions (terrain, people, cultures, etc.); stress; fatigue; periods of both infor-
mation underload and information overload; and the challenge of a deceptive,
hidden enemy create a situation where a considerable number of the activi-
ties and cognitive resources of the warfighter must be devoted to SA. While

modern technology cannot eliminate these fundamental constraints on SA, it


can act to significantly alleviate many of them, making the development of good
SA easier than it has ever been in the past.

SYSTEM DESIGN FOR SA IN COMMAND AND CONTROL


The capabilities of the systems provided for acquiring needed information,
and the way in which the information is presented, have a significant impact
on the quality of warfighter SA. While a lack of information can certainly be
a problem for SA, too much information or poorly organized and presented
information poses a problem as well. With improvements in the network
and computerized support systems, warfighters face a dramatic increase in
the sheer quantity of available data. Sorting through this data to derive the
desired information and achieve a good picture of the overall situation is no
small challenge (Figure 4.4).
New C2 systems and technologies may inadvertently widen the information
gap, even while trying to reduce it. For example, the complexity of computer-
ized command and control systems can degrade SA because such complexity
can significantly increase mental workload. System complexity may be some-
what moderated by the degree to which the warfighter has a well-developed
mental model of the system to aid in directing attention, integrating data,
and developing the higher levels of SA. This mechanism may be effective in
coping with complexity, but developing such mastery may also require a con-
siderable amount of training.
Other technologies may inadvertently degrade SA by redirecting the warfighter's attention inappropriately or overloading his cognitive processing. For instance, the use of night vision devices has been associated with decrements in other senses (e.g., hearing) that could reduce SA (Dyer et al. 1999). More serious effects may be produced by other devices (e.g., helmet-mounted displays) that interfere with the warfighter's vision, hearing, or attention (National Research Council 1997). High levels of automation and decision aids are also proposed and developed for C2 systems. These efforts should be conducted with great caution. Warfighter SA can be negatively affected by the automation of tasks, which puts them "out-of-the-loop" (Endsley and Kiris 1995).

Figure 4.4. The information gap—after Endsley, Bolte, and Jones (2003).
All of these issues lead to the need for a process that systematically identi-
fies warfighter SA needs and develops C2 systems that specifically promote
high levels of SA. Over the past two decades, a significant amount of research
has been focused on this topic, developing an initial understanding of the
basic mechanisms that are important for SA and of the design of systems
that support those mechanisms. Based on this research, the SA-Oriented
Design process has been established (Endsley, Bolte, and Jones 2003) to guide
the development of systems that support SA (Figure 4.5). This structured
approach incorporates SA considerations into the design process, including a determination of SA requirements, design principles for SA enhancement, and measurement of SA in design evaluations.

Figure 4.5. SA-oriented design process, after Endsley, Bolte, and Jones (2003).

SA REQUIREMENTS ANALYSIS
To determine the aspects of the situation that are important for a particular
warfighter’s SA, one can use a form of cognitive task analysis called a Goal-
Directed Task Analysis (GDTA), illustrated in Figure 4.6. In a GDTA, the
analysis identifies major goals of each warfighter position, along with the
major subgoals necessary for meeting each of these goals. The analyst then
determines the major decisions that need to be made in order to meet each
subgoal. Then, the analyst delineates the SA needed for making these deci-
sions and carrying out each subgoal. These SA requirements focus not only
on what data the warfighter needs, but also on how that information is inte-
grated or combined to address each decision, providing a detailed analysis of
the warfighter’s SA requirements at all three levels of SA. Such an analysis is
usually carried out using a combination of cognitive engineering procedures.
Expert elicitations, observation of warfighter performance of tasks, verbal
protocols, analysis of written materials and documentation, and formal ques-
tionnaires have formed the basis for the analyses. The analysis is conducted
with a number of warfighters, who are interviewed, observed, and recorded
individually. The results are pooled and then validated overall by a larger
number of warfighters.
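The goal-subgoal-decision-requirement hierarchy that a GDTA produces lends itself to a simple nested representation. The sketch below is illustrative only; the goal, decision, and requirement names are hypothetical stand-ins, not the output of an actual analysis.

```python
# Illustrative sketch of a GDTA hierarchy. All names are hypothetical
# examples, not results of a real Goal-Directed Task Analysis.

gdta = {
    "goal": "Sustain combat power of supported units",          # major goal
    "subgoals": [
        {
            "subgoal": "Project future supply needs of units",
            "decisions": [
                {
                    "decision": "Which units need resupply, and when?",
                    "sa_requirements": {
                        1: ["unit locations", "current supply levels",
                            "consumption rates"],                   # perception
                        2: ["supply shortfalls relative to mission demands"],  # comprehension
                        3: ["projected time until units reach critical supply levels"],  # projection
                    },
                }
            ],
        }
    ],
}

def requirements_at_level(analysis, level):
    """Collect every SA requirement at a given level (1, 2, or 3)."""
    items = []
    for sub in analysis["subgoals"]:
        for dec in sub["decisions"]:
            items.extend(dec["sa_requirements"].get(level, []))
    return items

# Level-3 (projection) requirements are the ones least often shown on displays.
level3 = requirements_at_level(gdta, 3)
```

Walking the structure by level makes it easy to check, for any candidate display, which perception, comprehension, and projection requirements it actually covers.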
An example of the output of this process (Figure 4.7) shows the goal struc-
ture for a brigade logistics coordinator and the decisions and resulting SA
requirements analysis for the subgoal “project future supply needs of units.”

Figure 4.6. The form of a Goal-Directed Task Analysis for determining SA requirements.

This analysis systematically defines the SA requirements (at all three levels of
SA) for effectively making the decisions required by the warfighter’s goals. The
analysis does not indicate a prioritization among the goals (which can vary over
time) or that each subgoal within a goal will always be active. Rather, in prac-
tice, a warfighter juggles between subsets of goals, based on current priorities.
The analysis also strives to make as few assumptions about the technology
as possible. How the information is acquired is not addressed, as this can
vary considerably from person to person, from system to system, and from
time to time. Depending on a specific case, the information could be acquired
through system displays or verbal communications with other warfighters,
or it could be generated by the warfighter himself. Many of the higher-level
SA requirements are generated in the minds of warfighters today, but that may change in the future as intelligent agents and other forms of automation are introduced. By focusing on ideal SA, the GDTA forms the basis for system design; it provides a delineation of the information that the system should try to provide while imposing the least workload on the warfighter.

Figure 4.7a. Example of Goal-Directed Task Analysis for a Brigade Logistics Coordinator position.

SA-ORIENTED DESIGN PRINCIPLES


The development of a system design for successfully providing the multitude
of SA requirements that exist in complex systems is a significant challenge.
To meet the challenge, a set of design principles has been developed based
on an understanding of the mechanisms and processes involved in acquiring
and maintaining SA (Endsley 1995c; Endsley, Bolte, and Jones 2003). The
50 design principles include (1) general guidelines for supporting SA, (2)
guidelines for coping with automation and complexity, (3) guidelines for the
design of alarm systems, (4) guidelines for the presentation of information uncertainty, and (5) guidelines for supporting SA in team operations. Some of the general principles include the following:

1. Direct presentation of higher-level SA needs (comprehension and projection) is recommended, rather than supplying only low-level data that warfighters must integrate and interpret.
2. Goal-oriented information displays should be provided and organized so that the information needed for a particular goal is colocated and directly supports the major decisions associated with the goal.
3. Support for global SA is critical, providing an overview of the situation across the warfighter's goals at all times (with detailed information for goals of current interest) and enabling efficient and timely goal switching and projection.
4. Critical cues related to key features of schemata need to be determined and made salient in the interface design. In particular, those cues that indicate the presence of prototypical situations are of prime importance and facilitate goal switching in critical conditions.
5. Extraneous information not related to SA needs should be removed (while carefully ensuring that such information is not needed for broader SA needs).
6. Support for parallel processing, such as multimodal displays, should be provided in data-rich environments.

Figure 4.7b. Analysis for the subgoal "project future supply needs of units."
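Principles 2 and 3 amount to organizing information by the goal it serves rather than by the sensor or system it comes from, while keeping a summary of all goals visible. A minimal sketch, with hypothetical goal and data-element names:

```python
# Sketch of goal-oriented display organization (principles 2 and 3).
# Goal and element names are hypothetical examples.

display_spec = {
    "monitor enemy activity": ["enemy locations", "enemy strength",
                               "recent enemy movement"],
    "manage resupply": ["unit supply levels", "route status",
                        "convoy positions"],
}

def render_goal_view(spec, goal):
    """Principle 2: return the colocated elements supporting one goal."""
    return spec[goal]

def render_global_overview(spec):
    """Principle 3: an always-visible summary line per goal."""
    return {goal: f"{len(items)} elements" for goal, items in spec.items()}
```

The point of the structure is that switching goals swaps in a coherent set of information, rather than forcing the warfighter to hunt across source-oriented windows.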

SA-Oriented Design is applicable to a wide variety of system designs. It has


been successfully applied as a design philosophy for systems involving remote
maintenance operations, medical systems, flexible manufacturing cells, and
command and control for distributed teams.

SA DESIGN EVALUATION
Many concepts and technologies are claimed to enhance SA in command
and control and military operations in general. Prototyping and simulation
of new technologies, new displays, and new automation concepts are extremely
important for evaluating the actual effects of proposed concepts within the
context of the task domain and using domain-knowledgeable subjects. If SA is
to be a design objective, then it is critical that it be specifically evaluated during
the design process. Without this step, it will be impossible to tell if a proposed
concept actually helps SA, does not affect it, or inadvertently compromises it in
some way. A primary benefit of examining system design from the perspective
of warfighter SA is that the impact of design decisions on SA can be objectively
assessed as a measure of quality of the integrated system design when used
within the actual challenges of the operational environment.
SA measurement has been approached in a number of ways (Endsley
and Garland 2000). A review of the advantages and disadvantages of these
methods can be found in Endsley (1996) and in Endsley, Bolte, and Jones (2003). In
general, direct measurement of SA can be very advantageous in providing
more sensitivity and diagnostic value in the test and evaluation process. This

provides a significant addition to performance measurement and workload


measurement in determining the utility of new design concepts. While work-
load measures provide insight into how hard a warfighter must work to per-
form tasks with a new system design, SA measurement provides insight into
the level of understanding gained from that work.
Direct measurement of SA has been approached either through subjective
ratings or by objective techniques. While subjective ratings are simple and
easy to administer, research has shown that they correlate poorly with objec-
tive SA measures, indicating that they more closely capture an individual’s
confidence in their SA rather than the actual level or accuracy of that SA
(Endsley, Selcon, Hardiman, and Croft 1998).
One of the most widely used objective measures of SA is the Situation
Awareness Global Assessment Technique (SAGAT) (Endsley 1988, 1995b,
2000). SAGAT has been successfully used to directly and objectively mea-
sure warfighter SA in evaluating avionics concepts, display designs, and inter-
face technologies (Endsley 1995b). Using SAGAT, a simulated test scenario
employing the design of interest is frozen at randomly selected times, the
system displays are blanked, and the simulation is suspended while warfight-
ers quickly answer questions about their current perceptions of the situation.
The questions are designed based on their SA requirements as determined
by an SA requirements analysis for that domain. Warfighter perceptions are
then compared to the real situation based on simulation computer databases,
to provide an objective measure of SA.
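The SAGAT scoring logic described above, freeze the scenario at unpredictable times, query the participant, and compare answers with ground truth, can be sketched in a few lines. The query names and answers below are invented for illustration.

```python
import random

# Sketch of SAGAT-style scoring: at randomly chosen freeze points, a
# participant's answers are compared with simulation ground truth.
# All queries and data are invented examples.

def pick_freeze_times(duration_s, n, seed=0):
    """Randomly selected freeze points, so subjects cannot prepare."""
    rng = random.Random(seed)
    return sorted(rng.sample(range(60, duration_s), n))

def score_freeze(answers, ground_truth):
    """Fraction of queries answered correctly at one freeze (0.0-1.0)."""
    correct = sum(1 for q, a in answers.items() if ground_truth.get(q) == a)
    return correct / len(answers)

truth = {"enemy_grid": "NK3421", "own_fuel": "low", "next_phase": "attack"}
answers = {"enemy_grid": "NK3421", "own_fuel": "low", "next_phase": "defend"}
```

Averaging `score_freeze` over many freezes, separately for Level-1, Level-2, and Level-3 queries, yields the kind of SA index discussed in the text.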
Multiple so-called snapshots of warfighters’ SA can be acquired in this way,
giving an index of the quality of SA provided by a particular system design.
The data collection approach provides an objective and unbiased assess-
ment of SA that overcomes the problems incurred when collecting such data
after the fact. It also minimizes biasing of warfighter SA due to secondary
task loading or artificially cueing the warfighter’s attention, which real-time
probes may do. By including queries across the full spectrum of a warfighter’s
SA requirements, this approach minimizes possible biasing of attention, as
subjects cannot prepare for the queries in advance since they could be que-
ried over almost every aspect of the situation to which they would normally
attend. The primary disadvantage of this technique involves the temporary
halt in the simulation. As a global measure, SAGAT includes queries about
all warfighter SA requirements, including Level-1 (perception of data),
Level-2 (comprehension of meaning), and Level-3 (projection of the near
future) components.
SAGAT has also been shown to have predictive validity. For example,
SAGAT scores were found indicative of pilot performance in a combat sim-
ulation (Endsley 1990). It is also sensitive to changes in task load and fac-
tors that affect warfighter attention (Endsley 2000), demonstrating construct
validity. It produces high reliability levels (Collier and Folleso 1995; Endsley
and Bolstad 1994; Gugerty 1997). Studies examining the intrusiveness of the
freezes to collect SAGAT data have generally found no effect on warfighter
performance (Endsley 1995a, 2000).

SHARED SA IN TEAM OPERATIONS


C2 is based on multiple teams that function at various echelons and levels
of responsibility. Teams of military decision makers must coordinate and com-
municate within their immediate groups (e.g., a brigade combat team), as well
as with individuals or teams across echelons that may be above (e.g., at division
level), below (e.g., at battalion level), or lateral to them (e.g., other brigades).
This introduces a great deal of complexity, specifically when attempting to
design C2 systems for enhancing team performance and decision making. Such
C2 systems must provide the SA that is needed for each team member (based
on his specific SA requirements), as well as support the need for a common
shared SA across the team. This is essential if warfighters are to effectively
participate in making decisions with and on behalf of the team.

Figure 4.8. Example of SAGAT results: (a) experienced and inexperienced platoon leaders (Strater, Jones, and Endsley 2003); (b) brigade staff (Bolstad and Endsley 2003).

Warfighters are not a homogeneous group. Military operations are conducted by warfighters with various areas of specialization, such as operations, intelligence, logistics, engineering, and fires and effects. Military operations are
generally conducted by a commander supported by the smooth functioning of
a highly specialized, yet integrated team. Therefore, supporting C2 involves
supporting not just the SA of the commander, but also the highly specialized
SA needs of each staff member and each subordinate to the commander.
To begin to address these issues, it is first necessary to identify what the
individuals in the team must do (i.e., what their goals are), how they must
interact with one another to meet the common team goals, and what informa-
tion is needed to achieve these goals using the GDTA process. Overall team
SA can be conceived of as “the degree to which every team member possesses
the SA required for his or her responsibilities" (Endsley 1995c). Each member of the command staff must have SA for all of his own SA requirements, or become the proverbial weakest link in the chain.
In smoothly functioning teams, each team member also shares a common
understanding of the situation with respect to those SA requirements that
he has in common with other teammates. This is known as shared SA—“the
degree to which team members possess the same SA on shared SA require-
ments” (Endsley and Jones 1997, 2001), as represented by the overlapping
areas in Figure 4.9. For example, the intelligence manager and the effects
manager both need information on enemy locations and areas of cover and
concealment. They may both be aware of these data elements, though they
do not make use of the information in the same way. Conversely, if one has
knowledge of certain information but does not share it, or if they each have a
different understanding of the same information, shared SA will be low.
Complete knowledge of the other person’s SA requirements is not necessary.
A team member does not need to know everything other team members know.
Actually, sharing every detail of each person’s job with each team member cre-
ates a great deal of noise for people to sort through to get needed information
(Bolstad and Endsley 1999) and can degrade performance. Only those portions
of the overall situation that need to be shared between team members should
be passed on and highlighted in order to develop systems that
support collaborative SA in team operations (Bolstad and Endsley 2000).
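The distinction between individual and shared SA requirements can be expressed with simple set operations, and the "same SA on shared requirements" definition suggests an agreement score over the overlap. The requirement names below are hypothetical examples.

```python
# Sketch: shared SA requirements as the overlap of two positions'
# requirement sets. Requirement names are hypothetical examples.

intel_reqs = {"enemy locations", "areas of cover and concealment",
              "enemy strength", "predicted enemy COAs"}
effects_reqs = {"enemy locations", "areas of cover and concealment",
                "ammunition status", "target priorities"}

shared = intel_reqs & effects_reqs       # must be passed on and highlighted
intel_only = intel_reqs - effects_reqs   # noise if pushed to the effects manager

def shared_sa_score(member_a_beliefs, member_b_beliefs, shared_reqs):
    """Degree to which two members hold the SAME picture on shared items."""
    agree = sum(1 for r in shared_reqs
                if member_a_beliefs.get(r) == member_b_beliefs.get(r))
    return agree / len(shared_reqs)
```

A low `shared_sa_score` corresponds to the failure mode in the text: each member has some picture, but not the same one, so coordination breaks down.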
A major part of teamwork involves the area where the SA requirements over-
lap—the shared SA requirements that reflect the essential interdependency of
the team members. While two team members may be assigned different tasks in
executing a mission plan, they must also operate on a common set of data. The
assessments and actions of one can have a large impact on the assessments and
actions of the other. In a poorly functioning team, two members may have dif-
ferent assessments of the shared SA requirements and thus behave in a noncoor-
dinated fashion. For example, if a warfighter has one picture of where a target is
relative to the ambush site, but this is not properly communicated to the others,
suppressive fires may not be initiated at the right time or in the right direction.
Bolstad and Endsley (2002) examined shared SA in brigade staff. They found that the way individual team members use and interpret the same information to form the higher levels of SA can vary significantly, based on the goals that are pertinent to a member's position. For example, they found all positions require knowledge of terrain information (see Table 4.4 for terrain SA requirements); however, the required level of detail and the way in which the information is used varies considerably between staff positions. The majority of differences in SA requirements appear in how the various positions need to comprehend and make projections (Levels 2 and 3 SA) based on the same Level-1 data. For example, the intelligence and operations officers are primarily concerned with how the terrain affects friendly as well as enemy troop movements, assets, and capabilities. The logistics officer and engineer are more concerned with how terrain affects vehicle movements and the placement of obstacles and assets. By understanding not only what data each staff position needs, but also how that information will be used by each position, system displays can be designed that provide only the detail level needed for a particular position without presenting unnecessary information.

Figure 4.9. Commonalities among team member goals lead to shared SA requirements (after Endsley and Jones [1997]).
The same research also shows how the shared SA requirements within the
brigade combat team can be identified via the GDTA. Table 4.5 shows some
of the shared information requirements for the intelligence and logistics offi-
cers. The analysis of shared SA items indicates that the two positions do not
share many specific details. Instead, they share general information regarding
troops, infrastructures, and courses of action. While they each have many dif-
ferent uses for this information, they also make a number of different future
projections (Level-3 SA). Interestingly, these types of projections are rarely
conveyed in display design but instead must be communicated verbally by
team members for successful coordination in most systems. Unfortunately,
teams are often poor at sharing high-level SA requirements. Instead, they
communicate only low-level data (Level-1 SA) with the (often false) expecta-
tion that it will be interpreted the same way by other team members (Endsley
and Robertson 2000).
Knowledge of these shared SA requirements can be used to develop sys-
tems to increase shared SA between team members, which will be increas-
ingly important as future operations are likely to be more distributed. One

Table 4.4
SA Requirements Associated with Terrain Information Differ Depending on the Staff Positions (Bolstad, Riley, Jones, and Endsley 2002)

SA Level 1

S2 (Intelligence): areas of cover/concealment; enemy boundaries; engagement areas; location of restrictive terrain; map of the area; restrictive points; significant terrain characteristics; type; conditions; city plan; map of area; subsurface features; vegetation; hydrology; location of swamps, lakes, wetlands, and rivers; bank slopes; water tables; obstacles.

S3 (Operations): areas of cover/concealment; key terrain; type; conditions; city plan; map of area; subsurface features; vegetation; hydrology; location of swamps, lakes, wetlands, and rivers; bank slopes; water tables; obstacles.

S4 (Logistics): areas of cover/concealment; potential choke points due to terrain; type; conditions; city plan; map of area; subsurface features; vegetation; hydrology; location of swamps, lakes, wetlands, and rivers; bank slopes; streambeds; drainage; water tables; obstacles; contour/elevation; firmness of ground; grade.

Engineer: type; conditions; city plan; map of area; subsurface features; vegetation; hydrology; locations and conditions of swamps, lakes, wetlands, and rivers; bank slopes and condition; water tables; obstacles; roads; vehicles; villages; buildings; trees; people; mines; location of enemy; location of friendly forces.

SA Level 2

S2 (Intelligence): enemy limitations/advantages due to terrain; friendly limitations/advantages due to terrain; effect of terrain on enemy and friendly assets; effect of terrain on anticipated troop movement time; effect of terrain on system detection capability.

S3 (Operations): accessibility of routes; effect of terrain on movement times/time to position troops; effect of terrain on rate of enemy closure; effect of terrain on visual capabilities; effect of terrain on communication capabilities; effect of terrain on route difficulty.

S4 (Logistics): suitability of land for unit; effect of terrain on ability to access location with each vehicle type; effect of terrain on type of vehicles to be supported.

Engineer: potential approaches and exiting areas; potential staging areas; potential terrain suppression areas; traffic ability; visibility of the locations; critical obstacle information; past enemy usage of obstacles; effect of terrain on location of enemy counterattacks.

SA Level 3

S2 (Intelligence): predicted effects of terrain on enemy COAs.

S3 (Operations): predicted effects of terrain on enemy COAs; projected effects of terrain on friendly COAs; projected terrain.

S4 (Logistics): projected effect of terrain on usage rates per item per unit; projected effect of terrain on security of resources; projected effect of terrain on troop movements.

Engineer: estimated obstacle effectiveness; predicted most secure location for assets, soldiers, and vehicles; predicted most survivable routes.

Table 4.5
Shared SA Requirements for Intelligence and Logistics Officers, after Bolstad, Riley, Jones, and Endsley (2002)

Level 1
Enemy
• Number
• Type
• Proximity
Friendly Units
• Current mission status
• Equipment
• Experience level
• Size
• Type
• Status
• Power
• Weaknesses
Infrastructures
• Roads
• Types
• Condition

Level 2
(none)

Level 3
Course of Action
• Predicted enemy COAs
• Projected friendly COAs
Enemy
• Projected enemy actions
• Projected enemy location
• Projected enemy number
• Projected enemy type
Mission
• Projected mission tasks

way to provide high levels of shared SA in teams is to use the identification


of overlapping SA needs to create tailored shared displays or a common
relevant operating picture. This method has proved effective in certain team
tasks (Bolstad, Riley, Jones, and Endsley 2002). In general, it is critical that
the shared displays provided in C2 systems allow for information to be tai-
lored to each warfighter’s needs (preventing overload), but also support team
SA by providing a window into the relevant SA of other team members.
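Tailoring a common relevant operating picture can be thought of as filtering one shared data store through each position's SA requirements. A minimal sketch, with invented position names and data elements:

```python
# Sketch: one common picture, filtered per position by its SA
# requirements. Element names and values are invented for illustration.

common_picture = {
    "enemy locations": "two platoons vicinity hill 402",
    "route status": "main supply route open",
    "water tables": "high in northern sector",
    "fuel levels": "battalion at 40 percent",
}

sa_requirements = {
    "S2": ["enemy locations", "water tables"],
    "S4": ["route status", "fuel levels"],
}

def tailored_view(position):
    """Only the elements this position needs, preventing overload."""
    return {k: common_picture[k] for k in sa_requirements[position]}
```

Because every view is drawn from the same underlying picture, team members stay consistent on shared elements while each sees only the detail relevant to his position.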

SA IN DISTRIBUTED AND AD HOC TEAMS


With the advent of network-enabled warfare, warfighters are becoming
more mobile and more distributed in time, space, and tasks. In order to support
such warfighters, new tools are needed to enable collaboration among team
members across the broad range and tempo of missions. The future force
will be “strategically and operationally responsive, rapidly deployable, men-
tally and physically agile, and able to transition rapidly across the spectrum of
operations—a versatile force capable of dominating any situation or adversary
with minimal organizational adjustment and time” (U.S. Army 2001).
Agile organizations present many challenges. With most military teams,
warfighters have been trained as a unit to work together. The time spent

together not only builds team skills, but also supports the social processes
that impact team performance, such as the development of trust and under-
standing between the members. Network-enabled warfare calls for the abil-
ity to “leverage the intellect, experience, and tactical intuition of leaders at
multiple levels in order to identify enemy centers of gravity and conceptualize
solutions, thus creating a collective genius through accelerated collaborative
planning” (U.S. Army 2001). The expressed intent is to bring together rapidly
forming teams with the skills, background, and experience to offer multiple
perspectives on a problem for the purpose of collaborative planning.
These ad hoc teams would likely be selected based on the specific needs of
the situation under consideration, would pull members from multiple military
specialties and echelons, and often would incorporate joint forces or multi-
nation team members. Such teams would not have the benefit of combined
training and background, nor would the time that is necessary to establish
relationships built on mutual trust and understanding likely be afforded to
these teams. Thus, the presence of ad hoc teams adds an additional level of
complexity to the development of C2 systems.
Experience indicates that ad hoc teams, a frequently occurring phenomenon, face a number of significant challenges in developing a shared understanding of the situation upon which to base their actions.

• SA of ad hoc teams—First, there is an overall challenge in merely keeping up with


which ad hoc teams are currently in place, what they are doing, and who is a part of
them. Commanders traditionally need to maintain an awareness of the current task
organization, what their units are doing, and their progress on key objectives. An ad
hoc team is a type of temporary unit, for which the commander needs to maintain a
similar understanding and awareness of status. Yet, this is far more challenging with
ad hoc teams, which may not adhere to traditional command hierarchies and whose
status may be far less well defined.
• Lack of team mental models—Ad hoc teams face challenges in developing a shared
awareness among team members. They must rapidly develop a mutual understand-
ing of their shared task and mission, and the state of their operational environment,
while simultaneously trying to build knowledge of their new teammates’ capabili-
ties. Unlike more permanent teams, they may have little or no mental model of their
teammates, which is needed for interpreting their teammates’ inputs and contribu-
tions and for formulating joint team operations.
• Temporal flows in SA—The timelines on which ad hoc teams operate further
challenge their functioning. Such teams often do not form, operate, and then
disband as a whole. Rather, their members come and go over time, often
multitasking with other duties on other teams (permanent or ad hoc) of which
they are also members. Thus, playing catch-up to learn what they have missed
while out of the loop, and dealing with interruptions, is the norm rather than
the exception. A team member’s ability to develop a shared understanding of
the situation in such a manner, often under time pressure, may be quite limited.

Presently, there is little information available on how to best support shared
SA for ad hoc teams. This remains a challenge for systems designers.

Admittedly, mechanisms of SA are better understood for some situations
than others. In every case, however, in order to build a system that supports
situation awareness, the current practice must rely on experimental analysis
of alternative designs. Usually such experiments are performed using proto-
type or surrogate technologies, often in a simulation environment. Use of
situation awareness in designing a command system—both its tools and tech-
niques—requires a rigorous experimentation process, with effective collec-
tion and analysis of quantitative data.
CHAPTER 5
The Hunt for Clues: How to
Collect and Analyze Situation
Awareness Data
Douglas J. Peters, Stephen Riese, Gary Sauer,
and Thomas Wilk

Building on the theoretical concepts of situation awareness introduced in the
previous chapter, let us now describe our experimental approach to measuring
and analyzing how warfighters develop situation awareness, the role of
situation awareness in effective decision making, and its ultimate impact on
the battle outcome. In these experiments, the commanders and their staffs
used a set of command and control tools, described in chapter 3, to assist the
acquisition of situation awareness. Our experimental findings identify situ-
ation awareness as the linchpin of the command process—a key factor that
determines the efficacy of all other command elements—from sensor and
asset control to decision quality and battle outcome.
We begin by discussing our approaches to collecting the experimental
data, synthesizing the collected data, and extracting analytic insights. We also
describe the tools and processes developed to facilitate our analysis of the
commander situation awareness.

DATA COLLECTION
Effective experimental design and setup are critical to the successful eval-
uation of the experimental metrics. However, the quality and depth of the
resulting experimental findings are ultimately linked to the quality and depth
of the collected data. We were fortunate to have extensive data collection
capabilities in our experimental program.
Figure 5.1 shows an overview of the data collection approach. In the top left
portion of this figure, we depict the sources of data, particularly automated
loggers. These loggers collect virtually every piece of information flowing
through the network for each of the software tools. The data contained in these
log files is comprehensive, as most of the files doubled as debugging tools
for the software developers. Using these data, we were able to explore new
avenues of analysis as we developed emerging insights and extended our analy-
sis into areas that could not have been predicted prior to the experiment. The
downside of using this information is the sheer magnitude of the data col-
lected, combined with the lack of standardization among the data files for the
different tools. Therefore, to use data from the various tools in the analysis,
we developed parsers to convert each file into relational database tables. Ulti-
mately, these data sources enable us to compare ground truth, sensor detec-
tions (including those by human eyes), fused information, and perceived truth.
These automated logs are pivotal to the analysis tools described later in this
and subsequent chapters.
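To make the parsing step concrete, the following sketch converts tool logs into a relational table that can then be queried with ordinary SQL. The pipe-delimited format, tool names, and payload fields are invented for illustration; the actual logs were tool-specific.

```python
import sqlite3

# Hypothetical log lines in the form "<time>|<tool>|<event>|<payload>".
SAMPLE_LOG = [
    "00:04:12|CSE|spot_report|entity=R-17;loc=34.10,45.20",
    "00:04:15|COP|track_update|entity=R-17;loc=34.12,45.21",
]

def parse_line(line):
    """Split one pipe-delimited log line into a row for the events table."""
    time, tool, event, payload = line.split("|", 3)
    return (time, tool, event, payload)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (time TEXT, tool TEXT, event TEXT, payload TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?, ?, ?)",
                 (parse_line(line) for line in SAMPLE_LOG))

# Once every tool's log shares one schema, ground truth, sensor detections,
# and perceived truth can be compared with ordinary queries.
count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
```

One parser per log format feeds the same schema, which is what makes cross-tool comparison tractable.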
The top right of Figure 5.1 shows the data that we collected from the
command-cell operators. This includes video and audio recordings of each
operator during the run as well as recordings of the after-action reviews and
planning sessions. In addition to these quantitative data regarding the opera-
tor interactions, we collected information on how the operators perceived the
tools and the battle progress. Our approach to this evolved over time. In early
experiments, we administered surveys to collect feedback from the operators.
Unfortunately, the quality of responses varied dramatically between individuals
and seemed to decline as each experiment wore on. In addition, the surveys
were necessarily generic and
could not be tailored to specific events for a given run.
In later experiments, we replaced the host of surveys with a single demographic
survey conducted at the start of the experiment. The majority of operator-related
information in these later experiments was gathered in focus groups. At the end
of each trial run, we conducted small-group interviews to elicit the operators’
perceptions of key events in the battle. Depending
on the events of interest, we arranged the focus groups by cell (e.g., company
staff, battalion staff) or by staff position (e.g., intelligence managers, effects
managers, etc.). By having the participants elaborate on critical situations in
the recent trial run, we obtained immediate recollections that could be corre-
lated with actual battle events. Based on the input from the group interview, we
identified an individual decision maker to interview in more detail. In this one-
on-one interview, we discussed a single key event in depth: what the operator
knew at the time, what decisions were made, what information the decisions
were based on, how a less-experienced person may have reacted under a similar
situation, and what additional information may have affected the decision. All
interviews were recorded, and analysts published notes from the interviews
that we used during the subsequent analysis phase.

Figure 5.1. The MDC2 approach to data collection.

The bottom portion of Figure 5.1 shows a particularly important element
of the data collection process—the analytic observers. In this complex free-
play experiment, the half-life of understanding the context of key events is
very short. We mitigated this by providing analytic observers with tailorable
workstations, each comprising tools that enable the human observer to
understand and record as much of the battle context as possible in real time.
During
the experiments, up to 20 analytic observers were stationed at these tailored
workstations. About half of the observers focused on a command cell’s under-
standing of the battle situation and the collaboration within the cell. Their
panels displayed a view of the active tools used by the commander and battle
managers of a given cell. The other half of the analysts focused on collaboration
and coordination between command cells. Their panels replicated the active
tools of all operators who specialized in a given function. For example, one
set of analyst displays considered collaboration between commanders, and
another focused on effects managers. In addition to their tailored displays,
each panel also contained a video view of the operators and a display of the
actual ground truth status of all Red and Blue forces. Together, the displays
allowed the observers to maintain awareness of ground truth, perceived truth,
the commander’s situation awareness, how the cells collaborated, and how the
commanders and staffs made decisions.
Additionally, we created a database application that enabled each analyst to
enter observations in real time. We designed the application to facilitate rapid
data entry and thereby help focus the data collection. An example collection
form is shown in Figure 5.2.
Based on the early experiments, we realized that it was not reasonable to
expect a single observer to effectively collect on all aspects of the battle. There-
fore, we made a conscious effort to identify data elements that could be col-
lected postexperiment from the automated data loggers and to not duplicate
the collection of that information via human observers. Further, we staffed
each functional area (e.g., intelligence, effects) and each unit (e.g., CAT, CAU)
with two observers. The first observer was responsible for selected counts—
recording each time certain events occurred (e.g., how often the intelligence
manager collaborated with the effects manager). The second observer was
responsible for context—assessing the quality of situation awareness reports,
the topics of collaboration sessions, and timing of key events.

Figure 5.2. An example of a collection tool used by an observer.

Immediately following each trial run, the observers met to identify several
key events that took place during execution. This served to focus the group
interviews (described above) and to enable postexperiment process tracing
(described later in this chapter).

SITUATION AWARENESS—TECHNICAL
The rich data sets collected during the experiments gave us significant flex-
ibility to explore emerging concepts, develop associated metrics, and relate
analytic results to combat outcomes. Our quantitative data included both the
information available to the commanders (perceived truth) and ground truth
states of all battlespace entities. Using this information, we devised the Situ-
ation Awareness—Technical (SAt) scoring method to evaluate the quality and
scope of information collected by the units over time.
Our primary measure of SAt reflects the quantity and accuracy of relevant
information available to a command-cell member over time. In its basic form,
the SAt score is a ratio of the information available to the information required.
This ratio is different for each commander at each echelon because information
needs vary with the size and contents of the areas of responsibility, the lethality
and range of weapon systems, and the mission at hand. While the complexities
of battle command are many, we simplify the scope to include three fundamen-
tal components for each enemy entity: knowing where the enemy is (location),
what the enemy is (acquisition level), and how healthy the enemy is (state).
In the SAt score, we did not consider information about friendly forces or
terrain because in our experiments the operators consistently had very good
information in those areas. The SAt model also did not include neutral enti-
ties. Although neutrals added complexity and additional information gathering
requirements to the scenarios, the command cells typically did not dedicate
sensors to trying to find civilians on the battlespace. That said, the impact of
civilians on situation awareness can be significant, and a more elaborate SAt
model may include, for example, the awareness of civilians in the proximity
of enemy entities or a decreased score when a neutral entity is incorrectly
identified as an enemy.
The evaluation of the SAt score was possible in our experiments because
every spot report (report about a detection of an entity) included a unique iden-
tifier that allowed us to relate unambiguously a detected entity to the actual
entity. This information was not available to the commander or his staff but
was available for analysis.
Of the three components of situation awareness considered in our model, the
awareness of where the enemy is located is perhaps the most tangible. The loca-
tion component of the SAt score is a measure of the accuracy of the perceived
truth location of a given entity as compared to the ground truth location. For
example, an inaccurate sensor reading or a target that moves after detection
may lead to a significant error between where an enemy entity is thought to be
as compared to where it actually is. Until an entity is detected, its location score
is zero. Once detected, the location score is reevaluated as the entity moves and
as additional spot reports arrive. If the location information becomes unusable,
the overall score for that entity (including acquisition and state components)
also reduces to zero. The location score is assigned using three categories—
unusable, actionable, and targetable. These categories are defined by the muni-
tion accuracy and availability at a given echelon as well as the capabilities of
available sensors. In other words, a target is considered targetable if the distance
between the actual location of the entity and its perceived location is within the
search radius of the best available munition. Likewise, the target is considered
actionable if the location error is small enough that the commander could send
a sensor to collect additional details on the location of the entity.
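As a sketch, the category assignment just described reduces to a threshold test against the best available munition and sensor capabilities at the echelon. The radii below are notional stand-ins, not values from the experiments.

```python
def location_category(error_m, munition_search_radius_m, sensor_cue_radius_m):
    """Categorize a perceived location by its error against notional
    munition and sensor capabilities (all distances in meters)."""
    if error_m <= munition_search_radius_m:
        return "targetable"  # the best available munition can find the target
    if error_m <= sensor_cue_radius_m:
        return "actionable"  # good enough to cue a sensor for refinement
    return "unusable"        # the track is lost; the location score drops to zero

# With a notional 100 m munition search radius and 500 m sensor cue radius:
print(location_category(40, 100, 500))    # targetable
print(location_category(300, 100, 500))   # actionable
print(location_category(2000, 100, 500))  # unusable
```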
The location score is defined by the information provided by the sensor
network; neither the commander nor his staff can influence the value except
by dedicating more sensors to refine the available information about a target.
Because different sensors have different degrees of accuracy for location infor-
mation, it is important that the commander understand what sensors have
covered an area and how long ago the target was last detected. Although the
commanders in our experiments were presented with a target location error
based on the fused sensor picture, this uncertainty did not seem to be a major
consideration in decision making, and the commanders usually presumed
that location information on their common operating picture display was
correct—immediately issuing orders to engage identified enemy platforms
without first checking location uncertainty.
Knowing the location of an enemy asset is not sufficient: the commander
and staff also need to know what the enemy asset is. This second aspect is
called the acquisition component of the SAt score and measures how com-
pletely and correctly an entity is identified. We considered the following
levels of acquisition:

• Detect—a sensor perceived an object of possible military interest but did not rec-
ognize it otherwise.
• Classify as tracked, wheeled, or biped—the sensors (and processing systems) clas-
sified the object according to its mobility class (e.g., tracked vs. wheeled vehicle).
• Classify as enemy or neutral—the entity was classified as enemy based on radio
signal processing. Because neutral entities did not emit radio, and all information
about the Blue force was known, the classification as friendly was not included in
this score.
• Recognize/identify—the entity’s specific type or model was determined (e.g., T-72
vs. M1). This provides the commander with enough information to fully understand
the threat of the detected entity.

The acquisition scores represent how correctly the command cell (or the
fused spot reports) acquired and classified an enemy entity. For example, if
one of the cell members correctly identifies a tracked enemy vehicle based
on a sensor picture, the score increases from “classify track” to “recognize/
identify.” However, if the command cell incorrectly identifies the same target,
the score remains “classify track” because it is the most correct representation
of the entity available to the commander and staff.
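This scoring logic can be sketched as follows. The chapter defines only the ordering of the acquisition levels; the numeric values attached to each level here are illustrative.

```python
# Acquisition levels in increasing order of knowledge, scored 0..1 (values illustrative).
ACQ_LEVELS = ["none", "detect", "classify_mobility", "classify_enemy", "recognize"]
ACQ_SCORE = {level: i / (len(ACQ_LEVELS) - 1) for i, level in enumerate(ACQ_LEVELS)}

def acquisition_score(sensor_level, cell_identification, true_type):
    """A cell member's identification raises the score only when it is correct;
    an incorrect identification leaves the sensor-derived level unchanged."""
    if cell_identification is not None and cell_identification == true_type:
        return ACQ_SCORE["recognize"]
    return ACQ_SCORE[sensor_level]

print(acquisition_score("classify_mobility", "T-72", "T-72"))  # 1.0 (correct ID)
print(acquisition_score("classify_mobility", "M1", "T-72"))    # 0.5 (wrong ID: level stays)
```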
Simply knowing the location and identification of an enemy entity is insuf-
ficient. For example, if the commander engages a target beyond line of sight
of his entities, he needs to know whether he effectively disabled that target
before proceeding through that area. We model this need to know the state
of the enemy (e.g., whether an entity is alive, dead, or damaged) as the third
critical input to the SAt score. The state component of SAt is a measure of
the accuracy of the perceived state knowledge compared to the actual state of
the entity. For example, incorrectly marking an entity that has actually been
killed as still alive may lead to expending additional scarce resources to reen-
gage. Likewise, incorrectly marking a healthy entity as dead may have lethal
consequences when the friendly force moves within range of the entity.
To determine the state score, we first evaluate how much of each enemy
entity’s mission is dedicated to moving, firing, and communicating (spotting
and reporting). This evaluation is roughly based on the capabilities of the
entity and how the enemy commander typically uses the platform. The cor-
rectness of an individual state assessment is then calculated by summing the
correctly identified components of combat function. For example, a battle tank
may have 35 percent of its function as moving, 55 percent of its function dedi-
cated to firing, and 10 percent of its function as reporting or communicating.
If the entity is perceived to be “total kill,” but the actual state is “firepower
kill” (and therefore also communications kill), the assessed state is correct
for the fire function and the communication function but incorrect for the
movement function. Therefore, the state score of the entity is 55 percent + 10
percent = 65 percent.
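The worked example above can be sketched in code. The functional weights are the ones given for the battle tank; the kill-state semantics follow the text, in which a firepower kill also implies a communications kill.

```python
# Share of the platform's mission devoted to each combat function (battle tank example).
TANK = {"move": 0.35, "fire": 0.55, "communicate": 0.10}

# Functions disabled by each kill state.
KILL_EFFECTS = {
    "alive":          frozenset(),
    "firepower_kill": frozenset({"fire", "communicate"}),
    "total_kill":     frozenset({"move", "fire", "communicate"}),
}

def state_score(functions, perceived, actual):
    """Sum the weights of the combat functions whose status is correctly perceived."""
    return sum(weight for func, weight in functions.items()
               if (func in KILL_EFFECTS[perceived]) == (func in KILL_EFFECTS[actual]))

# Perceived total kill vs. actual firepower kill: fire and communications agree,
# movement does not, so the score is 0.55 + 0.10 = 0.65.
print(round(state_score(TANK, "total_kill", "firepower_kill"), 2))  # 0.65
```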
The three component scores (location, acquisition, and state) are evalu-
ated for each entity in the opposing force and then combined to form an
overall score for a particular side’s knowledge of its opponent. The formula
used to support this evaluation is shown in Figure 5.3. It produces a score
between 0.0 and 1.0, with 0.0 indicating a complete lack of useful informa-
tion, and 1.0 indicating the possession of all required information. A score of
1.0 would imply that at a particular point in time the commander has access to
full knowledge about the location, type, and state of all enemy entities within
his area of interest. We introduced coefficients into the formula to enhance
its utility:

• The weights, W, allow the analyst to emphasize the three components of the com-
bined score to different degrees. Setting a weight to zero eliminates the contribu-
tion of that measure from the score. We have applied the following selection of
weights: location (Loc) was weighted at 0.45, acquisition (Acq) was weighted at
0.45, and state (Sta) was weighted at 0.10. This initial selection of weights reflects

Figure 5.3. Calculation of instantaneous SAt score.

a concern that the difficulty experienced by the operators in assessing the impact
of the application of effects could dominate the portrayed values of SAt. Sensitiv-
ity analyses conducted on the data available from the experimental series indicate
that the general trends of the curves are not sensitive to moderate changes in the
weighting values.
• The criticality coefficient, c, enables the analyst to account for certain entities that
might be of more value than others, regardless of location. For example, an air
defense platform may be more critical to find, identify, and eliminate than a supply
truck.
• The decay factors, d, were used in early experiments to account for the loss in value
of information over time. In the simulation, information was made available to the
operators through internal reports after each sensing event. The age of the infor-
mation is measured as the elapsed time since the last report of a particular target.
The information is of most value immediately after a report and begins to lose value
from that point forward. In later experiments, this decay component was replaced
with a more accurate representation of the value of information based on constantly
updated location accuracy information (discussed above). When an entity moves
beyond actionable position information, its track is lost, and both the location and
acquisition components of the score go to zero.
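Putting the pieces together, a minimal rendering of the combined score might look like the sketch below. The per-entity component values are invented, and the decay factor is folded into the location component, as in the later experiments; only the weights come from the text.

```python
# Per-entity component scores in [0, 1] plus a criticality coefficient c
# (an air defense platform weighted above a supply truck); values invented.
entities = [
    {"loc": 1.0, "acq": 0.75, "sta": 0.65, "c": 2.0},  # located air defense platform
    {"loc": 0.0, "acq": 0.00, "sta": 0.00, "c": 1.0},  # undetected supply truck
]

WEIGHTS = {"loc": 0.45, "acq": 0.45, "sta": 0.10}  # weights used in the experiments

def sat_score(entities, weights):
    """Criticality-weighted average of each entity's weighted component scores;
    1.0 means full knowledge of every entity in the set."""
    numerator = sum(e["c"] * sum(weights[k] * e[k] for k in weights)
                    for e in entities)
    return numerator / sum(e["c"] for e in entities)

print(round(sat_score(entities, WEIGHTS), 3))  # 0.568
```

Restricting the `entities` list to, say, the commander's most dangerous targets yields the MDT variant of the score discussed below.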

The SAt formula involves summation over a set of entities. But what is
included in that set of entities? The simplest possibility is to include all enemy
entities deployed in the battlespace. However, it is often more meaningful to
include only particular types of targets or targets in a specific geographic area.
Our SAt model allows for such specifications. For example, we used this flexi-
bility to explore SAt scores when applied to those entities that the commander
defined as most dangerous targets (MDT) or high-payoff targets (HPT). The
typical analytic package produced for each experimental run included the SAt
of all enemy entities, the SAt of the MDTs as defined by the commander, the
SAt of the HPTs, and the enemy’s SAt of friendly forces.
In analyzing each trial run, recomputation of the SAt score is triggered by
a number of activities such as the receipt of a spot report, entity movement,
fire missions, or an entity state change. Because such activities occur very
frequently, the resulting graph is a nearly continuous curve that describes the
evolution of the score over time.

SENSOR COVERAGE
A command cell obtains its information from sensor reports (including
human warfighters’ reports). We found sensor coverage to be the key opera-
tional factor affecting the SAt score. Understandably, knowing the status and
capability of available sensors is crucial to the commander.
With the CSE, sensor detections immediately populated the COP to give
the commanders a sense of the battlespace. However, this immediate dis-
play of information also often had the unintended consequence of leading
the commander to mistakenly conclude that an area absent of detections was
devoid of enemy entities. To reduce the risk of being surprised by a signifi-
cant enemy force, the command cell had to understand how effectively an
area had been covered with sensors. However, the absence of detections can
also contribute positively to situation awareness. For example, suppose the
commander directs certain sensors to observe an area, and the sensors do not
detect anything. Knowing that the area is void of detections is very useful,
assuming of course, that the lack of detections is due to the absence of enemy
entities and not due to inadequate sensor coverage.
To explore how effectively the commanders and staffs used their sensors to
cover key areas, we developed a tool that examined the quality of sensor cover-
age across the battlespace. This tool enabled us to consider the commander’s
level of confidence that an area void of detections on his visual display was, in
fact, void of enemy entities.
To compute the sensor coverage quality score, we first define one or more
regions of the battlespace as critical areas of interest for the unit. Each of
these areas is then given an importance score, and a regular grid is superim-
posed over these areas, as depicted in Figure 5.4. We then evaluate each grid
cell (as described below) and compute an aggregate score based on individual
cell scores and the importance of each cell.
Our model for computing the sensor coverage quality accounted for a
number of factors:

Sensor mix—Different sensor types have different capabilities and are often much
more effective in combination than they are alone. For example, some sensors can
only detect moving targets, while others can only detect stationary targets. Sepa-
rately, either one provides some amount of information about the area, but the
combination is more effective than the sum of the parts.

Figure 5.4. Area definition and analytic grid.

Time value of information—In addition to the effectiveness of the sensors that have
covered an area, it is important to account for how much time has passed since the
area was covered.
Time of coverage—The longer a sensor covers an area, the more effective it is at
detecting entities within an area. There are several reasons for this: a stationary
target may begin moving, creating possible detection opportunities, or an entity
that was out of sensor range may move into range.
Number of times covered—Spot-mode sensors do not cover wide areas within a single
time increment but look at a localized region. Therefore, for spot-mode sensors, an
important parameter to consider is the number of passes the sensor makes over a given
area. In this case, the effective coverage increases the more times an area is covered.
Distance from sensor—Sensors tend to provide more accurate and reliable detections
at closer ranges. They are nearly ineffective at their extreme detection range.
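A toy scoring function combining these factors might look as follows. Every constant here (the dwell saturation point, the information half-life, the linear distance falloff) is a notional choice for illustration, not a value from the model used in the experiments.

```python
def cell_coverage(passes, dwell_min, dist_m, max_range_m, age_min):
    """Notional quality score in [0, 1] for one grid cell, combining the
    factors above: repeated passes, dwell time, distance falloff, and the
    decay of information value with age."""
    repeat = 1.0 - 0.5 ** passes                    # diminishing returns on extra passes
    dwell = min(dwell_min / 10.0, 1.0)              # saturates after ~10 minutes on area
    falloff = max(0.0, 1.0 - dist_m / max_range_m)  # near zero at extreme range
    decay = 0.5 ** (age_min / 20.0)                 # value halves every ~20 minutes
    return repeat * dwell * falloff * decay

def area_score(cells):
    """Importance-weighted aggregate over the grid cells of Figure 5.4."""
    return sum(imp * q for imp, q in cells) / sum(imp for imp, q in cells)

# A critical cell just covered twice at close range, and a low-priority
# cell covered once, 40 minutes ago, near the sensor's extreme range.
cells = [(3.0, cell_coverage(2, 10, 1000, 4000, 0)),
         (1.0, cell_coverage(1, 5, 3500, 4000, 40))]
print(round(area_score(cells), 2))  # 0.42
```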

In order to explore how the rate of change in SAt correlates with the effec-
tiveness of sensor coverage, we plotted the quantitative measure of sensor
coverage against the SAt curves. This helped reveal three primary reasons for
situations in which SAt grew slowly or stayed relatively constant:

• The sensors were idle.
• The sensors were looking at an area that had already been covered by that sensor
type.
• The sensors were covering a new area, but there was nothing there to find.

Figure 5.5 shows an example of such an analysis. During this run, the SAt
growth followed a fairly typical trend of rapid initial growth due to the initial
intelligence feed from higher headquarters and due to sensors coming online,
followed by a relatively flat period before the Blue unit began ground opera-
tions and then a rapid growth as the ground forces moved into enemy terri-
tory and found enemy targets at close range with sensors or human vision.
Two sensor coverage curves are shown in the lower portion of Figure 5.5: the
darker line represents coverage of all areas beyond the initial line of departure,
and the lighter line represents the areas most critical for mission success. The
commander and analysts jointly identified these critical regions. Together,
these charts indicate that after an initial surge of intelligence information,
there were few new detections because no new area was being covered by the
sensors. As the ground forces began maneuvering, the sensor coverage quality
increased, and new information became available to the commander.
In addition to providing the sensor coverage measurements and graphs,
the tool’s graphical interface shows analysts the positions of Red and Blue
assets over time, identifies which sensors make detections, indicates differ-
ences between perceived location and actual location, and displays Red and
Blue attrition over time (see Figure 5.6).

Figure 5.5. Analysis indicates a strong relation between SAt and sensor coverage.

Figure 5.6. Graphical display of the Sensor Coverage Tool.

SITUATION AWARENESS—COGNITIVE
Although SAt is a relevant measure of the information available to a commander,
situation awareness ultimately occurs in the mind of the commander: “Technology
can enhance human capabilities, but at the end of the day . . . we can have ‘perfect’
knowledge with very ‘imperfect’ understanding” (Brownlee and Schoomaker
2004). It is the commander who perceives, categorizes, and synthesizes
the available information into a complete picture that bridges the three levels
of situation awareness discussed in the previous chapter. In general, Situation
Awareness—Cognitive (SAc) has a complex relation to SAt. Unfortunately, mea-
suring how effectively the commander understands the battle situation is not
as simple as developing database queries and scoring algorithms. Throughout
this experimental program, we searched for ways to understand what was in the
commander’s mind and how well he understood the tactical situation.
At the conclusion of battles in the early experiments, we asked each com-
mander to assess the level of situation awareness that he achieved during
that battle on a scale from 1 to 10. This retrospective, subjective assessment
is inherently biased and is strongly influenced by the surveyed individual’s
assessment of the recent battle outcome. Because these surveys were con-
ducted after the run was complete, and the commander knew how effective
his unit had been, there was an artificially strong correlation between the
commander’s self-assessed SA and unit success in the battle. This postex-
periment self-assessment often contradicted what the commander said dur-
ing a run. An example of this postexperiment bias is shown in Figure 5.7. In
this run, the commander’s verbalizations indicated a severe lack of situation
awareness regarding the critical northern avenue of advance, yet he rated his
overall situation awareness very high because the unit eventually achieved a
clear victory.
During these early experiments, we were fortunate to have a commander
who spoke freely about his current thoughts and perceptions of the battlespace.
At times, the commander addressed his thoughts directly to specific cell
members, while at other times, the actual target of the discourse was unclear.
These verbalizations contained critical information about the commander’s
current understanding of the battlespace—understanding that analysts parsed
into components that addressed specific aspects of situation awareness:
perception, comprehension, and projection.
For example, consider the run depicted in Figure 5.7. In this case, the com-
mander intended to attack through the northern corridor to avoid the expected
enemy strength in the south. This turned out to be a very accurate assessment,
and the northern corridor contained very few enemy units. The graph depicts
the level of SAt that the command cell developed over the course of the run.
Though the commander expected a weak force in the north, he was clearly
surprised by how weak the force actually was. The enemy entity titles depicted
on the chart (e.g., Garm, Darya) indicate the Red force that was deployed in
the north, and the times at which they were destroyed. As the chart suggests,
there was only one enemy entity left in this area by the end of the run.
A set of quotes extracted from the audio log captured the commander’s devel-
oping mental model of the situation. It appeared that he was either convinced
that the enemy had positioned more forces in the north than were actually
there, or that he was extremely sensitive to the potential of a lone platform that
could inflict significant damage on his force. At the start of the exercise, the
commander recognized that he had “no read” (i.e., inadequate understanding
of the situation) in the north. He interpreted that as a lack of knowledge about
an existing enemy threat. He perceived that “there is a lot of stuff ” in the north
and that perhaps there was an enemy platoon positioned there. He further
projected that this enemy force would be waiting in ambush for the advanc-
ing friendly force. Consequently, he caused his unit to slow its rate of advance
to 5 km/h, an exceedingly slow movement rate. The star on the chart at the
75-minute mark indicates the point when a cell member reported that he had
examined all available sensor range fans and that the area appeared clear for
forward movement. The commander was not influenced by that input and, as
the quotes suggest, continued to perceive a significant threat in the north.
In the postexperiment interview, the commander indicated that he had all the
information he needed at about the 4-minute mark. However, in actuality, he did
not cross the line of departure until 27 minutes into the run. When asked why,
the commander suggested that the “delay was caused by continued information
gathering.” This appeared to be his approach to dealing with uncertainty and
to aligning his mental model with the system’s reports (or more often than not,
aligning the system’s reported information to his mental model). The Red com-
mander suggested it was a good example of the “paranoia factor”—a hesitancy
of commanders of lightly armored units to move forward despite an open path
and sufficient sensor coverage showing no enemy force in the region.
While this approach of comparing combat events, operator dialogue,
SAt scores, and battle outcome is insightful and informative, it is also time
consuming and demands a very detailed understanding of the events of the
battle. Therefore, we continued to search for explicit and direct measure-
ments of cognitive situation awareness. Because postexperiment surveys
proved to be unreliable measures of situation awareness, we attempted to
administer surveys to operators during experiment execution, expecting
The Hunt for Clues 133

that this would give us insight into the instantaneous state of awareness at
given points in the battle. Unfortunately, because of the rapid pace of the
simulated battle, command-cell operators were reluctant to take their eyes
off the screen even for the limited amount of time (less than one minute)
required to complete a survey, and the quality of survey responses reflected
the fact that the participants viewed these surveys as a distraction from
their primary duty of fighting the battle. In later experiments, we aban-
doned the standardized surveys altogether. Instead, we relied on three
other techniques.
First, we changed postrun surveys into postrun interviews to allow the ex-
change to be tailored to emerging trends or specific events from a completed
battle. This technique also enabled us to maintain a more consistent quality
of information. These interviews were based on the questioning framework
of the Critical Decision Method (Klein, Calderwood, and MacGregor 1989).

Figure 5.7. An example in which the Blue commander exhibited poor cognitive
situation awareness, SAc, in spite of high SAt available to him.
134 Battle of Cognition

This process identifies one or more critical decisions in a run and explores the
commander’s thinking at the time of the decision. By limiting the focus and
not asking the commander subjective questions, this technique minimizes the
problems encountered with postrun surveys.
Second, we made extensive use of dedicated observers to study and analyze
decisions and verbalizations obtained from squad leaders, platoon leaders, and
commanders in which they discussed their perception of the battle and the envi-
ronment. The experimental facilities gave the observers access to all operators’
screens and communications. Using these information feeds and customized
collection tools, the observers developed an analytically rich data set.
Finally, we employed a more formal structure for the periodic “commander’s
read”—the verbal report on the commander’s assessment of his situation and
the enemy situation. Unlike our earlier attempts to encourage the commander
to speak nearly continuously, we now requested that the commander give a
verbal situation “read” at key points in the battle. We also provided the com-
mander with an outline that included his assessment of friendly and enemy
troops, and an indication of whether or not the mission could be completed on
schedule.
Both during and immediately after each experimental run, analysts recorded
the commander’s reads and qualitatively assessed their correctness as com-
pared with ground truth. This provided a subjective measure of cognitive sit-
uation awareness (SAc) as it was expressed in the commander’s reports and in
dialogues with other operators (Figure 5.8). Additionally, the observers were
aware of the actions and intent of the Red commander and were able to take
this information into consideration when making their assessments.
Each commander’s SAc was assessed by observers on a Green, Amber, and
Red scale for awareness of the Red forces and Red plans; his own forces; and
his own plan status.
The foundation for much of the analytic effort in the latter experiments
was the process trace (Woods 1993) that focused around key events (e.g., a
decisive battle, a missed decision, or an effective and timely decision). After
each run, analysts identified one or more key events based on their relative
battle impact and the commander’s cognitive effort, compiled all available

Figure 5.8. An example of SAc as assessed by an observer.



information for those events, and pieced together a detailed storyboard that
informed other aspects of the analysis.
In order to relate information availability to situation awareness, these indi-
vidual analytic results were plotted over time along with the relevant SAt curves.
This comparative examination led to the development of a new metric reflect-
ing the cognitive environment in the cell, the battle tempo.

BATTLE TEMPO
One of the primary inhibitors to developing situation awareness is the
tempo of operations in a battle. At times of peak activity, a commander is
often absorbed in the details of the moment and fails to comprehend the
bigger picture. We saw situations in which the commander made nearly con-
tinuous verbal observations about details he was seeing on his screen but
never synthesized that information into a coherent picture. The common
approach was to watch the screen for changes and then react to those
changes. All commanders in our experiments exhibited this behavior to
some extent, and the tendency became more pronounced as the tempo of
operations increased.
To better analyze this trend, we introduced a measure of battle tempo—the
frequency of battle-relevant events that influence a command cell. This met-
ric gives an indication of external cognitive factors that are likely to impact the
commander’s ability to process information and act in a timely manner. The
following events are available in either the log files or the observer database
and are used to quantify the battle tempo score:

Sensor detections—These represent the most prevalent source of information available
to the commander. Of these sensor detections, more emphasis is placed on first
detection of a given entity (i.e., when an icon of the entity first “pops up” on the
common operating picture) than on subsequent detections because first detections
prompt the greatest response by the cell members.
Taskings—Giving a fire tasking, movement tasking, or sensor tasking to a unit or an
entity takes effort. The frequency of issuing tasks is an indicator of the cognitive
load on the command cell.
Entities lost—Losing a friendly entity is a significant event that calls for a major cogni-
tive effort: What killed it? Can we accomplish the mission without it? How are we
going to reallocate resources? These events also tend to increase the pace of other
events, such as taskings.
Collaborations initiated—The commander typically collaborates with other operators
or cells prior to making decisions. These collaborative sessions are reflective of the
pace of activity within his cell.

The general form of the battle tempo metric is as follows:

Tempo = (W1/C1) · FD + (W2/C2) · SD + (W3/C3) · CI + (W4/C4) · GT + (W5/C5) · FT + (W6/C6) · BA

Where:
Wi = A weighting factor.
Ci = A normalizing factor.
FD = First detections of enemy entities per unit time.
SD = Subsequent detections of enemy entities per unit time.
CI = Collaborations initiated by the commander per unit time.
GT = General taskings initiated from the cell per unit time.
FT = Fire taskings initiated from the cell per unit time.
BA = Blue entities lost per unit time.
In a manner similar to the SAt curves, calculating an instantaneous battle tempo
score repeatedly during a run produces a curve that reflects changes in cognitive
load over time. Example curves are shown for three echelons in Figure 5.9.
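The computation can be sketched in a few lines. This is an illustrative sketch only, not the experiments' actual software: the weights and normalizing factors below are invented for demonstration.

```python
# Illustrative sketch of the battle tempo metric described above.
# Weights (W) and normalizing factors (C) are invented for demonstration.

def battle_tempo(counts, weights, norms):
    """Weighted, normalized sum over event types FD, SD, CI, GT, FT, BA.

    counts  -- events per unit time in the current time slice
    weights -- relative importance W_i of each event type
    norms   -- normalizing factors C_i for each event type
    """
    return sum(weights[k] * counts[k] / norms[k] for k in counts)

# One hypothetical time slice of a run:
counts  = {"FD": 3, "SD": 10, "CI": 2, "GT": 4, "FT": 1, "BA": 1}
weights = {"FD": 3.0, "SD": 1.0, "CI": 2.0, "GT": 1.5, "FT": 2.0, "BA": 4.0}
norms   = {"FD": 5.0, "SD": 20.0, "CI": 4.0, "GT": 8.0, "FT": 4.0, "BA": 2.0}

tempo = battle_tempo(counts, weights, norms)
print(round(tempo, 2))  # 6.55
```

Computing this score repeatedly over a sliding window during a run is what produces a tempo curve of the kind discussed above.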

COLLABORATIVE EVENTS
By plotting multiple metrics—SAt, SAc, sensor coverage, and battle tempo
as a function of time—in one chart, we are able to visually explore relationships
between the metrics and underlying phenomena. We call such plots stacked
charts. An example of a useful stacked chart is shown in Figure 5.10. This chart
focuses on the CAU-1 SAt and battle tempo and was useful in the analysis of
that unit during Run 8 of Experiment 6. This stacked chart highlights three key
events and four critical decisions (denoted with large stars in the top section of
the graphic). In Event 1, four of the primary information gathering units, the
Unmanned Aerial Vehicles (UAVs), were lost to enemy fire early in the run.

Figure 5.9. An example of Battle Tempo score evolution over time.



Although the commander recognized the loss of these units, he did not system-
atically consider the effect of this loss on his Reconnaissance and Surveillance
(R&S) plan. The second key event was the asynchronous maneuver of the two
CAU units. The lack of effective coordination between the units allowed the
enemy to strike against the attacking forces. This led to the third key event—
the destruction of CAU-1 and its failure to achieve the mission objectives. Even
after this impact on force strength, the higher echelon commander (CAT CDR)
pressed on using the current plan instead of systematically considering the capa-
bility of the remaining force. A similar product could be generated with respect
to any command cell included in our area of analytic focus.
Stacked charts are particularly helpful when used in conjunction with pro-
cess traces. A process trace produces a detailed chronicle of how an incident of
interest came about. With the process-tracing methodology, we can map out
how an incident unfolded, including available cues; what cues were noted by
operators; and the operators’ interpretation of those cues in both the immedi-
ate and larger contexts. Process tracing helps to link collaboration to changes
in situation awareness and to connect situation awareness to decision making
with a focus on the operators and their use of the battle-command system.
A stacked chart typically shows four elements:

• Which operators were involved in collaborative events
• The assessment of each commander’s SAc

Figure 5.10. An example of a stacked chart: CAU-1 in Experiment 6, Run 8.

Figure 5.11. Collaborative events display.

• Select SAt curves that may include single or multiple curves for both Red and Blue
commanders
• Battle tempo

Figure 5.11 is a detail of the top chart in the stacked chart. This view details
the observer database entries for collaborations that occurred across, and
internal to, each of the command cells included in the analytic focus. The
vertical axis contains the list of operators by cell. A blue diamond in the chart
indicates a participant in a collaboration, while the pink square indicates an
initiator of the collaboration.
The second chart in the stack (Figure 5.8) reflects assessments of the com-
manders’ cognitive situation awareness (SAc) as expressed in the commander’s
reads and in his collaborations with other operators. These subjective assess-
ments were made by observers based on comparisons between individual com-
manders’ expressions and the ground truth situation available to the observer.
In the chart, the assessments of the Blue commander’s awareness of the Red
forces and Red plans are indicated by a square; awareness of his own forces,
by a triangle; and awareness of his own plan, by a diamond.

Figure 5.12. SAt curve over time.



Figure 5.13. Battle Tempo curve in a stacked chart.

The third chart is the SAt curve (Figure 5.12) and may depict the SAt curve
for one or more command cells as indicated in the legend at the bottom of the
stacked chart. The example in the figure is for CAU-1’s SAt of the enemy and
the enemy’s SAt of CAU-1 in Run 4 of Experiment 6.
The fourth chart (Figure 5.13) contains the battle tempo curve—an assess-
ment of the relative rate of activities within the commander’s cell.
Analysis using these stacked charts led to insights regarding the amount and
nature of information available to the commander; the relationships between
that information and the commander’s situation awareness; how the cells col-
laborated; and the linkage between situation awareness, decision making, and
battle outcomes. Overall, this led to a number of interesting conclusions, some
of them encouraging and some troubling. Perhaps for the first time, quantitative
characteristics of battle command have been experimentally captured with
special attention to situation awareness. Such quantitative analysis begins to shed light on
the relation between the science (particularly the use of technology) and the art
(human cognitive processes) of command, and on the cognitive dynamics that
both enable and hinder the commander in making sense of the battlespace.
CHAPTER 6
Making Sense of the Battlefield:
Even with Powerful Tools, the
Task Remains Difficult
Stephen Riese, Douglas J. Peters,
and Stephen Kirin

Some of our experimental findings, even if relatively obvious, provide possibly
the first-ever quantitative validation of long-standing intuitive expectations
of military practitioners. Certain findings are far from obvious; others are
perhaps counterintuitive and even somewhat troubling. In this chapter, we
discuss several such findings:

• Information advantage, and not level of acquired information, is the stronger
indicator of tactical outcome.
• Human tendencies and machine-interface limitations make Situation Awareness
(SA) hard to maintain.
• Gaps and misinterpretations in SA are alarmingly common.
• Shared information does not necessarily mean shared SA.
• The cognitive load of future battle command is extremely high and tends to be
disproportionately borne by the most junior leaders.

INFORMATION ADVANTAGE RULES


SA is a remarkably strong determinant of battle outcome. Of all the factors
examined in these experiments, SA easily surfaced as the most influential. The
relationship between SA and battle outcome is worth discussing in detail.
The outcome of each experimental run can be assessed in terms of whether
the Blue or Red force achieved a tactical advantage. This assessment is based
on mission accomplishment. For example, Blue may be required to clear a
path to enable the passage of follow-on forces while Red is required to prevent
Blue from penetrating its sector. While runs did not always result in a clear

Making Sense of the Battlefield 141

win or loss for either side, the determination of which side held the ultimate
tactical advantage was usually rather clear.
To help illustrate this characterization, we show a set of Situation Awareness—
Technical (SAt) curves for Experiments 4a and 4b in Figures 6.1 and 6.2,
grouped by assessed battle outcome. Results from later experiments tend to
be similar but are more complicated due to the increased complexity that mul-
tiple echelons introduced. “Advantage Blue” or “Advantage Red”—whether it
was the Blue force or the Red force that gained advantage in the battle—was a
group assessment based on professional judgment of the observers, white cell,
analytic team, and participating operators and was not influenced by the cap-
tured data or subsequent analysis. Some remarkable relationships between the
information availability (as measured by SAt) and the tactical results emerge.
In all of the charts, the initial spike in SAt reflects the intelligence feed pro-
vided to the Blue command cell. The size of that spike is relatively consistent
across all runs as the amount of information initially provided was intention-
ally controlled. Although Experiments 4a and 4b were manned by different
Blue operator teams, those command cells achieved a comparable peak of
SAt across the runs (approximately 60%–63%), probably a reflection of the
inevitable close fight that occurs in every run. On average, the Experiment 4b
Blue command cell achieved a lower average Blue SAt score than the Experi-
ment 4a cell (40% vs. 47%). This may reflect the second team’s focused col-
lection management plan—a plan that painted a clearer picture of key areas
of interest at the expense of not understanding the more remote areas of the
battlespace. By comparison, the first team tended to cover more of the bat-
tlespace with sensors, without specifically focusing on their planned avenue of
advance. The Experiment 4b team also tended to negotiate more restrictive
terrain, a tactic that tended to mitigate the contribution of certain sensors.
Given that the amount of information ultimately available to the Blue com-
mand cell was similar across the 12 runs presented, the understanding of the
relationship between SAt and battle outcome emerges from the relative difference
between Blue and Red scores. In fact, it is the difference between Red and Blue
available information, and not the level of Blue SAt achieved, that is the stronger
predictor of battle outcome. When the Blue command cell achieved the tactical
advantage, their SAt clearly dominated that of Red over the course of the run.
In the cases in which Red gained the tactical advantage, the Red SAt usually mat-
ched or exceeded that of Blue for a significant portion of the battle (significant
either in terms of length of time or the criticality of the point in the operation).
Typically, each graph displays periods of time within the fight when there is
rapid growth of SAt, gradual and continuous growth of SAt, or no growth of
SAt. Rapid growth is usually a reflection of an intense close fight where many
new detections are made, as in the last 30 minutes of Run 6 in Experiment 4b.
There is usually an associated rise in Red SAt during these close fight encoun-
ters. Gradual and continuous growth, as in Experiment 4a, Run 8, typically
reflects deliberate movement and an Intelligence Preparation of the Battle-
field (IPB) process that enabled the appropriate placement of Named Areas

Figure 6.1. Blue and Red SAt achieved in Experiment 4a.

of Interest (NAI). Periods of no growth, as reflected in the period between 20
and 40 minutes of Run 3 in Experiment 4a, usually result from one of several
conditions, including the following: sensors are inactive and not looking, sen-
sors are searching but are looking in areas that have already been searched, or
sensors are covering new ground but finding nothing new.

Figure 6.2. Blue and Red SAt achieved in Experiment 4b.



The difference in sensing capability between Blue and Red was intention-
ally great and resulted in different tactics and procedures during the battles.
Because of Red’s heavy reliance on humans for detection, Blue was able to
limit Red’s ability to see by limiting close encounters. As a result, Red SAt
routinely increased most significantly during the close fight.
This observation is vital to understanding the nature of the future, informa-
tion-enabled force. While the value of information has always been appreci-
ated, it is less widely recognized that it is the information differential, and not
the absolute level of information acquired, that is the stronger determinant
of battle outcome. The impact on future force design, tactics, and procedure
development is significant: in the fight for information, acquiring a certain
level of information is less important than achieving a substantial information
advantage over the enemy.
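The distinction can be made concrete with a toy calculation. In the sketch below (the SAt samples are invented, not experimental data), Blue reaches the same absolute SAt in two runs, yet the differential, the quantity that tracked outcome in our experiments, differs sharply:

```python
# Toy illustration (invented SAt samples) of the finding that the
# Blue-minus-Red information differential, not Blue's absolute SAt level,
# is the stronger indicator of tactical outcome.

def mean_sat_differential(blue_sat, red_sat):
    """Average Blue-minus-Red SAt over equal-length time samples (percent)."""
    assert len(blue_sat) == len(red_sat)
    return sum(b - r for b, r in zip(blue_sat, red_sat)) / len(blue_sat)

blue = [10, 35, 50, 60, 62]      # identical Blue SAt trajectory in both runs
red_run1 = [5, 10, 15, 20, 25]   # Red sees little throughout the fight
red_run2 = [5, 30, 50, 60, 70]   # Red matches Blue during the close fight

print(mean_sat_differential(blue, red_run1))  # 28.4 -- large Blue advantage
print(mean_sat_differential(blue, red_run2))  # 0.4  -- essentially no advantage
```

Despite identical Blue information levels, only the first run exhibits the sustained differential that, in our data, accompanied "Advantage Blue" outcomes.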

SA IS HARD TO MAINTAIN
In spite of the great help provided by sensors and the Commander Sup-
port Environment (CSE), commanders and staffs found it very challeng-
ing to gain and maintain adequate SA. From the large number of possible
causes for this challenge, some of which have not yet been fully explored,
we examine two seemingly unrelated reasons in this section: the operators’
tendency to prefer acquiring new targets over conducting Battle Damage
Assessment (BDA), and the CSE’s limitations in presenting information to
human operators.

Human Tendencies
Human biases play a significant role in battle command. For example,
belief persistence and confirmation bias (Endsley 2000; Nickerson 1998)
were often seen to appreciably shape the course of our experimental runs.
Another tendency—to prefer acquiring new targets over assessing the state
of previously acquired targets—also emerged as one of the more consistent
biases.
Knowing the state of enemy assets is a key component of SAt and a key con-
tributor to battle outcome. Although the importance of conducting adequate
BDA was known to all operators, it emerged and remained a key challenge
through all phases of the experimental campaign. From a tactical perspective,
BDA (or the lack thereof) dramatically influenced the conduct of operations
as operators attended to previously engaged targets, reducing their speed of
maneuver and expending redundant munitions to mitigate the risk of operat-
ing in a less certain environment.
In operator surveys, the lack of BDA was reported as one of the most sig-
nificant detriments to achieving SA. Although there were certainly different
approaches, intents, and capabilities across the various Blue command cells, a
number of similarities surfaced:

Command cells did not fully develop a set of tactics, techniques, and procedures to address
the requirement for BDA. Tactics, techniques, and procedures emerged and were
modified over time as the operators conducted subsequent missions. Some cells were
still experimenting with methods during their final battle. At times, the recorded
dialogues revealed a lack of awareness as to who was controlling BDA assets and
who was responsible for making assessments.
Command cells did not establish a priority for BDA. Prior to each run, the commander
established priorities, including the designation of the most-dangerous targets and
high-payoff targets. Quite remarkably, staffs often did not consider these priorities
in conducting BDA during the mission.
Command cells struggled to satisfy the competing demands of acquiring and characterizing
new targets and assessing the status of known targets. Although most sensors are opti-
mized for one task or the other, humans are needed to pursue both tasks. Some
cells developed Intelligence, Surveillance, and Reconnaissance (ISR) protocols for
the use of available sensors, but these usually did not address the use of sensors for
BDA. Because of this, BDA missions were usually ad hoc and seen to deviate from
the ISR plan.
Command cells relied heavily on sensors with lower-quality images in making their assess-
ments. For example, in Experiment 4b, although images provided by robotic ground
scouts and by UAV were a less-frequent source of imagery to support BDA, the
imagery they did provide was high quality, informative, and enabled correct BDA
(Figure 6.3).
Command cells failed to exploit the automated BDA capabilities provided by the CSE. This
may reflect a lack of training and understanding, or it may reflect operator reluc-
tance to forfeit control of assets that were also needed to identify new targets. It
should also be noted that the available automated tools did not offer the capability
to prioritize targets for BDA based on Commander’s Critical Information Require-
ments (CCIR).
Command cells often lost visibility over how many times a particular target had been attacked,
how many assessments had been cataloged, and how many images were available and had
been viewed—information that is available in the CSE user interface, but not easily
accessed and interpreted in the heat of the close fight.
Command cells repeatedly relied on the option of attacking targets multiple times in the
absence of effective BDA. Some groups developed engagement heuristics to mitigate
the lack of BDA (e.g., “Fire two precision munitions at a most dangerous target on
our axis of advance.”).

Even when operators dedicated resources to BDA, achieving an accurate
assessment was difficult. From Experiment 4b, Figure 6.4 shows the total
number of assessments attempted, and the results of those evaluations (cor-
rect, inconclusive, overassessment, or underassessment). Overassessment (for
example, assuming an enemy asset experienced a catastrophic kill when it was
only a mobility kill) might encourage the operator to move rapidly against
an enemy asset that is still capable and dangerous. An underassessment (for
example, assuming an enemy asset’s state was a mobility kill when it was a
catastrophic kill) may encourage the operator to move cautiously against an
enemy that is ineffective or destroyed.
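The scoring logic amounts to comparing an assessed damage state against ground truth on an ordinal scale. A minimal sketch follows; the four-level damage ordering and state names are our assumption for illustration, not the experiment's actual adjudication rules.

```python
# Minimal sketch of BDA correctness scoring as described above.
# The four-level damage ordering is an assumption for illustration.

DAMAGE_ORDER = ["no damage", "mobility kill", "firepower kill", "catastrophic kill"]

def score_bda(assessed, actual):
    """Classify one BDA attempt against the ground-truth state."""
    if assessed is None:                 # no usable image or report
        return "inconclusive"
    a = DAMAGE_ORDER.index(assessed)
    t = DAMAGE_ORDER.index(actual)
    if a == t:
        return "correct"
    return "overassessment" if a > t else "underassessment"

print(score_bda("catastrophic kill", "mobility kill"))  # overassessment
print(score_bda("mobility kill", "catastrophic kill"))  # underassessment
print(score_bda(None, "mobility kill"))                 # inconclusive
```

Tallying these labels over a run yields the four categories of assessment results discussed here.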
Making Sense of the Battlefield 145

Figure 6.3. Source of BDA updates in Experiment 4b.

Less than 30 percent of the attempted assessments were correct; that is,
the assessment matched the actual state of the enemy asset at that point in
time. Recall that in our experiments we assumed that although a sensor can
detect and classify an object as a potential enemy asset, the final interpretation
of the images obtained by the sensor was left to a cell member. The images
were simulated with a realistic degree of uncertainty and other defects, and
to make a definitive assessment was difficult at best. This resulted in the rela-
tively low level of assessment accuracy. For example, Figure 6.5 illustrates the

Figure 6.4. Correctness of BDA.



content provided by images; the quality of the images reflects the simulation
system’s adjudication of the engagement and the quality of sensor providing
the image.
We describe the difficulties of BDA here at length to help the reader appre-
ciate one of the more significant challenges faced by the human operators.
Given this challenge, it is somewhat understandable that the command cells
often favored using sensors to characterize unengaged targets despite the
importance of BDA. However, this tendency and the resulting diminished
amount of quality BDA had a direct impact on the level of SA, as measured
both by SAt and SAc.

Limitations of the Common Operating Picture (COP)


A second significant contributor to the difficulty of achieving SA is that
of the CSE interface. In particular, displays often do not convey the level of
uncertainty present in the underlying data and thus present a deceiving picture
to the human operator.

Figure 6.5. Quality of images available for BDA in Experiment 4b: (a) 27 per-
cent of images showed no discernable target; (b) 34 percent of images showed
the presence of a target with no discernable damage; (c) 29 percent of images
showed smoke rising from the target but without sufficient detail to determine
the extent of damage; and (d) 10 percent of images were of sufficient detail to
accurately conduct effective BDA.

For example, the CSE’s COP displays detailed
icons showing where enemies were detected with no visual indication of how
old the detections were and no visual indication of the level of confidence
in the information presented. Without such critical clues, human operators
tend to believe what they see on the screen and ascribe near certainty to the
information. A key principle for designing systems to support SA is to directly
present the level of certainty associated with information on the display
(Endsley et al. 2003; Miller and Shattuck 2004).
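One way such a principle could be realized, sketched here as an assumption rather than a description of the CSE, is to tie an icon's displayed confidence to the age of its last detection, so that stale contacts are visibly uncertain. The half-life value is invented for illustration.

```python
# Hypothetical sketch of an SA-oriented display rule: decay the displayed
# confidence of an enemy icon as its last detection ages. The half-life
# value is invented for illustration; it is not a CSE parameter.

def icon_confidence(age_minutes, half_life_minutes=10.0):
    """Confidence in [0, 1] that halves every half_life_minutes."""
    return 0.5 ** (age_minutes / half_life_minutes)

print(icon_confidence(0))    # 1.0   -- fresh detection, full confidence
print(icon_confidence(10))   # 0.5   -- one half-life old
print(icon_confidence(30))   # 0.125 -- the target has likely moved
```

A display could then render the icon's transparency or color from this value, giving the operator a direct visual cue of information age instead of uniform near-certainty.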
The CSE’s COP presents such a believable representation of the bat-
tlespace that it is often treated by operators as ground truth. This includes
instances in which particular areas have not been searched by sensors, and
the human operators discount the potential presence of enemy assets in those
locations. When such sensor gaps align with the expectations of the men-
tal frame developed by Blue operators, those operators have a tendency to
take what is on the map at face value—believing in this false world. In other
words, if a particular area on the display is free of Red force icons, then the
corresponding area in the battlespace must be unoccupied and therefore safe
to move through.
This was certainly true halfway through Run 8 of Experiment 6, when one
Blue commander stated that if the Red unit had placed a counterattack force
in the vicinity of the objective, then his unit would have already bypassed
the counterattack. In fact, there was a large counterattack force just beyond
the objective in an area not yet covered by sensors (see Figure 6.6). The
top view shows the information available to the commander through his
system. The bottom view shows the ground truth of the same area of the
battlespace. At this point in the battle, the Blue unit had begun its move-
ment toward the objective (Town 23 circled in top picture). In this example,
the commander’s understanding was based on the information display show-
ing no enemy icons in the vicinity of the objective without a corresponding
view of the sensor coverage in that area. This is the same event described
in Figure 5.10.
The CSE battle-command system interface is platform focused—it dis-
plays individual platforms and detections of enemy, as opposed to aggrega-
tions and higher-level interpretation. Commanders and their staffs tended
to focus on using the system interface to task individual sensor assets to
acquire information on individual platforms. In doing so, they often lost
focus on the commander’s critical information requirements (e.g., the loca-
tion of the Red counterattack force). Such shortcomings in multitasking
have been found to be one of the most frequent problems leading to low
SA (Jones and Endsley 1996). This tendency led to gaps in sensor coverage
and made it difficult to predict Red disposition from the available limited
intelligence.
The tendency to overtrust the COP was more commonly observed when
the time to complete the mission was running low. Additionally, the level

of trust in the COP often correlates with other available intelligence. For
example, an intelligence input from higher headquarters that suggested an
enemy presence in a certain area often resulted in the Blue unit conducting
an exhaustive search of that area and blaming the lack of Red detections on
sensor or system problems. Similarly, an intelligence report suggesting that
the enemy is not defending certain terrain led the Blue unit to move quickly
through that area with little or no sensor coverage.
The human tendency to favor acquiring and characterizing new targets
over conducting BDA and the limitations imposed by the CSE COP display
are but two challenges to gaining and maintaining SA. This difficulty drives
the demand for more automated and semiautomated tools (e.g., BDA and
BDA management), as well as the need for human training on the use of those
tools. We also see a similar demand for COP improvements to help convey

Figure 6.6. Experiment 6, Run 8, at H + 43.



the level of uncertainty in information to the human operator, consistent with
SA-oriented design principles.

GAPS AND MISINTERPRETATIONS


Earlier, we addressed the importance of SA in determining battle outcome.
However, even with adequate levels of SAt, we often find that commanders and
staffs fail to interpret that information correctly, typically resulting in lower-
than-expected levels of SAc. Gaps and misinterpretations in SA are alarmingly
common. It is likewise difficult to self-assess gaps in one’s own SA, especially in
time-critical stressful environments such as represented in our experiments.
In the subsequent sections, we consider a number of factors that contribute
directly to incorrect interpretations of information and the creation of gaps
in SA. These include a human bias toward preconception, the tendency of
the COP to drive human attention, the human predisposition to want more
information, and the cognitive impact of the lack of sensor coverage tools in
the CSE.

Human Biases—How to Overcome Preconceptions


As a result of the planning process, Blue operators develop a mental frame,
in the form of a plan, which guides their actions and understanding of the bat-
tle as it evolves. Part of the frame is a set of expectations about how the Blue
force will conduct the operation, and likewise a set of expectations about how
the Red commander is likely to array his forces and what he is likely to do as
the mission progresses. These expectations can be strongly held and require
significant disconfirming data in order to be discarded. This phenomenon has
been called confirmation bias, representational error, or belief persistence, and has
been found to be particularly hard to eliminate. People tend to develop stories
that explain away conflicting evidence, rather than adjust their internally held
model of the world (e.g., see Jones and Endsley 2000; and Cheikes, Brown,
Lehner, and Alderman 2004).
In each experiment, we see a surprising number of cases where, despite data
to the contrary, operators continue to expect the Red unit to act a certain way.
Rather than trusting the sensor reports as represented on the CSE interface,
they listen to their “gut feel” or mental frame that tells them where they should
find Red forces.
For example, in the fourth run in Experiment 7, one Blue unit used UAVs to
repeatedly sweep the mountainside and the area around two towns, expecting
to see Red assets there. The commander’s frame was developed based on
the Red force’s past tendencies of hiding in the mountains. In an interview
conducted after the run (excerpt below), the Blue unit commander reveals
how his strongly held beliefs about the Red force’s disposition influenced his
actions. In fact, as shown in Figure 6.7, the Red force had only two vehicles
near the search area, with the majority of his forces arrayed to the west and
further north.

Blue Commander: Well, initially in the assembly area, obviously the first thing
we wanted to do is to try to gain situation awareness through sensors. So we
deployed our sensors—our Class 1s and our Class 2s and a Class 3 forward into
sector to try to gain intelligence. And from the get-go they weren’t picking up a
lot. They made it to the first two towns—like 97, 98—somewhere around that
area, and weren’t picking up anything, which surprised us. It kind of caught
us off guard a little bit because you just expect to pick up something. So now
you’re thinking, “Okay, is there something wrong with the sensor? Let’s keep
going back.” And we kept going back and looking and looking and not seeing
anything.
Interviewer: What was your expectation for what you’d be able to see?
Blue Commander: At least people. Usually when we get in the towns we see a lot
of human indicators come up. There wasn’t even that. At a minimum, you think
you’d see a lot of human indicators come up with people in the towns . . . at the
least . . . Or we would see something up in the hills—some sort of sensor or
observer up there.
Interviewer: So what did you do about that?
Blue Commander: We just continued to go around and around that town in just the
hope that we’d pick something up.
Interviewer: Did you have some sense of how long you would search a town until you
felt like it was clean?
Blue Commander: I guess not; no, I felt like, “I’m just going to keep doing this until
I find something.”

Analysis of commanders’ assessments through all of the experiments suggests that both correct and incorrect assessments were distributed among the
battles, often independent of the level of SAt available at that point in time in
the run. This apparent lack of a correlation between a commander’s assess-
ment and his concurrent level of SAt is inconsistent with assertions that sug-
gest that having more, better, and timelier information necessarily leads to
greater SA. Clearly, humans often exhibit the tendency—confirmation bias
or belief persistence—to stand by their beliefs in spite of evidence to the con-
trary. This places unique demands on the human-machine interface and sug-
gests potential limits to the level of SA that might be achieved. Humans tend
to be hardwired to view new data through the lens of prior convictions and
thus often interpret correct information as reinforcement of an inaccurate
assessment.

Attention Shifting—Where to Expend Cognitive Effort


One phenomenon observed frequently during the course of these experi-
ments was the Blue operator’s inability to use available information to envision the future state of the battle (Level-3 SA). One possible explanation is that the command cell was inclined to rely heavily on the COP and often preferred to watch events unfold rather than projecting two or three steps ahead. Thus, in some respects, the CSE COP may inadvertently encourage a more reactive cognitive posture.

Figure 6.7. Area in which the Blue commander expected, but did not find, Red forces.
Endsley suggests that the “single most frequent causal factor associated
with SA errors involved situations where all the needed information was pres-
ent, but not attended to by the operators. This was most often associated with
distraction due to other tasks” (Jones and Endsley 1996). Indeed, in a num-
ber of our experiments, we noticed that the CSE display drove a commander
toward very rapid and apparently unproductive shifting of his attention focus.
Constantly scanning the display for any new information, the commander
would rapidly move the cursor from one enemy icon to another, hunting for
additional details. This led him, apparently, to focus very narrowly on the
most recent information, often in a very small area of the battlespace, and
to allocate little attention to the broader appreciation of the battle. Further-
more, he would often induce other cell members to shift their attention focus
to the subject of his immediate, narrow interest. Rather than directing the
attention of the specific operator who needed to be cognizant of the event, he
would often make a general announcement that forced the other operators to
interrupt their activities. In chapter 8, we will return to this observation and
examine it in more detail.

We frequently see that attention is drawn to areas where the most current
activity occurs, regardless of the importance of the information represented
by the activity. Thus, commanders can easily lose overall situation awareness
when they become too focused on specific areas of the battlespace, specific
events, or specific distracting conversations. By focusing primarily on new
information that populates the display, and therefore narrowing their atten-
tion, command cells can lose overall situation awareness.

Information Overload—How to Recognize What Is Important
The manner and amount of information presented through the CSE inter-
face drove the SA process. An objective of interface design, therefore, should
be to maximize the amount of available critical information while minimizing
the amount of unimportant data presented to the operator. Overall, the CSE
interface did enable the operators to execute their key tasks. In some ways,
however, the robust set of available CSE options actually hindered operators’
tactical actions.
For example, the amount of information that a cell member could cause
to display easily exceeded the available screen space. Most conspicuously, the
screen clutter produced by numerous adjacent icons made individual assets
difficult to distinguish (see Figure 6.8). In another example, the operators’
tendency to set low thresholds for the automated alert function often caused
multiple noncritical alerts. As a result, the system frequently overloaded the
cell members with alerts (averaging as many as 10–12 per minute for a single
operator) and thus impaired their ability to develop SA. Also, many of the
images captured by sensors contained no usable information (e.g., they did
not allow the viewer to discern the type of asset or the extent of damage
following an engagement; see Figure 6.5). With no prefiltering capability,
and with limited cataloging capability provided by the CSE system, the oper-
ators often seemed inundated with images: they had to review all that were
provided and had to remember which images were new and which had been
previously viewed, at least until they were comfortable with their current
assessment.
While each operator chose the alerts he felt were required to execute his
assigned tasks, there seemed to be little general appreciation of the number
of alert messages (hundreds per operator each mission) and the amount of
noncritical information that various interface selections would generate. As
a result, operators were often inundated with information, particularly alerts
and images, and the value of each data element was lost in the sea of mes-
sages generated (see also Endsley et al. 2003). For example, in Experiment
4a, less than 10 percent of the alerts received could be classified as critical
information; the vast majority of alerts were general in nature or provided
redundant information. Interestingly, less than 5 percent of the decisions
made by operators resulted from the alerts defined by those operators.
Figure 6.8. CSE interface showing clutter that can result from too much information.

These observations suggest the need for both training and interface
improvements. The system could provide fewer, yet more helpful, alerts
that are a combination of specific and fused data. Woods and colleagues distinguish
between acute incidents in which the situation presents itself all at once and
the alert must be immediate, and going sour incidents in which there is a
slow degradation of the monitored process (Woods, Johannesen, Cook,
and Sarter 1994). For example, it is easy to imagine an immediate alert,
“out of ammunition,” but this type of alert could also be given in regular
intervals—25 percent, 50 percent, and 75 percent ammunition exhausted—
that allow the command cell to better assess the status and make decisions
before the situation becomes immediate. Furthermore, a fused alert might
only warn the operator when the amount of ammunition is being expended
at a rate faster than the percent of the mission accomplished. An alert fused
to the CCIR might indicate when a dangerous enemy asset is detected in
the planned axis of advance. By reducing false alarms and providing such
salient and distinct cues, the system would allow the operators to devote
more attention to understanding trends and other more important decision-
making tasks.

Sensor Coverage—How to Know What Is Already Known
Earlier, we suggested that a Blue command cell may experience periods
of little or no growth in SAt because sensors are inactive, looking in areas
that have already been searched, or are simply finding nothing new. In our
experiments, all command cells were challenged to maintain awareness of
locations and activities of their sensors. At times, valuable aerial assets awaited
instructions while simply orbiting unproductively. Command cells often did
not know where their sensors had already looked and employed sensors in
areas that were already well covered.
To further complicate the problem, the Red force avoided establishing any
recognizable operational patterns and specifically avoided assuming similar
force dispositions from one experimental run to another. For example, the
Red commander put decoys where he thought the Blue unit expected actual
forces and often interspersed decoys with actual forces to portray a larger
force. Red also did not attempt to defend the entire battlespace but instead
massed combat power in one area and took risk in another area. Further, the
Red force organized itself into irregular groups composed of platforms of
different types that precluded the Blue command cell from making consistent
conclusions about the enemy force disposition.
Recognition of these conditions motivated the development of a sensor cov-
erage analysis tool (illustrated in Figure 6.9) that could portray, over time, the
amount of terrain being searched by the Blue sensors. Among its functions,
the tool measures the quality of sensor coverage by comparing the amount of
terrain searched to the Blue unit’s area of interest. A sample resultant curve
is depicted in Figure 6.10. This afforded the analysts a visual comparison of
the SAt curve to the amount of terrain covered by the available sensors. Addi-
tionally, because operators were extremely sensitive to the appearance of new
icons on their displays, and because they are the primary causes of the distinct
jumps observed in the SAt curves, the display shows the number of enemy
assets detected in each 10-minute period. In all battles examined with this approach, there was a clear positive relationship between the amount of terrain searched and the corresponding level of SAt.

Figure 6.9. Sensor coverage analysis tool.
The sharpest increases in coverage tended to occur when Blue ground
elements or more capable UAVs moved forward, covering new terrain with
high-quality direct vision optics (DVO) sensors or human vision. However,
the contribution of ground elements came with a cost: since Blue ground
platforms’ sensing capability was comparable to that of the Red platforms, the
ground advance usually contributed to an increase in Red SAt. And because
the Red force was usually dispersed throughout the battlespace, there were few
instances of significant coverage growth without some number of detections
(Figure 6.10).
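The coverage metric just described, terrain searched relative to the unit's area of interest, can be sketched on a simple cell grid. This is a hedged illustration, not the experiment's actual analysis tool; the grid representation and all names are assumptions.

```python
# Hedged sketch of the coverage metric: fraction of the area of interest
# searched by any sensor, on a discrete terrain-cell grid. The grid model
# and names are assumptions, not the actual analysis tool.

def coverage_fraction(area_of_interest, searched_cells):
    """Fraction of the area of interest that has been searched."""
    return len(area_of_interest & searched_cells) / len(area_of_interest)

# Area of interest: a 10 x 10 grid of terrain cells.
aoi = {(x, y) for x in range(10) for y in range(10)}

# Cells swept so far, e.g., a UAV strip plus a ground element's route.
swept = {(x, y) for x in range(10) for y in range(3)} | {(x, 5) for x in range(10)}

print(coverage_fraction(aoi, swept))  # 0.4: 40 of 100 cells searched
```

Sampling this fraction alongside the number of detections in each 10-minute period would yield curves of the kind compared in Figure 6.10.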
The commander often based maneuver decisions in large part on whether
or not an area had been searched by a sensor. Seldom was he able to quantify
or assess the quality or currency of that coverage. In addition, if an area had
been searched and no enemy entities detected, there was no mechanism to
remind the commander that the area was potentially vacant. An investigation
of the factors that contribute to the commander’s decision making indicates
that the knowledge of what is not there often may be as important as the knowledge of what is there. For example, recognizing gaps in the enemy’s defense can have as many tactical implications as knowing where his strength is located.

Figure 6.10. Relationship between sensor coverage and SAt.
To this end, we explored the possibility of developing a CSE version of
the sensor coverage tool that captures how effectively critical battlespace
areas have been covered by sensors (a prototype display, based on the ana-
lytic tool, is shown in Figure 6.11). Such a sensor coverage device would
assist the command cell in understanding how effectively the plan is being
executed and when an area has been sufficiently covered and movement
should begin. Not having such a tool, one Blue commander described
his mental model of the sensor coverage provided during the battles as
discrete, rectilinear, and homogeneous—similar in design to the crisp and
distinct NAIs that were drawn during the planning phase. In reality, as
illustrated in Figure 6.11, sensor coverage is continuous, nonlinear, and
heterogeneous.
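A searched area's value also decays over time; one way a CSE-style tool could track the currency of coverage is to timestamp each search and treat old coverage as unknown again. A minimal sketch, assuming named areas and an invented 30-minute staleness window:

```python
# Minimal sketch of coverage "currency": a searched area goes stale after a
# fixed window and should again count as unknown. The 30-minute window and
# the named-area keys are assumptions for illustration.

STALE_AFTER_MIN = 30.0

def current_coverage(last_searched, now_min):
    """Areas whose most recent search is still fresh at time now_min."""
    return {area for area, t in last_searched.items()
            if now_min - t <= STALE_AFTER_MIN}

last_searched = {"NAI-1": 10.0, "NAI-2": 55.0, "NAI-3": 70.0}
print(current_coverage(last_searched, now_min=75.0))  # NAI-1 has gone stale
```

Such a mechanism would let the display fade old coverage rather than rendering it as if it were as reliable as a sweep completed a minute ago.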
The four impediments to SA presented above all have a significant influ-
ence on decision making and, as we have seen, are not overcome with more
information. When adequate information is available, human decision makers
may often interpret it incorrectly and at the same time demand even more
information. Automated tools that quickly provide large amounts of infor-
mation to humans do not yet inherently help overcome human tendencies
and biases. In fact, they often exacerbate the biases as commanders look for
the specific data elements they expect to see while discounting the perceived
noise of the other information. The impact of these findings for the future
force clearly includes specialized training (largely not discussed here), the
direct presentation of interpreted information (Level-2 and Level-3 SA), and
the need for the interface to help recognize when humans are developing or
maintaining incorrect views of the world.
To account for human limitations in perception and attention span, future
systems should present information to the human operator with an indication of certainty and degree of urgency. Absent such a mechanism, humans tend to treat the most recent information as the most urgent, and all information as equally credible. The system should help curb the operator’s predisposition to demand more information, regardless of his current cognitive load. Also needed are sensor management tools that indicate the amount and quality of previous sensor coverage. The lack of such a mechanism often results in deficient decisions even while the operator is unaware of the underlying information gap.

Figure 6.11. User interface of the Sensor Coverage Tool.

COMMON BUT NOT SHARED


Shared information does not necessarily lead to shared SA. Our original
expectation was that the collaborative nature of the CSE would lead to a
shared perception of the environment (shared Level-1 SA). One could argue
that sharing this information across the command cell would lead to a shared
comprehension (SA Level-2) and a shared projection (SA Level-3). Evidence
from our experiments suggests that although a shared display of information
does lead to a degree of shared awareness, its interpretation and projection
differs among individuals, and shared understanding can only be developed
through effective collaboration. Several observations from Experiment 7 serve
to illustrate this finding.
Earlier in this chapter, we presented a number of SAt curves that indicate
information availability and its correlation with battle outcome. While the
information was available equally to all members of the cell, it was often inter-
preted differently at each station. Just as in a class in which all students have
the same text and hear the same lectures but do not achieve the same level of
understanding of course material, operators in our experiments often drew
different conclusions from the same data. In particular, the following three
commonly held assumptions proved to be faulty:

Assumption 1: If we can show operators more Red and Blue information, then they
will have a more accurate interpretation of both the Red and Blue situations.
Assumption 2: If we can show operators more Red and Blue information, then they
will make better predictions about how the Red and Blue actions will play out.
Assumption 3: If we can show operators more Red and Blue information, then they
will have a shared Level-2 and Level-3 SA across the command cells.

In the second run of Experiment 7, we see a compelling example in which assumptions 1 and 3 do not hold true. In this battle, the Combined Arms Unit
(CAU)-2 commander inaccurately assessed that the loss of two reconnaissance
vehicles was due to a Red minefield. In fact, the units had been destroyed by
an undetected enemy scout vehicle. This seemingly minor sense-making fail-
ure had a profound impact on the manner in which the battle was fought by
the entire Blue team. In fact, that failure was arguably the momentum killer
that negated any possibility of Blue mission accomplishment, as the Blue unit
struggled to neutralize the perceived minefield. Through these events, two
different Blue commanders looked at the same information and drew two dif-
ferent conclusions about the enemy force—the failure to achieve shared SA
across the team.
Early in the mission, the commander saw data in the form of three Auto-
matic Target Detections (ATDs) in the vicinity of Town 97 and immedi-
ately suggested that this could indicate a minefield and sent a slow-moving
mine-detecting sensor, a MULE, to investigate (see Figure 6.12). Although
the ATDs actually represented Red infantry, the Blue unit had developed
an expectation of Red minefields near their route of advance. Based on the
movement corridors provided by the terrain, and on their expectation of what
the Red force would anticipate that the Blue force would do, the Blue opera-
tors came out of the mission planning session with a mental frame that pro-
duced a prediction of Red mines. The ATDs that arrived in close proximity
to each other supported the cell’s expectation of a minefield, thus serving as a
mental anchor for the invalid frame that continued to develop.
As the run continued, the Blue operators’ attention was diverted to Town
101 (also in Figure 6.12), where multiple sensor hits produced a picture of
several Red combat vehicles. The Blue team spent considerable time engag-
ing and collecting BDA on those Red assets. About 10 minutes later, the Blue
team lost two of its unmanned reconnaissance vehicles, both at the same loca-
tion near the ATDs mentioned previously. The Blue operators’ mental frame
for Red minefields in that vicinity was so strong that the commander immedi-
ately concluded that the reconnaissance vehicles’ loss was due to a minefield.
In fact, the vehicles were lost to fires from Red armored infantry vehicles near
Town 101, where the Blue unit had earlier destroyed several Red assets. The
Blue commander did not even consider that Red indirect fires could have
accounted for his losses. He lacked a key piece of information: an indication
of what killed the unmanned reconnaissance vehicles. This data alone would
likely have dramatically changed the outcome of the battle.
The data provided by the CSE interface concerning this incident was accurate.
It was the human comprehension of the data that was flawed. The Blue opera-
tors were correct to assume that the Red team would lay mines near the Blue
avenues of approach—Red mines were in fact further west of their suspected
location. However, the ATDs provided such strong anchors for their mental
frame regarding the specific minefield location that this mind-set could not
be broken. This may be an example of confirmation bias—the tendency to
search for or interpret information in a way that confirms one’s preconcep-
tions. For the Blue commander, the new information fit his working hypothe-
sis so well that he did not perceive a need to generate alternative explanations.
While individual sense-making mistakes are to be expected even under the
best circumstances, this scenario is illustrative of a number of cases that begin
to challenge the validity of the assumptions presented earlier.
Figure 6.12. Phantom minefield versus actual location of Red mines.

To further examine assumption 1, we compare Level-1 and Level-2 SA using the SAt curves (Figure 6.13). Here we see that the Red and Blue units had comparable SAt scores up until about 50 minutes into the run. However,
we know from the Blue commander’s actions just described that his explana-
tion of the data available around Towns 96 and 97—his Level-2 SA—was not
strong. At 70 minutes into the run, the Blue unit lost two unmanned recon-
naissance vehicles to Red indirect fire. Level-1 SAt at this time is higher for
CAU-2 than it is for Red regarding CAU-2. At 80 minutes into the run, the
Blue unit commander finalizes his assessment of the Red minefield by draw-
ing it on the map, and from that point on, the entire Blue team holds onto
the assessment that a minefield is located east of Town 96 and subsequently
makes movement decisions according to that interpretation. The SAt curve
indicates that the Blue unit’s Level-1 SAt was much better than the Red unit’s
for the last 90 minutes of the run—a clear example of how accurate data pro-
vided by the system does not necessarily lead to an accurate interpretation of
the situation.
Furthermore, despite the fact that all Blue operators saw the same elements
of data via the COP, a shared interpretation (Level-2 SA) was lacking. Both
unit commanders (CAU-1 and CAU-2) saw the same three ATD reports.
The CAU-1 commander correctly interpreted them as Red infantry in Town
97. He stated this assessment during a formal commander’s report and again
during informal information exchanges with the CAU-2 commander. The
CAU-2 commander noted the difference of opinion but maintained his suspi-
cion of a minefield. Working from their individual frames, the two CAU com-
manders maintained different explanations (Level-2 SA) for the same data.
Interestingly, the CAU-1 commander later accepted the assessment of a
minefield in that location after the two unmanned reconnaissance vehicles
had been killed and the minefield was drawn on the map. Thus, when they finally did gain a shared understanding, it was a wrong one.

Figure 6.13. SAt over time for Blue and Red in Run 2.
As a result of the misinterpretation, they allocated resources to clear the
phantom minefield. The Blue unit had enjoyed good forward momentum to
this point, but movement stopped for the mine-clearing work. The tactical
impact of the flawed assessment was also felt by CAU-1, as they postponed
initiating their movement until the phantom minefield was cleared. The non-
existent minefield contributed to movement decisions for the remainder of
the run (e.g., whether to go around the minefield or to navigate the path
through the minefield).
Had the CSE prompted the commander to consider other explanations for
the data, it might have helped both CAUs and the entire Blue force’s mission.
Additionally, because the discrepant interpretations were articulated but not
examined, an improved CSE system could provide a better means of support-
ing collaborative comprehension of data (e.g., tools to compare and contrast
probabilities associated with different interpretations).
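One way such a comparison tool might work is a simple Bayesian weighing of competing explanations for the same sensor data. The sketch below is purely illustrative; the priors and likelihoods for the minefield-versus-infantry reading of the three ATDs are invented.

```python
# Illustrative sketch of a hypothesis-comparison aid: Bayesian weighing of
# competing explanations for the same sensor data. The priors and
# likelihoods below are invented for illustration.

def posterior(priors, likelihoods):
    """Normalize prior x likelihood across competing hypotheses."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnorm.values())
    return {h: round(p / total, 3) for h, p in unnorm.items()}

# Planning-session expectation favored mines near the avenue of approach...
priors = {"minefield": 0.6, "infantry": 0.4}
# ...but suppose clustered ATD returns are twice as likely from infantry.
likelihoods = {"minefield": 0.2, "infantry": 0.4}

print(posterior(priors, likelihoods))  # infantry becomes the likelier reading
```

Even a crude display of such numbers could have flagged that the minefield frame, however strongly held, was not the only tenable explanation for the ATDs.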
In a separate situation, during the mission planning for Run 3, the com-
bined arms team (CAT—higher echelon for the two CAUs) set forth criteria
to be met prior to initiating the attack. They included certain preparatory fires
to disable Red ADA, sensor sweeps and surveillance at river crossing sites,
and a sweep of the sector 20 km forward of the Line of Departure (LD). The
CAT commander’s intent was to be as certain as possible that the subordinate
CAUs would be safe to move forward. As it turns out, the CAT commander
did not feel comfortable allowing the forces to cross the LD until more than
an hour past the start of the mission. During that hour, both subordinate
units were confused as to why they had to wait so long. In one instance, at
25 minutes into the battle, the CAU-2 commander asked for permission to
cross the LD with unmanned mortar and reconnaissance vehicles, in order to
both range the enemy infantry and to increase the viewing area of the UAVs
that were tethered to the reconnaissance vehicles. The higher commander
denied this request, and as a result, the subordinate commander decided to
use longer-range and more-powerful-than-necessary weapons on the exposed
infantry. This in turn confused the higher-echelon command cell—why would
they use such weapons, usually reserved for more dangerous targets, on dis-
mounted infantry? The CAT commander further realized that by using these
weapons, CAU-2 may have telegraphed his location to the Red force. As a
result, there was sudden pressure to cross the LD more quickly, despite the
CAT commander’s low level of confidence that the situation was right.
Two SA issues are brought to light from this example. First, there was not
a common frame among the various Blue elements to describe how the plan
should unfold—lack of shared SA. The CAT commander developed a certain
vision during planning for how the battlespace should look before crossing
the LD and verbally communicated the specific criteria to the CAUs prior to
the run. But his mental vision that operationalized those criteria and essen-
tially described what his comfort level had to be before acting was not com-
municated. Both subordinate units were ready to move out long before the
CAT commander’s unknown vision had been achieved.
Second, the two echelons (CAT and CAU-2) comprehended the evolving
battle differently as a result of different cognitive frames. The CAT command-
er’s goals were to target enemy ADA and to have sensor coverage of specific
areas. The CAU-2 commander’s goals were to target all known enemy enti-
ties and to continually extend his sensor coverage forward. The CAU-2 com-
mander’s interpretation of the data in the context of his mission objectives led
him to believe that his best action was to move certain assets across the LD,
in contrast with the CAT commander’s comprehension of the same data and
his perceived best action.
This difference in Level-2 SA resulted in the CAT’s refusal to let CAU-2
cross the LD, the subsequent use of more potent weapons to take care of
enemy infantry, and confusion at both echelons concerning the other’s
actions. The difference led to Level-3 SA (projection) problems as well. The
CAT commander did not anticipate that a delayed movement would lead to
a suboptimal engagement decision on the part of one of his subordinates.
And while the CAU-2 commander may have anticipated that his fires could
provide his location to the Red unit, he did not anticipate that higher head-
quarters would wait so long to approve his crossing the LD.
These few examples, and there are many more, help illustrate that having
access to the same information does not necessarily ensure common understand-
ing. In the next chapter, we further discuss how collaboration suffered when
operators believed that sharing the same display content reduced the require-
ment for human communication. Situation awareness is built upon information;
it does not equal information. Turning the available information into SA is a
demanding cognitive task, and one that different individuals do differently.
Future battle-command tools must help overcome these challenges, per-
haps through development of methods for gauging the common understand-
ing, or alerting when a unit’s actions conflict with the commander’s intent.
One can envision tailorable and flexible displays that allow operators to view
information from perspectives of different goals. Additionally, there are sig-
nificant training implications as future leaders will have to assimilate and
comprehend information more rapidly, while at the same time overcoming
the natural tendency to assume that others see things as they do.

THE COGNITIVE LOAD


The increased availability of information, along with a shift in capabilities
and responsibilities of lower echelons, presented our commanders and their
staffs with two distinct problems. First, the increased demands of informa-
tion gathering and processing led, overall, to a high level of cognitive load.
Second, this increased cognitive burden is shared very unequally among the
functions of cell members.
We see four aspects that combine to create a heavy cognitive load on the
operators of future battle command. First, there is a major increase in the influ-
ence of SA on determining battle outcome, as we discussed earlier. Second,
the individual tasks associated with gaining and maintaining SA are among
the most difficult and time demanding of all staff duties. Third, the CSE
tools that support acquiring and sustaining SA are among the least developed
of the command cell’s automated aids. And fourth, the depth of experience,
education, and expertise necessary to overcome these challenges is typically
lacking at the lower echelons. In this environment, the resulting fight for
information is truly a difficult struggle and imposes a heavy cognitive load on
the commander and his staff.
In the future force environment, we anticipate that the amount of informa-
tion passed from sensors to commanders and staff will be orders of magnitude
greater than we see today. One of the largely unanticipated consequences of
this increased load is the disproportionate burden shouldered by leaders at
lower echelons. As brigade-like capabilities are given to lower echelons, at the
company and battalion levels, commanders at these lower echelons require an
advanced skill set and a greater experience level to enable effective decision
making. With the least experienced commanders and staffs facing the greater
cognitive load, a significant rethinking of manning and training practices may
be needed. In particular, future force warfighters will likely need to begin early
training to operate effectively in an information-saturated environment.
Using the SAt metric during postrun data analysis, we were able to measure
and quantify the information available for Level-1 SA. This information was
used by a commander to develop and maintain his SAc. The analysis of how
different commanders interpret this information to formulate a mental model
upon which their decisions were based was a critical area of focus for Experi-
ment 7. Here we examine the relationship between the cognitive loads at
the two combined arms units (CAU-1 and CAU-2—approximately company-
level units), the CAT (roughly comparable to a battalion), and their higher
headquarters (brigade).
Figure 6.14 depicts the SAt curves obtained from two selected runs (4 and
7). The curves indicate that the Blue commander won the critical fight for
information and was able to obtain a higher level of SAt than his adversary.
While this leads one to expect that the Blue team would win the overall battle,
we saw situations where this did not occur. In fact, the ability of one side to
gain an early information advantage and maintain that advantage through-
out the battle is critical. While we have already discussed the relationship
between SA and battle outcome, we highlight this result here because it is
essential that our future commanders be trained to properly use their future
force assets to gain an early information advantage. Additionally, it is critical
that these commanders understand how they can use future force assets to
perform counterreconnaissance against a future enemy in order to limit the
information obtained by the enemy early in the battle.
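To make the notion of an early information advantage concrete, here is a minimal sketch of how two SAt time series might be compared. This is purely illustrative: the function name, the sampling assumptions, and the one-quarter "early battle" cutoff are ours, not the actual SAt computation used in the experiments.

```python
def information_advantage(blue_sat, red_sat, early_fraction=0.25):
    """Compare two sampled SAt curves; positive values favor Blue."""
    assert len(blue_sat) == len(red_sat) and blue_sat
    diffs = [b - r for b, r in zip(blue_sat, red_sat)]
    early_n = max(1, int(len(diffs) * early_fraction))
    early = sum(diffs[:early_n]) / early_n      # advantage early in the battle
    overall = sum(diffs) / len(diffs)           # advantage over the whole run
    return early, overall

# A run in which Blue wins the early fight for information and keeps the edge.
blue = [0.2, 0.5, 0.6, 0.6, 0.7, 0.7]
red = [0.1, 0.2, 0.4, 0.5, 0.5, 0.6]
early, overall = information_advantage(blue, red)
```

On such a run both values are positive; the interesting cases noted above are those in which Blue holds the information advantage throughout and still loses the engagement.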
The CSE provides a significantly greater quantity of information to lower
echelons at a faster pace than occurs with today’s force. To handle the increased
information volume, the commander requires an advanced skill set
and experience level in order to effectively visualize the battlespace, identify
and understand decision points, and create effective CCIRs to support his
decisions. A Blue commander described the challenge of being capable of
quickly sorting through all of the available information and determining what
information is most relevant to his mission at that time:

I don’t look at all the imagery, I look at what I see as affecting my plan because there is
a lot of superfluous imagery that floats around and you know I don’t have to be aware
of every shot nor would I ever want to be aware of every shot. I can’t process that
much. I want to focus on 2–3 bits of information at any given time and that helps me
maintain a confidence level vis-à-vis my mission success.

Experience and training on how to handle this new cognitive load are
essential for a commander to excel in the future environment of network-
enabled warfare. The critical combat enabler is not simply the presence of
the information, but rather the ability of the commander to understand and
process the relevant information and act upon that information quickly. This
ability to understand, process, and act quickly must be built into the experi-
ential base of future junior leaders. Often this means overcoming the natural
human tendency to want more and more information.
Future company-level commanders and staffs need to effectively integrate
advanced ISR assets into tactical operations, classify and identify future enemy
targets through different types of imagery, prioritize and engage these targets
with the correct munitions, and perform effective BDA. More important, and
likely more difficult, is the requirement for these future leaders to be proficient
in processing vast amounts of information, determining what is relevant and
what is not relevant, and making key decisions based on partial information.

Figure 6.14. CAT’s SAt for Runs 4 and 7.
The solution to helping future tactical commanders is likely to be a complex
one that includes training, assignment policies (to better manage experience),
and technical developments in C2 systems. Increasingly complex simulation
environments, such as those seen in this experimental campaign, begin to
address some of these needs.

EXPERIMENTAL DESIGN AND ANALYSIS


Finally, we highlight several lessons concerning the requirements for cre-
ating an effective experimental and analytical environment. Our campaign
of experiments, with its supporting simulations, scenarios, data-collection
system, and investigative mechanisms, provided analysts with an uncom-
mon opportunity to explore the relationships between man, machine, and
commander’s situation awareness. While the following list is necessarily
a partial one, it conveys the demand placed on experimental organizers,
simulation programmers, and analysts in order to gain insight into such
relations.

Experimental Environment
• Having multiple, free-play runs of the same mission, with different operator teams
across experimental phases, allowed analysts to make observations and draw conclu-
sions that would not be possible from single-run experiments or from more-varied
runs.
• The dynamic, stressful environment in which decisions could not be exhaustively
examined a priori allowed a thorough exploration of the more intuitive decision
modes.
• Having humans both in the loop and able to freely exercise battle command allowed
situation awareness to influence battle outcome.
• The representation of multiple potential future capabilities in a structured environ-
ment enabled examination of a relatively large number of concerns with a relatively
small number of operators.

C2 Tools and Simulation


• The clear technical centerpiece of these experiments is the CSE. And while certain
technical aspects of the CSE system are limited to capabilities of current technolo-
gies, the system introduced a significant number of anticipated C4ISR (Command,
Control, Communications, Computers, Intelligence, Surveillance, and Recon-
naissance) enhancements, particularly regarding future sensors and unmanned
platforms. These allowed us to study man-machine interactions and assess where
cognitive demand might outpace automation support.
• The tailorable COP required each operator to set up and to adjust his C2 interface
and thus allowed analysts to observe operators’ tendencies concerning the acquisi-
tion and management of information.
• The high-resolution sensor effects module in the simulation architecture compelled
the human staff to develop ISR plans, control the execution of sensor platforms,
and interpret sensor images and communications. This allowed analysts to thor-
oughly explore the cognitive demand of sensor management and its relationship
with gaining and maintaining SA.

Analysis
• The SAt metric to measure information availability, combined with metrics for
mission success, permitted analysts to objectively determine who held the tactical
advantage (Blue or Red) and to compare those outcomes to information availability.
• Availability of dedicated analysts and data collectors during the experiments allowed
us to capture the context of command decisions.
• Tailorable analyst observation stations, automated logging of all simulation activi-
ties, recordings of all communications, and complete transcriptions of a number of
runs enabled the detailed analysis that yielded substantive, quantitative insights.

The findings outlined in this chapter derive from relatively early experi-
mentation on network-enabled battle command. Continued and more sophis-
ticated experiments and analysis are required to fully explore the impact of
network-enabled warfare on the future commander’s cognition. Still, even
these tentative findings are significant in their implication for the design
and training of our future force. The importance of situation awareness in
network-enabled warfare drives the demands on the human operator who,
despite remarkable advances in automation, faces an increased cognitive burden
in the future. This in turn calls for further improvements in battle-command
support systems and in corresponding training.
In addition to training and tools, another time-honored approach to deal-
ing with cognitive challenges is to bring more minds to the task. Two heads
are better than one, says the common wisdom, and collaboration helps solve
hard problems. It should be particularly true, one could argue, with the mod-
ern tools designed to make collaboration more efficient. Our experiments,
however, offer a far more nuanced story of collaboration’s role in battle
command.
CHAPTER 7
Enabling Collaboration:
Realizing the Collaborative
Potential of Network-Enabled
Command
Gary L. Klein, Leonard Adelman,
and Alexander Kott

The battle command’s tasks that we discussed in chapter 1—diagnosing, plan-
ning, deciding, delegating, synchronizing—are enabled by collaboration
between the members of the battle command: decision makers, staffs, and
implementers. For our purposes, collaboration comprises the behaviors people use to
participate jointly in a task, especially information-related behaviors such as
locating a collaborator, transmitting information, establishing a shared mean-
ing for the information, and orchestrating joint actions. Doctrinal publica-
tions (e.g., Field Manual 6.0 2003) highlight the importance of collaboration
for fast and effective command and control.
Collaboration occurs in multiple forms and across multiple types of com-
mand relations. It occurs between members of a staff of an organization;
between superiors and subordinates; between peer commanders; between
peer staff members of different organizations; between organizations that are
explicitly intended to collaborate and those that find a need or an opportu-
nity to collaborate by happenstance.
Although the content and means of collaboration have changed throughout
the history of warfare, it is nonetheless a timeless component of battle com-
mand: Alexander the Great collaborated with his advisors, subordinates, and
allies; future warfighters will collaborate too. However, the importance of col-
laboration in future warfare will be greater than ever before. There are several
reasons why theorists of network-enabled warfare stress the growing role of
collaboration in future battle command (e.g., Garstka and Alberts 2004).
One reason is that wide distribution of information envisioned in the
future network-enabled warfare enables a qualitatively greater extent of col-
laboration between units, even at the lowest tactical echelons and even when
geographically dispersed. In MDC2 experiments, for example, we observed
multiple instances in which Combined Arms Units (CAUs) were able to support
each other by fires, sensors, and information, and to rapidly plan and execute
complex synchronized tasks when separated by up to 50 km. The broad tech-
nological support for readily available shared situation awareness (including,
for example, the awareness of other units’ resource availability and current
tasks) made such collaboration possible.
In addition, the growing expectations of the future force’s effectiveness
and the great amount of information available to the battle-command practi-
tioners require greater collaboration. MDC2 experiments have demonstrated
that a CAU was rarely able to accomplish its demanding missions without an
agile sharing of tasks and capabilities with its peer units and higher echelons.
Furthermore, with the large volume of information available (albeit often
ambiguous in meaning), the commander and staff found it necessary to elicit
opinions and interpretations from others within their own command cell and
from peers in other units. Collaboration, then, is not merely a convenient
option available to the future warfighter; instead, collaboration is a necessity.
To survive and to win, the battle command of the future must collaborate, and
collaborate effectively.
And yet, in our experiments we find that collaboration in a network-enabled
environment can also be difficult and even harmful. It can be a cognitively
expensive process that takes time and attention away from other tasks, such
as monitoring and interpreting the evolving situation. Although multiple
programs—the Army Battle Command System, the Command Post of the
Future, and the Future Combat System to name a few—have been developing
collaboration-oriented technologies, much remains unknown about the com-
plex and nonobvious effects of collaboration. To support collaboration, these
programs are developing capabilities like common operational pictures and
common information environments that provide facilities for chatting, draw-
ing, and file sharing among participants in collaborative command and con-
trol. These developments hold great promise, but realizing their full potential
will depend on systems engineering and new skills development that address
a number of organizational, social, and cognitive factors in conjunction with
technological capabilities.
In this chapter, we present a conceptual framework that shows how such
systems engineering can be done and how collaborative systems can be evalu-
ated. We use examples from the MDC2 experiments to illustrate the con-
ceptual framework; we then use the framework to explain the experimental
findings.

THE COLLABORATION EVALUATION FRAMEWORK (CEF)


Effective C2 collaboration requires consideration of a number of organi-
zational, cognitive, and social factors. The authors have therefore developed
a framework that integrates applicable theories from organizational design,
social psychology, and cognitive psychology. The following are some of the
considerations addressed in the framework:

• The different ways that technology can affect a collaborative task process
• The different granularity of information that is required at different levels in an
organizational hierarchy, which requires collaborators to transform and reinterpret
the information that they share
• The different informational requirements of each level of situation awareness
• The different approaches used for collaboration and the different behaviors they
require
• The combined effect of the concept of operations, the task environment, and tech-
nology on performance efficiency
• The means to measure the effect of using technology on collaborative perfor-
mance

The following sections present a detailed description of the CEF in order to
provide the reader with the background needed to understand its application
to new technological systems being developed for collaborative C2. To that
end, the framework will be introduced with explanatory examples from the
MDC2 experiment.

THREE POINTS OF IMPACT


A collaborative system includes people, organizations, processes, and tech-
nology. Sets of collaborative behaviors and task transmissions must be sup-
ported by a collaborative system in order to achieve either effective sharing
of the task load or sharing of information needed to complete the task. The
following sections explain the basis for determining what subsets of behaviors
and transmissions will require support in a given collaborative environment.
Based on that determination, the sufficiency of the elements of a collaborative
system can be assessed.
Key to designing or evaluating the impact of technology in a collaborative
system is understanding that there are actually three points of impact, which
are illustrated in Figure 7.1. In any task, technology can facilitate the task
process itself, such as automating the detection of targets. However, in a col-
laborative task, technology can also support collaborative behaviors that facil-
itate doing the task process jointly, such as notification of others when one
operator has a significant piece of information to share. Also in a collabora-
tive task, technology can support task transmissions among the collaborative
task participants, such as rationalizations or justifications for a commander’s
intent, which are needed to facilitate complete understanding. Considerably
more-detailed discussions of collaborative tasks, collaborative behaviors, and
task transmissions will be presented in subsequent sections. That will allow us
to demonstrate how the combination of all three points of impact ultimately
determines task performance and task cost in a collaborative system.
Figure 7.1. The three impacts of technology on a collaborative task.

Figure 7.1 illustrates the interaction between tasks and technology. The
downward arrows represent the constraints of one element on another; the
nature of the tasks should define the behaviors and transmissions required in
a specific context, and together they should define the nature of the technol-
ogy. The upward arrows indicate that one element serves as a resource to another;
the nature of the available technology will influence the conduct of the col-
laborative behaviors and transmissions, and together they will influence the
conduct of the collaborative task. Taken together, these influences suggest the
mutual entanglement between task, technology, and concept of operations
that always exists in a system. So for example, various characteristics of the
MDC2 collaborative task require certain types of collaborative behaviors and
task transmissions among and between collaborators in their vehicles for cost-
effective performance. On one hand, the need for these behaviors and trans-
missions represents requirements for the Command Support Environment
(CSE) technology, which if not met, constrain its effectiveness as a collabora-
tive tool. On the other hand, the prototype and future CSE technology can
provide resources that when combined with an appropriate concept of opera-
tions, permit behaviors and transmissions that were not previously available
and thus provide the potential for dramatically improving collaborative task
performance.

TASK TRANSMISSIONS
All collaborative tasks require the transmission of information among par-
ticipants. This requires the development of external representations of mental
constructs, which in an individual task might otherwise remain intuitive and
imprecise. A number of typical generic classes of transmissions can be defined:

• Situation awareness confirmation—determining that we share situation awareness


• Rationalization—establishing internal consistency from data to decision or among
levels of situation awareness
• Justification—establishing links to external values (why should one take an action)


• Differentiation—identifying differences
• Alternative generation—developing multiple options (e.g., courses of action) from
which to choose
• Alternative evaluation or selection—ordering options based on some set of criteria
or objectives

In addition to these generic classes, there is a great deal of task-specific infor-
mation that must be transmitted among participants. In a battlespace, this
may include the location of entities in the battlespace, the commander’s evalu-
ation of his unit’s state, or the commander’s evaluation of the enemy’s intent.
Collaborative systems can facilitate developing these external representa-
tions. For example, they can provide structured constructs like data-entry
forms or graphical representations like maps and overlays. The CSE provides
a number of such constructs, such as on-screen forms that are filled in by the
intelligence manager for identifying target entities in the battlespace, and a
common operational picture of the battlespace.

HIERARCHICAL LEVEL AND INFORMATION ABSTRACTION
In C2, a major consideration that should shape the nature of task trans-
missions is hierarchical organization. Understanding the need for hierarchy
will help clarify the informational requirements of an organization and con-
sequently the role that technology can play.
One way that collaboration serves to improve task performance is by reduc-
ing the individual cognitive burdens of a task by distributing the load among
a group of individuals. There can be both a hierarchical decomposition of the
task and a horizontal task allocation.
The need for hierarchical organization emerges from the nature of the bat-
tlespace. Brehmer (1991) shows that this hierarchy results from three primary
informational drivers:

• The need for a way of understanding a complex situation


• The need for a way of structuring an organization for management
• The need for a principle for the control of a complex situation

Organizational success comes from designing a system that brings these
three drivers into alignment. Moreover, achieving understanding, manage-
ment, and control of a complex system requires a hierarchical organization
because of two irreducible informational constraints.
First, the inescapable cognitive limits of people require a hierarchical decom-
position for understanding complex situations and managing complex organi-
zations. Brehmer (1991) states, “To impose a hierarchical system is to introduce
a series of descriptions of the system, which differ with respect to their level
of abstraction. This makes it possible to control the level of complexity in the
sense that only a limited number of units have to be considered at each level of
the hierarchy.” For example, even a proficient Combined Arms Team (CAT)
commander will at some point be cognitively incapable of understanding, by
himself, what is happening in a large battlespace at the level of each individual
entity. Yet through a hierarchical decomposition, the CAT commander’s under-
standing of the battlespace can be simplified to the overall status of
his CAUs and the likelihood of their accomplishing their missions, rather than
the question of whether specific, individual entities of the CAU have
succeeded in crossing the river. Similarly, applying the same hierarchical prin-
ciple to managing the organizational structure, a CAT commander needs to
manage only three staff members and two CAU commanders rather than doz-
ens of individual people and entities.
Second, limits on control of any system result from the impossibility of
developing models of complex situations that provide sufficient prediction
information at every level of abstraction for a given time frame (Flake 1998).
For example, we can perfectly predict in September that it will be cold in the
winter and warm in the summer; however, due to the complex interactions
and nonlinearity in atmospheric phenomena, we can never have enough infor-
mation, in September, to predict the day (or perhaps even the month) of the
last frost for the coming year. This is an informational limit, not a cognitive
limit: the principles of chaos theory assert that this constraint is inescapable,
regardless of measurement accuracy, computer power, or software sophisti-
cation. Therefore, although CAU commanders may be able to estimate the
likelihood of mission completion for the CAU, they cannot predict the pre-
cise future state of every vehicle in the unit in that same time frame. However,
to meet the commander’s mission objectives, individual vehicles
and other assets at the more detailed level of abstraction must be controlled
in real time, compensating for the unpredictable real-time conditions on the
ground. The commander’s mission-completion estimate and the control of
the assets must happen at different levels of abstraction and in different time
frames. Both levels of abstraction are critical to mission success.
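This informational limit can be demonstrated with a standard toy model from chaos theory. The sketch below is our illustration, not part of the experiments: it iterates the logistic map and shows that a measurement error of one part in a million leaves short-horizon prediction intact, destroys long-horizon detail, and yet never touches the coarse, abstract statement that the state stays within [0, 1], the analogue of knowing that winter will be cold without knowing the date of the last frost.

```python
def logistic_trajectory(x0, steps, r=3.99):
    """Iterate the chaotic logistic map x -> r * x * (1 - x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.600000, 50)
b = logistic_trajectory(0.600001, 50)  # same model, measurement off by 1e-6

short_error = abs(a[5] - b[5])                     # fine-grained short-range forecast: still good
max_error = max(abs(x - y) for x, y in zip(a, b))  # fine-grained long-range forecast: lost
coarse_ok = all(0.0 <= x <= 1.0 for x in a + b)    # the abstract statement never fails
```

No amount of extra computing power fixes this; only shrinking the measurement error postpones the divergence, which is why each echelon can predict, and therefore control, only at its own level of abstraction and time frame.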
Because of the difference in conception and information structure at each
level, hierarchical organization requires that information transmitted up from
one level to another be not just aggregated but transformed and
reinterpreted. Commanders do not need merely the sum of the casualties taken;
they need to know how the distribution of these casualties is going to affect the
campaign to secure their mission objective. This systematic transformation of
information should be an essential element of designing organization-system
integration (Katz and Kahn 1978). The significance of these hierarchical
organization considerations is that in a complex battlespace situation, collabo-
ration also needs to be designed hierarchically. When so designed, the scope
of function at one level will informationally encompass more than one func-
tion at the level below, but at a higher level of abstraction. With information
appropriate to its level of abstraction, a function at one level is therefore able
to facilitate aspects of coordination that are beyond the scope of the individual
functions below it (Thompson 1967).
Figure 7.2 sketches a hypothetical hierarchical CAU and CAT organization
that addresses these considerations. The paramount importance of designing
for transformation and reinterpretation of information for such a hypotheti-
cal CAU and CAT organization is apparent from the sheer number of trans-
formation and reinterpretation transmission links illustrated in the figure.
This kind of organization, when facilitated by technology like the CSE, can
support dynamic self-organization at each level of the hierarchy while main-
taining a structured relationship between levels of the hierarchy. For example,
via the CSE planning tools, the commander provides the intelligence manager
with the current mission objectives and timelines. Within that framework,
the intelligence manager can manage and control the Intelligence, Surveil-
lance, and Reconnaissance (ISR) entities in real time via the CSE interfaces and
can coordinate via voice and chat with other intelligence managers to develop
the Battle Damage Assessment (BDA) for use by the maneuver manager and
effects manager. To operate effectively at each level, cell members must be
provided with information that is at the appropriate level of aggregation and
abstraction for the scope of the decisions being made at that level. In addi-
tion, the time frame for these decisions must match their scope. Therefore,

by facilitating this transformation of information from one level to another, a
system can facilitate better synchronization between the levels.

Figure 7.2. A hypothetical MDC2 hierarchical organization.
In this hypothetical organization, the CAU commander needs to deal with
the higher-level abstraction of the CAU itself. For example, the CAU com-
mander should set mission objectives, evaluate enemy intent, evaluate the
CAU’s tactical standoff status, and collaborate with other CAU commanders
about these issues. Bridging levels of abstraction when collaborating with their
own staff (maneuver manager, intelligence manager, effects manager), there
is a constant transformation and reinterpretation of information as it is trans-
mitted across hierarchy levels in both directions—commanders explain their
intent, while the staff provides information and evaluations in support of the
commander’s goal setting. Therefore, the CAU commanders need to have a
different conception of the system than their staff, and the commanders need
to have information for their decisions visualized in a different way from that
of the staff (Brehmer 1991). Ideally, there should be correspondingly differ-
ent visual displays for describing the system at each level in the hierarchy, and
there should be a set of command interfaces appropriate to each level.
Some detailed information will be lost at each level of transformation, but
new information is gained from the higher-order patterns that become appar-
ent from the more abstract visualizations. In addition, each level of transfor-
mation achieves longer-term information stability than the levels below, such
that information stability in turn supports longer planning horizons. So, even
though detail is lost at higher levels of transformation, new information is also
gained as a larger strategic situation awareness is established.
When the hierarchical management and control structures of an organi-
zation are aligned with the hierarchical structure needed to understand the
environment, and technology supports the informational needs at each level
of the organization, then localized functions (whether a commander’s or a
battle manager’s) can self-organize and adapt to dynamic conditions through
monitoring the environment at their level of abstraction. They then can plan
responses at the corresponding temporal, physical, and organizational level
to any changes that arise (Rasmussen, Pejtersen, and Goodstein 1994; Simon
1996; Thompson 1967). Such an alignment can be seen between military
organizations and the battlespace environment (Brehmer 1991). The chal-
lenge is reengineering that alignment for new concepts of operation.
Figure 7.2 in fact illustrates a very conventional organization for echelons
at brigade and above today. However, the MDC2 experiments suggest how
technology such as the CSE could allow extending this structure to lower eche-
lons such as the CAT and CAU.

COLLABORATION AND LEVELS OF SITUATION AWARENESS
In addition to facilitating sharing the task load as described above, collabo-
ration can improve task performance by facilitating information sharing to
enable a level of situation awareness for the group of participants, which may
not be possible for any single member. Therefore, a second major consideration
for task transmissions is enabling such awareness. As was discussed in chapter 4,
Endsley (1995) has identified three levels of situation awareness (SA).

• Level-1 SA is the perception of information. For example, it is having the awareness
of where different battlespace objects (enemy and friendly) are located in the battle-
space at different times.
• Level-2 SA is the comprehension of meaning. It addresses what the Level-1 situa-
tion awareness means currently; for example, what actions the enemy is currently
capable of performing.
• Level-3 SA is the projection of the situation over time. It is the awareness of what
could happen in the future under various contingencies.

There are different collaboration implications for these three levels of situ-
ation awareness. Level-1 is the basic awareness of available information. It
answers the question, “What information do we have about the enemy?” or
“Where are the friendly forces?” It is information that can be placed in a
database (or pooled) for use by others because it has a global frame of refer-
ence that is not tied to the information recipient’s situation. In Figure 7.2,
a hypothetical example is the development by the intelligence manager of a
common BDA picture that can be drawn upon by both the maneuver man-
ager and effects manager to support developing their own situation-specific
Level-2 and Level-3 SA. In fact, Level-2 and Level-3 SA require that Level-1
information be interpreted to meet the information recipient’s needs. More-
over, when information is shared across organizational levels, up the chain
of command, the lower echelon’s Level-2/3 SA must be transformed and
reinterpreted into the higher echelon’s Level-1 SA. For example, the CAU-1
commander’s assessment of CAU-1’s combat effectiveness, and the status of
their plan execution, is Level-1 SA for the CAT commander’s assessment of
the CAT’s mission status.
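The transformation just described, where a lower echelon's Level-2/3 assessment becomes Level-1 input for the echelon above, can be sketched as follows. The function names, status fields, and the 0.7 effectiveness threshold are illustrative assumptions, not details drawn from the MDC2 experiments.

```python
def cau_assessment(vehicle_states):
    """Level-2/3 SA at the CAU: interpret entity-level data as a mission likelihood."""
    operational = sum(1 for v in vehicle_states if v["status"] == "ok")
    combat_effectiveness = operational / len(vehicle_states)
    return {
        "combat_effectiveness": combat_effectiveness,
        "mission_likely": combat_effectiveness >= 0.7,  # illustrative threshold
    }

def cat_level1_picture(cau_assessments):
    """Level-1 SA at the CAT: the CAUs' assessments, not their entity details."""
    return {name: a["mission_likely"] for name, a in cau_assessments.items()}

cau1 = cau_assessment([{"status": "ok"}] * 8 + [{"status": "damaged"}] * 2)
cau2 = cau_assessment([{"status": "ok"}] * 4 + [{"status": "damaged"}] * 6)
picture = cat_level1_picture({"CAU-1": cau1, "CAU-2": cau2})
```

The CAT-level picture deliberately discards entity-level detail; it is this transformation and reinterpretation, rather than mere aggregation, that keeps the cognitive load at each echelon bounded.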
Therefore, providing the same common operational picture at the same
level of abstraction, across levels of an organizational hierarchy, can be prob-
lematic because the same information is not equally useful at each level—the
form of the information and its level of abstraction should be dictated by the
information needs of the recipient. Sharing the same entity-level information
through the CSE can enable the operators at the same level of abstraction
(e.g., intelligence manager) to efficiently backstop and compensate for each
other regarding entity-level actions (Level-1 SA) like identifying targets and
directing fires. However, employing that same view across different levels
of abstraction (CAT or CAU commander) could result in a lack of mission-
oriented (Level-2/3 SA) command and control.
These information-sharing and load-sharing aspects of collaboration can
interact with each other to impact performance. For example, the effective-
ness of information sharing can interact with the hierarchical decomposition:
176 Battle of Cognition

as was seen in one MDC2 case where none of the operators developed a more
mission-oriented view of the situation, and the larger picture (e.g., likelihood
of mission success) may not have been evaluated even though all of the needed
information existed within the group.
In addition, the information requirements for the levels of situation aware-
ness interact with characteristics of the task process. The nature of this
interaction can be understood after a description of those characteristics is
presented in the following sections.

TYPES OF COORDINATION AND COLLABORATIVE BEHAVIORS
Within the collaboration evaluation framework, the key defining dimen-
sion of the collaborative task process is the type of coordination employed.
Whereas collaboration refers to the general situation where people work
together to achieve mutual goals, coordination refers to the specific approaches
that people use to work together. As will be subsequently described, the type
of coordination employed defines the collaborative behaviors that will be
required in a given task context and therefore the ultimate cost of coordina-
tion. In the CEF, all of the other dimensions defined by Thompson (1967)
serve to determine the constraints on the type of coordination that is possible.
Thompson (1967) defines three types of coordination: standardized, planned,
and mutual adjustment.
Under standardization, there are established rules or routines for how people
should coordinate their activity. As with traffic rules, standardization improves
performance per unit cost by reducing coordination costs in both financial and
cognitive terms because rules remove many uncertainties about how people
should coordinate their behaviors. Standardization functions best in stable task
environments. For example, in the MDC2 experiment, when an intelligence
manager identifies a target, they enter its description into the target database,
which the effects manager in turn uses to select targets to fire upon. This
routine relieves the need for the intelligence manager to communicate overtly
with the effects manager. In fact, this allows them to reserve such overt com-
munication for critical targets that are an immediate threat to the CAU, which
makes such nonroutine communications more alarming and effective.
In some task environments, preestablished global rules and routines are not
feasible. However, team members can still plan their coordination processes
based on more immediate circumstances. For example, in the battlespace,
each mission can be planned: waypoints set, areas of interest defined, and sur-
veillance strategies determined. Through planning, different units have been
provided with critical information for how to coordinate with each other,
thereby reducing the requirements (and costs) of subsequent coordination
through discussion as long as the plan remains in effect.
When the task environment does not lend itself to standardization or
even planning, team members must coordinate through continuous mutual
Enabling Collaboration 177

adjustment to each other’s activities. This requires constant communication


to make sure that coordination requirements (and expectations) are clear and
that activities are performed with minimal confusion and maximum benefit.
As a result, mutual adjustment is the most costly form of coordination. This
costly coordination is required when the task environment is very dynamic
and unpredictable. For example, once a mission is under way, an intelligent
adversary will present situations that could not be anticipated. Then, opera-
tors will need to mutually adjust to coordinate their responses.
The types of coordination defined by Thompson (1967) delimit the types of
behaviors people need to carry out a collaborative task. Clark (1996) describes
the behaviors that people engage in to carry out joint actions, such as a con-
versation. Extrapolating from Clark, at least eight collaborative behaviors can
be identified:

• Connection—locating with whom to collaborate and how to contact them


• Transmission—sending a message
• Notification—alerting the intended party of an incoming transmission
• Identification—designating the sender, receiver, and subject of a transmission
• Common ground preservation—establishing and maintaining a shared context and
meanings in transmissions
• Confirmation—notifying the sender of a transmission that it has been received
• Synchronization—orchestrating actions to facilitate joint action
• Election—group process of selecting among alternatives

Each type of coordination requires a different subset of collaborative
behaviors, as shown in Table 7.1. Mutual adjustment generally requires all of
the collaborative behaviors because of its ad hoc nature. However, the pre-
defined rules and routines of standardized coordination take care of all of
the behaviors but identification and transmission. This difference highlights
why mutual adjustment is so costly by comparison: all of these behaviors
involve communication and time.
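The mapping from coordination type to required behaviors, and the resulting cost difference, can be sketched as follows (a minimal illustration grounded only in the two cases the text specifies; planned coordination is omitted because its exact behavior set is given only in Table 7.1, and the names below are our own):

```python
# The eight collaborative behaviors extrapolated from Clark (1996).
BEHAVIORS = {
    "connection", "transmission", "notification", "identification",
    "common_ground_preservation", "confirmation", "synchronization", "election",
}

# Behaviors the collaborators must still perform themselves under each
# coordination type; the rest are absorbed by rules and routines.
REQUIRED = {
    "mutual_adjustment": set(BEHAVIORS),                 # ad hoc: everything is live
    "standardized": {"identification", "transmission"},  # rules cover the rest
}

def coordination_cost(kind: str) -> int:
    # Crude proxy: every live behavior costs communication and time.
    return len(REQUIRED[kind])
```

On this proxy, mutual adjustment costs four times what standardization costs, which is the qualitative relationship the chapter argues for.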
Table 7.1
Behaviors Required for Mutual Adjustment (M), Planned (P), and Standardized (S) Coordination

Collaborative Behavior        M  P  S

Connection                    □  □  □
Notification                  □  □  □
Identification                □  □  □
Transmission                  □  □  □
Common ground preservation    □  □  □
Confirmation                  □  □  □
Synchronization               □  □  □
Election                      □  □  □

Consider a situation in which CAU-2 needs CAU-1 to fire upon a target
that is in CAU-1's range, but on the border of CAU-2's targeting area, and
is attacking CAU-2's assets.
If the CAUs coordinate through mutual adjustment, CAU-2 will need to
notify CAU-1, transmit and identify the nature of the request, establish
common ground regarding the location of the target and the action to be
taken, confirm that the information has been received, and synchronize its
own actions with CAU-1's destruction of the target.
In contrast, they could standardize their coordination using a so-called
cursor-on-target approach. Cursor-on-target could allow all the necessary
information and targeting orders to flow digitally to the effects manager as
needed when the requesting staff put their computer cursor over the target and
click to approve. Once a target is approved, transmission, identification, and
all other collaborative behaviors are handled by the automation. Therefore,
technology (in this case a shared database) actually can facilitate moving from
the more numerous (and therefore more costly) mutual adjustment behaviors
to less expensive (and faster) standardization. The drive for such collaborative
efficiency often results in spontaneous, innovative user adaptations of tech-
nology. This in fact occurred in the midst of the MDC2 experiment when the
participants spontaneously devised their own variation of cursor-on-target:
they standardized on using a circle-drawing function (normally used in plan-
ning) to indicate targets by circling them during mission execution.
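A minimal sketch of the cursor-on-target idea, assuming a publish-subscribe view of the shared database (all class and method names here are hypothetical, not the CSE's actual interfaces):

```python
class SharedTargetDatabase:
    """Stands in for the CSE's shared target database. Posting a target here
    lets the automation absorb the notification, confirmation, and other
    collaborative behaviors that would otherwise consume voice traffic."""
    def __init__(self):
        self.targets = []
        self.subscribers = []  # e.g., each unit's effects manager

    def subscribe(self, effects_manager):
        self.subscribers.append(effects_manager)

    def cursor_on_target(self, requester, coords):
        # One click stands in for the transmit, identify, common-ground,
        # and confirm behaviors that mutual adjustment would need.
        record = {"requester": requester, "coords": coords, "confirmed": True}
        self.targets.append(record)
        for em in self.subscribers:  # automated notification
            em.pending.append(record)
        return record

class EffectsManager:
    def __init__(self, unit):
        self.unit = unit
        self.pending = []  # targets queued for fires

db = SharedTargetDatabase()
em1 = EffectsManager("CAU-1")
db.subscribe(em1)
db.cursor_on_target("CAU-2", coords=(37.24, -115.81))  # coordinates are made up
```

The design choice being illustrated is that the shared database, not the requester, performs most of the collaborative behaviors, which is what moves the exchange from mutual adjustment toward standardization.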
Although the linkage may not be as strong, the type of coordination among
collaborators affects not only the collaborative behaviors but also the type of
requisite task transmissions, because of the nature of the task processes and
the level of interdependence. This is noticeable when replanning occurs during execution.
For example, when the CAT and CAU commanders are executing an opera-
tion that is not meeting the intent of higher headquarters, and the command-
ers have previously identified branches and sequels, then SA confirmation
may be the only task transmission required; there is no need for alternative
generation and evaluation because of the standardized alternatives they devel-
oped during prior planning.
However, if branches and sequels were not developed, then the collabora-
tors must perform all of the generic task transmissions to reach a new course of
action, unless the new action is obvious (e.g., via recognition-primed decision
making; Klein [1999]). They first should confirm their SA with respect to the
current situation and possible futures, exchanging rationalizations connecting
current observables (data) to predictions about future states. Then, they should
generate alternative courses of action, evaluate them through differentiation
and justification, and then select one for implementation. Performing these
task transmissions typically requires coordination via a scheduled planning
session at the mission-oriented level of abstraction. MDC2 C2Vs and CSE
provide the potential of engaging in such a session, even with mutual adjustment,
while on the move on the battlefield. Such mobile, agile C2 decision making
could save time and keep pressure on the enemy, but the collaborative tech-
nology, and the concept of operations for its use, must support all of the task
transmissions required by the collaborative decision-making process.
Based on the relationship between the type of coordination and the con-
sequently required collaboration behaviors, the framework clarifies that the
effectiveness of different tools to support coordination depends upon (1) the
type of coordination used to perform a task and (2) how many of that type’s
required collaboration behaviors are supported by the tools. In addition to how
many are supported, how well that support is implemented in terms of human
factors and cognitive usability must be considered. Finally, the framework clar-
ifies how a new entanglement of task, processes, organization, and technology
can facilitate moving to a new task concept of operation with less expensive
coordination. However, the task characteristics of a collaborative environment
can constrain such movement. These characteristics will be considered next.

TYPES OF TASK PROCESSES


Thompson identified three general types of task processes (which he called
technologies): long linked, mediating, and intensive (Thompson 1967).
Long-linked processes require the completion of various task activities over
time, like an assembly line. The intelligence process can be considered long
linked: collection tasking, surveillance, analysis, and dissemination of a fin-
ished product.
Mediating processes link together individuals or groups that want to be
interdependent for their mutual benefit, as a broker mediates between those
who buy and sell stock. Effects management is a mediation process: connect-
ing targets, given their characteristics, with appropriate weapons for attack
(albeit not for mutual benefit in this case). In Figure 7.2, the BDA (a pooled
resource) facilitates mediation: connecting the intelligence products with the
maneuver manager and effects manager that use them.
Intensive task processes are directed toward changing an object, where the
specific actions taken depend on feedback from the object. Military opera-
tions in general are intensive processes, where the next operation against a
target is dependent on the effects of earlier operations.
A relationship between the types of task processes and types of coordination
(long linked and standardization, intensive and mutual adjustment) is appar-
ent. However, this relationship is not deterministic. The other organizational
and environmental dimensions must be considered as well. In particular,
technology can be used to change this relationship too. The cursor-on-target
example above showed how at least one phase of an intensive targeting task
could be moved from mutual adjustment to standardized coordination.

TYPE OF INTERDEPENDENCE
Thompson (1967) identified three general types of interdependence among
unit personnel and organizational units: pooled, sequential, and reciprocal.
In pooled interdependence, each team member or unit provides a discrete
contribution to the whole by collating (or pooling) their obtained information
and knowledge. In the MDC2 task, individual intelligence managers contrib-
uting to the shared CSE database is an example of pooled interdependence.
Although the final product depends on the activities of each intelligence man-
ager, the individual analysts’ work is not necessarily dependent on each other’s
activities. However, their organization as a group is critical to ensure that each
intelligence manager’s surveillance of part of the battlefield contributes to a
complete picture of the whole battlespace.
In sequential interdependence, the product of one unit (or person) is
dependent upon the output of another. In MDC2, the intelligence managers
and effects managers exhibit a sequential interdependence: the intelligence
manager identifies targets, the effects manager directs fires upon the targets,
the intelligence manager does BDA, and the sequence repeats.
Finally, in reciprocal interdependence, units pose critical contingencies for
each other that have to be resolved before taking action. Operations and logis-
tics often have a reciprocal interdependence. Whether or not different opera-
tions can be undertaken depends on the availability of certain resources, and,
in turn, the availability of those resources depends on previous and planned
operations. Therefore, operations and logistics pose critical contingencies for
each other that have to be addressed reciprocally during planning.
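Pooled interdependence, the first of these three types, can be sketched as analysts contributing independent sector pictures to a shared store whose value depends on completeness (the sector names and the completeness check are illustrative, not drawn from MDC2):

```python
# Pooled interdependence: each intelligence manager surveils one sector
# independently; the complete battlespace picture exists only when every
# sector has been pooled into the shared store.
SECTORS = {"north", "center", "south"}

shared_picture = {}  # the pooled database: sector -> contribution

def contribute(manager, sector, observations):
    """Each analyst's work is independent of the others' activities,
    but the whole depends on every contribution being made."""
    shared_picture[sector] = {"by": manager, "observations": observations}

def picture_complete():
    return SECTORS.issubset(shared_picture)

contribute("IM-1", "north", ["2x ATD"])
contribute("IM-2", "center", [])
assert not picture_complete()   # a missing sector leaves a surveillance gap
contribute("IM-3", "south", ["1x MCS contact"])
```

Sequential and reciprocal interdependence would need additional machinery (ordering constraints and two-way contingency checks, respectively), which is exactly why they are costlier to coordinate.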

TASK ENVIRONMENT
An organization or task process exists within a context, its task environ-
ment. Thompson (1967) identifies two dimensions that are critical to the way
an organization is structured.
The first is the stability of the environment—how quickly the elements in
the environment change. Our discussion of situation awareness suggests that
this dynamism can be considered at the three different levels: not only how
quickly the battlespace entities change (Level-1), but also how sensitive the
situation (Level-2) is to those changes, and how sensitive the projected future
(Level-3) is to changes in the situation.
The second dimension is heterogeneity—how many different kinds of enti-
ties (and by analogy situations and futures) does the organization need to deal
with? Thompson proposes that in order to reduce uncertainty to manageable
levels, organizations should divide their environment into subdivisions that
are as stable and homogeneous as possible, and they should create separate
organization units (e.g., different CAUs) to deal with each subdivision.
Collaborative technology can permit teams to respond faster and,
thereby, deal with more dynamic situations than previously possible (e.g.,
more mobile targets). It can also facilitate collaboration among more units
addressing different subdivisions of the environment—this enhances orga-
nizational management and expands the scope of battlespace understanding
and control.

CONCEPT OF OPERATIONS
Ultimately, the performance achieved for a given cost of coordination,
communication, and time depends upon how well the collaborative task, the
concept of operations (CONOPS), and the technology are fitted together.
Clearly, the best system (represented by the upper-right box in Figure 7.3)
is one where the CONOPS is appropriate for the collaborative task and the
CONOPS takes full advantage of technology to achieve the lowest cost coor-
dination possible given the constraints imposed by the task’s various dimen-
sions as described in the CEF. However, in developing new technology for a
task, we often find ourselves near the lower-right box. This is the case when
new technology simply replicates the function of old technology using the
existing CONOPS, without taking advantage of the new technology’s poten-
tial to change the CONOPS and reduce costs (or improve performance).
When this happens, we often see people develop work-arounds that move
toward a more effective CONOPS, even if it is outside the technology’s origi-
nal design (the middle box). Even though the work-around results in a worse
fit for the original design of the technology, the improvement in fit with the
task yields improved performance.
Often, collaborative technology is designed to permit the kind of coordi-
nation possible in face-to-face groups. Video and audio teleconferencing are
examples of this kind of collaborative technology. People in different geo-
graphic regions can now use this technology to coordinate in the same way
as face-to-face groups, typically via mutual adjustment and planned coordi-
nation. Web-based (or inspired) collaborative technology offers window and
file-sharing capabilities, instant messaging (or chat), and even bulletin boards
for electronic drawing.
However, as Figure 7.4 illustrates, the biggest gains in cost-effective
performance of collaborative tasks come from using technology to move to a new task
concept of operations with less expensive coordination. As discussed earlier,
standardization requires fewer collaborative behaviors and task transmissions
than planned and mutual adjustment coordination. If technology makes task
processes less intensive and thereby permits them to deal with more dynamic
environments and heterogeneous units than previously possible, then the cost
effectiveness of the technology can jump qualitatively rather than incremen-
tally by providing the same or better levels of performance for substantially
less communication. This is the long-term goal of the MDC2 program of
evaluating collaborative environments like CSE.
Finally, underlying all collaborative performance outcomes is the quality
of training. The effectiveness of distributing the cognitive load and sharing
situation awareness is heavily dependent on the competency of the individuals
to effectively execute their portion of the collaborative task.

Figure 7.3. Performance per cost is a function of CONOPS task-tool fit.

Whether interdependence is pooled, sequential, or reciprocal, the performance of the whole
team is impacted when one member cannot perform his or her allotted task
effectively. Performing that task effectively is obviously a product of task knowl-
edge (command, operations, intelligence, logistics, and fire support), but it is
also a product of tool “usage.” It is common for personnel to learn only the
minimum necessary features for using a tool: team members are trained on
how to operate a tool’s buttons and menus.

Figure 7.4. Hypothetical performance and cost effectiveness of collaborative
technology as a function of type of coordination.

Establishing proper usage requires
defining an effective concept of operations that uses the tools to accomplish
the task efficiently. Therefore, a technology can appear to be ineffective, even
if it inherently supports required collaborative behaviors and task transmis-
sions, because team members do not know how best to use the tool within the
task context.

APPLYING THE COLLABORATION EVALUATION FRAMEWORK
The value of the CEF is illustrated by applying it to the technology, con-
cepts of operation, and tasks observed in MDC2 Experiments 6 and 7. Experi-
ment 6 was the first experiment utilizing multiple (simulated) MDC2 vehicles
for a CAT with two subordinate CAUs. As described earlier, each vehicle was
equipped with a CSE containing a COP with a synchronized view available to
ensure all recipients saw a presenter’s screen, full voice and digital-chat com-
munications capabilities, and various planning tools.
We can characterize the MDC2 collaborative task as follows using the
CEF:

• A classic example of a battlefield intensive task process.


• Typically, there was a reciprocal interdependence among MDC2 personnel and the
entities they control (e.g., weapons and intelligence-gathering platforms) because
the units posed critical contingencies for each other that had to be resolved
in order to coordinate successfully.
• The MDC2 experiment’s task environment was designed to be dynamic. In addi-
tion, the environment was heterogeneous because there were many different kinds of
friendly and enemy entities (and by analogy, situations and futures) that the MDC2
personnel have to deal with quickly.
• The principal form of coordination observed among these reciprocally interdepen-
dent units performing intensive tasks in dynamic environments was mutual adjust-
ment to each other’s activities. The needed constant communication was facilitated
by the CSE.

Given that Experiment 6 was the first exploration of CSE with multiple C2
vehicles, and that CONOPS development was exploratory, it was not unex-
pected that we would find ourselves in the central and lower-right regions of
Figure 7.3. This is the region of the figure illustrating improvements in task
performance, but where additional performance can be achieved by taking
advantage of the collaborative technology’s potential to better fit the technol-
ogy and CONOPS to the task. Therefore, the following observations can
provide a basis for developing concepts of operation that are more effective,
for identifying training requirements, and for identifying requirements for the
CSE and similar innovative C2 systems. Developments in each of these areas
should improve effectiveness.

IMPACT ON MISSION-ORIENTED THINKING


Perhaps the paramount illustration of the collaboration concepts presented
above was how the full potential of the CSE was realized when commanders
adopted a hierarchical organization in Experiment 7 (actively maintaining a
mission-command perspective) versus a flat organization in Experiment 6.
The battlefield visualization provided by the CSE technology enabled the
CAT and CAU commanders’ focus on individual entities. Moreover, an entity
focus can have positive consequences, as is illustrated in the cases below.
However, there is a double bind in this entity-level engagement. First,
engagement at this entity level of abstraction creates a type of cognitive
tunnel vision—a situation colloquially referred to as “not being able
to see the forest for the trees.” Second, in the battlespace, the entity-level
tempo became very high whenever the situation diverged from the battle
plan; even though at such a time a mission-command perspective would be
most important, engaged as they were at the entity level, the commanders
sometimes did not have time for mission assessments or real-time replan-
ning.
As opposed to the hypothetical hierarchical organization of Figure 7.2, a
flat organization (all at the same level of abstraction) of Figure 7.5 was often
observed in Experiment 6.
In this organization, the CAT commander regularly discussed the opera-
tional battle damage assessment with the intelligence manager and directed
the intelligence manager to direct the effects manager on what specific tar-
gets to fire upon. There were often times when the CAT commander took
direct control of weapons to fire upon targets or directly maneuvered subor-
dinate units through their maneuver interface. As can be seen in the figure,
we observed the CAT effects manager getting fire requests from their effects
manager counterparts in CAU-1 and CAU-2, who in turn were being directed
by their commanders. It was not clear to what extent the CAT commander
was aware of this tasking of his own effects manager coming from other
(subordinate) units.

Figure 7.5. Often observed flat task organization. See Appendix for
explanation of abbreviations.
The process trace of the loss of MCS1-2 during one experiment (Run 8)
is shown in Table 7.2. This example illustrates how the CAU-1 commander
becomes heavily engaged at the entity level. The ultimate result is the loss
of both of his MCSs and consequently his ability to complete his mission—
although, revealingly, he never seems to make (or at least did not articulate)
that mission-command assessment.
At the beginning of this trace, the CAU-1 commander is indeed involved
in directing fires on specific targets. In fact, he is engaged by CAU-2’s com-
mander in mutual-adjustment coordination to fire on one of CAU-2’s targets.
For whatever reason, the commanders (and the rest of the command cell) did
not use the CSE grid-coordinate or latitude-longitude coordinate systems.
Without an absolute coordinate system in use, difficulty in establishing com-
mon ground leads to confusion over just where this target is. Because of this,
at least 60 seconds are spent on this coordination, which is hypothetically at
too low a level of abstraction for either commander and should have been
performed by their maneuver managers or effects managers.
In the meantime, an ATD pops up in front of MCS1–2 and kills it at
13:50:21. Because of a number of interface considerations, the CAU-1 com-
mander does not become aware of this situation until over three minutes later,
when his attention is drawn to it by the CAT commander! Even then, he
does not fully understand that this is a firepower-kill and mobility-kill until
13:59. He does not appear to assess the importance of this loss with respect to
accomplishing his mission; even when his second MCS is killed, he continues
to insist that he can take his objective, clearly showing a lack of Level-2 and
Level-3 SA.
We see further evidence of the CAU-1 commander’s lack of mission-
oriented assessment in his report to the CAT commander at 13:56. In that
report, with inadequate surveillance resources, the CAU-1 commander
erroneously concludes there are no enemy forces counterattacking. More-
over, having lost most of his surveillance assets, he does not direct his intel-
ligence manager to develop and execute plans to compensate to provide the
CAU-1 commander with better situation awareness of the enemy’s status.
The intelligence manager could have better deployed the remaining class-2
UAVs, used the CSE’s “range fans” to better visualize the UAVs’ capabili-
ties and limits, or collaborated with other intelligence managers for more
complete UAV surveillance coverage. Without direction from the CAU-1
commander or taking initiative on his own, the intelligence manager does
not provide the needed mission-oriented situation awareness, and CAU-1
is destroyed.
However, in Experiment 7, the commanders intentionally tried to maintain
a more hierarchical organization. They took better advantage of the CSE
capabilities to maintain a mission perspective and consequently were more
successful in maintaining their strategic standoff and their survival.
Table 7.2
Process Trace of the Event When CAU-1 Lost Its MCSs. See Appendix for Explanation of Abbreviations

Time Speaker Recipient Message Comments

13:46:00 CAU1 CDR CAU1 BSM CAU1 CDR slowing MCS down to let Appropriate level of abstraction.
infantry go ahead.

13:47:00 CAU1 CDR CAU1 EM Fire at two target sets. Too detailed level for CAU1 CDR; should be CAU1
BSM?
13:47:00 CAU2 CDR CAU1 CDR Can you fire infantry near my target Discussion ensues about exactly which target that
P20? is. Without clear markers, there again is confusion
about location. CAU2 CDR highlights the target
(which does show up on CAU1 CDR display—but
CAU1 CDR does not appear to see it because he asks
for clarification). CAU2 CDR? Says it’s “south of
27”—but 27 is far north of the target.
13:48:00 CAU1 EM CAU2 CDR We’re shooting those. Not clear they have identified the same target.
13:49:46 ATD pops up in front of northern MCS1-2 on CAT
right screen.
13:49:47 NLOS CAU2 fires PAM at ATD
Unknown.

13:50:00 CAU1 CDR Where the hell is 17?
13:50:21 MCS fire/mobility kill—indicated in simulation kill
data base.

13:50:25 MCS fire/mobility kill displayed on CAT’s screen


as a blue X over MCS1-2. X is almost same color as
MCS1-2, and the ATD at 13:50:37 partially covers
it. The “tab” for MCS1-2 disappears from CAU1’s
Weapons dialogue box.
13:50:37 ATD pops up in front of northern MCS on CAU1’s
screen.
13:50:58 CAU1 CDR ATD directly front of MCS.
13:51:58 CAU1 CDR CAU1 BSM Halt him for no more than 40 seconds. To let infantry go ahead. MCS1-2 is actually already
dead! CAU1 BSM should be able to know this at
least from the movement dialogue—but unlike the
Weapons dialogue, it doesn’t appear to be reflected
there.
13:53:30 CAT CDR CAU1 CDR Something happened to your MCS. MCS is still halted! What happened can’t be seen on
CAU1 CDR’s display as noted above. It is unclear
why CAT CDR must ask “what happened?”
13:53:48 CAU1 CDR CAT CDR I think it was a firepower kill. It’s removal from his Weapons dialogue tells him that
it’s a firepower kill. Blue “FIRE/MOBILITY . . .”?
kill color is the same as Tiger’s blue color, making it
hard to see.

13:56:00 CAU1 CDR CAT CDR “Mobility kill” to one MCS. Not seeing CAU1 CDR told CAT CDR that is now a “mobility”
additional resistance. kill? See 13:59—he may have “misreported” the
status. He is “red on eyes” (loss 1 class 2 and 2 class 3
UAVs)—but not “seeing” any counterattacking enemy
forces advancing! Representation of surveillance quality
is missing—“if I don’t see it then it is obviously not
there.” Does have class 1’s—maybe basing on their
limited view, without recognizing their range
(w/o range fan displayed).
13:59:00 CAU1 CDR Appears that he is a mobility kill and Tries to move him and can’t.
firepower kill.
14:02:00 CAU1 CDR Out of comms with MCS1-1. As indicated by simulation system—red triangle.
14:02:33 CAU1 CDR CAT CDR Reports out of comms with “Lead” CAT CDR at first thinks he is talking about MCS1-2.
MCS.


HIGH COGNITIVE COSTS OF MUTUAL ADJUSTMENTS


The COP window is a central component of digital C2 systems. The CEF
distinguishes the COP’s impact on sharing situation awareness and the COP’s
support for collaborative behaviors and for task transmissions. The MDC2
experiment illustrated these three points of impact. For example, the shared
COP allowed the CAU-2 commander to point out a target to the CAU-1
commander (ignoring for the moment that this interaction should be handled
by their staffs). However, the CAU-1 commander did not see the CAU-2
commander’s highlighting of the target—he appeared to be unaware that the
CAU-2 commander had in fact even made a change to the COP by introduc-
ing this highlight; instead he focused on a protracted verbal communication
to establish common ground on the location of the target. This kind of verbal
exchange to establish the location of entities was quite typical. There are a
number of ways that the target highlighting could have been made more noticeable
(e.g., flashing a more contrasting highlight symbol). This cursor-on-target
approach also would have eliminated the need for the verbal discussion of
location, leaving more time to discuss perhaps the mission-level rationale and
justification for the requested action.
In fact, although the COP did provide a graphical knowledge repository
for Level-1 situation awareness, collaboration about that knowledge mostly
occurred over the verbal communications channel. For example, when new
information was posted to the COP that was of significance to another party,
the collaborative behaviors of notification and confirmation were typically
done over voice communications. In fact, voice communications were used
for all of the collaborative behaviors and the task transmissions. This raises
the issue regarding when verbal communication is an effective communica-
tions mode.
Verbal communication by its nature usually requires some mutual adjust-
ment, although organizations like the military often try to standardize termi-
nology and use standardized phrases to enhance establishing and maintaining
common ground with minimum adjustment. Verbal communication is also
by its nature ephemeral, unless it is recorded, relying on human perception
and memory to understand and maintain a record of what was said. It is also
sequential, which is known to put a burden on human memory when the
sequence is long, particularly when the information is abstract. However, ver-
bal communication of complex abstract ideas can be quickly produced relative
to writing text of the same content. Nevertheless, other forms of computer-
representable gestures (such as circling a desired target on a shared screen)
when feasible can be quick, unambiguous, and persistent.
Therefore, verbal communication would be less desirable when informa-
tion can be easily transmitted and persistently recorded as, for example, an
overlay on a map. On the other hand, verbally communicating a short abstract
rationale for an order might be the most efficient approach. This should not
preclude looking for other efficient approaches. For example, if a reasonable
number of standardized rationales can be predetermined, communication
could require only selecting the desired rationale from a list. Programs such
as MDC2 can facilitate trying alternative designs to examine the trade-offs
between various modes of supporting collaboration in a C2 context.
Given the prevalence of the COP concept in C2 systems, there seems
to be a general belief that having a shared COP should inherently lead to
a common understanding of each other’s state, rationale, and justification
for actions. A number of examples from the MDC2 experiment illustrate
the need for additional rationale and justifications to establish and maintain
common ground. For example, in one battle, a FRAGO required CAU-1
to halt its attack on an objective and begin a flanking movement. The CAT
commander was observed regularly urging CAU-1 to hurry. Even though
the movement of CAU-1’s entities and the context (terrain, vulnerability of
positions) of CAU-1’s movement were visible to the CAT commander, the
CAU-1 commander had to explain that the rate of his movement was due to
a need for ground surveillance of enemy positions in the forest area that ran
parallel to his path. Thus, merely sharing a common Level-1 SA did not lead
to a common Level-2 SA (i.e., establishing common ground regarding the
rationale and justification for the observed situation).
The CSE did not provide automation support for commander’s task-
specific transmissions, such as the Red Assessment, Blue Combat Effectiveness
Assessment, and the Assessment of Action in Relation to the Tactical Plan. All
of these were delivered verbally. However, some of the necessary knowledge
is beginning to be incorporated into the CSE to provide the Level-1 SA for
making these assessments. For example, the CSE has an Automated Guidance
Matrix to show weapon status, ISR fans for visualizing sensor coverage, and
an animation-based planning tool to illustrate movement and provide travel-
time estimates. In time, the CSE (and future collaborative C2 technology in
general) should be able to alert personnel of the situation awareness Level-2
and Level-3 (i.e., higher order) implications of (selective) battlespace changes
for their current (and possible future) course of action and support the neces-
sary task transmissions required to evaluate them, in addition to the COP.
Moreover, the more the concept of operations for collaborative technology
can be moved from the costlier form of mutual adjustment to more standardized
forms, the more dynamic and heterogeneous the environment it will be able
to deal with effectively.

COLLABORATION IN A DISRUPTED COMMAND


Yet even relatively imperfect collaboration support can have remarkably
positive impact. Evidence of effective collaboration becomes particularly
salient under extreme conditions, such as a severe disruption of the command
structure. Experiment 7 provided a unique opportunity to observe the effects
caused by the loss of a C2V and its entire command cell due to enemy action.
We were able to analyze how such a major loss and the subsequent command
succession affected the situation awareness, collaboration, and, ultimately, the
decision-making capability of the remaining command cells.
During the eight experimental runs, there were two episodes when the enemy
was able to destroy a single C2V and the command cell riding in the C2V. The
CAU-2 lost its C2V in Run 5, and the C2V of CAT was lost in Run 7.
When a C2V was destroyed during an experimental run, the commander
and staff occupying the destroyed C2V lost the capability to communicate via
radio and lost the use of the CSE and the capability to control assets. Then the
experiment control cell ordered the “destroyed” command-cell members to
exit their command stations without telling the remaining command cells about
their destruction. Shortly thereafter, the CSE interface viewed by all remaining
command cells displayed a “lost communications” symbol on the icon of the
destroyed C2V and on icons of all assets under the control of that C2V.
However, the same symbol could also appear on the interface for other rea-
sons, such as temporary loss of communications due to heavy network traffic,
and thus did not always signify a battle loss of a platform. In short, as in a
real-world battle, the surviving command cells did not have any clear Level-2
SA that a C2V was destroyed. It was the survivors’ responsibility to establish
that Level-2 SA and then to mark the C2V as destroyed via the CSE interface.
To do that, the surviving command cells would need corroborating Level-1
SA, such as observing the enemy artillery fire coming at the destroyed C2V.
Without such corroborating information, it could take them a fairly long
time to recognize the loss. Instead, they were likely to assume a communica-
tions problem. Even after they made repeated attempts to reestablish radio
communications with the unresponsive C2V, the explanation with the high-
est baseline probability was a technical problem because such problems were
more common. In the face of realistic ambiguities in Level-1 SA, establishing
an accurate Level-2 SA would be a function of experience and training.
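This inference can be illustrated with a toy calculation. The sketch below is not part of the experiment; the priors and the assumed distribution of ordinary outage durations are invented numbers, chosen only to show how prolonged silence shifts belief from a communications glitch toward a battle loss.

```python
# Illustrative only: a Bayesian reading of the "unresponsive C2V" ambiguity.
# The priors and the 2-minute mean outage duration are invented numbers.
import math

P_DESTROYED = 0.1   # assumed prior: battle losses are rare
P_GLITCH = 0.9      # assumed prior: technical problems are common

def p_destroyed_given_silence(minutes, mean_glitch_duration=2.0):
    """Posterior probability that the C2V is destroyed after `minutes` of silence."""
    # A destroyed C2V stays silent with certainty; an ordinary outage of
    # (assumed) exponentially distributed duration persists this long with
    # probability exp(-t / mean).
    p_silence_if_glitch = math.exp(-minutes / mean_glitch_duration)
    evidence = P_DESTROYED * 1.0 + P_GLITCH * p_silence_if_glitch
    return P_DESTROYED / evidence

print(round(p_destroyed_given_silence(0), 3))   # 0.1: a glitch is the best bet
print(round(p_destroyed_given_silence(8), 3))   # 0.858: a loss is now far more likely
```

Under these assumed numbers, a brief silence leaves the technical explanation dominant, while eight minutes of silence raises the probability of a battle loss above 85 percent, mirroring the corroboration-by-elapsed-time behavior the operators exhibited.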
When a C2V remained silent for an unreasonably long time, this became
sufficient corroboration for a commander or staff member at one of the
remaining cells to conclude that the C2V was destroyed and mark it as such
through the CSE interface. At this point, the CSE offered the commander
and staff at the next higher echelon a standardized process to reassign assets
that were previously under the control of the destroyed C2V, thereby elimi-
nating the need for any mutual adjustment to achieve this. Any of the assets
could be assigned to any of the remaining command cells. The process of
command succession was completed when all assets were reassigned and com-
munications reestablished with any available manned platforms.
A key challenge, of course, was to do all this very quickly. Once a C2V
was lost, its assets became largely disoriented, idle, and highly vulnerable to
enemy fires. It was important that the remaining cells quickly recognized the
loss of the C2V, reassigned the assets, and reevaluated the mission to deter-
mine the impact of the C2V loss and the necessary mission changes, if any.
Clearly, much depended on prior experience with such episodes and on how
quickly the survivors recognized the loss (Table 7.3).
Table 7.3
The Timelines Observed in Two Episodes of Command Succession

          Time to Recognize C2V Is    Time to Reassign the Assets    Total Time for
          Destroyed and Mark It       Previously Controlled by       Command
          Dead in the BCSE            the Lost C2V                   Succession

Run 5     8 minutes, 45 seconds       15 minutes, 5 seconds          23 minutes, 50 seconds
Run 7     1 minute, 30 seconds        5 minutes, 13 seconds          6 minutes, 43 seconds
Average   5 minutes, 8 seconds        10 minutes, 9 seconds          15 minutes, 17 seconds
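For the record, the averages in Table 7.3 can be reproduced with a few lines of arithmetic. The run timings come from the table itself; the half-up rounding to whole seconds is an assumption made to match the published figures.

```python
# Sketch: reproducing the Table 7.3 averages from the per-run timings.
def to_seconds(minutes, seconds):
    return 60 * minutes + seconds

def fmt(total_seconds):
    total_seconds = int(total_seconds + 0.5)  # round half up (assumed convention)
    return f"{total_seconds // 60} minutes, {total_seconds % 60} seconds"

# (time to recognize, time to reassign) per run, in seconds
runs = {
    "Run 5": (to_seconds(8, 45), to_seconds(15, 5)),
    "Run 7": (to_seconds(1, 30), to_seconds(5, 13)),
}

recognize = [r[0] for r in runs.values()]
reassign = [r[1] for r in runs.values()]
total = [a + b for a, b in zip(recognize, reassign)]

def avg(xs):
    return sum(xs) / len(xs)

print(fmt(avg(recognize)))  # 5 minutes, 8 seconds
print(fmt(avg(reassign)))   # 10 minutes, 9 seconds
print(fmt(avg(total)))      # 15 minutes, 17 seconds
```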

Run 5 was the first time the command cells had experienced the loss of a
C2V and exercised the command succession procedures, and the tardy time-
line reflects the inexperience and the attending confusion. When the same
command cells experienced the loss of a C2V again in Run 7, they improved
their timing considerably. Moreover, the average times shown indicate that a
very short period was actually required to accomplish these tasks with the
CSE, as compared to a conventional, non-network-enabled environment.
As opposed to a conventional environment, the CSE interface clearly improved
Level-1 SA by providing visual stimulus such as “loss of communications”
symbols, as well as indicators for artillery and external fire impacts detected
by sensors. These assisted the commanders and staff in determining when a
C2V may have taken fire and reduced the time required to recognize the loss
of a C2V.
Furthermore, once the C2V was marked destroyed, that information was
distributed out to the force in real time, and the CSE interface provided a
standardized command succession tool that allowed the commander to quickly
view and reassign all assets that were previously under the control of the
destroyed C2V. The commander or staff responsible for reassigning the assets
could also use the CSE interface to view the status of all units as well as the
mission and tasks of each unit. They could see which units needed additional
assets and which units might not be able to handle the additional workload.
With this information, the commander or staff could make effective deci-
sions regarding the reassignment of assets. In addition, the network allowed
any commander or staff member to control any asset in the battlespace, if so
assigned. There was no need for them to relocate or physically be in the same
area as the assets that were under their control.
It is instructive to consider the dynamic changes in the volume of collabo-
rations associated with the command succession events. Table 7.4 shows the
average collaborations index across all the experimental runs. Interestingly, the
volume of collaborations for the two runs where a C2V was lost (i.e., Runs 5 and
7) did not show a significant increase or decrease versus the average for all runs.
In fact, the collaborations for each run fall within the 95 percent confi-
dence interval for the overall average of all runs. Given that in these two
Table 7.4
Impact of a Command-Cell Destruction on Average Volume of Collaboration
Activity

                            Collaboration
               Avg.         Before C2V Loss    After C2V Loss

Run 1          Collaborations not recorded
Run 2          0.2569
Run 3          0.3175
Run 4          0.3550
Run 5          0.3924       0.4178             0.3854
Run 6          0.5240
Run 7          0.3204       0.3314             0.3081
Run 8          0.4412
Avg.           0.3725
95% Conf. (±)  0.082268801

runs, one-third of the human command resources had been lost, one might
have expected the number of collaborations to decrease by perhaps one-third
or more since there were fewer people to engage in collaborations. Instead,
the collaboration statistic before and after the loss of the C2V in Runs 5
and 7 shows only a minor decrease in collaborations that is not statistically
significant. Apparently, we see a compensating increase in collaborations as
the CSE enabled the remaining command cells to mutually adjust and to
work together more intensively to address the loss of the C2V, coordinate on
reassignment of assets, and revise the mission accordingly.
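As a side note, the overall average and the 95 percent confidence half-width reported in Table 7.4 can be reproduced from the seven recorded run values. The sketch below assumes a two-sided Student's t interval with 6 degrees of freedom; the critical value is hard-coded.

```python
# Sketch: reproducing the Table 7.4 summary statistics. Assumes a
# two-sided t-interval; t(0.975, df=6) is hard-coded as 2.447.
import math
from statistics import mean, stdev

collab = [0.2569, 0.3175, 0.3550, 0.3924, 0.5240, 0.3204, 0.4412]  # Runs 2-8

m = mean(collab)
t_crit = 2.447  # critical value of Student's t for 6 degrees of freedom
half_width = t_crit * stdev(collab) / math.sqrt(len(collab))

print(round(m, 4))           # 0.3725, matching the table's overall average
print(round(half_width, 4))  # 0.0823, matching the table's 95% confidence bound
```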
This result should be at least partially attributed to the shared situation
awareness provided through the CSE interface that enables collaboration.
Additionally, the CSE provides several collaboration tools that facilitate col-
laboration through means other than radio transmissions. Overall, the CSE
demonstrated an effective capability to allow the remaining command cells to
quickly identify the loss of a C2V and to mitigate it collaboratively, yet with-
out an excessive, counterproductive increase in the volume of the required
collaborations.
Providing improved collaboration is only one element in a complex set of
elements that can help command decision making.
CHAPTER 8
The Time to Decide: How Awareness and Collaboration Affect the Command Decision Making
Douglas J. Peters, LeRoy A. Jackson, Jennifer K. Phillips, and Karol G. Ross

Ultimately, it is the command decision, and the resulting action, that affects
the battle outcome. All the processes we have discussed to this point—
collection of information, collaboration, and formation of situation aware-
ness—contribute to the success of the battle only inasmuch as they enable
effective battle decisions. Figure 8.1 depicts but a small part of the complex
relations between actions, decisions, collaboration, situation awareness, and
automation, as we observed them in the MDC2 program. Command deci-
sions—both the command cell’s decisions and the automated decisions—lead
to battle actions.
These, in turn, alter the battlefield situation, bring additional information,
often increase or decrease uncertainty, and engender or impede collabora-
tion. Changes in the availability of information lead to a modified common
operating picture, automated decisions produced by the system, and further
actions. These changes also lead to changes in the awareness of the battle
situation in the minds of the human decision makers. Collaboration impacts
the human situation awareness both positively and negatively (as we have seen
in the previous chapters), which in turn affects the quality and timeliness of
decisions and actions.
Still, the complexity of these relations in itself does not indicate that deci-
sion making in such an environment is difficult, or at least does not inform
us what makes it difficult. Yet, as the previous chapters have told us, the
command-cell members often find it very challenging to arrive at even a
remotely satisfactory decision. Why, then, is decision making so difficult in
this environment?
After all, we provide the cell members with a powerful information gather-
ing, integration, and presentation system. We give them convenient tools to

Figure 8.1. Commander Decision Environment—complex relations between actions, decisions, collaboration, situation awareness, and automation. See Appendix for explanation of abbreviations.

examine the available information and to explore its meaning in collaboration
with other decision makers. The CSE system offers many automatically
generated decisions, such as allocation and routing of resources for fire and
intelligence collection tasks. The cell has established effective procedures for
allocation and integration of decision-making tasks. Yet effective decision
making continues to be a challenge, in spite of all these aids.
One highly visible culprit is the lack of usable information: incompleteness
of battlespace information, doubts about the reliability of the available infor-
mation, and uncertainty about the likelihood of a decision’s consequences or
about the utility of the respective alternatives. A theorist of military command
argues that the lack of information and its uncertainty are the most important
drivers of command: “The history of command can be understood in terms
of a race between the demand for information and the ability of command
systems to meet it. The quintessential problem facing any command system is
dealing with uncertainty” (van Creveld 1985).
Another major source of challenges involves the limits on the rationality
of human decision makers (Simon 1991). Such limitations are diverse: con-
straints on the amount and complexity of the information that a human can
process or acquire in a given time period, and multiple known biases in
decision making. In particular, time pressure is a well-recognized source of errors
in human decision making—as the number of decision tasks per unit time
grows, the average quality of decisions deteriorates (Louvet, Casey, and Levis
1988). In network-enabled warfare, when a small command cell is subjected
to a flood of information much of which requires some decisions, the time
pressure can be a major threat to the quality of decision making (Kott 2007).
Galbraith, for example, argued that the ability of a decision-making organi-
zation to produce successful performance is largely a function of avoiding
information-processing overload (Galbraith 1974).
Human decision-making biases are surprisingly powerful and resistant to
mitigation. Many experiments demonstrate that real human decision mak-
ing exhibits consistent and pervasive deviations (often termed paradoxes) from
the expected utility theory, which for decades was accepted as a normative
model of rational decision making. For example, humans tend to prefer those
outcomes that have greater certainty, even if their expected utility is lower
than that of alternative outcomes. For this reason, it is widely believed that
bounded rationality is a more accurate characterization of human decision
making than is the rationality described by expected utility theory (Tversky
and Kahneman 1974; Kahneman and Tversky 1979). The anchoring and
adjustment biases, for example, can be very influential when decision mak-
ers, particularly highly experienced ones, follow the decisions made in similar
situations in the past (naturalistic decision making [Klein 1999]).
Although such biases can be valuable as cognitive shortcuts, especially under
time pressure, they also are dangerous sources of potential vulnerabilities. For
example, deception techniques are often based on the tendency of human
decision makers to look for familiar patterns, to interpret the available infor-
mation in light of their past experiences. Deceivers also benefit from confir-
mation bias, the tendency to discount evidence that contradicts an accepted
hypothesis (Bell and Whaley 1991).
With a system like CSE, one might expect that biases are at least partially
alleviated by computational aids. Decision-support agents like the Attack
Guidance Matrix that we discussed earlier can greatly improve the speed and
accuracy of decision making, especially when the information volume is large
and time pressure is high. But they also add complexity to the system, lead-
ing to new and often more drastic types of errors, especially when interacting
with humans (Perrow 1999).
Additional challenges of decision making stem from other factors, such as
social forces within an organization, which go beyond the purely
information-processing perspectives. For example, groupthink—the tendency of decision
makers within a cohesive group to pressure each other toward uniformity and
against voicing dissenting opinions (Janis 1982)—can produce catastrophic
failures of decision making. Indeed, our observations of failures in command
cells’ collaboration point to possible groupthink tendencies, particularly in
view of the fact that information overload encourages groupthink (Janis
1982, 196).
Which of these factors, if any, impact the decision making in network-
enabled warfare, and to what extent? How much can a system like the CSE
alleviate or perhaps aggravate such challenges to human decision making?
As a key part of the MDC2 program, we sought to evaluate the ability of
the command-cell members—commanders and staff—to make effective
decisions in the information-rich environment of network-enabled warfare.
Understanding the decision-making process of the commanders, the use
of automated decision aids, and the presentation of critical information for
decisions were crucial to this evaluation.
We begin this chapter by exploring how we collected information to sup-
port our decision-making analysis throughout the experimental program.
This section chronicles not only the progression of the approaches we took,
but also what we learned from the methods themselves and how they were
adapted to yield a richer set of data. We then proceed to discuss some of the
lessons learned in the analysis of the data and their potential relevance to the
development of future command tools.

COLLECTING THE DATA ABOUT DECISION MAKING


A key effort in the MDC2 experiments was to devise mechanisms for cap-
turing the data about decision making. We found it remarkably challenging
to obtain the data that would give us the desired insights. In a trial-and-error
fashion, we proceeded through a number of approaches.
To begin with, we built automated loggers that captured an enormous quan-
tity of data for each experimental run. For example, CSE automated deci-
sions and contextual decision-making information (such as the SAt curves)
were available directly from the data loggers. However, the raw data from
these loggers were of limited direct use in evaluating human decision making
because they could not quantify the commander’s cognitive processes and his
understanding of the situation. At best, they were helpful to support findings
and to understand what was happening during critical decisions.
In addition to automated data logging, five other mechanisms were used to
collect decision-related information: analytic observers, focus groups, opera-
tor interviews, surveys, and battle summary sessions. These were developed
and refined throughout the experimental campaign, especially over the last
five experiments in the campaign, beginning with Experiment 4a.
In Experiment 4a, we employed several traditional tools of the analyst’s tool-
box. During the experiments, analytic observers recorded significant events
and characterized the effectiveness of the battle-command environment with
respect to the commander’s ability to execute his mission. Within our cadre
of observers, one person was dedicated to record and classify every decision
that the commander verbalized. Each decision was identified as relating to
seeing (for example, repositioning sensors or classifying imagery), striking
(for example, when and where to place fires), or moving (for example, how to
array the forces for movement).
Additionally, each decision was characterized according to the associated
complexity. We classified decisions that were prompted by a clear trigger and
appeared to be made according to a small set of understandable rules as auto-
matable decisions. Examples of automatable decisions were “Fire at that tank”
and “Let’s get BDA (Battle Damage Assessment) on that engagement.”
Those decisions that were based on a well-understood and limited set of
variables but required a degree of human judgment not reducible to well-
understood rules, were classified as adjustment decisions. An example of an
adjustment decision was to determine when the necessary conditions are sat-
isfied to begin operations.
Finally, decisions that required a broad, holistic understanding of the situa-
tion, encompassing a wide range of variables, and that fundamentally changed
(or confirmed) the entire operation’s strategy were characterized as complex
decisions. An example of a complex decision from Experiment 4a: the com-
mander identified a deficiency in his plan and saw the need to develop con-
tingency plans: “If the enemy gets into Granite Pass, it is going to be very
difficult for us to get through him. We need to look at some other maneuver
options.”
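A minimal data model for such a decision log might look as follows. The category names come from our classification scheme; the example entries and the tally are illustrative only, not actual experimental data.

```python
# Sketch: a minimal data model for the observer's decision log. The
# function and complexity categories are from the chapter; entries are invented.
from collections import Counter
from dataclasses import dataclass

FUNCTIONS = {"see", "strike", "move"}
COMPLEXITY = {"automatable", "adjustment", "complex"}

@dataclass
class Decision:
    text: str
    function: str      # see | strike | move
    complexity: str    # automatable | adjustment | complex

    def __post_init__(self):
        # Reject entries outside the classification scheme.
        assert self.function in FUNCTIONS and self.complexity in COMPLEXITY

log = [
    Decision("Fire at that tank", "strike", "automatable"),
    Decision("Get BDA on that engagement", "see", "automatable"),
    Decision("Develop maneuver options around Granite Pass", "move", "complex"),
]

# Tally decisions by (function, complexity), the cross-classification
# behind summaries like Figure 8.2.
by_type = Counter((d.function, d.complexity) for d in log)
print(by_type[("strike", "automatable")])  # 1
```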
Collecting this information allowed us to characterize the decision making
in a number of ways. Analyzing the types of decisions made by the command-
ers, we identified the battle functions on which the commander focused most
of his attention. Likewise, decision complexity characterizations helped us
better understand whether the commander was making decisions that could
be automated with a tool or making frequent complex decisions. Together,
these two characterizations enabled us to identify specific areas of the CSE
that could better be tailored to support the decision maker’s needs. Figure 8.2
shows a partial analysis of decisions by type from Experiment 4a.
Surveys, on the other hand, proved much less useful. After each experi-
mental run, we asked each commander to complete a survey—his assessment
of how well the run went and what challenged him during the run. These
surveys, while containing occasional nuggets of interesting information, were
largely ineffective because the questions were not specific to the events of a
given run, and because being the last event of a long day, surveys did not elicit
sufficiently detailed responses from the fatigued commander and his staff.
Overall, although the decision characterization was useful to help improve
the functions of the CSE, it did not tell us much about the effectiveness of the
decisions, or about the specific information and conditions supporting effec-
tive decisions. Therefore, in Experiment 4b (a repeat of Experiment 4a
Figure 8.2. Experiment 4a summary of decisions by type and complexity.

with a different, less-experienced team of operators), we added a qualitative
assessment of decisions. We conducted this assessment in postexperiment
analysis sessions with the help of military subject matter experts after watch-
ing a replay of the battle and events leading to the decision in question. The
following criteria, derived from the Network Centric Operations Conceptual
Framework (Evidence Based Research 2003), were used to evaluate the qual-
ity of a decision as follows:

• Appropriateness: consistency of the decision with situation awareness (the situation
as it was known to the decision maker at the moment), mission objectives, and
commander’s intent.
• Correctness: consistency of the decision with ground truth (i.e., with the actual
situation).
• Timeliness: whether the decision is made within the window of opportunity.
• Completeness: the extent to which all the necessary factors are considered in making
a decision.
• Relevance: the extent to which the decision is directly related to mission.
• Confidence: the extent to which the decision maker is confident in a decision
made.
• Outcome consistency: the extent to which the outcome of the decision is consistent
with the intended outcome.
• Criticality: the extent to which the decision made is critical to mission success.
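One way to record such assessments is sketched below. The eight criteria names are from the list above, but the 1-to-5 rating scale and the averaging rule are assumptions introduced purely for illustration.

```python
# Sketch: recording expert assessments against the eight quality criteria.
# The 1-5 scale and mean aggregation are illustrative assumptions.
from statistics import mean

CRITERIA = [
    "appropriateness", "correctness", "timeliness", "completeness",
    "relevance", "confidence", "outcome_consistency", "criticality",
]

def quality_score(ratings):
    """Average the (assumed) 1-5 expert ratings across all eight criteria."""
    missing = [c for c in CRITERIA if c not in ratings]
    if missing:
        raise ValueError(f"unrated criteria: {missing}")
    return mean(ratings[c] for c in CRITERIA)

example = dict.fromkeys(CRITERIA, 4)
example["timeliness"] = 2  # e.g., the decision fell outside its window of opportunity
print(quality_score(example))  # 3.75
```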

Although this approach provided us with extensive data on the quality of
the decisions (e.g., see Figure 8.3), it also proved to be of limited use. Without
determining the context and the reasons for a decision, and the information
that led to the decision, we could not pinpoint ways for the CSE to improve
the decision-making environment.
In addition to the study of decision quality, we introduced another data-
collection approach that showed its initial promise in Experiment 4b but then
became a core analytic tool for later experiments. Process tracing (Shattuck
and Miller 2004) examines a single episode of an experimental run in detail.
This methodology connects collaboration to changes in SA and SA to deci-
sion making with a focus on the operators and their use of the CSE. Process
tracing externalizes internal processes (Woods 1993) and tries to explain the
genesis of a decision by mapping out how an episode unfolded, including
information elements available to the operators, what information was noted
by operators, and operators’ interpretations of the information in immediate
and larger contexts.
In Experiment 4b, we completed process tracing for a single event, and
although we were unable to draw any significant insights from one event, the
methodology showed promise for understanding both the context of a decision
and the challenges that faced the decision maker at the time of the decision.

Figure 8.3. Expert assessments considered the correctness, timeliness, relevance, and other characteristics of decisions.
With the introduction of a manned dismounted platoon for Experiment 5,
the complexity of the decision-making environment increased significantly.
Now, instead of communicating his thoughts and decisions to staff members
located in the same vehicle, the commander had to convey his intent and
orders to subordinate commanders reachable via the radio and shared dis-
plays, with sufficient clarity and detail. Our analysts also examined informa-
tion requirements for warfighters conducting dismounted operations.
The process-tracing techniques were well suited for this complex environ-
ment, and we focused on identifying key decisions during each run and ana-
lyzing those decisions in detail. The detailed process tracing combined video
and audio playback of events leading to a decision, audio logs of the commu-
nications, query results from the automated loggers, the SAt curve, observer
notes, and interview records. All these components together supported a very
detailed study of short-duration events.
To facilitate these process tracings, we compiled critical information into
a single source. By plotting different types of information across a common
time axis, we were able to show what was happening at various time points
during the battle. Because these charts were developed by stacking multiple
variables against a common time axis, we referred to these composite views as
“stacked charts.” An example is shown in Figure 5.10. This particular stacked
chart was developed to help us simultaneously view decision making, collabo-
ration, information availability, and battle tempo data. The relations between
these elements helped us understand what events shaped a key decision.
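The data preparation behind such stacked charts amounts to interleaving several time-stamped record streams onto one common axis. The sketch below illustrates the idea with invented events; each stream is assumed to be already sorted by its own timestamps.

```python
# Sketch: merging differently sourced, time-stamped records onto a common
# time axis, the preparation step behind the "stacked charts." All events
# and timestamps below are invented for illustration.
import heapq

decisions = [(120, "decision", "Reassign sensor to new named area of interest")]
collaborations = [(95, "collab", "CAU-1 confirms target location"),
                  (150, "collab", "CAT requests status update")]
sa_events = [(60, "SA", "new enemy contact posted to COP")]

# Each stream is sorted by its timestamp (seconds into the run);
# heapq.merge interleaves them into one chronological record.
timeline = list(heapq.merge(sa_events, collaborations, decisions))
for t, kind, note in timeline:
    print(f"{t:>4}s  {kind:<8} {note}")
```

Plotting each record type in its own band against this shared time axis yields the composite view used to relate decision making, collaboration, information availability, and battle tempo.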
Of particular value in this methodology is a technique for extracting criti-
cal information through interviews. Given our earlier lack of success with
end-of-run surveys, we were eager to try a technique that would allow us to
identify details of critical decisions. The critical decision method of inter-
viewing (Klein, Calderwood, and MacGregor 1989) uses a two-person team
to identify a single decision made during a run and explore it in detail. There
are four steps to this interviewing technique:

Step 1 is incident identification. The interviewer presents a situation or a critical event
and asks the decision maker to talk about the event from his perspective with a
particular focus on the role he played during the event. The interviewer does not
interrupt the interviewees with clarifying questions (these come later in Step 3),
and the interview team takes careful notes regarding the actions and decisions made
during the event.
Step 2 establishes the timeline of the event. The interviewer repeats the story back
to the interviewee with special emphasis on the timing of events and decisions.
Through this process, the interviewer becomes familiar with the subcomponents
and timing of events, and how they impacted the outcomes and decisions made.
Special attention is paid to decision points, shifts in situation awareness, gaps, and
anomalies.
Step 3, deepening, tries to uncover the story behind the story. Here most of the detailed
information becomes apparent—why things were done as they were, why decisions
were or were not made, what information and experiential components contributed
the most. This stage uses the event timeline and explores it in detail. Anomalies or
gaps in the story are investigated during this phase.
Step 4 focuses on the what-if queries. The purpose of this step is to consider what
conditions may have made a critical difference in how the situations unfolded and
in the decisions that were made. It also asks the question of what a less-experienced
person may have done in the same situation to further draw out the subtle factors
that enable the interviewee to make effective decisions.

Because both the process traces and the interviews proved to be effective
in Experiment 5, Experiments 6 and 7 built on these analytic tools and intro-
duced two additional tools.
The first additional tool—a detailed timeline of a run—became necessary
due to the increased complexity and duration of runs. Although we had exten-
sive and detailed records of what happened during each run (including video
and audio recordings), the task of producing a unified, concise description of
what happened during a run was difficult after the experiment was complete.
Therefore, after each experimental run, a group of analysts who had closely
observed the various echelons and cells (friendly and enemy) wrote a short
but complete synopsis of the run. In the synopsis they were able to capture
concisely the flow of the battle and detail the most significant events of the
battle from both the Blue and Red perspectives.
The second tool we introduced in the later experiments was focus groups.
Organized for each command cell, a focus group session was relatively short
(less than one hour) and was facilitated by a member of the core analysis team
who observed that cell during planning and execution. The facilitator began
the focus group session with candidate decisions of interest identified by the
analysis team during or immediately after the run. A recorder took notes.
After the focus group session, the facilitator or recorder briefed the entire
analytic observer team on key findings.
At the focus group sessions, we tried to understand the battle in general, and
the key events specifically, from the perspective of the operators. Facilitators
used the following questions to guide the focus group and to ensure that all
members participated in the session.

• Ask the operators to summarize the battle from their perspective. Brief back the key
elements of the battle summary. Use the operator’s words to the maximum extent
possible. Introduce the decisions of interest, placing them in the context of the
battle summary.
• Ask the operators to describe the events that led to a specific decision. Listen for
decision points, collaborations, shifts in situation awareness, gaps in the story, gaps
in the timeline, conceptual leaps, anomalies or violations of expectations, errors,
ambiguous cues, individual differences, and who played the key roles. Ask clarifying
questions and then brief back the incident timeline.
• Ask those operators who played key roles questions about situation assessment and
cues. Listen for critical decisions, cues and their implications, ambiguous cues, strategies,
The Time to Decide 203

anomalies, and violations of expected behavior with respect to the commander’s intent.
• Ask operators to describe CSE-related issues. Ask probing questions as necessary.
What worked well? What features helped their situation awareness? What features
did you use to collaborate? How did you use automated decision-support functions?
What did you not use and why? What would you automate?
• Ask operators to describe procedure-related issues. Ask probing questions as neces-
sary. What responsibilities were assigned to each operator? What tools were associ-
ated with the assigned responsibilities? When were the operators overwhelmed with
the workload? How did the commander adjust staff roles during the mission? What
new procedures did you implement and why? What did you struggle with? Why did
you use a certain procedure?

The combination of focus group and CTA interviews along with the other
quantitative data logs gave us an ability to reconstruct the battle, to examine
how decisions were made, and to identify issues that may affect battle com-
mand in the future force. The following sections describe some of the result-
ing conclusions.

THE HEAVY PRICE OF INFORMATION


Because the notional future force represented in our experimental program
was heavily armed but lightly armored, availability of information was excep-
tionally critical to mission success. The cost of stumbling upon an undetected
enemy asset was inevitably the loss of a critical piece of equipment. However,
if the commander could find the enemy, he could use his precision weapons to
engage the enemy at great distance. In order to find the enemy at long range,
the force was equipped with a rich set of sensor platforms. The sheer num-
ber of sensors, along with the well-understood importance of information
about enemy assets, led the commander to focus more attention on informa-
tion needs than is common for commanders of today’s forces. This additional
emphasis was the result not only of the increased importance of information
but also of its increased availability.
Because even a single enemy entity could have a major impact on this
lightly armored force, our commanders focused much of their attention on
intelligence gathering regarding individual enemy platforms, in addition to
the more conventional tasks of aggregating information on enemy forma-
tions and possible enemy courses of action. Our commanders needed to know
where individual enemy entities were and, just as importantly, where they
were not. In addition, they paid attention to the classification of detected
entities and the condition of targets after they were engaged (battle damage assessment, or BDA). In fact,
the commander’s strong focus on “seeing the enemy” at the expense of other
functions became obvious when we analyzed the content of his decisions.
For example, in Experiment 4a, almost half of the decisions verbalized by
the command-cell members were characterized as see decisions (the other
common types, move and strike decisions, accounted for about 25% each; see
Figure 8.2).
Still, the commanders in our experiments tended to delegate the entity-
based information-gathering responsibility to the intelligence manager. This
helped devolve a substantial cognitive load from the commander and also
served to unify control of the sensor assets. On the other hand, this dele-
gation deprived the cell of the critical big picture of the enemy since the
intelligence manager was focused on finding and characterizing individual
battlespace entities instead of developing an aggregated understanding of
the enemy.
In Experiment 6, one of the commanders recognized this deficiency and
saw that his intelligence manager was overloaded with tasks, while the effects
manager was being underutilized (since many of the engagement tasks were
automated or assisted by the CSE). The commander made the effects manager
responsible for coordinating with the intelligence manager to obtain images
for BDA and to conduct BDA assessments. The advantage of placing this
responsibility with the effects manager was obvious—not only did it alleviate
the cognitive load placed on the intelligence manager, but it also enabled a
rapid reengagement of assets that were not destroyed by the original engage-
ment. In general, the flexibility of the CSE facilitated opportunities for creative
and unconventional allocation (and dynamic reallocation during the battle) of
responsibilities between members of the command cell.
BDA proved to be particularly critical and demanding throughout the experi-
mental program, and commanders struggled with obtaining quality assessments
from their available images. More often than not, BDA images (produced
with a realistic imagery simulator) did not provide enough information to
make definitive conclusions about the results of an engagement. Thus, about
90 percent of BDA images from Experiment 4a were inconclusive (Figure 6.5
of chapter 6). This ultimately led to frequent reengagements of targets in
order to ensure they were destroyed. In Experiment 4a, 44 percent of targets
were reengaged, and in Experiment 4b, 54 percent were reengaged.
The need to understand the state of enemy entities through effective BDA
was clearly demonstrated in Experiment 4a, Run 6, where a single enemy
armored personnel carrier destroyed enough of the Blue force to render
the unit combat ineffective. This particular enemy entity had been engaged
early in the battle and suffered a mobility-kill. However, the intelligence
manager classified the asset as dead based on a BDA picture. This mistake
was not found until it was too late. The Blue force was unable to continue
its mission.
Undoubtedly, tomorrow’s commanders will greatly benefit from the rich
information available to them. At the same time, they will be heavily taxed
with the need to process the vast information delivered through networked
sensors—both initial intelligence and BDA. Commanders should expect to
spend more time, perhaps over half of their time, on “seeing” the enemy.
Part of the solution is to equip them with appropriate information-processing
tools. In addition, the staff responsibilities should be continually reevaluated
and reallocated to ensure that all critical duties are well covered.

ADDICTION TO INFORMATION
Information can be addictive. We often observed situations when com-
manders delayed important decisions in order to pursue an actual or perceived
possibility of acquiring additional information. The cost of the additional
information is time, and lost time is a heavy price to pay, especially for the
future force that relies on agility.
As with today’s commanders, uncertainty is present in all decisions, and
decisions are often influenced by aversion to risk in the presence of uncer-
tainty. Unlike today’s commanders, however, our commanders had the tools
readily available to them to further develop their information picture. They
could reduce their uncertainty by maneuvering sensor platforms into position
to better cover a critical area. This easy access to additional information
was a double-edged sword because it often slowed the Blue force
significantly. Commanders commonly sacrificed the speed advantage of their
lightly armored force in order to satisfy their perceived need for information.
These delays enabled the enemy to react to an assault and move to positions
of advantage.
An example of this occurred in Experiment 4a, Run 8, where the com-
mander incorrectly assessed that the enemy had a significant force along the
planned axis of advance. Even after covering this area several times with sen-
sors and not finding many enemy assets, the commander ordered “. . . need
to slow down a bit in the north . . . don’t want you wandering in there.” At
this time in the battle, the average velocity of moving Blue platforms dropped
from 20 km/h to 5 km/h. The commander exposed his force to enemy artil-
lery for the sake of obtaining even more detailed coverage of the area.
On the other hand, commanders also frequently made the opposite mistake
when they rushed into an enemy ambush without adequate reconnaissance.
An example of this occurred in Run 8 of Experiment 6 where several critical
sensor assets were lost early in the run, and the CAU-1 commander quickly
outran the coverage of his remaining sensors. In cases like this, the commander
was lulled by the lack of enemy detections on his CSE screen and advanced
without adequate information—perhaps perceiving the lack of detections as
sufficient information to begin actions on the objective. This event is discussed
in detail in the following section.
Today’s commanders are often taught that the effectiveness of a decision is
directly related to the timeliness of the decision. However, while timeliness
will remain critical, tomorrow’s commanders will need to pay more attention
to the complex trade-offs between additional information and decision timeli-
ness. Effective synchronization of information gathering with force maneu-
ver is a formidable challenge in information-rich (and therefore potentially
information addictive) warfare. Both specialized training and new tools are
required to prevent the failures that commanders experienced so often in our
experiments.

Figure 8.4. SAt curve for Experiment 6, Run 8. See Appendix for explanation of
abbreviations.

THE DARK SIDE OF COLLABORATION


Effective decision making can also be delayed and even derailed by col-
laboration. In certain cases, we observed that a commander’s understanding of the
current Blue or Red disposition degraded as a result of collaborations with
subordinates, peers, or higher headquarters commanders. Unlike in chapter 7
where we discuss cases of ineffective collaboration, here collaboration itself
went well. However, the effects of the collaboration on a commander’s deci-
sions were highly detrimental.
Run 8 of Experiment 6 provides an interesting example of how collabora-
tion can lull a decision maker into complacency by validating incorrect con-
clusions. In this run, the CAU-1 commander’s force was destroyed by a strong
enemy counterattack. Figure 8.4 shows the SAt curve for Run 8 with an over-
lay of time points when Blue entities were destroyed. At 32 minutes into this
run (vertical dashed line), the CAT commander assessed that the enemy was
defending heavy forward (i.e., mainly in the CAU-2 sector).
Several minutes later, the CAU-2 commander seemed to confirm that assess-
ment with his report “I suspect [the enemy’s] intent is to defend heavy forward
[in CAU-2 sector].” This assessment was derived from several detections made
very early in the run. The figure shows that little new information about the
enemy is acquired before the CAU-1 commander announces that “I’m not seeing
any counterattacking forces moving towards us [i.e., CAU-1]. I think the
majority of the enemy force is in [CAU-2’s] sector” at 52 minutes into the run.
This would be a reasonable conclusion if he were using his sensors to
develop the picture of the enemy, but in fact CAU-1 had focused his sensors
on his flank and did not have any sensor coverage in the area where he was
moving his troops. Soon thereafter, CAU-1 stumbled into a major Red coun-
terattack force and was combat ineffective within minutes.
So, the obvious question is, why did the CAU-1 commander not make
more effective use of his sensors? Certainly, one important factor was a tac-
tical blunder early in the run that led to the destruction of several key sen-
sor assets, leaving him with fewer sensors to conduct his mission. With this
reduced set of sensors, the commander had to protect his flank, scout forward
to the objective, and conduct necessary BDA. At 44 minutes into the fight,
the commander tasked his staff member to reposition the sensors to scout the
objective but was distracted by the collaboration with a staff member who
declared that he had found several enemy assets far to the west.
Because of this collaboration, the commander neglected his intended mis-
sion of covering the area ahead of his force and began focusing attention far
to the western flank of the advancing force. Yet, less than 10 minutes later,
and with no new information about the objective, the commander was secure
enough in his assessment that he began his offensive and was met with a major
enemy counterattack force that decimated his unit.
There were several reasons for this poor decision to begin operations with-
out conducting proper reconnaissance. The collaborative assessment of the
situation with CAU-2 commander and with CAT commander led the CAU-1
commander to expect few enemy forces in his zone. Later, the commander’s
collaboration with a staff member confirmed his erroneous understanding
that the enemy force was far from his zone.
Though this was a rather extreme example of a collaboration negatively
affecting decision making, there were many other examples throughout the
experiments that showed collaborations either distracting the commander
from making critical decisions or lulling him into accepting an incorrect
understanding of the battlespace. In fact, of seven collaboration process traces
chosen for detailed analysis in Experiment 6, only three cases of collabora-
tion yielded improved cognitive situation awareness for the operators. In the
remaining four cases, collaboration dangerously distracted the decision maker
from his primary focus or reinforced an incorrect understanding of the cur-
rent Red or Blue disposition.
Consider that commanders in our experiments were equipped with a sub-
stantial collection of collaboration tools—instant messaging, multiple radio
frequencies, shared displays, graphics overlays, and a shared whiteboard.
Although the commanders took full advantage of these tools and found them
clearly beneficial, there was also a significant cost to collaboration. To min-
imize such costs, future command cells will need effective protocols—and
corresponding discipline—for collaborating: how often and under what
circumstances collaboration occurs, with what tools, and in what manner.

AUTOMATION OF DECISIONS
Commanders and staffs used automated decisions extensively and could use
them even more. However, the nature of these automated decisions requires
an explanation. In effect, the CSE allowed the commander to formulate his
decisions before a battle and enter them into the system. Then, during the
operations, a set of predefined conditions would trigger the decisions. Thus,
the decisions were actually made by the commander and staff. It was only the
invocation and execution of these decisions that was often performed auto-
matically when the proper conditions were met.
One type of such automatically triggered decision was the automated fires.
The conditions for invoking a fire mission included staff-defined criteria for
confidence level, type of target, the uncertainty of its location, and target-
acquisition quality. Recall that in chapter 3 we discussed the Attack Guidance
Matrix (AGM), an intelligent agent within the CSE that identified enemy tar-
gets and calculated the most suitable ways to attack them with Blue fire assets.
It could also execute fires; for example, it could issue a command to an auto-
mated unmanned mortar to fire at a particular target, automatically or semi-
automatically, as instructed by the human staff member. Typically, a commander
or an effects manager would specify the semiautomatic option: the AGM rec-
ommended the fire to them and would execute it only when a command-cell
member approved the recommendation. Occasionally, in extreme situations,
they would allow fully automated fires, without a human in the decision loop.
Another similar type of automated decision making was an intelligent
agent for automated BDA management. This agent used the commander-
established rules to determine which sensor asset was the most appropriate
to conduct BDA and would automatically task that asset to perform the BDA
assignment. For example, it would automatically command a UAV to collect
information about the status of a recently attacked target. Such decisions were
made based on the specified criteria regarding the available sensor platforms,
areas of responsibility, and enemy assets to be avoided.
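A minimal sketch of such a sensor-tasking rule follows. The names and the selection criterion (nearest available sensor positioned outside known threat areas) are assumptions for illustration; the text does not specify the agent's actual algorithm, which also weighed areas of responsibility.

```python
import math

def dist(a, b):
    """Straight-line distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def pick_bda_sensor(target, sensors, threat_areas):
    """Choose a sensor to image a just-engaged target (hypothetical rule).

    target: (x, y) of the recently attacked enemy asset
    sensors: list of dicts like {"id": "UAV-1", "pos": (x, y), "available": True}
    threat_areas: list of ((x, y), radius) circles the sensor should avoid
    """
    def outside_threats(pos):
        return all(dist(pos, center) > radius for center, radius in threat_areas)

    candidates = [s for s in sensors
                  if s["available"] and outside_threats(s["pos"])]
    if not candidates:
        return None   # no safe, free sensor: the task falls back to the staff
    # Task the closest qualifying sensor to minimize time to BDA imagery.
    return min(candidates, key=lambda s: dist(s["pos"], target))["id"]

sensors = [{"id": "UAV-1", "pos": (0, 0), "available": True},
           {"id": "UAV-2", "pos": (9, 9), "available": True}]
print(pick_bda_sensor((10, 10), sensors, threat_areas=[((0, 0), 2.0)]))  # UAV-2
```

Because the rule retasks a platform without announcing itself, it is easy to see how an information manager watching his UAV move could ask, "Who is moving my UAV?"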
In each experiment, we found that command-cell members used the auto-
mated fires feature effectively and frequently. Commanders and effects man-
agers spent ample time prior to the beginning of battle defining the conditions
for automated fires. During the runs, these settings were rarely changed and
almost every run had instances of automated engagements of enemy assets.
However, there were also many manual engagements that could have been
automated but weren’t. Instead, a cell member would manually identify a Red
target, select a Blue fire asset and suitable munitions, and then issue a com-
mand to fire—overall, a much more laborious and slower operation than a
semiautomated fire. One reason for preferring such manual fires was that
it often took too long to accumulate enough intelligence on an enemy target
to meet the preestablished criteria for an automated or semiautomated fire
decision—the criteria had to be fairly general and therefore often proved too
stringent in practice. For
example, since in our experimental scenarios there were relatively few civilian
tracked vehicles in the battlespace (a bulldozer being an obvious exception),
the effects manager would often engage any vehicle classified as tracked even
before there was a clear indication that it was an enemy asset. At the same
time, he was hesitant to allow automatic fires on all tracked targets.
In such cases, a manual engagement was intentional, but in other cases,
the staff wondered aloud why an enemy vehicle was not being engaged. To the
effects manager’s eye, the specified conditions were apparently met, and the
AGM should have initiated a fire event when in fact the situation had not met
the full set of the prespecified trigger conditions. The staff’s puzzlement over
why an automated fire was not happening had an adverse effect. Because the
CSE was not performing as expected by the effects manager, his confidence
in the capability of the tool diminished. Unable to understand why the AGM
refused to fire, the effects manager tended to apply simple and very specific
rules so that only the most critical targets were automatically engaged.
The automated BDA tool suffered from this lack of understanding, which
led to a lack of trust, much more so than with the AGM. One would think
that the seemingly less critical and nonlethal nature of BDA would lead to
more ready acceptance by the operators. After all, the automated BDA tool
was developed at the request of commanders in an early experiment where
they routinely tasked a UAV to take a picture of engaged Red assets. The com-
manders felt that if this task was automated, not only would it lighten the load
of the staff, but it would also ensure that the task was conducted in a timely
fashion. This seemed like an obvious task for which to develop effective rules,
and the CSE developers set to work automating these BDA tasks.
The solution worked exactly as expected by the tool designers and by the
command staff who originally requested the automation. Unfortunately, the
new command-cell members participating in the next experiment had rather
different expectations. Early in the experiment, they used the automated BDA
tool and became utterly confused. The information manager controlling the
UAVs would wonder aloud, “Who is moving my UAV?” and “Where is that
thing going now?”
What was originally designed to lighten the load of the command cell
quickly turned into a perceived loss of control over critical assets. The auto-
mated BDA tool became available in Experiment 4b, and in each subsequent
experiment, commanders and their staffs began by using the functionality but
then quickly abandoned it because of the perceived loss of control.
So, what decisions can and should be automated? Why was the automated
fires capability well received while the automated BDA was not? Based on our
experience, we believe the difference comes down to the following consider-
ations.
First, the commanders and staff must trust the system. Not only must the
system be reliable enough to work as expected every time, but it must also
be simple enough for the operators to understand when it will act and when
it won’t. In particular, there must be a very clear and easily understandable
distinction between the computer control and human control.
For example, in case of the automated fires, it was very clear whether the
human or the computer was to make the final decision, and once a muni-
tion was launched, there was no opportunity for—or confusion about—the
control. However, in case of the BDA management, there was continuous
uncertainty about who was in control of a given platform—a human or a
computer—and the information manager had no means to collaborate with
the system to answer his questions about control.
Second, it should be easy for the operator to enter rules that govern an
automated decision-making tool. For example, it may initially seem obvious
to the developers of an automated tool to call for fires on detected enemy
tanks as soon as possible. However, when low on ammunition, a commander
might want to fire only at those tanks that are able to affect his axis of advance.
Likewise, he may not want to automatically engage tanks near populated areas
or if a civilian vehicle was spotted nearby. The more rules and tweaks, the
harder it is to understand the decisions made by the tool and the sooner an
operator will build distrust when the tool does not perform as he expects.
Naturally, other nontechnological factors also affect the extent to which
automated decisions will be available to a future force. Perhaps our com-
manders accepted the automated fires so easily because the experiments were
merely a simulation: the consequence of a wrong automated decision was
the destruction of computer bytes and not of real people. In today’s practice,
a human is personally accountable for every fire decision, and great care is
taken to avoid accidents. With any automation of decisions related to either
lethal fires or to any other battle actions come many challenging questions
about responsibility and accountability.

THE FOREST AND THE TREES


Decision making can suffer from an excessive volume of detailed informa-
tion offered by the network-enabled command system. In our experiments,
we observed several mechanisms by which the richness of information nega-
tively impacted the decision making.
First, recall that all operators’ displays were tied to the same underlying
data source. Therefore, soon after an enemy asset was detected, every screen
of every command-cell member in every command vehicle would show this
new information. At first glance, this seems to be exactly the right behavior
of the system, and the operators indeed desired to see all such information.
And yet, this faithful delivery of detailed information proved to be a major
distraction to the cell members’ decision making, especially to commanders.
Instead of focusing on understanding the enemy course of action and how to
best counter likely enemy actions, commanders became mesmerized with the
screen, hunting for changes in the display and reacting to them.
This so-called looking-for-trees behavior had at least two very adverse
impacts on the commander’s ability to understand the battlespace. On one
hand, the commander gravitated to a reactive mode: he responded to changes
on his display and frequently lost the initiative in the battle. This was espe-
cially true when inadequate sensor management led to detections of enemy
assets outside of the truly critical areas of the battlespace. In such cases, the
commander’s fixation on the screen led him to focus on largely irrelevant
topics while losing the grasp of main events in the unfolding battle.
On the other hand, responding to frequent updates on the screen pre-
vented the commander from spending the necessary time thinking about the
bigger picture of the situation. For example, in Experiment 4b, we noticed the
excessive frequency with which the commander shifted his attention. He was
almost constantly scanning the display for new information, moving his cur-
sor from one entity to another to determine if new information was available,
and reacting to the appearance of an enemy icon or alert box on the screen. In
Run 4, he shifted his attention 26 times over a 13-minute period—an average
of once every 30 seconds. During a 16-minute period in Run 6, he shifted his
attention 60 times, for an average dwell time of about 16 seconds.
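The dwell-time figures quoted above follow directly from dividing each observation period by the number of attention shifts:

```python
def avg_dwell_seconds(period_minutes: float, shifts: int) -> float:
    """Average dwell time per focus of attention, in seconds."""
    return period_minutes * 60 / shifts

print(avg_dwell_seconds(13, 26))  # 30.0 seconds (Run 4)
print(avg_dwell_seconds(16, 60))  # 16.0 seconds (Run 6)
```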
The implications of this frequent attention shifting are interesting and dis-
turbing. The more often a decision maker shifts attention, the shorter the
dwell time on a data element, and the more shallow the cognitive processing.
The decision maker may determine, for example, that an enemy vehicle has
been detected and may decide how to react to it. Then he shifts his attention
to another change in his screen, without having enough time to reason about
the broader issues—the implications of the detection of that type of vehicle at
that place in the battlespace.
Furthermore, the commander would often “drag” the other cell mem-
bers along with him as he shifted attention—announcing the updates he was
noticing or issuing reactive tasks such as “DRAEGA just popped up, let’s get
a round down there.” Such unnecessary and counterproductive communi-
cations about the newly arriving information were depressingly common.
For example, in Experiment 6, as the commander watched on his screen the
reports of Red artillery rounds landing around one of his platoon leader’s
vehicle, he felt compelled to keep announcing this fact to the beleaguered
platoon leader. Of course, the platoon leader was well aware that he was under
fire, and the commander’s communications only served to distract him.

Concluding Thoughts
Alexander Kott

It is intriguing to consider how many diverse factors converge to provide
the impetus for the new paradigm of network-enabled battle command. One
of them, as we discussed in chapter 1, is the centuries-old trend toward the
greater dispersion of forces in the battlespace. Eventually, the dispersion of
even a small unit reaches the point where human voice, hearing, and sight
no longer suffice for obtaining or communicating information. Man-made
devices, such as acoustic and imaging sensors along with wireless networks,
become indispensable. These begin to produce and deliver more information
than a commander can handle, which in turn creates the need for automated
processing—information translation, fusion, and interpretation.
Closely related to the force dispersion, but of a different nature, is the
emergence of intelligent weapons and platforms, such as unmanned aerial and
ground vehicles, and missiles for precision standoff engagements. Now the
commander has both the need and the opportunity to see and to command
at ranges far beyond human sight. This is the first time in history that even
ground forces possess both the desire and technological means to fight largely
beyond the line of sight. This too calls for reliance on nonhuman sensors
and command links, and the attending need for interpreting and generating
voluminous data.
With new information technologies delivering war images to every TV
and every Web site in the world, the commander is under great pressure
to minimize both friendly and civilian casualties. For this, he must rely on
greater detail, timeliness, and precision of battlespace intelligence, plans, and
execution.
His challenge is exacerbated by the increasing standoff lethality of enemy
combatants, even irregular dismounted fighters with weapons like RPGs and
IEDs. Their preference for operations in urban environments creates an even
greater complexity: physical clutter, truly three-dimensional terrain, presence
of civilians, and great opportunities for enemy cover and concealment. All this
greatly multiplies the range of possibilities and details the commander must
consider. Once again, he needs better means to collect, process, and coordi-
nate information and to generate effective decisions.
On the other hand, the incessant arguments about the need to create lighter
ground forces and therefore to trade “armor for information” (Wilson, Gordon,
and Johnson 2004) are somewhat misleading. Battle-command systems suit-
able for network-enabled warfare can be of great value to any force: to heavy
mechanized forces, light infantry, and anything in between. The lightness of
the battle vehicles and the richness of the information should not be conflated.
These are orthogonal issues; each should be evaluated on its own merit.
In fact, without waiting for any major changes in either platforms or com-
munication networks, network-enabled battle command is already emerging in
practice. In the few years it took us to develop and explore the command tools
of the MDC2 program, they ceased to appear far-fetched and futuristic. Recent
successful developments like the Command Post of the Future (http://www.
globalsecurity.org/military/systems/ground/cpof.htm) and practical experiments
like the Air Assault Expeditionary Force (Bailey 2005) already demonstrate some
of the ideas that underpin MDC2.
However, as battle-command technologists are learning to provide com-
manders with more information in network-enabled warfare, the challenge of
a cognitive bottleneck is growing in importance. On one hand, as we see in the
MDC2 experiments, tomorrow’s commanders will benefit from the rich infor-
mation available to them. At the same time, they will be heavily taxed with the
need to process the vast amount of information. Our experiments show a strong
tendency for the commander to reallocate the bulk of his resources to the battle
of cognition—particularly the efforts to maintain situation awareness.
Somewhat alarmingly, in spite of our best efforts to enhance the CSE and
to optimize the command cell’s processes, the commander and staff face a
very heavy cognitive load. Gaps and misinterpretations in their situation
awareness are surprisingly common. Even with adequate SAt (the measure of
correct information that the command system presents to the commander),
we find numerous cases when the commanders and staff fail to interpret the
situation correctly, resulting in low SAc (the measure of information the com-
mander understands correctly). Human psychological biases are the likely
mechanisms behind these deficiencies. The challenge, then, is to build battle-
command tools that match the minds of human commanders and staffs—both
their strengths and their weaknesses.
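The distinction between the two measures can be made concrete with a small sketch. The scoring scheme, entity names, and grid references below are illustrative assumptions, not the actual MDC2 instrumentation: SAt is computed here as the fraction of ground-truth items the command system presents correctly, and SAc as the fraction the commander actually understands correctly.

```python
def sa_fraction(reference: dict, reported: dict) -> float:
    """Fraction of reference items whose reported value matches the truth."""
    if not reference:
        return 1.0
    correct = sum(1 for k, v in reference.items() if reported.get(k) == v)
    return correct / len(reference)

# Ground truth of the battlespace vs. what the system displays vs. what
# the commander takes away (entity id -> reported grid square).
ground_truth = {"tank1": "NV42", "tank2": "NV43", "mrl1": "NV50"}
displayed    = {"tank1": "NV42", "tank2": "NV43"}   # sensors missed mrl1
understood   = {"tank1": "NV42", "tank2": "NV51"}   # tank2's location misread

sa_t = sa_fraction(ground_truth, displayed)    # what the system presents
sa_c = sa_fraction(ground_truth, understood)   # what the commander grasps
print(f"SAt = {sa_t:.2f}, SAc = {sa_c:.2f}")   # SAt = 0.67, SAc = 0.33
```

The gap between SAt and SAc in the example is exactly the interpretation failure at issue: the system displayed tank2 correctly, but the commander misread it.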

THE TOOLS OF NETWORK-ENABLED COMMAND


Perhaps the most important achievement of the MDC2 program is sim-
ply the proof of existence—the experimental evidence that it is possible
to prototype a working, real-time, multi-echelon, network-enabled battle-
command system. Using a suite of decision-support applications, the CSE
demonstrates the ability to help the commander and staff manage a large vol-
ume of information arriving at high rate, acquire situation awareness, and
execute the battle by issuing commands to a demanding array of assets.
It also shows that distinct battlefield functional areas (maneuver, intelli-
gence, logistics, and fires—in conventional practice supported by different
systems) can be integrated into a single application that the operator can
tailor to his specific needs while still having access to all these functions.
Further, the CSE offers a working example of how planning and execution
processes—conventionally performed separately and with separate tools—can
be performed from a single unified application in a spiral process.
Whether such technical innovations can deliver value to warfighters is
a matter of experimental confirmation. Although never fielded, the CSE
does show strong benefits in multiple simulation-based experiments. The
Blue command cells execute extremely demanding missions with agility, pre-
cision, and coordination that far exceeds what would be possible given the
present-day battle-command tools and processes. The battle-command tools
of CSE help commanders and staff in a number of ways. First, they increase
the commander’s situation awareness and reduce his uncertainty in situation
understanding. They help the commander visualize the current situation and
project it into the future. The commander is able to recognize emergency
situations and rapidly reconfigure his assets to meet the requirements of an
emerging tactical situation. The increased speed of command helps the com-
mander dictate the operational tempo to the enemy. The sharing of situa-
tion awareness across several command cells enables them to collaborate and
cooperate: they assist each other with their sensing and fire assets even when
separated by exceptionally large (by contemporary standards) distances.
Admittedly, one must maintain some healthy skepticism about the degree
to which such promising advantages can transfer from simulated war games
to the real-world battlespace. On the other hand, one can certainly transfer
many of the lessons learned in the MDC2 program to the development of real
network-enabled command systems. To begin with, the development pro-
cess requires an extensive investment into rigorous experimentation, much
more extensive than is typical in today’s practice. In systems where the key
challenge is a subject as poorly understood as human cognition, science still
provides relatively little help to an engineer, while reliance on common-sense,
seat-of-the-pants solutions is often outright harmful. Even more misguided
is the practice of producing voluminous requirements documents based on
predevelopment assumptions and guesses and then driving the development
process by adherence to the requirements.
Instead, the process should be spiral and driven by frequent series of experi-
ments. (These should not be confused with user acceptance tests or system
validation tests.) The experiments should be heavily instrumented to enable
detailed and quantitative analysis of the relevant aspects of the operators’
cognitive processes. For example, we instrumented the experimental envi-
ronment in order to automatically collect data regarding the information
available to each operator of the command cells, and the ground truth of the
battlespace, as well as the real-time qualitative assessments of specially trained
observers. Additional techniques for postexperiment data collection included
interviews that traced critical events and decisions, and a temporal analysis to
correlate quantitative trends with battle events and decisions. Much of this
data was processed in real time and displayed to experiment controllers.
In our experience, instrumentation and quantitative analysis of the opera-
tor’s situation awareness, with often surprising results, was a key factor guid-
ing the system development. If at all possible, the series of experiments should
be designed to provide quantitative comparison with an alternative approach,
and the experimental results should be statistically significant. Naturally,
capable simulation tools are critical to success of simulation-based experi-
ments. In order to meet the needs of our experiments, we had to make a sig-
nificant investment in upgrading the available simulation system.
Among the difficult design questions that only experiments can answer is the
appropriate extent of automated support to decision making. What decision-
making functions should be automated or supported, and to what degree? The
answers may not be obvious, and the critical factors involved in the choice can
be subtle. We find that decision automation (or partial automation) is very
important to alleviating the cognitive load in network-enabled command. Yet
it requires a careful exploration to determine the exact place and form that the
automation should take. For example, we found that operators responded in
dramatically different ways to two nearly identical decision aids, one for
automation of fire decisions and another for automation of BDA missions.
Counterintuitively, the former became highly popular, while the latter was
utterly rejected.
To the extent that our limited experience can be generalized, much depends
on the cognitive cost-benefit ratio. A decision aid's intervention tends to be
more successful when the human operator can either simply reject it or read-
ily transform it into an automated action, with no further complications to
his cognitive processes. Not surprisingly, the higher the cognitive load, for
example in high-tempo operations or when a part of the command cell is out
of action, the greater is the use of low-overhead decision aids. An important
consideration is the degree of control that operators have over the decision-
support tool. For example, because many of the CSE’s decision aids use rules,
we find that operators must be given the means to easily modify and adapt the
rules to their preferences.
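The point about operator-adjustable rules can be illustrated with a toy sketch. The rule fields, thresholds, and action names below are invented for illustration and are not the CSE's actual rule format: the aid proposes an action only when a rule matches, and the operator edits the rule table directly rather than digging into the aid's internals.

```python
# Toy rule-driven decision aid: the rules are plain data the operator can
# inspect and edit at runtime. All fields and values are illustrative only.
rules = [
    {"target_type": "MRL",  "max_range_km": 40, "action": "fire_himars"},
    {"target_type": "tank", "max_range_km": 8,  "action": "fire_atgm"},
]

def recommend(target_type: str, range_km: float):
    """Return the first matching rule's action, or None if no rule fires."""
    for rule in rules:
        if rule["target_type"] == target_type and range_km <= rule["max_range_km"]:
            return rule["action"]
    return None

print(recommend("MRL", 35))        # fire_himars
# The operator tightens the tank engagement range mid-battle:
rules[1]["max_range_km"] = 5
print(recommend("tank", 6))        # None: outside the adjusted range
```

Keeping the rules as editable data, rather than hard-coded logic, is one way to give operators the low-overhead control described above.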
Although building successful decision aids is difficult, their value to a cog-
nitively overloaded commander can be enormous. Our findings stress the
need for additional decision-aid tools in the CSE. One example is a sensor-
coverage management tool. We find that operators exhibit consistent diffi-
culties in knowing what they (or rather their sensors, such as a UAV-based
camera) have seen and what they have not. Surprisingly often, a commander
would believe—incorrectly and disastrously for his forces—that an area was
adequately reviewed by his sensors and was devoid of enemy assets. A tool
could help operators maintain awareness of areas and threats that have been
(or not) seen by various sensors, the capabilities of the sensors, and the time
passed since the sensors visited the area. It could also proactively highlight to
the operator the areas that were inadequately explored or explored too long
ago. On a related note, tools that predict possible actions of the enemy, such
as the RAID system developed by DARPA (Ownby and Kott 2006), can alert
a commander to potential threats he may not have considered.
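In outline, such a coverage tool might keep a last-observed timestamp for each terrain cell and surface the cells that were never seen or were seen too long ago. The grid cells, timestamps, and staleness threshold below are illustrative assumptions, not a fielded design.

```python
def stale_cells(grid_cells, last_seen, now, max_age):
    """Cells never observed, or last observed more than max_age ago."""
    return sorted(
        cell for cell in grid_cells
        if cell not in last_seen or now - last_seen[cell] > max_age
    )

cells = {"A1", "A2", "B1", "B2"}      # terrain grid (illustrative)
last_seen = {"A1": 100, "B1": 140}    # minutes into the operation
print(stale_cells(cells, last_seen, now=150, max_age=30))
# ['A1', 'A2', 'B2']: A1 was seen 50 minutes ago, A2 and B2 never
```

A proactive version might re-run this check on a timer and highlight the flagged cells on the operator's map.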
The necessity and difficulty of developing decision aids point to a yet more
challenging and overarching question—the nature of relationships between
the tools of command and the human mind of commanders. There is a well-
respected genre of literature dedicated to the history and the relationships of
technology and warfare (Boot 2006). A common theme of such works is the
assertion that technology is important but generally subordinate to other non-
technological factors, such as tactics and training. In other words, technology
is a collection of physical things, tools, artifacts, and as such it is entirely dis-
tinct and different from tactics, techniques, procedures, education, training,
and other things that exist in the human mind.
This view is misleading. More insightful definitions of technology stress
that technology is not a collection of tools, but rather a know-how of tech-
niques and processes. To explain how this applies to military technology, let
us digress into a historical example of military technology. Consider tercio, a
successful sixteenth-century invention of the Spanish military (Oman 1937). A
formation of about 1,500 to 3,000 soldiers, it was composed of several mobile
groups of musketeers and a square of pikemen. Combining firepower, the
stability of heavy infantry, and the discipline of its well-trained professional
soldiers, tercio was highly effective for over a century.
Even though the primary technical implements of tercio were the pike and
the musket, it would be misleading to identify them as the technology of
tercio. The know-how definition of technology is much more useful. The
tercio was a technology system, and its effectiveness as a technology was a
product of the collective know-how of its soldiers and commanders: how to
make and use pikes and muskets, how to form and operate the solid square
and the mobile teams of musketeers, how to maintain discipline and control
fear in the face of danger, and how to position and move the tercio. It was the
systemic know-how embodied in an integration of the weapons’ hardware
and the so-called software of human minds that constituted the technology
of tercio. It was not merely the pike and the musket.
Similarly, the technology of battle command is not its technical compo-
nents—a network or a computer or battle-planning software. Instead, it is
the collective know-how of the battle-command embodied in an integrated
whole: tools with their hardware and software, and human minds with their
techniques, procedures, and training. The oft-repeated arguments that dif-
ferentiate military technology from tactics and training are misleading. The
latter is a part of the former; they are an inseparable whole.
This digression leads to two practical observations. First, as our MDC2
experience confirms, a useful approach to the development of a battle-
command system should focus on identifying and matching the cognitive
needs of command-cell operators. The identification of such needs should
be a key task of the development process and the focus of well-instrumented,
rigorously designed experiments. The most critical needs are often related to
cognitive limitations and biases, not obvious to developers and rarely known
to the operators themselves (and sometimes even denied by them). For this
reason, the true needs are best determined by experiments and not by com-
pilations of preconceived requirements. It helps to think about the human
mind not as a user of battle-command technology, but rather as an intrin-
sic part of such technology—certainly a unique and precious part, but a part
nevertheless. The rest of the technology must be built around this unique,
predefined component in a way that carefully matches its special strengths
and weaknesses.
The second observation is the importance of training as an intrinsic part
of a battle-command technology. All too often, training is designed around
the requirements of a tool. Instead, battle-command tools should be designed
around the requirements and constraints of training and trainees. The heavy
cognitive load of network-enabled warfare is one factor that amplifies con-
cerns about training requirements. Another concern that became apparent in
the course of the MDC2 program was the disproportionate decision-making
load—and commensurate requirements for skills and training—on the rela-
tively junior commanders and staffs of company-sized units. With the disper-
sion and relative independence of such units, and their complex set of assets,
the relatively junior commander is responsible for a greater tempo and com-
plexity of decisions than his more senior superiors at the battalion and brigade
levels.
Clearly, a new level of attention to designs and tools for training becomes
mandatory. For example, the training system may focus more specifically on
mitigating known cognitive limitations. Instrumentation and measurements
similar to those employed in system development, such as used for situation
awareness, can help measure the special needs and progress of an individ-
ual trainee. Unlike the laboratory-based stationary mock-ups of command
vehicles used in MDC2, the trainees may benefit from a combination of a
command cell’s live operation with realistically simulated behaviors of other
assets.

THE CHALLENGES OF NETWORK-ENABLED COMMAND


Even with the advanced tools provided by the CSE, commanders and staff
find it difficult to acquire and maintain adequate situation awareness. Mis-
interpretations of the available information, dismissal or inattention to the
available information, and failure to collect the most critical information with
the available sensing assets—all these are consistent tendencies of the MDC2
commanders and staff, sometimes with catastrophic results to the Blue force.
In view of the extremely strong role played by situation awareness in the
success of a battle, this difficulty deserves a great deal of attention.
Our experimental data suggest that situation awareness—as measured quan-
titatively using the instrumentation and techniques described in chapter 5—is
the most influential factor in determining the success of a mission. More
precisely, the critical factor is the difference between the situation awareness
of the Blue command and the situation awareness of the Red command. With
a greater positive difference, the Blue has greater chances of winning. Even
the temporal dynamics of situation awareness are very influential. When the
Blue force fails to develop a positive advantage in situation awareness—usually
due to an unsuccessful counterreconnaissance battle—its inadequate situation
awareness enters a self-reinforcing cycle that is rarely reversed.
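One hypothetical way to operationalize this finding is to track the Blue-minus-Red awareness differential over time and flag the point where Blue's advantage is lost for a sustained stretch. The window length and the sample values below are invented for illustration, not taken from the experiments.

```python
# Hypothetical monitor for the Blue-minus-Red situation-awareness
# differential; flags when Blue loses its advantage for several
# consecutive samples (the window of 3 is an arbitrary illustration).
def advantage_lost(blue_sa, red_sa, window=3):
    """First sample index at which the differential has been <= 0
    for `window` consecutive samples, or None if it never happens."""
    run = 0
    for i, (b, r) in enumerate(zip(blue_sa, red_sa)):
        run = run + 1 if b - r <= 0 else 0
        if run >= window:
            return i - window + 1
    return None

blue = [0.6, 0.55, 0.45, 0.40, 0.35, 0.30]
red  = [0.3, 0.40, 0.50, 0.55, 0.60, 0.65]
print(advantage_lost(blue, red))   # 2: the differential turns negative and stays
```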
What, then, are the challenges of acquiring situation awareness? While the
full answer must await further research, some culprits are fairly apparent. Part
of the blame can be placed on the CSE tools, especially on the means of pre-
senting the information to the operators. Further work on such tools must
focus on more meaningful and insightful presentations than merely the dis-
play of icons on the map. More important, however, seem to be the operators’
psychological biases. Learning, for example, can be a double-edged sword.
Having noticed a pattern of behavior displayed by the Red force in an earlier
war game, the commander tends to “recognize” it in the current situation.
With a creative, intelligent Red commander, however, the recognition is not
always helpful. Instead, it can lead to an erroneous assessment of the Red
situation or, worse yet, into a deception trap. Once a hypothesis is formed,
the commander is reluctant to abandon it and tends to ignore or rationalize
contradicting evidence. Also of note is the common and apparently uninten-
tional tendency to look for new enemy targets at the expense of assessing the
damage done to an already engaged target.
Of a somewhat similar nature is what appears to be the near-obsessive behav-
ior of a commander who eagerly watches for a change to appear on his screen—
often a new enemy platform detected by Blue sensors—and then vigorously
explores the new information in every detail and immediately proceeds to issue
related commands. Instead of concentrating on the broader meaning of the
unfolding battle, such a commander is absorbed in a potentially insignificant
detail. Experiment observers refer to such behavior as missing the forest for
the trees. In a related form of this behavior, the commander is paralyzed by an
endless cycle of hunting for new information—as an enemy asset is detected,
he calls for additional reconnaissance of the area while delaying any action, and
so on. While only a few commanders exhibit such behaviors consistently, most
succumb to them on occasion.
Other challenges have more obvious and rational causes. For example, an
important requirement for a force reliant on standoff engagements is to syn-
chronize maneuver and information collection: the maneuver assets should
not move into an area until it has been properly explored by sensing assets,
such as UAVs or unmanned ground sensors, and cleared of enemy assets if
any are found. Although the concept is fully understood by the command-
cell operators, the proper execution of such synchronization turns out to be
surprisingly difficult. In some cases, commanders slow down the force exces-
sively in order to acquire more information, making it vulnerable to the Red’s
long-range fires or dismounted attacks. In other cases, the commander rashly
moves his force into an enemy-occupied territory and stumbles unintention-
ally into a direct-fire fight. Both failures can be exhibited by the same com-
mander and even in the same war game. A big part of the cause here is simply
the difficulty of assessing the time required to collect the information, given
the nature of the area, the enemy assets, and available sensors. Additional
tools and focused training may cure this problem.
In addition to individual biases and limitations, the command cell’s opera-
tion as a team presents its own complications (Kott 2007). Conventional
allocation of functions between the cell members is not necessarily optimal
and may require adjustments as the battle unfolds. In particular, information
management (including collection management and BDA) consumes a much
greater fraction of the command cell’s efforts than in a conventional force.
Notably, commanders find that BDA has emerged as a
highly demanding task, critical to proper situation awareness. Without
effective BDA, the force slows down (becoming vulnerable) and engages a
target multiple times (wasting assets). Generally, in our experiments, the
fires manager carries the lightest cognitive load, followed by the maneuver
manager and then the information manager. The commander
himself often dedicates at least half of his time to information management as
well. Experiments confirm that the CSE permits the commander to arrange
alternative allocations of responsibilities, depending on situation and staff
attrition, even during the fight. However, such a reorganization can be con-
fusing when it has to be done in the midst of a high-tempo action.
With the appropriate allocation of responsibilities and supported by effec-
tive collaboration tools—instant messaging, multiple radio frequencies, shared
displays, graphics overlays, and a shared whiteboard—command cells can
support each other both within a given cell and between cells. Experiments
show many remarkable examples when collaboration enables widely dispersed
small units to effectively support each other by sharing information, sensor
resources, fires, and long-range munitions. Unfortunately, many other exam-
ples are less encouraging. In many cases, collaboration either distracts the
commander from making critical decisions or induces him into accepting an
erroneous understanding of the situation. The network-enabled command
environment also seduces the staff and commanders into an excessive amount
of discussion, often to the detriment of their direct responsibilities. Although
our number of observations is not adequate for statistically significant conclu-
sions, we find that a large fraction, possibly even a majority, of intercell col-
laborations result in diminished situation awareness for at least some of the
collaborators. Clearly, the benefits of collaboration come with a substantial
cost. To minimize such costs, future command cells will need specialized
training, discipline, and protocols for collaborating. These should guide the
proper frequency, conditions, and modes of collaboration.
Still, with all the organization, training, procedures, and tools, battle-
command technology cannot and will not produce perfect situation awareness.
This will remain the inescapable fact of warfare as long as it involves an intel-
ligent enemy who works hard to disguise the situation from his opponents.
Not only is perfect situation awareness impossible, it is also unnecessary.
Recall that we find the key determinant of success is not the absolute level
of situation awareness but the difference between the Red and Blue situation
awareness levels. A modest measure of situation awareness suffices when the
enemy is left with an even smaller measure.
Critics of network-enabled warfare sometimes lampoon the concept by
arguing that it relies on an impossibility—perfect intelligence (Kagan 2003).
The argument is fallacious. Network-enabled warfare neither requires nor
relies on perfect intelligence. In modern warfare, the fog of war is bound
to grow thicker, and a key contribution of network-enabled approaches
should be to enable operations under conditions of greater, not lower, lev-
els of uncertainty in battlespace intelligence. The proliferation of technology
and the gradual reduction in the technology gap between the United States
and its adversaries, the urbanization of combat, the growing sophistication
of irregular warfare practiced by the adversary, information warfare, and the
rigorous adherence to the laws of war by U.S. forces—all these contribute to
the thickening of the fog.
More disturbing than the relatively low level of achievable situation aware-
ness is the poor ability of commanders to self-assess their situation awareness.
In our experiments, we find limited correlation between the actual situation
awareness and the commander’s perception of his situation awareness. In
some cases, the commander gloomily worries about unknown dangers while
in fact possessing a nearly perfect picture of the enemy situation. In other
cases, with a grossly misunderstood situation, the commander marches con-
fidently into an ambush. Self-awareness seems even harder than the aware-
ness of the enemy. Can some tools help in this matter? It appears doubtful.
Is it possible that some yet unknown type of training will help? There is a
particularly troubling possibility: what if the very nature of network-enabled
command—with its massive flows of information, vivid displays, and chal-
lenged cognition—leads the commander to reduced self-awareness?
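The finding about self-assessment can be stated quantitatively as a weak correlation between measured and self-reported awareness across war games. The sketch below uses invented numbers, not MDC2 data, purely to show the computation.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

measured  = [0.9, 0.3, 0.7, 0.4, 0.8, 0.2]   # instrumented SAc per game (invented)
perceived = [0.5, 0.6, 0.4, 0.7, 0.6, 0.5]   # commander's self-rating (invented)
print(f"r = {pearson(measured, perceived):.2f}")   # r = -0.30
```

A value near zero, or even negative as in this invented sample, would mean the commander's confidence tracks his actual awareness poorly.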
Such doubts aside, developments in battle-command technologies can
help the commander cope with the cognitive challenge, even if they cannot
eliminate it. To explain this point, let us resort to another historical analogy—
armored warfare. From the fifteenth-century battlewagons of Jan Zizka (see
Oman 1960) to the present-day development of the Future Combat System,
the struggle to provide warfighters with greater protection and lethality
always demands greater weight and propulsion power, which in turn strains
mobility and logistics. The progress of technology helps us reach increasingly
better compromises between these conflicting demands but cannot eliminate
the underlying conflict itself. Similarly, giving warfighters greater situation
awareness demands greater flows of inevitably foggy information, which in
turn taxes the commander’s cognitive processes at the expense of decision
making. Although these conflicts cannot be eliminated, better technologies—
in the broad sense of know-how—can and will help us find increasingly more
effective compromises.
Appendix: Terms, Acronyms,
and Abbreviations

The terms here are defined in the way they are used in this book, which may
differ from the usage accepted elsewhere.
Some of the terms and abbreviations describe the systems used by the
hypothetical Red and Blue forces in our experiments. In the experimental
war games, the equipment of the Blue force was partly inspired by—but not
identical to—the U.S. Army FCS family of systems. More information on
FCS-related systems can be found at the FCS Web site (http://www.army.
mil/fcs/). Also see the 2005 FCS Briefing Book at http://www.boeing.com/
defense-space/ic/fcs/bia/041029_2005flipbook.html.
The equipment of the experimental Red force was usually modeled as
upgrades of existing non-U.S. systems. Below, in describing such systems, we
often refer to a comparable modern, currently existing system. The reader
can find more information regarding the modern weapon systems at Web
sites such as Wikipedia (www.wikipedia.org), Globalsecurity
(www.globalsecurity.org), and FAS (www.fas.org).

AAEF: Air Assault Expeditionary Force
AAR: After Action Review
Abrams: the main battle tank used by the U.S. military
ADA: Air Defense Artillery
AGM: Attack Guidance Matrix
AIB: Azeri Islamic Brotherhood (a fictional organization)
airstrike: an attack by airborne assets on an enemy ground position
AO: Area of Operations

ARI: Army Research Institute
ARK-1: see Ricebag ARK-1M
ARV-A(L): Blue Armed Robotic Vehicle for Assault (light); a robotic, wheeled, light
armored platform used by CAU for direct-fire support to infantry, with XM307
gun and multiple Javelin missiles, with acoustic, DVO, IR, LRF sensors; a hypo-
thetical system partly inspired by the eponymous system of FCS
ARV-RSTA: Blue Armed Robotic Vehicle for Reconnaissance, Surveillance, and Target
Acquisition; a robotic, wheeled, light armored platform used by the CAU for remote
reconnaissance and BDA, with XM307 gun, with acoustic, GSR, DVO, IR, LRF
sensors; a hypothetical system partly inspired by the eponymous system of FCS
asset: here an entity or a group of entities that serve significant military purpose (e.g.,
a tank, a soldier, a battalion)
ASTAMIDS: Airborne Standoff Mine Detection System
ATD: Automatic Target Detection
attentional: pertaining to human attention
battle cruiser: lightly armored but heavily armed warship
battlespace: the entire environment in which a battle unfolds, including terrain and
airspace
BCSE: Battle Command Support Environment
BCT: Brigade Combat Team
BDA: Battle Damage Assessment
BDAGM: Battle Damage Assessment Guidance Matrix
BFA: Battlefield Functional Area
Blue: refers to the friendly force
BM: Battle Manager
BSM: Battlespace Manager
Bradley: a tracked armored infantry carrier vehicle used by the U.S. military
BRDM-2: here a hypothetical upgrade of the modern Russian BRDM-2; Red light-
armored close-combat amphibious recon vehicle with DVO, IR, and radar sensors
brigade: here a military unit of several thousand soldiers, including several CATs
BTRA: Battlespace Terrain Reasoning and Awareness
C2: Command and Control
C2V: Blue Command and Control Vehicle is wheeled, fast, lightly armored and carries
the commander and staff (four people plus the driver and the gunner), C2 comput-
ers and communication equipment onboard, DVO, IR, LRF and acoustic sensors,
XM307 gun; inspired by the C2V of the FCS program
C4ISR: Command, Control, Communications, Computers, Intelligence, Surveillance,
and Reconnaissance
CA/CB: Counterartillery/Counterbattery Radar
CAT: Combined Arms Team
catastrophic-kill: condition of an asset in which the asset has no useful combat func-
tions
CAU: Combined Arms Unit
CCIR: Commander’s Critical Information Requirement
CDR: Commander
CEF: Collaboration Evaluation Framework
cell: in this book, a command cell is a group of human decision makers (a commander
and his staff) that collectively command a unit
CEP: Circular Error Probable
CFLCC: Coalition Forces Land Component Commander
CFR: Counter-fire Radar
CIM: Collective Intelligence Module
CJMTK: Commercial/Joint Mapping Tool Kit
CL I UAV: Blue Class I UAV provides RSTA to the dismounted warfighter; man-
portable, short-endurance, operates in complex urban and jungle terrains with a
vertical takeoff and landing capability; inspired by the eponymous system of FCS
CL II UAV: Blue Class II UAV has greater endurance and capabilities than Class I,
with vertical takeoff and landing capability; used at CAU for reconnaissance, secu-
rity/early warning, target acquisition and designation; carried by warfighters or on
a vehicle; inspired by the eponymous system of FCS
CL III UAV: Blue Class III UAV has range and endurance to support RSTA at CAT and
CAU echelons, with capabilities of the Class I and Class II UAVs but also serves for
communications relay; mine detection; chemical, biological, radiological and nuclear
detection; and meteorological survey; inspired by the eponymous system of FCS
CL IV UAV: Blue Class IV long-range, long-endurance UAV carries COMINT,
ELINT, MTI/SAR, or FOPEN SAR sensors; generally belongs to echelons above
CAT; inspired by the eponymous system of FCS
COA: Course of Action
COL: Colonel
COMINT: Communications Intelligence
communications-kill: condition of an asset in which the asset is unable to communi-
cate but can perform other functions
CONOPS: Concept of Operations
COP: Common Operating Picture
counterfire: fire intended to neutralize or destroy enemy weapons, often in response
to enemy fire
CPOF: Command Post of the Future
CS: Collaboration Server
CSE: Commander Support Environment
CTA: Counter-battery Target Acquisition
DARPA: Defense Advanced Research Projects Agency
Darya: Red self-propelling air-defense artillery system, with 35 mm cannon and
surface-to-air missiles, tracked, medium armored; a hypothetical upgrade of the
modern Russian 2S6 Tunguska
DCSINT: Deputy Chief of Staff for Intelligence
deployability: the ease with which an asset can be moved over a significant distance
DI: Dismounted Infantry
DIL: Dismounted Infantry Leader
DIU: Dismounted Infantry Unit
DoD: Department of Defense
Draega: Red heavily armored main battle tank, 152 mm gun, 30 mm gun, ATGMs,
DVO and IR sensors; a hypothetical upgrade of the modern Russian T-90
Draega Decoy: Red static decoy that emulates the Draega system visually and by
electronic and heat emissions
DSS: Decision Support System
DVO: Direct View Optics
ELINT: Electronic Intelligence
EM: Effects Manager
entity: here a physical thing within the battlespace such as a tank, a soldier, a house,
a civilian person
EO: Electro-optical
ERDC: the U.S. Army Engineer Research and Development Center
exfiltrate: to move from enemy-held or hostile areas to areas under friendly control
exploitation: transformation of raw data into useful information
F-117A: a stealth ground-attack aircraft used by the U.S. military
FCS: Future Combat System
firepower-kill: condition of an asset in which the asset is unable to fire but can per-
form other functions
fireteam: a small military unit, usually of four or fewer warfighters
first detect: the initial acquisition of a target
FLOT: Forward Line of Troops
FOPEN: Foliage Penetrating Radar
FRAGO: Fragmentary Order
FTTS-MS: Blue supply carrier, Future Tactical Truck System for Maneuver Sustain-
ment, wheeled, unarmored, can resupply robotic vehicles, with XM307 gun and
acoustic, DVO, IR, LRF sensors; a hypothetical system partly inspired by the epon-
ymous system of FCS
Garm: Red infantry combat vehicle; tracked, heavy-medium armor, 45 mm cannon,
ATGMs, DVO and IR sensors; a hypothetical upgrade of the modern Russian
BTR-T
GCM: Graphical Control Measure
GDTA: Goal-Directed Task Analysis
GPS: Global Positioning System
GSR: Ground Surveillance Radar
GSTAMIDS: Ground Standoff Mine Detection System
HHQ: Higher Headquarters
HIMARS: Blue High Mobility Artillery Rocket System; wheeled, unarmored, carries
and fires multiple rockets of several types at a long distance in automatic or manual
mode; operated by a crew of three; a hypothetical extension of the currently exist-
ing system (see http://www.army-technology.com/projects/himars/)
HPT: High Payoff Target
HQ: Headquarters
HUD: Heads-Up Display
ICV: Blue Infantry Carrier Vehicle; tracked, light-armored and carries up to a squad
(nine persons) of warfighters, communications and C2 equipment, XM307 gun,
with acoustic, target ranging, LRF, IR and DVO sensors; inspired by the epony-
mous system of FCS
IED: Improvised Explosive Device; Red concealed stationary mines of various designs, antipersonnel and antiarmor, activated by a remote observer or by autonomous sensors
Igla MRL: Red multiple rocket launcher; a hypothetical upgrade of the modern
Russian 9A52 Smerch
IK: Interface Knowledge
IM: Information (or Intelligence) Manager
INET UGS: Internetted Unattended Ground Sensor
IPB: Intelligence Preparation of the Battlefield
IR: Infrared
ISR: Intelligence, Surveillance, and Reconnaissance
IUGS: hypothetical Internetted Unmanned Ground Sensor Systems; used by both Red and Blue forces for surveillance and target detection
JROC: Joint Requirements Oversight Council
KPP: Key Performance Parameters
LAM: Blue Loitering Attack Munition; a missile that can attack light-to-heavy
armored targets at great distances, loiter over the battlefield for a significant period
of time before detecting and attacking the target, and be redirected in flight; a
hypothetical system
LD: Line of Departure
LER: Loss Exchange Ratio
LOC: Location
logistics: planning and carrying out the movement and maintenance of military forces
LOS: Line of Sight
LOS weapon: a weapon effective only against an enemy located within line of sight
from the weapon
LRF: Laser Range Finder
LRS: Long Range Surveillance
LS: Launch System
LTC: Lieutenant Colonel
MAJ: Major
MCS: Blue Maneuver Combat System; a robotic, tracked, light-armored platform with 105 mm MRAAS and XM307 guns for close-combat fire, with acoustic, GSR, DVO, IR, LRF sensors; a hypothetical system
MDC2: Multicell and Dismounted Command and Control
MDMP: Military Decision-Making Process
MDT: Most Dangerous Targets
METT-TC: Mission, Enemy, Troops, Terrain, Time, and Civilians
MMR: Blue Multi-Mission Radar for counterfire, target acquisition, and air defense
on robotic, wheeled, lightly armored vehicle with XM307 gun; a hypothetical
extension of the current developmental MMR system
mobility: the ability of an asset to move in space and to overcome obstacles
mobility-kill: condition of an asset in which the asset is unable to move but can per-
form other functions
MOE: Measures of Effectiveness
Mohajer: Red reconnaissance UAV; twin-boom with pusher engine and DVO sen-
sors; a hypothetical upgrade of the modern Iranian Mohajer 4
MoM: Measures of Merit
MOP: Measures of Performance
MPERM: Multi-Purpose Extended Range Munition
MRAAS: Multi-Role Armament and Ammunition System; a 105 mm lightweight gun for firing a broad range of munitions; a hypothetical system partly inspired by the eponymous system of FCS
MRB: Motorized Rifle Battalion
MTI: Moving Target Indicator
MTLB ELINT/COMINT: Red electronic combat system; hypothetical equipment
carried on the modern Russian MTLB tracked, light-armored vehicle
MULE: Blue Multifunctional Utility/Logistics and Equipment; robotic wheeled vehi-
cle supports dismounted operations by carrying warfighters’ equipment; can also
carry countermine sensors GSTAMIDS and a minefield breaching plow; a hypo-
thetical system partly inspired by the eponymous system of FCS
multicell: refers to a force commanded by multiple command cells
NAI: Named Area of Interest
NCW: Network-Centric Warfare
NKILO: Nagorno-Karabakh Internal Liberation Organization (a fictional organization)
NLOS: Non-Line of Sight
NLOS-C: Blue Non-Line of Sight Cannon; a robotic, light-armored, tracked plat-
form with 155 mm howitzer for LOS and NLOS fires, with GSR, DVO, IR, LRF
sensors; a hypothetical system
NLOS-LS: Blue Non-Line of Sight Launch System; a robotic, wheeled unarmored
vehicle, carries the launch unit with multiple missiles including PAMs and LAMs;
a hypothetical system
NLOS-M: Blue Non-Line of Sight Mortar; a robotic, tracked, light-armored platform with 120 mm mortar for short-range NLOS fires, with acoustic, GSR, DVO, IR, LRF sensors; a hypothetical system
NLOS weapon: a weapon effective even against an enemy that is not located within
line of sight from the weapon
Nona: Red self-propelled 120 mm mortar, wheeled, light-armored; a hypothetical
upgrade of the modern Russian 2S23 Nona-SVK
NTC: National Training Center
O&O: Organizational and Operational Plan
OAEF: Operation Enduring Freedom
OBJ: Objective
OIF: Operation Iraqi Freedom
OneSAF: One Semi-Automated Force
OODA: Observe, Orient, Decide, Act
OPFOR: Opposing Force
OPORD: Operations Order
ORD: Operational Requirements Document
Orel: Red reconnaissance and combat vehicle; tracked, heavy-medium armor, 45 mm
cannon and ATGMs, DVO and IR sensors; a hypothetical upgrade of the modern
Russian BRM-3
OSD: Office of the Secretary of Defense
OTB: OneSAF Testbed Baseline
overwatch: the state of one unit supporting another while executing fire and move-
ment tactics
PAM: Blue Precision Attack Munition; a missile capable of attacking heavy-armor
and other targets many kilometers away with great precision; can be redirected in-
flight; a hypothetical system
PEO: Program Executive Office
PL: Platoon or Platoon Leader; alternatively, Phase Line
platform: here a military vehicle, ground based or airborne, capable of carrying war-
fighters, weapons, or sensors
PM: Project Manager
PSE: Platform Support Environment
Purga Decoy: Red static decoy that emulates the Purga system visually and by elec-
tronic emissions
Purga: Red self-propelled, tracked, medium-armored 150 mm howitzer; a hypotheti-
cal upgrade of the modern Russian 2S19 Msta
R&D: Research and Development
R&S: Reconnaissance and Surveillance
RDEC: Research and Development Center
reasoner: a software component that performs functions reminiscent of human rea-
soning
recon: reconnaissance
Red: refers to an enemy force
RedSEM: Red Sensor Effects Module
resupply: replenishing stocks in order to maintain required levels of supply
retasking: assigning a new or modified task to an asset
RFA: Restricted Fire Area
Ricebag ARK-1M: Red counterbattery artillery-locating radar on a tracked, medium-
armored platform; a hypothetical upgrade of the modern Russian ARK-1M Rys
ROE: Rules of Engagement
RPG: Rocket-Propelled Grenade
RPG-22: Red infantry’s and insurgent’s man-portable rocket-propelled grenade
launcher; a hypothetical upgrade of the modern Russian RPG-22
RSTA: Reconnaissance, Surveillance, and Target Acquisition
SA: Situation Awareness
SA-13: Red mobile, short-range, low-altitude air-defense surface-to-air missile system,
tracked, medium armored; a hypothetical upgrade of the modern Russian 9K35
Strela-10
SA-15 Decoy: Red static decoy that emulates the SA-15 system visually and by elec-
tronic emissions
SA-15: Red mobile, low-to-medium-altitude air-defense surface-to-air missile system,
tracked, medium armored; a hypothetical upgrade of the modern Russian 9K330
Tor
SA-18: Red man-portable surface-to-air missile system; a hypothetical upgrade of the
modern Russian 9K38 Igla
SAc: Situation Awareness: Cognitive
SAF: Semi-Automated Forces
SAGAT: Situation Awareness Global Assessment Technique
SAM: Surface-to-Air Missile
SAR: Synthetic Aperture Radar
SASO: Stability and Support Operation
SAt: Situation Awareness: Technical
SCT-TM: Scout Team
SEM: Sensor Effects Module
sensor: a device that responds to a stimulus, such as heat or light, and generates a
signal
SINCGARS: Single Channel Ground and Airborne Radio System
SK: System Knowledge
SME: Subject Matter Expert
SOF: Special Operations Forces
SP: Self-Propelled
SPF: Special Purpose Forces
SSA: SAR Search Area
SSE: Soldier Support Environment
staff: assistants to the commander who process information and may, under the
guidance of the commander, make decisions and issue commands to subordinate
assets
STRI: Simulation, Training, and Instrumentation
subgoal: a goal that has to be met in order to accomplish a broader goal
SUGV: Blue Small Unmanned Ground Vehicle; robotic, tracked, unarmored, man-
portable, controlled by an infantry squad, mounts and dismounts the ICV, used for
reconnaissance with acoustic, DVO, IR and LRF sensors; inspired by the epony-
mous system of FCS
SVS: Soldier Virtual System
symbology: a set of symbols and the rules for the use of the symbols
TacFire: an automated artillery fire direction system used at one time by the U.S.
military
targetable: refers to an enemy asset that meets the specified requirements for designa-
tion as a target
tasking: assigning a task to an asset
testbed: a system in which experimental tools and products may be deployed and
allowed to interact
threat: here an enemy asset
TM: team
TOC: Tactical Operations Center
TRAC: TRADOC Analysis Center
TRADOC: Training and Doctrine Command
TTPs: Tactics, Techniques, and Procedures
UAV: Unmanned Aerial Vehicle
UE: Unit of Employment; a division-like military organization that includes several
BCTs
UGS: Unattended Ground Sensor
UGV: Unmanned Ground Vehicle
unit: here a military organization such as a CAU, a CAT, or a brigade
Ural: Red supply-carrier truck; a hypothetical upgrade of the modern Russian Ural
4320–31
U.S.: United States
VBIED: Vehicle-Borne Improvised Explosive Device; Red IED concealed in a civilian vehicle, stationary or moving, and activated by a remote observer, autonomous sensors, or the suicide driver
VDSF: Viecore Decision Support Framework
VSE: Vehicle Support Environment
warfighter: an armed forces member engaged in combat against an enemy force
warfighting: actions against an enemy force
war game: a simulation (manual or computerized) of a contest between opposing forces
XM307: Blue 25 mm, belt-fed grenade machine gun with laser range finder and
day/night sight; can be robotically operated, lethal against personnel and lightly
armored vehicles, man-portable, able to reach into foxholes, behind rocks and
walls; a hypothetical improvement of the current developmental XM307 system
Acknowledgments

Not only is this book largely inspired by two research programs, but it also bor-
rows heavily from the programs’ reports and archives. This places the authors
in heavy debt to a very large number of people who conceived the programs,
built experimental systems, conducted experiments, analyzed the data, and
generated many of the ideas we attempted to present in this work. The authors themselves also served as leaders, contributors, and participants in the many activities that are the basis for this book.
Unfortunately for the authors, a policy of the U.S. Department of Defense
restricts our ability to mention by name—for obvious reasons—the depart-
ment’s military and civilian personnel who contributed to this program. This
in no way diminishes our gratitude and appreciation of their enormous efforts.
The best we can do in such cases is to mention the organizations for which
these contributors and supporters work.
A number of senior leaders of the U.S. Army deserve deep thanks for
encouraging, motivating, and sponsoring this research. We are able to men-
tion only a few of them, particularly retired generals Eric Shinseki and Kevin
Byrnes. The goals and vision of our work greatly benefited from the expe-
rience and wisdom of James Barbarello, Allan Tarbell, John Gilmore, Paul
Casselburg, General (retired) David Maddox, and Colonel (retired) Greg
Fontenot. Critical operational concepts were provided by Joe Braddock, Lou
Marquet, and James Tegnalia; retired generals Paul Gorman, Paul Funk, and
Huba Wass de Czege; and retired colonels Ted Cranford, Brooks Lyles, Jack
Gumbert, and Dave Redding. Multiple military leaders, managers, and ana-
lysts at the Army TRADOC provided continued sponsorship, guidance, and
liaison with the army development community. Faculty, cadets, and interns of
the USMA provided useful studies. Contributions were also provided by the
Naval Postgraduate School and the Army Research Lab.
Experimental battle command systems and the simulation testbed required
a broad range of technologies. Mark Curry, Diane Oconnor, Tom Ince, Rob
Lawrence, Craig Klementowski, and Digant Modha of the Viecore Federal
Systems Division led the development of the Battle Command Support Envi-
ronment. John Sausman of Lockheed Martin provided Dismounted Infantry
Behaviors software. The U.S. Army Communications-Electronics Research,
Development and Engineering Center’s Information and Intelligence Warfare
Directorate helped us with the Synthetic Aperture Radar Model, while the
Night Vision & Electronic Sensors Directorate supplied the Mine/Counter-
mine Server. Ralph Forkenbrock, Jim Page, Ray Miller, and Mike Dayton of
Science Applications International Corp. greatly enhanced the OTB simula-
tion system and the Driver-Gunner Simulation model. The Army Topographic
Engineering Center provided the crucial Terrain Server. John Huebner and
John Roberts of Atlantic Consulting Services Inc. developed the very useful
C2 Tasking Library. Mark Berry and Jim Adametz of Computer Sciences Corp.
led the development of the Sensor Effects Model.
Integration of such complex systems—and the management of the required
multifaceted engineering efforts—were ably handled by the Army Research,
Development and Engineering Command; the Army Communications-
Electronics Command; and the Army Program Executive Office for Simula-
tion, Training and Instrumentation.
A great fraction of the efforts in this research was dedicated to experiment
design, experiment execution, data collection, and analysis. We are grateful to
Darrin Meek, LeeAnn Bongiorno, and retired colonels Steve Williams and
Todd Sherrill of Applied Research Associates Inc. for their contributions to
the Sensor Coverage Tool and data collection systems; to Don Timian and
Rick Hyde of Northrop Grumman for Experiment 1 and 2 design and collec-
tion plans; to the researchers of Army Research Institute for human factors
performance analysis; to James Hillman and Andrea Kagle of Johns Hopkins
University-Applied Physics Lab for the information exchange requirements
analysis. Other important contributions to the experimental design and analy-
sis have been made by Beth Meinert and Colonel (retired) Robert Chadwick of
the MITRE Corp. and by personnel of the Army Training and Doctrine Com-
mand Analysis Center. The extensive laboratory infrastructure that housed
and supported the experiments was the work of Jim Seward and Manish Bhatt
of David H. Pollack Consulting. Execution of the experiments, particularly
the portrayal of the Red force and the after-action reviews for the Blue force,
were made possible by the talents of retired colonels Darrell Combs and Al
Rose, and their colleagues from Military Professional Resources Inc.
The primary funding for this research has been provided by DARPA and by
the U.S. Army. Happily, we are allowed to mention and to thank the DARPA
leaders and managers who made this work possible: Frank Fernandez, Tony
Tether, Dick Wishner, Ted Bially, David Whelan, and Allan Adler. DARPA
also granted us the permission to use the materials on which this book is
partly based; it has been approved for public release, distribution unlimited.
The work reflected in chapter 4 was supported through Army Research Labo-
ratory’s Advanced Decision Architectures Collaborative Technology Alliance.
Of course, the views, opinions, and findings presented here are those of the
authors and should not be construed as those of any agency or organization
of the U.S. government.
Finally, special thanks to Susan Parks, Scott Fuhrer, James Scrocca, Terry
Stephenson, and Michael Ownby who supported this effort in numerous
ways.
Notes

INTRODUCTION
1. Coevolution of technology and warfare is a topic of many excellent studies.
A recent example is Max Boot, War Made New (New York: Gotham Books, 2006).
2. A highly influential work is David S. Alberts, John J. Garstka, and Frederick
P. Stein, Network Centric Warfare: Developing and Leveraging Information Superiority
(Washington, DC: CCRP, 2000).
3. Adoption of unmanned aerial vehicles by all services of the U.S. military has
been rapid and rather noncontroversial. A readable introductory history is offered in
Laurence R. Newcome, Unmanned Aviation: A Brief History of Unmanned Aerial Vehicles
(Reston, VA: AIAA [American Institute of Aeronautics and Astronautics], 2004).
4. U.S. Army, FCS Web site, http://www.army.mil/fcs/.
5. Discussed in A. Bacevich, The Pentomic Era: The U.S. Army Between Korea and
Vietnam (Darby, PA: DIANE Publishing Co., 1995).
6. Congressional Budget Office, “The Army’s Future Combat Systems Program
and Alternatives,” August 2006, p. XXII.
7. DARPA Web site, http://www.darpa.mil/.
8. Max Boot, War Made New, p. 463.
9. J. Gumbert, T. Cranford, T. Lyles, and D. Redding, “DARPA’s Future Combat
System Command and Control,” Military Review (May–June 2003): 79–84.
10. For the sake of brevity, we will refer to the combination of these two programs
as MDC2.
11. J. Barbarello, M. Molz, and G. Sauer, “Multicell and Dismount Command and
Control—Tomorrow’s Battle Command Environment Today,” Army AL&T (July–
August 2005): 66–71.
12. For the sake of simplicity and brevity, we use he when referring to a com-
mander or a staff member. This is not to imply the gender of the person.

CHAPTER 1
The material in this chapter draws extensively on a report from Carrick Communica-
tions Inc., which has kindly given permission for its use.
1. Histories and analyses of Jutland and its subsequent controversies are legion. A
very readable online summary of the battle is found at http://www.worldwar1.co.uk/jutland.html.
2. Andrew Gordon, The Rules of the Game: Jutland and British Naval Command
(London: John Murray Publisher Ltd., 2000).
3. U.S. Army, “Battle Command,” in 2003 U.S. Army Transformation Roadmap,
http://www.army.mil/2003TransformationRoadmap.
4. Robert Coram, Boyd: The Fighter Pilot Who Changed the Art of War (New York:
Little, Brown and Co., 2002), pp. 327–44.
5. Carl von Clausewitz, On War, ed. Michael Howard and Peter Paret (Princeton,
NJ: Princeton University Press, 1976), pp. 101–2.
6. As one highly regarded military theorist stated, “The purpose of discipline
is to make men fight in spite of themselves.” Charles Ardant du Picq, Battle Studies,
trans. Col. John N. Greely and Maj. Robert C. Cotton, 1921, http://www.gutenberg.org/dirs/etext05/8btst10.txt.
7. For more discussion of this challenge, see, for example, Victor Davis Hanson,
“Discipline,” in Reader’s Companion to Military History, http://college.hmco.com/history/readerscomp/mil/html/mh_015100_discipline.htm.
8. Clausewitz, On War, p. 113.
9. Gordon, The Rules of the Game, p. 21.
10. Clausewitz, On War, p. 101.
11. For a perceptive analysis of two revealing cases of such commander–subordinate
disconnection, see Col. Adolf Carlson, A Chapter Not Yet Written: Information Manage-
ment and the Challenge of Battle Command (Washington, DC: Institute for National
Strategic Studies, 1995), http://www.ndu.edu/inss/siws/ch5.html.
12. Clausewitz, On War, p. 120.
13. Atul Gawande, Complications: A Surgeon’s Notes on an Imperfect Science (New York:
Henry Holt, 2002).
14. Gordon C. Rhea, The Battle of the Wilderness May 5–6, 1864 (Baton Rouge:
Louisiana State University Press, 1994).
15. Quoted in Martin van Creveld, Command in War (Cambridge, MA: Harvard University
Press, 1985), p. 153. Contrast this with Ulysses S. Grant’s view that “the distant rear
of an army engaged in battle is not the best place from which to judge correctly what
is going on in front,” Ulysses S. Grant, Personal Memoirs (New York: Penguin Books,
1999), p. 185.
16. Timothy Lupfer, The Dynamics of Doctrine: The Changes in German Tactical
Doctrine during the First World War (Fort Leavenworth, KS: U.S. Army Command and
General Staff College, 1981). See also van Creveld, Command in War, pp. 183–84.
17. At Spottsylvania in 1864, for example, it prompted a bitter dispute between
Union generals George Meade and Philip Sheridan that finally had to be resolved by
Grant himself. See, for example, Bruce Catton, A Stillness at Appomattox (New York:
Doubleday & Company, 1953), pp. 99–100.
18. Also Russia, but Stalin’s prewar purge of his officer corps largely stifled practi-
cal implementation, as early Soviet defeats demonstrated only too starkly.
19. Ulysses S. Grant, attributed (http://en.wikiquote.org/wiki/Ulysses_S._Grant).
20. Field Marshal The Viscount Slim, Defeat into Victory (Philadelphia: David
McKay Company, 1961), p. 460.
21. TRADOC Pamphlet 525–3–0, The Army in Joint Operations: The Army Future
Force Capstone Concept (Fort Monroe, VA: U.S. Army Training and Doctrine Command,
April 7, 2005).
22. Clausewitz, On War, p. 77.
23. As an NCO in Iraq recently stated, “You know that mission we had all planned
out? That all just went to s—t.” Margaret Friedenauer, “Soldiers Employ Daring
Tactic,” Fairbanks Daily News–Miner, December 21, 2005.
24. van Creveld, Command in War, p. 8.
25. van Creveld, Command in War, pp. 255–56.
26. “The Leadership Legacy of John Whyte,” ARMY, December 2005, p. 64.
27. Army Field Manual 3.0, Operations (Washington, DC: Department of the Army,
June 2001), pp. 4–17. Debate persists about whether the term should be replaced by
orchestrating to diminish what some see as an unhealthy fixation on scheduling.
28. Mark Adkin, The Charge: Why the Light Brigade Was Lost (South Yorkshire, UK:
Leo Cooper, 1996), pp. 125–37.
29. Bill Mauldin, Up Front (New York: W.W. Norton & Co., 2000), p. 225.
31. Col. (Ret.) Gregory Fontenot, E. J. Degen, and David Tohn, On Point: The
United States Army in Operation Iraqi Freedom (Fort Leavenworth, KS: Combat Studies
Institute Press, 2004), p. 220.
31. Clausewitz, On War, p. 119.
32. One of Clausewitz’s modern successors went so far as to argue economy of
force to be the foundation of all other principles of war. See J.F.C. Fuller, The General-
ship of Ulysses S. Grant (Cambridge, MA: Da Capo Press, 1991), p. 18.
33. In one of his less-quoted comments, Moltke warned that an error in initial
deployment might well prove irremediable. He was speaking of operations, but the
problem is no less acute for the tactical commander.
34. Clausewitz, On War, p. 75.
35. Col. David Perkins, “Command Briefing,” May 18, 2003, quoted in Fontenot,
On Point, p. 295.
36. Maj. John Altman, quoted in Fontenot, On Point, p. 284.
37. Charles B. Macdonald and Sidney T. Matthews, Three Battles: Arnaville, Altuzzo,
and Schmidt (Washington, DC: Office of the Chief of Military History, Department of
the Army, 1952), pp. 268–71.
38. For the account prompting the comment, see Correlli Barnett, The Desert
Generals (Bloomington: Indiana University Press, 1982).
39. An excellent treatment is Donald W. Engels, Alexander the Great and the Logis-
tics of the Macedonian Army (Berkeley: University of California Press, 1980).
40. Quoted in Martin van Creveld, Supplying War: Logistics from Wallenstein to
Patton (Cambridge, UK: Cambridge University Press, 1977), p. 232.
41. Fontenot, On Point, pp. 408–9.
42. Chief of Staff, Army Warfighter Conference, Washington, DC, July 1984. The
writer was present.
43. Office of the Inspector General, “No Gun Ri Review” (Washington, DC:
Department of the Army, January 2001).
44. Proportionately, “human wave” attacks during the 1980–1988 Iran–Iraq War
may have come close. See, for example, Efraim Karsh, The Iran-Iraq War 1980–1988
(Oxford, UK: Osprey Publishing Ltd., 2002), pp. 35–36.
45. For a current example, see Sally B. Donnelly, “Long-Distance Warriors,” Time, December 12, 2005.
46. Several accounts of this incident have been published. One of the better dis-
cussions is in Lt. Col. John G. Humphries, “Operations Law and the Rules of Engage-
ment in Operations Desert Shield and Desert Storm,” Airpower Journal, Fall 1992.
47. See, for example, “Pentagon Justifying Incendiary Arms Use,” New York Times,
November 17, 2005.
48. Carlson, A Chapter Not Yet Written.
49. Richard Sparshatt and Col. Nick Justice, “Future Battle Command and Control
System,” http://www.agile-com.net/agile/documents/FC2S9.pdf.
50. 2003 U.S. Army Transformation Roadmap, pp. 2–5.
51. van Creveld, Command in War, p. 261.
52. Stephen Vincent Benet, John Brown’s Body (Cutchogue, NY: Buccaneer Books
Inc., 1986), p. 82.

CHAPTER 2
1. Not a real name. When referring to the future, all names, characters, organiza-
tions, places, and incidents featured in this publication are either the product of the
authors’ imaginations or are used fictitiously.
2. David L. Grange, Huba Wass De Czege, Richard D. Liebert, John E. Rich-
ards, Charles A. Jarnot, Allen L. Huber, and Emery E. Nelson, Air-Mech-Strike: Asym-
metric Maneuver Warfare for the 21st Century, ed. Michael L. Sparks (Paducah, KY:
Turner Publishing Company, 2002).
3. B. Berkowitz, The New Face of War (New York: The Free Press, 2003),
pp. 111–15.
4. One example is David S. Alberts, John J. Garstka, and Frederick P. Stein, Net-
work Centric Warfare: Developing and Leveraging Information Superiority (Washington,
DC: CCRP, 2000).
5. U.S. Army, FCS Web site, http://www.army.mil/fcs/.
6. Congressional Budget Office, “The Army’s Future Combat Systems Program
and Alternatives,” August 2006, pp. 35–39.
7. P.A. Wilson, J. Gordon, and D. E. Johnson, “An Alternative Future Force:
Building a Better Army,” Parameters (winter 2003–2004): 19–39.
8. Congressional Budget Office, “The Army’s Future Combat Systems Program
and Alternatives,” pp. 31–32.
9. Congressional Budget Office, “The Army’s Future Combat Systems Program
and Alternatives,” pp. 40–43.
10. Congressional Budget Office, “The Army’s Future Combat Systems Program
and Alternatives,” pp. 44–45.
11. F. Kagan, “War and Aftermath,” Policy Review, August–September 2003, http://www.hoover.org/publications/policyreview/3448101.html.
12. U.S. Army, Army assessment of Congressional Budget Office study “The Army’s
Future Combat Systems Program and Alternatives,” http://www.army.mil/fcs/.
13. J. Gumbert, T. Cranford, T. Lyles, and D. Redding, “DARPA’s Future Combat
System Command and Control,” Military Review (May–June 2003): 79–84.
14. U.S. Army, TRADOC Pamphlet 525–3-90, O & O, The United States Army
Future Force Operational and Organizational Plan for the Unit of Action (Fort Knox, KY:
Unit of Action Maneuver Battle Lab, December 15, 2004).
15. Boeing, FCS 2005 Flipbook, 2005, http://www.globalsecurity.org/military/library/report/2005/050000-fcs2005flipbook.pdf.
16. J. Barbarello, M. Molz, and G. Sauer, “Multicell and Dismount Command and
Control—Tomorrow’s Battle Command Environment Today,” Army AL&T ( July–
August 2005): 66–71.
17. Caspian Sea Scenarios, http://www.defenselink.mil/news/Apr2002/n04292002_200204293.html.
18. OneSAF Testbed Web site, http://www.onesaf.org/onesafotb.html.
19. U.S. Army Field Manual 3.0, Operations (Washington, DC: U.S. Government
Printing Office, June 2001), pp. 4–10.
20. Martin van Creveld, Command in War (Cambridge, MA: Harvard University
Press), pp. 265–68.
21. Martin van Creveld, “Command in War: A Historical Overview,” in A. Kott,
ed., Advanced Technology Concepts for Command and Control (Philadelphia: Xlibris, 2004),
pp. 33–36.
22. Martin van Creveld, Art of War (London: Cassell, 2000).
23. J. Galbraith, “Organization Design: An Information Processing View,” Inter-
faces 4 (May 1974): 28–36.

CHAPTER 3
1. Gheorghe Tecuci, Building Intelligent Agents: An Apprenticeship Multistrategy Learning Theory, Methodology, Tool and Case Studies (San Diego, CA: Academic Press, 1998), p. 1.
2. OneSAF.org, http://www.onesaf.org/onesafotb.html (accessed October 12, 2006).
3. CJMTK, “What is the CJMTK?” 2006, http://www.cjmtk.com//Faq/FaqMain.aspx#Q1 (accessed September 20, 2006).
4. Michael Powers, “Battlespace Terrain Reasoning and Awareness (BTRA),”
2003, http://www.tec.army.mil/fact_sheet/BTRA.pdf (accessed October 12, 2006).
5. Rich Bormann, “A Decision Support Framework for Command and Control in
a Network Centric Warfare Environment,” technical report (Eatontown, NJ: Viecore,
2006).
6. Haley Systems Inc., “Rete Algorithm,” 2006, http://www.haley.com/281475782021120/brmsoverview/retereport.html (accessed September 20, 2006).
7. Haley Systems Inc., “HaleyRules: Business Rules Engine,” 2006, http://www.haley.com/1548387250868224/products/HaleyRules.html (accessed October 12, 2006).
8. Production Systems Technologies Inc., “Clips/R2,” 2003, http://www.pst.com/clpbro.htm (accessed October 12, 2006).

CHAPTER 4
Bolstad, C. A., and M. R. Endsley. 1999. Shared Mental Models and Shared Displays:
An Empirical Evaluation of Team Performance. Proceedings of the 43rd Annual
Meeting of the Human Factors and Ergonomics Society, Houston, TX, Human
Factors and Ergonomics Society, September 27–October 1, pp. 213–17.
Bolstad, C. A., and M. R. Endsley. 2000. The Effect of Task Load and Shared Displays
on Team Situation Awareness. Proceedings of the 14th Triennial Congress of
the International Ergonomics Association and the 44th Annual Meeting of the
Human Factors and Ergonomics Society, Santa Monica, CA, Human Factors
and Ergonomics Society, July 30–August 4, pp. 189–92.
Bolstad, C. A., and M. R. Endsley. 2003. Measuring Shared and Team Situation Aware-
ness in the Army’s Future Objective Force. Proceedings of the Human Factors
and Ergonomics Society 47th Annual Meeting, Denver, CO, Human Factors
and Ergonomics Society, October 13–17, pp. 369–73.
Bolstad, C. A., J. M. Riley, D. G. Jones, and M. R. Endsley. 2002. Using Goal Directed
Task Analysis with Army Brigade Officer Teams. Proceedings of the 46th Annual
Meeting of the Human Factors and Ergonomics Society, Baltimore, MD, Human
Factors and Ergonomics Society, September 30–October 4, pp. 472–76.
Collier, S. G., and K. Folleso. 1995. SACRI: A Measure of Situation Awareness for
Nuclear Power Plant Control Rooms. In Experimental Analysis and Measure-
ment of Situation Awareness, ed. D. J. Garland and M. R. Endsley, pp. 115–22.
Daytona Beach, FL: Embry-Riddle University Press.
Dyer, J. L., R. J. Pleban, J. H. Camp, G. H. Martin, D. Law, S. M. Osborn, et al. 1999.
What Soldiers Say about Night Operations. In Volume 1: Main Report (No.
ARI Research Report 1741). Alexandria, VA: Army Research Institute for the
Behavioral and Social Sciences.
Endsley, M. R. 1988. Design and Evaluation for Situation Awareness Enhancement.
Proceedings of the Human Factors Society 32nd Annual Meeting, Anaheim,
CA, Human Factors Society, October 24–28, pp. 97–101.
Endsley, M. R. 1990. Predictive Utility of an Objective Measure of Situation Aware-
ness. Proceedings of the Human Factors Society 34th Annual Meeting,
Orlando, FL, Human Factors Society, October 8–12, pp. 41–45.
Endsley, M. R. 1995a. Direct Measurement of Situation Awareness in Simulations
of Dynamic Systems: Validity and Use of SAGAT. In Experimental Analysis
and Measurement of Situation Awareness, ed. D. J. Garland and M. R. Endsley,
pp. 107–13. Daytona Beach, FL: Embry-Riddle University Press.
Endsley, M. R. 1995b. Measurement of Situation Awareness in Dynamic Systems.
Human Factors 37(1): 65–84.
Endsley, M. R. 1995c. Toward a Theory of Situation Awareness in Dynamic Systems.
Human Factors 37(1): 32–64.
Endsley, M. R. 1996. Situation Awareness Measurement in Test and Evaluation. In
Handbook of Human Factors Testing and Evaluation, ed. T. G. O’Brien and S. G.
Charlton, pp. 159–80. Mahwah, NJ: Lawrence Erlbaum.
Endsley, M. R. 2000. Direct Measurement of Situation Awareness: Validity and
Use of SAGAT. In Situation Awareness Analysis and Measurement, ed. M. R.
Endsley and D. J. Garland, pp. 147–74. Mahwah, NJ: Lawrence Erlbaum
Associates.
Endsley, M. R., and C. A. Bolstad. 1994. Individual Differences in Pilot Situation
Awareness. International Journal of Aviation Psychology 4(3): 241–64.
Endsley, M. R., B. Bolte, and D. G. Jones. 2003. Designing for Situation Awareness: An
Approach to Human-Centered Design. London: Taylor and Francis.
Endsley, M. R., and D. J. Garland, eds. 2000. Situation Awareness Analysis and Measure-
ment. Mahwah, NJ: Lawrence Erlbaum.
Endsley, M. R., and W. M. Jones. 1997. Situation Awareness, Information Domi-
nance, and Information Warfare (No. AL/CF-TR-1997-0156). Wright-
Patterson AFB, OH: United States Air Force Armstrong Laboratory.
Endsley, M. R., and W. M. Jones. 2001. A Model of Inter- and Intrateam Situation
Awareness: Implications for Design, Training and Measurement. In New Trends
in Cooperative Activities: Understanding System Dynamics in Complex Environ-
ments, ed. M. McNeese, E. Salas, and M. Endsley, pp. 46–67. Santa Monica,
CA: Human Factors and Ergonomics Society.
Endsley, M. R., and E. O. Kiris. 1995. The Out-of-the-Loop Performance Problem
and Level of Control in Automation. Human Factors 37(2): 381–94.
Endsley, M. R., and M. M. Robertson. 2000. Training for Situation Awareness in Indi-
viduals and Teams. In Situation Awareness Analysis and Measurement, ed. M. R.
Endsley and D. J. Garland. Mahwah, NJ: Lawrence Erlbaum.
Endsley, M. R., S. J. Selcon, T. D. Hardiman, and D. G. Croft. 1998. A Compara-
tive Evaluation of SAGAT and SART for Evaluations of Situation Awareness.
Proceedings of the Human Factors and Ergonomics Society Annual Meeting,
Chicago, Human Factors and Ergonomics Society, October 5–9, pp. 82–86.
Gugerty, L. J. 1997. Situation Awareness during Driving: Explicit and Implicit Knowl-
edge in Dynamic Spatial Memory. Journal of Experimental Psychology: Applied 3:
42–66.
Hockey, G.R.J. 1986. Changes in Operator Efficiency as a Function of Environmen-
tal Stress, Fatigue and Circadian Rhythms. In Handbook of Perception and
Performance, vol. 2, ed. K. Boff, L. Kaufman, and J. Thomas, pp. 44/41–49.
New York: John Wiley.
National Research Council. 1997. Tactical Display for Soldiers. Washington, DC:
National Research Council.
Sharit, J. and G. Salvendy. 1982. Occupational Stress: Review and Reappraisal. Human
Factors 24(2): 129–62.
Strater, L. D., D. Jones, and M. R. Endsley. 2003. Improving SA: Training Challenges
for Infantry Platoon Leaders. Proceedings of the 47th Annual Meeting of the
Human Factors and Ergonomics Society, Denver, CO, Human Factors and
Ergonomics Society, October 13–17, pp. 2045–49.
U.S. Army. 2001. Concepts for the Objective Force. Washington, DC: U.S. Army.

CHAPTER 5
Brownlee, Les, and Peter J. Schoomaker. 2004. “Serving a Nation at War: A Campaign
Quality Army with Joint and Expeditionary Capabilities.” Parameters 34, no. 2
(Summer):18.
Klein, G. A., R. Calderwood, and D. MacGregor. 1989. Critical Decision Method for
Eliciting Knowledge. IEEE Transactions on Systems, Man, and Cybernetics 19(3):
462–72.
Woods, David D. 1993. Process Tracing Methods for the Study of Cognition Outside
of the Experimental Psychology Laboratory. In Decision Making in Action:
Models and Methods, ed. G. Klein, J. Orasanu, R. Calderwood, and C. Zsambok,
pp. 228–51. Norwood, NJ: Ablex Publishing Corporation.

CHAPTER 6
Cheikes, B. A., M. J. Brown, P. E. Lehner, and L. Alderman. 2004. Confirmation Bias
in Complex Analysis. Technical Report No. MTR 04B0000017. Bedford, MA:
MITRE.
Endsley, Mica R. 2000. “Theoretical Underpinnings of Situation Awareness—A Critical
Review.” In Situation Awareness Analysis and Measurement, ed. Mica R. Endsley
and Daniel J. Garland. Mahwah, NJ: Lawrence Erlbaum Associates.
Endsley, Mica R., and D. G. Jones. 1995. Situation Awareness Requirements Analysis for
TRACON Air Traffic Control (TTU-IE-95-01). Lubbock: Texas Tech University.
Endsley, Mica R., Cheryl A. Bolstad, Debra G. Jones, and Jennifer M. Riley. 2003.
“Situation Awareness Oriented Design: From User’s Cognitive Requirements
to Creating Effective Supporting Technologies.” Proceedings of the 47th Annual
Meeting of the Human Factors & Ergonomics Society. Human Factors & Ergo-
nomics Society, Santa Monica, CA, 268–72.
Jones, Debra G., and Mica R. Endsley. 1996. “Sources of Situation Awareness Errors
in Aviation.” Aviation, Space and Environmental Medicine 67(6): 507–12.
Jones, Debra G., and Mica R. Endsley. 2000. “Overcoming Representational Errors
in Complex Environments.” Human Factors 42(3): 367–78.
Miller, Nita L., and Lawrence G. Shattuck. 2004. “A Process Model of Situated Cog-
nition in Military Command and Control.” Paper presented at the Command
and Control Research and Technology Symposium, San Diego, CA, May 2004.
Nickerson, R. S. 1998. “Confirmation Bias: A Ubiquitous Phenomenon in Many
Guises.” Review of General Psychology 2: 175–220.
Woods, D. D., L. Johannesen, R. I. Cook, and N. B. Sarter. 1994. Behind Human Error:
Cognitive Systems, Computers, and Hindsight (CSERIAC SOAR Report 94-01).
Wright-Patterson Air Force Base, OH: Crew Systems Ergonomic Information
and Analysis Center.

CHAPTER 7
Brehmer, B. 1991. Organization for Decision Making in Complex Systems. In Distrib-
uted Decision Making: Cognitive Models for Cooperative Work, ed. J. Rasmussen,
B. Brehmer, and J. Leplat. New York: Wiley and Sons.
Clark, H. H. 1996. Using Language. New York: Cambridge University Press.
Endsley, M. R. 1995. Toward a Theory of Situation Awareness in Dynamic Systems.
Human Factors 37(1): 32–64.
Field Manual 6–0. 2003. Battle Command: Command and Control of Army Forces. Wash-
ington, DC: Headquarters, Department of the Army.
Flake, G. W. 1998. The Computational Beauty of Nature. Cambridge, MA: MIT Press.
Garstka, J., and D. Alberts. 2004. Network Centric Operations Conceptual Framework
Version 2. Vienna, VA: Evidence Based Research.
Katz, D., and R. L. Kahn. 1978. The Social Psychology of Organizations. New York: Wiley.
Klein, G. A. 1999. Sources of Power. Cambridge, MA: MIT Press.
Rasmussen, J., A. Pejtersen, and L. Goodstein. 1994. Cognitive Systems Engineering.
New York: John Wiley and Sons.
Simon, H. A. 1996. The Sciences of the Artificial. Cambridge, MA: MIT Press.
Thompson, J. D. 1967. Organizations in Action. New York: McGraw-Hill.

CHAPTER 8
Bell, J. B., and B. Whaley. 1991. Cheating and Deception. Edison, NJ: Transaction
Publishers.
Evidence Based Research, Inc. 2003. Network Centric Operations Conceptual Frame-
work Version 1.0. http://www.iwar.org.uk/rma/resource/new/new-conceptual-
framework.pdf.
Galbraith, J. 1974. Organization Design: An Information Processing View. Interfaces
4 (May): 28–36.
Janis, I. L. 1982. Groupthink: Psychological Studies of Policy Decisions and Fiascoes. Boston:
Houghton Mifflin Company.
Kahneman, D., and A. Tversky. 1979. Prospect Theory: An Analysis of Decision under
Risk. Econometrica 47(2): 263–92.
Klein, G. 1999. Sources of Power: How People Make Decisions. Cambridge, MA: MIT
Press.
Klein, G. A., R. Calderwood, and D. MacGregor. 1989. Critical Decision Method for
Eliciting Knowledge. IEEE Transactions on Systems, Man, and Cybernetics 19(3):
462–72.
Kott, A. 2007. A Model of Self-Reinforcing Defeat in Command Structures Due
to Decision Overload. In Information Warfare and Organizational Decision-
Making, ed. A. Kott, pp. 135–41. Norwood, MA: Artech House.
Louvet, A-C., J. T. Casey, and A. H. Levis. 1988. “Experimental Investigation of the
Bounded Rationality Constraint.” In Science of Command and Control: Coping
with Uncertainty, ed. S. E. Johnson and A. H. Levis, pp. 73–82. Washington,
DC: AFCEA.
Perrow, C. 1999. Normal Accidents: Living with High-Risk Technologies. Princeton, NJ:
Princeton University Press.
Shattuck, L. G., and N. L. Miller. 2004. A Process Tracing Approach to the Investiga-
tion of Situated Cognition. Proceedings of the Human Factors and Ergonom-
ics Society’s 48th Annual Meeting, New Orleans, pp. 658–62.
Simon, H. 1991. Models of My Life. New York: Basic Books.
Tversky, A., and D. Kahneman. 1974. Judgment under Uncertainty: Heuristics and
Biases. Science 185: 1124–31.
van Creveld, M. 1985. Command in War. Cambridge, MA: Harvard University
Press.
Woods, David D. 1993. Process Tracing Methods for the Study of Cognition Out-
side of the Experimental Psychology Laboratory. In Decision Making in Action:
Models and Methods, ed. G. Klein, J. Orasanu, R. Calderwood, and C. Zsambok,
pp. 228–51. Norwood, NJ: Ablex Publishing Corporation.

CONCLUDING THOUGHTS
Bailey, Tracy A. 2005. “Air Assault Expeditionary Force Tests Technologies.” Army
News Service, December 1.
Boot, M. 2006. War Made New. New York: Gotham Books.
Kagan, F. 2003. War and Aftermath. Policy Review (August–September). http://www.
hoover.org/publications/policyreview/3448101.html.
Kott, A., ed. 2007. Information Warfare and Organizational Decision-Making. Norwood,
MA: Artech House Publishers.
Kott, A., and W. McEneaney, eds. 2006. Adversarial Reasoning: Computational
Approaches to Reading the Opponent’s Mind. New York: Chapman and Hall,
CRC Press.
Oman, C. 1937. History of the Art of War in the Sixteenth Century. New York: E.P.
Dutton.
Oman, C. 1960. The Art of War in the Middle Ages. Ithaca, NY: Cornell University
Press, pp. 152–59.
Wilson, P. A., J. Gordon, and D. E. Johnson. 2004. An Alternative Future Force:
Building a Better Army. Parameters (Winter): 19–39.
Index

Abrams tank, 42 Attention shifting, 151–52, 211


Acute incidents, definition, 153 Audio logs, 132
Adjustment decisions, 198 Automated Guidance Matrix, weapon
Afghanistan, noncombatants in, 28 status, 190
Air Assault Expeditionary Force, 213 Automatic route generation, 87
Air weapons, 30–31. See also Unmanned Automatic Target Detections (ATDs),
aerial vehicles (UAVs) 158
Alerts, 53, 153 Auto recon tasking, 53
Alert Tracker, 68 Azerbaijan, battle scenario, 55
Alexander the Great, 26, 167
Al Firdos bunker destruction, 32 Balaclava, Crimea, 22
Alternative evaluation, 171 Battle command: battlefield
Alternative generation, 171 enlargement and, 15–16; challenge of
Analytic observation: about decision tactical agility, 31–32; collaboration
making, 197–98; situation awareness in, 167; definitions of, 11–12;
scores, 134; use of, 121–24 deletation and, 20–21; disrupted,
Animation-based tools, 190 190–93; functions of, 12; future of,
Appropriateness. definition, 199, 200 33–36; history of, 12; human biases
Army Field Manual 3.0, Operations, 21–22 in, 143–47; human dimension of,
Army Transformation Roadmap, 34–35 35–36; information processing, 43,
Artillery fire detection systems, 20 61–63; judging timing, 25–26; key
Asset agents, 66, 67 tasks, 18–23; motivation and, 23;
Assets, localization of, 76 multiplication of domains, 17;
ASTi simulated radio and Napoleonic, 15, 62–63; organiza-
communication system, 85 tional complexity of, 16–17; pattern
Attack Guidance Matrix (AGM): recognition and, 18; planning and, 19;
command-cell interface, 82; descrip- recurring dilemmas, 24–29; simul-
tion, 208–9; function of, 53, 66, 68, taneity, 30; situation awareness in,
81; impact on timing, 196 95–119; synchronizing and, 21–22;

247
248 Index

Battle command (Continued) command and, 143–47, 218–19;


technology and, 216; time compres- confirmation type, 149–50, 196;
sion challenge, 29; tools for, 64–94; decision making and, 196;
transparency of, 32–33 overcoming preconceptions, 149–59;
Battle Command Support Environment role in battle command, 143–47;
(BCSE): C2 architecture of, 65–75, 68; towards target acquisition, 143, 148
Commander Support Environment Blue command, 48–50; biases, 150, 151;
and, 67; communication displays, SAt scores, 138, 141, 142; sense-
85–86; control of, 67; decision making failures, 157–58; situation
support units, 74–75; definition, 64; awareness of, 133
tools within, 70 Blue force: battle planning, 58–61;
Battle command systems: 2018 battle scenario, 37–41; command
collaboration-oriented technologies, cell, 41; commander support
168; network-enabled warfare and, 213 environment, 50–53; organization
Battle Damage Assessment (BDA): of, 58; situation awareness and, 218
automation of, 83–85, 208–10, 214; Blue unit viewer, 53
biased use of, 143–47; collaboration Bomb Line, origins of, 17
in, 173; correctness of, 145; CSE Bonaparte, Napoleon, 15, 62–63
visualization of, 76; demands of, Boyd, John (Colonel), 12
219; guidance matrix, 53, 79; image Bradley personnel carrier, 42
quality, 146; importance of, 204; Brehmer, B., 171
situation awareness and, 175; sources Briefing Tools, function of, 77–78
for updates to, 145; Threat Manager Brigade Command Team (BCT), 58, 59
reports, 78–79 Brigade Logistics Coordinator, 108
Battle Damage Assessment Guidance Britain, Battle of Jutland, 10–11
Matrix (BDAGM), 53, 79, 83–85, 90
Battlefield Functional Areas (BFAs), Caspian Sea scenarios, 46, 55–58
65, 69 Cebrowski, Arthur K. (Admiral), 42
Battlefields: enlargement, 15–16; Chance, battle and, 14–15
handling of casualties, 28; operating Chu, Specialist, 40, 87
systems for, 34; terrain appreciation Churchill, Winston, 13
and, 18 Civilians. See Noncombatants
Battle plans, typical history of, 58–61 Clark, H. H., 177
Battlespace managers, 49. See also Clausewitz, Carl von: on battles, 18;
Maneuver managers on the “fog of war,” 13; on “friction,”
Battlespaces: covered by sensors, 156; 14–15; on genius, 30; on the “realms
perceptions of, 131 (See also Situation of war,” 13; on the scale of war, 25;
awareness); visualization of, 76–77 on the uncertainty of war, 14
Battlespace Terrain Reasoning and Clips/R2, 92
Awareness (BTRA) system, 87 Code generation, automated, 91
Battle tempo: entity-level, 184; on Cognitive load, 162–65, 213, 215
process traces, 138, 139; situation Cognitive processing, 105–6
awareness and, 135–36 Cold Harbor casualty rates, 30
Beatty, David (Vice Admiral), 10–11, 25 Collaboration: behavior types in, 177;
Belief persistence, 149–50 coordination and, 176–79; COPs
Benet, Stephen Vincent, 35 and, 189; dark side of, 206–8; defini-
BFAs (battlefield operating systems), 34 tion, 176; in disrupted command,
Biases: in Battle Damage Assessment, 190–93; impact on timing, 194–211;
143–47; of Blue command, 150, 151; information sharing and, 219;
Index 249

mission-oriented thinking and, Command and Control Vehicles


184–85; in network-enabled warfare, (C2Vs), 191–92, 193
167–93; points of impact, 169–70; Command Center, description of, 70
situation awareness and, 174–76; Commander Decision Environment, 195
task transmission and, 170–71; Commanders: audio logs of, 132;
technology and, 169–70, 180–81; challenge of force dispersion, 212–13;
training in, 220 confidence in subordinates, 21;
Collaboration Evaluation Framework coping skills of, 15; personal influ-
(CEF), 168–69, 183 ence of, 23; self-assessment by, 220;
Collaborative technology, 182 situation awareness cognitive,
Collaborative Technology Alliances 131–32; staff agents and, 66
(CTAs), 91 Commander’s Critical Information
Collateral damage, 28. See also Requirements (CCIR), 69, 70, 85, 153
Noncombatants Commander Support Environment
Collection Management tool, 79 (CSE): battle command system
Collective Agents, 66, 67, 71 interfaces, 147–48; 2018 battle
Collective Intelligence Module (CIM), scenario, 40; Blue force, 50–53;
89 command functions, 75–87;
Combat power, situation awareness command succession functions, 86;
and, 62 COP limitations, 147–49;
Combat Power tool, 86 customization of user interfaces, 77;
Combined Arms Teams (CATs): function of, 68; interface clutter, 153;
allocation of responsibilities, 49; key functions of, 53; knowledge
battle planning, 58–61; CAU management functions, 69; loss
command cells, 48–49; command- of communication symbols, 191;
ers of, 172; composition of, 49, 50; MDC2 and, 51–53; screen view, 54;
equipment of, 59; organization of, situation awareness maintenance,
59, 173; in OTB simulations, 47; 143–49; suite of tools in, 68–69;
reporting structure, 58; SAts, 164 tiers of, 68, 71–73; tools of, 51, 52
Combined Arms Units (CAUs): Command Post of the Future, 168, 213
allocation of responsibilities, 49; battle Commercial Joint Mapping Toolkit
planning, 58–61; 2018 battle scenario, (CJMTK), 86–87
38–41; in Combined Arms Teams, 47; Common ground preservation, 177
composition of, 49–50, 50; coordina- Common operating picture (COP):
tion of, 177; C2V, 40, 49; directing CSE generation of, 67, 69; definition,
fire, 184–85; 64; impact on situation awareness,
equipment of, 40; loss of MCSs, 189; limitations of, 147–49; overtrust
186–88; mission-oriented assessments, in, 148; screen views, 69; sensor
185; organization of, 40, 45, 173 detections in, 128; shared data,
Command: changing context of, 15–17; 159–62; terrain perspectives on, 87
description of cells, 46; destruction of Communications: bandwidth
cells, 193; functions of, 75–87; management, 86; BCSE display of,
succession, 86, 192 85–86; collaborative technology and,
Command and Control (C2): Data 181–83, 189–90; delegation and, 21;
Model, 94; execution centric, 64; history of, 22–23; organizational
experimental design and, 165–66; complexity and, 16; transmission,
situation awareness and, 98–107; 177; verbal, 189–90
task transmission and, 171; in team Completeness, definition, 199, 200
operations, 112–17; VDSF and, 91 Computer-generated forces, 47
250 Index

Computerization, enterprise structure Defense Advanced Research Projects


and, 63 Agency. See DARPA
Concept of operations (CPNOPS), Delegation, 20–21, 30
181–83 Democratic societies, 13
Confidence, definition, 200 Detection Catalog tool, 83
Confirmation, collaboration and, 177 Device Dependent Interface (DDI)/
Confirmation bias, 149–50, 196 Device Translator (DX), 92–93
Connection, collaboration and, 177 Device Independent Interface (DII), 93
Context sensitivity, definition, 80 Differentiation, 171, 178
Contingency planning, 24 Direct vision optics (DVO) sensors, 155
Control, definition of, 12 Discipline, resistance to fear and, 13
Control functions, CSE, 53 Draega, timing of, 81
Coordination: collaborative technology DSS Reasoner, 93
and, 182; costs of, 181; definition,
176; types of, 176–79 Economic imperatives, battle command
Correctness, definition, 199, 200 and, 24
Courses of action (COAs), 75–76 Effects managers, 49. See also Fires
Crimean War, 28 managers
Critical Decision Method, 133–34 Election, collaboration and, 177
Criticality, definition, 200 Electronic components,
noninterchangable, 27
Danger: battle and, 13–14; impact of Enemies, awareness of, 97
automation, 31 Engagement status, visualization of, 76
DARPA (Defense Advanced Research Engineers, situation awareness
Projects Agency): 2018 battle requirements, 115–16
scenario, 37–41; history of, 44–46; Execution Synchronization Matrix, 80
MDC2 project and, 18; RAID Experimental design: analysis and,
system, 216 165–66; data collection, 120–24,
Data collection. See also Multicell and 152–53, 197–203; interviews, 121,
Dismounted Command and Control 133–34, 201–2; MDC2 project,
(MDC2) project: about decision 53–58
making, 197–203; data filtering and,
152–53; for situation awareness, Face-to-face conferencing, 181
120–24 Fear, resistance to, 13
Decision making: adjustment decisions, Fires: automation of, 81, 214; control
198; automatable, 198, 208–10; auto- of, 82; manual execution of, 81–82
mated support, 214; battle command Fires managers, 49. See also Effects
and, 12; collaboration and, 206; managers
complex decisions, 198; data collection Fire Support Coordination Line, 17
about, 197–203; information overload Focus groups, 202–3
and, 210–11; information process- “Fog of war,” 13, 64–94
ing and, 61–63; problem solving Fragmentary orders (FRAGOs), 190
and, 19–20; rationality and, 196; risk Friendly situation, awareness of, 97
assessment and, 19; social forces and, Fuel consumption, visualization of, 76
196–97; timing of, 194–211; tools Future Combat System (FCS):
for, 73; translation to action, 12 collaboration-oriented technologies,
Decisions: characteristics of, 200; evalu- 168; development of, 220–21; infor-
ation of, 199–203; types of, 198–99 mation rich platforms, 43; origins of,
Decision support framework, 90–94 42–43; VDSF and, 91
Index 251

Future Combat Systems Command and of, 61–63; shared, 157–62, 175–76
Control (FCS C2), 44–45, 46 (See also Collaboration); sharing of,
Future Force leaders, 18 219; task-specific, 171; transforma-
Future Force Warrior (FFW), 91 tion of, 172; visualization of, 70.
See also Common operating picture
Gantt charts, 80 (COP)
Garm, reconnoitering of, 80 Information advantage rules, 140–43
Garstka, John, 42 Information managers, 49, 73. See also
Gawande, Atul, 15 Intelligence managers
Geodetic coordinate system, 87 Information overload: cognitive
Geographic Intelligence Overlays, 77 filtering, 152–53; CSE interface
Georeferenced satellite imagery, 87 clutter, 153; groupthink and, 197;
Germany, Battle of Jutland, 10–11 during the invasion of Iraq, 22–23;
Gettysburg, Confederate artillery at, 16 negative impact of, 210–11; robotic
Globalization, enterprise structure sensors and, 50–53; situation
and, 63 awareness and, 104–5
Goal-Directed Task Analysis (GDTA), Insurgents, identification of, 28
107, 108, 113 Intelligence, 115–16, 117. See also
“Going sour” incidents, 153 Information
Gordon, Andrew, 11 Intelligence, Surveillance, and
Grand Fleet, United Kingdom, 10–11 Reconnaissance (ISR) protocols:
Grant, Ulysses S., 16, 18 development of, 144; information
Graphical Control Measures (GCMs), transformation and, 173–74;
76, 87 integration of, 163–64; visualization
Ground surveillance radar (GSR), 88 of sensor coverage, 190
Group interviews, 121 Intelligence gaps, misinterpretations
Groupthink, 197 and, 149–57
Intelligence management:
HaleyRules, 92 coordination and, 176; overload, 204;
Hammurabi [Division], 25 Picture Viewer function, 82–83
Heisenberg, Werner, 14 Intelligence managers, 49. See also
High Mobility Artillery Rocket Systems Information managers
(HIMARS), 58 Intelligence Preparation of the Battle-
High-payoff targets (HPT), 127–28 field (IPB) process, 141–42
High Seas Fleet, Germany, 10–11 Intelligent Agents: components of, 67;
Huertgen Forest, 26 CSE Tier 2, 71; CSE Tier 3, 71–72;
Human perceptions, 13 definitions, 65; functional areas of,
66–67
Identification, collaboration and, 177 Intel Viewer tool, 83, 84
Incident identification, 202 Intensive processes, 179
Information: abstraction, 171–74; Interdependence: collaboration and,
addiction to, 205–6; availability 180; types of, 180
of, 204; cost of, 203–5; deficien- Interviews: group, 121; methodology,
cies in, 18; degree of urgency, 157; 133–34; scoring, 201–2
distribution of, 167–68; drivers Iraq: insurgence, 33; invasion of, 22–23,
of, 171; gaps in, 105, 147–48, 213; 26–27; noncombatants in, 28
hierarchical levels, 171–74; indica-
tion of certainty, 157; networked, Javelins, history of, 21
34; presentation of, 218; processing Jellico, John (Sir), 10–11, 14
252 Index

Johnson, Captain: battle scenario, Motivation, battle command and, 23


37–41, 87–90; commander support MULEs (mine-detecting sensors),
environment for, 51; situation 158, 159
awareness of, 63 Multicell and Dismounted Command
Justifications, 171, 178 and Control (MDC2) project:
Jutland Peninsula, 10–11, 15, 25 achievements of, 213–17; Battle
Command Support Environment,
Karbala Gap, Iraq, 25 64; central focus, 31, 35; commander
Klein, G. A., 178 support environment and, 51–53;
Knowledge bases, 67 data collection approaches, 122,
Kura Brigade, 37–38, 46–47 197–203; experimental design,
53–58; experimental testbed, 46–48;
Leadership. See also Command: battle history of, 45, 46; hypothetical
command and, 12; from the front, 15 organization, 173; launch of, 18;
Lee, Robert E., 16 long-term goals of, 181; study of col-
Lethality, trends in, 30–31, 212–13 laboration issues, 168; VDSF and, 91;
Leuthen, Battle of, 13 visual experimentation model, 73
Line of Sight (LOS) tools, 77 Multi-Role Armament and Ammunition
Logistical risks: CSE visualization System (MRAAS), 88
of, 76; displays of assets, 70; Munitions-on-Hand tool, 86
management of, 26–27; simultaneity Mutual adjustments: behaviors required
and, 30 for, 178; cognitive costs of, 189–90;
Logistical tools: within CSE, 86; coordination and, 176–77
situation awareness requirements,
115–17; supply needs analysis, 109 Nagorno-Karabakh Internal Liberation
Long-linked processes, 179 Organization (NKILO), 39–41
Long Range Surveillance (LRS) Named Areas of Interest (NAI), 141–42
soldiers, 82 Network Centric Operations
Conceptual Framework, 199
Maneuver managers, 49. See also Networked automation: decision
Battlespace managers making and, 20; diagnostic functions
Maps, 86–87 and, 18–19; functions of, 34; human
Map sets, 77 interface with, 36; impact on
Mauldin, Bill, 23 collateral damage, 28; logistical
Media, transparency to, 33 problems and, 27; management
Mediating processes, 179 of information overload, 22–23;
Military grid reference system prioritization dilemma and, 24;
(MGRS), 87 redundancy of command and, 32;
Military procurement, 43 synchronizing and, 22; system limi-
Military socialization, 13 tations, 34–35; tactical headquarter
Misinterpretations, situation awareness footprint and, 32; timing judgments
and, 213 and, 25; trust in the system, 210
Mission-complete estimates, 172 Network-enabled command: challenges
Mission-oriented thinking, 184–85 of, 217–21; tools of, 213–17
Missions, awareness of, 98 Network-enabled warfare: battle
Mission Workspace, 76 command systems and, 213; battle of
Moltke, Helmuth von, (General), 19 2018, 41–44; collaborative potential
Molz, Maureen, 45 of, 167–93; critics of, 220;
Most dangerous targets (MDTs), 127–28 decision-making environment, 196
Index 253

Neutral forces, simulated, 48 Process traces, 137–38, 186–88, 201


Noncombatants: 2018 battle scenario, Prognostics, self-alerting, 27
59; casualties among, 32; coping Prohibit Fire tool, 80
with, 27–29; level-1 situation Public opinion, 28, 33
awareness of, 97; logistical burden
of, 28–29 Quick Fire tool, 80, 81–82
Non-Line of Sight (NLOS) assets, 49, 88
Rahim, Sargent, 40–41, 51
Objective SAINTS, 25 RAID system, 216
Office of Force Transportation, 42 Rationalization, definition, 170
OneSAF Testbed (OTB), 46–48, 73–74 React to Contact behaviors, 47
OODA Loop (Observe, Orient, Decide, Reasoning engines, 67
Act), 12 Red command: SAt scores, 138, 141,
Operation Desert Storm, 28–29 142; situation awareness of, 134
Operation Iraqi Freedom, 23 Red force: battle planning, 58–61; 2018
Operations: concept of, 181–83; battle scenario, 38–41; command
situation awareness requirements, cells, 48; Kura Brigade command, 47;
115–16 size of, 54
OPORD, changes to, 77 Redundancy: of command, 32;
Organizational complexity, 16–17 command succession, 86; in data
Outcome consistency, 200 models, 67; of information models,
65; in ship design, 32
Paradoxes, 196 Relevance, definition, 199, 200
“Paranoia factor,” 132 Remote control, 31
Pattern recognition, 18 Representational errors, 149–50
Patton, George, 23 Request for Fire tools, 82
Performance measures: collaboration Resident Agents, function of, 71–72
and, 174–75; of decision making, Resource Availability tool, 79
199–203; for situation awareness, Resource utilization: commander
110–11 support environment for, 51;
Physical exertion, battle and, 14 dispersal of tactical units, 31–32;
Picture Viewer, 82–84 planning and, 19; prioritizing
Planning: animation-based tools for, requirements, 24; simultaneity and,
190; battle command and, 19; 30; synchronizing and, 21–22
coordination and, 176; within CSEs, Rete Algorithm, 91
75–76; event sequences in, 26; Risk assessment, decision making
Mission Workspace, 76; unexpected and, 19
changes in mission, 31–32 Robotic platforms, 51. See also
Platform-centric computing, 42 Unmanned aerial vehicles (UAVs)
Platform Support Environments Robotic sensors. See also Sensors
(PSEs), 72, 88–89 Rommel, Erwin (German), 26
Precision weapons, intolerance to Route generation, tools for, 87
error, 31 Royal Navy, 10–11
Preconceptions, overcoming, 149–59 RSTA (Reconnaissance, Surveillance
Prioritization, battle command and Target Acquisition) tools,
and, 24 88–89
Prisoners of war, 28–29 Rule making, 210
Problem solving, decision making and, Rules Engine, description, 93
19–20 Rules Helper Methods, 93
254 Index

Rule Trigger Method, 94 information processing and, 61–63;


Russo-Japanese War, 16 level-1, 95, 97–98, 107, 114–15, 175;
level-2, 96, 99–100, 107, 114–16,
Sauer, Gary, 44 175; level-3, 96, 98, 101–2, 107, 114,
Scheer, Reinhard (Admiral), 10–11 116, 151, 175; location component,
Schlieffen, Alfred von (General), 16, 124–25; maintenance of, 143–49;
35–36 measurement of, 110–11;
Selection, 171, 178 misinterpretations and, 213;
Self-awareness, of commanders, 220 misinterpretations in, 149–57;
Semiautomated forces, 47
Sense-making failures, 157–58
Sensor coverage, 128–31; analysis tools, 154–55; SAt and, 155; situation awareness and, 153–57; visualization of, 190
Sensor Coverage tools, 156, 214–15
Sensors: 2018 battle scenario, 39–41; control of, 58, 80; with direct vision optics, 155; gaps, 147, 148; information overload and, 50–53; target detection by, 31
Sherman, William Tecumseh (General), 27
Ship design, redundancy in, 32
Simulations, 46–48, 165
Simultaneity, resource utilization and, 30
SINCGARS, 85
Situation Awareness—Cognitive (SAc), 130–35
Situation Awareness Global Assessment Technique (SAGAT), 111, 112
Situation Awareness (SA): acquisition component, 125–26; attention shifting and, 151–52; battle command effectiveness and, 95–119; battle tempo and, 135–36; C2 challenges, 98–105; cognitive load, 162–65; collaboration and, 174–76; combat power and, 62; common operating picture and, 64; confirmation of, 170, 178; CSE tools for, 78–79; data collection for, 120–24; definition, 95; gaps in, 149–57; impact on timing, 194–211; individual limitations, 100, 102; information advantage rules, 140–43; information gaps and, 213; information overload and, 210–11; overload, 104–5; overtime for Blue and Red, 160; perceptual constraints, 103; quantitative analysis of, 214; requirements analysis, 107–9; shared information and, 157–62; sources of, 103; state component, 126; stressors, 103–4; system design for, 105–10; in team operations, 112–19; underload, 104–5
Situation Awareness—Technical (SAt): analysis, 166; calculation of, 127; component scores, 125–28; curve over time, 138; development of, 124; information advantage rules, 141–43; level-1, 159, 162–63; level-2, 159; overtime for Blue and Red, 160; sensor coverage and, 153–57; UAV losses and, 206
Slim, William (British Field Marshal), 18
Soldiers, willingness to fight, 13
Soldier/Vehicle Support Environment (SSE), 72–74
Soviet Union, 41–41
Special Operations Force teams, 58
Stacked charts, 136–39, 201
Standardization, coordination and, 176
Standoff lethality, 212–13
Subordinates: commander confidence in, 21; fostering initiative in, 30
Suffering, battle and, 14
Sun Microsystems, 42
Supply lines, logistical risk and, 26–27
Surveys, 131, 198–99
Survivability estimates, 66
Synchronization: battle command and, 21–22; collaboration and, 177; Commander Support Environment, 53; ViewSync and, 77–78
System design, 107–11, 147–48
Index 255
TacFire, 20
Tactical agility, 31–32
Tactical engagements, 29
Tactical headquarters footprints, 32
Tactical tasking, 53
Tactical unit dispersal, 31–32
Tank warfare, impact of, 17
Targets: acquisition, 143, 148, 209; detection of, 31, 76, 81; identification of, 49; status of, 76
Targets Reconnaissance tasks, 80
Task analysis, 107–8, 169
Task assignment tools, 79–81
Task Decomposition, 70
Task environments, 176–77, 180–81
Task organization, 184
Task processes, 179
Task Synchronization Matrix, 68
Task transmission, 170–71, 179, 189
Technology: collaborative, 180–81; “fog of war” and, 64–94; human dimensions of battle and, 33–34; support for collaboration, 169–70; system design issues, 105–10
Tecuci, Gheorghe, 65
Tercio, effectiveness of, 216
Terrain: appreciation of, 18; situation awareness of, 97, 115–16
Terrain Analysis, 70, 86–87
Thompson, J. D., 176, 180
Threat analysis, 66, 78–79
Threat Manager, 53, 68, 78–79
Time compression, 29
Timelines, of runs, 202
Timeliness: definition, 199, 200; information trade-offs and, 206
Timing: command judgments, 25–26; decision making and, 194–211
Transaction Processor, 93
Transmission, collaboration and, 177
Transparency, battle command and, 32–33
Trust, importance of, 210
2018 battle scenario, 37–41, 55–59
Uncertainty, 14, 196
Unit Viewer tool, 83
Unmanned aerial vehicles (UAVs), 31, 67, 72, 88
U.S. Army Engineer Research and Development Center (ERDC), 87
U.S. Army Training and Doctrine Command (TRADOC), 44, 55–58
User interfaces, 77, 156
Vetronics Technology Integration Program (VTI), 91
Video conferencing, 181
Viecore Decision Support Framework (VDSF), 90, 93
Vietnam War, 20–21, 28
ViewSync tool, 77–78
Visualization: Battle Damage Assessment, 76; of battlespace, 76–77; of information, 70; Picture Viewer function, 82–83; sensor coverage, 190; situation awareness and, 214
Washington, George, 23
Waterloo, Battle of, 13, 15–16, 23
Wavell, Sir Archibald, 26, 33
Weapon-to-target pairing, 66
Weather, 97
Wellington, Duke of, 15, 23
White phosphorus, 33
Workspaces, customization of, 77
World War I, 16–17
World War II, 17
Zizka, Jan, 220
About the Contributors

LEONARD ADELMAN’s research focuses on judgment, decision, and collaborative processes; cognitive systems engineering; and decision support and
expert system evaluation. Adelman is a tenured, full professor in the Depart-
ment of Systems Engineering and Operations Research at George Mason
University. He is also the coordinator for the Command Support Technical
Area in GMU’s Center for Excellence in Command, Control, Communica-
tions, Computing, and Intelligence. Adelman has authored or coauthored
more than 50 journal papers, book chapters, and conference proceedings. His
three books are Evaluating Decision Support and Expert Systems (Wiley 1992);
Cognitive Systems Engineering for User-Centered Interface Design, Prototyping,
and Evaluation (with Stephen Andriole, LEA 1995); and Handbook for Evaluat-
ing Knowledge-Based Systems (with Sharon Riedel, Kluwer 1997). In addition,
he has participated in developing and evaluating prototypes for improving
operational systems, including AWACS and Patriot. Adelman is a member
of the Judgment & Decision Making Society, Human Factors & Ergonom-
ics Society, Brunswik Society, and IEEE (elected senior member). He earned
his PhD from the University of Colorado in 1976. Prior to joining George
Mason University, Adelman cofounded the Decision Sciences Section of
PAR Technology Corp. and, as the manager of this section from 1984–1986,
led R&D efforts designing, developing, and evaluating decision system pro-
totypes for all three branches of the armed forces. In the MDC2 program,
Adelman focused on the collaboration aspects.

RICHARD J. BORMANN JR. is the senior director of research and development for Viecore FSD. Bormann is responsible for the architecture, design, and development on several defense research Battle Command programs, as well as several decision support systems for handheld and robotic systems. In
particular, he leads the work on extending Viecore FSD’s capabilities in the
area of artificial intelligence, expert systems, and robotic elements supporting
the Future Force. In 1998, Bormann held the title of Distinguished Member
of the Technical Staff at AT&T, where he worked in the research area to pro-
vide expert systems for improving call center performance and efficiency. He
received his BS in computer science from Kean University and a master’s in
computer science from Stevens Institute of Technology, both in New Jersey.
Bormann led the development of the MDC2 program’s technical architecture
and much of its software.

MICA R. ENDSLEY is president of SA Technologies, a cognitive engineering firm specializing in the development of operator interfaces for advanced systems, including the next generation of systems for aviation, air traffic control,
power, medical, and military operations. Prior to forming SA Technologies,
she was a visiting associate professor at MIT in the Department of Aero-
nautics and Astronautics and associate professor of Industrial Engineering
at Texas Tech University. Endsley received a PhD in Industrial and Systems
Engineering from the University of Southern California. She is a registered
professional engineer and certified professional ergonomist and is a fellow of
the Human Factors and Ergonomics Society. She has published extensively
in the areas of situation awareness, decision making, and automation and is
coauthor of Situation Awareness Analysis & Measurement and Designing for Situ-
ation Awareness. Endsley’s ideas on situation awareness were a major influence
on the MDC2 program.

LEROY A. JACKSON is deputy director and senior operations research analyst at the U.S. Army Training and Doctrine Command Analysis Center in
Monterey, California. He has more than 11 years of experience conducting
applied research to enhance army analysis of future force advanced concepts
and requirements, including his work on the MDC2 program. His research
interests include artificial intelligence and human cognition. He is a mem-
ber of the Military Operations Research Society, where he has served as the
cochair of several symposium and workshop working groups. Jackson has
more than 29 years of experience in military operations and battle command,
having served more than 24 years on active duty as a soldier, noncommis-
sioned officer, and commissioned officer in the Field Artillery and as a mili-
tary operations research and systems analyst. He served for 4 years at the Field
Artillery Board conducting operational tests of Field Artillery battle com-
mand systems. He served in 105 mm, 155 mm, 175 mm, and 8 inch cannon
artillery battalions. He is a member of the Honorable Order of Saint Barbara,
and his many awards include the Legion of Merit. Jackson received an MS in
operations research from the Naval Postgraduate School in 1995, where he was the distinguished Department of Defense graduate. He is a recipient of the U.S. Army Chief of Staff’s Award for Excellence in Operations Research.
Jackson also earned a BA in Mathematics with highest honors from Cameron
University in 1990 and is a member of Phi Kappa Phi collegiate honor society
and the Pi Mu Epsilon mathematics honor society.

STEPHEN KIRIN, a retired colonel of the U.S. Army, has been a member of
the MITRE Corp. since July 2000. Since January 2006, he has served as the lead
of the Operations Research—Systems Analysis Division in the Joint Improvised
Explosive Device Defeat Organization (JIEDDO). Kirin’s culminating active duty assignment was as
the deputy director for the TRADOC Analysis Center. During his four years at
TRAC, he was the lead analyst for a number of key experiments and studies to
underpin Army Transformation. In his 27 years of service, Kirin served at every
level from platoon to corps and has been a student of the issues associated with
battle command. Since joining MITRE and prior to his support to JIEDDO, he
has continued to investigate and analyze operational issues with a focus on battle
command. Kirin received a bachelor of science in engineering from the United
States Military Academy and a master’s of science in operations research and
applied mathematics from Rensselaer Polytechnic Institute. He was a U.S. Army
RAND Fellow for two years and is a graduate of the U.S. Naval War College.

GARY L. KLEIN focuses his work on modeling how people acquire and use
information. As the senior principal scientist in Cognitive Science & Artifi-
cial Intelligence in the C2C, he is responsible for developing and promoting
both of those technical areas with respect to supporting the development of
enhanced decision support. He also is developing the application of cognitive
systems engineering throughout MITRE. His current work is applying cogni-
tive systems engineering to the army’s 1st Information Operations Command
(Land). The objective is to identify transformational technology opportunities
related to 1st IO Command, which have new start potential for the Defense
Advanced Research Projects Agency. He and Leonard Adelman developed the Collabo-
ration Evaluation Framework originally to assess collaborative tools in intelli-
gence analysis in terms of their impact on collaboration per se. In an extension
of that effort, he led a team to help the intelligence community’s Disruptive
Technology Office (formerly ARDA) assess intelligence analysis tools with
regard to their ergonomic, cognitive, and collaborative suitability. In other
work, to improve understanding of how policy changes lead to changes in deci-
sion making and subsequently organizational behavior, Klein developed the
Adaptive Decision Modeling Shell (ADMS) for creating cognitively realistic
agent-based social simulation models. For MITRE’s Center for Advanced
Aviation Systems Development, he recently led a C2C technical team in using
ADMS to develop a social simulation model of airline-scheduling decision
making. Dr. Klein led the MDC2 program’s research in collaboration within
and between command cells.

ALEXANDER KOTT is a program manager in the Defense Advanced Research Projects Agency, the central R&D organization of the U.S. Department of Defense. He earned his PhD from the University of Pittsburgh,
Pennsylvania, where his research focused on applications of artificial intel-
ligence for innovative engineering design. Later he directed R&D organiza-
tions at technology companies including Carnegie Group, Honeywell, and
BBN. Kott’s affiliation with DARPA included serving as the chief architect
of DARPA’s Joint Forces Air Component Commander (JFACC) program and
managing the Advanced ISR Management program as well as the Mixed Ini-
tiative Control of Automa-teams program. He initiated the DARPA Real-
time Adversarial Intelligence and Decision-making (RAID) program and
also managed the MDC2 program. Kott’s research interests include dynamic
planning in resource-, time-, and space-constrained problems in dynamically
changing, uncertain, and adversarial environments; and dynamic, unstable,
and “pathological” phenomena in distributed decision-making systems. He
has published more than 60 technical papers and served as the editor and
coauthor of several other books, including Advanced Technology Concepts for
Command and Control, Adversarial Reasoning, and Information Warfare and
Organizational Decision-Making.

DOUGLAS J. PETERS leads the Live, Virtual, and Constructive Modeling group for Applied Research Associates (ARA). In this role and in his prior role
leading the Command and Control Concepts group, he has led experimen-
tal design and analysis efforts for several DARPA programs. On the MDC2
program, Peters led portions of the analysis as well as ARA’s efforts to collect
experimental data, query the data, and develop innovative visualizations for the
data. On the DARPA Real-time Adversarial Intelligence and Decision-making
(RAID) project, Peters was responsible for experimental design, data collec-
tion, information elicitation, and analysis. Currently, he is leading weapon
engagement modeling for U.S. Army live training systems. Also at ARA, he
has led the Hard Target Uncertainties program and the tunnels defeat por-
tion of the Integrated Munitions Effects Assessment, both sponsored by the
Defense Threat Reduction Agency. Peters is interested in developing models
of engineering phenomena and experimental metrics and has been develop-
ing/implementing models for 11 years. He currently serves on the Research
and Development Subcommittee for the Interservice/Industry Training, Sim-
ulation and Education Conference. Peters received his bachelor’s degree in
architectural engineering from the Pennsylvania State University and his
master’s degree in civil engineering from North Carolina State University.
He is a registered professional engineer in the state of North Carolina.

JENNIFER K. PHILLIPS is the president and principal scientist of Cognitive Training Solutions, LLC. Her interests include skill acquisition, cognitive
performance improvement, and the nature of expertise. MDC2 is one of the
programs in which she has conducted research studies to examine and model
naturalistic human cognition. In a program of research sponsored by the army,
she investigated the process by which individuals make sense of situations
as they unfold and developed a model of sense making. Phillips has applied
her research to the development of several training interventions focused on
improving complex cognitive skills such as decision making, sense making,
situation awareness, and problem detection. She has extensive experience
conducting Cognitive Task Analysis to elicit expert knowledge and generate
design and training requirements. She has conducted studies to develop
decision-making training scenarios at all echelons of military command and
using a range of media, including Web- and computer-based simulations. She
has also studied the role of instructors as facilitators of the learning process
and has developed instructor guides and train-the-trainer workshops to ensure
a focus on the cognitive elements of decision making. In addition, Phillips has
developed assessment measures and conducted evaluation studies to deter-
mine the effectiveness of training interventions for improving cognitive skills.
Phillips received a BA in psychology from Kenyon College in 1995.

STEPHEN RIESE is a member of the Senior Professional Staff at the Johns Hopkins University Applied Physics Laboratory (APL), where he directs the
APL counter-improvised explosive device (IED) analysis program. He joined
APL after completing 24 years of service in the U.S. Army as a combat engi-
neer and operations research analyst. He taught systems engineering design
to cadets at the U.S. Military Academy at West Point, led engineer forces as
part of the NATO Peace Implementation Force in Bosnia-Herzegovina, led
analysis of future army combat systems and command and control structures
at the army’s Training and Doctrine Command Analysis Center, and directed
the analysis of strategic deterrence operations at the U.S. Strategic Command
at Offutt Air Force Base. Riese earned an undergraduate degree in architec-
ture from the University of Notre Dame, a master’s degree in industrial engi-
neering from Kansas State University, a master’s degree in military history
from the U.S. Army Command and General Staff College, and a doctorate
in systems engineering from the University of Virginia, where his research
focused on empirical spatial forecasting models. Continuing this research, he
currently develops spatial analysis methods for the U.S. Army Topographic
Engineering Center to be deployed in the Threat Mapper geospatial analysis
tool set and used to support operational analysis in the counter-IED fight and
the global war on terrorism. Riese contributed extensively to the analysis of
MDC2 experimental data.

KAROL G. ROSS is a research psychologist currently working at the Institute for Simulation and Training, University of Central Florida. Her area
of expertise in applied research includes qualitative methods for the assess-
ment of expertise and the development of training interventions for tactical
thinking in military environments. Her current project at the lab focuses on scientific oversight for a program of research and development regarding
IED (improvised explosive device) defeat training for the U.S. Marine Corps.
At her previous position with Klein Associates, she conducted research and
development for the U.S. Army, the U.S. Marine Corps, the U.S. Air Force,
and the Office of Naval Research. She recently directed and participated in
research to develop a framework of expertise for guiding the development
of technology-supported training, an online tool to support subject-matter
experts in developing training vignettes, cognitive task analysis for the design
of antiterrorism training, a new assessment method for tactical thinking skills,
and online training scenarios for coalition warfare. In addition, she conducted
research into knowledge management processes for the U.S. Army’s Battle
Command Knowledge System. She has developed and conducted workshops
on qualitative research methods for the military and industry. In the MDC2
program, she helped to design the experimental study of command decision
making. Ross has previously held positions at the U.S. Army Research Labo-
ratory and the Army Research Institute.

GARY SAUER is a 1979 graduate of the United States Military Academy, West Point, where he majored in civil engineering. Upon graduation he
served 22 years in the U.S. Army in a variety of command and staff positions
in which he was instrumental in authoring command and control doctrine,
developing and fielding command and control technology, and assessing their
uses in the operational and joint environments. In 1998 he was appointed
to the Defense Advanced Research Projects Agency (DARPA) as the agency
director’s operational liaison. While at DARPA he was instrumental in assist-
ing the agency in the creation of the Future Combat Systems Program with
the army. Sauer additionally held the positions of director, Office of Manage-
ment Operations, and program manager, Future Combat Systems Command
and Control (FCS C2, MDC2) through 2005. Currently, he is the director,
Combat Identification and Antenna Programs at BAE Systems Inc., respon-
sible for the development of combat identification and advanced antenna
solutions to support DOD and HLS requirements. He holds a master of science in business administration from Central Michigan University and a master of military arts and science from the School of Advanced Military Studies. He is also a senior executive fellow of the JFK School of Government, Harvard University, and a national security fellow of the Massachusetts Institute of Technology.

RICHARD HART SINNREICH retired from the U.S. Army in June 1990.
A 1965 West Point graduate, he earned a master’s degree in foreign affairs
from Ohio State University and is a graduate of the U.S. Army’s Command
and General Staff College and the National War College. His military service
included field commands from battery through division artillery; combat ser-
vice in Vietnam; teaching at West Point and Fort Leavenworth; tours on the
Army, Joint, National Security Council, and SHAPE staffs; and appointment
as the first Army Fellow at the Center for Strategic and International Studies.
As first deputy director and second director of the army’s School of Advanced
Military Studies, he helped write the 1986 edition of the army’s capstone Air-
Land Battle doctrine and has published widely in military and foreign affairs.
Since retiring from military service, he has consulted for a number of defense
agencies, including the army’s Training and Doctrine Command, Joint Forces
Command, the Institute for Defense Analyses, and the Defense Advanced
Research Projects Agency. His defense column in the Lawton (OK) Constitu-
tion has been reprinted by the Washington Post, ARMY Magazine, and other
journals. His most recent book, with historian Williamson Murray and oth-
ers, is The Past as Prologue: The Importance of History to the Military Profession,
Cambridge University Press, May 2006. He led a team of military experts that
advised the MDC2 program and guided the program’s experiments.

THOMAS WILK is a lead operations research analyst with the MITRE Corp. in McLean, Virginia. In this capacity, Wilk has led, or been a team member in, analysis teams for several experiments investigating concepts
member in, analysis teams for several experiments investigating concepts
related to future battle command for both the DARPA/Army cosponsored
Multi-cell & Dismounted Command & Control Program (MDC2) and the
2006 PdM C4ISR On-the-Move Experiment at Fort Dix, New Jersey. Ear-
lier, Wilk served 14 years as an infantry officer and operations research analyst
in the U.S. Army, in a variety of command and staff assignments. His military
awards include the Meritorious Service Medal, Army Commendation Medal,
Armed Forces Expeditionary Medal, Humanitarian Service Medal, the Joint
Meritorious Unit Citation, the Combat Infantryman’s Badge for service in
Somalia, and the Expert Infantryman’s Badge. While serving in the infan-
try, Wilk was Ranger, airborne, and air assault qualified. He holds a BS in
mechanical engineering (aerospace) from the United States Military Academy
and an MS in operations research from the Naval Postgraduate School. His
master’s thesis concerned modeling Theater Level ground logistics within a
low-intensity combat simulation.