
Academy of Management Review

2021, Vol. 46, No. 1, 192–210.


https://doi.org/10.5465/amr.2018.0072

ARTIFICIAL INTELLIGENCE AND MANAGEMENT:
THE AUTOMATION–AUGMENTATION PARADOX
SEBASTIAN RAISCH
University of Geneva

SEBASTIAN KRAKOWSKI
Stockholm School of Economics

Taking three recent business books on artificial intelligence (AI) as a starting point, we
explore the automation and augmentation concepts in the management domain.
Whereas automation implies that machines take over a human task, augmentation
means that humans collaborate closely with machines to perform a task. Taking a
normative stance, the three books advise organizations to prioritize augmentation,
which they relate to superior performance. Using a more comprehensive paradox
theory perspective, we argue that, in the management domain, augmentation cannot
be neatly separated from automation. These dual AI applications are interdependent
across time and space, creating a paradoxical tension. Overemphasizing either aug-
mentation or automation fuels reinforcing cycles with negative organizational and
societal outcomes. However, if organizations adopt a broader perspective comprising
both automation and augmentation, they could deal with the tension and achieve
complementarities that benefit business and society. Drawing on our insights, we
conclude that management scholars need to be involved in research on the use of AI in
organizations. We also argue that a substantial change is required in how AI research
is currently conducted in order to develop meaningful theory and to provide practice
with sound advice.

The authors wish to thank Associate Editor Subramanian Rangan and two anonymous reviewers for their guidance on developing the paper. Furthermore, they are grateful to Tina Ambos, Jean Bartunek, Jonathan Schad, Katherine Tatarinov, Xena Welch Guerra, Udo Zander, and the participants of research seminars at HEC Lausanne, Nova School of Business and Economics, and the Geneva School of Economics and Management for their valuable comments on earlier versions of the paper. This research received partial support from the Swiss National Science Foundation [SNSF grants 181364 and 185164] and the European Union [EU Grant 856688].

The rise of powerful AI will be either the best or the worst thing ever to happen to humanity. We do not yet know which.
—Stephen Hawking, theoretical physicist (University of Cambridge, 2016)

What all of us have to do is to make sure we are using AI in a way that is for the benefit of humanity, not to the detriment of humanity.
—Tim Cook, CEO of Apple (Byrnes, 2017)

Artificial intelligence (AI) refers to machines performing cognitive functions that are usually associated with human minds, such as learning, interacting, and problem solving (Nilsson, 1971). Organizations have long used AI-based solutions to automate routine tasks in operations and logistics. Recent advances in computational power, the exponential increase in data, and new machine-learning techniques now allow organizations to also use AI-based solutions for managerial tasks (Brynjolfsson & McAfee, 2017). For example, AI-based solutions now play important roles in Unilever’s talent-acquisition process (Marr, 2018), in Netflix’s decisions regarding movie plots, directors, and actors (Westcott Grant, 2018), and in Pfizer’s drug discovery and development activities (Fleming, 2018).

In the 1950s, pioneering research predicted that AI would become essential for management (Newell, Shaw, & Simon, 1959; Newell & Simon, 1956). However, initial technological progress was slow and the discussion of AI in management was “effectively liquidated” in the 1960s (Cariani, 2010: 89). Scholars subsequently adopted a contingency view: The routine operational tasks that machines could handle were separated from the complex managerial tasks reserved for humans.

Consequently, AI was researched in computer science and operations research, whereas organization and management studies focused on humans (Rahwan et al., 2019; Simon, 1987). Management scholars have therefore provided very little insight into AI during the last two decades (Kellogg, Valentine, & Christin, 2020; Lindebaum, Vesa, & den Hond, 2020). Nonetheless, these scholars’ understanding will be required, because AI is becoming increasingly pervasive in managerial contexts.

In this review essay, we strive to reposition AI at the crux of the management debate. Three highly influential business books on AI (Brynjolfsson & McAfee, 2014; Daugherty & Wilson, 2018; Davenport & Kirby, 2016) serve as a source of inspiration to challenge our thinking and spark new ideas in the management field (Bartunek & Ragins, 2015).

The three books have developed a common AI narrative for practicing managers. The authors distinguished two broad AI applications in organizations: automation and augmentation. Whereas automation implies that machines take over a human task, augmentation means that humans collaborate closely with machines to perform a task. Taking a normative stance, the authors accentuated the benefits of augmentation while taking a more negative viewpoint on automation. Their combined advice was that organizations should prioritize augmentation, which the authors related to superior performance. In addition, the two more recent books (Daugherty & Wilson, 2018; Davenport & Kirby, 2016) provided managers with ample advice on how to develop and implement such an augmentation strategy.

Assuming a more encompassing paradox theory perspective (Schad, Lewis, Raisch, & Smith, 2016; Smith & Lewis, 2011), we argue that augmentation cannot be neatly separated from automation in the management domain. These dual AI applications are interdependent across time and space, which creates a paradoxical tension. Overemphasizing either augmentation or automation fuels reinforcing cycles that not only harm an organization’s performance but also have negative societal implications. However, organizations adopting a broader perspective comprising both automation and augmentation are not only able to deal with the tension but also achieve complementarities that benefit business and society. We conclude by discussing our insights’ implications for organization and management research. The emergence of AI-based solutions and humans’ increasing interactions with them creates a new managerial tension that requires research attention. Management scholars should therefore play a more active role in the AI debate by reviewing prescriptions for managerial practice and developing more comprehensive perspectives. They could do so by changing the ways they conduct research in order to accurately analyze and describe AI’s implications for managerial practice.

REVIEWED MATERIALS

We started with a review of three recent business books on the use of AI in organizations. While there are many other books on this topic, we selected the following three, which have been widely influential in managerial practice, filling the void arising from the lack of scholarly research. The New York Times bestseller The Second Machine Age by MIT Professors Erik Brynjolfsson and Andrew McAfee (Brynjolfsson & McAfee, 2014) was called “the most influential recent business book” in a memorandum that Harvard Business School’s dean sent to the senior faculty (Economist, 2017). The much-debated (Press, 2016) Only Humans Need Apply is the latest book on AI by Babson Professor Thomas H. Davenport and Harvard Business Review Contributing Editor Julia Kirby (Davenport & Kirby, 2016). Finally, the recently published Human + Machine by Accenture leaders Paul R. Daugherty and H. James Wilson (Daugherty & Wilson, 2018) has had an immediate impact on both academia and practice (Wladawsky-Berger, 2018).

Collectively, the three books have suggested that we are on the cusp of a major transformation in business, comparable to the industrial revolution in scope and impact. During this “first machine age,” which started with the invention of the steam engine in the eighteenth century, mechanical machines enabled mass production by taking over manual labor tasks at scale. Today, we face an analogous inflection point of unprecedented progress in digital technology, taking us toward the “second machine age” (Brynjolfsson & McAfee, 2014: 7). Instead of performing mechanical work, machines now take on cognitive work, which was traditionally an exclusively human domain. However, machines still have many limitations, which means we are entering an era in which the human–machine relationship is no longer dichotomous, but evolving into a machine “augmentation” of human capabilities. Rather than being adversaries, humans and machines should combine their complementary strengths, enabling mutual learning and multiplying their capabilities. Instead of fearing automation and its effects on the labor market, managers should acknowledge that AI has the potential to augment, rather than replace, humans in managerial tasks (Davenport & Kirby, 2016: 30–31).

Building on this analysis, the three books have advised organizations to focus on augmentation rather than on automation. The two more recent books explicitly related such an augmentation strategy to superior firm performance. For example, Daugherty and Wilson (2018: 214) concluded that companies using “AI to augment their human talent (. . .) achieve step gains in performance, propelling them to the forefront of their industries.” Conversely, companies focusing on automation may “see some performance benefits, but those improvements will eventually stall” (Daugherty & Wilson, 2018: 214). Similarly, Davenport and Kirby (2016: 214) predicted that “a company whose strategy all along has emphasized augmentation, not automation (. . .) will win big.” Consequently, Davenport and Kirby (2016) advised companies to prioritize augmentation (“don’t automate, augment” [59]), which they hailed as “the only path to sustainable competitive advantage” (204).

The two more recent books also provided managers with ample advice on how to develop and implement such an augmentation strategy in their organizations. Davenport and Kirby (2016: 89) described five strategies for “post-automation” human work, all involving some form of augmentation. In addition, they provided a seven-step process for planning and developing an augmentation strategy (Davenport & Kirby, 2016: 201). Daugherty and Wilson (2018: 105ff) described a range of new jobs that organizations could create and in which managers complement machines and machines augment managers. The authors further described how augmentation could be implemented across domains, ranging from sales, marketing, and customer service to research and development (Daugherty & Wilson, 2018: 67ff).

Consistent with the books’ recommendations, companies have started adopting an augmentation strategy. For example, Satya Nadella, CEO of Microsoft, has announced that the firm will “build intelligence that augments human abilities and experiences. Ultimately, it’s not going to be about human vs. machine” (Nadella, 2016). Similarly, in the preamble to its AI guidelines, Deutsche Telekom (2018) stated that “AI is intended to extend and complement human abilities rather than lessen or restrict them.” At IBM, the corporate principles have declared that “the purpose of AI and cognitive systems developed and applied by the IBM company is to augment human intelligence” (IBM Think Blog, 2017). In her speech at the World Economic Forum, IBM’s President and CEO Ginni Rometty suggested replacing the term “artificial intelligence” with “augmented intelligence” (La Roche, 2017).

THE AUTOMATION–AUGMENTATION PARADOX

Taking the three books as a starting point, we use a paradox theory perspective (Schad et al., 2016; Smith & Lewis, 2011) to explore organizations’ use of AI further. A paradox lens allows us to elevate the level of analysis to study both automation and augmentation, which reveals a paradoxical tension between these dual AI applications in management. Following the Smith and Lewis (2011) paradox framework, we will analyze the paradoxical tension, the management strategies used to address it, and their outcomes.

Paradoxical Tension

The three books described the relationship between automation and augmentation as a trade-off decision: Organizations attempting to use AI have the choice of either automating the task or using an augmentation approach. If they opt for automation, humans hand over the task to a machine with little or no further involvement. The objective is to keep humans out of the equation to allow more comprehensive, rational, and efficient processing (Davenport & Kirby, 2016: 21). In contrast, augmentation implies continued close interaction between humans and machines. This approach allows for complementing a machine’s abilities with humans’ unique capabilities, such as their intuition and common-sense reasoning (Daugherty & Wilson, 2018: 191f). The nature of the task determines whether organizations opt for one or the other approach. Relatively routine and well-structured tasks can be automated, while more complex and ambiguous tasks cannot, but can be addressed through augmentation (Brynjolfsson & McAfee, 2014: 138ff; Daugherty & Wilson, 2018: 107ff; Davenport & Kirby, 2016: 34ff).

The arguments that the books have provided are difficult to refute, but their perspective is largely limited to a given task at a specific point in time. Paradox theory, however, warns that such a narrow trade-off perspective does not adequately represent reality (Smith & Lewis, 2011). A paradox lens can help increase the scale or level of analysis for a more systemic perspective (Schad & Bansal, 2018), which allows organizations to perceive not only the contradictions but also the interdependencies between automation and augmentation. A more comprehensive paradox perspective (both–and) then replaces the traditional trade-off perspective (either–or). The essence of paradox is that the dual elements are both contradictory and interdependent—forming a persistent tension (Schad et al., 2016).

Automation and augmentation are contradictory, because organizations choose either one or the other approach to address a given task at a specific point in time. This choice creates a tension, since these AI approaches rely on competing logics with different organizational demands. For example, Lindebaum et al. (2020) maintained that automation instills a logic of formal rationality in organizations that conflicts with the logic of substantive rationality, or the human capacity for value-rational reflection, whereas augmentation preserves this substantive rationality. The tension is further reinforced because some organizational actors prefer augmentation (e.g., managers at risk of losing their jobs to automation) while others prioritize automation (e.g., owners interested in efficiencies) (Davenport & Kirby, 2016: 61).

While these contradictions are real, they only reveal a partial picture. If we increase our analysis’s temporal scale (from one point in time to its evolution over time) and spatial scale (from one to multiple tasks), we comprehend that, in the management domain, the two AI applications are not only contradictory but also interdependent.

Increasing the temporal scale. Taking a process view of paradox reveals a cyclical relationship between opposing forces (Putnam, Fairhurst, & Banghart, 2016; Raisch, Hargrave, & van de Ven, 2018). Engagement with one side of the tension may set the stage, or even create the conditions necessary, for the other’s existence; in addition, over time there is often a mutual influence between the opposing forces, with swings from one side to the other (Poole & van de Ven, 1989). Elevating the temporal scale from one point in time to the process over time allows for exploring this cyclical relationship between automation and augmentation.

As the books suggest, the process of using AI for a managerial task starts with a choice between automation and augmentation. Organizations addressing a well-structured routine task, such as completing invoices or expense claims, could opt for automation. They could do so by drawing on codified domain expertise to program rules into the system in the form of algorithms specifying the relationships between the conditions (“if”) and the consequences (“then”) (Gillespie, 2014).¹ Such rule-based automation requires an explicitly stated domain model, which optimizes the chosen utility function (Russell & Norvig, 2009).² With clear rules in place, managers can relinquish the task to a machine.

However, most managerial tasks are more complex, and the rules and models are therefore not fully known or readily available. In such cases, rule-based automation is impossible, but managers could use an augmentation approach to explore the problem further (Holzinger, 2016). This choice allows managers to remain involved and to collaborate closely with machines on these tasks. It is a common misconception that this augmentation process can be delegated to the IT department or external solution providers. While rule-based automation allows such delegation, because the rules can be explicitly formulated, codified, and passed on to data scientists, complex tasks’ augmented learning relies on domain experts’ tacit knowledge, which cannot be easily codified (Brynjolfsson & Mitchell, 2017). Data scientists can provide technical support, but domain experts need to stay “in the loop” in augmented learning (Holzinger, 2016: 119).

Augmentation is therefore a coevolutionary process during which humans learn from machines and machines learn from humans (Amershi, Cakmak, Knox, & Kulesza, 2014; Rahwan et al., 2019). In this iterative process, managers and machines interact to learn new rules or create models and improve them over time. The type and extent of human involvement vary with the specific machine-learning solution (Russell & Norvig, 2009).³ Human domain expertise is the starting point for supervised learning. Managers provide a machine with a set of labeled training data specifying the inputs (or features) and the corresponding outputs.

¹ This step can also be done by using unsupervised machine learning (Russell & Norvig, 2009), which allows the machine to induce rules directly from the data. If the task is deterministic, and the rules are simple and clear, these rules can be readily used for automation.

² A utility function represents the organization’s preference ordering over a choice set, allowing it to assign a real value to each alternative. In the field of AI, utility functions are used to convey various outcomes’ relative value to machines, which in turn allows them to propose alternatives that optimize the utility function (Russell & Norvig, 2009).

³ Consistent with the three books, we adopt a broad definition of AI comprising both rule-based automation and machine learning. In rule-based automation, which is sometimes also called robotic process automation, the machine is static in the sense that it adheres to the explicit rules it has been given (Daugherty & Wilson, 2018: 50; Davenport & Kirby, 2016: 48). In contrast, machine learning gives the machine the ability to learn from experience without being explicitly programmed to do so (Mitchell, 1997).
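To make footnotes 2 and 3 concrete, consider the following minimal sketch in Python. It is ours, not the authors’ or the reviewed books’; the expense-claim rules, thresholds, and utility weights are all invented for illustration. It shows the explicit if-then rules that make rule-based automation possible, and a utility function that assigns a real value to each alternative so that the machine can propose the utility-maximizing option.

```python
# Hypothetical sketch: rule-based automation of a routine task
# (expense-claim approval) plus a simple utility function.
# All rules, thresholds, and weights are invented for illustration.

def approve_expense_claim(claim: dict) -> str:
    """Explicit if-then rules codified from domain expertise."""
    if not claim["has_receipt"]:
        return "reject"                 # condition ("if") -> consequence ("then")
    if claim["amount"] <= 100:
        return "auto-approve"           # routine, well-structured case
    if claim["amount"] > 5000:
        return "escalate-to-manager"    # outside the codified rules' scope
    return "manual-review"

def utility(alternative: dict) -> float:
    """Assigns a real value to each alternative, encoding the
    organization's preference ordering (cf. footnote 2)."""
    return 0.7 * alternative["expected_benefit"] - 0.3 * alternative["cost"]

alternatives = [
    {"name": "vendor_a", "expected_benefit": 8.0, "cost": 3.0},
    {"name": "vendor_b", "expected_benefit": 6.0, "cost": 1.0},
]

print(approve_expense_claim({"amount": 80, "has_receipt": True}))  # auto-approve
print(max(alternatives, key=utility)["name"])  # machine proposes "vendor_a"
```

With rules this explicit, the task can be handed over to a machine; where no such rules are known, the augmented learning process described next becomes necessary.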

The machine analyzes the training data and generates rules or models. In contrast, unsupervised learning allows managers to induce patterns, of which they were not previously aware, directly from the unlabeled data (Jordan & Mitchell, 2015).

In both applications, managers then use their domain expertise to evaluate, select, and complement machine outputs. Spurious correlations or other statistical biases need to be weeded out. For example, machines generally learn from large, noisy data sets containing random errors. Overfitting is a key risk in this context, which means that a machine may learn a complete model that also explains the errors, consequently failing to generalize appropriately beyond its training data (Fan, Han, & Liu, 2014). The experts’ revision of the learned knowledge is therefore an important part of the augmented learning process (Fails & Olsen, 2003).⁴ In each iteration, managers assess the current model’s quality, subsequently deciding on how to proceed (Langley & Simon, 1995). The resulting tight coupling between humans and machines, with the two influencing one another, makes it increasingly difficult, or even impossible, to decouple their influence on the resulting model (Amershi et al., 2014).

Over time, this close collaboration with machines sometimes allows managers to identify rules or models that either optimize the utility function or come sufficiently close to an optimal solution to be practically useful.⁵ If these models are sufficiently robust, they can subsequently be used to automate a task. Managers are taken “out of the loop,” which allows them to focus on more demanding and valuable tasks. Augmented learning thus aims to provide increasing levels of automation, replacing time-consuming human activity with automated processes that improve accuracy, efficiency, or effectiveness (Langley & Simon, 1995). Consequently, augmentation may enable a transition to automation over time.

To provide illustrations of such transitions from augmentation to automation, we briefly discuss two examples from managerial practice. Organizations are increasingly employing AI-based solutions in human resource (HR) management to acquire talent (Stephan, Brown, & Erickson, 2017). For example, JP Morgan Chase chose an augmentation approach to assess candidates. A team of experienced HR managers worked closely with an AI-based solution to identify reliable, firm-specific predictors of candidates’ future job performance. It took a full year of intensive interaction between the human experts and the AI-based solution to remove statistically biased or socially vexed predictors, and make the system robust. After the initial augmentation stage, JP Morgan Chase decided to automate the candidate assessment task on the basis of the identified criteria. By removing humans from this activity, the bank intended to increase the candidate assessment’s fairness and consistency, while also making the process faster and more efficient (Riley, 2018).

Product innovation is another key domain of AI application in management (Daugherty & Wilson, 2018: 67f). For example, Symrise, a major global fragrance company, adopted an augmentation approach to generate ideas. An AI-based solution helped the company’s master perfumers identify correlations between specific customer demographics and different combinations of ingredients based on the company’s database of 1.7 million fragrances. Subsequently, Symrise’s master perfumers used their expertise to confirm or reject the possible connections, create additional ones, and refine them further. After two years of close interaction between the master perfumers and the machine, the resulting model was considered sufficiently robust to automate the idea generation task. Based on a customer’s requirements, the AI-based system now searches for possible new fragrance formulas far more rapidly and comprehensively than humans can, which has helped increase these formulas’ novelty while simultaneously greatly reducing the search cost and time (Bergstein, 2019).

As the examples illustrate, organizations may initially choose augmentation to address a complex task, but this advanced interaction between managers and AI-based solutions helps them expand their understanding of the task over time, which sometimes allows subsequent automation.

⁴ Machine-learning solutions already employ measures against overfitting, such as cross-validation and regularization. However, these measures can only complement, and cannot replace, human responsibility and intervention in managerial tasks (Greenwald & Oertel, 2017).

⁵ The computer-science literature has distinguished between tractable or polynomial (P) problems and intractable or nondeterministic polynomial (NP) problems (Dean, 2016; Hartmanis & Stearns, 1965). Less complex (P) problems are amenable to optimization and rule-based automation. In contrast, machines working on more complex (NP) problems encounter optimization problems. While the optimal solution may be out of reach, machine-learning solutions can find models that approximate such a solution with certain accuracy. These solutions are therefore suboptimal (given that they inevitably relax certain real-life constraints), but may be close enough to the optimal solution to be suitable for practical application (Fortnow, 2013).
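Supervised learning and the overfitting risk that footnote 4’s safeguards target can be illustrated with a minimal sketch, assuming a generic scikit-learn setup. The data are synthetic stand-ins for labeled training examples; this is not the JP Morgan Chase or Symrise system, and the features are hypothetical.

```python
# Minimal sketch of supervised learning on labeled data, with
# cross-validation used to detect overfitting (cf. footnote 4).
# The features and labels are synthetic and purely illustrative.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                               # inputs ("features")
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)   # noisy outputs

models = {
    "unconstrained tree": DecisionTreeClassifier(random_state=0),
    "constrained tree": DecisionTreeClassifier(max_depth=3, random_state=0),
}
for name, model in models.items():
    train_acc = model.fit(X, y).score(X, y)             # accuracy on training data
    cv_acc = cross_val_score(model, X, y, cv=5).mean()  # held-out accuracy
    # A large gap between the two is the signature of overfitting: the
    # model has learned a "complete" model that also explains the errors.
    print(f"{name}: train={train_acc:.2f}, cross-validated={cv_acc:.2f}")
```

As footnote 4 notes, such technical safeguards complement rather than replace the domain experts, who must still inspect which predictors drive the model and weed out spurious or socially vexed ones before any transition to automation.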

While such a transition relaxes the tension temporarily, the issue resurfaces when conditions change over time (Smith & Lewis, 2011). For example, digitalization is likely to significantly alter the skills that JP Morgan Chase’s future talents need to be successful. Bankers will need advanced data-science skills, which did not play a role in the extant employee data. Such substantial changes therefore make the automated solutions function less effectively (Davenport & Kirby, 2016: 72). Organizations should therefore, at least temporarily, return to augmentation, which allows humans and machines to jointly work through the changing situation and adjust their models accordingly.

We conclude that the two AI applications in management are not only contradictory but also interdependent. Organizations may opt for one or the other application at a given point in time, which softens the underlying tension temporarily but fails to resolve it. Eventually, organizations will face the same choice again, demonstrating the two applications’ interdependent nature and cyclical relationship.

Increasing the spatial scale. Paradox theory explores tensions not only over time but also across space. Paradoxical tensions are nested and interwoven across multiple levels of analysis (Andriopoulos & Lewis, 2009). Addressing a tension at one level of analysis may therefore simply reproduce the tension on a different level (Smith & Lewis, 2011). Elevating the spatial scale from one task to multiple tasks allows us to explore automation and augmentation’s nested interdependence across levels of analysis.

Focusing our attention on the use of either one (i.e., automation) or the other (i.e., augmentation) solution for a specific task sets artificial boundaries, fosters distinctions, and fuels opposites (Smith & Tracey, 2016). However, in practice managerial tasks rarely occur in isolation, but are generally embedded in a managerial process. There are interdependencies between the various tasks constituting this process. These interdependencies cause managerial interventions in one task to have ripple effects throughout the process (Lüscher & Lewis, 2008). If organizations automate a task hitherto reserved for humans, this change could affect other, closely related human tasks, and lead managers to start interacting with machines. Such interactions are often iterative, resulting in the augmentation of adjacent tasks.

For example, at Symrise the automation of the “idea generation” task also affected the preceding “objective setting” and the succeeding “idea selection” tasks in the product innovation process. Consequently, in the initial, objective-setting stage the company’s master perfumers must now enter customers’ objectives and constraints into the AI-based system to allow the automated generation of fragrance formulas matching these requirements in the subsequent idea-generation stage (Goodwin et al., 2017). This is often an iterative process, with the master perfumers circling back to adjust the objectives and the constraints according to the system outputs. In the later idea-selection stage, the master perfumers continue using their human senses, expertise, and intuition to select one of the formulas that the machine proposed. They subsequently use the AI-based solution to further refine their chosen formula (Bergstein, 2019). For example, the master perfumers employ the machine to experiment with different dosages of the selected formula’s ingredients. This refinement process can include hundreds of iterations between the machine and the master perfumers. This iterative process involving close human–machine interaction has led to the augmentation of the idea selection task.⁶

As this example illustrates, a task’s automation can lead to human–machine interaction in the preceding or the succeeding tasks in the managerial process. Automation in one task “spills over,” enabling adjacent tasks’ augmentation. These spillovers are particularly rapid in AI systems, which often rely on distributed computing and cloud-based solutions that make the knowledge gained from a given insight immediately accessible across the system (Benlian, Kettinger, Sunyaev, & Winkler, 2018; Gregory, Henfridsson, Kaganer, & Kyriakou, 2020). The two AI applications in management are therefore interdependent not only across time but also across space. While automation and augmentation are distinct activities operating in different temporal or spatial spheres, they are nevertheless intertwined at a higher level of analysis. Viewed as a paradox, automation and augmentation are no longer separate, but are mutually enabling and constituent of one another.

⁶ The talent acquisition process in our JP Morgan Chase example functions similarly: HR managers now engage in augmentation to initially set the machine’s objectives and constraints (objective setting) and to subsequently select from the candidates that the machine suggested (candidate selection) (Riley, 2018).

Persistence of the tension. Paradox refers to a tension between interdependent elements; however, this tension is only considered paradoxical if it persists over time (Schad et al., 2016). We argue that the emerging coexistence of interdependent automated and augmented tasks will persist in the management domain. Sometimes, highly visible advancements driven by machine-learning applications are misinterpreted and extrapolated to imply that we are on the threshold of advancing toward artificial general intelligence.⁷ However, there is widespread agreement among computer scientists that we are actually far from machines wholly surpassing human intelligence (Brynjolfsson & Mitchell, 2017; Walsh, 2017). Technical and social limitations make the full automation of complex managerial processes impossible in the foreseeable future. Managers will therefore remain involved in these processes and interact with machines on a wide range of tasks.

A few limitations of machines are worth pointing out here: First, machines have no sense of self or purpose, which means managers need to define their objectives (Braga & Logan, 2017). Setting objectives is closely related to taking responsibility for the associated tasks and outcomes; consequently, while organizations can extend accountability to machines, responsibility requires intentionality, which is an exclusively human ability (Floridi, 2008). In turn, humans can only take responsibility if they retain some level of involvement with and control over the relevant tasks. In our example of product innovation at Symrise, the perfumers set the objectives, remain involved throughout the innovation process, and take responsibility for its outcomes. The same is true of HR managers in the talent-acquisition process.

Second, with respect to complex managerial tasks, machines can only provide a range of options that all relax certain real-life constraints.⁸ Managers need to use their intuition and common-sense judgment—reconciling the machine output with reality—to make a final decision about the most desirable option (Brynjolfsson & McAfee, 2014: 92). In our example of talent acquisition at JP Morgan Chase, the AI-based solution enabled the candidate assessment’s automation, but HR managers are still needed for the subsequent candidate selection (Riley, 2018) because no model can cover this task’s full complexity. Machines cannot fully capture ambiguous predictors, such as cultural fit or interpersonal relations, for which there are simply no codified data available. The same applies to the product development process at Symrise, where the master perfumers ultimately choose one of the machine’s suggested fragrance formulas (Bergstein, 2019).

Third, machines are limited to the specific task for which they have been trained. They cannot take on other tasks, since they do not possess the general intelligence to learn from their experience in one domain to conduct tasks in other domains (Davenport & Kirby, 2016: 35).⁹ Managers therefore need to ensure contextualization beyond an automated task. For example, HR managers still need to spend hours coordinating meetings to ensure that their hiring decisions are aligned with the business strategy, and product developers need to continue interacting with marketing departments to align their products with the business models.

Fourth, machines do not possess human senses, perceptions, emotions, and social skills (Braga & Logan, 2017).¹⁰

⁷ A recent example is the media hype around AlphaGo Zero, an AI-based system representing state-of-the-art chess and go play (Silver et al., 2017). AlphaGo Zero learned these games through trial and error (or reinforcement) without human guidance, only playing games against itself. However, people often overlook that programmers still needed to feed AlphaGo Zero an important piece of human knowledge: the rules of the game. Chess and go have explicit, finite, and stable goals, rules, and reward signals, which allow machine learning to be optimized. Most real-world managerial problems are far more complex than games like chess or go. For example, the rules of managerial problems might not be known, might be ambiguous, or might change over time. While AlphaGo Zero is impressive, it represents little, if any, progress toward artificial general intelligence.

⁸ Intractable (NP) problems imply discrete optimization problems. While the optimal solution is out of reach, relaxation of the constraints allows these problems to be addressed (Fortnow, 2013). Humans use their experience to reduce the search space of exponential possibilities by means of heuristic selection. Machines can subsequently use approximation algorithms to provide a range of possible solutions that all relax certain real-life constraints.

⁹ The phenomenon of “catastrophic forgetting” explains this machine limitation; that is, having learned one task and subsequently transferred to another, a machine-learning system simply “forgets” how to perform the previously learned task (Taylor & Stone, 2009). Humans, on the other hand, possess the capacity to transfer learning, allowing them to generalize from one task context to another (Parisi, Kemker, Part, Kanan, & Wermter, 2019).

For example, HR managers can use their emotional and social intelligence to provide a “human touch,” or the advanced communication required to build true relationships, entice talent to work for their firm, and convince others to support the decisions made (Davenport & Kirby, 2016: 74). In the Symrise case, machines can neither smell nor fully predict how humans will perceive new fragrances, or the emotions and memories they trigger. Master perfumers have these skills and can also use them to tell a compelling story about a fragrance and its meaning, which is important for its commercialization.

To conclude, the augmentation of a managerial task may enable its subsequent automation. Such automation can, in turn, trigger further augmentation in closely related managerial tasks. While these dynamics are likely to promote increasing augmentation and automation, technological and social limitations prevent progress toward the full automation of managerial processes in organizations. This is particularly true of managerial contexts characterized by high degrees of ambiguity, complexity, and rare events, which limit deterministic approaches’ applicability (Davis & Marcus, 2015). In such contexts, automation and augmentation provide different, partly conflicting, but also complementary logics and functionalities that organizations require. While the optimal balance between automation and augmentation depends on contingencies, such as organizations’ AI expertise and the nature of the environmental contexts they face, organizations will experience a persistent tension between these interrelated applications of AI in management.

Management Strategies

Recent technological progress has made the AI tension salient for organizations. Organizations facing such a salient tension tend to apply management strategies to address it. According to paradox theory, these organizational responses fuel reinforcing cycles that can be either negative or positive (Smith & Lewis, 2011). If organizations are unaware of a tension’s paradoxical nature they risk applying partial strategies, which cause vicious cycles that escalate the tension. Conversely, organizations that accept a tension as paradoxical and pay attention to its competing demands could enable virtuous cycles (Schad et al., 2016).

Vicious cycles. Organizations are likely to prioritize automation due to its promise of short-term cost efficiencies (Davenport & Kirby, 2016: 204). This strategy forces organizations’ competitors to also pursue automation in order to remain cost competitive. Consequently, the whole industry may be “entering (. . .) in a race toward the zero-margin reality of commoditized work” (Davenport & Kirby, 2016: 204). Over time, these organizations lose the human skills required to alter their processes (Endsley & Kiris, 1995). Human experts are either made redundant through automation or they lose their specific skills regarding the tasks they no longer pursue. Prior research has shown that automation can deskill humans, make them complacent, and diffuse their sense of responsibility (Parasuraman & Manzey, 2010; Skitka, Mosier, & Burdick, 2000). Ultimately, organizations become entrenched in their automated processes, because automation is limited to specific tasks in well-understood domains and imposes formal rules that narrow organizations’ choices and penalize deviation (Lindebaum et al., 2020). To conclude, while automation can free up resources for potential search activities, it is also associated with short-term thinking, the loss of human expertise, and lock-in effects that, together, fuel a reinforcing cycle, which makes it increasingly difficult for organizations to implement such search activities.¹¹

In contrast, organizations could follow the three books’ combined advice and focus on augmentation. This AI application requires extensive resources to work through iterative cycles of human–machine learning. Contrary to automation, augmentation demands continued human involvement and experimentation (Amershi et al., 2014). Since emotions and other subjective factors affect humans, augmentation is difficult or even impossible to replicate, which means every augmentation initiative is a new learning effort (Holzinger, 2016). Owing to their inherent complexity and uncertainty, augmentation efforts often fail (Amershi et al., 2014). Furthermore, the continued human involvement implies that human biases persist, which means augmentation outcomes are never fully consistent, reliable, or persistent (Huang, Hsu, & Ku, 2012).

¹⁰ Several current projects are aimed at deploying AI-based agents capable of perceiving and responding to emotional cues. However, these agents are very limited in their capabilities, because fundamental technical and ethical challenges limit their potential for human-level emotional sentience in the foreseeable future (McDuff & Czerwinski, 2018).

¹¹ Prior studies have shown that slack resources are a necessary, but insufficient, prerequisite for organizational search. Slack resources can also induce complacency and inertia, especially if organizational factors work against leveraging slack resources for search (Desai, 2020).

To legitimize their large augmentation investments, organizations experiencing failure may be tempted to reinforce their augmentation efforts further, which could escalate their commitment (Sabherwal & Jeyaraj, 2015; Staw, 1981), with failure leading to continued augmentation, in turn leading to continued failure.

To conclude, one-sided orientations toward either automation or augmentation cause vicious cycles, because they neglect the dynamic interdependencies between AI’s dual applications in management. Managers limiting their perspective to either automation or augmentation risk developing partial and incomplete managerial solutions. While these solutions may be appropriate within the strict boundaries that time and space impose, the use of AI in management causes an organizational tension that persists across time and space.

Virtuous cycles. Paradox theory offers a more constructive response to tensions by envisioning a virtuous cycle, with organizations overcoming their defensiveness to embrace these tensions and viewing them as an opportunity to find synergies that accommodate and transcend the opposing poles (Schad et al., 2016).

A first step toward enabling such a virtuous cycle is the acceptance of the tensions as paradoxical (Smith & Lewis, 2011). While managers initially perceive automation and augmentation as a trade-off, they may eventually recognize that they cannot simply choose between these dual AI applications, because either choice intensifies the need for its opposite. However, transitioning to a more encompassing paradox perspective requires cognitive and behavioral complexity (Miron-Spektor, Ingram, Keller, Smith, & Lewis, 2018). Stimulating an exchange between organizational actors with different perspectives, such as data scientists and business managers, could develop more complex understandings of the phenomenon. Once actors accept that automation and augmentation can and should coexist, they can explore the dynamic relationship between them mindfully, which could be part of their organization’s vision or guiding principles regarding the use of AI in management.

While acceptance lays the groundwork for virtuous cycles, it has to be complemented with a subsequent resolution strategy (Smith & Lewis, 2011). Resolution involves seeking responses to paradoxical tensions through a combination of differentiation and integration practices (Poole & van de Ven, 1989). Differentiation allows organizations to recognize and appreciate automation and augmentation’s distinctive benefits and leverage them separately. Organizations can purposefully iterate between distinct automation and augmentation tasks, allowing long-term engagement with both forces. For example, Symrise’s master perfumers iterate between automation (i.e., when generating alternative fragrance formulas) and augmentation (i.e., when selecting and refining the most promising formula). The use of automation allows exploration beyond humans’ abilities by searching through the whole landscape of possible options. Their cognitive limitations mean that humans’ search field is restricted, while machines do not face such information-processing limitations (Davenport & Kirby, 2016: 17). Excluding humans at this stage may help break path dependencies and promote greater novelty. Switching to augmentation allows machine limitations to be overcome by subsequently using humans’ more holistic and intuitive information processing to choose between options and contextualize beyond the specific task at hand (Brynjolfsson & McAfee, 2014: 92).

While such differentiation allows for engaging in both automation and augmentation, integration enables the finding of linkages that transcend the two poles (Smith & Lewis, 2011). By switching, the machine’s independent output can be used to challenge human intuition and judgment, with human feedback enabling further rounds of machine analysis (Hoc, 2001). At these transition points, automation and augmentation become mutually enabling. The two AI approaches’ juxtaposition stimulates learning and fosters adaptability, allowing the combination of (machine) rationality and (human) intuition, which enables more comprehensive information processing and better decisions (Calabretta, Gemser, & Wijnberg, 2017). Through integration, automation and augmentation jointly generate outcomes that neither application can enable individually.

It is no easy feat to ensure such integration. As described above, the risks of organizations overemphasizing either automation or augmentation are real. Integration therefore requires humans to retain overall responsibility for a managerial process. Prior studies have shown that maintaining overall human responsibility not only reduces human bias (Larrick, 2004), but also prevents human–machine collaboration biases (Skitka et al., 2000). As these studies have shown, assigning the overall responsibility for processes to humans leads to increased vigilance and verification behavior, the consideration of a wider range of inputs prior to making decisions, and the use of greater cognitive complexity when processing such information. Consequently, retaining human responsibility for managerial processes promotes integration, which transcends automation and augmentation.

Outcomes

Paradox theory suggests that managing tensions through the dynamic strategies of acceptance and resolution fosters sustainability (Smith & Lewis, 2011). By managing paradox, organizations enable learning and creativity, promote flexibility and resilience, and unleash human potential. However, paradox scholars have also acknowledged that narrow organizational attention to just one of the tensions’ poles can trigger unintended organizational and societal consequences (Schad & Bansal, 2018). We therefore conclude our analysis of the automation–augmentation paradox by assessing its organizational and societal outcomes.

Organizational outcomes. The three books we reviewed argued that organizations benefit greatly from using AI. In particular, the authors emphasized augmentation’s potential to increase productivity, improve service quality, and foster innovation. Moreover, they assumed that the combination of complementary human and machine skills will increase the quality, speed, and extent of learning in organizations (Brynjolfsson & McAfee, 2014: 182; Daugherty & Wilson, 2018: 106; Davenport & Kirby, 2016: 206). In contrast, we have argued that focusing on either automation or augmentation can lead to reinforcing cycles that harm long-term performance. We suggest that organizations benefit if they differentiate between and integrate across automation and augmentation.

Differentiation allows organizations to benefit from both AI applications’ unique benefits. Automation enables organizations to drive cost efficiencies, establish faster processes, and ensure greater information-processing rationality and consistency. As described above, augmentation provides complementary benefits arising from the mutual enhancing of human and machine skills. The integration of automation and augmentation leads to additional benefits that accrue from the synergies between these interdependent activities. Automation could free up scarce resources for augmentation, which, in turn, could help identify the rules or models that enable automation. Balancing automation and augmentation helps prevent the escalating cycles that focusing on just one of these AI applications could cause.

Furthermore, the combination of automation and augmentation could enable new business models. AI is, for example, the major driver behind the current trend toward personalized medicine, with treatments being tailored to each patient’s specific biological profile (Fleming, 2018; Lichfield, 2018). While augmentation allows for identifying patterns in large volumes of patient data, automation makes the design and manufacture of tailored drugs economically viable.

These varied benefits suggest that automation and augmentation’s combination creates complementary returns that lead to superior firm performance. Together, AI’s dual applications in management provide organizations with a range of benefits that neither automation nor augmentation can provide alone. However, realizing these benefits is contingent upon organizations’ active management of the automation–augmentation paradox.

Societal outcomes. Tensions’ systemic nature is a central tenet of paradox theorizing (Jarzabkowski, Bednarek, Chalkias, & Cacciatori, 2019; Smith & Lewis, 2011). Paradoxes are embedded in open systems and their implications extend beyond a single organization’s boundaries. Consequently, it is important to adopt a more systemic perspective of paradox, which takes not only the organizational outcomes into consideration but also tensions’ and their management’s larger, system-wide or societal implications (Schad & Bansal, 2018).

While firms may gain profits from their use of AI in management, the three books—to a varying extent—also pointed out that the larger societal implications are less certain (e.g., Brynjolfsson & McAfee, 2014: 171). There is a risk that organizations could take a narrow perspective of either automation or augmentation, triggering unintended consequences that affect society negatively. However, if organizations adopt a more comprehensive perspective, the outcomes could be positive for both business and society. We explore these issues further by focusing our attention on two societal outcomes discussed extensively in the three books: AI’s labor market impact (e.g., Brynjolfsson & McAfee, 2014: 147ff), and its effects regarding social equality and justice (e.g., Daugherty & Wilson, 2018: 129ff).

First, a one-sided focus on automation could cause extensive job losses and result in the deskilling of managers who relinquish tasks to machines, which could lead to the further risks of rising unemployment and social inequality (Brynjolfsson & McAfee, 2014: 171; see also Autor, 2015). Conversely, one-sided augmentation is likely to cause another “digital divide” (Norris, 2001), with social tensions arising between the few who currently have the capabilities and resources for augmentation and those who do not (Brynjolfsson & McAfee, 2014: 134f; see also Brynjolfsson & Mitchell, 2017).

Balancing automation and augmentation could, however, enable a virtuous cycle of selective deskilling (i.e., humans offload tasks where their abilities are inferior to those of machines) and strategic requalification (i.e., humans stay ahead of machines in their core abilities), thereby gradually enhancing both human and machine capabilities. This virtuous cycle could help organizations reduce the digital transition’s negative effects on their employees and the labor market at large. Employees and managers whose basic skills are made redundant by automation could be given the opportunity to gradually build higher-level augmentation skills that remain in demand. This skill-enhancement cycle could also help “rehumanize work” by gradually shifting the focus from repetitive and monotonous tasks to more creative and fulfilling tasks (Daugherty & Wilson, 2018: 214).

A recent initiative at UBS’s investment-banking division illustrates this virtuous cycle. UBS used an AI-based solution to automate the task of reading and executing client demands for fund transfers, which previously took an investment banker 45 minutes per demand. Simultaneously, the bank implemented another AI-based solution to augment the development of trading strategies. The investment bankers used the time freed up by automation to collaborate closely with the AI-based tool to explore new strategies for adaptive trading. Data scientists provided the investment bankers with organizational support to develop their augmentation skills. Consequently, the combined use of automation and augmentation allowed UBS to exploit cost efficiencies while exploring new client solutions (Arnold & Noonan, 2017).

Second, the use of AI in management could also have implications for social equality and fairness. Automation takes humans “out of the loop,” reducing human biases and, in turn, promising greater equality and fairness. For example, using automation for credit approval could reduce bankers’ bias that might previously have kept people from qualifying for credit due to their ethnicity, gender, or postal code (Daugherty & Wilson, 2018: 167). Similarly, automated candidate assessment based on predetermined criteria and consistent machine processing could help eliminate humans’ implicit biases in their hiring decisions.

However, real-world applications show that machine biases caused by noisy data, statistical errors, or socially vexed predictors often lead to new, even more systematic discrimination. Daugherty and Wilson (2018: 179) cited the example of an automated AI system, used to predict defendants’ future criminal behavior, that was biased against Black defendants. Another example involves Amazon, which discontinued the use of an automated AI-hiring tool found to discriminate against female applicants for technical jobs (Dastin, 2018). In contrast, augmentation is likely to reduce such machine biases through human back-testing and feedback. However, the intense interaction between managers and machines increases the risk of human biases being carried over to machines. This problem is particularly pernicious, since machines then confirm humans’ biased intuition, which makes humans even less likely to question their preconceived positions (Huang et al., 2012).

The solution could be once more to combine differentiation and integration practices to address the paradox. Differentiation allows for independent analyses with (i.e., augmentation) and without (i.e., automation) human involvement. Integration ensures that machine outputs are used to challenge humans, and human inputs to challenge machines (Hoc, 2001), allowing for mutual learning (Panait & Luke, 2005) and debiasing (Larrick, 2004). Furthermore, humans “in the loop” could explain the system’s outputs, rendering algorithmic decisions more accountable and transparent (Binns, Van Kleek, Veale, Lyngs, Zhao, & Shadbolt, 2018).

For example, much like JP Morgan Chase, Unilever differentiates clearly between automation (used for the initial candidate assessment) and augmentation (used for the final selection) in their talent-acquisition process. Integration is ensured by retaining overall human responsibility for the entire process. Unilever has reported that the combination of automation and augmentation has led to a 16% increase in new hires’ ethnic and gender diversity, making it the “most diverse class to date.” At the same time, the company managed to save 70 000 person-days of interviewing and assessing candidates, which resulted in annual cost savings of GBP 1 million and a reduction of the time-to-hire by 90% (from an average of four months to two weeks) (Feloni, 2017).

DISCUSSION

Inspired by three recent business books, we have explored the emergent use of AI-based applications in organizations to automate and augment managerial tasks.

DISCUSSION

Inspired by three recent business books, we have explored the emergent use of AI-based applications in organizations to automate and augment managerial tasks. The central argument of this review essay is that automation and augmentation are not only separable and conflicting but in fact fundamentally interdependent. We suggest that the prevailing trade-off perspective is overdrawn; viewed as a paradox, augmentation is both the driver and outcome of automation, and the two applications of AI develop and fold into one another across space and time. The automation–augmentation model’s popularity stems largely from its clear boundaries and simplicity. However, adopting this intuitively appealing model uncritically obscures how automation and augmentation intertwine in managerial practice. This review essay therefore sheds light on automation and augmentation’s complementarities and identifies opportunities to transcend the paradoxical relationship between them.

In the following, we discuss our reflections’ implications for organization and management research. We argue that the ways in which scientific research on AI is conducted need to change in order to accurately capture and analyze its organizational and societal implications for managerial practice. Our discussion will be loosely structured along the famous “5w & 1h” questions (addressed in order here as who, how, what, why, where, and when).12

Our analysis of the automation–augmentation paradox suggests that the jury is out on whether the use of AI in management will turn out to be a blessing or a curse. Scholars can still make a difference by exploring the topic and informing practice regarding the ways forward. However, this will require a change in who conducts research on AI. Currently, computer scientists, roboticists, and engineers are the scholars most commonly studying AI. Their primary objective is to automate as far as possible, because they often regard humans as “a mere disturbance in the system that can and should be designed out” (Cummings, 2014: 62). Computer scientists may be expert technologists, but they are generally not trained social or behavioral scientists (Rahwan et al., 2019). Instead, they tend to use laboratory settings for their research, which reduce the inherent variability that characterizes human behavior. These settings permit the use of methodologies aimed at maximizing algorithmic performance, but disregard the role of humans and the wider organizational and societal implications.

The three business books’ augmentation perspective reveals the importance of inducing social scientists to participate in the debate. With respect to managerial tasks, humans will remain “in the loop” and interact closely with machines. Management scholars are particularly well-equipped to study these human–machine interactions in real-world settings, as well as to explore the organizational and societal implications thereof. It is therefore no wonder that the three business books devote the bulk of their attention to augmentation. However, our analysis suggests that management research limited to augmentation may be as biased as computer science’s traditional focus on automation. Research on AI in management would therefore benefit greatly from more interdisciplinary efforts. The technological and the social worlds are merging (Orlikowski, 2007), which means that computer scientists and management scholars no longer study separate phenomena. By juxtaposing and integrating their different perspectives, theories, and methodologies, computer scientists and management scholars can jointly create a foundation for meaningful research on the use of AI in management.

Such efforts also require a change in how research on AI is conducted in the management domain. The limited work to date has provided separate accounts with clear-cut contrasts between augmentation and automation. On the one side, the research and practice doomsayers have warned us that automation will enslave humans, supervise and control them, and drive out every iota of humanity (e.g., Bostrom, 2014; Ford, 2015). For example, in their recent AMR review essay, Lindebaum et al. (2020) maintained that automation may lead to a technology-enabled totalitarian system with formal and oppressive rules representing the end of human choice. On the other side are technology utopians (e.g., Kurzweil, 2014; More & Vita-More, 2013), including the authors of two of the reviewed books (Daugherty & Wilson, 2018; Davenport & Kirby, 2016), who have argued that humans will remain in control and use augmentation to create huge benefits for organizations and society.13

12 The “5w & 1h” questions are a method used in areas as varied as journalism, research, and police investigations to describe and evaluate a subject comprehensively. The method’s origins have been traced to Aristotle’s Nicomachean Ethics (Sloan, 2010).

13 We acknowledge that Brynjolfsson and McAfee (2014) provided a far more balanced discussion of AI’s organizational and societal implications than the two more recent business books (Daugherty & Wilson, 2018; Davenport & Kirby, 2016).
Derrida (1967) argued that humans tend to construct binary oppositions in their narratives, with a hierarchy that privileges one side of the dichotomy while repressing the other. He warned that this approach is overly simplistic; it makes us forget that one side of a dichotomy cannot exist without the other. Similarly, researchers need to accept that automation and augmentation are interdependent AI applications in management that cannot be neatly separated and designated as either good or evil. These applications provide complementary functionalities that are both potentially useful for organizations. The complex interaction between these varied AI applications over time could have both positive and negative organizational and societal implications.

Research on AI in management therefore needs to complexify its theorizing by moving from simple either–or perspectives to more encompassing both–and ones. Complexifying theories is essential for understanding complex phenomena (Tsoukas, 2017), because “it takes richness to grasp richness” (Weick, 2007: 16). More encompassing perspectives, such as paradox theory (Schad et al., 2016) or systems theory (Sterman, 2000), offer a vantage point from which researchers can observe the dynamic interplay between automation and augmentation. Accepting and embracing this complexity allows for studying AI and its managerial applications “in the wild.” This will lead to a more comprehensive and, ultimately, more rigorous and relevant discussion of AI’s organizational and societal implications.

By working through this complexity, it becomes apparent what research needs to be conducted. At the most basic level, management scholars need to acknowledge that humans are no longer the sole agents in management, although most theories to date focus exclusively on human agency. Scholars need to overcome this human bias and integrate intelligent machines into their theories. The use of AI for managerial tasks implies that machines are no longer simple artifacts, but a new class of agents in organizations (Floridi & Sanders, 2004). While machines have fundamental limitations, their actions nevertheless enjoy far-reaching autonomy, because humans delegate knowledge tasks to these machine agents and allow these agents to act on their behalf (Rai, Constantinides, & Sarker, 2019).

Such automation leads to machine behavior that deviates significantly from the human behavior that management theories traditionally describe. The bulk of extant management research has relied on the behavioral assumptions of boundedly rational human actors, who—due to their information-processing limits and cognitive biases—engage in satisficing rather than maximizing behavior (Argote & Greve, 2007; Cyert & March, 1963). However, intelligent machines used for automation do not have these limitations; they have practically unlimited information-processing capacity and exhibit perfectly consistent behavior. Nonetheless, they can introduce statistical biases and have other limitations that humans do not have (Elsbach & Stigliani, 2019). These differences lead to entirely new machine behaviors (Rahwan et al., 2019). We could, for example, speculate that because machines do not have humans’ limitations when searching, organizations using automation and augmentation could suffer less from learning myopia and path dependencies (Levinthal & March, 1993). Management scholars thus need to broaden their perspective to include human and machine agents and explore their distinct behaviors in organizational contexts.
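As a stylized illustration of this difference in behavioral assumptions (a toy example of our own, not drawn from the reviewed books), a satisficing search stops at the first option that clears an aspiration level, whereas a machine-style maximizing search evaluates every option:

    def satisficing_search(options, value, aspiration_level):
        # Boundedly rational search: sequential, local, and stops at "good enough."
        for option in options:
            if value(option) >= aspiration_level:
                return option  # settles early; later options are never examined
        return None

    def maximizing_search(options, value):
        # Machine-style search: exhaustive and perfectly consistent.
        return max(options, key=value)

    options = [3, 6, 9, 2, 8]
    print(satisficing_search(options, value=lambda x: x, aspiration_level=5))  # returns 6
    print(maximizing_search(options, value=lambda x: x))                       # returns 9

The satisficer’s early stop is one simple mechanism behind the learning myopia and path dependencies noted above; the maximizer avoids it, but remains vulnerable to errors in the value function it optimizes.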
If we increase the complexity further, we have to acknowledge that these human and machine agents do not simply coexist in separate worlds (working on separate tasks), but are interdependent (interacting on the same or closely related tasks). Augmentation therefore implies close collaboration between humans and machines. Since automation and augmentation are interdependent, this interaction spreads across organizations. When addressing this human–machine interaction, management scholars need to first explore how machines shape managerial behavior. For example, Lindebaum et al. (2020) described how autonomous algorithms can direct and constrain human behavior by imposing formal rationality. This perspective resonates with Foucault’s (1977) concept of panoptic surveillance, characterizing information technology as a form of omnipresent architecture of control that creates, maintains, and cements central norms of expected behavior (see also Lyon, 2003; Zuboff, 1988, 2019).

However, our broader paradox perspective of AI in management reveals that humans also shape machine behavior. Managers define the objectives, set constraints, generate and choose the training data, and provide machines with feedback. In machine-learning systems, humans shape and reshape algorithms through their daily actions and interactions (Deng, Bao, Kong, Ren, & Dai, 2017). In a management context, machines may influence human behavior but without setting a static norm of conduct or an unsurpassable rule (Cheney-Lippold, 2011). Managers participate in writing and rewriting the rules that shape behavior. Management research
should thus explore both machines’ influence on human behavior and humans’ influence on machine behavior in the context of AI use in management.

If we increase the complexity even further, we finally see that humans and machines influence one another in an iterative process. While machine algorithms shape human actions, humans shape these algorithms through their actions, creating a recursive relationship (Beer, 2017). Through augmentation, humans and machines become so closely intertwined that they collectively exhibit entirely new, emergent behaviors, which neither shows individually (Amershi et al., 2014). The use of AI in management leads to hybrid organizational systems manifesting collective behaviors. It may therefore be difficult, or even impossible, to distinguish between humans and machines or the respective learning and actions of each.
While it is convenient, and sometimes helpful, to separate research studies that have analyzed how machines influence managers and vice versa, studies examining hybrid organizational systems comprising both managers and machines should provide the greatest benefit. Only such studies can examine the feedback loops between human influence on machine behavior and machine influence on human behavior, which cause the emergent behaviors that are otherwise impossible to predict (Rahwan et al., 2019). Management scholars must provide further insights into how such hybrid organizational systems function by exploring the interactive behaviors thereof. This systemic perspective will also enable them to predict or explore the automation–augmentation paradox’s systemic dynamics, which, as we have described above, can include unintended consequences and escalating cycles. Management research therefore has a crucial mandate to explore managers’ continued interactions with machines, as well as the emergent behaviors and systemic outcomes they cause.

This discussion leads us to the heart of the fourth question, namely why research on AI in management is essential. As argued above, the emergent use of AI in management leads to iterative interactions between humans and machines. The resulting hybrid organizational systems exhibit behaviors and produce organizational, as well as societal, effects that are impossible to predict precisely and are often entirely unanticipated (O’Neil, 2016). No single actor in these systems has full control over these outcomes. Consequently, it is difficult to apportion accountability for outcomes to specific actors, which creates an “accountability gap” (Mittelstadt, Allo, Taddeo, Wachter, & Floridi, 2016). There is widespread fear—and initial empirical evidence—that this lack of accountability can have harmful societal consequences regarding equality (Autor, 2015), privacy (Acquisti, Brandimarte, & Loewenstein, 2015), security (Brundage et al., 2018), and transparency (Castelvecchi, 2016).

Traditional managerial and organizational solutions may be inadequate for sufficiently addressing such systemic problems (Schad & Bansal, 2018). Management research should therefore contribute to the development of new organizational solutions that allow AI’s benefits to be realized, while mitigating the associated negative side effects. In order to fully understand the automation–augmentation paradox’s societal implications, scholars could adopt a relational ontology which accepts that, in the digital age, human and machine agents are so closely intertwined in hybrid collectives that their relations determine their actions (Floridi & Taddeo, 2016). Rather than focusing on individual actors, the interactions between these actors should be the unit of analysis. Such a perspective will enable a discussion of “distributed morality” (Floridi, 2013), which relies on shared ethical norms developed through collaborative practices (Mittelstadt et al., 2016) and critically assesses whether, and to what extent, such morality replaces or complements our current focus on individual responsibility in the digital age.

With regard to the question of where, this refers to the locus of management scholars’ research attention. AI is a particularly broad research field with a great variety of organizational and societal implications. Accordingly, researchers from disciplines such as astronomy (Haiman, 2019), biology (Webb, 2018), law (Corrales, Fenwick, & Forgó, 2018), medicine (Topol, 2019), politics (Helbing et al., 2019), psychology (Jaeger, 2016), and sociology (McFarland, Lewis, & Goldberg, 2016) have addressed and presented conceptual ideas. In this regard, our focus was exclusively on the emergent use of AI for managerial tasks in practice. While management scholars may (and should) become involved in the broader discussion of AI and its societal implications, the use of automation and augmentation in management relates to the core of management scholars’ research. We therefore suggest that the management domain should be the specific focus of our attention.

In this review essay, our focus was predominantly on questions of organizational functioning, such as those pertaining to the effect of collaboration
between multiple humans and machines on managerial tasks. While such meso-level topics will likely play a central role in future research, organizations’ use of AI in management should also be explored on the micro and macro levels of analysis. Micro-level research could investigate how the emergence of AI-based solutions changes the role of managers in organizations. In the past, management theories emphasized managers’ domain expertise, which granted them expert power and status in their organizations (Finkelstein, 1992). Although domain expertise remains relevant for managers regarding educating and challenging machines, automation and augmentation will lead to institutionalized knowledge—for example, in the form of algorithms—which is often superior to individual managers’ expert knowledge. At the same time, general human skills that complement machines, such as creativity, common sense, and advanced communication (Davenport & Kirby, 2016: 30), as well as integration skills such as AI literacy, will gain further importance in an era of automation and augmentation (Daugherty & Wilson, 2018: 191). These developments could lead to important shifts in managers’ roles, competencies, and status.

Macro-level research could explore how the emergence of automation and augmentation in management leads to institutional action and change. For example, AI is often applied in open systems, blurring organizational boundaries (Panait & Luke, 2005). Data are collected widely, with diverse stakeholders updating them continuously and collectively through their actions (Gregory et al., 2020). Inputs from agents within and outside the organization thereby impact the automation and augmentation process, which, in turn, can have wide-reaching societal implications. A core focus of management scholars’ future research attention should therefore be on studying how broader networks of actors, comprising activists, companies, governments, international organizations, and public institutions, collaborate to set standards, build institutions, and organize collective action to address issues pertaining to the use of AI in management.

This leaves a final question of when scholars should address the phenomenon of emerging AI use in managerial practice. The answer is: “Immediately!” Managers in key organizational domains, including customer management, human resources, marketing, product innovation, sales, and strategy, have already started working closely with intelligent machines on automated and augmented tasks. This introduction of AI in practice will profoundly change the nature of management. These developments offer many fruitful areas for scholarly research. Management scholars still have the opportunity to make a lasting impact on how organizations perceive and cope with the complex challenges they face. Our review essay shows that there is an urgent need for better understanding, more reliable theories, and sustainable managerial solutions. We therefore close with a call to action and encourage our readers to embrace the topic of AI in management.

REFERENCES

Acquisti, A., Brandimarte, L., & Loewenstein, G. 2015. Privacy and human behavior in the age of information. Science, 347: 509–514.

Amershi, S., Cakmak, M., Knox, W. B., & Kulesza, T. 2014. Power to the people: The role of humans in interactive machine learning. AI Magazine, 35: 105–120.

Andriopoulos, C., & Lewis, M. W. 2009. Exploitation-exploration tensions and organizational ambidexterity: Managing paradoxes of innovation. Organization Science, 20: 696–717.

Argote, L., & Greve, H. R. 2007. A behavioral theory of the firm—40 years and counting: Introduction and impact. Organization Science, 18: 337–349.

Arnold, M., & Noonan, L. 2017, July 7. Robots enter investment banks’ trading floors. Financial Times. Retrieved from https://www.ft.com/content/da7e3ec2-6246-11e7-8814-0ac7eb84e5f1.

Autor, D. H. 2015. Paradox of abundance: Automation anxiety returns. In S. Rangan (Ed.), Performance and progress: Essays on capitalism, business, and society: 237–260. Oxford, U.K.: Oxford University Press.

Bartunek, J. M., & Ragins, B. R. 2015. Extending a provocative tradition: Book reviews and beyond at AMR. Academy of Management Review, 40: 474–479.

Beer, D. 2017. The social power of algorithms. Information, Communication & Society, 20: 1–13.

Benlian, A., Kettinger, W. J., Sunyaev, A., & Winkler, T. J. 2018. The transformative value of cloud computing: A decoupling, platformization, and recombination theoretical framework. Journal of Management Information Systems, 35: 719–739.

Bergstein, B. 2019. Can AI pass the smell test? MIT Technology Review, 122: 82–86.

Binns, R., Van Kleek, M., Veale, M., Lyngs, U., Zhao, J., & Shadbolt, N. 2018. “It’s reducing a human being to a percentage”: Perceptions of justice in algorithmic decisions. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems: 377–391. New York, NY: ACM Press.

Bostrom, N. 2014. Superintelligence: Paths, dangers, strategies. Oxford, U.K.: Oxford University Press.
Braga, A., & Logan, R. 2017. The emperor of strong AI has no clothes: Limits to artificial intelligence. Information, 8: 156–177.

Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., Dafoe, A., Scharre, P., Zeitzoff, T., Filar, B., Anderson, H., Roff, H., Allen, G. C., Steinhardt, J., Flynn, C., Ó hÉigeartaigh, S., Beard, S., Belfield, H., Farquhar, S., Lyle, C., Crootof, R., Evans, O., Page, M., Bryson, J., Yampolskiy, R., & Amodei, D. 2018, February 20. The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. Retrieved from https://maliciousaireport.com.

Brynjolfsson, E., & McAfee, A. 2014. The second machine age: Work, progress, and prosperity in a time of brilliant technologies. New York, NY: W.W. Norton.

Brynjolfsson, E., & McAfee, A. 2017. The business of artificial intelligence. Harvard Business Review, 7: 3–11.

Brynjolfsson, E., & Mitchell, T. 2017. What can machine learning do? Workforce implications. Science, 358: 1530–1534.

Byrnes, N. 2017, June 14. Tim Cook: Apple isn’t falling behind, it’s just not ready to talk about the future. MIT Technology Review. Retrieved from https://www.technologyreview.com/2017/06/14/4525/tim-cook-apple-isnt-falling-behind-its-just-not-ready-to-talk-about-the-future.

Calabretta, G., Gemser, G., & Wijnberg, N. M. 2017. The interplay between intuition and rationality in strategic decision making: A paradox perspective. Organization Studies, 38: 365–401.

Cariani, P. 2010. On the importance of being emergent. Constructivist Foundations, 5: 86–91.

Castelvecchi, D. 2016. Can we open the black box of AI? Nature, 538: 20–23.

Cheney-Lippold, J. 2011. A new algorithmic identity. Theory, Culture & Society, 28: 164–181.

Corrales, M., Fenwick, M., & Forgó, N. (Eds.). 2018. Robotics, AI and the future of law. Singapore: Springer.

Cummings, M. M. 2014. Man versus machine or man + machine? IEEE Intelligent Systems, 29: 62–69.

Cyert, R. M., & March, J. G. 1963. A behavioral theory of the firm. Englewood Cliffs, NJ: Prentice-Hall.

Dastin, J. 2018, October 10. Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. Retrieved from https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G.

Daugherty, P., & Wilson, H. J. 2018. Human + machine: Reimagining work in the age of AI. Boston, MA: Harvard Business Review Press.

Davenport, T. H., & Kirby, J. 2016. Only humans need apply: Winners and losers in the age of smart machines. New York, NY: HarperCollins.

Davis, E., & Marcus, G. 2015. Commonsense reasoning and commonsense knowledge in artificial intelligence. Communications of the ACM, 58: 92–103.

Dean, W. 2016. Computational complexity theory. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. Stanford, CA: Metaphysics Research Lab, Stanford University.

Deng, Y., Bao, F., Kong, Y., Ren, Z., & Dai, Q. 2017. Deep direct reinforcement learning for financial signal representation and trading. IEEE Transactions on Neural Networks and Learning Systems, 28: 653–664.

Derrida, J. 1967. De la grammatologie. Paris, France: Les Éditions de Minuit.

Desai, V. M. 2020. Can busy organizations learn to get better? Distinguishing between the competing effects of constrained capacity on the organizational learning process. Organization Science, 31: 67–84.

Deutsche Telekom. 2018, April 24. Deutsche Telekom’s guidelines for artificial intelligence. Retrieved from https://www.telekom.com/resource/blob/532446/f32ea4f5726ff3ed3902e97dd945fa14/dl-180710-ki-leitlinien-en-data.pdf.

Economist. 2017, May 4. Harvard Business School risks going from great to good. Retrieved from https://www.economist.com/business/2017/05/04/harvard-business-school-risks-going-from-great-to-good.

Elsbach, K. D., & Stigliani, I. 2019. New information technology and implicit bias. Academy of Management Perspectives, 33: 185–206.

Endsley, M. R., & Kiris, E. O. 1995. The out-of-the-loop performance problem and level of control in automation. Human Factors, 37: 381–394.

Fails, J. A., & Olsen, D. R. 2003. Interactive machine learning. Proceedings of the 8th International Conference on Intelligent User Interfaces: 39–45. New York, NY: ACM Press.

Fan, J., Han, F., & Liu, H. 2014. Challenges of big data analysis. National Science Review, 1: 293–314.

Feloni, R. 2017, June 28. Consumer-goods giant Unilever has been hiring employees using brain games and artificial intelligence - and it’s a huge success. Business Insider. Retrieved from https://www.businessinsider.com/unilever-artificial-intelligence-hiring-process-2017-6.

Finkelstein, S. 1992. Power in top management teams: Dimensions, measurement, and validation. Academy of Management Journal, 35: 505–538.

Fleming, N. 2018. How artificial intelligence is changing drug discovery. Nature, 557: S55–S57.
Floridi, L. 2008. Information ethics: A reappraisal. Ethics and Information Technology, 10: 189–204.

Floridi, L. 2013. The philosophy of information. Oxford, U.K.: Oxford University Press.

Floridi, L., & Sanders, J. W. 2004. On the morality of artificial agents. Minds and Machines, 14: 349–379.

Floridi, L., & Taddeo, M. 2016. What is data ethics? Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374: 1–4.

Ford, M. 2015. The rise of the robots: Technology and the threat of mass unemployment. New York, NY: Basic Books.

Fortnow, L. 2013. The golden ticket: P, NP, and the search for the impossible. Princeton, NJ: Princeton University Press.

Foucault, M. 1977. Discipline and punish: The birth of the prison. New York, NY: Pantheon Books.

Gillespie, T. 2014. The relevance of algorithms. In T. Gillespie, P. J. Boczkowski, & K. A. Foot (Eds.), Media technologies: Essays on communication, materiality, and society: 167–194. Cambridge, MA: The MIT Press.

Goodwin, R., Maria, J., Das, P., Horesh, R., Segal, R., Fu, J., & Harris, C. 2017. AI for fragrance design. Proceedings of the Machine Learning for Creativity and Design Workshop at NIPS. San Diego, CA: Neural Information Processing Systems Foundation.

Greenwald, H. S., & Oertel, C. K. 2017. Future directions in machine learning. Frontiers in Robotics and AI, 3(79): 1–7.

Gregory, R. W., Henfridsson, O., Kaganer, E., & Kyriakou, H. 2020. The role of artificial intelligence and data network effects for creating user value. Academy of Management Review. doi: 10.5465/amr.2019.0178.

Haiman, Z. 2019. Learning from the machine. Nature Astronomy, 3: 18–19.

Hartmanis, J., & Stearns, R. E. 1965. On the computational complexity of algorithms. Transactions of the American Mathematical Society, 117: 285–306.

Helbing, D., Frey, B. S., Gigerenzer, G., Hafen, E., Hagner, M., Hofstetter, Y., van den Hoven, J., Zicari, R. V., & Zwitter, A. 2019. Will democracy survive big data and artificial intelligence? In D. Helbing (Ed.), Towards digital enlightenment: 73–98. Cham, Switzerland: Springer.

Hoc, J.-M. 2001. Towards a cognitive approach to human-machine cooperation in dynamic situations. International Journal of Human-Computer Studies, 54: 509–540.

Holzinger, A. 2016. Interactive machine learning for health informatics: When do we need the human-in-the-loop? Brain Informatics, 3: 119–131.

Huang, H.-H., Hsu, J. S.-C., & Ku, C.-Y. 2012. Understanding the role of computer-mediated counter-argument in countering confirmation bias. Decision Support Systems, 53: 438–447.

IBM Think Blog. 2017, January 17. Transparency and trust in the cognitive era. Retrieved from https://www.ibm.com/blogs/think/2017/01/ibm-cognitive-principles.

Jaeger, H. 2016. Deep neural reasoning. Nature, 538: 467–468.

Jarzabkowski, P., Bednarek, R., Chalkias, K., & Cacciatori, E. 2019. Exploring inter-organizational paradoxes: Methodological lessons from a study of a grand challenge. Strategic Organization, 17: 120–132.

Jordan, M. I., & Mitchell, T. M. 2015. Machine learning: Trends, perspectives, and prospects. Science, 349: 255–260.

Kellogg, K., Valentine, M., & Christin, A. 2020. Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14: 366–410.

Kurzweil, R. 2014. The singularity is near. In R. L. Sandler (Ed.), Ethics and emerging technologies: 393–406. London, U.K.: Palgrave Macmillan.

Langley, P., & Simon, H. A. 1995. Applications of machine learning and rule induction. Communications of the ACM, 38: 54–64.

Larrick, R. P. 2004. Debiasing. In D. J. Koehler & N. Harvey (Eds.), Blackwell handbook of judgment and decision making: 316–338. Malden, MA: Blackwell.

La Roche, J. 2017, November 16. IBM’s Rometty: The skills gap for tech jobs is “the essence of divide.” Yahoo! Finance. Retrieved from https://finance.yahoo.com/news/ibms-rometty-skills-gap-tech-jobs-essence-divide-175847484.html.

Levinthal, D. A., & March, J. G. 1993. The myopia of learning. Strategic Management Journal, 14: 95–112.

Lichfield, G. 2018, October 23. The precision medicine issue. MIT Technology Review. Retrieved from https://www.technologyreview.com/s/612285/editors-letter-the-precision-medicine-issue.

Lindebaum, D., Vesa, M., & den Hond, F. 2020. Insights from The Machine Stops to better understand rational assumptions in algorithmic decision-making and its implications for organizations. Academy of Management Review, 45: 247–263.

Lüscher, L. S., & Lewis, M. W. 2008. Organizational change and managerial sensemaking: Working through paradox. Academy of Management Journal, 51: 221–240.
Lyon, D. 2003. Surveillance as social sorting: Privacy, risk, and digital discrimination. London, U.K.: Routledge.

Marr, B. 2018, December 14. The amazing ways how Unilever uses artificial intelligence to recruit & train thousands of employees. Forbes. Retrieved from https://www.forbes.com/sites/bernardmarr/2018/12/14/the-amazing-ways-how-unilever-uses-artificial-intelligence-to-recruit-train-thousands-of-employees.

McDuff, D., & Czerwinski, M. 2018. Designing emotionally sentient agents. Communications of the ACM, 61: 74–83.

McFarland, D. A., Lewis, K., & Goldberg, A. 2016. Sociology in the era of big data: The ascent of forensic social science. American Sociologist, 47: 12–35.

Miron-Spektor, E., Ingram, A., Keller, J., Smith, W. K., & Lewis, M. W. 2018. Microfoundations of organizational paradox: The problem is how we think about the problem. Academy of Management Journal, 61: 26–45.

Mitchell, T. M. 1997. Machine learning. Boston, MA: McGraw-Hill.

Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. 2016. The ethics of algorithms: Mapping the debate. Big Data & Society, 3: 1–21.

More, M., & Vita-More, N. 2013. The transhumanist reader: Classical and contemporary essays on the science, technology, and philosophy of the human future. Malden, MA: Wiley-Blackwell.

Nadella, S. 2016, June 28. The partnership of the future. Slate. Retrieved from https://slate.com/technology/2016/06/microsoft-ceo-satya-nadella-humans-and-a-i-can-work-together-to-solve-societys-challenges.html.

Newell, A., Shaw, J. C., & Simon, H. A. 1959. Report on a general problem solving program. International Conference on Information Processing: 256–264. Santa Monica, CA: Rand Corporation.

Newell, A., & Simon, H. 1956. The logic theory machine—A complex information processing system. IRE Transactions on Information Theory, 2: 61–79.

Nilsson, N. J. 1971. Problem-solving methods in artificial intelligence. New York, NY: McGraw-Hill.

Norris, P. 2001. Digital divide: Civic engagement, information poverty, and the internet worldwide. Cambridge, U.K.: Cambridge University Press.

O’Neil, C. 2016. Weapons of math destruction: How big data increases inequality and threatens democracy. New York, NY: Crown.

Orlikowski, W. J. 2007. Sociomaterial practices: Exploring technology at work. Organization Studies, 28: 1435–1448.

Panait, L., & Luke, S. 2005. Cooperative multi-agent learning: The state of the art. Autonomous Agents and Multi-Agent Systems, 11: 387–434.

Parasuraman, R., & Manzey, D. H. 2010. Complacency and bias in human use of automation: An attentional integration. Human Factors, 52: 381–410.

Parisi, G. I., Kemker, R., Part, J. L., Kanan, C., & Wermter, S. 2019. Continual lifelong learning with neural networks: A review. Neural Networks, 113: 54–71.

Poole, M. S., & van de Ven, A. H. 1989. Using paradox to build management and organization theories. Academy of Management Review, 14: 562–578.

Press, G. 2016, September 21. Only humans need apply is a must-read on AI for Facebook executives. Forbes. Retrieved from https://www.forbes.com/sites/gilpress/2016/09/21/only-humans-need-apply-is-a-must-read-on-ai-for-facebook-executives.

Putnam, L. L., Fairhurst, G. T., & Banghart, S. 2016. Contradictions, dialectics, and paradoxes in organizations: A constitutive approach. Academy of Management Annals, 10: 65–171.

Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J.-F., Breazeal, C., Crandall, J. W., Christakis, N. A., Couzin, I. D., Jackson, M. O., Jennings, N. R., Kamar, E., Kloumann, I. M., Larochelle, H., Lazer, D., McElreath, R., Mislove, A., Parkes, D. C., Pentland, A., Roberts, M. E., Shariff, A., Tenenbaum, J. B., & Wellman, M. 2019. Machine behaviour. Nature, 568: 477–486.

Rai, A., Constantinides, P., & Sarker, S. 2019. Next-generation digital platforms: Toward human-AI hybrids. MIS Quarterly, 43: iii–ix.

Raisch, S., Hargrave, T. J., & van de Ven, A. H. 2018. The learning spiral: A process perspective on paradox. Journal of Management Studies, 55: 1507–1526.

Riley, T. 2018, March 13. Get ready, this year your next job interview may be with an A.I. robot. CNBC. Retrieved from https://www.cnbc.com/2018/03/13/ai-job-recruiting-tools-offered-by-hirevue-mya-other-start-ups.html.

Russell, S., & Norvig, P. 2009. Artificial intelligence: A modern approach (3rd ed.). Englewood Cliffs, NJ: Prentice-Hall.

Sabherwal, R., & Jeyaraj, A. 2015. Information technology impacts on firm performance: An extension of Kohli and Devaraj (2003). MIS Quarterly, 39: 809–836.

Schad, J., & Bansal, P. 2018. Seeing the forest and the trees: How a systems perspective informs paradox research. Journal of Management Studies, 55: 1490–1506.

Schad, J., Lewis, M. W., Raisch, S., & Smith, W. K. 2016. Paradox research in management science: Looking back to move forward. Academy of Management Annals, 10: 5–64.
Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., Hubert, T., Baker, L., Lai, M., Bolton, A., Chen, Y., Lillicrap, T., Hui, F., Sifre, L., van den Driessche, G., Graepel, T., & Hassabis, D. 2017. Mastering the game of Go without human knowledge. Nature, 550: 354–359.

Simon, H. A. 1987. Two heads are better than one: The collaboration between AI and OR. Interfaces, 17: 8–15.

Skitka, L. J., Mosier, K., & Burdick, M. D. 2000. Accountability and automation bias. International Journal of Human-Computer Studies, 52: 701–717.

Sloan, M. C. 2010. Aristotle’s Nicomachean Ethics as the original Locus for the Septem Circumstantiae. Classical Philology, 105: 236–251.

Smith, W. K., & Lewis, M. W. 2011. Toward a theory of paradox: A dynamic equilibrium model of organizing. Academy of Management Review, 36: 381–403.

Smith, W. K., & Tracey, P. 2016. Institutional complexity and paradox theory: Complementarities of competing demands. Strategic Organization, 14: 455–466.

Staw, B. M. 1981. The escalation of commitment to a course of action. Academy of Management Review, 6: 577–587.

Stephan, M., Brown, D., & Erickson, R. 2017, February 28. Talent acquisition: Enter the cognitive recruiter. Deloitte Insights. Retrieved from https://www2.deloitte.com/insights/us/en/focus/human-capital-trends/2017/predictive-hiring-talent-acquisition.html.

Sterman, J. 2000. Business dynamics: Systems thinking and modeling for a complex world. Boston, MA: Irwin/McGraw-Hill.

Taylor, M. E., & Stone, P. 2009. Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research, 10: 1633–1685.

Topol, E. J. 2019. High-performance medicine: The convergence of human and artificial intelligence. Nature Medicine, 25: 44–56.

Tsoukas, H. 2017. Don’t simplify, complexify: From disjunctive to conjunctive theorizing in organization and management studies. Journal of Management Studies, 54: 132–153.

University of Cambridge. 2016. “The best or worst thing to happen to humanity” - Stephen Hawking launches Centre for the Future of Intelligence. Retrieved from https://www.cam.ac.uk/research/news/the-best-or-worst-thing-to-happen-to-humanity-stephen-hawking-launches-centre-for-the-future-of.

Walsh, T. 2017. The singularity may never be near. AI Magazine, 38: 58–62.

Webb, S. 2018. Deep learning for biology. Nature, 554: 555–557.

Weick, K. E. 2007. The generative properties of richness. Academy of Management Journal, 50: 14–19.

Westcott Grant, K. 2018, May 28. Netflix’s data-driven strategy strengthens claim for “best original content” in 2018. Forbes. Retrieved from https://www.forbes.com/sites/kristinwestcottgrant/2018/05/28/netflixs-data-driven-strategy-strengthens-lead-for-best-original-content-in-2018.

Wladawsky-Berger, I. 2018, April 13. Human + Machine: The impact of AI on business transformation. Wall Street Journal. Retrieved from https://blogs.wsj.com/cio/2018/04/13/human-machine-the-impact-of-ai-on-business-transformation.

Zuboff, S. 1988. In the age of the smart machine: The future of work and power. New York, NY: Basic Books.

Zuboff, S. 2019. The age of surveillance capitalism: The fight for a human future at the new frontier of power. New York, NY: PublicAffairs.

Sebastian Raisch (sebastian.raisch@unige.ch) is professor of strategy at Geneva School of Economics and Management, University of Geneva, Switzerland. He received his Ph.D. in management from the University of Geneva and his habilitation from the University of St. Gallen. His research interests include artificial intelligence, organizational ambidexterity, and organizational paradox.

Sebastian Krakowski (sebastian.krakowski@hhs.se) is a Postdoctoral Fellow at the House of Innovation, Stockholm School of Economics, Sweden. He obtained his Ph.D. from the University of Geneva and was previously a visiting researcher at Warwick Business School. His research explores artificial intelligence’s behavioral implications in organizations and society.