
FRONTIERS

DECISION MAKING

What Humans Lose When We Let AI Decide


Why you should start worrying about artificial intelligence now.
BY CHRISTINE MOSER, FRANK DEN HOND, AND DIRK LINDEBAUM

It’s been more than 50 years since HAL, the malevolent computer in the movie 2001: A Space Odyssey, first terrified audiences by turning against the astronauts he was supposed to protect. That cinematic moment captures what many of us still fear in AI: that it may gain superhuman powers and subjugate us. But instead of worrying about futuristic sci-fi nightmares, we should wake up to an equally alarming scenario that is unfolding before our eyes: We are increasingly, unsuspectingly yet willingly, abdicating our power to make decisions based on our own judgment, including our moral convictions. What we believe is “right” risks becoming no longer a question of ethics but simply what the “correct” result of a mathematical calculation is.

Day to day, computers already make many decisions for us, and on the surface, they seem to be doing a good job. In business, AI systems execute financial transactions and help HR departments assess job applicants. In our private lives, we rely on personalized recommendations when shopping online, monitor our physical health with wearable devices, and live in homes equipped with “smart” technologies that control our lighting, climate, entertainment systems, and appliances.

Unfortunately, a closer look at how we are using AI systems today suggests that we may be wrong in assuming that their growing power is mostly for the good. While much of the current critique of AI is still framed by science fiction dystopias, the way it is being used now is increasingly dangerous. That’s not because Google and Alexa are breaking bad but because we now rely on machines to make decisions for us and thereby increasingly substitute data-driven calculations for human judgment. This risks changing our morality in fundamental, perhaps irreversible, ways, as we argued in our recent essay in Academy of Management Learning & Education (which we’ve drawn on for this article).1

When we employ judgment, our decisions take into account the social and historical context and different possible outcomes, with the aim, as philosopher John Dewey wrote, “to carry an incomplete situation to its fulfilment.”2 Judgment relies not only on reasoning but also, and importantly so, on capacities such as imagination, reflection, examination, valuation, and empathy. Therefore, it has an intrinsic moral dimension.

Algorithmic systems, in contrast, output decisions after processing data through an accumulation of calculus, computation, and rule-driven rationality — what we call reckoning.3 The problem is that having processed our data, the answers these systems give are constrained by the narrow objectives for which they were designed, without regard for potentially harmful consequences that violate our moral standards of justice and fairness. We’ve seen this in the error-prone, racially biased predictive analytics used by many American judges in sentencing.4



And in the Netherlands, some 40,000 families unfairly suffered profound financial harm and other damages due to the tax authorities’ reliance on a flawed AI system to identify potential fraudulent use of a child benefit tax-relief program. The ensuing scandal forced the Dutch government to resign in January 2021.

Mind the Gap

These kinds of unintended consequences should come as no surprise. Algorithms are nothing more than “precise recipes that specify the exact sequence of steps required to solve a problem,” as one definition puts it.5 The precision and speed with which the problem is solved make it easy to accept the algorithm’s answer as authoritative, particularly as we interact with more and more such systems designed to learn and then act autonomously.

But as the examples above suggest, this gap between our inflated expectation of what an algorithm can do and its actual capabilities can be dangerous. Although computers can now learn independently, draw inferences from environmental stimuli, and then act on what they have learned, they have limits. The common interpretation of these limits as “bias” is missing the point, because to call it bias implies that there is the possibility that the limits of AI reckoning can be overcome, that AI will eventually be able to eliminate bias and render a complete and truthful representation of reality. But that is not the case. The problem lies deeper.

AI systems base their decisions on what philosopher Vilém Flusser called “technical images”: the abstract patterns of pixels or bytes — the digital images of the “world” that the computer has been trained to produce and process. A technical image is the computed transformation of digitalized data about some object or idea, and, as such, it is a representation of the world. However, as with any representation, it is incomplete; it makes discrete what is continuous and renders static what is fluid, and therefore one must learn how to use it intelligently. Like a two-dimensional map, it is a selective representation, perhaps even a misrepresentation. During the making of the map, a multidimensional reality is reduced by forcing it to fit onto a two-dimensional surface with a limited number of colors and symbols. Many of us still know how to use a map. The trouble with AI systems is that we cannot fully understand how the machine “draws” its technical images — what it emphasizes, what it omits, and how it connects pieces of information in transitions toward the technical image. As statistician George Box once noted, “All models are wrong, but some are useful.” We add, “If we don’t know how a model is wrong, it is not only useless, but using it can be dangerous.” This is because reckoning based on a technical image can never reveal the full truth, simply because this is not what it has been made for.

Through a process of reckoning, the machines make decisions — which are essentially based on technical images — that have an air of precision, and of being indisputably correct. The risk is that we end up fashioning ourselves and our society based on the image that the technology has formed of us. If we rely too much on these algorithms, we risk mistaking the map for the territory, like the unfortunate motorists in Marseille, France, who let their GPS direct them straight off a quayside street and into the harbor.

Consider, for instance, the way people use wearable electronic devices that monitor bodily functions, including pulse rate, steps taken, temperature, and hours of sleep, as indicators of health. Instead of asking yourself how you feel, you can check your wearable. It may tell you that you should be concerned because you don’t take the minimum number of steps generally recommended for a healthy life — a target that may make sense for many but could be counterproductive if the air quality is bad or you have weak lungs.

How can we capture the undeniable benefits of these smart machines without mistaking the machine’s abstraction for the whole story and delegating to AI the complicated and consequential decisions that should be made by humans?

The problem is less the machines than we ourselves. Our trust in AI leads us to confuse reckoning — decision-making based on the summing up of various kinds of data and technical images — with judgment. Too much faith in the machine — and in our ability to program and control that machine — can produce untenable, and perhaps even irreversible, situations. We see(k) in AI’s confident answers the kind of certainty that our ancestors sought in vain in entrails, tarot cards, or the stars above.

Some observers — scholars, managers, policy makers — believe that following responsible AI development principles is a valid and effective approach to injecting ethical considerations into AI systems.



We agree when they argue that as a cultural product, AI is bound to reflect the outlooks of people who commissioned the algorithm, wrote its code, and then used the program. We disagree when they say that therefore careful attention to the overall project and its programming is all that is needed to keep the results mostly positive. Critical voices about our ability to instill ethics into AI are increasing, asking the basic question of whether we can “teach” ethics to AI systems in the first place. Going back to the roots of AI, we should realize that the very fundaments of judgment and reckoning are different and cannot be reconciled. This means that AI systems will never be capable of judgment, only of reckoning. Any attempt to inject ethics into AI therefore means that the ethics will be straightjacketed, distorted, and impoverished to fit the characteristics of AI’s reckoning.

Those who believe in ethical AI consider the technology to be a tool, on par with other tools that are essentially extensions of the human body, such as lenses (to extend the view of the eyes), or spoons and screwdrivers (to extend the dexterity of the hands). But we are skeptical. In our view, as with wearables, it’s crucial to consider the user as part of the tool: We need to think hard about how algorithms shape us. If the substitution of data-driven reckoning for human judgment is something to be very cautious about, then following the rallying cry to develop responsible AI will not help us break the substitution. What can we do?

A Call to Action (and Inaction)

We see two ways forward. First, we want to make a call for inaction: We need to stop trying to automate everything that can be automated just because it is technically feasible. This trend is what Evgeny Morozov describes as technological solutionism: finding solutions to problems that are not really problems at all.6 Mesmerized by the promise of eternal improvement, technological solutionism blunts our ability to think deeply about whether, when, how, and why to use a given tool.

Second, we also make a call to action. As a society, we need to learn what AI really is and how to work with it. Specifically, we need to learn how to understand and use its technical images, just as we needed to learn how to work with models and maps — not just in a general, abstract, and idealized sense, but for each specific project where it is envisioned.

In practice, that means that managers should judge on a case-by-case basis whether, how, and why to use AI. Avoiding blanket applications of the technology means taking AI’s limits seriously. For example, while algorithms gain greater predictive power when fed more data, they will always assume a static model of society, when, in fact, time and context are ever changing. And once managers have judged AI to be a useful tool for a given project, we see essential merit in managers developing and maintaining an attitude of vigilance and doubt. This is because, in the absence of vigilance and doubt, we might miss the moment when our decision-making frame has transitioned from judgment to reckoning. This would mean that we would hand over our power to make moral decisions to the machine’s reckoning.

Given the rapid advances of AI, there isn’t much time. We all must relearn how to make decisions informed by our own judgment rather than let ourselves be lulled by the false assurance of algorithmic reckoning.

Christine Moser (@tineadam) is an associate professor of organization theory at Vrije Universiteit Amsterdam in the Netherlands. Frank den Hond is the Ehrnrooth Professor in Management and Organisation at the Hanken School of Economics in Finland and is affiliated with Vrije Universiteit Amsterdam. Dirk Lindebaum is a senior professor in organization and management at Grenoble Ecole de Management. Comment on this article at https://sloanreview.mit.edu/x/63307.

REFERENCES

1. C. Moser, F. den Hond, and D. Lindebaum, “Morality in the Age of Artificially Intelligent Algorithms,” Academy of Management Learning & Education, April 7, 2021, https://journals.aom.org.

2. J. Dewey, “Essays in Experimental Logic” (Chicago: University of Chicago Press, 1916), 362.

3. B.C. Smith, “The Promise of Artificial Intelligence: Reckoning and Judgment” (Cambridge, Massachusetts: MIT Press, 2019).

4. K.B. Forrest, “When Machines Can Be Judge, Jury, and Executioner: Justice in the Age of Artificial Intelligence” (Singapore: World Scientific Publishing, 2021).

5. J. MacCormick, “Nine Algorithms That Changed the Future” (Princeton, New Jersey: Princeton University Press, 2012), 3.

6. E. Morozov, “To Save Everything, Click Here: The Folly of Technological Solutionism” (New York: PublicAffairs, 2013).

Reprint 63307.
Copyright © Massachusetts Institute of Technology, 2022. All rights reserved.
