What humans lose when we let AI decide
[DECISION MAKING]
It's been more than 50 years since HAL, the malevolent computer in the movie 2001: A Space Odyssey, first terrified audiences by turning against the astronauts he was supposed to protect. That cinematic moment captures what many of us still fear in AI: that it may gain superhuman powers and subjugate us. But instead of worrying about futuristic sci-fi nightmares, we should wake up to an equally alarming scenario that is unfolding before our eyes: We are increasingly, unsuspectingly yet willingly, abdicating our power to make decisions based on our own judgment, including our moral convictions. What we believe is "right" risks becoming no longer a question of ethics but simply the "correct" result of a mathematical calculation.

Day to day, computers already make many decisions for us, and on the surface, they seem to be doing a good job. In business, AI systems execute financial transactions and help HR departments assess job applicants. In our private lives, we rely on personalized recommendations when shopping online, monitor our physical health with wearable devices, and live in homes equipped with "smart" technologies that control our lighting, climate, entertainment systems, and appliances.

Unfortunately, a closer look at how we are using AI systems today suggests that we may be wrong in assuming that their growing power is mostly for the good. While much of the current critique of AI is still framed by science fiction dystopias, the way it is being …

… importantly so, on capacities such as imagination, reflection, examination, valuation, and empathy. Therefore, it has an intrinsic moral dimension. Algorithmic systems, in contrast, output decisions after processing data through an accumulation of calculus, computation, and rule-driven rationality — what we call reckoning.3 The problem is that having processed our data, the answers these systems give are constrained by the narrow objectives for which they were designed, without regard for potentially harmful consequences that violate our moral standards of justice and fairness. We've seen this in the error-prone, racially biased predictive analytics used by many American judges in sentencing.4
… extensions of the human body, such as lenses (to extend the view of the eyes), or spoons and screwdrivers (to extend the dexterity of the hands). But we are skeptical. In our view, as with wearables, it's crucial to consider the user as part of the tool: We need to think hard about how algorithms shape us. If the substitution of data-driven reckoning for human judgment is something to be very cautious about, then following the rallying cry to develop responsible AI will not help us break the substitution. What can we do?

A Call to Action (and Inaction)

We see two ways forward.

First, we want to make a call for inaction: We need to stop trying to automate everything that can be automated just because it is technically feasible. This trend is what Evgeny Morozov describes as technological solutionism: finding solutions to …

… In practice, that means that managers should judge on a case-by-case basis whether, how, and why to use AI. Avoiding blanket applications of the technology means taking AI's limits seriously. For example, while algorithms gain greater predictive power when fed more data, they will always assume a static model of society, when, in fact, time and context are ever changing. And once managers have judged AI to be a useful tool for a given project, we see essential merit in managers developing and maintaining an attitude of vigilance and doubt. This is because, in the absence of vigilance and doubt, we might miss the moment when our decision-making frame has transitioned from judgment to reckoning. That would mean handing over our power to make moral decisions to the machine's reckoning.

REFERENCES

1. C. Moser, F. den Hond, and D. Lindebaum, "Morality in the Age of Artificially Intelligent Algorithms," Academy of Management Learning & Education, April 7, 2021, https://journals.aom.org.
2. J. Dewey, "Essays in Experimental Logic" (Chicago: University of Chicago Press, 1916), 362.
3. B.C. Smith, "The Promise of Artificial Intelligence: Reckoning and Judgment" (Cambridge, Massachusetts: MIT Press, 2019).
4. K.B. Forrest, "When Machines Can Be Judge, Jury, and Executioner: Justice in the Age of Artificial Intelligence" (Singapore: World Scientific Publishing, 2021).
5. J. MacCormick, "Nine Algorithms That Changed the Future" (Princeton, New Jersey: Princeton University Press, 2012), 3.
6. E. Morozov, "To Save Everything, Click Here: The Folly of Technological Solutionism" (New York: PublicAffairs, 2013).

Reprint 63307.
Copyright © Massachusetts Institute of Technology, 2022. All rights reserved.