
Frontiers

RESPONSIBLE HUMANS

How AI Skews Our Sense of Responsibility


Research shows how using an AI-augmented system may affect humans’ perception of their own agency and responsibility.
By Ryad Titah

As artificial intelligence plays an ever-larger role in automated systems and decision-making processes, the question of how it affects humans’ sense of their own agency is becoming less theoretical — and more urgent. It’s no surprise that humans often defer to automated decision recommendations, with exhortations to “trust the AI!” spurring user adoption in corporate settings. However, there’s growing evidence that AI diminishes users’ sense of responsibility for the consequences of those decisions.

This question is largely overlooked in current discussions about responsible AI. In reality, responsible AI practices are intended to manage legal and reputational risk — a limited view of responsibility, if we draw on German philosopher Hans Jonas’s useful conceptualization. He defined three types of responsibility, but AI practice appears concerned with only two. The first is legal responsibility, wherein an individual or corporate entity is held responsible for repairing damage or compensating for losses, typically via civil law; the second is moral responsibility, wherein individuals are held accountable via punishment, as in criminal law.

What we’re most concerned about here is the third type, what Jonas called the sense of responsibility. It’s what we mean when we speak admiringly of someone “acting responsibly.” It entails critical thinking and predictive reflection on the purpose and possible consequences of one’s actions, not only for oneself but for others. It’s this sense of responsibility that AI and automated systems can alter.

To gain insight into how AI affects users’ perceptions of their own responsibility and agency, we conducted several studies.
Two studies examined what influences a driver’s decision to regain control of a self-driving vehicle when the autonomous driving system is activated. In the first study, we found that the more individuals trust the autonomous system, the less likely they are to maintain the situational awareness that would enable them to regain control of the vehicle in the event of a problem or incident. Even though respondents overall said they accepted responsibility when operating an autonomous vehicle, their sense of agency had no significant influence on their intention to regain control of the vehicle in the event of a problem or incident. On the basis of these findings, we might expect to find that a sizable proportion of users feel encouraged, in the presence of an automated system, to shun responsibility to intervene.

In the second study, conducted with the Société de l’Assurance Automobile du Québec, a government agency that administers the province’s public auto insurance program, we were able to conduct more-refined analyses. We surveyed 1,897 drivers (mostly of Tesla and Mercedes cars with some level of autonomous driving capabilities) to look at the separate effect of each type of responsibility on the driver’s intention to regain control of the vehicle, and we found that only the sense of responsibility had a significant effect. As in the first study, the more trust respondents reported having in the automated system, the lower their intention to regain control behind the wheel. It’s particularly notable that only the proactive, individual sense of responsibility motivated respondents to act, which indicates that the threat of liability will be insufficient to prevent AI harm.

In another study, aimed at understanding the use of risk-prediction algorithms in the context of criminal justice in the U.S., a significant proportion of the 32 respondents relied excessively on these tools to make their decisions. We determined overuse based on respondents’ reports that they used the tools to determine the length or severity of a sentence, strictly abided by the tools’ results, and took the results provided by the tools for granted. Besides raising fundamental legal and ethical questions about the fairness, equity, and transparency of such automated judicial decisions, this result also points to an abdication of individual responsibility in favor of algorithms.

Overall, these initial results confirm what has been observed in similar contexts, namely that individuals tend to lose their sense of agency in the presence of an intelligent system. When individuals feel less control — or that something else is in control — their sense of responsibility is likewise diminished.

In light of the above, we must ask whether keeping a human in the loop, which is increasingly understood as a best practice for responsible AI use, is an adequate safeguard. Instead, the question becomes: How do we encourage humans to accept that they have proactive responsibility and to exercise it?

As noted at the outset of this article, managers tend to exacerbate the problem by encouraging trust in AI in order to increase adoption. This message is often couched in terms that denigrate human cognition and decision-making as limited and biased compared with AI recommendations — despite the fact that all AI systems necessarily reflect human biases in data selection, specification, and so forth. This position assumes that every AI decision is correct and superior to the human decision, and it invites humans to disengage in favor of the AI.

To offset this tendency, we recommend shifting the emphasis in employee communications from “trust the AI” to “understand the AI” in order to engender informed and conditional trust in the outputs of AI systems and processes. Managers need to educate users to understand the AI’s automated processes, decision points, and potential for errors or harm. It’s also critical that users be aware of and understand the nuances of the edge cases where AI systems can flounder and the risk of a bad automated decision is greatest.

Managers also need to position and prepare employees to own and exercise their sense of responsibility. Toyota famously models this by empowering anyone on the factory floor to shut down the production line if they see a problem. This will encourage employees to systematically question AI systems and processes and therefore maintain their sense of agency in order to avoid harmful consequences for their organizations.

Ultimately, a culture of responsibility — versus a culture of avoiding culpability — is always going to mean a healthier and more ethically robust organization. It will be even more important to foster such a culture in the age of AI by leaving clear spaces of possibility for human intelligence. Otherwise, Roderick Seidenberg’s prediction about technologies far less powerful than current AI could materialize:

“The functioning of the system, aided increasingly by automation, acts — certainly by no malicious intent — as a channel of intelligence, by which the relatively minute individual contributions of engineers, inventors, and scientists eventually create a reservoir of established knowledge and procedure no individual can possibly encompass. Thus, man appears to have surpassed himself, and indeed in a certain sense he has. Paradoxically, however, in respect to the system as a whole, each individual becomes perforce an unthinking beneficiary — the mindless recipient of a socially established milieu. Hence we speak, not without justification, of a push-button civilization — a system of things devised by intelligence for the progressive elimination of intelligence!”²

Ryad Titah is associate professor and chair of the Academic Department of Information Technologies at HEC Montréal. The research in progress described in this article is being conducted with Zoubeir Tkiouat, Pierre-Majorique Léger, Nicolas Saunier, Philippe Doyon-Poulin, Sylvain Sénécal, and Chaïma Merbouh.

Reprint 65427. Copyright © Massachusetts Institute of Technology, 2024. All rights reserved.
