
We’re Focusing on the Wrong Kind of AI Apocalypse

IDEAS
BY ETHAN MOLLICK
APRIL 1, 2024 7:00 AM EDT
Mollick is a professor of management at Wharton, specializing in
entrepreneurship and innovation. His research has been featured in various
publications, including Forbes, The New York Times, and The Wall Street
Journal. He is the creator of numerous educational games on a variety of topics.
He is the author of Co-Intelligence: Living and Working with AI.
Conversations about the future of AI are too apocalyptic. Or rather, they focus on
the wrong kind of apocalypse.
There is considerable concern about the future of AI, especially as a number of
prominent computer scientists have raised the risks of Artificial General
Intelligence (AGI)—an AI smarter than a human being. They worry that an AGI
will lead to mass unemployment or that AI will grow beyond human control—or
worse (the movies Terminator and 2001 come to mind).
Discussing these concerns seems important, as does thinking about the much
more mundane and immediate threats of misinformation, deep fakes, and
proliferation enabled by AI. But this focus on apocalyptic events also robs most of
us of our agency. AI becomes a thing we either build or don’t build, something
no one outside of a few dozen Silicon Valley executives and top government
officials really has any say over.

But the reality is we are already living in the early days of the AI Age, and, at
every level of an organization, we need to make some very important decisions
about what that actually means. Waiting to make these choices means they will
be made for us. It opens us up to many little apocalypses, as jobs and workplaces
are disrupted one-by-one in ways that change lives and livelihoods.
We know this is a real threat, because, regardless of any pauses in AI creation,
and without any further AI development beyond what is available today, AI is
going to impact how we work and learn. We know this for three reasons: First, AI
really does seem to supercharge productivity in ways we have never really seen
before. An early controlled study in September 2023 showed large-scale
improvements on work tasks as a result of using AI, with time savings of more
than 30% and higher quality output. Add to that the near-immaculate test
scores achieved by GPT-4, and it is obvious why AI use is already
becoming common among students and workers, even if they are keeping it
secret.
We also know that AI is going to change how we work and learn because it is
affecting a set of workers who never really faced an automation shock before.
Multiple studies show the jobs most exposed to AI (and therefore the people
who will have to make the hardest pivot as a result of AI) are those held by
educated and highly paid workers, and the ones with the most creativity in
their jobs. The pressure
for organizations to take a stand on a technology that affects these workers will
be immense, especially as AI-driven productivity gains become widespread.
These tools are on their way to becoming deeply integrated into our work
environments. Microsoft, for instance, has released its GPT-4-based Copilot
tools for its ubiquitous Office applications, even as Google does the same for
its own office tools.
As a result, a natural instinct among many managers might be to say “fire people,
save money.” But it doesn’t need to be that way—and it shouldn’t be. There are
many reasons why companies should not turn efficiency gains into headcount or
cost reduction. Companies that figure out how to use their newly productive
workforce have the opportunity to dominate those who try to keep their post-AI
output the same as their pre-AI output, just with fewer people. Companies that
commit to maintaining their workforce will likely have employees as partners,
who are happy to teach others about the uses of AI at work, rather than scared
workers who hide AI for fear of being replaced. Psychological safety is critical to
innovative team success, especially when confronted with rapid change. How
companies use this extra efficiency is a choice, and a very consequential one.
There are hints buried in the early studies of AI about a way forward. Workers,
while worried about AI, tend to like using it because it removes the most tedious
and annoying parts of their job, leaving them with the most interesting tasks. So,
even as AI removes some previously valuable tasks from a job, the work that is
left can be more meaningful and of higher value. But this is not inevitable, so
managers and leaders must decide whether and how to commit themselves to
reorganizing work around AI in ways that help, rather than hurt, their human
workers. They need to ask: “What is my vision for how AI makes work better,
rather than worse?”
Rather than just being worried about one giant AI apocalypse, we need to worry
about the many small catastrophes that AI can bring. Unimaginative or stressed
leaders may decide to use these new tools for surveillance and for layoffs.
Educators may decide to use AI in ways that leave some students behind. And
those are just the obvious problems.
But AI does not need to be catastrophic. Correctly used, AI can create local
victories, where previously tedious or useless work becomes productive and
empowering. Where students who were left behind can find new paths forward.
And where productivity gains lead to growth and innovation.
The thing about a widely applicable technology is that decisions about how it is
used are not limited to a small group of people. Many people in organizations will
play a role in shaping what AI means for their team, their customers, their
students, their environment. But to make those choices matter, serious
discussions need to start in many places—and soon. We can’t wait for decisions
to be made for us, and the world is advancing too fast to remain passive.
