
NARRATOR:

This is the world’s most complex board game.


There are more possible moves in the game of Go than there are
atoms in the universe. Legend has it that in 2300 BCE, Emperor Yao
devised it to teach his son discipline, concentration and balance.
And over 4,000 years later, this ancient Chinese game would signal
the start of a new industrial age.
It was 2016, in Seoul, South Korea.
FEMALE NEWSREADER:
Can machines overtake human intelligence?
A breakthrough moment when the world champion of the Asian board
game Go takes on an AI program developed by Google.
LEE SEDOL:
[Speaking Korean] I'm confident about the match. I believe that human
intuition is still too advanced for AI to have caught up.
PEDRO DOMINGOS, University of Washington:
In countries where it’s very popular, like China and Japan and South
Korea, to them, Go is not just a game. It's how you learn strategy. It
has an almost spiritual component.
You know, if you talk to South Koreans, then Lee Sedol is the world's
greatest Go player; he's a national hero in South Korea. They were
sure that Lee Sedol would beat AlphaGo, hands down.
NARRATOR:
Google’s AlphaGo was a computer program that, starting with the
rules of Go and a database of historical games, had been designed to
teach itself.
KAI-FU LEE, AI scientist:
I was one of the commentators at the Lee Sedol games. And yes, it
was watched by tens of millions of people.
NARRATOR:
Throughout East Asia, this was seen as a sports spectacle with
national pride at stake.
But much more was in play.
This was the public unveiling of a form of artificial intelligence called
"deep learning" that mimics the neural networks of the human brain.
NICHOLAS THOMPSON, Editor-in-chief, Wired:
So what happens with machine learning, artificial intelligence, initially
with AlphaGo, is that the machine is fed all kinds of Go games and
then it studies them; learns from them; and figures out its own moves.
And because it's an AI system, it's not just following instructions; it's
figuring out its own instructions. It comes up with moves that humans
hadn't thought of before. So it studies games that humans have
played, it knows the rules and then it comes up with creative moves.
FEMALE COMMENTATOR:
[Speaking Korean] Oh, totally unthinkable move.
MALE COMMENTATOR 1:
That's a very surprising move.
MALE COMMENTATOR 2:
I thought it was a mistake.
NARRATOR:
Game 2, move 37.
KAI-FU LEE:
That move 37 was a move that humans could not fathom, yet it
ended up being brilliant and woke people up to say, "Wow, after
thousands of years of playing, we never thought about making a move
like that."
MALE COMMENTATOR 1:
Oh, he resigned. It looks like Lee Sedol has just resigned, actually.
NARRATOR:
In the end, the scientists watched their algorithms win four of the five
games; Lee Sedol took one.
PEDRO DOMINGOS:
So what happened with Go, first and foremost, was a huge victory for
DeepMind and for AI. It wasn't that the computers beat the humans; it
was that one type of intelligence beat another.
NARRATOR:
Artificial intelligence had proven it could marshal a vast amount of
data, beyond anything any human could handle, and use it to teach
itself how to predict an outcome.
The commercial implications were enormous.
KAI-FU LEE:
While AlphaGo is a toy game, its success and its waking everyone
up is, I think, going to be remembered as the pivotal moment where AI
became mature and everybody jumped on the bandwagon.
NARRATOR:
This is about the consequences of that defeat. How the AI algorithms
are ushering in a new age of great potential and prosperity.
But an age that will also deepen inequality, challenge democracy and
divide the world into two AI superpowers.
Tonight, five stories about how artificial intelligence is changing our
world.

One: China Has a Plan


NARRATOR:
China has decided to chase the AI future.
MALE CONFERENCE PRESENTER:
The difference between the internet mindset and the AI mindset
results—
NARRATOR:
A future made and embraced by a new generation.
ORVILLE SCHELL, U.S.-China relations, Asia Society:
Well, it's hard not to feel the immense energy and also the obvious
fact of the demographics. They're mostly very young people, so that
this clearly is technology which is being generated by a whole new
generation.
NARRATOR:
Orville Schell is one of America’s foremost China scholars. He first
came here 45 years ago.
ORVILLE SCHELL:
When I first came here in 1975, Chairman Mao was still alive, the
Cultural Revolution was still going on and there wasn’t a single whiff of
anything of which you see here. It was unimaginable. In fact, in those
years one very much thought, "This is the way China is, this is the way
it’s going to be," and the fact that it has gone through so many
different changes since is quite extraordinary.
NARRATOR:
This extraordinary progress goes back to that game of Go.
PAUL MOZUR, The New York Times:
I think that the government recognized that this was a sort of critical
thing for the future and we need to catch up in this; that we cannot
have a foreign company showing us up at our own game. And this
was going to be something critically important in the
future.
So we called it "the Sputnik moment" for the Chinese government; the
Chinese government kind of woke up.
XI JINPING, President of China:
[Speaking Chinese] As we often say in China, the beginning is the
most difficult part.
NARRATOR:
In 2017, Xi Jinping announced the government’s bold new plans to an
audience of foreign diplomats.
China would catch up with the U.S. in artificial intelligence by 2025
and lead the world by 2030.
XI JINPING:
[Speaking Chinese] —and intensified cooperation in frontier areas
such as digital economy, artificial intelligence, nanotechnology and
quantum computing.
NARRATOR:
Today, China leads the world in e-commerce. Drones deliver to rural
villages. And a society that bypassed credit cards now shops in stores
without cashiers, where the currency is facial recognition.
KAI-FU LEE:
No country has ever moved that fast. And in a short 2 1/2 years,
China's AI implementation really went from a minimal amount to
probably about 17 or 18 unicorns—that is, billion-dollar companies—in
AI today. And that progress is hard to believe.
NARRATOR:
The progress was powered by a new generation of ambitious young
techs pouring out of Chinese universities, competing with each other
for new ideas and financed by a new cadre of Chinese venture
capitalists.
This is Sinovation, created by U.S.-educated AI scientist and
businessman Kai-Fu Lee.
KAI-FU LEE:
These unicorns—we’ve got one, two, three, four, five, six, in the
general AI area. And a unicorn means a billion-dollar company—a
company whose valuation or market capitalization is at $1 billion or
higher.
I think we put two unicorns to show $5 billion or higher.
NARRATOR:
Kai-Fu Lee was born in Taiwan. His parents sent him to high school in
Tennessee. His Ph.D. thesis at Carnegie Mellon was on computer
speech recognition, which took him to Apple—
JOAN LUNDEN:
Well, reality is a step closer to science fiction with Apple Computer's
newly developed program that allows—
NARRATOR:
—and at 31, an early measure of fame.
JOAN LUNDEN:
—and Kai-Fu Lee, the inventor of Apple's speech-recognition
technology.
KAI-FU LEE:
Casper, copy this to make right two. Casper, paste. Casper, 72-point
italic outline.
NARRATOR:
He would move on to Microsoft Research in Asia and later become
the head of Google China.
Ten years ago he started Sinovation in Beijing and began looking for
promising startups and AI talent.
KAI-FU LEE:
So the Chinese entrepreneurial companies started as copycats, but
over the last 15 years, China has developed its own form of
entrepreneurship, and that entrepreneurship is described as
tenacious, very fast, winner-take-all, with an incredible work ethic.
I would say these few thousand Chinese top entrepreneurs, they could
take on any entrepreneur anywhere in the world.
NARRATOR:
Entrepreneurs like Cao Xudong, the 33-year-old CEO of a new startup
called Momenta.
This is a ring road around Beijing. The car is driving itself.
CAO XUDONG, Founder and CEO, Momenta:
You see another cutting—another cutting in.
MALE DRIVER:
Another cutting. Yeah, yeah.
NARRATOR:
Cao has no doubt about the inevitability of autonomous vehicles.
CAO XUDONG:
Just like AlphaGo can beat the human player in Go, I think the
machine will definitely surpass the human driver, in the end.
NARRATOR:
Recently there have been cautions about how soon autonomous
vehicles will be deployed.
But Cao and his team are confident they’re in for the long haul.
KAI-FU LEE:
U.S. will be the first to deploy, but China may be the first to
popularize. It's 50-50 right now. U.S. is ahead in technology; China
has a larger market, and the Chinese government is helping with
infrastructure efforts. For example, building a new city the size of
Chicago with autonomous driving enabled and also a new highway
that has sensors built in to help autonomous vehicles be safer.
NARRATOR:
Their early investors included Mercedes-Benz.
CAO XUDONG:
I feel very lucky, and it's very inspiring and very exciting that we're
living in this era.
NARRATOR:
Life in China is largely conducted on smartphones. A billion people
use WeChat, the equivalent of Facebook, Messenger and PayPal and
much more combined into just one super-app. And there are many
more.
KAI-FU LEE:
China is the best place for AI implementation today, because of the vast
amount of data that's available in China. China has a lot more users
than any other country—three to four times more than the U.S. There
are 50 times more mobile payments than the U.S. There are 10 times
more food deliveries, which serve as data to learn more about user
behavior, than the U.S. Three hundred times more shared bicycle
rides, and each shared bicycle ride has all kinds of sensors submitting
data up to the cloud.
We're talking about maybe 10 times more data than the U.S., and AI
is basically run on data and fueled by data. The more data, the better
the AI works—more important than how brilliant the researcher
working on the problem is. So in the age of AI, where data is the new oil,
China is the new Saudi Arabia.
NARRATOR:
And access to all that data means that the deep learning algorithm
can quickly predict behavior—like the creditworthiness of someone
wanting a short-term loan.
JIAO KE, Founder and CEO, Smart Finance:
Here is our application. The customer can choose how much money
they want to borrow and for how long, and they can input their data
here. And after that, you can just borrow very quickly.
NARRATOR:
The CEO shows us how quickly you can get a loan.
JIAO KE:
It's done.
NARRATOR:
It takes an average of 8 seconds.
JIAO KE:
It has passed through banks. Yeah, our job is finished.
NARRATOR:
In those 8 seconds, the algorithm has assessed 5,000 personal features
from all your data.
JIAO KE:
Five thousand features that are related to delinquency, when maybe
the banks only use a few—maybe 10 features—when they are doing
their risk management.
NARRATOR:
Processing millions of transactions, it’ll dig up features that would
never be apparent to a human loan officer, like how confidently you
type your loan application or, surprisingly, if you keep your cell phone
battery charged.
JIAO KE:
It's very interesting—the battery level of the phone is related to the
delinquency rate. Someone who has a much lower battery is a much
riskier borrower than others.
KAI-FU LEE:
It's probably unfathomable to an American how a country can
dramatically evolve itself from a copycat laggard to, all of a sudden,
nearly as good as the U.S. in technology.
NARRATOR:
Like this facial recognition startup he invested in. Megvii was started
by three young graduates in 2011; it’s now a world leader in using AI
to identify people.
WU WENHAO, Vice president, Megvii:
It's pretty fast. For example, on the mobile device, we have timed the
facial recognition speed. It's actually less than 100 milliseconds. So
that's very, very fast. So 0.1 second that we will be able to recognize
you, even on a mobile device.
NARRATOR:
The company claims the system is better than any human at
identifying people in its database.
And for those who aren’t, it can describe them. Like our director—what
he’s wearing and a good guess at his age, missing it by only a few
months.
WU WENHAO:
We are the first one to really take facial recognition to commercial
quality.
NARRATOR:
That’s why in Beijing today you can pay for your KFC with a smile.
PAUL MOZUR:
You know, it’s not so surprising; we've seen Chinese companies
catching up to the U.S. in technology for a long time. And so if
particular effort and attention is paid in a specific sector, it's not so
surprising that they would surpass the rest of the world, and facial
recognition is one of the really—the first places we've seen that start
to happen.
NARRATOR:
It’s a technology prized by the government, like this program in
Shenzhen to discourage jaywalking. Offenders are shamed in public
and with facial recognition can be instantly fined.
Critics warn that the government and some private companies have
been building a national database from dozens of experimental “social
credit” programs.
XIAO QIANG, Research scientist, UC Berkeley:
The government wants to integrate all these individual behaviors, or
corporations' records, into some kind of matrix and compute out a
single number or set of numbers associated with an individual citizen,
and use that to implement an incentive or punishment system.
NARRATOR:
A high social credit number can be rewarded with discounts on bus
fares; a low number can lead to a travel ban.
Some say it’s very popular with a Chinese public that wants to punish
bad behavior. Others see a future that rewards party loyalty and
silences criticism.
XIAO QIANG:
Right now there is no final system being implemented. And from those
experiments we already see the possibility of what this social
credit system can do to individuals. It's very powerful, Orwellian-like,
and it's extremely troublesome in terms of civil liberties.
NARRATOR:
Every evening in Shanghai, ever-present cameras record the crowds
as they surge down to the Bund, the promenade along the banks of
the Huangpu River. Once, the great trading houses of Europe came
here to do business with the Middle Kingdom. In the last century, they
were all shut down by Mao’s revolution.
But now, in the age of AI, people come here to take in a spectacle that
reflects China’s remarkable progress and illuminates the great political
paradox of capitalism taking root in a communist state.
ORVILLE SCHELL:
People have called it market Leninism, authoritarian capitalism. We
are watching a kind of a petri dish in which an experiment of
extraordinary importance to the world is being carried out: whether
you can combine these things and get something that's more
powerful, that's coherent, that's durable in the world; whether you can
bring together a one-party state with an innovative sector—both
economically and technologically innovative. And that's something we
thought could not coexist.
NARRATOR:
As China reinvents itself, it has set its sights on leading the world in
artificial intelligence by 2030.
But that means taking on the world’s most innovative AI culture.

Two: The Promise


NARRATOR:
On an interstate in the U.S. Southwest, artificial intelligence is at work
solving the problem that’s become emblematic of the new age:
replacing a human driver.
This is the company's CEO, 24-year-old Alex Rodrigues.
ALEX RODRIGUES, CEO, Embark:
The more things we build successfully, the less people ask questions
about how old you are when you have working trucks.
NARRATOR:
And this is what he’s built. Commercial goods are being driven from
California to Arizona on Interstate 10. There is a driver in the cab, but
he’s not driving. It’s a path set by a CEO with an unusual CV.
YOUNG ALEX RODRIGUES:
Are we ready, Henry? The aim is to score these pucks into the scoring
area.
ALEX RODRIGUES:
So I did competitive robotics starting when I was 11, and I took it very,
very seriously. To give you a sense, I won the Robotics World
Championships for the first time when I was 13; been to Worlds seven
times between the ages of 13 and 20-ish. I eventually founded a team,
did a lot of work at a very high competitive level.
Things are looking pretty good.
NARRATOR:
This was a prototype of sorts, from which he has built his multimillion-
dollar company.
ALEX RODRIGUES:
I hadn't built a robot in a while, wanted to get back to it, and felt that
this was by far the most exciting piece of robotics technology that was
up-and-coming. A lot of people told us we wouldn't be able to build it.
But I knew roughly the techniques that you would use, and I was
pretty confident that if you put them together, you would get something
that worked. Took the summer off, built in my parents' garage a golf
cart that could drive itself.
NARRATOR:
That golf cart got the attention of Silicon Valley and the first of several
rounds of venture capital.
He formed a team and then decided the business opportunity was in
self-driving trucks.
He says there’s also a human benefit.
ALEX RODRIGUES:
If we can build a truck that’s 10 times safer than a human driver, then
not much else actually matters.
When we talk to regulators especially, everyone agrees that the only
way that we're going to get to zero highway deaths, which is
everyone's objective, is to use self-driving.
And so, I'm sure you've heard the statistic, more than 90% of all
crashes have a human driver as the cause. And so if you want to
solve traffic fatalities, which in my opinion are the single biggest
tragedy that happens year after year in the United States, this is the
only solution.
NARRATOR:
It’s an ambitious goal, but only possible because of the recent
breakthroughs in deep learning.
ALEX RODRIGUES:
Artificial intelligence is one of those key pieces that has made it
possible now to do driverless vehicles where it wasn't possible 10
years ago, particularly in the ability to see and understand scenes.
A lot of people don't know this, but it's remarkably hard for computers,
until very, very recently, to do even the most basic visual tasks, like
seeing a picture of a person and knowing that it's a person.
And we've made gigantic strides with artificial intelligence in being
able to do scene-understanding tasks, and that's obviously
fundamental to being able to understand the world around you with
the sensors that you have available.
NARRATOR:
That's now possible because of the algorithms written by Yoshua
Bengio and a small group of scientists.
YOSHUA BENGIO, University of Montreal:
There are many aspects of the world which we can't explain with
words. And that part of our knowledge is actually probably the majority
of it. The stuff we can communicate verbally is the tip of the iceberg.
And so, to get at the bottom of the iceberg, the solution was that
computers have to acquire that knowledge by themselves from data—
from examples. Just like children learn, mostly not from their teachers,
but from interacting with the world and playing around and trying
things and seeing what works and what doesn't work.
NARRATOR:
This is an early demonstration. In 2013, DeepMind scientists set a
machine learning program on the Atari video game Breakout. The
computer was only told the goal: to win the game.
After 100 games, it learned to use the bat at the bottom to hit the ball
and break the bricks at the top.
After 300, it could do that better than a human player.
After 500 games, it came up with a creative way to win the game: by
digging a tunnel on the side and sending the ball around the top to
break many bricks with one hit.
That was deep learning.
YOSHUA BENGIO:
That's the AI program based on learning, really, that has been so
successful in the last few years, and has—it wasn't clear 10 years ago
that it would work, but it has completely changed the map and is now
used in almost every sector of society.
AMY WEBB, Founder, Future Today Institute:
Even the best and brightest among us, we just don't have enough
compute power inside of our heads.
NARRATOR:
Amy Webb is a professor at NYU and founder of the Future Today
Institute.
AMY WEBB:
As AI progresses, the great promise is that they—these machines
alongside of us are able to think and imagine and see things in ways
that we never have before. Which means that maybe we have some
kind of new, weird, seemingly implausible solution to climate change.
Maybe we have some radically different approach to dealing with
incurable cancers.
The real practical and wonderful promise is that machines help us be
more creative, and using that creativity, we get to terrific solutions.
NARRATOR:
Solutions that could come unexpectedly to urgent problems.
CONSTANCE LEHMAN, M.D., Chief of breast imaging,
Massachusetts General Hospital:
It's going to change the face of breast cancer.
Right now, 40,000 women in the U.S. alone die from breast cancer
every single year.
NARRATOR:
Dr. Connie Lehman is head of the breast imaging center at
Massachusetts General Hospital in Boston.
CONSTANCE LEHMAN:
We've become so complacent about it. We almost don't think it can
really be changed. We somehow think we should put all of our energy
into chemotherapies to save women with metastatic breast cancer,
and yet, when we find it early, we cure it, and we cure it without having
the ravages to the body when we diagnose it late.
This shows a progression of a small, small spot from one year to the
next and then to the diagnosis of the small cancer here.
NARRATOR:
This is what happened when a woman who had been diagnosed with
breast cancer started to ask questions about why it couldn’t have been
diagnosed earlier.
REGINA BARZILAY, MIT:
It really brings a lot of anxiety, and you're asking the questions, "Am I
gonna survive? What's gonna happen to my son?" And I started
asking other questions.
NARRATOR:
She was used to asking questions. At MIT's Artificial Intelligence Lab,
Professor Regina Barzilay uses deep learning to teach the computer
to understand language as well as read text and data.
REGINA BARZILAY:
I was really surprised that, for the very basic questions that I asked my
physicians, who were really excellent physicians here at MGH, they
couldn't give me the answers that I was looking for.
NARRATOR:
She was convinced that if you analyze enough data, from
mammograms to diagnostic notes, the computer could predict early-
stage conditions.
CONSTANCE LEHMAN:
If we fast forward from 2012 to '13 to 2014, we then see when Regina
was diagnosed because of this spot on her mammogram. Is it
possible, with more elegant computer applications, that we might have
identified this spot the year before, or even back here?
REGINA BARZILAY:
So those are standard prediction problems in machine learning; there
is nothing special about them. And to my big surprise, none of the
technologies that we are developing at MIT, even in the most simple
form, penetrates the hospital.
NARRATOR:
Regina and Connie began the slow process of getting access to
thousands of mammograms and records from MGH’s breast imaging
program.
CONSTANCE LEHMAN:
So our first foray was just to take all of the patients we had at MGH,
during a period of time, who had had breast surgery for a certain type
of high-risk lesion. And we found that most of them didn't really need
the surgery; they didn't have cancer. But about 10% did have cancer.
With Regina's techniques and deep learning and machine learning,
we were able to predict the women that truly needed the surgery and
separate out those that really could avoid the unnecessary surgery.
REGINA BARZILAY:
What the machine can do is take hundreds of thousands of images
where the outcome is known and learn, based on how the pixels are
distributed, the very unique patterns that correlate highly with
future occurrence of the disease. So instead of using human capacity
to recognize and formalize patterns, which is inherently limited by
our cognitive capacity and how much we can see and remember,
we're providing the machine with a lot of data and making it learn this
prediction.
CONSTANCE LEHMAN:
So we are using technology not only to be better at assessing the
breast density, but to get more to the point of what we're trying to
predict: Does this woman have a cancer now and will she develop a
cancer in five years? And that's again where the artificial intelligence
machine and deep learning can really help us and our patients.
NARRATOR:
In the age of AI, the algorithms are transporting us into a universe of
vast potential and transforming almost every aspect of human
endeavor and experience.
Andrew McAfee is a research scientist at MIT who co-authored “The
Second Machine Age."
ANDREW McAFEE, MIT:
The great compliment that a songwriter gives another one is "Gosh, I
wish I'd written that one." The great compliment a geek gives another
one is "Wow, I wish I had drawn that graph." So I wish I had drawn
this graph.
NARRATOR:
The graph uses a formula to show human development and growth
since 2000 BCE.
ANDREW McAFEE:
The state of human civilization is not very advanced and it's not
getting better very quickly at all, and this is true for thousands and
thousands of years.
When we formed empires and empires got overturned; when we tried
democracy; when we invented zero and mathematics and
fundamental discoveries about the universe—big deal. It just—the
numbers don't change very much.
What's weird is that the numbers change essentially in the blink of an
eye at one point in time, and it goes from really horizontal,
unchanging, uninteresting to holy Toledo, crazy vertical. And then the
question is, "What on earth happened to cause that change?"
And the answer is the Industrial Revolution. There were other things
that happened, but really what fundamentally happened is we
overcame the limitations of our muscle power.
Something equally interesting is happening right now. We are
overcoming the limitations of our minds. We're not getting rid of them,
we're not making them unnecessary, but holy cow, can we leverage
them and amplify them now. You have to be a huge pessimist not to
find that profoundly good news.
BRAD SMITH, President, Microsoft:
I really do think the world has entered a new era. Artificial intelligence
holds so much promise, but it's going to reshape every aspect of the
economy, so many aspects of our lives.
Because AI is a little bit like electricity. Everybody's going to use it.
Every company is going to be incorporating AI, integrating it into what
they do; governments are going to be using it; nonprofit organizations
are going to be using it. It's going to create all kinds of benefits in
ways large and small, and challenges for us as well.
NARRATOR:
The challenges, the benefits.
The autonomous truck represents both as it maneuvers into the
marketplace.
The engineers are confident that, in spite of questions about when this
will happen, they can get it working safely sooner than most people
realize.
ALEX RODRIGUES:
I think that you will see the first vehicles operating with no one inside
them moving freight in the next few years, and then you're gonna see
that expanding to more freight, more geographies, more weather over
time as that capability builds up. We're talking less than half a decade.
NARRATOR:
He already has a Fortune 500 company as a client, shipping
appliances across the Southwest. He says the sales pitch is
straightforward.
ALEX RODRIGUES:
They spend hundreds of millions of dollars a year shipping parts
around the country. We can cut that cost in half.
And they're really excited to be able to start working with us, both
because of the potential savings from deploying self-driving, and also
because of all the operational efficiencies that they see, the biggest
one being able to operate 24 hours a day. So right now, human
drivers are limited to 11 hours by federal law, and a driverless truck
obviously wouldn't have that limitation.
NARRATOR:
The idea of a driverless truck comes up often in discussions about
artificial intelligence.
Steve Viscelli is a sociologist who drove a truck while researching his
book “The Big Rig” about the industry.
STEVE VISCELLI, University of Pennsylvania:
One of the most remarkable stories in U.S. labor history, I think, is
the decline of unionized trucking. The industry was deregulated in
1980, and at that time truck drivers were earning the equivalent of
over $100,000 in today's dollars. And today the typical truck driver will
earn a little over $40,000 a year.
I think it's an important part of the automation story, right? Why are
they so afraid of automation? Because we've had four decades of
rising inequality in wages, and if anybody is going to take it on the chin
from automation, the trucking industry—the first in line is going to be
the driver, without a doubt.
NARRATOR:
For his research, Viscelli tracked down truckers and their families, like
Shawn and Hope Cumbee of Beaverton, Michigan.
STEVE VISCELLI:
Hey, Hope!
HOPE CUMBEE:
Hi!
STEVE VISCELLI:
I'm Steve Viscelli.
HOPE CUMBEE:
Hi, Steve, nice to meet you.
STEVE VISCELLI:
Great to meet you, too.
HOPE CUMBEE:
Come on in.
STEVE VISCELLI:
Thanks.
NARRATOR:
And their son Charlie.
CHARLIE CUMBEE:
This is Daddy, me, Daddy and Mommy.
NARRATOR:
But Daddy’s not here. Shawn Cumbee’s truck has broken down in
Tennessee.
Hope, who drove a truck herself, knows the business well.
HOPE CUMBEE:
We made $150,000 in a year. That sounds great, right? That's good
money. We paid $100,000 in fuel, OK? So right there, now I made
$50,000. But I didn't really, because you get an oil change every
month, so that's $300 a month. You still have to do all the
maintenance. We had a motor blow out—$13,000. Right?! [Laughs] I
know. I mean, I choke up a little just thinking about it, because it was
—and it was $13,000, and we were off work for two weeks!
So by the end of the year, with that $150,000, by the end of the year
we'd made about $22,000.
NARRATOR:
In a truck stop in Tennessee, Shawn has been sidelined waiting for a
new part. The garage owner is letting him stay in the truck to save
money.
SHAWN CUMBEE:
Hi, baby.
HOPE CUMBEE:
Hey. How’s it going?
SHAWN CUMBEE:
It’s going. Chunky-butt!
CHARLIE CUMBEE:
Hey, Daddy!
SHAWN CUMBEE:
Hi, Chunky-butt. What’re you doing?
Believe it or not, I do it because I love it. I mean, it’s in the blood; third-
generation driver. And my granddaddy told me a long time ago, when I
was probably 11, 12 years old, probably, he said, "The world meets
nobody halfway. Nobody." He said, "If you want it, you have to earn it."
And that’s what I do every day; I live by that creed, and I've lived by
that since it was told to me.
HOPE CUMBEE:
So if you're down for a week in a truck, you still have to pay your bills.
I have enough money in my checking account at all times to pay a
month's worth of bills. That does not include my food. That doesn't
include field trips for my son's school.
My son and I just went to our yearly doctor appointment. I took money
out of my son's piggy bank to pay for it because it's not scheduled in.
It's not something that you can afford. I mean, like, when—sorry.
STEVE VISCELLI:
It's OK.
Have you guys ever talked about self-driving trucks? Is he—
HOPE CUMBEE:
[Laughs] So, kind of. I asked him once. He laughed so hard. He was
like, "No way will they ever have a truck that can drive itself."
SHAWN CUMBEE:
It is kind of interesting, when you think about it. They’re putting all this
new technology into things, but it’s still man-made, and man does
make mistakes. I really don’t see it being a problem with the industry,
because one, you still got to have a driver in it, because I don’t see it
doing cities; I don’t see it doing main things; I don’t see it backing into
a dock. I don’t see the automation part doing—maybe the box trailer
side, I could see that, but not stuff like I do. So I really ain’t worried
about the automation of trucks.
HOPE CUMBEE:
How near of a future is it?
STEVE VISCELLI:
Yeah, self-driving. So, some companies are already operating.
Embark, for instance, is one that has been doing driverless trucks on
the interstate and what's called "exit-to-exit self-driving." And they're
currently running real freight.
HOPE CUMBEE:
Really?
STEVE VISCELLI:
Yeah, on I-10.
MALE TRUCK STOP ANNOUNCER:
Shower guest 100, your shower is now ready.
NARRATOR:
Over time, it has become harder and harder for veteran independent
drivers like the Cumbees to make a living. They've been replaced by
younger, less experienced drivers.
STEVE VISCELLI:
So the trucking industry is $740 billion a year, and, again, in many of
these operations, labor's a third of that cost. By my estimate, I think
we're in the range of 300,000 or so jobs in the foreseeable future that
could be automated to some significant extent.

Three: The Future of Work


NARRATOR:
The AI future was built with great optimism out here in the West.
In 2018, many of the people who invented it gathered in San
Francisco to celebrate the 25th anniversary of the industry magazine Wired.
ROBOT:
Howdy! Welcome to Wired 25!
NARRATOR:
It is a celebration, for sure, but there’s also a growing sense of
caution, and even skepticism.
NICHOLAS THOMPSON:
We're having a really good weekend here.
NARRATOR:
Nick Thompson is editor-in-chief of Wired.
NICHOLAS THOMPSON:
When it started it was very much a magazine about what's coming
and why you should be excited about it. Optimism was the defining
feature of Wired for many, many years. Or, as our slogan used to be,
"Change is good!"
And over time, it's shifted a little bit. Now it's more "We love
technology, but let's look at some of the big issues, and let's look at
some of them critically; and let's look at the way algorithms are
changing the way we behave, for good and for ill." So the whole
nature of Wired has gone from a champion of technological change to
more of an observer of technological change.
ANNA WINTOUR:
So, before we start—
NARRATOR:
There are 25 speakers, all named as “icons" of the last 25 years of
technological progress.
ANNA WINTOUR:
Why is Apple so secretive?
NARRATOR:
Jony Ive, who designed Apple’s iPhone.
JONY IVE:
It would be bizarre not to be.
JARON LANIER:
There’s this question of what are we doing here in this life, in this
reality.
NARRATOR:
Jaron Lanier, who pioneered virtual reality; and Jeff Bezos, the
founder of Amazon.
JEFF BEZOS:
Amazon was a garage startup. Now it's a very large company.
NARRATOR:
His message is, “All will be well in the new world.”
JEFF BEZOS:
I guess first of all, I remain incredibly optimistic about technology, and
technologies always are two-sided. But that’s not new; that's always
been the case, and we will figure it out. The last thing we would ever
want to do is stop the progress of new technologies, even when there
are dual-use—
NARRATOR:
But, says Thompson, beneath the surface there’s a worry most of
them don’t like to talk about.
NICHOLAS THOMPSON:
There are some people in Silicon Valley who believe that you just
have to trust the technology. Throughout history, there's been a
complicated relationship between humans and machines; we've
always worried about machines, and it's always been fine. And we
don't know how AI will change the labor force, but it will be OK. So
that argument exists.
There's another argument, which is what I think most of them believe
deep down, which is, this is different. We're gonna have labor force
disruption like we've never seen before. And if that happens, will they
blame us?
NARRATOR:
There is, however, one of the Wired25 icons willing to take on the
issue.
On stage, Kai-Fu Lee dispenses with one common fear.
KAI-FU LEE:
Well, I think there are so many myths out there. I think one myth is
that because AI is so good at a single task, one day we'll wake up
and we'll all be enslaved or forced to plug our brains into the AI. But it is
nowhere close to displacing humans.
NARRATOR:
But in interviews around the event and beyond, he takes a decidedly
contrarian position on AI and job loss.
KAI-FU LEE:
The AI giants want to paint a rosier picture because they're happily
making money, so I think they prefer not to talk about the negative
side.
I believe about 50% of jobs will be somewhat or extremely threatened
by AI in the next 15 years or so.
NARRATOR:
Kai-Fu Lee also makes a great deal of money from AI. What
separates him from most of his colleagues is that he’s frank about its
downside.
KAI-FU LEE:
Yes, yes, we've made about 40 investments in AI. I think based on
these 40 investments, most of them are not impacting human jobs.
They're creating value; making high margins; inventing a new model.
But I could list seven or eight that would lead to a very clear
displacement of human jobs.
NARRATOR:
He says that AI is coming whether we like it or not, and he wants to
warn society about what he sees as inevitable.
MALE CNBC NEWSREADER:
You have a view that is different than many others, which is that AI is
not going to take blue-collar jobs so quickly but is actually going to
take white-collar jobs.
KAI-FU LEE:
Well, both will happen. AI will be at the same time a replacement for
blue-collar and white-collar jobs, and a great symbiotic tool for doctors,
lawyers and you, for example. But the white-collar jobs are easier to
take, because they’re a pure quantitative analytical process. Let’s say
reporters, traders, telemarketing, telesales, customer service—
FEMALE CNBC NEWSREADER:
Analysts?
KAI-FU LEE:
Analysts, yes—these can all be replaced just by software. To do
blue-collar, some of the work requires, you know, hand-eye
coordination, things that machines are not yet good enough to do.
JERRY KAPLAN, Computer scientist and entrepreneur:
Today, there are many people who are ringing the alarm—"Oh, my
god, what are we going to do? Half the jobs are going away." I believe
that's true.
But here's the missing fact. I've done the research on this, and if you
go back 20, 30 or 40 years ago you will find that 50% of the jobs that
people performed back then are gone today. Where are all the
telephone operators, bowling pin setters, elevator operators? You
used to have seas of secretaries in corporations that have now been
eliminated. Travel agents. You can just go through field after field after
field.
That same pattern has recurred many times throughout history with
each new wave of automation.
KAI-FU LEE:
But I would argue that history is only trustworthy if it offers multiple
repetitions of similar events, not a once-in-a-blue-moon occurrence. So
over the history of many tech inventions, most are small things. Only
maybe three are at the magnitude of AI revolution: the steam engine,
electricity and the computer revolution. I'd say everything else is too
small.
And the reason I think it might be something brand new is that AI is
fundamentally replacing our cognitive process in doing a job in its
significant entirety, and it can do it dramatically better.
NARRATOR:
This argument about job loss in the age of AI was ignited six years
ago amid the gargoyles and spires of Oxford University.
Two researchers had been poring through U.S. labor statistics,
identifying jobs that could be vulnerable to AI automation.
CARL FREY:
Vulnerable to automation, in the context that we discussed five years
ago now, essentially meant that those jobs are potentially automatable
over an unspecified number of years, and the figure we came up with
was 47%.
NARRATOR:
Forty-seven percent. That number quickly travelled the world in
headlines and news bulletins.
But authors Carl Frey and Michael Osborne offered a caution: They
can't predict how many jobs will be lost, or how quickly.
But Frey believes that there are lessons in history.
CARL FREY:
And what worries me the most is that there is actually one episode
that looks quite familiar to today, which is the British Industrial
Revolution, where wages didn't grow for nine decades and a lot of
people actually saw living standards decline as technology
progressed.
NARRATOR:
Saginaw, Michigan, knows about decline in living standards.
Harry Cripps, an autoworker and a local union president, has
witnessed what 40 years of automation can do to a town.
HARRY CRIPPS, President, UAW Local 668:
You know, we’re one of the cities in the country that I think we were
left behind in this recovery, and I just—I don’t know how we get on the
bandwagon now.
NARRATOR:
Once, this was the UAW hall for one local union. Now, with falling
membership, it’s shared by five locals.
HARRY CRIPPS:
Rudy didn't get his yet.
NARRATOR:
This day it’s the center for a Christmas food drive. Even in a growth
economy, unemployment here is near 6%; poverty in Saginaw is over
30%.
HARRY CRIPPS:
Our factory has about 1.9 million square feet. Back in the '70s, that 1.9
million square feet had about 7,500 UAW automotive workers making a
middle-class wage with decent benefits and able to send their kids to
college and do all the things that the middle-class families should be
able to do.
Our factory today, with automation, there would probably be about 700
United Auto Workers. That's a dramatic change.
A lot of union brothers used to work there, buddy.
STEVE LAMB, United Way, Saginaw, Michigan:
Yep. TRW plant; that was unfortunate when that went down.
HARRY CRIPPS:
Delphi, Gen—Looks like they’re starting to tear it down now. Wow.
Automation is definitely taking away a lot of jobs. Robots—I don't
know how they buy cars; I don't know how they buy sandwiches; I
don't know how they go to the grocery store. They definitely don't pay
taxes, which hurts the infrastructure, so the sheriffs and the police
and the firemen and anybody else that supports the city are gone,
'cause there's no tax base. Robots don't pay taxes.
NARRATOR:
The average personal income in Saginaw is $16,000 a year.
STEVE LAMB:
A lot of the families that I work with here in the community, both
parents are working; they're working two jobs.
Mainly it's the wages. You know, people not making a decent wage to
be able to support a family. Back in the day, my dad even worked at
the plant. My mom stayed home, raised the children. And that gave us
opportunity, put food on the table and things of that nature. And them
times are gone.
ANDREW McAFEE:
If you look at this graph of what's been happening to America since
the end of World War II, you see a line for our productivity, and our
productivity gets better over time.
It used to be the case that our pay, our income, would increase in
lockstep with those productivity increases. The weird part about this
graph is how the income has decoupled—is not going up the same
way that productivity is any more.
NARRATOR:
As automation has taken over, workers are either laid off or left with
less skilled jobs for less pay while productivity goes up.
ANDREW McAFEE:
There are still plenty of factories in America. We are a manufacturing
powerhouse, but if you go walk around an American factory, you do
not see long lines of people doing repetitive manual labor. You see a
whole lot of automation.
If you go upstairs in that factory and look at the payroll department,
you see one or two people looking into a screen all day. So the activity
is still there, but the number of jobs is very, very low because of
automation and tech progress.
Now, dealing with that challenge, and figuring out what the next
generation of the American middle class should be doing, is a really
important challenge, because I'm pretty confident that we are never
again going to have this large, stable, prosperous middle class doing
routine work.
NARRATOR:
Evidence of how AI is likely to bring accelerated change to the U.S.
workforce can be found not far from Saginaw.
This is the U.S. headquarters for one of the world’s largest builders of
industrial robots, a Japanese-owned company called FANUC
Robotics.
MIKE CICCO, President and CEO, FANUC America:
We've been producing robots for well over 35 years. And you can
imagine, over the years, they've changed quite a bit.
We're utilizing the artificial intelligence to really make the robots easier
to use and be able to handle a broader spectrum of opportunities.
We see a huge growth potential in robotics, and we see that growth
potential as being, really, there's 90% of the market left.
NARRATOR:
The industry says optimistically that with that growth they can create
more jobs.
MIKE CICCO:
Even if there were five people on a job and we reduce that down to
two people because we automated some level of it, we might produce
two times more parts than we did before because we automated it. So
now, there might be the need for two more fork truck drivers, or two
more quality inspection personnel. So although we reduce some of the
people, we grow in other areas as we produce more things.
HARRY CRIPPS:
When I increase productivity through automation, I lose jobs. Jobs go
away. And I don't care what the robot manufacturers say, you aren't
replacing those 10 production people whose job that robot is now
doing with 10 new people.
You can increase productivity to a level to stay competitive with the
global market; that's what they're trying to do.
NARRATOR:
In the popular telling, blame for widespread job loss has been aimed
overseas, at what’s called "offshoring."
PRESIDENT DONALD TRUMP:
We want to keep our factories here. We want to keep our
manufacturing here. We don't want it moving to China, to Mexico, to
Japan, to India, to Vietnam.
NARRATOR:
But it turns out most of the job loss isn’t because of offshoring.
MIKE HICKS, Ball State University:
There’s been offshoring, and I think offshoring is responsible for
maybe 20% of the jobs that have been lost.
I would say most of the jobs that have been lost, despite what most
Americans think, were due to automation or productivity growth.
NARRATOR:
Mike Hicks is an economist at Ball State in Muncie, Indiana. He and
sociologist Emily Wornell have been documenting employment trends
in Middle America. Hicks says that automation has been a mostly
silent job killer, lowering the standard of living.
MIKE HICKS:
So in the last 15 years the standard of living has dropped by 10% to
15%. So that's unusual in the developed world. A one-year decline is a
recession; a 15-year decline gives an entirely different sense about
the prospects of a community. And so that is common from the
Canadian border to the Gulf of Mexico, through the middle swath of the
United States.
HARRY CRIPPS:
Something we're gonna do for you guys—these were left over from
our suggestion drive that we did, and we’re gonna give them each
two.
EMILY YEAGER, Social worker:
That is awesome.
HARRY CRIPPS:
I mean, that's going to go a long ways, right? I mean, that'll really help
that family out during the holidays.
EMILY YEAGER:
Yes. Well, with the kids home from school, the families have three
meals a day that they've got to put on the table. So it's going to make
a big difference. So, thank you, guys! This is wonderful.
HARRY CRIPPS:
You're welcome. Let ‘em know Merry Christmas on behalf of us here
at the local, OK?
EMILY YEAGER:
Absolutely. You guys are just amazing. Thank you. And please, tell all
the workers how grateful these families will be.
HARRY CRIPPS:
We will.
EMILY YEAGER:
I mean, this is not a small problem. The need is so great. And I can tell
you that it's all races, it's all income classes that you might think
someone might be from. But I can tell you that when you see it, and
you deliver this type of gift to somebody who is in need, just the
gratitude that they show you is incredible. [Crying]
EMILY WORNELL, Ball State University:
We actually know that people are at greater risk of mortality for over
20 years after they lose their job due to no fault of their own, so
something like automation or offshoring. They're at higher risk for
cardiovascular disease; they're at higher risk for depression and
suicide.
But then with the intergenerational impacts we also see their children
are more likely—children of parents who have lost their job due to
automation are more likely to repeat a grade; they're more likely to
drop out of school; they're more likely to be suspended from school;
and they have lower educational attainment over their entire lifetimes.
MIKE HICKS:
It's the future of this, not the past, that scares me. Because I think
we're in the early decades of what is a multidecade adjustment period.
NARRATOR:
The world is being reimagined.
This is a supermarket. Robots, guided by AI, pack everything from
soap powder to cantaloupes for online consumers.
Machines that pick groceries, machines that can also read reports,
learn routines and comprehend, are reaching deep into factories,
stores and offices.
At a college in Goshen, Indiana, a group of local business and political
leaders come together to try to understand the impact of AI and the
new machines.
Molly Kinder studies the future of work at a Washington think tank.
MOLLY KINDER, Senior fellow, New America:
How many people have gone into a fast-food restaurant and done a
self-ordering? Anyone, yes? Panera, for instance, is doing this.
Cashier was my first job, and where I live in Washington, D.C., it’s
actually the No. 1 occupation for the greater D.C. region; there are
millions of people who work in cashier positions. This is not a futuristic
challenge; this is something that's happening sooner than we think.
In the popular discussions about robots and automation and work,
almost every image is of a man on a factory floor or a truck driver. And
yet, in our data, when we looked, women disproportionately hold the
jobs that today are at highest risk of automation, and that's not really
being talked about.
And that's in part because women are overrepresented in some of
these marginalized occupations like a cashier or a fast-food worker,
and also in large numbers in clerical jobs in offices, HR departments,
payroll, finance; a lot of that is more-routine processing information,
processing paper, transferring data. That has huge potential for
automation. AI is going to do some of that; software and robots are
going to do some of that.
So how many people are still working as switchboard operators?
Probably none in this country.
NARRATOR:
The workplace of the future will demand different skills, and gaining
them, says Molly Kinder, will depend on who can afford them.
MOLLY KINDER:
I mean, it's not a good situation in the United States. There's been
some excellent research that says that half of Americans couldn't
afford a $400 unexpected expense. If you raise that to $1,000,
even fewer could.
So imagine you're going to go without a month's pay, two months'
pay, a year. Imagine you want to put savings toward a course to
redevelop your career. People can't afford to take time off of work;
they don't have a cushion. So this lack of economic stability, married
with the disruptions in people's careers, is a really toxic mix.
NARRATOR:
The new machines will penetrate every sector of the economy, from
insurance companies to human resource departments; from law firms
to the trading floors of Wall Street.
STEVEN BERKENFELD, Investment banker:
Wall Street's going through it, but every industry is going through it.
Every company is looking at all of the disruptive technologies—could
be robotics or drones or blockchain. And whatever it is, every
company's using everything that's developed, everything that's
disruptive, and thinking about "How do I apply that to my business to
make myself more efficient?" And what efficiency means is mostly,
"How do I do this with fewer workers?"
And I do think that when we look at some of the studies about
opportunity in this country and the inequality of opportunity, the
likelihood that you won't be able to advance from where your parents
were, I think that is very serious and gets to the heart of the way we
like to think of America as the land of opportunity.
NARRATOR:
Inequality has been rising in America.
It used to be the top 1% of earners—here in red—owned a relatively
small portion of the country’s wealth; middle and lower earners—in
blue—had the largest share. Then, 15 years ago, the lines crossed,
and inequality has been increasing ever since.
JERRY KAPLAN:
There's many factors that are driving inequality today, and
unfortunately, artificial intelligence, without being thoughtful about it, is
a driver for increased inequality because it's a form of automation, and
automation is the substitution of capital for labor. And when you do
that, the people with the capital win.
So Karl Marx was right: It's a struggle between capital and labor, and
with artificial intelligence we're putting our finger on the scale on the
side of capital. And how we wish to distribute the benefits, the
economic benefits that that will create is going to be a major moral
consideration for society over the next several decades.
KAI-FU LEE, CEO, Sinovation Ventures:
This is really an outgrowth of the increasing gap between the haves and
have-nots; the wealthy are getting wealthier, the poor are getting poorer. It may
not be specifically related to AI, but the AI will exacerbate that, and
that, I think, will tear the society apart because the rich will have just
too much, and those who are have-nots will have perhaps very little
way of digging themselves out of the hole. And with AI making its
impact, it'll be worse, I think.
PRESIDENT DONALD TRUMP:
—we're all going to be happy. I'm here today for one main reason: To
say thank you to Ohio!
HARRY CRIPPS:
I think the Trump vote was a protest. I mean, for whatever reason,
whatever the hot button was that really hit home with these Americans
that voted for him were—it was a protest vote. They didn't like the
direction things were going.
I'm scared. I'm going to be quite honest with you: I worry about the
future of not just this country, but the entire globe. If we continue to go
in an automated system, what are we gonna do?
Now I've got a group of people at the top that are making all the
money and I don't have anybody in the middle that can support a
family. So do we have to go to the point where we crash to come
back? And in this case, the automation's already going to be there, so
I don't know how you come back. I'm really worried about where this
leads us in the future.

Four: The Surveillance Capitalists


NARRATOR:
The future is largely being shaped by a few hugely successful tech
companies. They're constantly buying up successful smaller
companies and recruiting talent. Between the U.S. and China, they
employ a great majority of the AI researchers and scientists.
In the course of amassing such power, they’ve also become among
the richest companies in the world.
KAI-FU LEE:
AI really is the ultimate tool of wealth creation.
Think about the massive data that Facebook has on user preferences
and how it can very smartly target an ad at something you might buy
and get a much bigger cut than a smaller company could. Same
with Google; same with Amazon. So, it's—AI is a set of tools that
helps you maximize an objective function, and that objective function
initially will simply be "make more money."
NARRATOR:
And it is how these companies make that money, and how their
algorithms reach deeper and deeper into our work, our daily lives and
our democracy, that makes many people increasingly uncomfortable.
Pedro Domingos wrote the book "The Master Algorithm."
PEDRO DOMINGOS:
Everywhere you go, you generate a cloud of data; you're trailing data.
Everything that you do is producing data. And then there are
computers looking at that data that are learning, and these computers
are essentially trying to serve you better. They're trying to personalize
things to you. They're trying to adapt the world to you. So on the one
hand, this is great, because the world will get adapted to you without
you even having to explicitly adapt it.
There's also a danger, because the entities in the companies that are
in control of those algorithms don't necessarily have the same goals
as you, and this is where I think people need to be aware of
what's going on, so that they can have more control over it.
SHOSHANA ZUBOFF, Author, "The Age of Surveillance Capitalism":
You know, we came into this new world thinking that we were users of
social media. It didn't occur to us that social media was actually using
us. We thought that we were searching Google; we had no idea that
Google was searching us.
NARRATOR:
Shoshana Zuboff is a Harvard Business School professor emerita. In
1988, she wrote a definitive book called “In the Age of the Smart
Machine." For the last seven years she has worked on a new book,
making the case that we have now entered a new phase of the
economy, which she calls “surveillance capitalism.”
SHOSHANA ZUBOFF:
So, famously, industrial capitalism claimed nature—innocent rivers
and meadows and forests, and so forth—for the market dynamic to be
reborn as real estate—as land that could be sold and purchased.
Industrial capitalism claimed work for the market dynamic to be reborn
as labor that could be sold and purchased.
Now, here comes surveillance capitalism, following this pattern, but
with a dark and startling twist. What surveillance capitalism claims is
private, human experience. Private, human experience is claimed as a
free source of raw material, fabricated into predictions of human
behavior. And it turns out that there are a lot of businesses that really
want to know what we will do now, soon and later.
NARRATOR:
Like most people, Alastair Mactaggart had no idea about this new
surveillance business.
Until one evening in 2015.
ALASTAIR MACTAGGART, Founder, Californians for Consumer
Privacy:
I had a conversation with a fellow, who's an engineer, and I was just
talking to him one night at dinner, at a cocktail party. And I—there had
been something in the press that day about privacy, in the paper, and
I remember asking him—he worked for Google—"What's the big deal
about all—why are people so worked up about it?" And I thought it
was gonna be one of those conversations, like with—if you ever ask
an airline pilot, "Should I be worried about flying?" And they say, "Oh,
the most dangerous part is coming to the airport in the car." And he
said, "Oh, you would be horrified if you knew how much we knew
about you."
And I remember that kind of stuck in my head because it was not what
I expected.
NARRATOR:
That question would change his life.
A successful California real estate developer, Mactaggart began
researching the new business model.
ALASTAIR MACTAGGART:
What I've learned since is that their entire business is learning as
much about you as they can. Everything about your thoughts and your
desires and your dreams, and who your friends are and what you're
thinking, what your private thoughts are. And with that, that's true
power. And so, I think—I didn't know that at the time, that their entire
business is basically mining the data of your life.
NARRATOR:
Shoshana Zuboff had been doing her own research.
SHOSHANA ZUBOFF:
You know, I’d been reading and reading and reading. From patents, to
transcripts of earnings calls; research reports. Just literally everything
for years and years and years.
NARRATOR:
Her studies included the early days of Google, started in 1998 by two
young Stanford grad students, Sergey Brin and Larry Page. In the
beginning, they had no clear business model. Their unofficial motto
was “Don’t be evil."
SHOSHANA ZUBOFF:
Right from the start, the founders, Larry Page and Sergey Brin, they
had been very public about their antipathy toward advertising.
Advertising would distort the internet and it would distort and disfigure
the purity of any search engine, including their own.
MALE NEWSREADER:
Once in love with e-commerce, Wall Street has turned its back on the
dot-coms—
NARRATOR:
Then came the dot-com crash of the early 2000s.
MALE NEWSREADER:
—has left hundreds of unprofitable internet companies begging for
love and money.
NARRATOR:
While Google had rapidly become the default search engine for tens
of millions of users, their investors were pressuring them to make
more money. Without a new business model, the founders knew that
the young company was in danger.
SHOSHANA ZUBOFF:
In this state of emergency, the founders decided, “We've simply got to
find a way to save this company."
And so parallel to this was another set of discoveries where it turns
out that whenever we search or whenever we browse, we're leaving
behind traces, digital traces of our behavior. And those traces, back in
these days, were called "digital exhaust."
NARRATOR:
They realized how valuable this data could be by applying machine
learning algorithms to predict users’ interests.
SHOSHANA ZUBOFF:
What happened was they decided to turn to those data logs in a
systematic way and to begin to use these surplus data as a way to
come up with fine-grained predictions of what a user would click on—
what kind of ad a user would click on.
And inside Google they started seeing these revenues pile up at a
startling rate. They realized that they had to keep it secret. They didn't
want anyone to know how much money they were making or how they
were making it. Because users had no idea that this extra behavioral
data, which told so much about them, was just out there and was
being used to predict their future.
NARRATOR:
When Google’s IPO took place just a few years later, the company
had a market capitalization of around $23 billion.
Google’s stock was now as valuable as General Motors.
SHOSHANA ZUBOFF:
And it was only when Google went public in 2004 that the numbers
were released. And it's at that point that we learn that between the
year 2000 and the year 2004, Google's revenue line increased by
3,590%.
ERIC SCHMIDT:
Let’s talk a little about information and search and how people
consume it.
NARRATOR:
By 2010, the CEO of Google, Eric Schmidt, would tell The Atlantic
magazine:
ERIC SCHMIDT:
—is we don’t need you to type at all because we know where you are
—with your permission; we know where you’ve been—with your
permission; we can more or less guess what you’re thinking about.
[Laughter] Now is that over the line?
NARRATOR:
Eric Schmidt and Google declined to be interviewed for this program.
Google’s new business model for predicting users’ profiles had
migrated to other companies, particularly Facebook.
Roger McNamee was an early investor in and adviser to Facebook. He’s
now a critic, and wrote a book about the company.
He says he’s concerned about how widely companies like Facebook
and Google have been casting the net for data.
ROGER McNAMEE:
And then they realized, "Wait a minute, there’s all this data in the
economy we don’t have." So they went to credit card processors and
credit rating services and said, "We want to buy your data." They go to
health and wellness apps and say, "Hey, you got women's menstrual
cycles? We want all that stuff."
Why are they doing that? They’re doing that because behavioral
prediction is about taking uncertainty out of life. Advertising and
marketing are all about uncertainty—you never really know who’s
going to buy your product. Until now.
We have to recognize that we gave technology a place in our lives
that it had not earned; that essentially, because technology always
made things better in the '50s, '60s, '70s, '80s and '90s, we developed
a sense of inevitability that we'll always make things better. We
developed a trust, and the industry earned goodwill that Facebook and
Google have cashed in.
NARRATOR:
The model is simply this: Provide a free service—like Facebook—and
in exchange you collect the data of the millions who use it. And every
sliver of information is valuable.
SHOSHANA ZUBOFF:
It's not just what you post, it's that you post. It's not just that you make
plans to see your friends later. It's whether you say, "I'll see you later"
or "I'll see you at 6:45." It's not just that you talk about the things that
you have to do today; it's whether you simply rattle them on in a
rambling paragraph or list them as bullet points.
All of these tiny signals are the behavioral surplus that turns out to
have immense predictive value.
NARRATOR:
In 2010, Facebook experimented with AI’s predictive powers in what
they called a “social contagion experiment.” They wanted to see if,
through online messaging, they could influence real-world behavior.
The aim was to get more people to the polls in the 2010 midterm
elections.
PRESIDENT BARACK OBAMA:
Cleveland, I need you to keep on fighting. I need you to keep on
believing.
NARRATOR:
They offered 61 million users an “I Voted” button together with faces of
friends who had voted; a subset of users received just the button. In
the end they claimed to have nudged 340,000 people to vote.
They would conduct other “massive contagion” experiments—among
them, one showing that by adjusting their feeds, they could make
users happy or sad.
SHOSHANA ZUBOFF:
When they went to write up these findings, they boasted about two
things. One was, "Oh, my goodness. Now we know that we can use
cues in the online environment to change real-world behavior." That's
big news.
The second thing that they understood and they celebrated was that,
"We can do this in a way that bypasses the user's awareness."
ROGER McNAMEE:
Private corporations have built a corporate surveillance state without
our awareness or permission, and the systems necessary to make it
work are getting a lot better, specifically with what is known as the
"internet of things"—smart appliances, powered by the Alexa voice
recognition system or the Google Home system.
PROMOTIONAL VIDEO ACTOR:
OK, Google, play the morning playlist.
FEMALE GOOGLE ASSISTANT:
Playing morning playlist.
PROMOTIONAL VIDEO ACTOR:
OK, Google, play music in all rooms.
ROGER McNAMEE:
And those will put the surveillance in places we've never had it before
—living rooms, kitchens, bedrooms. And I find all of that terrifying.
PROMOTIONAL VIDEO ACTOR:
OK, Google, I'm listening.
NARRATOR:
The companies say they’re not using the data to target ads, but to
help AI improve the user experience.
PROMOTIONAL VIDEO ACTOR:
Alexa, turn on the fan.
NARRATOR:
Meanwhile, they are researching and applying for patents to expand
their reach into homes and lives.
PROMOTIONAL VIDEO ACTOR:
Alexa, take a video.
AMY WEBB, Founder, Future Today Institute:
The more and more that you use spoken interfaces—so, smart
speakers—they're being trained not just to recognize who you are, but
they're starting to take baselines and comparing changes over time.
So, does your cadence increase or decrease? Are you sneezing while
you're talking? Is your voice a little wobbly?
The purpose of doing this is to understand more about you in real time
so that a system could make inferences—perhaps like, "Do you have
a cold? Are you in a manic phase? Are you feeling depressed?"
So that is an extraordinary amount of information that can be gleaned
by you simply waking up and asking your smart speaker, "What's the
weather today?"
PROMOTIONAL VIDEO ACTOR:
Alexa, what’s the weather for tonight?
ALEXA:
Currently in Pasadena, it’s 58 degrees with cloudy skies.
PROMOTIONAL VIDEO ACTOR:
Inside it is then. Dinner!
SHOSHANA ZUBOFF:
The point is that this is the same microbehavioral targeting that is
directed toward individuals based on intimate, detailed understanding
of personalities. So this is precisely what Cambridge Analytica did,
simply pivoting from the advertisers to the political outcomes.
NARRATOR:
The Cambridge Analytica scandal of 2018 engulfed Facebook, forcing
Mark Zuckerberg to appear before Congress to explain how the data
of up to 87 million Facebook users had been harvested by a political
consulting company based in the U.K.
The purpose was to target and manipulate voters in the 2016
presidential campaign, as well as the Brexit referendum; Cambridge
Analytica had been largely funded by conservative hedge fund
billionaire Robert Mercer.
SHOSHANA ZUBOFF:
And now we know that any billionaire with enough money, who can
buy the data, buy the machine intelligence capabilities, buy the skilled
data scientists, they, too, can commandeer the public and infect and
infiltrate and upend our democracy with the same methodologies that
surveillance capitalism uses every single day.
MARK ZUCKERBERG:
We didn’t take a broad enough view of our responsibility, and that was
a big mistake. And it was my mistake, and I am sorry.
NARRATOR:
Zuckerberg has apologized for numerous violations of privacy, and his
company was recently fined $5 billion by the Federal Trade
Commission.
He has said Facebook will now make data protection a priority, and
the company has suspended tens of thousands of third-party apps
from its platform as a result of an internal investigation.
KATE CRAWFORD, Co-founder, AI Now Institute, NYU:
You know, I wish I could say that after Cambridge Analytica we've
learned our lesson and that everything will be much better after that,
but I'm afraid the opposite is true. In some ways, Cambridge Analytica
was using tools that were 10 years old. It was really, in some ways,
old-school, first-wave data science.
What we're looking at now, with current tools and machine learning, is
the ability for manipulation, both in terms of elections and opinions, but
more broadly, just how information travels. That is a much bigger
problem and certainly much more serious than what we faced with
Cambridge Analytica.
NARRATOR:
AI pioneer Yoshua Bengio also has concerns about how his
algorithms are being used.
YOSHUA BENGIO:
So the AIs are tools, and they will serve the people who control those
tools. If those people's interests go against the values of democracy,
then democracy is in danger.
So I believe that scientists who contribute to science, when that
science can or will have an impact on society, those scientists have a
responsibility. It's a little bit like the physicists around the Second
World War who rose up to tell the governments, "Wait, nuclear power
can be dangerous, and nuclear war can be really, really destructive."
And today the equivalent of a physicist of the '40s and '50s and '60s
are the computer scientists who are doing machine learning and AI.
NARRATOR:
One person who wanted to do something about the dangers was not a
computer scientist, but an ordinary citizen.
Alastair Mactaggart was alarmed.
ALASTAIR MACTAGGART:
Voting is for me the most alarming one. If less than 100,000 votes
separated the last two candidates in the last presidential election in
three states, this is not—
NARRATOR:
He began a solitary campaign.
ALASTAIR MACTAGGART:
—this is not a very difficult problem. You’re talking about convincing a
relatively tiny fraction of the voters in a handful of states to either
come out and vote or stay home. And remember, these companies
know everybody intimately. They know who’s a racist, who’s a
misogynist, who’s a homophobe, who’s a conspiracy theorist; they
know the lazy people and the gullible people. They have access to the
greatest trove of personal information that’s ever been assembled;
they have the world’s best data scientists; and they have essentially a
frictionless way of communicating with you.
This is power.
NARRATOR:
Mactaggart started a signature drive for a California ballot initiative for
a law to give consumers control of their digital data. In all, he would
spend $4 million of his own money in an effort to rein in the goliaths of
Silicon Valley.
Google, Facebook, AT&T and Comcast all opposed his initiative.
ALASTAIR MACTAGGART:
I'll tell you I was scared. Fear. [Laughs] Fear of looking like a world-
class idiot. The market cap of all the firms arrayed against me was
over $6 trillion.
NARRATOR:
He needed 500,000 signatures to get his initiative on the ballot; he got
well over 600,000. Polls showed 80% approval for a privacy law. That
made the politicians in Sacramento pay attention.
So Mactaggart decided that because he was holding a strong hand, it
was worth negotiating with them.
ALASTAIR MACTAGGART:
And if AB 375 passes by tomorrow and is signed into law by the
governor, we will withdraw the initiative. Our deadline to do so is
tomorrow at 5.
NARRATOR:
At the very last moment, a new law was rushed to the floor of the
Statehouse.
FEMALE POLITICIAN:
Can everyone take their seats, please? Mr. Secretary, please call the
roll.
ALASTAIR MACTAGGART:
The voting starts.
MALE POLITICIAN:
Allen, aye.
ALASTAIR MACTAGGART:
And the first guy I think was a Republican, and he voted for it. And
everybody had said the Republicans won't vote for it because it has
this private right of action, where consumers can sue. And the guy in
the Senate, he calls a name—
MALE POLITICIAN:
Aye, Roth; aye, Skinner; aye, Stern; aye, Stone; aye, Vidak—
ALASTAIR MACTAGGART:
You can see down, and everyone went green. And then it passed
unanimously.
FEMALE POLITICIAN:
Ayes, 36, noes, 0. The measure passes. Immediate transmittal to the
Assembly.
ALASTAIR MACTAGGART:
So I was blown away. It was a day I will never forget.
So in January, next year, you as a California resident will have the
right to go to any company and say, "What have you collected on me
in the last 12 months? What of my personal information do you have?"
So that's the first right; it's a right of—we call that "the right to know."
The second is "the right to say no." And that's the right to go to any
company and click a button on any page where they're collecting your
information and say, "Do not sell my information."
More importantly, we require that they honor what's called a third-
party opt-out. You will click once in your browser, "don't sell my
information," and it will then send the signal to every single website
that you visit. Don't sell this person's information.
And that's gonna have a huge impact on the spread of your
information across the internet.
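The browser-level "do not sell" signal Mactaggart describes later took shape as the Global Privacy Control header, which supporting browsers send as "Sec-GPC: 1". As a hedged sketch only, a site might check for it roughly as follows; the Flask app and its responses are illustrative, not a prescribed implementation.

    # Sketch: honoring a browser-level "do not sell" signal on the server side.
    # The Sec-GPC header is real (Global Privacy Control); the rest is illustrative.
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/")
    def index():
        # Browsers or extensions that support Global Privacy Control send "Sec-GPC: 1".
        opted_out = request.headers.get("Sec-GPC") == "1"
        if opted_out:
            # Placeholder: skip any data-sale or third-party tracking path here.
            return "Opt-out honored: this visit will not be sold or shared."
        return "No opt-out signal received."

    if __name__ == "__main__":
        app.run()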
NARRATOR:
The tech companies had been publicly cautious but privately alarmed
about regulation. Then one tech giant came on board in support of
Mactaggart’s efforts.
BRAD SMITH, President, Microsoft:
I find the reaction among other tech companies at this point to be
pretty much all over the place. Some people are saying, "You're right
to raise this, these are good ideas"; some people say, "We're not sure
these are good ideas, but you're right to raise it"; and some people are
saying, "We don't want regulation."
And so we have conversations with people where we point out that the
auto industry is better because there are safety standards.
Pharmaceuticals, even food products, all of these industries are better
because the public has confidence in the products, in part because of
a mixture of responsible companies and responsible regulation.
NARRATOR:
But the lobbyists for big tech have been working the corridors in
Washington. They’re looking for a more lenient national privacy
standard, one that could perhaps override the California law and
others like it.
But while hearings are held and antitrust legislation threatened, the
problem is that AI has already spread so far into our lives and work.
KATE CRAWFORD:
Well, it's in health care; it's in education; it's in criminal justice; it’s in
the experience of shopping as you walk down the street. It has
pervaded so many elements of everyday life and in a way that in many
cases is completely opaque to people. While we can see a phone and
look at it and we know that there's some AI technology behind it, many
of us don't know that when we go for a job interview and we sit down
and we have a conversation, that we're being filmed, and that our
microexpressions are being analyzed by hiring companies.
Or that if you're in the criminal justice system, there are risk-
assessment algorithms that are deciding your "risk number," which
could determine whether or not you receive bail.
These are systems which in many cases are hidden in the back end of
our sort of social institutions. And so one of the big challenges we
have is how do we make that more apparent, how do we make it
transparent and how do we make it accountable?
AMY WEBB:
For a very long time, we have felt like, as humans, as Americans, we
have full agency in determining our own futures. What we read, what
we see—we're in charge. What Cambridge Analytica taught us, and
what Facebook continues to teach us, is that we don't have agency;
we're not in charge.
These are machines that are automating some of our skills but have
made decisions about who we are. And they're using that information
to tell others the story of us.

Five: The Surveillance State


NARRATOR:
In China in the age of AI, there’s no doubt about who is in charge.
In an authoritarian state, social stability is the watchword of the
government, and artificial intelligence has increased its ability to scan
the country for signs of unrest.
It’s been projected that over 600 million cameras will be deployed by
2020. Here they may be used to discourage jaywalking, but they also
serve to remind people that the state is watching.
XIAO QIANG:
And now, there is a project called Sharp Eyes which is putting cameras
on every major street and corner of every village in China, meaning
everywhere; matching that with the most advanced artificial intelligence
algorithms, which they can actually use with this data, real-time data,
to pick up a face or pick up an action.
NARRATOR:
Frequent security expos feature companies like Megvii and its facial
recognition technology. They show off cameras with AI that can track
cars and identify individuals by face—or just by the way they walk.
PAUL MOZUR:
The place is just filled with these screens where you can see the
computers are actually reading people's faces and trying to digest that
data, and basically track and identify who each person is. And it's
incredible to see so many, because just two or three years ago we
hardly saw that kind of thing.
So a big part of it is government spending, and so the technology's
really taken off, and a lot of companies have started to sort of glom
onto this idea that this is the future.
XIAO QIANG:
China is on its way to building a total surveillance state.
NARRATOR:
And this is the test lab for the surveillance state.
Here, in the far northwest of China, is the autonomous region of
Xinjiang. Of the 25 million people who live here, almost half are a
Muslim Turkic-speaking people called the Uighurs.
In 2009, tensions with local Han Chinese led to protests and then riots
in the capital, Urumqi.
As the conflict has grown, the authorities have brought in more police
and deployed extensive surveillance technology. That data feeds an
AI system that the government claims can predict individuals prone to
“terrorism” and detect those in need of “reeducation” in scores of
recently built camps. It is a campaign that has alarmed human rights
groups.
SOPHIE RICHARDSON, China director, Human Rights Watch:
Chinese authorities are without any legal basis arbitrarily detaining up
to a million Turkic Muslims simply on the basis of their identity. But
even outside of the facilities in which these people are being held,
most of the population there is being subjected to extraordinary levels
of high-tech surveillance such that almost no aspect of life anymore
takes place outside the state's line of sight.
And so the kinds of behavior that's now being monitored—you know,
which language do you speak at home, whether you're talking to your
relatives in other countries, how often you pray—that information is
now being Hoovered up and used to decide whether people should be
subjected to political reeducation in these camps.
NARRATOR:
There have been reports of torture and deaths in the camps.
And for Uighurs on the outside, Xinjiang has already been described
as an “open-air prison.”
[Surveillance company video]
NURY TURKEL, Uyghur Human Rights Project:
Trying to have a normal life as a Uighur is impossible both inside and
outside of China. Just imagine, while you’re on your way to work,
police subject you to an ID scan, forcing you to lift your chin while
machines take your photo and you wait to find out if you can go.
Imagine police take your phone and run a data scan and force you to
install compulsory software allowing your phone calls and messages
to be monitored. Imagine if the—
NARRATOR:
Nury Turkel, a lawyer and a prominent Uighur activist, addresses a
demonstration in Washington, D.C.
Many among the Uighur diaspora have lost all contact with their
families back home. Turkel warns that this dystopian deployment of
new technology is a demonstration project for authoritarian regimes
around the world.
NURY TURKEL:
They have bar codes on people's home doors to identify what kind
of citizen he is.
What we're talking about is collective punishment of an ethnic group.
Not only that, the Chinese government has been promoting its
methods, its technology, to other countries—namely Pakistan,
Venezuela, Sudan and others—to utilize to squelch political
resentment or prevent political upheaval in their various societies.
NARRATOR:
China has a grand scheme to spread its technology and influence
around the world. Launched in 2013, it started along the old Silk Road
out of Xinjiang and now goes far beyond; it’s called the Belt and Road
Initiative.
PAUL MOZUR:
So effectively what the Belt and Road is, is China's attempt to, via
spending and investment, project its influence all over the world.
And we've seen massive infrastructure projects going in in places like
Pakistan, in Venezuela, in Ecuador, in Bolivia—all over the world;
Argentina, in America's backyard; in Africa. Africa's been a huge
place.
And what the Belt and Road ultimately does is it attempts to kind of
create political leverage for the Chinese spending campaign all over
the globe.
NARRATOR:
Like Xi Jinping’s 2018 visit to Senegal, where Chinese contractors had
just built a new stadium, loans had been arranged for new
infrastructure development and, said the foreign ministry, there would
be help "maintaining social stability."
PAUL MOZUR:
As China comes into these countries and provides these loans, what
you end up with is Chinese technology being sold and built out by
Chinese companies in these countries.
We've started to see it already in terms of surveillance systems. Not
the kind of high-level AI stuff yet, but lower-level, camera-based,
manual observation-type things all over. You see it in Cambodia; you
see it in Ecuador; you see it in Venezuela. And what they do is they
sell a dam, sell some other stuff and they say, "By the way, we can
give you these camera systems for your emergency response. And it'll
cost you $300 million and we'll build a ton of cameras and we'll build
you a main center where you have police who can watch these
cameras."
And that's going in all over the world already.
AMY WEBB:
There are 58 countries that are starting to plug into China's vision of
artificial intelligence.
Which means effectively that China is in the process of raising a
Bamboo Curtain; one that does not need to—one that is sort of all-
encompassing; that has shared resources; shared
telecommunications systems; shared infrastructure; shared digital
systems; even shared mobile phone technologies. That is quickly
going up all around the world to the exclusion of us in the West.
NICHOLAS THOMPSON:
Well, one of the things I worry about the most is that the world is going
to split in two, and that there will be a Chinese tech sector, and there
will be an American tech sector; and countries will effectively get to
choose which one they want. It'll be kind of like the Cold War, where
you decide, "Are we gonna align with the Soviet Union, or are we
gonna align with the United States?" And the Third World gets to
choose this or that.
And that's not a world that's good for anybody.
FEMALE NEWSREADER:
The markets in Asia and the U.S. falling sharply on news that a top
Chinese executive has been arrested in Canada. Her name is Sabrina
Meng; she is the CFO of the Chinese telecom Huawei—
NARRATOR:
News of the dramatic arrest of an important Huawei executive was
ostensibly about the company doing business with Iran, but it seemed
to be more about American distrust of the company’s technology.
From its headquarters in southern China—designed to look like
fanciful European capitals—Huawei is the second-biggest seller of
smartphones and the world leader in building 5G networks, the high-
speed backbone for the age of AI.
Huawei’s CEO, a former officer in the People’s Liberation Army, was
defiant about the American actions.
REN ZHENGFEI, Founder and CEO, Huawei:
[Speaking Chinese] There’s no way the U.S. can crush us. The world
needs Huawei because we are more advanced. If the lights go out in
the West, the East will still shine. And if the North goes dark, then
there is still the South. America doesn’t represent the world.
NARRATOR:
The U.S. government fears that as Huawei supplies countries around
the world with 5G, the Chinese government could have backdoor
access to their equipment.
Recently, the CEO promised complete transparency into the
company’s software, but U.S. authorities are not convinced.
ORVILLE SCHELL:
Nothing in China exists free and clear of the party state. Those
companies can only exist and prosper at the sufferance of the party.
And it's made very explicit that when the party needs them, they either
have to respond or they will be dethroned.
So this is the challenge with a company like Huawei. So Huawei—Ren
Zhengfei, the head of Huawei, he can say, "Well, we're just a private
company and we just—we don't take orders from the Communist
Party." Well, maybe they haven't yet. But what the Pentagon sees, the
National Intelligence Council sees, and what the FBI sees is, "Well,
maybe not yet." But when the call comes, everybody knows what the
company's response will be.
NARRATOR:
The U.S. Commerce Department has recently blacklisted eight
companies for doing business with government agencies in Xinjiang,
claiming they are aiding in the repression of the Muslim minority.
Among the companies is Megvii. They have strongly objected to the
blacklist, saying that it’s “a misunderstanding of our company and our
technology.”
President Xi has increased his authoritarian grip on the country. In
2018, he had the Chinese constitution changed so that he could be
president for life.
NICHOLAS THOMPSON:
If you had asked me 20 years ago what will happen to China, I
would've said, "Well, over time, the Great Firewall will break down. Of
course people will get access to social media; they'll get access to
Google. Eventually, it'll become a much more democratic place, with
free expression and lots of Western values."
And the last time I checked, that has not happened. In fact,
technology's become a tool of control. And as China has gone through
this amazing period of growth and wealth and openness in certain
ways, there has not been the democratic transformation that I thought.
And it may turn out that, in fact, technology is a better tool for
authoritarian governments than it is for democratic governments.
NARRATOR:
To dominate the world in AI, President Xi is depending on Chinese
tech to lead the way.
While companies like Baidu, Alibaba and Tencent are growing more
powerful and competitive, they’re also beginning to have difficulty
accessing American technology and are racing to develop their own.
With a continuing trade war and growing distrust, the longtime
argument for engagement between the two countries has been losing
ground.
ORVILLE SCHELL:
I've seen more and more of my colleagues move from a position where
they thought, "Well, if we just keep engaging China, the lines between
the two countries will slowly converge," whether it's in economics,
technology, politics, to a position where they now think they're
diverging. So in other words, the whole idea of engagement is
coming under question.
And that's cast an entirely different light on technology, because if
you're diverging, and you're heading into a world of antagonism—
conflict, possibly—then suddenly technology is something that you
don’t want to share; you want to sequester, to protect your own
national interest.
And I think the tipping-point moment we are at now, which is what is
casting the whole question of things like artificial intelligence and
technological innovation into a completely different framework, is that
if in fact China and the U.S. are in some way fundamentally
antagonistic to each other, then we are in a completely different world.
NARRATOR:
In the age of AI, a new reality is emerging: that with so much
accumulated investment and intellectual power, the world is already
dominated by just two AI superpowers.
That’s the premise of a new book written by Kai-Fu Lee.
KAI-FU LEE:
Hi, I'm Kai-Fu.
FEMALE LECTURE HOST:
Hi, Dr. Lee. It's so nice to meet you.
KAI-FU LEE:
Nice to meet you.
Look at all these dog-ears, I love that.
MALE FAN ON STREET:
You like that? I did my homework.
KAI-FU LEE:
But I don’t like you didn’t buy the book, you borrowed it.
MALE FAN ON STREET:
I couldn’t find it!
KAI-FU LEE:
Oh, really? And you're coming to my talk?
MALE FAN ON STREET:
Of course! I did my homework, I’m telling you.
KAI-FU LEE:
Oh, my goodness, thank you.
Laurie, can you get this gentleman a book?
NARRATOR:
In his book and in life, the computer scientist turned venture capitalist
walks a careful path. Criticism of the Chinese government is avoided,
while capitalist success is celebrated.
MALE LECTURE ATTENDEE:
I'm studying electrical engineering.
KAI-FU LEE:
Sure, send me your resume.
MALE LECTURE ATTENDEE:
OK, thanks.
NARRATOR:
Now, with the rise of the two superpowers, he wants to warn the world
of what’s coming.
KAI-FU LEE:
Are you the new leaders?
FEMALE LECTURE ATTENDEE:
If we’re not the new leaders, we’re pretty close. [Laughs] Thank you
very much.
KAI-FU LEE:
Thanks.
NARRATOR:
“Never,” he writes, “has the potential for human flourishing been
higher or the stakes of failure greater.”
KAI-FU LEE:
So if one has to say, "Who’s ahead?", I would say today China is
quickly catching up. China actually began its big push in AI only 2 1/2
years ago, when the AlphaGo-Lee Sedol match became the Sputnik
moment.
NARRATOR:
He says he believes that the two AI superpowers should lead the way
and work together to make AI a force for good. If we do, we may have
a chance of getting it right.
KAI-FU LEE:
If we do a very good job in the next 20 years, AI will be viewed as an
age of enlightenment. Our children and their children will see AI as
serendipity; that AI is here to liberate us from having to do routine jobs
and push us to do what we love and push us to think what it means to
be human.
NARRATOR:
But what if humans mishandle this new power?
Kai-Fu Lee understands the stakes. After all, he invested early in
Megvii, which is now on the U.S. blacklist. He says he’s reduced his
stake and doesn’t speak for the company. Asked about the
government using AI for social control, he chose his words carefully.
KAI-FU LEE:
AI is a technology that can be used for good and for evil. So how do
governments limit themselves in, on the one hand, using this AI
technology and the database to maintain a safe environment for their
citizens, but not encroach on individuals' rights and privacy? That, I
think, is also a tricky issue for every country. I think every country
will be tempted to use AI probably beyond the limits to which you and
I would like the government to use it.
NARRATOR:
Emperor Yao devised the game of Go to teach his son discipline,
concentration and balance. Over 4,000 years later, in the age of AI,
those words still resonate with one of its architects.
YOSHUA BENGIO:
So AI can be used in many ways that are very beneficial for society.
But the current use of AI isn't necessarily aligned with the goals of
building a better society, unfortunately. But we could change that.
NARRATOR:
In 2016, a game of Go gave us a glimpse of the future of artificial
intelligence. Since then it has become clear that we will need a careful
strategy to harness this new and awesome power.
YOSHUA BENGIO:
I do think that democracy is threatened by the progress of these tools
unless we improve our social norms and we increase the collective
wisdom at the planet level to deal with that increased power.
I'm hoping that my concerns are not founded, but the stakes are so
high that I don't think we should take these concerns lightly. I don't
think we can play with those possibilities and just race ahead without
thinking about the potential outcomes.
