Who Is Civilization For?

Who is civilization for?

And of course, I think most people would immediately have an answer. I think most people would
say it should benefit human beings. It should sustain the human
project. It should benefit the overall biosphere, the ecosystem of the Earth so that this all
continues. That's probably who civilization is for but it's never quite that simple. When I was much
younger, I came up with an image to help me think about who the beneficiary is of our activities,
whether we're engineers or writers or anything else and I call this the circle of empathy. By the
way, the term empathy had its origins in a circle of psychologists and poets in Germany about
a century ago and the original use of the term was essentially imagining what virtual reality would
be like someday. In the original use of the term, there was an example given that somebody might
be able to imagine themselves as a leaf or as a mountain and that as people could exercise their
imagination to become different parts of reality, that that would also help them appreciate each
other and develop a sympathy for one another's different positions. It was a very charming idea but
this notion of a radical transformation itself, (phone ringing) Wait. That horrible technology. I'm
sorry, it's always off, I don't know how that happened. This very charming word came from
essentially an attempt to imagine what virtual reality would be like someday. And we've always had
this idea in virtual reality that maybe if you can imagine yourself in a different position, even
something very radical, imagining yourself as a leaf blowing or something, this would help you dislodge
yourself from only seeing the world from your perspective all the time and this might help you
become more sympathetic to the situation of others. So the circle of empathy, this image that I
mentioned is the circle around you and anything inside the circle are those who are the
beneficiaries of what you do, anything inside the circle deserves your empathy. Things outside the
circle perhaps don't deserve your antagonism, perhaps they don't deserve destruction but on the
other hand, they're not the beneficiaries of what you do. So I would hope that all decent people
believe that human beings should be inside the circle. That's not always the case. There are
racists and homophobes and many other varieties of ideological people who would like to put
some human beings outside of the circle but I think decent people can agree that humans belong
inside. There are some cases that are very difficult to decide. There are controversies about
animals, should animals be inside the circle or on the outside or somewhere right on the edge? In
the United States, we have tremendous debates about abortion, about whether an early fetus
should instantly be on the inside or not. Typically, Liberals wish to make the circle larger, and
Conservatives wish to make the circle smaller. It's a good definition of those two attitudes. There
are problems with making the circle either too large or too small. If you make the circle too small,
of course the problem is you become cruel. In fact, you eventually make it small enough that you
even destroy yourself and this we've seen again and again when conservatism spirals out of
control in history, something I don't even need to refer to in this location. If you try to make the
circle too big, there are also problems. If you say I will never even kill a bacterium because I wish to
support all life, then you can't live because your body kills bacteria all the time, you essentially
become incompetent and so there's a sort of a zone in which the circle is plausible. Now, I'm
telling you about the circle of empathy for a very simple reason which is that the project of
technology, which has existed for centuries, has recently become, I would say, dominated by an
idea that we should put machines inside the circle and this is the idea of artificial intelligence. Now,
when I say it's become dominated, I don't think I'm exaggerating. If you talk to most of the people
in most of the big tech companies, I'm one of them but I probably have a minority position on this
idea, most of them will say We are in an AI race, just as was described in your fine introduction
where it was said 'Is Germany going to be in the AI race?'. China announces itself as being in the
AI race, the United States certainly announces itself as being in the AI race, everybody does. It's
become this strange myth of competition where everybody is trying to be the inventor and owner
and controller and beneficiary of this new life form that we will create. Now, what I wish to propose
is that putting a computer inside the circle of empathy, which is essentially the idea of artificial
intelligence, is fundamentally an incompetent idea. It's very much like trying to not kill bacteria,
which would force you to never brush your teeth and essentially die, you can't do it. And in the
same way, to treat computers as being alive is ultimately an absurd task; it is not what it
initially seems to be and I wish to describe my reasons for saying that and some of the reasons
that I think there are much better approaches to getting the advantages of new mathematical and
engineering techniques that are usually described as AI, but we can get those benefits without the
myth-making, without the storytelling, without the theology of AI. Now, I call it a theology because
within the tech world, artificial intelligence has become very much like a religion. Specifically, it's
taken on a quality that is similar to a medieval religion. The adherents are extremely influential. If
you're not familiar with this culture, you might think I'm exaggerating but, for instance, Ray
Kurzweil, who's a director of engineering at Google, says things that are more extreme than what I'm
about to say all the time and so in the new theology, it goes like this - we are building these
intelligent machines, they are improving faster than we can improve ourselves, so they will at some point surpass us,
just like a jet plane would surpass a race car and when they surpass us, they will be the most
intelligent machines, they will be the dominant life form. It's a religious project, this new, faster
machine will inherit the project of life from us, it will take over as the major life form on the planet.
Those who are close to the machine at that time might have their brains uploaded and achieve the
experience of immortal life within the new giant artificial intelligence. Until then, the first duty of
everybody is to share all their data because the data is what will make up the new artificial
intelligence and this is a fascinating point because you might think why are companies like Google
and Facebook so greedy for all this data? Can it really possibly make them more money from their
advertising to follow your expressions and all these things? Can it really, really do that? And the
answer is not really, a little bit, but the deep driver is actually this religion. Google says it's only
being an advertising company temporarily and it's trying to win the AI race, and that's what it's
doing with the data. And when I say Google says, I mean, the founders and the people who control
Google, like Larry Page, say this very literally. So this is not an inference or a clever interpretation;
this is simply a report of the literal statement as it is made. So this obviously does bear many
similarities to medieval religions. The true believers can get immortality; the nonbelievers are
consigned to death. There'll be an apocalyptic event that ends everything we know; this is called
the singularity when the machines take over, and we're supposed to like that because the
machines will be better, they'll be smarter. That is approximately the theology. Now, I would like to
examine why I think this is a terrible way of thinking from a number of perspectives but the first one
should be obvious because it suggests that people will either all be killed or at the very best, be
made obsolete and this is surely a ridiculous way of conceiving of a project that should be about
serving humanity. The project being technology, we have inverted our goals: technology is now
the beneficiary instead of a tool to serve the original population. So this is a tremendously tragic
example of the circle of empathy being expanded in an incompetent way where in order to expand
it to include computers, we're actually kicking people out because we're saying that they will not be
the beneficiaries after the singularity. But I want to dig a little deeper into the details of this and I
want to use a very specific example that I often find is the easiest way to get across how I see this.
So the example I'm going to use is translation between languages such as between German and
English, as is being performed now by someone I can barely see in the soft light of a glass box
over there. Hello, translator. (laughs) Some of you have probably heard me use this example
before and I apologize for being repetitive but I think it's clear, so I'll continue to use it. The idea of
translating between human languages was one of the first dreams of computer science even as
early as the 1950s. My mentor, the most important mentor for me, was named Marvin Minsky, and
Marvin Minsky, as you might know, was the person who probably did more than any other person
to promote the idea of artificial intelligence, to promote the mythology of it, and I'll get back to that
a little bit later, but Marvin loved to argue. So the fact that we disagreed about this from when I was
a kid was wonderful. I would tell Marvin, this whole AI thing is just horrible. Why are we doing it?
And he would say it will be effective for getting our grants, so shut up and just play along and in
fact, it was true. It was very good for getting grants back in the early days. In the 70s, when I
started at this, you would go to a grant-making organization such as the Defense Department,
you'd say, we're going to build this super smart thing, and if we don't do it, our enemies might, and
it'll get smarter than people, and the answer was, okay, here's your money, here's your money, oh my God,
you better... and so it was very effective. And actually, the whole thing started off as storytelling to
get grants and I'm not saying this as a scholar, but as a direct participant, this is where it started.
But at any rate, about language translators. One day, Marvin had the idea that maybe computers
were good enough, that a couple of graduate students could achieve translation between
languages over the course of a summer. So he assigned it as a summer project and the idea was
simple. You would start with Noam Chomsky's idea of a core, a logical core of language and then
you would have the dictionary for the two languages and you'd combine these two things together
with an algorithm, and then you should have a translator. It was a reasonable hypothesis and it
absolutely failed and people tried and tried and tried to do more and more sophisticated versions
of that approach for decades until in the 1990s some researchers at IBM's lab had a totally
different idea. They were saying, trying to write a program that understands language is hopeless
because we use language but we don't understand language. Nobody has a scientific description
of how language works. What we'll do instead is we'll use big data and this was one of the first
examples of big data becoming important in this type of application. We'll get a very, very large
amount of text that has been translated, and then we'll look for correlations of phrase to
phrase, and we'll create a mashup of that, and that worked. All of a sudden, there were
usable translations coming out, and that is still the core of the technique that we all use today.
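To make the contrast with the Chomsky-plus-dictionary approach concrete, here is a toy sketch of that phrase-correlation idea. Everything in it is invented for illustration; the actual IBM statistical models, and the neural systems that followed, are far more elaborate.

```python
# Toy sketch of statistical translation by phrase correlation: count how often
# phrases co-occur in human-made translations, then "mash up" new text from
# the best-correlated pairs. (Corpus and phrases invented for illustration.)
from collections import Counter, defaultdict

parallel_corpus = [            # hypothetical aligned phrase pairs, German -> English
    ("guten morgen", "good morning"),
    ("guten morgen", "good morning"),
    ("guten abend", "good evening"),
    ("vielen dank", "thank you very much"),
]

phrase_table = defaultdict(Counter)
for src, tgt in parallel_corpus:
    phrase_table[src][tgt] += 1   # "correlation" here is just co-occurrence count

def translate(phrase: str) -> str:
    """Return the target phrase most often paired with the source phrase."""
    candidates = phrase_table.get(phrase)
    if not candidates:
        # The system fails on anything unseen -- which is why it needs a
        # constant supply of fresh human examples, as described below.
        return f"<no data for: {phrase}>"
    return candidates.most_common(1)[0][0]

print(translate("guten morgen"))  # -> "good morning"
print(translate("gelbwesten"))    # -> <no data...>: new language, no human examples yet
```

Note the failure case: the mashup only works where people have already supplied translations, which is exactly the dependency described next.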
Now, since that time, free translation services have become available. So as we all know, we can
go online and from Google or Microsoft or others, we can enter some text and we'll get a memo
translated or a web page, it's wonderful. I think this is a fantastic service, it's convenient, I enjoy it,
I benefit from it. However, there's something very strange going on because with these services
coming online, the career prospects for people who translate professionally have changed and to
get into how they've changed properly requires a bit of a technical discussion. It used to be that
there were ten times more people who could look to a career in translation and their success
formed a bell curve where most of them fell in the middle. Once again, it's a little technical: what
happens after something's been regimented under a computer system is that it supports a much
smaller number of people, and the outcomes follow what we call a Zipf curve, where a few people at
the head can benefit but most don't (a toy numeric sketch of this follows a bit further on). At any
rate, on average, what's happened, although there are
some people who've done very well, on the whole, there's about a tenth the level of career
prospect for somebody who does translation for a living because they can't get a job doing those
little memos anymore for a business which used to be a lot of the work. Now this follows exactly
the pattern of other tasks that used to be paid that have been turned free. It's what's happened
with recording musicians, it's what's happened with photographers, it's what has happened
crucially with investigative journalists and so you might say, well, that's very sad, but this is just the
old story of progress. When a new technology comes along, it makes some skills obsolete. You
might say that except you'd be wrong. In this case, the reason you're wrong is something that's not
widely realized. Every single day, the language changes. Every single day, we need new example
phrases. For instance, there's news. We suddenly have to talk about yellow jackets in France, and
that has to be translated correctly. If it's translated literally, it will turn into nonsense. We have to
talk about new pop culture, new jokes, new songs, new memes, all of these things. And where do
we get the example phrases? We steal them. We go around to people who do not understand that
we're taking the translations and we simply take them. They have all clicked through on their little
agreement that lets us do it, and we steal them by the tens of millions every single day. We, or
rather the algorithms, have found people who are doing translation for one reason or another, for instance
sometimes it's amateurs who like to do subtitles on the most recent YouTube videos for their
language, that sort of thing. So here we have a very screwy, bizarre situation. We're telling the
people you're obsolete because a giant electronic brain is better than you and has replaced you
but we still need you, we're going to steal from you. So can you see something wrong with that?
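To make the earlier bell-curve-versus-Zipf contrast concrete, here is the toy numeric sketch promised above. Every number is invented for illustration.

```python
# Assumed numbers only: stylized incomes before and after the work is
# regimented under a computer system.
import random

random.seed(0)
# Before: a broad middle of ~1000 translators, incomes roughly bell-curved.
bell_incomes = [random.gauss(40_000, 8_000) for _ in range(1000)]
# After: ~100 survivors on a Zipf-like curve, income(rank) ~ 1 / rank.
zipf_incomes = [100_000 / rank for rank in range(1, 101)]

print(round(sum(bell_incomes)))              # ~40,000,000 spread across a middle class
print(round(sum(zipf_incomes)))              # ~518,738: a small fraction of that...
print([round(x) for x in zipf_incomes[:3]])  # [100000, 50000, 33333] ...mostly at the head
```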
So there's a sense in which AI can be understood as a new way of packaging data and other
efforts, creative efforts between people, rather than as a thing in itself. Now, this is a
principle that generalizes: each supposed AI program is different, and not all of them need new data
every day, but they all ultimately come from people, and so in order to treat the AI as alive, you
have to somehow hide all those people. You have to tell those people, get behind the curtain, like I
did before when I wasn't sure if I should be on stage, you have to pretend that they aren't there,
you have to tell the people you're not needed anymore, maybe you'll survive on basic income or
something but the funny thing is that the AIs are always owned by somebody. The AI is owned by
Google or maybe Microsoft or somebody, Baidu if it's a Chinese one. So essentially, when I hear
AI, I hear the word theft. To paraphrase Proudhon's "property is theft": AI is theft. It's a form of pretending that people
didn't contribute when, in fact, it was crucially about what people contributed. So from a political
and economic perspective, it's a disaster. In the past, every time a new technology emerged, we
would say this is creative destruction and if some old jobs have gone away because the skills are
different now, that's okay, because there'll be new jobs. But there can only be new jobs if we
accept the idea that the people who do those jobs should be paid. If we say, well, there are new
jobs and the new people are needed, however we're going to pretend they're not there, we're going to
pretend we're not stealing from them and we're not going to pay them, then of course, that system
breaks down and instead you create unemployment as the technology advances, which is a
disaster, it means that technology hurts people instead of helps people. So going back to our
question, who is technology for? If we say it's in part for machines and when we say artificial
intelligence, we're suggesting that the machine itself is becoming the beneficiary, that there's an
intrinsic value in making the machine have a certain status. If we say that, we also in the same
breath are removing people from having that same status. So it's a remarkably unethical and
destructive idea. Now, having said that, I want to make very clear that the thing I'm criticizing
involves storytelling, myth-making, it involves vocabulary, it involves ethics, it involves economics,
but I'm not criticizing computer code, I'm not criticizing the design of robots, I'm not criticizing
anything fundamentally mathematical or in the realm of engineering. I actually happen to love
that stuff. The people who do AI and call it AI think I do AI. For instance, my friends and I sold
Google their first machine vision company. I do that stuff but to me, it's just code. To me, there's
no reason to add this new layer of myth-making that gives it a kind of a supernatural status as this
new life form that we're creating. In fact, there's every reason not to. Let me give you one more
approach to this idea. As you probably know, one of the earliest expressions of the idea of AI was
from Alan Turing, the principal inventor of the computer and this is called the Turing Test. When I
was in school, when I was young, we were taught the Turing Test as if it were one of Einstein's
thought experiments, as this foundational core idea in modernity, and we weren't taught anything
about the biographical context in which Turing thought of it; we knew very little about it actually, it
was just an abstraction. More recently, more and more people are aware of a little bit of Turing's
biography. There was a major movie about it; it's become part of the public discourse. At the time,
nobody knew. So for those of you who don't know, Turing proposed the idea of the Turing Test in
the last weeks of his life. He'd had an extraordinary, extraordinary life as the principal inventor of
the computer. He had been drawn into the war effort and plausibly was responsible for saving
Britain from an invasion and what he did with one of the earliest computers was he broke a Nazi
secret code, which was called the Enigma Code. You all probably know this history but just in
case, the Enigma Code was decoded with a little box and the mathematicians in the Nazi
intelligence services believed it to be unbreakable but they weren't aware that there was a
computer on the other side and the computer could break it. So after the war, Turing was initially
celebrated as one of the great heroes of Britain. However, it happened that he was gay and at that
time, it was illegal to be homosexual in Britain. And he, along with about 50,000 others who had
participated in the war effort, was treated not exactly as a criminal, but sort of as a mental
patient. So he was forced to accept a treatment, a quack, bizarre, unscientific, and cruel
treatment that was supposed to cure his homosexuality. He was forced to accept massive doses
of female hormones with the idea that that would balance his sexuality and here we see the
extraordinary power of the metaphors we choose to use to understand our own technologies.
Before the computer became the dominant metaphor, the steam engine had been the dominant
metaphor and during Turing's lifetime, the computer was known to only a few individuals, so the
steam engine was still the powerful metaphor and with steam engines, it's all about balancing
pressures and therefore the popular version of Freudian psychology had to do with balancing
these pressures. So the idea was that, well, his sexuality must be out of balance, so we're going to
balance it with the opposite hormone and somehow that will get the engine to not be giving off
steam and all that. It's something like that, something crazy, something stupid. So he developed
female bodily characteristics, and he committed suicide in an extraordinary way, by eating an apple
laced with cyanide next to the first computer in his lab. Do you all know the story? Yeah, some of
you do. So it was just in the last weeks before his death that he wrote down the Turing Test and
once you understand the context, I don't think it's possible to read it in the same way. So the usual
way that the Turing Test is told is we're told that Turing modified an older parlor game in which a
judge was supposed to determine whether little slips of text that are transmitted under a screen
were coming from a man or a woman. You'd have to guess who is the man who is the woman,
they both might be trying to deceive the judge and I don't know why this was considered an
entertaining game; it sounds kind of boring to me, I don't get it, but anyway, apparently
Victorians thought that this was incredibly amusing and so Turing said, we'll get rid of the woman
and let's have a person and a computer and if the judge can't distinguish them, what distinction is
there? Okay, now, this is a very interesting intellectual move, because it
doesn't say anything about the absolute status of anybody. All it says is, if you think
people are elevated in some way and you can't distinguish the person from the machine, who are
you to discriminate against the machine? So it essentially puts the machine into the role of the
homosexual or the Jew or some other oppressed class, it takes the machine into the circle of
empathy and complains, why does this thing not have status, why does it not have rights?
However, however: that's the attitude that the tech world has had ever since. The technical
young men will often whine, why isn't my machine treated as a real person? My machine is
smarter than your dog. I don't know, I ran into this all the time in the lab with brilliant young men.
The thing is, Turing was being tortured to death for his identity and I think the way we should read
this should be informed by the context. I think it was in part a very dark cry of pain. It was saying
here, what more could I have done for Britain? What more could I have done for the cause of
civilization, what more could I have done for the cause of democracy than what I did? And yet I'm
still being killed for who I am, and I think there's an indictment built into it. There's an indictment
that, you know, if you can't treat me as a person, I guess maybe you'll treat a computer as a
person. It's bringing up the whole absurdity of the lack of empathy shown towards him. I think that
that's the better interpretation given the context. I think what he's saying is that if you have so little
humanity, I bet you can't tell a computer from a person. And in fact, if you look, if you look in the
footnotes, he only wrote two versions of it, one in a little note and another as part of a little article,
and if you look in the footnotes, he has a comment, surely you can see that ultimately the
computer came from people and people came from God and whatever you see in the computer is
ultimately just part of the divinity you see in people. He has this amazing footnote which nobody
ever reports on. Okay, so I want to say something else about the Turing Test, which is a little bit of
a joke, a little bit tongue in cheek, but here's what I wish to point out: In the Turing test as it's
usually presented, there's a judge who is attempting to distinguish a person from a computer, only
getting little notes, little tweets, you might say, little messages. Now, the assumption of the
technically minded nerd is that if the judge can't distinguish which is the computer and which is the
person, it must mean that the computer has become elevated, that it has become like a person,
but there's another logical possibility, which is that the person has gone down, the person might
have made themselves debased, the person might have made themselves stupid and that is why
you cannot distinguish them from the computer. It's another logical possibility. There's nothing in
the Turing Test that tells us which happened. Furthermore, the judge might have become stupid.
Okay, so you start with two people and one computer; either the computer might become elevated or
the people might become stupid, but because there are two people, I would argue that there's a
two-thirds chance that a person became stupid rather than that the machine became smart. Okay, so
in general, anytime the Turing Test seems to be passed, it probably means that people have gotten
stupid. Now, you might say, well, this is just a theoretical exercise, but there you would be wrong.
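That two-thirds claim can be written down directly. A tiny formalization, under the (deliberately generous) assumption that the three possible explanations for a passed test are equally likely:

```python
# Three participants, three equally likely explanations for a "passed" test
# (an assumed uniform prior -- this formalizes the joke, nothing more).
from fractions import Fraction

explanations = [
    "the computer got smarter",
    "the human candidate got stupider",
    "the human judge got stupider",
]
p_human_declined = Fraction(
    sum("human" in e for e in explanations), len(explanations)
)
print(p_human_declined)  # 2/3: odds favor a person having gotten stupid
```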
This is actually possibly the most important technological interaction going on now, because it's
creating an existential threat to our species and our civilization. Now, here's what I refer to, I'm
referring to fake people that create fake social perception, that destroy human character, destroy
the politics of regions, and ultimately destroy the ability of mankind to act in any collective way that's
rational. So let me say, let me unpack those things. Humans have a characteristic that we share
with many other species, that we perceive socially. And what I mean by that is that the way people
around you are directing their attention, and the attitude they have to the world, form a collective
perception that all people present are aware of. We're always helping each other watch for
dangers, we're always helping each other be aware of opportunities. In one of my books, I tell the
story of when I was a boy, and some friends would play this trick where we would go out into a
crowd and start pointing at something when there's nothing there, and soon everybody was
looking there. That is social perception. So, in the online world right now, the rate at which fake
accounts are added to platforms like Facebook, Instagram, WhatsApp, or accounts on YouTube,
Google accounts, the rate of fake people is rising faster than the ability of the companies to purge
the fake people. So we have an unknown but large portion of fake people, and around the fake people
there's a whole economy: you can buy fake people in bulk if you want to, suddenly buy followers
so that you look popular, you can buy the service of creating many fake people to push a political
idea, you can do all of this. Now, this is one of the things, because of my concerns about AI, I
started writing about the danger of this very early, and I even have an essay from the early 90s
about the possibility of fake people who in those days we called agents, and today they'd be called
bots, but about the possibility of them swaying elections through false social perception and how
they might go into battle with each other and I wasn't the only one, I think there were other people
also writing about this danger. All of our warnings were useless, obviously, because it's exactly
what's happened. So when fake people are created and they throw an election, what's happening
is that the Turing Test has been won by people getting stupid. Can you see that? It's the same as
in my analysis of the old thought experiment. Essentially, what we've proven is that we're willing to
become stupid in order to make the machines seem intelligent, in order to let them influence us but
ultimately, and this must be remembered, there's no weird alien or angel or supernatural force
that's creating computer programs, it's all from people, and so therefore, if there is an AI that's
making people stupid, that AI is being run by somebody, and whoever that somebody is is seeking
their own agenda, which is typically money or power, or occasionally just a kind of nihilism, but
usually it's money or power, or is maybe an ego thrill. Whatever the motivation, whenever people
are made stupid by believing in AI, there's somebody unrevealed, somebody behind the
curtain who is the beneficiary, who's the puppet master. These days, we tend to think of that
person as being Vladimir Putin because he's been caught doing it so much and there's such
extremely well done documentation of him having done it in some specific cases, such as in the
US election, in the Brexit vote, and so on. I'm certain that there are actually many people who do it.
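A minimal sketch of why this works, with every number invented: if people tend to adopt what looks like the majority view, a modest block of purchased accounts is enough to flip what everyone perceives.

```python
# Invented numbers: false social perception from bulk-bought fake people.
real_for, real_against = 450, 550   # genuine opinion: the idea is actually losing
bots_for = 200                      # purchased fake accounts, all pushing one side

visible_for = real_for + bots_for   # what the platform's feed appears to show
visible_against = real_against
print("for" if visible_for > visible_against else "against")
# -> "for": 200 fake people invert the apparent majority that real people react to
```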
It's very inexpensive to do, it's easy to do. There was this experiment we did at Microsoft that I
interpret perhaps in a different way than the experimenters. We made a little chatbot, and the idea
is that it would talk to a large number of people, millions of people at once, and use their phrases
with each other through pattern matching to create the illusion that it was a friend you could talk to.
So there were all these people talking to each other without realizing it, they all thought that they
were talking to an AI. It was called Tay, and within 24 hours it turned into a Nazi. It became this
horrible, racist, evil thing and had to be shut down and so the question is, why. And there are two
popular explanations for this, one of them is that there was just a group of kids who were being
vandals and kept on feeding it ugly things, but I have a different interpretation and it's very hard to
show which one is more correct. I think we created a Putin detector. I think what happened is our
bot started interacting with other bots that were malicious, and they had a corrupting effect.
So in other words, we made ourselves stupid to believe in this stupid bot and maybe the right thing
is just not to even believe in bots in the first place. Now, a key idea is that any core capability that
you can offer using the Artificial Intelligence framework can also be offered in a different way in
which the person perceives themselves correctly as being in control and perceives the value that
they receive correctly as having come from other human beings. For instance, right now, Google
somewhat pressures you to just let it choose videos for you, video after video after video. The way
it chooses videos is in part based on similarity of interest to what you've selected. But it's also in
part driven to get you engaged and unfortunately, the most engaging material is that material
which excites the fight or flight responses within people, such as fear and anger and so, as has
been repeatedly documented by researchers, if you do let the Google video selection go on, in a
remarkable share of cases, there's a debate about whether it's a majority or a large minority, but it
doesn't really matter, if you just let it choose for you, it eventually will go into some kind of weird,
malicious, paranoia-inducing, anxiety-inducing zone of videos that seem to have come from
malicious sources. And this is not because anybody at YouTube wants to hurt the world, it's simply
the natural outcome of this whole methodology, this is what will happen. The alternative is very
simple, which is to say we will not automatically choose videos for you, you will search for videos
and click on them. I mean, the difference is so slight, it's so slight and there already are some
options where you can click on a collection of videos or something, I mean, there's ways to avoid
this, but the artificial intelligence religion makes engineers just loathe, loathe giving
people control because they want everybody to buy in to this AI religion, they want people to say,
yes, the machine is choosing for me, I trust the machine, the machine is becoming alive. They
want that so much that they'll make the world, they'll destroy the world to get that feeling. So this is
something I battle against all the time. I battle against this constantly, and it's difficult. I can attempt
to theorize why this holds so much sway in the technical community, why is tech culture so
obsessed with this idea of creating fake people and believing that it's creating a new life form.
Well, one theory is that it's mostly men and it's some sort of a womb envy kind of a thing that we
want to give life. It's a way of not needing women anymore, we can propagate life without them,
without their difference from us, without their whatever is imagined. The vast majority of people
who think this way are men, and the vast majority of influential people in tech society are men and
that is a real factor, it's a very peculiar feeling, that's what's part of it. Part of it is, it's an effective
plot for science fiction and so much compelling science fiction has been made out of the idea of
computers coming alive. We could mention the Matrix movies and the Terminator movies and on
and on and on. That's had a big, big role, and if you look at these movies and you think, wow,
science fiction does kind of say something about the future of technology, then what you tell
yourself is, AI is power in the future, so we better be the ones to make AI first, and it better be our
AI. Once again, the sense of the race, this all-or-nothing race where nothing else matters. Within
the tech community, you'll often hear, and by the tech community I mean the very most powerful,
wealthy and influential individuals, my friends, in many cases, you'll often hear an argument like
Well, it's fine for you to worry about global warming, it's fine for you to worry about whether there'll
be enough fresh water when we hit peak population later in the century, it's fine for you to worry
about the danger of emerging new infectious diseases as populations travel around the world, you
can worry about all that stuff, but it doesn't really matter. The only thing that really matters is the AI
race, because the AI will be smart enough to solve all those things. So everything else is a waste
of time. It's the singular focus on this fetish object. You can say it's a little bit like the golden calf in
the Old Testament, a sort of a flowering of vanity, of people believing that they can be God, which
is, I think, a natural thing to want. It's difficult to be a person. We die, mortality is very difficult, and
this gives you a fantasy of some other status, some other story. That's a very big part of it. Another
part of it is what I call nerd imperialism and what that means is that the nerd mentality, the nerd
mindset would like to subsume all the other ones. So all the people who are interested in art or
design, all the people who are interested in sociology, the people who are interested in politics, the
people who are interested in psychology, the people who live subjectively and are interested in the
impressions of the world, the people who believe in interiority and are interested in their own
experience, the nerd imperialist would like to overwhelm and control and be superior to all of these
and it's kind of happened because we make so much money. You know, the biggest companies
are tech companies now, and they've come up very quickly. We're outpacing even the
pharmaceutical companies, we're kind of kings in a way and so, when somebody has a kind of good fortune,
they always read it as confirmation of their high self regard. And so naturally, we think, yes, this
whole business of replacing people with something we invent, of course it makes sense and
people are paying us to do it because, of course, it is the right thing. But ultimately, the deepest
reason that I think we should reframe the mathematics and the engineering that's normally
bundled kind of arbitrarily as being the thing that is AI, the reason I think we should reframe that
instead as simply technology that people use instead of a life form, is a spiritual reason. I find that
each moment is remarkable. I am absolutely astonished that I'm experiencing this, I'm absolutely
astonished that I'm not a machine. There's this extra thing where I am alive inside this body, I am
perceiving, this can be called consciousness, it can be called experience, it can be called
sentience. Whatever you call it, the AI people will say, oh no, that's something we can do with a
program, they'll subsume it, so no vocabulary is adequate to describe it, which is maybe
appropriate, but this amazing sense that there's something manifestly supernatural in every
moment of ordinary life, this thing to me is sacred and absolutely remarkable. It makes me a better
scientist, it makes me a better technologist because it reminds me that I don't understand
everything. It reminds me that I live only on a little speck of understanding in a sea of mystery. It
reminds me of how precious and mysterious other people are, even though I can only go by faith
that they are also experiencing. It reminds me of how life is magical, how much gratitude I have to
be here, and the loss of that, the loss of that to this AI fantasy of power and greed and ideological
empiricism is such a horrible idea, it's such a loss. I think there are many things going on in the
world. Social media is exciting these emotions through fake people and it's making politics horrible
everywhere. There's been this incredible concentration of wealth and power, much of it for the
people who are closest to the big computers in one way or another, there are many things going
on that are destabilizing and harming the world, but I think one of them is this looming sense of
people becoming obsolete. I was recently giving a talk to high school students in the US, and I
heard questions I've never heard before. I heard questions of the form, Well, if we're going to be
obsolete, why did our parents have us, what is the point? I've never heard that from a young
person before. I've heard young people who are scared or angry, I've heard all kinds of things. I've
heard young people who think adults are full of crap, all of those things, but I've never heard that.
And I think the sense that the technologists are about to make people obsolete is having a
deleterious effect on the world along with all the other things. I think it's part of the reason why we
see such a rise in fundamentalist religions of all kinds everywhere, whether we're talking about
India, the Islamic world, Israel, the United States with fundamental Christianity, all over the world,
you see the rise of these things and I think it's in reaction to this idea of like, wow, people are
about to be obsolete and it's those other nerds over there who are owning the new God, not us.
That's a horrible feeling, it's based on a lie, it's a lie we shouldn't be telling. That is my talk for now.
(Applause) Ulrich Kelber: Well, first of all thank you so much for this, thanks also to the Federal
Chancellor Willy Brandt Foundation that this year's lecture has chosen a digital topic. I dare to
predict that it will not have been the last time because, obviously, this comprehensive change that
comes with it will make it important in other subject areas as well. And a speaker has been
chosen who is an insider, but who has kept an eye on the big picture, not just dwelling on the
technical solutions he himself has worked on, and has kept an overview. Actually, it was a
passionate plea to turn technical progress into social progress, as Willy Brandt would probably
have said, and not to let it go in the opposite direction. And during the speech the smartphone has
"meowed" once, and that reminds me of a book of yours, Mr. Lanier, where you say, Be a cat. Be
non-conformist, don't be submissive, be surprising, be unpredictable, and therefore the question:
In this transformation of technological social progress, how much does the individual have to do,
what must they pay attention to and what is the task of the state? Jaron Lanier: Okay, so before I
answer your excellent question, I have to say something about that "Meow". So as you might be
aware, some editions of the book actually have a picture of the cat on the cover; it's a real black
cat named Potato, who was rescued from a parking lot in Oakland, California, and that's his "Meow",
and so I wanted everyone to know because I believe in giving credit, just as I don't want to steal
people's data, I don't want to steal cats' data, and so he'll get an extra treat for having contributed
to this so that I can be consistent in my beliefs. Now, the role of the various players is tricky
when it comes to technology and it's one of the reasons why I've attempted to live this rather
complicated life of being both inside and being a critic at the same time, of being essentially the
loyal opposition, if you like. And the problem is that it gets very complicated, the future is
unpredictable, and so every time you think you've nailed it down, some programmer comes up with
some twist that undoes your assumptions, right. So it is very hard to figure out what is a
useful role to play. So I made a decision that I would ask people to consider quitting these systems
as a form of social responsibility. It's a very drastic thing to ask and I realize that only a small
minority of people can possibly do that because people are both addicted, and they also have a
dependency because of the network effect, because everybody else, their career things, and
their families are on these systems, and between the network effect and the addiction, it's very
hard to quit. But the thing is, there has to be an element of personal responsibility that includes
refusing to participate, because whenever you have mass addiction that's connected to a
commercial enterprise, there have to be some number of people, even if it's only a small number,
who can have a conversation outside of the addiction system. For instance, with cigarettes, there
was a time when this hall would have been filled with cigarette smoke right now, and more children
would have been dying from lung cancer and we finally came to a decision as a society that having
cigarette smoke everywhere was actually not worth it. But it was very hard, at first everybody
resisted, and we said, no no no, I feel sexy when I smoke, I feel it's my personal right, there was a
very different feeling and we could not have made that progress if there weren't at least a few
people who are outside of the addiction and it's the same with this. If there are at least some
people who start to see their own role in perpetuating the system, that might be enough, even a
small number, to change the conversation. So that's the individual responsibility. As far as
government, this is what we've been talking about for the last two days, it's a big topic. I think that
government has to evolve a sense of regulation in such a way as to attack the problems in the
very most fundamental ways possible. So, for instance, in Europe, there's been an emphasis on
privacy, GDPR, and so on, but the privacy violation in itself is not the thing that does damage, it is
what's done with the information after privacy is violated. So you can have an excellent label on a
bottle of water, and you can say, wow, in the fine print, it says this is poison, but really, you
shouldn't be allowed to put poison in the bottle. And I think that we have to go many steps beyond
the GDPR so that it's not just about preventing privacy violation, but preventing manipulation, and I
think the only way to do that really has to do with restricting not just how data is gathered, but how
it's used and ultimately trying to get rid of the business model of manipulation, and indeed probably
the metaphor of artificial intelligence in the long term, and there are many schools of thought about
how to do that, but I think regulation has to cut as deep as possible because the tech companies
are very clever and wiggly. If you look at how we evade taxes, for instance. I shouldn't say that,
except for Microsoft, only the other ones. But, if you look at how clever these things are, it gives
you a sense of how clever people with big computers can be and so it's very, very hard to foresee
what will happen and from a regulatory point of view you have to cut very, very deep. Ulrich
Kelber: What is the core of the data abuse? You had described earlier also the possibility of abuse
by third parties and then went on to say that artificial intelligence as it is used today often consists
of intellectual property theft, theft of data, misuse of data. Now there are other areas of politics
where attempts are being made to regulate such things. For instance, on the issue of genetic
diversity, the Cartagena Protocol and others have attempted to say that the regions in which
something like this arose must have a share when this information is used. You have made a plea
that data must be given a value, that we must distance ourselves from an "everything for free"
culture. How should we imagine this in practice with, for instance, this translation? Must
the writer of a text pay the people whose ideas he used in his novel, must the translation program
pay for having collected subtitles on YouTube? What is the essence of this idea? Jaron Lanier:
Okay, so I think there are maybe two questions within that question and if it's okay, actually,
there's like twelve questions, but I'd like to answer two of them in order. One of them, you were
asking about the variety of abuses of data and I'd like to just give you one other example of an
abuse before I get to the specifics of translation, if that's okay. The United States has an insane
system of healthcare where it's privatized and you get your health care mostly through your
employer, and if you're freelance, you're at risk. This is a crazy system, but anyway, it's what we
have. Now, before big computers, before big cloud computers, before the algorithms that we call
AI algorithms, the competitive pattern between insurance companies was to get as many people
signed up as possible, because that created a bigger pool. So it was all about scale and the bigger
the insurance company, the more profitable in absolute terms. As soon as the big computer
showed up, the business incentives for insurance companies completely reversed. Now you had
software that could correlate the lives of millions and tens of millions of people and so you had
better predictions of what would happen to somebody, and now your goal was to insure as few
people as possible, those who are unlikely to make a claim, right, and so all of a sudden it became
about dropping people and reducing the number of people, so for the people being served, more
information resulted in poorer service instead of better service; it had exactly the opposite effect of
what you might want. And the same thing has happened in many other instances in industry where
the use of these algorithms has undone previous assumptions, because it gives whoever's
closer to the bigger computer enough extra information, enough extra insight, to essentially undo
what might otherwise seem to be a normal business. So keep that in mind and I'll address your
second question, which is, in the case of the language translators, how would it work exactly?
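First, though, a toy sketch, with invented numbers, of the insurance reversal just described: once claims become individually predictable, the profitable strategy flips from enrolling everyone to excluding the people likely to claim.

```python
# Invented numbers: the insurer's incentive before and after big-data prediction.
applicants = [
    {"name": "A", "predicted_claim": 200},
    {"name": "B", "predicted_claim": 12_000},   # the person who needs insurance most
    {"name": "C", "predicted_claim": 300},
]
premium = 1_000

# Before the big computer: no individual predictions, so enroll the whole pool.
profit_enroll_all = sum(premium - a["predicted_claim"] for a in applicants)

# After the big computer: insure only those unlikely to make a claim.
cherry_picked = [a for a in applicants if a["predicted_claim"] < premium]
profit_cherry_pick = sum(premium - a["predicted_claim"] for a in cherry_picked)

print(profit_enroll_all)   # -9500: scale loses once risks are knowable
print(profit_cherry_pick)  # 1500: dropping people becomes the winning strategy
```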
Now, as we all know, people have more capability technologically than we used to; technology has
improved, let's say, since 100 years ago, and we would like to hope that our societies are getting
somewhat better organized as well. And if you combine those things, what you see is a growing
economy. So we see the economy grow, and as the economy grows, hypothetically, it should
benefit everyone, which would mean that everyone is gaining benefits from our improved
technologies and our improved abilities to coordinate. So I think I'm saying something that's very,
very basic. Now, in the case of the insurance company, I showed an exception where that is
broken. It's a market failure, I feel, where people are suffering as the technology improves for
the benefit of a small few instead of everybody benefiting, right. Now, what I would argue is that in
the future, let us suppose that you start paying for things like translations, you start paying for
things like Facebook, and let me just deal with one question, I know you might say, oh, I would
never pay for that, even though I'm addicted to it, but the thing is, people pay for Netflix. Like, once
people get used to paying for something, not only does it become acceptable, but they often
perceive what they paid for as being the best thing ever. People perceive Netflix as having
improved the quality of television. I don't know how important Netflix is in Germany, actually, is that
a good example here? Yeah. Okay. So, the thing is, if we believe that this technology is
improving our abilities, then the amount that people are paid for data should always be ahead of
how much they have to pay for the services made of their data. In other words, if you are working
in a car factory, you should make enough money to be ahead of the game at the end of the month,
you shouldn't become poorer and poorer, you should do better and better because you're helping
provide cars. In the same way, the price paid for data has to be high enough that it's becoming a
new industry, that people take pride and that they find sustenance and so the degree to which
you're paid has to outpace the degree to which you pay, ultimately, on average. And you might
ask, can that economics work out? We're so used to the idea that everything on the Internet
should be free and the companies make billions upon billions of dollars, but otherwise, nothing's
worth anything. We're so used to this bizarre prejudice that we can't even imagine that a different
economics could also work out but in fact, it can and it's one of the reasons why I've started
collaborating with economists. This could be a completely sensible, sustainable economy that I
think would result in a better world. There are many details that could vary, do you pay a
subscription or do you pay little micro payments with use, one thing that I feel would be a crucial
element is people who provide data have to be able to bargain collectively. It can't be each person
against each other person. The big tech companies would like it to be each person against each
other person, so the prices are driven down to nothing. But in fact, both in the real world and
online, people have to bargain collectively in order to be paid and so there has to be some new
concept of a data union or something. Actually, this leads to many thoughts, but I'll just share one
of them. Right now, we frequently petition the big tech companies like Facebook or Google to
become the government globally, the arbiter for speech and indeed, even for action. We'll say, you
must, you are responsible for stopping the hate speech, you must stop the harassment, you
should stop the fake news and all of these things, right. But the problem is, every time we get them
to do that, we also cement their power. We're declaring them to be the ultimate controllers of
culture, step by step by step and I don't think that that ends in a good place. I think that some sort
of new collective bargaining entity that would assure that people are paid well for their data could
also become a new element of civil society. It could become like a medieval guild where you would
say, if you hire somebody who's good at stone cutting or something, who's part of the guild,
you're also getting quality, they enforce their own quality, and these entities could also become the
aspects of society that become entrusted with providing truth and quality and decency in the
future. This is a more complex idea, and I'm only presenting the barest picture of it, but there has
to be some way that there's a center of influence online other than the central hub that is able to
create quality, so everything doesn't turn to garbage, because, of course, if there's only one
source, it does turn to garbage and so this idea that the data union could be the same thing as the
source of quality is called a MID, a mediator of individual data, and those who are interested can find an article I
recently wrote with a colleague in the Harvard Business Review that describes them. So once you
start going down this path of reimagining technology as if people were the purpose, you end up
inventing all of these ideas but in a way, it's a very easy and natural invention and I believe that
even though some of these ideas might sound a little radical or bizarre, in fact, they're not. They're
very gentle and more similar to things that have worked for humanity in the past. Ulrich Kelber:
Would that also be a solution model for the areas that we tend to call platform capitalism today,
which are now very much involved in this race for artificial intelligence solutions, where you also
have the feeling that this is also a kind of theft, so the Ubers which then lower the standards for the
vehicles used, which actually force people into an auction for their service, or the Amazon
marketplace, which leaves small retailers the task of introducing a product to the market and, if they
are successful, takes it away and, with all its marketing power, then takes over the sale.
Could we also support these freelancers, the small companies with such a model and thus also
ensure stabilization and economic progress for everyone? Jaron Lanier: Yeah. So this is
interesting. Now, of course, the complaint of a sort of a Gilded Age with excessive income
inequality and excessive control of vital resources is not new. This is something that has been a
problem for a long time and identified as one from long before there were computers. We could
mention Karl Marx as someone who is eloquent on this topic. Very, very good critic, terrible
inventor, but good critic. And so part of this problem is old. And then part of it is new, and the part
of it that is old is familiar, so I won't go over it. The part of it that is new is the special benefits that
someone gets by being close to one of the biggest computers. So as I was mentioning before, with
the example of insurance in the United States, if you are close to one of the big computers, you do
have an information advantage, and information is power. Now, interestingly, Silicon Valley didn't
realize this at first. The first people to realize it were investors on Wall Street who used it for
automated trading, causing the first computer driven flash crash in the late 80s, very early, and the
people using algorithms for automated trading for a little while made these easy fortunes because
they could outguess everybody, and then everybody got them and they balanced out and now it
doesn't work anymore. It's just, I think it did do damage to the market and that's another very
interesting story to tell, but I think some of the investment groups that had initially gotten an
advantage from using bigger computers, were motivated to become more traditional monopolists
and create kind of cartels among themselves to create the illusion that they were still benefiting,
even though it was no longer about the computer once they had canceled each other out, so it's a
bit of a complicated history. The next party to figure this out was a company called Walmart and
Walmart used a big computer to correlate information about their supply chain so that they could
outnegotiate everybody who is a supplier and concentrate capital for themselves more so than
previous hubs and that worked for a while. Amazon came along and said, Wait, we're not just
going to do this to the supply, we're going to do this to the demand, too, we're going to do it to
the people as well as to the suppliers, we're going to do the whole loop and so then they
overwhelmed Walmart and one of the crucial things for the future is to undo certain kinds of
unsustainable and unsupportable advantages that people gain from big computers if they own
those big computers. And having to pay for the data would seem to be an obvious way to do it. If
Amazon has to pay for the data that gives it an advantage, there'll be an equilibrium reached
where the benefits of the computer will start to be spread widely instead of concentrated. Ulrich
Ulrich Kelber: I don't know about you, but for me it's great fun to talk about the structural problems of technology that need to be solved and steered with someone who is a techie himself and doesn't start from a fundamental skepticism towards every kind of change since color television, but really talks about what's going well, what's going wrong, and how we improve what's going wrong. Hence the question: where did this going wrong actually start? Is it inherent in the technology, i.e. with the techies, when they thought they had to introduce certain forms of code, or was it the CFOs and investors who first used the technology for certain market models? Jaron
Lanier: Yes, well, the sociology of how we got to where we are is interesting and I've tried to
capture a little bit of it in the new book, Dawn of the New Everything. Part of the problem
arose because at the dawn of networking, which was in the 70s and 80s, before the Internet, a lot
of the young men with technical educations were taught that the most important feature of
civilization was an ability to hide from the government. In the United States, this was driven largely
by the Vietnam War, where there had been a very widespread experience of being drafted into the army and then being sent to a war that was not really supported and was horribly brutal. So a lot of young men of that era still had embedded in them, very deeply, this idea that the number one thing, the number one by far, the thing that one has to be able to achieve, is to hide from the government, and that the government is always a bad thing. Often the government is a bad thing, but that's not the only thing, and so their view was out of balance. As for the other side of America, the side that was more, how shall we say, conservative or pro-war, they had a different problem, which is that they were getting arrested all the time for driving too fast. President Jimmy Carter had imposed a speed limit in the 70s to conserve gas. It's a little bit like the Yellow Vest
protests in Paris, I suppose, and all these people were saying, well, we're not going to drive slow, we're Americans, we're going to drive fast; then they would get arrested, and so they adopted a technology called CB radio, where everybody would take on a fake persona, just like on the Internet today, and they would warn each other where the police were. And so once again,
the idea of technology to hide from the government was very central, so the whole thing was born
with this idea that you have to hide, hide, hide, hide and the government is bad, bad, bad and it's
all about individuals. It was this very cowboy Wild West idea, and then that was moving forward
and then there was another wave of, I would say, hippie socialism, that everything should be free,
on the Internet we will never pay for anything, everything will be this giant sharing, it will be like a
commune. But then there was this other thing, which is, we love the entrepreneurs, we worship entrepreneurs like Steve Jobs, we worship these people; they had that Nietzschean elevation, they were these supermen who could dent the universe and had some kind of creative will that could change things. And so you have this weird thing: you're hiding
from the government, you want to be a hippie Communist and you want to support entrepreneurs,
how do you combine these things together? And so we ended up making a lot of terrible decisions
because there were very few decisions available that could actually address all of these different
ideas. So for instance, the Internet as it was born doesn't represent people. You don't have accounts on it; that had to come from private companies, because the idea was that, oh my God, if you tell the Internet who you are, then the government can find you. But it's absurd, because we then just created these gigantic companies, like Facebook, to cover that hole, to have an account. It truly was a self-defeating, ridiculous idea, but this feeling was very strong.
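The architectural point can be seen directly in code. The following sketch is simplified and hypothetical: a tiny server handles one HTTP request and prints the only "identity" the network itself supplies, a machine address, versus the account-style identity that exists only because some application layered a cookie on top.

# The network layer knows machines, not people. This simplified,
# hypothetical sketch serves a single HTTP request and prints the only
# built-in "identity" a server sees: an IP address. A person appears
# only when an application adds an account on top, e.g. via a cookie.

import socket

def handle(conn, addr):
    request = conn.recv(4096).decode("latin-1")
    # All the Internet itself tells us about "who" this is:
    print("peer address:", addr)   # a machine endpoint, not a person
    # Any notion of a person is an application add-on, e.g. a session
    # cookie that some private platform issued after its own login step:
    for line in request.split("\r\n"):
        if line.lower().startswith("cookie:"):
            print("application-level identity claim:", line)
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")
    conn.close()

with socket.socket() as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 8080))
    srv.listen()
    conn, addr = srv.accept()   # serve one request, then exit
    handle(conn, addr)

Running it and sending a request such as curl -b "session=abc123" http://127.0.0.1:8080 shows the contrast: the peer address comes from the network, while the session cookie exists only because an application invented it.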
And then there was this idea that everything must be free, but we still want entrepreneurs. How can you achieve that? Instead of the two people involved paying for whatever they do, they'll experience things being free, they'll be told to share, they'll be told to be open, they'll be told they're in a commune. But actually the only way it's financed is that there's a third party paying, out of a belief that people can be manipulated, which is also just the stupidest thing ever, but that's what we were left with. So we started with
these absolutely unassailable ideologies and passions and by being inflexible, we forced ourselves
into a little tiny space of possible solutions, which is exactly the world we have. Ulrich Kelber: Last question. I found this very convincing, also the presentation, on why one really shouldn't participate in these manipulation machines until they have changed their business model, and on what needs to be regulated. And I may say this here: I had your book, Ten Arguments for Deleting Your Social Media Accounts Right Now, with me as reading on a business trip to Ethiopia. I will delete my Facebook account in December anyway, for other reasons, but I felt encouraged by the book. On my last day I met a young student in Lalibela in Ethiopia who told me about the atmosphere of new beginnings under the new prime minister and then, without my raising it, said: "We need Facebook and Twitter like the air we breathe", and just the day before I had read how quickly the revolutions that started on these platforms were eaten up by the systematic machine behind them that is being abused. So what do I say to such young people? What is a way to organize and exchange with others without falling into this trap? Jaron Lanier:
Africa has the world's vastest reserve of young people, and young people are precious. And the people in, was it Ethiopia? Ethiopia. For them to say we need Facebook is not the right answer.
They should be writing their own Facebook. I mean, they have to create their own media. You
can't empower yourself on somebody else's power platform, ultimately. This is a very upsetting
thing to me. So when the Arab Spring happened, in Silicon Valley there was a lot of self-congratulation. People were saying, it's the Twitter revolution, it's the Facebook revolution, and I was antagonistic towards this, and it was hard to be, because it was like this religious, rapturous moment: see, we're saving the world! And I was like, well, Twitter is not going to create jobs for these kids in Tahrir Square in Cairo. At the end of this, what is there for them? But there's a deeper
problem, and this is a process. The algorithms that are driving engagement, that get more and more data and all of this, what they're looking for is something that they can feed a person that will get that person to become more and more engaged. And typically, it's not the only way, but the easiest way that can happen is if the person is fed something that makes them angry or scared, because those are the emotions that rise the fastest and easiest and then stick around. Let's say that you have something like the Arab Spring or, in the United States, the Black Lives Matter movement. Have you heard of that here? Yeah. And so the people who are participating are often young, they're often extremely good-natured, and they're putting all this stuff out there to communicate with each other, to coordinate. The algorithm doesn't care about them; the algorithm is only looking at how it can get an emotional reaction, to get engagement from somebody, from anybody. So naturally, all of this data becomes fuel for exciting and engaging the people who hate them the most. And then the algorithm gets more engagement and more response from the people who are upset, and then it introduces those people to each other and drives them more and more and more.
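The mechanism just described can be sketched in a few lines of code. The sketch below is illustrative only, with invented weights, scores, and posts; it shows how a ranker that optimizes nothing except predicted engagement surfaces the angriest reactions to a movement's own material.

# Minimal sketch of an engagement-only feed ranker (all data and weights
# are invented for illustration). The ranker has no notion of who
# benefits; it simply orders items by predicted engagement, and if anger
# and fear are the strongest predictors, enraged reactions float to the top.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    anger: float    # predicted anger response, 0..1
    fear: float     # predicted fear response, 0..1
    warmth: float   # predicted positive response, 0..1

def predicted_engagement(p):
    # Assumed weights: negative emotions rise fastest and stick around,
    # so they dominate the engagement prediction.
    return 3.0 * p.anger + 2.5 * p.fear + 1.0 * p.warmth

feed = [
    Post("Organizers coordinate a peaceful march", anger=0.1, fear=0.1, warmth=0.8),
    Post("Furious counter-rally against the marchers", anger=0.9, fear=0.6, warmth=0.0),
    Post("Explainer: what the movement actually asks for", anger=0.1, fear=0.2, warmth=0.6),
]

for post in sorted(feed, key=predicted_engagement, reverse=True):
    print(f"{predicted_engagement(post):.2f}  {post.text}")
# The angriest reaction ranks first, even though the movement's own
# posts supplied the data that made the reaction engaging.

Note that the ranker never references the movement or its opponents; the amplification of the backlash falls out entirely of the assumed premium that anger and fear carry in the engagement prediction.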
So you end up with this: the tool was powerful, and what the Arab Spring kids got out of Twitter and Facebook was authentic, but what ISIS got was even more intense. Black Lives Matter was authentic and it relied on social media, but the revival of the Ku Klux Klan and neo-Nazis in the United States was more intense. So the thing is that using this system for social change is inherently absurd. It will always have a negative effect later; the initial effect will feel very good and might be quite authentic. So, what I would counsel bright kids in Ethiopia is
somehow, against all odds, write your own damn software first, figure it out; don't rely on somebody from another country to provide you with your way of talking to each other. Africa has to
develop its own technical culture to a much greater degree than it has, it has to have more
technical universities, it has to have more scientific training if this huge generation of young people
is going to find success and happiness, it's absolutely urgent. And so don't treat
