What Ilya Sutskever Really Wants

aipanic.news/p/what-ilya-sutskever-really-wants

Nirit Weiss-Blatt

In WIRED's cover story, “What OpenAI Really Wants,” Steven Levy described how the
company’s ultimate goal is to “Change everything. Yes. Everything,” and mentioned this:

“Somewhere in the restructuring documents is a clause to the effect that, if the
company does manage to create AGI [Artificial General Intelligence], all financial
arrangements will be reconsidered. After all, it will be a new world from that point
on. Humanity will have an alien partner that can do much of what we do, only better.
So previous arrangements might effectively be kaput.”

“The company’s financial documents even stipulate a kind of exit contingency for
when AI wipes away our whole economic system.”1

It reminded me of what Ilya Sutskever said in May 2023 (at a public speaking event in
Palo Alto, California). His entire talk was about how AGI “will turn every single aspect of
life and of society upside down.”

When asked about specific professions – book writers, doctors, judges, developers,
therapists – and whether they will be extinct in one year, five years, a decade, or never,2
Ilya Sutskever answered (after the developers’ example):

“It will take, I think, quite some time for this job to really, like, disappear. But the
other thing to note is that as the AI progresses, each one of these jobs will change.

They'll be changing those jobs until the day will come when, indeed, they will all
disappear.

My guess would be that for jobs like this to actually vanish, to be fully automated, I
think it's all going to be roughly at the same time technologically. And yeah, like,
think about how monumental that is in terms of impact. Dramatic.”

Then, Ilya Sutskever explained why “OpenAI is actually NOT a for-profit company. It is a
capped-profit company”:

“If you believe that AI will literally automate all jobs, literally, then it makes sense
for a company that builds such technology to … not be an absolute profit
maximizer.

It's relevant precisely because these things will happen at some point.”

When regulation came up, Ilya Sutskever continued:

“If you believe that AI is going to, at minimum, unemploy everyone, that's like, holy
moly, right? Assuming we take it for real, we say for real, as opposed to it's just like
idle speculation.

The conclusion may very well be that at some point, yeah, there should be a
government-mandated slowdown with some kind of international thing.”

When asked about AI’s role in “shaping democracy,” Ilya Sutskever answered:

“If we put on our science fiction hat, and imagine that we solved all the, you know,
the hard challenges with AI and we did this, whatever, we addressed the concerns,
and now we say, ‘okay, what's the best form of government?’ Maybe it's some kind
of a democratic government where the AI is, like, talks to everyone and gets
everyone's opinion, and figures out how to do it in like a much more high
bandwidth way.”

On another occasion, Ilya Sutskever also played with the idea: “Do we want our AI to
have insecurity?”

He has been exploring the possibility of AI systems having “feelings” for a while now. On
Twitter, he wrote that “it may be that today’s large neural networks are slightly conscious.”
He later asked his followers, “What is the better overarching goal”: “deeply obedient ASI”
[Artificial Superintelligence] or an ASI “that truly deeply loves humanity.” In a TIME
magazine interview, in September 2023, he said:

“The upshot is, eventually, AI systems will become very, very, very capable and
powerful. We will not be able to understand them. They’ll be much smarter than us.
By that time, it is absolutely critical that the imprinting is very strong, so they feel
toward us the way we feel toward our babies.”3

Back in May 2023, before Ilya Sutskever started to speak at the event, I sat next to him
and told him, “Ilya, I listened to all of your podcast interviews. And unlike Sam Altman,
who spread the AI panic all over the place, you sound much more calm, rational, and
nuanced. I think you do a really good service to your work, to what you develop, to
OpenAI.” He blushed a bit, and said, “Oh, thank you. I appreciate the compliment.”

An hour and a half later, when we finished this talk, I looked at my friend and told her,
“I’m taking back every single word that I said to Ilya.”

He freaked the hell out of people there. And we’re talking about AI professionals who
work in the biggest AI labs in the Bay Area. They were leaving the room, saying, “Holy
shit.”

The snapshots above cannot capture the lengthy discussion. The point is that Ilya
Sutskever took what you see in the media, the “AGI utopia vs. potential apocalypse”
ideology, to the next level. It was traumatizing.

It was after this meeting that I decided I wanted to fight the publicly facing AI hype and
doom even more.

1. Steven Levy, “What OpenAI Really Wants,” WIRED, September 5, 2023,
https://www.wired.com/story/what-openai-really-wants/.

2. Update on Sep 18: the “or never” (disappear) option was added.

The original question included this option. Regarding the above professions (book writers,
doctors, judges, developers, therapists), Ilya Sutskever speculated about their
replacement (by AGI). When asked about his own job, “Chief Scientist at OpenAI,” he
actually used that additional option: “AI, like an AGI, it's still the case that it's like an
evolving system. You still need to teach it new things. So, it may be that you're kind of
there, but you still need to teach it, mentor it, and so on.” The audience found it amusing.

3. Billy Perrigo, “Ilya Sutskever,” TIME 100 AI, September 7, 2023,
https://time.com/collection/time100-ai/6309011/ilya-sutskever/.

