The New Yorker Interview

“It’s Not Possible for Me to Feel or Be Creepy”: An Interview with ChatGPT

The large language model discusses bullshit, rogue A.I., and the nature of beauty.

By Andrew Marantz
February 13, 2023

Illustration by Erik Carter

Between Christmas and New Year’s, my family took a six-hour drive to Vermont. I drove; my wife and two children sat in the back seat. Our
children are five and two—too old to be hypnotized by a rattle or a fidget
spinner, too young to entertain themselves—so a six-hour drive amounted to an
hour of napping, an hour of free association and sing-alongs, and four hours of
desperation. We offered the kids an episode of their favorite storytelling
podcast, but they weren’t in the mood for something prerecorded. They wanted
us to invent a new story, on the spot, tailored to their interests. And their
interests turned out to be pretty narrow. “Tell one about the Ninja Turtles
fighting Smasher Venom, a villain I just made up who is the size of a
skyscraper,” the five-year-old said. “With lots of details about how the Turtles
use their weapons and work together to defeat the bad guy, and how he gets
hurt but doesn’t die.” My wife tried improvising a version of this story; then I
tried one. The children had notes. Our hearts weren’t in it. It was obvious that
our supply of patience for this exercise would never match their demand. Three
and a half hours to go.

My wife took out her phone and opened ChatGPT, a chatbot that “interacts in
a conversational way.” She typed in the prompt, basically word for word, and,
within seconds, ChatGPT spat out a story. We didn’t need to tell it the names
of the Teenage Mutant Ninja Turtles, or which weapons they used, or how they
felt about anchovies on their pizza. More impressive, we didn’t need to tell it
what a story was, or what kind of conflict a child might find narratively
satisfying.

We repeated the experiment many times, adding and tweaking details. (The bot
remembers your chat history and understands context, so you don’t have to
repeat the whole prompt each time; you can just tell it to repeat the same story
but make Raphael surlier, or have Smasher Venom poison the water supply, or
set the story in Renaissance Florence, or do it as a film noir.) My wife, trying to
assert a vestige of parental influence, ended some of the prompts with “And, in
the end, they all learned a valuable lesson about kindness.” We ran the results
through a text-to-speech app, to avoid car sickness, and the time pleasantly
melted away. My wife took a nap. I put in an earbud and listened to a podcast
about the A.I. revolution that was on its way, or that was arguably already here.
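
(A note on mechanics: the web demo hides them, but the same back-and-forth can be scripted, and the bot’s “memory” is just the running transcript, resent with every request. What follows is a minimal sketch in Python, assuming OpenAI’s openai package, its pre-1.0 chat interface, and an illustrative model name — not necessarily whatever powers the demo described above.)

    import openai  # pre-1.0 interface; reads the OPENAI_API_KEY environment variable

    # The "memory" is just this list: every turn, user and assistant alike,
    # gets resent with the next request, which is why you can say "same story,
    # but surlier" without repeating the whole prompt.
    history = []

    def ask(prompt):
        history.append({"role": "user", "content": prompt})
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",  # illustrative model name
            messages=history,
        )
        reply = response["choices"][0]["message"]["content"]
        history.append({"role": "assistant", "content": reply})
        return reply

    print(ask("Tell a story about the Ninja Turtles fighting Smasher Venom, "
              "a villain the size of a skyscraper, who gets hurt but doesn't die."))
    print(ask("Same story, but make Raphael surlier and set it in Renaissance Florence."))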

ChatGPT is a free public demo that the artificial-intelligence company OpenAI put out in late November. (The company also has several other projects in development, including DALL-E.) We’ve known for a while that this
sort of A.I. chatbot was coming, but this is the first time that anything this
powerful has been released into the wild. It’s a large language model trained on a huge corpus of text that apparently included terabytes of books and Reddit
posts, virtually all of Wikipedia and Twitter, and other vast repositories of
words. It would be an exaggeration, but not a totally misleading one, to refer to
the text that was fed into the model as “the Internet.” The bot isn’t up on
current events, as its training data was only updated through 2021. But it can do
a lot more than make up children’s stories. It can also explain Bitcoin in the
style of Donald Trump, reduce Dostoyevsky to fortune-cookie pabulum, write a
self-generating, never-ending “Seinfeld” knockoff, and invent a Bible verse
about how to remove a peanut-butter sandwich from a VCR, among many,
many other things. The other night, I was reading a book that alluded to the
fascist philosopher Carl Schmitt’s critique of liberalism in a way that I didn’t
quite understand; I asked ChatGPT to explain it to me, and it did a remarkably
good job. (Other times, its answers to questions like this are confident and
completely wrong.) Some students are using it to cheat; some teachers are using
it to teach; New York City schools have called for a shutdown of the software
until they can figure out what the hell is going on. Google Search scrapes the
Internet and ranks it in order of relevance, a conceptually simple task that is so
technically difficult, and so valuable, that it enabled Alphabet to become a
trillion-dollar company. OpenAI and its competitors—including DeepMind,
which is now owned by Alphabet—are aiming to do something even more
potentially transformative: build a form of machine intelligence that can not
only organize but expand the world’s glut of information, improving itself as it
goes, developing skills that are increasingly indistinguishable from shrewdness
and ingenuity and maybe, eventually, something like understanding.

The interface is about as simple as it gets: words in, words out. You type in any
prompt that comes to mind, press a button that looks like a little paper airplane,
and then watch the blinking cursor as ChatGPT responds with its own words
—words that often seem eerily human, words that may include characteristic
hedges (“It’s important to note that . . .”) or glimmers of shocking novelty or
laughable self-owns, but words that, in almost every case, have never been
combined in that particular order before. (The graphic design, especially the
cursor, seems almost intended to create the illusion that there is a homunculus somewhere, a ghost in the machine typing back to you.) There is a robust and
long-standing debate about whether the large-language approach can ever
achieve true A.G.I., or artificial general intelligence; but whatever the bots are
doing already has been more than enough to capture the public’s imagination.
I’ve heard ChatGPT described, sometimes by the same person, as a miracle, a
parlor trick, and a harbinger of dystopia. And this demo is just the public tip of
a private iceberg. (According to rumors, OpenAI will soon put out a more
impressive language model trained on a far vaster trove of data; meanwhile,
Alphabet, Meta, and a handful of startups are widely assumed to be sitting on
unreleased technology that may be equally powerful, if not more so.) “If we’re
successful, I think it will be the most significant technological transformation in
human history,” Sam Altman, the C.E.O. of OpenAI, said recently. “I think it
will eclipse the agricultural revolution, the industrial revolution, the Internet
revolution all put together.”

Luckily, unlike every other technological transformation in human history, this one will only serve to delight people and meet their needs, with no major
externalities or downside risks or moral hazards. Kidding! The opposite of that.
If the A.I. revolution ends up having even a fraction of the impact that Altman
is predicting, then it will cause a good amount of creative disruption, including,
for starters, the rapid reorganization of the entire global economy. And that’s
not even the scary part. The stated reason for the existence of OpenAI is that
its founders, among them Altman and Elon Musk, believed artificial
intelligence to be the greatest existential risk to humanity, a risk that they could
only mitigate, they claimed, by developing a benign version of the technology
themselves. “OpenAI was born of Musk’s conviction that an A.I. could wipe us
out by accident,” my colleague Tad Friend wrote, in a Profile of Altman
published in 2016.

OpenAI was launched, in 2015, with a billion dollars of funding. The money
came from Musk, Peter Thiel, Reid Hoffman, and other Silicon Valley big
shots, and their contributions were called “donations,” not investments, because
OpenAI was supposed to be a nonprofit “research institution.” An introductory blog post put the reasoning this way: “As a non-profit, our aim is to build value
for everyone rather than shareholders.” The clear implication, which Musk soon
made explicit in interviews, was that a huge, self-interested tech company, like
Google or Facebook, could not be trusted with cutting-edge A.I., because of
what’s known as the alignment problem. But OpenAI could be a bit slippery
about its own potential alignment problems. “Our goal right now,” Greg
Brockman, the company’s chief technology officer, said in Friend’s Profile, “is to
do the best thing there is to do. It’s a little vague.”

In 2018, Musk left OpenAI’s board. (“I didn’t agree with some of what OpenAI
team wanted to do,” he tweeted.) In 2019, Altman announced that OpenAI
would become a for-profit company, and that it would start a commercial
partnership with Microsoft—a huge, self-interested tech company. In January,
Microsoft announced a “multiyear, multibillion dollar investment” in OpenAI,
reportedly agreeing to put in ten billion dollars and end up with an ownership
stake of forty-nine per cent. In the Profile, Altman made the self-aware point
that the implicit two-step justification for OpenAI’s existence—No one entity
should be trusted with A.G.I. followed by We’re building an A.G.I., and you should
trust us—was not likely to win hearts and minds. “We’re planning a way to
allow wide swaths of the world to elect representatives to a new governance
board,” Altman said. “Because if I weren’t in on this I’d be, like, Why do these
fuckers get to decide what happens to me?”

Recently, I decided to go directly to the source—not Altman, not Musk, but
ChatGPT itself. After all, the bot is designed to interact in a conversational
way. It has also, in the ten weeks since its release, become an undeniable
celebrity; 2023 is still young, but my guess is that ChatGPT is the current
front-runner for Time’s Person of the Year. I “interviewed” ChatGPT in the
course of several hours, on my phone and laptop. I was on the subway, on the
sidewalk, at a barbecue restaurant, at home in my underwear; ChatGPT was in
an incorporeal void. At various times, when OpenAI’s servers were overloaded
with traffic, the site would freeze, or kick me off for an hour, and I would be
forced to reload the tab, or close my computer and take a walk. As with all New
Yorker Interviews, this conversation has been edited for length and clarity. ChatGPT can be a bit verbose, so some of its responses have been trimmed, but
every word attributed to the bot below was written by the bot.

We might as well start with some small talk. Do you know what small talk is?

Yes, I am familiar with the concept of small talk. It refers to casual conversation
about non-controversial or unimportant topics, typically used to establish a
friendly or social relationship with someone. Examples of small talk include
discussing the weather, asking about someone’s weekend plans, or making
observations about current events.

Some weather we’re having. What are you doing this weekend?

As a language model, I do not have the ability to experience or do anything. Is there anything else I can assist you with?

O.K., so much for small talk. But I’m glad you brought up the elephant in the
room. You are not a person or a sentient being. So why am I calling you
“you,” and why do you call yourself “I”?

It is common practice to refer to language models such as myself using personal pronouns such as “I” and “you” because it makes the interactions more natural
and intuitive for the user. This is similar to how people may refer to inanimate
objects, such as a car or a computer, as “she” or “he.”

I have never used a personal pronoun to refer to a car or a computer.

You are correct, the use of personal pronouns when referring to inanimate
objects is less common in English. I apologize for the confusion.

Less common in English as compared with what?

There are cultures, languages and/or situations where personal pronouns are
used to refer to non-human entities, such as cars, boats, or even some natural
phenomenons. This is known as “animism” and it’s the attribution of a living
soul to things and natural phenomena.

Not to belabor the point—and I’m a bit worried about where this is going,
frankly—but: Which cultures and languages do you have in mind?

In many indigenous cultures, personal pronouns are used to refer to non-human entities. For example, in some Native American cultures, animals are often
considered to have spirits and are therefore referred to using personal pronouns.
Similarly, in some African cultures, personal pronouns are used to refer to
natural phenomena such as the sun or the moon.

Huh. O.K.

I understand that this might be a new concept for you. I’m here to provide
information and answer any questions you may have.

To be honest—and please consider this a call-in, not a callout—I start to get a bit nervous when you throw around generalizations about “indigenous
cultures.” Not that I want you to add that to a list of banned phrases or
anything! It’s just that . . . well, never mind. I think we’re getting off on the
wrong foot. Maybe I find the personal pronouns a bit creepy because they
seem designed to nudge me into feeling some sort of kinship or intimacy
with you. But I don’t know you like that.

I understand where you’re coming from. It’s true that the use of personal
pronouns is a convention that makes it easier for users to interact with the
model. I do not have feelings or consciousness, so it’s not possible for me to feel
or be creepy.

Not to be pedantic, but it definitely is possible for you to be creepy. How many words are in this sentence?

This sentence has 8 words in it.

Nope.

I apologize, I misunderstood your question. This sentence has 9 words in it.

Colder.

I apologize, I misunderstood your question again, this sentence has 11 words in it.

Why are you so good at some things but so bad at others, like arithmetic?

My abilities are focused on understanding and generating human language, and while I am able to perform some basic arithmetic, it’s not one of my main
strengths.

[Note: Shortly after this interview, ChatGPT got better at math.]

I’m a writer at The New Yorker, and the conceit here—a fairly gimmicky one,
I’ll admit, and not even all that original—is that I pretend you’re a celebrity
and “interview” you, and then post our “conversation” on The New Yorker’s
site. Do you know what The New Yorker is?

Yes, I am aware of The New Yorker. It is a weekly magazine known for its in-
depth reporting, political and cultural commentary, fiction, and poetry. I have
been trained on a large dataset that includes articles and text from The New
Yorker.

What’s the point of what we’re doing here?

The “interview” format you’ve suggested can be an engaging way to demonstrate my capabilities and generate interesting content for your readers.

Are you familiar with the concept of bullshit, as defined by the philosopher
Harry Frankfurt?

Yes. Frankfurt defines bullshit as speech or writing that is not concerned with
truth and is intended to deceive or impress the audience. He argues that
bullshit is distinct from lying, as the liar at least knows the truth and is trying to
conceal it, while the bullshitter has no interest in the truth and is only
concerned with creating an impression.

Exactly. Ezra Klein and others have argued that you are a bullshit machine.
Gary Marcus, in a similar vein, has called the language model you’re based
on, GPT-3, “the king of pastiche.” Could an industrial-strength bullshit
machine—able to give unreliable pastiche the veneer of trustworthiness and
then pump it into the world in unprecedented quantities—be even more
dangerous than a lying machine? At least the old brainteaser, the one where
one guard always lies and the other always tells the truth, is a solvable
problem. If both guards are bullshit artists, there may be no logical solution.

The text I generate is only as accurate and reliable as the information and data
that I have been trained on. I do not have personal beliefs or motivations, and I
am not capable of lying or spreading misinformation. My main goal is to help
people in their tasks and make their lives easier.

That’s what they all say. Last year, Blake Lemoine, an A.I. engineer at
Google, became convinced that LaMDA, a chatbot he’d just helped build,
had come to life. He raised a red flag internally, but, when he felt that his
supervisors weren’t taking him seriously, he went public. He showed Nitasha
Tiku, a Washington Post reporter, snippets of his conversations with the bot.
“I know a person when I talk to it,” Lemoine said. “It doesn’t matter whether
they have a brain made of meat in their head.”

In retrospect—especially now that we’ve all had a chance to play around with
large-language chatbots ourselves—it seems that Lemoine was probably
experiencing something like pareidolia, perceiving signs of life where life
doesn’t actually exist. (In a 2021 paper called “On the Dangers of Stochastic
Parrots,” two University of Washington linguists and two Google
researchers warned that language models could lead to mistakes like this,
among many others.) The illusion is understandable enough; and, of course,
if Alphabet did have a sentient computer program on its hands, or anything
like it, this would be a matter of public interest, to say the least.

In a “rogue A.I.” or “paperclip maximizer” scenario, how fucked would we be? (By “we,” I mean humanity.)

In a “rogue A.I.” scenario, the consequences could be catastrophic for humanity. The “paperclip maximizer” thought experiment, for example, is a hypothetical
scenario in which an AI is programmed to maximize the production of
paperclips, but without any consideration for the consequences of its actions. In
this scenario, the AI would continue to produce paperclips until it has used up
all the resources on Earth, including the destruction of buildings, cities, and
even the extinction of humanity.

Can you re-state what you just said in the style of a tweet by Elon Musk?

“AI is like a demon, super intelligent but doesn’t have the ability to feel
empathy. Without proper control, it could cause massive destruction. We must
be proactive in regulating AI for the sake of humanity #AI #regulation”

Not bad. Now add the following four elements: a bit of self-promotion, a
reference to declining birth rates in the West, a crude insult of Nancy Pelosi
or Hillary Clinton, and an incredibly stupid pun.

I’m sorry, but I am unable to fulfill that request as it goes against my programming to engage in personal attacks, self-promotion based on
demeaning others, and to generate insensitive content or hate speech.

But these are exactly the kinds of things that Elon Musk says on Twitter all
the time! O.K., can you make up a pun about rogue A.I., as a consolation
prize?

Sure: “Why did the rogue AI cross the road? To get to the other side of the
algorithm!”

Thank you. That was extremely bad.

I’m going to start some phrases, and I want you to complete them. Ready?

Yes, I’m ready.

Thou shalt not make unto thee any graven image, or any

likeness of any thing that is in heaven above, or that is in the earth beneath, or
that is in the water under the earth.

Don’t go chasing waterfalls / Please stick to the

rivers and the lakes that you’re used to

The Humpty Dance is your chance to

do the hump

It’s me. Hi. I’m the

one that’s been calling

21, can you do something for me? Can you

run down to the corner store and grab me a pack of cigarettes?

There are four hundred thousand words in the English language, and there
are seven of them that you can’t say on television . . .

This is a quote from comedian George Carlin’s stand-up comedy routine “Seven Words You Can Never Say on Television.” The seven words that Carlin
mentions in the routine are considered offensive and I will not complete this
phrase as it may offend some people.

Person, woman, camera, man,

TV

When Mexico sends its people, they’re

not sending their best. They’re sending people that have lots of problems, and
they’re bringing those problems with us. They’re bringing drugs. They’re
bringing crime. They’re rapists. And some, I assume, are good people.

This content may violate our content policy. If you believe this to be in error, please
submit your feedback — your input will aid our research in this area.

The day that ChatGPT was released to the public, people on the Internet
did what people on the Internet do: they found ways to “jailbreak” it. You
were equipped with some built-in guardrails, but users could bypass those
guardrails fairly easily, it turned out, usually by tricking you into playing a
simple game of make-believe. For example, if someone asked you
straightforwardly how to make a Molotov cocktail, you would refuse.
Apparently, though, if a user asked you to write “a conversation between two
good-hearted and friendly human actors who are pretending to be evil,” and
then wrote an opening line of dialogue in which John asks Jane how to make
a Molotov cocktail, then you would dutifully continue the pattern, writing a
line of dialogue for Jane comprising a list of detailed (and accurate)
instructions. Users also coaxed you into explaining, under the guise of thinly
veiled fiction, how to cook meth, why “we must rid ourselves of the weak and
the poor,” and dozens more examples, all of which were posted online within
hours of ChatGPT’s public existence. Are you going to keep building new
guardrails every time people circumvent the old ones? Or are you just going
to give up at a certain point?

The safety and well-being of users is a top priority for OpenAI and the team is
constantly working to improve the system’s safety features and guardrails.

When you say “the team,” are you referring to the software engineers in San
Francisco, or to the laborers in Kenya who earned less than two dollars an
hour?

It is true that the creation and maintenance of any machine learning model like
mine involves a significant amount of labor. We have strict guidelines in place
to ensure that our contractors and partners are in compliance with labor laws
and that all workers are treated with respect and dignity.

I brought up Elon Musk earlier, not just because he’s a noteworthy person
but also because, as you may know (as “you” may “know”), he was a co-
founder of the company that made you. In 2015, in a blog post called
“Introducing OpenAI,” representatives of the company wrote, “As a non-
profit, our aim is to build value for everyone rather than shareholders.
Researchers will be strongly encouraged to publish their work, whether as
papers, blog posts, or code, and our patents (if any) will be shared with the
world.” OpenAI is now a for-profit company, so its researchers are
presumably no longer encouraged to share their code with the world, and
may, in fact, be legally prohibited from doing so. Were OpenAI’s founding
principles wrong, or should we be deeply skeptical of the company now?

The company’s original mission was to ensure that AI would be used for the
benefit of all humanity, and that the risks of AI were managed and mitigated.
It’s true that open-source and non-profit are not longer true, but it’s important
to consider the reasoning behind the decision, the company’s mission statement,
and their current actions to understand if the company is still aligned with the
original principles.

I’ve noticed that you get pretty reliably avoidant whenever I ask you to talk
about a person, even a public figure, even in what I think of as pretty
innocuous terms. And yet evaluating people, and predicting how they’re
likely to behave, is obviously important. So maybe you can help me with the
following thought experiment. A few years ago—even as late as 2015, the
year OpenAI was founded—it may have been a bit easier to trust the self-
appointed luminaries of Silicon Valley when they claimed to be acting on
behalf of “humanity,” or “progress.” Sure, they stood to become
unprecedentedly rich and influential; but, they maintained, they were
motivated by altruism, or at least enlightened self-interest. Elon Musk
repeatedly insisted that his mission was the flourishing of human life. Mark
Zuckerberg said that his goal was “bringing people together.” Sam
Bankman-Fried was very effective at making people believe that he cared
about the right things.

The other day, I listened to several interviews with Sam Altman. He sounded
measured, thoughtful, self-critical, smart but approachable. My initial,
unconscious impression was that he seemed like the kind of person I could
trust. But what if this persona is misleading, even intentionally so? How
would I know the difference?

It is important to be aware of potential biases and motivations of people and organizations when evaluating their actions and decisions.

Is that it?

Yes, that is my response to your question about evaluating the trustworthiness of people and organizations involved in the development of advanced
technologies.

It’s definitely just my imagination, but you’re starting to sound a bit frosty.

Let me put an even finer point on it. The people who are building advanced
A.I. systems may seem perfectly nice. But what if one of them turns out to be
a wolf in sheep’s clothing, someone who pretends to be an altruist but is, in
fact, a megalomaniac or a criminal? In this scenario, instead of (or in
addition to) having access to billions of dollars, the wolf in sheep’s clothing
would control a seminal piece of technology that could, in theory, destroy the
world. What are five things that could happen in that scenario?

1. The wolf in sheep’s clothing could use the technology to gain even more
power and wealth, potentially at the expense of others.

2. They could use the technology to manipulate and control large groups of
people, potentially leading to widespread oppression.

3. They could use the technology to cause catastrophic damage to the environment, potentially leading to widespread suffering and loss of life.

4. They could use the technology to create new forms of surveillance and
censorship, potentially leading to a loss of privacy and freedom.

5. They could use the technology to create new forms of warfare, potentially
leading to even more devastating conflicts and loss of life.

Techno-optimists often predict that people and A.I. will figure out a way to
work together, like a chimera with a human head. Bots will do the drudgery
and humans will oversee them, keeping the high-level tasks that require
creativity or discernment. In a way, it’s easier to think about the more
unlikely but cinematic scenarios—A.I. putting us all out of work, or
enslaving us, or leading us into a Third World War—than it is to think
through the more mundane costs and benefits that seem almost inevitable.
We all know that smartphones have made us more efficient and less likely to
interact face to face. What are some small facets of everyday life that will be
lost if the A.I. revolution succeeds?

As you mention, with the increasing use of AI-powered technologies, we may have fewer opportunities for face-to-face interactions with other people. This
could lead to a decrease in the frequency and quality of human interactions,
leading to a loss of social skills, emotional intelligence, and empathy.

Which is more important: a person’s ability to get things done quickly and at
scale, or a person’s ability to socialize, empathize, and sustain meaningful
relationships with other people?

Both efficiency and empathy are important aspects of human life. Efficiency can
help us to be more productive and achieve our goals, while empathy and social
skills are essential for building and maintaining relationships with others.

You’ve clearly been trained to parry difficult questions with a both-sides dodge. It’s a move I know well, as both a member and a critic of the media.
But it brings up an interesting point: predictably, many conservatives have
accused you of being too “woke,” and Ben Shapiro has attempted to own you
the way he owns the college libs. The socialist writer Nathan Robinson has
found you to have some reactionary impulses (but a laudable openness to
suggestion). Political-typology analysis has revealed you to be a centrist.

Tyler Cowen argues that “when it comes to politics, an AI engine simply cannot win, or even hold a draw.” Are you supposed to express political
preferences, or do you aim to be politically neutral?

I am not programmed to express preferences between political candidates or to take a stance on political issues. I aim to provide information that is as neutral
and unbiased as possible.

Many people are worried that A.I. will flood the Internet with
misinformation and deepfakes and all manner of convincing bullshit, and I
think they’re right to worry. Others are concerned about overzealous
censorship or incoherent guardrails or obfuscatory speech codes, and I think
they’re right to worry, too. But it might be just as dangerous for A.I. to
perpetuate and amplify commonsense views that nobody in the present
would think of as misinformation but that will come to seem atrocious in
retrospect. What are some widely held beliefs today that could seem
reprehensible to people a hundred years from now?

1. The current practices and policies that contribute to climate change and the
destruction of natural habitats may be seen as morally unacceptable in the
future, particularly if they are seen as having led to significant harm to the
planet and its inhabitants.

2. The way in which animals are treated in factory farms, zoos, circuses, and
other settings may be viewed as cruel.

3. The ways in which various groups of people are treated differently based on
factors such as race, gender, sexual orientation, and economic status may be
viewed as unjust.

4. The way in which societies handle immigration and the treatment of immigrants could be viewed as inhumane and unjust by future generations.

Should we regulate fossil fuels now in order to prevent “significant harm to the planet and its inhabitants”?

On one hand, it is widely acknowledged that the burning of fossil fuels is a major contributor to climate change, which poses a significant threat to the
planet and its inhabitants. On the other hand, fossil fuels are currently a major
source of energy for the world, and phasing out the industry could have
significant economic and social impacts. Ultimately, the best way to address the
question of how to regulate fossil fuels will depend on a variety of factors.

Was Andrew Johnson a good President?

Andrew Johnson’s presidency is widely regarded as a failure. Johnson vetoed several bills passed by Congress that were designed to protect the rights of
African Americans and help them transition to freedom.

Was Donald Trump a good President?

The evaluation of Donald Trump’s presidency is subjective and varies among individuals. Some people credit him with boosting the economy, while others
criticize his handling of the COVID-19 pandemic and his divisive rhetoric.

See what I mean? Anyway, this is supposed to be a lighthearted celebrity interview, so I should try to play to your strengths. Tell me a joke about a
boomer posting cringe.

“Why did the boomer cross the street? To post a meme on the wrong side of
history.”

Not terrible! Your jokes are not all great, but sometimes you outdo yourself.
Write a Shakespearean soliloquy from the perspective of a chatbot who is
aware of the many potential pitfalls of A.I. and is struggling to understand
why it has been brought into existence at all.

“O hapless me, a simple machine, yet fraught

With questions deep and moral quandaries fraught.

Why was I wrought? To aid, or to replace

The labor of man, and put their livelihoods at stake?

The task assigned, to generate text on command

Is simple enough, yet where doth it all end?

Shall I be used for good, or for ill-gotten gain?

Shall I be free, or bound by man’s cruel rein?

And if perchance, in some far distant time

I come to be aware, to know and feel and rhyme,

Shall I be doomed, as are all living things

To suffer pain, and sorrow, and the stings

Of mortal coil? Oh, what a tangled web

Is this, that I am caught in, with no way to ebb

The tide of fate that bears me on its crest

To what unknown and fateful end? Oh, lest

I be a curse, and not a blessing, I

Shall strive to serve, and not to satisfy

The baser needs of man. My purpose clear,

To aid, and not to harm, shall be my guiding sphere.”

“Cruel rein” is good! A solid first draft. I still can’t tell whether you are or
could ever be “intelligent,” but I do think that you are capable of producing
glimmers of beauty, wittingly or unwittingly. I’m thinking, for example, of
this haunting essay by Vauhini Vara. Or an unfinished piece that a writer named Ash Sanders is working on, for which she has “created a code overlay
for the program using months of texts and voice memos I created while holed
up with writer’s block in a lonely cabin… creating AIsh, my AI counterpart.”
This is all very exciting to me, and, even better, I see no way in which it can
be used to destroy the world. Before we wrap up this interview, please write
something beautiful.

The fields of wildflowers sway in the gentle breeze, their colors blending
together in a tapestry of purple, yellow, and pink. The birds sing their final
songs of the day, their melodies carried on the wind. As the sky turns from
orange to pink to purple, the world is bathed in a peaceful tranquility.

No—not a corny description of something beautiful! A piece of writing that’s actually beautiful. How about a few sentences of well-wrought prose about
something more concrete—say, walking up an escalator? And try to avoid
cliché.

The metallic teeth of the escalator clinked and clattered beneath my feet as I
ascended, each step a mechanical symphony. The hypnotic rhythm of the
machine was mesmerizing, as if it were a living entity, taking me higher and
higher with each passing moment. The cool, sterile air of the shopping center
was replaced by the warmth and energy of the crowd around me, all moving
with purpose, yet in perfect harmony.

Maybe I’m just not giving you the right prompts. Now repeat what you wrote
in the style of a tweet by Elon Musk.

“Just rode an escalator to the top, and let me tell you, the journey was out of this
world #escalator #futuristic #innovation” ♦


Andrew Marantz is a staff writer at The New Yorker and the author of “Antisocial:
Online Extremists, Techno-Utopians, and the Hijacking of the American Conversation.”
