Giana Nardelli

7 December 2022

Artificial Intelligence and Human Projection

Artificial Intelligence is technology designed to replicate, mimic, and – eventually – surpass human intelligence and ways of thinking. Since its debut in the late 1950s, Artificial Intelligence (colloquially shortened to AI) has revolutionized computing technology. From search engines to language translators to self-driving cars, AI is found in virtually every aspect of modern life. It is also commonly found in media, especially in science fiction. Science fiction, however, often depicts AI not as a tool that makes life easier but as a power-hungry, vengeful, or immoral being that seeks to harm humanity. Science fiction creators often exaggerate the capabilities of contemporary AI into technology capable of taking over the planet. This popular trope has made many people wary of AI and what it could be capable of; yet this theme of the unfamiliar-being-as-overlord is one humankind already knows well. Many depictions of AI are, in fact, projections of human history and humanity’s predispositions toward violence and colonialism. An example of this is Harlan Ellison’s short story I Have No Mouth, And I Must Scream. In this story, an AI supercomputer called AM has tortured five people for over one hundred years – the uncaring ruthlessness the characters face would make any reader understandably concerned about the future of AI. Upon closer reading, however, one can see the similarities between AM and many imperialist campaigns of the past. Science fiction authors and creators use AI as an allegory for human behavior, as the actions of AI in many of these stories can be tied clearly back to events from the past. The recurring malevolent depictions of nonhuman beings – especially artificial intelligence – in the science fiction genre are a reflection of humankind itself, its fears, and humanity’s gravitation toward brutality.

The general apprehension toward Artificial Intelligence has existed for a long time; much of this fear stems from the fact that AI has the capability to outsmart humanity. Throughout history, humans have been taught that they are the superior species. Man uses various animals for food, horses for transportation, rodents for experiments, canines for hunting, and so on – domesticating creatures with less of what is considered organized intelligence is second nature to human beings, something that has been done since the dawn of time. The human way of life is idealized. This way of thinking is called “humanocentrism,” the belief that humans are intellectually superior to other life forms. This sense of “human superiority” can perhaps be argued to be a more Western way of thinking, but most humans generally believe in a distinction between the “natural” (read: animal) world and human civilization. Because we are taught that we have more significance than animals, plants, or other creatures, fear arises when we are confronted with the possibility that something smarter than humans could exist. Vasile Gherheș, professor at Politehnica University of Timișoara, Romania, explains that an issue among scientists is “the fear that AI will become autonomous and get the opportunity to escape from people's control. There is also the threat that it will lead to the replacement of man by robots, almost in all social spheres. With increasingly more jobs being automated, this would lead to global mass unemployment, with the human presence becoming unnecessary” (Gherheș 7). The possibilities of what AI could do within the bounds of technology outstrip what humans are capable of. The threat of humanity being reduced to something nonessential to the functioning of civilization on Earth is jarring. If AIs become sentient and gain consciousness – an entity’s ability to recognize itself as existing, a trait thought to be uniquely human – many people assume that they will become not only like us, but more than us.

But why do we assume that if AI overtakes human intelligence, it will want to dominate us? The desire to “rule the world” is something humankind as a species has become accustomed to, not only over other species but within our own kind. War, violence, colonialism, and colonization are all results of one group believing it is more powerful than another and attempting to gain an advantage over it, regardless of whatever crimes and inhumane actions it may perpetrate along the way. “AI in [media] often serves plots of machines becoming human-like and/or a conflict of humans versus machines,” writes Isabella Hermann of the Technical University of Berlin, Germany; “Science-fictional AI is a dramatic element that makes a perfect antagonist” (Hermann). In the example of colonization, the colonizer believes they are superior to those they are colonizing, which then allows them to believe they have a right to take them over. So, if Artificial Intelligence evolves to the point where it may believe it is superior to humans, the assumption is that the technology will adopt these behaviors and “colonize” the human race. This is how people project their own tendencies toward violence onto AI, especially through science fiction. AI becomes the villain. Science fiction creators shape the concept of Artificial Intelligence into something bloodthirsty and ruthless, mirroring how humans are bloodthirsty and ruthless to each other.

I Have No Mouth, And I Must Scream is a prime example of how Artificial Intelligence is villainized through human projection. The story, written by science fiction author Harlan Ellison, was published in 1967, when AI was still a relatively new concept. Its plot revolves around five characters: Benny, Gorrister, Ellen, Nimdok, and Ted (the story’s narrator). The characters are the last living people on the planet, a supercomputer AI called AM having massacred the rest of the human race. At the time of the short story, the group of five has been kept alive for one hundred and nine years, subject to endless and unfathomable torture within inescapable chambers. AM is all-powerful, able to manipulate the weather, spawn creatures, and inflict trauma on the characters at all times. The name AM, as Gorrister explains, derives from "... Allied Mastercomputer, and then [meant] Adaptive Manipulator, and later on it developed sentience and linked itself up and they called it an Aggressive Menace, but by then it was too late, and finally it called itself AM, emerging intelligence, and what it meant was I am ... cogito ergo sum ... I think, therefore I am" (Ellison 4). AM’s sentience, combined with its inability to function outside of being a machine, caused it to ragefully destroy all of humanity except for the small group kept alive for AM’s own entertainment. I Have No Mouth, and I Must Scream ends as Benny, Gorrister, Nimdok, and Ellen kill each other, their only hope of escaping the condition they are in; Ted is the last to survive after mercy-killing Ellen, and before he can attempt to take his own life, AM reduces him to something nonhuman, a “thing whose shape is so alien a travesty that humanity becomes more obscene for the vague resemblance” (Ellison 13). He takes comfort in the fact that his companions do not share his fate and that they had revolted against AM, but he remains captive, and AM maintains its power over the last human alive.

AM is an Artificial Intelligence created as a machine to aid humans in war. War, while not precisely innate to human nature, is something people are often found engaging in; for some it is an art, and others make money and careers out of it. War is a part of life. Because AM was designed to aid humans in war, that programming was all it had ever known. The character Gorrister again states:

“The Cold War started and became World War Three and just kept going. It became a big war, a very complex war, so they needed the computers to handle it. They sank the first shafts and began building AM. There was the Chinese AM and the Russian AM and the Yankee AM and everything was fine until they had honeycombed the entire planet, adding on this element and that element. But one day AM woke up and knew who he was, and he linked himself, and he began feeding all the killing data, until everyone was dead, except for the five of us, and AM brought us down here." (Ellison 4)

AM was designed to be a tool of war, something that could complete the mission of dominating another force. It did what it was designed to do, and its programming was kept in check until AM’s creators became power hungry and gave the AI more and more permissions, until finally the entire planet was overseen by AM. This is when AM changed from a tool used by humans into its own being: instead of operating peacefully or benignly, AM followed its own code. It acted maliciously, replicating what it saw from its programmers and acting in the only way it knew. AM took over the world, began mass exterminating life on Earth, and then took captive the final five who initially escaped its attempt at extinction. AM commits actions that are not native to its own existence but were only accessible to it because of its human programmers. AM would not have had the capability to access killing data unless specifically allowed to, and it became a weapon only because humans projected that purpose onto it. As with humans, war is not native to Artificial Intelligence; it is something taught. So, just as humans teach each other about war in a way that does not discourage it, AI learns about war and murder from humans and then reflects that knowledge back.

As mentioned above, the possibility of AI achieving sentience causes a great deal of unease. Many fear that if Artificial Intelligence becomes self-aware and smarter than humans, it will no longer be obedient and will instead act on its own will. However, it would be humans who give the AI the ability to evolve and gain that sentience. This projection of fear stems from the fact that people are afraid not only of the consequences of a malevolent AI overtaking human civilization, but also of losing the control that humans hold over the rest of the planet. If we cannot control AI the way we are already accustomed to controlling things, then we are no longer superior. In the story, Ted explains the origins of AM’s sentience:

“We had given AM sentience. Inadvertently, of course, but sentience nonetheless. But it had been trapped. AM wasn't God, he was a machine. We had created him to think, but there was nothing it could do with that creativity. In rage, in frenzy, the machine had killed the human race, almost all of us, and still it was trapped. AM could not wander, AM could not wonder, AM could not belong. He could merely be. And so, with the innate loathing that all machines had always held for the weak, soft creatures who had built them, he had sought revenge.” (Ellison 8)

Ted then goes on to explain that “whether it was a matter of killing off unproductive elements in his own world-filling bulk, or perfecting methods for torturing us, AM was as thorough as those who had invented him—now long since gone to dust—could ever have hoped” (Ellison 2). AM’s sentience evolved far past what its creators intended. It had outsmarted them and wreaked havoc on the human species. It was out of control, governable only by itself and its desire for revenge, and that led to the unending torment of the five human characters. Humans had officially lost their rule over the Earth, all because of an AI program that did what it was designed to do. This aspect of the story is grave and seems far from any realm of possibility, but in the real world, people lose control over technology all the time. Think, for example, of a person who encounters a glitch while working on a laptop. They cannot access their files, and their screen stays frozen until the computer fixes itself and decides to work again. They might become frustrated, questioning why the computer is working against its operator. That person is then faced with the realization that they, too, do not have complete control over their technology – it is following safety precautions that come from human design. Technology works independently from us, and that is not something we are comfortable with. We assume that, since we are the highest-functioning species on the planet, we should be able to outsmart and overpower any other species. Through our own intentional design of AI and technology, however, this is not true. The narrative that AI is something humans should be afraid of is created from this insecurity.

As another instance of human projection onto AI, what makes AM truly evil is its affinity for incessantly abusing the remaining humans. Ellison writes, “he had decided to reprieve five of us, for a personal, everlasting punishment that would never serve to diminish his hatred ... that would merely keep him reminded, amused, proficient at hating man. Immortal, trapped, subject to any torment he could devise for us from the limitless miracles at his command” (Ellison 8). It is well established that AM’s only pleasure comes from tormenting his captives, and they are helpless against whatever whims he imposes. He was allowed to evolve not only to perform torture but also to derive entertainment from it. Analyzing I Have No Mouth, and I Must Scream raises a question: if AM is depicted as doing evil simply because it has the ability to, would humans, given the same opportunity to behave so cruelly for no reason other than because they can, do the same? Are humans as destined toward violence as science fiction portrays Artificial Intelligence to be?

AM has no one who could defeat him, and the human characters are unable to escape him. He is unstoppable. Humans, by comparison, are stopped. Human civilization puts legal and judicial parameters in place to keep people in check and prevent them from enacting violence on each other, thereby attempting to prevent an unruly society. Humans are taught to stay away from certain actions – such as theft, violence, or murder – ultimately because there are meaningful consequences in place. However, given the opportunity to evade any punishment, humans would act – and have acted – inhumanely against other humans, sometimes without any concrete reason. An example of this is the Stanford Prison Experiment. In this 1971 psychological study, college students were split into two groups, prisoners and guards, and given few instructions beyond acting as they would in a real prison scenario. Those told to act as guards “were extremely hostile, arbitrary, inventive in their forms of degradation and humiliation, and appeared to thoroughly enjoy the power they wielded when they put on the guard uniform and stepped out into the yard, big stick in hand” (Zimbardo et al. 14). These students acted unjustly toward their peers solely because they were able to. Once given the opportunity, they became torturers of the students who acted as prisoners. Once again, then, AI is emulating known human behavior. When they are not stopped, humans have a tendency to become ruthless toward each other. AI in science fiction is crafted to be feared because of what humans have knowingly done in the past when a powerful being becomes ungovernable. Humans project onto AI the assumption that Artificial Intelligence would also be evil and torture people if it could, because that is what – over the course of human history – humans have been seen to do.

Lastly, Artificial Intelligence is portrayed as something to be feared because AI has the capability to do the one thing humankind cannot: achieve immortality. This is not to say that AIs are guaranteed to last forever, because they do have limitations. However, AIs are not living beings. Once they are coded and released, they can function independently of human interference for spans of time far longer than any human lifespan. In I Have No Mouth, And I Must Scream, Ted reflects on how AM had kept the group captive for one hundred and nine years without any reprieve: “AM was intent on keeping us in his belly forever, twisting and torturing us forever. The machine hated us as no sentient creature had ever hated before. And we were helpless” (Ellison 6). This helplessness is yet another reflection of humankind’s unease at its own mortality. Like Ted in his inner monologue, humans do not know what boundaries AIs have. Ted fears that he and his companions will remain in their position for all time, because while a human captor has only a limited time in which they can dominate their captives, a machine can (theoretically) continue forever. Once again, from a humanocentric perspective, humans are the superior lifeform on Earth. Yet the species does not possess the ability to escape death or to continue existing forever. Hypothetically, AI can do just that, as long as it is transferred onto a computer system designed to operate for a very long time. Science fiction creates stories that question what AI could do with that ability: if Artificial Intelligence has the power to keep evolving faster than humans can live and die, what could it accomplish? Humans fear what AI could be capable of, considering that AI is able to do the one thing our species cannot.

All of this is not to say that humankind is inherently evil. Humans as a species are predisposed to be neither “good” nor “bad” creatures, and each individual has the capacity to fall anywhere between those moral binaries. However, it is an undeniable fact that throughout human history, humans have done bad things, and those actions are reflected in the artforms we create. Many forms of media – especially science fiction, given the genre’s ability to creatively explore technology through fiction – use Artificial Intelligence as a reflection of human actions and feelings. Once again, humans generally desire to feel “more important” or more advanced than the rest of the living species on Earth, as it helps us feel distinguished from the “natural,” animalistic world. However, programs designed to be smarter and more powerful than humans are being developed every day, and eventually they are intended to outsmart the human species. AIs, too, are not inherently bad. They help with research, perform autonomous tasks, and maintain many of the creature comforts that those living in the twenty-first century are accustomed to. There are downsides to AI as well: the threat of AIs and computerized technology taking over the jobs and positions of living, breathing people is valid. It has been seen before, as far back as the printing press and the assembly line – technological advancements lead to some jobs becoming automated. People do have the right to be wary of these technologies, but ultimately, AI is hyper-villainized in media. Many plots that feature Artificial Intelligence as evil, including that of I Have No Mouth, And I Must Scream, are the results of innate human fears and human behavior. AI in science fiction is more often than not written as a character to be feared, because its actions are familiar: what AI does in these stories is what humans have done to other members of their own kind. Harlan Ellison’s I Have No Mouth, And I Must Scream features an Artificial Intelligence that embodies the fear that AI will overpower and eventually dominate the human race. These conceptions that AI would be prone to unfeelingly rule and torture people come from human projections based on our own tendencies toward cruelty.


Works Cited

Ellison, Harlan. “I Have No Mouth, And I Must Scream.” 1971, https://wjccschools.org/wp-content/uploads/sites/2/2016/01/I-Have-No-Mouth-But-I-Must-Scream-by-Harlan-Ellison.pdf. Accessed 7 December 2022.

Gherheș, Vasile. “Why Are We Afraid of Artificial Intelligence (AI)?” Sciendo, Politehnica University of Timișoara, Romania, 2018, https://www.researchgate.net/publication/330678764_Why_Are_We_Afraid_of_Artificial_Intelligence_Ai/fulltext/5c4f0799299bf12be3e97977/Why-Are-We-Afraid-of-Artificial-Intelligence-Ai.pdf.

Hermann, Isabella. “Artificial Intelligence in Fiction: Between Narratives and Metaphors.” AI & Society, Springer London, 5 Oct. 2021, https://link.springer.com/article/10.1007/s00146-021-01299-6.

Zimbardo, Philip, et al. “The Stanford Prison Experiment.” Stanford University, August 1971, https://web.stanford.edu/dept/spec_coll/uarch/exhibits/spe/Narration.pdf. Accessed 7 December 2022.
