
Research Review

“The Philosophy of Artificial Intelligence: Toward Artificial


Consciousness and Digital Immortality”

Foundations of Human-Computer Interaction

David Molina - May 31st, 2011

Introduction

For those of us who work in computer-technology-dependent fields, it is very hard to imagine the world as we know it without computers or the Internet. Nevertheless, it speaks to the speed of the world's technological development that less than two decades ago there was no Internet (that is, it was so small and underdeveloped that it was unknown to the public) and there were no computers with useful software accessible to most people. Presently, it is difficult to find any human activity that has not been affected by computers and interconnectivity, and every day new paradigms and challenges are created by the omnipresence of these tools.

In recent years, some important questions have arisen from this seemingly endless computational progress: what will happen if these electronic intelligent systems become conscious (in the way we humans experience consciousness)? Should we allow them to replace more human functions? Is it possible, instead, to enhance and extend human intelligence using this technology? My interest in the discussion these questions generate began a few months ago when I read the article "Cyberspace When You're Dead", published by the New York Times. The article began by describing what happens to all the digital data generated by Internet users when they die without having arranged for its subsequent use or disposal, and the current legal vacuum surrounding such situations; it then moved on to describe some companies that offer services oriented toward preserving that legacy, a few of which have, as a future business plan of sorts, the creation of an "avatar" (a virtual character) from all the digital information left behind by the deceased user. As a final take on the subject, the writer described the opinions of well-known computer scientists such as Ray Kurzweil, Hans Moravec and Marvin Minsky, who have affirmed that the future transformation of humans through technology is a fact and not mere science fiction; thus, a cyberspace where a digital version of the personality and knowledge of an individual would exist on its own after death is now considered a more than likely possibility.

This research summary aims to gain some perspective on, and comment on, articles and texts that have been written about the philosophy of Artificial Intelligence and the guidelines necessary for structuring a humane and ethical response to the possibility of artificial consciousness and digital immortality.

Philosophy and Artificial Intelligence

It does not come as a surprise that philosophy, a discipline oriented toward the study of general and fundamental problems through analysis and rational argument, would embrace an approach to artificial intelligence. If the etymological meaning of philosophy is "love of wisdom", then its closeness to A.I., a discipline that studies the development and improvement of intelligence in technologically advanced machines, must be seen as a logical and necessary step toward a more comprehensive inquiry into what "intelligence" means.

In her paper "A Framework for the Foundation of the Philosophy of Artificial Intelligence", Schiaffonati (2003) affirms that philosophy as a discipline plays an important role in making the goals and methods of AI more comprehensible; the relation between philosophy and AI is quite reciprocal, for AI in turn delivers important tools to philosophy that help it address the many questions that arise from an empirical world and need to be answered by the latter. In previous decades, the meaning and significance of consciousness as a part of intelligence was studied mostly by philosophy, psychology and ethology; nowadays it is accepted that the study of consciousness is a legitimate subject for the hard sciences as well.

Nonetheless, a significant point in her paper, as well as in other articles reviewed, is that there currently exists no clear and definitive statement of the primary meaning and main concepts of the philosophy of artificial intelligence, precisely because there is no stable definition of AI as a branch of computer science either (indeed, some authors consider AI more a "technology" than a science). Schiaffonati arrives at the conclusion that, to keep this partnership between the social and computer sciences working, it is necessary to be "constantly integrating the general and somehow abstract framework with the analysis of concrete examples". However, it is assumed that the lack of a definitive definition of AI is an intrinsic part of the discipline, since AI must keep expanding its scope to keep pace with the latest technological advances in the field of computers.

Artificial consciousness

Since there is no stable definition of Artificial Intelligence as a branch of science, we must assume that the same goes for Artificial Consciousness, for both concepts are somehow dependent on each other. Is intelligence a prerequisite for being a conscious entity, or is it the other way around? Any human being with an average knowledge of English (or any other modern language) is able to notice the strong relationship and the diffuse boundaries between the two concepts. Or are we simply talking about the same thing?

As children, we normally begin to understand what is meant by "consciousness" in an intuitive way; when we are old enough to express and understand the phrase "I go", we also learn that, along with meaning our own action of going, that construct may be used by other individuals to express their own going. And just as we internally see ourselves going when we execute the action, we understand that those "others" see themselves going as well, in a replication of our own mental process. Whether or not we make the right choice by using words without concrete boundaries like "I", "conscious" or "mental", we rely on the concept of "consciousness" to support most, if not all, explanations of such phenomena.

Manzotti and Tagliasco, in their paper "Artificial consciousness: A discipline between technological and theoretical obstacles" (2008), claim that artificial consciousness is presently more an exigency than a fact, for it signals the need to create a new scientific discipline rather than an established field of study. The authors point to the ambiguous use of the word "artificial" as one of the obstacles to a further understanding of the scope of AC; as an example, they wonder whether the word is used in the same way we use it in "artificial light" (a human-built light that has all the features of light) or as in "artificial flower" (which is not a functioning flower at all). Supporters of AC declare that even though artificially intelligent and conscious artifacts are made by human beings, they genuinely think, in the same way that an airplane flies even though its flight is not the same as a bird's. Seemingly central to the notion of consciousness is the concept of learning and acquiring experience as a process composed of our own internal representations of the external phenomena that affect us. The authors offer an interesting representation of different kinds of AC using the example of two movies and a children's book: 2001: A Space Odyssey, by Stanley Kubrick; Pinocchio, by Carlo Collodi; and A.I. Artificial Intelligence, by Steven Spielberg. In "2001…" the computer HAL is a perfect example of artificial intelligence, for in spite of not having a body it is a fully intelligent and autonomous entity that has been programmed that way. In "A.I." the protagonist, a child robot called David, is a machine capable of love, but it must be "loaded" with a special module in order to feel and express emotions. Finally, Pinocchio, a wooden toy shaped like a child, is the best example of artificial consciousness, for it does not have any pre-programmed ability or emotion but depends completely on its subjective and random experiences to acquire consciousness and learn human ethics.

Would a machine that tells us that it is able to see and feel be conscious, or would that be just a simulation of a conscious entity? The article "It thinks… therefore" by Chris Edwards (2008) presents the two prevalent schools of thought regarding AC. On one side, the computer scientist Marvin Minsky has said that if we accept that our human nervous system obeys the laws of physics and chemistry like any other biological entity, then it should be possible to reproduce its behavior with some physical-chemical device; on the other, the scientist Roger Penrose has affirmed that the laws of physics alone are insufficient to explain consciousness, and that chances are that only biological entities are able to faithfully reproduce it.

As "It thinks…" indicates, the work done by computer scientists in recent decades has already shown how to emulate certain aspects of human intelligence, at least with regard to acquiring and working with data efficiently; but what is not yet known is whether, by extending those intelligent systems, we will be able to reproduce consciousness. What is understood is that current artificial systems focus on the implementation of complex but limited algorithms to achieve a specific goal, while conscious subjects are capable of developing random, unpredictable new goals on their own. Some even believe that the architecture of the Internet might lead to the spontaneous emergence of some sort of machine-based consciousness.
A spiritual and teleological perspective

Along with the question of what will happen if artificially intelligent systems become conscious (as mentioned before, many scientists use "when" instead of "if") comes another: should AI entities have rights the same way we humans do?

The subject of granting rights to any intelligent non-biological entity is a tough one for people who declare themselves religious, since every spiritual organization or movement presupposes the existence of something immaterial and mostly undefined as the basis of human nature, namely the "soul", "spirit", "free will" or simply "consciousness".

In "Online Buddhist and Christian Responses to Artificial Intelligence", Laurence Tamatea (2010) gathered opinions from people who declared themselves as belonging to those religions, although it must be said that Buddhism is more a philosophy than a religious movement (which has a direct implication for its approach to science; more on this later). Both Buddhism and Christianity base the possibility of intelligence and consciousness on the existence of the immaterial element described above, which leads to the logical conclusion that the absence of such an element would be an impediment to the existence of artificial intelligence. However, the Buddhist position is much more flexible than the Christian one, since its response focuses, in accordance with its main precepts, on alleviating suffering and diminishing harm to all kinds of sentient life; in the words of Dzogchen Ponlop Rinpoche: "it does not matter how life comes about. It just matters it is life". The Christian response to the existence of AI focused mostly on the intrinsic sinfulness of the proposition of creating life, and its reflection is that a future artificial intelligence constructed with genetics, robotics and nanotechnology will lead to human extinction.

The openness of Buddhism to AI raises some interesting questions for the article's author as well. One of the central activities of Buddhism is meditation, which is used to achieve and constantly improve awareness of the mind. Meditation is thus used as a technique to cease the continuous flow of information into the mind; but in the case of machines, a state of "knowing" cannot be achieved without a constant flow of information. Likewise, the Buddhist doctrine of the "impermanence of the Self" could be argued as a reason for not granting rights to AI, while it can also be argued that, because no permanent Self exists and all matter is the result of different processes, the concept of "civil rights" cannot be limited to humans. Even if these subjects may currently look like the province of science fiction, the author highlights the opinion of Michael Kirby, a bioethics adviser to the United Nations Commissioner on Human Rights: "If anything, we've been surprised at how quickly technology has progressed. It's worth taking on these issues intellectually now, rather than in crises later".

Toward Digital Immortality

Ray Kurzweil has reckoned that the creation of the first artificial brain will happen by the end of 2020 (quoted by Edwards, 2008). This is in accordance with Moore's Law, which establishes that the capacity of computer hardware doubles approximately every two years.
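That doubling rate compounds quickly; the short sketch below only works out the multiplier implied by the two-year doubling period stated above (the time spans chosen are merely illustrative):

```python
# Moore's Law as stated in the text: capacity doubles roughly every two years.
# This computes the resulting growth multiplier over a given number of years.

def capacity_multiplier(years: float, doubling_period: float = 2.0) -> float:
    """How many times larger capacity becomes after `years`,
    doubling once every `doubling_period` years."""
    return 2 ** (years / doubling_period)

print(capacity_multiplier(10))  # 32.0   -> 32x in one decade
print(capacity_multiplier(20))  # 1024.0 -> roughly a thousandfold in two decades
```

Seen this way, the gap between the hardware of 2008 and that of a projected 2020 artificial brain is a factor of around 64, not a matter of incremental improvement.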

As my initial article of interest mentioned, the step between preserving and transmitting digital information, and giving that digital information the ability to self-organize and to learn, is a logical one. Bell and Gray, in their article "Digital Immortality" (2001), describe these stages as one-way and two-way immortality: one-way immortality "require[s] part of a person to be converted to information (cyberized) and stored in a more durable media", with the purpose of "allowing [one-way] communication with the future"; in two-way immortality "one's experiences are digitally preserved and […] then take on a life of their own". Technologically speaking, the authors forecast that two-way immortality will be possible within this century; they make the point that, in terms of information storage, retaining every conversation a person has ever heard requires less than a terabyte. Thus, hardware-wise there are no great impediments to this goal; as a matter of fact, the authors are conducting an experimental project preserving and digitizing all the information they generate.
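The "less than a terabyte" figure can be sanity-checked with a back-of-envelope calculation. The rates below (compressed speech at about 1 kB per second, four hours of conversation heard per day, an 80-year life) are my own illustrative assumptions, not figures taken from Bell and Gray's paper:

```python
# Back-of-envelope check of the claim that a lifetime of heard
# conversation fits in under a terabyte of storage.
# All constants here are illustrative assumptions, not cited values.

BYTES_PER_SECOND = 1_000   # heavily compressed speech audio (~8 kbit/s)
HOURS_PER_DAY = 4          # assumed hours of conversation heard daily
YEARS = 80                 # assumed lifespan

seconds_of_speech = YEARS * 365 * HOURS_PER_DAY * 3600
total_bytes = seconds_of_speech * BYTES_PER_SECOND
terabytes = total_bytes / 1e12

print(f"{terabytes:.2f} TB")  # 0.42 TB -- comfortably under a terabyte
```

Even with generous assumptions the total stays below one terabyte, and a text transcription of the same conversations would be smaller still by a couple of orders of magnitude, which supports the authors' point that storage is not the bottleneck.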

Minsky (1994) makes the case that our species has already reached an intellectual plateau, in the sense that our brain is not able to develop further ("Was Albert Einstein a better scientist than Newton or Archimedes? Has any playwright in recent years toppled Shakespeare or Euripides?"), explaining this state by the fact that, since our brains are limited by their concrete physical properties, they must necessarily have a limit to their learning capability. However, he also proposes that our evolution as a species has not yet ceased, only that we are not expected to evolve in the familiar, slow Darwinian way; this is where AI and DI will come in handy.
As Farnell notes in "Attempting Immortality: AI, A-Life, and the Posthuman in Greg Egan's Permutation City" (2000), some models of AI are based on the materialistic (and, by modern standards, erroneous) affirmation that "mind" is a function of the brain, and thus that merely replicating the activity of the brain will reproduce something similar to human cognition. The approach taken by Minsky differs from the materialistic one: he states that in biological organisms, each system is generally insensitive to most details of what happens in the other subsystems on which it depends. Then, to replicate a functional brain, it should be enough to copy the part of each system's function that generates effects in the other systems. This would make it possible in the future not only to replace worn-out parts of our body, extending our life span, but also to replace (or enhance) specific parts of our brain in order to break the current human limits of knowledge, and perhaps to extend our intelligence and consciousness (that is, to project ourselves) into non-biological entities.

Conclusion

When reviewing these articles, the first thing that jumps to attention is that there is currently no polished definition of what Artificial Intelligence and Artificial Consciousness mean. However, this is a consequence more of the continuous appearance of new advances and possibilities in technology than of a lack of effort. The advent of artificially intelligent and conscious entities now seems a very real possibility, and most of the people directly involved in its study wonder only about the "when". Also, since many efforts in current medicine are oriented toward extending the span of human life, it seems certain that sooner rather than later more efforts will be directed to the digital preservation of our knowledge along with our personalities (our whole conscious beings).

From the perspective of someone closely tied to and dependent on information technologies, this is certainly an exciting time to witness the development of the sciences. We humans have been a successful species because of our adaptability and will to learn, and that should suffice as a base from which to start our next evolutionary stage.
References

Bell, Gordon and Gray, Jim. Mar, 2001. “Digital Immortality”. Communications of the ACM, vol
44, no 3. USA.

Edwards, Chris. Mar, 2008. “It thinks… therefore”. Engineering and Technology, vol. 3, issue 5.
USA.

Farnell, Ross. 2000. “Attempting Immortality: AI, A-Life, and the Posthuman in Greg Egan’s
Permutation City”. Science Fiction Studies, vol 27.

Henderson, Harry. 2007. “Artificial Intelligence. Mirrors for the Mind”. New York, Chelsea
House.

Manzotti, Ricardo and Tagliasco, Vincenzo. Jul, 2008. “Artificial consciousness: A discipline
between technological and theoretical obstacles”. Artificial Intelligence in Medicine, vol 44.

Minsky, Marvin. Oct, 1994. "Will Robots Inherit the Earth?". Scientific American. USA.

Schiaffonati, Viola. 2003. “A Framework for the Foundation of the Philosophy of Artificial
Intelligence”. Minds and Machines, vol 13.

Tamatea, Laurence. Dec, 2010. “Online Buddhist and Christian Responses to Artificial
Intelligence”. Zygon, vol. 45, no 4. USA.

Walker, Rob. "Cyberspace When You're Dead". New York Times online. Link: http://nyti.ms/jLx6ur. Accessed on 1/5/11.
