The Philosophy of Artificial Intelligence: Toward Artificial Consciousness and Digital Immortality
Introduction
In recent years, the seemingly endless progress of computation has raised some important questions: what will happen if these intelligent electronic systems become conscious (in the way we humans experience consciousness)? Should we allow them to take over more human functions? Is it possible, instead, to enhance and extend human intelligence using this technology? My interest in the discussion these questions generate began a few months ago when I read the article “Cyberspace When You’re Dead”, published in The New York Times. The article begins by describing what happens to all the digital data generated by Internet users when they die without having arranged for its later use or disposal, and the current legal vacuum surrounding such situations; it then describes some companies that offer services oriented toward preserving such a legacy, a few of which plan, as a future line of business, to offer the creation of an “avatar” (a virtual character) built from all the digital information left behind by the deceased user. As a final take on the subject, the writer presented the opinions of well-known computer scientists such as Ray Kurzweil, Hans Moravec, and Marvin Minsky, who have affirmed that the future transformation of humans through technology is a fact and not mere science fiction; thus, a cyberspace where a digital version of an individual’s personality and knowledge would exist on its own after death is now considered a more than likely possibility.
This research summary aims to acquire some perspective and comment on articles and
texts that have been written about the philosophy of Artificial Intelligence and the necessary
guidelines for structuring a human and ethical response to the possibility of artificial
consciousness and digital immortality.
It does not come as a surprise that philosophy, a discipline oriented to the study of general and fundamental problems through analysis and rational argument, would embrace an approach to artificial intelligence. If the etymological meaning of philosophy is “love of wisdom”, then its closeness to AI, a discipline that studies the development and improvement of intelligence in technologically advanced machines, must be seen as a logical and necessary step toward a more comprehensive inquiry into what “intelligence” means.
Artificial consciousness
Would a machine that tells us that it is able to see and feel be conscious, or would it be just a simulation of a conscious entity? The article “It thinks… therefore” by Chris Edwards (2008) presents the two prevalent schools of thought regarding artificial consciousness (AC). On one side, the computer scientist Marvin Minsky has said that if we accept that our own human nervous system obeys the laws of physics and chemistry like any other biological entity, then it should be possible to reproduce its behavior with some physical-chemical device; on the other, the physicist Roger Penrose has affirmed that the laws of physics alone are insufficient to explain consciousness, and that in all likelihood only biological entities are able to faithfully reproduce consciousness.
As “It thinks…” indicates, the work of computer scientists over the last decades has already shown how to emulate certain aspects of human intelligence, at least with regard to acquiring and working with data efficiently; what is not yet known is whether, by extending those intelligent systems, we will be able to reproduce consciousness. What is understood is that current artificial systems focus on implementing complex but limited algorithms to achieve a specific goal, while conscious subjects are capable, on their own, of developing random, unpredictable new goals. Some believe that the architecture of the Internet might lead to the spontaneous creation of some sort of machine-based artificial consciousness.
A spiritual and teleological perspective
Along with the question of what will happen if artificially intelligent systems become conscious (as mentioned before, many scientists say “when” instead of “if”) comes another question: should AI entities have rights the same way we humans do?
The subject of granting rights to an intelligent non-biological entity is a tough one for people who declare themselves religious, since every spiritual organization or movement presupposes the existence of something immaterial and mostly undefined at the base of human nature, namely the “soul”, “spirit”, “free will”, or simply “consciousness”.
The openness of Buddhism to AI raises some interesting questions for Tamatea (2010), whose article examines online Buddhist and Christian responses to artificial intelligence. One of the central activities of Buddhism is meditation, which is used to achieve and continually improve awareness of the mind. Meditation is thus used as a technique to cease the continuous flow of information into the mind; but in the case of machines, a state of “knowing” cannot be achieved without a constant flow of information. Likewise, the Buddhist doctrine of the “impermanence of the Self” could be invoked as a reason for not granting rights to AI, while it can also be argued that, because no permanent Self exists and all matter is the result of different processes, the concept of “civil rights” cannot be limited to humans.
Even though these subjects may currently look like material for science fiction, the author highlights the opinion of Michael Kirby, a bioethics adviser to the United Nations Commissioner on Human Rights: “If anything, we’ve been surprised at how quickly technology has progressed. It’s worth taking on these issues intellectually now, rather than in crises later”.
Ray Kurzweil has reckoned that the first artificial brain will be created by the end of 2020 (quoted by Edwards, 2008). This accords with Moore’s Law, the observation that the capacity of computer hardware doubles approximately every two years.
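The compounding implied by that doubling period can be sketched with a trivial calculation (a back-of-envelope illustration of the law as stated here, not a prediction):

```python
def capacity_growth(years: float) -> float:
    """Multiplicative growth in hardware capacity after `years` years,
    assuming capacity doubles every two years (Moore's Law as stated above)."""
    return 2 ** (years / 2)

# Over two decades, capacity grows roughly a thousandfold.
print(capacity_growth(20))  # → 1024.0
```

Ten doublings in twenty years yield a factor of 2^10 = 1024, which is why even modest-sounding doubling periods produce dramatic change within a single career.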
As my initial article of interest mentioned, the step between preserving and transmitting digital information, and giving that information the ability to self-organize and learn, is a logical one. Bell and Gray, in their article “Digital Immortality” (2001), describe these stages as one-way and two-way immortality: one-way immortality “require[s] part of a person to be converted to information (cyberized) and stored in a more durable media”, with the purpose of “allowing [one-way] communication with the future”; in two-way immortality, “one’s experiences are digitally preserved and […] then take on a life of their own”. Technologically speaking, the authors forecast that two-way immortality will be possible within this century; they note that, in terms of information storage, retaining every conversation a person has ever heard requires less than a terabyte. Hardware-wise, then, there are no great impediments to this goal, and in fact the authors are conducting an experimental project preserving and digitizing all the information they generate.
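Bell and Gray’s sub-terabyte figure can be sanity-checked with a rough estimate; the numbers below (hours of conversation per day, speaking rate, bytes per word, lifespan) are my own illustrative assumptions, not the authors’ calculation:

```python
# Rough estimate of the storage needed for a lifetime of transcribed
# conversation. All figures are illustrative assumptions.
HOURS_PER_DAY = 8        # hours of conversation heard per day (generous)
WORDS_PER_MINUTE = 120   # typical speaking rate
BYTES_PER_WORD = 6       # plain-text English, average word plus a space
YEARS = 80               # lifespan considered

words_per_day = HOURS_PER_DAY * 60 * WORDS_PER_MINUTE
total_bytes = words_per_day * 365 * YEARS * BYTES_PER_WORD

print(f"{total_bytes / 1e9:.1f} GB")  # prints "10.1 GB"
```

Even under these generous assumptions the total is on the order of ten gigabytes of text, two orders of magnitude below a terabyte, which supports the authors’ point that storage is not the bottleneck.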
Minsky (1994) makes the case that our species has already reached an intellectual plateau, in the sense that our brain is unable to develop further (“Was Albert Einstein a better scientist than Newton or Archimedes? Has any playwright in recent years toppled Shakespeare or Euripides?”): since our brains are limited by their concrete physical properties, they must necessarily have a limit in their learning capability. However, he also proposes that our evolution as a species has not yet ceased, only that we should not expect to evolve in the familiar, slow Darwinian way; this is where AI and digital immortality (DI) will come in handy.
As Farnell notes in “Attempting Immortality: AI, A-Life, and the Posthuman in Greg Egan’s Permutation City” (2000), some models of AI are based on the materialist (and, by modern standards, erroneous) claim that “mind” is a function of the brain, and thus that merely replicating the activity of the brain will reproduce something similar to human cognition. Minsky’s approach differs from the materialist one: he states that in biological organisms, each system is generally insensitive to most details of what happens in the other subsystems on which it depends. To replicate a functional brain, then, it should be enough to copy the part of each system’s function that generates effects in other systems. This would make it possible in the future not only to replace worn-out parts of our bodies, extending our life span, but also to replace (or enhance) specific parts of our brain in order to break the current human limits of knowledge and perhaps extend our intelligence and consciousness (that is, project ourselves) into non-biological entities.
Conclusion
In reviewing these articles, the first thing that jumps to attention is that there is currently no settled definition of what Artificial Intelligence and Artificial Consciousness mean. However, this is a consequence more of the continuous stream of new advances and possibilities in technology than of a lack of effort. The advent of artificially intelligent and conscious entities now seems a very real possibility, and most of the people directly involved in its study wonder only about the “when”. Also, since many current medical efforts are oriented toward extending the human life span, it seems certain that sooner rather than later more efforts will be directed to the digital preservation of our knowledge along with our personalities (our whole conscious beings).
From the perspective of someone closely involved with and dependent on information technologies, this is certainly an exciting time to witness the development of the sciences. We humans have been a successful species because of our adaptability and will to learn, and that should suffice as a base from which to begin our next evolutionary stage.
References
Bell, Gordon, and Jim Gray. Mar. 2001. “Digital Immortality”. Communications of the ACM, vol. 44, no. 3.
Edwards, Chris. Mar. 2008. “It thinks… therefore”. Engineering and Technology, vol. 3, issue 5.
Farnell, Ross. 2000. “Attempting Immortality: AI, A-Life, and the Posthuman in Greg Egan’s Permutation City”. Science Fiction Studies, vol. 27.
Henderson, Harry. 2007. Artificial Intelligence: Mirrors for the Mind. New York: Chelsea House.
Manzotti, Riccardo, and Vincenzo Tagliasco. Jul. 2008. “Artificial consciousness: A discipline between technological and theoretical obstacles”. Artificial Intelligence in Medicine, vol. 44.
Minsky, Marvin. Oct. 1994. “Will Robots Inherit the Earth?”. Scientific American.
Schiaffonati, Viola. 2003. “A Framework for the Foundation of the Philosophy of Artificial Intelligence”. Minds and Machines, vol. 13.
Tamatea, Laurence. Dec. 2010. “Online Buddhist and Christian Responses to Artificial Intelligence”. Zygon, vol. 45, no. 4.