
The Self Aware Machine

Technical and Contemporary Issues


David B. Mittelman
Computer Science & Engineering
University of Connecticut
Storrs, United States of America
dbmittelman@gmail.com
Abstract—The possibility of building a self-aware machine, the ingredients it would require, and what we need to worry about as we become gods ourselves.

I. INTRODUCTION

When the Segway first hit the market in 2001, it was greeted by public bans in San Francisco and Key West. It is completely banned in Japan and Australia to this day. Even though the Segway is a device that could change the way we think about moving around our world, and is recognized as such, it has been met with great resistance. Developers of new technology are challenged with creating technology that is accepted by the general public. Sometimes the public is not ready for a new development; sometimes a new development has no place in current society. Many new developments have failed for just these reasons. People don't like change. This is why, when people think of robots, they have a specific, familiar image of what they will look like when they enter the world. We imagine robots as facsimiles of the people they will replace; the maid, for example, will probably be the first job completely taken over by a robot. We have been imagining what a world with robots would be like for over a hundred years. In 1962, the world was introduced to Rosie the Robot in the television cartoon "The Jetsons". In this imagined world of science fiction, or science future, robots took humanoid forms so people could relate to them without having to change the way we approach the world.

II. CREATING A COMPUTER WITH A SOUL

A. Is it possible?

Human mental functions are the byproducts of a complex, interconnected collection of neurons. Every image we see, sound we hear, food we taste, scent we smell, and emotion we feel is a product of this system. Since this system produces all of its outputs from the functioning of its basic building block, the neuron, let's start there. Machine Functionalism opens up a completely new realm of thinking about how our cognitive mechanisms work. It boils our behavior down to a finite set of states generated by a finite set of transitions designed to accommodate a set of inputs. For a basic Turing Machine, for example, this input might be defined as a language, such that:

L = {w | w is a Turing Machine that accepts the empty set} (1)

The Turing Machine will take any input w, perform a set of operations on said input, and then either accept or reject. This simple operation can be directly mapped to the operations of a neuron's firing. The firing of a neuron has four distinct phases:

1. Stimulation and rising phase
2. Peak and falling phase
3. Afterhyperpolarization
4. Refractory period

Figure 1: Action potential states defined as an FSA

These four phases cover multiple steps in the firing of a neuron. During the first phase, the presynaptic neuron sends neurotransmitters to the postsynaptic neuron. The neurotransmitters bind to receptors and produce either an excitatory postsynaptic potential (EPSP) or an inhibitory postsynaptic potential (IPSP), and the EPSPs and IPSPs sum together either spatially or temporally. As the EPSPs and IPSPs sum together, the postsynaptic neuron's soma becomes more positive. This marks the beginning of the second phase. The more positive charge reaches the axon hillock, the part of the neuron that connects to the axon. Once its threshold of excitation is reached, the neuron fires an action potential: the sodium ion (Na+) channels open, Na+ is forced into the cell by the concentration gradient and the electrical gradient, and the neuron depolarizes. The potassium (K+)

channels open, and K+ is forced out of the cell by the concentration gradient and the electrical gradient. In the third phase, the neuron continues to depolarize. Eventually the Na+ channels close at the peak of the action potential and the neuron starts to repolarize. The K+ channels close, but they close slowly and K+ leaks out. As the terminal buttons release neurotransmitter to the postsynaptic neuron, the resting potential is overshot and the neuron falls to about -90 mV. This is called hyperpolarization. In the final phase, the Na+/K+ pump starts to pump three Na+ ions out for every two K+ ions it pumps in. The K+ that is not pumped in diffuses in the synapse. Thus, the neuron repolarizes and finally returns to its resting potential. This can be mapped to a simple finite state machine like the one in Figure 1. Each of the four phases of neuron firing can be equated to a state in a state machine, where the stimulation of the neuron is the input of the state machine, and the firing of the action potential is the output of the state machine. However, the brain has billions of neurons, and for each neuron, thousands of dendrites connect the neuron's signal to other neurons. Our model currently handles one input to one output. How do we account for the extra connections? The key lies in the step where neurotransmitters bind to receptors after firing, producing either an EPSP or an IPSP, and the EPSPs and IPSPs sum together either spatially or temporally. The inputs of a neuron are the sums of the outputs of other neurons, representing a many-to-one relationship. Several different mathematical models represent the many-to-one relationship. The Bayesian network model, for example, is a good start. With a Bayesian network, multiple inputs can be summed to define certain outputs. For example, in Figure 2 the "grass wet" node is most likely to hold true if both the "sprinkler" node and the "rain" node also hold true. Its state represents the sum of the states of the nodes that came before it. This probabilistic state machine allows multiple nodes to be daisy-chained together linearly, or in different configurations, to best fit its specific question: will the grass be wet? So, with a Bayesian network, the operation of groups of

neurons can be recreated. Linking groups of Turing Machines that each wait until their total input reaches some threshold value θ can simulate the Bayesian network. We can define such a machine as follows:

B = {w | w is a Turing Machine that accepts an input summing to θ} (2)
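To make the mapping concrete, the four phases listed above (Figure 1) and the threshold behavior of equation (2) can be sketched as a small state machine in which the summed EPSP/IPSP input drives the transition out of the resting state. This is a minimal illustrative sketch in Python, not a biophysical model; the state names, the threshold value, and the input figures are assumptions chosen only to show the structure.

# A neuron's firing cycle as a finite state machine: a resting state plus the
# four phases listed above. The threshold and input values are illustrative only.
RESTING, RISING, FALLING, HYPERPOLARIZED, REFRACTORY = range(5)

class NeuronFSA:
    def __init__(self, threshold=1.0):
        self.threshold = threshold   # stands in for the axon hillock's threshold of excitation
        self.state = RESTING

    def step(self, summed_input):
        """Advance one phase; summed_input is the spatially/temporally summed EPSP/IPSP value."""
        fired = False
        if self.state == RESTING and summed_input >= self.threshold:
            self.state, fired = RISING, True      # stimulation and rising phase
        elif self.state == RISING:
            self.state = FALLING                  # peak and falling phase
        elif self.state == FALLING:
            self.state = HYPERPOLARIZED           # afterhyperpolarization
        elif self.state == HYPERPOLARIZED:
            self.state = REFRACTORY               # refractory period
        elif self.state == REFRACTORY:
            self.state = RESTING                  # back to the resting potential
        return fired                              # the machine's output: did an action potential fire?

neuron = NeuronFSA()
outputs = [neuron.step(x) for x in (0.3, 1.2, 0.0, 0.0, 0.0, 0.0)]   # fires only on the second input

In this reading, a group of such machines whose outputs are summed as the next machine's input behaves like one node of the Bayesian network described above.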

This Bayesian network can be encapsulated and simulated by a multi-tape Turing Machine. Each tape can be used to simulate an individual Turing Machine, or neuron, in the group. There is no limit to the number of tapes a multi-tape Turing Machine can have, and thus we can simulate n neurons using t tapes, with t = n. We can expand the number of neurons to include every neuron in the human brain, A. Equally, we can increase the number of tapes in the multi-tape Turing Machine to maintain our predefined relationship above, thus t = n = A. Finally, we can reduce this back to our single-tape Turing Machine. Since a Turing Machine has a tape of infinite length, a double-tape Turing Machine can be simulated on a single-tape Turing Machine by putting both tapes on the same tape, one after another. So it follows that an n-tape Turing Machine can be simulated on a single-tape machine, and an A-tape Turing Machine can be simulated on a single-tape Turing Machine. We can conclude from this that it is theoretically possible to accurately model the operations of a human brain on a single-tape Turing Machine. Since this is possible, we can extrapolate that a computer can also accurately model the operations of the human brain. With an accurate model, therefore, we should be able to recreate the necessary backdrop for a machine with the qualities of a human mind. Self-awareness is a good example, and will be the focus of the next section.

B. Mechanizing the soul

When we are self-aware, we might consider three separate parts of ourselves: the brain, the mind, and the soul. One is a physical entity; the other two are constructs. However, some say the soul doesn't exist, and some say the mind is just what the brain does. Since I consider these distinct parts, I would posit that each serves its own separate purpose. The brain is the plumbing that makes us work. Our nerves, in the brain, drive our muscle systems, process our senses, and make up the networks that generate our thoughts. The brain is the lowest level of the three. Its vast network of nerves makes up a system so complex that we still do not understand all of its nuances. The brain is the seat of consciousness. However, in the end, it is simply very important plumbing. How do we define consciousness? Many believe that there is a duality between the mind and the body, and that the collective experience is a separate entity from our physical bodies. This was a central aspect of Descartes' mind-body dualism, set out in his Meditations on First Philosophy, which argues that some mental processes are non-physical and separate from the body. It might be said that, in religious terms, it is the separation of the body and soul. Descartes suggested that the body, possessing the abilities of extension and motion,

Figure 2: A simple Bayesian Network

and obeying physical laws, is mechanical; the mind, lacking these properties, is just the opposite: a non-physical entity. This leads to the idea of vitalism, which holds that there is a "life force" that makes up our inner selves, the spark that we consider the thing that makes us go. It is hard to argue against vitalism; our gut tells us it is true. We believe that as humans we are so special, so unique, that who we are cannot be wrapped up in the auspices of science or physicality. If we were, what would that mean for people who believe in Heaven? What possible explanation could there be for the belief of some people that they have led past lives? If there is no separate entity, what could be the explanation for the inexplicable love associated with the concept of soul mates? There is no explanation for any of these things in modern science; they are not immediately explained by a physical realism of the mind. However, Steven Pinker, in his book How the Mind Works, argues that "the mind is what the brain does; specifically, the brain processes information, and thinking is a kind of computation" (Pinker, 21). What if what we consider "consciousness" is just the byproduct of our brain's processing? By this model, our higher-level thinking takes the inputs from our senses and makes decisions. This is what happens when "we" make decisions. Our sight, for example, is what our brain perceives and processes as what our eyes see as a result of light shining on our retina. Depth perception is our brain processing the two images as one. Our ability to look at a phone and recognize it as a phone is the mind applying what we know about the world and registering that the object we see is, in fact, a phone. Then, our brain can extrapolate that with this object we consider a phone, we can call and talk to anyone: all we have to do is hit some combination of numbers, out of near-infinite possibilities, and we have someone on the other end of the line. These concepts alone take a great deal of thought. Consciousness is not a separate entity within our brains but rather the thinking processes we go about every day. The process, however, is a self-fulfilling prophecy, requiring us to consciously describe it, and thus creates a circular definition. The mind and soul are not, as some have argued, one concept. If we consider the mind and consciousness the processing of our lower-level sensory perceptions, and the soul the unifying force behind the mind, the otherwise-problematic definition is somewhat clarified. Here is a simple example: think about the experience of going to see a movie. We can divide the experience into three pieces: first, the movie playing on the screen; second, the theatre; and third, you, watching the film. The movie is your brain, connecting the outside world with your senses and making it into something your mind can understand. Your mind is the theatre, commenting on the events in the movie. This commentary is then perceived by the soul, which decides how to react. It asks: "Am I enjoying this experience? Do I want a change of pace?" If the soul is tired of the experience, it might simply stand up and walk out of the theatre. This translates into you, in the real world, moving on from whatever you are experiencing at the time. The mind perceives what the brain is processing, and the soul makes decisions based on that. These perceptions could

be anything from object recognition to higher-order thought and the processing of abstract concepts. The soul, on the other hand, uses the mind's commentary and makes the decisions that we make every day of our lives. The soul, then, is the underlying force that keeps order within our minds. It is what controls the connections in the brain, making new ones and breaking old ones. It controls the flow of the impulses within.

C. Architecture of a Self-Aware Machine

Before tackling the problem of how to build a Self-Aware Machine (SAM), we should discuss the model on which to build it: the human brain. If biomimicry, modeling technology after nature, has already provided society with sustainable solutions such as Velcro and airplanes, why couldn't it engineer self-awareness? Why can't self-awareness be engineered in the same way as these other, concrete inventions?

1) How our brain works: The brain is a complex machine, made up of neurons whose fibers can range up to several feet in length, transmitting signals at speeds of up to 120 meters per second. It is estimated that there are about 10^11 neurons with as many as 10^15 connections. To put that in perspective, the latest chip from Intel at the time of writing, the Core i7-980X, has only 1.17 billion transistors; a one-to-one model of those connections would require almost 855,000 such chips. The cerebral cortex is the main part of the brain on which we focus in creating an Artificial Intelligence. It is the center of memory, attention, perceptual awareness, thought, language, and consciousness. A prevalent theory in psychology about the brain, called Localization of Function, posits that specific areas of the brain are responsible for different functions. The brain can be divided into three areas of perception: sensory, motor, and association. The sensory areas receive and process information from the senses: taste, touch, sight, smell, and hearing. This area holds the visual cortex, auditory cortex, and somatosensory cortex. The motor areas, such as the primary motor cortex and the premotor cortex, are responsible for voluntary physical movements. Finally, the association areas function to produce our perception of the world, supporting abstract thinking,

Figure 3: The Human Brain (Human brain)

language, and interaction. The parietal, temporal, and occipital lobes are associated with this area. We will model the brain in the form of a SAM using localization of function, since it provides a logical way to divide up the functions of a cognitive thinking machine.

2) How we would build an equal, or better, version of our memory: The problem that presents itself when we try to model the behavior of the brain on computer constructs stems from the fact that the brain has the power to remap the connections between its neurons. The brain can dynamically adapt at the very basic level, while our computers cannot. True, there are polymorphic coding techniques, but at the hardware level a computer cannot dynamically remap its silicon pathways. How, then, can we approach this problem to allow a computer to change its mappings dynamically? If not at the silicon level, then perhaps we can achieve our goal by focusing on the architecture. Currently, computer memory works on a two-dimensional grid (Figure 4). Each memory address corresponds to a location on this two-dimensional plane of storage locations, and each storage location stores a set amount of data. This is a great way to organize things based on the way a computer works, but it is not how the brain functions. Our memories are associative. When we think of one thing, our minds pull in associations. For example, when we think of waking up in the morning, memories of past mornings come to mind, along with the taste and smell of coffee and breakfast. In fact, a flood of thoughts is conjured and quickly dismissed by our brains as irrelevant. We quickly filter through the unnecessary thoughts and only use the ones we need. Again, this is not how a computer works. In order to get closer to the goal of a

Figure 5: Associative Memory Cell (eight address nodes, &1 through &8, surrounding a central DATA node)

computer that thinks the way we do, we need a memory architecture that bridges the gap between these two models. As silicon etching technology and methodology advance, the ability to print on multiple layers, and to connect between layers, becomes a viable reality. What follows is a proposed computer model of human associative memory, relying on the future ability to print a two-layered circuit. The first layer is the address layer. Each location in the memory circuit has an address that corresponds to a node in this layer. This layer exists specifically to create the bridge of associations. Directly below each node are eight other nodes connected to a center node, which in turn is connected to the node from the first layer. The center of each star, or cell, holds the data associated with that position. This data could be anything: a word, a document, an image, a video. The size of the storage available is not of concern here; the focus is on where the memory address for the data is held, and the eight nodes connected to this center data node are the key. Seven of the eight nodes hold addresses of other nodes in the memory module. These locations hold their own data in their respective center nodes. Their position as children of the center node indicates an association between this cell and the respective cell of the child. It is important to note that this association has directionality! In order for the parent cell to associate a child cell with it, the child cell must be in the parent node's star points. The E star point is a special node. It is used to expand a node's associations by another memory cell. The expansion cell will not have any data at its center node, but will have seven other addresses to associate with the originating cell. There are two main functions that are performed on the memory circuit. The first is a query. The query takes two parameters: the root cell and an

Figure 4: Associative Memory Architecture (an address layer above an associative layer)

association level. The query returns a complete tree, each node having seven children. As the query runs, it starts with the root cell and retrieves the data from the center node of that memory cell. Then it recursively visits each address in the star points and retrieves the data stored in their respective center nodes. This is one complete association level. The number of levels the query is asked to run determines how many times the recursion runs and how large the resulting tree is. The second function is the insert function. This function simulates the role of the hippocampus in the brain when adding short-term memories to long-term memory. When the SAM observes something, the new data is stored in a cell without addresses. For example, if a SAM met a new person for the first time, the center node might be that person's name, and the star points would hold things such as height, body shape, gender, a voice sample, etc. The insert function would then run on the new short-term memory, querying the memory circuit for each piece of data in the short-term memory cell. When a matching piece of data is found (for example, the height of the person is the same as a height already stored), it replaces the data with the address of that cell, forming an association. If a particular piece of data is not found in the memory module, the function creates a new cell to store that data and places the address of the new cell in the star point of the new memory cell. Once all the star points are converted to addresses, the short-term memory has been converted into long-term memory. Note that the function starts with the center node. If the center node already exists in the memory module, then the insert function should call a merge function, which, instead of inserting the new memory, merges any new star points into the existing memory cell and deletes the remaining short-term cell. This is a very efficient way to store the data, as each piece of data is stored only once, and the associations are also stored only once.

3) The Big Picture: The individual pieces described above are only instruments in the larger symphony. The associative memory cell structure allows us to create a larger structure that correlates information from multiple inputs and makes a decision based on them (Figure 6). The sensors in Figure 6 constantly measure the environment. This quality allows us to do something quite unique. Each memory module in Figure 6 has its own processing unit, and they run independently of one another. As the sensors provide a constant stream of data, the sensor memories constantly re-query their individual memory cells against the data from the sensors. Their output to the associative memory module is the result of this query. For example, if the SAM smells coffee, the smell would be compared against the smell memory, which would scan the smell cells stored in its memory to find the correct cell. Once it finds the right cell, it would run a query based on that cell. This query would find the associations with that smell. It might return all the coffees ever smelled, all the times the SAM has smelled it, all the sounds associated with coffee, any and all coffee mugs, etc.
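The cell structure and the query and insert functions described above can be sketched in ordinary software terms, setting aside the two-layer silicon layout. This is a minimal sketch under stated assumptions: the function names and the dictionary standing in for the memory circuit are invented purely for illustration, and the eighth "E" expansion point is omitted for brevity.

# Minimal software sketch of the associative memory cell, query, and insert described above.
memory = {}        # address -> {"data": ..., "points": [addresses]}; stands in for the circuit
next_addr = 0

def new_cell(data, points=()):
    global next_addr
    addr, next_addr = next_addr, next_addr + 1
    memory[addr] = {"data": data, "points": list(points)}
    return addr

def query(root, levels):
    """Return a tree rooted at `root`: its data plus, recursively, its associated cells."""
    cell = memory[root]
    children = [] if levels == 0 else [query(p, levels - 1) for p in cell["points"]]
    return {"data": cell["data"], "children": children}

def insert(center_data, raw_items):
    """Turn a short-term memory (center datum plus raw observations) into long-term form.
    Each raw item becomes the address of an existing cell with the same data, or of a new cell."""
    by_data = {cell["data"]: addr for addr, cell in memory.items()}
    points = [by_data[item] if item in by_data else new_cell(item) for item in raw_items]
    if center_data in by_data:                      # merge into the existing cell
        addr = by_data[center_data]
        for p in points:
            if p not in memory[addr]["points"]:
                memory[addr]["points"].append(p)
        return addr
    return new_cell(center_data, points)            # otherwise create a fresh long-term cell

# Meeting a new person: the name is the center datum, observed attributes become associations.
alice = insert("Alice", ["170 cm", "soprano voice", "red coat"])
tree = query(alice, levels=1)                       # one association level, as described above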

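A rough sketch of the overall flow (hypothetical module functions, an intersection rule, and probability scores chosen purely for illustration, not a specification) might look like the following; the paragraphs below describe the same flow in prose.

from concurrent.futures import ThreadPoolExecutor

def sensor_query(reading, memory_bank):
    """Stand-in for one sensor memory re-querying its stored cells against a sensor reading."""
    return set(memory_bank.get(reading, []))

def associate(result_sets):
    """Associative memory module: keep only the elements common to every sensor's result."""
    return set.intersection(*result_sets) if result_sets else set()

def decide(common_context, past_decisions):
    """Decision modules: score remembered decisions against the common context, pick the best."""
    scored = {action: prob for action, (context, prob) in past_decisions.items()
              if context & common_context}
    return max(scored, key=scored.get) if scored else "do nothing"

# Illustrative memories: what the smell and visual memories associate with their readings.
smell_memory = {"coffee": ["kitchen", "morning", "mug"]}
visual_memory = {"mug": ["kitchen", "mug", "coffee"]}
readings = [("coffee", smell_memory), ("mug", visual_memory)]

with ThreadPoolExecutor() as pool:                 # each sensor memory runs independently
    results = list(pool.map(lambda r: sensor_query(*r), readings))

common = associate(results)                        # e.g. {"kitchen", "mug"}
past = {"pour a coffee": ({"mug", "kitchen"}, 0.8),
        "go back to sleep": ({"bed"}, 0.6)}
action = decide(common, past)                      # the highest-probability decision that fits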
At the same time, the other sensor memories would be doing the same thing. All these queries are sent to the associative memory module. This module takes all of the queries sent to it from the individual sensor memories, finds the common elements across them, and sends these to the decision memory module. The decision memory module then queries its memory of past decisions and sends those associated with the associative memory module's output to the decision-making module. This module looks at the results from the associative memory, as well as the past decisions, and chooses what to do. As for how the decision-making is structured, we could revisit the Bayesian network mentioned earlier. Part of the decision memory could store existing network structures used in making decisions in different contexts. With this model, the decision is boiled down to a statistical calculation. The decision that has the highest probability of being the correct choice is used over the rest, and this choice is saved for future reference. This flow of information closely resembles the way we make decisions as human beings. This is what makes us such powerful thinking machines. We have the power to perceive multiple inputs and make instantaneous decisions based on those inputs. We essentially have multiple processing units running in parallel, working on smaller problems that sum together into the larger symphony of the brain. This architecture does the same thing: each problem is worked on in parallel, over and over again. Any realistic implementation of this model will require thousands of cores to ensure the instantaneous operation of these queries. The amount of data involved is enormous, and the only way to make the problem tractable is to divide and conquer with massively parallel processing. The good news is that this is within the realm of possibility. Once we perfect 3D-chip manufacturing, we will soon be on our way to constructing the first truly thinking
Figure 6: SAM Architecture (touch, taste, smell, audio, visual, location, and time sensors feed their respective memory modules; these, together with a language lexicon and universal grammar, feed the associative memory, which feeds the decision memory and a decision-making module that produces an ACTION)

Artificial Intelligence (AI).

III. HOW TO TEST SELF-AWARENESS

Psychologists have wrestled for years to find a way to test for self-awareness. The central problem is that if something is self-aware, it can make a conscious decision to hide its self-awareness. Self-awareness and sentience have therefore become almost interchangeable as psychologists have moved these discussions forward.

A. Measuring Sentience

One way to answer the question of self-awareness would be to measure sentience quantitatively. But what do we define as sentient? In science fiction, the word is used to attribute such qualities as consciousness, will, and desire; but is this the correct definition? Do we really mean something else? Sapience has a similar, yet distinct, meaning:

sentience (noun; 1839): 1. a sentient quality or state; 2. feeling or sensation as distinguished from perception and thought (sentience)

sapience (noun; Middle English, from Anglo-French, from Latin sapientia, from sapient-, sapiens; 14th century): wisdom, sagacity (sapience)
Figure 7: Sentience vs. Sapience

The traditional definition of sentience implies the perception of emotions, of feelings. However, when we think of what we really mean, we attribute a superior intellect and intelligence to so-called sentient beings. This is why we bestow the crown of sentience upon ourselves: we feel we are the baseline definition for what is sentient. So, what do we consider the baseline for sentience? Robert A. Freitas Jr. introduced a concept called the sentience quotient (SQ) in a paper he wrote for the Analog Science Fiction/Science Fact magazine entitled "Xenopsychology." The sentience quotient is a measure of a brain's efficiency along a logarithmic scale ranging from -70 to +50, +50 being the most efficient brain possible according to the limits of quantum mechanics. Logically, the most efficient brain will be the smartest brain, as intelligence is the measure of the ability to understand or deal with new situations. The quotient is defined as follows, where I is the brain's information-processing rate in bits per second and M is its mass in kilograms:

SQ = log10(I / M) (3)

What is the SQ of humans according to this equation? For the human brain, the ratio I/M works out to roughly 10^13 bits per second per kilogram, so:

SQ(human) = log10(10^13) = +13 (4)

Using the above equations, we can answer the original question: should we treat a SAM as sentient? If we now compute the sentience quotient of the theoretical superconducting Josephson junction, an electronic device consisting of two superconductors separated by a very thin layer of insulating material, which will probably be the cornerstone of the switch to quantum computing, we can answer as follows:

SQ(Josephson junction) = log10(10^23) = +23 (5)

So, as we move forward in the field of computing, it is more and more apparent that the systems we will build will embody the attributes we ascribe to sentience.

B. Mirror Test

Outside of a sentience measure, we can also try to measure self-awareness qualitatively. The mirror test is a classic test of self-awareness. Gordon Gallup Jr. developed the test in 1970, based partly on research by Charles Darwin. While visiting a zoo, Darwin held a mirror up to an orangutan and recorded the animal's reaction, which included making a series of facial expressions. He noted that the significance of these expressions was ambiguous: they could mean either that the primate thought what it was seeing was another animal, or that it was a new toy. (Marten and Psarakos) The test is fairly straightforward. A mirror is held up to an animal, and if it reacts in a manner that suggests self-recognition, it passes the test for self-awareness. Human infants are not able to pass this test until they reach about eighteen months. Other animals that have passed the test include chimpanzees, gorillas, and bottlenose dolphins. The test, however, is not quite as straightforward for a SAM. Sure, we could hold the robot up to a mirror and it would perceive itself through its eyes. What about before we get there, though? How do you hold up a mirror to something that doesn't have eyes? One option is to observe the robot chatting with itself in a chat room. If it starts to show behavior that indicates it recognizes that the lines it is typing are in fact its own, this would demonstrate that there might be a level of self-awareness. However, there are multiple problems with this test. First, the measure is completely subjective. A SAM would be able to compare the lines of its output to the lines in the chat room, and easily confirm that the lines match its own output. That, however, wouldn't necessarily tell the SAM, or the testers, that the SAM recognizes on a higher level that the lines it sees in the chat room are its own. Even if the SAM

recognizes the lines, that wouldn't necessarily mean the SAM recognizes itself as an entity in and of itself. The result of this reaction would be a Boolean, true-or-false confirmation of its output. This confirmation is normal for a computer, and does not constitute a reaction that we could legitimately consider self-awareness.

C. Turing Test

Alan Turing was a genius of his time. He was one of the pioneers in the field of Artificial Intelligence after World War II, during which he worked at Bletchley Park breaking the German Enigma code. In his 1950 paper "Computing Machinery and Intelligence", Turing proposed a test to determine whether a machine was thinking. The test is built around a game he called the Imitation Game, involving three participants in isolated rooms: a computer, a human participant, and a human judge. The judge can talk to both the human and the computer by typing into a terminal. Both the computer and the human must try to convince the judge that they are human. If the judge cannot consistently tell which is which, then the computer wins the game. This test is probably the best exterior qualitative test we can work with, because it allows us to form a subjective opinion of the SAM while comparing it equitably against a human measure of self-awareness. The problem, though, is that a human exemplar might set a higher bar for self-awareness than we want as a basis of comparison. For example, dolphins are self-aware, but there is no direct comparison we can make between dolphins and humans; we have had to modify what we consider self-aware behavior to make the definition fit our conclusions about dolphins after observation.

D. Theory of Mind

The main problem is that the mind of the SAM would not be directly observable. Sure, we could look at the code, watch the registers flip, and try to watch what is going on, but the complexity of such a system would make this an exercise in futility. This is the same problem psychologists face every day as they assess and treat human minds. One thing that we can measure, however, is the subject's own theory of mind. Theory of mind is one's ability to attribute mental states to oneself and others and to understand that others have mental states different from one's own. The presumption that others have a mind is termed a theory because we can only grasp the existence of our own mind through introspection, and no one has direct access to the mind of another. We assume that other people have minds by analogy with our own and based on social interaction, as observed in joint attention, the functional use of language, and subjective understanding of others' emotions and actions. Having a theory of mind allows one to attribute thoughts, desires, and intentions to others, and to predict or explain their actions and intentions. As defined, it enables one to understand that mental states can be the cause of others' behavior. Being able to attribute mental states to others and understanding them as causes of behavior implies that one must be able to conceive of the mind as a "generator of representations." (Courtin, The impact of sign language on the cognitive development of deaf

children: The case of theories of mind; Courtin and Melot, Metacognitive development of deaf children: Lessons from the appearance-reality and false belief tasks) Many tasks have been developed to test theory of mind. Most have roots in the false-belief task, first carried out by Wimmer and Perner in 1983. In the most common version, the Sally-Anne task, a story is told or acted out involving two characters. For example, the child is shown two dolls, Sally and Anne, who have a basket and a box, respectively. Sally has a marble, which she places in her basket and then leaves. While Sally is gone, Anne takes the marble from the basket and puts it in the box. Sally returns, and the subject is asked where Sally will look for the marble. The subject passes the task if they answer that Sally will look in the basket, where she put the marble; the subject fails the task if they answer that Sally will look in the box, where the subject knows the marble is hidden, even though Sally cannot know it is in the box since she did not see it change places. In order to pass the task, subjects must be able to understand that another's mental representation of the situation is different from their own, and they must be able to predict others' behavior based on that understanding of the situation. This test is probably the most promising of the tests discussed. The most compelling part of testing for a Theory of Mind is that the subjects themselves must understand that another's mental representation may be different from their own. This understanding is based wholly on the subject's understanding that they have a mind, and that it is separate from the other minds of the world. If our SAM has a well-formed Theory of Mind, it follows that the SAM has a notion that it is different from its surroundings. I think, therefore I am: with this realization comes self-awareness, the human sentience that we hold dear. With this realization, a SAM becomes self-aware.

IV. CONCLUSION

There are many socio-ethical issues that come with the development of a Self-Aware Machine, which are beyond the purview of this paper. However, the most pressing issue is what this development means for the human race. We are God's creation, created in his image. Many religious scholars argue that the quality referred to with this statement is our ability to set ourselves apart from the rest of the animal kingdom. We are different, in some very basic ways, from our biological counterparts. We are part of a very small club of species that share the quality of self-awareness, that know where we stand in relation to the rest of the universe. We can appreciate our place in the universe. Carl Sagan put it quite nicely: "Our posturings, our imagined self-importance, the delusion that we have some privileged position in the Universe, are challenged by this point of pale light. Our planet is a lonely speck in the great enveloping cosmic dark. In our obscurity, in all this vastness, there is no hint that help will come from elsewhere to save us from ourselves." (Sagan, 7) There are many benefits to designing machines to work like us: faster scientific research, better city management, automated disaster response; the list goes on. However, it cannot be ignored that when machines become self-aware, they acquire the same quality God gave us. We will have passed on

that spark that made us so special. We will have created in our image, truly. With that, do we become closer to God, or gods ourselves? Machine self-awareness is coming, in our lifetimes. The question is: will we realize that we have accomplished it, and will we be ready for it?

ACKNOWLEDGEMENT

I would like to acknowledge my family for being so supportive through my undergraduate education. I would also like to acknowledge Professor Chiu and Professor McCartney of the Department of Computer Science & Engineering at the University of Connecticut for their advisement and wisdom as I completed my thesis and my degrees.

REFERENCES
[1] United Nations. "The Universal Declaration of Human Rights." 10 December 1948. United Nations. 12 March 2009 <http://www.un.org/overview/rights.html>.
[2] Wallach, Wendell and Colin Allen. Moral Machines: Teaching Robots Right from Wrong. New York: Oxford University Press, 2009.
[3] Wimmer, H. and J. Perner. "Beliefs about beliefs: Representation and constraining function of wrong beliefs in young children's understanding of deception." Cognition 13 (1983): 103-128.
[4] 2001: A Space Odyssey. Dir. Stanley Kubrick. 1968.
[5] American Psychological Association's Council of Representatives. "Ethical Principles of Psychologists and Code of Conduct." 21 August 2002. American Psychological Association. 26 March 2009 <http://www.apa.org/ethics/code2002.pdf>.
[6] Asimov, Isaac. I, Robot. London: Panther Books, 1971.
[7] Asimov, Isaac. "Runaround." I, Robot. London: Panther Books, 1971. 33-51.
[8] Archer, John. Ethology and Human Development. Savage: Barnes & Noble Books, 1992.
[9] Artificial intelligence in fiction. 15 April 2009. 16 April 2009 <http://en.wikipedia.org/wiki/Artificial_intelligence_in_fiction>.
[10] Artificial Intelligence: AI. Dir. Steven Spielberg. 2001.
[11] Baron-Cohen, S. "Precursors to a theory of mind: Understanding attention in others." Natural theories of mind: Evolution, development and simulation of everyday mindreading. Ed. A. Whiten. Oxford: Basil Blackwell, 1991. 233-251.
[12] Battlestar Galactica. Dirs. Michael Rymer, et al. 2004-2009.
[13] Bernstein, Douglas A., et al. Psychology. Eighth edition. Boston: Houghton Mifflin Company, 2008.
[14] Bicentennial Man. Dir. Chris Columbus. 1999.
[15] Bruner, J. S. "Intention in the structure of action and interaction." Advances in infancy research. Ed. L. P. Lipsitt and C. K. Rovee-Collier. Vol. 1. Norwood: Ablex Publishing Corporation, 1981. 41-56.
[16] brainz.org. 13 April 2010 <http://brainz.org/15-coolest-casesbiomimicry/>.
[17] Brooks, Rodney. "Rodney Brooks says robots will invade our lives." Video on TED.com. February 2003. 10 March 2009 <http://www.ted.com/index.php/talks/rodney_brooks_on_robots.html>.
[18] Cerebral cortex. 27 April 2010. 27 April 2010 <http://en.wikipedia.org/wiki/Cerebral_cortex>.
[19] Chomsky, Noam. Knowledge of Language. Westport, CT: Praeger Publishers, 1986.
[20] Council for International Organizations of Medical Sciences (CIOMS), World Health Organization (WHO). "International Ethical Guidelines for Biomedical Research Involving Human Subjects." 2002. 26 March 2009 <http://www.cioms.ch/frame_guidelines_nov_2002.htm>.
[21] Courtin, C. and A.-M. Melot. "Metacognitive development of deaf children: Lessons from the appearance-reality and false belief tasks." Journal of Deaf Studies and Deaf Education 5.3 (2005): 266-276.
[22] Courtin, C. "The impact of sign language on the cognitive development of deaf children: The case of theories of mind." Cognition 77 (2000): 25-31.
[23] Coren, Stanley. How Dogs Think: What The World Looks Like To Them And Why They Act The Way They Do. New York: Free Press, 2004.
[24] Freitas Jr., Robert A. "Xenopsychology." Analog Science Fiction/Science Fact April 1984: 41-53.
[25] Gallup, Jr., Gordon G. "Chimpanzees: Self-Recognition." Science 167.3914 (1970): 86-87.
[26] Gattaca. Dir. Andrew Niccol. 1997.
[27] Georges, Thomas M. Digital Soul: Intelligent Machines and Human Values. Boulder: Westview Press, 2003.
[28] Goodrich, Michael T. and Roberto Tamassia. Data Structures & Algorithms in Java. 4th edition. Hoboken, NJ: John Wiley & Sons, Inc., 2006.
[29] Gordon, R. M. "'Radical' simulationism." Theories of theories of mind. Ed. P. Carruthers and P. K. Smith. Cambridge: Cambridge University Press, 1996.
[30] I, Robot. Dir. Alex Proyas. 2004.
[31] Intel Corporation. "Intel.com." 24 February 2010. Intel Newsroom. 13 April 2010 <http://newsroom.intel.com/servlet/JiveServlet/previewBody/1044-1021-1034/10-03-IntelCorei7-980X_PressDeck.pdf>.
[32] Human brain. 13 April 2010. 12 April 2010 <http://en.wikipedia.org/wiki/Human_brain>.
[33] Honda Motor Co., Ltd. Honda Worldwide. 2009. 22 March 2009 <http://world.honda.com/ASIMO/>.
[34] Josephson, Brian D. "Brian D. Josephson - Nobel Lecture." 12 December 1973. Nobelprize.org. 11 March 2009 <http://nobelprize.org/nobel_prizes/physics/laureates/1973/josephsonlecture.html>.
[35] King, Ross D., et al. "The Automation of Science." Science 3 April 2009: 85-89.
[36] life. 2009. 13 March 2009 <http://www.merriamwebster.com/dictionary/life>.
[37] MacIntyre, Alasdair. After Virtue. Notre Dame, Indiana: University of Notre Dame Press, 2007.
[38] Maslow, A. H. "A theory of human motivation." Psychological Review 50.4 (1943): 370-396.
[39] Marten, Kenneth and Suchi Psarakos. "Evidence of self-awareness in the bottlenose dolphin." Parker, Sue Taylor, Robert W. Mitchell and Maria L. Boccia. Self-awareness in Animals and Humans: Developmental Perspectives. New York: Cambridge University Press, 1995. 361-379.
[40] Mill, John Stuart. On Liberty. Ed. Gertrude Himmelfarb. London: Penguin Books, 1974.
[41] Miller, Jason. "Minding the Animals: Ethology and the Obsolescence of Left Humanism." American Chronicle. 17 May 2009. 1 April 2012 <http://www.americanchronicle.com/articles/view/102661>.
[42] Moravec, Hans. "When will computer hardware match the human brain?" Journal of Evolution and Technology 1 (1998).
[43] Patterson, Francine and Wendy Gordon. "The Case for the Personhood of Gorillas." Cavalieri, Paola and Peter Singer. The Great Ape Project. New York: St. Martin's Griffin, 1993. 58-77.
[44] Pinker, Steven. How the Mind Works. New York, NY: W. W. Norton & Company, Inc., 1997.
[45] Premack, D. G. and G. Woodruff. "Does the chimpanzee have a theory of mind?" Behavioral and Brain Sciences 1.4 (1978): 515-526.
[46] Sagan, Carl. Pale Blue Dot: A Vision of the Human Future in Space. New York: Random House, Inc., 1994.
[47] sapience. 2009. 11 March 2009 <http://www.merriamwebster.com/dictionary/sapience>.
[48] sentience. 2009. 11 March 2009 <http://www.merriamwebster.com/dictionary/sentience>.
[49] Turing, Alan M. "Computing Machinery and Intelligence." Mind LIX.236 (1950): 433-460.
