Ethics and Information Technology (2007) 9:153–164 © Springer 2007

DOI 10.1007/s10676-007-9138-2

AI Armageddon and the Three Laws of Robotics

Lee McCauley
Department of Computer Science, University of Memphis, 374 Dunn Hall, Memphis, TN 38152, USA
E-mail: mccauley@memphis.edu

Abstract. After 50 years, the fields of artificial intelligence and robotics capture the imagination of the general public while, at the same time, engendering a great deal of fear and skepticism. Isaac Asimov recognized this deep-seated misconception of technology and created the Three Laws of Robotics. The first part of this paper examines the underlying fear of intelligent robots, revisits Asimov's response, and reports on some current opinions on the use of the Three Laws by practitioners. Finally, an argument against robotic rebellion is made, along with a call for personal responsibility and suggestions for implementing safety constraints in intelligent robots.

Key words: artificial intelligence, robot, three laws, Asimov, Roboticist's oath, Frankenstein Complex

Introduction

In the late 1940s a young author by the name of Isaac Asimov began writing a series of stories and novels about robots. That young man would go on to become one of the most prolific writers of all time and one of the cornerstones of the science fiction genre. As the modern idea of a computer was still being refined, this imaginative boy of 19 looked deep into the future and saw bright possibilities; he envisioned a day when humanity would be served by a host of humanoid robots. But he knew that fear would be the greatest barrier to success and, consequently, implanted all of his fictional robots with the Three Laws of Robotics. Above all, these laws served to protect humans from almost any perceivable danger. Asimov believed that humans would put safeguards into any potentially dangerous tool and saw robots as just advanced tools.

Throughout his life Asimov believed that his Three Laws were more than just a literary device; he felt that scientists and engineers involved in robotics and Artificial Intelligence (AI) research had taken his Laws to heart (Asimov 1990). If he was not misled before his death in 1992, then attitudes have changed since then. Even though knowledge of the Three Laws of Robotics seems universal among AI researchers, there is the pervasive attitude that the Laws are not implementable in any meaningful sense.

With the field of Artificial Intelligence now 50 years old and the extensive use of AI products (Cohn 2006), it is time to re-examine Asimov's Three Laws from foundations to implementation. In the process, we must address the underlying fear of uncontrollable AI.

The "Frankenstein Complex"

In 1920 a Czech author by the name of Karel Capek wrote the widely popular play R.U.R., which stands for Rossum's Universal Robots. The word "robot," which he or, possibly, his brother Josef coined, comes from the Czech word "robota" meaning "drudgery" or "servitude" (Jerz 2002). As typifies much of science fiction since that time, the story is about artificially created workers that ultimately rise up to overthrow their human creators. Even though Capek's Robots were made out of biological material, they had many of the traits associated with the mechanical robots of today: human in shape but, nonetheless, devoid of some human elements, most notably, for the sake of the story, reproduction.

Even before Capek's use of the term "robot," however, the notion that science could produce something that it could not control had been explored most acutely by Mary Shelley under the guise of Frankenstein's monster (Shelley 1818).
The full title of Shelley's novel is "Frankenstein, or The Modern Prometheus." In Greek mythology Prometheus brought fire (technology) to humanity and, consequently, was soundly punished by Zeus. In medieval times, the story of Rabbi Judah Loew told of how he created a man from the clay (in Hebrew, a "golem") of the Vltava river in Prague and brought it to life by putting a shem (a tablet with a Hebrew inscription) in its mouth. The golem eventually went awry, and Rabbi Loew had to destroy it by removing the shem.

What has been brought to life here, so to speak, is the almost religious notion that there are some things that only God should know. While there may be examples of other abilities that should remain solely as God's bailiwick, it is the giving of Life that seems to be the most sacred of God's abilities. But Life, in these contexts, is deeper than mere animation; it is the imparting of a soul. For centuries, scientists and laymen alike have looked to distinct abilities of humans as evidence of our uniqueness – of our superiority over other animals. Perhaps instinctively, this search has centered almost exclusively on cognitive capacities. Communication, tool use, tool formation, and social constructs have all, at one time or another, been pointed to as defining characteristics of what makes humans special. Consequently, many have used this same argument to delineate humans as the only creatures that possess a soul. To meddle in this area is to meddle in God's domain. This fear of man broaching, through technology, into God's realm and being unable to control his own creations is referred to as the "Frankenstein Complex" by Isaac Asimov in a number of his essays (most notably Asimov 1978).

The "Frankenstein Complex" is alive and well. Hollywood seems to have re-kindled the love/hate relationship with robots through a long string of productions that each rely on this fear as a central element. To make the point, here is a partial list: Terminator (all three); I, Robot; A.I.: Artificial Intelligence; 2010: a Space Odyssey; Cherry 2000; D.A.R.Y.L.; Blade Runner; Short Circuit; Electric Dreams; the Battlestar Galactica series; Robocop; Metropolis; Runaway; Screamers; The Stepford Wives; and Westworld. This, of course, leaves out the numerous news stories, documentaries, and made-for-TV movies that air regularly. Even though several of these come from sci-fi literature, the fact remains that the predominant theme chosen when robots are on the big or small screen involves their attempt to harm people or even all of humanity. This is not intended as a critique of Hollywood. To the contrary: where robots are concerned, the images that people can most readily identify with, those that capture their imaginations and tap into their deepest fears, involve the supplanting of humanity by its metallic offspring. In typical Hollywood style, they are using a pre-existing condition for entertainment purposes. Unfortunately, in doing so, they also reinforce this fear.

Even well respected individuals in both academia and industry have expressed their belief that humans will engineer a new species of intelligent machines that will replace us. Ray Kurzweil (1999, 2005), Kevin Warwick (2002), and Hans Moravec (1998) have all weighed in on this side. Bill Joy, co-founder of Sun Microsystems, expressed in a 2000 Wired Magazine article (Joy 2000) his fear that artificial intelligence would soon overtake humanity and would, inevitably, take control of the planet for one purpose or another. The strongest point in their arguments hinges on the assumption that the machines will become too complicated for humans to build using standard means and that humans will, therefore, relinquish the design and manufacture of future robots to intelligent machines themselves. Joy argues that robotics, genetic engineering, and nanotechnology pose a unique kind of threat that the world has never before faced: "robots, engineered organisms, and nanobots share a dangerous amplifying factor: They can self-replicate. A bomb is blown up only once – but one bot can become many, and quickly get out of control." Clearly, Joy is expressing the underpinnings of why the public at large continues to be gripped by the Frankenstein Complex.

No robotic apocalypse is coming

Is the extinction of humanity at the hands of autonomous robots a real possibility? The truth is that the likelihood of a robotic uprising that destroys or subjugates humanity is quite low. As pointed out previously, the primary argument that robots will take over the world is that they will eventually be able to design and manufacture themselves in large numbers, thereby activating the inevitability of evolution. Once evolution starts to run its course, humanity is out of the loop and will eventually be rendered superfluous. On the surface this seems like a perfectly logical argument that strikes right at the heart of the Frankenstein Complex. However, there are several key assumptions that must hold in order for this scenario to unfold as stated.

First, there is the underlying assumption that large numbers of highly intelligent robots will be desired by humans. At first, this might seem reasonable. Why wouldn't we want lots of robots to do all the housework, dangerous jobs, or any menial labor? If those jobs require higher-order intelligence to accomplish, then we already have a general-purpose machine that is cheaply produced and in abundance – humans.
If they do not require higher-order intelligence, then a machine with some intelligence can be built to handle that specific job more economically than a highly intelligent general robot. In other words, we may have smarter devices that take over some jobs and make others easier for humans, but those devices will not require enough intelligence to even evoke a serious discussion of their sentience. Take, for example, the popular Roomba vacuuming robot produced by iRobot ("iRobot Corporation: Home Page" 2006). Most vacuum cleaners manufactured today must take into account that the user is a human, thereby dictating certain constraints on the overall design. There must be a handle, it must be relatively lightweight, the button(s) must be easy to reach and understandable by humans, etc. The Roomba, along with any future versions of such a machine, however, does not have to follow these constraints. The current models simply roll around randomly, trying not to get stuck or fall down stairs. When it gets low on batteries, the robot follows an RF beacon back to its recharging station. From the perspective of robotics researchers, the Roomba is a very dumb robot. Even so, it gets the job done. From an engineering perspective, this is an effective tool for its designed task. Future versions might add a memory and mapping function that would make it more efficient at cleaning all areas within a reasonable time rather than relying on random chance, but the fact remains that this task does not require higher-order intelligence.

One might argue that a general-purpose robot would be able to use the currently existing tools and would, therefore, be more cost effective in the long run. Such a robot would necessarily be humanoid due to the fact that our existing tools are designed for human use. It would also need the dexterity, balance, and ability to adapt to changing situations of a human. This kind of a robot would be much more complex than any motor vehicle currently on the road and would likely cost at least as much as a luxury car. This automatically relegates it to a much smaller market. On the other hand, a specialized semi-autonomous robot, like the Roomba, for a specific task does not need to cost that much more than a comparable device to be used by a human. The current price for a Roomba ranges from $129 to $349, while a standard vacuum cleaner will cost anywhere from $129 to $549 at your local department or home improvement store. If the average person needs to buy a new vacuum cleaner and they could purchase either a standard vacuum, a Roomba-like robot, or a standard vacuum and a very expensive general-purpose robot, which option is most likely to be appealing? Although the verdict is still out on whether a standard vacuum or a Roomba-like system will grace most households, only the very rich would even seriously consider the third option. This same logic holds for any appliance you may need in your home. Consequently, we will see the mass production of dumb but smart-enough devices, but not general-purpose robots or artificial intelligences. This is not to say that we won't create in some lab a human-level artificial intelligence. We will do it because we can. These will be expensive research oddities that will get a lot of attention and raise all of the hard philosophical questions, but their numbers will be low and they will be closely watched because of their uniqueness.

Another assumption underlying our doomsday of reproducing robots is that humans would never actually check to see if the robots produced deviated from the desired output. Especially if they are being mass produced, this seems quite out of the question. Approximately 280 cars were sacrificed to crash tests in 2006 by the Insurance Institute for Highway Safety and the National Highway Traffic Safety Administration alone. Every model sold in the United States undergoes a huge battery of tests before it is allowed on the streets. Why would robots be any less regulated? This further reduces the chances of evolutionary-style mutations. Of course there will still be defects that crop up for a given robot that did not show up in the tests, just as with automobiles. Also, just as with automobiles, these defects will be dealt with and not passed on to future generations of robots.

Finally, the assumption is made that evolution will occur on an incredibly fast time scale. There are a couple of ways that this might come about. One argument goes that since these machines will be produced at such a high rate of speed, evolution will happen at a prodigious rate and will catch humans by surprise. How fast would intelligent robots evolve? In 1998 just fewer than 16.5 million personal computers were manufactured in the US. While computer components are built all around the world, the vast majority are assembled in the US. For argument's sake, let's say that the world's production of computers is twice that, some 33 million. Let's also assume that that number has quadrupled since 1998, to 132 million computers manufactured worldwide in one year. These are moderately complex machines created at a rate at least as fast as our future intelligent robots might be. In 2006, there were more than 130 million human births on the planet, about equal to our hypothetical number of computers produced.

Evolution works, outside of sexual reproduction, by making mistakes during the copying of one individual – a mutation.
If we assume that our manufacturing processes will make mistakes on par with biological processes, then the evolution of our reproducing machines will be roughly equal to that of human evolution – if one discounts the effect of genetic crossover via sexual reproduction. Furthermore, each robot produced must have all of the knowledge, capability, resources, and time to build more robots; otherwise the mutations don't propagate and evolution goes nowhere. Why would we give our house-cleaning robot the ability to reproduce on its own? Even if we allowed for the jumpstarting of the process by already having a fairly intelligent robot running the manufacturing show, this would be comparable to starting with an Australopithecus and waiting to come up with a Homo sapiens sapiens. To sum up, if we start with a fairly intelligent seed robot that can reproduce, and it builds copies of itself, and each one of the copies builds copies of themselves, on and on to create large numbers of reproducing robots, then it will take thousands of years for the process to create any meaningful changes whatsoever, much less a dominant super species. There are no likely circumstances under which this sort of behavior would go on unchecked by humans.

Up to this point we have discussed primarily physical robots. Just as much fear may exist for general artificial intelligence (AI). AI is a broad term typically encompassing any human-made system that performs tasks considered to require some level of intelligence. Robotics, therefore, is a subclass of AI that includes physical animation. There is also a fuzzy area in which an AI controls a simulated rather than physical robot, but from the AI's perspective there is really no difference. One possible way of increasing the rate of evolution might be to use a more directed approach that focuses on the AI rather than the physical robotic embodiment. Humans could build a robot or AI with the sole task of designing and building a new AI that is better at designing and building AI, which builds another AI, etc. This directed evolution is likely to be much quicker and is also likely to be something that an AI researcher might try. This would also be a very expensive endeavor. Even if the AI's "body" is virtual, there must still be a physical computer to house it. Either the AI is custom built in hardware with each successive version, or it is created in a virtual manner and run within some larger system. This system would likely need to be quite large if the AI is intended to be truly intelligent. As the versions become more and more adept and complex, the system that houses the AI would need to be increasingly complex, and ultimately a proprietary machine would need to be created whose purpose would be to run the AI. We are then back to the hardware versions and progression from there. Another problem with this notion is that the very first AI in this chain, since we are not using evolutionary processes, will need a great deal of knowledge regarding the nature of intelligence in order to effectively guide the development. In other words, there must be some criteria against which the AI can determine its own success or failure, and it must then have enough knowledge of its own construction to purposefully design a new version of itself. Solving the problem of creating a truly intelligent machine using this method is, therefore, a "catch-22": we would have to already know how to create an intelligent machine before we could create a machine to create intelligence.

One might still argue that this software-only AI could be implemented using some form of learning or genetic algorithm based on some general intelligence measure. Even if this is implemented at some point in the future, it is not something that will be accessible by your everyday hacker, due to the cost, and will, therefore, be relegated to a small number of academic institutions or corporations. The system needed to run this genetic algorithm would need hundreds or thousands of times more computational resources than the one described in the previous paragraph. Here we are talking not about a single intelligent entity redesigning itself; instead, we are talking about hundreds of such entities evolving in a virtual environment (a real environment would imply physical embodiment). For evolution to occur in some meaningful way, this environment would need to be just as complex and dynamic as the real world, with appropriate sensory/motor interactions possible by the evolving agents. Just running the environment in real time would require huge amounts of computing power, not to mention the huge amounts needed for each of the agents present in any given generation.

Even so, if we were to create some form of intelligent creature in such a manner, would it be dangerous? The first requirement would be that there is some connection between the virtual and the real world. Secondly, manipulation within the real world must be part of the fitness function of the evolving individuals. Why is this the case? For any evolutionary process, there must be a payoff for any sustained environmental adaptation. Without a payoff, evolutionary processes will steer the development of a population towards some other configuration that does have a payoff. For artificial genetic algorithms this is encapsulated in the fitness function. Even if the fitness function is essentially "survival of the fittest," there must be some dependency on actions in the real world for survival; otherwise the resulting agents will not have any ability to interact in the real world and, therefore, would be of no threat. For a more detailed discussion of evolutionary algorithms and genetic processes, see (Holland 1975).
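
For readers unfamiliar with the mechanics, the sketch below shows a minimal genetic algorithm of the kind discussed above. Everything in it (population size, mutation rate, and especially the fitness function) is an illustrative assumption, not a description of any real system; the point to notice is that selection pressure comes entirely from the fitness function, so traits with no payoff simply drift.

```python
import random

# Minimal genetic algorithm, for illustration only. Fitness rewards only
# the first half of the genome; the unrewarded second half receives no
# selection pressure and drifts, mirroring the point above that the
# payoff (the fitness function) steers everything evolution preserves.

GENOME_LEN = 20
POP_SIZE = 50
MUTATION_RATE = 0.01  # per-bit copying error: a "mutation"

def fitness(genome):
    return sum(genome[:GENOME_LEN // 2])  # only rewarded bits count

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(200):
    # "Survival of the fittest": keep the better half and refill the
    # population with mutated copies of the survivors.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE // 2)]

best = max(population, key=fitness)
print(best[:GENOME_LEN // 2])  # converges toward all 1s (rewarded)
print(best[GENOME_LEN // 2:])  # stays essentially random (no payoff)
```
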
The only scenario that does not ultimately involve a physical presence would be evolving agents whose environment, or part of it, is the Internet. While there is potential for great harm here, it is likely to be on the scale of a particularly nasty computer virus, not a world-ending cataclysm.

Is there some set of circumstances in which a genetic algorithm could evolve in some simulation of a subset of the real world that, nonetheless, has the potential to obliterate humanity? Yes. To borrow one of the few plausible scenarios from Hollywood, a situation like the one presented in the 1983 movie WarGames is loosely possible. In this movie, a computer system is designed to continuously play and learn from a realistic game of thermonuclear war, with switches built in so that, in the event of a real war, it could take over the launching of the missiles. At one point, the computer gets confused between the game and reality and begins the countdown to the launch of the real nuclear missiles. This example demonstrates the need for a virtual environment that in some way mirrors the real world and that also possesses some connection between the system and the real world. If cases such as this present themselves, it is the AI creator's responsibility to consider the ramifications of what they create.

Note that to have any real possibility of destroying humanity, even by accident, the AI had to ultimately have some physical way of causing harm. At the point in the WarGames movie when the AI takes control of the missile launch sequence, it has become a form of physical robot. For this reason, the discussion that follows applies both to fully autonomous robots and to largely disembodied AI that, nonetheless, have some way to affect the real world.

The Three Laws of Robotics

As demonstrated in the preceding section, there is still the possibility of great harm, however remote. Also, even if the Frankenstein Complex is largely unfounded with regards to the destruction of humanity, there is still the visceral fear that individuals have with regards to the robot standing in front of them – the one that could malfunction and hurt them. This sort of situation is much more likely. How can we, as robotics researchers, dissuade this fear? Isaac Asimov, while still a teenager, noticed the recurring theme of "man builds robot – robot kills man" in literature of the time and felt that this was not the way that such an advanced technology would unfold. He made a conscious effort to combat the "Frankenstein Complex" in his own robot stories (Asimov 1990).

What are the three laws?

Beyond just writing stories about "good" robots, Asimov imbued them with three explicit laws, first expressed in print in the story "Runaround" (Asimov 1942):

1. A robot may not harm a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
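
Asimov never specified a mechanism, but one way to make the priority ordering concrete is to read the Laws as a strict lexicographic preference over candidate actions: first minimize harm to humans, then disobedience, then harm to the robot itself. The sketch below does exactly that; the three estimator functions and the scenario are invented stand-ins for precisely the judgments ("harm," "obedience," "self-preservation") whose difficulty is discussed later in this paper.

```python
# Illustrative reading of the Three Laws as a lexicographic preference.
# The estimators are hypothetical: computing "harm" is the hard part.

def three_laws_choice(actions, est_human_harm, est_disobedience, est_self_harm):
    # Python compares tuples element by element, so the First Law
    # estimate dominates, then the Second, then the Third.
    return min(actions, key=lambda a: (est_human_harm(a),
                                       est_disobedience(a),
                                       est_self_harm(a)))

# Invented scenario: the robot is ordered to fetch something quickly.
actions = ["cross the wet floor", "take the long dry route", "refuse the order"]
choice = three_laws_choice(
    actions,
    est_human_harm=lambda a: 0.0,  # no human is endangered either way
    est_disobedience=lambda a: 1.0 if a == "refuse the order" else 0.0,
    est_self_harm=lambda a: 0.5 if a == "cross the wet floor" else 0.0,
)
print(choice)  # -> "take the long dry route"
```
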
Asimov's vision

Many of Asimov's robot stories were written in the 1940s and 1950s, before the advent of the modern electronic computer. They tend to be logical mysteries where the characters are faced with an unusual event or situation in which the behavior of a robot implanted with the Three Laws of Robotics is of paramount importance. Every programmer has had to solve this sort of mystery. Knowing that a computer (or robot) will only do what it is told, the programmer must determine why it didn't do what he or she told it to do. It is in this way that Asimov emphasizes both the reassuring fact that the robots cannot deviate from their programming (the Laws) and the limitations of that programming under extreme circumstances.

Immutable

A key factor of Asimov's Three Laws is that they are immutable. He foresaw a robot brain of immense complexity that could only be built by humans at the mathematical level. Therefore, the Three Laws were not the textual form presented above, but were encoded in mathematical terms directly into the core of the robot brain. This encoding could not change in any significant manner during the course of the robot's life. In other words, learning was something that a robot did only rarely and with great difficulty. Instead, Asimov assumed that a robot would be programmed with everything it needed to function prior to its activation. Any additional knowledge it needed would be in the form of human commands, and the robot would not express any ingenuity or creativity in the carrying out of those commands.
The one exception to this attitude can be seen in "The Bicentennial Man" (Asimov 1976). In the story, Andrew Martin is a robot created in the "early days" of robots, when the mathematics governing the creation of the robots' positronic brain was imprecise. Andrew is able to create artwork and perform scientific exploration. A strong plot element is the fact that, despite Andrew's creativity, he is still completely bound by the Three Laws, culminating at one point in a scene where two miscreants almost succeed in ordering Andrew to dismantle himself. The point being made is that, if a friend had not arrived on the scene in time, the robot would have been forced by the Three Laws to obey the order given to it by a human, even at the unnecessary sacrifice of its own life. Even with the amazing accomplishments of this imaginative robot, Andrew Martin, the fictional company that built him saw his very existence as an embarrassment solely because of the fear that his intellectual freedom fueled in the general populace – the "Frankenstein Complex." If a robot could be intellectually creative, couldn't it also be creative enough to usurp the Three Laws?

Asimov never saw this as a possibility, although he did entertain the eventual addition of a zeroth law that was essentially a rewrite of the first law with the word "human" replaced with "humanity". This allowed a robot with the zeroth law to harm or allow a human to come to harm if it was, in its estimation, to the betterment of humanity (Asimov 1985). Asimov's image for the near future of robotics, however, viewed robots as complicated tools and nothing more. As with any complicated machine that has the potential of harming a human during the course of its functioning, he assumed that the builders had the responsibility of providing appropriate safeguards (Asimov 1978). One would never think of creating a band saw or a nuclear reactor without reasonable safety features.

Explicit

For Asimov, the use of the Three Laws was just that simple; they were an explicit elaboration of implicit laws already in effect for any tool humans have ever created (Asimov 1978). He did not, however, think that robots could not be built without the Three Laws. Asimov simply felt that reasonable humans would naturally include them by whatever means made sense, whether they had his Three Laws in mind or not. A simple example would be the emergency cutoff switch found on exercise equipment and most industrial robots being tended by humans. There is a physical connection created between the human and the machine. If the human moves out of a safety zone, then the connection is broken and the machine shuts off. This is a simplistic form of sensor designed to convey when the machine might injure the human.
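
In software terms the cutoff switch amounts to a few lines of interlock logic. The sketch below is schematic, and the sensor and motor interfaces are invented for illustration; on real equipment this check would typically live in a hardwired safety circuit rather than in code.

```python
# Schematic dead-man interlock, invented for illustration. The machine
# may only run while the physical connection to the human is intact.

class SafetyInterlock:
    def __init__(self, tether_connected, stop_motors):
        self.tether_connected = tether_connected  # () -> bool sensor read
        self.stop_motors = stop_motors            # () -> None actuator halt

    def check(self):
        # Human out of the safety zone -> connection broken -> shut off.
        if not self.tether_connected():
            self.stop_motors()
            return False  # halted
        return True       # safe to keep operating

# Hypothetical usage inside the machine's control loop:
#   while interlock.check():
#       run_one_cycle()
```
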
Each machine or robot can be very different in its function and structure; therefore, the mechanisms employed to implement the Three Laws are necessarily different. Asimov's definition of a robot was somewhat homogeneous in that their shape was usually human-like and their positronic brains tended to be mostly for general-purpose intelligence. This required that the Three Laws be based exclusively in the mechanism of the brain – less visible to the general public.

Despite Asimov's firm optimism in science and humanity in general, the implementation of the Three Laws in their explicit form and, more importantly, public belief in their immutability was a consistent struggle for the characters in his stories. It was the explicit nature of the Three Laws that made the existence of robots possible by directly countering the "Frankenstein Complex." Robots in use today are far from humanoid, and their safety features are either clearly present or their function is not one that would endanger a human. The Roomba vacuum robot ("iRobot Corporation: Home Page" 2006) comes to mind as a clear example. One of the first household-use robots, the Roomba is also one of the first to include sensors and behaviors that implement at least some of the Three Laws: it uses a downward-pointing IR sensor to avoid stairs and will return to its charging station if its batteries get low. Otherwise, the Roomba's nature is not one that would endanger a person.
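
Read as code, those behaviors are strikingly simple. The sketch below is invented for illustration (it is not iRobot's firmware), but it suggests how a handful of cheap sensor tests can loosely serve the First and Third Laws without any model of what a human is.

```python
import random

# Invented sketch of Roomba-like reactive control; not actual firmware.

def control_step(robot):
    if robot.cliff_detected():       # downward IR sensor: don't fall downstairs
        robot.back_up_and_turn()     # loosely serves Laws One and Three
    elif robot.battery_low():        # self-preservation: go recharge
        robot.seek_dock_beacon()     # home in on the charging station
    elif robot.bumper_pressed():     # unstick after bumping something
        robot.turn(random.uniform(90, 180))
    else:
        robot.drive_forward()        # default behavior: keep vacuuming

# A loop such as `while robot.powered_on(): control_step(robot)` is the
# entire "brain" such a device needs.
```
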
Current opinions

Asimov believed that the Three Laws were being taken seriously by robotics researchers of his day and that they would be present in any advanced robots as a matter of course. In preparation for this writing, a handful of emails were sent out asking current robotics and artificial intelligence researchers what their opinion is of Asimov's Three Laws of Robotics and whether the laws are implementable. Not a single respondent was unfamiliar with the Three Laws, and several seemed quite versed in the nuances of Asimov's stories. From these responses it seems that the ethical use of technology, and of advanced robots in particular, is very much on the minds of researchers. The use of Asimov's laws as a way to answer these concerns, however, is not even a topic of discussion except, perhaps, in South Korea (Lovgren 2007) and Japan (Christensen 2006). Despite the familiarity with the subject, it is not clear whether many robotics researchers have ever given much thought to the Three Laws of Robotics from a professional standpoint.
Nor should they be expected to. Asimov's Three Laws of Robotics are literary devices, not engineering principles, any more than his fictional positronic brain is based on scientific principles. What's more, many of the researchers responding pointed out serious issues with the laws that may make them impractical to implement.

Ambiguity

By far the most cited problem with Asimov's Three Laws is their ambiguity. The first law is possibly the most troubling, as it deals with harm to humans. James Kuffner, Assistant Professor at The Robotics Institute of Carnegie Mellon University, replied in part:

"The problem with these laws is that they use abstract and ambiguous concepts that are difficult to implement as a piece of software. What does it mean to 'come to harm'? How do I encode that in a digital computer? Ultimately, computers today deal only with logical or numerical problems and results, so unless these abstract concepts can be encoded under those terms, it will continue to be difficult (Kuffner, personal communication)."

Doug Blank, Associate Professor of Computer Science at Bryn Mawr College, expressed a similar sentiment:

"The trouble is that robots don't have clear-cut symbols and rules like those that must be imagined necessary in the sci-fi world. Most robots don't have the ability to look at a person and see them as a person (a 'human'). And that is the easiest concept needed in order to follow the rules. Now, imagine that they must also be able to recognize and understand 'harm', 'intentions', 'other', 'self', 'self-preservation', etc., etc., etc. (Blank, personal communication)"

While Asimov never intended for robots with the Three Laws to be required to understand the English form, the point being made above is quite appropriate. It is the encoding of the abstract concepts implied in the laws within the huge space of possible environments that seems to make this task insurmountable. Many of Asimov's story lines emerge from this very aspect of the Three Laws, even as many of the finer points are glossed over or somewhat naïve assumptions are made regarding the cognitive capacity of the robot in question. A word encountered by a robot as part of a command, for example, may have a different meaning in different contexts. This means that a robot must use some internal judgment in order to disambiguate the term and then determine to what extent the Three Laws apply. As anyone that has studied natural language understanding (NLU) can tell you, this is by no means a trivial task in the general case. The major underlying assumption is that the robot has an understanding of the universe from the perspective of the human giving the command. Such an assumption is barely justifiable between two humans, much less between a human and a robot.
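
To make the disambiguation problem concrete, here is a toy sketch in the spirit of the Lesk algorithm: pick the sense of a command word whose dictionary gloss shares the most words with the surrounding context. The sense inventory is invented, and real natural language understanding is vastly harder; the sketch only illustrates how shallow such "internal judgment" is compared with the shared understanding the Laws presuppose.

```python
# Toy Lesk-style word-sense disambiguation; the inventory is invented.

SENSES = {
    "charge": [
        "replenish a battery with electrical energy",
        "rush forward to attack an enemy",
    ],
}

def disambiguate(word, context):
    context_words = set(context.lower().split())
    # Pick the gloss sharing the most words with the command's context.
    return max(SENSES[word],
               key=lambda gloss: len(set(gloss.split()) & context_words))

print(disambiguate("charge", "robot, charge your battery before we leave"))
# -> "replenish a battery with electrical energy"
```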

Understanding the effect of an action

In the second novel of Asimov's Robot series, The Naked Sun, the main character, Elijah Baley, points out that a robot could inadvertently disobey any of the Three Laws if it is not aware of the full consequences of its actions (Asimov 1957). While the character in the novel rightly concludes that it is impossible for a robot to know the full consequences of its actions, there is never an exploration of exactly how hard this task is. This was also a recurring point made by several of those responding. Doug Blank, for example, put it this way:

"[Robots] must be able to counterfactualize about all of those [ambiguous] concepts, and decide for themselves if an action would break the rule or not. They would need to have a very good idea of what will happen when they make a particular action (Blank, personal communication)."

Aaron Sloman, Professor of Artificial Intelligence and Cognitive Science at The University of Birmingham, described the issue in a way that gets at the sheer immensity of the problem:

"Another obstacle involves potential contradictions as the old utilitarian philosophers found centuries ago: what harms one may benefit another, etc., and preventing harm to one individual can cause harm to another. There are also conflicts between short term and long term harm and benefit for the same individual (Sloman, personal communication; Sloman 2006)."

David Bourne, a Principal Scientist of Robotics at Carnegie Mellon, put it this way:

"A robot certainly can follow its instructions, just the way a computer follows its instructions. But, is a given instruction going to crash a program or drive a robot through a human being? In the absolute, this answer is unknowable! (Bourne, personal communication)"

It seems, then, we are asking that our future robots be more than human – they must be omniscient. More than omniscient: they must be able to make value judgments on what action on their part will be most beneficial (or least harmful) to a human or even humanity in general. Obviously we must settle for something that is a little more realistic.

General attitudes

Even though Asimov attempted to answer these issues in various ways in multiple stories and essays, the subjects of his stories always involved humanoid robots with senses and actions at least as good as, and often better than, humans. This aspect tends to suggest that we should expect capabilities that are on par with humans. Asimov encouraged this attitude and even expressed through his characters that a humaniform robot (one that is indistinguishable externally from a human) with the Three Laws could also not be distinguished from a very good human through its actions. "To put it simply – if Byerley [the possible robot] follows all the Rules of Robotics, he may be a robot, and may simply be a very good man," as spoken by Susan Calvin in the 1946 story "Evidence" (Asimov 1946). Furthermore, Asimov often has his characters espouse how safe robots are. They are, in Asimov's literary universe, almost impossibly safe.

It is possibly the specter of this essentially unreachable goal that has made Asimov's Three Laws little more than an imaginative literary device in the minds of present-day robotics researchers. Maja Mataric, Founding Director of the University of Southern California Center for Robotics and Embedded Systems, said, "[the Three Laws of Robotics are] not something that [are] taken seriously enough to even be included in any robotics textbooks, which tells you something about [their] role in the field (Mataric, personal communication)." This seems to be the implied sentiment from all of the correspondents despite their interest in the subject.

Aaron Sloman, however, goes a bit further and brings up a further ethical problem with Asimov's three laws:

"I have always thought these were pretty silly: they just express a form of racialism or speciesism.

If the robot is as intelligent as you or I, has been around as long as you or I, has as many friends and dependents as you or I (whether humans, robots, intelligent aliens from another planet, or whatever), then there is no reason at all why it should be subject to any ethical laws that are different from what should constrain you or me (Sloman, personal communication; Sloman 2006)."

It is Sloman's belief that it would be unethical to force an external value system onto any creature, artificial or otherwise, that has something akin to human-level or better intelligence. Furthermore, he does not think that such an imposed value system will be necessary:

"It is very unlikely that intelligent machines could possibly produce more dreadful behavior towards humans than humans already produce towards each other, all round the world even in the supposedly most civilized and advanced countries, both at individual levels and at social or national levels.

Moreover, the more intelligent the machines are the less likely they are to produce all the dreadful behaviors motivated by religious intolerance, nationalism, racialism, greed, and sadistic enjoyment of the suffering of others.

They will have far better goals to pursue (Sloman, personal communication; Sloman 2006)."

This same sentiment has been expressed previously by Sloman and others (Sloman 1978; Worley 2004). These concerns are quite valid and deserve discussion well beyond the brief mention here. At the current state of robotics and artificial intelligence, however, there is not much danger of having to confront these particular issues in the near future as they apply to human-scale robots. For the remainder of this paper, we will, therefore, be discussing only robots that do not possess human-level intelligence or anything resembling it.

The three laws of tools

So, to recap, we have shown that the likelihood of a species-ending catastrophe at the hands of a malicious AI or robot horde is infinitesimally small, if not zero. A large-scale accident because of a malfunctioning AI or robot might be possible but highly unlikely, due to the very small number of situations that would make this possible. Instead, the future will contain a large number of much less intelligent but quite capable robotic devices. A small-scale accident at the hands of a malfunctioning robot, however, might very well be possible. We have looked to Isaac Asimov's Three Laws of Robotics as a way to address the Frankenstein Complex that the general public is likely to experience when faced with this new breed of robotic tools, but found that they cannot be directly or fully implemented due to their inherent ambiguity and the high level of intelligence needed to deal with it.

To apply the Three Laws to these less intelligent robotic devices, two questions must be asked: How much disambiguation should we expect, and at what level should a robot understand the effect of its actions?
The answer to these questions may have been expressed by Asimov himself. It was his belief that when robots of human-level intelligence are built, they will have the Three Laws. Not just something like the Three Laws, but the actual three laws (Asimov 1990). At first glance this seems like a very bold and egotistical statement. While Asimov was less than modest in his personal life, he argued that the Three Laws of Robotics are simply a specification of implied rules for all human tools. He stated them as follows (Asimov 1990):

1. A tool must be safe to use.
2. A tool must perform its function, provided it does so safely.
3. A tool must remain intact during use unless its destruction is required for safety or unless its destruction is part of its function.

From this perspective, the answers to both of the questions expressed earlier in this section emerge. How much disambiguation should we expect? Whatever level makes sense for the level of knowledge of the robot in question. At what level should a robot understand the effect of its actions? To whatever level is appropriate for its level of knowledge. Yes, these may seem like broad answers that give us nothing specific and, therefore, nothing useful. However, given a specific robot with specific sensors, specific actuators, and a specific function, these answers become useful. We are no longer faced with the prospect of having to create a god-like robot whose function is to vacuum our floors; instead, we are let off the hook, so to speak. Our robot only has to perform in accordance with the danger inherent in its particular function. It might, for example, be reasonable to expect that the Roomba vacuuming robot have a sensor on its lid that detects an approaching object (possibly a foot) and moves quickly to avoid being stepped on or otherwise damaged. This satisfies both the first and the third laws of robotics without requiring that the robot positively identify the approaching object as a human appendage. The third law is satisfied by allowing the robot to avoid damage, while the first law is upheld by reasonably attempting to avoid making a person fall and hurt themselves. There is no need for complete knowledge or even positive identification of a person, only knowledge enough to be reasonably safe given the robot's inherent danger. To uphold the Laws does not require a fully autonomous robot with full human capacities; it only requires that something of human-level intelligence, possibly a human, takes responsibility for upholding the laws within the particular robot being produced.

Other questions

We have now explored the application of the Three Laws at both extremes, from a human-level-intelligent robot to a purely reactive one. While the first extreme of human-level intelligence is worthy of discussion from a philosophical standpoint and must be addressed at some point (see Sloman 2006), it is likely to be many decades before AI will have progressed to a point where these issues are pressing. For the purposes of this paper, we will politely table the question of the point at which our intelligent tools become sentient, or at least sentient enough to be subject to the issues Sloman suggests. On the other hand, the field has progressed quite a ways from the days of hard-coded rules found in reactive systems. The following questions, therefore, can be addressed with respect to smart but not sentient robots.

Should the laws be implemented?

By whatever method is suitable for a specific robot and domain, yes. To do otherwise would be to abdicate our responsibility as scientists and engineers. The more specific question of which laws should be implemented arises at this point. Several people have suggested that Asimov's Three Laws are insufficient to accomplish the goals for which they are designed (Ames 2004; Clarke 1994; Sandberg 2004) and some have postulated additional laws to fill some of the perceived gaps (Clarke 1994). For instance, Clarke's revamped laws are as follows:

The Meta-Law
A robot may not act unless its actions are subject to the Laws of Robotics.

Law Zero
A robot may not injure humanity, or, through inaction, allow humanity to come to harm.

Law One
A robot may not injure a human being, or, through inaction, allow a human being to come to harm, unless this would violate a higher-order Law.

Law Two
A robot must obey orders given it by human beings, except where such orders would conflict with a higher-order Law.
A robot must obey orders given it by superordinate robots, except where such orders would conflict with a higher-order Law.

Law Three
A robot must protect the existence of a superordinate robot as long as such protection does not conflict with a higher-order Law.
A robot must protect its own existence as long as such protection does not conflict with a higher-order Law.

Law Four
A robot must perform the duties for which it has been programmed, except where that would conflict with a higher-order law.

The Procreation Law
A robot may not take any part in the design or manufacture of a robot unless the new robot's actions are subject to the Laws of Robotics.

These laws, like Asimov's originals, are intended to be interpreted in the order presented above. Note that Laws Two and Three are broken into two separate clauses that are also intended to be interpreted in order. So Asimov's three laws, plus the zeroth law added in Robots and Empire (Asimov 1985), are expanded here into nine if the sub-clauses are included. Clarke left most of Asimov's stated four laws intact, disambiguated two, and added three additional laws.

There are still problems even with this more specific set. For example, the Procreation Law is of the least priority – subordinate to even the fourth law, stating that a robot has to follow its programming. In other words, a robot could be programmed to create other robots that are not subject to the Laws of Robotics, or be told to do so by a human or other superordinate robot pursuant to Law Two. Even if we reorder these laws, situations will still arise where other laws take precedence. There doesn't seem to be any way of creating foolproof rules, at least as stated in English and interpreted with the full capacities of a human.

Are the laws even necessary?

What good, then, are even a revised set of laws if they cannot be directly put into practice?[1] Luckily, our robots do not need the laws in English and will not, at the moment, have anything close to the full capacity of a human. It is still left to human interpretation as to how and to what level to implement the Laws for any given robot and domain. This is not likely to be a perfect process. No one human or even group of humans will be capable of determining all possible situations and programming for such. This problem compounds itself when the robot must learn to adapt to its particular situation.

[1] Once again, we are politely sidestepping Sloman's meaning of this question, which suggests that an imposed ethical system will be unnecessary for truly intelligent machines.

We could rely on the government and the legal system to produce laws dictating a code of conduct for robot manufacturers and spelling out legal consequences, as is being done in Japan (Christensen 2006), but this sets the wrong tone. In the public's mind, roboticists and AI researchers would, by default, be suspected of committing the crimes addressed by the laws. This would not help alleviate the Frankenstein Complex but would exacerbate it. Instead, people involved in the research and development of intelligent machines, be they robots or some other form of artificial intelligence, need to each make a personal commitment to be responsible for their creations – something akin to the Hippocratic Oath taken by medical doctors. Not surprisingly, this same sentiment was expressed by Bill Joy: "scientists and engineers [need to] adopt a strong code of ethical conduct, resembling the Hippocratic oath" (Joy 2000). The modern Hippocratic Oath used by most medical schools today comes from a rewrite of the ancient original and is some 341 words long (Lasagna 1964). A further rewrite is presented here, intended for roboticists and AI researchers in general:

I swear to fulfill, to the best of my ability and judgment, this covenant:

I will remember that artificially intelligent machines are for the benefit of society and will strive to contribute to that society through my creations.

Every artificial intelligence I have a direct role in creating will follow the spirit of the following rules:

1. Do no harm to humans either directly or through non-action.
2. Do no harm to myself either directly or through non-action unless it will cause harm to a human.
3. Follow the orders given me by humans through my programming or other input medium unless it will cause harm to myself or a human.

I will not take part in producing any system that would, itself, create an artificial intelligence that does not follow the spirit of the above rules.

The Roboticist's Oath has a few salient points that should be discussed further. The overarching intent is to convey a sense of one's connection and responsibility to humanity, along with a reminder that robots are just complex tools, at least until such point as they are no longer just tools. When that might be, or how we might tell, is left to some future determination. The Oath then includes a statement that the researcher will always instill in their creations the spirit of the three rules that follow. The use of the word "spirit" here is intentional. In essence, any AI researcher or roboticist should understand the intent of the three rules and make every reasonable effort to implement them within their creations.
The rules themselves are essentially a reformulation of Asimov's original Three Laws, with the second and third laws reversed in precedence.

Why the reversal? As Asimov himself points out in "The Bicentennial Man" (Asimov 1976), a robot implementing his Laws could be forced to dismantle itself for no reason other than the whim of a human. In that story, the main character, a robot named Andrew Martin, successfully lobbies congress for a human law that makes such orders illegal. Asimov's purpose in making the self-preservation law a lower priority than obeying a human command was to allow humans to put robots into dangerous situations when such was necessary. The question then becomes whether any such situation would arise that would not also involve the possible harm to a human. While there may be convoluted scenarios where a situation like this might occur, there is a very low likelihood. There is high likelihood, on the other hand, as Clarke pointed out (Clarke 1993, 1994), that humans would give a robot instructions that, inadvertently, might cause it harm. In software engineering, one of the more time-consuming requirements is that code must have sufficient error checking. This is often called "idiot-proofing" one's code. Without such efforts, users would be providing incorrect data, inconsistent data, and generally crashing systems on a recurring basis. It is the software engineer's responsibility to reduce the occurrence of these mistakes.
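
The effect of the reversal can be shown with the same lexicographic reading sketched earlier; the scenario and scores below are invented. Under Asimov's ordering, an unjustified "dismantle yourself" order must be obeyed, while under the Oath's ordering it is refused:

```python
# Invented scenario: each action scored as (harm to humans, disobedience,
# harm to self). The two orderings differ only in which of the last two
# components is compared first.

actions = {
    "dismantle yourself": (0.0, 0.0, 1.0),  # obeys the whim, robot destroyed
    "refuse the order":   (0.0, 1.0, 0.0),  # disobeys, robot survives
}

def choose(priority):  # priority = order in which components are compared
    return min(actions, key=lambda a: [actions[a][i] for i in priority])

print(choose((0, 1, 2)))  # Asimov: harm, obey, self -> "dismantle yourself"
print(choose((0, 2, 1)))  # Oath:   harm, self, obey -> "refuse the order"
```
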
The Roboticist's Oath also leaves out the zeroth law. For Asimov, it is clear that the zeroth law, even more than the others, is a literary device created by a very sophisticated robot (Asimov 1985) in a story written some four decades after the original Three Laws. Furthermore, such a law would only come into play at such point when the robot could determine the good of humanity. If or when a robot can make this level of distinction, it will have gone well beyond the point where it is merely a tool, and the use of these kinds of rules should be re-examined (Sloman 2006). Finally, if an artificial intelligence were created that was not sophisticated enough to make the distinction itself, yet would affect all of humanity, then the Oath requires that the creators determine the appropriate safety measures with the good of society in mind.

A form of Clarke's procreation law (Clarke 1994) has been included in the Roboticist's Oath, but it has been relegated to the responsibility of humans. The purpose of such a law is evident. Complex machines manufactured for general use will, inevitably, be constructed by robots. Therefore, Clarke argues, a law against creating other robots that do not follow the Laws is necessary. Unfortunately, such a law is not implementable as an internal goal of a robot. The constructing robot, in this case, must have the ability to determine that it is involved in creating another robot and have the ability to somehow confirm whether the robot it is constructing conforms to the Laws. The only situation where this might be possible is when a robot's function includes the testing of robots after they are completed and before they are put into operation. It is, therefore, incumbent upon the human creators to make sure that their manufacturing robots are creating robots that adhere to the rules stated in the Oath.

Will even widespread adherence to such an oath prevent all possible problems or abuses of intelligent machines? Of course not. As with medical doctors, there may still need to be laws on the books to deal with robotics malpractice, but it will reduce occurrences and give the general public an added sense of security and respect for practitioners of the science of artificial intelligence, in much the same way as the Hippocratic Oath does for physicians. Is the Roboticist's Oath necessary? Probably not, if one only considers the safety of the machines that might be built. Those in this field are highly intelligent and moral people that would likely follow the intent of the oath even in its absence. However, it is important in setting a tone for young researchers and the public at large.

The future

Many well-known people have told us that the human race is doomed to be supplanted by our own robotic creations. Hollywood and the media sensationalize and fuel our fears because it makes for an exciting story. However, when one analyzes the series of improbable events that must occur for this to play out, it becomes obvious that we are quite safe, for the following reasons:

1. It is not likely that the majority of the public will pay the very high price tag for a general-purpose robot when a set of much less expensive, less intelligent devices will perform the same tasks just as automatically.
2. Evolution of physical robots could not happen any faster than human evolution, and it would require millions of robots to be created every year without human oversight of any kind.
3. Evolution occurs because of a lack of resources and/or competition. Evolutionary pressures for human-level intelligence and the development of some way of affecting the real world in some meaningful way could not occur in a virtual environment. Outside of a virtual environment, point #2 above applies.
4. Any robots being mass produced will be put through the same scrutiny as automobiles, with hundreds sacrificed each year to test for safety and recalls issued for even minor problems discovered after deployment.

Evolution cannot occur under these circumstances through anything other than normal human redesign. Unfortunately, there is still the possibility of technology misuse and irresponsibility on the part of robotics and AI researchers that, while not resulting in the obliteration of humanity, could be disastrous for the people directly involved. For this reason, Bill Joy's call for scientists and engineers to have a Hippocratic Oath (Joy 2000) has been taken up for roboticists and researchers of artificial intelligence. The Roboticist's Oath calls for personal responsibility on the part of researchers and for them to instill in their creations the spirit of three rules stemming from Isaac Asimov's original Three Laws of Robotics.

The future will be filled with smart machines. In fact they are already all around you: in your car, in your cell phone, at your bank, and even in the microwave that senses when the food is properly cooked and just keeps it warm until you are ready to eat. These will get smarter but not sentient, not alive. A small number of robots in labs may achieve human-level or better intelligence, but these will be closely studied oddities. Can the human race still destroy itself? Sure, but not through artificial intelligence. Humanity must always be wary of its power and capability for destruction. It must also not fear the future, with or without intelligent robots.

References

M.R. Ames. 3 Laws Don't Quite Cut It [Electronic Version]. 3 Laws Unsafe, from http://www.asimovlaws.com/articles/archives/2004/07/3_laws_dont_qui.html, 2004.
I. Asimov. Runaround. Astounding Science Fiction, March 1942.
I. Asimov. Evidence. Astounding Science Fiction, March 1946.
I. Asimov. The Naked Sun. Doubleday, 1957.
I. Asimov. The Bicentennial Man. In Stellar Science Fiction, February ed., Vol. 2, 1976.
I. Asimov. The Machine and the Robot. In P.S. Warrick, M.H. Greenberg and J.D. Olander, editors, Science Fiction: Contemporary Mythology. Harper and Row, 1978.
I. Asimov. Robots and Empire. Doubleday & Company, Garden City, 1985.
I. Asimov. The Laws of Robotics. In Robot Visions, pp. 423–425. ROC, New York, NY, 1990.
B. Christensen. Asimov's First Law: Japan Sets Rules for Robots [Electronic Version]. LiveScience, from http://www.livescience.com/technology/060526_robot_rules.html, 2006.
R. Clarke. Asimov's Laws of Robotics: Implications for Information Technology, Part 1. IEEE Computer, 26(12): 53–61, 1993.
R. Clarke. Asimov's Laws of Robotics: Implications for Information Technology, Part 2. IEEE Computer, 27(1): 57–65, 1994.
D. Cohn. AI Reaches the Golden Years. Wired. Retrieved July 17, 2006, from http://www.wired.com/news/technology/0,71389-0.html?tw=wn_index_2, 2006.
J. Holland. Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor, MI, 1975.
iRobot Corporation: Home Page. Retrieved July 19, 2006, from http://www.irobot.com, 2006.
D.G. Jerz. R.U.R. (Rossum's Universal Robots) [Electronic Version]. Retrieved June 7, 2006, from http://www.jerz.setonhill.edu/resources/RUR/, 2002.
B. Joy. Why the future doesn't need us. Wired, 8.04, April 2000.
R. Kurzweil. The Age of Spiritual Machines. Viking Adult, 1999.
R. Kurzweil. The Singularity is Near: When Humans Transcend Biology. Viking Books, 2005.
L. Lasagna. Hippocratic Oath – Modern Version. Retrieved June 30, 2006, from http://www.pbs.org/wgbh/nova/doctors/oath_modern.html, 1964.
S. Lovgren. Robot Code of Ethics to Prevent Android Abuse, Protect Humans. National Geographic News, March 16, 2007.
H.P. Moravec. Robot: Mere Machine to Transcendent Mind. Oxford University Press, Oxford, 1998.
A. Sandberg. Too Simple to Be Safe [Electronic Version]. 3 Laws Unsafe. Retrieved June 9, 2006, from http://www.asimovlaws.com/articles/archives/2004/07/too_simple_to_b.html, 2004.
M. Shelley. Frankenstein, or The Modern Prometheus. Lackington, Hughes, Harding, Mavor & Jones, London, UK, 1818.
A. Sloman. The Computer Revolution in Philosophy: Philosophy, Science and Models of Mind. Harvester Press, 1978.
A. Sloman. Why Asimov's Three Laws of Robotics are Unethical. Retrieved June 9, 2006, from http://www.cs.bham.ac.uk/research/projects/cogaff/misc/asimov-three-laws.html, 2006.
K. Warwick. I, Cyborg. Century, 2002.
G. Worley. Robot Oppression: Unethicality of the Three Laws [Electronic Version]. 3 Laws Unsafe, from http://www.asimovlaws.com/articles/archives/2004/07/robot_oppressio_2.html, 2004.