
Velammal College of Engineering and Technology

Madurai

Artificial Intelligence

Presented by

Maria Jenifer. T
jenim6@gmail.com
Vaishnavi. A.S
vaish0292@gmail.com
Abstract: Artificial intelligence research has foundered on the issue of representation. When intelligence is approached in an incremental manner, with strict reliance on interfacing to the real world through perception and action, reliance on representation disappears. In this paper we outline our approach to incrementally building complete intelligent Creatures. The fundamental decomposition of the intelligent system is not into independent information processing units which must interface with each other via representations. Instead, the intelligent system is decomposed into independent and parallel activity producers which all interface directly to the world through perception and action, rather than interface to each other particularly much. The notions of central and peripheral systems evaporate; everything is both central and peripheral. Based on these principles we have built a very successful series of mobile robots which operate without supervision as Creatures in standard office environments.

1. Introduction

Artificial intelligence started as a field whose goal was to replicate human level intelligence in a machine. Early hopes diminished as the magnitude and difficulty of that goal was appreciated. Slow progress was made over the next 25 years in demonstrating isolated aspects of intelligence. Recent work has tended to concentrate on commercial aspects of "intelligent assistants" for human workers. No one talks about replicating the full gamut of human intelligence any more. Instead we see a retreat into specialized subproblems, such as ways to represent knowledge, natural language understanding, vision or even more specialized areas such as truth maintenance systems or plan verification. All the work in these subareas is benchmarked against the sorts of tasks humans do within those areas.

Amongst the dreamers still in the field of AI (those not dreaming about dollars, that is), there is a feeling that one day all these pieces will all fall into place and we will see "truly" intelligent systems emerge. However, I, and others, believe that human level intelligence is too complex and little understood to be correctly decomposed into the right subpieces at the moment, and that even if we knew the subpieces we still wouldn't know the right interfaces between them. Furthermore, we will never understand how to decompose human level intelligence until we've had a lot of practice with simpler level intelligences.

In this paper I therefore argue for a different approach to creating artificial intelligence:

• We must incrementally build up the capabilities of intelligent systems, having complete systems at each step of the way, and thus automatically ensure that the pieces and their interfaces are valid.

• At each step we should build complete intelligent systems that we let loose in the real world with real sensing and real action. Anything less provides a candidate with which we can delude ourselves.

We have been following this approach and have built a series of autonomous mobile robots. We have reached an unexpected conclusion (C) and have a rather radical hypothesis (H).

(C) When we examine very simple level intelligence we find that explicit representations and models of the world simply get in the way. It turns out to be better to use the world as its own model.

(H) Representation is the wrong unit of abstraction in building the bulkiest parts of intelligent systems.

Representation has been the central issue in artificial intelligence work over the last 15 years only because it has provided an interface between otherwise isolated modules and conference papers.

2. The evolution of intelligence

We already have an existence proof of the possibility of intelligent entities: human beings. Additionally, many animals are intelligent to some degree. (This is a subject of intense debate, much of which really centers around a definition of intelligence.) They have evolved over the 4.6 billion year history of the earth.
It is instructive to reflect on the way in which earth-based biological evolution spent its time. Single-cell entities arose out of the primordial soup roughly 3.5 billion years ago. A billion years passed before photosynthetic plants appeared. After almost another billion and a half years, around 550 million years ago, the first fish and vertebrates arrived, and then insects 450 million years ago. Then things started moving fast. Reptiles arrived 370 million years ago, followed by dinosaurs at 330 and mammals at 250 million years ago. The first primates appeared 120 million years ago and the immediate predecessors to the great apes a mere 18 million years ago. Man arrived in roughly his present form 2.5 million years ago. He invented agriculture a mere 10,000 years ago, writing less than 5000 years ago and "expert" knowledge only over the last few hundred years.

This suggests that problem solving behavior, language, expert knowledge and application, and reason are all pretty simple once the essence of being and reacting are available. That essence is the ability to move around in a dynamic environment, sensing the surroundings to a degree sufficient to achieve the necessary maintenance of life and reproduction. This part of intelligence is where evolution has concentrated its time—it is much harder.

I believe that mobility, acute vision and the ability to carry out survival related tasks in a dynamic environment provide a necessary basis for the development of true intelligence. Moravec [11] argues this same case rather eloquently.

Human level intelligence has provided us with an existence proof, but we must be careful about what the lessons are to be gained from it.

2.1. A story

Suppose it is the 1890s. Artificial flight is the glamour subject in science, engineering, and venture capital circles. A bunch of AF researchers are miraculously transported by a time machine to the 1980s for a few hours. They spend the whole time in the passenger cabin of a commercial passenger Boeing 747 on a medium duration flight.

Returned to the 1890s they feel invigorated, knowing that AF is possible on a grand scale. They immediately set to work duplicating what they have seen. They make great progress in designing pitched seats and double pane windows, and know that if only they can figure out those weird "plastics" they will have their grail within their grasp. (A few connectionists amongst them caught a glimpse of an engine with its cover off and they are preoccupied with inspirations from that experience.)

3. Abstraction as a dangerous weapon

Artificial intelligence researchers are fond of pointing out that AI is often denied its rightful successes. The popular story goes that when nobody has any good idea of how to solve a particular sort of problem (e.g. playing chess) it is known as an AI problem. When an algorithm developed by AI researchers successfully tackles such a problem, however, AI detractors claim that since the problem was solvable by an algorithm, it wasn't really an AI problem after all. Thus AI never has any successes. But have you ever heard of an AI failure?

I claim that AI researchers are guilty of the same (self) deception. They partition the problems they work on into two components: the AI component, which they solve, and the non-AI component, which they don't solve. Typically, AI "succeeds" by defining the parts of the problem that are unsolved as not AI.

The principal mechanism for this partitioning is abstraction. Its application is usually considered part of good science, not, as it is in fact used in AI, as a mechanism for self-delusion. In AI, abstraction is usually used to factor out all aspects of perception and motor skills. I argue below that these are the hard problems solved by intelligent systems, and further that the shape of solutions to these problems constrains greatly the correct solutions of the small pieces of intelligence which remain.
Early work in AI concentrated on games, geometrical problems, symbolic algebra, theorem proving, and other formal systems (e.g. [6, 9]). In each case the semantics of the domains were fairly simple.

In the late sixties and early seventies the blocks world became a popular domain for AI research. It had a uniform and simple semantics. The key to success was to represent the state of the world completely and explicitly. Search techniques could then be used for planning within this well-understood world. Learning could also be done within the blocks world; there were only a few simple concepts worth learning and they could be captured by enumerating the set of subexpressions which must be contained in any formal description of a world including an instance of the concept. The blocks world was even used for vision research and mobile robotics, as it provided strong constraints on the perceptual processing necessary [12].

Eventually criticism surfaced that the blocks world was a "toy world" and that within it there were simple special purpose solutions to what should be considered more general problems. At the same time there was a funding crisis within AI (both in the US and the UK, the two most active places for AI research at the time). AI researchers found themselves forced to become relevant. They moved into more complex domains, such as trip planning, going to a restaurant, medical diagnosis, etc.

Soon there was a new slogan: "Good representation is the key to AI" (e.g. conceptually efficient programs in [2]). The idea was that by representing only the pertinent facts explicitly, the semantics of a world (which on the surface was quite complex) were reduced to a simple closed system once again. Abstraction to only the relevant details thus simplified the problems.

Consider a chair for example. While the following two characterizations are true:

(CAN (SIT-ON PERSON CHAIR)),
(CAN (STAND-ON PERSON CHAIR)),

there is much more to the concept of a chair. Chairs have some flat (maybe) sitting place, with perhaps a back support. They have a range of possible sizes, requirements on strength, and a range of possibilities in shape. They often have some sort of covering material, unless they are made of wood, metal or plastic. They sometimes are soft in particular places. They can come from a range of possible styles. In particular the concept of what is a chair is hard to characterize simply. There is certainly no AI vision program which can find arbitrary chairs in arbitrary images; they can at best find one particular type of chair in carefully selected images.

This characterization, however, is perhaps the correct AI representation for solving certain problems; e.g., a person sitting on a chair in a room is hungry and can see a banana hanging from the ceiling just out of reach. Such problems are never posed to AI systems by showing them a photo of the scene. A person (even a young child) can make the right interpretation of the photo and suggest a plan of action. For AI planning systems, however, the experimenter is required to abstract away most of the details to form a simple description in terms of atomic concepts such as PERSON, CHAIR and BANANAS.

But this abstraction is the essence of intelligence and the hard part of the problems being solved. Under the current scheme the abstraction is done by the researchers, leaving little for the AI programs to do but search. A truly intelligent program would study the photograph, perform the abstraction and solve the problem.

The only input to most AI programs is a restricted set of simple assertions deduced from the real data by humans. The problems of recognition, spatial understanding, and dealing with sensor noise, partial models, etc. are all ignored. These problems are relegated to the realm of input black boxes. Psychophysical evidence suggests they are all intimately tied up with the representation of the world used by an intelligent system.
There is no clean division between perception (abstraction) and reasoning in the real world. The brittleness of current AI systems attests to this fact. For example, MYCIN [13] is an expert at diagnosing human bacterial infections, but it really has no model of what a human (or any living creature) is, or how they work, or what are plausible things to happen to a human. If told that the aorta is ruptured and the patient is losing blood at the rate of a pint every minute, MYCIN will try to find a bacterial cause of the problem.

Thus, because we still perform all the abstractions for our programs, most AI work is still done in the blocks world. Now the blocks have slightly different shapes and colors, but their underlying semantics have not changed greatly.

It could be argued that performing this abstraction (perception) for AI programs is merely the normal reductionist use of abstraction common in all good science. The abstraction reduces the input data so that the program experiences the same perceptual world (Merkwelt in [15]) as humans. Other (vision) researchers will independently fill in the details at some other time and place. I object to this on two grounds. First, as Uexküll and others have pointed out, each animal species, and clearly each robot species with their own distinctly non-human sensor suites, will have their own different Merkwelt. Second, the Merkwelt we humans provide our programs is based on our own introspection. It is by no means clear that such a Merkwelt is anything like what we actually use internally—it could just as easily be an output coding for communication purposes (e.g., most humans go through life never realizing they have a large blind spot almost in the center of their visual fields).

The first objection warns of the danger that reasoning strategies developed for the human-assumed Merkwelt may not be valid when real sensors and perception processing is used. The second objection says that even with human sensors and perception, the Merkwelt may not be anything like that used by humans. In fact, it may be the case that our introspective descriptions of our internal representations are completely misleading and quite different from what we really use.

3.1. A continuing story

Meanwhile our friends in the 1890s are busy at work on their AF machine. They have come to agree that the project is too big to be worked on as a single entity and that they will need to become specialists in different areas. After all, they had asked questions of fellow passengers on their flight and discovered that the Boeing Co. employed over 6000 people to build such an airplane.

Everyone is busy but there is not a lot of communication between the groups. The people making the passenger seats used the finest solid steel available as the framework. There was some muttering that perhaps they should use tubular steel to save weight, but the general consensus was that if such an obviously big and heavy airplane could fly then clearly there was no problem with weight.

On their observation flight none of the original group managed to get a glimpse of the driver's seat, but they have done some hard thinking and think they have established the major constraints on what should be there and how it should work. The pilot, as he will be called, sits in a seat above a glass floor so that he can see the ground below so he will know where to land. There are some side mirrors so he can watch behind for other approaching airplanes. His controls consist of a foot pedal to control speed (just as in these newfangled automobiles that are starting to appear), and a steering wheel to turn left and right. In addition, the wheel stem can be pushed forward and back to make the airplane go up and down. A clever arrangement of pipes measures airspeed of the airplane and displays it on a dial. What more could one want? Oh yes. There's a rather nice setup of louvers in the windows so that the driver can get fresh air without getting the full blast of the wind in his face.
An interesting sidelight is that all the researchers have by now abandoned the study of aerodynamics. Some of them had intensely questioned their fellow passengers on this subject and not one of the modern flyers had known a thing about it. Clearly the AF researchers had previously been wasting their time in its pursuit.

4. Incremental intelligence

I wish to build completely autonomous mobile agents that co-exist in the world with humans, and are seen by those humans as intelligent beings in their own right. I will call such agents Creatures. This is my intellectual motivation. I have no particular interest in demonstrating how human beings work, although humans, like other animals, are interesting objects of study in this endeavor as they are successful autonomous agents. I have no particular interest in applications; it seems clear to me that if my goals can be met then the range of applications for such Creatures will be limited only by our (or their) imagination. I have no particular interest in the philosophical implications of Creatures, although clearly there will be significant implications.

Given the caveats of the previous two sections and considering the parable of the AF researchers, I am convinced that I must tread carefully in this endeavor to avoid some nasty pitfalls.

For the moment then, consider the problem of building Creatures as an engineering problem. We will develop an engineering methodology for building Creatures.

First, let us consider some of the requirements for our Creatures.

• A Creature must cope appropriately and in a timely fashion with changes in its dynamic environment.

• A Creature should be robust with respect to its environment; minor changes in the properties of the world should not lead to total collapse of the Creature's behavior; rather one should expect only a gradual change in capabilities of the Creature as the environment changes more and more.

• A Creature should be able to maintain multiple goals and, depending on the circumstances it finds itself in, change which particular goals it is actively pursuing; thus it can both adapt to surroundings and capitalize on fortuitous circumstances.

• A Creature should do something in the world; it should have some purpose in being.

Now, let us consider some of the valid engineering approaches to achieving these requirements. As in all engineering endeavors it is necessary to decompose a complex system into parts, build the parts, then interface them into a complete system.

4.1. Decomposition by function

Perhaps the strongest traditional notion of intelligent systems (at least implicitly among AI workers) has been of a central system, with perceptual modules as inputs and action modules as outputs. The perceptual modules deliver a symbolic description of the world and the action modules take a symbolic description of desired actions and make sure they happen in the world. The central system then is a symbolic information processor.

Traditionally, work in perception (and vision is the most commonly studied form of perception) and work in central systems has been done by different researchers and even totally different research laboratories. Vision workers are not immune to earlier criticisms of AI workers. Most vision research is presented as a transformation from one image representation (e.g., a raw grey scale image) to another registered image (e.g., an edge image). Each group, AI and vision, makes assumptions about the shape of the symbolic interfaces. Hardly anyone has ever connected a vision system to an intelligent central system. Thus the assumptions independent researchers make are not forced to be realistic. There is a real danger from pressures to neatly circumscribe the particular piece of research being done.
The central system must also be decomposed into smaller pieces. We see subfields of artificial intelligence such as "knowledge representation", "learning", "planning", "qualitative reasoning", etc. The interfaces between these modules are also subject to intellectual abuse.

When researchers working on a particular module get to choose both the inputs and the outputs that specify the module requirements, I believe there is little chance the work they do will fit into a complete intelligent system.

This bug in the functional decomposition approach is hard to fix. One needs a long chain of modules to connect perception to action. In order to test any of them they all must first be built. But until realistic modules are built it is highly unlikely that we can predict exactly what modules will be needed or what interfaces they will need.

4.2. Decomposition by activity

An alternative decomposition makes no distinction between peripheral systems, such as vision, and central systems. Rather, the fundamental slicing up of an intelligent system is in the orthogonal direction, dividing it into activity producing subsystems. Each activity, or behavior producing system, individually connects sensing to action. We refer to an activity producing system as a layer. An activity is a pattern of interactions with the world. Another name for our activities might well be skill, emphasizing that each activity can at least post facto be rationalized as pursuing some purpose. We have chosen the word activity, however, because our layers must decide when to act for themselves, not be some subroutine to be invoked at the beck and call of some other layer.

The advantage of this approach is that it gives an incremental path from very simple systems to complex autonomous intelligent systems. At each step of the way it is only necessary to build one small piece, and interface it to an existing, working, complete intelligence.

The idea is to first build a very simple complete autonomous system, and test it in the real world. Our favorite example of such a system is a Creature, actually a mobile robot, which avoids hitting things. It senses objects in its immediate vicinity and moves away from them, halting if it senses something in its path. It is still necessary to build this system by decomposing it into parts, but there need be no clear distinction between a "perception subsystem", a "central system" and an "action system". In fact, there may well be two independent channels connecting sensing to action (one for initiating motion, and one for emergency halts), so there is no single place where "perception" delivers a representation of the world in the traditional sense.
Next we build an incremental layer of intelligence which operates in parallel to the first system. It is pasted on to the existing debugged system and tested again in the real world. This new layer might directly access the sensors and run a different algorithm on the delivered data. The first-level autonomous system continues to run in parallel, unaware of the existence of the second level. For example, in [3] we reported on building a first layer of control which let the Creature avoid objects, and then adding a layer which instilled an activity of trying to visit distant visible places. The second layer injected commands to the motor control part of the first layer, directing the robot towards the goal, but independently the first layer would cause the robot to veer away from previously unseen obstacles. The second layer monitored the progress of the Creature and sent updated motor commands, thus achieving its goal without being explicitly aware of obstacles, which had been handled by the lower level of control.
5. Who has the representations?

With multiple layers, the notion of perception delivering a description of the world gets blurred even more, as the part of the system doing perception is spread out over many pieces which are not particularly connected by data paths or related by function. Certainly there is no identifiable place where the "output" of perception can be found. Furthermore, totally different sorts of processing of the sensor data proceed independently and in parallel, each affecting the overall system activity through quite different channels of control.

In fact, not by design but rather by observation, we note that a common theme in the ways in which our layered and distributed approach helps our Creatures meet our goals is that there is no central representation.

• Low-level simple activities can instill the Creature with reactions to dangerous or important changes in its environment. Without complex representations and the need to maintain those representations and reason about them, these reactions can easily be made quick enough to serve their purpose. The key idea is to sense the environment often, and so have an up-to-date idea of what is happening in the world.

• By having multiple parallel activities, and by removing the idea of a central representation, there is less chance that any given change in the class of properties enjoyed by the world can cause total collapse of the system. Rather one might expect that a given change will at most incapacitate some but not all of the levels of control. Gradually as a more alien world is entered (alien in the sense that the properties it holds are different from the properties of the world in which the individual layers were debugged), the performance of the Creature might continue to degrade. By not trying to have an analogous model of the world, centrally located in the system, we are less likely to have built in a dependence on that model being completely accurate. Rather, individual layers extract only those aspects [1] of the world which they find relevant: projections of a representation into a simple subspace, if you like. Changes in the fundamental structure of the world have less chance of being reflected in every one of those projections than they would have of showing up as a difficulty in matching some query to a central single world model.

• Each layer of control can be thought of as having its own implicit purpose (or goal if you insist). Since they are active layers, running in parallel and with access to sensors, they can monitor the environment and decide on the appropriateness of their goals. Sometimes goals can be abandoned when circumstances seem unpromising, and other times fortuitous circumstances can be taken advantage of. The key idea here is to be using the world as its own model and to continuously match the preconditions of each goal against the real world. Because there is separate hardware for each layer we can match as many goals as can exist in parallel, and do not pay any price for higher numbers of goals as we would if we tried to add more and more sophistication to a single processor, or even some multiprocessor with a capacity-bounded network.
• The purpose of the Creature is implicit in its higher-level purposes, goals or layers. There need be no explicit representation of goals that some central (or distributed) process selects from to decide what is most appropriate for the Creature to do next.

5.1. No representation versus no central representation

Just as there is no central representation there is not even a central system. Each activity producing layer connects perception to action directly. It is only the observer of the Creature who imputes a central representation or central control. The Creature itself has none; it is a collection of competing behaviors. Out of the local chaos of their interactions there emerges, in the eye of an observer, a coherent pattern of behavior. There is no central purposeful locus of control. Minsky [10] gives a similar account of how human behavior is generated.

Note carefully that we are not claiming that chaos is a necessary ingredient of intelligent behavior. Indeed, we advocate careful engineering of all the interactions within the system (evolution had the luxury of incredibly long time scales and even enormous numbers of individual experiments and thus perhaps was able to do without this careful engineering). We do claim, however, that there need be no explicit representation of either the world or the intentions of the system to generate intelligent behaviors for a Creature. Without such explicit representations, and when viewed locally, the interactions may indeed seem chaotic and without purpose.

I claim there is more than this, however. Even at a local level we do not have traditional AI representations. We never use tokens which have any semantics that can be attached to them. The best that can be said in our implementation is that one number is passed from a process to another. But it is only by looking at the state of both the first and second processes that that number can be given any interpretation at all. An extremist might say that we really do have representations, but that they are just implicit. With an appropriate mapping of the complete system and its state to another domain, we could define a representation that these numbers and topological connections between processes somehow encode. However we are not happy with calling such things a representation. They differ from standard representations in too many ways. There are no variables (e.g. see [1] for a more thorough treatment of this) that need instantiation in reasoning processes. There are no rules which need to be selected through pattern matching. There are no choices to be made. To a large extent the state of the world determines the action of the Creature. Simon [14] noted that the complexity of behavior of a system was not necessarily inherent in the complexity of the creature, but perhaps in the complexity of the environment. He made this analysis in his description of an ant wandering the beach, but ignored its implications in the next paragraph when he talked about humans. We hypothesize (following Agre and Chapman) that much of human level activity is similarly a reflection of the world through very simple mechanisms without detailed representations.
6. The methodology, in practice

In order to build systems based on an activity decomposition so that they are truly robust we must rigorously follow a careful methodology.

6.1. Methodological maxims

First, it is vitally important to test the Creatures we build in the real world; i.e., in the same world that we humans inhabit. It is disastrous to fall into the temptation of testing them in a simplified world first, even with the best intentions of later transferring activity to an unsimplified world. With a simplified world (matte painted walls, rectangular vertices everywhere, colored blocks as the only obstacles) it is very easy to accidentally build a submodule of the system which happens to rely on some of those simplified properties. This reliance can then easily be reflected in the requirements on the interfaces between that submodule and others. The disease spreads and the complete system depends in a subtle way on the simplified world. When it comes time to move to the unsimplified world, we gradually and painfully realize that every piece of the system must be rebuilt. Worse than that, we may need to rethink the total design as the issues may change completely. We are not so concerned that it might be dangerous to test simplified Creatures first and later add more sophisticated layers of control, because evolution has been successful using this approach.

Second, as each layer is built it must be tested extensively in the real world. The system must interact with the real world over extended periods. Its behavior must be observed and be carefully and thoroughly debugged. When a second layer is added to an existing layer there are three potential sources of bugs: the first layer, the second layer, or the interaction of the two layers. Eliminating the first of these sources of bugs as a possibility makes finding bugs much easier. Furthermore, there is only one thing possible to vary in order to fix the bugs—the second layer.

6.2. An instantiation of the methodology

We have built a series of four robots based on the methodology of task decomposition. They all operate in an unconstrained dynamic world (laboratory and office areas in the MIT Artificial Intelligence Laboratory). They successfully operate with people walking by, people deliberately trying to confuse them, and people just standing by watching them.

All four robots are Creatures in the sense that on power-up they exist in the world and interact with it, pursuing multiple goals determined by their control layers implementing different activities. This is in contrast to other mobile robots that are given programs or plans to follow for a specific mission.

The four robots are shown in Fig. 1. Two are identical, so there are really three designs. One uses an offboard LISP machine for most of its computations, two use onboard combinational networks, and one uses a custom onboard parallel processor. All the robots implement the same abstract architecture, which we call the subsumption architecture, which embodies the fundamental ideas of decomposition into layers of task achieving behaviors, and incremental composition through debugging in the real world. Details of these implementations can be found in [3].
Each layer in the subsumption architecture is composed of a fixed-topology network of simple finite state machines. Each finite state machine has a handful of states, one or two internal registers, one or two internal timers, and access to simple computational machines, which can compute things such as vector sums. The finite state machines run asynchronously, sending and receiving fixed length messages (1-bit messages on the two small robots, and 24-bit messages on the larger ones) over wires. On our first robot these were virtual wires; on our later robots we have used physical wires to connect computational components.

There is no central locus of control. Rather, the finite state machines are data-driven by the messages they receive. The arrival of messages or the expiration of designated time periods cause the finite state machines to change state. The finite state machines have access to the contents of the messages and might output them, test them with a predicate and conditionally branch to a different state, or pass them to simple computation elements. There is no possibility of access to global data, nor of dynamically established communications links. There is thus no possibility of global control. All finite state machines are equal, yet at the same time they are prisoners of their fixed topology connections.
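To make the flavor of these machines concrete, here is a minimal sketch of one such node in Python. The message format, port names and time constant are illustrative assumptions on our part, not the robots' actual code:

    # Sketch: one subsumption-style finite state machine node. It is
    # data-driven: only message arrival or timer expiry changes state,
    # and there is no access to any global data.
    class FSMNode:
        def __init__(self):
            self.state = "idle"      # one of a handful of states
            self.reg = None          # one internal register
            self.deadline = None     # one internal timer

        def on_message(self, msg, now):
            if self.state == "idle":
                self.reg = msg                # latch the message
                self.deadline = now + 1.0     # arm the timer (assumed constant)
                self.state = "armed"
                return None
            if self.state == "armed" and msg != self.reg:  # a simple predicate
                self.state = "idle"
                return self.reg               # emit a message on the output wire
            return None

        def on_tick(self, now):
            if self.deadline is not None and now >= self.deadline:
                self.state, self.deadline = "idle", None   # timer expiry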
Layers are combined through mechanisms we call suppression (whence the name subsumption architecture) and inhibition. In both cases, as a new layer is added, one of the new wires is side-tapped into an existing wire. A pre-defined time constant is associated with each side-tap. In the case of suppression the side-tapping occurs on the input side of a finite state machine. If a message arrives on the new wire it is directed to the input port of the finite state machine as though it had arrived on the existing wire. Additionally, any new messages on the existing wire are suppressed (i.e., rejected) for the specified time period. For inhibition the side-tapping occurs on the output side of a finite state machine. A message on the new wire simply inhibits messages being emitted on the existing wire for the specified time period. Unlike suppression, the new message is not delivered in their place.
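A sketch of these two side-tap mechanisms, under the assumption that a wire is just a stream of timestamped messages (the class and method names are ours, for illustration):

    # Sketch: suppression and inhibition side-taps, each with a
    # pre-defined time constant T (seconds).
    class Suppressor:
        # Input-side tap: a message on the new wire is delivered as if it
        # had arrived on the existing wire, and messages on the existing
        # wire are rejected for T seconds afterwards.
        def __init__(self, T):
            self.T, self.until = T, float("-inf")

        def from_new_wire(self, msg, now):
            self.until = now + self.T
            return msg                   # delivered in place of existing input

        def from_existing_wire(self, msg, now):
            return None if now < self.until else msg

    class Inhibitor:
        # Output-side tap: a message on the new wire silences the existing
        # wire's output for T seconds; nothing is delivered in its place.
        def __init__(self, T):
            self.T, self.until = T, float("-inf")

        def from_new_wire(self, msg, now):
            self.until = now + self.T    # the new message itself goes nowhere

        def from_existing_wire(self, msg, now):
            return None if now < self.until else msg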
As an example, consider the three layers of Fig. 2. These are three layers of control that we have run on our first mobile robot for well over a year. The robot has a ring of twelve ultrasonic sonars as its primary sensors. Every second these sonars are run to give twelve radial depth measurements. Sonar is extremely noisy due to many objects being mirrors to sonar. There are thus problems with specular reflection and return paths following multiple reflections due to surface skimming with low angles of incidence (less than thirty degrees).

In more detail the three layers work as follows:

Fig. 1. The four MIT AI Laboratory Mobots. Left-most is the first built, Allen, which relies on an offboard LISP machine for computation support. The right-most one is Herbert, shown with a 24 node CMOS parallel processor surrounding its girth. New sensors and fast early vision processors are still to be built and installed. In the middle are Tom and Jerry, based on a commercial toy chassis, with single PALs (Programmable Array of Logic) as their controllers.

(1) The lowest-level layer implements a behavior which makes the robot (the physical embodiment of the Creature) avoid hitting objects. It both avoids static objects and moving objects, even those that are actively attacking it. The finite state machine labelled sonar simply runs the sonar devices and every second emits an instantaneous map with the readings converted to polar coordinates. This map is passed on to the collide and feelforce finite state machines. The first of these simply watches to see if there is anything dead ahead, and if so sends a halt message to the finite state machine in charge of running the robot forwards—if that finite state machine is not in the correct state the message may well be ignored. Simultaneously, the other finite state machine computes a repulsive force on the robot, based on an inverse square law, where each sonar return is considered to indicate the presence of a repulsive object. The contributions from each sonar are added to produce an overall force acting on the robot. The output is passed to the runaway machine which thresholds it and passes it on to the turn machine which orients the robot directly away from the summed repulsive force. Finally, the forward machine drives the robot forward. Whenever this machine receives a halt message while the robot is driving forward, it commands the robot to halt.

This network of finite state machines generates behaviors which let the robot avoid objects. If it starts in the middle of an empty room it simply sits there. If someone walks up to it, the robot moves away. If it moves in the direction of other obstacles it halts. Overall, it manages to exist in a dynamic environment without hitting or being hit by objects.
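The kind of computation feelforce and runaway perform might look like the following sketch (the constants and function names are illustrative assumptions):

    import math

    # Sketch: level-0 avoidance from twelve radial sonar readings,
    # given as (angle_rad, distance_m) pairs.
    def feelforce(readings):
        # Each sonar return is treated as a repulsive object; its push on
        # the robot falls off with an inverse square law.
        fx = fy = 0.0
        for angle, dist in readings:
            if dist > 0.0:
                mag = 1.0 / (dist * dist)
                fx -= mag * math.cos(angle)   # push away from the return
                fy -= mag * math.sin(angle)
        return fx, fy

    def runaway(force, threshold=2.0):
        # Threshold the summed force; if it is significant, emit a heading
        # directly away from the obstacles for the turn machine.
        fx, fy = force
        if math.hypot(fx, fy) > threshold:
            return math.atan2(fy, fx)         # direction of net repulsion
        return None                           # otherwise emit nothing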
(2) The next layer makes the robot wander about, when not busy avoiding objects. The wander finite state machine generates a random heading for the robot every ten seconds or so. The avoid machine treats that heading as an attractive force and sums it with the repulsive force computed from the sonars. It uses the result to suppress the lower-level behavior, forcing the robot to move in a direction close to what wander decided, but at the same time avoid any obstacles. Note that if the turn and forward finite state machines are busy running the robot, the new impulse to wander will be ignored.
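The avoid machine's summing step is then a small extension of the sketch above (again illustrative, not the actual wiring):

    import math, random

    # Sketch: the wander layer's contribution.
    def wander_heading():
        # A fresh random heading roughly every ten seconds, as a unit force.
        angle = random.uniform(-math.pi, math.pi)
        return (math.cos(angle), math.sin(angle))

    def avoid(attract, repulse):
        # Sum the attractive wander force with the sonar repulsion; the
        # result suppresses the lower level's own input to the turn machine.
        ax, ay = attract
        rx, ry = repulse
        return math.atan2(ay + ry, ax + rx)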
(3) The third layer makes the robot try to explore. It looks for distant places, then tries to reach them. This layer suppresses the wander layer, and observes how the bottom layer diverts the robot due to obstacles (perhaps dynamic). It corrects for any divergences and the robot achieves the goal.
Fig. 2. We wire finite state machines together into layers of control. Each layer is built on top of existing layers. Lower level layers never rely on the existence of higher level layers.

The whenlook finite state machine notices when the robot is not busy moving, and starts up the free space finder (labelled stereo in the diagram) finite state machine. At the same time it inhibits wandering behavior so that the observation will remain valid. When a path is observed it is sent to the pathplan finite state machine, which injects a commanded direction to the avoid finite state machine. In this way, lower-level obstacle avoidance continues to function. This may cause the robot to go in a direction different to that desired by pathplan. For that reason the actual path of the robot is monitored by the integrate finite state machine, which sends updated estimates to the pathplan machine. This machine then acts as a difference engine forcing the robot in the desired direction and compensating for the actual path of the robot as it avoids obstacles.
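The integrate/pathplan pair can be pictured with a sketch like this (coordinates, units and names are our assumptions for illustration):

    import math

    # Sketch: integrate dead-reckons the actual path; pathplan acts as a
    # difference engine toward the goal.
    def integrate(pose, heading, speed, dt):
        x, y = pose
        return (x + speed * dt * math.cos(heading),
                y + speed * dt * math.sin(heading))

    def pathplan(goal, pose):
        # Command whatever direction cancels the remaining difference
        # between where the robot is and where it wants to be.
        return math.atan2(goal[1] - pose[1], goal[0] - pose[0])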
These particular layers were implemented on our first robot. See [3] for more details. Brooks and Connell [5] report on another three layers implemented on that particular robot.

7. What this is not

The subsumption architecture with its network of simple machines is reminiscent, at the surface level at least, of a number of mechanistic approaches to intelligence, such as connectionism and neural networks. But it is different in many respects from these endeavors, and also quite different from many other post-Dartmouth traditions in artificial intelligence. We very briefly explain those differences in the following sections.

7.1. It isn't connectionism

Connectionists try to make networks of simple processors. In that regard, the things they build (in simulation only—no connectionist has ever driven a real robot in a real environment, no matter how simple) are similar to the subsumption networks we build. However, their processing nodes tend to be uniform and they are looking (as their name suggests) for revelations from understanding how to connect them correctly (which is usually assumed to mean richly at least). Our nodes are all unique finite state machines and the density of connections is very much lower, certainly not uniform, and very low indeed between layers. Additionally, connectionists seem to be looking for explicit distributed representations to spontaneously arise from their networks. We harbor no such hopes because we believe representations are not necessary and appear only in the eye or mind of the observer.

7.2. It isn't neural networks

Neural networks is the parent discipline of which connectionism is a recent incarnation. Workers in neural networks claim that there is some biological significance to their network nodes, as models of neurons. Most of the models seem wildly implausible given the paucity of modeled connections relative to the thousands found in real neurons. We claim no biological significance in our choice of finite state machines as network nodes.
7.3. It isn't production rules

Each individual activity producing layer of our architecture could be viewed as an implementation of a production rule. When the right conditions are met in the environment a certain action will be performed. We feel that analogy is a little like saying that any FORTRAN program with IF statements is implementing a production rule system. A standard production system really is more—it has a rule base, from which a rule is selected based on matching preconditions of all the rules to some database. The preconditions may include variables which must be matched to individuals in the database. But our layers run in parallel and have no variables or need for matching. Instead, aspects of the world are extracted and these directly trigger or modify certain behaviors of the layer.

7.4. It isn't a blackboard

If one really wanted, one could make an analogy of our networks to a blackboard control architecture. Some of the finite state machines would be localized knowledge sources. Others would be processes acting on these knowledge sources by finding them on the blackboard. There is a simplifying point in our architecture, however: all the processes know exactly where to look on the blackboard as they are hard-wired to the correct place. I think this forced analogy indicates its own weakness. There is no flexibility at all on where a process can gather appropriate knowledge. Most advanced blackboard architectures make heavy use of the general sharing and availability of almost all knowledge. Furthermore, in spirit at least, blackboard systems tend to hide from a consumer of knowledge who the particular producer was. This is the primary means of abstraction in blackboard systems. In our system we make such connections explicit and permanent.

7.5. It isn't German philosophy

In some circles much credence is given to Heidegger as one who understood the dynamics of existence. Our approach has certain similarities to work inspired by this German philosopher (e.g. [1]) but our work was not so inspired. It is based purely on engineering considerations. That does not preclude it from being used in philosophical debate as an example on any side of any fence, however.

8. Limits to growth

Since our approach is a performance-based one, it is the performance of the systems we build which must be used to measure its usefulness and to point to its limitations.

We claim that as of mid-1987 our robots, using the subsumption architecture to implement complete Creatures, are the most reactive real-time mobile robots in existence. Most other mobile robots are still at the stage of individual "experimental runs" in static environments, or at best in completely mapped static environments. Ours, on the other hand, operate completely autonomously in complex dynamic environments at the flick of their on switches, and continue until their batteries are drained. We believe they operate at a level closer to simple insect level intelligence than to bacteria level intelligence. Our goal (worth nothing if we don't deliver) is simple insect level intelligence within two years. Evolution took 3 billion years to get from single cells to insects, and only another 500 million years from there to humans. This statement is not intended as a prediction of our future performance, but rather to indicate the nontrivial nature of insect level intelligence.
Despite this good performance to date, there are a number of serious questions about our approach. We have beliefs and hopes about how these questions will be resolved, but under our criteria only performance truly counts. Experiments and building more complex systems take time, so with the caveat that the experiments described below have not yet been performed, we outline how we currently see our endeavor progressing. Our intent in discussing this is to indicate that there is at least a plausible path forward to more intelligent machines from our current situation.

Our belief is that the sorts of activity producing layers of control we are developing (mobility, vision and survival related tasks) are necessary prerequisites for higher-level intelligence in the style we attribute to human beings.

The most natural and serious questions concerning limits of our approach are:

• How many layers can be built in the subsumption architecture before the interactions between layers become too complex to continue?

• How complex can the behaviors be that are developed without the aid of central representations?

• Can higher-level functions such as learning occur in these fixed topology networks of simple finite state machines?

We outline our current thoughts on these questions.

8.1. How many layers?

The highest number of layers we have run on a physical robot is three. In simulation we have run six parallel layers. The technique of completely debugging the robot on all existing activity producing layers before designing and adding a new one seems to have been practical till now at least.

8.2. How complex?

We are currently working towards a complex behavior pattern on our fourth robot which will require approximately fourteen individual activity producing layers.

The robot has infrared proximity sensors for local obstacle avoidance. It has an onboard manipulator which can grasp objects at ground and table-top levels, and also determine their rough weight. The hand has depth sensors mounted on it so that homing in on a target object in order to grasp it can be controlled directly. We are currently working on a structured light laser scanner to determine rough depth maps in the forward looking direction from the robot.

The high-level behavior we are trying to instill in this Creature is to wander around the office areas of our laboratory, find open office doors, enter, retrieve empty soda cans from cluttered desks in crowded offices, and return them to a central repository.

In order to achieve this overall behavior a number of simpler task achieving behaviors are necessary. They include: avoiding objects, following walls, recognizing doorways and going through them, aligning on learned landmarks, heading in a homeward direction, learning homeward bearings at landmarks and following them, locating table-like objects, approaching such objects, scanning table tops for cylindrical objects of roughly the height of a soda can, servoing the manipulator arm, moving the hand above sensed objects, using the hand sensor to look for objects of soda can size sticking up from a background, grasping objects if they are light enough, and depositing objects.

The individual tasks need not be coordinated by any central controller.
Instead they can index off of the state of the world. For instance the grasp behavior can cause the manipulator to grasp any object of the appropriate size seen by the hand sensors. The robot will not randomly grasp just any object, however; only when other behaviors have noticed an object of roughly the right shape on top of a table-like object will the grasping behavior find itself in a position where its sensing of the world tells it to react. If, from above, the object no longer looks like a soda can, the grasp reflex will not happen and other lower-level behaviors will cause the robot to look elsewhere for new candidates.
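A sketch of what indexing off the state of the world might look like for the grasp behavior (every predicate here is a hypothetical stand-in for a sensor-derived condition):

    # Sketch: a behavior triggered by sensed world state rather than by
    # a central controller. The predicates are hypothetical stand-ins.
    def grasp_behavior(hand_sensors, arm):
        # React only when the world, as currently sensed, matches this
        # behavior's preconditions; nothing else primes or schedules it.
        obj = hand_sensors.object_below()          # e.g. size and height
        if obj is not None and obj.looks_like_soda_can():
            arm.grasp()                            # the reflex fires
        # otherwise do nothing; other behaviors keep the robot looking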
8.3. Is learning and such possible?

Some insects demonstrate a simple type of learning that has been dubbed "learning by instinct" [7]. It is hypothesized that honey bees for example are pre-wired to learn how to distinguish certain classes of flowers, and to learn routes to and from a home hive and sources of nectar. Other insects, butterflies, have been shown to be able to learn to distinguish flowers, but in an information limited way [8]. If they are forced to learn about a second sort of flower, they forget what they already knew about the first, in a manner that suggests the total amount of information which they know remains constant.

We have found a way to build fixed topology networks of our finite state machines which can perform learning, as an isolated subsystem, at levels comparable to these examples. At the moment of course we are in the very position we lambasted most AI workers for earlier in this paper. We have an isolated module of a system working, and the inputs and outputs have been left dangling.

We are working to remedy this situation, but experimental work with physical Creatures is a nontrivial and time consuming activity. We find that almost any pre-designed piece of equipment or software has so many preconceptions of how they are to be used built into them that they are not flexible enough to be a part of our complete systems. Thus, as of mid-1987, our work in learning is held up by the need to build a new sort of video camera and high-speed low-power processing box to run specially developed vision algorithms at 10 frames per second. Each of these steps is a significant engineering endeavor which we are undertaking as fast as resources permit.

Of course, talk is cheap.

8.4. The future

Only experiments with real Creatures in real worlds can answer the natural doubts about our approach. Time will tell.

References

[1] P.E. Agre and D. Chapman, Unpublished memo, MIT Artificial Intelligence Laboratory, Cambridge, MA (1986).
[2] R.J. Bobrow and J.S. Brown, Systematic understanding: synthesis, analysis, and contingent knowledge in specialized understanding systems, in: R.J. Bobrow and A.M. Collins, eds., Representation and Understanding (Academic Press, New York, 1975) 103-129.
[3] R.A. Brooks, A robust layered control system for a mobile robot, IEEE J. Rob. Autom. 2 (1986) 14-23.
[4] R.A. Brooks, A hardware retargetable distributed layered architecture for mobile robot control, in: Proceedings IEEE Robotics and Automation, Raleigh, NC (1987) 106-110.
