
ing RBL to complex domains.

The appeal of RBL is that it is an unsupervised learning technique, very little domain knowledge is required, and it is mathematically sound. Nehmzow and colleagues show learning behaviors using associative memories and rules maintained by the robot, which they call instinct rules. It is impressive how their Lego robots learn behaviors to avoid obstacles, follow walls, and follow corridors. Having worked with Lego vehicles, one is impressed that such robots, with limited structure and computation, learn their tasks before self-destructing. (Experiments with physical robots are superior to simulations, and we are quite impressed with work describing learning using physical robots.) Colombetti and Dorigo use classifier systems on a physical robot to learn complex behaviors as compositions of simple behaviors.

In the section on Evolution, Harvey, Husbands, and Cliff state in their paper, "An animal should not be considered as a solution to a problem posed 4 billion years ago." This position advocates devising evolution mechanisms that use agents' own physiology to adapt to their world and exhibit intelligent behavior with respect to that physiology. The papers in Evolution share themes encountered in the artificial life community.

Papers in the Collective Behavior section describe experiments with many interacting robots engaged in group activity. These papers explore the architecture and/or behaviors of each agent that contribute to collective behaviors. Sometimes interesting behaviors arising from local interactions that are not programmed into the agents are observed; these are called emergent behaviors.

In the One Page Summaries at the end of the proceedings we see a diverse array of papers, presented as posters, that belong in artificial life. These papers make a nice transition to artificial life research and contribute to the study of adaptive behavior. Artificial life has attracted scientists from varying disciplines. Animat research shares with artificial life research the goal of creating creatures that appear lifelike. These creatures "live" and may learn, reproduce, and die. Many of these creatures are given habitats and studied for their behavior and evolution. It is yet too early in artificial life research to identify common techniques for building creatures or studying them. However, a general approach in this research is to produce physical systems that interact in realistic situations. This is supported by a trend toward building inexpensive and simple robots with very simple behavior engines and embedding them in a physical environment. Under such an approach, it is possible for external observers to note behaviors (resulting from interactions of the robot with the environment) that are unexpected, apparently intelligent, and can be viewed as behaviors that emerge from the simple computational elements with which the robot is endowed. It will be interesting to follow these lines of research (animats and artificial life) to see the development of adaptive and emergent behaviors in intelligent agents.


Neural Networks for Pattern Recognition
by Albert Nigrin
MIT Press, 1993. Hard cover, xix + 413 pages, $45.00

Reviewed by: Michael de la Maza
Numinous Noetics Group
MIT AI Lab
Cambridge, MA 02139
mdlm@ai.mit.edu

Newly minted assistant professors often super-charge their careers and their curriculum vitae by publishing elaborations of their dissertations in the form of books. Indeed, the practice is so common and so effective that it is recommended in "Getting What You Came For: The Smart Student's Guide to Earning a Master's or a Ph.D." by Robert L. Peters. Albert Nigrin's "Neural Networks for Pattern Recognition" is such a book. The dissertation from which the book springs was written under the direction of Alan Biermann at Duke University.

The book begins with a typical and benighted attack on symbolic AI. Typical because everyone from neural network researchers to genetic programming hackers to mobile robot enthusiasts has discussed the shortcomings of symbolic AI - often demonstrating greater flaws in their own thinking than in symbolic AI. Benighted because Nigrin regurgitates oft-repeated and scantily supported statements. To wit: "...there is a certain flavor to the components of neural networks that is not found in the components of ordinary circuits" (p. 15). What is this ineffable, ephemeral, indescribable "flavor"? Nigrin would better further his cause if he avoided this mysticism - a mysticism that, in the eyes of symbolic AI researchers, besmirches and taints the connectionist field. Nigrin finishes this discussion as he must - by confessing that his neural networks are implemented on serial computers, hence they cannot enjoy the mystical, supernatural advantages that he attributes to parallel, distributed computation.

If the rest of the book continued in the same sclerotic vein then one could conclude, with justification, that the book is not worth reading. Fortunately, it does not. Nigrin is interested in developing networks that self-organize using unsupervised learning, form stable category codes, operate in the presence of noise, operate in real time, perform fast and slow learning, scale well to large problems, use feedback expectancies to bias classifications, create arbitrarily coarse or tight classifications that are distortion insensitive, perform context-sensitive recognition, process multiple patterns simultaneously, combine existing representations to create categories for novel patterns, perform synonym processing, and unlearn or modify categories when necessary. All of these criteria are described in complete, if somewhat breathless, detail. Nigrin tests his neural network architectures on a variety of problems that come from the natural language processing field.

Nigrin notes that some of these thirteen requirements are satisfied, at least in part, by neural networks developed in Grossberg's research group and that his own research is to a large degree motivated by the desire to extend and improve Grossberg's work. Nigrin acknowledges this debt and provides a

11 S I G A R T Bulletin, Vol. 5 No. 2


foundation for his research by summarizing some of Grossberg's work in Chapter 2. This chapter, which will have propaedeutic value for those new to the field but which should be skipped by experts, presents the parts of adaptive resonance theory [SGrossberg88] that are critical for understanding SONNET, the neural network architecture that Nigrin develops in this book.

These beginning chapters force the novice reader to absorb an enormous amount of not always precisely defined vocabulary: outstar, modifiable LTM weight, passive decay, total field activity, feedforward on-center off-surround network, shunting equation, masking field, and on and on. The reader who is interested in a gentler, jargon-free introduction to adaptive resonance theory should consult Moore's introductory article [BMoore88]. But the reader who does slog through this sesquipedalian swamp is rewarded with chapter 3, the meat of the book.

The third chapter introduces SONNET 1 (Self-Organizing Neural NETwork), which is designed to meet many of the thirteen previously mentioned criteria. One novel feature of this chapter and the ones that follow it is that Nigrin has taken pains to describe the anfractuous and arduous road that he took to reach the final SONNET algorithm. He writes: "As in the previous example, I know that this problem exists not because I was smart enough to avoid it in advance but because I wasted time trying to get it to work! And if I am going to suffer through trying a method, I feel that the reader should have to suffer through learning about it!" (p. 158). But suffer is the wrong word. By reporting both positive and negative results Nigrin shows the reader what to do and what to avoid.

Nigrin engineers a variety of mechanisms for meeting his thirteen criteria. Some of them, such as normalizing outputs, are straightforward techniques that have been used in other fields, while others, such as his analysis of the attentional reset mechanism, contain new insights that repay careful study. The exposition of these methods could have been improved by giving pseudo-code, something which Nigrin does only once. By the end of chapter 3, Nigrin has convinced the reader that SONNET addresses many issues that other neural network architectures do not. Chapter 4 extends these results to include temporal patterns, and chapter 5 shows how to add additional layers to SONNET. While reading these chapters, the reader should keep in mind that although many of the equations that govern the behavior of SONNET have time derivatives, most of the mathematical analysis depends only on equilibrium properties, not dynamical properties. However, some of the dynamical properties of the equations are used to dismiss certain potential solutions.

Chapters 6 and 7 describe a series of extensions to SONNET that are designed to overcome some of its shortcomings. The primary insight of these two chapters is that if the signals on the connections between nodes compete, instead of the nodes competing, then certain problems are easier to solve. This novel idea will have to await implementation before final judgment can be passed on it.

One of the blurbs that appears on the overleaf notes that Nigrin's book contains a wealth of ideas for those who are familiar only with backpropagation neural network algorithms. This is certainly true, and those who are interested in increasing the capabilities of their networks could do worse than read his book.

References:
[SGrossberg88] S. Grossberg. Neural Networks and Natural Intelligence. MIT Press, Cambridge, MA, 1988.
[BMoore88] B. Moore. ART 1 and pattern clustering. In Proceedings of the 1988 Connectionist Summer School: Connectionist Models, pages 174--185, 1988.

