
IEEE TRANSACTIONS ON COMPUTERS, JUNE 1969

Reviews of Books and Papers in the Computer Field

DONALD L. EPLEY, Reviews Editor
D. W. FIFE, A. I. RUBIN, R. A. SHORT, H. S. STONE
Assistant Reviews Editors

Please address your comments and suggestions to the Reviews Editor: Donald L. Epley,
Department of Electrical Engineering, University of Iowa, Iowa City, Iowa 52249

A. PERCEPTRONS
R69-13 Perceptrons: An Introduction to Computational Geometry
-M. Minsky and S. Papert (M.I.T. Press, 1969).
Perceptrons were invented in the fifties, when "learning machine" was an exciting new concept. For a decade thereafter there was much describing, experimenting, and speculating about what perceptrons can and cannot do. Discussions of the topic were typically lively and vague, because the underlying model and the concepts used were rarely completely defined.
Minsky and Papert's book clarifies this situation a great deal
by defining a rigorous mathematical model in terms of which the
usual assertions about perceptrons, and new ones as well, can be
stated and proved.
This clarification of the foundations must have played an important part in allowing the authors to prove many interesting new results. Among the main results is a theorem relating the group of transformations that leaves a class of patterns invariant to equivalence classes of predicates of a perceptron designed to recognize this class. The theorem says that all predicates in an equivalence class can be made to have equal weight. It is useful both in proving that certain things cannot be done and in the actual design of perceptrons that recognize a class of pictures with many symmetries.
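For readers unfamiliar with the model, the setting can be stated in outline as follows (my paraphrase, not a quotation from the book). A perceptron computes a predicate of the form

    \[
      \psi(X) = \Bigl[\, \sum_{\varphi \in \Phi} \alpha_\varphi\, \varphi(X) > \theta \,\Bigr],
    \]

where X is a figure on the retina R and each partial predicate \varphi looks only at a limited set of points of R; the order of \psi is the smallest k for which every \varphi can be taken to depend on at most k points. The group invariance theorem then says, roughly: if \psi is invariant under a group G of permutations of R and the family \Phi is closed under G, the coefficients \alpha_\varphi can be chosen equal within each G-equivalence class of \Phi.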
Also noteworthy is the technique whereby one perceptron is related to another either as a "subperceptron" or as a "homomorphic image" (although the authors do not use this terminology). It is used, e.g., in proving that the predicate "connectedness" is not "of finite order." This is done by setting up a perceptron P which recognizes connected pictures and which has a subperceptron P' (looking only at part of the retina) that recognizes pictures consisting of an even number of points. Since the order of a subperceptron P' of P is less than or equal to the order of P, and since P' has already been proved not to be of finite order, it follows that P, the connectedness predicate, is not of finite order.
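To convey the flavor of these objects concretely, here is a small illustration of my own (not an example from the book): the parity predicate can be written as a single linear threshold over masks, but only by using masks as large as the entire retina, which is exactly what being "of finite order" would forbid as the retina grows. The retina size, the weights (-2)^|S|, and all names below are my choices for the illustration.

    from itertools import chain, combinations

    R = range(4)  # a tiny retina of four points (illustrative size)

    def subsets(points):
        # All subsets S of the retina; the mask phi_S(X) is 1 iff S lies inside X.
        pts = list(points)
        return chain.from_iterable(combinations(pts, k) for k in range(len(pts) + 1))

    def parity_perceptron(X):
        # Linear threshold over all masks with weights (-2)^|S|.  For |X| = m,
        #   sum over S inside X of (-2)^|S|  =  (1 - 2)^m  =  (-1)^m,
        # which is negative exactly when m is odd.
        total = sum((-2) ** len(S) for S in subsets(R) if set(S) <= set(X))
        return total < 0

    # Brute-force check: the threshold unit agrees with the parity of |X| everywhere.
    for X in subsets(R):
        assert parity_perceptron(X) == (len(X) % 2 == 1)
    print("parity realized, but only with masks covering the whole retina")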
I mention these examples mainly to point out how different in flavor this book is from the older literature on perceptrons.
Another indication of this is the table of contents, copied below.
I. Algebraic theory of linear parallel predicates
1. Theory of linear Boolean inequalities
2. Group invariance of Boolean inequalities
3. Parity and one-in-a-box predicates
4. The "and/or" theorem.
II. Geometric theory of linear inequalities
5. ψCONNECTED: a geometric property with unbounded order
6. Geometric patterns of small order: spectra and context
7. Stratification and normalization
8. The diameter-limited perceptron
9. Geometric predicates and serial algorithms.
III. Learning theory
10. Magnitude of the coefficients
11. Learning
12. Linear separation and learning
13. Perceptrons and pattern recognition.

Many specific examples are discussed in this book, and they leave
one with doubts about the pattern recognition ability of perceptrons.
Simple geometric properties like connectedness cannot be recognized
by "local" perceptrons. Perceptrons to recognize triangles or circles
turn out to be very big.
What, then, one asks, is the role of perceptrons in current theories of computing?
I expect perceptron theory will come to be viewed as having the
same relation to pattern recognition machines as the theory of finite
state machines has to conventional computers. The basic theoretical
concepts are relevant, and some theorems and algorithms may be
useful in designing small subunits, but it is not practically useful to
view a computer as a finite state machine. Similarly, one cannot expect much benefit from viewing a pattern recognition machine as a
perceptron (even with a liberal definition). But such machines will
(and current optical character readers do) contain subunits that
operate according to principles that originated with perceptrons.
Two concepts from perceptrons that have emerged as basic to pattern recognition are layers (or levels, or stages), where each successive layer performs a well-defined function, and parallelism, where computations or decisions can be made independently of one another. To a large extent perceptron theory is a study of what can be done with machines that have many units operating in parallel and connected in the form of a few layers.
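As a toy illustration of this layered, parallel organization (my own sketch, not an example from the book; the grid, the patches, and the threshold are arbitrary choices), one can picture a first layer of units that each see only one small patch of the retina and fire independently, feeding a single second-layer threshold unit:

    # First layer: four units, each confined to its own 2x2 patch of a 4x4 retina.
    PATCHES = [[(r, c) for r in range(2 * i, 2 * i + 2)
                       for c in range(2 * j, 2 * j + 2)]
               for i in range(2) for j in range(2)]

    def local_unit(patch, figure):
        # A local decision, computable in parallel with all the others:
        # does this patch contain any point of the figure?
        return any(p in figure for p in patch)

    def machine(figure):
        # Second layer: fire iff at least three of the local units fire.
        return sum(local_unit(patch, figure) for patch in PATCHES) >= 3

    print(machine({(0, 0), (0, 3), (2, 3)}))  # three patches active -> True

Each first-layer unit here is "diameter-limited" in the sense of Chapter 8; the interesting questions concern what such an architecture can and cannot compute.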
Minsky and Papert's "Perceptrons" is an important contribution to the growing stock of mathematical models of computing devices. It establishes perceptron theory on a rigorous foundation and will probably generate a revival of interest in perceptrons. It should also attract to the subject automata theorists, who may in the past have been deterred by the lack of mathematical rigor with which perceptrons have usually been treated.
J. NIEVERGELT
Dept. of Computer Science
University of Illinois
Urbana, Ill.

B. TRANSLATOR WRITING SYSTEMS

R69-14 Translator Writing Systems-J. Feldman and D. Gries
(Commun. ACM, vol. 11, pp. 77-113, February 1968).
This paper is a survey of the field of translator writing systems, i.e., those programs which are used to automate the construction of compilers. The paper systematically discusses both the syntax and the semantics of programming languages. Under syntax, the authors consider the automatic construction of recognizers (e.g., precedence techniques) and formal studies of syntax (e.g., LR(k) grammars).
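As a hint of what "precedence techniques" refers to, here is a toy recognizer of my own devising (not taken from the survey): parsing decisions are driven by a table of operator precedences. The two-operator language and every name below are invented for the illustration.

    import re

    PREC = {'+': 1, '*': 2}  # a hypothetical two-operator language

    def tokenize(s):
        return re.findall(r'\d+|[+*()]', s)

    def parse(tokens):
        def primary():
            tok = tokens.pop(0)
            if tok == '(':
                value = expr(0)
                assert tokens.pop(0) == ')', "missing )"
                return value
            return int(tok)

        def expr(min_prec):
            # Consume operators whose table entry is at least min_prec;
            # higher-precedence operators bind their operands first.
            left = primary()
            while tokens and tokens[0] in PREC and PREC[tokens[0]] >= min_prec:
                op = tokens.pop(0)
                right = expr(PREC[op] + 1)
                left = left + right if op == '+' else left * right
            return left

        value = expr(0)
        assert not tokens, "trailing input"
        return value

    print(parse(tokenize("2+3*(4+1)")))  # prints 17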
Under the general topic of semantics, the authors discuss syntax
