
Physica D 42 (1990) 1-11

North-Holland

EMERGENT COMPUTATION:
SELF-ORGANIZING, COLLECTIVE, AND COOPERATIVE PHENOMENA
IN NATURAL AND ARTIFICIAL COMPUTING NETWORKS
INTRODUCTION TO THE PROCEEDINGS OF THE NINTH ANNUAL CNLS CONFERENCE

Stephanie FORREST
Center for Nonlinear Studies and Computing Division, MS-B258, Los Alamos National Laboratory, Los Alamos, NM 87545, USA

Parallel computing has typically emphasized systems that can be explicitly decomposed into independent subunits with minimal interactions. For example, most parallel processing systems achieve speedups by identifying code and data segments that can be executed simultaneously. Under this approach, interactions among the various segments are managed directly, through synchronization, and communication among components is viewed as an inherent cost of computation. As a result, most extant parallel systems require substantial amounts of overhead to manage and coordinate the activities of the various processors, and they obtain speedups that are considerably less than a linear function of the number of processors.

An alternative approach exploits the interactions among simultaneous computations to improve efficiency, increase flexibility, or provide a more natural representation. Researchers in several fields have begun to explore computational models in which the behavior of the entire system is in some sense more than the sum of its parts. These include connectionist models [46], classifier systems [22], cellular automata [5, 7, 56], biological models [11], artificial-life models [33], and the study of cooperation in social systems with no central authority [2]. In these systems interesting global behavior emerges from many local interactions. When the emergent behavior is also a computation, we refer to the system as an emergent computation.

The distinction between standard and emergent computations is analogous to the difference between linear and nonlinear systems. Emergent computations arise from nonlinear systems, while standard computing practices focus on linear behaviors (see section 2.2). The idea that interactions among simple deterministic elements can produce interesting and complex global behaviors is well accepted in the physical sciences. However, the field of computing is oriented towards building systems that accomplish specific tasks, and emergent properties of complex systems are inherently difficult to predict. Thus, it is not immediately obvious how architectures (either hardware or software) that have many interactions with often unpredictable effects can be used effectively, and it is for this reason that I have chosen to use the term "emergent computation" instead of referring more broadly to nonlinear properties of computational systems. The premise of emergent computation is that interesting and useful computational systems can be constructed by exploiting interactions among primitive components, and further, that for some kinds of problems (e.g. modeling intelligent behavior) it may be the only feasible method.
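The sublinear speedups described above are commonly quantified by Amdahl's law, which bounds the achievable speedup by the fraction of a program that must run serially. As a rough illustration (a sketch added here, not part of the original article; the 5% serial fraction is an arbitrary assumption), the following computes that bound:

```python
def amdahl_speedup(n_processors: int, serial_fraction: float) -> float:
    """Upper bound on speedup when `serial_fraction` of the work
    cannot be parallelized (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

# Even with a modest 5% serial fraction, the speedup saturates far
# below the linear bound as processors are added.
for n in (2, 8, 64, 1024):
    print(f"{n:5d} processors: {amdahl_speedup(n, 0.05):6.2f}x (linear would be {n}x)")
```

No matter how many processors are used, the bound approaches 1/serial_fraction (here, 20x), which is one reason to look to interaction among computations, rather than decomposition alone, for performance.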

0167-2789/90/$03.50 © Elsevier Science Publishers B.V. (North-Holland)

To date, there has been no unified attempt to specify carefully what constitutes an emergent computation or to determine what properties are required of the supporting architectures that generate them. This volume contains the Proceedings of a recent conference held at Los Alamos National Laboratory devoted to these questions. Emergent computation is potentially relevant to several areas, including adaptive systems, parallel processing, and cognitive and biological modeling, and the Proceedings reflects this diversity, with contributions from physicists, computer scientists, biologists, psychologists, and philosophers. The Proceedings are thus intended for an interdisciplinary audience. Each paper addresses this by providing introductory explanations beyond those required for a field-specific publication. In this introduction I hope to stimulate discussion of emergent computation beyond the scope of the actual conference. First, a definition is proposed and several detailed examples are presented. Then several common themes are highlighted, and the contents are briefly reviewed.

1. What is emergent computation?

It is increasingly common to describe physical phenomena in terms of their information-processing properties [57, 58]. However, we wish to distinguish emergent computation from the general emergent properties of complex phenomena. We do this by requiring that both the explicit and the emergent levels of a system be computations. For example, a Rayleigh-Bénard convecting flow, in which the dynamics of the fluid particles follow a chaotic path, would not necessarily be considered a form of emergent computation.

The requirements for emergent computation are quite similar to those proposed by Hofstadter in his paper on subcognition [19]. He stresses that information which is absent at lower levels can exist at the level of collective activities. This is the essence of the following constituents of emergent computation:
(i) A collection of agents, each following explicit instructions;
(ii) Interactions among the agents (according to the instructions), which form implicit global patterns at the macroscopic level, i.e. epiphenomena;
(iii) A natural interpretation of the epiphenomena as computations.

The term "explicit instructions" refers to a primitive level of computation, also called "micro-structure", "low-level instructions", "local programs", "concrete structure", and "component subsystems". In a typical case, such as a cellular automaton, each cell acts as an agent executing the instructions in its state-transition table. However, in some cases the coding for an instruction may not be distinguished from the agent that executes it. The important point is that the explicit instructions are at a different (and lower) level than the phenomena of interest. The level of an instruction is determined by the entity that processes it. For example, if the low-level instructions were machine code, they would be executed by hardware, while higher-level instructions would be interpreted by "virtual machines" simulated by the lower-level machine code instructions. The higher-level instructions would be implicit, although not necessarily the product of interactions (see section 2.2 on superposition).

There is a tension between low-level explicit computations and the patterns of their interaction, and the interaction among the levels is important. Global patterns may influence the behavior of the lower-level local instructions; that is, there may be feedback between the levels. Patterns that are interpretable as computations process information, which distinguishes emergent computation from the interesting global properties of many complex systems such as the Rayleigh-Bénard experiment mentioned earlier.

Central to the definition is the question of to what extent the patterns are "in the eye of the beholder", or interpreted, and to what extent they are inherent in the phenomena themselves. This issue arises because the phenomena of interest are implicit rather than explicit. Note that to a lesser extent the interpretation problem also exists in standard computation. The difference is that in emergent computations there is no one address (or set of addresses) where one can read out an answer. Thus, the time for interpretation is likely to be much lower for standard computations than emergent ones. Currently, many emergent computations are interpreted by the perceptual system of the person running the experiment. Thus, when conducting a cellular automaton experiment, researchers typically rely on graphics-based simulations to reveal the phenomena of interest. While quantitative measures can be developed in some cases to interpret the results, scientific-visualization techniques are an integral part of most current emergent computations.

According to the Church-Turing thesis, a Turing machine can both implement any definable computation and simulate any set of explicit instructions we might choose as the basis of an emergent computation [23]. Thus, the concept of emergent computation cannot contribute magical computational properties. Rather, we are advocating a way of thinking about the design of computational systems that could potentially lead to radically different architectures which are more robust and efficient than current designs#1.

#1 Even if in principle emergent computations can be simulated by a Turing machine, interpreting the resulting patterns as computations is likely to be so difficult as to be infeasible.

A related question is whether or not emergent computations can be implemented in more traditional ways (i.e. can the emergent patterns be encoded as a set of explicit instructions instead of indirectly as implicit patterns?). While in some cases it may be possible to encode the emergent patterns directly in some language or machine, there are several advantages to an emergent-computation approach, including efficiency, flexibility, representation, and grounding. First, implementing computations indirectly as emergent patterns may provide implementation efficiencies because of the need for less control over the different components (e.g. processes). As mentioned earlier, a high proportion of computing time is devoted to managing interactions among processes. Other kinds of efficiencies may also be realized, including efficiencies of cost through the use of multiple cheap components, efficient uses of programmer time, and raw computational speed through the use of massive parallelism. Second, flexibility is important for systems that must interact with complex and dynamic environments, e.g. intelligent systems. For these systems, it is impossible to get enough flexibility from explicit instructions; for realistic environments, it is just not possible to program in all contingencies ahead of time. Therefore, the flexibility must appear at the emergent level. The interaction between the instructions and the environment (or between emergent properties of the instructions and the environment) is important, and there are global patterns (symbols, etc.) associated with this instruction-environment interaction. Third, the advantage in representation arises in systems for which it is difficult to articulate a formal description of the emergent level. Several authors have argued for the impossibility of such an undertaking for systems of sufficient complexity such as weather patterns and living systems [19, 33, 35, 48]. In these circumstances, emergent systems may provide the most natural model. Finally, the grounding issue arises if the emergent patterns are intended as real phenomena or models of real phenomena (as in cognitive modeling). In this circumstance, the intended interpretation of a purely formal model (e.g. symbolic models of artificial intelligence) becomes problematic since the model is not connected to (grounded in) the domain of interest (by e.g. a sensory interface). Emergent-computation models can address this problem by using low-level explicit instructions that are directly connected to the domain. Harnad's paper discusses the grounding problem in detail [17].

At the architectural level, there are two criteria that capture the spirit of emergent computation: efficacy and efficiency. The criterion of computational efficacy is met by systems in which each

computational unit has limited processing power (e.g. a finite-state machine) and in which the collective system is computationally more powerful (e.g. a Turing machine). The criterion of computational efficiency can be met by parallel models that are capable of linear or better-than-linear speedups relative to the number of processors used to solve the problem. (Other criteria can be imagined that are less strict. For example, it may be reasonable to construct systems in which not all of the processors are used all of the time. In these cases, the dimensions of time and number of processors might be combined to obtain a reasonable definition somewhat different from that mentioned above.) Previous work on computational systems with interesting collective/emergent properties has generally focused on some variant of the first of these criteria and ignored the second.

2. Example problems

In this section, three concrete examples are presented to illustrate what sorts of problems emergent computation can address and to provide a framework for interpreting the definition. The parallel-processing example shows how emergent computation can lead to efficiency improvements. The section on programming languages establishes the connection between emergent computation and nonlinear systems. Finally, two search techniques are compared to show how the emergent-computation approach to a problem differs from other more conventional approaches.

2.1. Parallel processing

Consider the problem of designing a large complex computational system to perform reliably and efficiently. By "large complex" we mean that there are many components and many interactions among the components. The computational system could be a complicated algorithm that we would like to run in parallel, it could be distributed over many machines with possibly heterogeneous operating systems, or it could simply be a large, evolving software package being modified simultaneously by several different programmers. In the following, we will focus on the parallelization example, but similar arguments can be made for the distributed-systems and software-engineering aspects of the problem.

The conventional approach to such a problem looks for code and/or data segments that can be executed independently, and hence simultaneously. With this view, it is important to minimize the interactions among the various components. Synchronization strategies are defined to manage communication among the independent components. The controlling program knows about all possible interactions and manages them directly. Thus, interactions are viewed as costs to be minimized. As mentioned earlier, most extant parallel systems consume substantial amounts of overhead managing and coordinating the activities of their processors. This is because the flow of data and control rarely matches exactly the interconnection topology of the parallel machine. Potential speedups are also limited by inherently sequential components within a computation, a phenomenon quantified by Amdahl's law [1]. For these reasons, the speedups that are achieved by most parallel systems are considerably less than a linear function of the number of processors.

Emergent computation suggests a view of parallelism in which the interactions among components lead to problem solutions with potentially better-than-linear performance. For example, a system that performs explicit search at the concrete architecture level, but is implicitly searching a much larger space (see section 2.3), meets this criterion.

The claim that superlinear speedups are in principle possible is controversial. The standard counterargument is as follows: if a process P runs in t time on n processors, then there exists a sequential machine that can simulate P in at most (k1·nt) + k2 time steps, where k1 and k2 are constants, so the speedup is only by a factor of n. This counterargument ignores the time required to interpret the results. If the result of the computation is a global pattern (e.g. a pattern of states in a cellular automaton distributed across several time steps), then the procedure for recognizing that same pattern on a sequential machine might require as much computation as the original computation.

Even allowing for the theoretical possibility of superlinear speedups, one might question whether or not it is feasible to actually construct such a system. The following simple example illustrates how interactions among components can provably help the efficiency of a computation. In formal models of parallel computation, there are various assumptions about what happens when two independent processors try to write to the same location in global memory simultaneously. Some of these assumptions forbid any interaction between the two simultaneous writes: for example, one processor is allowed to dominate and write successfully while the other processor is forced to wait (an "Exclusive Write"). Others exploit the interaction: for example, one version of the "Concurrent Write" model prevents both processors from writing but records the collision as a "?" in the memory cell, destroying the previous contents. The Concurrent Write model turns out to be provably stronger than the Exclusive Write in the sense that certain parallel algorithms can be implemented more efficiently with Concurrent Write than they can with Exclusive Write (for example, computing certain kinds of disjunction [24]). The trick is that if the interaction is recorded as a "?", then both processors that tried to write can inspect that memory cell and determine that there was a collision. This information can be exploited in certain circumstances to produce more efficient algorithms. Thus, in the Concurrent Write model, write collisions (interactions) are shown to be a useful form of computation, leading to performance improvements, even if the collisions themselves do not result in transmitted values reaching their destination. This small example meets the criterion of "computing by interaction", although the interactions are recorded explicitly rather than implicitly, as we would expect in a truly emergent computation.

Generally, we expect the emergent-computation approach to parallelism to have the following features: (1) no central authority to control the overall flow of computation, (2) autonomous agents that can communicate with some subset of the other agents directly, (3) global cooperation (see section 3) that emerges as the result of many local interactions, (4) learning and adaptation replacing direct programmed control, and (5) the dynamic behavior of the system taking precedence over static data structures.

2.2. Programming languages and the superposition principle

Emergent computation arises from interaction among separate components. There are several ways in which the standard approach to programming-language design minimizes the potential for emergent computation. This example explores the connection between emergent computation and nonlinear systems.

The notation, or syntax, used to express computer programs is for the most part context free. Roughly, this means that legal programs are required to be written in such a way that the legality (whether or not the program is syntactically correct) of any one part of the program can be determined independently of the other parts. While this is a very powerful property (among other things, it makes it possible to build efficient compilers), emergent computations are almost certainly not context free, since they arise from interactions among components. However, the low-level instructions that generate emergent computations may well be context free.

The semantics of programming languages can be described by any of several different standard mathematical models [52]. These models describe how the syntax should be interpreted, that is, what a program means (more specifically, what function it computes). The meaning of a program helps determine the set of low-level machine instructions that are executed when a program runs. The standard approaches to programming-language semantics discourage emergent computation. For example, in the denotational-semantics approach [52], the meaning of a program is determined by composing the meanings of its constituents. The meaning of an arithmetic expression A ⊕ B might be written as follows (read [expression] as "the meaning of expression")#2:

[A ⊕ B] = [A] + [B].

Thus, the meaning of A is isolated from that of B, and can be computed independently of it. Similar expressions can be written for all the common programming constructs, including assignment statements, conditional statements, and loops. By contrast, we expect that in an emergent computation there would be interactions between components that would not interact in standard computation.

#2 Two different plus symbols are used to distinguish between the symbol plus (⊕) and the operation that implements it (+).

This compositional approach to programming-language semantics is analogous to the superposition principle in physics, which states that for homogeneous linear differential equations the sum of any two solutions is itself a solution. Systems that obey the superposition principle are linear and thus incapable of generating the complicated behaviors associated with nonlinear systems, such as chaos, solitons, and self-organization. Similarly, in the domain of programming languages, the ability to define the meaning of context-free programs in terms of their constituent parts indicates that there are few if any interactions between the meaning of one part and the meaning of another. In these sorts of languages and models, the goal is to minimize side effects that could lead to inadvertent interactions (e.g. changing the value of a global variable); once again, emergent computation is primarily computation by side effect. The analogy between the superposition principle in physics and the compositional approach of denotational semantics suggests that something like the distinction between linear and nonlinear models in physics exists in computational systems. Note, however, that it is possible to write programs in context-free languages that have nonlinear behaviors when executed, e.g. a simple logistic map, just as it may be possible to write the low-level instructions for an emergent-computation system in a context-free language.

While nonlinear computational systems are more difficult to engineer than linear ones, they are capable of much richer behavior. The role of enzymes in catalysis provides a nice example of how nonlinear effects can arise from simple recombinations of compounds [10]. More generally, consider the problem of recombination in adaptive systems. If one can detect combinations that yield effects not anticipated by superposition, then those combinations can be exploited in various ways that are not available in a model based on principles of superposition.

A final example of how the principle of superposition pervades standard programming languages is provided by the Church-Rosser theorem [9]. The λ-calculus defines a formal representation for functions and is closely related to the Lisp programming language. In the λ-calculus, various substitution and conversion rules are defined for reducing λ-expressions to normal form. The Church-Rosser theorem (technically, one of its corollaries) shows that no λ-expression can be converted to two different normal forms (for example, by applying reductions in different orders). This is another example of how computer science gets a lot of leverage out of systems that have something like the principle of superposition. Since nonlinear systems often have the property that operations applied in different orders have different effects, emergent computations will not in general have nice simplification rules like the Church-Rosser theorem.

2.3. Search

The problem of searching a large space of possibilities for an acceptable solution, a particular

datum, or an optimal value is one of the most basic operations performed by a computer. Intelligent systems are often described in terms of their capabilities for "intelligent" search, that is, the ability to search an intractably large space for an acceptable solution, using knowledge-based heuristics, previous experience, etc. The various techniques of intelligent search provide a sharp contrast between emergent computation and traditional approaches to computation. A classical approach to the problem of search is that of an early artificial-intelligence program, the general problem solver (GPS) [39], while the emergent-computation approach is illustrated by the genetic algorithm [20].

GPS uses means-ends analysis to search a state space to find some predetermined goal state. GPS works by defining subgoals part way between the start state and the goal state, and then solving each of the subgoals independently (and recursively). Under this approach, the domain of problem solving is viewed as "nearly decomposable" [50], meaning that for the most part each subgoal can be solved without knowledge of the other subgoals in the system. The overall approach taken by GPS is still prevalent in artificial intelligence, the recent work on SOAR [31] being a good example.

In contrast, genetic algorithms [14, 20] show how emergent computation can be used to search large spaces. There are two levels of the algorithm, explicit and implicit. At the mechanistic, or explicit, level, a genetic algorithm consists of:
(i) A population of randomly chosen bit strings, P ⊂ {0,1}^l, representing an initial set of guesses, where l is a fixed positive integer denoting the length in bits of a guess;
(ii) A fitness function F: guesses → R, where R denotes the real numbers;
(iii) A scheme for differentially reproducing the population based on fitness, such that more copies are made of more fit individuals and fewer or no copies of less fit ones;
(iv) A set of "genetic" operators (e.g. mutation, crossover, and inversion) that modify individuals to produce new guesses;
(v) Iteration for many generations of the cycle: evaluation of fitness, differential reproduction, and application of operators. Over time, the population will become more like the successful individuals of previous generations and less like the unsuccessful ones.

At the virtual, or implicit, level, we can interpret the genetic algorithm as searching a higher-order space of patterns, the space of hyperplanes in {0,1}^l. When one individual is evaluated by the fitness function, many different hyperplanes are being sampled simultaneously, the so-called implicit parallelism of the genetic algorithm. For example, evaluating the string 000 provides information about the following hyperplanes#3:

000, 00#, 0#0, #00, 0##, #0#, ##0, ###.

#3 The # symbol means "don't care". Thus, #00 denotes the pattern, or schema, which requires that the last two bits be set to 0 and will accept a 0 or a 1 in the first bit position. The space of possible schemata is the space of hyperplanes in {0,1}^l. (See ref. [14] for an introduction to both the mechanism and theory of the genetic algorithm.)

Populations undergoing reproduction and cross-over (with some other special conditions) are guaranteed exponentially increasing samples of the observed best schemata (a property described in refs. [14, 20]). Thus, performance improvements provably arise from the collective properties of the individuals in the population over time. The population serves as a distributed database that implicitly contains recoverable information about the multitudes of hyperplanes (because each individual serves in the sample set of many hyperplanes). Put another way, the population reflects the ongoing statistics of the search over time.

Several aspects of emergent computation are illustrated by this example. The algorithm is very flexible, allowing it to track changes in the environment. Since the statistical record of the search is distributed across the population of individuals, interpretation is an issue if there is a need to recover the statistics explicitly. Normally it is sufficient to look at a few typical individuals or to

treat the best individual seen as the "answer" to the problem. The potential efficiency of emergent computation is also demonstrated through the use of implicit parallelism. There is a price, however. While the algorithm is highly efficient, it achieves its efficiency through sampling. This means that there is some loss of accuracy (see Greening's paper in these Proceedings [15] for a careful treatment of this issue).

3. Themes of emergent computation

Three important and overlapping themes of systems that exhibit emergent computation are self-organization, collective phenomena, and cooperative behavior. Here, we use the term self-organization to mean the spontaneous emergence of order from an initially random system, but see ref. [40] for a detailed formulation of self-organization. Collective phenomena are those in which there are many agents, many interactions among the agents, and an emphasis on global patterns. A third component of emergent computation is the notion of cooperative behavior, i.e. that the whole is somehow more than the sum of the parts. In this section, these three themes are illustrated in the context of several examples.

One of the most compelling examples comes from nature in the form of ant colonies. The actions of any individual ant are quite limited and apparently random, but the collective organization and behavior of the colony is highly sophisticated, including such activities as mass communication and nest building [53, 54]. In the absence of any centralized control, the collective entity (the colony) can "decide" (the decision itself is emergent) when, where, and how to build a nest: self-organizing, collective, and cooperative behavior in the extreme. Clearly, many of the activities in an ant colony involve information processing, such as laying trails from the nest to potential food sites, communicating the quality and quantity of food at a particular site, etc. By making an analogy between the cells in a cellular automaton and individual ants, Langton has described computational models that emulate some of the important information-processing aspects of ant colonies [32]. Kauffman's article in these Proceedings [28] explores self-organizing behavior in simple randomly connected networks of Boolean functions. These networks spontaneously organize themselves into regular structures of "frozen components" that are impervious to fluctuating states in the rest of the network. The tendency of a network to exhibit this and other self-organizing behaviors is related to various structural properties of the network and more generally to the problem of adaptation.

Not all examples of emergent computation are beneficial. The Internet (a nationwide network for exchanging electronic mail) was designed so that messages would be routed somewhat randomly (there are usually many different routes that a message may take between two Internet hosts). The intent is for message traffic to be evenly distributed across the various hosts. However, in some circumstances the messages have been found to self-organize into a higher-level structure, called a token-passing ring, so that all of the messages collect at one node, and then are passed along to the next node in the ring [26]. In this case, the self-organization is highly detrimental to the overall performance of the network. The behavior raises the question of what, if any, low-level protocols could reliably prevent harmful self-organizing behavior in a system like the Internet.

In a computational setting, there are at least two quite different types of cooperation: (1) program correctness, and (2) resource allocation. In this context, program correctness means that a collection of independent instructions evolves (more accurately, coevolves) over time in such a way that their interactions result in the desired global behavior. That is, the adaptation takes place at the instruction level, but the behavior of interest is at the collective level. If the collective instructions (a program) learn the correct behavior, we say that they are cooperating. Holland's classifier systems (see papers in these Proceedings) are a
S. Forrest / Emergent computation: A n introduction 9

good example of this sense of cooperation. The second meaning of cooperation occurs when some shared resource on a local area network (e.g., CPU time, printers, network access, etc.) is allocated efficiently among a set of distributed processes. The Huberman and Kephart et al. papers in this volume [24, 30] discuss how robust resource-allocation strategies can emerge in distributed systems.

4. Review of contents

This introduction has described one view of emergent computation. The conference produced several themes and topics of its own. In particular, the themes of design (how to construct such systems), learning and the importance of preexisting structure, the role of parallelism, and the tension between cooperative and competitive models of interaction are central to many of the papers in the Proceedings. Emergent-computation systems can be constructed either by adapting each individual component independently or by tinkering with all of the components as a group. Wilson's paper [55] addresses this issue of local- versus system-level design. Learning is clearly central to emergent computation, since it provides the most natural way to control such a system. Several papers in the Proceedings (Baird, Banzhaf and Haken, Hanson, Omohundro, Schaffer et al. [3, 4, 16, 41, 47]) focus on specific learning issues, and many others use learning as an integral part of their system. The role of parallelism in emergent computation is often assumed. However, Machlin and Stout's paper [36] challenges that assumption, and Greening's paper [15] explores the consequences of using parallelism efficiently.
The papers have been grouped roughly into the following subject areas: (1) artificial networks, (2) learning and adaptation, and (3) biological networks. Thus, all of the papers on biological networks are grouped together, although they emphasize different aspects of problems of emergent computation.
There is a wide range of papers concerned with emergent behavior and computing. Langton's paper [34] illustrates the importance of phase transitions to emergent computation. Huberman [24], Kephart et al. [30], and Maxion [37] discuss emergent behaviors in computing networks. Machlin and Stout's paper [36] illustrates how very simple Turing machines can exhibit interesting and complex behavior. Palmore and Herring's paper [42] provides an example of the connection between emergent computation and real computing procedures (computer arithmetic). Rasmussen's paper [44] uses a simple model of computer memory to show how cooperative "life-like" structures can emerge under various conditions. Finally, Kauffman's paper [28] explores the self-organizing properties of simple Boolean networks.
The adaptive systems aspect of emergent computation is a dominant theme in the Proceedings, and Farmer's paper [10] relates various models of learning through the common thread of adaptive dynamics. Papers on classifier systems and genetic algorithms range from proposals for new mechanisms (Holland [21]) to methods for analyzing classifier system behavior (Compiani et al. [8] and Forrest and Miller [12]), to bridges between genetic algorithms and neural networks (Schaffer et al. [47] and Wilson [55]). Two papers (Hillis [18], Ikegami and Kaneko [25]) explore how interactions between hosts and parasites can improve the global behavior of an evolutionary system. Banzhaf and Haken's [4], Hanson's [16], Kanter's [27], and Churchland's [6] papers describe connectionist models of learning; Greening's paper [15] discusses parallel simulated annealing techniques; Omohundro [41] examines geometric learning algorithms. Papers on the emergence of symbolic reasoning systems from subsymbolic components include Mitchell and Hofstadter [38] (models of analogy-making) and Harnad [17] (connectionism and the symbol-grounding problem).
Several papers describe emergent computations in different biological systems, ranging from the cortex to the cytoskeleton. Reeke and Sporns [45] discuss perceptual and motor systems. Two papers
(Baird [3] and Siegel [49]) focus on the cortex, Keeler's paper [29] examines cerebellar function, and George et al. [13] consider vision. Finally, Rasmussen et al. [43] present a connectionist model of the cytoskeleton.

Acknowledgements

I am grateful to Doyne Farmer, John Holland, Melanie Mitchell, and Quentin Stout for their careful reading of the manuscript and many helpful suggestions. Chris Langton and I have had many productive discussions of these ideas over the years.

References

[1] G.M. Amdahl, Validity of the single processor approach to achieving large-scale computing capabilities, AFIPS Conf. Proc. (1967) 483-485.
[2] R. Axelrod, An evolutionary approach to norms, Am. Political Sci. Rev. 80 (1986) 1095-1111.
[3] B. Baird, Bifurcation and learning in oscillating neural network models of cortex, Physica D 42 (1990) 365-384, these Proceedings.
[4] W. Banzhaf and H. Haken, An energy function for specialization, Physica D 42 (1990) 257-264, these Proceedings.
[5] A.W. Burks, ed., Essays on Cellular Automata (University of Illinois Press, Urbana, IL, 1970).
[6] P.M. Churchland, On the nature of explanation: a PDP approach, Physica D 42 (1990) 281-292, these Proceedings.
[7] E.F. Codd, Cellular Automata (Academic Press, New York, 1968).
[8] M. Compiani, D. Montanari and R. Serra, Learning and bucket brigade dynamics in classifier systems, Physica D 42 (1990) 202-212, these Proceedings.
[9] H.B. Curry and R. Feys, Combinatory Logic, Vol. I (North-Holland, Amsterdam, 1968).
[10] J.D. Farmer, A Rosetta Stone for connectionism, Physica D 42 (1990) 153-187, these Proceedings.
[11] J.D. Farmer, N.H. Packard and A.S. Perelson, The immune system, adaptation, and machine learning, Physica D 22 (1986) 187-204.
[12] S. Forrest and J. Miller, Emergent behaviors of classifier systems, Physica D 42 (1990) 213-227, these Proceedings.
[13] J.S. George, C.J. Aine and E.R. Flynn, Neuromagnetic studies of human vision: noninvasive characterization of functional architecture, Physica D 42 (1990) 411-427, these Proceedings.
[14] D.E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning (Addison-Wesley, Reading, MA, 1989).
[15] D.R. Greening, Parallel simulated annealing techniques, Physica D 42 (1990) 293-306, these Proceedings.
[16] S.J. Hanson, A stochastic version of the delta rule, Physica D 42 (1990) 265-272, these Proceedings.
[17] S. Harnad, The symbol grounding problem, Physica D 42 (1990) 335-346, these Proceedings.
[18] W.D. Hillis, Co-evolving parasites improve simulated evolution as an optimization procedure, Physica D 42 (1990) 228-234, these Proceedings.
[19] D.R. Hofstadter, Artificial intelligence: subcognition as computation, Technical Report 132, Indiana University, Bloomington, IN (1982).
[20] J.H. Holland, Adaptation in Natural and Artificial Systems (University of Michigan Press, Ann Arbor, MI, 1975).
[21] J.H. Holland, Concerning the emergence of tag-mediated lookahead in classifier systems, Physica D 42 (1990) 188-201, these Proceedings.
[22] J.H. Holland, K.J. Holyoak, R.E. Nisbett and P. Thagard, Induction: Processes of Inference, Learning, and Discovery (MIT Press, Cambridge, MA, 1986).
[23] J.E. Hopcroft and J.D. Ullman, Introduction to Automata Theory, Languages, and Computation (Addison-Wesley, Reading, MA, 1979).
[24] B.A. Huberman, The performance of cooperative processes, Physica D 42 (1990) 38-47, these Proceedings.
[25] T. Ikegami and K. Kaneko, Computer symbiosis - emergence of symbiotic behavior through evolution, Physica D 42 (1990) 235-243, these Proceedings.
[26] V. Jacobson, personal communication.
[27] I. Kanter, Synchronous or asynchronous parallel dynamics - which is more different, Physica D 42 (1990) 273-280, these Proceedings.
[28] S.A. Kauffman, Requirements for evolvability in complex systems: orderly dynamics and frozen components, Physica D 42 (1990) 135-152, these Proceedings.
[29] J.D. Keeler, A dynamical systems view of cerebellar function, Physica D 42 (1990) 396-410, these Proceedings.
[30] J.O. Kephart, T. Hogg and B.A. Huberman, Collective behavior of predictive agents, Physica D 42 (1990) 48-65, these Proceedings.
[31] J.E. Laird, A. Newell and P.S. Rosenbloom, Soar: an architecture for general intelligence, Artificial Intelligence 33 (1987) 1-64.
[32] C.G. Langton, Studying artificial life with cellular automata, Physica D 22 (1986) 120-149.
[33] C.G. Langton, ed., Artificial Life, Santa Fe Institute Studies in the Sciences of Complexity (Addison-Wesley, Reading, MA, 1989).
[34] C.G. Langton, Computation at the edge of chaos: phase transitions and emergent computation, Physica D 42 (1990) 12-37, these Proceedings.
[35] E.N. Lorenz, Deterministic nonperiodic flow, J. Atmos. Sci. 20 (1963) 130-141.
[36] R. Machlin and Q.F. Stout, The complex behavior of simple machines, Physica D 42 (1990) 85-98, these Proceedings.
[37] R.A. Maxion, Toward diagnosis as an emergent behavior in a network ecosystem, Physica D 42 (1990) 66-84, these Proceedings.
[38] M. Mitchell and D.R. Hofstadter, The emergence of understanding in a computer model of concepts and analogy-making, Physica D 42 (1990) 322-334, these Proceedings.
[39] A. Newell and H.A. Simon, A program that simulates human thought, in: Computers and Thought, eds. E.A. Feigenbaum and J. Feldman (McGraw-Hill, New York, 1963) 279-296.
[40] G. Nicolis and I. Prigogine, Self-Organization in Nonequilibrium Systems (Wiley, New York, 1977).
[41] S.M. Omohundro, Geometric learning algorithms, Physica D 42 (1990) 307-321, these Proceedings.
[42] J. Palmore and C. Herring, Computer arithmetic, chaos and fractals, Physica D 42 (1990) 99-110, these Proceedings.
[43] S. Rasmussen, H. Karampurwala, R. Vaidyanath, K.S. Jensen and S. Hameroff, Computational connectionism within neurons: a model of cytoskeletal automata subserving neural networks, Physica D 42 (1990) 428-449, these Proceedings.
[44] S. Rasmussen, C. Knudsen, R. Feldberg and M. Hindsholm, The Coreworld: emergence and evolution of cooperative structures in a computational chemistry, Physica D 42 (1990) 111-134, these Proceedings.
[45] G.N. Reeke Jr. and O. Sporns, Selectionist models of perceptual and motor systems and implications for functionalist theories of brain function, Physica D 42 (1990) 347-364, these Proceedings.
[46] D.E. Rumelhart, J.L. McClelland and the PDP Research Group, Parallel Distributed Processing: Explorations in the Microstructure of Cognition (MIT Press, Cambridge, MA, 1986).
[47] J.D. Schaffer, R.A. Caruana and L.J. Eshelman, Using genetic search to exploit the emergent behavior of neural networks, Physica D 42 (1990) 244-248, these Proceedings.
[48] R. Shaw, Strange attractors, chaotic behavior, and information flow, Z. Naturforsch. 36a (1981) 80-112.
[49] R.M. Siegel, Non-linear dynamical system theory and primary visual cortical processing, Physica D 42 (1990) 385-395, these Proceedings.
[50] H.A. Simon, The Sciences of the Artificial (MIT Press, Cambridge, MA, 1969).
[51] Q. Stout, personal communication.
[52] J.E. Stoy, Denotational Semantics: The Scott-Strachey Approach to Programming Language Theory (MIT Press, Cambridge, MA, 1977).
[53] E.O. Wilson, The Insect Societies (Belknap/Harvard Univ. Press, Cambridge, MA, 1971).
[54] E.O. Wilson, Sociobiology (Belknap/Harvard Univ. Press, Cambridge, MA, 1975).
[55] S.W. Wilson, Perceptron redux: emergence of structure, Physica D 42 (1990) 249-256, these Proceedings.
[56] S. Wolfram, Universality and complexity in cellular automata, Physica D 10 (1984) 1-35.
[57] W.H. Zurek, Algorithmic randomness and physical entropy, Phys. Rev. A 40 (1989) 4731-4751.
[58] W.H. Zurek, Thermodynamic cost of computation, algorithmic complexity and the information metric, Nature 341 (1989) 119-124.
