Worldviews, Science and Us - Philosophy and Complexity - Carlos Gershenson, Diederik Aerts, Bruce Edmonds
Worldviews, Science and Us
Philosophy and Complexity
editors
Carlos Gershenson
Brussels Free University, Belgium
Diederik Aerts
Brussels Free University, Belgium
Bruce Edmonds
Manchester Metropolitan University Business School, UK
Worldviews, Science and Us
Philosophy and Complexity
World Scientific
NEW JERSEY • LONDON • SINGAPORE • BEIJING • SHANGHAI • HONG KONG • TAIPEI • CHENNAI
Published by
World Scientific Publishing Co. Pte. Ltd.
5 Toh Tuck Link, Singapore 596224
USA office: 27 Warren Street, Suite 401-402, Hackensack, NJ 07601
UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE
For photocopying of material in this volume, please pay a copying fee through the Copyright
Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to
photocopy is not required from the publisher.
ISBN-13 978-981-270-548-8
ISBN-10 981-270-548-1
Introduction
Carlos Gershenson, Diederik Aerts and Bruce Edmonds 1
regardless of how complex the world is^a, there is an inescapable tension between the relative simplicity of human cognition and the apparent (or real) complexity of the phenomena it is trying to understand. The different philosophies and approaches can, to a large extent, be seen as responses to this tension. Thus some seek to show that this tension is not feasibly soluble, which may have consequences for how we think about or manage the world we inhabit. Some seek to explore productive ways forward: how one might explain and understand the complexity we observe. Others take a step back to examine what consequences the tension itself has upon the scientific method.
This volume is the fourth in a series entitled ‘Worldviews, Science and Us’ published by World Scientific. The series collects interdisciplinary articles with the global aim of stimulating new insights about the world and the place of humans in this world, and also about the role science plays in the construction of new aspects of worldviews.
Many chapters in this volume are derived from presentations given at the Philosophy and Complexity session of the Complexity, Science and Society conference, held in Liverpool, UK, between September 11th and 14th, 2005. We appreciate the work of the program committee for that session, which was able to select relevant works from all the submissions received. The program committee members were: William Bechtel, Mark Bedau, Jacques Dubucs, Bruce Edmonds, Carlos Gershenson, Francis Heylighen, Alicia Juarrero, Michael Lissack, Chris Lucas, Edgar Morin, Robert Pennock, Kurt Richardson, John Symons, Pedro Sotolongo, Jean Paul Van Bendegem, Franz Wuketits, and Roger Young. We would also like to thank Robert Geyer, Jan Bogg, and Abbie Badcock for their organizational support at the conference. Clément Vidal helped in polishing the translation of Edgar Morin's chapter. Finally, the speakers and participants of the session and the quality of their discussions motivated us to compile this volume.
After the conference, the participants were invited to prepare a
manuscript and more people were invited to contribute to the volume. An
internal review process followed, where contributors reviewed other submis-
sions. From these, the actual chapters were selected to be included in the
volume.
In what follows, summaries of the chapters are given. The chapters are diverse enough to make them difficult to classify into distinct topic areas, so the order is simply dictated by similarity.
^a This is obviously not a question that can itself be settled in a scientific way.
on vision.
Héctor Zenil and Francisco Hernández Quiroz discuss the possibility of using artificial neural network models to characterize the computational power of the human mind.
Helen De Cruz argues for an active externalism as a requirement for the
emergence of algebra, studying its emergence in different cultural settings.
RESTRICTED COMPLEXITY, GENERAL COMPLEXITY*
EDGAR MORIN
CNRS Emeritus Director
Centre d'Études Transdisciplinaires. Sociologie, Anthropologie, Histoire
École des Hautes Études en Sciences Sociales
the universe, where the second law leads toward dispersion, uniformity, and thus towards death. This conception of the death of the universe, long ago rejected, has reappeared recently in cosmology with the discovery of dark energy, which will lead to the dispersion of galaxies and would seem to announce that the universe tends towards a generalized dispersion. As the poet Eliot said: "the universe will die in a whisper"...
Thus, the arrival of disorder, dispersion, and disintegration constituted a fatal attack on the perfect, ordered, and determinist vision.
And many efforts will be needed (we are not there yet, precisely because it goes against the reigning paradigm) to understand that the principle of dispersion, which has appeared since the birth of the universe with this incredible deflagration improperly named the big bang, is combined with a contrary principle of bonding and organization which is manifested in the creation of nuclei, atoms, galaxies, stars, molecules, and life.
3. Interaction Order/Disorder/Organization
How is it that these two phenomena are related?
This is what I tried to show in the first volume of La Méthode (The Method). We need to associate the antagonistic principles of order and disorder, and associate them while making another principle emerge: that of organization.
Here is in fact a complex vision, which one refused to consider for a very long time, for one could not conceive that disorder can be compatible with order, and that organization can be related to disorder while being antagonistic to it.
At the same time as that of the universe, the implacable order of life was altered. Lamarck introduced the idea of evolution; Darwin introduced variation and competition as motors of evolution. Post-Darwinism, even if it has in certain cases attenuated the radical character of the conflict, has brought this other antinomy of order: chance, I would even say a vice of chance. Within the neo-Darwinian conception, to avoid calling "creation" or "invention" the new forms of living organization such as wings or eyes (one is very afraid of the word "invention" and of the word "creation"), one has put chance at the prow. One can, moreover, understand the fear of creation, because science rejects creationism, i.e. the idea that God is the creator of living forms. But the rejection of creationism ended up masking the creativity that manifests itself in the history of life and in the history of humanity. And, from the philosophical point of view, it is rather recently
that Bergson, and then in another way, Castoriadis, put at the centre of
their conception the idea of creation.
In addition, at the beginning of the twentieth century, microphysics introduced a fundamental uncertainty into the universe of particles, which ceases to obey the conceptions of space and time characteristic of our universe, called macro-physical. How then are these two universes, which are the same but at different scales, compatible? One begins today to conceive that one can pass from the micro-physical universe to ours, since between them a certain number of quantum elements are connected by virtue of a process called decoherence. But there remains this formidable logical and conceptual hiatus between the two physics.
Finally, at a very large, mega-physical scale, Einstein's theory discovers that space and time are related to one another, with the result that our lived and perceived reality becomes only meso-physical, situated between micro-physical reality and mega-physical reality.
4. Chaos
All this means that the dogmas of classical science have been breached, but only de facto: although increasingly mummified, they remain.
Yet a certain number of strange terms appeared. For example, the term "catastrophe", suggested by René Thom to try to make intelligible the discontinuous changes of form; then the fractals of Mandelbrot; then the physical theories of chaos, which distinguish themselves from the rest, since today it is thought that the solar system, which seems to obey an absolutely impeccable order, measurable with the most extreme precision, is, when its evolution over millions of years is considered, a chaotic system comprising a dynamic instability modifying, for example, Earth's rotation about itself or around the Sun. A chaotic process may obey deterministic initial states, but these cannot be known exhaustively, and the interactions developed within this process alter any prediction. Negligible variations have considerable consequences over large time scales. The word chaos, in these physics, has a very limited meaning: that of apparent disorder and unpredictability. Determinism is saved in principle, but it is inoperative since one cannot know the initial states exhaustively. We are in fact, since the original deflagration and forever, plunged into a chaotic universe.
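As a purely illustrative aside (my own minimal sketch, not part of Morin's text), the sensitivity to initial conditions described above can be seen numerically: two trajectories of the logistic map that start almost identically diverge completely, so that determinism in principle coexists with unpredictability in practice.

```python
# Minimal illustration (not from the chapter): deterministic yet unpredictable.
# Two logistic-map trajectories with nearly identical initial states diverge.

def logistic(x, r=4.0):
    """One deterministic step of the logistic map."""
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-9   # initial states differing by one part in a billion
for step in range(60):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  |x-y|={abs(x - y):.2e}")
# After a few dozen steps the separation is of order 1: an initial difference
# far too small ever to be measured ends up dominating the outcome.
```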
word complexity would encompass them all. Only, this complexity is restricted to systems which can be considered complex because empirically they present themselves as a multiplicity of interrelated, interdependent, and retroactively associated processes. In fact, complexity is never questioned nor thought epistemologically.
Here the epistemological cut between restricted and generalized complexity appears, because I think that any system, whatever it might be, is complex by its very nature.
Restricted complexity made possible important advances in formalization and in the possibilities of modeling, which themselves favor interdisciplinary potentialities. But one still remains within the epistemology of classical science. When one searches for the "laws of complexity", one still attaches complexity as a kind of wagon behind the truth locomotive, that which produces laws. A hybrid was formed between the principles of traditional science and the advances towards what lies beyond it. Actually, one avoids the fundamental problem of complexity, which is epistemological, cognitive, paradigmatic. To some extent, one recognizes complexity, but by decomplexifying it. In this way, the breach is opened, then one tries to plug it: the paradigm of classical science remains, only fissured.
6. Generalized complexity
But then, what is “generalized” complexity? It requires, I repeat, an epistemological rethinking, that is to say, one bearing on the organization of knowledge itself.
And it is a paradigmatic problem in the sense in which I have defined “paradigm”^a. Since a paradigm of simplification controls classical science by imposing a principle of reduction and a principle of disjunction on any knowledge, there should be a paradigm of complexity that would instead impose a principle of distinction and a principle of conjunction.
In opposition to reduction, complexity requires that one try to comprehend the relations between the whole and the parts. The knowledge of the parts is not enough, and the knowledge of the whole as a whole is not enough if one ignores its parts; one is thus brought to go back and forth in a loop to gather the knowledge of the whole and of its parts. Thus, the principle of reduction is replaced by a principle that conceives the relation of mutual whole-part implication.
our cells contains the totality of our genetic inheritance, only a small part of it is active, the rest being inhibited. In the human relation between individual and society, the possibilities of liberty (delinquent or criminal in the extreme) inherent in each individual will be inhibited by the organization of the police, the laws, and the social order.
Consequently, as Pascal said, we should conceive the circular relation: ‘one cannot know the parts if the whole is not known, but one cannot know the whole if the parts are not known’.
Thus, the notion of organization becomes capital, since it is through the organization of the parts into a whole that emergent qualities appear and inhibited qualities disappear^b.
culture that produces, as emergent, psychic and mental qualities, with all that involves language, consciousness, etc.
Reductionists are unable to conceive the reality of the spirit and want to explain everything starting from the neurons. The spiritualists, incapable of conceiving the emergence of the spirit starting from the brain-culture relation, make of the brain at most a kind of television set.
a great intellectual isolation within his profession, in the 70's. Finally the word emerged in the 80's-90's in Santa Fe as a new idea, whereas it had already existed for nearly half a century. But it has still not imposed itself in biology.
I give the name self-eco-organization to living organization, following the idea that self-organization depends on its environment to draw energy and information from: indeed, as it constitutes an organization that works to maintain itself, it degrades energy by its work, and therefore must draw energy from its environment. Moreover, it must seek its food and defend itself against threats, and thus must comprise a minimum of cognitive capacities.
One arrives at what I logically call the complex of autonomy-dependence. For a living being to be autonomous, it must depend on its environment for matter and energy, and also for knowledge and information. The more autonomy develops, the more multiple dependencies develop. The more my computer allows me to have autonomous thought, the more it depends on electricity, networks, and sociological and material constraints. One then arrives at a new complexity in conceiving living organization: autonomy cannot be conceived without its ecology. Moreover, it is necessary for us to see a self-generating and self-producing process, that is to say, the idea of a recursive loop which obliges us to break with our classical ideas of product → producer, and of cause → effect.
the whole compared to a cold environment. That is to say, feedback is a process which complexifies causality. But the consequences of this had not been drawn at the epistemological level.
Thus feedback is already a complex concept, even in non-living systems. Negative feedback is what makes it possible to cancel the deviations that unceasingly tend to form, like the fall in temperature compared to the standard. Positive feedback develops when a regulation system is no longer able to cancel the deviations; these can then be amplified and run away toward a kind of generalized disintegration, which is often the case in our physical world. But we could see, following an idea advanced more than fifty years ago by Magoroh Maruyama, that positive feedback, i.e. increasing deviation, is an element that allows transformation in human history. All the great transformation processes started with deviations, such as the monotheistic deviation in a polytheistic world, the religious deviation of the message of Jesus within the Jewish world, then, deviation within the deviation, its transformation by Paul within the Roman empire; or the deviation of the message of Mohammed, driven out of Mecca and taking refuge in Medina. The birth of capitalism was itself a deviation in a feudal world. The birth of modern science was a deviating process in the XVIIth century. Socialism was a deviating idea in the XIXth century. In other words, all these processes start with deviations that, when they are not suffocated or exterminated, are then able to set off chains of transformations.
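As a purely illustrative aside (an assumed toy model, not part of Morin's text), the contrast drawn above between negative and positive feedback can be sketched numerically: a thermostat-like negative feedback cancels deviations from a standard, while a positive feedback amplifies them into a runaway.

```python
# Illustrative sketch only (assumed toy model, not from the chapter):
# negative feedback damps deviations, positive feedback amplifies them.

def simulate(gain, steps=20, setpoint=20.0, start=25.0):
    """Iterate T <- T + gain * (setpoint - T); gain > 0 corrects, gain < 0 amplifies."""
    temperature = start
    trajectory = [temperature]
    for _ in range(steps):
        deviation = setpoint - temperature
        temperature += gain * deviation
        trajectory.append(temperature)
    return trajectory

negative = simulate(gain=0.5)    # deviation shrinks back toward the standard
positive = simulate(gain=-0.5)   # deviation grows without bound (runaway)
print("negative feedback:", [round(t, 2) for t in negative[:6]], "...")
print("positive feedback:", [round(t, 2) for t in positive[:6]], "...")
```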
so that Jane Goodall could perceive that chimpanzees had different personalities, with rather complex relations of friendship and rivalry; a whole psychology and sociology of chimpanzees, invisible to studies in a laboratory or in a cage, appeared in their complexity.
The idea of knowing living beings in their environment became crucial in animal ethology. Let us repeat it: the autonomy of the living needs to be known in its environment.
From now on, becoming aware of the degradation that our techno-economic development inflicts on the biosphere, we realize the vital link with this same biosphere that we believed we had reduced to the rank of a manipulable object. If we degrade it, we degrade ourselves, and if we destroy it, we destroy ourselves.
The need for contextualization is extremely important. I would even say that it is a principle of knowledge. Anybody who has made a translation from a foreign language will look up an unknown word in the dictionary; but since words are polysemous, it is not immediately known which is the right translation; the sense of the word will be sought in the sense of the sentence, in the light of the global sense of the text. Through this play from text to word, from text to context, and from context to word, a sense will crystallize. In other words, insertion in the text and in the context is an evident cognitive necessity. Take for example economics, the most advanced social science from a mathematical point of view, but one which is isolated from the human, social, historical, and sociological contexts: its predictive power is extremely weak because the economy does not function in isolation; its forecasts need to be unceasingly revised, which shows us the weakness of a science that is very advanced but too closed.
More generally, mutual contextualization is lacking in the whole of the social sciences.
I have often quoted the case of the Aswan dam because it is revealing and significant: it was built in Nasser's Egypt because it would make it possible to regulate the course of a capricious river, the Nile, and to produce electric power for a country which had a great need of it. However, after some time, what happened? The dam retained part of the silt that fertilized the Nile valley, which obliged the farming population to desert the fields and overpopulate large metropolises like Cairo; it retained part of the fish that the residents ate; moreover, today the accumulation of silt weakens the dam and causes new technical problems. That does not mean that the Aswan dam should not have been built, but that all decisions taken in a techno-economic context alone are likely to be disastrous in their consequences.
The objective knowledge which is its ideal resulted in the need to eliminate subjectivity, i.e. the emotional part inherent in each observer, in each scientist, but it also comprised the elimination of the subject, i.e. the being which conceives and knows. However, any knowledge, including objective knowledge, is at the same time a cerebral translation starting from data of the external world and a mental reconstruction starting from certain organizing potentialities of the spirit. It is certain that the idea of a pure objectivity is utopian. Scientific objectivity is produced by beings who are subjects, within given historical conditions, starting from the rules of the scientific game. The great contribution of Kant was to show that the object of knowledge is co-constructed by our spirit. He indicated to us that it is necessary to know knowledge in order to know its possibilities and limits. The knowledge of knowledge is a requirement of complex thinking.
As Husserl indicated in the 30's, in particular in his lecture on the crisis of European science, the sciences developed extremely sophisticated means to know external objects, but no means to know themselves. There is no science of science, and even a science of science would be insufficient if it did not include epistemological problems. Science is a tumultuous building site, science is a process that could not be programmed in advance, because one can never program what one will find, since the characteristic of a discovery is its unexpectedness. This uncontrolled process has led today to the development of potentialities of destruction and of manipulation, which must bring about the introduction into science of a double consciousness: a consciousness of itself, and an ethical conscience.
Also, I believe that it will be necessary to arrive more and more at a scientific knowledge that integrates knowledge of the human spirit with knowledge of the object which this spirit grasps, and that recognizes the inseparability of object and subject.
this cut. This became a frightening ditch: the ditch of ignorance that separates scientific culture from the culture of the humanities.
But the current has started to reverse: the most advanced sciences arrive at fundamental philosophical problems. Why is there a universe out of nothing? How was this universe born from a vacuum which was not, at the same time, the vacuum? What is reality? Is the essence of the universe veiled or totally cognizable?
The problem of life is from now on posed in a complexity that exceeds biology: the singular conditions of its origin, the conditions of emergence of its creative powers. Bergson was mistaken in thinking that there was an élan vital, but was right in speaking of creative evolution. He could even have spoken of evolutionary creativity.
Today we can foresee the possibility of creating life. From the moment when it is believed that life is a process developed starting only from physico-chemical matter under certain conditions, in underwater thermal vents or elsewhere, one can very well consider creating the physical, chemical, and thermodynamic conditions which give birth to organisms endowed with the qualities that one calls life. We can also foresee the possibility of modifying the human being in its biological nature. Therefore, we have to meditate about life as we have never done before. And at the same time we must meditate about our relationship with the biosphere.
Thus all the most advanced sciences arrive at fundamental philosophical problems that they thought they had eliminated. They do not only encounter them; they renew them.
If one defines philosophy by the will and capacity for reflection, it is necessary that reflexivity also be introduced into the sciences, which eliminates neither the relative autonomy of philosophy nor the relative autonomy of scientific procedures compared to philosophical procedures. Finally and especially, any knowledge, including scientific knowledge, must comprise in itself an epistemological reflection on its own foundations, principles, and limits.
Still today there is the illusion that complexity is a philosophical problem and not a scientific one. In a certain way this is true, and in a certain way it is false. It is true when you place yourself at the point of view of an isolated and separated object: the fact that you isolate and separate the object makes the complexity disappear; thus it is not a scientific problem from the point of view of a closed discipline and a decontextualized object. But as soon as you start to connect these isolated objects, you are confronted with the problem of complexity.
Consider that when Fermi elucidated the structure of the atom in the 30's, it was a purely speculative discovery and he had by no means thought that this could allow the fabrication of an atomic bomb. However, a few years later, the same Fermi went to the United States to contribute to the fabrication of the atomic bomb that would be used on Hiroshima and Nagasaki. When Watson and Crick determined the structure of the genetic inheritance in DNA, they thought that it was a great conquest of knowledge without any practical consequences. And hardly ten years after their discovery, the problem of genetic manipulation was posed in the biology community.
The ecology of action has a universal value. One can think of examples in our recent French history: a dissolution of Parliament by President Chirac, meant to secure a governmental majority, led to a socialist majority; a referendum meant to win general support led to its rejection. Gorbachev attempted a reform to save the Soviet Union, but it contributed to its disintegration. And when one sees that a revolution was made in 1917 to suppress the exploitation of man by his fellow man and to create a new society founded on the principles of community and liberty, and that this revolution not only caused immense losses of blood, destruction, and repression by a police system, but after seventy years led to its contrary, i.e. to a capitalism even more fierce and savage than that of tsarist times, and with a return of religion! Everything that this revolution wanted to destroy was resurrected. How can one not think of the ecology of action!
DONALD C. MIKULECKY
Senior Fellow, Virginia Commonwealth University’s Center for the Study of
Biological Complexity, Richmond, VA
1. INTRODUCTION
How can we treat science as an object of scientific inquiry? The central problem arises with that question. Science has tried to rid itself of circularity and in so doing has become a very limited method for examining the complex world it tries to have us understand. Self-reference is at the center of so many of the interesting things we want to understand, including and especially life itself. The existence of this self-referential character is the essence of what we have come to call "complexity". The works of Robert Rosen [1, 2, 3] spell this out in great detail. This series of investigations began over half a century ago yet still remains virtually unrecognized by the vast majority of those who call themselves "scientists". That fact alone can be a springboard to launch a study of science as an object, which is what this study is all about. I have reviewed the technical aspects of Rosen's work elsewhere [4] and will consider the broader philosophical implications here.
Using the ideas Rosen developed, we can begin with the following work-
ing definition of complexity:
way that the things that are “more than the sum of the material parts”
become entities having an ontology of their own. This idea is central to an
understanding of complexity.
1.6. Fragmentability
It follows directly from what has been developed so far that the complex system, with its context-dependent functional components, cannot be fragmented into material parts. Simple mechanisms or machines can always be fragmented into material parts.
1.7. Computability
This subject can fill a number of books. It is the subject of heated debates, and for those who have placed their faith in computers and computable models, the stakes are very high. The sides of this debate can be exemplified on one hand by the hard-core proponents of "artificial life" and on the other hand by Robert Rosen and many others who have come to see his understanding of the complex real world as a fundamental breakthrough in world models. The proponents of the reductionist/mechanistic view believe that the Church-Turing thesis is correct.
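As an aside not drawn from Rosen's own formalism, the classical limit of a purely computational view can be illustrated with the standard halting-problem diagonal sketch: assuming a universal halting decider exists leads to a self-referential contradiction, so some well-posed questions about programs are not computable.

```python
# Classical diagonal-argument sketch (a standard result, not Rosen's argument).
# Suppose a total function halts(program, argument) existed that always
# correctly returns True/False. The construction below would be contradictory.

def paradox(halts):
    """Build the self-referential program that defeats any proposed decider."""
    def diagonal(program):
        # If the decider says "program halts on itself", loop forever; else halt.
        if halts(program, program):
            while True:
                pass
        return "halted"
    # Applying diagonal to itself: it halts iff the decider says it does not.
    return diagonal(diagonal)

# No implementation of halts() can make paradox(halts) consistent, so no
# general halting decider exists: not everything well defined is computable.
```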
its patrons. Both complexity theory and the philosophy of science run into this political aspect of science in a number of ways. By describing the limits of the reductionist/mechanist paradigm there is a danger of casting doubt on the ability of science to produce what its patrons expect from it. Teaching such things to science students can place them in difficult situations. Students must choose projects to satisfy their mentors, who in turn must satisfy their patrons. Reflecting these strong shaping forces back on the definition we roughly outlined, it should be clear that the differences between science as an ideal and science as practiced by scientists can be very great if the ideal is concerned with showing limits rather than with convincing patrons that their investment will pay off.
As research tools and equipment grow in expense and sophistication, the tendency to occupy one's time using the equipment also grows. Thus the activity we call science becomes driven by questions that can be answered by the equipment rather than by the desire to know in its "purest" form.
The methodology of science is not codified in any clear way. Books have been written about the best way to accept candidates for scientific law and scientific theory. Books have also been written about the failure of other disciplines to satisfy the criteria of the scientific method.
Periodically, so-called paradigm shifts are claimed. Often these are not universally recognized as such, but whatever name one wants to assign, change does occur in the method, and it can be significant in its magnitude. Proponents of complexity theory in its many forms often put it forward as a candidate for such status.
Is there a model for this kind of complexity in systems of human thought and the activities that feed into such systems? Clearly there is. Robert Rosen chose Number Theory as his example. He reviewed the efforts of the formalists to purge the field of all circularities and self-referential loops. To make a very long story very short, the attempt failed miserably. Each attempt to present a finished product was met by demonstrations that important things about numbers were left out. This was because they were insisting that the theory be self-consistent, and rightly so. It took Kurt Gödel to prove that such systems cannot be both self-consistent and complete. By requiring self-consistency they doomed their efforts to producing an incomplete system. The only way to deal with this was to rely on something external to attempt the completion. But then the larger system is subject to the same problem and an infinite regress results. This issue has come up often in discussions among complexity theory proponents. One way some seem to satisfy themselves that they need not heed
built up from distinct parts and can be reduced to those parts without losing its machine-like character. We call this idea "Cartesian Reductionism". We have seen that complex (real) systems cannot be successfully reduced to material parts without the loss of some significant attributes in the process. This led to the axiom that the whole is more than the mere sum of its parts. Adopting this axiom as a truth leads to an inescapable conclusion: if the whole is more than the sum of its parts, there must be more to things than atoms and molecules. Reducing a complex whole in the real world always destroys something. What is lost is an elusive entity, but it is also the central idea in the concept of complexity as applied to the real world. If there is more to things in the real world than atoms and molecules, then this something has an existence, an ontology. This is the reason why complexity is a difficult concept. It cannot be both ways. Either a real-world whole is more than the sum of its parts, or it can be reduced to atoms and molecules with nothing being lost and the world is made of machine-like things. Cartesian reductionism does not work for making models of complex systems; it only reduces them to simple mechanisms that may reflect some aspect, but merely as a shadow of the complex whole.
the material world can be reduced to particle motion, that is, the motions
and interactions of atoms and molecules. When we look carefully at the
subject matter of physics, we see that it is the application of the Newtonian
Paradigm to the universe. This application then makes the world into
simple mechanisms. That is to say that the subject matter of physics is the
study of simple mechanisms. Note that in this context, "simple" means the
opposite of complex, not the opposite of complicated.
Figure 1. (Diagram: a system's causal events are linked to implication via encoding and decoding.)
as seen through the eyes of science). As we began to look more deeply into the world we found aspects that the Newtonian Paradigm failed to capture. Then we needed an explanation. Complexity was born! This can easily be formalized, and it has very profound meaning.
4. THERMODYNAMIC REASONING AS A
TRANSITION TO COMPLEXITY SCIENCE
Rosen had little to say about thermodynamics in his critique of reduction-
ism in the form of the Newtonian paradigm. There is probably a good
reason for this. Clifford Truesdell [5] once made a very good case for ther-
modynamics having a certain “strangeness” as a part of physics, or we could
say as part of the reductionist Newtonian paradigm. This strangeness needs
to be considered in more detail for it reveals the seeds of the ideas Rosen
found to be true about the mechanistic approach to reality. Thermody-
namics is probably more poorly understood by mechanistic scientists than
any other branch of physics. The reasons for this are deep and revealing.
One of the most serious consequences of the inability of those doing either mechanistic science or thermodynamics to see the problem clearly is the resultant gap in the Newtonian largest model. It is possible to see mechanistic reasoning and thermodynamic reasoning as different models of complex reality, in the spirit of the ideas discussed here. Instead, from its beginnings thermodynamics was put under severe pressure to conform to the mechanist's largest model and thereby suffered a lack of development as its own alternative to mechanistic physics.
Thermodynamics came into being for very practical reasons. The boring
of cannons, the brewing of beer and the steam engine are but a few of
the reasons that the frictionless world of Newton’s paradigm needed to be
patched up. Heat as a form of energy also had to be dealt with. Perpetual
motion machines had to be dealt with in a rational manner to curtail the
squandering of time and energy, sometimes by very bright scientists. As
progress was made, there were also immediate problems presented to
those who had been content with the frictionless world where heat was
merely another form of energy and nothing more troublesome than that.
Since the material world was to be understood in mechanistic terms, a new
kind of mechanics called “statistical” mechanics had to be developed to
try to make a bridge between thermodynamic reasoning and mechanistic
reasoning.
Why is this so? The answer is one of the best demonstrations of the
power of Robert Rosen's analysis, even though he may never have seen it himself. There is a fundamental difference between mechanistic reasoning and thermodynamic reasoning, and it cannot be erased by the limited successes of the use of statistical mechanics to bring them together.
Thermodynamics is about those properties of systems that are true
independent of their mechanism. This is why there is a fundamental asym-
metry in the relationship between mechanistic descriptions of systems and
thermodynamic descriptions of systems. From the mechanistic information
we can deduce all the thermodynamic properties of that system. However,
given only thermodynamic information we can deduce nothing about mech-
anism. This is in spite of the fact that thermodynamics makes it possible
for us to reject classes of models such as perpetual motion machines. (This
does not stop such models from appearing in subtle forms in the modern
literature.) This asymmetry is poorly understood because thermodynamics
is not a tool mechanists see as valuable. Some of this attitude is justified
by the structure of thermodynamics as it exists in texts and courses. In
some ways the field is a prisoner of its own history. It is useful to examine
that history with this problem in mind.
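Before turning to that history, a small numerical sketch (an assumed toy example, not from the chapter) makes the asymmetry just described concrete: two quite different microscopic velocity distributions yield the same macroscopic, temperature-like number, so the thermodynamic description cannot be inverted to recover the mechanism.

```python
# Toy illustration (assumed example): many different mechanistic microstates
# map to the same thermodynamic description, so the mapping cannot be inverted.
import random

def temperature_like(speeds):
    """A temperature-like macroscopic number: mean squared speed (arbitrary units)."""
    return sum(v * v for v in speeds) / len(speeds)

random.seed(0)
n = 100_000
# Microstate A: Gaussian-distributed velocities.
state_a = [random.gauss(0.0, 1.0) for _ in range(n)]
# Microstate B: a completely different mechanism, uniform velocities,
# scaled so that the mean squared speed matches state A.
state_b = [random.uniform(-3.0 ** 0.5, 3.0 ** 0.5) for _ in range(n)]

print("T(A) ~", round(temperature_like(state_a), 3))
print("T(B) ~", round(temperature_like(state_b), 3))
# Both come out close to 1.0: from the macroscopic number alone,
# microstates A and B are indistinguishable.
```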
S = Q/T (1)
More useful was the notion of the entropy change associated with an
isothermal quasi-stationary process:
dS = dQ/T (2)
The entropy of a system, to oversimplify somewhat, is a measure of the
“quality” of the energy of that same system. This follows from reasoning
about heat engines being operated in cycles. In order for a heat engine (e.g.
a steam turbine) to produce work that can be used it must have a source of
hot steam and that steam must flow through it to a reservoir that is cooler.
The First law of thermodynamics is nothing new to physics, since it is simply the idea that energy cannot be created or destroyed. Hence a steam engine uses some of the heat energy, converts it to mechanical work, another form of energy, and allows the rest of the heat energy to pass through it with the matter (in this case water and/or water vapor) that is conserved in the process. The Second law of thermodynamics forbids the complete conversion of heat energy to mechanical work, as well as forbidding the operation of the engine between a source and sink at the same temperature. This is why entropy became a useful and necessary concept. The result of "cooling" the matter in the process of extracting mechanical work is an increase in the heat/absolute temperature quotient, that is, the entropy of the system. One way of stating the second law is that any real process must result in an overall increase of entropy. It is possible to make devices that locally decrease entropy, but only if the global result is an increase. At equilibrium, there is no change in the amount of entropy:
dS = 0 (3)
In any real process,
dS > 0 (4)
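A small numerical sketch (assumed figures, not from the chapter) shows the bookkeeping behind these statements: when heat flows from a hot reservoir and part of it is converted to work, the entropy lost by the hot source must be more than made up by the entropy gained by the cold sink, so that dS > 0 overall.

```python
# Toy second-law bookkeeping with assumed numbers (illustrative only).
T_hot, T_cold = 600.0, 300.0      # reservoir temperatures in kelvin
Q_in = 1000.0                     # heat drawn from the hot reservoir, in joules
work = 400.0                      # mechanical work produced (below the Carnot limit)
Q_out = Q_in - work               # first law: the rest is rejected as heat

dS_hot = -Q_in / T_hot            # entropy leaves the hot reservoir
dS_cold = Q_out / T_cold          # entropy enters the cold reservoir
dS_total = dS_hot + dS_cold

carnot_limit = Q_in * (1.0 - T_cold / T_hot)   # maximum extractable work
print(f"entropy change overall: {dS_total:+.3f} J/K (must be >= 0)")
print(f"work extracted: {work} J, Carnot maximum: {carnot_limit:.0f} J")
# Trying to extract more work than the Carnot maximum would make dS_total
# negative, which the second law forbids.
```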
The fact that so much of what we know about thermodynamics came from reasoning involving equilibrium and isolated systems is ironic when we recall that it was practical matters, the doing of things, the carrying out of processes, that motivated the entire development. Nevertheless, by resorting to what now seems a rather clumsy thought process involving the carrying out of processes through small incremental changes, much was learned.
What is problematic is that the world of real processes is what is of interest, and equilibria are merely the natural endpoints of some of those processes in the situation where the system is totally isolated.
The physics of Newton and his followers was the physics of systems that had dynamics yet were without the creation of entropy. The "ideal" pendulum, for example, cannot exist and is the end of a limiting process where friction is diminished to zero. Friction is another manifestation of the second law of thermodynamics. It is the recognition that in any real process some of the energy must be converted to heat energy. This heat energy cannot be converted back to the original form of energy by the same process. The process is "irreversible". One good way to see this is in a famous experiment performed by Joule to measure the factor that must be used when energy in the form of mechanical work is changed to heat energy by any process. The number he obtained was called the mechanical equivalent of heat. The device he used has value as a source of insight far beyond its utility in obtaining that number.
The device was a jar of water surrounded by an insulating jacket whose purpose was to isolate the system thermally. Thus heat energy could not enter or leave the system through its walls. At the top of the jar was mounted a well-insulated crank attached to a paddle wheel inside the jar. Turning the crank allowed mechanical work to be performed on the water in the jar. The system allows a simple mechanistic explanation for how the mechanical work is converted to heat energy: the water is put into motion in a directed manner by the paddle wheels, but loses that directed motion eventually, the motion becoming instead the "random" motion characteristic of heat energy. Having a thermometer mounted so that the water temperature can be monitored allows a direct relation to be established between the work done turning the crank and the appearance of heat in the jar, using the heat capacity of water to calculate the amount of added heat energy from the increase in measured temperature.
The process is irreversible. There is no way to reverse this randomiza-
tion of molecular motion to turn the paddle wheel and thereby gain back
the mechanical work. The energy has been converted once and for all as
long as it is confined to the thermally isolated jar.
This is a very revealing example of the nature of friction. Sometimes it
is difficult to see a mechanistic picture of how this frictional “dissipation”
of energy occurs. Chemical reactions have their own version of friction and
are as irreversible as any mechanical process. In electricity the resistance
to the flow of electrons results in electrical energy being converted to heat.
Knowing this, physics has created two categories for the systems it stud-
ies, conservative and non-conservative systems. Newtonian dynamics was
developed using conservative, frictionless systems (fictional) and then ex-
also was shaped by the same urges. The difference is that quantum theory is much more adaptable to the mechanistic reductionist approach. Its mathematics and the interpretation of that mathematics could be given the very same form as what it was clearly showing to be a failure of Newtonian mechanics. Thermodynamics, on the other hand, uses a different form of mathematics. To most scientists' satisfaction, quantum mechanics simply helped further the knowledge generated by the Newtonian paradigm and did it little harm. It allowed the reductionist philosophy to appear to have established itself at all levels in the material description of reality. It became possible to see a more or less universal bottom-up approach to science. Yet there are findings from network thermodynamics that raise some very interesting, and possibly troubling, questions about this. The lack of interest in these questions parallels a similar lack of interest in the questions raised about science by Robert Rosen as he explored the complexity of the world science seemed to have mastered. It is very important that these questions, and the findings generated by the use of thermodynamic reasoning not constrained by the mechanistic mindset, do not get totally ignored and forgotten. Yet that may be exactly what is happening as reductionism forges ahead. Some insight into these events of scientific history, and the philosophical oversight or lack of same, can be had thanks to the contributions of Lakoff [9], who has applied cognitive linguistics to similar situations in politics. His ideas can be married to a concept of memes as "packages" of ideas that get passed along in a manner much analogous to genes.
the real world using thermodynamic reasoning are very closely related. Rosen, as a student of Rashevsky, recognized that topological mathematics was necessary to create the relational biology they both had envisioned as the approach to living systems that was not locked into the severe limits of the reductionist paradigm. The elusive qualities of living things that distinguish them from non-living mechanistic things could only be dealt with by encodings into topological mathematical formalisms. Hence Rosen developed the metabolism-repair {M,R} system as the formalism he would manipulate to come up with a clear distinction between the class of things we call "organisms" and the class of things we call "machines". In doing this he forced his audience to make a hard choice. Either they would accept a formalism that "kept the organization" but left out the "physics", or they were locked into the reductionist's world of physics, which necessarily lost crucial aspects of the complex whole as it was reduced to mere material parts.
Thermodynamics in its most powerful form mimics this situation. There are many examples, but only a few need be mentioned to make this point.
the domain of what we often call the "mind-body" problem. The work of Bach-y-Rita, as interpreted by Kercel [12, 13] and others, goes to the heart of many modern controversies. Their interpretation of real experiments involving the sensory behavior of humans is shaking quite a few foundations. At the center of all their models is the central role played by closed loops of causality. These loops are the self-referential heart of complex reality, and the human mind seems replete with them. This work, and the work of Louie [14], a student of Rosen, showing that Rosen's {M,R} systems necessarily have non-computable aspects, suggests that "artificial intelligence" and "artificial life" are indeed merely machine intelligence and simulations of systems that have some limited life-like qualities. These technological marvels are distinctly different from human intelligence and living systems in many important ways.
These new investigations, as well as the areas of thermodynamics that have been largely disregarded because of their distinctly non-mechanistic character, strongly suggest that science will find a way to include other formalisms and break free from the restrictions it has imposed on itself.
References
1. Rosen, R. Anticipatory Systems, New York, Pergamon, 1985.
2. Rosen, R. Life Itself, New York, Columbia, 1991.
3. Rosen, R. Essays on Life Itself, New York, Columbia, 2000.
4. Mikulecky, D. C. The circle that never ends: Can complexity be made simple? In Complexity in Chemistry, Biology, and Ecology, D. Bonchev and D. H. Rouvray, eds. New York, Springer, 2005.
5. Truesdell, C. Rational Thermodynamics, New York, McGraw-Hill, 1969.
6. Onsager, L. Reciprocal Relations in Irreversible Processes I, Phys. Rev., 1931a, 37, 405-426.
7. Onsager, L. Reciprocal Relations in Irreversible Processes II, Phys. Rev., 1931b, 38, 2265-2279.
8. Mikulecky, D. C. Applications of Network Thermodynamics to Problems in Biomedical Engineering, New York, New York University Press, 1993.
9. Lakoff, G. Don't Think of an Elephant! Know Your Values and Frame the Debate, White River Junction, Vermont, Chelsea Green Publishing, 2004.
10. Schneider, E. D. and Sagan, D. Into the Cool, Chicago, University of Chicago, 2005.
11. Schneider, E. D. and Kay, J. J. Life as a manifestation of the second law of thermodynamics, in Modeling Complex Biological Systems, M. Witten and D. C. Mikulecky, eds., Special Issue of Mathematical and Computer Modelling, 1994, 19, 25-48.
12. Kercel, S. W. Journal of Integrative Neuroscience, 2005, 4, 403-406.
13. Kercel, S. W., Reber, A. S. and Manges, W. W. Some Radical Implications
PAUL CILLIERS
University of Stellenbosch
fpc@sun.ac.za
In philosophy the winner of the race is the one who can run most slowly.
Or: the one who gets there last.
Wittgenstein (Culture and Value)
1. Introduction
As a result of a whole range of what one could call "pathologies" in contemporary culture, the idea of "slowing down" has of late been mooted in a number of contexts^a. A few can be named briefly. The "Slow Food" movement, which started in Italy but has a worldwide following, extols the virtues of decent food made from decent ingredients without compromise. The resistance shown to "junk food" is not only based on aesthetic considerations, but also on ethical (and nutritional) ones. The movement promoting "Slow Cities", also of Italian origin, fosters an understanding of cities which is more humane. Such cities should encourage walking rather than driving, have small shops with local products rather than shopping malls and, in general, provide opportunities for the community to interact, not to live in isolation. "Slow schooling" is a movement which questions educational processes in a world geared for instant results. It emphasises the contextual nature of knowledge and reminds us that education is a process, not a function. On a more personal level, "slow sex" involves attitudes which try to prevent the values of the marketplace from also ruling our intimate relationships. We need to recognize that the journey is
more important than the destination, and that takes time. An immediate
or perpetual orgasm is really no orgasm at all.
There are a number of very important issues at stake in these examples.
In what follows, however, the focus will not be on these social movements
as such, but on some of the underlying principles which make the debate
on slowness an important one. Through an analysis of the temporal nature
of complex systems, it will be shown that the cult of speed, and especially
the understanding that speed is related to efficiency, is a destructive one.
A slower approach is necessary, not only for the survival of certain important values or because of romantic ideals, but also because it allows us to cope with the demands of a complex world in a better way.
The argument will be made initially by briefly analysing current dis-
tortions in our understanding of time. These distortions result, on the one
hand, from the rational and instrumental theories we have about a modern
world, and, on the other, from the effects of certain technologies, especially
communication and computer technologies. In order to show why these are
“distortions”, or at least, to show why these distortions are problematic,
the temporal nature of complex systems will be discussed. The relation-
ship between memory and anticipation will be central to this discussion,
but attention will also be paid to the importance of delay and iteration.
These characteristics of complex systems have important implications for
our understanding of the formation of identity, both individual identity as
well as the identity of groups. In closing, a number of general cultural issues
involving the fast and the slow will be looked at.
It is important to realise that the argument for slowness is not a conser-
vative one; at least not in the political sense of the word. It is not merely
backwards-looking nor a glorification of what has been. Although it em-
phasises the historical nature of knowledge and memory, the argument for
slowness is forward-looking; it is about an engagement with the future as
much as with the past. Slowness is in itself a temporal notion, and in many
ways the opposite of the notion “static”. In point of fact, it is actually an
unreflective fastness which always returns you to the same place.
It should also be stated up front that there is no argument against an
appropriate fastness. A stew should simmer slowly, but a good steak should
be grilled intensely and briefly. The argument is against unreflective speed,
speed at all cost, or more precisely, against speed as a virtue in itself; against
the alignment of “speed” with notions like efficiency, success, quality and
importance. The point is that a system which has carefully accumulated
the relevant memories and experiences over time will be in a better position
to react quickly than one which is perpetually jumping from one state to
the other.
Perhaps “slow” and “fast” are not exactly the correct terms to use.
Terms like “reflective” and “unreflective”, or “mediated” and “unmediated”
may be more accurate. Nevertheless, the debate taking place uses “slow”
and “fast”, and the terms do have a certain rhetorical significance. If we
stay with their use, it is done in a metonymical way. The whole point of
this paper is to give them a richer meaning.
could say) and delay (to defer, a temporal notion) as the engines of meaning
(Derrida 1982). The present consists only as a combination of memory (of
what has been) and anticipation (of what is to come).
In his novel Slowness, Milan Kundera (1996) uses the metaphor of somebody riding a motorcycle as being constantly in the present. Speed and the demands of the machine reduce his horizon to something immediate.
Someone walking, however, is moving at a pace which allows for a much
wider horizon. The stroll unfolds in time in a way which opens up reflection
about where we are coming from and where we are going to, as we walk.
This theme of both the past and the future being present in a meaningful
experience of the present could be pursued in much more detail from both
a Freudian and Derridean perspective - and several others too - but the
argument for a meaningful temporality, i.e. something slower, will be made
here from the perspective of the dynamics of complex systems.
^b This argument can also be made using the example of the brain, and links with many
^c This process is known as the "use principle" or Hebb's rule. For more detail see Cilliers (1998: 17-18, 93-94).
^d Hysteresis is the "lagging of effect when cause varies" (Oxford Concise).
everything quickly will only have something nice to eat now and then, and
then purely by chance.
^e See Hayles (1999) for a discussion of these issues. Primary sources are Shannon (1949) and Chaitin (1987).
References
1. Badmington, Neil (ed.). 2000. Posthumanism. London: Palgrave.
2. Bauman, Zygmunt. 1992. Intimations of Postmodernity. London: Routledge.
3. Braidotti, Rosi. "Cyberfeminism with a difference." http://www.let.uu.nl/womensstudies/rosi/cyberfem.htm (Accessed 8 August 2005)
4. Chaitin, G.J. 1987. Algorithmic Information Theory. Cambridge: Cambridge University Press.
5. Cilliers, Paul. 1998. Complexity and Postmodernism: Understanding Complex Systems. London: Routledge.
6. Cilliers, Paul. 2005. Complexity, Deconstruction and Relativism. Theory, Culture & Society, Vol. 22 (5), pp. 255-267.
7. Coetzee, J.M. 2005. Slow Man. London: Secker and Warburg.
8. Derrida, Jacques. 1976. Of Grammatology. Baltimore: Johns Hopkins University Press.
9. Derrida, Jacques. 1982. "Différance" in Derrida, Jacques. Margins of Philosophy. Chicago: The Harvester Press, pp. 1-27.
10. Eriksen, Thomas Hylland. 2001. Tyranny of the Moment: Fast and Slow Time in the Information Age. London: Pluto Press.
11. Hayles, N. Katherine. 1999. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: The University of Chicago Press.
12. Honoré, Carl. 2004. In Praise of Slowness: How a Worldwide Movement is Challenging the Cult of Speed. London: Orion.
13. Kundera, Milan. 1996. Slowness. London: Faber and Faber.
14. Nadolny, Sten. 2003. The Discovery of Slowness. Edinburgh: Canongate.
15. Nowotny, Helga. 1994. Time: The Modern and the Postmodern Experience. Oxford: Polity Press.
16. Parkins, Wendy. 2004. "Out of Time: Fast subjects and slow living". Time & Society, Vol. 13, No. 2, pp. 363-382.
17. Shannon, C.E. 1949. Communication in the presence of noise. Proc. IRE, Vol. 37, pp. 10-21.
18. Taylor, Mark C. 2003. The Moment of Complexity: Emerging Network Culture. Chicago: The University of Chicago Press.
SIMPLICITY IS NOT TRUTH-INDICATIVE
BRUCE EDMONDS
Centre for Policy Modelling
Manchester Metropolitan University
http://bruce.edmonds.name
1. Introduction
The notion of simplicity as an important property of theories is traditionally
ascribed to William of Occam (1180), who extensively used the principle in
argument to rebut over-elaborate metaphysical constructions. It has been
invoked as part of the explanation why the Copernican account of plane-
tary motion succeeded over the Ptolemaic one whilst the evidence was still
equivocal. Newton made it one of his rules of reasoning: "we are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances, for Nature is pleased with simplicity and affects not the pomp of superfluous causes." [Ref. 1, page 3]. Einstein chose the
simplest possible system of tensor equations to formalise his theory of Gen-
eral Relativity.2 Phrases like “for the sake of simplicity” are used to justify
many modelling decisions (e.g. Ref. 3, page 1). More fundamentally, some
go further, claiming (or more often, just assuming) that a simpler theory is
somehow more likely to be true (or closer to the truth) than a more com-
plex theory. For some surveys on the philosophy of simplicity see Refs. 4
and 5.
In this paper I will argue (alongside others, including those in Refs. 6
and 7) that, in general, there is no reason to suppose that the simpler theory
is more likely to be true. In other words simplicity does not tell us anything
about underlying ‘model bias’.^a In this I am saying more than just that
simplicity is not necessarily truth-indicative. For although I admit there
may be special circumstances where simplicity is truth-indicative, I do not
^a Model bias is the effect that the form of a model has upon its efficacy, for example using a series of sinusoid functions to model some data rather than a polynomial of some order.
see that there is any evidence that simplicity and truth are fundamentally
related. By analogy, there may be special circumstances where the colour
of an object is an indication of its mass (e.g. when someone has gone round
calibrating weights and painting them accordingly) but, in general, colour is
not at all an indication of mass. To say only that “colour is not necessarily
an indication of mass” would be highly misleading, for unless somehow
contrived, colour and mass are completely unrelated. I claim that there is
no more connection between simplicity and truth than between colour and
mass - they are unrelated properties that are only correlated where this
is contrived to be so (i.e. by prior assumption or arrangement).
Thus, in particular, when two theories have equal explanatory power there is no particular reason to prefer the simpler other than convenience;b it may be that the more complex turns out to be more useful in the future.

b Other than in the peculiar case where one has a limited set of data to fit, as discussed in the section on special cases below.
2. Elaboration
If one has a theory whose predictions are insufficiently accurate to be ac-
ceptable, then it is necessary to change the theory. For human beings it
is much easier to elaborate the theory, or otherwise tinker with it, than to
undertake a more radical shift (for example, by scrapping the theory and
starting again). This elaboration may take many forms, including: adding
extra variables or parameters; adding special cases; putting in terms to
represent random noise; complicating the model with extra equations or
rules; adding meta-rules or models; or using more complicated functions.
In Machine Learning terms this might be characterised as a preference for
depth-first search over breadth-first search.
Classic examples of the elaboration of unsatisfactory theories include
increasing the layers of epi-cycles to explain the observations of the orbits
of planets in terms of an increasing number of circles and increasing the
number of variables and equations in the national economic models in the
UK. In the former case the elaboration did increase the accuracy on the
existing data because the system of epi-cycles can be fitted arbitrarily well
to this data, but this is better done with ellipses. Given enough data
the system of epi-cycles can be used to successfully predict the orbits to
any desired degree of accuracyc, but ellipses will do it with less data and
considerably less calculation because the form of ellipses is more suited
to describing the true orbits.d Once the a priori bias towards circles is
abandoned the system of epi-cycles becomes pointless. In the latter case of the macro-economic models the elaboration did not result in improved prediction of future trends, and in particular these models have failed to predict all the turning points in the UK economy.e
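To make the contrast concrete, here is a small numerical sketch (my own illustration, not taken from the chapter: the orbit's eccentricity, the grid size and the numbers of epicycle terms are all assumptions). It approximates a Keplerian orbit by a truncated sum of uniformly rotating circles, recovered here by a Fourier decomposition, and shows the error shrinking only as more and more epicycles are added, whereas the ellipse captures the orbit's shape with a couple of parameters from the start.

import numpy as np

# Illustrative sketch: approximating an eccentric Kepler orbit (an ellipse
# traversed at non-uniform speed) by a truncated sum of uniformly rotating
# circles ('epicycles'). Eccentricity and term counts are assumptions.
a, e = 1.0, 0.4                                            # semi-major axis, eccentricity
M = np.linspace(0.0, 2.0 * np.pi, 1024, endpoint=False)   # mean anomaly (uniform in time)

E = M.copy()                        # solve Kepler's equation M = E - e*sin(E)
for _ in range(50):                 # by simple fixed-point iteration
    E = M + e * np.sin(E)

# Position in the orbital plane as a complex number, with the focus at the origin.
z = a * (np.cos(E) - e) + 1j * a * np.sqrt(1.0 - e**2) * np.sin(E)

def epicycle_error(n_terms):
    """Max position error when only the n largest Fourier terms are kept."""
    coeffs = np.fft.fft(z) / z.size
    keep = np.argsort(np.abs(coeffs))[::-1][:n_terms]
    truncated = np.zeros_like(coeffs)
    truncated[keep] = coeffs[keep]
    z_approx = np.fft.ifft(truncated * z.size)
    return np.max(np.abs(z - z_approx))

for n in (2, 4, 8, 16):
    print(f"{n:2d} epicycles: max position error = {epicycle_error(n):.4f}")

The point mirrors the text: the epicycle family can be pushed to any accuracy, but only by spending more terms (and more data to fix them) than a form suited to the true orbits requires.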
Why humans prefer elaboration to more radical theory change is not
entirely clear. It may be that it is easier to understand and predict the effect
of minor changes to the formulation of a theory in terms of its content, so that,
if one wants to make a change where one is more certain of improvement,
c One needs to be a little careful here: unlike ellipses, the system of epi-cycles does not provide of itself any information about the future course of orbits, but given a postulated orbit shape (including ellipses) epi-cycles can be used to express such orbits and hence used in accurate prediction of the future courses of the planets as well as ellipses.
d Beyond a certain point they will do better than an ellipse because they will be able to include the Einsteinian corrections, however this can still be done more easily with corrections to an ellipse.
e Although one was apparently predicted, this was due to intervention by the modeller on the basis of his expert knowledge (the destocking that can occur after an oil shock); for details about this see Ref. 8.
minor changes are a more reliable way of obtaining this. In this case a
more certain but marginal improvement may well be preferred to highly
uncertain significant improvement. Alternatively, it may be that using a
certain model structure biases our view because we get used to framing our
descriptions and observations in this way, using variations of the model as
our ‘language’ of representation - akin to Kuhn’s ‘theoretical spectacles’ (Ref. 9).
In other words, once we have started to think about some phenomena in
terms of a particular model, it becomes difficult to think about it in other
ways. Finally, it may be due to simple laziness - a wish to ‘fit’ the current
data quickly rather than going for longer-term fundamental success (e.g. prediction on unseen data).
Regardless of the reasons for this tendency towards elaboration, we are
well aware of this tendency in our fellows and make use of this knowledge.
In particular, we know to distrust a theory (or a story) that shows signs
of elaboration - for such elaboration is evidence that the theory might
have needed such elaboration in the past - for example because it had
a poor record with respect to the evidence. Of course, elaboration is not
proof of such a poor record. It may be that the theory was originally
formulated in an elaborate form before being tested, but this would be an
unusual way for a human to proceed. Thus when presented with alternative
theories developed by our fellows, one simpler than the other, we may well
guess that the complex one has been elaborated and this would be some
(albeit fallible) evidence that it has needed such elaboration. Here it is
knowledge of the (human) process that produced the theories that informs
us implicitly about their past record against the evidence and past record
against evidence is the only guide we have to the future performance of
theories.
This knowledge, along with an understandable preference for theories
that are easily constructible, comprehensible, testable, and communicable
provide strong reasons for choosing the simplest adequate theory presented
to us. An extra tendency for simplicity to be, of its own, truth-indicative
is not needed to explain this preference.
In addition to this preference for choosing simpler theories, we also have
a bias towards simpler theories in their construction, in that we tend to start
our search with something fairly simple and work ‘outwards’ from this point.
This process stops when we ‘reach’ an acceptable theory (for our purposes)
- in the language of economics we are ‘satisficers’ rather than ‘optimisers’f.
f An ‘optimiser’ is someone who searches for the best solution, whilst a ‘satisficer’ accepts
This means that it is almost certain that we will be satisfied with a theory
that is simpler than the best theory (if one such exists, alternatively a
better theory). This tendency to, on average and in the long term, work
from the simpler to the less simple is partly a consequence of the fact that
there is a lower bound on the simplicity of our constructions. This lower
bound might be represented by single constants in algebra; the empty set
in set theory; or a basic non-compound proposition.
An alternative approach might be to start from a reasonably complex theory and look at modifications to this (in both simpler and more complex directions). For example, in ethology one might start from a description of how an animal appeared to behave and then eliminate irrelevant aspects
progressively. In this case, unless one had definite extra evidence to the
contrary, the sensible bias is towards the original, more complex, account.
It is important to constrain our theorising in order that it be effective, but
one does not have to use simplicity as this constraint. In Ref. 10 we argue for the use of evidence to determine the starting point for model adaptation rather than simplicity.
3. A Priori Arguments
There have been a number of a priori arguments aimed at justifying a
bias towards simplicity - I discuss two of these below. It is impossible
to disprove all such arguments in a single chapter so I will confine myself
to these two and then make some more general arguments why any such
attempt is likely to be mistaken.
Kemeny (Ref. 11) makes an argument for preferring simpler theories on the presumption that there is a sequence of hypothesis sets of increasing complexity and that a completely correct hypothesis exists - so that once one
has reached the set of hypotheses that contains the correct one it is not
necessary to search for more complex hypotheses. Thus once the evidence
shows one has an acceptable theory one should not look for more complex
theories. However this does not show that this is likely to be a better or
more efficient search method than starting with complex hypotheses and
working from there. The conclusion is merely a reflection of the assump-
tions he has made. It also does not deal with the case where there is not
a single completely ‘correct’ theory but a series of theories of increasing
complexity that more precisely fit the data as they get more complex.
5. Special Cases
Although there is no reason to suppose simplicity is in general truth-
indicative, there are special circumstances where it might be. These are
circumstances where we already have some knowledge that would lead us
to expect that the solution might be simple. That is, the evidence points to
a particular theory (or class of theories) and those happen to be simple. I
briefly consider these below.
The first is when the phenomena are the result of deliberate human
g For example, the number of areas that n straight lines, each crossing the perimeter of a circle twice and such that no three lines intersect in a single point, cut that circle into.
more complex - the lower bound and the ‘inhabited’ part of the possibility
space do not impinge upon the possibilities that much so as to significantly
bias its evolution towards complexity.
Another situation is where one already knows that there is some correct
model of some minimum complexity. In this case one heuristic for finding
a correct model is to work outwards, searching for increasingly complex
models until one comes upon it. There are, of course, other heuristics - the
primary reasons for starting small are practical; it is far easier and quicker
to search through simpler models. In more common situations it might
be the case that increasingly complex models may approximate the correct
model increasingly, but never completely, well or that no model (however
complex) does better than a certain extent. In the first case one is forced
into some trade-off between accuracy and convenience. In the second case
maybe no model is acceptable, and it is the whole family of models that
needs to be changed.
Clearly if one has some information about the complexity of the sought
after model before the search starts, using that information can make search
more efficient, but then this is merely a case of exploiting some knowledge
about the solution - it is not a reason for a general bias towards simplicity in other cases. In such circumstances as those above there is some reason to err towards simplicity. However in these circumstances the principle is reducible to a straightforward application of our knowledge about the phenomena that leads us in that direction - principles of simplicity do not give us any ‘extra’ guidance. In these circumstances, instead of invoking simplicity as a justification, the reason for the expectation can be made
explicit. Simplicity as a justification is redundant here. Likewise if one had
some evidence that the desired model or theory was complex before the
search starts, a bias away from simplicity would be helpful.
6. Versions of “Simplicity”
In order to justify the selection of theories on the basis of simplicity,
philosophers have produced many accounts of what simplicity is. These have included almost every possible non-evidential advantage a theory might have, including: number of parameters; extensional plurality; falsifiability; likelihood (Refs. 26, 6); stability (Ref. 27); logical expressive power (Ref. 28); and content (Ref. 29). These approaches give an uncomfortable feeling of putting the
cart before the horse - for instead of deciding what simplicity could mean-
ingfully be and then seeing if (and when) it is useful to be biased towards
h For this reason a definition of complexity is not relevant here. For those who are interested in what might be a reasonable definition of complexity see Ref. 31.
can have strong visual intuitions about the suitability of certain choices
which strongly relate to a set of heuristics that are effective in the domains
one happens to have experienced. These intuitions might not be helpful in
general.
In particular, one might happen to know that there is likely to be some
random noise in the data, so that choosing a curve that goes through every
data point is not likely to result in a line that reflects the case when more
data is added. In this case one might choose a smoother curve that might
approximate to a local average of the data. A traditional method of smooth-
ing is choosing a polynomial of a lower order or with fewer parameters. This
is not, of course, the only choice for smoothing; one might instead use, for
example, local regression32 where the fitted curve is a smoothed combina-
tion of lines to fit segments of the data. Thus an assumption that a curve
with a simpler functional form might be more appropriate for a certain set
of data can depend on: firstly, that one has knowledge about the nature of
the noise in the data and, secondly, that one chooses the simplicity of the
functional form as one’s method of smoothing. If, on the other hand, one
knew that there was likely to be a sinusoid addition to the underlying data
one might seek for such regularities and separate this out. Here a preference
for simplicity is merely an expression of a search bias which encodes one’s
prior knowledge of the domain.
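As a toy illustration of this point (the data-generating curve, the noise level and the window width are my own assumptions, not anything from the text), the sketch below smooths the same noisy sample in two ways: by restricting the fit to a low-order polynomial, and by a simple local average of the kind that local regression generalises.

import numpy as np

# Illustrative sketch: two ways of smoothing noisy observations.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 60)
signal = np.sin(2.0 * np.pi * x)                       # assumed underlying curve
y = signal + rng.normal(scale=0.25, size=x.size)       # noisy observations

# Option 1: smooth by choosing a 'simple' functional form (cubic polynomial).
poly_fit = np.polynomial.Polynomial.fit(x, y, deg=3)(x)

# Option 2: smooth by local averaging (a crude stand-in for local regression).
def local_average(values, window=7):
    kernel = np.ones(window) / window
    return np.convolve(values, kernel, mode="same")

local_fit = local_average(y)

for name, fit in (("cubic polynomial", poly_fit), ("local average   ", local_fit)):
    rms = np.sqrt(np.mean((fit - signal) ** 2))
    print(f"{name}: RMS distance from the underlying curve = {rms:.3f}")

Neither route is privileged in general; which does better depends on what one already knows about the noise and the signal, which is exactly the sense in which the 'simplicity' of the functional form is just one encoding of prior knowledge.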
A recent series of papers argues that simplicity is justified on the
grounds that its use can result in greater predictive accuracy on unseen
data. This is based on results obtained in Ref. 35. Simplicity in this case is
defined as (effectively) the Vapnik-Chervonenkis (VC) dimension36 of the
set of curves which in very particular circumstances is equivalent to the
number of adjustable parameters in the equation form. The advantages of
‘simplicity’ in this account amount to the prescription not to try and fit
more parameters that you have data for, since the larger the set of hypothe-
ses one is selecting from the more likely one is to select a bad hypothesis
that ‘fits’ the data purely by chance. The extent of this overfitting can be
estimated in particular cases. The argument is that in these cases one can
know to choose a form with fewer parameters if one does not have enough
data to justify estimating any more, even if such more complex forms may
appear to fit the data better. However, this does not affect the general argument - if you have two models whose predictive accuracy, once adjusted for the expected overfitting on limited data, is equal, then there would be no reason to choose the family which might be considered to have a simpler form. In circumstances with a fixed amount of data the estimation
of the extent of overfitting might or might not tip the scales to lead one to
select the simpler model.
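The prescription can be seen in a small sketch (the generating process, noise level and sample sizes are illustrative assumptions of mine): with only a handful of data points, a form with many adjustable parameters fits the seen data ever more closely while doing worse on unseen data, and the effect fades once data are abundant relative to the parameters.

import numpy as np

# Illustrative sketch: polynomials of increasing degree fitted to a small
# sample, then checked against a large 'unseen' sample.
rng = np.random.default_rng(1)

def sample(n):
    x = rng.uniform(-1.0, 1.0, n)
    y = x - 0.5 * x**3 + rng.normal(scale=0.1, size=n)    # assumed true process
    return x, y

x_seen, y_seen = sample(12)          # limited data to fit
x_new, y_new = sample(2000)          # stands in for data not yet collected

for degree in (1, 3, 9):
    coeffs = np.polyfit(x_seen, y_seen, degree)
    seen_err = np.sqrt(np.mean((np.polyval(coeffs, x_seen) - y_seen) ** 2))
    new_err = np.sqrt(np.mean((np.polyval(coeffs, x_new) - y_new) ** 2))
    print(f"degree {degree}: error on seen data {seen_err:.3f}, on unseen data {new_err:.3f}")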
This account gives no support for the thesis that the simplicity of a
model gives any indication as to its underlying model bias. Thus, in cir-
cumstances where one can always collect more data (so that effectively
there is an indefinite amount of data), these arguments provide no reason
to select a simpler model, but rather suggest one should collect more data
to distinguish which model is better in general. In this case, the decision of
when to stop seeking for a model which gives increased predictive accuracy
is a practical one: one has to balance the cost of collecting the additional
data and using it to search for the most appropriate model against the
goodness of the parameterised model.
Also the connection between the VC dimension and any recognisable
characteristic of simplicity in the family of curves is contingent and tenuous.
In the special case where the only way of restricting the VC dimension (or in
finite cases, the number of hypotheses) is through the number of adjustable
parameters, then it is the case that an equational form with more adjustable
parameters will require more data for accurate parameterisation. However
there are other ways of restricting the set of hypotheses to reduce the VC
dimension; as discussed above, Webb (Ref. 19) successfully used a similarity criterion to
do this. Thus one can avoid overfitting by restricting the VC dimension of
the set of hypotheses without using any criteria of simplicity or parsimony
of adjustable parameters. A related study (Ref. 37) examined the connection
between the complexity of expressions and (indirectly) their ambiguity and
concluded that any measure that restricted the space of models would be
equally effective. Of course, one can decide to define simplicity as the VC
dimension, but this is begging the question again and one would need to
justify this transferred epithet.
To summarise this section, there is a limit to the accuracy with which
one can adjust a given number of unknown parameters given a certain, fixed amount of data - one is only justified in specifying a curve to the extent
that one has information on which to base this. Information in terms of
a tightly parameterised curve has to come from somewhere. However, in
the broader picture where different families of curves are being investigated
(for example, by competing teams of scientists continually searching out
more data) these considerations give no support to the contention that
the simpler family has an advantage in terms of predicting unknown data
better.
8. Concluding Plea
It should be clear from the above that, if I am right, model selection ‘for the sake of simplicity’ is either simply laziness; really due to practical reasons such as cost or the limitations of the modeller; or really a relabelling of more sound reasons due to special circumstances or limited data.
Thus appeals to it should be recognised as either spurious, dissembling or
confusing and hence be abandoned.
Indeed rather than assuming that there will always be a simple adequate
theory (if only we were clever enough and worked hard enough to find it)
we should keep an open mind. On the face of it, it would seem reasonable
to expect that complex phenomena will require complex theories for many
purposes, and that simple theories will only be adequate for a restricted
range of phenomena. However we should always allow for the possibility
that occasionally some apparently simple phenomena will require complex
theories and apparently complex phenomena allow for simple theories. The
point is one can not tell in advance and so it is unwise to make assumptions
about this.
However, there is a form of Occam’s Razor which represents sound ad-
vice as well as perhaps being closer to Occam’s original formulation
(usually rendered as “entities should not be multiplied beyond necessity”),
namely: that the elaboration of theory in order to fit a known set of data
should be resisted, i.e. that the lack of success of a theory should lead to a
more thorough and deeper analysis than we are usually inclined to perform.
It is notable that this is a hallmark of genius and perhaps the reason for the
success of genius - be strict about theory selection and don’t stop looking
until it really works.
Acknowledgements
I would like to thank all those with whom I have discussed or commented
upon these ideas (most of whom were not inclined to change their opin-
ions on this matter), including: the participants of the 20th Wittgenstein
Symposium in 1999; and those at the workshop on the Evolution of Com-
plexity at the VUB in 1997. Also to several anonymous referees who have
motivated me to make my position clearer (if more stark).
References
1. Newton, I., Philosophiae Naturalis Principia Mathematica, (1686).
2. Einstein, A., Relativity: the special and the general theory - a popular ex-
position, London: Methuen, (1920).
3. Gertsbakh, E. and Gertsbakh, I., Measurement Theory for Engineers,
Springer, (2003).
4. Hesse, M., Simplicity, in P. Edwards (ed.), The Encyclopaedia of Philosophy,
vol. 7., New York: Macmillan, 445-448, (1967).
5. Zellner, A., Keuzenkamp, H. and McAleer, M. (eds.), Simplicity, Inference
and Modelling. Cambridge: Cambridge University Press, (2001).
6. Quine, W. V. 0. Simple Theories of a Complex World. In The Ways of
Paradox. New York: Random House, 242-246, (1960).
7. Bunge, M., The Myth of Simplicity. Englewood Cliffs, Prentice-Hall, (1963).
8. Moss, S., Artis, M. and Ormerod, P., A Smart Macroeconomic Forecasting
System, The Journal of Forecasting, 13, 299-312, (1994).
9. Kuhn, T. S. The Structure of Scientific Revolutions. Chicago, University of
Chicago Press, (1962).
10. Edmonds, B. and Moss, S., From KISS to KIDS - an anti-simplistic mod-
elling approach. In P. Davidsson et al. (eds.): Multi Agent Based Simula-
tion 2004. Springer, Lecture Notes in Artificial Intelligence, 3415, 130-144,
(2005).
11. Kemeny, J. G. Two Measures of Complexity. The Journal of Philosophy, 52,
722-733, (1953).
12. Li, M. and Vitányi, P. M. B. Philosophical Issues in Kolmogorov Complexity,
in Automata, Languages and Programming, 19th International Colloquium,
Lecture Notes in Computer Science, 623, 1-15, Springer, (1992).
13. Bak, P. How Nature Works: The Science of Self Organized Criticality. Ox-
ford, Oxford University Press, (1997).
14. Chater, N. The Search for Simplicity: A Fundamental Cognitive Principle? The Quarterly Journal of Experimental Psychology, 52A, 273-302, (1999).
15. Schaffer, C., A conservation law for generalization performance. In Proceedings of the 11th International Conference on Machine Learning, 259-265. New
Brunswick, NJ: Morgan Kaufmann, (1994).
16. Wolpert, D. The lack of a priori distinctions between learning algorithms.
Neural Computation, 8, 1341-1390, (1996).
17. Murphy, P.M.; Pazzani, M.J. Exploring the Decision Forest: an empirical
investigation of Occam’s razor in decision tree induction, Journal of Artificial
Intelligence Research, 1, 257-275, (1994).
18. Murphy, P. M. An empirical analysis of the benefit of decision tree size biases
as a function of concept distribution. Technical report 95-29, Department of
Information and Computer Science, Irvine, (1995).
19. Webb, G. I. Further Evidence against the Utility of Occam’s Razor. Journal of Artificial Intelligence Research, 4, 397-417, (1996).
20. Domingos, P. Beyond Occam’s Razor: Process-Oriented Evaluation. Machine
Learning: ECML 2000, 11th European Conference on Machine Learning,
Barcelona, Catalonia, Spain, May 31 - June 2, 2000, Proceedings, Lecture
Notes in Artificial Intelligence, 1810, (2000).
21. Dennett, D. C., The Intentional Stance. Cambridge, MA: A Bradford Book,
CAMILO OLAYA
Institute of Management
University of St. Gallen (HSG)
Dufourstrasse 4Oa, CH-9000,St. Gallen, Switzerland
E-mail: camilo.olaya@unisg.ch
1. Introduction
Lately everything seems to emerge. Qualities, properties, matter, life,
mind, ideas, inventions, computed macro-states, and many other things,
now ‘emerge’. In particular, in what is loosely known as Complexity Sci-
ence (CS),” the term ‘emergence’ is widely invoked as a pivotal notion,
although it is used in diverse senses. And it is not the purpose here to offer
a new definition; the article presents important distinctions that usually
are absent in CS, despite being heavily discussed in other areas, in
particular in philosophy. The central point is that emergence, taken as a
metaphysical inquiry, seems to be overlooked when developing explanations
by means of computer simulation. However, as John Holland reminds us in his book ‘Emergence’,1 CS is committed to making models of the world.
a In order to frame the subject, somehow, CS refers here to approaches related to non-linear dynamics, agent-based modeling, chaos theory, artificial life, social simulation, complex adaptive systems, system dynamics, and the like, that claim to address ‘complexity’ using typically computer simulation. Yet this does not exclude indirect connections
to related areas that also address emergence like molecular and evolutionary biology,
quantum theory, matter theory, and the philosophy of mind.
The heated dispute may have been started in 1920 with Samuel Alexan-
der for whom emergence necessarily admits no explanation; he claimed that
it has to be accepted as a ”brute empirical fact” (as cited by Pap).3 The
reply of Stephen Pepper in 1926 puts the warning sharply: either it is the
sum of the parts or it is not4 (no paradox here). Looking again at the quote
above of Jones, it seems there has not been too much advance since then.
What follows is a brief account of some important distinctions through this
philosophical debate;b these distinctions (highlighted in italics) are rele-
vant today, in the first place because many of them seem to be trivialized or
overlooked.
c A full discussion and criticism of the argument of Pepper is developed by Meehl and Sellars.7
absolute novelty (anything which had never appeared before) and not relative novelty (relative to a particular situation).11 Gotshalk, in 1942, emphasizes
that novel emergents are irreducible and that they bring into prominence
individuality and quality; for him emergents are underivative ingredients of
the real and not merely causal reductions;12 it is implicitly a criticism of
Pepper (emergence now is meaningful) and Malisoff (for whom the question
is only epistemological). Also in 1942 Campbell Garnett emphasizes that
the failure of the theory of emergence to establish a vera causa (as was
suggested by Alexander) implies the incompatibility with scientific meth-
ods; as a result, he adheres to the organismic interpretation: a single nature
within which two irreducibly distinct types of process are functionally in-
tegrated in one organic whole;13 hence, he rejects ontological emergence
since complete independence is proclaimed between layers. In the same
journal, Henle criticizes the prominent role of the observer for addressing
predictability (as in "quality A is predictable by person B on evidence C
with degree of assurance D"); for him novelty exists in the universe though
this does not entail emergence since two conditions have to be met: it must
be logically novel relative to all previously exemplified characteristics, and
it must not be merely a spatio-temporal characteristic.14 Bahm in 1947
supports organicism: things are organic unities, i.e. they are both one
and many, identity and difference, permanence and change, stability and
novelty, substance and function; for him, emergence is just a natural con-
sequence of recognizing those levels or dimensions15 (and therefore there is
no paradox). Pap in 1952 attacks the notion of absolute unpredictability
("pure metaphysical obscurantism"); he proposes the possibility of having
laws that can correlate an a priori unpredictable quality Q with certain causal conditions.3 Berenda, along similar lines, supports the ability for un-
derstanding (structured in reason) as different from the ability for learning
(developed in time), for him prediction is just a matter of time, therefore
he rejects logical unpredictability.16
Somehow summarizing these earlier debates, Kekes in 1966 characterizes
emergence with novelty (when a token of a type appears which never existed
before) and unpredictability (this occurrence of novelty is unpredictable as
based on the theories of lower levels of existence); his type of novelty is in
the lines of Berenda, a priori unpredictability, prior to the occurrence of the
event but explicable afterwards. Kekes postulates both novelty and unpre-
dictability as necessary and sufficient conditions of emergence. Moreover,
for him qualities and laws as well can emerge, and the emergence of any of
d He assumes order in nature which can be described by laws (determinism), then he
argues: ”At any rate, emergent laws describe regularities holding in nature which must
have been novel at some time. These regularities describe the behaviour of entities which
may or may not have emergent qualities; yet the regularities described by emergent laws
could not be described by laws formulated to describe the behaviour of entities on a
lower level of integration” (p. 366).
e Van Gulick makes a classification using this distinction, finding four types of epistemological emergence and six types of ontological emergence; e.g. Georgiou favors the epistemological approach because for him emergent properties provide the identity of a system since they allow an observer to say ‘this is a system of such-and-such and not otherwise’ for they act as unifying reference points for the group of interrelated parts which constitute the system (p. 242).
3. Explanations of Uniformity
Emergence used to be a cosmic affair (as in ”emergence is a feature of the
world”); in fact, a number of discussions within CS seem to point at absolute
emergence, but what seems to prevail - at the end - is a particular level
of description that habitually takes the form of causal accounts and/or
universal laws; such a view involves a sort of uniformity that makes it harder
to address novelty and change. This is presented next.
Explanations based on causal accounts hold that there is something spe-
cial about causal interactions that increases our understanding of the world;
causality is the crucial relationship which makes information explanatorily
relevant.26 An account of ontological emergence is naturally a formidable
challenge for such approach, e.g. it has been widely addressed in terms
of multiple complex causal networks or sophisticated law-like expressions.h
Silberstein and McGeever summarize the popular position:20
Complexity studies is inclusive of chaos theory, connectionism and
artificial life, in that it seeks a grand unified theory of the funda-
mental and universal laws that govern behavior of all complex and
non-linear systems. It would be fair to say that many complexity
theorists are after a unified theory of ‘emergence’ (see Kauffman
and Wolfram for examples). Complexity theorists (often) reject
part-whole reductionism, but they hold that there are fundamental
laws of the universe which pertain not only to all of the physical,
chemical and biological systems, in the universe, but also to all
systems (such as the economy and other social systems) that reach
certain degree of ‘complexity’. These ‘meta-laws’ or ‘power-laws’
Such quests seem inappropriate for coping with emergence; for instance,
universal laws are opposed to individuality, ignoring the remark of Gotshalk
of 1942: ”The most obvious philosophical importance of the principle of
emergence, as I see it, is that it brings into intellectual prominence factors
in the universe that actually have been present and in their way important
all the time, namely, individuality and quality. In modern scientific causal
analysis, both of these factors have tended to be lost from view”12 (p. 397,
emphases added). Seemingly they are still lost from view since it can be
said that many discussions on emergence have taken a uniform nature for granted (or explicitly, as e.g. Kekes17, presented above), as is introduced
next.
We try to make sense of a higher level ‘property’ (let’s say) called ‘emergence’ that is not only a function of the parts although it is derived from them. This should challenge some basic frameworks.i Why? Within a
causal view, we seem to follow Aristotle where there is a necessity in causal
relations coming from the fact that they derive from the essential charac-
teristics of material things.31 A very brief note on Aristotelian properties:
”essential properties of a thing are those on which its identity depends.
They are the properties of a thing which it could never have lacked, and
which it could not lose without ceasing to exist or to be what it is. Ac-
cidental properties of a thing are properties which that thing could either
have had or lacked”(p. 374).32 Such a frame imposes strong constraints
for having a notion of emergence within a world of properties ascribed to
static substances which in turn seem to be assumed as self-identically en-
during stuff - see e.g. [33], [34]; hence substantial emergence (as in Watts Cunningham,8 presented earlier) based on essential qualities, becomes dif-
ficult to address.
This world depicted above is usually expressed with laws: ”The state-
ment that an object follows a law involves the presumption that there is
some informant agency that makes an object to behave in a certain way
i For instance, Humphreys suggests the examination of the common ontological minimal-
ism, a doctrine based on a small set of premises: (i) to hold a small set of fundamental
constituents of the world; (ii) the primacy of intrinsic (non-relational) properties; (iii)
all non-fundamental individuals and properties are composed from those fundamental
entities. These premises are rooted in two doctrines: a pre-eminence of the logical re-
construction of concepts, and a Humean account of causality.30
and not to behave in any other way...If there is only one informant agency,
for all objects of a kind in all time, we call it a law”(p. 9).35 We can ap-
preciate why individuality and novelty are hard to meet, nature is assumed
uniform;j more specifically this use of laws neglects endogenous change,
e.g.: ”the events the law describes also do not change endogenously unless
an exogenous force is introduced into the system. The model is universally
deterministic. Given complete information about the initial and subsidiary
conditions, the law allows us to retrodict events precisely on to the past
and to predict them precisely on to the future”(p.
Claims of emergence, framed in such assumptions and focused on univer-
sal and abstract modeling, are - seemingly unnoticed - reductionist accounts
(regardless of regularly affirming the opposite) whenever order is taken as
reducible to a single mode of relatedness, but apparently believing, at the
same time, in something ‘more than the sum of the parts’; this is indeed a
mystic path. It was already clear to Pepper that we cannot have both. If there is a function (usually non-linear; the condition of non-functionalizability of Kim comes to mind), if there is a law (restricted in any form), or
if there are causal statements, then to deal with absolute novelty beyond
‘the parts’ is not uncomplicated - see e.g. [37], [38]. The old warning of
Gotshalk made in 1932 helps to rephrase the point and the relevance:39
j This is perfectly illustrated in the quote of the physicist Wigner: ”It is ...essential that,
given the essential initial conditions, the result will be the same no matter where and
when we realize these ...If it were not so ...it might have been impossible for us to discover
laws of nature” - as cited by Schlesinger (p. 529).36
4. Addressing Novelty
Complexity Science is supposed to address change, at least if the habitual
allusion to ‘emergence’ is going to be conceived as a kind of change. There
are already accounts of emergence understood as processes of interacting,
immutable (unchanging) entities (or as transitions between different enti-
ties) where results can be repeated as long as the circumstances are similar.
But a static concept of entities is at the center of the puzzle, that is, the ne-
glect of transition within entities. Time is left out. Why? Leclerc expresses it unmistakably:
k Two proposals of dynamic interactions that would constitute non-magical explanans for ‘emergent’ phenomena are presented in [43] and [44].
l A broad view on process thought is introduced in [40]. The philosophy of organism, as proposed by Whitehead, particularly in his major work of 1929 ‘Process and Reality’, can be a starting point for approaching process thought more formally. Yet, beyond this point, a process philosophy may have diverse interpretations and this article is not necessarily committed to particular assumptions; we can expand it in different ways, e.g. presentationalist/idealist paths closer to the ideas of Whitehead or re-presentationalist/realist paths closer to the Evolutionary Epistemology of Campbell and Popper or the Evolutionary Realism of Dopfer. Also limitations and specific criti-
and Popper or the Evolutionary Realism of Dopfer. Also limitations and specific criti-
cisms can be found, e.g. shortcomings related to the nature of composite entities in this
framework have been debated, e.g. [33],[45],[46],[47]; in particular Douglas examines
and rejects the possibility of characterizing emergence within process thought.48
5. And Individuality
Another central point from the discussion on emergence is individuality.
How to approach it? This question is related to the methods and types of
explanation that CS aims to provide. The reflection above brings elements
to bear in mind. The general concern is still model building which in fact
is a major feature of CS: the use of computer simulation models. This
particular characteristic is also a current challenge in the realm of explana-
tion, as Berger notices: ”It is reasonable to say that modelling explanations
dramatically increase our understanding of the world. But the modelling
explanations found in contemporary scientific research show that the in-
teresting claims of causal accounts are untenable...An adequate account of
scientific explanation must accommodate modelling explanations, because
they are simply too central to ignore”(pp. 329, 330).26
Keeping in mind the previous discussion, simulation models can be char-
acterized as systems of interacting agents using evolving rules. Addressing
the evolution of dynamic sets of rules (structure, origination, adoption fre-
quencies, selection criteria, individual and collective retention, etc.) we
can explain the becoming of emergents which are constituted by interacting
evolving agents. This is particularly relevant for modeling social systems;
for instance, the ‘parts’ can be related to agents using rules, e.g. cogni-
tive, behavioural, and blueprint rules, see [49]. Each agent is unique and
is changing constantly; they may belong to a generic class but each agent does not necessarily apply his particular set of rules in the same way, and a given agent does not necessarily act in the same way today as yester-
day, even in similar circumstances; moreover, the set of rules of each agent
evolves as well. And in a further level, the agents also evolve as such, that
is, the generic class of agents evolves too; perhaps over a longer time scope.
In one word, there is variety: ”many individuals of a kind, where each
constitutes an actualization of a distinct idea. There are, on the one hand,
many individuals that are many matter-energy actualizations, and, on the
other hand, many ideas that can be actualized” (p. 14).35 Furthermore, this
variety changes through time - variation (e.g. mutations in biology).
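A minimal computational reading of this picture can be sketched as follows (everything specific here, the threshold rule, the mutation step and the interaction through an aggregate activity level, is an assumption of mine for illustration, not the detailed rule base the text has in mind): each agent of one generic class carries its own rule, the rule itself drifts over time, and a collective quantity arises from their interaction.

import random

class Agent:
    """One individual of the generic class; its rule is its own and it evolves."""
    def __init__(self):
        self.threshold = random.random()        # this agent's current rule

    def act(self, neighbour_activity):
        # Behavioural rule: act if enough others were active at the last step.
        return neighbour_activity >= self.threshold

    def evolve(self):
        # Variation: the rule itself changes, not just the agent's state.
        self.threshold = min(1.0, max(0.0, self.threshold + random.gauss(0.0, 0.05)))

def run(n_agents=50, steps=101):
    random.seed(3)
    agents = [Agent() for _ in range(n_agents)]
    activity = 0.5                              # fraction of agents active last step
    for t in range(steps):
        actions = [agent.act(activity) for agent in agents]
        activity = sum(actions) / n_agents      # the collective quantity of interest
        for agent in agents:
            agent.evolve()
        if t % 25 == 0:
            print(f"step {t:3d}: fraction of agents acting = {activity:.2f}")

run()

Variety is present because every agent holds a different threshold, and variation because those thresholds keep changing; a fuller model in the spirit of the text would make the rules qualitative and heterogeneous in kind, not just in value.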
Explicitly, the implication is to question abstract modeling (e.g. ”Kauff-
man’s style”). A foundation on processes of activities underlines the pri-
macy of acting individuals (agents), variety and variation. The inclina-
tion here is for a detailed, qualitative, rule-based, ’expert system’ approach
where details do matter.m
6. Looking Ahead
Bearing in mind the question of emergence, the last point to comment on is that
an approach based exclusively on computation has to specify clearly how it
is going to meet ontological emergence when such claims are asserted. It is
not straightforward to declare non-reductionism using computer simulation.
Being closed software systems, it is not evident how something arising from
a computation can be (or even just represent) more than essentially that
computation (which is a formal deduction, or at least a positive calculation
anyway) beyond suggestive metaphors or the association of reductionism
with linearity. If we are going to address emergence then the type of episte-
mological bridge between computer simulation and metaphysical emergent
entities has to be built - and carefully - if we are really going to meet
it. Can it be done? After all ontological emergence used to be the original
question. Already Wimsatt noticed: ”Most scientists regard a system
property as emergent relative to properties of its parts if it depends upon
their mode of organization - a view consistent with reduction” (p. 269).n
He refers to areas such as non-linear dynamics, connectionist modelling,
m Indeed, Chris Langton, a leading researcher on artificial life, has already stated that ”the
quest for global rather than local control mechanisms (rules) is the source of the failure
of the entire program of modeling complex behaviors up to the present, especially, much
of the work on artificial intelligence”(p. 24).50 - see also [51].
n It is also a matter of what is understood by ‘reduction’; Wimsatt clarifies: ”Most
scientists see reductionism as a species of explanatory mechanistic materialism which
may not even involve laws or theories, as those terms are traditionally understood by
philosophers”(p. 292).
o In his stimulating paper, Bedau exposes the promising possibilities of working with weak emergence, though he also rejects the meaningfulness of addressing absolute emergence - ”logically possible but uncomfortable like magic; and irrelevant”. Along related lines, Bechtel53 has to ‘defend’ and to make a case for the compatibility of complex
systems and reduction; he is right and a defence should not be necessary; unfortunately
it seems it is needed; moreover, the role of mechanism, as a kind of explanation, has
been recently developed in this context - see [54], [55], [56];in particular for complex
systems see [57].
Acknowledgments
The author thanks three anonymous reviewers for their helpful criticism and suggestions. Further thanks go to Peter Douglas, Kristjan Ambroz and Markus Schwaninger, for their valuable comments on earlier drafts of this paper. The usual disclaimer applies.
References
1. J. Holland, Emergence. Cambridge MA., Perseus Books (1998).
2. S. Jones, Organizing Relations and Emergence. In: R. Standish, M. Bedau
and H. Abbass (Eds.), Artificial Life VIII: The 8th International Conference
on Artificial Life, Cambridge, MA., MIT Press, 418-422 (2002).
3. A. Pap, The Concept of Absolute Emergence. The British Journal for the
Philosophy of Science, 2 (8), 302-311 (1952).
4. S. C. Pepper, Emergence. The Journal of Philosophy, 23 (9), 241-245 (1926).
5. J. Kim, Making Sense of Emergence. Philosophical Studies, 95, 3-36 (1999).
6. A. Stephan, Emergence - A Systematic View on its Historical Facets. In: A.
Beckermann, H. Flohr and J. Kim (Eds.), Emergence or Reduction? Essays
on the Prospects of Nonreductive Physicalism, Berlin: Walter de Gruyter,
25-48 (1992).
7. P. E. Meehl and W. Sellars, The Concept of Emergence. In: H. Feigl and
M. Scriven (Eds.), Minnesota Studies in the Philosophy of Science, Volume
I: The Foundations of Science and the Concepts of Psychology and Psycho-
analysis, University of Minnesota Press, 239-252 (1956).
8. G. Watts Cunningham. Emergence and Intelligibility. International Journal
of Ethics, 39 (2), 148-166 (1929).
9. R. Ablowitz, The Theory of Emergence. Philosophy of Science, 6 (1), 1-16
(1939).
10. W. M. Malisoff, Emergence without Mystery. Philosophy of Science, 6 (1),
17-24 (1939).
11. W. T. Stace, Novelty, Indeterminism, and Emergence. The Philosophical Re-
view, 48 (3), 296-310 (1939).
53. W. Bechtel, The Compatibility of Complex Systems and Reduction: A Case
Analysis of Memory Research. Minds and Machines, 11, 483-502 (2001).
54. P. Machamer, L. Darden and C. F. Craver, Thinking About Mechanisms.
Philosophy of Science, 67 (l),1-25 (2000).
55. S. S. Glennan, Rethinking Mechanistic Explanation. Philosophy of Science,
69, 342-353 (2002).
56. J. G. Tabery, Synthesizing Activities and Interactions in the Concept of a
Mechanism. Philosophy of Science, 71, 1-15 (2004).
57. W. Bechtel and R. C. Richardson, Discovering Complexity: Decomposition
and Localization as Strategies in Scientific Research. Princeton, N. J., Prince-
ton University Press (1993).
58. M. Hesse, Models and Analogies. In: W. Newton-Smith (Ed.), A Companion
to the Philosophy of Science, Malden/Oxford, Blackwell Publishers, 299-307
(2000).
59. A. L. Plamondon, Whitehead’s Organic Philosophy of Science. Albany, State
University of New York Press (1979).
60. F. A. Hayek, Degrees of Explanation. The British Journal for the Philosophy of Science, 6 (23), 209-225 (1955).
WHY DIACHRONICALLY EMERGENT PROPERTIES
MUST ALSO BE SALIENT
CYRILLE IMBERT
IHPST / Université Paris 1 Panthéon-Sorbonne
1. Preliminaries
In order to present the problem, I shall restrict myself for simplicity to an example belonging to discrete mathematics, namely the case of a one-dimensional CA
system composed of cells, each colored black or white.a A rule determines
the color of each cell at each step. Below is rule 110. It says for example
that when a CA and its neighbours are black, the CA turns white at the
next step.
Figure 2. Rule 110, starting from a single black cell, 250 steps. Figure taken from A
New Kind of Science, chap. 2, reproduced with permission of Stephen Wolfram, LLC.
Figure 3. Rule 110, 500 steps. Figure from A New Kind of Science, chap. 2, reproduced
with permission of Stephen Wolfram, LLC.
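For readers who want to generate pictures like these themselves, a one-dimensional CA under rule 110 takes only a few lines; the sketch below is mine (the grid width, step count, periodic boundary and single-black-cell start are choices made for illustration, with the rule table being the standard encoding of rule 110).

import numpy as np

RULE = 110
# Bit k of RULE gives the next colour for the neighbourhood whose three cells,
# read as a binary number (left, self, right), equal k. For rule 110 this sends
# the all-black neighbourhood (111) to white, as described in the text.
rule_table = [(RULE >> k) & 1 for k in range(8)]

def step(cells):
    left, right = np.roll(cells, 1), np.roll(cells, -1)    # periodic boundary (assumed)
    neighbourhoods = 4 * left + 2 * cells + right
    return np.array([rule_table[n] for n in neighbourhoods], dtype=np.uint8)

width, steps = 101, 50                 # a small version of the 250- and 500-step figures
cells = np.zeros(width, dtype=np.uint8)
cells[width // 2] = 1                  # start from a single black cell

rows = [cells]
for _ in range(steps):
    cells = step(cells)
    rows.append(cells)

for row in rows[:20]:                  # print the first rows as a text picture
    print("".join("#" if c else "." for c in row))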
ties that can only be possessed by the whole system but not by its parts.
Such properties are sometimes referred to in the literature as ‘nominally
emergent’.4 Structural properties, involving microproperties and a relation between them, are a subspecies of nominally emergent properties.5 Examples of macroproperties are Blackcell, Microstate50 or, for a cup of water,
fluidity or transparency, because molecules of water cannot be said to be
fluid or transparent. I will not venture here into giving a more detailed general
definition of macroproperties because I shall not need that much to define
DEPs. Bedau himself acknowledges that “full understanding of nominal
emergence would require a general theory of when macro entities have a
new kind of property that their constituents cannot have”.4 That should
not worry us too much because in our CA system example, we can very easily, if we need to, separate on purely logical grounds what counts as a macroproperty from what does not.
words, what matters is which new features the system dynamics is able to
generate out of a context that did not possess these features, and that is
why I say the criterion we look for is contextual.
3. Salience
In the last section of this paper, I try to develop a concept, salience, that
fulfills the previously listed requirements and I claim that DEPs must also
be salient. Salience is an independent notion and I believe that it can prove
to have a wider application than the question of emergence.e The definition
of salience also raises its own set of problems, but since it is not my goal
to fully develop this notion here, I shall be quite sketchy, leave aside most
of these problems and be content to show how salience could be used to
complete the definition of DEPs.
The idea I am going to develop is that a property is salient if one can
find a descriptive indicator which can be calculated for any state of the system and that undergoes a change when the property appears. To
make things clearer, I start with two clear-cut examples.
In the case of phase transitions, order parameters (when one manages
to find one) yield such indicators. For example, in the case of the transition
from ice to water, the quantity ρ_ice − ρ_system, where ρ indicates the density,
is zero till the transition and then grows.
Another more elaborate example, from the dynamical systems field, which I borrow from Rueger, is the case of the damped non-linear oscillator, which is described by van der Pol’s equation: ẍ − a(1 − x²)ẋ + x = 0, where ẋ = dx/dt and a is a damping parameter. For a dynamical system, it is
appropriate to study properties of the phase space, which plays for us the
role of a descriptive indicator. With a = 0, the oscillator is undamped
and the phase space can be portrayed by concentric ellipses, each ellipse
representing a trajectory (see figure 4). If the damping is gradually turned
on, trajectories are no longer periodic and the phase space is made of spirals converging to a limit cycle (see figure 5). So the turning off of the
damping makes new salient properties appear (e.g. periodicity) and this
can be indicated by the topological change of the phase space, which is de-
scribed by topologically inequivalent objects (two objects are topologically
inequivalent if one cannot smoothly deform one object to transform it into
the other).
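The qualitative change can be checked numerically with a rough sketch of my own (step size, integration time and starting points are arbitrary choices, and simple Euler integration is used): for a = 0 the late-time amplitude of the oscillation simply reflects where the trajectory started, while for a > 0 trajectories started far apart settle onto the same limit cycle.

# Illustrative sketch of van der Pol's equation  x'' - a(1 - x^2)x' + x = 0.
def late_amplitude(a, x0, v0=0.0, dt=0.001, steps=60000, tail=10000):
    x, v = x0, v0
    peak = 0.0
    for i in range(steps):
        acc = a * (1.0 - x * x) * v - x        # v' = a(1 - x^2)v - x
        x, v = x + dt * v, v + dt * acc        # explicit Euler step of (x, v)
        if i >= steps - tail:
            peak = max(peak, abs(x))           # amplitude over the final stretch
    return peak

for a in (0.0, 0.5):
    print(f"damping parameter a = {a}")
    for x0 in (0.5, 3.0):
        print(f"  started at x0 = {x0}: late-time amplitude ~ {late_amplitude(a, x0):.2f}")

A crude amplitude read-out does not capture the topological criterion Rueger uses, but it already registers the difference between concentric closed curves and a single attracting cycle.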
This specific and very well chosen example enables Rueger to provide a
very clear criterion of novelty, using the notion of topological inequivalence.
e For example, for the question of what it is to identify a non-trivial property in a database obtained by a number-crunching simulation or by experiments.
for a fluid, what I called ‘microstates’ for the above CA system or subparts
of them (for example the subpart corresponding to CA number 15 to 25,
that is to say where Bigtriangle does appear).
I call ‘trajectory’ the sequence of states that one gets by varying a
parameter describing the system. This parameter can be time, like in the
case of the above CA system, but it need not always be. For example,
temperature can be chosen for an ice to water phase transition. What is
only required to get a well-defined trajectory along a parameter is that
one single state can be ascribed to each new value of the parameter along
the trajectory. Suppose for example you study properties of cooling of
glasses. Since, for a similar initial condition, the end state of the glass and
its properties depend on the cooling rate (see figure 6), actual trajectories
depend on the cooling rate. Therefore temperature cannot be taken as the
varying parameter defining trajectories. Specifying exactly the trajectory
does matter because salience is a contextual notion (Cl) and because the
part of the trajectory before a property appears provides the context against
which the property stands out.
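As a toy instance of a descriptive function (my own construction, only loosely in the spirit of the definition being built here, with the rule-110 setup, the window and the choice of indicator all assumed): along the trajectory of a rule-110 automaton, take DF(state) to be the fraction of black cells in a fixed subpart of the row, and look for steps at which that single-valued indicator changes most sharply against the preceding part of the trajectory.

import numpy as np

RULE = 110
rule_table = [(RULE >> k) & 1 for k in range(8)]

def step(cells):
    left, right = np.roll(cells, 1), np.roll(cells, -1)
    return np.array([rule_table[n] for n in 4 * left + 2 * cells + right], dtype=np.uint8)

width, steps = 101, 200
cells = np.zeros(width, dtype=np.uint8)
cells[width // 2] = 1

window = slice(40, 61)                         # fixed subpart of the row (assumed)
df_values = []
for _ in range(steps):
    df_values.append(cells[window].mean())     # DF: fraction of black cells in the window
    cells = step(cells)

changes = np.abs(np.diff(np.array(df_values)))
flagged = sorted((np.argsort(changes)[::-1][:5] + 1).tolist())
print("steps where the indicator jumps most sharply:", flagged)

Whether such a jump really marks a salient property would, on the account developed here, depend on exhibiting suitable EMOs and on the context provided by the earlier part of the trajectory; the sketch only shows what a descriptive indicator computed along a trajectory looks like.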
I call ‘descriptive function’ a single-valued mathematical function that
Figure 6. Glass cooling modelled by a two well system (the fraction of the systems in the
higher well is plotted). Figure taken from Dynamics of Complex Systems, by Bar-Yam,
reproduced with permission of Addison-Wesley.
f This is actually a restriction that I make for simplicity because one may need more complicated objects than single-valued functions to describe the systems suitably. In Rueger’s example of the oscillator, the system was described for each value of the damping parameter by trajectories within a phase space.
salient for trajectory T of S iff there exist different EMOs A and B and a
descriptive function DF of the states of S’s trajectory such that:
is less simple than x² or 0.1x³. Since the last two expressions are composed of one element and the former of these two elements, the answer may seem obvious. But this seems all too relative to the mathematical framework that is taken as basic. Taking as basic the functions x ↦ g(x) = x² + 0.1x³ and x ↦ h(x) = x² − 0.1x³, one gets x² = 0.5(g(x) + h(x)) and x² seems now less simple than g(x). As a reviewer points out, this makes salience
relative to a privileged descriptive mathematical framework. A way out
of the deadlock would perhaps be to argue that there are good objective
grounds, in each case of data mining and salient properties detection, to privilege a descriptive mathematical framework and basic EMOs on the basis of which the descriptive function, and more generally the mathematical description, represent projectable quantities characterizing the system.
I do not go any further in the discussion of the concept of salience, since
as just shown, this would require a close scrutiny of much debated ques-
tions, like simplicity and curve fitting. One more thing still. In this paper,
the concept of salience was only aimed at giving a way to single out the
properties that are considered as remarkable in the study of a system, e.g.
in physics. But determining what patterns or properties are salient for a
subject is also a question that is crucial in cognitive sciences.' I make no
claim in this paper about the link between the two notions, which may be
somewhat related, even if not identical. This latter point can be illustrated
as follows. Indeed, a salient property (as defined above) in a database can
be imperceptible for a subject. That is why data mining requires resort to
statistical tests made by computers. At the same time, perceptual abilities for pattern recognition also prove very useful to detect interesting properties, for example in hydrodynamic simulations, and the existence of the
supposedly detected properties can be checked afterwards with statistical
indicators. Finally, a system (e.g. a Hopfield network3) could be trained to
detect in any circumstances a given pattern of CAs whereas this pattern
need not always be salient (in my sense), since the notion is contextual.
Besides, the pattern detection system may treat differently inputs of the
same kind (i.e. corresponding to similar variables of the n-tuple in the
mathematical description) and thereby use for detection a function that
would not qualify as a descriptive function.
4. Conclusion
We can now check that the concept of salience meets the requirements
listed above, or that a well-grounded and refined version of it could be
hoped to. It is local, because it depends only on the preceding part of the
trajectory. It is contextual because any property can happen to be salient
if it is generated by a context against which some of its features stand out.
It is objective (at least provided a descriptive mathematical framework is
given) because it relies on the construction of mathematical indicators and
not on our epistemic interests, perceptual abilities or practical goals: for example, the position at a given time of a planet, on which one wants to send a shuttle, is very unlikely to be salient.
Salience, as I built it, is a very weak and purely descriptive notion,
which aims at grasping the idea that the appearance of a non-trivial prop-
erty is simultaneous with a significant change in some of the values of the
observable quantities characterizing the system. Once again, I think that
using a concept as weak as possible to complete the definition of DEPs is
appropriate because I believe that the SR is the crucial element in it. But
my conclusion is that DEPs satisfy the SR and are salient.
Acknowledgements
Special thanks to Anouk Barberousse, Jacques Dubucs and Paul
Humphreys for discussing different versions of this paper and also to an
anonymous reviewer for very helpful and very apposite critical comments,
which helped a lot to improve the paper. All remaining shortcomings are
of course mine.
References
1. A. Stephan, “Phenomenal Emergence”, Networks, vol. 3-4, 91-102 (2004).
2. M. Bedau, “Weak emergence”, Philosophical Perspectives: Mind, Causation
and World, 11, Oxford Blackwell Publisher (1997).
3. G. Weisbuch, Dynamique des systèmes complexes. Une introduction aux réseaux d’automates, InterEditions/Editions du CNRS (1989).
4. M. Bedau, “Downward causation and the autonomy of weak emergence”,
Principia, vol. 6, 5-50 (2003).
5. T. O’Connor, Hong Yu Wong, “The Metaphysics of Emergence”, Noûs, forthcoming.
6. S. Wolfram, A New Kind of Science, Wolfram Media, Inc. (2002).
7. J. P. Crutchfield, J. D. Farmer, N. H. Packard, R. S. Shaw, “Chaos”, Scientific
American, vol. 255, 46-57 (1986).
KURT A. RICHARDSON
1. Introduction
On the matter of privileging a so-called ‘scientific method’ over all other
methods in the discovery of ‘truth’, the philosopher Paul Feyerabend said:
2. A Theory of Everything
My starting point for such an exploration is the assumption that the Uni-
verse, at some level, can be well-described as a cellular automaton (CA).
Of course by selecting this starting point I am making the assertion that
not only does some form of cellular automata represent a plausible theory
of everything, but that a theory of everything does indeed exist. This would
seem to be a radically reductionist starting point, but I am claiming a lot
less than may at first appear. Rather than defend this starting point in
the main part of the chapter, I have added a short appendix that discusses
the reasonableness of the CA assumption. One thing is clear, a theory of
everything doesn’t even come close to offering explanations for everything so
we needn’t get too concerned about the ‘end of science’ or human thinking
just because such a theory might exist and might be discoverable.
Whether one accepts my starting point or not, it should be recognised
that this is a philosophical exercise: to explore the ontological and episte-
mological implications of assuming that the Universe is well-described as a
CA at some level. It doesn't necessarily follow that the Universe actually
is and only is a CA. The details of the CA, however, do not change the
philosophical conclusions. Richardson15 has explored these issues at length
and argued that, whether or not a CA model is a plausible theory of ev-
erything, a philosophy of complexity based on this assumption does seem
to have the capacity to contain all philosophies, arguing that each of them
is a limited case of a general, yet empty, philosophy (akin to the Buddhist
'emptiness'). This seems to be a fantastically audacious claim, particularly
given the heated debate nowadays over global (intellectual) imperialism.
However, one should realise that the resulting general philosophy is quite
empty - which I suppose is what we would expect if looking for a philosophy
that was valid in all contexts - and explicitly avoids pushing to the fore one
perspective over any other. Quoting Feyerabend’ again, “there is only one
principle that can be defended under all circumstances and in all stages
of human development. It is the principle: anything goes.” In a sense, all
philosophies are a special case of nothing!
3. So, What is Emergence?
In the founding issue of the journal Emergence, Goldstein10 offers the
following description of 'emergence':
For the rest of this paper I would like to consider how patterns and
properties emerge in simple CA, and how the macro is distinguished from
the micro. I will argue that the recognition of emergent products, which is
often how the macro is distinguished from the micro, is method dependent
Figure 1. Examples of (a) ordered, (b) complex, and (c) chaotic automatons
aEven though, for the products to show themselves, an observer must recognise them as products.
bThough having run this particular CA on many occasions, the qualitative nature of the pattern was of no surprise to me at all - otherwise I would have found it very difficult indeed to preselect an appropriate case. It would be a very straightforward exercise indeed for me to produce a library of CA patterns from all possible initial conditions - though I would be limited by such resources as time and, the related resource, processing power. The point is that I would have to run all the cases fully; there is no shortcut - no algorithm - I could implement to speed the task up, i.e., CAs are intractable.
cIt is still a little dissatisfying to regard these patterns as emergent as they are simply an alternative expression of the CA rule itself, cut off at an arbitrary time step.
Figure 3. (a) The micro-physics of level 0 and (b) the macro-physics of level 1
opt for the latter, should we say at least that these moving patterns
are real?
Choosing ‘0’ and ‘1’ to represent the two cell states perhaps further
suggests the illusion of motion and ontological independence.
The choice of '0' and '1' is a little misleading, as it can easily be inter-
preted as 'on' and 'off', or 'something' and 'nothing'. Whether a
cell is in state '0' or '1', the cell itself still exists even if a 'black square'
is absent. We could easily use 'x' and 'N' to represent the two states
and the result would be the same. In uncovering the macro-dynamics we
had to, however reasonable it may have seemed, choose a pattern-basis to
be 'removed' or filtered out. By clearing out the space-time diagram we
allowed 'defects' in the space-time diagram to become more clearly appar-
ent. We also chose to set to '0' those areas of the space-time diagram that
were tiled evenly with the pattern-basis, and to '1' those regions which repre-
sented distortions in the pattern-basis; we assumed an even 'background'
and then removed it to make visible what was in the 'foreground'd. Again,
it is tempting to interpret the '0'-covered (white) areas as 'nothing' and
the '1'-covered (black) areas as 'something'. This would be incorrect,
however, as all the cells that comprise the CA network are still there; we
have simply chosen a representation that helps us to 'see' the particles
and ignore the rest. However, it must be noted that though '0' and '1'
do not map to 'nothing' and 'something' at the CA-substrate level, this
is exactly the mapping made at the abstracted macro-level.
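The 'background removal' just described can be illustrated with a small sketch (my own, under assumptions): cells that agree with an assumed periodic pattern-basis are set to '0' and deviations ('defects') to '1', so that only the 'foreground' particles remain visible. The checkerboard background and the toy defect are illustrative assumptions, not the chapter's actual pattern-basis or rule.

# A minimal sketch of 'filtering out' an assumed background from a space-time
# diagram: cells that agree with the assumed pattern-basis become 0, deviations
# ('defects') become 1.  A checkerboard background is assumed purely for illustration.
def filter_background(diagram, background):
    """background(t, x) -> expected state; returns the defect map."""
    return [[0 if cell == background(t, x) else 1
             for x, cell in enumerate(row)]
            for t, row in enumerate(diagram)]

if __name__ == "__main__":
    checker = lambda t, x: (t + x) % 2          # the assumed even 'background'
    # a toy space-time diagram: checkerboard everywhere except one small region
    diagram = [[(t + x) % 2 for x in range(16)] for t in range(8)]
    for t in range(3, 6):
        diagram[t][7] ^= 1                       # a small 'defect' introduced by hand
    for row in filter_background(diagram, checker):
        print("".join(" #"[c] for c in row))     # only the defect remains visible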
So in any absolute sense, the emergent products do not exist as such.
They do however exist in the alternative (although deeply - complexly -
related) macro-level description of the CA. The change in perspective is
equivalent to an ontological shift, where the existence of the emergent prod-
ucts is a ‘feature’ of the new perspective though not of the absolute CA
perspective; what existed at the micro-level does not exist at the macro-
level and vice versa. This new perspective is an incomplete description,
yet the fact that the new description is substantially complete would sug-
gest the substantial realism of the particles on which the new description is
based. For this reason we say that though the particles do not absolutely
exist, they do indeed exhibit substantial realism and as such, for many in-
tents and purposes, can be treated as if they were real (for this reason they
dAt the CA-substrate level the background is not ontologically different from the fore-
ground - any difference is achieved through filtering.
eIn considering Figure 4b it is very tempting indeed to assume that the extra dots (in
comparison to Figure 4a) are no more than noise. However, it is important to note (but
not necessarily easily grasped) that the existence of the seemingly more real pattern
(represented in Figure 4a) depends upon the existence of what is easily labelled 'noise',
and vice versa. Each 'noisy' dot is potentially significant in the future evolution of
the system as a whole as it is a part of the system, not just the result of some impure
measurement.
which cells are formed, from which multi-cellular organisms are formed, and
so on right up to galactic clusters and beyond. In such a neat nested hier-
archy every ‘entity’ has a place which distinguishes it from other entities
as well as determining which 'science' should be employed to understand
it. Furthermore, every 'thing' at one particular 'level' in the hierarchy is
made up of those 'things' said to exist at the 'level' below. This would
seem to suggest that molecular physics/chemistry, say, should in principle
be derivable from atomic physics. However, even though such a trick is
supposedly possible (assuming such a neat hierarchical organisation), in re-
ality scientists are very far indeed from pulling off such a stunt. Nowadays
the reason offered for this is simply the intractability of emergence which
is well known to nonlinear researchers. Rather than derive biology from
chemistry, scientists are forced to assume that the 'entities' of interest
to their particular domain (which are the result of a particular filtering, or
coarse-graining) do exist as such. Once such a step is taken, a science can
be developed independently of the science that deals with the supposedly
‘simpler ’ ‘ lower-level ’ ‘entities ’. Making such an assumption (which is
often made implicitly rather than explicitly) has worked out rather well,
and an enormous amount of practical understanding has been accumulated
that allows us to manipulate these ‘entities ’ in semi-predictable ways, even
if they don’t happen to truly exist! Technology is the obvious example of
such success.
next. This section of the chapter represents a move away from the more
scientifically-based philosophy presented thus far to a more speculative phi-
losophy.
As we move up our constructed hierarchy of explanations, the reality
that is ‘explained’ by our explanations seems to be more complex. By
complex here I mean that we seem to have increasing difficulty in locat-
ing the boundaries, or the ‘quasi-particles’ that describe our systems of
interest. This issue is perhaps exemplified at the ‘social level’, and more
clearly illustrated by the methodological diversity of the social sciences.
Whereas recognising the ‘atoms’ of the physical sciences is seemingly a
rather straightforward affair (though still only approximately representa-
tive of what is), the recognition of the ‘atoms’ of social science seems rather
more problematic. There are many more ways to 'atomise' social reality in
an effort to explain a particular phenomenon (of course our choice of 'atomisation
scheme' determines the phenomena that then take our interest and, to some
extent, vice versa - although there are potentially many different (possibly
contradictory) 'atomisations' that may lead to seemingly good explanations
for the same phenomena). This is probably why social problems are often
described as 'messy' problems; the articulation of the system of interest
itself is problematic, let alone the development of explanations. The fact
that explanations of social phenomena often have low 'fitting coefficients',
whereas explanations of physical phenomena often correlate very well indeed
with perceived (measured) reality, may further suggest the legitimacy of
multi-methodological approaches in the social sciencesf.
I don't want to spend much time at all defending the view that multi-
methodological approaches for the social sciences are a must. For now I am
content to point the reader towards the book Multimethodology, which
argues rather well for a pluralist attitude in the social sciences. The failed
attempts in the past to develop a 'Theory of Society' satisfy me, for now at
least, that pluralism is a feature of the social sciences rather than a failure
of social scientists to construct such an all-embracing theoryh.
Figure 5. The convoluted hierarchy of existence. Each ellipse represents the set of
'entities' (a theory's alphabet) that supposedly account for certain phenomena at a
certain level. (Level 0 - CA 'Reality': no hierarchy.)
developed very well indeed without taking pluralism too seriously. For those fields to
continue to develop though I would expect pluralism to become more common. So all
my comments above regarding social sciences are true also for physics, just not equally
so.
reign in the physical sciences. So how do we decide which are the 'best'
approaches that lead to the 'best' theories? How do we avoid the choice
of ‘filter’ from being completely arbitrary? In the physical sciences consis-
tency with currently accepted theories is often used as an additional filter.
Equally important is partial validation through experimentation, and lim-
ited empirical observation. Physical scientists are rather fortunate in that
many of their systems of interest can be isolated to such a degree that the
scientists can actually get away with the assumption that the systems are in-
deed isolated. This helps a great deal. In the social sciences there is a more
'try it and see' attitude as the application of reductionist methodologies is
highly problematic, because of the considerable difficulties in reproducing
the same experiment time and time again. Often new situations, or new
phenomena, force the social scientist back to square one and a new ‘ex-
planation ’ is constructed from scratch with minimal consideration of what
theories went before (although what went before may prove inspirational).
The fact that pattern-basis filtering of complex CA leads to alternative
representations of the CA reality that account for a good percentage of the
system's behaviour would seem to suggest that an 'anything goes' relativism
in particular contexts is not supportable. The success of physics itself at
least supports the notion that some theories are considerably better (in a
practical sense at least) than others. However, the fact that these alternative
representations are incomplete would suggest that an 'anything goes'
relativism might be valid for all contexts taken together at once. Once a
pluralist position is adopted, the key role of critical thinking in taking us
from a universal 'anything' to a particular 'something' becomes apparent.
I do not want to explore any further the epistemological consequences
of CA research as this paper is primarily concerned with the ontological
status of emergent products (which, as has been indicated, is impossible to
explore without concerning oneself with epistemology). If our commitment
to a CA model of the Universe is to persist then we need to find CA systems,
and the tools for analysing them, that when abstracted multiple times (to
‘higher levels’ of existence) lead to representations that can be usefully
‘atomised’ and analysed in a variety of different ways (analogous to the
social sciences).
filters (and their consequent patterns) are more meaningful than others. In
the absence of direct access to absolute and complete descriptions of reality,
however, an 'anything goes' relativism (such as Feyerabend's) cannot be
dismissed on rational grounds alone and offers the possibility of a more
genuine engagement with the (unseen) 'requirements' of certain contexts.
yet representative;
• Determining what 'macro' is depends upon choice of perspective
(which is often driven by purpose) and of course what one considers
as 'micro' (which is also chosen via the application of a filter).
• All filters, at the universal level, are equally valid, although certain
filters may dominate in particular contexts.
One last remark. Much philosophical debate circles around what the
most appropriate filters are for understanding reality. For example, radi-
cal scientism argues that the scientific methods are the most appropriate,
whereas humanism argues that the filters resulting from culture, personal
experience, values, etc. are the most appropriate. An analysis of a CA
Universe suggests that it is impossible to justify the privileging of any filter
over all others on rational grounds (though there is a way to go before we
can say that this conclusion follows naturally from the CA view). Whereas
political anarchism is not a particularly effective organising principle for
society at large, a complexity informed theory of knowledge does tend to-
wards an epistemological and ontological anarchism, at least as a Universal
default position.
iIt is quite possible for the conclusions to be correct whilst the reasoning is fallacious.
j"...and of course the clear and certain truth no man has seen nor will there be anyone
who knows about the gods and what I say about all things. For even if, in the best case,
one happened to speak just of what has been brought to pass, still he himself would not
know." Xenophanes of Colophon (fragment B34).
kNot that the observation of novelty by a participant within the Universe is proof enough
that novelty 'exists'. Plus, the idea of the big bang being followed by a big crunch may
well suggest that there is indeed some colossally high-period attractor leading cosmic
evolution.
lThough there are CA models whose phase spaces are characterised by a single attractor
- in such circumstances the system's end state (cycle) is quite independent of the initial
conditions.
mPlanck time, which is 5.4 x 10^-44 s, is the age of the Universe at which energies were
available to enable particle interactions to occur across Planck distances.
Figure 6. The largest attractor for CA rule &6c1e53a8, n=17, k=5, showing complex
transient trees of length 72 steps. The period of the attractor cycle is 221 steps
"As a first approximation I found the relationship between network size, TZ, and transient
length, La%,
(for rule &6cle53&, k.5) to be:
n e e x p [ - 11n ( k > ]
0.0126 3.0228
if a power law is assumed, or:
n m - -1- - - - - I n ( k )
0.2368 0.9952
if an exponential law is assumed.
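The quantities reported in footnote n and in Figures 6-8 (transient length and attractor period) can, for a small finite CA, be measured by brute force: iterate from an initial configuration and stop at the first repeated state. The sketch below is my own illustration under that assumption; the toy update rule is a stand-in, not the chapter's k=5 rule.

# A minimal sketch of measuring transient length and attractor period for a
# small finite CA: iterate a deterministic map until a configuration repeats.
def transient_and_period(initial, step):
    """Return (transient_length, attractor_period) for a deterministic map."""
    seen = {}                      # configuration -> time step at which it first appeared
    state, t = tuple(initial), 0
    while state not in seen:
        seen[state] = t
        state = tuple(step(list(state)))
        t += 1
    first = seen[state]
    return first, t - first        # steps before entering the cycle, and the cycle length

if __name__ == "__main__":
    from random import randint, seed
    seed(1)
    def toy_step(cells):
        # a rule-30-style toy update, assumed here only for illustration
        n = len(cells)
        return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]
    init = [randint(0, 1) for _ in range(12)]
    print(transient_and_period(init, toy_step))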
""The only way one region of space can 'know' about the conditions in another is if
there has been time for some kind of influence to go back and forth between them-and
the maximum speed of any influence is that of light. Hence, the early Universe consisted
of los3 regions which were entirely separate, or 'causally disconnected"'2.
Figure 7. A section of the space-time diagram for CA rule &6c1e53a8, n=17, k=5, and
the complete attractor field, which contains 54 attractors (21 are qualitatively distinct -
only these are shown)
References
1. Bedau, M. (1997) "Weak emergence," Philosophical Perspectives, 11: 375-399.
2. Chown, M. (2000) "Before the Big Bang," New Scientist, 166(2241): 24.
3. Corning, P. (2000) "The reemergence of 'emergence': A venerable concept in search of a theory," Complexity, 7(6): 18-30.
Figure 8. Maximum transient length versus network size for CA rule &6c1e53a8, n=5-25, k=5
DAVID SHIPWORTH
Faculty of Science
University of Reading
that the mathematical structures into which the selected observables are
encoded are formal systems is of particular importance in light of the known
incompleteness of such systems. The mathematician David Hilbert, at the
beginning of the 20th century, presented a challenge to the mathematical
community. That challenge was to demonstrate the consistency of the
axioms of mathematics. This challenge became known as the 'formalist
program' or formalism.
It was well known by the end of the 19th century that mathematical
terms had both a syntactic and a semantic content [2] and it was this, the
formalists argued, that gave rise to some well documented apparent contra-
dictions within mathematics, such as the Russell paradox. The syntactic
content of mathematics is that part of mathematics that involves the ap-
plication of rules to strings of symbols. Certain of these strings of symbols
are taken as the axioms of the system; other symbol strings are created
through the application of the rules to these axioms to create new symbol
strings. This aspect of mathematics is purely logical and the symbol strings
mean nothing. It is this aspect of mathematics that was being described by
Bertrand Russell when he claimed that ‘pure mathematics is the subject in
which we do not know what we are talking about, or whether what we are
saying is true'.
The semantic component of mathematics arises when meaning is at-
tached to these symbols. This meaning can be as simple as letting these
symbols represent numbers. Without the semantic component of mathe-
matics, mathematical modeling is impossible, as it requires the encoding
of the observables of the natural system into the language of mathematics.
Likewise, it is impossible to conceive of any applied CAS model without
semantic content. It was this ‘meaning’ component of mathematics which
Hilbert and the other formalists thought gave rise to mathematical contradictions.
Hilbert could not tolerate any such contradictions and sought to elim-
inate them through elimination of the semantic content of mathematics.
Hilbert argued all semantic functions of terms could be captured by syntac-
tic rules. As Rosen [2, p.7] notes, 'In the Formalist view, then, Mathematics
... is a game of pattern generation, of producing new patterns (theorems)
from given ones (axioms) according to definite rules (production rules)'. In
this view mathematics is not ‘about’ anything, it is simply an exercise in
symbol manipulation according to a set of rules.
This view of mathematics is highly amenable to computation. Com-
putation is the process of applying predetermined transformation rules to
As noted by Chaitin [4, p.11], 'Theorems are found by writing all the
possible grammatical statements in the system and testing them to deter-
mine which ones are in accord with the rules of inference and are therefore
valid proofs.’ That is, a statement can be grammatically correct and yet still
be false. Indeed, only a small proportion of grammatically correct state-
ments of any given formal system are provable as theorems of that system.
Hilbert, in developing the mathematics of formal systems as part of the
formalist program, devised a means of constructing all provable theorems
within any given formal system.
An alternate way to view the structure of a Formal Axiomatic System is
as a mathematical graph (G_FAS) (a set of 'vertices' (points) joined by a set
of 'edges' (lines)). The vertex set (V_FAS) of such a graph would correspond
to the union of the set of axioms of the FAS and the set of all syntactically
correct statements provably true through application of the FAS's rules of
inference to the set of axioms of the FAS and subsequently derived theo-
rems. The edge set (E_FAS) would be defined by any two vertices connected
by application of the FAS's rules of inference to the set of axioms of the
FAS and subsequently derived theorems. Here we follow the logic used by
Godel in the incompleteness theorem and equate ‘truth’ with ‘proof’ and
note that in the context of a FAS, proof means something that is provable
through execution of a properly constituted algorithm on a Universal Tur-
ing Machine. By definition, this graph G_FAS would have one connected
component containing all vertices. It is to be remembered however that
FAS are literally meaningless - i.e. they are devoid of semantic content.
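A toy illustration (invented here, not taken from the chapter) may help fix the idea of G_FAS: a tiny string-rewriting system whose axiom and production rules generate theorems, with an edge recorded for every rule application. Every generated theorem lies in the single connected component reachable from the axiom, as the definition above requires. The particular axiom and rules are arbitrary assumptions.

# A toy formal axiomatic system: the axiom is "I", and the production rules
# append or double symbols.  Breadth-first application of the rules generates
# the vertex set (theorems) and edge set (derivations) of a G_FAS-style graph.
from collections import deque

AXIOM = "I"
RULES = [
    lambda s: s + "U" if s.endswith("I") else None,   # rule 1: xI -> xIU
    lambda s: s + s,                                   # rule 2: x  -> xx
]

def generate(max_len=8):
    vertices, edges = {AXIOM}, set()
    queue = deque([AXIOM])
    while queue:
        s = queue.popleft()
        for rule in RULES:
            t = rule(s)
            if t is None or len(t) > max_len:
                continue
            edges.add((s, t))
            if t not in vertices:
                vertices.add(t)
                queue.append(t)
    return vertices, edges

if __name__ == "__main__":
    v, e = generate()
    print(len(v), "theorems,", len(e), "derivations, all reachable from the axiom", AXIOM)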
The use of FAS to model any meaningful system fundamentally changes
this graph. Chaitin [9, 41 studied the arithmetic of real numbers (a seman-
tic system) through analysis of real number Diophantine equations. He
found that there are an infinite number of essentially random but provably
true arithmetic facts that cannot be derived from any other facts through
application of the rules of any FAS model of the arithmetic of real numbers.
Following from above, we define the graph of the semantic system of
arithmetic, G_sem-A, with the vertex set V_sem-A of all syntactically correct
and provably true statements within the semantic system of arithmetic. We
of any formal system there is every reason to suppose that the same will
apply to more complex systems.
This has two implications that are central to this argument. Firstly, as
noted above, any mathematical model executed on a digital computer is by
definition a formal axiomatic system. This FAS model (syntactic) defines
an edge set (E_FAS) on the vertex set of all provably true statements about
the semantic system V_sem it is modelling. The graph created by this vertex
set and edge set contains a connected component, (the set of entailments of
the model), and an indeterminate number of disconnected vertices. Each of
these vertices represents a ‘fact’ about the system being modelled but one
that is unprovable by that model. A model with different axioms, and/or
rules of inference, defines a different FAS and a different set of truths. Again
the graph thus created has a connected component and an indeterminate
number of disconnected vertices. Note that while any number of models
can be defined, we know from Godel and Chaitin that none of them can
connect all the vertices on the graph. This is precisely the point made by
Rosen [2, p.8-9] when he notes that 'The relation of Number Theory to any
formalization of it is the relation of the complex to the simple ... We shall
argue that a formalization constitutes a (simple) model of that which it
formalizes'. Rosen [2, p.9] goes on to note that:
For our purposes, the point of doing all this is that we are going
to identify this category of simple models of a system of entailment
with ‘what we can know’ about it. ... When we do this, all of
epistemology gets wrapped up in that category of models. ‘What we
can know’ about such a system thus becomes a question of how good
we are at navigating in this category of models.
Casti reaches the same conclusion: 'The best we can hope for is to illuminate
various facets of natural systems using different - and inequivalent -
formal descriptions'.
It should be noted that the formal results of incompleteness of Godel
and Chaitin pertain to the relationship between mathematical systems such
as arithmetic, and formal systems models of them. This is illustrated in
Figure 1 in the lower half of the diagram. In the quotes above however, both
Rosen, and more explicitly Casti, suggest a parallel if less formally defined
incompleteness relationship exists between natural systems and mathematical
models of them (the relationship on the left hand side of Figure 1). While
this is intuitively reasonable, extrapolating from the relationship between a
formal system and the mathematical system it models, to a mathematical
Figure 1. Modelling relation (synthesising elements from Casti [6] and Traub [11]), relating
the Natural System, the Mathematical System, the Formal System and the Computer Simulation
This quote by Jain [18, p.1] introduces one of the central aspects of the
epistemology of complex adaptive systems theory, that is, the increasing
role played by computer simulation and experimentation as a mechanism
for understanding certain classes of systems.
Indeed Coveney and Highfield [3, p.277] note that, in the CAS field
of artificial life, 'The most important contemporary Alife system, Tierra,
has an evolutionary dynamic of such great complexity that the only way
to investigate its behaviour is to perform experiments on it within a
computer...'.
The relationship between complex adaptive systems theory and com-
puter simulation is well expressed by Holland [19, p.95].
imagined.
Indeed it is even questionable as to whether such qualitatively different
system properties operating at higher hierarchical levels can be expressed
within the limited vocabulary of the formal systems of CAS models. This
is because emergent properties are robust statistical aggregates of what are
frequently transcomputational sets of the combinatorics of deterministic
relations between elements of the system at lower hierarchical levels. For
this reason, the detection of emergent properties through the analysis of
individual deterministic relations is not possible.
For these reasons, it seems unreasonable to ascribe truth value to a
priori unimaginable and qualitatively different properties of systems that
are both structured and limited by our choice of formal axiomatic system.
We therefore see that both the epistemological practice of CAS re-
searchers, and one of the defining ontological assumptions of CAS theory,
lie in opposition to the dominant deductive ‘realist’ ontology of classical
mathematics based on an ontological acceptance of the law of the excluded
middle. Rather, CAS ontology and epistemological practice lie in accord
with the constructivist epistemology of intuitionism and other inductive
logical frameworks. Thus, it is argued that the truth status of emergent
properties of complex adaptive systems models should be based on strictly
inductive logics and on proof by constructive verification.
Accepting this leads to very real implications for building CAS models
in practice. It becomes necessary to ask whether there are
computer hardware or software environments that operate in a way that
either uses, or entails, principles of mathematical logic based on adherence
to the law of the excluded middle that the strictly constructive proofs of
intuitionistic mathematicians would reject. The answer is yes. The most
notable example of this is the use of ‘autoepistemic logics’ in multi-agent
CAS models.
Autoepistemic logics are a class of non-monotonic logics and were devel-
oped by Moore [21] and Przymusinski [22]. Autoepistemic logic introduces
an additional syntactical element to those of standard propositional logic.
This additional element is the modal operator □, representing knowledge.
This is used to indicate knowledge of the truth status of a declarative sen-
tence (S) and is used as follows: □S indicates that the truth status of S
is known; □φ indicates that the truth status of φ is known; and ¬□S
indicates that the truth status of S is not known.
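A minimal sketch (with hypothetical rule content, not drawn from the chapter) of the kind of default reasoning this operator supports: a conclusion is licensed by the absence of knowledge (¬□S), which implicitly treats 'known' and 'not known' as exhaustive alternatives over the agent's knowledge base, and which can be retracted when more becomes known.

# Default reasoning with an autoepistemic-style 'known' operator: from
# "S is not known" the agent draws a default conclusion; the logic is non-monotonic.
known = {"bird(tweety)"}                    # the agent's knowledge base: box-S for these S

def box(sentence):
    """box(S): the truth status of S is known to the agent."""
    return sentence in known

def conclude_flies(x):
    # hypothetical default rule: if bird(x) is known and abnormal(x) is NOT known, infer flies(x)
    return box(f"bird({x})") and not box(f"abnormal({x})")

if __name__ == "__main__":
    print(conclude_flies("tweety"))         # True: abnormality is not known
    known.add("abnormal(tweety)")           # learning more can retract the conclusion
    print(conclude_flies("tweety"))         # False: the inference is withdrawn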
This operator is used, in conjunction with an implicit acceptance of the
law of the excluded middle, to support reasoning based on knowledge of
References
1. J. Casti, Reality Rules: 1 & 2-Picturing the world in mathematics-
‘Vol. 1-The Fundamentals’ & ‘Vol. 2-The Frontier’, New York, Wiley-
Interscience, John Wiley & Sons (1992).
2. R. Rosen, ’What can we know?’, in Beyond Belief: Randomness, Prediction
and Explanation in Science, eds J. Casti and A. Karlqvist, CRC Press, p.310
(1991).
3. P. Coveney and R. Highfield, Frontiers of Complexity: The search for order
in a chaotic world, London, Faber and Faber (1995).
4. G. Chaitin, Information, Randomness & Incompleteness: Papers on algorithmic
information theory, 2nd edn, Series in computer science - Vol. 8,
Singapore, World Scientific (1990).
5. J. Casti, 'Chaos, Godel and Truth', in Beyond Belief: Randomness, Prediction
and Explanation in Science, eds J. Casti and A. Karlqvist, CRC Press (1991).
6. J. Casti, 'The outer limits: In search of the 'unknowable' in science', in Boundaries
and Barriers: On the limits to Scientific Knowledge, eds J. Casti and
A. Karlqvist, Reading, Massachusetts, Addison-Wesley (1996).
7. G. Chaitin, 'Thoughts on the Riemann Hypothesis', The Mathematical Intelligencer,
Vol. 26, No. 1, pp. 4-7 (2004).
8. G. Chaitin, 'Computers, Paradoxes, and the Foundations of Mathematics',
American Scientist, Vol. 90, pp. 164-171 (2002).
9. G. Chaitin, Algorithmic Information Theory, Cambridge, Cambridge Univer-
sity Press (1987).
DAMIAN POPOLO
Durham University, UK
1. Introduction
There has been some debate recently surrounding the question of whether
Complexity could be seen as a manifestation of ‘post-modern’ science’.
Some scholars have sought to underline the similarities between Complexity
science and post-structuralist philosophy. Some others have perceived
aByrne, for instance, considers Complexity exclusively for its potential in quantitative
social science, and believes that the new science represents a serious challenge to ‘post-
modernism’. See
bFoucault describes an episteme as the ensemble of the underlying rules that characterise
the general configuration of knowledge within a precise historical context8.
Foucault puts it: ‘History gives place to analogical organic structures [the
the way for the modern episteme, the realm of philosophy finds itself di-
vided into three distinct areas of enquiry: ‘The criticism - positivism -
metaphysics triangle of the object was constitutive of European thought
from the beginning of the nineteenth century to Bergson'. Bergson?
Why Bergson? Are we to understand that Henri Bergson was the first
thinker to push Western philosophy beyond the modern organisation of
knowledge?
Henri Bergson is one of the most important figures within the spiri-
tualist tradition, a tradition that advocated the presence of a distinctive
philosophical experience. Bergson followed with a lot of interest the de-
velopments that were taking place within the philosophy of science. More
precisely, Poincaré's initial doubts concerning the infallible objectivity of
science, and his disciple’s (Le Roy) reinforcement of these doubts, must have
played an essential role in the constitution of Bergson’s thought. At the
core of Bergson’s philosophy of science lies the conviction that the scientific
method adopts a cinematographical view of temporality, which implies, as
Gutting puts it, ‘that science views reality not as a continuous flux (the du-
ration that in fact is) but as a series of instantaneous ‘snapshots’ extracted
from this
Science’s cinematographical view of duration is due to the fact that it
is primarily concerned with action. As thought that is primarily concerned
with practice, science must abstract from that concrete reality that we
experience, in which temporality is not simply another form of space, but
a ‘wholly qualitative multiplicity, an absolute heterogeneity of elements
which pass into the other'. For Bergson, in the real continuum of
duration there are no distinct elements that precede or follow real points in
‘time’. In this context, it becomes meaningless to speak of an a priori or
an a posteriori: Bergson envisages a notion of temporality as a ‘continuous
flux of novelty in which nothing is ever fixed, complete, or separate. In
this flux, anything that we can say exists ‘now’ also incorporates into a
qualitative whole everything we can say is ‘past’, a whole that is itself
being incorporated into the new synthesis of the ‘future”13.
The distinction between the synthetic and the analytic disappears in
the flux of time, for it is precisely this continuous temporal vortex that
represents the formation of things (their synthesis) and their intrinsic (their
essence - in Kantian terms, the 'thing in itself') reality. This is the main postu-
late of what has been referred to as Bergson’s ‘superior Empiricism’. And
again, it is precisely this refusal to deal with transcendentalisms that char-
acterises Bergson’s drive for an immanent reality that can be experienced
while refusing to be cut into bits and abstracted. Clearly, the emergence
of such an ontology revolutionises the basis of the modern episteme. And it is
precisely by demolishing the modern episteme's understanding of all pos-
sible paths and conditions for the attainment of knowledge that Bergson
issues a challenge to Kant’s Copernican revolution in philosophy and in
science:
unlike the transcendental procedure of Kant, [Bergson’s philosophy]
does not refer to the conditions of all possible experience; rather, it is mov-
ing toward ‘the articulations of the real’ in which conditions are neither
general and abstract nor are they broader than the conditioned.. . Bergson
insists upon the need to provide a genesis of the human intellect”14.
Having established the fact that science fails to tackle the issue of real
temporality, Bergson argues that philosophy might have been expected to
occupy this empty ground. However, this was not to be. Modern philoso-
phy, Bergson argues, has not challenged the view of time as ‘nothing more
than a fourth spatial dimension, which could readily be viewed as having
no creative efficacy, as merely the vehicle for the automatic unrolling of a
nomologically determined sequence'. This modern scientific vision of
time, as Prigogine maintains, is all but dead.
Now we can start to understand how Bergson goes beyond the mod-
ern episteme. His conceptualisation of temporality refuses the a priori / a
posteriori distinction upon which the modern organisation of knowledge is
based. His refusal of transcendentalism, coupled with his insistence on the
realm of immanence, has produced, amongst many others, challenges to the
modern notions of abstraction, temporality, empiricism, science and freedom.
Bergson lies at the heart of Deleuzian philosophy14, whose conceptualisa-
tion of time is highly relevant to the ethos of Complexity, as Manuel De
Landa has shown3. Prigogine makes Bergson a cornerstone of his theorizing
on Complexity and quantum physics.
Another timely reminder of Complexity’s origins can be found in the
works of Eric Hobsbawm. Hobsbawm rightly asserts that the principles
of Complexity did not ‘appear’ but reappeared under the misleading tag
of ‘chaos theory’. The reappearance of such ideas was possible thanks to
the increasing calculating powers of computers. According to Hobsbawm
the reemergence of ‘Complexity’ has profound implications for the concept
of causality. Such an approach does have the potential to undermine sev-
eral 'political, economic and social' assumptions. Crucially, Hobsbawm
identifies the genesis of Complexity in the truly European ‘epistemic civil
war’, characterised by two contrasting interpretations of the role of rea-
The quest for certainty and securityf is ultimately the biggest danger
facing European civilisation. It would be clear, following Foucault’s analy-
sis of the modern episteme, that transcendental philosophies of the object
- metaphysics - necessitate two crucial elements in order to function: a
irreversible processes'. Reversible processes deny temporality as a con-
stitutive apparatus of the process. Examples of these processes can be
found in Newton’s formulation of classical physics and in Schrodinger’s
basic equation of quantum mechanics. In both cases, equations are invari-
ant with respect to time inversion. Contrarily, time irreversible processes
break time symmetry. In these processes temporality does affect how the
general rules of motion will impact the system in a precise temporal con-
text. More importantly, time irreversibility produces entropy. An example
of time irreversible processes is the second law of thermodynamics. How-
ever, Prigogine argues that time reversibility is produced firstly because we
accept to reduce the analysis to an elementary level, and secondly because
we abstract: ‘Reversible processes correspond to idealizations: We have
to ignore friction to make the pendulum work reversibly'. In Foucault's
words, these metaphysical transcendentalisms adopt reductionism because
they ‘must deploy the deductive forms only in fragments and in strictly
localised regions’. Once Prigogine dismisses the idea that entropy might be
caused by insufficient data or faulty examination, the ideas that follow from
his arguments suggest that, if we bear Foucault in mind, time reversibility
is a particular consequence of a transcendental philosophy of objects, that is, a
metaphysical system that ignores elements which do not happen to coexist
with the basic premises of a paradigm. This causes the need to discard
incompatible elements (precisely such as the second law) on the grounds of
humanity’s imperfect observation capacities or on the inadequacy of its in-
struments (in other words, limits). However, according to Prigogine: 'The
results presented thus far show that the attempts to trivialize thermodynamics
... are necessarily doomed to failure. The arrow of time plays an
essential role in the formation of structures in both the physical sciences
and biology' (emphasis added).
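The pendulum remark quoted above can be given a simple numerical illustration (my own sketch, not Prigogine's): integrate a linearised pendulum forward in time, reverse the velocity, and integrate again; without friction the motion retraces itself back to the initial state, while with friction it does not. The linearisation, the integration scheme and the damping value are all assumptions made purely for illustration.

# Time-reversal symmetry and its breaking by friction, illustrated numerically.
def step(x, v, damping, dt=0.01):
    a = -x - damping * v                  # linearised pendulum, optional friction
    x_new = x + v * dt + 0.5 * a * dt * dt
    a_new = -x_new - damping * v          # friction handled explicitly, for simplicity
    v_new = v + 0.5 * (a + a_new) * dt
    return x_new, v_new

def round_trip_error(damping, steps=2000):
    x, v = 1.0, 0.0
    for _ in range(steps):                # integrate forward in time
        x, v = step(x, v, damping)
    v = -v                                # reverse the velocity ('play the film backwards')
    for _ in range(steps):
        x, v = step(x, v, damping)
    return abs(x - 1.0) + abs(v)          # distance from the initial condition

if __name__ == "__main__":
    print("no friction  :", round_trip_error(0.0))   # essentially zero: time-symmetric
    print("with friction:", round_trip_error(0.3))   # order one: symmetry is broken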
Time irreversibility becomes undeniable once, on the one hand, we adopt
a more immanent approach to nature and, on the other, we look at populations
and not at the single elements that compose them. The apparently con-
tradictory pulls towards immanence and connectivity are resolved through
the continuum of time and this notion of empiricismg.
Prigogine is happy to demonstrate that the results of his own research
concord with Bergson’s conceptualisation of temporality:
I’m certainly not the first one to have felt that the spatialization of time
g For a more detailed account of Prigogine’s empiricism and its significance for Complex-
ity’s ethos see 25
5. Conclusion
This paper sought to present Foucault's 'Archaeology' as an adequate con-
ceptual means for charting the origins of Complexity. Following Foucault,
the paper has 1) presented the emergence of linear temporality as an es-
sential feature of the modern episteme, specifically related to the birth of tran-
scendental philosophies of the object (modern metaphysics); 2) indicated
how figures present in the European philosophical tradition challenged the
basic understanding of time inherent in the modern episteme, making spe-
cial references to Bergson and Heidegger and 3) outlined how these figures,
precisely through their alternative conceptualisation of time, are becoming
increasingly influential in Complexity-related research. There is no doubt
that the question of how time should be conceptualised is becoming increas-
ingly topical in contemporary scientific research. Moreover, such questions
directly tackle the issue of how such conceptualisation of time as an ir-
reversible feature of dynamical (as opposed to static) processes should be
understood. It appears that an accurate understanding of change - more
precisely irreversible change - has become a primary objective in many
research agendas. The acceptance of such irreversibility (or, as Prigogine
puts it, the ‘End of Certainty’) radically undermines the nature of meta-
physical knowledge, and clearly constitutes one of the pillars of Complexity
Science. The message inherent to Complexity's ethos is beautifully encapsulated
in the simple concluding remark of Etienne Klein's exploration of
the philosophy of time in contemporary physics: 'We must learn to love
irreversibility'.
References
1. G. Morçöl, "What Is Complexity Science? Postmodernist or Postpositivist?,"
Emergence 3, no. 1 (2002).
2. P. Cilliers, Complexity and Postmodernism: Understanding Complex Systems
(London ; New York: Routledge, 1998).
3. M. De Landa, Intensive Science and Virtual Philosophy, Transversals. (Lon-
don ; New York: Continuum, 2002)
4. W. Rasch and C. Wolfe, eds., Observing Complexity: Systems Theory and the
Postmodern Episteme (Minneapolis: University of Minnesota press, 2000).
5. D. S. Byrne, Complexity Theory and the Social Sciences: An Introduction
(London: Routledge, 1998).
6. Thomas Kuhn, The Structure of Scientific Revolutions (Chicago: University
of Chicago Press, 1970).
7. M. Dillon, "Poststructuralism, Complexity and Poetics," Theory, Culture &
Society 17, no. 5 (2000).
8. Michel Foucault, The Order of Things: An Archaeology of the Human Sci-
ences, World of Man (London: Routledge, 1989).
9. Ilya Prigogine and Isabelle Stengers, The End of Certainty: Time, Chaos,
and the New Laws of Nature, 1st Free Press ed. (New York: Free Press, 1997).
10. Gary Gutting, Michel Foucault’s Archaeology of Scientific Reason, Modern
European Philosophy. (Cambridge England ; New York: Cambridge Univer-
sity Press, 1989) 184.
11. John Dupré, The Disorder of Things: Metaphysical Foundations of the Disunity
of Science (London: Harvard University Press, 1995) 1-2.
12. B. Latour, Science in Action: How to Follow Scientists and Engineers
through Society (Milton Keynes: Open University Press, 1987).
13. Gary Gutting, French Philosophy in the Twentieth Century (Cambridge,
U.K.; New York: Cambridge University Press, 2001) 51.
14. K. A. Pearson, Philosophy and the Adventure of the Virtual. Bergson and the
Time of Life (London: Routledge, 2002) 11-13.
15. E. J. Hobsbawm, Age of Extremes: The Short Twentieth Century, 1914-1991
(London: Abacus, 1995).
16. Michael Hardt and Antonio Negri, Empire (Cambridge, Mass. ; London:
Harvard University Press, 2000).
17. Hobsbawm, E., L'âge des extrêmes (Bruxelles: Editions Complexes, 1999),
756 - my translation.
18. J. Gleick, Chaos (London: Vintage, 1998).
19. E. J. Hobsbawm, The Age of Revolution: Europe, 1789-1848 (London: Aba-
cus, 1997) 335.
20. Harold Bloom, The Anxiety of Influence : A Theory of Poetry, 2nd ed. (New
York; Oxford: Oxford University Press, 1997) 50.
21. M. Inwood, Heidegger: A Very Short Introduction (Oxford: Oxford Univer-
sity Press, 2000) 6.
22. Edmund Husserl, The Crisis of European Sciences and Transcendental Phe-
nomenology; an Introduction to Phenomenological Philosophy, Northwestern
1. Introduction
A presentation at a complexity conference sometime in the early 2000’s:
its main purpose is to relate the philosophical implications of the fractal
gravitational singularity concept. Fractal gravitational singularity? What
does it mean? A gravitational singularity is a precisely defined concept
(such as a point of infinite space-time curvature) and a fractal is also a
clearly defined concept (a fractal is a geometric object which is rough or
irregular on all scales of length)”. However a fractal singularity makes no
sense from a scientific point of view, not even as a metaphor.
This kind of misinterpretation has been the subject of popular science
debate, and it shows how easily the scientific method can be ignored when
multidisciplinary conceptual transpositionsb occur. The question is then:
aMost concepts relating to complexity theory are rigorously defined in physics or biology.
bTransposition is here defined as a metaphor for the process of spreading scientific concepts
and models from one discipline into another. Carruthers1 calls the recipients of
transposed concepts 'emerging sciences', but this may cause confusion because of the
diffusion of the term emergence.
is where it runs into scientific terminology being used without the slightest
knowledge of its real meanings.
The metaphorical conceptual nonsense created in transposing complex-
ity into other disciplines has a powerful justification in Kuhn's work - much
cited in philosophical complexity works [e.g. Refs. 9, 10, and probably
a good percentage of philosophical complexity papers presented at confer-
ences around the world]. The biggest problem with Kuhn's view is that, up
to the point of the paradigm shift, proponents of the older and newer theo-
ries cannot be speaking the same scientific language, and, in fact, alienate
each other from their respective views of the world. This problem perme-
ates complexity studies in economics, where most economists cannot begin
to cope with the new theory because of its inherent incommensurability2.
However, Kuhn's paradigm shift and incommensurability in no way imply
an 'anything goes' relativistic view of the dynamics of science. New
paradigmatic science still would have to have consensus as the reference
frame, and internal coherence. This would be derived from new concepts
and ideas, and new meanings of old ones - but those new meanings would
still have to make sense inside a reference frame.
What is happening is that, in deconstructing what is now referred to as the old
Cartesian world-viewc, internal coherence in many philosophical complexity
works has been lost, even at the metaphorical level.
The metaphorical level is inherently theoretical and can lead to endemic
insights in the process of transposition. For instance, Goldstein9
proposes a new formalism, the self-transcending construction of emergence,
which is a very powerful insight and is born out of philosophical theory,
even if it relates some of the discussion to theoretical work in biology. The
main problem with the metaphorical level in the dynamics of complexity
theory is that its lack of rigor can lead to hubbubs and nonsensical notions.
Moreover, many implications are derived from it without any semblance of
a rigorous demonstration - for instance, complexity was used as a tool for
raising ethical issues concerning the use of animals in scientific
experimentation11.
The metaphorical level is where many epistemological problems can
arise. The fact that it is a speculative stage on the dynamics of scien-
tific nature means that many propositions, implications and conjectures
5. Final Comments
Separating the two levels of the scientific development of complexity can be
most useful to verify how transposition of complexity into other disciplines
is evolving. Moreover, it leads to a better comprehension of how concepts
and ideas are being incorporated into other disciplines, with a better chance
of spotting the hubbubs and nonsense being created in the process. Instead
of a pure Kuhnian approach, where incommensurability is the norm,
by having a metaphorical and an empirical level some commensurability
is possible, especially inter-disciplinary commensurability - i.e. ensuring
that transposed concepts and ideas are compatible with their origin and also
make sense within the recipient discipline - and, more importantly,
internal commensurability. Not doing so leads to fractal singularities
and even more outrageous concepts.
References
1. Carruthers, P. (1988). Emerging syntheses in modern science. European Jour-
nal of Physics, 9: 110-116.
2. Zeidan, R. M., Fonseca, M. G. D. (2004) Epistemological considerations
on agent-based models in evolutionary consumer choice theory. Emergence:
Complexity and Organization, Boston, MA, v.6, n.3, pp. 32-39.
3. Arthur, W. B. (1995) Complexity in Economic and Financial Markets, Com-
plexity, v.1, n.1, April.
4. Anderson P.W., Arrow K.J., Pines D. eds. (1988) The economy as an evolving
complex system; Addison-Wesley Pub.Co, Reading Ma.
5. Cohendet P., Llerena P, Stahn H., Umbauer G. eds. (1998) The Economics
of Networks, Interactions and Behaviours Springer, Berlin.
6. Holland, J. H. & Miller J. H. (1991) Artificial Adaptive Agents in Economic
Theory, American Economic Review, May, pp. 365-370
7. Goldberg, D. E. (1989) Genetic Algorithms: in Search Optimization and
Machine Learning, Reading, MA, Addison-Wesley.
8. Sokal, A. & Briemont, J. (1998) Intellectual Impostors. London: Profile
Books.
9. Goldstein, J. (2005). Impetus without Drive or Teleology: The Self-
transcending Construction of Emergence. paper presented at the Complexity,
Science and Society Conference. University of Liverpool.
10. Waldrop, M. Mitchell. (1992) Complexity: The Emerging Science at the Edge
of Order and Chaos. New York: Simon & Schuster.
11. Robinson, F. (2005) The Relevance of Complexity Science to the Ethical
Issues Concerning the Use of Animals in Scientific Experimentation: the
Matter of Constraints. Paper presented at the CSS Conference - Liverpool.
SOME PROBLEMS FOR AN ONTOLOGY OF COMPLEXITY
MICHAEL McGUIRE
Department of Applied Social Sciences
London Metropolitan University
suspect foundations. In what follows then I will outline some reasons for
thinking that standard ontologies cannot pass muster, before moving
on to outline a few of the questions for the alternative that I advocate.
1. Ontology
Two (broad) senses of ontology are operative in contemporary analytic
philosophy.
(i) 'Classical' or philosophical ontology - what Aristotle called the "science
of being qua being".b This approach deals with questions
about objects, existence, properties, relations, parts, boundaries,
measures, causality, possibility and so on. The development of formal
techniques in ontology has increasingly enabled such questions
to be addressed in more rigorous terms rather than by what D.C.
Williams termed 'speculative cosmology' [3].
(ii) 'Technical' Ontology. An approach that has roots in formal ontology
but is mostly applicable to developments in computer science,
particularly artificial intelligence (AI) research. Here the focus is
less upon real-world objects than upon ways of specifying what is
required to exist for some conceptualisation to function correctly.
In AI, for example, knowledge-based systems need to be able to
represent the range of assumptions they deploy in order to operate. Specifying
an ontology will help the programmer attain this as well as providing
a shared vocabulary that can operate across other systems in a
coherent and consistent manner.
In this paper it is the former approach that I will draw upon, for I intend
to ask what is the most general kind of thing that theories of complexity
could take to act as the ontic vehicle for the phenomena they purport to
identify. Given the discontinuities in content between such theories this
will be, by necessity, an exercise in what Strawson calls 'descriptive' rather
than 'revisionary' metaphysics [4]. But if the ontological commitments
of CT could be given some satisfactory metaphysical grounding, then the
radical nature of some of its propositions may in turn suggest that serious
revision to the traditional cast of metaphysical candidates ontologists are
familiar with may be necessary.
dIt is worth noting that a categorisation of basic metaphysical kinds does not rule out or
reduce away large scale, or ‘common sized’ objects. It merely suggests that they might
also be subject to unification at deeper levels of abstraction
eLoosely, the idea of supervenience holds that all non-physical phenomena are realised
by physical phenomena without reducing to them. More precisely, a set of properties P1
supervenes on a set of properties P2 if and only if there can be no changes or differences
in P1 without there being changes or differences in P2. See [5, 61
logical reducibility (see her [7, 8]). Such views sit well with many of the
underlying assumptions of theories of complexity. But how does this cash
out in ontological terms? Are the ideas of holism and emergence, of 'infinite'
or irreducible ontological kinds that it hints at, as questionable as
the commitments of the scientific reductionist?
f[9, p. 447]
some respect.
By contrast:
gThis is the medieval distinction between universalia ante rem and universalia in rebus.
See [15] for more on this.
Table 1.
One Category Ontology (particulars only): objects are particularised things with particularised features (particularised properties).
Two Category Ontology: all objects are combinations of particulars and properties.
iSee [19] for a discussion of states of affairs and a more developed account of whether
universals and particulars form irreducible complexes.
jSee for example [20], which attempts to develop an approach to 'science without numbers'.
kSee [23] for a defence of tropes. See [24] for reasons to question them
mFor some objections to second-order properties see [28]. For some arguments in favour
see [29]. [30] discusses the idea of a second-order logic.
9. Patterns
If a venerable and long-serving range of metaphysical candidates such as these
is found wanting as the referents of the language of complexity, then a neg-
pSee [34] and [35] for the earliest discussions of the compression criterion for pattern and
the idea of algorithmic complexity.
qIn my (forthcoming) I consider a way in which computational complexity conceptions
of pattern can be augmented by a complementary, more naturalised conception of them.
This relates to the idea of symmetry and the deeper notion that goes with this - the
notion of invariance (through transformations of various kinds). This idea provides for
a continuity between overtly mathematical or geometrical patterns and the regularities
in physical objects. On the one hand there is invariance and stability in the face of
transformation through mathematical operations, on the other stability in the face of
transformations through physical processes such as time, causal effect and change. A
plausible bi-conditional holds, one that relates the computational complexity conception
of pattern to this symmetry, or invariance, notion.
rThe discussion is [37]. [38] argued that mathematical objects can be dispensed with in
terms of structures or patterns such as (N, >), where each number functions as no more
than a 'position' in this pattern. It is not a view he has developed in any great ontological
detail.
sSolomonoff [39] develops a model of induction which involves parallel conceptions to
Chaitin and Kolmogorov's ideas about compressibility. [40] indicates a way in which
Solomonoff's idea can be extended into the 'Minimum Description Length Principle' as
a way of solving the curve-fitting problem.
(ii) As we saw, there is a case for arguing that patterns subsume many
traditional metaphysical categories. The necessity of depending
upon some form of regularity or continuity in order to identify and
reidentify anything at all both provides a justification for their priority
and suggests that something very basic underpins this need for
ordering.
(iii) Patterns serve as precisely the kind of entities that can provide a
grounding for the apparently universal capacity within science, and
especially complexity science to distinguish between more or less
ordered phenomena.
(iv) Even better, a multiplicity of patterns allows complexity science
to justify its rejection of reductionism and so permit a plurality of
explanatory levels and processes.
(v) Patterns provide a way of grounding the apparent ubiquity of regu-
larity in the world given its extremely low probability. Continuities
in object identity, regular causal connectivity or the tendency of
complex systems to resist the pull of entropy all acquire an ontolog-
ical grounding they would otherwise lack.
(vi) By extension, patterns not only offer important grounding conceptions
for the concepts of 'Laws of Nature' and of theory change
in science (the development of increasingly sophisticated compressors
of the world); as Ray Solomonoff's work indicated [39], they may also
illuminate some of the mechanisms of inductive reasoning and its
successes (the search for minimal programs in the identification of
natural phenomena).
(vii) Patterns also seem to offer insights into numerous other mysterious
but desirable principles of scientific reasoning - in particular the
effectiveness of Occam's razor, the appeal to simplicity (or the shortest
compressor); a crude operational sketch of this compression idea follows this list.
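As promised in (vii), here is a crude operational sketch (mine, not the chapter's) of the compression criterion: an off-the-shelf compressor stands in for the uncomputable 'shortest program', so that more ordered data yields a shorter description than random data of the same length. The use of zlib and the particular test strings are illustrative assumptions.

# Compressed length as a rough, computable proxy for 'shortest description'.
import random
import zlib

def compressed_size(data: bytes) -> int:
    return len(zlib.compress(data, level=9))

if __name__ == "__main__":
    n = 10_000
    ordered = b"01" * (n // 2)                                   # a highly patterned string
    random.seed(0)
    noisy = bytes(random.getrandbits(8) for _ in range(n))       # essentially incompressible 'noise'
    print("ordered :", compressed_size(ordered), "bytes")        # far smaller than n
    print("random  :", compressed_size(noisy), "bytes")          # close to n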
I do not think these problems are fatal, though they are certainly dif-
ficult problems that need to be addressed. My (forthcoming) addresses
some of them and it is to be hoped that additional lines of research can
deal with others. But if, in the end, an ontology of patterns turns out t o be
as unworkable for complexity science as traditional ontologies seem t o be,
then perhaps we would need t o look again at how scientifically robust the
patchwork of claims and observations that make it up can really be taken
to be.
References
1. McGuire, M.R. 1999 On a General Theory of Pattern, Doctoral Dissertation,
University of London
2. McGuire, M.R. (forthcoming) Patterns, Kluwer Academic Press
3. Williams, D.C. 1953 “On the Elements of Being”, Review of Metaphysics 7, 3-18,
171-192
4. Strawson, P. 1959 Individuals: An Essay in Descriptive Metaphysics, Routledge
5. Davidson, D. 1970 “Mental Events”, in D. Davidson, Essays on Actions and
Events, Oxford: Clarendon Press, pp. 207-227
6. Kim, J. 1993 Supervenience and Mind: Selected Philosophical Essays, Cambridge:
Cambridge University Press
7. Cartwright, N. 1983 How the Laws of Physics Lie, Oxford: Clarendon Press
8. Cartwright, N. 1989 Nature’s Capacities and their Measurement, Oxford:
Clarendon Press
9. Anderson, P. & Stein, D. 1987 “Broken Symmetry, Emergent Properties, Dis-
sipative Structures, Life: Are They Related?”, in F. Eugene Yates (ed.), Self-
Organizing Systems: The Emergence of Order, New York: Plenum Press,
pp. 445-457
10. Laughlin, R.B. and Pines, D. 2000 “The Theory of Everything”, Proceedings
of the National Academy of Sciences 97 (1), 28-31
11. Prigogine, I. et al. 1976 “Kinetic theory and ergodic properties”, Proc. Natl.
Acad. Sci. USA 73 (6), pp. 1802-1805
12. Kauffman, S. 1993 The Origins of Order: Self-organization and Selection
in Evolution, Oxford: OUP
13. Oliver, A. 1996 “The Metaphysics of Properties”, Mind 105, pp. 1-75
14. Ayers, M. 1991 Locke, Vol. 2, Part I “Substance and mode”, Routledge, Lon-
don
VASCO CASTELA
University of Manchester
School of Social Sciences - Philosophy
weapons meant there was a decisive advantage in being the one to strike
first. After an initial attack, in which the enemy’s defenses would be ren-
dered inoperative, victory was sure. This of course made giving the order
of striking first both very tempting and quite final, as once an attack had
been launched and the bombs were in the air, there was no turning back.
The instability of this situation led both American and Soviet govern-
ments to adopt a doctrine of nuclear deterrence known as Mutual Assured
Destruction (MAD). The idea was for each to eliminate the opponent’s
strategic advantage of striking first by stockpiling strike-back missiles that
could be launched within minutes of the initial attack or that would survive
the blasts by being placed in bunkers. If each state could provide evidence
of adequate retaliatory capability, this would give the other state a good
reason not to make a first move, as it could expect heavy casualties in case
of war, regardless of whether it was the first to attack.
The situation seemed under control until an influential game theorist
claimed MAD was not entirely safe, as mutual destruction could not be
assured - which implied it could not be a reliable deterrent. After an initial
strike from the Russians, for instance, why would the US retaliate, if this
no longer served any rational purpose? If we consider a situation in which
Soviet missiles had been launched, this would mean American bombs had
already failed to act as a deterrent. Why use nuclear weapons, at that
point, for killing millions of Russian civilians, with no possible strategic
gain to be had? If retaliation was irrational, there would be no assurance of
mutual destruction, and the strategic advantage of striking first would once
again be in place. The solution the game theorist devised to this dangerous
situation was a doomsday device, a machine that would be programmed to
automatically retaliate in case of attack. The existence of such a machine
would bring the situation back to a MAD condition of equilibrium.
MAD was the basis for the plot in Stanley Kubrick’s hilarious dark
comedy, Dr. Strangelove or How I Learned To Stop Worrying And Love The
Bomb. In the film, the President of the US discovers, following the launch
of a nuclear strike by a rogue American general, that the Russians had built
a doomsday device to ensure the effectiveness of nuclear deterrence, only
they had kept it a secret, as they were saving the announcement for the upcoming
elections.
2. The problem
MAD may not seem such a good idea now, but it was the best solution that
could be found to a puzzling problem: rational action does not always bring
about the most materially favourable state of affairs. While acting in the
interest of national safety, nations were paradoxically putting themselves
in increasingly great danger. Note that the notion of rationality we are
discussing is that of self-interest theory, underlying game theory and main-
stream economics (and cold war politics), according to which an action can
only be rational so long as it is in the interest of the person who performs
it. No form of giving can then ever be rational, unless it happens as a
part of a trade. Or, to use the terms of game theory, an agent's action is
said to be rational if it maximises his utility, according to a set of personal
preferences.
While this theory of rationality may seem to make rather pessimistic
assumptions regarding human nature, it does have the merit of acknowl-
edging the trivial fact that systems (states, companies, cells, humans, and
so on) must conform to the laws of physics. Systems must conserve their
resources and ensure their security if they are to survive. In a situation
where conflict seems likely, it is perhaps inevitable that states will care
about their national security above that of their opponents, if they are to
remain a state. The same is true regarding the actions of individuals.
We could explain altruistic behaviour by saying, like Hume, that when
we act unselfishly we are merely satisfying a natural instinct of benevo-
lence, which would enable us to claim that altruism is compatible with
self-interest.17 If we think like Hume, altruistic behaviour no longer seems
to be counter-intuitive. Such psychological explanations, probably of great
importance in a complete ethical theory, do not deal, however, with the very
real problem that the range of psychological reasons must be restricted by
material demands, if we want to have a system that survives. Regardless
of whether we prefer to count altruistic interests as rational or irrational,
we must provide an explanation for how they arise and remain viable.
The strategic advantage of striking first, during the cold war, before
MAD was put in place, was not simply a mistake of the analysts. It was
quite real. While both nations would be better off by not going to war,
striking first was the only safe way for each to avoid total annihilation.
Diplomatic negotiations, interestingly, could not help, as no promise could
"An interesting analysis of this and other interesting commitment issues involving the
MAD strategy can be found in [4].
be trusted in that situation. What could the penalty for lying be after the
trusting party had been destroyed? One of the most popular and systematic
approaches to the problem of making peace between
rationality and cooperation has been to study the Prisoner’s Dilemma, a
strategic puzzle which closely mirrors the nature of the cold war situation.
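For reference, the standard payoff structure of the Prisoner’s Dilemma can be written as follows; the numerical values are the conventional illustrative ones, not values taken from this chapter, and each cell gives the payoffs to the row and column players:

$$
\begin{array}{c|cc}
 & \text{Cooperate} & \text{Defect}\\ \hline
\text{Cooperate} & (R,R)=(3,3) & (S,T)=(0,5)\\
\text{Defect} & (T,S)=(5,0) & (P,P)=(1,1)
\end{array}
\qquad T > R > P > S .
$$

Whatever the other player does, each player earns more by defecting, yet mutual defection leaves both worse off than mutual cooperation, which is precisely the structure of the first-strike problem described above.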
ests, regardless of whether these are selfish interests or not. This suggests
the obvious solution: what if each would care about the other’s interests?
According to Peter Kollock, game theory’s assumption that agents make
decisions based on fixed payoffs is mistaken. For Kollock, a subjective
sense of moral righteousness could affect them, making cooperation seem
more attractive. As an example, imagine that in the PD the agents would
prefer to help their friend rather than try to avoid prison time at all cost.
The payoff for cooperation would then be higher and cooperation would be
rational. However, as we have seen above regarding Hume, this solution
would solve the problem at the expense of taking the explanation of why
they would adopt this preference for granted.
In practice, economists label altruistic actions as irrational. This is
a fair assumption, or at least a good starting point, as it deals with the
problem of physical viability of a system. It makes sense to expect not
only that the agent will satisfy his interests, but also that these interests
will be self-serving. Any other claim will contradict the Darwinian model
of evolution. Evolution does not care about subjective moral principles. A
genetically inherited trait only gets selected if it directly contributes (in a
material sense) to the survival of the individual and, to quote Robert Frank
(1998:54), “We can’t eat moral sentiments”. So, in order for altruistic
behaviour to emerge and spread, it seems it must be advantageous in a
material way. How could this work? There have been a few interesting
attempts to try to explain how altruistic behaviour could have emerged
(See [3] or [1] for a detailed survey), but, as we will see, they all have their
limitations.
strategy”.
Kin selection was proposed by Hamilton and can be understood as a
variety of the group selection argument, but with much better support.6
Hamilton claims it would make sense for someone to sacrifice himself for his
family as the genes he shares with his kin would live on. This is an argument
easily understood today if we think in terms of Richard Dawkins’ notion of
the selfish gene.10 The phenotype (the body, a physical implementation of
the genotype) may be sacrificed in order to save the genotype (the DNA).
An instinct for sacrificing oneself for the sake of our genotype would be an
Evolutionarily Stable Strategy (ESS).
Kin selection, however, does not explain why some of us are willing
to risk our lives to save a stranger and surely, in today’s social life, most
of our interactions are not with kin. It could be argued that this is a
recent phenomenon in the history of human evolution, and that perhaps
our genetic programming has not had time to adjust to our new lifestyles.
Robert Frank, however, correctly argues that genetically a second cousin
is almost no different from a stranger, and so even if humans used to live
in small groups in recent natural history, kin selection could not by itself
explain the evolution of altruism.
For Robert Trivers, reciprocity is the basis of altruistic behaviour. If a
set of players was to play the PD a number of times, a failure to cooperate
could be punished in the next interaction.b The success of the Tit for
Tat strategy in Axelrod and Hamilton’s study of the PD lent considerable
empirical support to this claim.12
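As a reminder of how simple the strategy is, here is a sketch of Tit for Tat (illustrative only; the function and move labels are hypothetical, not Axelrod’s code):

```python
def tit_for_tat(opponent_history: list) -> str:
    """Cooperate on the first round, then repeat the opponent's last move."""
    if not opponent_history:
        return "C"                   # open with cooperation
    return opponent_history[-1]      # thereafter, reciprocate

print(tit_for_tat([]))               # "C": initial trust
print(tit_for_tat(["C", "D"]))       # "D": defection is punished next round
```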
Reciprocity can also work indirectly through reputation. In real-world
situations, behaviour that could appear altruistic prima facie could con-
tribute to our establishing a reputation for being honest or kind, which
could give us an advantage in future interactions. In many cases, we have
an interest in keeping a promise because we would be punished if we did
not. However, we all know situations where we could stand to gain by
behaving selfishly but behave altruistically, not merely cooperating with
bTo be precise, the number of plays would have to be unknown to the players for co-
operation to be possible. If they both knew they would play 10 times, for instance, it
would be rational for both to defect on the 10th game, as there would be no possibility
of retaliation. This would mean there would be no possibility of retaliation on the 9th
game either, as both would know they would defect on the last game anyway. This would
then be true of all games, and so they would never cooperate. The versions of the puzzle
that are the hardest to solve (and the most interesting for the purposes of this paper)
are either the single-shot PD or a variant in which the number of plays is unknown to the players.
others for mutual advantage but going to some considerable trouble to help
a stranger, for instance, with no hope of material gain.
Reciprocity’s major drawback is that it fails to explain cooperation when
there is no possibility of retaliation. Even when humans play a PD-style
game a single time (also known as the single-shot PD), where defection is
not only the only rational choice but also one that cannot entail retaliation,
the results do not follow the predictions of game theory. Many of us give
tips in restaurants when we are on holidays away from home. Why would
anyone pay for good service after he already got it, if he could simply leave
and save some money? Some of us would risk our lives to save someone
we do not even know. Reciprocity does not seem to be able to solve our
problem. Our aim should then be to explain how altruism is possible in
the single-shot PD, something reciprocity fails to do, not least because thinking of
altruism as being based purely on conscious seeking of material gain surely
contradicts our common sense understanding of what genuine altruism is.
For some of those not satisfied with the standard accounts of how al-
truistic behaviour emerges and who do not want to abandon game theory’s
concept of rationality (which has the merit of dealing with the issue of
physical viability of ethical systems), a complexity approach seemed an
interesting route. Skyrms, Danielson and Axelrod, among others, wanted
to see how adding an evolutionary dimension to traditional models would
change the nature of the PD, making cooperation a rational strategy.
not understand the mechanics involved in it. The main problem does not
lie in a lack of knowledge regarding intra-molecule interaction, but in the
fact that the behaviour of every single molecule in the atmosphere has a
role in the behaviour of the system as a whole. As the climatic system cannot
be decomposed into large chunks of functional parts as, say, the engine of a
car, it resists traditional methods of analysis. The behaviour of the climate
is a result of the rich interactions between its elements - it is a complex
system. Human society, like the weather, is composed of a high number
of functional elements. Interactions between humans are complex and it
certainly seems that any model that ignores them will be overly reductive.
In weather forecasting, thousands of measurements of the temperature
and pressure of the atmosphere at different altitudes are obtained by satel-
lite and weather balloons and are inputted into computer simulations. In
the simulation, time can then be fast forwarded to yield predictions. The
more detailed the model is, the more accurate the predictions will be. Much
as even a perfect understanding of the interaction between two molecules
would not yield good weather forecasting, game theoretic analysis of the
interaction between two rational agents will not yield adequate models of
the mechanisms of altruism. Evolutionary Game Theory (EGT) introduces
the concept of population in game theory, eliminating a perhaps excessive
reduction of standard game theory that may be throwing the baby out with the bathwater.
When playing the PD in a game theory environment, the agent who
cooperates when the other defects will have the lowest payoff of all, as we
have seen. He will get the so-called “sucker’s payoff”. It is only mutual
cooperation that is beneficial for both, which is the reason why defection is
such an alluring strategy. EGT models have been instrumental in showing
that a cooperator will not necessarily do worse than a defector even when
playing with other defectors. Consider a typical type 1 model: there is a
virtual space, a grid in which a number of strategies, some “Cooperate” and
others “Defect”, placed at random, interact. The game theoretic concept
of utility is replaced with that of fitness (something similar to health points
which reflect the ability to reproduce more effectively), and strategies play
the one-shot PD repeatedly for fitness points. Note that repetition does not
change the fact that the game keeps the characteristics of the one-shot PD.
As long as the rules of the system make sure the players have no possibility
of retaliating (for instance by not giving agents ways of identifying and
remembering other players) they play each game as if it was the first and
only time they played.
Let us run the model. We will notice that the strategy of Defect, free-
riding on Cooperate (taking advantage of its kindness) and not losing too
many fitness points against other Defect strategies, will do quite well. De-
fect will do better on average than Cooperate. So far this confirms the
predictions of game theory. Now consider we introduce an extra element of
complexity in the model. When strategies are very healthy, having enough
fitness points, we allow them to produce offspring. This is where EGT starts
yielding considerably different results from game theory. If we program the
rules of the model so that the offspring remain within a short distance from
the parents, as happens in many biological systems, we will notice that local
pockets of Cooperate will start to grow, as Cooperate will have a good prob-
ability of playing against other Cooperate strategies and get more fitness
points on average. In this scenario, and depending on the variables regu-
lating spatial distribution and the fitness points attribution matrix, Defect
may be driven into extinction, or keep only a small population, surviving
in relative isolation on the borders of populations of Cooperate they can
exploit. A moral player does better than a rational player, and so coopera-
tion can become an Evolutionarily Stable Strategy. (See [4] for a detailed
explanation of the mechanics of standard EGT models).
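A minimal sketch of the kind of grid model just described may make the mechanism concrete. Everything here is an illustrative assumption (grid size, payoff values, neighbourhood, and the update rule, in which each site is recolonised by the best-scoring nearby strategy as a crude stand-in for offspring staying close to their parents); it is not the specific model referred to above.

```python
import random

SIZE = 30
# Payoffs to the row player in the one-shot PD: (my move, their move) -> fitness points.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0,   # 0 is the "sucker's payoff"
          ("D", "C"): 5, ("D", "D"): 1}

random.seed(1)
grid = [[random.choice("CD") for _ in range(SIZE)] for _ in range(SIZE)]

def neighbours(i, j):
    # Four nearest neighbours on a toroidal (wrap-around) grid.
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        yield (i + di) % SIZE, (j + dj) % SIZE

def step(grid):
    # 1. Each site accumulates fitness from one-shot games with its neighbours.
    score = [[sum(PAYOFF[(grid[i][j], grid[ni][nj])] for ni, nj in neighbours(i, j))
              for j in range(SIZE)] for i in range(SIZE)]
    # 2. Each site is then occupied by the best-scoring strategy in its neighbourhood.
    new_grid = []
    for i in range(SIZE):
        row = []
        for j in range(SIZE):
            candidates = list(neighbours(i, j)) + [(i, j)]
            bi, bj = max(candidates, key=lambda p: score[p[0]][p[1]])
            row.append(grid[bi][bj])
        new_grid.append(row)
    return new_grid

for _ in range(50):
    grid = step(grid)

coop = sum(row.count("C") for row in grid) / SIZE ** 2
print(f"fraction of cooperators after 50 generations: {coop:.2f}")
```

Run as it stands, the outcome depends strongly on the payoff values and the neighbourhood rule, which is exactly the sensitivity to careful tweaking discussed in the next paragraph.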
In order to achieve the results described above, however, careful tweak-
ing of the variables is essential. We do get the results we want, but at the
expense of rigging the experiment, making sure the model is set up so that
nothing else can happen. This does not detract from the fact that it is good
news that such a set-up is possible at all. There are however many more
possible scenarios in which cooperation cannot evolve, due to the natural
advantages of free-riding. The model described is also abstracting away
quite a few important things. The characters of the plot are excessively
flat, for instance. These agents always cooperate or always defect, regard-
less of context. Calling them strategies is more of a metaphor, as they are
equivalent to their strategies, which means there is no process of decision.
It is useful to have discovered that the spatial distribution of coopera-
tors and defectors can be a deciding factor in cooperation dominating the
population, but surely we do not always or mostly interact with our kin.
The argument that was valid against kin selection can be again brought
in here. There are, however, other more sophisticated applications of EGT
that deal with this issue, such as Peter Danielson’s, which we will now examine.
as only this agent could solve the PD. In order to produce it, Danielson asks
us to consider not the PD, but a slightly modified version of it that shares
most of its structure. In what Danielson calls the Extended Prisoner’s
Dilemma (XPD), one of the agents makes his move first. According to
game theory, this does not change the payoff matrix or the sub-optimal
result of mutual defection. The first player will still be better-off defecting,
regardless of what the second player does. The second player will also still
be better-off defecting, again regardless of what the first player has decided.
However, now it would be rational for the first player to cooperate if he could
somehow be sure that the second player would respond to cooperation and
only to cooperation with cooperation. It would also be rational for the
second player to use this strategy if proof that he will use it is the only
thing that can convince the first player to cooperate.
Danielson creates a constrained maximiser that is able to show his moral
engine to his opponent as evidence of his intention to respond to cooperation
with cooperation (and to defection with defection). The agent, a computer
program, gives access to his own code to the other agent, which can run it
to test it. As Danielson runs an EGT model in which there are such moral
agents, these will soon dominate the population, even if they start off with
just a small proportion of elements. Again, a moral player does better than
a rational player, and this time no careful tweaking of initial conditions is
necessary.
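A toy illustration of the mechanism may help, under the assumption that ‘showing one’s code’ can be modelled as letting the first player run the second player’s decision function before moving; the names and structure below are hypothetical, not Danielson’s implementation.

```python
# The second player's publicly inspectable "moral engine": it answers
# cooperation with cooperation and defection with defection.
def conditional_cooperator(first_move: str) -> str:
    return "C" if first_move == "C" else "D"

# Illustrative PD payoffs to the first mover: (my move, their reply) -> payoff.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def best_first_move(second_player) -> str:
    """Because the second player's code can be run in advance, the first
    player simply evaluates both of its own moves against it."""
    return max("CD", key=lambda move: PAYOFF[(move, second_player(move))])

# Cooperating first yields 3, defecting first yields only 1, so against this
# inspectable opponent cooperation is the rational opening move.
print(best_first_move(conditional_cooperator))   # -> "C"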
We have seen before that no form of discussion or promise-making could
result in a commitment in the game theory analysis of the PD. That is be-
cause talk is cheap. Regardless of what promises the agent would make to
cooperate, he would have no reason to keep them because he knows that, as
he will not play again against the same opponent, he cannot suffer retalia-
tion for lying. Knowing this, his opponent would have no reason to take his
promises seriously, and would be better off ignoring them altogether. Both
will defect. In Danielson’s XPD, however, commitment becomes possible
because there is no need for promises. There is in fact no real commitment,
in the psychological sense. If I say I behave like a human being because
I am a human being and present as evidence my healthy human body, I
am not just promising to behave like a human being, I am proving I will
behave like one, by necessity. Similarly, the agent that plays first knows
that the other player will respond to cooperation with cooperation in virtue
of its moral make-up. It will be irrational for the first player to defect, as
the retaliation is inevitable and automatic - the equivalent to the cold war’s
doomsday device. It seems we have got to love the bomb.
8. Discussion
If LaCasse and Ross are right, Danielson’s work is a complete waste of
time. But let us consider what kind of argument would convince them they
were wrong. What kind of argument would they accept? Hume said that
“ought” cannot be derived from “is”, to mean that normative statements
(what is morally good) cannot be derived from prudential considerations
(what is advantageous). LaCasse and Ross seem to agree with Hume, when
they refuse to accept Danielson’s explanation for how altruism can exist.
His “is” arguments cannot bridge the gap to “ought”. However, they seem
to demand such impossible bridging arguments, since they consider that
the failure to provide them can help them to prove morality has no role.
LaCasse and Ross seem to expect a normative account of morality to
be based on prudential explanations. This is indeed an impossible mission.
Nevertheless, failing it has no implications for the possibility of a natural-
istic account of altruism. If Hume was right in saying that reason is but a
slave of passions, such an account cannot be grounded on rational reasons.
According to Hume’s account of action, reason’s only job is to find the means
to ends defined by passions (desires). Emotions, then, are more plausible
candidates for the causes behind altruistic action, when we ask
for a psychological explanation. And when we ask for the foundations of
such psychological explanations, we will have to look at what evolutionary
pressures produced such emotions, so as to complement Hume’s account of
motivation.
Economist Robert Frank offers us an account of altruism that deals with
both the psychological and the evolutionary aspects of the problem. For
Frank, emotions, like Danielson’s or Gauthier’s behavioural dispositions,
act as enforcers of commitment. This need not be seen as a deviation
from rationality if we accept Hume’s minimal notion of reason, according
to which it is rational to do whatever we desire. We have expressed some
reservations about the fact that Hume’s account is incomplete; Frank’s,
however, includes the essential explanation for why moral sentiments do
not compromise the organisms’ viability.
Holding moral sentiments, for Frank, may be an evolutionary advantage,
as this can make the agent a preferred target for cooperation. The agent’s
sense of justice implies that he may seek revenge at any cost, when he
considers he has been treated unfairly. This will act as a deterrent to
potential offenders. These mechanisms can only work if it is somehow
visible to others that the agent is a moral agent (perhaps because he sounds
and looks honest). It must also be true that mimicking the appearance of a
moral agent is quite costly, from an evolutionary point of view, or we would
all pretend to be honest without being so, enjoying the advantages without
incurring the costs.
While it is not easy to provide empirical proof to support these claims,
Frank’s story succeeds in providing an account for situations in which one
is generous even when one knows the other may have no chance of returning
one’s generosity. Holding moral sentiments may mean that, at times, one
will engage in costly self-sacrifice for someone else’s sake. However, the
theory will remain compatible with Darwinist evolution as long as holding
moral sentiments is advantageous on average.
To be fair to Hume, we should note that he does include a crucial
component of a kind of non-intentional consequentialism in his account of
how desires are formed. The story goes as follows. We desire to help some
rather than others because we have come to love them. But for Hume
we come to love those who have something that interests us: a quality we
admire, fortune, influential friends. This is not because we are planning
to take advantage of such people. The love is genuine. It is simply in
our nature to love those we admire. After this genuine love is formed, it
will suit our purposes to help such people, as they will be in a position to
return more than we have given them - only this advantage is now accidental.
Interestingly, Hume’s account of how we come to love can help to solve a
major problem in Frank’s theory, as we will see.
Robert Frank goes one step too far when he argues that the selfish
person who, after reading his book, now knows that being an altruist is
advantageous, will start acting altruistically. This is the mistake Gauthier
has made and Danielson was trying to avoid. Acts of genuine altruism do
involve sacrifices, and if we accept Frank’s account, it is only advantageous
to be an altruist on average. Each individual act of altruism remains some-
thing to be avoided by a selfish person, regardless of whether he agrees with
Frank’s account. So Frank still leaves us in need of an explanation for how
we can come to be altruists.
Evolution has made it possible for us to feel the emotions that make
genuine altruism possible, but we are still not born natural altruists. How
can these emotions be used in the right way? How can they serve our in-
terest? Hume’s account of how we come to love others can be part of it.
We love those who have something to offer us. But here we have an inter-
esting case in which genuine sacrifice leads to material advantage, and that
cannot always be the case. Genuine altruism can involve straightforward
loss, like the sacrifice of one’s own life for the sake of someone else’s welfare.
Aristotle offers an interesting account of moral education, according to
which we come to know what is right, as children, by doing the right thing.
We simply go through the motions, mechanically at first, and then end up
enjoying it. This is indeed how we come to like music, or sports: by listen-
ing to music, by doing sports, not by theorising about it. It could be the
way we come to enjoy moral action.
So where does this leave Danielson’s work on altruism? His model shows
9. Conclusions
One important lesson to learn is that models of complex systems are not
at all immune to rationalistic assumptions. These can indeed dangerously
creep in unseen, covered up by the apparent realism of this methodological
approach. The seemingly down-to-earth, pragmatic nature of Evolutionary
Game Theory must not make us forget that its models are still heavily
theory laden.
Danielson’s story is a story of how to love the bomb, as it claims al-
truistic actions can be made rational if they can rely on a mechanism of
deterrence. We have seen that being moral cannot compromise a system’s
physical viability. The merit of the Prisoner’s Dilemma puzzle is that it
highlights this issue, a problem the Humean account of motivation does
not deal with directly. Acting altruistically therefore needs to be materi-
ally advantageous, at least on average. In search of the elusive connection
between morality and rationality, Danielson has followed Gauthier’s ap-
proach of reducing altruism to rational action. Danielson demands a direct
connection between physical viability and action, one which makes genuine
altruism impossible. If adequate evidence that genuine acts of altruism are
an illusion and can be explained in terms of pure rationality could be pro-
vided, we should be ready to accept it. It seems, however, that those who
are happy about reducing altruism to rationality do not do it because they
have found such evidence, but rather because they are confused by the fact
that altruistic acts paradoxically seem to serve the individual who performs
them. Although their intuition might be telling them the same story that those
who believe in genuine altruism accept, they reject it, possibly on the
grounds that we may be fooling ourselves. In fact, we often do. Perhaps
we like to think we are being kind when we are actually always trying to
further our interests. But why would we be any happier for thinking we are
morally good if we are not? If we accept this happiness is a result of purely
cultural reasons, we are still left with no explanation for why culture would
favour such self-delusion. Keeping genuine altruism and explaining why it
is possible offers us a much more satisfactory story of human nature.
An extra cog is needed in the mechanisms of causality; one that some-
times seems to be turning the wrong way, and emotions are a good candi-
date for this. Danielson’s other interesting work in applied ethics has done
much to establish EGT as a powerful tool for understanding and predicting
human behaviour. However, EGT, in its present stage, is unable to offer
an adequate account of altruistic behaviour. Danielson may be right in ex-
Acknowledgments
I conducted the research presented in this paper with a PhD scholarship
from the Portuguese Foundation for Science and Technology (FCT). I would
also like to thank Peter Goldie, Federica Frabetti and Marina Hasselberg,
for a number of interesting discussions on the topic of altruism.
References
1. R. Frank Passions within Reason: the strategic role of emotions. Norton
(1988).
2. C. Darwin The Descent of Man and Selection in Relation to Sex. New York:
Appleton (1871)
3. F. Heylighen, Evolution, Selfishness and Cooperation. In: Journal of Ideas
Vol. 2, No. 4, pp. 70-84 (1992)
PETER JEDLICKA
Institute of Clinical Neuroanatomy
J. W. Goethe University
Frankfurt/Main, Germany
E-mail: jedlicka@em.uni-frankfurt.de
Our intuition tells us that there is a general trend in the evolution of nature, a
trend towards greater complexity. However, there are several definitions of com-
plexity and hence it is difficult to argue for or against the validity of this intuition.
Christoph Adami has recently introduced a novel measure called physical com-
plexity that assigns low complexity to both ordered and random systems and high
complexity to those in between. Physical complexity measures the amount of in-
formation that an organism stores in its genome about the environment in which
it evolves. The theory of physical complexity predicts that evolution increases
the amount of ‘knowledge’ an organism accumulates about its niche. It might be
fruitful to generalize Adami’s concept of complexity to the entire evolution (includ-
ing the evolution of man). Physical complexity fits nicely into the philosophical
framework of cognitive biology which considers biological evolution as a progressing
process of accumulation of knowledge (as a gradual increase of epistemic complex-
ity). According to this paradigm, evolution is a cognitive ‘ratchet’ that pushes
the organisms unidirectionally towards higher complexity. A dynamic environment
continually creates problems to be solved. To survive in the environment means to
solve the problem, and the solution is an embodied knowledge. Cognitive biology
(as well as the theory of physical complexity) uses the concepts of information
and entropy and views evolution from both the information-theoretical and
the thermodynamic perspective. Concerning humans as conscious beings, it seems
necessary to postulate the emergence of a new kind of knowledge - a self-aware and
self-referential knowledge. The appearance of self-reflection in evolution indicates that
the human brain has reached a new qualitative level of epistemic complexity.
1. Introduction
Our intuition suggests that there is a general trend in the evolution of na-
ture, a trend towards greater complexity. According to this view evolution
is a progressive process with Homo sapiens emerging at the top of life’s
Figure 1. Complexity as a function of randomness (0 to 1).
random but lie between the two extremes of order and randomness (7).
What makes living systems complex is the interplay between order and
randomness (8). This principle has been supported by recent research on
biological networks (9). Complex networks of cellular biochemical path-
ways have neither regular nor random connectivity. They have a so called
scale-free structure (10). A scale-free network contains a small number of
hubs - major nodes with a very high number of links whereas most nodes in
the network have just a few links. “Scale-free” means that there is no well-
defined average number of connections to nodes of the network. In case of a
biochemical network, nodes and links represent molecules and their chemi-
cal reactions, respectively. In such a molecular network, hubs are important
molecules that participate in a large number of interactions (e.g. CAMP,
H20) in comparison to other molecules that take part in a few biochemical
signaling paths (11). An important consequence of the scale-free archi-
tecture is robustness against accidental failures of minor nodes/links and
vulnerability to dysfunctions of major hubs. (Note that this may bring inter-
esting insights into the molecular pathophysiology of various diseases.) Sur-
prisingly, recent mathematical analysis of various scale-free networks (in-
cluding biochemical signaling networks) has revealed self-similarity (fractal
pattern) in their structure (12). Self-similarity is a typical property of sys-
tems that are on the edge of order and chaos. Such a critical state (with
a tendency to phase transitions) might be useful for optimizing the per-
formance of the system (13). To sum up, we need a measure which would
capture the complexity of dynamical systems that operate between order
and randomness.
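A hypothetical sketch of the mechanism usually invoked to explain such hub-dominated, scale-free structure is preferential attachment, in which new nodes link preferentially to already well-connected nodes. The code below is illustrative only and is not taken from the analyses cited above.

```python
import random
from collections import Counter

random.seed(0)
edges = [(0, 1)]       # seed network: a single link
attachment = [0, 1]    # each node listed once per link it has (degree-weighted pool)

# Each new node adds one link to an existing node chosen with probability
# proportional to that node's current degree ("the rich get richer").
for new_node in range(2, 2000):
    partner = random.choice(attachment)
    edges.append((new_node, partner))
    attachment += [new_node, partner]

degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

print("five largest hubs:", [d for _, d in degree.most_common(5)])
print("median degree:", sorted(degree.values())[len(degree) // 2])
```

A handful of hubs end up carrying most of the links while the typical node keeps only one or two, which is the robustness-and-vulnerability profile described above.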
3. Physical complexity
Christoph Adami (14) has recently introduced a novel measure called physi-
cal complexity that assigns low complexity to both ordered and random sys-
tems and high complexity to those in between (Fig. 2). Physical complexity
measures the amount of information that an organism stores in its genome
about the environment in which it evolves. This information can be used
to make predictions about the environment. In technical terms, physical
complexity is a shared (mutual) Kolmogorov complexity between a sequence
and an environment (for mathematical equations see Ref. 14). Informa-
tion is not stored within a (genetic) sequence but rather in the correlations
between the sequence and what it describes. By contrast, Kolmogorov-
Chaitin complexity measures regularity/randomness within a sequence and
Figure 2. Physical complexity as a function of randomness (0 to 1), maximal between the two extremes.
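Schematically, and only as a paraphrase of the definition just quoted (Ref. 14 gives the exact equations), the shared or mutual Kolmogorov complexity between a genomic sequence $s$ and its environment $e$ can be written as

$$
C_{\mathrm{phys}}(s) \;\approx\; K(s) \;-\; K(s \mid e),
$$

the length of the shortest description of $s$ minus the length of its shortest description once the environment is given: what remains is the part of the sequence that is ‘about’ the environment.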
aInterestingly, Chaitin has proposed his own definition of life based on Shannon’s concept
of mutual information. See Ref. 15.
Figure 3. Generalization to the entire evolution, including the evolution of man.
Polkinghorne (23) predicts that “by the end of the twenty first century,
information will have taken its place alongside energy as an indispensable
category for the understanding of nature.” The paradigm of cognitive bi-
ology points in the same direction.
Acknowledgements
The author thanks Dr. Stephan W. Schwarzacher for valuable discussions
and for reading the manuscript.
References
1. A.N. Kolmogorov, Three approaches to the quantitative definition of infor-
mation. Problems of Information Transmission 1:1-7 (1965)
2. G.J. Chaitin, On the length of programs for computing finite binary sequences.
J Assoc Comput Mach 13:547-569 (1966)
3. R.J. Solomonoff, A formal theory of inductive inference, part 1 and 2. Inform
Contr, pp. 1-22, 224-254 (1964)
4. G.J. Chaitin, Computers, paradoxes and the foundations of mathematics. Sci
Am 90:164-171 (2002a)
5. G.J. Chaitin, On the intelligibility of the universe and the notions of sim-
plicity, complexity and irreducibility. In: Grenzen und Grenzüberschreitun-
gen, XIX. Deutscher Kongress für Philosophie (Hogrebe W, Bromand J, ed)
Akademie Verlag, Berlin, pp. 517-534 (2002b)
6. J.P. Crutchfield, When Evolution is Revolution - Origins of Innovation. In:
Evolutionary Dynamics - Exploring the Interplay of Selection, Neutrality, Ac-
cident, and Function (Crutchfield JP, Schuster P, ed) Santa Fe Institute Se-
ries in the Science of Complexity, Oxford University Press, New York (2002b)
7. J.P. Crutchfield, What lies between order and chaos? In: Art and Complexity
(Casti J, ed) Oxford University Press (2002a)
8. P. Coveney, R. Highfield, Frontiers of Complexity: The Search for Order in
a Chaotic World, Ballantine Books (1996)
9. A.L. Barabasi, Z.N. Oltvai, Network biology: understanding the cell’s func-
tional organization. Nat Rev Genet 5:101-113 (2004), www.nd.edu/ networks
10. A.L. Barabasi, E. Bonabeau, Scale-free networks. Sci Am 288:60-69 (2002)
11. D. Bray, Molecular networks: the top-down view. Science 301:1864-1865
(2003)
12. C. Song, S. Havlin, H.A. Makse, Self-similarity of complex networks. Nature
433:392-395 (2005)
13. S.H. Strogatz, Complex systems: Romanesque networks. Nature 433:365-366
(2005)
14. C. Adami, What is complexity? Bioessays 24:1085-1094 (2002)
39. D.R. Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid, Basic
Books (1979)
40. F.M. Kronz, J.T. Tiehen, Emergence and quantum mechanics. Philosophy of
Science 69:324-347 (2002)
41. K. Svozil, Computational universes. In: Space Time Physics and Fractality
(Weibel P, Ord G, Rossler OE, ed) Springer Verlag, Wien, New York, pp.
144-173 (2005)
42. M.J. Dodds, Top down, bottom up or inside out? Retrieving Aristotelian
causality in contemporary science. In: Science, Philosophy and Theology
(O’Callaghan J, ed) South Bend (1997)
INFORMATIONAL DYNAMIC SYSTEMS: AUTONOMY,
INFORMATION, FUNCTION
WALTER RIOFRIO
Neuroscience and Behaviour Division
Universidad Peruana Cayetano Heredia.
The main purpose of this paper is the conceptual exploration into the pre-biotic
world, a world that we consider to be made up of a relatively continuous sequence
of systems directed toward the origin of the first life forms. In a tentative way, we
propose the ‘Informational Dynamic System’. This type of system would constitute
the ancestral system that definitively opened the door to the pre-biotic world.
Also, it explains, in a naturalistic way, the physical emergence of functions and
information.
1. Introduction
The so-called self-organization phenomena feature multiple types of dy-
namic systems, and we find them in different sectors of our reality. For
example, we have the Bénard cells and the Belousov-Zhabotinskii reactions
(or Brusselator when this type of chemical reaction is computer simulated).
It was Ilya Prigogine who, working on these chemical processes, coined the
term “dissipative structures”, allowing us to understand the emergence of
‘novelties’ in systems that are far from the thermodynamic equilibrium
state. From the thermodynamic point of view, these systems are open sys-
tems (they exchange energy and/or matter with their environment), and the
self-organization phenomena are produced due to the continuous changes
in the local interactions of the components that form the dynamic system
[1].
The Belouzov-Zhabotinskii (B-Z) reaction is an oscillating chemical re-
action that is autocatalytic. In other words, products of the reaction in-
crease the rate of their own production. In this process, different steady-
states and periodic orbits can be observed. The Cerium oxides, Ce(III) and
Ce(IV), are the chemical compounds in solution that produce an oscillation
dynamic in the reaction. The periodic increase in the concentration of one
of the chemical compounds and the subsequent decrease of the other is seen
namic organization such that they would allow the emergence of the dawn
of an open-ended evolution (biological evolution).
Our conceptual proposal furthermore has two additional ends: the con-
struction of arguments that support the possibility and the necessity of
proposing one definition of information and one definition of function in
naturalist terms. In other words, we defend that both notions are not
purely thought exercises, whether in abstract, formal, or epiphenomenal
terms, or that for their definition they need some type of adscription im-
posed from the human world or from that of the human-generated devices.
In this work, we put forward a type of self-organized dynamic system
that we believe to be the system that opened the doors of the pre-biotic
world, and because of that, will begin the step from the world of the inan-
imate to the world of living systems: the Informational Dynamic System.
We place our theoretical reflections in the distant past, and this era tells
us that the types of chemical constituents we might find available on the
Earth would be incredibly simple chemical compounds, mainly inorganic,
yet perhaps a very small amount of organic ones. We might even consider a
possible abiogenic synthesis of such compounds like the thioester [27]. The
thioester compounds are those that are formed when sulfhydryl, an organic
group that contains a Sulfur-Hydrogen bond (R-SH), joins with another
compound called carboxylic acid (R’-COOH). During the chemical reaction,
a water molecule is released, forming this way the thioester compound (R-
S-CO-R’).
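Written schematically, the condensation just described is

$$
\mathrm{R{-}SH} \;+\; \mathrm{R'{-}COOH} \;\longrightarrow\; \mathrm{R{-}S{-}CO{-}R'} \;+\; \mathrm{H_2O}.
$$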
Taking into account the current understanding about what the condi-
tions of primitive Earth could have been like and meteor impacts during
this time period, it is possible to theorize the feasibility of the existence of
amino acids and carboxylic acids. Furthermore, the fact of massive volcanic
activity makes it possible to theorize the existence of sulfhydryl [28-32].
For the purposes of the work, we set a line of demarcation around the
origin of pre-biotic systems: it would be a type of system that has attained
a certain degree of autonomy through its own, self-maintained organizational dy-
namic. Therefore, it is a type of dynamic system much more robust than
the self-organized system we mentioned at the beginning, which we will call
a “strict sense” self-organized system.
This opens up space for the possibility of returning to the most robust
and complex dynamic organization in the continuous succession of a type
of pre-biotic system that will end up producing the first life forms.
In that sense, we consider that our hypothetical informational dynamic
system provides an explanation for the emergence of that organizational
dynamic, and with it, allows us to understand the emergence of the global
property known as autonomy.
Generally speaking, self-organized systems are systems far from thermo-
dynamic equilibrium. Bénard cells, Belousov-Zhabotinskii reactions, living
systems, and pre-biotic systems are ones that are far from thermodynamic
equilibrium.
One difference that is crucial to our argument is that the first two exam-
ples are systems far from equilibrium because of external causes: the matter
and energy requirements that their processes need invariably depend upon
the environmental conditions.
On the contrary, the other two cases maintain themselves in nonequi-
librium conditions for reasons that are systemically intrinsic. With this
statement, let us begin our study.
The correlation among processes is an expected phenomenon when a
system is in far from equilibrium conditions [33, 341. One important step
to the pre-biotic world is the appearance of a correlation between endergonic
and exergonic processes. The former require an energy input, whereas
aThe idea of cohesion is applied to dynamic systems and their properties that are bonded
by an internal dynamic of relationships. This allows the establishment of a causal sub-
strate for delimiting the dynamic system identity [39]
namic organization that constitutes it. So, the system’s dynamic organi-
zation is in fact a collection of processes, interrelated and interdependent,
with the additional element that this interconnection among the processes
maintains a determined form of concretely connecting themselves with the
constriction that permits the system to remain in the far from thermody-
namic equilibrium state.
Agent behavior brings us to the understanding that a certain degree
of management of internal and external conditions would exist. This man-
agement (even though very minimal at the beginning) directs our attention
to those processes that would be responsible for interacting and performing
recognition, selection, and active or passive transport processes.
And this management is done because it is “dictated by” (there is a
strong correlation with) the tendency to persist as such - to maintain itself
in the state that fundamentally characterizes it; the most primary aspect
of its nature is constituted by being a far from thermodynamic equilibrium
system.
It is possible to conjecture that the basic constriction causes the change
of the system’s free energy (ΔGsys) to have a tendency towards negative
values. Taken as a totality of the processes and its interconnection, this
causes it to be immersed in continuous and successive dynamics of increas-
ing order.
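In the standard thermodynamic notation this conjecture presumably appeals to (supplied here for reference and not taken from the original text), a spontaneous overall tendency at constant temperature and pressure corresponds to

$$
\Delta G_{\mathrm{sys}} \;=\; \Delta H_{\mathrm{sys}} \;-\; T\,\Delta S_{\mathrm{sys}} \;<\; 0 .
$$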
Therefore, to stop being a far from thermodynamic equilibrium system
is the same as saying that the system no longer exists. To strongly influ-
ence the conditions for remaining in that state implies seriously placing its
survival at risk.
This central aspect of its nature is now maintained by the system itself
through the incorporation of the basic constriction. This allows the system
to maintain itself in the far from equilibrium state.
As this constitutes the most original aspect of these systems, it seems to
us that we can assume that it governs the multiplicity of transformations of
behaviors, components, capacities, and characteristics of the future system.
This way, our proposal rests upon the naturalist supposition that infor-
mational dynamic systems develop their difference from “strict sense” self-
organized systems because they have been able to develop and maintain a
capacity that is elemental for their existence: maintaining or increasing the
far from equilibrium state.
As the constriction maintaining them far from equilibrium is an intrinsic
part of their dynamic organization, there are strategies they can develop
that manage to keep this state in conditions compatible with the laws im-
agreement with maintaining the system in the far from equilibrium state.
However, it can also be that it was in opposition to maintaining the far
from equilibrium state.
In the first situation, the meaning of the matter-energy variation that
comes to the system will cause a positive response by the system. In the
second case, the meaning will generate a negative response by the system.
We can see that both information and function are strongly connected
by their definition with the idea of far from thermodynamic equilibrium.
And this is precisely what we propose.
Because of this, it is possible to consider, through a naturalist perspec-
tive, the far from thermodynamic equilibrium state as a basic norm that
the system imposes on itself for continuing to be the system that it is.
This way, information and function will have to be defined starting
from what we physically have right now, where there is no place for human
adscription, and hence, the possible epiphenomenal interpretation of some
of them is avoided from the start.
The differences that exist between a system in a thermodynamic equi-
librium state and a system in a far from thermodynamic equilibrium state
are something we hold firmly in our minds and that have been well defined
through experimentation. The group of causes and boundary conditions to
differentiate them can be used in the most explicit way possible.
Therefore, for our defense of a class of system with the characteristics
we mentioned, we would have in the realm of physical laws a naturalist
principle for building up the essential aspects of a normativity.
Effectively, a normative principle points out precisely those circum-
stances when the norm is fulfilled and the difference in those situations
when the norm is not fulfilled. For our work, those circumstances in agree-
ment with the far from equilibrium state are fulfilling the norm, and those
circumstances not in agreement with that state are not fulfilling the norm.
What is interesting about the case is that the norm we just explained
essentially permits the way these systems exist. Therefore, this normativity
is derived naturally from the physical world. Moreover, the normativity
comes from and is dictated by the system itself.
In its origins, what we would have might be a normative definition
of the information-function notion. It is from this initial appearance of
both ideas that we can postulate later on that a kind of symmetry breaking
would arise in the primitive idea, so that we would find both characteristics
acting independently (although on occasion, in some way, interacting as
they did at the beginning) and with the possibility of increasing the levels
References
1. Nicolis G & Prigogine I, Self-Organization in Nonequilibrium Systems: From
Dissipative Structures to Order through Fluctuations. Wiley, New York
(1977).
enced in the natural sciences but it has reached and influenced a wide span
of disciplines, from physics to the humanities and, of course, to what Simon
began to call “the sciences of the artificial.” Even so, it is remarkable how
difficult is to define the concept of complexity. Complexity is a very elu-
sive idea, due, on the one hand, to the wide range of areas of applicability
and, on the other, to the multiplicity of proposed definitions and measures.
Nevertheless, it is not our intention to offer a huge list of definitions of what
other people think complexity is.a On the contrary, we will just very
shortly describe which general traits epitomize complex systems and then
we will characterize what kind of systems are the living ones and which are
their peculiarities (as organized complexity).
Therefore, in this paper we intend to address, at least partially, the
conceptual challenge posed by complexity (i.e., apparent contradiction of
simultaneous vagueness and usefulness) by focusing on living systems. It
is generally taken for granted that living beings are complex, even charac-
teristically so, and this attribution does not seem especially controversial.
Besides, the traits that make biological organization complex seem to be (at
least to a certain degree) more amenable to scientific scrutiny and descrip-
tion, without turning to higher order capacities such as sociality or human
intelligence. Notwithstanding, biological complexity keeps at the same time
some conceptual depth and suggestive power that appear lacking in more
formal or physical expressions of the notion of complexity.
Turning then to biological complexity, if we review the philosophical
panorama of the past century regarding the conceptual tools that have been
used to understand and explain living systems, we may easily find (among
other equally interesting turning points) that the fate of a cluster of ideas
revolving around aspects of complexity, such as hierarchy, levels, emergence,
organization, wholes and systems, etc., has been peculiar, ranging from
proposal and more or less easy coexistence and debate with other opposing
views in the first part of the century to total disappearance from the larger
part of that period, and to a renewed interest in its last decade. Witness
to this resurgence are, to mention two approaches relevant to this work,
the very recent mainstream appeal to build a systemic biology or systems
biology (see Ref. 1) and the not as recent constitution of the field of theories
or sciences of complexity, including Artificial Life, which we might date back
aLloyd, from MIT, has been compiling for years a “non-exhaustive list of measures of
complexity” that can be accessed at the web:
http://web.mit.edu/esd.83/www/notebook/Complexity.PDF
bHowever, it is not the purpose of this short paper to deal with the logically subsequent
issue of trying to assess to what degree these recent approaches innovate with respect
to paths previously open or just develop them further (or even if they are aware of them).
Therefore, we are not going to discuss here current efforts to approach the complexity
of living systems along similar lines to those we will describe in the paper.
archy as a relation among levels with different and specific dynamics which
gives way to emergent processes both upwards and downwards.
We claim that this kind of work is a very appropriate way to advance in
disclosing what complexity means. Recalling Weaver’s distinction between
problems of “simplicity,” “disorganized complexity” and “organized com-
plexity,” we intend to contribute to characterize as precisely as possible a
kind of systems, the living ones, which are paradigmatic of that organized
complexity whose challenge Weaver thought would be accessible within the
fifty years following the publication of his paper in 1948.6
complexity. In the first case, the behavior of the components of the system
can be described as Brownian and the global outcome can be described
as an average; examples would range from gases to more interesting ones
such as some dissipative structures or chaotic systems. The second case
would include biological and some social systems, where the behavior of
components is locally controlled by the level of the whole system (or specific
modules).
In general terms, complex systems show a global emergent behavior
that is not predictable with the classical mechanics toolkit, due to their
non-linear character. Descriptions of complex systems always imply at least
two levels - lower and upper - and, in order to explain their behavior,
a full account of the relations between them is required. The levels at the
bottom give rise to global patterns (upward causation) while those
global patterns harness the behavior of the levels at the bottom (downward
causation).18 In the case of disorganized complexity, the level above may
impose what is known as boundary conditions; in organized complexity, the
level above may impose functional constraints on the level below.
It is this challenge that Simon, to cite another classical paper, seeks
to answer by claiming that the architecture of complexity is necessarily
hierarchical.
cEven if they had a different origin and development, somehow they have come to meet
in a common corpus (see Refs. 20, 21, 22, 23).
3.1. Organicism
Organicist theories in biology developed mainly during the interval between
the two world wars by practitioners of theoretical biology generally linked
to embryology. Organicism
was a research framework that developed
an epistemology addressed to cope with the complexity of life. It might
be among the first ones in doing so in the 20th century since it dates back
to the early days of the 1910’s with the work of Ross Harrison. Among
those who were involved on the new perspective were the just mentioned
Harrison, Bertalanffy, Needham, Waddington, Weiss, and Woodger among
others. Organicism in a wider scope was a response to a set of general
claims and debates developed all along the 19th century between vitalism
and materialism (e.g., some of these were on naturalism vs. supranatural-
ism, determinism vs. freedom, mechanicism vs. vitalism). A very specific
one was related to how biological form should be explained, a problem that
defied every mechanistic or vitalist explanation since the 18th century (and
still does to some extent), due to the emergence of order apparently “for free.”
So, in a much more narrow sense organicism offered an alternative program
regarding the emergence of embryological order. The main difference be-
tween this alternative view with reference to vitalism and materialism was
the search for an emergentist account that would be consistent with physics.
All the efforts of organicism were addressed to understand the forms that
arise and proliferate in the organic world and the comprehension of the laws
that generate them. To explain form it was necessary to understand what
was responsible for its origins, maintainance and transformation. This was a
twofold situation, on one side, it was necessary to approach the evolutionary
aspects of form; on the other, its developmental features. A particular
challenge posed by organic forms to any evolutionary or developmental
account of it was that organic forms are progressively defined in the process
of growing and not piecemeal built.
In order to address these problems, organicism developed a set of ideas
and concepts that can be classified, following Bertalanffy and Morton Beckner,
into three groups: Organization and Wholeness, Directiveness, and Historicity
or Dynamics.
Organization and Wholeness. The concept of organization made reference
to the hierarchical constitution of organisms. Each level was formed
by parts which, due to the relations among them, displayed at that level
rules and properties that were absent at the levels below. Within this frame,
some organicists, like Harrison and Needham, developed their notions of the
connection between structure and function.
Akin to the concept of organization was the notion of wholeness. Organicists
always emphasized the importance of the appreciation of wholeness.
This notion was about regulation, that is, about the top-down perspective
that allows one to see how the rules at the level of the system exert their power
over the relations among its constitutive parts. In other words, the existence
of a whole rests on the subordination of parts to the rules or laws
that characterize the system as such.
To make explicit the link between form, wholeness and organization:
for the organicists, form would be the expression of the whole. In this
sense, their main concern was to understand how, along development and
evolution, the relations among parts and the relations among levels give rise
to new levels of complexity; and at each step of these processes, properties
such as symmetry, polarity, and patterns are the means through which form
The Theory of Integrative Levels stated that the relation among the
components of a system is what gives rise to new emergent levels, and that
new properties are the outcome of organizational and integrative principles
rather than of the component properties. The purpose of taking into
account the integration of the components in the system's organization
was to include as fundamental both the properties of the level considered at the
bottom and those of the levels considered at the top, i.e., the so-called
emergent ones. The theory held that there are some general levels that can be
identified as physical, chemical, biological and social, and that each one has
its very own laws as well as its own mechanisms through which changes,
or emergence between levels, occur.
One of the most representative features of the Theory of Integrative
Levels is its stand against the reduction of biology to physics and chemistry;
Novikoff, for example, was clear about the fact that living organisms
are not just machines made of many physico-chemical units that could be
taken apart and whose global behavior could be deduced through analysis
(see Ref. 30:210). For this theory the main biological levels were cells,
tissues, organs, organisms, and populations.
As methodological and ontological aspects, we may consider the following,
rephrased from Feibleman:42
- The analysis of a system must be done considering its lowest and its
highest levels in order to achieve the most complete explanation.
- An organization always belongs to its highest level.
- In the end, every organization must be explained in terms of its very
own level.
- It is impossible to give a full account of an organization if it is explained
only in terms of its lowest or its highest level (e.g., an organism
cannot be explained just in terms of genes or ecosystems).
4. Conclusion
The work of all these researchers on biological complexity offers a toolkit
for building an operative approach to phenomena whose epistemological implications
go beyond biology and which is of interest for the study of complex
systems in general. Beginning with the necessity of a hierarchical arrangement,
there is further interplay among structural and functional hierarchies.
In this context, the emergence of autonomous hierarchies implies having
systems with the capacity to self-impose holonomic and non-holonomic constraints.
Those systems and their increasingly complex behaviors appear
naturally in biological cases (and, obviously, in socio-cultural ones), but are
also quite interesting for artificial systems. Within this kind of full-fledged
complex system, the relations among levels are described as upward and
downward causation and are made operative through the concept of constraint,
where constraints are understood as enabling mechanisms and not only
as restrictive material structures. Finally, this scenario sets the conditions
that allow the open-ended evolution of complexity.
Acknowledgements
Funding for this research was provided by grant HUM2005-02449 from the
Ministerio de Educación y Ciencia of Spain and the FEDER program of the
E.C., and grant 9/UPV 00003.230-15840/2004 from the University of the
Basque Country. Jesús M. Siqueiros holds a predoctoral scholarship
References
1. -, Systems Biology. Science, 295, 1661-1682 (2002).
2. D. Pines (Ed.), Emerging Syntheses in Science: Proceedings of the Founding
Workshops of the Santa Fe Institute. Redwood City, Addison-Wesley (1988).
3. G. A. Cowan, D. Pines, and D. Meltzer (Eds.), Complexity Metaphors, Mod-
els, and Reality. Redwood City, Addison Wesley (1994).
4. C. G. Langton (Ed.), Artificial Life. Redwood City, Addison-Wesley (1989).
5. M. Polanyi, Life's irreducible structure. Science, 160, 1308-1312 (1968).
6. W. Weaver, Science and complexity. American Scientist, 36, 536-544
(1948).
7. G.J. Klir, Facets of Systems Science. New York, Plenum Press (1991).
8. R. Rosen, Some comments on systems and system theory. International Jour-
nal of General Systems, 13, 1-3 (1986).
9. D. L. Stein (Ed.), Lectures in the Sciences of Complexity: the proceedings of
the 1988 Complex Systems Summer School. Redwood City, Addison-Wesley
(1989).
10. J. Gleick, Chaos. Making a new science. New York, Viking Press (1987).
11. E. N. Lorenz, The Essence of Chaos. Seattle, University of Washington Press
(1993).
12. I. Prigogine, From Being to Becoming. Time and Complexity in the Physical
Sciences. San Francisco, Freeman (1980).
13. I. Prigogine and I. Stengers, La nouvelle alliance. Métamorphose de la science.
Paris, Gallimard (1979).
14. H. Atlan, Entre le cristal et la fumée. Paris, Seuil (1979).
15. S. Forrest (Ed.), Emergent Computation. Self-organizing, collective, and cooperative
phenomena in natural and artificial computing networks. Cambridge,
MIT Press (1991).
16. J. Umerez, Jerarquías Autónomas. Un estudio sobre el origen y la naturaleza
de los procesos de control y de formación de niveles en sistemas naturales
complejos. Universidad del País Vasco / Euskal Herriko Unibertsitatea (1994).
17. H. H. Pattee, Instabilities and Information in Biological Self-organization.
In: F. E. Yates (Ed.), Self-organizing Systems. The Emergence of Order, New
York, Plenum Press, 325-338 (1987).
18. A. Moreno and J. Umerez, Downward causation at the core of living orga-
nization. In: P. B. Anderson, C. Emmeche, N. O. Finnemann, P. V. Chris-
tiansen (Eds.), Downward Causation, Aarhus, Aarhus University Press, 99-
117 (2000).
19. H. A. Simon, The Architecture of Complexity. Proceedings of the American
Philosophical Society, 106, 467-482 (1962).
20. W.R. Ashby, An Introduction to Cybernetics. London, Chapman & Hall
(1956).
1. Introduction
The analysis of complex systems is being pursued with increasingly
sophisticated information technologies. In particular, the area of computer
simulation has acquired a decisive role in analysing societies as complex
systems, leaving behind the history of simulation as a secondary methodology
in the social sciences. The sources of analogy between agent-based
*The title of this paper is inspired by James Fetzer's article "Program Verification: The
Very Idea". See Ref. 1.
cFor an introduction to classical computability theory, including the Church-Turing thesis,
see e.g. Ref. 9.
For a comprehensive critique see Ref. 8; see also Ref. 12.
literature are recalled, proceeding afterwards to refute the use of such conceptions
for describing the scientific practice of simulation. In the second
part, the role of programming languages in simulation, according to intentional
accounts of computation, is discussed. Two types of programs and
programming languages in simulation are identified: programs as text and
programs as icons; and languages as abstract machines and languages as
aesthetic machines. The use of abstract languages confirms that the method
of simulation incorporates formal and empirical methodologies. The use of
aesthetic languages demonstrates that it depends fundamentally on intentional
methodologies. The roles that intentional decision making may play
in a participative information society are also discussed.
2. An Ontological Confusion
Conventional methodologies of computer science model the mechanism of
executing a program in a computer as a process of formal inference. In
the complexity sciences, the process of running a program has been described not
only as an automatic inference procedure but as a formal deductive procedure
itself. The computer is seen as a mechanism of formal calculus, where
programs represent mathematical functions that map inputs into outputs.
The calculus can be modelled in several ways, one of which ascribes to the
computer the capacity to prove first-order theorems according to a fixed
vocabulary. Considerations of brevity and simplicity lead us to call this tradition
the FDE argument, for 'Formal Deduction through Execution'.
Notwithstanding, our goal is quite the opposite, namely to demonstrate
that simulation should not be legitimized under the presumption of resulting
from a calculus of formal inference. Additional conceptions of knowledge
are needed.
hThe concept of generative social science was also adopted in Ref. 17.
These ideas often turned out to be misleading. For instance, it was often
claimed that the intended behaviours of computers could be completely
specified and verified before the corresponding programs were executed on
specific computers, by means of purely formal methods. Computer science
would be viewed as pure mathematics rather than as applied mathematics
or empirical science.
Fetzer's philosophical refutation of the formal verification project consisted
of distinguishing two kinds of programs: those that are and those that
are not in a suitable form to be compiled and executed on a machine. The
difference between them is that the former are verified by reference to
abstract machines, whereas the latter require the existence of compilers, interpreters
and target machines. Compilers, interpreters and processors are
properly characterized according to specific target physical machines. Insofar
as a program in an abstract machine does not possess any significance
for the performance of a target machine, the performance of that program
can be verified formally and conclusively. Conversely, to the extent that the
performance of a program possesses significance for the performance of a
target machine, that program cannot be conclusively verified a priori. The
program must be verified empirically by means of program testing. Hence,
the formal verification project is not viable. Program testing should be
the crucial technique to ascertain the proper behaviour of programs and
computers, notwithstanding the use of formal methods during the stages of
analysis and design.
Our goal is to informally reduce the FDE argument in social science simulation
to the formal verification project in computer science. Consider the
formal verification project in its most radical terms. The intent
would be to create formal methodologies that could guarantee that a given
specification corresponds to the behaviour of a program executing in
a computer. That is, to find deductive procedures to verify conclusively
the correctness of a program P in relation to a specification F : I → O,
in order to guarantee that the execution of P with inputs I results
exactly in the specified outputs O. The following argument reduces FDE
to the formal verification project.
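To make the contrast concrete, the following minimal Python sketch (our own illustrative example; the function names and the squaring specification are hypothetical and not taken from the chapter) shows a specification F : I → O treated as an abstract object, and the empirical program testing that, following Fetzer, remains the only way to check a program once it actually runs on a machine:

```python
# Hypothetical sketch: a specification F and a candidate program P.
# Formal verification would try to prove that P agrees with F for *all* inputs;
# empirical testing can only sample P's actual behaviour on finitely many inputs.

def spec_F(i: int) -> int:
    """Specification F: map every input to its square."""
    return i * i

def program_P(i: int) -> int:
    """Candidate implementation: squares by repeated addition."""
    total = 0
    for _ in range(abs(i)):
        total += abs(i)
    return total

def empirical_test(inputs) -> bool:
    """Program testing: compare P against F on a finite sample of inputs.
    A passing test corroborates P but never conclusively verifies it."""
    return all(program_P(i) == spec_F(i) for i in inputs)

if __name__ == "__main__":
    print(empirical_test(range(-10, 11)))  # True, but only for this sample
```

However exhaustive the sample, the test establishes the program's correctness only relative to the inputs actually tried, which is the sense in which empirical verification remains inconclusive.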
[Figure: diagram relating PROGRAMS and MACHINES at HIGH and LOW LEVELS to a natural process, with inductive and abductive links.]
relating facts about the objective behaviours of the program. The results
are appropriately characterized by conditions of intentionality that relate
aesthetic components in the program, negotiated according to a limited
level of consensus.
Acknowledgements
Jaime Simão Sichman is partially supported by CNPq, Brazil, grants
482019/2004-2 and 304605/2004-2.
References
1. James Fetzer, Program Verification: The Very Idea, Communications of the
ACM, 31, 1048-1063 (1988).
2. Nuno David, Maria Marietto, Jaime S. Sichman, Helder Coelho, The
Structure and Logic of Interdisciplinary Research in Agent-Based Social
Simulation, Journal of Artificial Societies and Social Simulation, 7(3),
<http://www.soc.surrey.ac.uk/JASSS/7/3/4.html> (2004).
3. Ulrich Frank and Klaus Troitzsch, Epistemological Perspectives on Sim-
ulation, Journal of Artificial Societies and Social Simulation, 8(4),
<http://jasss.soc.surrey.ac.uk/8/4/7.html> (2005).
4. Carlos Gershenson, Philosophical Ideas on the Simulation of Social
Behaviour, Journal of Artificial Societies and Social Simulation, 5(3),
<http://jasss.soc.surrey.ac.uk/5/3/8.html> (2002).
5. J. Klüver, C. Stoica and J. Schmidt, Formal Models, Social Theory and
Computer Simulations: Some Methodological Reflections, Journal of Artificial
Societies and Social Simulation, 6(2),
<http://jasss.soc.surrey.ac.uk/6/2/8.html> (2003).
6. Rosaria Conte, Bruce Edmonds, Scott Moss, Keith Sawyer, Sociology and
Social Theory in Agent Based Social Simulation: A Symposium, Computa-
tional and Mathematical Organization Theory, 7(3), 183-205 (2001).
7. Nuno David, Empirical and Intentional Verification of Computer Programs in
Agent-based Social Simulation, Ph.D., University of Lisbon (in Portuguese)
(2005).
8. Nuno David, Jaime S. Sichman, Helder Coelho, The Logic of the Method
of Agent-Based Simulation in the Social Sciences: Empirical and Intentional
Adequacy of Computer Programs, Journal of Artificial Societies and Social
Simulation, 8(4), <http://jasss.soc.surrey.ac.uk/8/4/2.html> (2005).
9. C. Papadimitriou, Computational Complexity, Addison-Wesley (1994).
10. Joshua Epstein, Agent-Based Computational Models And Generative Social
Science, Complexity, 4(5), John Wiley & Sons, 41-59 (1999).
11. John Casti, Would-Be Business Worlds. Complexity, 6(2), John Wiley &
Sons, 13-15 (2001).
12. Gunter Kuppers and Johannes Lenhard, Validation of Simulation: Patterns
in the Social and Natural Sciences, Journal of Artificial Societies and Social
Simulation, 8(4), <http://jasss.soc.surrey.ac.uk/8/4/3.html> (2005).
13. John Casti, Would-be Worlds: how simulation is changing the frontiers of
science, John Wiley & Sons (1997).
14. Bernd-O. Heine, Matthias Meyer and Oliver Strangfeld, Stylised Facts
and the Contribution of Simulation to the Economic Analysis of Budgeting,
Journal of Artificial Societies and Social Simulation, 8(4),
<http://jasss.soc.surrey.ac.uk/8/4/4.html> (2005).
15. Scott Moss and Bruce Edmonds, Towards Good Social Science, Journal of
Artificial Societies and Social Simulation, 8(4),
<http://jasss.soc.surrey.ac.uk/8/4/13.html> (2005).
16. Riccardo Boero and Flaminio Squazzoni, Does Empirical Embeddedness
Matter? Methodological Issues on Agent Based Models for Analytical So-
cial Science, Journal of Artificial Societies and Social Simulation, 8(4),
<http://jasss.soc.surrey.ac.uk/8/4/6.html> (2005).
17. Joshua Epstein, Robert Axtell, Growing Artificial Societies: Social Science
from the Bottom Up, MIT press (1996).
18. C. G. Hempel and P. Oppenheim, Studies in the Logic of Explanation, Phi-
losophy of Science, 15, 567-579 (1948).
19. Brian Cantwell Smith, The Foundations of Computing, Smith's introduction
to a series of books that report his study of computing
in the books The Age of Significance: Volumes I-VI. Available at
<http://www.ageosig.org/people/bcsmith/papers> (1996).
20. M. J. Prietula, K. M. Carley and L. Gasser, Simulating Organizations, MIT
Press (1998).
21. Keith Sawyer, Multiagent Systems and the Micro-Macro Link in Sociological
Theory, Sociological Methods and Research, 21(3), 37-75 (2003).
22. Robert Axelrod, Advancing the Art of Simulation in the Social Sciences,
Simulating Social Phenomena, Springer Verlag, 21-40 (1997).
23. James Fetzer, Computers And Cognition: Why Minds are not Machines,
Studies in Cognitive Systems, 25, Kluwer Academic Publishers (2001).
24. Edsger Dijkstra, A Discipline of Programming, Prentice-Hall (1976).
25. C. A. R. Hoare, An Axiomatic Basis for Computer Programming, Commu-
nications of the ACM, 12, 576-580 (1969).
26. Karl Popper, The Logic of Scientific Discovery, Routledge Classics
(1935/1992).
27. James Fetzer, The Role of Models in Computer Science, The Monist, 82(1),
20-36 (1999).
A COMPROMISE BETWEEN REDUCTIONISM AND
NON-REDUCTIONISM
ERAY OZKURAL
Department of Computer Engineering
Bilkent University
Bilkent 06800 Ankara, Turkey
E-mail: erayo@cs.bilkent.edu.tr
1. Introduction
New mathematical techniques have enabled scientists to gain a better un-
derstanding of non-linear dynamical systems, while similar advances have
occurred in computer science for constructing models of natural phenom-
ena as information processing systems. A particularly interesting research
direction in the complexity sciences is the study of the conditions under which
self-organization occurs. Complexity theorists pay special attention to the
concept of emergence. Gershenson and Heylighen give a typical definition
of emergence:
Emergent properties characterize a system that cannot be reduced
to the sum of its parts.
The authors then give the striking example that an animate cell is
composed of parts that are inanimate. Thus, being animate must be an
emergent property, since the system cannot be explained in terms of its
parts. In the same paper, the authors present analysis/reductionism as a
shortcoming of classical science, which is indeed the case.
And I will suggest that complex phenomena like emergence may be under-
stood through the mechanical view.
The rest of the paper is organized as follows. In the next section, I will
review the relevance of the information theory approach to quantification
of complexity. In Section 3, I will highlight the relation of dualist and non-
reductionist theories in philosophy of mind, suggesting that extreme non-
reductionism leads to extra-physical notions. In Section 4, I will outline
mild non-reductionism as a physicalist approach for resolving the tension
between reductionism and non-reductionism.
bPlease note that a formal argument is beyond the scope of this paper.
and cellular automata dynamics.10 On the other hand, the time aspect can be
incorporated into new definitions of complexity. The "logical depth" definition of
Bennett incorporates time.11
There are significant features of biological systems that algorithmic
complexity does not model. For instance, algorithmic complexity disregards
redundancy in the data, while redundancy and fault-tolerance are
important features of biological systems. Biological systems also seem to achieve
optimizations other than reducing program size, e.g., time and energy optimizations.
Critics have also argued that algorithmic complexity does not
model the interestingness or usefulness of a message. In classical information
theory, the notion of "relevant information" has addressed the problem
of interestingness.12 Hutter has pursued in detail an approach based on
algorithmic probability, including agent models that maximize an arbitrary
utility function.13
Another shortcoming is the fact that algorithmic complexity is uncom-
putable. Therefore we always have to stick to much simpler approximations
if we need to explicitly calculate the complexity of a message. This is not
a shortcoming in theory, only a shortcoming in practice.
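As a rough illustration of such practical approximations (the use of zlib here is our own choice of compressor, not something prescribed by the chapter), a standard compressor gives a crude, computable upper bound on the algorithmic complexity of a message:

```python
import random
import zlib

def complexity_upper_bound(message: bytes) -> int:
    """Length of the compressed message: a crude, computable upper bound on
    its algorithmic complexity (the true minimal program may be far shorter)."""
    return len(zlib.compress(message, level=9))

if __name__ == "__main__":
    regular = b"ab" * 500                      # highly regular: short description
    random.seed(0)
    noisy = bytes(random.randrange(256) for _ in range(1000))  # looks incompressible
    print(complexity_upper_bound(regular), complexity_upper_bound(noisy))
    # The regular string compresses to a small fraction of its length, while the
    # noisy string stays close to 1000 bytes, matching the intuition that its
    # shortest description is about as long as the string itself.
```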
Descartes could not have known the concept of a universal machine, which
was discovered by Alan Turing: a machine so flexible that it can mimic any
other machine.18
Descartes proposed that the immortal soul he imagined interacted
with the human body through the pineal gland. This was not a good idea,
as examinations of the pineal gland showed. In general, the view that
the stuff of mind is distinct from matter is known as substance dualism.
Descartes subscribed to this view; he thought that mind did not extend in
space as matter does. There are two other kinds of ontological dualism. Property
dualism suggests that mental properties are fundamentally different
from material properties. The weakest statement of dualism is predicate
dualism,19 which states that mental (or psychological) predicates cannot
be reduced to material predicates, although supervenience holds.
A primary non-reductionist argument against physicalism is the knowledge
argument.20 Imagine a neuroscientist who has been born and raised
in a black and white room. The neuroscientist knows every physical fact
about the brain. However, she has never seen red; she does not know what
red is like. The argument then quickly concludes that the knowledge of
red cannot be reduced to physical facts about the brain, so it must be
extra-physical.
nature. Nevertheless, we still need to clarify in what sense our minds are
irreducible.
found a minimal program, or we can never know how close we are to a min-
imal program. Both of these have turned out to be uncomputable, and we
finite beings have to be satisfied with approximations. Beyond the uncom-
putability of algorithmic complexity, there are incompleteness results that
place serious limits on analysis, which have been rigorously investigated by
Chaitin. However, the incompleteness results in mathematics do not show
that it is impossible for scientific inquiry to be successful in general. Rather,
they deal with what a machine can or cannot infer from given axioms. On the
other hand, it has been possible for scientists to discover new postulates,
such as that the speed of light is constant. As a consequence, we can never
be sure whether we have the most elegant theory in any domain. Intuitively,
we can imagine that the greater the complexity, the more effort it will take
to compress/analyze the data. We thus need to diminish complexity.
brain is more than the sum of its neurons, I have already argued that
reduction to a physical description is necessary for the scientifically minded
complexity researcher who rejects ontological dualism. First, let us have a
look at the meaning of predicates like animate or conscious that denote
complex entities. These have most likely been invented for the purpose of
avoiding excessive explanations, i.e., to be able to talk at an appropriate level of
abstraction. When we use these terms, we do not feel the need to explain
them further; however, we can all recognize their presence through their
outward appearances. Furthermore, in many common sense problems, we
need no more information about such words except for the basic function of
distinguishing a dead animal from a living animal, etc. As countless works
in philosophy show, it takes great effort to do otherwise and try to give
a complete description.
On the other hand, the animate-ness of a cell supervenes on the continuing
success of a number of complex subsystems that are tightly interconnected.
Should one of these crucial systems fail, the cell is no longer animate. While,
as with any other machine, the description of the cell is compositional, the
topology and nature of its interconnections are non-trivial.
The outward appearances of complex phenomena are usually not satis-
factory for a scientist. The complexity community was not satisfied with the
definition of "animate" as meaning being able to reproduce or having
genetic material. They rightly sought generic properties of the systems
that we call animate: for instance, that they are stable yet have dynamics
at the edge of chaos, and that they are made up of parts that are being replaced
over time. Attempts have been made to characterize not only Earth
biology, but any animate matter, and further efforts have been made to
characterize evolution. For instance, Chaitin has given a rough mathematical
definition of life25 where he suggests that the interacting parts of such
a system (similar to components of a system in Section 2.3) are self-similar,
i.e., they have high mutual algorithmic complexity. He also suggests the
famous halting probability Ω as an abstract model of evolution; the approximation
procedure for calculating it is presented as a mathematical model of
evolution as a whole. On the other hand, researchers have already successfully
reconstructed evolutionary trees from gene sequences using an information-theoretic
method.26
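The tree reconstruction mentioned here typically rests on a compression-based distance between sequences; the following sketch (using zlib and made-up toy sequences of our own, not the data or the exact method of the cited work) conveys the idea of a normalized compression distance:

```python
import zlib

def compressed_len(x: bytes) -> int:
    """Compressed length as a practical stand-in for algorithmic complexity."""
    return len(zlib.compress(x, level=9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: close to 0 when x and y share most of
    their information, close to 1 when they share almost none."""
    cx, cy, cxy = compressed_len(x), compressed_len(y), compressed_len(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

if __name__ == "__main__":
    # Toy "gene sequences" for illustration only (not real biological data).
    seq_a = b"ACGT" * 200
    seq_b = b"ACGT" * 190 + b"TTGACCAA" * 5   # a close relative of seq_a
    seq_c = b"GGCATTAC" * 100                 # a more distant sequence
    print(round(ncd(seq_a, seq_b), 3), round(ncd(seq_a, seq_c), 3))
    # Smaller distances group sequences onto nearby branches when the matrix of
    # pairwise distances is fed to a standard tree-building algorithm.
```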
The very existence of these generic explanations in complexity sciences
suggests that the concepts of animate-ness and evolution may be given ab-
stract descriptions that are independent of particular instances. It may
be possible to extract the essential commonality in all living systems into
5. Conclusion
The physical versions of algorithmic complexity help us consider it in a
purely physical way, free of the problems of the abstract version. I have
shown that algorithmic complexity corresponds to our intuition about complexity,
and further clarifies it with theorems. In Section 3 some dangers
of extreme non-reductionism were shown by examining the knowledge
argument and predicate dualism, which rest on epistemological and absolute
irreducibility. These arguments have been criticized as arguments from
ignorance; with the same attitude, we would hardly have advanced science
to this point. In Section 4, I have proposed using precise and quantitative
conceptions of complexity, as they clarify our intuitions. Irreducibility of a
system comes in degrees, and it may be understood physically. Randomness
means the absence of further reducibility, and we cannot reason about
random events. The reason for abstract talk was examined as a matter of processing
feasibility, purpose, inductive reasoning and opportunity. Finally,
instead of stating the emergence of a property X as "X is more than the sum of its
parts", I suggest that we can use the precise language of information theory.
In particular, "emergent property" can be given a satisfactory meaning in
AIT, corresponding to algorithmic information in a system that does not
reside in its components or prior history, and is thus impossible to predict using
the information therein.
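Read very loosely, and only as an illustration (with a general-purpose compressor standing in for true algorithmic complexity, and hypothetical inputs of our own), the proposal suggests estimating the information a system's organization carries over and above its components:

```python
import zlib

def compressed_len(x: bytes) -> int:
    """Compressor-based upper bound on algorithmic complexity."""
    return len(zlib.compress(x, level=9))

def organizational_information(parts: list, wiring: bytes) -> int:
    """Rough estimate of the information carried by a system's organization
    (here encoded as a 'wiring' description) beyond what already resides in
    its components: one crude reading of 'emergent' algorithmic information."""
    components = b"".join(parts)
    return compressed_len(components + wiring) - compressed_len(components)

if __name__ == "__main__":
    # Identical, simple components; the interesting information lies in how
    # they are connected (a made-up, irregular wiring table).
    parts = [b"UNIT" * 10 for _ in range(8)]
    wiring = bytes([(i * 37 + 11) % 251 for i in range(200)])
    print(organizational_information(parts, wiring))
    # A positive value indicates description length that the components alone
    # do not account for; with compression this is only an approximation.
```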
Mild non-reductionism cannot be the final word on this controversial
subject at least for the reason that algorithmic complexity has obvious
Acknowledgments
I am grateful to Cristian S. Calude for technical corrections and suggesting
references. Thanks to anonymous reviewers, Peter Jedlicka, Michael Olea,
Gregory Chaitin, and William Modlin for their comments which helped
improve this paper.
References
1. Carlos Gershenson and Francis Heylighen. When can we call a system self-
organizing? In W. Banzhaf, T. Christaller, P. Dittrich, J. T. Kim, and
J. Ziegler, editors, Advances in Artificial Life, 7th European Conference,
ECAL 2003, LNAI 2801, pages 606-614. Springer-Verlag, 2003.
2. Carlos Gershenson and Francis Heylighen. How can we think the complex? In
Kurt Richardson, editor, Managing Organizational Complexity: Philosophy,
Theory and Application, chapter 3. Information Age Publishing, 2005.
3. Cristian S. Calude. Information and Randomness - An Algorithmic Perspective.
EATCS Monographs in Theoretical Computer Science 1. Springer-Verlag, 1994.
4. Gregory J. Chaitin. Algorithmic Information Theory. Cambridge University
Press, 1987.
5. Thomas M. Cover and Joy A. Thomas. Elements of Information Theory,
chapter 1. Wiley Series in Telecommunications. Wiley-Interscience, 1991.
6. W. H. Zurek. Algorithmic randomness, physical entropy, measurements, and
the demon of choice. In A. J. G. Hey, editor, Feynman and computation:
exploring the limits of computers. Perseus Books, 1998.
7. Charles H. Bennett. How to define complexity in physics, and why. In W. H.
Zurek, editor, Complexity, Entropy, and the Physics of Information, volume
VIII, pages 137-148, 1980.
8. A.A. Brudno. Entropy and the complexity of the trajectories of a dynamical
system. Trans. Moscow Math. Soc., 44:127, 1983.
9. Fabio Cecconi, Massimo Falcioni, and Angelo Vulpiani. Complexity char-
acterization of dynamical systems through predictability. Acta Physica
Polonica, 34, 2003.
10. Jean-Christophe Dubacq, Bruno Durand, and Enrico Formenti. Kolmogorov
complexity and cellular automata classification. Theor. Comput. Sci., 259(1-2):
271-285, 2001.
11. Charles H. Bennett. Logical depth and physical complexity. In Rolf Herkin,
editor, The Universal Turing Machine: A Half-Century Survey, pages 227-
257. Oxford University Press, Oxford, 1988.
JOHN SYMONS
University of Texas, El Paso
1. Recognizing Complexity
Accounts of the visual system from the computational perspective almost
invariably begin by contrasting their approach with earlier attempts to
provide "direct" or "naïve realist" theories of perception. Central to this
contrast is the notion of complexity. Computationalists see their work as
uncovering the complexity of the information-processing task involved in
visual perception that previous theorists had neglected. When scientists
are introducing and defending their approaches to vision, they regularly
begin by taking a stand on the notion of complexity. So, for example,
Alan Yuille and Shimon Ullman begin their introduction to computational
theories of low-level vision as follows:
Vision appears to be an immediate, effortless event. To see the
surrounding environment, it seems we need only open our eyes and
look around. However this subjective feeling disguises the immense
sophistication of the human (or animal) visual system and the great
complexity of the information processing tasks it is able to perform
(Yuille and Ullman 1990, 5)
Computationalists understand their predecessors as having ignored the
complex process underlying visual experience. Nicholas Wade and Michael
Swanston also echo this emphasis on the underlying process when they write
that: "The world around us seems concrete and immediate, and our ability
to perceive it is easily taken for granted... Clearly, there must be some
process that gives rise to visual experience." (Wade and Swanston 1991, 1)
By contrast with their predecessors' neglect, they contend that the computational
approach to perception offers the best route to recognizing and
untangling the information-processing complexity of the visual system. In
his introduction to cognitive science, Paul Thagard explicitly connects our
recognition of complexity to the problem of crafting a computer program
that sees:
For people with normal vision, seeing things seems automatic and
easy. You look at a room and immediately pick out the furni-
ture and people in it. The complexity of vision becomes apparent,
however, when you try to get a computer to do the same thing.”
(Thagard, 1996, 96)
Neisser and others argued that the time taken to complete a process is
an indication of the level of complexity involved. It is important to note
that the complexity which interests cognitive scientists is said to exist at the
level of information-processing, rather than at the biological or behavioral
levels. Given this emphasis on informational-level analysis, sources of
evidence at the biological or even at the behavioral level tend to be marginal
to their choice of explanation. Given the nature of our access to the puta-
tive underlying information processing, the evidential constraints governing
articulations of informational complexity become somewhat problematic.
The time that experimental subjects take to perform visually-based tasks
serves as the primary empirical basis for the construction of theories of
information-processing complexity. However, the problem of understand-
ing what the informational complexity of a process amounts to and how
it should be studied is underdetermined by timing evidence alone. Timing
measurements alone leave room for a variety of explanations that are com-
patible with the evidence. Consequently, other, non-empirical constraints
must play a role in one’s decision to favor one articulation of information
processing complexity over another. Philosophical presuppositions play an
especially prominent role in this kind of decision, as we shall see.
Gibson would object to the idea that the brain does anything like com-
puting the solutions to problems. For example in a passage cited by Marr,
he writes “the function of the brain, when looped with its perceptual organs,
is not to decode signals, nor to interpret messages, nor to accept images, nor
to organize the sensory input or to process the data, in modern terminol-
ogy.” (as cited in Marr 1982, 29) Consequently Marr and Gibson differed
fundamentally on the nature of brain function. For Gibson, the problem
with an account like Marr’s is that it relies on what he called “the mud-
dle of representation” (1988, 279) and that it takes what Gibson called a
“molecular” rather than a “molar” approach to perceptual systems. (1966,
52). The purpose of Gibsonian explanations, by contrast, is to show how
“the receptive units combine and their inputs covary in systems and sub-
systems” (1966, 52). Both Gibson and Marr are interested in understanding
the function of vision. However, their views of what constitutes an explana-
tion of this functional level are different. This functional level difference is
not debated, and the computationalists, who assume that cognitive science
is in the business of providing complex information-processing accounts,
see Gibson as simply failing to understand the problem. In Marr’s words:
“Although some aspects of his thinking were on the right lines, he did not
understand properly what information processing was, which led him to
seriously underestimate the complexity of the information-processing prob-
lems involved in vision and the consequent subtlety that is necessary in
approaching them.” (1982, 29)
From Marr's perspective, the important lacuna in Gibson's approach
was his failure to articulate the process involved in extracting information
about the environment from the flowing array of ambient energy. For
Marr, it is not enough to say, as Gibson does, that the organism simply
"resonates" with certain invariant features of its visual environment. Marr
believed that he could explain, in information-processing terms, how the
organism extracted these invariant features in the first place. In Gibson's
Direct perception is what one gets from seeing Niagara Falls, say,
as distinguished from seeing a picture of it. The latter kind of
perception is mediated. So when I assert that perception of the
environment is direct, I mean that it is not mediated by retinal
pictures, neural pictures, or mental pictures. Direct perception
is the activity of getting information from the ambient array of
light. I call this a process of information pickup that involves the
exploratory activity of looking around, getting around, and looking
at things. This is quite different from the supposed activity of
getting information from the inputs of the optic nerves, whatever
they may prove to be. (1979, 147)
What is the real shape of . . . a cat? Does its real shape change
whenever it moves? If not, in what posture is its real shape on
display? Furthermore, is its real shape such as to be fairly smooth
outlines or must it be finely enough serrated to take account of each
hair? It is pretty obvious that there is no answer to these questions-
no rule according to which, no procedure by which, answers are to
be determined. (Austin 1966, 67 as cited in Marr 1982, 31)
Contrary to the skepticism that Austin promotes, Marr shows how each
of Austin’s questions can be given an answer, perhaps not the kind of answer
that would satisfy the philosophical skeptic, but certainly the kind that
would satisfy ordinary scientific curiosity. Marr sees himself as providing
precisely these answers in the fifth chapter of Vision where he provides an
account of how the visual system generates representations of shape. The
purpose of Marr's account in that chapter is to show how the visual system
might provide some content for a judgment that some represented shape
can be identified as an instance of some concept - how, for example,
some represented shape is the shape of a cat.
To conclude this section, we have seen that Marr's criticism of direct
theories of perception is misdirected. Gibson was not suggesting that Marr's
explanatory goals could not be achieved. Philosophers like Austin, and
perhaps occasionally Wittgenstein, may have held such positions; however,
as we have seen, where Gibson and Marr really differed was with respect
to the representational approach to perception. The significant difference
centers on the notion that the complexity of the information-processing task
must be articulated in representational terms. Gibson certainly believed in
some sort of mediating process that supported perception. What Gibson
denied was the need for a system of representations to serve as mediators
between perception, cognition and action. By contrast, the notion that the
brain generates a series of representations is central to Marr's view.
task. The challenge for the computationalist is to decide which of the set
of possible solutions to choose.
In his defense of the dynamical systems approach to perception and
cognition, Tim Van Gelder made the valuable point that simple systems
can accomplish feats that can also be construed as incredibly complex
information-processing tasks (Van Gelder 1995; 1998; Van Gelder & Port
1995). His example of such a system is Watt's governor. In the
late 1700s James Watt invented a device for controlling the relationship between
the speed of a steam engine and the pressure of the steam driving the
machine. This device itself is simple; however, given computationalist inclinations,
we could interpret it as solving a complex information-processing
task. Careful analysis of the problem could, for example, lead us to write a
computer program attached to a system of sensors (input) and valves (output)
which could calculate the adjustments that are necessary to maintain
a constant relationship between the pressure, speed, load, etc. While nothing
stops us from producing such a system (let's call it Turing's governor),
Watt's governor was a far simpler and more elegant solution.a
How much would an articulation of this task in terms of a computer
program - the Turing Governor - add to our understanding of what it is that
the Watt Governor accomplishes? Practically speaking, a project to develop
Turing’s Governor would almost certainly have been an impediment to the
creation of a technological solution. It is possible that such a solution may
have been of some theoretical interest, but the chances of such theoretical
advancement would hardly be sufficient to abandon Watt's solution.
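To make the dynamical-systems point concrete, the following toy simulation (our own illustrative parameters and equations, not Van Gelder's analysis or a faithful physical model) shows a governor-like feedback loop settling toward an equilibrium speed without any explicit representation of a target value or any symbolic computation:

```python
# A toy feedback loop in the spirit of Watt's governor (illustrative only).

def simulate_governor(steps: int = 400, dt: float = 0.05) -> float:
    speed = 0.0       # engine speed
    arm_angle = 0.0   # how far the hinged arms have risen
    load = 1.0        # constant demand on the engine
    for _ in range(steps):
        # Arms rise with speed (centrifugal effect) and relax back down.
        arm_angle += dt * (0.8 * speed - 0.5 * arm_angle)
        # The higher the arms, the more the steam valve closes.
        valve = max(0.0, 1.0 - arm_angle)
        # The engine speeds up with steam and slows down under load.
        speed += dt * (2.0 * valve - load)
    return speed

if __name__ == "__main__":
    print(round(simulate_governor(), 3))
    # The speed settles near a fixed point of the coupled dynamics; nothing in
    # the loop stores, retrieves or manipulates a representation of that value.
```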
Theorists of direct perception, like Gibson and others, saw the senses
along the lines of Watt's approach to the governor. The senses were to be
understood in terms of the dynamic interaction between the organism and
the environment. Sensory systems were collections of feedback loops that
tended towards a kind of equilibrium between organism and environment.
In the case of the visual system, these visuomotor loops obviously involved
aThe Watt governor is a spindle with two hinged arms. Connected to the engine's
flywheel, the spindle turns at a rate that varies according to the demands on the engine
(speed and load). As the spindle turns, the hinged arms are driven towards a horizontal
position by centrifugal force. The arms are connected to valves that control the release
of steam such that as the flywheel spins faster, the arms of the spindle move upwards,
slowing the release of steam and thereby slowing the engine. If the flywheel slows, one
gets the opposite effect: the arms descend and the valve opens again, thereby increasing
engine speed. This simple device thereby establishes equilibrium between a variety of
forces without any recourse to the complex information-processing techniques that would
have been involved in our imaginary Turing governor.
some kind of mediating biological process, but these processes were thought
to be more like Watt’s governor than my imaginary Turing Governor. For
direct theorists of perception, a detailed presentation of the interaction of
the organism and the environment was a sufficient explanation of the senses.
Computationalist criticisms of direct theories of perception attack this
basic idea that the senses can be understood solely in terms of the interac-
tion of their component parts. The study of cells, for example, was seen as
failing to explain how an information-processingtask, like recognizing a cat
as a cat, could be accomplished. To compensate for this apparent deficiency,
an additional level of explanation was introduced wherein the dynamic re-
lationship between organisms and their environment could be analyzed into
components that had no direct relationship to physical objects or to observ-
able behavior. This level, the level of information-processing complexity,
permits the development of theories of perception that incorporate the all-
important notion of representation.
This complexity bears no direct relationship to the philosophical prob-
lem of computational complexity and has relatively little to do with Shan-
non and Weaver’s formal definition of the notion of information. Instead,
the standard strategy in cognitive science is to recommend that perception
be broken down into a series of stages consisting of transformations over in-
formation bearing states or representations. As we have seen, articulations
of this complexity may often have no correlate in physical reality. Conse-
quently, it is often very difficult to know what it is that we are being told
to recognize as complex.
5. Conclusion
The problem with Marr’s argument for the importance of information-
processing complexity is its implicit assumption that we can understand
what the visual system does or what its function is, without knowing its
physiological or anatomical form. Marr’s basic assumption, understood in
its strongest form, detaches accounts of this complexity from biological or
other sources of evidence and precludes the possibility that any new dis-
coveries regarding the structure or physiology of the brain or mechanism in
question could change the way we understand its computational or func-
tional properties. This weakness is due to the lack of evidential constraints
on the characterization of information processing complexity.
Marr qua scientist was not a dogmatic adherent to the autonomy of
the functional level and would probably have been sufficiently skeptical in
Even in the congressionally declared decade of the brain one could still
read cognitive scientists arguing along similar lines. In his (1996) Paul
Thagard lists ‘Neurobiological Plausibility’ as the fourth of five criteria for
evaluating theories in cognitive science. (1996, 13) In his judgment, only the
'Practical Applicability' of a theory to educational techniques and design
ranks as a less important criterion.
The only real theoretical constraints on computational explanations are
the limits of computability. This is a virtue insofar as it seems to allow the
investigator to abstract from the mental life of particular cognizers to "cognition
as such," but it is a vice when it comes to offering the kinds of expla-
nations that most of us seek from the brain and behavioral sciences. Given
an appropriate interpretation, we can imagine an information-processing
account capturing the essence of the behavior that interests us, but we
can only make this claim once we imagine putting the kind of biological or
robotic mechanisms of affection and transduction in place that will allow
us to imagine our interpretation of the algorithm being enacted. The char-
acterization of those mechanisms was precisely the goal of direct theorists
of perception like Gibson.
While Marr's work and influence continue to be celebrated in both philosophical
and scientific circles, the science of the visual system has begun
to follow a very different path from the one Marr’s theoretical framework
prescribed. For instance, it is now clear to most workers in the field that de-
taching work at the functional or computational level from considerations of
neural anatomy and physiology is a less fertile strategy than taking neural
architectures seriously as a source of insight and evidence. Even computa-
tionally inclined investigators of vision have parted ways with Marr. For
example, the work of computationalists like Steve Grossberg begins by pay-
ing attention to the patterns of connectivity and the laminar structure of
visual cortex. Rather than believing that computational and biological lev-
els can be held separate in any scientifically useful sense, Grossberg claims
to identify the neural architecture with the algorithm.
Marr's view was, in Patricia Churchland's phrase, 'brain-shy', whereas
much of what has happened since the early 1980s has been brain-friendly,
and highly successful. Revisiting Marr's account of complexity is impor-
tant if we are to understand how the norms governing explanation in the
computational functionalist framework have fared in the practical business
of scientific investigation.
Acknowledgements
I am grateful to Carlos Gershenson for his patience and assistance and to
two anonymous referees for some very helpful comments.
References
1. Austin, J. 1967. How to do things with words. Cambridge: Harvard University Press.
2. Gibson, J.J. 1966. The Senses Considered as Perceptual Systems. Boston: Houghton Mifflin.
3. Gibson, J.J. 1979. The Ecological Approach to Visual Perception. Boston: Houghton Mifflin.
4. Hurley, S. 1998. Consciousness in Action. Cambridge: Harvard University Press.
5. Kosslyn, S. 1994. Image and brain: The resolution of the imagery debate. Cambridge: MIT Press.
6. Marr, D. 1982. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. San Francisco: W.H. Freeman.
7. Neisser, U. 1967. Cognitive Psychology. Englewood Cliffs, N.J.: Prentice-Hall.
8. Stillings, N.A. et al. 1987. Cognitive Science: An Introduction. Cambridge, Mass.: MIT Press.
9. Thagard, P. 1997. Mind: An Introduction to Cognitive Science. Cambridge: MIT Press.
10. Van Gelder, T. J. 1995. What might cognition be, if not computation? Journal of Philosophy, 91, 345-381.
11. Van Gelder, T. J., & Port, R. 1995. It's About Time: An Overview of the Dynamical Approach to Cognition. In R. Port & T. van Gelder (eds.), Mind as Motion: Explorations in the Dynamics of Cognition. Cambridge, MA: MIT Press.
12. Van Gelder, T. 1998. The Dynamical Hypothesis in Cognitive Science. Behavioral and Brain Sciences, 21, 1-14.
13. Wade, N. and M. Swanston. Visual perception: an introduction. London: Routledge.
14. Yuille, A.L. & S. Ullman. 1990. 'Computational theories of low-level vision'. In: D. N. Osherson, S. M. Kosslyn & J. M. Hollerbach (Eds), Visual Cognition and Action, Volume 2, pages 5-39, MIT Press.
ON THE POSSIBLE COMPUTATIONAL POWER
OF THE HUMAN MIND
-Forthcoming in: Carlos Gershenson, Diederik Aerts, and Bruce Edmonds (eds.),
Philosophy and Complexity: Essays on Epistemology, Evolution, and
Emergence, World Scientific, 2006-
The aim of this paper is to address the question: can an artificial neural network
(ANN) model be used as a possible characterization of the power of the human
mind? We will discuss what might be the relationship between such a model and its
natural counterpart. A possible characterization of the different power capabilities
of the mind is suggested in terms of the information contained in it (in its computational
complexity) or achievable by it. This characterization takes advantage of recent
results based on natural neural networks (NNN) and the computational power
of arbitrary artificial neural networks (ANN). The possible acceptance of neural
networks as the model of the human mind's operation makes the aforementioned
quite relevant.
1. Introduction
Much interest has been focused on the comparison between the brain and
computers. A variety of obvious analogies exist. Based on several thoughts,
some authors from very diverse research areas (Philosophy, Physics, Computer
Science, etc.) claim that the human mind could be more powerful
than Turing machines. Nevertheless, they do not agree on what
these "super-Turing" capabilities mean. Consequently, there is no universally
accepted characterization of this extra power and how it could be
related to the human mind, even though there is a strong defense of these
authors' theories based on whether or not humans are "super-minds" capable
of processing information that is not Turing-computable.
*zenil@ciencias.unam.mx
t fhq@hp.fciencias.unam.mx
any distinction between the complexity of the weights in terms of the computational
power beyond the seminal work of Minsky. Nevertheless, neural
networks which run on digital computers operate on a Turing-computable
subset of the irrational numbers, a strong restriction that determines a priori
their computational power. Hence, some of the enthusiasm generated by important
experimental and theoretical results cannot be extended easily to
applications, because there is no straightforward way to make real numbers
available, even assuming their possible existence. Digital hardware implementations
use a finite number of bits for weight storage and rational
values for firing rates, weights and operations, and so remain limited in their
computational power. Even analog implementations, which are often cited
for their ability to implement real numbers easily (as analog quantities),
are limited in their precision by issues such as dynamic range, noise,
VLSIb area and power dissipation problems. Thus, in theory and in practice,
most implementations use rational numbers or, at most, a subset of the
irrational numbers, the Turing-computable ones.
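A small sketch of the restriction just described (our own illustrative quantization scheme, not taken from the cited work): a weight stored with a finite number of bits is necessarily a rational number, however closely it approximates an 'ideal' real value.

```python
import math

def quantize_weight(weight: float, bits: int = 8, scale: float = 1.0) -> float:
    """Store a weight using a finite number of bits: the result is always a
    rational number of the form k / 2**bits, as on digital hardware."""
    levels = 2 ** bits
    k = round(weight / scale * levels)
    return scale * k / levels

if __name__ == "__main__":
    ideal = 1 / math.pi   # an irrational 'ideal' weight a formal model might assume
    for b in (4, 8, 16):
        print(b, quantize_weight(ideal, bits=b))
    # Adding bits shrinks the rounding error but the stored value stays rational,
    # so a network run on such hardware never leaves the Turing-computable regime.
```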
The classical approach of computability theory is to consider models op-
erating on finite strings of symbols from a finite alphabet. Such strings may
represent various discrete objects such as integers or algebraic expressions,
but cannot represent general real or complex numbers, even though most
mathematical models are based on real numbers. The Turing machine and
its generalization in the form of the Universal Turing machine (UTM) is the
accepted model of computation and, under the Church-Turing thesis, it
is considered the authoritative model of effective computation. However,
some researchers have proposed other models in which real numbers play
the main role in effective computations.
Machines with "super-Turing" capabilities were first introduced by Alan
Turing, who investigated mathematical systems in which an oracle
was available to compute the characteristic function of a (possibly) non-computable
set. The idea of oracles was to set up a scheme for investigating
relative computation. Oracles are Turing machines with an additional tape,
called the oracle tape, which contains the answers to some non-computable
characteristic function. An oracle machine is an abstract machine used
to study decision problems. It can be visualized as a Turing machine with
a black box, called an oracle, which is able to determine certain decision problems.
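As a purely schematic sketch (hypothetical code of our own; an oracle for a genuinely non-computable set has, of course, no implementation), an oracle can be pictured as a black-box predicate that a machine may consult but never computes itself:

```python
from typing import Callable, Iterable, List

def decide_with_oracle(instances: Iterable[str],
                       oracle: Callable[[str], bool]) -> List[bool]:
    """A decision procedure allowed to query a black-box oracle. Relative to
    the oracle, problems undecidable for an ordinary Turing machine become
    decidable; the oracle is only consulted, never computed."""
    return [oracle(x) for x in instances]

if __name__ == "__main__":
    # Stand-in oracle for illustration only: a real halting oracle cannot be
    # implemented, so we fake one with a crude syntactic check.
    fake_halting_oracle = lambda source: "while True" not in source
    print(decide_with_oracle(["print(1)", "while True: pass"],
                             fake_halting_oracle))
```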
bVery Large Scale Integration (VLSI) refers to systems of transistor-based circuits integrated
on a single chip. For example, the microprocessor of a computer is a VLSI
device.
cDavis rightly pointed out that even if a subset of non-rational numbers is used,
namely the set of Turing-computable irrationals, the class of languages recognized by
neural networks remains the same, as Siegelmann's proof on the power of networks with
rational weights readily extends to nets with computable irrational weights (as Turing
already did with his machines).
dAnother interesting question arises in its own right: if there exists a natural device
with such capabilities, how might we be restricted from taking advantage of the same physical
properties in order to build an artificial equivalent device? Much of the defense of the
work mentioned above has centered precisely on questions such as whether we are taking
advantage of the resources we have in nature.
eThere are at least three important projects currently running. A Linux cluster running
the MPI NeoCortical Simulator (NCS), capable of simulating networks of thousands
of spiking neurons and many millions of synapses, was launched by Phil Goodman at
the University of Nevada. Blue Brain, a 4,000-processor IBM BlueGene cluster, was
used to simulate a brain in a project started in May 2005 in the Laboratory of Neural
Microcircuitry of the Brain Mind Institute at the EPFL in Lausanne, Switzerland, in
collaboration with lab director Henry Markram. It has as its initial goal the simulation
of groups of neocortical columns, which can be considered the smallest functional units of
the neocortex. Also running is the NCS, to be combined with Michael Hines' NEURON
software. The simulation will not consist of a mere artificial neural network, but will
involve much more biologically realistic models of neurons. Additionally, CCortex, a
project developed by a private company, Artificial Development, is planned to be a complete
20-billion neuron simulation of the Human Cortex and peripheral systems, on a cluster
of 500 computers: the largest neural network created to date. Different versions of the
simulation have been running since June 2003. CCortex aims to mimic the structure of
the human brain with a layered distribution of neural nets and detailed interconnections,
and is planned to closely emulate specialized regions of the Human Cortex, Corpus
Callosum, Anterior Commissure, Amygdala and Hippocampus.
computers were not designed to be models of the brain even when they are
running neural networks to simulate its behavior within their own compu-
tational restrictions. Most fundamental questions are however related to
its computational power, in both senses: time/space complexity and degree
of solvability. Most computational brain models programmed to date are
in fact, strictly speaking, less powerful than a UTM. Researchers such as
Stannett63 have speculated that "if biological systems really do implement
analog or quantum computation, or perhaps some mixture of the two, it is
highly likely that they are provably more powerful, computationally, than
Turing machines." This statement implies that true human intelligence cannot
be implemented or supported by Turing machines, an opinion shared by
Roger Penrose, who believes mechanical intelligence is impossible since
purely physical processes are non-computable. This position has been strongly
criticized by many researchers in the field since its authors first propagated the
idea.
However, assuming some kind of relation between the mind and the
brain’s physical actions, neural networks may be accepted as a model of the
human mind's operation. Since such a mind/brain relation is widely accepted
in different ways and at different levels, we concern ourselves with the computational
power of these devices and the features that such networks must possess.
We will address what, from our point of view, goes to the crux of the
matter when the question of the computational power of the brain is raised,
that is, its degree of solvability.
with what could be the recipe in which a simulation could run, since if we
restrict ourselves to the discussion of artificial neural networks running on
actual digital computers, we will be restricted to the lowest computational
degree of solvability. From this, it can be easily deduced that if there is
a fundamental difference between the architecture of the brain and digital
computers, then the efforts of the artificial neural networks to fully simulate
the brain either for the purpose of study or reproduction, are destined to
have fundamentally different degrees of power.
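As a reminder of what that lowest degree of solvability excludes, the classical halting-problem diagonalization can be sketched in a few lines (a textbook argument, not the authors' construction; the function `halts` below is a hypothetical decider that no real program can implement):

```python
# Sketch of the classical halting-problem diagonalization (standard textbook
# argument).  `halts` is a hypothetical oracle assumed, for the sake of
# contradiction, to decide whether program(arg) eventually stops.

def halts(program, arg):
    """Hypothetical decider -- no total, correct implementation can exist."""
    raise NotImplementedError("uncomputable for Turing-equivalent machines")

def diagonal(program):
    # If `program` would halt on itself, loop forever; otherwise stop at once.
    if halts(program, program):
        while True:
            pass
    return "halted"

# Feeding diagonal to itself is contradictory: it would halt exactly when it
# does not.  Hence halts cannot be realized by any program of Turing-machine
# power, which is the ceiling for neural networks run on digital computers.
# diagonal(diagonal)
```

Any artificial neural network simulated on a digital computer inherits exactly this limit, which is why the degree-of-solvability question is where the brain/computer comparison becomes decisive.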
Based on certain references^5 as well as our own research, we have identified at least five mathematical descriptions in which "super-Turing" capabilities have been formally captured: super-tasks and accelerated Turing machines, Weyl machines or Zeus machines^7, trial-and-error machines^50, non-standard quantum computation, and analog recurrent neural networks^59. We have also identified other proposals concerning Turing machines with some kind of interaction among themselves or with the environment. Those models provide a basis for the following claims:
(1) Minds are not computers, because (most) thought processes are not.
    (a) The mind can "violate" Gödel's theorem and is therefore not a computing machine, a claim made most famously by Lucas^29 in 1961.
    (b) The mind can "solve" the Entscheidungsproblem and is therefore not a computing machine.
(2) Minds are computing devices, but not of the same power as Turing machines (maybe Gödel himself^f).
    (a) There are special operations occurring in the brain which are not Turing computable, a claim made most famously by Penrose.
    (b) The mind could be a machine, but one with access to a certain oracle (from an external source or from a previously coded internal source).
^f In his 1951 Gibbs lecture^22 Gödel attempts to use incompleteness to reason about human intelligence. Gödel uses the incompleteness theorem to arrive at the following disjunction: (a) either mathematics is incompletable in this sense, that its evident axioms can never be comprised in a finite rule, that is to say, the human mind (even within the realm of pure mathematics) infinitely surpasses the powers of any finite machine, or (b) there exist absolutely unsolvable diophantine problems (or absolutely undecidable propositions) for which it cannot be decided whether solutions exist. Gödel finds (b) implausible, and thus he seems to have believed that the human mind is not equivalent to a finite machine, i.e., that its power exceeds that of any finite machine, "finite machine" being the term used here for Turing machines.
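Returning to the accelerated Turing machines in the enumeration above, a one-line calculation illustrates the idea (a standard presentation, not tied to any particular reference cited here): if the n-th step of a computation is executed in 2^{-n} seconds, then infinitely many steps fit into finite external time, since

\[
\sum_{n=1}^{\infty} 2^{-n} = 1,
\]

so such a machine could in principle "complete" a non-terminating computation, for instance inspecting every step of an ordinary Turing machine to settle whether it halts, within one second.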
input stream can be simulated by a UTM, since the data can be written on the machine's tape before it begins operation. In dynamical systems we often decide, almost arbitrarily, when a system is to be considered closed in order to handle it. However, the chain of such external systems, potentially infinite (even just through loops), can create a non-linear system which could truly be more complex and perhaps more powerful. Some other claims and criticisms have been made in this regard.
Some other authors claim that the "super-mentalistic" perspective is not a scientific one, as it implies the possibility of assigning non-reducible phenomena to some sort of information processing. We believe, however, that this does not preclude a study of what formal properties can be required of non-Turing models of the human mind. A clarification of these properties would help us understand to what extent "super-Turing" models of the mind can or cannot be treated scientifically.
^g Note that the definition of a maximal element holds for any two elements of a partially ordered set that are comparable. However, it may be the case that two elements of a given partial ordering are not comparable, as happens for Turing degrees, as Post proved.
^h It can be seen as a subset of the natural numbers (or in fact as the whole set of natural numbers encoded in a single real number by concatenating them all). It is evident that not all real numbers are computable (they are also identified as random in Chaitin's theory).
^i ε→1, a→2, b→3, aa→4, ab→5, ba→6, bb→7, aaa→8, aab→9, aba→10, ... For example, if L is defined by all the strings that begin with an "a" followed by an arbitrary number of "b"s, L = {a, ab, abb, abbb, ...}, then the set S will be {2, 5, 11, 23, ...} and the real number encoding L will be L_r = 0.0100100000100000000000100... (in fact a non-rational number in this case).
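As a quick check of this encoding (a minimal sketch; the function and variable names are mine, not the authors'), the following snippet enumerates strings over {a, b} in length-lexicographic order, computes the index set S for L = {a, ab, abb, abbb, ...}, and prints the corresponding digits of L_r:

```python
def index(s, alphabet="ab"):
    # Length-lexicographic index: "" -> 1, "a" -> 2, "b" -> 3, "aa" -> 4, "ab" -> 5, ...
    k, n = len(alphabet), len(s)
    offset = sum(k**i for i in range(n))                       # strings shorter than s
    rank = sum(alphabet.index(c) * k**(n - 1 - i) for i, c in enumerate(s))
    return offset + rank + 1

# L = {a, ab, abb, abbb, ...}: an "a" followed by any number of "b"s.
L = ["a" + "b" * i for i in range(4)]
S = [index(w) for w in L]                                      # -> [2, 5, 11, 23]

# Write a 1 in decimal position n exactly when n is in S.
digits = ["1" if i in S else "0" for i in range(1, max(S) + 3)]
print("S   =", S)
print("L_r = 0." + "".join(digits))   # 0.0100100000100000000000100 (cf. footnote i)
```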
^j A sigmoidal-type function called the signal function, defined as signal(x) = 0 if x ≤ 0, and 1 otherwise.
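For concreteness, here is a minimal sketch of a small recurrent network built from units that apply this hard signal function (the network size, weights and names are my own illustrative choices, not the construction discussed in the text). Because every unit outputs either 0 or 1, such a finite network can only visit finitely many states, one reason why richer activation functions and real-valued weights appear in the analog recurrent network results cited in the main text:

```python
import numpy as np

def signal(x):
    # Hard threshold from footnote j: 0 for x <= 0, 1 otherwise.
    return np.where(x > 0, 1.0, 0.0)

def step(state, W, b, u, w_in):
    # One synchronous update of a small recurrent net driven by input u.
    return signal(W @ state + w_in * u + b)

# Toy 3-unit network with arbitrary weights (illustrative values only).
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 3))
b = rng.normal(size=3)
w_in = rng.normal(size=3)

state = np.zeros(3)
for u in [1.0, 0.0, 1.0, 1.0]:          # a short binary input stream
    state = step(state, W, b, u, w_in)
    print(state)
# Because each unit's output is 0 or 1, only 2**3 distinct states are reachable:
# with the hard signal function the network behaves as a finite automaton.
```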
The class of R-recursive functions^62,16 is very large. It contains many traditionally non-computable functions, such as the characteristic functions of sets in the arithmetical hierarchy. Experimental proofs beyond this level could be more difficult, if not impossible, since the construction of a sequence of real numbers which cannot be computably diagonalized is used to prove that there are continuous functions without a Turing degree.
Acknowledgments
The authors would like to thank the editors and referees for very helpful
comments during the preparation of this paper.
References
1. M. Abeles, Corticonics. Neural circuits in the cerebral cortex, Cambridge, Eng-
land: Cambridge University Press, 1991.
2. A. Aertsen, M. Erb, and G. Palm, Dynamics of functional coupling in the
cerebral cortex: an attempt at a model-based interpretation, Physica D, 75,
103-128, 1994.
3. M. Arbib (Ed.), The Handbook of Brain Theory and Neural Networks, 2nd Edition, Cambridge, MA: MIT Press, 1072-1076, 2002.
4. P. Benacerraf, Tasks, super-tasks, and the modern Eleatics, J. Philos. 59, 765-784, 1962.
5. S. Bringsjord and M. Zenzen, Superminds: People Harness Hypercomputation, and More, Kluwer, 2003.
6. W. Bialek, F. Rieke, R. de Ruyter van Steveninck and D. Warland, Reading a neural code, Science, 252, 1854-1857, 1991.
7. G. Boolos, J. P. Burgess and R. C. Jeffrey, Computability and Logic, Cambridge University Press, 2002.
8. M. Burgin, Super-Recursive Algorithms (Monographs in Computer Science), Springer, 2004.
tional systems beyond the Turing limit, European Association for Theoretical
Computer Science Bulletin, 85:181-189, 2005.
17. M. Davis, Computability and Unsolvability, Dover Publications, 1958.
18. M. Davis, The Undecidable, Dover Publications, 2004.
19. M. Davis, The Myth of Hypercomputation, Alan Turing: Life and Legacy of
a Great Thinker, Springer, 2004.
20. R. FitzHugh, Impulses and physiological states in models of nerve membrane, Biophysical Journal, 1:445-466, 1961.
21. W. Gerstner, R. Kempter, J. van Hemmen and H. Wagner, A neuronal learning rule for sub-millisecond temporal coding, Nature, 383, 76-78, 1996.
22. K. Gödel, Some Basic Theorems on the Foundations of Mathematics and Their Implications, in Feferman et al., 1995, 304-323, 1951.
23. A. L. Hodgkin and A. F. Huxley, A quantitative description of membrane current and its application to conduction and excitation in nerve, J. Physiol. 117, 500-544, 1952.
24. J. J. Hopfield, Pattern recognition computation using action potential timing for stimulus representation, Nature, 376, 33-36, 1995.
25. J. J. Hopfield, Neural networks and physical systems with emergent collective computational abilities, Proc. Natl. Acad. Sci. USA 79, 2554-2558, 1982.
26. K. Judd and K. Aihara, Pulse Propagation Networks: A Neural Network Model That Uses Temporal Coding by Action Potentials, Neural Networks, Vol. 6, pp. 203-215, 1993.
27. S.C. Kleene, Representation of events in nerve nets and finite automata.
In C.E. Shannon and J. McCarthy, editors, Automata Studies, pages 3-42.
Princeton University Press, Princeton, NJ, 1956.
28. C. Koch, Biophysics of Computation: Information Processing in Single Neu-
rons, Oxford University Press, 1999.
29. J. R. Lucas, Minds, Machines and Gödel, Philosophy, 36:112-127, 1961.
30. W. Maass, R. A. Legenstein, and N. Bertschinger. Methods for estimating the
computational power and generalization capability of neural microcircuits. In
L. K. Saul, Y. Weiss, and L. Bottou, editors, Advances in Neural Information
Processing Systems, volume 17, pages 865-872. MIT Press, 2005.
31. W. Maass, Lower bounds for the computational power of spiking neurons, Neural Computation, 8:1-40, 1996.
32. W. Maass. On the computational power of neural microcircuit models: Point-
ers to the literature. In Jos R. Dorronsoro, editor, Proc. of the International
Conference on Artificial Neural Networks - ICANN 2002, volume 2415 of Lec-
ture Notes in Computer Science, pages 254-256. Springer, 2002.
33. W. Maass. Neural computation: a research topic for theoretical computer
science? Some thoughts and pointers. In Rozenberg G., Salomaa A., and
Paun G., editors, Current Trends in Theoretical Computer Science, Entering
the 21st Century, pages 680-690. World Scientific Publishing, 2001.
34. W. Maass. Neural computation: a research topic for theoretical computer
science? Some thoughts and pointers. In Bulletin of the European Association
for Theoretical Computer Science (EATCS), volume 72, pages 149-158, 2000.
35. W. Maass and C. Bishop (Eds), Pulsed Neural Networks, MIT Press 1999.
HELEN DE CRUZ
Centre for Logic and Philosophy of Science
Vrije Universiteit Brussel
Pleinlaan 2, 1050 Brussels, Belgium
E-mail: hdecruz@vub.ac.be
Algebra has emergent properties that are neither found in the cultural context in which mathematicians work, nor in the evolved cognitive abilities for mathematical thought that enable it. In this paper, I argue that an externalization of mathematical operations in a consistent symbolic notation system is a prerequisite for these emergent properties. In particular, externalism allows mathematicians to perform operations that would be impossible in the mind alone. By comparing the development of algebra in three distinct historical cultural settings (China, the medieval Islamic world and early modern Europe), I demonstrate that such an active externalism requires specific cultural conditions, including a metaphysical view of the world compatible with science, a notation system that enables the symbolic notation of operations, and the ontological viewpoint that mathematics is a human endeavour. I discuss how extending mathematical operations from the brain into the world gives algebra a degree of autonomy that is impossible to achieve were it performed in the mind alone.
small and larger quantities^1. Recently, single-cell recordings in rhesus monkeys have identified number-sensitive neurons: individual neurons that
respond only to changes in number, while remaining insensitive to changes
in shape or size. Each neuron is tuned to a preferred quantity: a neuron
preferentially firing at, say two, will fire a bit less at one or three, and even
less when observing higher quantities. These neural tuning curves become
broader as quantities increase. Numerosities are thus not represented as a
linear mental number line, but more as a logarithmic ruler, in which the
psychological distance between one and two is considerably greater than
between fifty and one hundred.
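A minimal sketch of this kind of tuning (the Gaussian-on-a-logarithmic-axis model, the parameter values and the names below are my own illustrative assumptions, not data from the recordings cited above):

```python
import numpy as np

def tuning(n, preferred, sigma=0.35):
    # Gaussian tuning on a logarithmic axis: the neuron fires most at its
    # preferred numerosity and progressively less for neighbouring quantities.
    return np.exp(-((np.log(n) - np.log(preferred)) ** 2) / (2 * sigma**2))

quantities = np.arange(1, 33)
for preferred in (2, 4, 16):
    rates = tuning(quantities, preferred)
    # Width at half maximum, measured on the linear number axis: it grows with
    # the preferred quantity, i.e. tuning curves broaden for larger numbers.
    width = quantities[rates > 0.5].max() - quantities[rates > 0.5].min()
    print(f"preferred={preferred:2d}  half-max width ≈ {width} items")
```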
Yet, mathematical systems without exact number representations can
only capture the most rudimentary of numerical relationships. Evidence
for this claim comes from two Amazonian cultures, the Pirahã and the Mundurukú^6,7, where exact number words do not exist, such that the word
for ‘one’ can mean ‘two’ and vice versa. People in these cultures cannot
even discriminate a box with three fish from a box with four fish painted
on it. They do not possess a counting routine or any other cultural tool
which helps them to construct positive integer values. Conversely, despite
extensive training, no non-human animal has yet been able to learn com-
plex mathematical operations, such as exact positive integer representa-
tions. One long-term training programme extending over twenty years in-
volved teaching Ai, a female chimpanzee, to understand and produce Arabic
digits^8. Ai never managed to count more than nine items, and never gen-
eralized to the counting procedure that children master with ease. These
lines of evidence combined suggest that although our mathematical abilities
build upon an evolved number sense which we share with other animals,
they are clearly more than that. It is as yet unclear how this cognitive
adaptation can account for the vast proliferation and complexity of cul-
tural mathematical concepts.
In this paper, I will argue that complex mathematical theory can emerge
if humans extend their minds into a symbolic notation system. Using al-
gebra as a case-study, I will demonstrate how humans can overcome their
cognitive limitations by externalizing operations that are difficult or impos-
sible to perform in the mind alone. I begin by outlining which cognitive
processes are involved when people learn and perform algebra. I show how
humans combine several evolved specialized neural circuits to solve equa-
tions, but that learning algebra still requires synaptic rewiring. I then go
on to explain how cultural factors, including religion, symbolic notation
and philosophy influence the development and level of abstraction in alge-
braic systems in China, the Islamic world and Europe. I argue that this
development critically depends upon the elaboration of a symbolic nota-
tion system, and the assumption that mathematics is a human endeavour,
which can be improved by individual mathematicians. Finally, I discuss
how mathematics can have emergent properties as a result of this external-
ism. Mathematics may be a hybrid system of knowledge, in that it contains
operations that are performable in the head, and operations that can only
exist as symbols.
the subsystems. The drawback of these enhanced cognitive abilities is that such skills are typically hard to learn. Male hunter-gatherers start learning to track when they are adolescents, but only gain expertise, as measured by their return rates, when they are well into their thirties^13.
Likewise, fMRI studies (e.g. Ref. 14) found that people use different
brain circuits when they solve equations (Fig. 1):
Figure 1. Brain areas involved in solving equations (only left hemisphere shown): the anterior cingulate cortex, the bilateral horizontal segment of the intraparietal sulcus, and the bilateral posterior superior parietal lobe.
-6x - 2y - z = 62
3x + 21y - 3z = 0
However, these rods were less useful for expressing general abstract rules, as opposed to actual calculations, which preserved the concreteness of Chinese mathematics^23. Consequently, Chinese algebra textbooks never attempted to give an abstract formulation of a general rule, but presented examples that served as paradigms for solving similar problems^25. In addition, the venera-
ble status of the Nine Chapters on the Mathematical Arts impeded further
progress. Numerous mathematicians wrote commentaries on it, and as
time went by commentaries on those commentaries, and even those who
produced original work felt obliged to extensively refer to it. This attitude
stems from the Confucian point of view that only the wise sages of the past
have attained true wisdom; it was the duty of aspiring scholars to emulate
their mental state^26. The high status of mathematicians gradually eroded as mathematics came to be perceived as a diligent and unquestioning application of ancient wisdom, rather than an ongoing creative process^23.
Figure 2. Representing simultaneous equations with counting rods: the layout tabulates the coefficients of the equations. Red rods (here shown in grey) indicate positive coefficients, black rods negative coefficients. Redrawn, with permission, from Ref. 23, p. 146, Fig. 4.6.
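Read computationally, a rod layout like the one in Fig. 2 tabulates the coefficients of a set of simultaneous equations, and the procedure performed on it is commonly described as an early form of elimination. A minimal sketch of that reading (the particular three-unknown system and all names below are illustrative assumptions, not a reconstruction of the historical procedure's details):

```python
from fractions import Fraction

def eliminate(A, b):
    """Solve A x = b by plain Gaussian elimination with back-substitution."""
    n = len(A)
    M = [[Fraction(v) for v in row] + [Fraction(c)] for row, c in zip(A, b)]
    for i in range(n):
        # Pivot: make sure M[i][i] is non-zero by swapping rows if needed.
        p = next(r for r in range(i, n) if M[r][i] != 0)
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    x = [Fraction(0)] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# An illustrative 3-unknown system, written the way a rod diagram tabulates
# coefficients (three grades of grain yielding 39, 34 and 26 measures).
A = [[3, 2, 1],
     [2, 3, 1],
     [1, 2, 3]]
b = [39, 34, 26]
print(eliminate(A, b))   # -> [Fraction(37, 4), Fraction(17, 4), Fraction(11, 4)]
```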
Chinese mathematics never became an autonomous discipline: mathematicians remained technical experts dealing with chronology, finances, taxation, architecture, and the military^24.
5. Conclusion
In this paper, I have argued that algebra has emergent properties, resulting
from the co-optation of evolved mental abilities and the externalisation of
operations in a symbolic system. Such a system needs extensive cultural
support, both from the metaphysics of the culture it belongs to and from the ontological status of mathematics. It is crucial for mathematics and other
sciences that a metaphysical view is endorsed in which the world is gov-
erned by causal laws, and in which humans are capable of acquiring reliable
knowledge of the world. Once mathematicians realized their practice is a
human endeavour, and that they can improve it, they could create higher
levels of abstraction by symbolically representing operations. Early mod-
ern European mathematics was not superior to its Chinese and Arabic contemporaries; rather, it could evolve because cultural conditions were more favourable.
Acknowledgements
I gratefully acknowledge helpful comments by Johan De Smedt and Jean
Paul Van Bendegem on an earlier draft. This research is supported by grant
OZR916BOF from the Free University of Brussels.
References
1. H. De Cruz, How do cultural numerical concepts build upon an evolved num-
ber sense? In: B. G. Bara, L. Barsalou and M. Bucciarelli (Eds.), Proceedings
of the XXVII annual conference of the Cognitive Science Society. Mahwah,
Lawrence Erlbaum, 565-570 (2005).
2. C. Uller, R. Jaeger, G. Guidry and C. Martin, Salamanders (Plethodon
cinereus) go for more: rudiments of number in an amphibian. Animal Cog-
nition, 6, 105-112 (2003).
3. F. Xu and E. S. Spelke, Large number discrimination in 6-month-old infants.
Cognition, 74, B1-B11 (2000).
4. K. McCrink and K. Wynn, Large-number addition and subtraction by 9-
month-old infants. Psychological Science, 15, 776-781 (2004).
5. A. Nieder, D. J. Freedman and E. K. Miller, Representation of the quantity
of visual items in the primate prefrontal cortex. Science, 297, 1708-1711
(2002).
6. P. Gordon, Numerical cognition without words: evidence from Amazonia.
Science, 306, 496-499 (2004).
7. P. Pica, C. Lemer, V. Izard and S. Dehaene, Exact and approximate arith-
metic in an Amazonian indigene group. Science, 306, 499-503 (2004).
8. D. Biro and T. Matsuzawa, Chimpanzee numerical competence: cardinal and
ordinal skills. In: T. Matsuzawa (Ed.), Primate origins of human cognition
and behavior. Tokyo, Berlin, Springer, 199-225 (2001).
9. D. Sperber, Explaining culture. A naturalistic approach. Oxford: Blackwell
(1996).
10. P. Boyer, Evolution of the modern mind and the origins of culture: religious
concepts as a limiting-case. In: P. Carruthers and A. Chamberlain (Eds.),
Evolution and the human mind. Modularity, language and meta-cognition.
Cambridge, Cambridge University Press, 93-112 (2000).
11. R. N. McCauley, The naturalness of religion and the unnaturalness of sci-
ence. In: F. C. Keil and R.A. Wilson (Eds.), Explanation and cognition.
Cambridge, Ma., MIT Press, 61-85 (2000).
350
12. P. Carruthers, The roots of scientific reasoning: infancy, modularity and the
art of tracking. In: P. Carruthers, S. Stich and M. Siegal (Eds.), The cognitive
basis of science. Cambridge, Cambridge University Press, 73-95 (2002).
13. H. Kaplan, K. Hill, J. Lancaster and M. Hurtado, A theory of human life
history evolution: diet, intelligence and longevity. Evolutionary Anthropology,
9, 156-185 (2000).
14. Y. Qin, C. S. Carter, E. M. Silk, V. A. Stenger, K. Fissell, A. Goode and J.
R. Anderson, The change of the brain activation patterns as children learn
algebra equation solving. Proceedings of the National Academy of Sciences of
the United States of America, 101, 5686-5691 (2004).
15. U. Frith and C. Frith, The biological basis of social interaction. Current
Directions in Psychological Science, 10, 151-155 (2001).
16. S. Dehaene, E. S. Spelke, P. Pinel, R. Stanescu and S. Tsivkin, Sources
of mathematical thinking: behavioral and brain-imaging evidence. Science,
284, 970-974 (1999).
17. O. Simon, J.-F. Mangin, L. Cohen, D. Le Bihan and S. Dehaene, Topograph-
ical layout of hand, eye, calculation, and language-related areas in the human
parietal lobe. Neuron, 33, 475-487 (2002).
18. V. J. Dark and C. P. Benbow, Differential enhancement of working memory
with mathematical versus verbal precocity. Journal of Educational Psychol-
ogy, 83, 48-56 (1991).
19. B. Luna, Algebra and the adolescent brain. Trends in Cognitive Sciences, 8,
437-439 (2004).
20. A. Clark and D. Chalmers, The extended mind. Analysis, 58, 7-19 (1998).
21. D. Kirsh and P. Maglio, On distinguishing epistemic from pragmatic action.
Cognitive Science, 18, 513-549 (1994).
22. F. Adams and K. Aizawa, The bounds of cognition. Philosophical Psychology,
14, 43-64 (2001).
23. G. G. Joseph, The crest of the peacock: non-European roots of mathematics.
Second edition. Princeton: Princeton University Press (2000).
24. S. Restivo, Mathematics in society and history. Sociological inquiries. Dor-
drecht: Kluwer Academic Publishers (1992).
25. K. Chemla, Generality above abstraction: the general expressed in terms of
the paradigmatic in mathematics in ancient China. Science in Context, 16,
413-458 (2003).
26. T. E. Huff, The rise of early modern science. Islam, China and the West.
Second Edition. Cambridge: Cambridge University Press (2003).
27. J. L. Berggren, Mathematics and her sisters in medieval Islam: a selective
review of work done from 1985 to 1995. Historia Mathematica, 24, 407-440
(1997).
28. T. E. Huff, Science and metaphysics in the three religions of the book. Intel-
lectual Discourse, 8, 173-198 (2000).
29. J. Høyrup, Jacopo da Firenze and the beginning of Italian vernacular algebra.
Historia Mathematica, 33, 4-42 (2006).
30. I. Pyysiäinen, 'God' as ultimate reality in religion and science. Ultimate Re-
ality and Meaning, 22, 106-123 (1999).