De Landa, Manuel - Essays
MANUEL DE LANDA
INTERVIEW
ECONOMICS,
COMPUTERS AND
THE WAR MACHINE
MESHWORKS, HIERARCHIES
AND INTERFACES
VIRTUAL ENVIRONMENTS
AND THE EMERGENCE
OF SYNTHETIC REASON
Manuel de Landa: Generally, I consider myself a philosopher, although I don’t have any
credentials; I’m more of a street philosopher. I am very interested in the Internet
because it is an international network of computers which organized itself. It is a very
good example of a process of self-organization: a very complex structure that emerged
pretty much spontaneously out of the activities of many different people, each one trying
to do something, while the overall effect wasn’t intended or planned by anyone. So
as a philosopher, I am interested in all kinds of phenomena of self-organization, from the
wind patterns that have regulated human life for a long time, like the monsoon or
tradewinds which are self-organized winds, to the self-organizing patterns inside our
bodies, to the self-organizing processes in the economy, to the self-organizing process
that created the Internet. So that is my general philosophy and I am interested in the
Internet because it is a very concrete example of what I am talking about.
Miss M.: You are presenting a concept which is not quite standard, that of markets and
anti-markets. Could you explain that a little bit?
Manuel de Landa: The reason why the concept of self-organization is not very well
known is that it is only about thirty or so years old. It caused a great revolution in
science in very different disciplines like physics and chemistry, and the dust is just
starting to settle, so we are beginning to see what the consequences of this revolution
will be for human societies. One of the areas that will be influenced, in fact, that is
already being influenced, is economics, because what we are talking about here is order
that has come about not because someone planned it or commanded it into existence.
We tend to think of everything in human society that has a certain amount of order as
being the result of someone planning it. For instance, the city of Versailles was perfectly
planned down to the last little detail by Louis XIV and his ministers, and that is our
image of what human society is: that everything is on purpose. But there are collective
actions whose consequences are unintended, and whatever order there is in those
collective consequences that no one planned is self-organized. The clearest example of
that is markets. Let’s picture them very concretely: peasant or small-town markets, a
place in town where everybody goes to bring their stuff to sell or to buy something; it
meets every week in a certain part of town, comes apart, and then meets again the
following week. In those very specific places, everybody shows up, and everybody
shows up with their intentions: I go there with the intention to buy, or I go there with the
intention to sell. So a lot of what happens is planned, is intentional, but the overall effect,
for instance the price at which every particular commodity happens to sell, is
unintended. In a real market, no one actually sets the price. There is no one buyer or
seller who says, „I want this to be the price of this." No one commands the price; prices
set themselves. That’s what’s interesting about markets: they provide you with a
mechanism for coordinating demand and supply that does not need a central decider,
does not need a centralized agency that does the decision-making. Out of this
decentralized decision-making, order comes about.
This is not a new idea, of course. Adam Smith, at the end of the eighteenth century,
came up with the idea of „The Invisible Hand", which was supposed to explain how
markets organize themselves. My point of view is that those theories are obsolete; only
with the new conceptual technology, the concepts of self-organization developed in the
last thirty years, can we understand how markets actually work. So that is one way in
which these new theories will affect our lives: allowing us to understand better how
economies work. On the other hand, another problem with the original Adam Smith idea
was not so much that it was too simple but that it applied the term „markets" to things
that were not self-organized. All the way back to Venice in the fourteenth century,
Florence in the fifteenth, Amsterdam in the eighteenth, London in the nineteenth, in
other words, throughout European history, beside these spontaneously coordinated
markets, there have been large wholesalers, large banks, foreign trade companies and
stock markets that were not self-regulated; these are organizations in which commands
took the place of self-regulating prices. Everything is planned from the top and more or
less executed according to plan; everything is more or less intended. There is very little
self-organization going on at all. And indeed, these large wholesalers, these large
merchants, large bankers and so on made the gigantic profits they made, and became
capitalists, thanks to the fact that they were not obeying demand and supply, they were
manipulating demand and supply. For example, instead of the peasant who shows up at
the market to sell a certain amount of corn, here you have a wholesaler with a huge
warehouse where he stores all the corn he can. If the prices are too low, he can always
withdraw certain amounts from the market, put them in the warehouse, and artificially
make the prices go up. When the prices go up, he then sells the rest of the corn at
these high prices and makes a lot of money.
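The wholesaler’s advantage can be made concrete with a toy model. The linear demand curve and all the numbers below are my own illustrative assumptions, not figures from the interview; the sketch only shows that withholding stock can raise both the price and the total revenue.

```python
# Toy model with assumed numbers: the price falls linearly as more corn is
# offered, from 10 monetary units at zero supply, by 0.01 per sack offered.
def price(quantity_offered: float) -> float:
    return max(0.0, 10.0 - 0.01 * quantity_offered)

def revenue(quantity_offered: float) -> float:
    # Revenue is simply sacks sold times the price they fetch.
    return quantity_offered * price(quantity_offered)

# The peasant sells all 900 sacks at the going price; the wholesaler
# withholds 400 sacks, drives the price up, and earns more by selling less.
print(revenue(900))  # 900.0  (900 sacks at a price of 1.0)
print(revenue(500))  # 2500.0 (500 sacks at a price of 5.0)
```

Under these assumed numbers, selling less earns two and a half times more, which is the whole logic of the warehouse.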
We need another word to describe these organizations that are large enough to
manipulate markets. A word has been suggested by the historian Fernand Braudel, and
it is a very simple one: „anti-market". Why? Because they manipulate markets. And so
today, in the United States, there is a very strong political movement, mostly on the right
wing, and Newt Gingrich is perhaps the most well-known politician in this regard, trying,
as they say, to shrink the size of government and let market forces have more room to
operate. But, of course, translated into the terms we’ve just introduced, what they really
want to do is let anti-market forces run wild. They don’t really want small producers and
small manufacturers and bakers and printers and mom-and-pop shops to have more
room to manoeuver and make money. They want national and international corporations
to have more room to manoeuver. They want to shrink government so that there are
fewer regulations to keep international and national corporations from doing what they
want. But if you go and study one of these corporations, rather than looking like a
market, they look like mini-Soviet Unions. I mean, everything is planned in these
corporations. The managerial hierarchies are exactly like the hierarchies in the Soviet
Union: they plan everything, prices play a very small role, and most of the organization
is done via command.
Now, we used to call the Soviet Union a „command economy", and we still call China a
command economy. Well, international corporations and national corporations are
indeed command economies. They have very little to do with prices. In the past they did
have something to do with prices, because either their suppliers or their distributors
were little guys, so they had to deal with prices one way or another. But since the
nineteenth century, at least in the United States and I’m sure in Europe, a lot of
organizations have been internalizing: buying their suppliers and their distributors and
making them part of themselves, kind of eating them and digesting them and
incorporating them into their own tissue. The more they do that, the more they
internalize these little markets, the less prices play a role in their coordination and the
more commands do. I guess a good image for this is that the United States is far from
being a free-enterprise economy; it is an economy run by a multiplicity of little Soviet
Unions.
Miss M.: Coming back to the Internet, how would you apply that concept to the Internet?
Manuel de Landa: Really, the main commodity on the Internet right now is text and
images. So the Internet, by its very nature, benefits small producers of content.
However, right now we are at a stage in its development where anti-market forces are
making an entry into the Internet in a big way. Not only Microsoft, not only America
On-Line, not only AT&T and so on, but all kinds of corporations have their own web
pages and are utilizing the web. So now you have advertising as a way of reaching
consumers in a more direct way. So the Net could be changed in a few years from a
self-organized entity into a planned entity. I don’t have any romantic views about the
Internet being able to resist all the forces of planning. Self-organizing forces can be
overrun by these planned forces because they are so gigantic. The Internet serves a lot
of functions; let’s just talk about its economic functions. The moment electronic cash
and cryptographically secure methods of sending credit card numbers become
established, become standard software, the Internet is going to be a place to do
business. So
the question is how to guide its evolution so that it ends up benefiting small producers
of content instead of the people who already own the infrastructure: the telephone
companies, the cable companies and so on, which can also own the content-producing
companies, since they can always internalize them. The content-producing companies,
let’s say Yahoo, which is an index-producing company, even if they’ve emerged in a
market way, as spontaneous, entrepreneurial and small, Microsoft can buy Yahoo any
time they want to. They haven’t bought them, but they could. And so the point is that
the Internet is not protected in its self-organization; it could be swallowed up at any
time. So the question is what can we do, not just you and me, but everybody who is
involved in the Internet. The thing is that in the next few years the transactional
structures will be defined, that is, what kind of transactions will happen over the
Internet.
To give you a very concrete example: if the standards that are settled on are such that
the minimum transaction is five dollars, much as there is a minimum transaction for
credit cards now, that will benefit large producers of content. The only way it will benefit
small producers of content, independent freelance writers and so on, is by making the
minimum transaction very, very small. We need to make the infrastructure of the
Internet, and the software that will be running these transactions, capable of tracking
one-cent transactions, so that the small producers of content can sell one page of their
essays. So you can have your home page, and people don’t have to buy your whole
essay in advance. They can read the first page, you can charge them one cent, and if
they want to buy the whole essay because they are interested, then they buy the rest.
Again, this is a very simple example, but the minimum transaction allowed by the
software, the standards that will be decided in two or three years and then written in
stone, is something that matters. To benefit small producers you need to allow very
small transactions. If the minimum transaction is large, you will automatically be
benefiting large producers of content.
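To see the arithmetic behind this, here is a minimal sketch; the page price and the minimums are assumed figures for illustration, not part of any actual payment standard. With a per-transaction minimum, a cheap page cannot be sold on its own: pages must be bundled until the purchase reaches the minimum.

```python
import math

def pages_per_purchase(page_price_cents: int, minimum_cents: int) -> int:
    # Number of pages a buyer must bundle into one purchase to meet the
    # minimum transaction size (always at least one page).
    return max(1, math.ceil(minimum_cents / page_price_cents))

# With a five-dollar (500-cent) minimum, a one-cent page can only be sold
# 500 pages at a time; with a one-cent minimum, a single page sells alone.
print(pages_per_purchase(1, 500))  # 500
print(pages_per_purchase(1, 1))    # 1
```

The smaller the allowed minimum, the smaller the unit of content that can be sold on its own, which is exactly why the choice of minimum favors one size of producer over the other.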
So issues like this will be decided in the next few years. Another issue that will affect
the economics of the Internet, and whether market or anti-market forces end up winning,
is the main scarce commodity on the Internet at the moment: bandwidth, the amount of
information which can run through the channels. Right now, if you have a regular
modem like a 14.4 modem, you know how long it takes for every web page to come into
your system. That is precisely because bandwidth is scarce. It is a trickle of bits going
through this channel, so by the time it gets to your computer, it takes forever to refresh
the screen. For the Internet to become a place where content can be transacted
commercially, bandwidth needs to be plentiful. The more bandwidth there is, the more
complicated your documents can be and the more complicated the kinds of services
you can offer over the Internet. Right now, the amount and the kinds of services you
can offer are very limited by the bandwidth. So bandwidth is a key issue.
Now, there are several ways of bringing about cheap bandwidth. Telephone companies
own a very large portion of the fiber-optic cables that run underneath much of the
United States, and large parts of Europe, too. Fiber-optic cables, as opposed to copper
cables, can transmit an enormous amount of bits, on all kinds of parallel channels at the
same time. So, yes, fiber optics is one solution to the bandwidth problem. But, now,
how do we use it? One solution, proposed by a right-wing economist, is to let the
gigantic telephone companies that own the fiber-optic cables merge together, and with
the cable companies that own the copper which runs from the end of the fiber to the
home. Right now, the telephone companies own one part of the thing and the cable
companies own another part, and if we allowed them to come together into a huge
anti-market institution, they would give us cheap bandwidth. But at what price? At the
price of one anti-market institution owning the entire infrastructure of the Internet. That
would not be good, because at any time in the future, when they wanted to, they could
make the Internet asymmetric: allowing more bits into your computer than you can send
out. Right now the Internet is pretty much symmetric: you can consume bits, but you
can also produce bits, and that is what makes it unique, that it is a two-way system as
opposed to television, which is a one-way channel. The telephone companies and the
cable companies could pretty much own the infrastructure of the Internet, and at any
point they wanted to, they could change it into an asymmetric design in which there are
more bits coming in for you to consume, transforming it again into a consumer medium,
and very few bits going out, so that if you wanted to produce, you could produce, but
painfully.
So, obviously, that’s not the right way of getting cheap bandwidth. The best way to get
cheap bandwidth is for the government to force the telephone companies to rent
fiber-optic capacity to independent little companies. This is something the government is
going to have to understand: a big monopoly made up of a fusion of telephone
companies and cable companies would simply become a gigantic entity, a company
more powerful than the government itself. The only way to dissipate this danger is
precisely by letting the small producers of content rent their own space on these
fiber-optic networks, by forcing the telephone companies to rent it out. They would still
own it, but they would not control it. We would then be able to get cheap bandwidth
without the danger of one huge monolithic company owning the guts of the Internet.
Miss M.: But looking at the situation now, we are already confronted with gigantic
monopolies on the Internet. One example is Microsoft, who are now developing their
own web browser, while Netscape, who were for some time looked at as the „good
ones" because they were not Microsoft, are now becoming the „bad ones" because they
are starting to monopolize as well. We already have the monopolies.
Manuel de Landa: Yes, absolutely. I said in the beginning that we are at a threshold
now where anti-market forces are about to enter big-time into the Internet. The question
is whether the grass-roots parts of the Internet, whether the mostly grass-roots network
of bulletin boards or the different European parts of the Internet, are robust enough,
strong enough, to resist the attack. I do not believe that just because Microsoft is
extremely powerful it will necessarily win the war. It certainly has more chances of
winning than any one of us has, but the question is whether those areas of the Internet
which were of grass-roots origin will be able to stay there and sustain the spontaneity
and originality of the Net, and whether the Microsoft Network will simply become a kind
of fancy neighborhood like America On-Line, with doors to the Internet but without really
owning the Internet.
On the other hand, what could happen is that what we know as the Internet will become
a sort of ghetto: it still survives, but it is surrounded by the fiber-optic infrastructure of
what they call the Information Highway, which is owned by the Microsofts and the MCIs.
All the business and secure transactions would be conducted around there, and one
would be offered entry into this ghetto where all the artists and bohemians are. The
Internet becomes this Greenwich Village where you go to a cafe to hang out with the
hip people, and then you go back to where the newer fiber-optic networks are to do
transactions, to do business.
The game is not won by either side. If we could manage to force into the standards of
the Internet that small transactions be allowed, it would be a victory for all of us
because it would benefit small producers of content. But that would not be winning the
war. This is going to be one little battle at a time.
Miss M.: I would like to bring in Konrad Becker. You are proposing this theory of
propaganda, that the Internet is a tool of propaganda. How can we merge the two
things, the monopolies on one side and the propagandistic possibilities on the other,
with the monopolies controlling the Internet?
Konrad Becker: …intellectual stimuli, in different ways, and it has its origins in the
military apparatus. Hypermedia as such is an invention of the military apparatus to deal
with complex information structures. So we learn today that information as such is more
or less a myth; „consensual hallucination" is the generic term for it. The possibilities of
reaching people through networks are quite tremendous; there is a lot of talk that the
Third World War will be an information war. We are getting to the point where there is
non-lethal warfare, that is, propaganda warfare conducted through networks.
Manuel de Landa: I can see his point very well. Indeed, just the enormous increase of
advertising you see on the Web points in that direction. Again, to me the key thing here
is whether we keep the network symmetric or asymmetric, whether the amount of bits
that can come into your computer is the same as the amount that can leave. If it
becomes asymmetric, if more bits can come in than can go out, then its transformation
into a propaganda tool is already a reality. If it becomes a consumer tool and big
producers of content simply send you information to consume, that’s it: that is
propaganda, that is a means of control. It is much better than television, it is much more
pointed. It is like personalized television: they know who you are, they know your tastes,
they may even be able to track your Web surfing and make consumer profiles out of it,
and therefore put you on a mailing list to send you very specifically targeted information.
On the other hand, if the cables and the connections are kept symmetric, so that
criticism can leave the terminal as propaganda comes in, then we have a fighting
chance. That doesn’t mean we will win the war or have utopia or anything. What we can
hope for is having a fighting chance. That’s what would make the game still interesting.
It would make it interesting for all of us critics, for social critics, to be engaged, if we feel
we still have a fighting chance. I believe it all boils down to hardware. If the hardware
remains symmetric, and again that is not a necessity, not something that will
automatically happen and something that could change, then we have a fighting
chance, because we could fight propaganda with criticism.
Miss M.: Konrad, do you think that being able to criticize is a remedy against
propaganda?
Konrad Becker: Well, actually, that could be a trap as well. As Manuel de Landa
explained in his lecture, criticism is part of the scheme. So it is easy to fall into that
hypnotic situation where you have an oscillating paradox: on one side you have
right-wingers and on the other side you have left-wingers, and you are in the middle,
totally snake-charmed. I sometimes feel that propaganda is a secretion of a higher
order. It is natural for polar theoretical positions to battle each other, which doesn’t
change the course of history. To find means of escape from this hypnotic lock, this
propaganda lock of the informational organism or info-body, would be the way to go.
This heterogeneous activist situation that you find in the early Internet is one of the very
interesting points where there could be a window of opportunity to break the gridlock.
Manuel de Landa: I absolutely agree. Right now, everybody and their mothers call
themselves „critical". You go to New York City bookstores and there are huge sections
of bookshelves called „Critical Theory". Of course, those theories are among the most
uncritical in the whole world. They call themselves critical but in the end are quite
uncritical, because they take for granted all kinds of assumptions, they don’t really
criticize themselves, and so forth. What I meant when I said „criticism as an antidote to
propaganda" was a new type of criticism that is much more theoretically grounded and
goes beyond the fake kind of criticism, the dogmatic criticism, that we have become
used to. Take criticizing the Internet by simply calling it a „capitalist tool" or a „bourgeois
tool". That was a standard way in which Marxists criticized things in the past: attaching
„bourgeois" to everything they wanted to criticize and, bingo, you had instant criticism,
like you had instant coffee. Obviously, that kind of criticism is not going to do anything,
and indeed has become a kind of propaganda itself. I mean, you criticize to
propagandize your own idea. The question in front of us now as intellectuals is whether
we have inherited so much bullshit that our criticism is bound, is condemned, to be
ineffective, or whether we can find ways out, escape routes, and create a new brand of
criticism that recovers its teeth, its ability to bite, its ability to intervene in reality in a
more effective way. Again, it would imply a collective effort by a lot of intellectuals who
are fed up with what has been labeled criticism in the past and is nothing but dogma
and repetition, and who come up with a new brand of criticism that is capable of fighting
propaganda.
Konrad Becker: The right-wing idea of the „Invisible Hand" you mentioned is a good
point to make. And you mentioned the „commodifiers" on the other hand. You were
focusing more on the faults of the „Invisible Hand" than on the faults of the
„commodifiers". One mistake you did mention was the trap that one could fall into.
Could you please specify?
Manuel de Landa: We need to distinguish true criticism from false criticism, or at least
criticism that more or less gets it right from criticism that is pure dogma. Belonging to
the left, as I suppose we do, we are more affected by the kind of dogmatic criticism
labeled „leftist" than by the kind labeled „rightist". Most people who are radicals, or who
are artists, or who are rebels in some way would hardly ever use the term „free
enterprise" to talk about the United States. We are not affected by that dogma, but we
are affected by another dogma, which is Marxist dogma. Going back to the distinction
between markets and anti-markets, the problem with Marxist dogma is that it fails to
distinguish between self-regulating economies, where there is no economic power, and
the anti-market institutions, where economic power is the center and the reason for
being of those institutions. Failing to distinguish that means that, for a Marxist, the very
fact that an object acquires a price, that an object goes into a market to be sold as a
commodity, is a bad thing. Remember that Marxists proposed that the solution to our
problems is Socialism, and that Socialism is a society where commands have replaced
prices completely. But if I am right that national and international corporations are
mini-Soviet Unions, mini-Socialisms minus the idealism, that have replaced prices with
commands only on a smaller scale, then obviously Socialism is not the solution.
Socialism is indeed an intensification of the trend that is enslaving us.
When the Soviet Union dissolved a few years ago, people in Poland and Czechoslovakia
began talking about the transition to a market economy. Everybody was talking about
how hard it was to make a transition to a market economy. But look at
what they were trying to do: they weren’t trying to make a transition from a planned
economy to one of many small, dispersed producers. No, they were trying to make a
transition to a few large enterprises which were not owned by the government anymore
but were still large, run by a hierarchy of managers, with everything commanded and
everything planned. In other words, they were trying to imitate the United States; that is,
they were trying to imitate an anti-market economy. However, because of the dogma we
have inherited, we accept immediately that their intention was to move to a market
economy.
What I am trying to say is that one of the obstacles to thinking straight in this regard is
the idea that the very entry of an object into a price system is a bad thing. When you
have an object, give it a price and sell it in a market, it becomes a commodity. The
process by which an object acquires a price and enters a market is called
„commodification". It really should be „commoditization", but „commodification" is the
word that stuck in the Seventies. Today it has become a cliché, a thing you repeat
without thinking. You believe you are criticizing the system when you say something has
become commodified, but, if I am right, you are not really saying anything. If we are
supposed to distinguish markets from anti-markets, then to say that something has
become commodified doesn’t even begin to say anything. We still don’t know whether it
entered the market as a free commodity, so to speak, which will be affected by supply
and demand, or whether it entered the market as a manipulated commodity. When we
think of the ideological effects of commodities, we tend to think of planned
obsolescence: creating consumer products that are designed to break down so you
have to buy more consumer products. Or we tend to think of tailfin designs like those of
the 1950s, when they weren’t making innovations on cars, they were just putting on
larger and larger tailfins. But these kinds of innovations, these cosmetic innovations,
aren’t market innovations at all; they are anti-market commoditizations. If we were to
use that term in any meaningful way, it would be to refer to certain products of
anti-markets which are specifically planned and designed to manipulate consumer
needs. Think of McDonald’s burgers. The moment you have a burger war on TV
between McDonald’s and Burger King, or the famous cola wars between Pepsi and
Coke, those are commodities in the bad sense, in the Marxist sense: objects that have
zero use value, that are nothing as technological innovations; they are pure cosmetics,
pure simulacrum, pure consumerism. But, again, if you trace those products to their
source, you will find that in the majority of cases they come from anti-markets. And if
you trace every technological innovation, from the steam roller to all the little machines
and procedures that were needed for the Industrial Revolution, electricity and so on,
they will almost always have come from small producers. The majority, again.
Konrad Becker: The subtitle of one of your essays, „The Geology of Morals", which I
very much appreciated, is „A Neo-Materialist Interpretation". How do you interpret
„Neo-Materialism"?
Manuel de Landa: Obviously, I put the word „neo" there to distinguish it from Marxist
materialism. The only good thing that Marxism ever gave us was its materialism: the
idea that we need to explain things that happen right here without appeals to God,
without appeals to Platonic essences, without appeals to anything transcendental. Of
course, Marxists, to the extent that they bought Hegelian dialectics, were never really
true materialists, since the dialectic is not at all something of this world. But that was the
good thing about Marxism. Lenin, as much as I hate many of the things that he did, was
not so bad as a philosopher. He resisted idealist philosophies in which everything is just
our perception of it, by saying that there is a material world out there and we need to
take that into account; that not everything is information, not everything is ideas, not
everything is cerebral stuff; that there is also concrete, fleshy stuff that is very important
to consider.
Now, neo-materialism means acknowledging that we have been neglecting matter for a
long time, all the way back to Aristotle. Aristotle already separated formal causes from
material causes in his classification of causes, a classification that has stayed with
Western philosophy all the way up to the twentieth century. All the way back to the
Greeks, matter was seen as an inert receptacle for forms, and humans came up with
these forms that they imposed on this inert receptacle. This has certain class or caste
origins, because all the way back to the Greeks, the blacksmith, the guy who worked on
matter or metals, lived outside of the city, spent all day in front of the fire dealing with
metals, and, most importantly, didn’t come to the agora to talk. So the citizens of
Greece didn’t trust the blacksmith: „He doesn’t talk. He doesn’t come here and blabber."
Blacksmiths were always slaves or ex-slaves, and manual labor, even if it is crafty
manual labor, be it women cooking in the kitchen or blacksmiths working with metal or
folk artists creating objects, was considered a secondary activity, a lower activity.
Intellectual activity, thinking with concepts, was the real activity. Neo-materialism means
recovering the feeling that we have been neglecting these people for a long time, that
there is a much more interesting form of knowledge that relates to this skill of dealing
with matter: in the kitchen, in the blacksmith’s shop, in the carpenter’s shop. To deal
with representations and concepts and mental stuff is interesting, too, but only just as
interesting as this direct sensual knowledge of matter.
This has a lot to do with the Internet, because the people who created the personal
computer, and without personal computers we would not have the Internet, were Steve
Jobs and Steve Wozniak of Apple. Steve Jobs was the mental guy, while Steve
Wozniak, someone who thought and spoke very little, was the blacksmith of our time,
who simply had a sensual love of chips, almost a neurotic approach to them. He
collected chips, he kept chips under his bed, and he created Apple even though he
didn’t have an engineering degree. He didn’t have any formal training, yet he created
the first Apple computer out of this sensual knowledge of electricity, of electronics and
of chips, and began the revolution, if it does indeed turn out to be a revolution. So
sensual knowledge is not something we left in the past, in the age of craft activity. It is
something that is still very much with us. If you study the history of the transistor, even
though it uses some quantum theory, it uses it after the fact. If you see photographs, it
is just a chunk of silicon stuff with things in it; it looks so funky, so home-made. It was
home-made. These guys were tinkering with matter. They were trying to understand
matter by tinkering with it instead of imposing a pre-conceived idea or form on it.
Konrad Becker: I agree that Aristotle has the position of a creep in history. But isn’t the
blacksmith traditionally associated with the position of a shaman, the one tinkering with
matter but also, in a way, with concepts? He was also a psychedelic expert, as far as I
understand. So going back to the military complex: we had a discussion the other day
where we noted that the huge American military complex is rather helpless against people
who believe that if they strap a bomb onto their body and go into a building, they will go
straight to heaven. There is something of immaterial value there, on a very
practical basis. On the other side, we see an immaterialization of the economy as such, down to
the basic fact that the value of money is pulsed along magnetic highways across the globe.
What is the other side of materialism? Don't you think it is becoming more and
more important?
Manuel de Landa: Absolutely. The moment in Amsterdam in the 18th century when
paper money for the first time became as important as metallic money, there was a de-
materialization right there. All of a sudden, money became purely information. Paper
bills came to have value as an effect of confidence: as long as everybody
believed you could go to the bank and get the bills' worth in gold, everything
worked just perfectly. So I agree with you: on top of the material layer, we have been
building these virtual layers one at a time. Virtual money—we don't have to wait for e-
cash, paper money is already virtual money. And knowledge as an input to
production has become as important as the material inputs—the energy,
the labor, the raw materials that go in. So I completely agree with you. My point is not to
make matter and energy the whole thing, but that they remain the basis of society. The
United States and Europe consume a much greater amount of energy in the form of
electricity than they have ever
consumed in the past. If you go back all the way to the Industrial Revolution, the
increasing consumption of energy is a steep curve and there is no end in sight. My
thesis is that all the virtual layers we have built since the Industrial Revolution rest on an
energetic basis that has become more and more important. We became
aware of this during the 1973 oil crisis, when we realized how much our entire
society depended on cheap energy, on being able to buy cheap oil from the Arab countries.
When they said, "Well, no one is going to sell you oil that cheap anymore," there was a
big crisis. If everything had become virtual already, we would not have had that crisis. It
would have been very easy to switch to another energy source, or use our knowledge to
do something else. The fact of the matter is that we are much more primitive than we think
we are, much more material. This translates immediately into the fact that, for all the
virtual stuff that happens on the Internet, all the web pages and links and so on, there is
still all the hardware that forms the basis of it, all of which runs on electricity. In other
words, there is this material basis that is so humble that we take it for granted, because
it is not the exciting part. The exciting part is the virtual level. But we shouldn't take the
material basis for granted, because if the philosophy of tomorrow is to get things right, it
needs always to acknowledge this material basis on which we operate.
Take current textual theory, just to backtrack a bit: the '60s
in France were the great period of virtualization. Everything became text. Kristeva and
Derrida and so on were talking about nothing but intertextuality. Even the weather doesn't exist; it
is what we make of it, what we interpret of it. Everything became virtual in a way.
Baudrillard says that everything is just simulacra, just layers of neon signs on top of
layers of television images on top of layers of film images, and more and more virtual
stuff—the computer games and simulations. We need an antidote to that. We need to
acknowledge that we've built these layers of virtuality and that they are real, really
virtual: they might not be actual, but they are still real. Yet all of them are
running on top of a material basis that ultimately remains the source of power and the
basis of society.
Konrad Becker: What you were proposing in "The Geology of Morals" is that if you
extend that metaphor further, you could go into dangerous waters. There have been
proposals along that line from various corners, from the Gaia hypothesis to the bio-
organism theory. Is there a certain point where you would say you wouldn’t want it to be
interpreted in this way?
Manuel de Landa: Absolutely. Don't call me Gaia. The Gaia hypothesis is a very
interesting point. I am anti-Gaia in that it is such a romanticization of our planet. But the
romantic aspect isn't necessarily that bad, because New Age people are at least
motivated by something, trying to save Gaia, and perhaps that will have an
impact on the ecological movement by recruiting more people. We have very serious
ecological problems, all of which are material, none of which are virtual, all of which have to do
with the fact that we never learned how to deal with our material fluxes. We know how to deal
with the fluxes that come in, the water that comes in, the electricity that comes in, but all
the residues and all the garbage and all the material fluxes that go out simply just go
out. We send all this carbon dioxide into the atmosphere, and we pollute our seas and
pollute our rivers because we are ignoring certain material fluxes that are still operating
in our material world. So the Gaia hypothesis, as romantic as it might be, might be a good
thing to the extent that it inspires a few New Age people to become more actively involved
in environmental matters.
Genes are what keeps this flesh, which is self-organizing and full of non-organic life from
within, in order. And the genes are, of course, completely hierarchical: the genes within
the cell obey the genes of the tissue, which obey the genes that make up the organs, which obey
the genes that make up the organism. We are perfect little hierarchical creatures
in which the command component is much stronger than the self-organizing one, whereas
the atmosphere is much freer: it is really pure self-organization, without any command
component at all, because there are not even genes there to tell it what to do. And
so to call the planet Gaia, to make it an organism, besides being romantic and kind of
cheesy, is a philosophical mistake from the strictly materialist point of view.
Konrad Becker: Yes, but don't you think that the boundary between animate
and inanimate matter is getting fuzzy, and that the famous virus is a borderline product where
we are not sure if it is alive?
Manuel de Landa: Absolutely. The lines are fuzzy but only when you compare viruses
with, let’s say, crystals. Crystals also grow by replication, viruses also grow by
replication. In a way, a virus is just a fancy crystal. But when you compare creatures
farther away from the border, say, what we call higher animals like ourselves, you can
see the difference with hurricanes. The command component in the mixture increases
with evolution: the higher you get in the evolutionary tree, the larger the command
component. We call it "higher," but we are not any higher than bacteria; we just happen to
believe we are special. So to use the organism as a metaphor is wrong.
Konrad Becker: But would you consider the possibility of non-organic life?
Manuel de Landa: As it turns out, when the theorists of self-organization studied cardiac tissue, they
discovered that you can take the cells of the heart, put them in a little flat plate and
they'll keep beating. In other words, what is making the heart beat is a process similar
to what is making the monsoon beat. It has very little to do with genes: the flesh
itself can beat; non-organic life has this breathing rhythm all to itself.
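This kind of collective beat without a conductor can be sketched with a standard toy model from the self-organization literature, the Kuramoto model of coupled oscillators. This is my illustration, not De Landa's, and all parameters are invented: each "cell" has its own slightly different rhythm, yet a common pulse emerges purely from local coupling.

```python
import math
import random

# Kuramoto-style model: N oscillators ("cells"), each with its own natural
# frequency, nudged toward the average phase of the others. No oscillator
# is in command; a common beat emerges from the coupling alone.
def simulate(n=50, coupling=2.0, dt=0.01, steps=5000, seed=1):
    rng = random.Random(seed)
    phases = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    freqs = [1.0 + rng.gauss(0, 0.1) for _ in range(n)]  # slightly different cells
    for _ in range(steps):
        # mean-field coupling: each phase drifts toward the collective rhythm
        sx = sum(math.cos(p) for p in phases) / n
        sy = sum(math.sin(p) for p in phases) / n
        r = math.hypot(sx, sy)            # order parameter: 0 = incoherent, 1 = unison
        psi = math.atan2(sy, sx)
        phases = [
            (p + dt * (w + coupling * r * math.sin(psi - p))) % (2 * math.pi)
            for p, w in zip(phases, freqs)
        ]
    return r

print(round(simulate(), 2))  # order parameter near 1: the "tissue" beats as one
```

With the coupling well above the spread of natural frequencies, the order parameter ends up close to 1; with coupling near zero, it stays near zero and the cells beat independently.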
by Manuel De Landa.
When we "civilians" think about military questions we tend to view the subject as
encompassing a rather specialized subject matter, dealing exclusively with war and its
terrible consequences. It seems fair to say that, in the absence of war (or at least the
threat of war, as in the case of government defense budget debates) civilians hardly
ever think about military matters. The problem is that, from a more objective historical
perspective, the most important effects of the military establishment on the civilian world
in the last four hundred years have been during peacetime, and have had very little to
do with specifically military subjects, such as tactics or strategy. I would like to suggest
that, starting in the 1500's, Western history has witnessed the slow militarisation of
civilian society, a process in which schools, hospitals and prisons slowly came to adopt
a form first pioneered in military camps and barracks, and factories came to share a
common destiny with arsenals and armories. I should immediately add, however, that
the influence was hardly unidirectional, and that what needs to be considered in detail
are the dynamics of complex "institutional ecologies", in which a variety of organizations
exert mutual influences on one another. Nevertheless, much of the momentum of this
process was maintained by military institutions and so we may be justified in using the
term "militarisation".
On one hand, there is nothing too surprising about this. Ever since Napoleon changed
warfare from the dynastic duels of the eighteenth century to the total warfare with which
we are familiar in this century, war itself has come to rely on the complete mobilization of
a society's industrial and human resources. While the armies of Frederick the Great
were composed mostly of expensive mercenaries, who had to be carefully used in the
battlefield, the Napoleonic armies benefited from the invention of new institutional
means of converting the entire population of a country into a vast reservoir of human
resources. Although technically speaking the French revolution did not invent
compulsory military service, its institutional innovations did allow its leaders to perform
the first modern mass conscription, involving the conversion of all men into soldiers, and
of all women into cheap laborers. As the famous proclamation of 1793 reads:
"...all Frenchmen are permanently requisitioned for service into the armies. Young men
will go forth to battle; married men will forge weapons and transport munitions; women
will make tents and clothing and serve in hospitals; children will make lint from old linen;
and old men will be brought to the public squares to arouse the courage of the soldiers,
while preaching the unity of the Republic and hatred against Kings." {1}
This proclamation, and the vast bureaucratic machinery needed to enforce it, effectively
transformed the civilian population of France into a resource (for war, production,
motivation) to be tapped into at will by the military high command. A similar point applies
to the industrial, mineral and agricultural resources of France and many other nation
states. Given the complete mobilization of society's resources involved in total war it is
therefore not surprising that there has been a deepening of military involvement in
civilian society in the last two centuries. However, I would want to argue that, in addition
to the links between economic, political and military institutions brought about by war
time mobilizations, there are other links, which are older, subtler but for the same reason
more insidious, which represent a true militarisation of society during peace time. To
return to the French example: some of the weapons that the Napoleonic armies used
were the product of a revolution in manufacturing techniques which took place in French
armories in the late eighteenth century. In French armories, the core concepts and
techniques of what later would become assembly-line, mass production techniques,
were for the first time developed. The ideal of creating weapons with perfectly
interchangeable parts, an ideal which could not be fulfilled without standardization and
routinization of production, was taken even further in American arsenals in the early 19th
century. And it was there that military engineers first realized that in practice,
standardization went hand in hand with replacement of flexible individual skills with rigid
collective routines, enforced through constant discipline and monitoring.
Even before that, in the Dutch armies of the sixteenth century, this process had already
begun. Civilians tend to think of Frederick Taylor, the late nineteenth century creator of
so-called "scientific management" techniques, as the pioneer of labor process analysis,
that is, the breaking down of a given factory practice into micro-movements and the
streamlining of these movements for greater efficiency and centralized management
control. But the Dutch commander Maurice of Nassau had already applied these methods to
the training of his soldiers beginning in the 1590's. Maurice analyzed the motions needed
to load, aim and fire a weapon into its micro-movements, redesigned them for maximum
efficiency and then imposed them on his soldiers via continuous drill and discipline. {2}
Yet, while the soldiers increased their efficiency tremendously as a collective whole,
each individual soldier completely lost control of his actions in the battlefield. And a
similar point applies to the application of this idea to factory workers, before and after
Taylorism. Collectively they became more productive, generating the economies of
scale characteristic of mass production, while individually losing control of the labor process.
This is but one example of the militarisation of society. Recent historians have
rediscovered several other cases of the military origins of what were once thought to be
civilian innovations. In recent times it has been Michel Foucault who has most forcefully
articulated this view. For him this intertwining of military and civilian institutions is
constitutive of the modern European nation-state. On one hand, the project of
nation-building was an integrative movement, forging bonds that went beyond the
primordial ties of family and locality, linking urban and rural populations under a new
social contract. On the other, and complementing this process of unification, there was
the less conscious project of uniformation, of submitting the new population of free
citizens to intense and continuous training, testing and exercise to yield a more or less
uniform mass of obedient individuals. In Foucault's own words:
"Historians of ideas usually attribute the dream of a perfect society to the philosophers
and jurists of the eighteenth century; but there was also a military dream of society; its
fundamental reference was not to the state of nature, but to the meticulously
subordinated cogs of a machine, not to the primal social contract, but to permanent
coercions, not to fundamental rights, but to indefinitely progressive forms of training, not
to the general will but to automatic docility... The Napoleonic regime was not far off and
with it the form of state that was to survive it and, we must not forget, the foundations of
which were laid not only by jurists, but also by soldiers, not only counselors of state, but
also junior officers, not only the men of the courts, but also the men of the camps. The
Roman reference that accompanied this formation certainly bears with it this double
index: citizens and legionnaires, law and maneuvers. While jurists or philosophers were
seeking in the pact a primal model for the construction or reconstruction of the social
body, the soldiers and with them the technicians of discipline were elaborating
procedures for the individual and collective coercion of bodies." {3}
Given that modern technology has evolved in such a world of interacting economic,
political and military institutions, it should not come as a surprise that the history of
computers, computer networks, Artificial Intelligence and other components of
contemporary technology, is so thoroughly intertwined with military history. Here, as
before, we must carefully distinguish those influences which occurred during war-time
from those that took place in peace-time, since the former can be easily dismissed as
involving the military simply as a catalyst or stimulant, that is, an accelerator of a
process that would have occurred more slowly without its direct influence. The computer
itself may be an example of indirect influence. The basic concept, as everyone knows,
originated in a most esoteric area of the civilian world. In the 1930's British
mathematician Alan Turing created the basic concept of the computer in an attempt to
solve some highly abstract questions in metamathematics. But for that reason, the
Turing Machine, as his conceptual machine was called, was a long way from an actual,
working prototype. It was during World War II, when Turing was mobilized as part of
the war effort to crack the Nazi's Enigma code, that, in the course of his intense
participation in that operation, he was exposed to some of the practical obstacles
blocking the way towards the creation of a real Turing Machine. On the other side of the
Atlantic, John von Neumann also developed his own practical insights as to how to bring
the Turing Machine to life, in the course of his participation in the Manhattan Project and
other war-related operations.
In this case we may easily dismiss the role that the military played, arguing that without
the intensification and concentration of effort brought about by the war, the computer
would have developed on its own, perhaps at a slower pace. And I agree that this is
correct. On the other hand, many of the uses to which computers were put after the war
illustrate the other side of the story: a direct participation of military institutions in the
development of technology, a participation which actually shaped this technology in the
direction of uniformization, routinization and concentration of control. Perhaps the best
example of this other relation between the military and technology is the systems of
machine-part production known as Numerical Control methods. While the methods
developed in 19th-century arsenals, and later transferred to civilian enterprises, had
already increased uniformity and centralized control in the production of large quantities
of the same object (that is, mass production), this had left untouched those areas of
production which create relatively small batches of complex machine parts. Here the
skills of the machinist were still indispensable as late as World War II. During the 1950's,
the Air Force underwrote not only the research and development of a new system to get
rid of the machinist's skills, but also the development of software, the actual purchase of
machinery by contractors, and the training of operators and programmers. In a
contemporary Numerical Control system, after the engineer draws the parts that need to
be produced, the drawings themselves are converted into data and stored in cards or
electronically. From then on, all the operations that need to be performed (drilling,
milling, lathing, boring, and so on) are carried out automatically by computer-controlled
machines. Unlike mass-production techniques, where this automatism was achieved at
the expense of flexibility, in Numerical Control systems a relatively simple change in
software (not hardware) is all that is needed to adapt the system for the production of a
new set of parts. Yet, the effects on the population of workers were very similar in both
cases: the replacement of flexible skills by rigid commands embodied in hardware or
software, and over time, the loss of those skills leading to a general process of worker
de-skilling, and consequently, to the loss of individual control of the production process.
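The core idea can be sketched schematically. This is an invented toy, not a real Numerical Control language: the machine's operations are fixed, but the part is pure data, so switching products means editing a program rather than rebuilding hardware, and the worker's judgment is nowhere in the loop.

```python
# Schematic illustration (invented, not a real NC language): the machine is
# generic; the part is described as data. Retooling for a new part means
# changing the program, not the machine.
MACHINE_OPS = {
    "drill": lambda spec: f"drilled hole d={spec}mm",
    "mill":  lambda spec: f"milled slot {spec}",
    "bore":  lambda spec: f"bored to d={spec}mm",
}

def run(part_program):
    """Execute a part program: a plain list of (operation, spec) pairs."""
    return [MACHINE_OPS[op](spec) for op, spec in part_program]

bracket = [("drill", 5), ("mill", "10x40"), ("bore", 12)]
flange = [("drill", 8), ("bore", 30)]  # a new part: new data, same machine

print(run(bracket))
print(run(flange))
```

The point of the sketch is exactly the asymmetry described above: flexibility lives entirely in the data fed to the machine, while the skill embodied in executing each operation has been frozen into the machine itself.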
The question in both cases is not the influence that the objects produced in militarized
factories may have on the civilian world. One could, for instance, argue that the support
of the canned food industry by Napoleon had a beneficial effect on society, and a similar
argument may be made for many objects developed under military influence. The
question, however, is not the transfer of objects but the transfer of the production
processes behind those objects, since these processes bring with them the
entire command and control structure of the military. To quote historian David
Noble:
"The command imperative entailed direct control of production operations not just with a
single machine or within a single plant, but worldwide, via data links. The vision of the
architects of the [Numerical Control] revolution entailed much more than the automatic
machining of complex parts; it meant the elimination of human intervention -a shortening
of the chain of command - and the reduction of remaining people to unskilled, routine,
and closely regulated tasks." And he adds that Numerical Control is a "giant step in the
same direction [as the 19th-century drive for uniformity]; here management has the
capacity to bypass the worker and communicate directly to the machine via tapes or
direct computer link. The machine itself can thereafter pace and discipline the
worker." {4}
Let's pause for a moment and consider a possible objection to this analysis. One may
argue that the goal of withdrawing control from workers and transferring it to machines is
the essence of the capitalist system and that, if military institutions happened to be
involved, they did so by playing the role assigned to them by the capitalist system. The
problem with this reply is that, although it may satisfy a convinced Marxist, it is at odds
with much historical data gathered by this century's best economic historians. This data
shows that European societies, far from having evolved through a unilinear progression
of "modes of production" (feudalism, capitalism, socialism), actually exhibited a much
more complex, more heterogeneous coexistence of processes. In other words, as
historian Fernand Braudel has shown, as far back as the fourteenth and fifteenth
centuries, institutions with the capability of exercising economic power (large banks,
wholesalers, long-distance trade companies) were already in operation, and fully
coexisted with feudal institutions as well as with economic institutions that did not have
economic power, such as retailers and producers of humble goods. Indeed, Braudel
shows that these complex coexistences of institutions of different types existed before
and after the Industrial Revolution, and suggests that the concept of a "capitalist
system" (where every aspect of society is connected into a functional whole) gives a
misleading picture of the real processes. What I am suggesting here is that we take
Braudel seriously, forget about our picture of history as divided into neat, internally
homogeneous eras or ages, and tackle the complex combinations of institutions
involved in real historical processes.
The models we create of these complex "institutional ecologies" should include military
organizations playing a large, relatively independent role, to reflect the historical data we
now have on several important cases, like fifteenth-century Venice, whose famous
Arsenal was at the time the largest industrial complex in Europe, or eighteenth-century
France and the nineteenth-century United States, with their military standardization
of weapon production. Another important example involves the development of the
modern corporation, particularly as it happened in the United States in the last century.
The first American big business was the railroad industry, which developed the
management techniques which many other large enterprises would adopt later on. This
much is well known. What is not so well known is that military engineers were deeply
involved in the creation of the first railroads and that they developed many of the
features of management which later on came to characterize just about every large
commercial enterprise in the United States, Europe and elsewhere. In the words of
historian Charles O'Connell:
"As the railroads evolved and expanded, they began to exhibit structural and procedural
characteristics that bore a remarkable resemblance to those of the Army. Both
organizations erected complicated management hierarchies to coordinate and control a
variety of functionally diverse, geographically separated corporate activities. Both
created specialized staff bureaus to provide a range of technical and logistical support
services. Both divided corporate authority and responsibility between line and staff
agencies and officers and then adopted elaborate written regulations that codified the
relationship between them. Both established formal guidelines to govern routine
activities and instituted standardized reporting and accounting procedures and forms to
provide corporate headquarters with detailed financial and operational information which
flowed along carefully defined lines of communication. As the railroads assumed these
characteristics, they became America's first 'big business'." {5}
Thus, the transfer of military practices to the civilian world influenced the lives not only of
workers, but of the managers themselves. And the influence did not stop with the
development of railroads. The "management science" which is today taught in business
schools is a development of military "operations research", a discipline created during
World War II to tackle a variety of tactical, strategic and logistic problems. And it was
the combination of this "science of centralization" and the availability of large computers
that, in turn, allowed the proliferation of transnational corporations and the consequent
internationalization of the standardization and routinization of production processes.
Much as skills were replaced by commands on the shop floor, so were prices replaced by
commands at the management level. (This is one reason not to use the term "markets"
when theorizing big business. Not only do big businesses rely on commands instead of prices,
they manipulate demand and supply rather than being governed by them. Hence, Braudel
has suggested calling big business "anti-markets".) {6}
homogenize the history of computers (there are large differences between the
development of mainframes and minicomputers, on one hand, and the personal
computer, on the other) but it would obscure the fact that, if computers have come to
play the "disciplinarian" roles they play today, it is as part of a historical process which
is several centuries old, a process which computers have only intensified.
To conclude this talk I would like to give one example, from the world of computers, of
two American industrial hinterlands which illustrate the difference between economies of
scale and of agglomeration: Silicon Valley in Northern California, and Route 128 near
Boston:
"Silicon Valley has a decentralized industrial system that is organized around regional
networks. Like firms in Japan, and parts of Germany and Italy, Silicon Valley companies
tend to draw on local knowledge and relationships to create new markets, products, and
applications. These specialist firms compete intensely while at the same time learning
from one another about changing markets and technologies. The region's dense social
networks and open labor markets encourage experimentation and entrepreneurship.
The boundaries within firms are porous, as are those between firms themselves and
between firms and local institutions such as trade associations and universities." {7}
The growth of this region owed very little to large financial flows from governmental and
military institutions. Silicon Valley did not develop so much by economies of scale, as by
the benefits derived from an agglomeration of visionary engineers, specialist consultants
and financial entrepreneurs. Engineers moved often from one firm to another,
developing loyalties to the craft and region's networks, not to the corporation. This
constant migration, plus an unusual practice of information sharing among the local
producers, ensured that new formal and informal knowledge diffused rapidly through the
entire region. Business associations fostered collaboration between small and medium-
sized companies. Risk-taking and innovation were preferred to stability and routinization.
This, of course, does not mean that there were not large, routinized firms in Silicon
Valley, only that they did not dominate the mix. Not so in Route 128:
"While Silicon Valley producers of the 1970's were embedded in, and inseparable from,
intricate social and technical networks, the Route 128 region came to be dominated by a
small number of highly self-sufficient corporations. Consonant with New England's two
century old manufacturing tradition, Route 128 firms sought to preserve their
independence by internalizing a wide range of activities. As a result, secrecy and
corporate loyalty govern relations between firms and their customers, suppliers, and
competitors, reinforcing a regional culture of stability and self-reliance. Corporate
hierarchies insured that authority remains centralized and information flows vertically.
The boundaries between and within firms and between firms and local institutions thus
remain far more distinct." {8}
While before the recession of the 1980's both regions had been continuously expanding,
one on economies of scale and the other on economies of agglomeration (or rather,
mixtures dominated by one or the other), they both felt the full impact of the downturn. At
that point some large Silicon Valley firms, unaware of the dynamics behind the region's
success, began to switch to economies of scale, sending parts of their production to
other areas, and internalizing activities previously performed by smaller firms. Yet, unlike
in Route 128, routinization and internalization were not constitutive of Silicon Valley,
which meant that the old meshwork system could be revived. And this is, in fact, what
happened. Silicon Valley's regional networks were re-energized through the birth of new
firms in the old pattern, and the region has now
returned to its former dynamic state, unlike the command-heavy Route 128 which
continues to stagnate. What this shows is that, while both scale and agglomeration
economies, as forms of positive feedback, promote growth, only the latter endows firms
with the flexibility needed to cope with adverse economic conditions.
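The contrast between the two kinds of positive feedback can be caricatured in a toy simulation. All numbers here are invented; this is an illustration of the argument's logic, not an economic model: both "regions" grow by positive feedback and both absorb the same shock, but only the flexible one recovers its pre-shock dynamism.

```python
# Toy sketch (invented parameters): both regions grow by positive feedback,
# but the "meshwork" region re-forms its small-firm networks after a shock,
# while the "hierarchy" region keeps its rigid internal structure.
def grow(output, rate, flexibility, shock_at, steps=40):
    history = []
    for t in range(steps):
        if t == shock_at:
            output *= 0.5        # the recession hits both regions equally
            rate *= flexibility  # flexible regions recover their growth dynamic
        output *= (1 + rate)
        history.append(output)
    return history

meshwork = grow(1.0, 0.10, flexibility=1.0, shock_at=20)   # agglomeration economies
hierarchy = grow(1.0, 0.10, flexibility=0.3, shock_at=20)  # scale economies, rigid

print(meshwork[-1] > hierarchy[-1])  # True: flexibility pays off after the shock
```

Before the shock the two trajectories are identical, which is the point: positive feedback alone does not distinguish them; only the response to adversity does.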
In conclusion I would like to repeat my call for more realistic models of economic history,
models involving the full complexity of the institutional ecologies involved, including
markets, anti-markets, military and bureaucratic institutions, and if we are to believe
Michel Foucault, schools, hospitals, prisons and many others. It is only through an
honest philosophical confrontation with our complex past that we can expect to
understand it and derive the lessons we may use when intervening in the present and
speculating about the future.
References:
{1} Excerpt from the text of the levée en masse of 1793, quoted in William H. McNeill, The
Pursuit of Power: Technology, Armed Force and Society since A.D. 1000 (University of
Chicago Press, 1982), p. 192.
{3} Michel Foucault, Discipline and Punish: The Birth of the Prison (Vintage Books, New
York, 1979), p. 169.
{5} Charles F. O'Connell, Jr., "The Corps of Engineers and the Rise of Modern
Management", in ibid., p. 88.
{6} Fernand Braudel, The Wheels of Commerce (Harper and Row, New York, 1986),
p. 379.
{7} AnnaLee Saxenian, "Lessons from Silicon Valley", in Technology Review, Vol. 97, no.
5, p. 44.
{8} Ibid., p. 47.
by Manuel De Landa.
And what is true of physical systems is all the more so for biological ones.
Attractors and bifurcations are features of any system in which the dynamics
are nonlinear, that is, in which there are strong interactions between
variables. As biology begins to include these nonlinear dynamical
phenomena in its models, for example, in the case of evolutionary arms-
races between predators and prey, the notion of a "fittest design" loses its
meaning. In an arms-race there is no optimal solution fixed once and for all,
since the criterion of fitness itself changes with the dynamics. This is also
true for any adaptive trait whose value depends on how frequently it occurs in
a given population, as well as in cases like migration, where animal behavior
interacts nonlinearly with selection pressures. As the belief in a fixed
criterion of optimality disappears from biology, real historical processes
come to reassert themselves once more. {2}
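The point that frequency-dependent fitness has no fixed optimum can be made concrete with a toy replicator-dynamics model of the classic hawk-dove game. This is an illustrative sketch of my own, not a model from the text, and the payoff parameters V and C are invented:

```python
# Toy replicator dynamics for the hawk-dove game: the fitness of each
# strategy depends on how frequent it is in the population, so no
# strategy is "the fittest" once and for all.
# V and C are illustrative payoff parameters (resource value, injury cost).
V, C = 2.0, 4.0  # with C > V, a population of pure hawks is not stable

def payoffs(p_hawk):
    """Expected payoff of hawk and dove against the current population mix."""
    f_hawk = p_hawk * (V - C) / 2 + (1 - p_hawk) * V
    f_dove = (1 - p_hawk) * V / 2
    return f_hawk, f_dove

p = 0.01  # hawks start rare: at this point being a hawk pays best
for _ in range(5000):
    f_h, f_d = payoffs(p)
    mean_fitness = p * f_h + (1 - p) * f_d
    p += 0.01 * p * (f_h - mean_fitness)  # discrete replicator update

# The mix settles where hawk and dove do equally well (p = V/C): the
# "optimal" strategy changed as its own frequency in the population changed.
print(round(p, 3))  # 0.5
```

The criterion of fitness here is endogenous to the dynamics, which is exactly why the notion of a single "fittest design" loses its meaning.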
These new ideas are all the more important when we move on to the social
sciences, particularly economics. In this discipline, we tend to uncritically
assume systematicity, as when one talks of the "capitalist system", instead
of showing exactly how such systematic properties of the whole emerge
from concrete historical processes. Worse yet, we then tend to reify such
unaccounted-for systematicity, ascribing all kinds of causal powers to
capitalism, to the extent that a clever writer can make it seem as if anything
at all (from nonlinear dynamics itself to postmodernism or cyberculture) is
the product of late capitalism. This basic mistake, which is, I believe, a major
obstacle to a correct understanding of the nature of economic power, is
partly the result of the purely top-down, analytical style that has dominated
economic modeling since the eighteenth century. Both macroeconomics,
which begins at the top with concepts like gross national product, and
microeconomics, in which a system of preferences guides individual choice,
are purely analytical in approach. Neither the properties of a national
economy nor the ranked preferences of consumers are shown to emerge
from historical dynamics. Marxism, it is true, added intermediate-scale
phenomena to these models, like class struggle, and with them conflictive
dynamics. But the specific way in which it introduced conflict, via the labor
theory of value, has since been shown by Sraffa to be redundant: added
from the top, so to speak, and not emerging from the bottom, from real
struggles over wages, or the length of the working day, or for control over
the production process. {4}
Nowhere is this neglect of real history more evident than in the subject of the dynamics of economic
power, defined as the capability to manipulate the prices of inputs and
outputs of the production process as well as their supply and demand. In a
peasant market, or even in a small town local market, everybody involved is
a price taker: one shows up with merchandise, and sells it at the going
prices which reflect demand and supply. But monopolies and oligopolies are
price setters: the prices of their products need not reflect demand/supply
dynamics, but rather their own power to control a given market share. {5}
When approaching the subject of economic power, one can safely ignore
the entire field of linear mathematical economics (so-called competitive
equilibrium economics), since there monopolies and oligopolies are basically
ignored. Yet, even those thinkers who make economic power the center of
their models introduce it in a way that ignores historical facts. Authors
writing in the Marxist tradition place real history in a straitjacket by
subordinating it to a model of a progressive succession of modes of
production. Capitalism itself is seen as maturing through a series of stages,
the latest of which is the monopolistic stage of this century. Even non-
Marxist economists like Galbraith agree that capitalism began as a
competitive pursuit and stayed that way till the end of the nineteenth
century, and only then reached the monopolistic stage, at which point a
planning system replaced market dynamics.
This would have the added advantage of allowing us to get rid of
historical theories framed in terms of stages of progress, and to recognize
that antimarkets could have arisen anywhere, not just in Europe, the
moment the flows of goods through markets reached a certain critical level of
intensity, so that organizations bent on manipulating these flows could
emerge. Hence, the birth of antimarkets in Europe has absolutely nothing to
do with a peculiarly European trait, such as rationality or a religious ethic of
thrift. As is well known today, Europe borrowed most of its economic and
accounting techniques, those techniques that are supposed to distinguish
her as uniquely rational, from Islam. {8}
interaction. Moreover, other historians have recently shown that the specific
form of industrial production which we tend to identify as "truly capitalist",
that is, assembly-line mass production, was not born in economic
organizations, but in military ones, beginning in France in the eighteenth
century, and then in the United States in the nineteenth. It was military
arsenals and armories that gave birth to these particularly oppressive control
techniques of the production process, at least a hundred years before Henry
Ford and his Model-T cars. {10} Hence, the large firms that make up the
antimarket can be seen as replicators, much as animals and plants are.
And in populations of such replicators we should be able to observe the
emergence of the different commercial forms, from the family firm to the
limited liability partnership to the joint-stock company. These three forms,
which had already emerged by the fifteenth century, must be seen as
arising, like those of animals and plants, from slow accumulations of traits
which later become consolidated into more or less permanent structures,
and not, of course, as the manifestation of some pre-existing essence. In
short, both animal and plant species as well as "institutional species" are
historical constructions, the emergence of which bottom-up models can help
us study.
the past, Venice when it was still subordinated to Byzantium, or the network
New York-Boston-Philadelphia when still a supply zone for the British
empire, engage in what she calls import-substitution dynamics. Because of
their subordinated position, they must import most manufactured products
and export raw materials. Yet meshworks of small producers within the city,
by interlocking their skills, can begin to replace those imports with local
production, which can then be exchanged with other backward cities. In the
process, new skills and new knowledge are generated, and new products
begin to be imported, which, in turn, become the raw materials for a new
round of import-substitution. Nonlinear computer simulations of this process
have been created, and they confirm Jacobs' intuition: a growing meshwork of
skills is a necessary condition for urban morphodynamics. The meshwork as
a whole is decentralized, and it does not grow by planning, but by a kind of
creative drift. {13}
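The import-replacement feedback can be caricatured in a few lines. This is a toy recurrence of my own devising, not the simulations the text refers to; the rate constant and all the numbers are illustrative. The replacement rate grows with the number of ways the city's skills can interlock, and each replaced import adds a new skill:

```python
def import_substitution(initial_skills, imports, steps, rate=0.01):
    """Toy Jacobs-style dynamic: each round a city replaces a number of
    imports proportional to the pairings its skills allow, and every
    replaced import adds a new skill to the meshwork (positive feedback)."""
    skills = float(initial_skills)
    remaining = float(imports)
    history = []
    for _ in range(steps):
        pairs = skills * (skills - 1) / 2      # possible skill interlocks
        replaced = min(remaining, rate * pairs)
        remaining -= replaced
        skills += replaced                     # new local know-how
        history.append(replaced)
    return skills, remaining, history

skills, remaining, history = import_substitution(10, 1000, 40)
# Replacement starts slowly, accelerates as the meshwork grows, and
# stops only when there is nothing left to substitute.
```

The qualitative behavior, explosive growth once the meshwork of skills passes a threshold, is the point; the particular functional form is only a sketch.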
What would one expect to emerge from such populations of more or less
centralized organizations and more or less decentralized markets? The
answer is, a world-economy, or a large zone of economic coherence. The
term, which should not be confused with that of a global economy, was
coined by Immanuel Wallerstein, and later adapted by Braudel so as not to
depend on a conception of history in terms of a unilineal progression of
modes of production. From Wallerstein, Braudel takes the spatial definition of
a world-economy: an economically autonomous portion of the planet,
perhaps coexisting with other such regions, with a definite geographical
structure: a core of cities which dominate it, surrounded by yet other
economically active cities subordinated to the core and forming a middle
zone, and finally a periphery of completely exploited supply zones. The role
of core of the European world-economy has been historically played by
several cities: first Venice in the fourteenth century, followed by Antwerp and
Genoa in the fifteenth and sixteenth. Amsterdam then dominated it for the
next two centuries, followed by London and then New York. Today, we may
be witnessing the end of American supremacy and the role of core seems to
be moving to Tokyo. {16}
Interestingly, those cities which play the role of core seem to generate, in
their populations of firms, very few large ones. For instance, when Venice
played this role, no large organizations emerged in it, even though they
already existed in nearby Florence. Does this contradict the thesis that
capitalism has always been monopolistic? I think not. What happens is that,
in this case, Venice as a whole played the role of a monopoly: it completely
controlled access to the spice and luxury markets in the Levant. Within
Venice, everything seemed like "free competition", and yet its rich merchants
enjoyed tremendous advantages over any foreign rival, whatever its size.
Perhaps this can help explain the impression classical economists had of a
competitive stage of capitalism: when the Dutch or the British advocated
"free competition" internally is precisely when their cities as a whole held a
virtual monopoly on world trade.
Something like Allen's approach would be useful to model one of the two
things that stitch world-economies together, according to Braudel: trade
circuits. However, to generate the actual spatial patterns that we observe in
the history of Europe, we need to include the creation of chains of
subordination among these cities, of hierarchies of dependencies besides
the meshworks of interdependencies. This would require the inclusion of
monopolies and oligopolies, growing out of each city's meshworks of small
producers and traders. We would also need to model the extensive
networks of merchants and bankers with which dominant cities invaded their
Insights coming from running simulations like these can, in turn, be used to
build other simulations and to suggest directions for historical research to
follow. We can imagine parallel computers in the near future running
simulations combining all the insights from the ones we just discussed:
spatial networks of cities, breathing at different rhythms, and housing
evolving populations of organizations and meshworks of interdependent
skills. If power relations are included, monopolies and oligopolies will
emerge and we will be able to explore the genesis and evolution of the
their enormous economic power (most of them are oligopolies), and to their
having access to intense economies of scale. However, meshworks of small
producers interconnected via computer networks could have access to
different, yet equally intense, economies of scale. A well-studied example is
the symbiotic collection of small textile firms that has emerged in the Italian
region between Bologna and Venice. There, the operation of a few centralized
textile corporations was broken down into a decentralized network of firms,
in which entrepreneurs replace managers and short runs of specialized
products replace large runs of mass-produced ones. Computer networks
allow these small firms to react flexibly to sudden shifts in demand, so that
no firm becomes overloaded while others sit idle with spare capacity. {24}
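The flexibility claim can be sketched as a toy dispatch simulation. This is my own illustrative construction, not a model of the actual Italian network: a shared network lets each incoming order go to the currently least-loaded firm, instead of sticking to a fixed home firm.

```python
import random

random.seed(7)

def dispatch(orders, n_firms, networked):
    """Assign each order to a firm and return the resulting workloads.
    With the network, orders flow to the least-loaded firm; without it,
    each order goes to a fixed firm in rotation."""
    loads = [0.0] * n_firms
    for i, size in enumerate(orders):
        if networked:
            target = loads.index(min(loads))  # flexible reallocation
        else:
            target = i % n_firms              # rigid assignment
        loads[target] += size
    return loads

orders = [random.expovariate(1.0) for _ in range(500)]  # bursty demand
flexible = dispatch(orders, 20, networked=True)
rigid = dispatch(orders, 20, networked=False)
# With flexible dispatch every firm stays near the average load; under
# rigid assignment some firms are overloaded while others sit idle.
```

The greedy least-loaded rule bounds the spread between the busiest and the idlest firm by the size of a single order, which is the formal counterpart of "no firm becomes overloaded while others sit idle".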
may one day play a crucial role in adding some heterogeneity to a world-
economy that's becoming increasingly homogenized.
FOOTNOTES:
{1} Ilya Prigogine and Isabelle Stengers. Order out of Chaos. (Bantam Books, New York, 1984). p. 169
{3} Christopher G. Langton. Artificial Life. In C.G. Langton ed. Artificial Life. (Addison-Wesley, 1989). p. 2
{5} John Kenneth Galbraith. The New Industrial State. (Houghton Mifflin, Boston, 1978). p. 24
{9} Merritt Roe Smith. Army Ordnance and the "American System" of Manufacturing, 1815-1861. In M.R. Smith ed. Military Enterprise and Technological Change.
{11} Richard Dawkins. The Selfish Gene. (Oxford University Press, New
{13} Jane Jacobs. Cities and the Wealth of Nations. (Random House, New York, 1984). p. 133
Gilles Deleuze and Felix Guattari. 1440: The Smooth and the Striated. In A Thousand Plateaus. (University of Minnesota Press, Minneapolis, 1987). ch. 14
{24} Thomas W. Malone and John F. Rockart. Computers, Networks and the Corporation. In Scientific American, Vol. 265, No. 3. p. 131
Also:
A Neo-Materialist Interpretation
by Manuel De Landa.
I would indeed limit the sense of the term even more, to refer exclusively to
those weekly gatherings of people at a predefined place in town, and not to
a dispersed set of consumers catered to by a system of middlemen (as when
one speaks of the "market" for personal computers). The reason is that, as
the historian Fernand Braudel has made clear, it is only in markets in the first
sense that we have any idea of what the dynamics of price formation are. In
other words, it is only in peasant and small-town markets that decentralized
decision-making leads to prices setting themselves up in a way that we can
fairs which have taken place periodically since the Middle Ages, they give
rise to commercial hierarchies, with a money market on top, a luxury goods
market underneath and, after several layers, a grain market at the bottom. A
real society, then, is made of complex and changing mixtures of these two
types of structure, and only in a few cases will it be easy to decide to what
type a given institution belongs.
Perhaps a concrete example will help clarify this rather crucial point. When
we say (as Marxists used to say) that "class struggle is the motor of history"
we are using the word "motor" in a purely metaphorical sense. However,
when we say that "a hurricane is a steam motor" we are not simply making a
linguistic analogy: rather, we are saying that hurricanes embody the same
diagram used by engineers to build steam motors, that is, that they contain a
reservoir of heat, operate via thermal differences, and circulate energy and
materials through a (so-called) Carnot cycle. Deleuze
and Guattari use the term "abstract machine" to refer to this diagram shared
by very different physical assemblages. Thus, there would be an "abstract
motor" with different physical instantiations in technological objects and
natural atmospheric processes.
What I would like to argue here is that there are also abstract machines
behind the structure-generating processes which yield as historical products
specific meshworks and hierarchies. Let us begin by discussing the case of
hierarchical structures, and in particular, of social strata (classes, castes).
The term "social stratum" itself is clearly a metaphor, involving the idea that
just as geological strata are layers of rocky materials stacked on top of each
other so classes and castes are like layers of human materials in which
some are higher and some lower. Is it possible to go beyond metaphor and
show that the genesis of both geological and social strata involve the same
engineering diagram? Geological strata (accumulations of sedimentary
rocks like sandstone or limestone) are created through a process involving
(at least) two distinct operations. When one looks closely at the layers of
rock in an exposed mountainside, one striking characteristic is that each
layer contains further layers, each composed of small pebbles which are
nearly homogenous with respect to size, shape and chemical composition.
Since pebbles in nature do not come in standard sizes and shapes, some
kind of sorting mechanism needs to be involved here, some specific device
to take a multiplicity of pebbles of heterogeneous qualities and distribute
them into more or less uniform layers.
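One such sorting device can be caricatured in a few lines. This is a toy model of my own, assuming pebble "size" on a 0-1 scale and a current that slows as it moves downstream: each pebble is carried until the current can no longer move it, so every deposition site ends up with a nearly uniform layer.

```python
import random

random.seed(3)

def sort_pebbles(sizes, n_sites):
    """Toy hydraulic sorting: a current slows from site 0 to site
    n_sites-1; the bigger the pebble, the sooner it settles out.
    Returns the list of deposits, one per site."""
    deposits = [[] for _ in range(n_sites)]
    for size in sizes:
        # large pebbles drop out early, small ones are carried further
        site = min(int((1.0 - size) * n_sites), n_sites - 1)
        deposits[site].append(size)
    return deposits

pebbles = [random.random() for _ in range(1000)]  # heterogeneous input
layers = sort_pebbles(pebbles, 10)
# Each deposit is nearly homogeneous: its sizes span at most a tenth
# of the full range present in the unsorted input.
```

A heterogeneous multiplicity goes in, more or less uniform layers come out, which is all the first articulation (sedimentation) requires of the diagram.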
one scale to structures at another scale. Deleuze and Guattari call these two
operations "content" and "expression", and warn us against confusing them
with the old philosophical distinction between "substances" and "forms". The
reason is that each one of the two articulations involves substances and
forms: sedimentation is not just about accumulating pebbles (substance) but
also about sorting them into uniform layers (form); while consolidation not
only effects new architectonic couplings between pebbles (form) but also
yields a new entity, a sedimentary rock (substance). Moreover, these new
entities may themselves accumulate and sort (as in the alternating layers of
schist and sandstone that make up Alpine mountains) and become
consolidated when tectonic forces cause the accumulated layers of rock to
fold and become a higher scale entity, a mountain. {5}
We can also find these two operations (and hence, this abstract diagram) in
the formation of social classes. We talk of "social strata" whenever a given
society presents a variety of differentiated roles to which not everyone has
equal access, and when a subset of those roles (i.e. those to which a ruling
elite alone has access) involves the control of key energetic and material
resources. While role differentiation may be a spontaneous effect of an
intensification in the flow of energy through society (e.g. as when a Big Man
in pre-State societies acts as an intensifier of agricultural production), the
sorting of those roles into ranks along a scale of prestige involves specific
group dynamics. In one model, for instance, members of a group who have
acquired preferential access to some roles begin to acquire the power to
control further access to them, and within these dominant groups criteria for
sorting the rest of society into sub-groups begin to crystallize. "It is from
such crystallization of differential evaluation criteria and status positions that
some specific manifestations of stratification and status differences -such as
segregating the life-styles of different strata, the process of mobility between
them, the steepness of the stratificational hierarchies, some types of stratum
consciousness, as well as the degree and intensity of strata conflict-
develops in different societies." {7}
However, even though most societies develop some rankings of this type,
not in all of them do they become an autonomous dimension of social
organization. In many societies differentiation of the elites is not extensive
(they do not form a center while the rest of the population forms an excluded
periphery), surpluses do not accumulate (they may be destroyed in ritual
feasts), and primordial relations (of kin and local alliances) tend to prevail.
Hence a second operation is necessary beyond the mere sorting of people
into ranks for social classes or castes to become a separate entity: the
informal sorting criteria need to be given a theological interpretation and a
legal definition, and the elites need to become the guardians and bearers of
the newly institutionalized tradition, that is, the legitimizers of change and
delineators of the limits of innovation. In short, to transform a loose
accumulation of traditional roles (and criteria of access to those roles) into a
social class, the latter needs to become consolidated via theological and
legal codification. {8}
and social classes (and other institutionalized hierarchies) are all historical
constructions, the product of definite structure-generating processes which
take as their starting point a heterogeneous collection of raw materials
(pebbles, genes, roles), homogenize them through a sorting operation and
then give the resulting uniform groupings a more permanent state through
some form of consolidation. Hence, while some elements remain different
(e.g. only human institutions, and perhaps biological species, involve a
hierarchy of command) others stay the same: the articulation of
homogenous components into higher-scale entities. (And all this, without
metaphor.)
attractor). {10}
The question now is whether from these and other empirical studies of
meshwork behavior we can derive a structure-generating process which is
abstract enough to operate in the worlds of geology, biology and human
society. In the model proposed by Deleuze and Guattari, there are three
elements in this diagram. First, a set of heterogeneous elements is brought
together via an articulation of superpositions, that is, an interconnection of
diverse but overlapping elements. (In the case of autocatalytic loops, the
nodes in the circuit are joined to each other by their functional
complementarities). Second, a special class of operators, or intercalary
elements, is needed to effect this interlock via local connections. (In our
case, this is the role played by catalysts, inserting themselves between two
other chemical substances to facilitate their interaction). Finally, the
interlocked heterogeneities must be capable of endogenously generating
stable patterns of behavior (for example, patterns at regular temporal or
spatial intervals.) {12} Is it possible to find instances of these three elements
in all different spheres of reality?
Besides the sedimentary type there exists another great class of rocks
called "igneous rocks" (such as granite) which are the outcome of a radically
different process of construction. Granite forms directly out of a cooling
magma, a viscous fluid made out of a diversity of molten materials. Each of
these liquid components has a different threshold of crystallization, that is,
each undergoes the bifurcation towards the solid state at a different critical
point in temperature. This means that as the magma cools down its different
elements will separate as they crystallize in sequence, those that solidify
earlier serving as containers for those which acquire a crystal form later. In
these circumstances the result is a complex set of heterogeneous crystals
which interlock with one another, and this is what gives granite its superior
strength. {13}
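This sequential crystallization can be sketched directly; the threshold temperatures below are rough illustrative values, not precise mineral data, and the code is a toy of my own rather than a geological model:

```python
def crystallization_sequence(components, t_start, t_step):
    """Toy sketch of a cooling magma: each molten component crystallizes
    when the temperature falls below its own critical point, so the
    mixture solidifies as a sequence of interlocking crystals."""
    t = t_start
    still_liquid = dict(components)
    sequence = []
    while still_liquid:
        t -= t_step
        frozen = [name for name, crit in still_liquid.items() if t < crit]
        # if several cross their threshold at once, the hottest freezes first
        for name in sorted(frozen, key=still_liquid.get, reverse=True):
            sequence.append(name)
            del still_liquid[name]
    return sequence

# illustrative thresholds in degrees C, loosely echoing Bowen's series
magma = {"olivine": 1400, "pyroxene": 1200, "feldspar": 1000, "quartz": 700}
sequence = crystallization_sequence(magma, t_start=1500, t_step=50)
print(sequence)  # ['olivine', 'pyroxene', 'feldspar', 'quartz']
```

The components that solidify earlier serve as containers for the later ones, which is exactly the interlocking of heterogeneous elements the text describes.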
When a reaction like the one involved in chemical clocks is not stirred, the
temporal intervals generated become spatial intervals, forming beautiful
spiral and concentric circle patterns which sometimes can be observed in
frozen form in some igneous rocks. {15}
Of course, for this to work prices must set themselves, and therefore we
must imagine that there is not a wholesaler in town who can manipulate
prices by dumping (or hoarding) large amounts of a given product into the
market. In the absence of price manipulation, money (even primitive money
such as salt, shells or cigarettes) performs the function of intercalary
element: while with pure barter the possibility of two exactly matching
demands meeting by chance is very low, with money those chance
encounters become unnecessary, and complementary demands may find
each other at a distance, so to speak. Finally, markets also seem to
generate endogenous stable states, particularly when commercial towns
form trading circuits, as can be seen in the cyclic behavior of their prices.
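The difference money makes as an intercalary element can be illustrated with a toy meeting model. This is my own construction; the goods, population size and number of meetings are all invented:

```python
import random

random.seed(11)

GOODS = ["salt", "grain", "cloth", "pots", "fish"]

# each trader offers one good and demands another (illustrative setup)
traders = [{"sells": random.choice(GOODS), "wants": random.choice(GOODS)}
           for _ in range(100)]
meetings = [random.sample(traders, 2) for _ in range(5000)]

def count_trades(meetings, use_money):
    """Barter needs a double coincidence of wants; money, acting as the
    intercalary element, makes a single coincidence sufficient, since
    the other party can simply pay."""
    trades = 0
    for a, b in meetings:
        double = a["sells"] == b["wants"] and b["sells"] == a["wants"]
        single = a["sells"] == b["wants"] or b["sells"] == a["wants"]
        if double or (use_money and single):
            trades += 1
    return trades

barter = count_trades(meetings, use_money=False)
money = count_trades(meetings, use_money=True)
# On the same sequence of chance meetings, money always completes at
# least as many exchanges as barter, and typically many more.
```

Complementary demands no longer need to meet by chance in the same encounter; they can find each other "at a distance", exactly as the text puts it.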
From the point of view of the nonlinear dynamics of our planet, the thin rocky
crust on which we live and which we call our land and home is perhaps its
least important component. Indeed, if we waited long enough, if we could
observe planetary dynamics at geological time scales, the rocks and
mountains which define the most stable and durable traits of our reality
would dissolve into the great underground lava flows of which they are but
temporary hardenings. Indeed, given that it is just a matter of time for any
one rock or mountain to be reabsorbed into the self-organized flows of lava
driving the dynamics of the lithosphere, these geological structures
represent a local slowing-down in this flowing reality. It is almost as if every
part of the mineral world could be defined by specifying its chemical
composition and its speed of flow: very slow for rocks, faster for lava.
First of all, the fact that meshworks and hierarchies occur mostly in mixtures
makes it convenient to have a label to refer to these changing combinations.
If the hierarchical components of the mix dominate over the meshwork ones
we may speak of a highly stratified structure, while the opposite combination
will be referred to as one with a low degree of stratification. Moreover, since
meshworks give rise to hierarchies and hierarchies to meshworks, we may
speak of a given mixture as undergoing processes of destratification as well
as restratification, as its proportions of homogenous and heterogeneous
components change. Finally, since according to this way of viewing things
what truly defines the real world are neither uniform strata nor variable
meshworks but the unformed and unstructured morphogenetic flows from
which these two derive, it will also be useful to have a label to refer to this
special state of matter-energy-information, to this flowing reality animated
from within by self-organizing processes constituting a veritable non-organic
life: the Body without Organs (BwO):
"The organism is not at all the body, the BwO; rather it is a stratum on the
BwO, in other words, a phenomenon of accumulation, coagulation, and
sedimentation that, in order to extract useful labor from the BwO, imposes
upon it forms, functions, bonds, dominant and hierarchized organizations,
organized transcendences...the BwO is that glacial reality where the
alluvions, sedimentations, coagulations, foldings, and recoilings that
compose an organism -and also a signification and a subject- occur. " {17}
The label itself is, of course, immaterial and insignificant. We could as well
refer to this cauldron of non-organic life by a different name. (Elsewhere, for
instance, I called it the "machinic phylum"). {18} Unlike the name, however,
the referent of the label is of extreme importance, since the flows of lava,
biomass, genes, memes, norms, money (and many others) are crucial for
the emergence of just about any stable structure that we cherish and value
(or, on the contrary, that oppresses and enslaves us). We could define the
BwO in terms of these unformed, destratified flows, as long as we keep in
mind that what counts as destratified at any given time and space scale is
entirely relative. The flows of genes and biomass are "unformed" if we
compare them to any given individual organism, but they themselves have
internal forms and functions. Indeed, if instead of taking a planetary
perspective we adopted a properly cosmic viewpoint, our entire planet
(together with its flows) would itself be a mere provisional hardening in the
vast flows of plasma which permeate the universe.
Human history has involved a variety of Bodies without Organs. First, the
sun, that giant sphere of plasma whose intense flow of energy drives most
processes of self-organization on our planet and, in the form of grain and
fossil fuel, in our civilizations. Second, the body of lava "conveyor
belts" (convection cells) which drive plate tectonics and which are
responsible for the most general geopolitical features of our planet, such as
the breakdown of Pangea into our current continents, and the subsequent
distribution of domesticable species, a distribution that benefited Eurasia
over the rest of the world. Third, the BwO constituted by the coupled
dynamics of the Hydrosphere/Atmosphere, and their wild variety of self-
organized entities: hurricanes, tsunamis, pressure blocks, cyclones, and
wind circuits. (The conquest of the wind circuits of the Atlantic, the trade
winds and the westerlies, is what allowed the transformation of the American
continent into a vast supply zone to fuel the growth of the European urban
economy).
Fourth, the genetic BwO constituted by the more or less free flow of genes
through microorganisms (via plasmids and other vectors), which unlike the
more stratified genetic flow in animals and plants, has avoided human
control even after the creation of antibiotics. Fifth, those portions of the flow
of solar energy through ecosystems (flesh circulating in natural food webs)
which have escaped urbanization, particularly animal and vegetable weeds
or rhizomes (the BwO formed by underground rodent cities, for example).
Finally, our languages, when they formed dialect continua and
circumstances conspired to remove any stratifying pressure, also formed a
BwO, as when the Norman invaders imposed French as the language of the
elites allowing the peasant masses to create the English language out of an
amorphous soup of Germanic norms with Scandinavian spices.
Hence, using these abstract diagrams to represent what goes on in the BwO
is equivalent to using a system of representation in terms of intensities,
since it is ultimately the intensity of each parameter that determines the kind
of dynamic involved, and hence, the character of the structures that are
generated. Indeed, one way of picturing the BwO is as that "glacial" state of
matter-energy-information which results from turning all these knobs to
zero, that is, to the absolute minimum value of intensity, bringing all
production of structured form to a halt:
"A BwO is made in such a way that it can be occupied, populated only by
intensities. Only intensities pass and circulate. Still, the BwO is not a scene,
a place, or even a support upon which something comes to pass... It is not
space, nor is it in space; it is matter that occupies space to a given degree -
to the degree corresponding to the intensities produced. It is nonstratified,
unformed, intense matter, the matrix of intensity, intensity=0 ... Production of
the real as an intensive magnitude starting at zero." {19}
References:
{1} Herbert Simon. The Sciences of the Artificial. (MIT Press, 1994). p. 32-
36
{2} Fernand Braudel. The Wheels of Commerce. (Harper and Row, New
York, 1986). p. 28-47
For the idea that "invisible hand" economics simply assumes that demand
and supply cancel each other out (i.e. that markets clear) without ever
specifying the dynamics that lead to this state see:
Mirowski shows how the concept of the "invisible hand" was
formalized in the nineteenth century by simply copying the form of
equilibrium thermodynamics (hence, in his opinion, this branch of physics
provided more heat than light). He also warns that recent attempts to apply
Prigogine's theories to economics are doing the same thing, for example,
assuming the existence of attractors without specifying just what it is that is
being dissipated (i.e. only energetically dissipative or "lossy" systems have
attractors). See:
"Stating the distinction in its more general way, we could say that it is
between stratified systems or systems of stratification on the one hand, and
consistent, self-consistent aggregates on the other... There is a coded
system of stratification whenever, horizontally, there are linear causalities
between elements; and, vertically, hierarchies of order between groupings;
and, holding it all together in depth, a succession of framing forms, each of
which informs a substance and in turn serves as a substance for another
form. [e.g. the succession pebbles-sedimentary rocks-folded mountains,
footnote#5 below]... On the other hand, we may speak of aggregates of
consistency when instead of a regulated succession of forms-substances we
are presented with consolidations of very heterogeneous elements, orders
that have been short-circuited or even reverse causalities, and captures
between materials and forces of a different nature... ".
{5} Gilles Deleuze and Felix Guattari. A Thousand Plateaus. op. cit. p. 41
Other researchers have discovered that as the loop adds new nodes it may
reach a critical threshold of complexity and undergo a bifurcation, a
transition to a new state where complexification accelerates. Since the
states to which a phase transition leads are in no way "directed" or
"progressive", changing and developing by crossing bifurcations is another
way of growing by drift.
{10} Ilya Prigogine and Isabelle Stengers. Order out of Chaos. op.cit.. p. 147
{12} Gilles Deleuze and Felix Guattari. A Thousand Plateaus. op.cit. p.329
But we can go further. Defined this way, "catalysis" becomes a true abstract
operation: anything that switches a dynamical system (an interacting
population of molecules, ants, humans or institutions) from one stable state
to another is literally a catalyst in this sense. Hence, we may use this
definition not only to move down from chemistry (the field of the literal
application of the term) to physics without metaphor, but also up, to biology,
sociology, linguistics. Cities and institutions, for example, would be
instantiations of this operator to the extent that they arise from matter-
energy flows and decision-making processes, but then react back on these
flows and processes to constrain them in a variety of ways (stimulating them
or inhibiting them). On the other hand, as Iberall himself notes, catalytic
constraints may combine with one another and form language-like systems.
Another physicist, Howard Pattee, has further elaborated the notion of
enzymes (organic catalysts) as syntactical constraints, operating on a
semantic world defined by its stable states.
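The abstract sense of "catalysis" given above, anything that switches a dynamical system from one stable state to another, can be illustrated with a minimal numerical sketch. The bistable system dx/dt = x - x^3 (with attractors at -1 and +1) and the size of the perturbing "kick" are illustrative assumptions of mine, not part of the essay:

```python
def simulate(x0, kick_at=None, kick=0.0, dt=0.01, steps=2000):
    """Euler-integrate dx/dt = x - x**3, optionally applying a one-time kick."""
    x = x0
    for step in range(steps):
        if step == kick_at:
            x += kick                # the "catalytic" perturbation
        x += (x - x**3) * dt         # relax toward the nearest attractor
    return x

# Left alone, the system settles into the attractor whose basin it starts in:
print(round(simulate(-0.9), 2))
# A large enough kick switches it to the other stable state:
print(round(simulate(-0.9, kick_at=100, kick=2.5), 2))
```

Note that the catalyst does not supply the new state; it merely pushes the system across the boundary between two basins of attraction, after which the system's own dynamics do the rest.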
{17} Gilles Deleuze and Felix Guattari. A Thousand Plateaus. op. cit. p.159
{18} While the term "Body Without Organs" was first used in a philosophical
context by Deleuze (borrowing from Artaud), the almost synonymous
"machinic phylum" seems to have been coined and first used bt Guattari, in:
I do not claim that the two terms are strictly synonymous (although I myself
do use them that way). Rather what seems to be happening is that these
philosophers instead of building one theory are attempting to create a
meshwork of theories, that is, a set of partially overlapping theories. Hence,
key (near synonymous) concepts (BWO, phylum, smooth space, rhizome)
do not exactly coincide in meaning but are slightly displaced from one
another to create this overlapping effect. The point remains that it is the
referents of these labels that matter and not the labels themselves.
by Manuel De Landa.
This author claims that by the time Greek philosophers like Democritus or
Aristotle developed their philosophies of matter, practically everything about
the behaviour of metals and alloys that could be explored with pre-industrial
technology was already known to craftsmen and blacksmiths. For at least a
thousand years before philosophers began their speculations, this
knowledge was developed on a purely empirical basis, through a direct
interaction with the complex behaviour of materials. Indeed, the early
philosophies of matter may have been derived from observation and
conversation with those "whose eyes had seen and whose fingers had felt
the intricacies of the behaviour of materials during thermal processing or as
they were shaped by chipping, cutting or plastic deformation." {2} For
instance, Aristotle's famous four elements, fire, earth, water and air, may be
said to reflect a sensual awareness of what today we know as energy and
the three main states of aggregation of matter, the solid, liquid and gas
states.
James Gordon has called the study of the strength of materials the
Cinderella of science, partly because much of the knowledge was developed
by craftsmen, metallurgists and engineers (that is, the flow of ideas often ran
from the applied to the pure fields), and partly because by its very nature,
the study of materials involved an interaction between many scientific
disciplines, an interdisciplinary approach which ran counter to the more
prestigious tradition of "pure" specialization. {6} Today, of course, the
interdisciplinary study of complexity, not only in materials but in many other
areas of science, from physics and ecology to economics, is finally taking its
place at the cutting-edge of scientific research. We are beginning to
understand that any complex system, whether composed of interacting
molecules, organic creatures or economic agents, is capable of
spontaneously generating order and of actively organizing itself into new
structures and forms. It is precisely this ability of matter and energy to self-
organize that is of greatest significance to the philosopher. Let me illustrate
this with an example from materials science.
Long ago, practical metallurgists understood that a given piece of metal can
be made to change its behaviour, from ductile and tough to strong and
brittle, by hammering it while cold. The opposite transmutation, from hard to
ductile, could also be achieved by heating the piece of metal again and then
allowing it to cool down slowly (that is, by annealing it). Yet, although
blacksmiths knew empirically how to cause these metamorphoses, it was
not until a few decades ago that scientists understood their actual microscopic
mechanism. As it turns out, explaining the physical basis of ductility involved
a radical conceptual change: scientists had to stop viewing metals in static
terms, that is, as deriving their strength in a simple way from the chemical
bonds between their composing atoms, and begin seeing them as
dynamical systems. In particular, the real cause of brittleness in rigid
materials, and the reason why ductile ones can resist being broken, has to
do with the complex dynamics of spreading cracks.
What matters from the philosophical point of view is precisely that toughness
or strength are emergent properties of a metallic material that result from the
complex dynamical behaviour of some of its components. An even deeper
". . . during the last years the whole field of materials science and related
technologies has experienced a complete renewal. Effectively, by using
techniques corresponding to strong nonequilibrium conditions, it is now
possible to escape from the constraints of equilibrium thermodynamics and
to process totally new material structures including different types of
glasses, nano- and quasi-crystals, superlattices . . . As materials with
increased resistance to fatigue and fracture are sought for actual
applications, a fundamental understanding of the collective behaviour of
dislocations and point defects is highly desirable. Since the usual
thermodynamic and mechanical concepts are not adapted to describe those
situations, progress in this direction should be related to the explicit use of
genuine nonequilibrium techniques, nonlinear dynamics and instability
theory". {8}
very different disciplines may apply across the board, and this may help
legitimize the intrinsic interdisciplinary approach of materials science. As I
just said, however, the common behaviour of different collectivities in
nonlinear, nonequilibrium conditions is of even greater importance to the
philosopher of matter. This is very clear in the philosophy of Gilles Deleuze
and Felix Guattari who are perhaps the most radical contemporary
representatives of this branch of philosophy. Inspired in part by some early
versions of complexity theory (e.g. René Thom's catastrophe theory, and the
theories of technology of Gilbert Simondon) these authors arrived at the idea
that all structures, whether natural or social, are indeed different expressions
of a single matter-energy behaving dynamically, that is, matter-energy in
flux, to which they have given the name of "machinic phylum". In their words:
". . .the machinic phylum is materiality, natural or artificial, and both
simultaneously; it is matter in movement, in flux, in variation. . ." {9}
". . . what metal and metallurgy bring to light is a life proper to matter, a vital
state of matter as such, a material vitalism that doubtless exists everywhere
but is ordinarily hidden or covered, rendered unrecognizable . . . Metallurgy
is the consciousness or thought of the matter-flow, and metal the correlate
of this consciousness. As expressed in panmetallism, metal is coextensive
to the whole of matter, and the whole of matter to metallurgy. Even the
waters, the grasses and varieties of wood, the animals are populated by
"The widespread use of steel for so many purposes in the modern world is
only partly due to technical causes. Steel, especially mild steel, might
euphemistically be described as a material that facilitates the dilution of
skills. . . Manufacturing processes can be broken down into many separate
stages, each requiring a minimum of skill or intelligence. . . At a higher
mental level, the design process becomes a good deal easier and more
foolproof by the use of a ductile, isotropic, and practically uniform material
with which there is already a great deal of accumulated experience. The
Gordon sees in the spread of the use of steel in the late nineteenth and early
twentieth centuries a double danger for the creativity of structural designers.
The first danger is the idea that a single, universal material is good for all
different kinds of structure, some of which may be supporting loads in
compression, some in tension, some withstanding shear stresses and others
torsional stresses. But as Gordon points out, given that the roles which a
structure may play can be highly heterogeneous, the repertoire of materials
that a designer uses should reflect this complexity. On the other hand, he
points out that, much as in the case of biological materials like bone, new
designs may involve structures with properties that are in continuous
variation, with some portions of the structure better able to deal with
compression while others deal with tension. Intrinsically heterogeneous
materials, such as fiberglass and the newer hi-tech composites, afford
designers this possibility. As Gordon says, "it is scarcely practicable to
tabulate elaborate sets of "typical mechanical properties" for the new
composites. In theory, the whole point of such materials is that, unlike
metals, they do not have "typical properties", because the material is
designed to suit not only each individual structure, but each place in that
structure." {11}
First, the nineteenth-century process of transferring skills from the human
worker to the machine, and the task of homogenizing metallic behaviour
went hand in hand. As Cyril Stanley Smith remarks, "The craftsman can
compensate for differences in the qualities of his material, for he can adjust
the precise strength and pattern of application of his tools to the material's
local vagaries. Conversely, the constant motion of a machine requires
constant materials." {12} If it is true, as I said at the beginning of this essay,
that much of the knowledge about the complex behaviour of materials was
developed outside science by empirically oriented individuals, the deskilling
of craftsmen that accompanied mechanization may be seen as involving a
loss of at least part of that knowledge, since in many cases empirical know-
how is stored in the form of skills.
Second, as I just said, not only was the production process routinized in this
way; so too, to a lesser extent, was the design process. Many professionals who
design load-bearing structures lost their ability to design with materials that
are not isotropic, that is, that do not have identical properties in all
directions. But it is precisely those abilities to deal with complex,
continuously variable behaviour that are now needed to design structures
with the new composites. Hence, we may need to nurture again our ability to
deal with variation as a creative force, and to think of structures that
incorporate heterogeneous elements as a challenge to be met by innovative
design.
Third, the quest for uniformity in human and metallic behaviour went beyond
the specific disciplinary devices used in assembly-line factories. Many other
things became homogenized in the last few centuries. To give only two
examples: the genetic materials of our farm animals and crops have become
much more uniform, at first due to the spread of the "pedigree mystique",
and later in this century, by the development and diffusion of miracle crops,
like hybrid corn. Our linguistic materials also became more uniform as the
meshworks of heterogeneous dialects which existed in most countries began
to yield to the spread of standard languages, through compulsory education
systems and the effects of mass media. As before, the question is not whether
we achieved some efficiencies through genetic and linguistic standardization.
We did. The problem is that in the process we came to view heterogeneity
and variation as something to be avoided, as something pathological to be
cured or uprooted since it endangered the unity of the nation state.
Finally, as Deleuze and Guattari point out, the nineteenth-century quest for
uniformity may have had damaging effects on the philosophy of matter by
References:
1) Cyril Stanley Smith. Matter Versus Materials: A Historical View. In A
Search for Structure. (MIT Press, 1992). p. 115
2) ibid. p.115
5) ibid. p. 21 and 22
6) ibid p. 3
7) ibid. p. 111
by Manuel De Landa
For a philosopher there are several interesting issues involved in this new
interface paradigm. The first one has to do with the history of the software
infrastructure that has made this proliferation of agents possible. From the
point of view of the conceptual history of software, the creation of worlds
populated by semi-autonomous virtual creatures, as well as the more
familiar world of mice, windows and pull-down menus, has been made
possible by certain advances in programming language design. Specifically,
programming languages needed to be transformed from the rigid hierarchies
which they were for many years, to the more flexible and decentralized
structure which they gradually adopted as they became more "object-oriented".
The idea of coding data into punched cards spread slowly during the 1800's,
and by the beginning of our century it had found its way into computing
machinery, first the tabulators used by Hollerith to process the 1890 United
States census, then into other tabulators and calculators. In all these cases
control remained embodied in the machine's hardware. One may go as far
as saying that even the first modern computer, the imaginary computer
created by Alan Turing in the 1930's still kept control in the hardware, the
scanning head of the Turing machine. The tape that his machine scanned
held nothing but data. But this abstract computer already had the seed of
the next step, since as Turing himself understood, the actions of the
The next step in this migration took place when control of a given
computational process moved from the software to the very data that the
software operates on. For as long as computer languages such as
FORTRAN or Pascal dominated the computer industry, control remained
hierarchically embedded in the software. A master program would surrender
control to a subroutine whenever that sub-task needed to be performed,
and the subroutine itself might pass control to an even more basic subroutine.
But the moment the specific task was completed, control would move up the
hierarchy until it reached the master program again. Although this
arrangement remained satisfactory for many years, and indeed, many
computer programs are still written that way, more flexible schemes were
needed for some specific, and at the time, esoteric applications of
computers, mostly in Artificial Intelligence.
whatever task they were supposed to do. In a very real sense, it was now
the data itself that controlled the process. And, more importantly, if the data
base was connected to the outside world via sensors, so that patterns of
data reflected patterns of events outside the robot, then the world itself was
now controlling the computational process, and it was this that gave the
robot a degree of responsiveness to its surroundings.
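This shift of control into the data can be sketched as a toy production system: rules fire whenever their conditions match the current contents of working memory, so the order of execution is dictated by the data rather than by a master program. The particular rules and facts below are hypothetical illustrations:

```python
def run(rules, memory):
    """Fire any rule whose condition matches, until nothing new is added."""
    fired = True
    while fired:
        fired = False
        for condition, action in rules:
            new_facts = action(memory) if condition(memory) else set()
            if new_facts - memory:       # this rule contributes something new
                memory |= new_facts
                fired = True
    return memory

rules = [
    (lambda m: "obstacle-ahead" in m, lambda m: {"turn-left"}),
    (lambda m: "turn-left" in m,      lambda m: {"heading-changed"}),
]

# The same rule set behaves differently depending on what the data says:
print(sorted(run(rules, {"obstacle-ahead"})))  # both rules fire in sequence
print(sorted(run(rules, {"clear-path"})))      # neither rule fires
```

If the working memory were fed by sensors, patterns of events in the world would decide which rules fire, which is exactly the responsiveness described above.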
care because it has been greatly abused over the last century by theorists
on the left and the right. As Simon remarks, the term does not refer to the
world of corporations, whether monopolies or oligopolies, since in these
commercial institutions decision-making is highly centralized, and prices are
set by command.
I would indeed limit the sense of the term even more to refer exclusively to
those weekly gatherings of people at a predefined place in town, and not to
a dispersed set of consumers catered to by a system of middlemen (as when
one speaks of the "market" for personal computers). The reason is that, as
historian Fernand Braudel has made clear, it is only in markets in the first
sense that we have any idea of what the dynamics of price formation are. In
other words, it is only in peasant and small town markets that decentralized
decision-making leads to prices setting themselves up in a way that we can
understand. In any other type of market, economists simply assume that
supply and demand connect to each other in a functional way, but they do
not give us any specific dynamics through which this connection is effected.
{4} Moreover, unlike the idealized version of markets guided by an "invisible
hand" to achieve an optimal allocation of resources, real markets are not in
any sense optimal. Indeed, like most decentralized, self-organized
structures, they are merely viable; since they are not hierarchical they
have no goals, and they grow and develop mostly by drift. {5}
species gene pool. On the other hand, ecosystems are examples of self-
consistent aggregates, since they link together into complex food webs a
wide variety of animals and plants, without reducing their heterogeneity. I
have developed this theory in more detail elsewhere, but for our purposes
here let's simply keep the idea that besides centralization and
decentralization of control, what defines these two types of structure is the
homogeneity or heterogeneity of their composing elements.
A similar point may be made about the worlds inhabited by software agents.
The Internet, to take the clearest example first, is a meshwork which grew
mostly by drift. No one planned either the extent or the direction of its
development, and indeed, no one is in charge of it even today. The Internet,
or rather its predecessor, the Arpanet, acquired its decentralized structure
because of the needs of U.S. military hierarchies for a command and
communications infrastructure which would be capable of surviving a
nuclear attack. As analysts from the Rand Corporation made clear, only if
the routing of the messages was performed without the need for a central
computer could bottlenecks and delays be avoided, and more importantly,
could the meshwork put itself back together once a portion of it had been
nuclearly vaporized. But in the Internet only the decision-making behind
routing is of the meshwork type. Decision-making regarding its two main
resources, computer (or CPU) time and memory, is still hierarchical.
These ideas are today being hotly debated in the field of interface design.
The general consensus is that interfaces must become more intelligent to be
able to guide users in the tapping of computer resources, both the
informational wealth of the Internet and the resources of ever more
elaborate software applications. But if the debaters agree that interfaces
must become smarter, and even that this intelligence will be embodied in
agents, they disagree on how the agents should acquire their new
capabilities. The debate pits two different traditions of Artificial Intelligence
against each other: Symbolic AI, in which hierarchical components
predominate, against Behavioral AI, where the meshwork elements are
dominant. Basically, while in the former discipline one attempts to endow
machines with intelligence by depositing a homogeneous set of rules and
symbols into a robot's brain, in the latter one attempts to get intelligent
behavior to emerge from the interactions of a few simple task-specific
modules in the robot's head, and the heterogeneous affordances of its
environment. Thus, to build a robot that walks around a room, the first
approach would give the robot a map of the room, together with the ability to
reason about possible walking scenarios in that model of the room. The
second approach, on the other hand, endows the robot with a much simpler
set of abilities, embodied in modules that perform simple tasks such as
collision-avoidance, and walking-around-the-room behavior emerges from
the interactions of these modules and the obstacles and openings that the
real room affords the robot as it moves.{8}
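A minimal sketch of this second approach: the robot below has no map, only a wandering behaviour and a collision-avoidance module that overrides it, and its path around an obstacle emerges from the interaction of the two. The grid world and the module details are my own illustrative assumptions, not a description of any actual robot architecture:

```python
def step(position, heading, room):
    """One control cycle: a collision-avoidance module overrides wandering."""
    ahead = (position[0] + heading[0], position[1] + heading[1])
    if ahead in room["obstacles"]:           # collision-avoidance module
        heading = (-heading[1], heading[0])  # turn 90 degrees left
        return position, heading
    return ahead, heading                    # wandering module: move forward

room = {"obstacles": {(2, 0)}}               # one wall segment in the grid
pos, head = (0, 0), (1, 0)                   # start at the origin, facing east
path = [pos]
for _ in range(5):
    pos, head = step(pos, head, room)
    path.append(pos)
print(path)  # the robot detours around (2, 0) without ever "knowing" the room
```

Nothing in the code represents the room as a whole; the detour is produced jointly by the modules and by what the room itself affords.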
In terms of the location of control, there is very little difference between the
agents that would result, and in this sense, both approaches are equally
decentralized. The rules that Symbolic AI would put in the agent's head,
most likely derived from interviews of users and programmers by a
Knowledge Engineer, are independent software objects. Indeed, in one of
the most widely used programming languages in this kind of approach
(called a "production system") the individual rules have even more of a
meshwork structure than many object-oriented systems, which still cling to a
hierarchy of objects. But in terms of the overall human-machine system, the
approach of Symbolic AI is much more hierarchical. In particular, by
assuming the existence of an ideal user, with homogeneous and unchanging
habits, and of a workplace where all users are similar, agents created by this
approach are not only less adaptive and more commanding, they
themselves promote homogeneity in their environment. The second class of
agents, on the other hand, are not only sensitive to heterogeneities, since
they adapt to individual users and change as the habits of these users
change, they promote heterogeneity in the work place by not subordinating
every user to the demands of an idealized model.
One drawback of the approach of Behavioral AI is that, given that the agent
has very little knowledge at the beginning of a relationship with a user, it will
be of little assistance for a while until it learns about his or her habits. Also,
since the agent can only learn about situations that have recurred in the
past, it will be of little help when the user encounters new problems. One
possible solution is to increase the amount of meshwork in the mix and
allow agents from different users to interact with each other in a
decentralized way. {10} Thus, when a new agent begins a relation with a
user, it can consult with other agents and speed up the learning process,
assuming, that is, that what other agents have learned is applicable to the
new user. This, of course, will depend on the existence of some
homogeneity of habits, but at least it does not assume a completely
homogeneous situation from the outset, an assumption which in turn
promotes further uniformization. Besides, endowing agents with a static
model of the users makes them unable to cope with novel situations. This is
also a problem in the Behavioral AI approach but here agents may aid one
another in coping with novelty. Knowledge gained in one part of the
workplace can be shared with the rest, and new knowledge may be
generated out of the interactions among agents. In effect, a dynamic model
of the workplace would be constantly generated and improved by the
collective of agents in a decentralized way, instead of each one being a
replica of each other operating on the basis of a static model centrally
created by a knowledge engineer.
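The collaborative scheme just described can be sketched as follows: a new agent with no local experience polls its peers and adopts a suggestion only when enough of them agree. The class names, the voting threshold and the example "habits" are hypothetical, not taken from the cited work:

```python
from collections import Counter

class Agent:
    def __init__(self, habits=None):
        self.habits = dict(habits or {})     # situation -> learned action

    def suggest(self, situation, peers, min_votes=2):
        # Use local experience first; otherwise poll the other agents.
        if situation in self.habits:
            return self.habits[situation]
        votes = Counter(p.habits[situation] for p in peers
                        if situation in p.habits)
        if votes and votes.most_common(1)[0][1] >= min_votes:
            action = votes.most_common(1)[0][0]
            self.habits[situation] = action  # learn from the collective
            return action
        return None                          # not enough shared experience

veteran1 = Agent({"new-mail": "file-in-inbox"})
veteran2 = Agent({"new-mail": "file-in-inbox"})
novice = Agent()
print(novice.suggest("new-mail", [veteran1, veteran2]))
print(novice.suggest("meeting-request", [veteran1, veteran2]))
```

The model of the workplace here lives in no single agent; it is distributed across the collective and is revised every time an agent learns something new.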
I would like to conclude this brief analysis of the issues raised by agent-
based interfaces with some general remarks. First of all, from the previous
comments it should be clear that the degree of hierarchical and
homogenizing components in a given interface is a question which affects
more than just events taking place on the computer's screen. In particular,
the very structure of the workplace, and the relative status of humans and
To make things worse, the solution to this is not simply to begin adding
meshwork components to the mix. Indeed, one must resist the temptation to
make hierarchies into villains and meshworks into heroes, not only because,
as I said, they are constantly turning into one another, but because in real
life we find only mixtures and hybrids, and the properties of these cannot be
established through theory alone but demand concrete experimentation.
Certain standardizations, say, of electric outlet designs or of data-structures
traveling through the Internet, may actually turn out to promote
heterogenization at another level, in terms of the appliances that may be
designed around the standard outlet, or of the services that a common data-
structure may make possible. On the other hand, the mere presence of
increased heterogeneity is no guarantee that a better state for society has
been achieved. After all, the territory occupied by former Yugoslavia is more
heterogeneous now than it was ten years ago, but the lack of uniformity at
one level simply hides an increase of homogeneity at the level of the warring
ethnic communities. But even if we managed to promote not only
heterogeneity, but diversity articulated into a meshwork, that still would not
be a perfect solution. After all, meshworks grow by drift and they may drift to
places where we do not want to go. The goal-directedness of hierarchies is
the kind of property that we may desire to keep at least for certain
institutions. Hence, demonizing centralization and glorifying decentralization
as the solution to all our problems would be wrong. An open and
experimental attitude towards the question of different hybrids and mixtures
is what the complexity of reality itself seems to call for. To paraphrase
Deleuze and Guattari, never believe that a meshwork will suffice to save us.
{11}
Footnotes:
{2} Andrew Hodges. Alan Turing: The Enigma. (Simon & Schuster, New
York 1983). Ch. 2
{3} Herbert Simon. The Sciences of the Artificial. (MIT Press, 1994). p.43
{4} Fernand Braudel. The Wheels of Commerce. (Harper and Row, New
York, 1986). Ch. I
{7} M.S. Miller and K.E. Drexler. Markets and Computation: Agoric Open
Systems. In The Ecology of Computation. Bernardo Huberman ed. (North-
Holland, Amsterdam 1988).
{10} Yezdi Lashari, Max Metral and Pattie Maes. Collaborative Interface
Agents. In Proceedings of 12th National Conference on AI. (AAAI Press,
http://www.t0.or.at/delanda/meshwork.htm (11 z 12)2006-02-23 23:42:51
Zero News Datapool, MANUEL DE LANDA, MESHWORKS,HIERARCHIES AND INTERFACES
{11} Deleuze and Guattari. op. cit. p. 500. (Their remark is framed in terms
of "smooth spaces" but it may be argued that this is just another term for
meshworks).
VIRTUAL ENVIRONMENTS
AND THE EMERGENCE OF SYNTHETIC REASON
by MANUEL DE LANDA
At the end of World War II, Stanislaw Ulam and other scientists previously
involved in weapons research at Los Alamos discovered the huge potential
of computers to create artificial worlds, where simulated experiments could
be conducted and where new hypotheses could be framed and tested. The
physical sciences were the first ones to tap into this "epistemological
reservoir" thanks to the fact that much of their accumulated knowledge had
already been given a mathematical form. Among the less mathematized
disciplines, those already taking advantage of virtual environments are
psychology and biology (e.g. Artificial Intelligence and Artificial Life),
although other fields such as economics and linguistics could soon begin to
profit from the new research strategies made possible by computer
simulations.
Yet, before a given scientific discipline can begin to gain from the use of
virtual environments, more than just casting old assumptions into
mathematical form is necessary. In many cases the assumptions
themselves need to be modified. This is clear in the case of Artificial
Intelligence research, much of which is still caught up in older paradigms
of what a symbol-manipulating "mind" should be, and hence has not
benefited as much as it could from the simulation capabilities of computers.
Artificial Life, on the other hand, has the advantage that the evolutionary
biologist's conceptual base has been purged of classical notions of what
living creatures and evolution are supposed to be, and this has put this
discipline in an excellent position to profit from the new research tool
represented by these abstract spaces. Since this is a crucial point, let's take
a careful look at just what this "purging" has involved.
The first classical notion that had to be eliminated from biology was the
Aristotelian concept of an "ideal type", and this was achieved by the
development of what came to be known in the 1930's as "population
thinking". In the old tradition that dominated biological thought for over two
thousand years, a given population of animals was conceived as being the
more or less imperfect incarnation of an ideal essence. Thus, for example, in
the case of zebras, there would exist an ideal zebra, embodying all the
attributes which together make for "zebrahood" (being striped, having hoofs,
etc.). The existence of this essence would be obscured by the fact that in any
given population of zebras the ideal type would be subjected to a multiplicity
of accidents (of embryological development, for instance), yielding as an end
result a variety of imperfect realizations. In short, in this view, only the ideal
essence is real, with the variations being but mere shadows.
When the ideas of Darwin on the role of natural selection and those of
Mendel on the dynamics of genetic inheritance were brought together six
decades ago, the domination of the Aristotelian paradigm came to an end. It
became clear, for instance, that there was no such thing as a preexistent
collection of traits defining "zebrahood". Each of the particular adaptive traits
which we observe in real zebras developed along different ancestral
lineages, accumulated in the population under the action of different
selection pressures, in a process that was completely dependent on specific
(and contingent) historical details. In other words, just as these traits
(camouflage, running speed and so on) happened to come together in
zebras, they may not have, had the actual history of those populations been
any different.
Moreover, the engine driving this process is the genetic variability of zebra
populations. Only if zebra genes replicate with enough variability can
selection pressures have raw materials to work with. Only if enough variant
traits arise spontaneously, can the sorting process of natural selection bring
together those features which today define what it is to be a zebra. In short,
for population thinkers, only the variation is real, and the ideal type (e.g. the
average zebra) is a mere shadow. Thus we have a complete inversion of the
classical paradigm. 1
Further refinement of these notions has resulted in the more general idea
that the coupling of any kind of spontaneous variation to any kind of
selection pressure results in a sort of "searching device". This "device"
spontaneously explores a space of possibilities (i.e. possible combinations
of traits), and is capable of finding, over many generations, more or less
stable combinations of features, more or less stable solutions to problems
posed by the environment. This "device" has today been implemented in
populations that are not biological. This is the so-called "genetic
algorithm" (developed by John Holland) in which a population of computer
programs is allowed to replicate in a variable form, and after each
generation a test is performed to select those programs that most closely
approximate the desired performance. It has been found that this method is
capable of zeroing in on the best solutions to a given programming task. In
essence, this method allows computer scientists to breed new solutions to
problems, instead of directly programming those solutions. 2
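The genetic algorithm described above can be sketched in a few lines: a population of candidate "programs" (here simple bit strings) replicates with random variation, and after each generation a test selects those closest to the desired performance. The target string and the parameter values are illustrative assumptions of mine, not Holland's:

```python
import random

random.seed(0)                       # make the run reproducible
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]    # stands in for the "desired performance"

def fitness(candidate):
    """How closely a candidate approximates the desired performance."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def evolve(pop_size=30, generations=60, mutation_rate=0.05):
    # Start from a population of random bit strings.
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # the selection test
        # Replication in variable form: each survivor breeds a mutated child.
        children = [[bit ^ int(random.random() < mutation_rate)
                     for bit in parent] for parent in survivors]
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

No one programs the solution directly; the coupling of variation to a selection test "breeds" it, which is precisely the searching device described above.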
The difference between the genetic algorithm and the more ambitious goals
of Artificial Life is the same as that between the action of human breeding
techniques on domesticated plants and animals, and the spontaneous
evolution of the ancestors of those plants and animals. Whereas in the first
case the animal or plant breeder determines the criterion of fitness, in the
second one there is no outside agency determining what counts as fit. In a
way what is fit is simply that which survives, and this has led to the criticism
that Darwinism's central formula (i.e. "survival of the fittest") is a mere
tautology ("survival of the survivor"). Partly to avoid this criticism this formula
is today being replaced by another one: survival of the stable. 3
optimal solution has been found, any mutant strategy arising in the
population is bound to be defeated. The strategy will be, in this sense, stable
against invasion. To put it in visual terms, it is as if the space of possibilities
explored by the "searching device" included mountains and valleys, with the
mountain peaks representing points of optimal performance. Selection
pressures allow the gene pool of a reproductive population to slowly climb
those peaks, and once a peak has been reached, natural selection keeps
the population there.
One may wonder just what has been achieved by switching from the
concept of a "fittest mutant" to that of an "optimal" one, except perhaps, that
the latter can be defined contextually as "optimal given existing constraints".
However, the very idea that selection pressures are strong enough to pin
populations down to "adaptive peaks" has itself come under intense
criticism. One line of argument says that any given population is subjected
to many different pressures, some of them favoring different optimal results.
For example, the beautiful feathers of a peacock are thought to arise due to
the selection pressure exerted by "choosy" females, who will only mate with
those males exhibiting the most attractive plumage. Yet, those same vivid
colors which seduce the females also attract predators. Hence, the male
peacock's feathers will come under conflicting selection pressures. In these
circumstances, it is highly improbable that the peacock's solution will be
optimal and much more likely that it will represent a compromise. Several
such sub-optimal compromises may be possible, and thus the idea that the
solution arrived at by the "searching device" is unique needs to be
abandoned. 4 But if unique and optimal solutions are not the source of
stability in biology, then what is?
The answer to this question represents the second key idea around which
the field of Artificial Life revolves. It is also crucial to understand the potential
application of virtual environments to fields such as economics. The old
conceptions of stability (in terms of either optimality or principles of least
effort) derive from nineteenth century equilibrium thermodynamics. It is well
known that philosophers like Auguste Comte and Herbert Spencer (author of
the formula "survival of the fittest") introduced thermodynamic concepts into
social science. However, some contemporary observers complain that what
was so introduced (in economics, for example) represents "more heat than
light". 5
This static conception of stability was the second classical idea that needed
to be eliminated before the full potential of virtual environments could be
unleashed. Like population thinking, the fields that provided the needed new
insights (the disciplines of far-from-equilibrium thermodynamics and
nonlinear mathematics) are also a relatively recent development, associated
with the name of Ilya Prigogine, among others. Unlike the
"conservative" systems dealt with by the old science of heat, systems which
are totally isolated from their surroundings, the new science deals with
systems that are subjected to a constant flow of matter and energy from the
outside. Because this flow must also exit the system in question, that is, the
waste products need to be dissipated, these systems are called "dissipative". 7
For our purposes here what matters is that once a continuous flow of matter-
energy is included in the model, a wider range of possible forms of dynamic
equilibria becomes possible. The old static stability is still one possibility,
except that now these equilibrium points are neither unique nor optimal (and
yet they are more robust than the old equilibria). Non-static equilibria also
exist, in the form of cycles, for instance. Perhaps the most novel type of
stability is that represented by "deterministic chaos", in which a given
population can be pinned down to a stable, yet inherently variable,
dynamical state. These new forms of stability have received the name of
"attractors", and the transitions which transform one type of attractor into another are known as "bifurcations".
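All three kinds of stability named here can be exhibited by one very simple nonlinear system. The sketch below is an illustrative aside (the logistic map is a textbook example, not a model discussed in the text): varying a single control parameter moves the same system from a point attractor, through a cycle, into deterministic chaos.

```python
# The logistic map x -> r*x*(1-x): a population rule whose long-run behavior
# depends on the control parameter r.
def trajectory(r, x=0.3, warmup=1000, keep=8):
    # Discard transients so we observe only the attractor itself.
    for _ in range(warmup):
        x = r * x * (1 - x)
    out = []
    for _ in range(keep):
        x = r * x * (1 - x)
        out.append(round(x, 4))
    return out

point_attractor = trajectory(2.8)  # settles onto a single value
cycle = trajectory(3.2)            # alternates forever between two values
chaos = trajectory(3.9)            # stays bounded, yet inherently variable
```

The chaotic trajectory is exactly the "stable, yet inherently variable, dynamical state" the text describes: the population is pinned to a bounded region without ever repeating itself.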
The typical Artificial Life experiment involves first the design of a simplified
version of an individual animal, which must possess the equivalent of a set
of genetic instructions used both to create its offspring as well as to be
transmitted to that offspring. This transmission must also be "imperfect"
enough, so that variation can be generated. Then, whole populations of
these "virtual animals" are unleashed, and their evolution under a variety of
selection pressures observed. The exercise will be considered successful if
novel properties, unthought of by the designer, spontaneously emerge from
this process.
remained (at least until the 1980's) largely top-down and analytical. Instead
of treating the symbolic properties they study as the emergent outcome of a
dynamical process, these researchers explicitly put symbols (labels, rules,
recipes) and symbol-manipulating skills into the computer. When it was
realized that logic alone was not enough to manipulate these symbols in a
significantly "intelligent" way, they began to extract the rules of thumb, tricks
of the trade and other non-formal heuristic knowledge from human experts,
and put these into the machine, again as fully formed symbolic structures.
In other words, in this approach one begins at the top, the global behavior of
human brains, instead of at the bottom, the local behavior of neurons. Some
successes have been scored by this approach, notably in simulating skills
such as those involved in playing chess or proving theorems, both of which
are evolutionarily rather late developments. Yet the symbolic paradigm of
Artificial Intelligence has failed to capture the dynamics of evolutionarily
more elementary skills such as face-recognition or sensory-motor control. 10
Although a few attempts had been made during the 1960's to take a more
bottom-up approach to modeling intelligence (e.g. the perceptron), the
defenders of the symbolic paradigm practically killed their rivals in the battle
for government research funds. And so the analytical approach dominated
the scene until the 1980's when there occurred a spectacular rebirth of a
synthetic design philosophy. This is the new school of Artificial Intelligence
known as "connectionism". Here, instead of one large, powerful computer
serving as a repository for explicit symbols, we find a large number of small,
rather simple computing devices (in which all that matters is their state of
activation), interacting with one another either to excite or inhibit each
other's degree of activation. These simple processors are then linked
together through a pattern of interconnections which can vary in strength.
No explicit symbol is ever programmed into the machine since all the
information needed to perform a given cognitive task is coded in the
interconnection patterns as well as the relative strengths of these
interconnections. All computing activity is carried out by the dynamical
activity of the simple processors as they interact with one another (i.e. as
excitations and inhibitions propagate through the network), and the
processors arrive at the solution to a problem by settling into a dynamical
state of equilibrium. (So far, point attractors are the most commonly used.)
These networks also have the ability to generalize from the patterns they
have learned, and so will be able to recognize a new pattern that is only
vaguely related to one they have been previously exposed to. In other
words, the ability to perform simple inductive inferences emerges in the
network without the need to explicitly code into it the rules of a logical
calculus. These designs are also resilient against damage unlike their
symbolic counterparts which are inherently brittle. But perhaps the main
advantage of the bottom-up approach is that its devices can exhibit a degree
of "intentionality".
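The settling process described above can be miniaturized. The following Hopfield-style sketch makes simplifying assumptions (six units, two stored patterns, synchronous updates) and is not any specific model from the connectionist literature; it shows information stored purely in interconnection strengths, with recall as relaxation into a point attractor.

```python
# Two patterns to be stored, with unit activations of +1 or -1.
patterns = [
    [1, -1, 1, -1, 1, -1],
    [1, 1, 1, -1, -1, -1],
]
n = len(patterns[0])

# Hebbian learning: each weight records how strongly two units agree
# across the stored patterns. No explicit symbol is ever programmed in.
W = [[0 if i == j else sum(p[i] * p[j] for p in patterns) for j in range(n)]
     for i in range(n)]

def settle(state, steps=10):
    # Excitations and inhibitions propagate until the network relaxes
    # into a dynamical equilibrium (a point attractor).
    for _ in range(steps):
        state = [1 if sum(W[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state

# A corrupted version of the first pattern (one unit flipped)...
noisy = [1, -1, 1, -1, 1, 1]
# ...settles back into the stored pattern: recall as equilibrium-seeking.
recalled = settle(noisy)
```

The damaged input is completed rather than looked up, which is also why such designs degrade gracefully: losing one weight shifts the attractor slightly instead of destroying a stored symbol.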
In the real world we find realizations of this dilemma in, for example, the
phenomenon known as "bank runs". When news that a bank is in trouble
first comes out, each individual depositor has two options: either to rush to
the bank and withdraw his savings or to stay home and allow the bank to
recover. Each individual also knows that the best outcome for the
community is for all to leave their savings in the bank and so allow it to
survive. But no one can afford to be the one who loses his savings, so all
rush to withdraw their money ruining the institution in the process.
Hofstadter offers a host of other examples, including one in which the choice
to betray or cooperate is faced by the participants not once, but repeatedly.
For instance, imagine two "jungle traders" with a rather primitive system of
trade: each simply leaves a bag of goods at a predefined place, and comes
back later to pick another bag, without ever seeing the trading partner. The
idea is that on every transaction, one is faced with a dilemma, since one can
profit the most by leaving an empty bag and sticking the other with the
"sucker payoff". Yet, the difference is that doing this endangers the trading
situation and hence there is more to lose in case of betrayal here. (This is
called the "iterated Prisoner's Dilemma").
Axelrod's creatures played such an iterated version of the game with one
another. What matters to us here is that after several decades of applying
analytical techniques to study these situations, the idea that "good guys
finish last" (i.e. that the most rational strategy is to betray one's partner) had
become entrenched in academic (and think tank) circles. For example, when
Axelrod first requested entries for his virtual tournament most of the
programs he received were "betrayers". Yet, the winner was not. It was
"nice" (it always cooperated in the first encounter so as to give a sign of
good faith and begin the trading situation), "retaliatory" (if betrayed it would
respond with betrayal in the next encounter) yet "forgiving" (after retaliating it
was willing to reestablish a partnership). As mentioned above, these were
not truly intentional creatures so the properties of being "nice, retaliatory and
forgiving" were like emergent properties of a much simpler design. Its name
was "TIT-FOR-TAT" and its actual strategy was simply to always cooperate
in the first move and thereafter do what the other player did in the previous
move. This program won because the criterion of success was not how
many partners one beats, but how much overall trade one achieves.
Because the idea that "good guys finish last" had become entrenched,
further analysis of the situation (which could have uncovered the fact that
this principle does not apply to the "iterated" version of the game), was
blocked. What was needed was to unblock this path by using a virtual
environment to "synthesize" a fresh intuition. And in a sense, that is just what
Axelrod did. He then went further and used more elaborate simulations
(including one in which the creatures replicated, with the number of progeny
being related to the trading success of the parent), to generate further
intuitions as to how cooperative strategies could evolve in an ecological
environment, how robust and stable these strategies were, and a host of
other questions. Evolutionary biologists, armed with these fresh insights,
have now discovered that apes in their natural habitats play a version of TIT-
FOR-TAT. 14
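Axelrod's tournament is easy to miniaturize. The sketch below uses three invented toy strategies rather than the dozens actually submitted, so it is illustrative only; still, it reproduces the key point that success is measured by total trade achieved, not partners beaten, and under that criterion TIT-FOR-TAT comes out on top.

```python
# Standard Prisoner's Dilemma payoffs: (my move, their move) -> my score.
PAYOFF = {
    ('C', 'C'): 3, ('C', 'D'): 0,   # 0 is the "sucker payoff"
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(my_history, their_history):
    # Nice, retaliatory, forgiving: cooperate first, then mirror the partner.
    return their_history[-1] if their_history else 'C'

def always_defect(my_history, their_history):
    return 'D'

def always_cooperate(my_history, their_history):
    return 'C'

def play(strategy_a, strategy_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Round-robin tournament (including self-play, as in Axelrod's setup):
# the criterion is overall trade accumulated, not opponents defeated.
strategies = [tit_for_tat, always_defect, always_cooperate]
totals = {s.__name__: 0 for s in strategies}
for i, a in enumerate(strategies):
    for b in strategies[i:]:
        sa, sb = play(a, b)
        totals[a.__name__] += sa
        totals[b.__name__] += sb
```

Note that TIT-FOR-TAT never outscores a partner within a single match; it wins the tournament by sustaining profitable trading relationships that the betrayer destroys.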
Thus, while some of the uses of virtual environments presuppose that old
and entrenched ideas (about essences or optimality) have been
superseded, these abstract worlds can also be used to synthesize the
intuitions needed to dislodge other ideas blocking the way to a better
understanding of the dynamics of reality.
Virtual environments may not only allow us to capture the fluid and changing
nature of real languages, they could also be used to gain insights into the
processes that tend to "freeze" languages, such as the processes of
standardization which many European languages underwent beginning in
the seventeenth century. Unlike the cases of Spanish, Italian and French,
where the fixing of the rules and vocabulary of the language were enforced
by an institution (e.g. an Academy), in England the process of
standardization was carried out via the mass publication of authoritative
dictionaries, grammars and orthographies. Just how these "linguistic
engineering" devices achieved the relative freezing of what was formerly a
fluid "linguistic matter", may be revealed through a computer simulation.
Similarly, whenever a language becomes standardized we witness the
political conquest of many "minority" dialects by the dialect of the urban
capital (London's dialect in the case of English). Virtual environments could
allow us to model dynamically the spread of the dominant dialect across
cultural and geographical barriers, and how technologies such as the
railroad or the radio (e.g. the BBC) allowed it to surmount such barriers. 17
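A first approximation to such a model can be surprisingly small. Everything in the sketch below is hypothetical (the line of speakers, the adoption probability, the single low-contact "barrier" cell are all invented for illustration): a standard dialect seeded at the capital end of a line of speakers spreads from neighbor to neighbor, slowed but not stopped by the barrier.

```python
import random
random.seed(7)  # fixed seed so this illustrative run is reproducible

ADOPTION = 0.5        # chance per step of adopting the standard from a neighbor
BARRIER = {25: 0.05}  # a geographical/cultural barrier: contact is much rarer here

# Fifty speakers of a local dialect 'L'; the standard 'S' starts at the
# capital (index 0).
speakers = ['S'] + ['L'] * 49

for step in range(600):
    for i in range(1, len(speakers)):
        if speakers[i] == 'L' and speakers[i - 1] == 'S':
            # Standardization is one-way in this toy: 'S' is never abandoned.
            if random.random() < BARRIER.get(i, ADOPTION):
                speakers[i] = 'S'

spread = speakers.count('S') / len(speakers)
```

Even this crude setup shows the qualitative behavior the text suggests: the barrier delays the wave of standardization without ultimately deflecting it.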
Future linguists may one day look back with curiosity at our twentieth
century linguistics, and wonder if our fascination with a static (synchronic)
view of language could not be due to the fact that the languages where
these views were first formulated (French and English), had lost their fluid
nature by being artificially frozen a few centuries earlier. These future
investigators may also wonder how we thought the stability of linguistic
structures could be explained without the concept of an attractor. How, for
instance, could the prevalence of certain patterns of sentence structure (e.g.
subject-verb-object, or "SVO") be explained, or how could bifurcations from
one pattern to another be modeled without some form of nonlinear
stabilization (e.g. English, which may have switched over a millennium from
SOV to SVO)? 18 Tomorrow's linguists will also realize that, because these
dynamic processes depend on the existence of heterogeneities and other
nonlinearities, the reason we could not capture them in our models was due
to the entrenchment of the Chomskian idea of a homogeneous speech
community of monolinguals, in which each speaker has equal mastery of the
language.
Sociolinguists have also tackled the study of the second element: selection
pressures. The latter can take a variety of forms. In small communities,
where language style serves as a badge of identity, peer pressure in social
networks can act as a filtering device, promoting the accumulation of those
forms and structures that maintain the integrity of the local dialect. On the
other hand, stigmatization of certain forms by the speakers of the standard
language (particularly when reinforced by a system of compulsory
education) can furnish selection pressures leading to the elimination of local
styles. Despite these efforts, formalism is still well entrenched in linguistics
and so this discipline cannot currently benefit from the full potential of virtual
environments. (Which does not mean, of course, that computers are not
used in linguistic investigations, but this use remains analytical and top-
down instead of synthetic and bottom-up.)
Not only are economic agents now viewed as severely limited in their
computational skills, but this bounded rationality is being located in the
context of the specific organizations where it operates and where it is further
constrained by the daily routines that make up an "organizational memory".
In other words, not only is decision-making within organizations performed
on the basis of adaptive beliefs and action rules (rather than optimizing
rationality), but much of it is guided by routine procedures for producing
objects, for hiring/firing employees, for investing in research and
development and so on. Because these procedures are imperfectly copied
whenever a firm opens up a new plant, this process gives us the equivalent
of variable reproduction. 22 A changing climate for investment, following the
ups and downs of boom years and recessions, provides some of the
selection pressures that operate on populations of organizations. Other
pressures come from other organizations, as in natural ecosystems, where
other species (predators, parasites) are also agents of natural selection.
Here giant corporations, which have control over their prices (and hence are
not subjected to supply/demand pressures), play the role of predators,
dividing their markets along well defined territories (market shares).
Several explanations have been proposed since then, but none has gained complete acceptance. What matters to us here is
that the M.I.T. model endogenously generates this periodic oscillation, and
that this behavior emerged spontaneously from the interaction of
populations of organizations, to the surprise of the designers, who were in
fact unaware of the literature on Kondratieff cycles. 23
The key ingredient which allows this and other models to generate
spontaneous oscillations, is that they must operate far-from-equilibrium. In
traditional economic models, the only dynamical processes that are included
are those that keep the system near equilibrium (such as "diminishing
returns" acting as negative feedback). The effects of explosive positive
feedback processes (such as "economies of scale") are typically minimized.
But it is such self-reinforcing processes that drive systems away from
equilibrium, and this together with the nonlinearities generated by imperfect
competition and bounded rationality, is what generates the possibility of
dynamical stabilization. 24
In the M.I.T. model, it is precisely a positive feedback loop that pushes the
system towards a bifurcation, where a point attractor suddenly becomes a
cyclic one. Specifically, the sector of the economy which creates the
productive machinery used by the rest of the firms (the capital goods
sector), is prone to the effects of positive feedback because whenever the
demand for machines grows, this sector must order from itself. In other
words, when any one firm in this sector needs to expand its capacity to meet
growing demand, the machines used to create machines come from other
firms in the same sector. Delays and other nonlinearities can then be
amplified by this feedback loop, giving rise to stable yet periodic behavior. 25
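This loop can be miniaturized. The sketch below is not the M.I.T. System Dynamics model itself but the classic multiplier-accelerator toy, with parameters invented purely for illustration: because investment responds to the growth of spending rather than its level, a positive feedback turns the point equilibrium of income into a sustained oscillation.

```python
G = 100.0  # constant autonomous spending
c = 0.5    # propensity to consume out of last period's income
v = 2.0    # accelerator: investment per unit of consumption growth

equilibrium = G / (1 - c)  # income at the point attractor

# Start the economy below its equilibrium and let the dynamics run.
Y = [equilibrium - 20.0, equilibrium - 20.0]
for t in range(2, 100):
    consumption = c * Y[t - 1]
    # Positive feedback: investment reacts to the *growth* of consumption,
    # not to its level, so any movement feeds on itself.
    investment = v * (c * Y[t - 1] - c * Y[t - 2])
    Y.append(consumption + investment + G)

# Income now cycles around the equilibrium instead of settling onto it.
crossings = sum(
    (Y[t] - equilibrium) * (Y[t + 1] - equilibrium) < 0 for t in range(99)
)
```

With these parameter values the oscillation neither dies out nor explodes: the point attractor has given way to a cyclic one, which is the kind of bifurcation the text attributes to the capital goods sector.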
"From the physicist's point of view this involves a distinction between states
of the system in which all individual initiative is doomed to insignificance on
one hand, and on the other, bifurcation regions in which an individual, an
idea, or a new behavior can upset the global state. Even in those regions,
amplification obviously does not occur with just any individual, idea, or
behavior, but only with those that are 'dangerous' - that is, those that can
exploit to their advantage the nonlinear relations guaranteeing the stability of
the preceding regime. Thus we are led to conclude that the same
nonlinearities may produce an order out of the chaos of elementary processes." 26
At any rate, the crucial point is to recognize the existence, in all spheres of
reality, of the reservoir of possibilities represented by nonlinear stabilization
and diversification (a reservoir I have somewhere else called "the machinic
phylum"). 27
We must also recognize that, by their very nature, systems governed by
nonlinear dynamics resist absolute control, and that sometimes the machinic
phylum can only be tracked, or followed. For this task even our modicum of
free will may suffice. The "searching device" constituted by genetic variation
and natural selection, does in fact track the machinic phylum. That is,
biological evolution has no foresight, and it must grope in the dark, climbing
from one attractor to another, from one evolutionarily stable strategy to
another. And yet, it has produced the wonderfully diverse and robust
ecosystems we observe today. Perhaps one day virtual environments will
become the tools we need to map attractors and bifurcations, so that we too
can track the machinic phylum in search of a better destiny for humanity.
MANUEL DE LANDA
NOTES:
1) Elliot Sober. The Nature of Selection. (MIT Press, 1987), pages 157-161.
2)
3)
4)
5)
6) John Maynard Smith. "Evolution and the Theory of Games." In Did Darwin Get it Right?: Essays on Games, Sex and Evolution. (Chapman and Hall, NY, 1989).
7) Ilya Prigogine and Isabelle Stengers. Order Out of Chaos. (Bantam Books, NY, 1984).
8) Ian Stewart. Does God Play Dice?: The Mathematics of Chaos. (Basil Blackwell, Oxford, 1989), pages 95-110.
9)
10)
11)
12)
13)
14)
15)
16)
17)
18)
19) David DeCamp. "The Study of Pidgin and Creole Languages." In Dell Hymes, ed. Pidginization and Creolization of Languages. (Cambridge University Press, 1971).
20)
21)
22)
23)
24)
25)
26)
27)
ILLogical Progression
Manuel De Landa
What is history? Theories of the substance and form of history come and go
with the frequency of styles and trends - some are flashes across the cultural
landscape that show us for a brief moment another light to view the world
through, while others remix the ways we view time, instinct, and perceive the
warp and weave of the consensual social world around us. Our ideas about
continuity are woven from a strange cloth that for better or worse, is pretty
much anthropocentric, and the threads that hold it all together are, at best,
utterly tenuous. The way Proust or Tolstoy viewed history - compared to
Thucydides or Herodotus, or Hegel and Marx, is vastly different - let's not
forget about the 20th century where the works of people as diverse as Joyce
or Fernand Braudel, Carolyn Merchant or Sadie Plant, William S. Burroughs
and Rupert Sheldrake all share shelf space. For us living in our contemporary
hypermediated world of distributed networks and frequency exchanges, these
people come to us through the filter of media, a place where all expression
has a short shelf life, and at best almost all ideas are phantasms, fading
reputations - rumors of an "historical" existence most readers would be hard
pressed to really verify. Time itself sometimes seems to be debatable and
indeed, it often is. But where does Western style of history begin, and where,
if there is a beginning, does it end? "Herodotus of Halicarnassus here
displays his story, so that human achievements may not become forgotten in
time, and great and marvellous deeds...may not be without their glory..." we
are told in one of the first accounts of epic history in the West. Back in the
5th century B.C. Herodotus was one of several historians who began the kind
of history we all take for granted today - events for him took place as an
interwoven account of events, dates, and the interactions of different
individuals in a "non-linear" mode - his was a kind of oral history where
words were like hypertext, each phrase a reference to countless other
references, and so on and so on. "I prefer to rely on my own knowledge, and
to point out who it was in actual fact that first [took action]...then I will
proceed with my history, telling the story as I go of small cities of men no
less than of great. For most of those which were great once are small today;
and those which used to be small were great in my own time. Knowing,
therefore, that human prosperity never abides long in the same place, I shall
pay attention to both alike...." His story began with a declaration of who he
was and what he was doing, and ended with a summary of the accounts of
the people he had given narrative.
Back in 1992, art critic and historian Thomas McEvilley wrote in his classic
shift in perspective takes place, creating a world where words act as a bridge
across broken and fractured "times" that exist pretty much simultaneously.
Delanda has given an overview of what he calls "historic
materialism" and how the basic processes of the full environment we live in
have shaped contemporary thought.
I have to say it -- writing about Manuel Delanda's work is difficult. It isn't the
fact that his work is extremely well researched (it is), or the fact that much of
it involves extremely precise investigations into different realms of theoretical
approaches to the way we humans live and think in different multiplex
contexts that themselves are part question part unanswerable - for lack of a
better word - motif. With Manuel Delanda, there's always that sense that one
thing leads to another and basically there is no discrete and stable form of
inquiry: the question changes and configures the answer which again,
reconfigures what was originally asked. And so the loops go on. Delanda's
first book "War in the Age of Intelligent Machines" was an instant classic; it
encapsulated what so many different theorists were trying (without much
success) to achieve - an overview of our time and the different historical
cybernetic developments that created the milieu we live in. Call it morphic
resonance, or material convergence, or hermeneutical fusion, yada yada
yada, but you get the basic idea: that the "real" of the kind that theorists and
philosophers such as Heidegger (of "The Age of the World Picture" essay
fame) and a whole cast of people supporting the ideas and extensions of
European rationality like Francis Fukuyama and Fredric Jameson have not
only been crippling and distorting frameworks to view human history from,
but have also divorced us from the physical processes of the world of flux
and constant change that we are immersed in. Written from the viewpoint of
a cybernetic historian, De Landa's "War in the Age of Intelligent Machines"
created a sense of "historian as actor as philosopher:" the history of and
relationships between humans and the machines we use to create our
cultures were transformed into a continuum where the line dividing the
organic from the inorganic blurs, and the end result is something altogether
completely hybrid. "One Thousand Years of Nonlinear History" takes up
where "War..." left off: with a critique this time, of the material processes
embedded in the migrations of not only human cultures, but of the geological,
biological, linguistic and memetic systems that have impacted on this planet
and its inhabitants over the last thousand years. Teleology, contemporary
constructions of identity, frameworks of philosophical investigation - in the
flow of time like the old Borges poem "The Hourglass" says: "all are
obliterated, all brought down By the tireless trickle of the endless sand. I do
not have to save myself - I too Am a whim of time, that shifty element..."
"Those who would speak with understanding should base themselves upon
what is common to all (things), just as a polis (does) upon its law - and much
more firmly: for all [=political] laws nourish themselves upon one, the divine
(law); for it prevails as far as it will and is sufficient for all and is still left
over." -- Heraclitus of Ephesus, fragment 114, 6th century B.C.
Manuel De Landa: One of the ideas that I attack in my book is precisely the
primacy of "interpretations" and of "conceptual frameworks." Sure, ideas and
beliefs are important, and do play a role in history, but academics of different
brands have reduced all material and energetic processes, and all human
practices that are not linguistic or interpretative (think of manual skills, of
"know-how") to a "framework." The 20th century has been obsessed with
positioning everything. Every culture, given that it has its own framework of
beliefs, has become its own "world" and relativism (both moral and
epistemological) now prevails. But once you break away from this outmoded
view, once you accept all the non-linguistic practices that really make up
society (not to mention the non-human elements that also shape it, such as
viruses, bacteria, weeds or non-organic energy and material flows like wind
and ocean currents) then language itself becomes just another material that
flows through a much expanded picture. Language, in my view, is best
thought of as a catalyst, a trigger for energetic processes (think of the words
"begin the battle" triggering an enormous and destructive process). The
question of "missed opportunities" is important, since for most of the
millennium both China and India had in fact a better chance to conquer the
world than did the West, so that the actual outcome, a world dominated by
Western colonialism, was quite contingent. Things could have happened in
several other ways.
What are your thoughts on digital art and its relationship to the
different forms of communication in our dense and continuously
changing world? Is there any return to the "comforts" of a
"homogeneous" culture on the horizon?
Here again we have two different answers depending on whether you believe
in "conceptual frameworks" or not. If you do, then you also believe that
there's such a thing as "the bourgeois ideology of the individual," a pervasive
framework within which all artistic production of the last few centuries is
inscribed. But if you do not believe there was ever such a thing, then history
becomes much less homogenous, much less dominated by any one
framework, and hence you begin to look at all the different ways in which art
has escaped the conditions of its production (which admittedly, did include
ruling classes as suppliers of resources). Put differently, once you admit that
history has been much more complex and heterogeneous than we have been
told, then even the "enemy" looks less in control of historical processes than
we thought. In a sense, what I am trying to do is to cut the enemy down to
size, to see all the potential escape routes that we have been overlooking by
exaggerating the importance of "frameworks" or "ideologies." Clearly, if the
enemy was never as powerful as we thought (which is not to say that it did
not have plenty of power), the question of the role of art
(digital or otherwise) in changing social reality acquires new meanings and
possibilities.
There are two main differences between my philosophical ideas about history
and those of previous philosophers. The first one, which is shared by many
these days, is a rejection of Platonic essences as sources of form, you know,
the idea that the form of this mountain here or of that zebra over there
emanates from an essence of "mountain-hood" or of "zebra-hood" existing in
some ideal world or in the mind of the God that created these creatures.
Instead, for each such entity (not only geological and biological entities, but
also social and economic ones), I force myself to come up with a process
capable of creating or producing such an entity. Sometimes these processes
are already figured out by scientists (in those disciplines linked to questions
of morphogenesis, like chaos theory and non-linear dynamics) and so I just
borrow their model, other times I need to create new models using
philosophical resources - and people like Deleuze and Guattari have been
very helpful in this regard. The other difference is my rejection of the
existence of totalities, that is, entities like "Western Society" or the "Capitalist
System." The morphogenetic point of view does allow for the emergence of
wholes that are more than the sum of their parts, but only if specific
historical processes, specific interactions between "lower scale entities," can
be shown to have produced such wholes. Thus, in my view, institutional
organizations like bureaucracies, banks, and stock markets acquire a life of
their own from the interactions of individuals. From the interactions of those
institutions cities emerge, and from the interactions between cities nation
states emerge. Yet, in these bottom-up approaches, all the heterogeneity of
real nation states can be retained - the pockets of minorities, the dialect
differences, the local transience - unlike when history is modeled on totalities (concepts like
"society" or "culture" or "the system"). In this latter situation homogeneity
has to be artificially injected into the model.
I agree that the domination of this century by linguistics and semiotics (which
is what allows us to reduce everything to talk of "frameworks of
interpretation"), not to mention the post-colonial guilt of so many white
intellectuals which forces them to give equal weight to any other culture's
belief system, has had a very damaging effect, even on art. Today I see art
students trained by guilt-driven semioticians or post-modern theorists, afraid
of the materiality of their medium, whether painting, music, poetry or virtual
reality (since, given the framework dogma, every culture creates its own
reality). The key to break away from this is to cut language down to size, to
give it the importance it deserves as a communications medium, but to stop
worshipping it as the ultimate reality. Equally important is to adopt a hacker
attitude towards all forms of knowledge: not only to learn UNIX or Windows
NT to hack this or that computer system, but to learn economics, sociology,
physics, biology to hack reality itself. It is precisely the "can do" mentality of
the hacker, naive as it may sometimes be, that we need to nurture
everywhere.
[from DJ Spooky.com]
© 2004 frontwheeldrive.com
MANUEL DE LANDA
Homes:
Meshwork or Hierarchy?
The computer simulation of evolutionary processes is already a well established technique for the study
of biological dynamics. One can unleash within a digital environment a population of virtual plants or
animals and keep track of the way in which these creatures change as they mate and pass their virtual
genetic materials to their offspring. The hard work goes into defining the relation between the virtual
genes and the virtual bodily traits that they generate; everything else (keeping track of who mated with
whom, assigning fitness values to each new form, determining how a gene spreads through a population
over many generations) is performed automatically by certain computer programs collectively
known as “genetic algorithms”. The study of the formal and functional properties of this type of
software has now become a field in itself, quite separate from the applications in biological research
which these simulations may have. In this essay I will deal neither with the computer science aspects of
genetic algorithms (as a special case of “search algorithms”) nor with their use in biology, but focus
instead on the applications which these techniques may have as aids in artistic design.
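The bookkeeping just described (tracking who mates with whom, assigning fitness values, letting genes spread through a population over generations) can be made concrete in a short sketch. What follows is a minimal, illustrative genetic algorithm, not any particular research package: the genome encoding, the fitness function, and every parameter are stand-ins chosen for brevity.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

GENOME_LENGTH = 8
POP_SIZE = 30
GENERATIONS = 40
MUTATION_RATE = 0.1

def random_genome():
    # A genome here is just a list of real-valued "genes".
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LENGTH)]

def fitness(genome):
    # Placeholder trait evaluation: prefer genomes whose genes sum near 4.
    return -abs(sum(genome) - 4.0)

def crossover(a, b):
    # Single-point crossover: the offspring inherits a prefix from one
    # parent and the remainder from the other.
    point = random.randrange(1, GENOME_LENGTH)
    return a[:point] + b[point:]

def mutate(genome):
    # Each gene is independently perturbed with small probability.
    return [g + random.gauss(0, 0.2) if random.random() < MUTATION_RATE else g
            for g in genome]

def evolve():
    population = [random_genome() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Rank by fitness; the fitter half survives and reproduces.
        population.sort(key=fitness, reverse=True)
        survivors = population[: POP_SIZE // 2]
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(POP_SIZE - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
```

After a few dozen generations the population converges on genomes close to the arbitrary target, which is all the bookkeeping of mating, fitness and propagation amounts to once it is automated.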
In a sense evolutionary simulations replace design, since artists can use this software to breed new forms
rather than specifically design them. This is basically correct but, as I argue below, there is a part of the
process in which deliberate design is still a crucial component. Although the software itself is relatively
well known and easily available, so that users may get the impression that breeding new forms has
become a matter of routine, the space of possible designs that the algorithm searches needs to be
sufficiently rich for the evolutionary results to be truly surprising. As an aid in design these techniques
would be quite useless if the designer could easily foresee what forms will be bred. Only if virtual
evolution can be used to explore a space rich enough so that all the possibilities cannot be
considered in advance by the designer, only if what results shocks or at least surprises, can genetic
algorithms be considered useful visualization tools. And in the task of designing rich search spaces
certain philosophical ideas, which may be traced to the work of Gilles Deleuze, play a very important
role. I will argue that the productive use of genetic algorithms implies the deployment of three forms of
philosophical thinking (populational, intensive, and topological thinking) which were not invented by
Deleuze but which he has brought together for the first time and made the basis for a brand new
conception of the genesis of form.
To be able to apply the genetic algorithm at all, a particular field of art needs to first solve the problem
of how to represent the final product (a painting, a song, a building) in terms of the process that
generated it, and then, how to represent this process itself as a well-defined sequence of operations. It is
this sequence, or rather, the computer code that specifies it, that becomes the “genetic material” of the
painting, song, or building in question. In the case of architects using computer-aided design (CAD) this
problem becomes greatly simplified given that a CAD model of an architectural structure is already
given by a series of operations. A round column, for example, is produced by a series such as this: 1)
draw a line defining the profile of the column; 2) rotate this line to yield a surface of revolution; 3)
perform a few “Boolean subtractions” to carve out some detail in the body of the column. Some
software packages store this sequence and may even make available the actual computer code
corresponding to it, so that this code now becomes the “virtual DNA” of the column. (A similar
procedure is followed to create each of the other structural and ornamental elements of a building.)
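The three-step column recipe can be encoded directly: the stored sequence of operations becomes the genome, and a "develop" function translates the genes back into a body. The class, fields, and operation names below are hypothetical stand-ins, not any real CAD package's API.

```python
from dataclasses import dataclass

@dataclass
class ColumnGenome:
    profile: list           # (x, y) points of the profile line (step 1)
    rotation_center: float  # x-coordinate of the axis of revolution (step 2)
    cutter_radius: float    # size of the shape used in the Boolean subtraction (step 3)

def develop(genome):
    """Translate the genome into a 'body': the ordered CAD operations
    that would rebuild the column."""
    return [
        ("draw_profile", genome.profile),
        ("revolve", genome.rotation_center),
        ("boolean_subtract", genome.cutter_radius),
    ]

column = ColumnGenome(profile=[(0.0, 0.0), (0.3, 2.0), (0.0, 4.0)],
                      rotation_center=0.0,
                      cutter_radius=0.05)
body = develop(column)
```

The point is only that once the construction history is captured as data, that data can be copied, varied, and inherited: it is the "virtual DNA" of the column.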
At this point we need to bring one of the philosophical resources I mentioned earlier to understand what
happens next: population thinking. This style of reasoning was created in the 1930’s by the biologists
who brought together Darwin’s and Mendel’s theories and synthesized the modern version of
evolutionary theory. In a nutshell, what characterizes this style may be phrased as “never think in terms
of Adam and Eve but always in terms of larger reproductive communities”. More technically, the idea is
that despite the fact that at any one time an evolved form is realized in individual organisms, the
population not the individual is the matrix for the production of form. A given animal or plant
architecture evolves slowly as genes propagate in a population, at different rates and at different times,
so that the new form is slowly synthesized within the larger reproductive community.-1- The lesson for
computer design is simply that once the relationship between the virtual genes and the virtual bodily
traits of a CAD building has been worked out, as I just described, an entire population of such buildings
needs to be unleashed within the computer, not just a couple of them. The architect must add to the CAD
sequence of operations points at which spontaneous mutations may occur (in the column example: the
relative proportions of the initial line; the center of rotation; the shape with which the Boolean
subtraction is performed) and then let these mutant instructions propagate and interact in a collectivity
over many generations.
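A sketch of what adding mutation points might look like in code: only the designated parameters (the profile proportions, the center of rotation, the shape of the Boolean cutter) are allowed to vary, while the rest of the sequence is copied verbatim into each offspring, and a whole population is propagated rather than a single individual. All names here are illustrative.

```python
import random

# Mutation is permitted only at designated loci; everything else in the
# genome is inherited unchanged.
MUTATION_POINTS = ("profile_scale", "rotation_center", "cutter_radius")

def base_genome():
    # "n_flutes" stands for a frozen part of the CAD sequence: it is not
    # a mutation point, so it never varies.
    return {"profile_scale": 1.0, "rotation_center": 0.0,
            "cutter_radius": 0.05, "n_flutes": 24}

def mutate(genome, rate=0.3):
    child = dict(genome)
    for locus in MUTATION_POINTS:  # only these loci may vary
        if random.random() < rate:
            child[locus] += random.gauss(0, 0.1)
    return child

def propagate(generations=10, size=20):
    # A collectivity, not a couple: each generation is bred from
    # randomly chosen members of the previous one.
    population = [base_genome() for _ in range(size)]
    for _ in range(generations):
        population = [mutate(random.choice(population)) for _ in range(size)]
    return population

random.seed(1)
final = propagate()
```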
To population thinking Deleuze adds another cognitive style which in its present form is derived from
thermodynamics, but which as he realizes has roots as far back as late medieval philosophy: intensive
thinking. The modern definition of an intensive quantity is given by contrast with its opposite, an
extensive quantity. The latter refers to the magnitudes with which architects are most familiar:
lengths, areas, volumes. These are defined as magnitudes which can be spatially subdivided: if one takes
a volume of water, for example, and divides it in two halves, one ends up with two half volumes. The
term “intensive” on the other hand, refers to quantities like temperature, pressure or speed, which cannot
be so subdivided: if one divides in two halves a volume of water at ninety degrees of temperature one
does not end up with two half volumes at forty-five degrees of temperature, but with two halves at the
original ninety degrees. Although for Deleuze this lack of divisibility is important, he also stresses
another feature of intensive quantities: a difference of intensity spontaneously tends to cancel itself out
and in the process, it drives fluxes of matter and energy. In other words, differences of intensity are
productive differences since they drive processes in which the diversity of actual forms is produced.-2-
For example, the process of embryogenesis, which produces a human body out of a fertilized egg, is a
process driven by differences of intensity (differences of chemical concentration, of density, of surface
tension).
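The water example can be stated in a few lines of code: dividing the body halves its extensive quantity (volume) while leaving its intensive quantity (temperature) untouched.

```python
def divide(volume_litres, temperature_c):
    half = volume_litres / 2  # extensive: subdivides along with the body
    # intensive: each half keeps the original temperature
    return (half, temperature_c), (half, temperature_c)

a, b = divide(10.0, 90.0)
# a == (5.0, 90.0) and b == (5.0, 90.0): two half volumes,
# both still at the original ninety degrees
```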
What does this mean for the architect? That unless one brings into a CAD model the intensive elements
of structural engineering, basically, distributions of stress, a virtual building will not evolve as a
building. In other words, if the column I described above is not linked to the rest of the building as a
load-bearing element, by the third or fourth generation this column may be placed in such a way that it
cannot perform its function of carrying loads in compression anymore. The only way of making sure that
structural elements do not lose their function, and hence that the overall building does not lose viability
as a stable structure, is to somehow represent the distribution of stresses, as well as what type of
concentrations of stress endanger a structure’s integrity, as part of the process which translates virtual
genes into bodies. In the case of real organisms, if a developing embryo becomes structurally unviable it
won’t even get to reproductive age to be sorted out by natural selection. It gets selected out prior to that.
A similar process would have to be simulated in the computer to make sure that the products of virtual
evolution are viable in terms of structural engineering prior to being selected by the designer in terms of
their “aesthetic fitness”.
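The two-stage sorting just described can be sketched as follows: a structural-viability test weeds candidates out before any aesthetic judgment is applied, just as an unviable embryo never reaches reproductive age. The stress model below is a made-up placeholder, not real structural engineering; it exists only to show where such a check sits in the loop.

```python
import random

random.seed(2)

MAX_STRESS = 1.0

def peak_stress(genome):
    # Hypothetical stand-in: stress grows as the column thins.
    return 0.5 / max(genome["thickness"], 1e-6)

def is_viable(genome):
    # Structural check: selected out prior to any aesthetic judgment.
    return peak_stress(genome) <= MAX_STRESS

def aesthetic_fitness(genome):
    # Stand-in for the human breeder's eye.
    return -abs(genome["thickness"] - 0.8)

population = [{"thickness": random.uniform(0.1, 1.5)} for _ in range(50)]
viable = [g for g in population if is_viable(g)]              # stage one
ranked = sorted(viable, key=aesthetic_fitness, reverse=True)  # stage two
```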
Now, let’s assume that these requirements have indeed been met, perhaps by an architect-hacker who
takes existing software (a CAD package and a structural engineering package) and writes some code to
bring the two together. If he or she now sets out to use virtual evolution as a design tool the fact that the
only role left for a human is to be the judge of aesthetic fitness in every generation (that is, to let die
buildings that do not look esthetically promising and let mate those that do) may be disappointing. The
role of design has now been transformed into (some would say degraded down to) the equivalent of
a prize-dog or a race-horse breeder. There clearly is an aesthetic component in the latter two
activities; one is, in a way, “sculpting” dogs or horses, but hardly the kind of creativity that one identifies
with the development of a personal artistic style. Although today slogans about the “death of the author”
and attitudes against the “romantic view of the genius” are in vogue, I expect this to be a fad and questions
of personal style to return to the spotlight. Will these future authors be satisfied with the role of breeders
of virtual forms? Not that the process so far is routine in any sense. After all, the original CAD model
must be endowed with mutation points at just the right places (and this involves design decisions) and
much creativity will need to be exercised to link ornamental and structural elements in just the right
way. But still this seems a far cry from a design process where one can develop a unique style.
There is, however, another part of the process where stylistic questions are still crucial, although in a
different sense than in ordinary design. Explaining this involves bringing in the third element in
Deleuze’s philosophy of the genesis of form: topological thinking. One way to introduce this other
style of thinking is by contrasting the results which artists have so far obtained with the genetic
algorithm and those achieved by biological evolution. When one looks at current artistic results the most
striking fact is that, once a few interesting forms have been generated, the evolutionary process seems to
run out of possibilities. New forms do continue to emerge but they seem too close to the original ones, as
if the space of possible designs which the process explores had been exhausted.-3- This is in sharp
contrast with the incredible combinatorial productivity of natural forms, like the thousands of original
architectural “designs” exhibited by vertebrate or insect bodies. Although biologists do not have a full
explanation of this fact, one possible way of approaching the question is through the notion of a “body
plan”.
As vertebrates, the architecture of our bodies (which combines bones bearing loads in compression and
muscles bearing them in tension) makes us part of the phylum “chordata”. The term “phylum” refers to a
branch in the evolutionary tree (the first bifurcation after animal and plant “kingdoms”) but it also
carries the idea of a shared body-plan, a kind of “abstract vertebrate” which, if folded and curled in
particular sequences during embryogenesis, yields an elephant, twisted and stretched in another
sequence yields a giraffe, and in yet other sequences of intensive operations yields snakes, eagles, sharks
and humans. To put this differently, there are “abstract vertebrate” design elements, such as the tetrapod
limb, which may be realized in structures as different as the single digit limb of a horse, the wing of a
bird, or the hand with opposing thumb of a human. Given that the proportions of each of these limbs, as
well as the number and shape of digits, is variable, their common body plan cannot include any of these
details. In other words, while the form of the final product (an actual horse, bird or human) does have
specific lengths, areas and volumes, the body-plan cannot possibly be defined in these terms but must be
abstract enough to be compatible with a myriad combination of these extensive quantities. Deleuze uses
the term “abstract diagram” (or “virtual multiplicity”) to refer to entities like the vertebrate body plan,
but his concept also includes the “body plans” of non-organic entities like clouds or mountains.-4-
What kind of theoretical resources do we need to think about these abstract diagrams? In mathematics
the kind of spaces in which terms like “length” or “area” are fundamental notions are called “metric
spaces”, the familiar Euclidean geometry being one example of this class. (Non-Euclidean geometries,
using curved instead of flat spaces, are also metric). On the other hand, there are geometries where these
notions are not basic, since these geometries possess operations which do not preserve lengths or areas
unchanged. Architects are familiar with at least one of these geometries, projective geometry (as in
perspective projections). In this case the operation “to project” may lengthen or shrink lengths and areas
so these cannot be basic notions. In turn, those properties which do remain fixed under projections may
not be preserved under yet other forms of geometry, such as differential geometry or topology. The
operations allowed in the latter, such as stretching without tearing, and folding without gluing, preserve
only a set of very abstract properties invariant. These topological invariants (such as the dimensionality
of a space, or its connectivity) are precisely the elements we need to think about body plans (or more
generally, abstract diagrams.) It is clear that the kind of spatial structure defining a body plan cannot be
metric since embryological operations can produce a large variety of finished bodies, each with a
different metric structure. Therefore body plans must be topological.
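Connectivity, one of the invariants just mentioned, is easy to compute and genuinely indifferent to metric deformation: however a graph is stretched or folded, its number of connected components stays fixed, while every length changes. A minimal illustration (the "body plan" graph is of course a toy):

```python
def components(edges, nodes):
    """Count connected components with a simple union-find."""
    parent = {n: n for n in nodes}
    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path-halving
            n = parent[n]
        return n
    for a, b in edges:
        parent[find(a)] = find(b)
    return len({find(n) for n in nodes})

# An abstract "body plan" as pure connectivity: which parts join which.
nodes = ["head", "torso", "arm", "leg", "tail"]
edges = [("head", "torso"), ("torso", "arm"),
         ("torso", "leg"), ("torso", "tail")]

# Whether the limbs are developed short (a horse) or long (a giraffe)
# is metric data; the component count below is topological data and is
# identical for both.
n = components(edges, nodes)  # 1: the plan is a single connected piece
```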
To return to the genetic algorithm, if evolved architectural structures are to enjoy the same degree of
combinatorial productivity as biological ones they must also begin with an adequate diagram, an
“abstract building” corresponding to the “abstract vertebrate”. And it is at this point that design goes
beyond mere breeding, with different artists designing different topological diagrams bearing their
signature. The design process, however, will be quite different from the traditional one which operates
within metric spaces. It is indeed too early to say just what kind of design methodologies will be
necessary when one cannot use fixed lengths or even fixed proportions as aesthetic elements and must
instead rely on pure connectivities (and other topological invariants). But what is clear is that without
this the space of possibilities which virtual evolution blindly searches will be too impoverished to be of
any use. Thus, architects wishing to use this new tool must not only become hackers (so that they
can create the code needed to bring extensive and intensive aspects together) but also be able “to hack”
biology, thermodynamics, mathematics, and other areas of science to tap into the necessary
resources. As fascinating as the idea of breeding buildings inside a computer may be, it is clear that
mere digital technology without populational, intensive and topological thinking will never be enough.
References
-1-
First....the forms do not preexist the population, they are more like statistical results. The
more a population assumes divergent forms, the more its multiplicity divides into
multiplicities of a different nature....the more efficiently it distributes itself in the milieu,
or divides up the milieu....Second, simultaneously and under the same conditions....
degrees are no longer measured in terms of increasing perfection....but in terms of
differential relations and coefficients such as selection pressure, catalytic action, speed of
propagation, rate of growth, evolution, mutation....Darwinism’s two fundamental
contributions move in the direction of a science of multiplicities: the substitution of
populations for types, and the substitution of rates or differential relations for degrees.
Gilles Deleuze and Felix Guattari. A Thousand Plateaus. (University of Minnesota Press, Minneapolis,
1987). Page 48.
-2-
Difference is not diversity. Diversity is given, but difference is that by which the given is
given...Difference is not phenomenon but the noumenon closest to the phenomenon...
Every phenomenon refers to an inequality by which it is conditioned...Everything which
happens and everything which appears is correlated with orders of differences: differences
of level, temperature, pressure, tension, potential, difference of intensity
Gilles Deleuze. Difference and Repetition. (Columbia University Press, New York, 1994). Page 222.
-3- See for example: Stephen Todd and William Latham. Evolutionary Art and Computers. (Academic
Press, New York, 1992).
-4-
An abstract machine in itself is not physical or corporeal, any more than it is semiotic; it is
diagrammatic (it knows nothing of the distinctions between the artificial and the natural
either). It operates by matter, not by substance; by function, not by form... The abstract
machine is pure Matter-Function—a diagram independent of the forms and substances,
expressions and contents it will distribute.
Gilles Deleuze and Felix Guattari. A Thousand Plateaus. Op. Cit. Page 141
A Thousand Marxes
By Eliot Albert
Eliot Albert challenges Manuel De Landa over his perfunctory dismissal of Marxism in his latest book
In the "Conclusions and Speculations" to his A Thousand Years of Nonlinear History1 Manuel De
Landa describes his book as offering a "historical survey of these flows of ’stuff’, as well as with the
hardenings themselves" (259). The stuff referred to is broken down in the structure of the book into
three sections, each corresponding to one of Gilles Deleuze and Felix Guattari’s three major strata, that
is, the "physicochemical, organic and anthropomorphic" (ATP 502/627)2, which for De Landa are
taken to be ’Lavas and Magmas’, ’Flesh and Genes’, and ’Memes and Norms’. The hardenings, or
elsewhere ’slowings down’ of this stuff, or matter-energy in movement, take different forms on the
three strata: the formation of features of the geological landscape by the hardening of the eponymous
lavas and magmas; the coagulation of the flows of biological matter, "biomass, genes, memes and
norms" (258) into human and animal bodies; the extrusion of languages from the "momentary slowing
downs or thickenings in a flow of norms", and the creation of institutions considered as "transitory
hardenings in the flows of money, routines and prestige" (259).
In executing this analysis De Landa builds a conceptual armature out of a weave of two elements. The
first is an historical perspective culled from Fernand Braudel’s magisterial Civilisation and
Capitalism: 15th-18th Century, the fine grain of the micrological secured by the perspective of the
longue durée and an attention to the cycles and flows of economic life: Kondratieff waves altered to
take account of nonlinear dynamics. The second, and perhaps more apparent, is a deployment of
concepts culled from the two volumes of Deleuze and Guattari’s Capitalism and Schizophrenia, in
particular from the second volume, A Thousand Plateaus. The Deleuzo-Guattarian concepts with
which De Landa is most concerned are those of de- and restratification, nonorganic life, the Body
without Organs, and the machinic phylum. The result of the fusion of these two elements, it is not
inconceivable to suggest, turns A Thousand Years into a sustained attempt to demonstrate the validity
of Guattari’s claim in his article "The Plane of Consistency" that, "what makes the thread of history -
from protohistory until the scientific revolutions - is the machinic phylum"3.
But lurking behind De Landa’s work is an unexpected attack upon, and rejection of, Marxism as a tool
of historical analysis. The only overt criticism of Deleuze and Guattari given in A Thousand Years is
precisely for their commitment to Marxism: "[d]espite the fact that their philosophical work represents
an intense movement of destratification, Deleuze and Guattari seem to have preserved their own
stratum, Marxism, which they hardly touch or criticise" (331). De Landa himself seems to want to
understand Marxism as a set of incontrovertible truths rather than as a method, and thus he forms his
criticisms, dismissing Marx on two counts: "the labour theory of value [...] and the built-in teleology in
the traditional Marxist periodisation of history" (281).
However, these are ancient attacks, and Marxism has long ceased to be bothered by them. Regarding
the first, Antonio Negri has written that the redundancy of the labour theory of value is tied "to a
previous and out-dated organisation of labour and accumulation" and goes on to say it is the
conjunction of "post-Fordism as the principal condition of the new social organisation of labour and as
the new model of accumulation, and post-Modernism as the capitalist ideology adequate to this new
mode of production" that together form the assemblage he calls "the real subsumption of society
within capital."4 And Felix Guattari pointed out that it is Marx himself in the Grundrisse who "insisted
on the absurdity and the transitional character of a measure of value based on work time". The simple
reason for this being the growing discrepancy between the machinic, intellectual and manual
components of labour which meant that "human time is increasingly replaced by machinic time."5
De Landa’s second reason for rejecting Marxism, that it is beholden to a predetermined teleological
progression of stages, is simply fatuous. The attack devolves to a position from which De Landa
criticises Marxism for being unable to countenance the fact of ’combined and uneven development’,
an idea of which De Landa has, in his recent talks at the ICA and the Architectural Association in
London, demonstrated a total and resounding ignorance. On this latter point he suggests that only his
model, nominally derived from the application of nonlinear dynamics to economic theory, can account
for a situation such as that in which one sees the existence of an urban agglomeration evincing
nominally capitalist relations of production surrounded by a rural expanse dominated by feudal
relations. It is, however, precisely such assemblages of heterogeneous elements that are accounted for
by the concept of combined and uneven development, as we find in Marxist analyses of the situation in
Tsarist Russia on the eve of the 1917 Revolution. Here, in Trotsky’s writings, is the classic statement
of the principle: "Unevenness, being the most general law of the historic process, reveals itself most
sharply and complexly in the destiny of the backward countries [...] From the universal law of
unevenness thus derives [...] the law of combined development - [...] an amalgam of archaic with more
contemporary forms"6.
Capitalism, contrary to De Landa’s claims, is never characterised in Marxist theory as a smooth space
of homogeneous relations, but rather is marked by the radical coexistence of unevenness. Marxist
economics at its most powerful, in contrast to bourgeois economics, is committed to an anti-Platonism
precisely in the sense that it rejects the possibility of the existence of pure forms - an evolutionary
procession of stages - and therefore is predicated upon the existence of noncapitalist relations.
The problem with De Landa’s position in A Thousand Years is that, by reading Marxism as he does, he
forces himself to reject an axiomatic part of the work of Deleuze and Guattari while simultaneously
claiming to be fully consistent with their project. Their account of capitalism’s spread across the globe
in terms of "(t)he four principal flows that torment the representatives of the world economy, or of the
axiomatic, [which] are the flow of matter-energy, the flow of population, the flow of food, and the
urban flow [...] the axiomatic never ceases to create all of these problems, while at the same time, its
axioms, even multiplied, deny it the means of resolving them" (ATP 468/584) is explicitly given in
terms derived from, and consistent with, the most sophisticated Marxist accounts of the functioning of
the capitalist world machine or axiomatic. One can only agree with Fredric Jameson’s judgement on
this: "Deleuze is alone among the great thinkers of so-called poststructuralism in having accorded
Marx an absolutely fundamental role in his philosophy"7. De Landa’s misreading of Marx thus
becomes, not an academic quibble, but a grotesque misrepresentation of Deleuze and Guattari’s work.
Why so? Because if we return to the question of the nature of the capitalist machine we have to ask
what is a machine to Deleuze and Guattari? And if we do this we find that inseparable from their
concept of the machine is the concept of machinic enslavement. Now, the Marxist concept of the
machine obviously has a direct correlate to this in that of exploitation, but anything even vaguely
resembling such a concept is entirely absent in De Landa; essentially he presents a concept of
capitalism (and indeed of all social regimens) that is devoid of conflict either endemic or accidental.
And the ultimate reason for this lack is his rejection of the theory of surplus value. Just as Negri notes
that "the theory of surplus value [...] is the centre, now and always, of Marxist theory" and the key to
demonstrating the "productive materialisation" of its method8, so the deformations and deployments
of surplus value (principally as surplus value of code, and in the distinction between machinic and
human surplus value) form a critically important element in the Deleuzo-Guattarian conceptual
assemblage. The transfiguration of the theory of surplus value as surplus value of code is to be
understood as the principal mechanism of Deleuzo-Guattarian thought, one that cuts across the strata
operating as follows: "Each chain captures fragments of other chains from which it ’extracts’ a surplus
value, just as the orchid code ’attracts’ the figure of a wasp: both phenomena demonstrate the surplus
value of a code" (ATP 39/47). The ramifications of this are broad, because in Deleuze and Guattari’s
hands the concept of the surplus value of code, the capture of code fragments, gives us first, the
principal mode of understanding deterritorialisation (decoding) processes, and second, the mechanism
whereby philosophy avoids representation (the goal of a nonrepresentational thought): "the wasp in
turn deterritorialises by joining with the orchid: the capture of a fragment of the code, and not the
representation of an image." Thus no theory of surplus value = no theory of machinic surplus value =
no concept of conflict = definitive break with Deleuze and Guattari.
NOTES
1. Manuel De Landa, A Thousand Years of Nonlinear History, Swerve Editions, New York 1997, pp.
257–74. References to this text will henceforth be designated by page number in Arabic numerals.
2. Gilles Deleuze and Felix Guattari, A Thousand Plateaus: capitalism and schizophrenia, trans. Brian
Massumi, Athlone Press, London 1988, pp. 502/627. References henceforth designated by ATP
followed by page numbers.
3. Felix Guattari, La revolution moleculaire, Editions Recherches, Paris 1977, p. 315
4. Antonio Negri, Twenty Theses on Marx: Interpretation of the Class Situation Today, trans. by
Michael Hardt in Makdisi, Casarino and Karl (eds.) Marxism Beyond Marxism, Routledge, New York
1996, p. 154
5. Felix Guattari, "Capital as the Integral of Power Formations", trans. by Charles Wolfe and Sande
Cohen, in Soft Subversions, ed. by Sylvere Lotringer, Semiotext(e), New York 1996, pp. 202–24
6. Leon Trotsky, The History of the Russian Revolution, trans. by Max Eastman, Pathfinder Books,
New York 1980, p. 6
7. Fredric Jameson, "Marxism and Dualism in Deleuze", in A Deleuzian Century?, South Atlantic
Quarterly Special Issue Summer 1997, Vol. 93, No. 3, p. 395.
8. Antonio Negri, Marx Beyond Marx: Lessons on the Grundrisse, trans. by Harry Cleaver, Michael
Ryan, and Maurizio Viano, Pluto Press, London 1991, p. 160