THE SIMULATION OF SOCIAL PROCESSES1

Nigel Gilbert
Centre for Research on Simulation in the Social Sciences
School of Human Sciences

University of Surrey
Guildford
Surrey GU2 5XH
United Kingdom

N.Gilbert@soc.surrey.ac.uk
February 14, 1999

1 Edited transcript of a talk given at the SMAGET Conference, 6-8 October 1998, Clermont-Ferrand, France.

This talk is about the idea of simulating social processes and so I will not say much about
actual studies. I shall discuss the uses of simulation in the social sciences and a little about
the history of simulation. I shall also review some of the major approaches to simulation in
the social sciences and then at the end, and I hope in the discussion, talk a little about future
directions. So, that is the map of where we are going.

The uses of simulation


Let me begin with the basics, the uses of social simulation. Perhaps the most common use of
simulation in the social sciences is to try to achieve understanding (Axelrod, 1997). That
raises the question ‘What do we mean by understanding?’ and social scientists have proposed
many different ideas of what it means to understand social phenomena. But the kind of
understanding that is most allied to simulation is understanding in terms of processes or
mechanisms, because it follows very nicely from having built some kind of simulation that
we can then say, what we have here is a model of a social process.
The idea of understanding societies in terms of processes may seem an obvious one.
Nevertheless, if you look at social science, and here I am thinking of economics, sociology

and political science, and perhaps geography as well, it is surprisingly uncommon to find
theories that are really about processes and mechanisms. There are lots of theories which say
that when you observe ‘x’, you will also find ‘y’. Some propose more complicated relations
than that, but they are essentially theories about the correlation between things that happen at
one moment in time. If we are talking about social processes, we are of course talking about
processes through time, how things grow, how they develop, how they emerge, how they
decline. It is quite difficult to theorise about processes unless you have some tool like
simulation.
A second common reason for people becoming interested in simulation is the apparent
opportunity to make predictions: about the end of the world as we know it or about the future
of prices in the stock market. However, this is usually a bad use of simulation, at least bad in
the social sciences. Accurate predictions are always difficult and in the social sciences there
are good reasons why one cannot usually make predictions with any degree of accuracy.
The third reason for doing simulation is to use simulation as a tool. For example, some
simulations are aimed at policy makers. They can use the simulation to do ‘What if?’
calculations to see what would happen if a particular policy was put into effect. This is a
little like prediction and it does sometimes have some of the same difficulties, but at least in
this case, the results consist of some possible scenarios rather than a specific forecast. The
simulation is providing a model that someone other than the model-builder can use for a
practical purpose.
Training is another practical use for simulation. The best known example is the flight
simulator that airline pilots use to learn how to fly. There are many other simulations with
similar educational purposes. They are increasingly being used to train students at university
level in physics, ecology and in many other areas.

Entertainment is a fifth reason for building simulations. SimCity™ is an example in which
the player can build a virtual city, acting as its mayor. If you don’t do a good job then
everybody will leave your city and you will lose lots of money and the game. If you do a
good job, you will have a thriving city. That is perhaps the closest example of using social
simulation for entertainment that exists at the moment, but computer games are moving in the
direction of more complex and more sociologically realistic scenarios and so we are likely to
see more examples in the future. Computer gaming is certainly the kind of simulation that is
best known in the world at large.

Perhaps the most exciting of these many possible uses of simulation is using it to discover
unexpected connections and consequences within the social world. I shall discuss this more
later when I describe building artificial societies.
Finally, there is the use of computer simulation as a means for formalising social theories. In
the natural sciences, in physics in particular, mathematics is the language of science. In the
social sciences, we have had either to use ordinary language with all its problems of
ambiguity, or try to use mathematics, which is sometimes less clear and more cumbersome
than we would like. Using a simulation as a way of formalising a theory, and here I
specifically mean a computer program, can be very effective (Troitzsch, 1997). There are
benefits in using computer programs as a means of formalisation when your theory involves
parallel processes, that is, lots of things happening at the same time, when there is a high
degree of non-linearity, and also when you are dealing with a heterogeneous collection of
actors or agents. Under all three conditions, it is often quite difficult to formulate ideas
mathematically. All three conditions actually apply to human societies, so it is perhaps not
surprising that mathematics has not been that well used in many areas of the social sciences
and that simulation has an important role to play. As you might expect, this is quite a
controversial point amongst mathematicians and many scientists.

The history of simulation in the social sciences


Let me now turn to the history of simulation in the social sciences. Computer simulation as
an idea has a history that goes back to the beginning of computers, but if we look specifically
at the simulation of societies specifically, the history is much shorter. During the 1960s and
early 1970s, there was quite a lot of research that tried to model macro-processes
embracing the whole world. The best known example is the Club of Rome work by
Meadows et al. (1972), using ideas developed by Forrester (1971). This constructed a model
interrelating population, pollution levels, the availability of natural resources and capital
stocks. There were also other efforts to do social simulation during this period (see
Troitzsch, 1997). While these were interesting innovations, their methodology was heavily
criticised and social simulation consequently came to be regarded as a failed experiment in
the 1970s and 1980s. It was not until the late 1980s and 1990s that social simulation re-
emerged, having learned some hard lessons from the past.
So what were these lessons? Two are particularly important. One was that the aim of
simulation should be understanding, rather than prediction. The aim in the sixties was to

- 3 -
predict, for example, what would happen to the ecology of the world over the next fifty years.
To achieve such predictions, however, you have to make a great number of assumptions
about how people would live and thus about the values of the many parameters in the model.
The majority of these assumptions could not easily be justified. Consequently, the
predictions were very easy to criticise and demolish. That is the first lesson: orient simulation
towards understanding, rather than prediction.
The second lesson is that the technology needed to match the issue being examined. In the
1960s, the simulations were crude, from our present point of view. One of the major
innovations in the late 1980s and 90s was the development of simulations based on multi-
agent modelling, in particular bringing in ideas from artificial intelligence and cellular
automata (Ferber, 1998; Gaylord and D'Andria, 1998). We are now beginning to see
simulations that incorporate carefully designed cognitive models and those that take seriously
the implications for modelling of the human capacity for language use.
So the lessons we need to learn are first that what we should do is to strive towards
understanding the phenomena we are studying rather than jumping too quickly to making
bold predictions, and secondly that our models need to be appropriate to what we are
modelling.

Outstanding issues
Even having absorbed these lessons from the past, there are still some major unsolved issues.
One of the basic problems of social science is to understand the relationship between the
micro-level and the macro-level. This has been a concern of sociologists and philosophers
for centuries. For example, it was one of the main interests of one of the founding fathers of
sociology, Emile Durkheim. What is the relationship between individual action and social
action? How do individuals make up a society and what is the effect of society on its
members? One of the important features of current work in social simulation is a new
understanding of the emergence of macro-level phenomena from the micro-level (Gilbert,
1995).
Another area where simulation is increasingly being seen to be helpful is in thinking
theoretically about self-organisation, that is, how individuals organise themselves into
social structures without outside intervention. Self-organisation theory or autopoiesis is an
approach – perhaps even a paradigm – that was originally developed in biology, but that is
becoming more and more influential in social simulation (Varela et al., 1991; Maturana and
Varela, 1992).
A third area where simulation has value is in understanding the effects of bounded rationality
(Elster, 1989).
Last on my list, and one that hasn’t yet been sufficiently exploited in current social simulation
research, is the effect of spatial location. Most social science, with the obvious exception of
geography, ignores the effect of space. The consequences of location on, for example,
networks and inter-relationships can be explored using simulation in a way that has never
previously been possible.

A brief introduction to the methodology of simulation


I begin this section by introducing some basic terminology and then consider the
methodological logic of simulation and what this implies about how one goes about building
a simulation. In the social sciences there are well understood methods for, for example,
doing a social survey. It is less well understood how one does a simulation, and so I shall
briefly mention some of the tools that one might use.
Suppose that there is something out there that we wish to understand. I shall call this the
‘target’ (Doran and Gilbert, 1994). In order to perform a simulation, we need to have a
‘specification’, that is, an abstracted and theoretically informed description of what is to be
simulated. It may be a textual description or something more formal. The world itself
is indefinitely complicated and therefore we need to have some kind of abstraction to help us
focus on what are the important issues for the specification. This abstraction must be
informed by theory, that is, the accumulated wisdom of our predecessors about what is
important and what is not important in trying to understand the target.
We also need a ‘model’. A model is distinguished from a specification because a model is
something that you can run – you can press the start button and something happens. In the
context of social simulation, a model is normally a computer program (but it might be a role
playing game). And when one does press the start button and something happens, then and
only then do you have a simulation. The simulation is a dynamic representation of some
social process in the world. It is useful to make distinctions between the model as something
that is capable of running, the simulation as a running model and the specification as a
description of the model.

In order to know whether this model is a good model of the target, we need to validate it. The
most important part of validation is to compare the behaviour of the model with the
equivalent behaviour of the target. To capture the behaviour of the target, we need to gather
some data directly from the real world. That might be done by observation or other
appropriate ways, such as a survey or the analysis of administrative records or whatever.
When we run the simulation we will observe the simulation producing some kind of output,
which I shall call the ‘simulation data’. We can then compare the data we have observed from
the running simulation with the data that we have collected from the target. If there is what I
call ‘structural similarity’, that is, there is co-variation in similar circumstances, then this
counts as evidence of the adequacy of the model as a representation of the target.

One of the criticisms often made of simulation-based research is that it is less trustworthy and
less rigorous than research based on statistical models. Indeed, at least in the social sciences,
many people regard simulation as ‘messing around’ as compared with the ‘really hard work’
that statisticians do, and they complain that there is no clear methodology behind simulation
research.
I wonder whether that is true, because the logic of statistical modelling is rather similar to the
logic I have just described for simulation modelling. Just think for a moment of a standard
piece of survey-based research in which one collects some data about people’s opinions or
some economic data and, in the normal way, one then builds a regression model to try to
explain those data. We have a target – some social processes we are interested in – we build
a model, called a regression equation, which is an abstraction of the target, we use regression
procedures in order to estimate some parameters and also to generate some predicted data.
We then compare the predicted data with the actual data we have collected about people’s
opinions, about prices or whatever it may be, to see whether there is some similarity between
the two. We report the difference between the collected data and the predicted data in terms
of a coefficient of structural similarity, such as a correlation coefficient or an r². Put in these
terms, the logic of statistical modelling and the logic of simulation are essentially the same.
In both cases what we are doing is building models and comparing the output of those models
with what we have seen in the world in order to validate or falsify the model. The logic is
identical. It just so happens that in the case of simulation the model is a computer program,
while in the statistical case, the model is a regression equation, or a structural equation or
something of that kind.
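To make the shared logic concrete, here is a minimal sketch in Python of that comparison step: summarising the agreement between data collected from the target and data generated by a model as an r². The function and the toy numbers are invented purely for illustration.

    # Compare observed (target) data with model-generated data and report
    # the match as r-squared. All values are invented for illustration.

    def r_squared(observed, predicted):
        n = len(observed)
        mean_obs = sum(observed) / n
        ss_tot = sum((y - mean_obs) ** 2 for y in observed)
        ss_res = sum((y - p) ** 2 for y, p in zip(observed, predicted))
        return 1 - ss_res / ss_tot

    target_data = [10.2, 11.1, 12.3, 13.0, 14.2]      # collected from the world
    simulation_data = [10.0, 11.3, 12.1, 13.2, 14.0]  # output of the running model
    print(r_squared(target_data, simulation_data))

Whether the model is a regression equation or a computer program, the validation step reduces to a comparison of this kind.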

One of the difficulties faced by both those building statistical models and those who are
building simulation models is that there are a great number, possibly an infinite number, of
models that one can build for a particular target. One of the principal ways in which these
models will vary is the level of detail that they embody. There has to be a degree of
abstraction of the target in any model, but how much detail do we throw away? How much
abstraction do we really need?
You might think that the more complicated the better, and it is certainly true that the more
detail that is included in the model, the closer it will be to the target. On the other hand, the
more complex the model is, the harder it is to build and, perhaps more importantly, the harder
it is to validate. In particular, complex models have to rely on many parameters. The
magnitude of each of these has to be justified. The Limits to Growth model is an example that
shows that complicated models with many parameters are not necessarily the way to go.
There are thus strong arguments for suggesting that the simpler the model, the better; with
simpler models it is easier to develop conclusions which one can hold with reasonable
conviction. Complicated models require the setting of many parameters and, with non-linear
simulations, we can’t tell beforehand what the effect of any one parameter will be on
the output. Nor can we usually measure all the parameters: inevitably some of them have to
be assumed.
Let me describe a particularly clear example of this. There has been work by archaeologists
on the American Southwest of about a thousand years ago (e.g. Van West, 1994). The aim of
the research was to try to understand the ecology of farming in Pueblo settlements (Kohler et
al., 1996). They
were interested in why the population disappeared in about 1300 AD. The people either went
away or died off and it is not clear why. It might have been from warfare or because they had
exhausted the soil and their agriculture was no longer viable. To try to understand what
happened, a simulation was built. The aim was to put in all the data that could be found in
order to model the actual locations of the settlements in one valley in the region, and to simulate
the changes in population over time. The researchers collected a huge amount of data from
archaeological remains about the agricultural conditions in this particular valley, the weather
and climate change and so on, for a period of about five hundred years. They developed a
model about how many people could be supported in the valley and ran the model and looked
at the way in which the simulated population changed over the years.
Unfortunately the match between what the model produced and what was known from the
archaeological record was not particularly good. The trouble was that their model was going
against two of the maxims I have been arguing for: first, it aimed to be a facsimile of the
target; and second, it was built with the intention of making predictions (about the state of
agriculture over time). Because it was attempting to be an exact and detailed model of the
ecology of this valley, there were some tens of thousands of parameters of different kinds.
The values of many of these had been measured, but others had to be assumed because the
data could not be recovered from the archaeological record. It was not clear whether the
problems of the model were due to the fact that the basic principles of their simulation were
mistaken or because their assumptions and parameters were wrong. There is no way forward,
as far as I can see, with that research. One could do a bit more archaeology, but there will
always be some assumptions left to make. Although one can get closer and closer to
matching the target with a more and more complicated model, this seems to me to be the
wrong way of proceeding. What we want to do is not make our models more detailed, but
simpler and more fundamental, and in that way we will perhaps have a basic understanding of
the underlying social processes rather than a complex and unmanageable model.
A comparison between data collected from the target and data collected as the output from a
simulation is part of the validation of the model. In addition, one needs to spend time on
verification. Verification translates as finding all the bugs (Balci, 1994). The more complex
a simulation, the more likely that the program contains bugs, that is, the model does not
correspond to its specification. Almost all programs include bugs when they are first written
and therefore the researcher must test the code thoroughly. The observed behaviour must
really be due to the intended model, and not a side effect of some bug.
Having verified the model, the researcher can proceed to validation. There are traps for the
unwary here also. I have suggested earlier that validation is a matter of comparing the
behaviour of the target with that of the simulation. This must be done for all reasonable
initial conditions. It is possible for the behaviour of a model to correspond closely with that
of the target for one range of parameters, but diverge greatly for others. Let me give you an
example. I will return to the Club of Rome work I mentioned earlier. What Forrester did was
to develop a model that, amongst other things, predicted the future population of the world
given a whole variety of assumptions. The book that came out of this research, The limits to
growth, includes many graphs that show exponential population growth (Meadows et al.,
1972). The argument was that we could not go on as we are because the culmination of this
population growth will be ecological disaster. Forrester created these predictions by setting
up his model with current parameter values and then stepping forward in time to show what
would happen to the population total in future. However, we can also run the model
backwards to ‘predict’ what the population was in the past (Troitzsch, 1997). If we do that,
the output indicates that the world population in the recent past was vastly greater than it is
now, which of course was not the case. The model generates spurious outputs when
modelling the past and so we may not want to trust it to make predictions about the future.
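The same failure can be reproduced with a toy model. The sketch below is not Forrester’s system, which contains far more structure; it is a single logistic difference equation for ‘world population’, with invented constants, started just above its carrying capacity. Run forwards the rule settles down; iterated backwards in time, it ‘predicts’ an absurdly vast past population.

    # Toy illustration of the backward-run test. All constants are invented.
    r, K = 0.03, 8.0   # growth rate per year; carrying capacity (billions)
    x0 = 8.5           # hypothetical current population, just above K

    def step(x, dt):
        # dt = +1 steps one year forward, dt = -1 one year backward
        return x + dt * r * x * (1 - x / K)

    forward = x0
    for year in range(50):
        forward = step(forward, +1)
    print(round(forward, 2))   # settles back towards the carrying capacity

    backward = x0
    for year in range(200):
        backward = step(backward, -1)
        if backward > 1e6:     # wildly more people than have ever lived
            print('hindcast explodes after', year + 1, 'backward steps')
            break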

I have already talked briefly about examining the sensitivity of models to values of the input
parameters. One of the problems is that small changes in the value of the inputs in non-linear
models do not necessarily lead to small changes in the values of the outputs. In other words,
it may be that the values of the parameters are critical in determining how the simulation
behaves. It is important to see whether this is true for a particular model and particular input
values. This can be done by running the simulation for a range of values of each parameter.
Of course, if there are many parameters, and every combination of parameters is to be tested
over a range of values, the number of runs required becomes impracticably large. One way of
dealing with this problem is to vary the input values by adding some random ‘noise’ to each
and then examining the outputs for unexpected behaviours.
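A sketch of that noise-based check, with a dummy stand-in for the real simulation and an arbitrary 5 per cent noise level, might look as follows; run_model and its parameters are hypothetical.

    import random

    def run_model(params):
        # Dummy non-linear 'simulation', purely for illustration.
        a, b = params['a'], params['b']
        return a * b - b ** 3

    base = {'a': 2.0, 'b': 0.5}
    outputs = []
    for _ in range(100):
        # Perturb every parameter by ~5% multiplicative Gaussian noise.
        noisy = {k: v * (1 + random.gauss(0, 0.05)) for k, v in base.items()}
        outputs.append(run_model(noisy))

    # A spread of outputs that is wide relative to the 5% input noise
    # signals that the model is sensitive to these parameters.
    print(min(outputs), max(outputs))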
A productive strategy is to adopt a deductive, rather than an inductive, approach to simulation.
This means testing ideas by subjecting them to ‘crucial experiments’, that is, experiments that
can distinguish between alternative hypotheses. This follows from my remarks about not
building highly complicated models because of all the problems of showing that models are
valid. If we can build simple simulations that demonstrate the consequences of particular
theoretical ideas, this will usually be much more effective than building complicated models
that can be criticised in many ways because of the number of different assumptions that they
make.

Another useful strategy is to develop artificial societies (Doran, 1997). Artificial societies are
societies of artificial agents which have existence only in the computer and which are not
necessarily modelled on real-world targets. We can therefore simulate societies that do not
have any real existence. There are an increasing number of examples where this has proved
to be a good strategy. We still have to go through a process of developing a model and a
simulation but there is no validation step because what we are doing is not comparing a
model against some observed data, but rather exploring possible societies that do not have
any correspondence to a real society.

Types of simulation model
That’s all I want to say about methodology. Next, I shall briefly review the main types of
model being used today in the social simulation arena. The first of these I need only mention
briefly because I have referred to it earlier. System dynamics is the idea that one can model
the world in terms of difference equations, or differential equations (Forrester, 1971). In
system dynamics models, there is just one actor, described by a multitude of attributes. As I
have said, system dynamics models were made famous by the Club of Rome world models, but
they are also used in many other contexts (e.g. Jacobsen and Vanki, 1998). There are tools for
building system dynamics models and the techniques are well documented.
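The flavour of such models can be conveyed in a few lines. The sketch below is a deliberately trivial system dynamics model, with invented equations and constants: one ‘actor’ (the system as a whole) described by two attributes, population and resources, updated by coupled difference equations.

    # Trivial system dynamics sketch; all equations and constants invented.
    population, resources = 100.0, 1000.0

    for t in range(50):
        births = 0.05 * population * (resources / 1000.0)  # resource-limited
        deaths = 0.03 * population
        depletion = 0.02 * population
        population += births - deaths
        resources -= depletion
        if t % 10 == 0:
            print(t, round(population, 1), round(resources, 1))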
Possibly less well known is micro-simulation (Harding, 1990). A micro-simulation model
simulates the development of a population of individual agents over time (in contrast to
system dynamics, where there is just one agent). The researcher starts with data about a large
random sample of people, households, firms or organisations. A set of transition probabilities
is applied to the data about each of the individuals in the sample in order to predict how the
population would look after a year or after one time unit. For example, there might be some
small probability that an individual would die during the year; if this occurred that individual
would be removed from the sample for the following year. Similarly, there may be some
probability that the individual would become unemployed, in which case this would be
reflected in the following year’s data about that person. After working through all the
individuals in this way, predicting their state after one year, we repeat this procedure a
number of times. We can thus predict the situation of this sample some time in the future and
then extrapolate to the population as a whole. This technique is often used to make
predictions about such things as the costs of government welfare, pensions and tax revenues.
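In outline, one year-by-year update of such a model might look like the following sketch, where the sample is synthetic and the transition probabilities for death and job loss are invented for illustration.

    import random

    # A synthetic sample of individuals; real microsimulations start from
    # large survey or administrative datasets.
    sample = [{'age': random.randint(20, 60), 'employed': True}
              for _ in range(10000)]

    P_DEATH, P_JOB_LOSS = 0.01, 0.05   # hypothetical annual probabilities

    for year in range(10):             # simulate ten years
        survivors = []
        for person in sample:
            if random.random() < P_DEATH:
                continue               # the deceased leave the sample
            person['age'] += 1
            if person['employed'] and random.random() < P_JOB_LOSS:
                person['employed'] = False
            survivors.append(person)
        sample = survivors

    print(len(sample), 'survivors,', sum(p['employed'] for p in sample), 'employed')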
From the perspective that I have been arguing for, the difficulty with this is that, although it
permits predictions to be made, microsimulation does not allow us any purchase on
understanding the social processes underlying the changes. The understanding is all built into
the transition probabilities. But micro-simulation is probably the most widely used
simulation technique in the social sciences at the moment, worldwide.

The third type of simulation used in the social sciences is based on cellular automata models,
such as the classic Game of Life (Berlekamp et al., 1982). The characteristic feature of this
kind of model is that there is an array of cells. Each cell changes its state over time according
to a rule that takes into account the states of the local neighbourhood cells. Although
developed by physicists, this type of model has some obvious applications to the modelling
of social phenomena. We can easily model spatial location and interaction between agents,
and many agents can be represented, one to a cell (Hegselmann, 1996). However, each agent
can exhibit only very simple behaviour. This may of course be an appropriate choice in the
light of what I was saying earlier about simplicity being a virtue.
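For concreteness, here is one synchronous update of the best known such rule, Conway’s Game of Life, on a small wrapped grid; the grid size and random start are arbitrary.

    import random

    def life_step(grid):
        # One synchronous update of the Game of Life on a wrapped grid.
        n = len(grid)
        new = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                live = sum(grid[(i + di) % n][(j + dj) % n]
                           for di in (-1, 0, 1) for dj in (-1, 0, 1)
                           if (di, dj) != (0, 0))
                # Born with exactly 3 live neighbours; survives with 2 or 3.
                new[i][j] = 1 if live == 3 or (grid[i][j] == 1 and live == 2) else 0
        return new

    grid = [[random.randint(0, 1) for _ in range(10)] for _ in range(10)]
    for _ in range(5):
        grid = life_step(grid)
    print(sum(map(sum, grid)), 'cells alive after five steps')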

Let me give you an example of the use of a cellular automaton in the context of social
psychology. A psychologist, Bibb Latané, proposed a theory of ‘social impact’. This theory
(Latané, 1981) aimed to explain the way in which people’s attitudes, opinions and beliefs
changed as a result of the influence of other people. His theory states that the impact of other
people on a given individual’s attitudes is a simple multiplicative function of the strength of
the others’ opinions, how credible or persuasive they are, how near in a social sense they are,
and how many people there are of that particular opinion. For example, if you want to
understand why some people believe that the European Union is a good thing, these are the
factors that Latané argues are important: the number of other people, how close they are to
the person being influenced and how strongly they favour the EU.
The original theory considered only the effects of other people on one person’s opinions, but
of course the effects will be reciprocal. There is therefore a complex interaction between the
people and the ultimate distribution of opinion is difficult to grasp analytically. However, we
can simulate it using a cellular automaton model (Latané, 1996). We create a grid of cells in
which each cell is either in a state representing a positive opinion of the European Union, or a
state representing a dislike of the European Union. The states of all the individuals are then
repeatedly updated according to Latané’s multiplicative formula. If we start with a random
initial distribution of attitudes, with 30 per cent for and 70 per cent against, what happens? It
seems possible that because those against are in a majority, they will convert all those for and
we will finish with a population entirely against. However, it turns out that we always get
islands of the minority view. The majority view never completely wins. One can experiment
with a variety of assumptions and initial conditions and discover that this simple model is
extraordinarily robust. Clustering always occurs, with each cluster protecting those who are
inside from further attitude change (Latané and Nowak, 1997).
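A sketch of a simulation in this spirit is given below. Each cell adopts whichever opinion exerts the greater total impact on it, with impact weighted here by the inverse square of distance within a local radius. The particular update rule, weights, grid size and radius are my own simplifications for illustration, not Latané’s published formula; one can run it and inspect how much of the initial 30 per cent minority survives, and in what spatial pattern.

    import random

    N, RADIUS2 = 20, 8   # grid size; squared distance within which impact acts
    grid = [[1 if random.random() < 0.3 else -1 for _ in range(N)]
            for _ in range(N)]

    def impact(g, i, j, opinion):
        # Total impact of 'opinion' on cell (i, j): every nearby holder of
        # that opinion contributes 1/distance^2 (uniform persuasiveness).
        total = 0.0
        for x in range(N):
            for y in range(N):
                d2 = (x - i) ** 2 + (y - j) ** 2
                if 0 < d2 <= RADIUS2 and g[x][y] == opinion:
                    total += 1.0 / d2
        return total

    for sweep in range(20):   # synchronous updates of the whole grid
        grid = [[1 if impact(grid, i, j, 1) > impact(grid, i, j, -1) else -1
                 for j in range(N)]
                for i in range(N)]

    print(sum(cell == 1 for row in grid for cell in row), 'cells hold the minority view')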
There are a number of areas in which we can apply this kind of model, not just in attitude
change, but also in other situations where there are individuals influencing each other. I
chose this example because it illustrates rather well that one can use even very simple models
to explore and understand social processes. Although they may be very simple, they also are
very general. Moreover, because this model is so simple, we can explore different
assumptions relatively easily. We can, for example, examine the consequences of variations
on the multiplicative rule.
The final type of simulation model I shall mention is the multi-agent system, but because this
is a conference on multi-agent models, I shall say very little about it. Nevertheless, from the
point of view of a sociologist, there are a number of issues that are relatively infrequently
discussed, but which need to be considered. I shall just list some issues for you to think about
when you are listening to the talks during the rest of the day.
If we are developing multi-agent models in which the actors are intended to be people then
issues of autonomy, of social ability, of dealing with other people and pro-activity need to be
thought through (Jennings, 1996). These are all features that we take to be intrinsic to humans
and human societies. It is not necessary to simulate them all in every
model, but we do need to consider carefully which of these features do need to be modelled
and which do not. This in turn means thinking about how to represent goals, knowledge and
belief, inference, intention and emotion.
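Purely as an illustration of what representing such features might mean in code, and corresponding to no particular published agent architecture, an agent’s mental attributes might be sketched like this:

    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        beliefs: dict = field(default_factory=dict)     # what it takes to be true
        goals: list = field(default_factory=list)       # states it wants to bring about
        intentions: list = field(default_factory=list)  # goals it has committed to

        def deliberate(self):
            # Placeholder inference: commit to any goal believed achievable.
            for goal in self.goals:
                if self.beliefs.get(('achievable', goal)) and goal not in self.intentions:
                    self.intentions.append(goal)

    a = Agent(beliefs={('achievable', 'find_food'): True}, goals=['find_food'])
    a.deliberate()
    print(a.intentions)

How much of this machinery a given model needs is exactly the design question raised above.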

Conclusion
I shall close with an advertisement and a bibliography.
Much of what I have said today about methodology comes from a book that will be published
in 1999, co-written by Klaus G. Troitzsch and myself, Simulation for the Social Scientist.
This covers not only the methodology of simulation in the social sciences, but also practical
techniques for doing social simulation. Other sources for current research include a number
of conference proceedings and an electronic journal, the Journal of Artificial Societies and
Social Simulation (http://www.soc.surrey.ac.uk/JASSS/). I recommend these to you as a
source of interesting ideas and results in the now rapidly growing area of social simulation.

Questions from the audience


1. My question is about the idea of artificial societies. We use artificial models of
societies in the field of development and environment and I agree with you that artificial
societies are useful for moving far from reality, where you can test theories and obtain
qualitative results about the links between agents, and so on. However, unfortunately when we
deal with problems in the field and problems of the management of environment, the
researchers we are working with tell us that artificial societies say nothing to them. It is
something refreshing for them, but not useful for their problems.
One other development in the 90s is the fact that everyone agrees that a model is a
representation. Even to field researchers, a model is a representation for tackling
environmental problems. We have a lot of representations of the same problem, of the same
space, of the same field and the models can be used as a tool to talk about these, to make
representations emerge and so on. Our model is a result of a negotiation between a computer
scientist who tries to make the models simpler and other people who are concerned with the
complexity of the problem. What is most useful is a family of models, some of them belonging
to the family of artificial societies where you have qualitative results, but others based on
discussions that are more complex. The latter models you would not validate in a strong
sense, but they do serve as tools to discuss reality. So in my opinion, we must have a family
of models and you cannot make do with only artificial societies.
NG: What you are saying is that different kinds of models are appropriate to these different
uses. A model that is good for understanding, which may well be very simple, may be
completely inappropriate for training, or as a tool for policy makers. I was concentrating on
the uses of simulation for understanding, discovery and formalisation.
2. Thank you for a great talk, very clear. When a talk is clear it makes it so much easier
for the audience to ask questions about what they don’t understand. I was extremely
surprised when you presented repeatability as an issue: that the output of the model needs to
be repeatable. Take the case of the Latané model: the output is not repeatable. You told us that
starting with 30 per cent ‘for’, you get 14 per cent ‘for’ at the end. This is something that is
not repeatable. The closer you get to 50 per cent against at the start, the less repeatable it is.
You might want to say that it is the distribution of outputs that is repeatable, but there are a
number of cases, especially in dynamical systems, where you would need exactly the same
conditions to be able to repeat the output.
NG: It all depends what you mean by ‘repeatable’. You are right that if, for example, I ran
exactly the same Latané simulation again, starting from a different initial configuration, but again with 30
per cent ‘for’, I would not get exactly the same clusters. Almost certainly the pattern of
opinions would be different and it is very likely that the percentages at the end would also be
different. But it would be similar, and what I will guarantee is that you will get clusters and
that there will be some of each opinion left. In that sense, all the outputs will be similar. So,
it all depends what you mean by repeatable, and that is something that the researcher has to
worry about. What is the appropriate level of abstraction for judging similarity?
Another way of thinking about this is to go back to the logic of simulation, in which I talked
about ‘structural similarity’. I was rather vague about what I meant by structural similarity,
and I am not sure that I have a good definition. In the Latané example, it is something about
clustering and the fact that the majority does not take over the world, that there is always a
minority remaining.
3. I do not agree that the process of modelling is a linear process: theory, modelling,
verification and then validation. This is a vision of research that is too linear for me. There
is no feedback. The model is not very important. What is important is the process of
modelling.
NG: Yes, I should put in a feedback loop. In reality that is of course how it works and in
fact, there are many inner loops as well. But my point was to emphasise that validation,
verification, and something I didn’t talk about, the translation of the model into some
theoretical or policy conclusions, are as important as actually building the model. I think that
this is the point that you wanted to make as well, that this is part of an interactive process.
The input of the users of the research, whether they be other social scientists or policy
makers, is as important as everything else.

References
Axelrod, R. (1997) ‘Advancing the art of simulation in the social sciences’ in R. Conte, R.
Hegselmann and P. Terna (ed.) Simulating Social Phenomena Berlin: Springer, pp. 21-
40.
Balci, O. (1994) ‘Validation, verification, and testing techniques throughout the life cycle of
a simulation study’ Annals of Operations Research vol. 53, pp. 121-173.

Berlekamp, E., J. Conway and R. Guy (1982) Winning ways for your mathematical plays
vol. 2: Games in particular London: Academic.
Doran, J. (1997) ‘From computer simulation to artificial societies’ Transactions of the Society
for Computer Simulation International vol. 14(2), pp. 69-78.
Doran, J. and G. N. Gilbert (1994) ‘Simulating societies: an introduction’ in G. N. Gilbert
and J. Doran (ed.) Simulating Societies: the computer simulation of social phenomena
London: UCL Press.

Elster, J. (1989) Nuts and bolts for the social sciences Cambridge: Cambridge University
Press.
Ferber, J. (1998) Multi-agent systems Reading, MA: Addison-Wesley.
Forrester, J. W. (1971) World Dynamics Cambridge, MA: MIT.
Gaylord, R. J. and L. J. D'Andria (1998) Simulating society: a Mathematica toolkit for
modelling socioeconomic behavior Berlin: TELOS/Springer-Verlag.
Gilbert, N. (1995) ‘Emergence in social simulation’ in N. Gilbert and R. Conte (ed.) Artificial
Societies: the computer simulation of social life London: UCL Press, pp. 144-156.
Harding, A. (1990) Dynamic microsimulation models: problems and prospects Discussion
Paper 48, Welfare State Programme, London School of Economics.
Hegselmann, R. (1996) ‘Cellular automata in the social sciences: perspectives, restrictions
and artefacts’ in R. Hegselmann, U. Mueller and K. G. Troitzsch (ed.) Modelling and
simulation in the social sciences from the philosophy of science point of view
Dordrecht: Kluwer, pp. 209-234.
Jacobsen, C. and T. Vanki (1998) ‘Violating an occupational sex-stereotype: Israeli women
earning engineering degrees’ Sociological Research Online vol. 1(4),
http://www.socresonline.org.uk/socresonline/1/4/3.html.
Jennings, N. (1996) ‘Software Agents’ IEE Review (January), pp. 17-20.
Kohler, T. A., C. R. Van West, E. P. Carr and C. G. Langton (1996) ‘Agent-based modelling
of prehistoric settlement systems in the Northern American Southwest’ Third
International Conference on Integrating GIS and Environmental Modelling, Santa Fe.
Santa Barbara: National Center for Geographic Information and Analysis.
Latané, B. (1981) ‘The psychology of social impact’ American Psychologist vol. 36, pp.
343-356.
Latané, B. (1996) ‘Dynamic social impact’ in R. Hegselmann, U. Mueller and K. G.
Troitzsch (ed.) Modelling and simulation in the social sciences from the philosophy of
science point of view Berlin: Springer-Verlag.
Latané, B. and A. Nowak (1997) ‘Self-Organizing Social Systems: necessary and sufficient
conditions for the emergence of clustering, consolidation and continuing diversity’ in
G. A. Barnett and F. J. Boster (ed.) Progress in communication sciences: persuasion
Norwood, NJ: Ablex, pp. 43-74.

Maturana, H. and F. Varela (1992) The tree of knowledge: the biological roots of human
understanding Revised edn. Boston: Shambhala/New Science Press.
Meadows, D. H., D. L. Meadows, J. Randers and W. W. Behrens III (1972) The limits to
growth London: Earth Island.
Troitzsch, K. G. (1997) ‘Social science simulation: origins, prospects, purposes’ in R. Conte,
R. Hegselmann and P. Terna (ed.) Simulating social phenomena Berlin: Springer, pp.
41-54.
Van West, C. (1994) Modeling prehistoric agricultural productivity in southwestern
Colorado: a GIS approach, Washington State University.
Varela, F. J., E. Thompson and E. Rosch (1991) The embodied mind: cognitive science and
human experience Cambridge, MA: MIT.
