Evolutionary Game Theory and the Modeling of Economic Behavior

BY

G. VAN DER LAAN AND A.F. TIEMAN*,**
Key words: noncooperative symmetric bimatrix game, evolutionary stable strategy, replicator dynamics, learning and imitation, metastrategy, stable population
1 INTRODUCTION
Starting with the famous Nash equilibrium for noncooperative games, game
theory became a popular field of research in the 1950s. The general feeling was
that it would be possible to solve a lot of previously unsolvable problems using
game theory. Although game theory was indeed applied to a wide variety of prob-
lems, attention for game theory diminished throughout the 1960s and the early
1970s. In the late 1970s and the early 1980s game theory boomed again, espe-
cially after Ariel Rubinstein’s 1982 article in which he proved that a particular
noncooperative bargaining model had a unique subgame perfect equilibrium. At
the end of the 1980s however, there was a general feeling of discomfort about
the use of the notion of Nash equilibrium to predict outcomes. The mode of re-
search at the time was to use a different and very specific notion of Nash equi-
librium for every problem and players were assumed to be hyperrational. The
number of Nash equilibrium refinements had grown enormously and it was not
clear in advance which refinement best suited which situation. What was clear
was the fact that in the real world people did not behave as the models postulated. Before a new decline in interest in game theory could take place, game theorists picked up the idea of evolution from biology. In retrospect, the 1973 article by the biologists Maynard Smith and Price, in which they defined the concept of Evolutionary Stable Strategies, was the most important in transferring evolutionary thinking from biology to game theory. The book Evolution and the Theory of Games by Maynard Smith (1982) explicitly introduced evolutionary selection pressure in a game-theoretic setting. The notion
of evolution of strategies in a repeatedly played game closely resembles certain
models in which players learn from past behavior, thus facilitating the shift of
* G. van der Laan, Department of Econometrics and Tinbergen Institute, Free University, De Boelelaan 1105, 1081 HV Amsterdam, The Netherlands. A.F. Tieman, Department of Econometrics and Tinbergen Institute, Free University, De Boelelaan 1105, 1081 HV Amsterdam, The Netherlands.
** This research is part of the Research program 'Competition and Cooperation' of the Faculty of Economics and Econometrics, Free University, Amsterdam.
attention in the field towards evolutionary models. In the last decade evolutionary models have become increasingly popular in game theory and other fields of economics. Special issues of leading journals have been devoted to the subject, e.g. the issue of Games and Economic Behavior, Volume 3 (1991), introduced by Selten, the issue of the Journal of Economic Theory, Volume 57 (1992), introduced by Mailath, and the issue of the Journal of Economic Behavior and Organization, Volume 29 (1996). Moreover, at many conferences in the field of game theory or economics special sessions have been devoted to the subject; see for instance Van Damme (1994).
In this paper we discuss the main concepts of evolutionary game theory and their applicability to economic issues. Traditional game theory focuses on the static Nash equilibria of the game and therefore implicitly assumes rational and knowledgeable players who are able to correctly anticipate the behavior of the other players in equilibrium. In evolutionary game theory it is assumed that initially all individuals are preprogrammed with some strategy and that the number of individuals playing a particular strategy is adjusted over time by some prespecified process according to the success of the various strategies. This results in a dynamic process which may lead to an equilibrium and may give us a better understanding of how equilibrium strategies emerge within a society of interacting individuals. We start this survey article by reviewing the technical basis of the standard (Nash) equilibrium theory in section 2. This theory will be linked to the notion of Evolutionary Stable Strategies in section 3. In section 4 the system of differential equations specifying a biological evolutionary process, the replicator dynamics, will be discussed. Following that we discuss the economic significance of the ESS and the replicator dynamics in sections 5, 6 and 7. In section 8 we draw attention to an alternative branch of modeling, the so-called local interaction models. Finally, we offer some concluding remarks in section 9.
2 NASH EQUILIBRIUM
Noncooperative game theory has become a standard tool in modeling conflict situ-
ations between rational individuals. Such a model describes the set of strategies
of each individual or player and the payoff to each player for any strategy profile (the list of strategies chosen by the players). The concept of Nash equilibrium is
the cornerstone in predicting the outcome of a noncooperative game. In a Nash
equilibrium each player’s strategy maximizes his utility given the strategies played
by the other players. In many situations the Nash equilibrium is not unique.
Therefore many papers in game theory have been devoted to the issue of equi-
librium selection. Refining the concept of Nash equilibrium allows one to discard
certain Nash equilibria as not satisfying certain types of rationality. In this sec-
tion we review some of the standard Nash equilibrium theory and its refinements.
This enables us in the next section to show how the basic concepts of evolution-
ary game theory fit into this framework.
Let $I_n = \{1, \ldots, n\}$ denote the set of players and let $m_j$ be the number of actions available to player $j \in I_n$. The mixed strategy set of player $j$ is the unit simplex

$$S^{m_j} = \Big\{ x_j \in \mathbb{R}^{m_j}_+ \;\Big|\; \sum_{k=1}^{m_j} x_{jk} = 1 \Big\}.$$
The $k$th unit vector in $\mathbb{R}^{m_j}$ is denoted by $e_j(k)$ and is the vertex of $S^{m_j}$ in which player $j$ plays his action $k$ with probability 1, i.e. when he plays a pure strategy by choosing action $k$ for sure. The set $\tilde S = \prod_{j=1}^n S^{m_j}$ is the mixed strategy set of the game, i.e. an element $x = (x_1, \ldots, x_n) \in \tilde S$ specifies a mixed strategy $x_j \in S^{m_j}$ for each $j \in I_n$. Thus $x_{ji}$ denotes the probability with which player $j$, $j \in I_n$, plays action $i$, $i = 1, \ldots, m_j$. For $j \in I_n$, let $a_j(u)$ be the payoff to player $j$ when the vector of actions $u = (u_1, \ldots, u_n)$ is played, where $u_h \in \{1, \ldots, m_h\}$ is the action chosen by player $h$, $h \in I_n$. Let $Q$ be the collection of all such vectors of actions. The payoff function $u_j \colon \tilde S \to \mathbb{R}$, assigning the payoff to player $j$ at any vector of mixed strategies $x \in \tilde S$, is defined as the expected value of the payoff for player $j$ at any mixed strategy profile $x = (x_1, \ldots, x_n) \in \tilde S$, i.e.
$$u_j(x) = \sum_{u \in Q} \Big( \prod_{h=1}^n x_{h u_h} \Big)\, a_j(u), \qquad (1)$$

where $\prod_{h=1}^n x_{h u_h}$ is the probability with which the vector of actions $u = (u_1, \ldots, u_n) \in Q$ is played under the mixed strategy profile $x \in \tilde S$. Finally, at
a mixed strategy profile $x \in \tilde S$, the value $u_j(e_j(k), x_{-j})$ denotes the expected marginal payoff for player $j$ when $j$ changes his mixed strategy $x_j$ into his $k$th pure strategy of playing action $k$ for sure and the other players $h \neq j$ stick to their mixed strategies $x_h$, $h \neq j$, i.e.

$$u_j(e_j(k), x_{-j}) = \sum_{u \in Q,\; u_j = k} \Big( \prod_{h \neq j} x_{h u_h} \Big)\, a_j(u). \qquad (2)$$

From equations (1) and (2) it follows straightforwardly that for any $x \in \tilde S$ and any $j \in I_n$ it holds that

$$u_j(x) = \sum_{k=1}^{m_j} x_{jk}\, u_j(e_j(k), x_{-j}),$$
saying that $u_j(x)$ is the weighted sum over all expected marginal payoffs and therefore the payoff function $u_j(x)$ is linear in the components $x_{jk}$, $k = 1, \ldots, m_j$, of the mixed strategy vector $x_j$. Observe that this linearity follows from the assumption that the number $m_j$ of actions available to player $j$ is finite.
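The linearity just described can be checked numerically. The following sketch (the payoff matrix and the strategies are illustrative, not taken from the text) computes the expected payoff of equation (1) for one player of a two-player game, together with its decomposition into the $x_j$-weighted sum of expected marginal payoffs.

```python
import numpy as np

# Illustrative sketch: for a two-player game, player 1's expected payoff
# is u_1(x) = x1' A x2, and it decomposes as the x1-weighted sum of the
# pure-strategy payoffs u_1(e(k), x2), as in the linearity result above.
A = np.array([[8.0, 1.0],
              [0.0, 5.0]])          # illustrative payoff matrix of player 1
x1 = np.array([0.25, 0.75])         # mixed strategy of player 1
x2 = np.array([0.5, 0.5])           # mixed strategy of player 2

u1 = x1 @ A @ x2                    # expected payoff, as in equation (1)

marginals = A @ x2                  # u_1(e(k), x2) for each pure action k
u1_decomposed = x1 @ marginals      # weighted sum over pure actions

print(u1, u1_decomposed)            # both equal: u_1 is linear in x1
```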
We denote the noncooperative $n$-person game in mixed strategies by $G = (I_n, \tilde S, u)$. For $n = 2$ this game reduces to a so-called two-player bimatrix game denoted by $G = (A, B)$, where $A$ (respectively $B$) is the $m_1 \times m_2$ payoff matrix of player 1 (respectively player 2), i.e. the element $a_{kh}$ of matrix $A$ (respectively the element $b_{kh}$ of matrix $B$) is the payoff to player 1 (respectively 2) when player 1 plays his action $k$ and player 2 plays his action $h$. Clearly, for $x \in \tilde S = S^{m_1} \times S^{m_2}$ we have that $u_1(x) = x_1^{\top} A x_2$ and $u_2(x) = x_1^{\top} B x_2$.
Within this framework it is assumed that each player behaves rationally and seeks to maximize his own payoff. This is expressed in the equilibrium concept of John Nash (1950). The Nash equilibrium concept is the most fundamental idea in noncooperative game theory. A strategy profile $x^* \in \tilde S$ is a Nash equilibrium if no player can gain by unilaterally deviating from it. This means that in the Nash equilibrium concept it is implicitly taken as given that the players make their choices simultaneously and independently.
By the linearity of $u_j(x)$ in the variables $x_{jk}$, the definition implies that for any player $j$ it holds that at a Nash equilibrium $x^*$ there is no (mixed) strategy $y_j \in S^{m_j}$ that gives player $j$ a higher payoff than $x_j^*$, given that all the other players $h \neq j$ stick to their equilibrium strategies $x_h^*$. In a Nash equilibrium every player plays a best reply to the strategies of the others.
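Because payoffs are linear in each player's own mixed strategy, it suffices to check pure-strategy deviations when verifying a Nash equilibrium. The following sketch does this for an illustrative bimatrix game; the matrices and profiles are assumptions chosen for the example.

```python
import numpy as np

# Sketch: (x1, x2) is a Nash equilibrium of the bimatrix game (A, B) if
# no pure-strategy deviation improves either player's payoff; by
# linearity, checking pure deviations is sufficient.
def is_nash(A, B, x1, x2, tol=1e-9):
    u1, u2 = x1 @ A @ x2, x1 @ B @ x2
    no_dev_1 = np.all(A @ x2 <= u1 + tol)   # player 1's pure deviations
    no_dev_2 = np.all(x1 @ B <= u2 + tol)   # player 2's pure deviations
    return bool(no_dev_1 and no_dev_2)

# illustrative symmetric coordination game
A = np.array([[8.0, 1.0], [0.0, 5.0]])
B = A.T
print(is_nash(A, B, np.array([1.0, 0.0]), np.array([1.0, 0.0])))  # True
print(is_nash(A, B, np.array([1.0, 0.0]), np.array([0.0, 1.0])))  # False
```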
As an example, consider an inspection game between a worker and a principal, with payoff matrices

$$A = \begin{pmatrix} 0 & w \\ w-e & w-e \end{pmatrix} = \begin{pmatrix} 0 & 10 \\ 5 & 5 \end{pmatrix} \quad\text{and}\quad B = \begin{pmatrix} -c & -w \\ y-c-w & y-w \end{pmatrix} = \begin{pmatrix} -c & -10 \\ 10-c & 10 \end{pmatrix},$$
with the first player (the worker) choosing a row (either S, shirk, or W, work) and the second player (the principal) choosing a column (either I, inspect, or N, not inspect). Here $w = 10$ is the wage, $e = 5$ the worker's cost of effort, $y = 20$ the output produced when the worker works, and $c$ the principal's cost of inspection. For $c > 10$ the principal's action N always yields a higher payoff than his action I. Since the worker's action S is the unique best reply to N, we have that in this case the strategy profile (S, N), i.e. the worker always plays his first action and the principal always plays his second action, is the unique Nash equilibrium in pure strategies. For $0 < c < 10$ there is no Nash equilibrium in pure strategies. Suppose the principal does not inspect. Then it is optimal for the worker to shirk, and therefore the principal is better off if he inspects. However, when the principal inspects, the worker prefers to work, and then it is optimal for the principal to choose the action not inspect. So, in equilibrium both the principal and the worker have to play a mixed strategy. For $0 < c < 10$ the unique Nash equilibrium in mixed strategies is given by

$$x_1^* = (c/10,\; 1 - c/10)^{\top}, \qquad x_2^* = (1/2,\; 1/2)^{\top},$$

i.e. the worker shirks with probability $c/10$ and the principal inspects with probability $1/2$.
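The indifference argument behind such a mixed equilibrium can be sketched as follows. The parameter values $w = 10$, $e = 5$, $y = 20$ are read off from the numeric matrices above; $c = 4$ is an arbitrary illustrative choice with $0 < c < 10$.

```python
import numpy as np

# Sketch of the indifference argument: in the mixed equilibrium each
# player's strategy makes the opponent indifferent between his two
# actions.  Parameters follow the numeric matrices (w=10, e=5, y=20);
# c is the inspection cost, here an illustrative c = 4.
c = 4.0
A = np.array([[0.0, 10.0], [5.0, 5.0]])          # worker: rows S, W
B = np.array([[-c, -10.0], [10.0 - c, 10.0]])    # principal: cols I, N

# principal mixes (p, 1-p) over (I, N) so the worker is indifferent:
#   0*p + 10*(1-p) = 5   =>   p = 1/2
p = 1.0 / 2.0
# worker mixes (q, 1-q) over (S, W) so the principal is indifferent:
#   -c*q + (10-c)*(1-q) = -10*q + 10*(1-q)   =>   q = c/10
q = c / 10.0

x1, x2 = np.array([q, 1 - q]), np.array([p, 1 - p])
print(A @ x2)        # worker's two actions now earn the same payoff
print(x1 @ B)        # and so do the principal's
```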
The elegance of the Nash equilibrium concept and the underlying rational behavior of the agents have inspired many economists to formulate economic problems as noncooperative $n$-person games. Since the 1970s the Nash equilibrium concept has been applied to a wide range of problems. However, in applying the concept game theorists became aware of a serious drawback of the Nash equilibrium, namely that a noncooperative $n$-person game may have many Nash equilibria, and therefore one particular, arbitrarily chosen equilibrium does not make much sense as a prediction of the outcome of the problem. Moreover, in many cases not all outcomes are consistent with the intuitive notion of what the outcome of the game should be. From the 1970s on, several game theorists have therefore proposed refinements of the Nash equilibrium concept, which discard equilibria that are not robust under small perturbations. The best-known refinement is the perfect Nash equilibrium of Selten (1975): a Nash equilibrium is perfect if it is the limit of a sequence of $\varepsilon$-perfect equilibria as $\varepsilon$ goes to zero, where an $\varepsilon$-perfect equilibrium is a completely mixed strategy profile in which every pure strategy that is not a best reply is played with probability at most $\varepsilon$.
From the definition it follows immediately that any Nash equilibrium in completely mixed strategies is a perfect Nash equilibrium. Furthermore, Selten (1975) proved that any game $G = (I_n, \tilde S, u)$ has a perfect Nash equilibrium, even if the game has no Nash equilibrium in completely mixed strategies.
In the inspection game with $c = 10$ the payoff matrices become

$$A = \begin{pmatrix} 0 & 10 \\ 5 & 5 \end{pmatrix} \quad\text{and}\quad B = \begin{pmatrix} -10 & -10 \\ 0 & 10 \end{pmatrix}.$$
In this case it is optimal for the worker to play action S against any mixed strategy vector $x_2$ of the principal satisfying $x_{21} \leq 1/2$. As long as the probability of not inspect is at least as high as the probability of inspect, it is optimal for the worker to shirk. On the other hand, as the worker shirks, any strategy of the principal gives him an expected payoff equal to $-10$, so any mixed strategy of the principal is a best reply against the action S of the worker. So, any $x^* = (x_1^*, x_2^*)$ with $x_{11}^* = 1 - x_{12}^* = 1$ (the worker plays S for sure) and $0 \leq x_{21}^* = 1 - x_{22}^* \leq 1/2$ (the probability that the principal inspects is at most $1/2$) is a Nash equilibrium. However, if there is a small probability that the worker makes an error by choosing to work, it is optimal for the principal not to inspect, because the action not inspect gives him a higher payoff than the action inspect when the error indeed occurs, while both actions give him the same payoff if the error does not occur. So, as soon as there is an arbitrarily small probability that the worker by mistake will choose to play W, playing a mixed strategy is no longer optimal for the principal and his unique best reply is to play N. The equilibria in which the principal plays a mixed strategy are not robust against the small probability that the worker will deviate from his optimal strategy of playing S. Only the Nash equilibrium in which the principal plays N for sure is a perfect Nash equilibrium.
Although the notion of perfect equilibrium wipes out Nash equilibria which are not robust with respect to small probabilities of mistakes by error-making players, and hence are not credible under the assumption of rational behavior of the players, Myerson (1978) argued that the probability with which a rational player plays a strategy by mistake will depend on the detrimental effect of the non-optimal strategy: more costly mistakes will be less probable than less costly mistakes. To further refine the set of perfect Nash equilibria, Myerson introduced the notion of a proper Nash equilibrium. Again we have that a proper Nash equilibrium is the limit of a sequence of $\varepsilon$-proper Nash equilibria as $\varepsilon$ goes to zero. As in an $\varepsilon$-perfect equilibrium, in an $\varepsilon$-proper Nash equilibrium each pure strategy is played with positive probability. However, if for some player $j$ his $k$th action is a worse reply against the strategies of the other players than some other action $h$, then the probability that action $k$ is played is at most $\varepsilon$, $0 < \varepsilon < 1$, times the probability with which action $h$ is played, so that worse replies are played with smaller probabilities. Formally, for some real number $0 < \varepsilon < 1$, a completely mixed strategy profile $x \in \tilde S$ is an $\varepsilon$-proper Nash equilibrium if for any player $j \in I_n$, $x_{jk} \leq \varepsilon x_{jh}$ whenever $u_j(e_j(k), x_{-j}) < u_j(e_j(h), x_{-j})$. For example, let $x^* = (x_1^*, \ldots, x_n^*)$ be a $0.1$-proper equilibrium and suppose that player 1 has three pure actions satisfying $u_1(e_1(1), x_{-1}^*) < u_1(e_1(2), x_{-1}^*) < u_1(e_1(3), x_{-1}^*)$. Then player 1 plays his optimal third action with a probability of at least $100/111$, his suboptimal second action with a probability of at most one-tenth of the probability of the optimal action, and his worst first action with a probability of at most one-tenth of the probability of the second action.
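The bound $100/111$ follows from making both properness constraints bind, which the following arithmetic sketch verifies.

```python
# Arithmetic check of the 0.1-proper example above: with three actions
# ranked worst < middle < best, properness requires
#   p_middle <= 0.1 * p_best   and   p_worst <= 0.1 * p_middle,
# so the best action's probability is smallest when both bounds bind:
#   p_best * (1 + 0.1 + 0.01) = 1.
eps = 0.1
p_best = 1.0 / (1.0 + eps + eps**2)
p_middle = eps * p_best
p_worst = eps * p_middle
print(p_best)                       # 100/111, approximately 0.9009
print(p_best + p_middle + p_worst)  # the probabilities sum to one
```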
Again it immediately follows that any interior (i.e. completely mixed) Nash equilibrium is a proper Nash equilibrium. Moreover, each proper Nash equilibrium satisfies the conditions of a perfect equilibrium, because the notion of properness further restricts the set of allowable mistakes. Myerson (1978) proved that any game $G = (I_n, \tilde S, u)$ has at least one proper Nash equilibrium.
The concepts of perfect and proper Nash equilibria are nicely illustrated by a well-known example of Myerson (1978). Let the two-person bimatrix game be given by

$$A = B = \begin{pmatrix} 1 & 0 & -9 \\ 0 & 0 & -7 \\ -9 & -7 & -7 \end{pmatrix}.$$
This game has three Nash equilibria in pure strategies, namely each strategy profile in which the two players play the same pure action. However, the Nash equilibrium in which both players play the third action and both receive a payoff of $-7$ is very unlikely. This equilibrium is ruled out by the notion of perfectness, which only allows for the two equilibria in which either both players play their first action, giving both players a payoff of 1, or both players play their second action, giving both players a payoff of 0. The latter equilibrium is ruled out by the notion of properness. So, properness selects the equilibrium in which both players use the pure strategy of playing their first action for sure as the unique outcome of the game.
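The three pure equilibria of Myerson's example can be found by brute force, as the following sketch shows.

```python
import numpy as np

# Sketch: enumerate the pure-strategy Nash equilibria of Myerson's
# example.  A profile (k, h) is a pure Nash equilibrium when k is a best
# reply to h for player 1 and h is a best reply to k for player 2.
A = np.array([[ 1,  0, -9],
              [ 0,  0, -7],
              [-9, -7, -7]])
B = A  # both players have the same payoff matrix in this example

pure_ne = [(k, h)
           for k in range(3) for h in range(3)
           if A[k, h] == A[:, h].max() and B[k, h] == B[k, :].max()]
print(pure_ne)  # the three diagonal profiles
```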
In this section we have discussed the concept of Nash equilibrium and some of its refinements for games in which all players have a finite set of strategies, resulting in payoff functions which are linear in their own strategy variables. Noncooperative games, and in particular the two-player games, have a wide applicability in (micro)economic theory in general, in particular in the fields of industrial organization (for instance spatial location models as introduced by Hotelling (1929), see Goeree (1997) or Webers (1996) and the references mentioned in these dissertations, or auction theory, see McAfee and McMillan (1987)), international trade (see for instance Krishna (1992), Spencer and Jones (1992) or Choi (1995) on the theory of optimal tariffs and protection), and policy analysis (see for instance Barro and Gordon (1983) on monetary policy or Van der Ploeg and De Zeeuw (1992) on environmental policy). In essence, game-theoretic models are needed to analyse multilateral decision-making. Well-known examples are the inspection game, the entry-deterrence game, the coordination game, the stag-hunt game and the advertisement game (prisoner's dilemma). In the next section we restrict ourselves to 2-person games with two identical players.
3 EVOLUTIONARY STABLE STRATEGIES

This brings us to the question what economists can learn from evolutionary game theory, introduced by biologists to study the evolution of populations and the individual behavior of their members. Where has evolutionary game theory brought us and where might it be applicable? We will see that evolutionary game theory is able to analyse and to predict the evolutionary selection of outcomes in interactive environments in which the behavior of the players is conditioned or preprogrammed to follow biological, sociological or economic rules.
Evolutionary or biological game theory originated from the seminal paper 'The logic of animal conflict' by Maynard Smith and Price (1973); see also Maynard Smith (1974, 1982). Maynard Smith considers a population in which members are randomly matched in pairs to play a bimatrix game. The players are
anonymous, that is, any pair of players plays the same symmetric bimatrix game, and the players are identical with respect to their set of strategies and their payoff function. So, for any member of the population, let $m$ be the number of actions and $S = S^m$ the set of mixed strategies. Furthermore, the payoff function $u \colon S \times S \to \mathbb{R}^2$ assigns to any pair of players the payoff pair $(u_1(x, y), u_2(x, y))$, $(x, y) \in S \times S$. The assumed symmetry of the bimatrix game implies that $u_1(x, y) = u_2(y, x)$ for any pair $(x, y) \in S \times S$, which states that the payoff of the first player when he plays $x \in S$ and his opponent plays $y \in S$ is equal to the payoff of the second player when the latter plays $x \in S$ and the first player plays $y \in S$. So, any pair plays the symmetric bimatrix game $G = (A, A^{\top})$ with $A$ an $m \times m$ matrix. Observe that it is not assumed that $A$ is itself symmetric. In the following we denote by $\pi(x, y) = u_1(x, y) = u_2(y, x) = x^{\top} A y$ the payoff of a player playing $x \in S$ against an opponent playing $y \in S$. In this way all members of the population are identical, except in their strategy choice. In biological game theory it is not assumed that the members (animals) of the population behave rationally. Instead it is assumed that any member is preprogrammed with an inherited, possibly mixed, strategy and that this strategy is fixed for life. Now let $x \in S$ be the vector of average frequencies with which the strategies are played by the members of the population. So, $x_k$ is the average probability or frequency over all members of the population with which action $k$, $k = 1, \ldots, m$, is played. Assuming that the population is very large, the differences between the expected strategy frequencies faced by different members are negligible, and the average expected payoff of an arbitrarily chosen member of the population when paired at random with one of the other members is given by $\pi(x, x) = x^{\top} A x$. However,
now suppose that a perturbation of the population occurs and that at random a (small) fraction $\varepsilon$ of the population is replaced by individuals who are all going to play a so-called 'mutant' strategy $q$. Then the vector of average frequencies becomes

$$y_\varepsilon = (1 - \varepsilon)\, x + \varepsilon\, q$$

and hence the average expected payoff of the incumbent individuals of the population when matched at random with a member of the perturbed population becomes

$$\pi(x, y_\varepsilon) = (1 - \varepsilon)\, \pi(x, x) + \varepsilon\, \pi(x, q), \qquad (3)$$

while similarly the average expected payoff of a mutant becomes

$$\pi(q, y_\varepsilon) = (1 - \varepsilon)\, \pi(q, x) + \varepsilon\, \pi(q, q). \qquad (4)$$
Now the population is said to be stable against mutants if for all $q \neq x$ there is an $\varepsilon(q) > 0$ such that for all $0 < \varepsilon < \varepsilon(q)$

$$\pi(x, (1 - \varepsilon) x + \varepsilon q) > \pi(q, (1 - \varepsilon) x + \varepsilon q). \qquad (5)$$

A strategy $x$ satisfying (5) for any $q \neq x$ with some $\varepsilon(q) > 0$ is called an Evolutionary Stable Strategy (ESS). So, an evolutionary stable strategy is stable against any mutant strategy as long as the fraction of mutants is small enough.
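A condition of this kind can be probed numerically. The sketch below checks the two equivalent ESS conditions (first- and second-order best reply) for a 2-action symmetric game against a grid of mutant strategies; the payoff matrix is an illustrative coordination game, and grid sampling is only an illustration, not a proof.

```python
import numpy as np

# Sketch of a numerical ESS check: x is (approximately) evolutionarily
# stable if for every mutant q != x either
#   (i)  q'Ax < x'Ax, or
#   (ii) q'Ax = x'Ax and q'Aq < x'Aq.
def looks_like_ess(A, x, n_grid=101, tol=1e-9):
    for q1 in np.linspace(0.0, 1.0, n_grid):
        q = np.array([q1, 1.0 - q1])
        if np.allclose(q, x):
            continue
        first = q @ A @ x - x @ A @ x       # violates (i) if > 0
        second = q @ A @ q - x @ A @ q      # violates (ii) if >= 0
        if first > tol or (abs(first) <= tol and second >= -tol):
            return False
    return True

A = np.array([[8.0, 1.0], [0.0, 5.0]])      # illustrative coordination game
print(looks_like_ess(A, np.array([1.0, 0.0])))   # True:  pure e(1) is ESS
print(looks_like_ess(A, np.array([1/3, 2/3])))   # False: mixed strategy fails
```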
The reasoning above allows for several viewpoints. Originally, the stability condition was only applied to monomorphic populations, i.e. populations in which all individuals are endowed with the same strategy $x$. In this framework the frequency $x_k$ is the probability with which any member of the population plays action $k$. So, any incumbent individual plays $x$, and hence the expected payoff of an incumbent is equal to the average expected payoff. The concept of Evolutionary Stable Strategy is the central equilibrium concept in the biological game theory of monomorphic populations.
From another viewpoint the stability condition can also be applied to a polymorphic population in which each member is preprogrammed with one of the pure strategies. Within this framework, which originated with mathematicians like Taylor and Jonker (1978) (see also Zeeman (1981)), the frequency $x_k$ is the fraction of members preprogrammed with action $k$. Of course, intermediate cases are possible as well, in which multiple mixed strategies are present in the population.
Although both viewpoints are allowed, for the moment we restrict ourselves to the case of the monomorphic population and suppose that $x \in S$ satisfies the best reply condition $\pi(q, x) \leq \pi(x, x)$ for all $q \in S$. Clearly, this is a necessary and sufficient condition for the strategy pair $(x, x)$ to be a Nash equilibrium of the symmetric bimatrix game $(A, A^{\top})$. In this equilibrium both players play the same strategy $x$. The equilibrium $(x, x) \in S \times S$ is called a symmetric Nash equilibrium and $x$ a symmetric equilibrium strategy. It is well known that any symmetric bimatrix game has at least one symmetric equilibrium, see e.g. Weibull (1995). It should be noted that a symmetric bimatrix game may also have nonsymmetric Nash equilibria in which the players use different strategies. Now, suppose that for a symmetric equilibrium strategy $x \in S$ the stronger condition $\pi(q, x) < \pi(x, x)$ holds for any mutant strategy $q \neq x$. Then it follows from equations (3) and (4) that there is some $\varepsilon(q) > 0$ such that equation (5) holds, and hence the strategy $x \in S$ is also an ESS. However, not every symmetric equilibrium strategy satisfies this stronger condition. To see this, observe that when $x$ is a completely mixed equilibrium strategy, every strategy $q$ is a best reply to $x$ and hence $\pi(q, x) = \pi(x, x)$ for any $q \in S$. More precisely, for any mixed equilibrium strategy $x \in S$ it holds that $\pi(q, x) = \pi(x, x)$ for any $q \in S$ satisfying $q_k = 0$ if $x_k = 0$. So, when $x$ is a mixed equilibrium strategy, $\pi(q, x) < \pi(x, x)$ does not hold for all $q \neq x$. However, in case $\pi(q, x) = \pi(x, x)$, it follows from equations (3) and (4) that equation (5) still holds
if $\pi(q, q) < \pi(x, q)$, i.e. if the incumbent strategy $x$ performs better against the mutant strategy $q$ than the mutant performs against itself. This is called the second-order best reply condition. This gives us the following formal definition of ESS as originally formulated by Maynard Smith and Price (1973).

Definition 3.1 A strategy $x \in S$ is an Evolutionary Stable Strategy (ESS) if for every $q \in S$, $q \neq x$, it holds that
i) $\pi(q, x) \leq \pi(x, x)$ (first-order best reply condition), and
ii) if $\pi(q, x) = \pi(x, x)$, then $\pi(q, q) < \pi(x, q)$ (second-order best reply condition).

Clearly, $x \in S$ satisfies the conditions of definition 3.1 if and only if for any $q \in S$, $x$ satisfies equation (5) for some $\varepsilon(q) > 0$, see e.g. Weibull (1995). Furthermore, the first-order condition i) of definition 3.1 shows that $(x, x)$ is a Nash equilibrium of the bimatrix game $G = (A, A^{\top})$ if $x$ is an ESS. However, the reverse is not true. A symmetric Nash equilibrium strategy $x$ is an ESS only if $x$ also satisfies the second-order condition ii).
Consider, for example, the coordination game with payoff matrix

$$A = \begin{pmatrix} 8 & 1 \\ 0 & 5 \end{pmatrix}.$$
In this game both players receive a payoff of eight when both play their first action and a payoff of five when both play their second action. When the players play different actions, the payoff is equal to one for the player using his first action and zero for the player using his second action. Therefore it is optimal for the players to coordinate on the first action, while coordinating on the second action is still better for both players than playing different actions. This bimatrix game has three symmetric equilibrium strategies, given by the two pure strategies $e(1)$ (both players play the first action) and $e(2)$ (both players play the second action) and the mixed equilibrium strategy $x^* = (1/3, 2/3)^{\top}$ (both players play their first action with probability $1/3$ and their second action with probability $2/3$). However, only the two pure equilibrium strategies are ESS. To see this, consider the equilibrium strategy $e(1)$ and let $q = (q_1, q_2)^{\top}$ be any (mixed) strategy with $q_1 < 1$. Then $q^{\top} A e(1) = 8 q_1 < 8 = e(1)^{\top} A e(1)$, and so conditions i) and ii) of definition 3.1 are satisfied. Analogously, for any (mixed) strategy $q = (q_1, q_2)^{\top}$ with $q_2 < 1$ we have $q^{\top} A e(2) = q_1 + 5 q_2 = 1 + 4 q_2 < 5 = e(2)^{\top} A e(2)$, and again conditions i) and ii) of definition 3.1 are satisfied. Now consider the strategy $x^*$. Then for any $q \in S$ we have that $q^{\top} A x^* = 10/3 = x^{*\top} A x^*$, and the first-order condition i) is satisfied with equality. Now take $q = e(1)$. Then
$q^{\top} A q = 8 > x^{*\top} A q = 8/3$, and hence the second-order condition ii) is not satisfied. It follows that $x^*$ is not an ESS.
By excluding equilibria not satisfying the conditions of definition 3.1, we see that the ESS condition yields a refinement of the set of symmetric Nash equilibria. More precisely, we find that if $x$ is an ESS, then $(x, x)$ is a symmetric proper Nash equilibrium, see e.g. Van Damme (1987), page 224. Following Weibull (1995), page 42, we can say that an ESS satisfies the equilibrium condition and is 'cautious,' i.e. it is robust with respect to low-probability mistakes such that more costly mistakes are less probable than less costly mistakes. Again, the reverse implication is not true. When $(x, x)$ is a symmetric proper Nash equilibrium, $x$ is not necessarily an ESS. Even stronger, not every symmetric bimatrix game has an ESS (for instance, the unique symmetric equilibrium strategy of the Rock-Scissors-Paper game is not an ESS, see Weibull (1995), page 39), while it has recently been proven by Van der Laan and Yang (1996) that any symmetric bimatrix game has a symmetric proper equilibrium. To summarize the above results, let NE denote the set of symmetric Nash equilibrium strategies, PE the set of symmetric proper equilibrium strategies and ES the set of Evolutionary Stable Strategies. Then we have that

$$ES \subset PE \subset NE \quad\text{and}\quad PE \neq \emptyset.$$
4 REPLICATOR DYNAMICS
In the replicator dynamics the population share of each action grows at a rate equal to the difference between the action's expected payoff and the average payoff in the population, i.e. the shares evolve according to the system of differential equations

$$\dot x_k = x_k \big( \pi(e(k), x) - \pi(x, x) \big), \qquad k = 1, \ldots, m. \qquad (6)$$

Taking for granted the theory of differential equations (see for instance the introduction given in Hirsch and Smale (1974)), this system has a unique solution $x(t, x^0)$, $t \geq 0$, for any initial point $x^0 = x(0) \in S$. Furthermore, summing the equations (6) over all $k = 1, \ldots, m$, we get that $\sum_{k=1}^m \dot x_k = 0$ because $\sum_{k=1}^m x_k = 1$, and hence $S$ (and each of its faces) is invariant, i.e. any trajectory starting in (a face of) $S$ stays in (the same face of) $S$. So, $\sum_{k=1}^m x_k(t) = 1$ for all $t$, and $x_k(t) = 0$ for any $t \geq 0$ if $x_k(s) = 0$ for some $s \geq 0$.
The latter property says that if at a certain time the fraction of members playing action $k$ is equal to zero, then it will always remain zero and it has always been zero. On the other hand, $x_k(t) > 0$ for all $t \geq 0$ if $x_k^0 > 0$, so an action survives forever if it is present at $t = 0$. Of course, this does not exclude that $x_k(t)$ converges to zero as $t$ goes to infinity, i.e. it may happen that a trajectory starting in the interior of $S$ converges to the boundary. Now, a Nash equilibrium strategy $x \in S$ (of the monomorphic population in which all members play $x$) is said to be asymptotically stable for the polymorphic population (in which $x_k$ denotes the fraction of members playing action $k$) if there is a neighborhood $X$ of $x$, i.e. an open set $X$ in $S$ containing $x$, such that any trajectory of the replicator dynamics starting at $x^0 \in X$ converges to $x$. The following result is due to Taylor and Jonker (1978), see also Hines (1980) or Zeeman (1981), and can be seen as the basic result relating the replicator dynamics to Evolutionary Stable Strategies.
Theorem 4.1 If $x \in S$ is an Evolutionary Stable Strategy, then $x$ is asymptotically stable for the replicator dynamics (6).
The theorem says that for any ESS there is a neighborhood such that any trajectory of the replicator dynamics starting in this neighborhood converges to this ESS. However, the reverse is not true: the replicator dynamics may converge to a strategy which is not an ESS. Moreover, a trajectory does not always converge to some limit point $x^*$ as $t$ goes to infinity; it may happen that the trajectory is a cycle on $S$ or spirals outward toward the boundary. For other properties of the trajectories, for instance that in the limit the replicator dynamics wipes out all strictly dominated strategies if initially all strategies are present, we refer to e.g. Weibull (1995) or Van Damme (1987). In these references many results are stated on the relation between (asymptotically) stable equilibria of the replicator dynamics and refinements of the Nash equilibrium, for instance that any asymptotically stable strategy is a perfect Nash equilibrium strategy. For a further discussion of the replicator dynamics see also Mailath (1992) and Friedman (1991).
Here we want to restrict ourselves to a more detailed discussion of the results for $2 \times 2$ symmetric matrix games. Let the payoff matrix $A$ be given by

$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}.$$

It is well known that the set of Nash equilibrium strategies is invariant with respect to adding the same constant to the payoffs of one player for a given pure strategy of the other player. From this it follows that we can distinguish three types of $2 \times 2$ games (see also e.g. Weibull (1995) or Friedman (1995)).
In a game of type I one action dominates the other ($(a - c)(d - b) \leq 0$, with at least one strict inequality), and in a game of type II ($a > c$ and $d > b$, the coordination case) both pure strategies are evolutionary stable. In a game of type III ($a < c$ and $d < b$, the hawk-dove case) the unique symmetric Nash equilibrium strategy is the mixed strategy

$$x^* = \left( \frac{d - b}{a + d - b - c},\; \frac{a - c}{a + d - b - c} \right)^{\top}.$$
Observe that for type I and type II games the replicator dynamics converges to a pure evolutionary stable strategy, whereas for type III games the dynamics converges to a mixed ESS. In the latter case the dynamics converges to a population in which a fraction x_1* plays hawk and a fraction 1 - x_1* plays dove, so that in such a stable population two types of individuals can be distinguished. Of course, in equilibrium both types have the same average fitness. We have also seen that for 2 × 2 games the replicator dynamics always leads to an ESS when starting at an interior point of the strategy space. So, for the 2 × 2 case the replicator dynamics always converges. This does not hold in case of games with more than two actions.
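Convergence to the mixed ESS in a 2 × 2 game is easy to check numerically. The following sketch (our own illustration, not part of the original analysis) Euler-integrates the single-population replicator dynamics for a Hawk-Dove game with illustrative payoffs a = -1, b = 2, c = 0, d = 1, for which the mixed ESS is x_1* = (d - b)/(a + d - b - c) = 0.5.

```python
import numpy as np

def replicator(A, x0, dt=0.01, steps=20000):
    """Euler integration of x_k' = x_k * ((A x)_k - x.A.x) on the simplex."""
    x = np.array([x0, 1.0 - x0])
    for _ in range(steps):
        f = A @ x            # fitness of each pure strategy
        phi = x @ f          # average fitness in the population
        x = x + dt * x * (f - phi)
    return x

# Hawk-Dove with value V = 2 and cost C = 4:
# a = (V - C)/2 = -1, b = V = 2, c = 0, d = V/2 = 1
A = np.array([[-1.0, 2.0],
              [0.0, 1.0]])
x = replicator(A, x0=0.9)    # converges to the mixed ESS (0.5, 0.5)
```

Note that the update leaves the simplex invariant: the increments sum to zero, so the fractions keep adding up to one.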
Until now we have discussed the notion of Evolutionary Stable Strategy and
the concept of replicator or population dynamics. We now come to the question
of where this biological game theory has brought us. What is the meaning of this
theory for economics? This question will be addressed in the next sections.
We have seen that the concept of Evolutionary Stable Strategy gives a further
refinement of the Nash equilibrium concept. If an ESS exists, then it is a proper
equilibrium strategy and therefore an ESS is cautious and robust against trembles. Moreover, an ESS is stable against mutants and is asymptotically stable with respect to the replicator dynamics. This latter result shows that rational behavior is not necessary to obtain such a sophisticated equilibrium. A population of genetic players who inherit the behavior of their parents is able to reach an ESS when the number of offspring is determined by the fitness of the strategy of the players. However, the application of this result in economics is not straightforward.
First of all, the replicator dynamics always leads to Nash equilibria. Unfortunately, Nash equilibria often yield bad outcomes for society as a whole. They may suffer from the tragedy of the commons. This is nicely illustrated by the PDG: the Nash equilibrium yields the worst possible outcome, and both players can obtain higher payoffs by playing the dominated strategy. In contrast, in empirical prisoner's dilemma-like situations cooperative behavior can be observed rather frequently. This has inspired many authors to develop game-theoretic models in which the players cooperate by playing the dominated strategy. Many papers on repeated games are devoted to this topic. Unfortunately, biological evolutionary
game theory is not very helpful in explaining cooperative behavior. Even worse, in the Hawk-Dove game the Nash equilibrium selected by the ESS outcome is the worst possible Nash equilibrium. Taking a = d = 0 and hence b > 0 and c > 0, it can easily be seen that both players obtain a higher payoff in either of the two equilibria in pure strategies. So, ESS does not sustain cooperative behavior in type I or type III 2 × 2 games. This might explain why many theoretical papers on evolutionary games in economic journals have focused explicitly on coordination games. Indeed, for such games coordination is sustained by ESS. Furthermore, the most efficient outcome has the largest region of attraction. From this it follows that considerably fewer mutants are needed to bring the population from the worst ESS outcome in a coordination game to the best one than vice versa. Adding small probabilities of mutation to the dynamic process, Kandori, Mailath and Rob (1993), see also Young (1993), show that in the long run the evolutionary process converges to the best possible outcome. Within this framework mutants can be seen as individuals who experiment by deviating from the preprogrammed strategy. For further results on this topic we also refer to Ellison (1993, 1995), Robson and Vega-Redondo (1996), and Bergin and Lipman (1996).
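The long-run selection result can be illustrated with a small computation. The sketch below is our own construction (the payoff numbers, population size and mutation rate are hypothetical, not taken from Kandori, Mailath and Rob (1993)): the population state is the number k of players using action A, one randomly chosen player revises per period (best reply with probability 1 - ε, a random action with probability ε), and the stationary distribution of the resulting birth-death chain concentrates on the equilibrium with the larger basin of attraction.

```python
import numpy as np

N, eps = 10, 0.1
# coordination game: payoff(A,A)=4, (A,B)=0, (B,A)=1, (B,B)=2,
# so A is a best reply when the fraction playing A is at least 0.4
def payoff_A(x): return 4 * x
def payoff_B(x): return 1 * x + 2 * (1 - x)

# transition matrix over states k = 0, ..., N (number of A-players)
P = np.zeros((N + 1, N + 1))
for k in range(N + 1):
    frac = k / N
    p_A = eps / 2 + (1 - eps) * (payoff_A(frac) >= payoff_B(frac))
    up = (N - k) / N * p_A          # a B-player switches to A
    down = k / N * (1 - p_A)        # an A-player switches to B
    if k < N:
        P[k, k + 1] = up
    if k > 0:
        P[k, k - 1] = down
    P[k, k] = 1 - up - down

pi = np.full(N + 1, 1 / (N + 1))    # stationary distribution by power iteration
for _ in range(50000):
    pi = pi @ P
# most of the long-run probability mass sits at the all-A state, which here
# has the larger basin of attraction
```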
For type I and type III 2 × 2 games we may conclude that the replacement of the usual assumption of rationality in economics by the biological fitness criterion is not of any help in sustaining cooperative behavior, because basically both assumptions lead to best reply strategies and therefore the only possible outcomes are the Nash equilibria, which are not Pareto-efficient. Other ideas have to be exploited to model the sustaining of cooperation. Recent literature in the field of sociology and psychology offers a way out by replacing the economic rationality approach by procedural rationality, see e.g. Simon (1976), specifying a rule of behavior. The 'Tit-for-Tat' strategy, well-known from repeated game theory,
is performing worse than oneself, and imitate individuals doing better with a prob-
ability proportional to how much better they perform. It is shown that this be-
havioral rule results in an adjustment process that can be approximated by the
replicator dynamics.
Finally, we would like to mention a third problem that arises when replacing the genetic mechanism by a story of learning or imitation. To formulate this problem, we quote Mailath (1992) on the often assumed bounded rationality of the players. He states:
Players in these models are often quite stupid. For example, in evolu-
tionary models, players are not able to detect cycles that may be gener-
ated by the dynamics. The criticism is that the models seem to rely on
the players being implausibly stupid. Why are the players not able to
figure out what the modeler can? Recall that in the traditional theory,
players are often presumed to be computationally superior to the mod-
eler.
Here we come to the question why, when frequencies are adjusted by a learning process, players would adapt their strategies according to an unsophisticated process like the replicator dynamics. As we have seen before, convergence of the replicator dynamics cannot be guaranteed. In fact, the replicator dynamics is equivalent to the Walrasian tatonnement process in general equilibrium theory and suffers from the same weaknesses. However, in general equilibrium theory several globally convergent price adjustment processes based on path-following methods have been designed, see for instance Van der Laan and Talman (1987) and Kamiya (1990). In these processes cycling is prevented by taking the starting price vector into account. In Van den Elzen and Talman (1991) and Yang (1996) the path-following technique is modified into convergent strategy adjustment processes for noncooperative games in normal form. The latter process always leads to a proper Nash equilibrium. Instead of these deterministic processes, convergence can also be obtained by using stochastic differential equations. For these so-called urn processes we refer to e.g. Arthur (1988) or Dosi, Ermoliev and Kaniovski (1994). It might be interesting to apply both techniques to the framework of a monomorphic population and to search for an interpretation of such processes as sophisticated learning or imitation processes.
man (1995), evolutionary game theory can easily be adapted to model these features.
First, instead of pairwise matching, we consider the case that all players are interacting together, i.e. all players are 'playing the field.' Now the payoff to a player is determined by his own strategy and the strategies of all other players. However, we can still consider the payoff function as a fitness function and apply the replicator dynamics to this population. As an example we take a simple model of a pure exchange economy consisting of a population of economic agents and a fixed number m of commodities. Suppose that initially each individual is endowed with one unit of just one of the commodities. The commodity with which an individual is endowed can be seen as the strategy of the individual, inherited from his parents or obtained by education. Let x_k, k = 1, . . . , m, be the fraction of individuals endowed with one unit of commodity k. Furthermore, the utility an agent obtains from a vector y ∈ R^m of commodities is given by u(y). So, all agents are identical except for the distribution of the endowments. Now, for this economy, suppose there is a unique Walrasian equilibrium price vector p(x) ∈ R^m, i.e. at this price vector each individual maximizes his utility given his income, and the average consumption of commodity k equals x_k, k = 1, . . . , m. Clearly, the Walrasian price vector p(x) depends on the distribution x of the endowments. Within this framework the income p_k(x) of an individual of type k, i.e. an agent endowed with commodity k, can be seen as the payoff of a type k individual. This payoff depends on his own endowment and the endowments of all others. Allowing for the adjustment of endowments, for instance because agents die and are replaced by newborn agents, we can apply the standard replicator dynamics by stating that the distribution of the endowments develops according to the system of differential equations
    ẋ_k = [ p_k(x) - Σ_{h=1}^m p_h(x) x_h ] x_k ,   k = 1, . . . , m ,

so that the fraction of individuals endowed with commodity k will increase (decrease) if p_k(x) is above (below) the weighted average of the prices. Since Σ_{h=1}^m p_h(x) x_h is this weighted average for every k, the system satisfies Σ_{k=1}^m ẋ_k = 0, so that along the solution trajectory the sum of the fractions x_k remains equal to one. Assuming that initially all commodities are available and that
the process will converge, the economy develops to a situation in which all prices are equal. If initially a commodity has a very high price, there is a relative shortage of this commodity and more agents are going to adopt this commodity. On the other hand, fewer individuals are going to adopt a low-priced commodity. It can be shown that the stable state in which all prices are equal maximizes social welfare. This result is in line with the observation by Nelson (1995).
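As a concrete illustration of these endowment dynamics, the sketch below assumes a hypothetical Cobb-Douglas specification u(y) = Σ_k α_k ln y_k (our own choice, not part of the text), for which the Walrasian prices take the simple form p_k(x) = α_k / x_k once average income Σ_h p_h(x) x_h is normalized to one.

```python
import numpy as np

alpha = np.array([0.5, 0.3, 0.2])   # hypothetical Cobb-Douglas weights
x = np.array([0.2, 0.3, 0.5])       # initial endowment distribution

dt = 0.01
for _ in range(5000):
    p = alpha / x                   # Walrasian prices under this specification
    avg = p @ x                     # weighted average income (equals 1 here)
    x = x + dt * (p - avg) * x      # replicator dynamics on the endowments

# the fractions stay on the simplex, and in the stable state all prices
# are equal (here x_k -> alpha_k, so p_k -> 1 for every k)
```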
    i)  (x^1)^T A x^2 ≥ q^T A x^2   for all q ∈ S^{m_1} ,   (7)

    ii) (x^1)^T B x^2 ≥ (x^1)^T B q   for all q ∈ S^{m_2} .   (8)
strategy of a population performs better against the other population than any
mutant strategy. Hence we have the following definition.
Definition 7.1
A strategy pair (x^1, x^2) ∈ S̃ is an Evolutionary Stable Strategy pair of the asymmetric bimatrix game G = (A, B) if it satisfies the Nash equilibrium conditions (7) and (8) with strict inequality.
It has already been noticed by Selten (1980) that an evolutionary stable strategy pair is not only stable when mutants appear in one of the populations but also if mutants appear simultaneously in both populations. More precisely, if (x^1, x^2) is evolutionary stable, then for any q = (q^1, q^2) ∈ S̃ there is an ε(q) ∈ (0, 1) such that for all ε < ε(q) we have that either x^1 performs better against y^2 than q^1 does or x^2 performs better against y^1 than q^2 does, or both, where y^j = (1 - ε)x^j + εq^j, j = 1, 2. So, if mutants appear in both populations at the same time, the incumbent strategy performs better in at least one of the populations and hence within this population the mutants will die out in the limit. As soon as the fraction of mutants within this population is small enough, it follows from Definition 7.1 that in the other population the incumbent strategy performs better than the mutant strategy and that therefore in the latter population too the mutants will die out.
An evolutionary stable strategy pair is known in traditional game theory as a strict Nash equilibrium and has the property that x^j, j = 1, 2, is the unique best reply against x^h, h ≠ j. As already discussed in section 2, this property implies that an evolutionary stable equilibrium is an equilibrium in pure strategies. This shows a serious weakness of evolutionary game theory in case of multiple populations. At any nonstrict equilibrium the populations are not stable against mutants playing alternative best replies. However, for a large class of games a Nash equilibrium in pure strategies does not exist and therefore evolutionary game theory fails to make any prediction about the outcome. For weaker stability concepts we refer to Swinkels (1992).
There are several possibilities to generalize the replicator dynamics from the single population case to the multiple population case. The most common generalization has been proposed by Taylor (1979) and is in a two-population case given by
    ẋ^1_k = [ (e^1(k) - x^1)^T A x^2 ] x^1_k ,   k = 1, . . . , m_1 ,

with again e^j(k) the kth unit vector in R^{m_j}, j = 1, 2. Clearly, this is a straightforward generalization of equation (6) and the solution path of these dynamics has about the same properties as the path of the replicator dynamics for single populations. In particular we see that every strict (evolutionary stable) Nash equilibrium is asymptotically stable and that strictly dominated strategies vanish along any interior solution path. However, since strict Nash equilibria often do not exist, the concept of asymptotic stability in asymmetric games is much less useful than in symmetric games. This has motivated Samuelson and Zhang (1992) to search for more robust stability results by specifying alternative formulations of the dynamics. Unfortunately, even if the dynamics are required to be payoff monotonic, i.e. a strategy with a higher payoff has a higher growth rate, a strictly dominated strategy may survive along the solution path if it is not strictly dominated by just one other pure strategy but only by mixed strategies. This brings Samuelson and Zhang to the conclusion that to get reasonable outcomes additional structure has to be placed on the evolutionary selection or learning process, and that theories of learning are therefore an important area for further research.
For some illustrative examples of the application of the replicator dynamics to nonsymmetric multipopulation models we refer to Weibull (1995) and Friedman (1991). The properties and weaknesses of the dynamics are very nicely demonstrated by the Entry Deterrence Game given by the payoff matrices

    A = ( 2  0     and     B = ( 2  0
          1  1 )                 4  4 ) .
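The behavior of the two-population (Taylor) replicator dynamics on this game can be sketched numerically (our own illustration; we read population 1 as the entrants, with strategies Enter and Stay out, and population 2 as the incumbents, with strategies Accommodate and Fight):

```python
import numpy as np

A = np.array([[2.0, 0.0],     # entrant's payoffs
              [1.0, 1.0]])
B = np.array([[2.0, 0.0],     # incumbent's payoffs
              [4.0, 4.0]])

x1 = np.array([0.5, 0.5])     # entrants' population state
x2 = np.array([0.5, 0.5])     # incumbents' population state
dt = 0.01
for _ in range(20000):
    f1 = A @ x2               # payoff to each pure strategy of population 1
    f2 = B.T @ x1             # payoff to each pure strategy of population 2
    x1 = x1 + dt * x1 * (f1 - x1 @ f1)
    x2 = x2 + dt * x2 * (f2 - x2 @ f2)
# from this interior starting point the path converges to the strict
# Nash equilibrium (Enter, Accommodate)
```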
ultimatum games players offer fair divisions to their opponent and reject unfair bids by their opponent. It is shown that a strategy in which unfair bids are punished appears to be evolutionarily successful when players interact in groups. This result can be interpreted as presenting a foundation for the emergence of social norms.
To conclude this section we would like to stress that random matching of the players is one of the basic assumptions of evolutionary game theory. Although this assumption might be appropriate within a biological framework, in an economic setting an agent often interacts with only a small subset of the other agents. Therefore, as opposed to random matching, some authors have also discussed the implications of local matching rules in which each individual only interacts within a small group of friends or neighbors. Learning under local interaction has been studied for coordination games by e.g. Ellison (1993, 1995) and Berninghaus and Schwalbe (1996). Ellison concludes that under local matching evolutionary forces are much stronger, that is, the system adjusts much faster. While under random matching convergence times are often incredibly long and therefore of little help for useful predictions of the outcome, under local interaction convergence may appear early in the process. Berninghaus and Schwalbe (1996) show that the smaller the neighborhood group, the higher the probability of reaching an efficient equilibrium. Nevertheless, in all these models we are still confined to the Nash equilibrium outcome and hence the worst possible outcome in Prisoner's Dilemma Games. In the next section we will see that under sociological strategy adaptation rules cooperation may emerge in Prisoner's Dilemma Games with local interaction.
Local interaction models are interesting because they often bear some relationship to real-world interaction situations and therefore have a nice economic interpretation. In everyday (economic) life, typically not everyone will interact with all the other individuals present in a certain environment. Each individual has a (small) number of others with whom (s)he interacts, consisting mostly of colleagues, friends, relatives, and business associates, or, in the case of firms, other related firms. These colleagues and friends in their turn interact with their respective groups of relatives and friends, and since these groups usually overlap but are not identical, there is an indirect interaction, through these respective groups of friends and colleagues, between people who do not know each other and who never actually meet. In real
life the size of the group of neighbors will vary across individuals. For reasons of simplicity, however, the group size is taken to be fixed in most of the models in this field of research. Modeling the spatial environment as a discrete 2-dimensional torus, it seems very plausible to think of every individual as having four (adjacent) or eight (adjacent and diagonal) neighbors, see e.g. Ellison (1995). In local interaction models a large number of games is played. At the beginning of every game an individual is selected at random; this individual is called the subject. The subject plays the game with (one or some of) his neighbors. A neighbor is either picked at random or there is some selection criterion in the model. On the basis of the payoff obtained from the game and further information obtained, a player decides whether to change his action the next time he gets to play the game, giving individuals ample opportunity to learn. The specific way in which learning is modeled differs a lot amongst models, but most modelers use some form of rational behavior with bounded recall (best reply dynamics). Most of the time these models are used either to select amongst Nash equilibria or to explain features of the emergence of cooperative behavior even though this cooperative behavior is not one of the Nash equilibria of the game. However, to explain the emergence of cooperative behavior the majority of local interaction models face a problem, since they demand too much rationality of the individuals. In other words, most models incorporate learning dynamics that are far more complicated than those used in everyday life. In our opinion, in general, people do not optimize in most situations in which interaction takes place, but rather let simple heuristics determine their behavior.
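The torus structure just described is straightforward to implement; a minimal sketch (our own illustration):

```python
def neighbors(i, j, n, diagonal=False):
    """Neighbors of cell (i, j) on a discrete n x n torus: the four adjacent
    cells, plus the four diagonal ones if requested; coordinates wrap."""
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if diagonal:
        offsets += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    return [((i + di) % n, (j + dj) % n) for di, dj in offsets]

# every cell has exactly 4 (or 8) neighbors, even at the "edge" of the grid
four = neighbors(0, 0, 5)                  # includes (4, 0) via wrap-around
eight = neighbors(0, 0, 5, diagonal=True)
```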
Based on literature in the field of sociology and psychology, a fundamentally different approach has been chosen in Tieman, Van der Laan and Houba (1996). In this paper the social environment is modeled by a 2-dimensional torus as described above. The dynamics is modeled as an adaptation process on the level of the individual, as Ellison (1995) did, but instead of using best-reply dynamics, a sociological perspective has been chosen. This has been implemented by introducing a decision heuristic or metastrategy for all individuals on the torus, according to which they adapt the action they are playing. This metastrategy is an augmented version of the strategy Tit-for-Tat. Tit-for-Tat is a strategy specified for a game in which an individual can play one of two (pure) actions, cooperate or defect, and it prescribes that an individual start the first game by playing the cooperative action. In the remainder of the game individuals simply play the action the opponent played in the last interaction. So, an individual that uses the Tit-for-Tat strategy punishes an opponent who chooses the defective action by playing defect in the next encounter. Tit-for-Tat is also a forgiving strategy: as soon as the opponent decides to play the cooperative action again, the Tit-for-Tat player will start playing cooperatively again in the next game both players play.
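The Tit-for-Tat rule just described can be written down in a few lines (the payoff numbers below are the usual illustrative Prisoner's Dilemma values, not taken from the text):

```python
def tit_for_tat(opponent_history):
    # cooperate first, then repeat the opponent's previous action
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'D'

# (row action, column action) -> (row payoff, column payoff)
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def play(strat1, strat2, rounds=10):
    h1, h2, score = [], [], [0, 0]
    for _ in range(rounds):
        a1, a2 = strat1(h2), strat2(h1)   # each sees the opponent's history
        h1.append(a1)
        h2.append(a2)
        p1, p2 = PAYOFF[(a1, a2)]
        score[0] += p1
        score[1] += p2
    return score

mutual = play(tit_for_tat, tit_for_tat)        # cooperation in every round
exploited = play(tit_for_tat, always_defect)   # exploited only in round one
```

Against itself, Tit-for-Tat sustains full cooperation; against Always Defect it loses only the first round and punishes thereafter.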
The Tit-for-Tat strategy became very popular after a tournament organized by Robert Axelrod (1984), in which he invited a large number of scientists from different disciplines to send in computer programs to take part in a competition
are the only stochastic components in the model; the remaining part of the simulation runs only involves straightforward (deterministic) calculation of the payoffs and the updating of actions. The simulations involve checks on the stability of the population. A stable state of the population is reported each time it is encountered, and the final results of a simulation run consist of averages over all encountered stable states. These simulations result in a stable state of the population in which a high degree of cooperation (up to about 97% on average) is present, that is, most producers set a relatively high price that is very near the price they would set if they formed a cartel. So, under cooperation profits are maximized by monopolistic price setting. Although these results depend on the exact specification of the metastrategy, they nicely illustrate the general idea that cooperative behavior can be explained by interaction models incorporating sociological adaptation rules. Another interesting result of this model is the emergence of price wars. Even when the population is in a stable state, it may happen that a producer gets into a losing situation and starts to lower his price. Other producers in the neighborhood of this producer start losing customers and therefore are in a losing situation whenever they play against the producer with the lower price. Thus other producers in the neighborhood of this producer also start to lower their prices. In this manner a local price war can emerge. After some time the prices in such a neighborhood engaged in a price war have declined considerably, more producers get to be in winning situations again, and convergence to the same stable situation the population was in before starts again.
This model may explain the emergence of cooperation, even though coopera-
tion is not a Nash equilibrium. This is in accordance with observations in the
real world, where Prisoner’s Dilemma-like situations in which the players play
the cooperative action are often encountered. The arguments that are used to ex-
plain these observations usually incorporate some elements of either altruism or
repeated play. Although there are situations in which these explanations are satisfactory, there are also situations in which they are not. Furthermore, arguments based on observed behavior seem more plausible than many of the claims concerning altruism, mainly because most explanations based on altruism crucially depend on the presence of a certain minimum degree of altruism in the behavior of individuals, and the degree of altruism individuals display is very hard to quantify. A repeated game argument for cooperation seems plausible in situations which are not too complicated. In complicated situations individuals do not perform extensive calculations involving optimal credible punishment and the like. Instead, individuals use heuristics like the one above. Note that there is an element
of punishment present in the heuristic. Whenever the neighbors of the subject
defect on him, he will switch to a more defective action too. However, the above
heuristic, like Tit-for-Tat, is forgiving, since the subject will start displaying more
cooperative behavior a little while after his neighbors play more cooperatively.
9 CONCLUDING REMARKS
Evolutionary game theory puts the static Nash equilibrium concept in a dynamic
setting. This dynamic framework may provide us with a better understanding of
the stability of equilibria and equilibrium selection mechanisms. The standard rep-
licator dynamics has limited applicability in economic models and has to be re-
placed by a process of learning and imitation to be fruitful within the field of
economic theory. In this way evolutionary game theory forces us to think more
thoroughly about the behavior of players outside equilibrium and the game they
are playing. As long as individuals are supposed to play best replies, at best so-
phisticated Nash equilibria will emerge. This may deepen our understanding of
coordination problems, but does not provide us with better insight into cooperation problems. To explain the emergence of cooperative behavior in society as a
whole, we have to allow for more elaborate strategies reflecting social rules in
the behavior of individuals in an interactive environment.
REFERENCES
Arthur, W.B. (1988), 'Self-enforcing Mechanisms in Economics,' in: P.W. Anderson, K.F. Arrow, and R. Pines (eds.), The Economy as an Evolving Complex System, New York, Addison-Wesley, pp. 9–31.
Axelrod, R. (1984), The Evolution of Cooperation, New York, Basic Books.
Barro, R.J. and D.B. Gordon (1983), 'Rules, Discretion, and Reputation in a Model of Monetary Policy,' Journal of Monetary Economics, 12, pp. 101–120.
Bergin, J. and B.L. Lipman (1996), 'Evolution with State-dependent Mutations,' Econometrica, 64, pp. 943–956.
Berninghaus, S.K. and U. Schwalbe (1996), 'Evolution, Interaction and Nash Equilibria,' Journal of Economic Behavior and Organization, 29, pp. 57–85.
Binmore, K. and L. Samuelson (1994), 'Drift,' European Economic Review, 38, pp. 859–867.
Björnerstedt, B.J. and J. Weibull (1993), 'Nash Equilibrium and Evolution by Imitation,' in: K. Arrow and E. Colombatto (eds.), Rationality in Economics, New York, Macmillan, forthcoming.
Björnerstedt, B.J. (1993), Experimentation, Imitation and Evolutionary Dynamics, mimeo, Department of Economics, Stockholm University.
Bomze, I. (1986), 'Non-cooperative Two-person Games in Biology: A Classification,' International Journal of Game Theory, 15, pp. 31–57.
Choi, J.P. (1995), 'Optimal Tariffs and the Choice of Technology: Discriminatory Tariffs vs. the "Most Favored Nation" Clause,' Journal of International Economics, 35, pp. 143–160.
Crawford, V.P. (1989), 'Learning and Mixed-strategy Equilibria in Evolutionary Games,' Journal of Theoretical Biology, 140, pp. 537–550.
Crawford, V.P. (1991), 'An "Evolutionary" Interpretation of Van Huyck, Battalio, and Beil's Experimental Results on Coordination,' Games and Economic Behavior, 3, pp. 25–59.
Crawford, V.P. (1995), 'Adaptive Dynamics in Coordination Games,' Econometrica, 63, pp. 103–143.
Cressman, R. (1992), The Stability Concept of Evolutionary Game Theory, Berlin, Springer-Verlag.
Damme, E. van (1987), Stability and Perfection of Nash Equilibria, Berlin, Springer-Verlag, 2nd edition.
Damme, E. van (1994), 'Evolutionary Game Theory,' European Economic Review, 38, pp. 847–858.
Dosi, G., Y. Ermoliev and Y. Kaniovski (1994), 'Generalized Urn Schemes and Technological Dynamics,' Journal of Mathematical Economics, 23, pp. 1–19.
Ellison, G. (1993), 'Learning, Local Interaction and Coordination,' Econometrica, 61, pp. 1047–1071.
Ellison, G. (1995), Basins of Attraction, Long Run Equilibria, and the Speed of Step-by-Step Evolution, mimeo, Department of Economics, Massachusetts Institute of Technology.
Elzen, A.H. van den and A.J.J. Talman (1991), 'A Procedure for Finding Nash Equilibria in Bi-matrix Games,' ZOR-Methods and Models of Operations Research, 35, pp. 27–43.
Friedman, D. (1991), 'Evolutionary Games in Economics,' Econometrica, 59, pp. 637–666.
Friedman, D. (1995), On Economic Applications of Evolutionary Games, mimeo, Economics Department, University of California, Santa Cruz.
Gale, J., K.G. Binmore, and L. Samuelson (1995), 'Learning to Be Imperfect: The Ultimatum Game,' Games and Economic Behavior, 8, pp. 56–90.
Goeree, J.K. (1997), Applications of Discrete Choice Models, Ph.D. Dissertation, University of Amsterdam, Tinbergen Institute Research Series 135.
Hines, W.G.S. (1980), 'Three Characterizations of Population Strategy Stability,' Journal of Applied Probability, 17, pp. 333–340.
Hirsch, M. and S. Smale (1974), Differential Equations, Dynamical Systems and Linear Algebra, San Diego, Academic Press.
Hotelling, H. (1929), 'Stability in Competition,' Economic Journal, 39, pp. 41–57.
Huck, S., G. Kirchsteiger, and J. Oechssler (1997), Learning to Like What You Have: Explaining the Endowment Effect, mimeo, Humboldt University Berlin.
Huck, S. and J. Oechssler (1996), The Indirect Evolutionary Approach to Explaining Fair Allocations, Discussion Paper 13, Humboldt University Berlin.
Joosten, R. (1996), 'Deterministic Evolutionary Dynamics: A Unifying Approach,' Evolutionary Economics, 6, pp. 313–324.
Kamiya, K. (1990), 'A Global Price Adjustment Process,' Econometrica, 58, pp. 1481–1485.
Kandori, M., G.J. Mailath, and R. Rob (1993), 'Learning, Mutation, and Long Run Equilibria in Games,' Econometrica, 61, pp. 29–56.
Krebs, D.L. and D.T. Miller (1985), 'Altruism and Aggression,' in: G. Lindzey and E. Aronson (eds.), The Handbook of Social Psychology, Hillsdale, NJ, Erlbaum.
Krishna, K. (1992), 'Trade Restrictions as Facilitating Practices,' Journal of International Economics, 32, pp. 31–55.
Laan, G. van der and A.J.J. Talman (1987), 'Adjustment Processes for Finding Economic Equilibrium Problems on the Unit Simplex,' Economics Letters, 23, pp. 119–123.
Laan, G. van der and Z. Yang (1996), On the Existence of Proper Equilibria in Symmetric Bimatrix Games, mimeo, Free University Amsterdam.
Mailath, G.J. (1992), 'Introduction: Symposium on Evolutionary Game Theory,' Journal of Economic Theory, 57, pp. 259–277.
Matsui, A. (1992), 'Best Response Dynamics and Socially Stable Strategies,' Journal of Economic Theory, 57, pp. 343–362.
Maynard Smith, J. (1974), 'The Theory of Games and the Evolution of Animal Conflicts,' Journal of Theoretical Biology, 47, pp. 209–221.
Maynard Smith, J. ~1982!, Evolution and the Theory of Games, Cambridge, Cambridge University
Press.
Maynard Smith, J. and G.R. Price ~1973!, ‘The Logic of Animal Conflict,’ Nature, 246, pp. 15–18.
McAfee, R.P. and J. McMillan ~1987!, ‘Auctions and Bidding,’ Journal of Economic Literature, 22,
pp. 699–738.
Messick, D.M. and W.B.G. Liebrand ~1995!, ‘Individual Heuristics and the Dynamics of Cooperation
in Large Groups,’ Psychological Review, 102, pp. 131-145.
Myerson, R.B. ~1978!, ‘Refinements of the Nash Equilibrium Concept,’ International Journal of Game
Theory, 7, pp. 73–80.
Nachbar, J. ~1990!, ‘ “Evolutionary” Selection Dynamics in Games: Convergence and Limit Proper-
ties,’ International Journal of Game Theory, 19, pp. 59–89.
Nash, J. ~1950!, ‘Equilibrium Points in N-person Games,’ Proceedings of National Academy of Sci-
ence U.S.A., 36, pp. 48–49.
Nelson, R. (1995), ‘Recent Evolutionary Theorizing about Economic Change,’ Journal of Economic Literature, 33, pp. 48–90.
Offerman, T., J. Sonnemans and A. Schram (1995), ‘Value Orientations, Expectations, and Voluntary Contributions in Public Goods,’ forthcoming in Economic Journal.
Ploeg, F. van der and A. de Zeeuw (1992), ‘International Aspects of Pollution Control,’ Environmental and Resource Economics, 2, pp. 117–139.
Rabin, M. (1993), ‘Incorporating Fairness into Game Theory and Economics,’ American Economic Review, 83, pp. 1281–1302.
Robson, A.J. and F. Vega-Redondo (1996), ‘Efficient Equilibrium Selection in Evolutionary Games with Random Matching,’ Journal of Economic Theory, 70, pp. 65–92.
Samuelson, L. and J. Zhang (1992), ‘Evolutionary Stability in Asymmetric Games,’ Journal of Economic Theory, 57, pp. 363–391.
Schlag, K.H. (1996), Why Imitate, and If So, How? A Bounded Rational Approach to Multi-armed Bandits, Discussion Paper no. B-361, Department of Economics, University of Bonn.
Selten, R. (1975), ‘Reexamination of the Perfectness Concept for Equilibrium Points in Extensive Games,’ International Journal of Game Theory, 4, pp. 25–55.
Selten, R. (1980), ‘A Note on Evolutionary Stable Strategies in Asymmetric Animal Conflicts,’ Journal of Theoretical Biology, 84, pp. 93–101.
Selten, R. (1991), ‘Evolution, Learning, and Economic Behavior,’ Games and Economic Behavior, 3, pp. 3–24.
Simon, H.A. (1976), ‘From Substantive to Procedural Rationality,’ in: S.J. Latsis (ed.), Methods and Appraisal in Economics, New York, Cambridge University Press.
Smith, V.L. (1991), ‘Rational Choice: The Contrast between Economics and Psychology,’ Journal of Political Economy, 99, pp. 877–897.
Spencer, B.J. and R.W. Jones (1992), ‘Trade and Production in Vertically Related Markets,’ Journal of International Economics, 32, pp. 31–55.
Swinkels, J. (1992), ‘Evolutionary Stability with Equilibrium Entrants,’ Journal of Economic Theory, 57, pp. 306–332.
Taylor, P. (1979), ‘Evolutionary Stable Strategies with Two Types of Players,’ Journal of Applied Probability, 16, pp. 76–83.
Taylor, P. and L. Jonker (1978), ‘Evolutionary Stable Strategies and Game Dynamics,’ Mathematical Biosciences, 40, pp. 145–156.
Thibaut, J.W. and H.H. Kelley (1959), The Social Psychology of Groups, New York, Wiley.
Thomas, B. (1985), ‘On Evolutionary Stable Sets,’ Journal of Mathematical Biology, 22, pp. 105–115.
Tieman, A.F., G. van der Laan, and H.E.D. Houba (1996), Bertrand Price Competition in a Social Environment, Tinbergen Institute Discussion Paper TI 96-140/8, Free University, Amsterdam.
Van Huyck, J., R. Battalio, and R. Beil (1990), ‘Tacit Coordination Games, Strategic Uncertainty, and Coordination Failure,’ American Economic Review, 80, pp. 234–248.
Van Huyck, J., R. Battalio, and R. Beil (1991), ‘Strategic Uncertainty, Equilibrium Selection Principles, and Coordination Failure in Average Opinion Games,’ Quarterly Journal of Economics, 106, pp. 885–910.
Webers, H.M. (1996), Competition in Spatial Location Models, Ph.D. Dissertation, Center for Economic Research, Tilburg University, CentER Dissertation 16.
Weibull, J.W. (1995), Evolutionary Game Theory, Cambridge, The MIT Press.
Yang, Z. (1996), Simplicial Fixed Point Algorithms and Applications, Ph.D. Dissertation, Center for Economic Research, Tilburg University, CentER Dissertation 9.
Young, H.P. (1993), ‘The Evolution of Conventions,’ Econometrica, 61, pp. 57–84.
Zeeman, E. (1981), ‘Dynamics of the Evolution of Animal Conflicts,’ Journal of Theoretical Biology, 89, pp. 249–270.
Summary
Since the 1950s economists have applied game-theoretic concepts to a wide variety of economic problems. The Nash equilibrium concept has proven to be a powerful instrument for analyzing the outcomes of economic processes. Since the late 1980s economists have also shown a growing interest in the application of evolutionary game theory. This paper discusses the main concepts of evolutionary game theory and their applicability to economic issues. Whereas traditional game theory focuses on static Nash equilibria as the possible outcomes of a game, evolutionary game theory teaches us to model explicitly the behavior of individuals outside equilibrium. This may provide a better understanding of the dynamic forces at work in a society of interacting individuals.