Game Theory

The Competitive Dynamics of Strategy

Game Theory and Strategy


Competitive Behaviour
• An economy is an interdependent system. In "solving" this system, we
have deliberately pushed this interdependency into the background.

• In a competitive market structure, the individual, both as consumer
and producer, is small relative to the market and can thus take
everyone else's behaviour as given.

• For each individual, the rest of the economic world consists of a set
of prices: prices at which he can sell what he produces and buy what
he wants.

• Competitive markets are therefore not really consistent with what we
think of as competitive behaviour at all.
Monopoly Behaviour
• A monopolist is big enough to affect the entire market because they
are the market. But a monopolist deals with many different
consumers, each of whom knows they have no power to influence the
monopolist's behaviour.

• Thus each consumer in a monopoly market reacts passively to the
seller. Consumers make decisions in the marketplace to maximize
consumer welfare at prices the monopolist decides to charge.

• From the monopolist's standpoint, the consumer is not really a person
at all. The consumer is just a demand curve, and that's about it.
Strategic Behaviour
• Our analysis thus far has avoided an important element of human
interaction in many markets – negotiation, bargaining, threats, bluffs,
and anything else that constitutes strategic behaviour.

• Indeed, we are used to seeing human interaction portrayed as a clash
of wills, whether in the boardroom, on the battlefield, or on the silver
screen.

• While we have presented economics thus far in terms of solitary
individuals (consumers/firms), each maximizing against an essentially
non-human environment, we now present economic problems in terms
of a clash of self-willed human beings, and all the strategic behaviour
that entails.
Game Theory
• Games describe situations where there is potential for conflict and
cooperation. Many business activities and social interactions have
both of these features.

• Here we use game theory to show how people should think about
strategic decisions they continually face.

• The essential question is: If I believe that my rivals are rational and
act to maximize their payoff, how should I take their behavior into
account when making my own payoff-maximizing decision?
Types of Games
• 2 Basic Types of Games: Non-cooperative and Cooperative games.

• Cooperative Games occur where players can negotiate binding
contracts that allow them to plan joint strategies.

• Non-cooperative games are those for which negotiation and/or
enforcement of binding contracts are not possible.

• Here we're concerned with non-cooperative games! A crucial feature
is understanding your opponent's point of view and deducing how
they are likely to respond to your actions.
Non-Cooperative Games: Simultaneous One-Shot
Games
• A game can be described in 2 different forms: (1) Normal Form
(i.e., a payoff matrix) or (2) Extensive Form (i.e., a decision
tree).

• Normal Form games consist of a series of lists:
  – List of Players: player 1 vs. player 2.
    • Can be people, firms, families, clubs, governments, etc.
  – List of Actions for each player.
    • Complete and non-random plan for the game.
  – List of Strategy Profiles.
    • Complete list of distinct combinations of actions.
  – List of Outcomes.
    • Complete list of payoffs (i.e., profits, utility, etc.) for each
      strategy profile (summarized by the payoff matrix).
Simultaneous One-Shot Games (cont.)
• Extensive Form games consist of a game tree (i.e., decision tree). It includes:
  – List of players: players 1 and 2.
  – The tree defines the availability and sequence of moves at every
    decision node.
  – Each branch ends at an outcome, i.e., the payoff (profit or utility)
    for each player.
  – Information sets: they show the extent of knowledge of a player
    about his/her position in the tree. A player only knows that he/she
    is in an info. set, which may have one or more nodes.

• Despite the sequential nature of game trees, info. sets allow a game of
simultaneous moves to be described by a game tree.
  – A game where each info. set contains only one decision node
    implies perfect information, i.e., each player knows the complete
    history of play at the time a decision has to be made; otherwise,
    two or more nodes within the same information set implies it's a
    game of imperfect information.
Dominant Strategy Equilibrium
• Some strategies may be successful if rivals make certain choices but
fail otherwise. Other strategies may be successful whatever the rival
does.

• Definition: A dominant strategy is a strategy that is optimal (i.e., a
best response) for a player no matter what its rival does.

  Payoff matrix (cells show Player 2's payoff, Player 1's payoff):

                             Player 1
                        Low P          High P
    Player 2  Low P     (500, 500)     (1000, 0)
              High P    (0, 1000)      (750, 750)

• {Low P, Low P} is a DSE.

• Notice, from a social perspective, the equilibrium is not efficient!

• Collusion may be possible if the game is repeated. This suggests
competition (not cooperation) will occur in settings where players
don't expect to compete on a continuing basis.
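To make the "no matter what the rival does" check concrete, here is a small illustrative sketch (not from the slides); the payoff arrays and the helper name find_dominant_strategy are our own.

```python
import numpy as np

# Payoffs for the Low P / High P pricing game above.
# Rows = Player 2's action (Low P, High P), columns = Player 1's action.
p2_payoff = np.array([[500, 1000],
                      [0,    750]])   # Player 2 (row player)
p1_payoff = np.array([[500,    0],
                      [1000, 750]])   # Player 1 (column player)

def find_dominant_strategy(own_payoff, axis):
    """Return the index of a strictly dominant strategy, or None.

    axis=0: the player chooses the row; axis=1: the player chooses the column.
    """
    # Put the player's own choices on axis 0 for easy comparison.
    payoffs = own_payoff if axis == 0 else own_payoff.T
    n = payoffs.shape[0]
    for s in range(n):
        # s is dominant if it beats every other strategy against every rival action.
        if all((payoffs[s] > payoffs[t]).all() for t in range(n) if t != s):
            return s
    return None

actions = ["Low P", "High P"]
print("Player 2 dominant strategy:", actions[find_dominant_strategy(p2_payoff, axis=0)])
print("Player 1 dominant strategy:", actions[find_dominant_strategy(p1_payoff, axis=1)])
# Both print "Low P", so {Low P, Low P} is the dominant strategy equilibrium.
```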
The Prisoner’s Dilemma: Games of
Dominant Strategies
• Two car thieves are caught in possession of stolen
property.
• Police interrogate them in separate rooms.
• Evidence against them for possession of stolen property
is strong, but evidence of suspected auto theft is not.
• Hence, the Crown needs one to testify against the other to get a
conviction on the more serious charge; otherwise they both get
charged with the lesser offense.
• Crown offers both the same deal.
– Rat against the other and get immunity.
– Be ratted against and get convicted on both charges.
• Should they rat or remain silent?
The Prisoner’s Dilemma:
Games of Dominant Strategies

Payoff matrix (cells show Prisoner A's payoff, Prisoner B's payoff):

                          Prisoner B
                     Silent         Rat
  Prisoner A Silent  (-1, -1)       (-9, 0)
             Rat     (0, -9)        (-6, -6)

"Rat" is a dominant strategy for both prisoners, so {Rat, Rat} is the
dominant strategy equilibrium, even though both would be better off if
both stayed silent.
Pricing for Market Share
Games of Dominant Strategies (cont)

• Two managers want to maximize market share
• Strategies are pricing decisions
• Simultaneous moves
• One-shot game
Pricing for Market Share
Games of Dominant Strategies (cont)
Payoff matrix (market shares; cells show Manager A's share, Manager B's share):

                         Manager B
                 P=10          P=5           P=1
  Manager A P=10 (0.5, 0.5)    (0.2, 0.8)    (0.1, 0.9)
            P=5  (0.8, 0.2)    (0.5, 0.5)    (0.2, 0.8)
            P=1  (0.9, 0.1)    (0.8, 0.2)    (0.5, 0.5)

P=1 is a dominant strategy for both managers, so the equilibrium is
{P=1, P=1}, with each manager holding half the market.


An Advertising Game
• Two firms’ (Kellogg & General Mills)
managers want to maximize profits

• Strategies consist of advertising campaigns

• Simultaneous moves

• One-shot interaction
A One-Shot Advertising Game
Payoff matrix (cells show Kellogg's payoff, General Mills' payoff):

                           General Mills
                 None        Moderate     High
  Kellogg None   (12, 12)    (1, 20)      (-1, 15)
          Mod    (20, 1)     (6, 6)       (0, 9)
          High   (15, -1)    (9, 0)       (2, 2)
Advertising Game (Cont)
• Neither firm has a strictly dominant strategy, i.e., a strategy yielding
a higher payoff than the other strategies regardless of what the rival
does.

• Note, however, that both firms have a strictly dominated strategy:
neither firm will choose "None," no matter what its rival does.

• Kellogg knows General Mills will never choose "None," so Kellogg
can eliminate "None" from General Mills' strategy set when trying to
anticipate what General Mills will do.

• The same goes for Kellogg. It will never choose "None," General
Mills knows that, and so will eliminate "None" from Kellogg's
strategy set when trying to figure out what Kellogg will do.
Advertising Game (Cont)
• By eliminating both firms' dominated strategies, both firms know they
are really playing a 2x2 game, not a 3x3 (cells show Kellogg's payoff,
General Mills' payoff):

                        General Mills
                   M              H
  Kellogg   M      (6, 6)         (0, 9)
            H      (9, 0)         (2, 2)

• By the same reasoning, both firms know the other will never play
"Moderate," given that "None" has been eliminated from each firm's
strategy set.

• That is, "Moderate" is dominated for both firms given that "None" is
dominated for both.

• Thus, the solution, i.e., the equilibrium, is {High, High}={2,2}.

• The solution technique used here is called Iterated Elimination of
Dominated Strategies.
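The elimination logic above can be automated. Below is an illustrative sketch of iterated elimination of strictly dominated strategies applied to the 3x3 advertising game; the function name iterated_elimination is our own.

```python
import numpy as np

actions = ["None", "Moderate", "High"]
# kellogg[i, j] / gm[i, j]: payoffs when Kellogg plays row i and General Mills plays column j.
kellogg = np.array([[12,  1, -1],
                    [20,  6,  0],
                    [15,  9,  2]])
gm      = np.array([[12, 20, 15],
                    [ 1,  6,  9],
                    [-1,  0,  2]])

def iterated_elimination(row_payoff, col_payoff):
    """Repeatedly delete strictly dominated rows/columns until none remain."""
    rows = list(range(row_payoff.shape[0]))
    cols = list(range(col_payoff.shape[1]))
    changed = True
    while changed:
        changed = False
        # A row r is strictly dominated if another row beats it against every surviving column.
        for r in rows[:]:
            if any((row_payoff[np.ix_([s], cols)] > row_payoff[np.ix_([r], cols)]).all()
                   for s in rows if s != r):
                rows.remove(r); changed = True
        # Symmetric check for the column player.
        for c in cols[:]:
            if any((col_payoff[np.ix_(rows, [s])] > col_payoff[np.ix_(rows, [c])]).all()
                   for s in cols if s != c):
                cols.remove(c); changed = True
    return rows, cols

rows, cols = iterated_elimination(kellogg, gm)
print([actions[r] for r in rows], [actions[c] for c in cols])  # ['High'] ['High']
```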
Nash Equilibrium
• To determine the likely outcome of a game we want stable
or self-enforcing strategies. Not all games have dominant
(or dominated) strategies. Thus, we use the more general
solution concept of a Nash Equilibrium (NE).

• DSE: I’m doing the best I can no matter what you do.
You’re doing the best you can no matter what I
do.

• NE: I’m doing the best I can given what you’re doing.
You’re doing the best you can given what I’m doing.

• DSE is a special case of a NE.


Nash Equilibrium (Cont)

• Consider the same game as above, i.e., two firms choosing between
{Low P, High P}, but with Player 2's payoff in the {High P, Low P}
cell changed. This time, Player 2 no longer has a dominant strategy.

  Payoff matrix (cells show Player 2's payoff, Player 1's payoff):

                             Player 1
                        Low P          High P
    Player 2  Low P     (500, 500)     (1000, 0)
              High P    (600, 1000)    (750, 750)

• But if 2 puts itself in 1's shoes, 2 will know that 1 will not choose
"High P" (the dominated strategy); 2 will know that 1 will pick "Low P."

• Given that 1 will choose "Low P," 2 will choose "High P."

• Nash equilibrium: {High P, Low P}={600, 1000}.

• Check that neither firm has an incentive to deviate from
{High P, Low P}.
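The deviation check can also be done by brute force. The following sketch (ours, not from the slides) enumerates all strategy profiles of the modified pricing game and keeps those where neither player can gain by deviating; the helper name pure_nash is our own.

```python
import numpy as np
from itertools import product

def pure_nash(row_payoff, col_payoff):
    """Return all pure-strategy Nash equilibria as (row, col) index pairs.

    A profile is a NE if neither player can gain by a unilateral deviation.
    """
    equilibria = []
    n_rows, n_cols = row_payoff.shape
    for r, c in product(range(n_rows), range(n_cols)):
        row_ok = row_payoff[r, c] >= row_payoff[:, c].max()  # row player's best response
        col_ok = col_payoff[r, c] >= col_payoff[r, :].max()  # column player's best response
        if row_ok and col_ok:
            equilibria.append((r, c))
    return equilibria

# Modified game: rows = Player 2 (Low P, High P), columns = Player 1 (Low P, High P).
p2 = np.array([[500, 1000],
               [600,  750]])
p1 = np.array([[500,    0],
               [1000, 750]])

labels = ["Low P", "High P"]
for r, c in pure_nash(p2, p1):
    print(f"NE: Player 2 plays {labels[r]}, Player 1 plays {labels[c]} "
          f"-> payoffs ({p2[r, c]}, {p1[r, c]})")
# Prints the single equilibrium: Player 2 High P, Player 1 Low P -> (600, 1000).
```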
Justification of a Nash Equilibrium

• If the players play a Nash Equilibrium, no one has an incentive to
change their play or have second thoughts about their strategy.

• Other potential outcomes do not have this property; if an outcome is
not a NE, at least one player wishes to reconsider their strategy.

• Games may have more than a single NE; we refer to these games as
having multiple equilibria.
Multiple equilibria

• Product Choice Problem: Cereal. The goal is to avoid introducing the
same product as the rival.

  Payoff matrix (cells show firm 1's payoff, firm 2's payoff):

                           2
                    Crispy        Sweet
  1   Crispy        (-5, -5)      (10, 10)
      Sweet         (10, 10)      (-5, -5)

• Iteratively eliminating dominated strategies eliminates nothing!

• Two Nash Equilibria: {Crispy, Sweet} and {Sweet, Crispy}. Can you
see why?

• This is called a coordination game; both firms have an incentive to
coordinate on one of the Nash Equilibria.

• At this point, we cannot tell where they'll go, but we'll learn how to
predict where they'll likely go.
Maximin Strategies (i.e., Secure Strategies)
• The concept of NE relies heavily on individual rationality. Each
player's choice depends not only on its own rationality, but also on
that of its opponent. This can be a limitation.

  Payoff matrix (cells show player 1's payoff, player 2's payoff):

                          2
                    Left            Right
  1   Top           (1, 0)          (1, 1)
      Bottom        (-1000, 0)      (2, 1)

• Here, {Bottom, Right} is the unique NE.

• But 1 had better be sure 2 is rational and plays "Right."

• Instead, if 1 is cautious, 1 might choose "Top," assuring a payoff of 1
rather than risking -1000. This is called a maximin strategy, because
it maximizes the minimum gain that can be earned.

• Solution technique: Player 1 forecasts the worst outcome for each
action and chooses the action that offers the highest payoff among the
worst payoffs.

• In this example, it's only necessary for 1 to do this, since only player 1
has the potential for a very bad outcome.
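A minimal sketch (ours) of the maximin calculation for player 1 in the game above:

```python
import numpy as np

# Player 1's payoffs: rows = 1's actions (Top, Bottom), columns = 2's actions (Left, Right).
p1 = np.array([[1,      1],
               [-1000,  2]])
actions = ["Top", "Bottom"]

worst_case = p1.min(axis=1)           # worst payoff for each of player 1's actions
maximin_action = int(worst_case.argmax())

print(dict(zip(actions, worst_case.tolist())))        # {'Top': 1, 'Bottom': -1000}
print("Maximin strategy:", actions[maximin_action])   # Top
```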
Mixed Strategy Equilibrium
• Some games of simultaneous choice have no Nash equilibrium in pure
strategies. Hence, we need a way to "solve" the game.

• Some simultaneous games have multiple equilibria in pure strategies.
Hence, we need a way to discriminate between them, i.e., equilibrium
selection.

• These types of situations often arise when it pays to be seen as
unpredictable, i.e., when there are benefits that stem from the element
of surprise.

• The key to mixed strategies is to expand the strategy set by allowing
players to randomize between pure strategies; that is, to play pure
strategies with some probability less than 1.

• Hence, random strategies are known as mixed strategies, because the
players mix across various pure strategies.

* Technically, we call it a mixed strategy Nash equilibrium, because it
still satisfies all the properties of a Nash equilibrium defined above.
Examples of Mixed Strategies
• Tax authority and taxpayers
• Speed traps by police on the highway
• Bait cars in Vancouver
• Tennis, football or virtually any sport.
• Political mudslinging
• Military tactics
• Card games like poker
Matching Pennies

• Both players simultaneously choose "Heads" or "Tails."

• If they match, Row gets the coins; if they don't, Column gets the coins.

  Payoff matrix (cells show Row's payoff, Column's payoff):

                           Column
                    Heads          Tails
  Row   Heads       (1, -1)        (-1, 1)
        Tails       (-1, 1)        (1, -1)

• There is no NE in pure strategies!

• But there is a NE if the players randomize their choices, i.e., if players
appear unpredictable.

• Mixing across various pure strategies is called playing mixed
strategies.
Equilibrium with Mixed Strategies
• Suppose Row believes Column will play "Heads" with probability p.

                           Column
                    Heads (p)      Tails (1-p)
  Row   Heads       (1, -1)        (-1, 1)
        Tails       (-1, 1)        (1, -1)

• If Row plays "Heads," he expects to get: (1)p + (-1)(1-p) = 2p-1.

• If Row plays "Tails," he expects to get: (-1)p + (1)(1-p) = 1-2p.

• If 2p-1 > 1-2p, Row is better off playing "Heads."

• If 2p-1 < 1-2p, Row is better off playing "Tails."

• If 2p-1 = 1-2p, Row gets the same payoff whether he plays "Heads,"
"Tails," or flips a coin.

• Randomization requires equality of expected payoffs. When Row
randomizes between Heads and Tails, Row's expected payoff must be
the same under both strategies. If this were not the case, Row would
prefer one strategy over the other. Row is willing to randomize
provided p = 1/2.

Don't get confused by the fact that it's Column's probability (p) that
determines Row's expected payoff and hence Row's willingness to randomize!

• If p = 1/2, Row is indifferent between Heads and Tails.

• If p < 1/2, Row chooses Tails (q=0).

• If p > 1/2, Row chooses Heads (q=1).

• If q = 1/2, Column is indifferent.

• If q < 1/2, Column chooses Heads (p=1).

• If q > 1/2, Column chooses Tails (p=0).

                            Column
                     Heads (p)    Tails (1-p)    Row's Exp. Payoff
  Row  Heads (q)     (1, -1)      (-1, 1)        2p-1
       Tails (1-q)   (-1, 1)      (1, -1)        1-2p
  Column's
  Exp. Payoff        1-2q         2q-1

* Mixed Strategy Nash Equilibrium: p = q = 1/2.
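The indifference argument generalizes to any 2x2 game with a fully mixed equilibrium. The sketch below (ours; the helper name mixed_equilibrium_2x2 is not from the slides) solves the two indifference conditions and recovers p = q = 1/2 for Matching Pennies.

```python
import numpy as np

def mixed_equilibrium_2x2(row_payoff, col_payoff):
    """Solve the indifference conditions of a 2x2 game.

    Returns (q, p): q = probability the row player plays its first action,
    p = probability the column player plays its first action.
    Each player's mix is chosen to make the *other* player indifferent.
    """
    A, B = np.asarray(row_payoff, float), np.asarray(col_payoff, float)
    # Column's mix p makes Row indifferent: p*A[0,0] + (1-p)*A[0,1] = p*A[1,0] + (1-p)*A[1,1]
    p = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1])
    # Row's mix q makes Column indifferent: q*B[0,0] + (1-q)*B[1,0] = q*B[0,1] + (1-q)*B[1,1]
    q = (B[1, 1] - B[1, 0]) / (B[0, 0] - B[1, 0] - B[0, 1] + B[1, 1])
    return q, p

# Matching Pennies: rows/columns ordered (Heads, Tails).
row = [[1, -1], [-1, 1]]
col = [[-1, 1], [1, -1]]
print(mixed_equilibrium_2x2(row, col))  # (0.5, 0.5)
```

The same function applied to the Battle of the Sexes payoffs below returns (q, p) = (0.75, 0.25), matching the indifference calculations on the next slides.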
The Battle of the Sexes

• Payoff matrix (cells show Man's payoff, Woman's payoff):

                           Woman
                    Baseball       Ballet
  Man  Baseball     (3, 2)         (1, 1)
       Ballet       (0, 0)         (2, 3)

• This game has 2 NE in pure strategies: (Baseball, Baseball) and
(Ballet, Ballet), i.e., (3,2) and (2,3).

• What about a mixed strategy equilibrium? Let p be the probability the
Woman chooses Baseball and q the probability the Man chooses
Baseball.

• Man's expected payoff from Baseball is 3p + 1(1-p) = 2p+1; from
Ballet it is 0p + 2(1-p) = 2-2p. The Man is indifferent when
2p+1 = 2-2p, or when p = 1/4.

• Man's best response:
  If p < 1/4, Man chooses Ballet (q=0).
  If p > 1/4, Man chooses Baseball (q=1).
  If p = 1/4, Man is indifferent.

• Woman's expected payoff from Baseball is 2q + 0(1-q) = 2q; from
Ballet it is 1q + 3(1-q) = 3-2q. The Woman is indifferent when
2q = 3-2q, or when q = 3/4.

• Woman's best response:
  If q < 3/4, Woman chooses Ballet (p=0).
  If q > 3/4, Woman chooses Baseball (p=1).
  If q = 3/4, Woman is indifferent.
The Battle of the Sexes

• Three Nash Equilibria.

• Two in pure strategies:
  (Ballet, Ballet), or (p=0, q=0)
  (Baseball, Baseball), or (p=1, q=1)

• One in mixed strategies:
  Man: {Baseball: q=3/4, Ballet: (1-q)=1/4}
  Woman: {Baseball: p=1/4, Ballet: (1-p)=3/4}

• Since the probabilities are independent, we can compute the
likelihood of each outcome, i.e., the joint probability of each strategy
profile:

                               Woman
                        Baseball (p=1/4)   Ballet (1-p=3/4)
  Man  Baseball (q=3/4)     3/16               9/16
       Ballet (1-q=1/4)     1/16               3/16

• The most likely outcome (9/16) is that each goes to their most
preferred event, even though this results in a worse outcome than
going together somewhere.

• Furthermore, in the event they do go together to the same event, it is
equally likely (3/16 each) that they go to the ballet or the baseball
game.

• Note: this result follows from the symmetry of the payoffs.

• If the payoffs were asymmetric, then we could say which of the two
pure strategy equilibria the couple is more likely to coordinate on.
That is, with asymmetric payoffs, equilibrium selection is possible.
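An illustrative check (ours) of the joint outcome probabilities and the resulting expected payoffs in the mixed equilibrium:

```python
import numpy as np

# Man's payoffs (rows: Baseball, Ballet) and Woman's payoffs (columns: Baseball, Ballet).
man   = np.array([[3, 1], [0, 2]])
woman = np.array([[2, 1], [0, 3]])

# Mixed-strategy equilibrium probabilities derived on the slide above.
q = 3/4          # Man plays Baseball with probability q
p = 1/4          # Woman plays Baseball with probability p

man_mix   = np.array([q, 1 - q])
woman_mix = np.array([p, 1 - p])

# Joint probability of each strategy profile (outer product of the two independent mixes).
joint = np.outer(man_mix, woman_mix)
print(joint)                      # [[3/16, 9/16], [1/16, 3/16]]

# Expected payoffs in the mixed equilibrium.
print(float((joint * man).sum()), float((joint * woman).sum()))   # 1.5 and 1.5
```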
Comments about Mixed Strategies
• Students are often uneasy about the notion that players choose strategies randomly.

• Indeed, people don't generally make important decisions by throwing dice, flipping coins, or
reading their horoscope.

• Thus, the best way to think about mixed strategies is that they represent the beliefs of the
other players in the game as to what a given player is going to do.

• Example: Let the following be a mixed strategy equilibrium:
  Row: {Up: q=1/4, Down: (1-q)=3/4}
  Column: {Left: p=2/3, Right: (1-p)=1/3}

• Row believes Column is unpredictable in any given play but is twice as likely to pick Left
over Right, i.e., 1/3 x 2 = 2/3.

• Column believes Row is unpredictable in any given play but is three times as likely to
choose Down over Up, i.e., 1/4 x 3 = 3/4.

• This does not mean people actually choose randomly. People can choose deterministically
(i.e., the decision could be determined by what side of the bed Row and Column get up on),
but it fundamentally requires that your opponent does not know (and cannot infer) your
decision rule. Thus, decisions can be made according to a deterministic rule, but it must
appear random to your opponent.
Introduction to Sequential Games

• Not all games are played simultaneously. In fact, many strategic
situations involve sequential decision-making.

• Consider the simultaneous (coordination) game of implementing a
new communications system. Both Boeing and Airbus benefit by
choosing the same system (scale effects, learning curve effects for
airlines, etc.).

  Payoff matrix (cells show Boeing's payoff, Airbus's payoff):

                          Airbus
                    alpha          beta
  Boeing  alpha     (100, 50)      (40, 40)
          beta      (25, 25)       (50, 100)

• Verify there are 2 Nash equilibria: (alpha, alpha); (beta, beta)
Simultaneous Choice in Extensive Form (game trees)
We can capture the exact same game in extensive form. Note the use of an information set
to capture the idea that Airbus doesn't know whether it's at Node 2 or 3 when it makes
its choice. In other words, Airbus must choose without knowing what Boeing has done;
this is what makes it simultaneous play.

[Game tree: Boeing moves at Node 1 (Alpha or Beta); Airbus moves at Nodes 2 and 3,
which lie in the same information set; the four terminal payoffs (Boeing, Airbus) are
(100,50), (40,40), (25,25), (50,100).]


2-Stage Sequential Games
• Now suppose the game is played sequentially, where Boeing goes first.

• Here, Airbus knows what choice Boeing has made; i.e., Airbus knows
where it is in the game tree (nodes 2 and 3 are now in different
information sets).

[Game tree: Boeing chooses alpha or beta at the root; Airbus then chooses
alpha or beta at node 2 (after alpha) or node 3 (after beta); terminal
payoffs (Boeing, Airbus) are (100,50), (40,40), (25,25), (50,100).]

• The solution technique here is "backward induction." Boeing
anticipates what Airbus will do at nodes 2 and 3, and then makes its
choice knowing what Airbus will do in response.

• 2nd stage:
  At node 2, Airbus will choose alpha (50>40).
  At node 3, Airbus will choose beta (100>25).

• 1st stage:
  Knowing how the 2nd stage will unfold, Boeing will choose alpha
  (since 100>50).

• The NE consists of the strategy profile {alpha, alpha}.
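A small illustrative sketch (ours, not from the slides) of backward induction on this two-stage game; payoffs are written as (Boeing, Airbus).

```python
# Terminal payoffs (Boeing, Airbus) indexed by (Boeing's choice, Airbus's choice).
payoffs = {
    ("alpha", "alpha"): (100, 50),
    ("alpha", "beta"):  (40, 40),
    ("beta",  "alpha"): (25, 25),
    ("beta",  "beta"):  (50, 100),
}
choices = ["alpha", "beta"]

def backward_induction(payoffs):
    """Solve the sequential game: Boeing moves first, Airbus observes and responds."""
    # 2nd stage: Airbus's best response to each possible Boeing choice.
    airbus_response = {
        b: max(choices, key=lambda a: payoffs[(b, a)][1]) for b in choices
    }
    # 1st stage: Boeing anticipates Airbus's responses and picks its best option.
    boeing_choice = max(choices, key=lambda b: payoffs[(b, airbus_response[b])][0])
    return boeing_choice, airbus_response[boeing_choice]

b, a = backward_induction(payoffs)
print(b, a, payoffs[(b, a)])   # alpha alpha (100, 50)
```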
Comments
• Note that while the simultaneous game has 2 Nash equilibria, the
sequential game has only 1 Nash equilibrium.

• Note also that Boeing has an advantage by virtue of choosing first.
However, if Airbus made its choice first, Airbus would have an
advantage, and the equilibrium would be different. In this case, Airbus
would know that Boeing has an incentive to match technologies.

• Verify that the NE with Airbus choosing first is {Beta, Beta}.

• You must not conclude from this that all sequential games have
first-mover advantages. They don't. Sometimes it pays to move second
(e.g., product imitation, process innovation through reverse
engineering, etc.).
Credible Threats and Commitment
• With Boeing choosing first, Airbus has an incentive to influence the
actions of Boeing in ways favorable to itself.

• Specifically, Airbus would like Boeing to choose "beta," since
100>50.

• So how might Airbus get Boeing to choose "beta"? Suppose Airbus
announces its plan (in the media) to choose "beta" no matter what
Boeing does, which hopefully gives Boeing an incentive to choose
"beta" as well (since 50>40).

• Is this a credible threat?

• Clearly not! If Boeing ignores Airbus' threat and chooses alpha,
Boeing knows it's in Airbus' interest to also adopt alpha. Thus, Airbus'
threat is cheap talk.
Commitment (cont.)
• Credible threats require Airbus to restrict its own future actions by
making a binding commitment to beta. A commitment means Airbus
will choose beta no matter what Boeing does. Only then will Boeing
change its beliefs as to what Airbus will do.

• Example: suppose Airbus signs a long-term contract with the beta
company. The contract stipulates that if Airbus breaches the contract
(by choosing alpha), Airbus pays the beta company 20 in liquidated
damages.

• Now the payoff for {alpha, alpha} is (100, 30), and for {beta, alpha}
it is (25, 5).

[Game tree as before, but with Airbus's payoff reduced by 20 whenever it
chooses alpha: terminal payoffs (Boeing, Airbus) become (100,30), (40,40),
(25,5), (50,100).]

• Is Airbus' threat of choosing beta credible now?

• Yes! Airbus now has an incentive to choose beta no matter what
Boeing does, and this commitment is sufficient to change Boeing's
beliefs.

• The NE is now {beta, beta}=(50,100).
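Re-running the same backward-induction logic on the contract-modified payoffs (again, an illustration of ours) confirms that the commitment flips the equilibrium to {beta, beta}:

```python
# Payoffs (Boeing, Airbus) after Airbus commits via the liquidated-damages contract:
# Airbus loses 20 whenever it chooses alpha.
payoffs = {
    ("alpha", "alpha"): (100, 30),
    ("alpha", "beta"):  (40, 40),
    ("beta",  "alpha"): (25, 5),
    ("beta",  "beta"):  (50, 100),
}
choices = ["alpha", "beta"]

# 2nd stage: Airbus now prefers beta at both decision nodes.
airbus_response = {b: max(choices, key=lambda a: payoffs[(b, a)][1]) for b in choices}

# 1st stage: anticipating this, Boeing compares 40 (from alpha) with 50 (from beta).
boeing_choice = max(choices, key=lambda b: payoffs[(b, airbus_response[b])][0])

print(boeing_choice, airbus_response[boeing_choice])             # beta beta
print(payoffs[(boeing_choice, airbus_response[boeing_choice])])  # (50, 100)
```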
Repeated Games
• A Repeated Game is a simultaneous choice game played two or more
times (i.e., stage 1, stage 2, and so on).

• Repeated games played an unknown (possibly infinite) number of
times are called Supergames.

• Many games are Supergames:
   When GM and Toyota compete, they have no idea how many years
    they will continue to do so.
   Interactions among couples in marriage, the workplace, etc.

• We start by looking at what happens when a game is played for a
fixed but known number of times. We'll then turn to the more realistic
case of Supergames.
Reconsider the One-Shot Advertising Game

Payoff matrix (cells show Kellogg's payoff, General Mills' payoff):

                           General Mills
                 None        Moderate     High
  Kellogg None   (12, 12)    (1, 20)      (-1, 15)
          Mod    (20, 1)     (6, 6)       (0, 9)
          High   (15, -1)    (9, 0)       (2, 2)

The NE is {High, High} = (2,2). It's not efficient.
Prisoner’s Dilemma
• Private rationality leads to collective disaster!
   The equilibrium that arises from playing equilibrium (possibly
    dominant) strategies is worse for every player than the outcome
    that would arise if every player adopted another (possibly
    dominated) strategy.

• The goal: to overcome the incentive to "cheat" and sustain a mutually
beneficial cooperative outcome.
Moving Beyond the Prisoner’s Dilemma

• Why does the Dilemma occur?

 Nature of Interaction

No fear of punishment


Myopia (i.e., short-sighted play)

• To overcome the dilemma, we require:

 Introduction of repeated encounters


 Introduction of uncertainty to the payoff
Finite Interaction: (Theoretical Note)
• Suppose a relationship between two firms lasts for
only T periods (where T is finite).

• Use backward induction!

• Period T: no incentive to cooperate.

• Period T-1: no incentive to cooperate.
  – Both know there's no cooperation in period T, so both treat the
    second-to-last period as if it's the last.

• Unraveling: by similar logic, both players treat the very first period
as if it's the last!

• This outcome is called the final period problem!


Example: Can collusion work in the advertising game if
it’s repeated 2 times? Can they coordinate on the
strategy profile {None, None}?
Payoff matrix (cells show Kellogg's payoff, General Mills' payoff); the *
marks the cooperative profile {None, None}:

                           General Mills
                 None        Moderate     High
  Kellogg None   (12, 12)*   (1, 20)      (-1, 15)
          Mod    (20, 1)     (6, 6)       (0, 9)
          High   (15, -1)    (9, 0)       (2, 2)
Answer: No … by backward induction.
• In period 2, the game is a one-shot game, so equilibrium entails "High
Advertising" in the last period.

• This means period 1 is "really" the last period, since everyone knows
what will happen in period 2.

• Equilibrium entails High Advertising by each firm in both periods.

• Important! Cooperation is not likely if the relationship between
players is for a fixed and known length of time.

• But players think forward if …
   Game length is uncertain (i.e., probabilistic termination).

Long-term Repeated Interaction
• No (known) last period, so no rollback.

• Use history-dependent strategies called Trigger Strategies.

Begin by cooperating.
Cooperate as long as rivals do.
Upon observing a “defection,” immediately revert to
a period of punishment of specified length in which
everyone plays non-cooperatively.

• We call this basic type of strategy a trigger strategy because defection
"triggers" a response.
Two Extreme Trigger Strategies
• Grim Trigger Strategy (GTS):
 Cooperate until a rival “defects.”
 Once a defection occurs, play non-
cooperatively for the rest of the game (i.e.,
indefinitely).

• Tit-for-Tat (TFT):
 Cooperate if your rival cooperated in the
most recent period.
 Defect if your rival cheated in the most
recent period.
Trigger Strategy Extremes
• Tit-for-Tat is:
   Most forgiving
   Shortest memory
   Proportional response
   Credible but lacks deterrence

• Grim trigger is:
   Least forgiving
   Longest memory
   MAD
   Adequate deterrence but lacks credibility

• TFT answers: "Is cooperation easy?"

• GTS answers: "Is cooperation possible?"
Why Cooperate (Against GTS)?

• Cooperate if the present value (PV) of cooperation is greater than the
PV of defection.

  Payoff matrix (cells show Firm 1's payoff, Firm 2's payoff):

                          Firm 2
                    Low            High
  Firm 1  Low       (54, 54)       (72, 47)
          High      (47, 72)       (60, 60)

• Cooperate (both keep price High): 60 today, 60 next year, 60, 60, …

• Defect (undercut a cooperating rival once, then face punishment at
Low, Low forever): 72 today, 54 next year, 54, 54, …
Technical Note: Discounting
• The sum of a geometric sequence is called a geometric series.
• If the sequence goes off to infinity, then its sum is an infinite
geometric series.

• Infinite sequence: 1, δ, δ², δ³, δ⁴, …

• Infinite series: 1 + δ + δ² + δ³ + δ⁴ + …

• The series converges to 1/(1-δ) as long as δ ∈ (-1, 1), i.e., as long as δ
takes on a value in the open interval (-1, 1).

• Think of the infinite series as representing a special type of annuity
called a perpetuity, i.e., a sequence of $1 payments forever.

• Also, you can think of δ as being either:
  (i) the probability the game continues into the next period, or
  (ii) the "discount factor," i.e., the factor by which we discount a
       continuing stream of payments. Frequently, the discount factor is
       defined in terms of the interest rate (or cost of capital); that is,
       δ = 1/(1+r).
Technical Note (Cont.)
• Perpetuity where payments are made at the beginning of each period:
  1 + δ + δ² + δ³ + δ⁴ + … = 1/(1-δ)
  Why?
  (1) z = 1 + δ + δ² + δ³ + …
  (2) zδ = δ + δ² + δ³ + …
  Subtract (2) from (1) to get: z - zδ = 1 (now solve for z)
  z = 1/(1-δ).
  If δ = 1/(1+r), then z = 1 + 1/r.

• Perpetuity where payments are made at the end of each period:
  δ + δ² + δ³ + δ⁴ + … = δ/(1-δ)
  Why?
  (1) z = δ + δ² + δ³ + …
  (2) zδ = δ² + δ³ + δ⁴ + …
  Subtract (2) from (1) to get: z - zδ = δ (now solve for z)
  z = δ/(1-δ).
  If δ = 1/(1+r), then z = 1/r.

• These are the PVs of a $1 perpetuity: the first with payments starting today, the second with payments starting next period.
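An illustrative numerical check (ours) of the two perpetuity formulas, truncating the infinite sums at a large horizon; the interest rate used is an arbitrary assumption.

```python
# Verify the perpetuity formulas by truncating the infinite series at a large horizon.
r = 0.10                 # assumed interest rate for the check
delta = 1 / (1 + r)      # discount factor
T = 10_000               # truncation horizon (the tail is negligible for delta < 1)

begin_of_period = sum(delta**t for t in range(T))        # 1 + d + d^2 + ...
end_of_period   = sum(delta**t for t in range(1, T + 1)) # d + d^2 + ...

print(begin_of_period, 1 + 1/r)   # both ~ 11.0
print(end_of_period,   1/r)       # both ~ 10.0
```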
Calculus of GTS
• Cooperate if …
PV (cooperation)> PV (defection)
60, 60, 60, … > 72, 54, 54, …
60+60/r > 72+54/r
6/r > 12
r < 6/12=0.5

• Cooperation is sustainable using GTS as long as r < 50%. Or,
equivalently, as long as $1 invested today does not return more than
$1.50 next period.

• If the future matters to the players (and r < 50%), they have a private
incentive to act as if they are cooperating. Acting as if they are
cooperating today makes sense if they care about the future.
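A short sketch (ours) of the GTS cooperation condition for this pricing game; the variable names are our own labels for the payoffs on the previous slides.

```python
# GTS: cooperate forever (60 per period) vs. defect once for 72 and then earn 54 forever.
coop_payoff   = 60   # per-period payoff when both cooperate (High, High)
defect_payoff = 72   # one-period gain from undercutting a cooperating rival
punish_payoff = 54   # per-period payoff once the grim punishment (Low, Low) starts

def gts_cooperation_holds(r):
    """PV of cooperating vs. defecting against a grim trigger (perpetuity form)."""
    pv_cooperate = coop_payoff + coop_payoff / r
    pv_defect    = defect_payoff + punish_payoff / r
    return pv_cooperate > pv_defect

# Critical rate: 6/r = 12  =>  r = 0.5
print(gts_cooperation_holds(0.4))   # True: cooperation is sustainable
print(gts_cooperation_holds(0.6))   # False: the one-shot gain outweighs the future loss
```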
The Payoff Matrix of a Repeated Game
Solution:
If r>0.5 the unique NE is {Low, Low}.
If r<0.5, there are 2 Nash equilibria:
{Low, Low} and {High, High}.
                                   Firm 2
                      Low                        High
  Firm 1  Low    (54+54/r, 54+54/r)       (72+54/r, 47+54/r)
          High   (47+54/r, 72+54/r)       (60+60/r, 60+60/r)


Calculus of TFT
• Cooperate if …
PV (cooperation) > PV (defection)
And
PV (cooperation) > PV (defect once)

60, 60, 60, … > 72, 47, 60, 60, …


60+ 60/(1+r)+ 60/r > 72+ 47/(1+r) + 60/r
13/(1+r) > 12
r < 1/12= 0.083

• TFT is much harder to sustain, because now we require r < 8.3% rather
than r < 50% under GTS. That is, a smaller range of interest rates will
support TFT.

• Thus, cooperation is less likely under TFT.
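A companion sketch (ours) of the TFT condition, following the slide's comparison in which the common continuation term 60/r appears on both sides and cancels:

```python
# TFT: compare cooperating forever with defecting once and then returning to cooperation.
def tft_cooperation_holds(r):
    pv_cooperate   = 60 + 60 / (1 + r) + 60 / r
    pv_defect_once = 72 + 47 / (1 + r) + 60 / r
    return pv_cooperate > pv_defect_once

# Critical rate: 13/(1+r) = 12  =>  r = 1/12 ~ 8.3%
print(tft_cooperation_holds(0.05))   # True
print(tft_cooperation_holds(0.10))   # False: TFT's one-period punishment is too weak
```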


Trigger Strategies

• GTS and TFT are extremes.


• Need to balance the following two goals:
 Deterrence (a clear policy of punishment)
 GTS is adequate punishment.
 TFT might be too little.
 Credibility (must incorporate forgiveness)
 GTS hurts the punisher too much.
 TFT is more credible.
• Optimal Punishment:
 In announcing a punishment (i.e., trigger) strategy, punish
enough to deter your opponent, but temper the punishment to
remain credible.
Axelrod’s Simulation
• R. Axelrod, The Evolution of Cooperation
• Prisoner’s Dilemma repeated 200 times
• Economists submitted strategies
• Pairs of strategies competed

• Winner: Tit-for-Tat!

• Reasons:
 Clear (simple to understand)
 Forgiving (punishment is not too severe)
 Provocable (defection triggers a response).
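As a final illustrative sketch (ours, not Axelrod's actual tournament), a tiny round-robin of three familiar strategies on the prisoner's dilemma payoffs used earlier, repeated 200 times:

```python
# Payoffs from the earlier prisoner's dilemma, keyed by (my move, rival's move).
# "C" = stay silent / cooperate, "D" = rat / defect.
PAYOFF = {("C", "C"): -1, ("C", "D"): -9, ("D", "C"): 0, ("D", "D"): -6}

def tit_for_tat(my_history, rival_history):
    return "C" if not rival_history else rival_history[-1]

def grim_trigger(my_history, rival_history):
    return "D" if "D" in rival_history else "C"

def always_defect(my_history, rival_history):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    """Play two strategies against each other and return their total payoffs."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

strategies = {"TFT": tit_for_tat, "Grim": grim_trigger, "AlwaysD": always_defect}
names = list(strategies)
for i, a in enumerate(names):
    for b in names[i:]:
        print(a, "vs", b, "->", play(strategies[a], strategies[b]))
```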
