
HW 3

PS 1514 Strategy

1. This question uses the one-shot (i.e. not infinitely repeated) version of the game
described in the Morrow article that we read and covered in class extensively. (For
reference, in the article, the sequence of the game is described on page 402. The
normal form components of the game are described on pages 395 and 398. We also
repeated them on slide 13 of the accompanying lecture slides.) In class, we discussed a
babbling equilibrium and a pure coordination equilibrium. This question concerns
what Morrow calls the honest communicative equilibrium (discussed on page 405).
The parts of the equilibrium are as follows:
Players report an honest message. E.g. If they receive signal 1, they send
message 1. If they receive signal 2, they send message 2.
If both players send message 1, player 1 plays A and player 2 plays a.
If both players send message 2, player 1 plays B and player 2 plays b.
If the messages do not match, they play A;a with probability 1/2 and B;b with probability 1/2.
(a) Using Bayes rule, if player 1 observes their own signal and sees that it is a 1,
what does she believe is the probability that the two players are playing the both
prefer A game? (Before any messages have been sent.)
(b) Using Bayes rule, if player 1 observes their own signal and sees that it is a 1 and
also observes that player 2 has sent message 1, what does she believe is the
probability that the two players are playing the both prefer A game?

(c) As p (the probability that the two players are playing a both-prefer game)
decreases, this equilibrium breaks down. Why? Your answer should describe the
deviation that a player wants to make and show that this deviation becomes more
profitable as p decreases.

2. Read Peter Haas's article "Do Regimes Matter?" (uploaded to Courseweb, under the
assignments section). In what ways is the Med Plan an example of Morrow's honest
communicative equilibrium? The best answers will say: here are the key features of
an honest communicative equilibrium, and here is how the real-world example of the
Med Plan demonstrates those features.
Your answer should be a maximum of 1 page.
Solution to (1a): The basic intuition here is that a player has her prior beliefs about the
state of the world, and she is using the signal she gets to update those beliefs. Let TU
denote that the state of the world is "The usual," BPA denote "Both prefer A," and BPB
denote "Both prefer B." Remember, these are states of the world (as in, which game is being
played), not statements about strategies that players might subsequently employ. Let S11
denote that Player 1 got signal 1.
Pr(BPA | S11) = Pr(BPA) Pr(S11 | BPA) / [ Pr(BPA) Pr(S11 | BPA) + Pr(not BPA) Pr(S11 | not BPA) ]

Since a signal of 1 never occurs under BPB, only TU contributes to the "not BPA" term:

Pr(BPA | S11) = Pr(BPA) Pr(S11 | BPA) / [ Pr(BPA) Pr(S11 | BPA) + Pr(TU) Pr(S11 | TU) ]

Pr(BPA | S11) = (p/2)(1) / [ (p/2)(1) + (1-p)(1/2) ]

Pr(BPA | S11) = p

Note how Player 1's beliefs have moved from her prior to her posterior beliefs. Her prior
was that she was playing BPA with probability p/2. Then she gets signal 1. Now she thinks
the probability that she is playing BPA is higher: p > p/2.
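As a quick sanity check on this Bayes-rule calculation, here is a minimal Python sketch (not part of the assignment; the function name is just illustrative). It uses the same ingredients as the derivation above: priors of p/2, p/2, and 1-p over BPA, BPB, and TU, and a signal that always matches the game in the both-prefer states but is a fair coin flip in TU.

    def posterior_bpa_given_s11(p):
        # Priors over the three states of the world.
        pr_bpa, pr_bpb, pr_tu = p / 2, p / 2, 1 - p
        # Probability that player 1's signal is 1 in each state:
        # 1 under BPA, 0 under BPB, 1/2 under TU (as in the calculation above).
        pr_s11 = pr_bpa * 1.0 + pr_bpb * 0.0 + pr_tu * 0.5
        return (pr_bpa * 1.0) / pr_s11

    for p in (0.2, 0.5, 0.8):
        print(p, posterior_bpa_given_s11(p))  # the posterior equals p each time

Running this confirms that the posterior equals p, i.e. the signal moves player 1's belief up from p/2 to p.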

Solution to (1b): The basic intuition here is that player 1 must now incorporate two
pieces of information to form her posterior beliefs: her signal and player 2's message. Denote
"Player 2 sends message 1" as M21. One wrinkle is that we must also account for the fact
that messages do not necessarily correspond to signals. However, given the equilibrium being
played, we know that M21 implies S21. For those unfamiliar with the notation, "∧" essentially
means "and" in probability lingo: S11 ∧ M21 means the event in which player 1 gets signal 1
AND player 2 sends message 1.
Pr(BPA | S11 ∧ M21) = Pr(BPA) Pr(S11 ∧ M21 | BPA) / [ Pr(BPA) Pr(S11 ∧ M21 | BPA) + Pr(not BPA) Pr(S11 ∧ M21 | not BPA) ]

As in (1a), only TU contributes to the "not BPA" term:

Pr(BPA | S11 ∧ M21) = Pr(BPA) Pr(S11 ∧ M21 | BPA) / [ Pr(BPA) Pr(S11 ∧ M21 | BPA) + Pr(TU) Pr(S11 ∧ M21 | TU) ]

Pr(BPA | S11 ∧ M21) = (p/2)(1) / [ (p/2)(1) + (1-p)(1/2)(1/2) ]

Pr(BPA | S11 ∧ M21) = 2p / (p + 1)

Note how, now, player 1 is even more sure that she's playing BPA: 2p/(p+1) > p. In other
words, incorporating the piece of information player 2 gave her has improved her knowledge
about what game they're playing.
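Here is a similar hedged sketch for (1b) (again illustrative, not part of the assignment). It enumerates the three states and uses the honest-reporting feature of the equilibrium, so observing M21 is the same as observing S21; the (1/2)(1/2) term in the denominator above reflects the two signals being drawn independently within TU.

    def posterior_bpa_given_s11_and_m21(p):
        # (state, prior, probability a given player's signal is 1 in that state)
        states = [("BPA", p / 2, 1.0), ("BPB", p / 2, 0.0), ("TU", 1 - p, 0.5)]
        joint_bpa, joint_total = 0.0, 0.0
        for name, prior, pr_sig1 in states:
            # Honest reporting: player 2 sends message 1 exactly when her signal is 1,
            # and the two players' signals are independent within each state.
            pr_event = prior * pr_sig1 * pr_sig1
            joint_total += pr_event
            if name == "BPA":
                joint_bpa += pr_event
        return joint_bpa / joint_total

    for p in (0.2, 0.5, 0.8):
        print(p, posterior_bpa_given_s11_and_m21(p), 2 * p / (p + 1))  # last two columns match

This reproduces the 2p/(p+1) result above for every p.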
Solution to (1c): The basic intuition here requires you to understand the tension between
two things. On the one hand, honestly sharing information is good from a coordination
perspective. It improves your knowledge about which game is being played (as 1a and 1b
demonstrated), and therefore improves your ability to pick the strategy that gets you the
best payoff. On the other hand, lying can be good. If your partner believes your messages,
you are sometimes tempted to lie, convince them of something incorrect, and ensure yourself
a higher payoff, even if the two of you play the "wrong" thing, i.e. the strategy that isn't
jointly optimal given the game you're playing.
For checking best responses, remember, we can only change one part of one player's
strategy. First, let's be a little savvy about which deviation is even tempting at all. Given
particular pieces of information, do the players ever have an incentive to deviate from coordinating their actions (i.e. play A when you know your partner is playing b)? No. Miscoordination
guarantees you a payoff of zero, regardless of the game, so that deviation
only leaves you worse off.
Let's instead focus on when you'd want to lie about your signal. Suppose player 1 observes
her signal, and it is a 2. Player 1 has two options: (1) she can tell the truth and send message
2, or (2) she can lie and send message 1. Take truth-telling first. If player 2 receives a signal
of 1 and reports it honestly (as specified by the equilibrium), this will cause both players to
believe that they are playing TU with probability 1. Again as specified by the equilibrium,
they will flip a coin and play (A;a) half the time and (B;b) half of the time. If player 2
receives a signal of 2 and reports it honestly, the players will believe they are playing BPB
with some probability and they will play (B;b). The expected utility to player 1 of sending
message 2, given that she received signal 2, is as follows:
EU1(M12 | S12) = Pr(BPB | S12) U1(M12 | BPB) + Pr(TU | S12) U1(M12 | TU)
Take a second and convince yourself what each part of that equation means. We're
thinking about the utility of sending message 2 given that player 1 received signal 2. We
have to take into account the fact that player 1 still doesn't know what world she's in. She
could be in BPB or in TU, because she got signal 2. Now let's fill in her utility for sending
message 2. This will also force us to take into account the fact that she doesn't know what
signal player 2 got (and therefore what message player 2 will send), should the true state of
the world be TU. Also note that, from 1a, Pr(BPB | S12) = Pr(BPA | S11) = p, and we can
do a similar Bayes rule calculation to get Pr(TU | S12) (or we can say to ourselves: if the
probability of BPB is p, then one minus p is the probability of TU).
EU1(M12 | S12) = p·a + (1-p)·[ (1/2)·1 + (1/2)·( (1/2)·a + (1/2)·1 ) ]
Take a second and convince yourself what the second term (the one after the first + sign)
means. This is player 1's utility from sending message 2, given that they are playing TU.
Note that it takes into account uncertainty over what player 2 is going to do. When the true
state of the world is TU, one half the time, player 2 will get a signal of 2. This will cause
player 2 to send message 2, and the road map/equilibrium says that both players then play
(B;b). And when they play (B;b) in the game TU, player 1 gets a payoff of 1. One half the
time, player 2 will get a signal of 1 (in which case he sends message 1, and since the messages
do not match, the two players follow what the equilibrium tells them to do). This causes
them to flip a coin to decide whether to play (A;a) or (B;b). In doing so, player 1 gets a
payoff of a half of the time and a payoff of 1 the other half.
Now let's construct the expected utility to player 1 of lying and saying she saw a 1,
despite having seen a signal of 2.
EU1(M11 | S12) = Pr(BPB | S12) U1(M11 | BPB) + Pr(TU | S12) U1(M11 | TU)
Note how we've kept everything the same in this equation, except what message player
1 sends. Now we can fill in these parts as well.
EU1(M11 | S12) = p·[ (1/2)·1 + (1/2)·a ] + (1-p)·[ (1/2)·a + (1/2)·( (1/2)·a + (1/2)·1 ) ]
Now, I'm going to write those two expected utilities on adjacent lines so we can see exactly
where they differ: in the term multiplied by p, and in the first term inside the (1-p) bracket.
From there, we can see how p matters.
EU1(M12 | S12) = p·a + (1-p)·[ (1/2)·1 + (1/2)·( (1/2)·a + (1/2)·1 ) ]
EU1(M11 | S12) = p·[ (1/2)·1 + (1/2)·a ] + (1-p)·[ (1/2)·a + (1/2)·( (1/2)·a + (1/2)·1 ) ]

Now we can see how p affects whether or not you want to lie. Looking at the term
multiplied by p in each equation, we know that a > (1/2)(1 + a) (since a > 1 by assumption). This makes
the top line (telling the truth) more attractive. But looking at the first term inside the (1-p) bracket
in each equation, we know that 1 < a. This makes the bottom line (lying) more attractive.

So comparing the terms multiplied by p in each equation shows you the cost of lying: the
difference between the two is the utility you lose when you lie. You lose utility because you
waste information and do worse at correctly coordinating with your partner. This cost is
incurred with probability p. Comparing the first terms inside the (1-p) brackets shows
you the benefit of lying: the difference between those two terms is what you can expect to
gain when you lie. You gain utility because you steer your partner towards strategies that
yield you a's instead of 1's. This benefit is accrued with probability (1-p). One way to
think about this is thus: as p increases, the gains to lying decrease and the costs to lying
increase. As p decreases, the gains to lying increase and the costs to lying decrease.
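To make the tradeoff concrete, here is a minimal sketch that simply evaluates the two expected-utility expressions above. The value a = 2 is purely illustrative (the model only requires a > 1), and the function names are mine, not Morrow's.

    def eu_truth(p, a):
        # EU1(M12 | S12): tell the truth after seeing signal 2.
        return p * a + (1 - p) * (0.5 * 1 + 0.5 * (0.5 * a + 0.5 * 1))

    def eu_lie(p, a):
        # EU1(M11 | S12): lie and send message 1 after seeing signal 2.
        return p * (0.5 * 1 + 0.5 * a) + (1 - p) * (0.5 * a + 0.5 * (0.5 * a + 0.5 * 1))

    a = 2  # illustrative payoff for the outcome you prefer; any a > 1 works
    for p in (0.1, 0.3, 0.5, 0.7, 0.9):
        diff = eu_truth(p, a) - eu_lie(p, a)
        print(p, round(diff, 3))  # negative values mean lying is the more profitable message

For these two expressions the difference works out to (1/2)(a-1)(2p-1), so truth-telling beats lying exactly when p > 1/2: as p falls, the lying deviation becomes profitable and the honest communicative equilibrium breaks down, which is the answer to (1c).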
A note on grading for this question: the expected utility equations were hard. I gave
people lots of partial credit. I am most interested in your understanding of the tradeoff between
lying and telling the truth in these coordination games.

Problem 2: For the careful reader, Morrow essentially answered this question for you
on pages 391-393. The Med Plan had the four characteristics of the situation this model
highlighted: (1) each issue has several solutions; (2) coordination was better than mismatched
action; (3) actors were uncertain about which solution they preferred; and (4) actors
had divergent preferences over which solution they most preferred. A full answer to this
question would have hit each of those points. Morrow's answer to this question begins on
the bottom of page 392.
