
Strategic Information Transmission (Cheap Talk)

December 9, 2018

1 Introduction
This paper investigates the effect of allowing costless signaling between agents on the
possible equilibrium outcomes of a strategic game.
There are two agents, one of whom has private information relevant to both. The better-
informed agent, henceforth called the Sender (S), sends a possibly noisy signal, based on his
private information, to the other agent, henceforth called the Receiver (R). R then makes a
decision that affects the welfare of both, based on the information contained in the signal. In
equilibrium, the decision rules that describe how agents choose their actions in the situations
in which they find themselves are best responses to each other.
The key assumption here is that the receiver cannot commit to a decision rule, which
makes it different from mechanism design. Also, the message is not costly to send (cheap
talk), which makes it different from the signaling literature.
Although S’s choice of signaling rule is not restricted a priori, under certain assumptions
(discussed later), in equilibrium he partitions the support of the probability distribution of
the variable that represents his private information and, in effect, introduces noise into his
signal by reporting only which element of the partition his observation actually lies in. This
represents S’s optimal compromise between including enough information in the signal to
induce R to respond to it and holding back enough so that his response is as favorable as
possible.

2 The model
There are two players, a Sender (S) and a Receiver (R); only S has private information.
S observes the value of a random variable, m, whose differentiable probability distribution
function, F(m), with density f(m), is supported on [0, 1]. S has a twice continuously
differentiable von Neumann-Morgenstern utility function U^S(y, m, b), where y, a real number,
is the action taken by R upon receiving S's signal, and b is a scalar parameter we shall later
use to measure how nearly agents' interests coincide. R's twice continuously differentiable
von Neumann-Morgenstern utility function is denoted U^R(y, m). All aspects of the game
except m are common knowledge.
It is assumed here that, for each m and for i = R, S, denoting partial derivatives by
subscripts in the usual way, U^i_1(.) = 0 for some y, and U^i_11(.) < 0, so that U^i has a unique
maximum in y for each given (m, b) pair; and that U^i_12(.) > 0. The latter condition is a sorting
condition; it ensures that the best value of y from a fully informed agent's standpoint is a
strictly increasing function of the true value of m.

3 The concept of equilibrium


An equilibrium consists of a family of signaling rules for S, denoted q(n|m), and an action
rule for R, denoted y(n), such that:

1. For each m ∈ [0, 1], ∫_N q(n|m) dn = 1, where N is the set of feasible signals; and if n*
is in the support of q(.|m), then n* solves max_n U^S(y(n), m, b).

2. For each n, y(n) solves max_y ∫_0^1 U^R(y, m) p(m|n) dm, where

p(m|n) = q(n|m)f(m) / ∫_0^1 q(n|t)f(t) dt

Condition 1 says that S's signaling rule yields an expected-utility-maximizing action for each
of his information "types", taking R's action rule as given. Condition 2 says that R responds
optimally to each possible signal, using Bayes' rule to update his prior, taking into account
S's signaling strategy and the signal he receives. Since U^R_11(.) < 0, the objective function in
Condition 2 is strictly concave in y; therefore, R will never use mixed strategies in equilibrium.
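To make Condition 2 concrete, here is a minimal numeric sketch of R's best response to a partition signal, assuming a uniform prior on [0, 1] and the quadratic receiver utility U^R(y, m) = -(y - m)^2 used in the example of Section 5; the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def receiver_best_response(a_lo, a_hi, u_R, grid_size=2001):
    """Numerically solve max_y integral of U^R(y, m) p(m|n) dm when the
    signal n reveals only that m lies in [a_lo, a_hi]. With a uniform
    prior, Bayes' rule makes the posterior p(m|n) uniform on that
    interval, so expected utility is a plain average over it."""
    m = np.linspace(a_lo, a_hi, grid_size)   # posterior support
    ys = np.linspace(0.0, 1.0, grid_size)    # candidate actions
    # expected utility of each candidate action under the posterior
    EU = np.array([np.mean(u_R(y, m)) for y in ys])
    return ys[np.argmax(EU)]

# Quadratic receiver utility, as in the example of Section 5.
u_R = lambda y, m: -(y - m) ** 2
y_star = receiver_best_response(0.25, 0.75, u_R)
```

With quadratic loss the optimum is the posterior mean of m, here the midpoint 0.5 of the reported interval.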

4 The main result


The paper shows that, in equilibrium, the Sender partitions his type space into a finite
number of intervals and sends the same signal for all types realized within a given interval.
Define, for all m ∈ [0, 1],

y^S(m, b) = arg max_y U^S(y, m, b) (1)

y^R(m) = arg max_y U^R(y, m) (2)

Since U^i_11(.) < 0 and U^i_12(.) > 0 for i = R, S, y^S(m, b) and y^R(m) are well defined and
continuous in m.
Let N̄ = {n : y(n) = ȳ}. We say that an action ȳ is induced by an S-type m̄ if
∫_N̄ q(n|m̄) dn > 0. The following lemma establishes that only a finite number of actions
can be induced in equilibrium if the interests of S and R never coincide.

Lemma 1. If y^S(m, b) ≠ y^R(m) for all m, then there exists an ε > 0 such that if u and v are
actions induced in equilibrium, |u − v| ≥ ε. Further, the set of actions induced in equilibrium
is finite.
Proof. Let u < v without loss of generality, and let m induce u and n induce v. Since U^S_12(.) > 0,
m < n. By revealed preference, U^S(u, m, b) ≥ U^S(v, m, b) and U^S(v, n, b) ≥ U^S(u, n, b),
which implies

U^S(u, m, b) − U^S(v, m, b) ≥ 0 ≥ U^S(u, n, b) − U^S(v, n, b) (3)

Then, by continuity, there exists an m̄ ∈ [m, n] such that

U^S(u, m̄, b) − U^S(v, m̄, b) = 0 (4)

Since u < v and U^S_11(.) < 0, the indifferent type m̄ must have its ideal point strictly
between the two actions:

u < y^S(m̄, b) < v (5)

This implies that u is not induced by any m > m̄ and v is not induced by any m < m̄. This,
along with U^R_12(.) > 0, implies

u ≤ y^R(m̄) ≤ v (6)

Since y^S(m, b) ≠ y^R(m) for all m, continuity on the compact interval [0, 1] gives an ε > 0
such that |y^S(m, b) − y^R(m)| ≥ ε for all m. This, along with (5) and (6), implies |u − v| ≥ ε.
Since the set of actions induced in equilibrium is bounded by y^R(0) and y^R(1) (because
U^i_12(.) > 0), the set of actions induced in equilibrium is finite.
Define, for all a, ā ∈ [0, 1] with a ≤ ā,

ȳ(a, ā) = arg max_y ∫_a^ā U^R(y, m)f(m) dm   if a < ā,
ȳ(a, ā) = y^R(a)   if a = ā. (7)
The following theorem establishes the existence of equilibria, and characterizes them.

Theorem 1. Suppose b is such that y^S(m, b) ≠ y^R(m) for all m. Then there exists a
positive integer N(b) such that, for every N with 1 ≤ N ≤ N(b), there exists at least one
equilibrium (y(n), q(n|m)), where q(n|m) is uniform (supported on [a_i, a_{i+1}] if m ∈ (a_i, a_{i+1})),
y(n) = ȳ(a_i, a_{i+1}) for all n ∈ (a_i, a_{i+1}), and the a_i are such that:

U^S(ȳ(a_i, a_{i+1}), a_i, b) − U^S(ȳ(a_{i−1}, a_i), a_i, b) = 0   ∀i ∈ {1, 2, ..., N − 1} (8)

a_0 = 0 (9)
a_N = 1 (10)

Proof. The outline of the proof is as follows. Given Lemma 1, each S-type must, in an
equilibrium of size N, choose from a set of N values of y (Lemma 1 also implies that there
must be an upper bound on N, which we denote N(b)). Since U^S_12(.) > 0, the S-types for
whom a given equilibrium value of y is best form an interval, and these intervals
form a partition of [0, 1]. The partition, a, is determined by (8), a well-defined second-order
nonlinear difference equation in the a_i, together with (9) and (10), its initial and terminal
conditions. Equation (8) is an "arbitrage" condition, requiring that S-types who fall on the
boundaries between steps be indifferent between the associated values of y. With the
assumptions on U^S, this condition is necessary and sufficient for S's signaling rule to be a
best response to y(n). Finally, given the signaling rules in the statement of the theorem, it is
easily verified that the integral on the right side of (7) is R's expected utility conditional on
hearing a signal in (a, ā). It follows that y(n) = ȳ(a_i, a_{i+1}) is R's unique best response to a
signal in (a_i, a_{i+1}), and that the signaling rules given in the theorem form an equilibrium.
Any other signaling rules that induce the same actions would also work, but any other
signaling rules consistent with an equilibrium of a given size must, given Lemma 1, induce
the same actions.
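The boundary-value system (8)-(10) can be solved numerically by shooting on a_1. Below is a minimal sketch for the uniform-quadratic specialization developed in Section 5, where the arbitrage condition reduces to a_{i+1} = 2a_i − a_{i−1} + 4b; the function name `partition_boundaries` is illustrative, not from the paper.

```python
def partition_boundaries(N, b, tol=1e-12):
    """Shooting method for the partition of a size-N equilibrium in the
    uniform-quadratic case: fix a_0 = 0, forward-recurse the arbitrage
    condition a_{i+1} = 2*a_i - a_{i-1} + 4*b from a trial a_1, and
    bisect on a_1 until the terminal condition a_N = 1 holds.
    Requires N <= N(b), i.e. 2*N*(N-1)*b < 1, so that a_1 > 0."""
    def terminal(a1):
        a_prev, a_cur = 0.0, a1
        for _ in range(N - 1):
            a_prev, a_cur = a_cur, 2 * a_cur - a_prev + 4 * b
        return a_cur  # a_N as a function of the trial a_1

    lo, hi = 0.0, 1.0
    while hi - lo > tol:          # terminal(.) is increasing in a_1
        mid = (lo + hi) / 2
        if terminal(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    a1 = (lo + hi) / 2

    bounds = [0.0, a1]
    for i in range(1, N):
        bounds.append(2 * bounds[i] - bounds[i - 1] + 4 * b)
    return bounds

# Two-step (N = 2) equilibrium at b = 0.1: boundaries approach (0, 0.3, 1),
# matching the closed form a_i = a_1*i + 2*i*(i-1)*b with a_1 = 0.3.
bounds = partition_boundaries(2, 0.1)
```

The bisection exploits that a_N is linear and increasing in a_1, so the terminal condition pins down a_1 uniquely.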

5 An example
Suppose F(m) is uniform on [0, 1], U^S(y, m, b) = −(y − (m + b))^2, and U^R(y, m) =
−(y − m)^2.
Note that these functions satisfy the assumptions imposed earlier, i.e., U^i_1(.) = 0 for some
y, U^i_11(.) < 0, and U^i_12(.) > 0.
We see that

y^S(m, b) = arg max_y U^S(y, m, b) = m + b (11)

y^R(m) = arg max_y U^R(y, m) = m (12)

Clearly, if b = 0, then the interests of S and R coincide and it is trivially optimal for S
to directly convey the realization of m. Thus we will set b > 0 without loss of generality.
Consider the conditions that characterize a partition equilibrium of size N. Letting a
denote the partition as before, we can compute

ȳ(a_i, a_{i+1}) = (a_i + a_{i+1})/2 (13)

The arbitrage condition specializes to

((a_i + a_{i+1})/2 − a_i − b)^2 = ((a_i + a_{i−1})/2 − a_i − b)^2 (14)

which, given the monotonicity of a, can only hold if the two sides differ in sign, i.e., if
a_{i+1} = 2a_i − a_{i−1} + 4b. With a_0 = 0, this second-order difference equation has the
solution

a_i = a_1 i + 2i(i − 1)b (15)

N(b) in Theorem 1 is the largest positive integer i such that 2i(i − 1)b < 1 (the condition
for a_1 > 0), i.e.,

N(b) = ⟨−1/2 + (1/2)(1 + 2/b)^{1/2}⟩ (16)

where ⟨z⟩ denotes the smallest integer greater than or equal to z. Thus it is clear that the
closer b is to zero (the more nearly agents' interests coincide), the finer the partition
equilibria that can exist. As b grows, N(b) eventually falls to unity and only the completely
uninformative equilibrium remains; in this example, that occurs as soon as b exceeds 1/4.
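The closed form (16) can be checked directly against the defining inequality 2i(i − 1)b < 1; a small sketch (function names are illustrative):

```python
import math

def N_of_b(b):
    """Largest N such that 2*N*(N-1)*b < 1: the size of the finest
    partition equilibrium in the uniform-quadratic example."""
    N = 1
    while 2 * (N + 1) * N * b < 1:
        N += 1
    return N

def N_closed_form(b):
    """Equation (16); math.ceil plays the role of <z>, the smallest
    integer greater than or equal to z."""
    return math.ceil(-0.5 + 0.5 * math.sqrt(1 + 2 / b))
```

For instance, any b > 1/4 gives N(b) = 1 (only the babbling equilibrium), while b = 0.1 gives N(b) = 2.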
