Econ 201c, UCLA Simon Board

2. Moral Hazard
April 20, 2020

After a contract is signed, an agent can take actions to affect its outcome. Examples:

• CEO can choose how hard to work

• Teacher can choose what to teach (e.g., teaching to the test).

Can view this problem as:

• An externality: When the CEO shirks, it hurts the shareholders

• An information problem: The parties cannot contract on the CEO’s actions

These notes follow MWG, and Segal and Tadelis.

1 Moral Hazard: Theory

Model

• A (female) principal offers a contract ⟨w(q)⟩ to a (male) agent.

• The agent takes action a ∈ A ⊆ ℝ

• Output q ∼ f (·|a) is observed by both parties and is contractible.

• The principal pays the agent a wage w(q)

Payoffs

• The agent receives utility U = u(w) − c(a), where u(·) is increasing and concave, and c(·) is
increasing and convex.

– We will often assume the functions are differentiable, or satisfy convenient boundary
conditions, e.g. c′(0) = 0.

• The agent has outside option U . One can think of this as the agent’s utility from another job.

• The principal receives profit Π = q − w.

Parametric examples that you’ll see:

• Example: a ∈ {L, H}, q ∼ f (·|a).

• Example: a ∈ ℝ+ and q = a + ε, where ε ∼ N(0, σ²)


• Example: a ∈ [0, 1], q ∈ {0, 1} and Pr(q = 1|a) = a.

Throughout we’ll assume that output q is contractible (i.e. a judge will enforce the contract w(q)).
We differentiate between the following:

• Effort is unobserved. This means the principal does not see the agent’s effort. This is the
“moral hazard” problem. We’ll primarily be interested in this case.

• Effort is contractible or verifiable. This means the principal and a court can see the agent’s
effort, and so the parties can write a contract on it. This is the “first-best” case that we study
next.

• Effort is observed. This means the principal sees the agent’s effort, but an outside court may
not. It turns out that you can implement first-best if you are sufficiently cunning. See HW 1,
Q2. We’ll come back to this case when we look at relational contracts.

1.1 First-Best Benchmark

The problem

• Suppose that the action a is contractible, so there is no moral hazard problem.

• The principal chooses the action a and the wage w(q) to solve

max Π = E [q − w(q)|a]
a,w(q)

s.t. (IR) E [u(w(q)) − c(a)|a] ≥ U

• The “IR” stands for “individual rationality”. It means that the agent is willing to sign the
contract.

CLAIM: (IR) binds in the optimal contract.

• If (IR) is slack, then we can lower the wage in each state. For example, given contract w(q),
define a new contract w̃(q) = w(q) − ε for small ε > 0.

• We will actually want a slightly stronger statement: that the Lagrange multiplier on the
constraint is strictly positive.

– This follows since a reduction in U enables the firm to lower wages and strictly
increase profits.
– One can also prove this via contradiction. If the multiplier were zero then the optimal
wage would be minus infinity!


The Lagrangian
The principal maximizes
 
L = E[q − w(q)|a] + λ E[u(w(q)) − c(a) − U | a]
  = ∫ [q − w(q) + λ u(w(q))] dF(q|a) − λ c(a) − λ U    (1)

CLAIM: The optimal contract fully insures the agent

• What wages should the principal choose? Equation (1) is additive over the states, so we can
maximize pointwise. That is,
max_w  −w + λ u(w)

So the optimal choice of wage is independent of the output, q. This objective is also concave.

• Differentiating, the principal sets w(q) = w∗ for all q, where w∗ is given by


1/u′(w∗) = λ    (2)

• Interpretation: 1/u′ is the cost of delivering a util. To see this, let w(u₀) be the wage required
to deliver u₀ utils, u(w(u₀)) = u₀. Differentiating with respect to u₀ and rearranging,

w′(u₀) = 1/u′(w(u₀))

The FOC (2) thus says that the cost of delivering a util is equated across all states and
given by λ.

What is the optimal effort choice?

• The binding (IR) constraint says that

u(w∗ ) − c(a) = U

• Hence the principal chooses the action a to maximize

Π = E[q|a] − w∗
= E[q|a] − u⁻¹(c(a) + U)

Denote the first-best solution by a∗ .
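The first-best program can be checked numerically. Below is a minimal sketch for the binary-output example Pr(q = 1|a) = a, under illustrative assumptions not in the notes: u(w) = √w, c(a) = a², U = 0. A grid search recovers the analytic FOC.

```python
import numpy as np

# First-best benchmark in the parametric example a in [0,1], Pr(q = 1|a) = a.
# Illustrative assumptions (not from the notes): u(w) = sqrt(w), c(a) = a^2, U = 0.
# With (IR) binding, w* = u^{-1}(c(a) + U) = (c(a) + U)^2, so the principal
# maximizes Pi(a) = E[q|a] - w* = a - a^4 over a grid of actions.

U_bar = 0.0
u_inv = lambda x: x ** 2          # inverse of u(w) = sqrt(w)
c = lambda a: a ** 2

a_grid = np.linspace(0.0, 1.0, 100_001)
profit = a_grid - u_inv(c(a_grid) + U_bar)

a_star = a_grid[np.argmax(profit)]
w_star = u_inv(c(a_star) + U_bar)

print(a_star)   # analytic FOC 1 - 4a^3 = 0 gives a* = 0.25^(1/3) ~ 0.63
print(w_star)
```

The grid argmax agrees with the analytic solution a∗ = (1/4)^(1/3) of the FOC 1 − 4a³ = 0.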

Remark about bargaining power

• We assume the principal makes a take-it-or-leave-it (TIOLI) offer to the agent. This is actually
without loss. By varying U , we can trace out the Pareto frontier Π(U ).


• We could equivalently give all the bargaining power to the agent and have the agent make an
offer subject to giving the principal profits of Π̄. These problems are duals of each other.

1.2 Moral Hazard Problem

• We suppose that the agent’s action is not observed.

The principal’s problem

• The principal chooses a wage w(q) and a recommended action a. The agent must want to
follow this recommendation.

• The principal maximizes

max Π = E [q − w(q)|a]
a,w(q)

s.t. (IR) E [u(w(q)) − c(a)|a] ≥ U


(IC) a ∈ arg max E [u(w(q)) − c(ã)|ã]
ã∈A

• Here, “IC” stands for “Incentive Compatible”. It means that the agent is happy to follow the
principal’s recommendation.

• The idea of having the principal recommend an action is without loss; it's a version of the
“revelation principle” (Myerson, 1982, JMathE).

When is first-best implementable?

• We will consider three situations when the principal can obtain first-best.

Case 1: First-Best is the Least Costly Action

• The problem with the first-best wage is that the agent has no incentives. But this is fine as
the agent will choose the first-best action on his own.

• Formally, suppose c(a∗ ) = mina∈A c(a).

• Suppose we let w(q) = w∗ = u⁻¹(c(a∗) + U).

• We can verify that (IC) is satisfied:

u(w∗ ) − c(a∗ ) ≥ u(w∗ ) − c(a) ∀a ∈ A

Case 2: The Agent is Risk-Neutral


• If the agent is risk-neutral, then the principal can “sell the firm”, making the agent the residual
claimant.

• Formally, suppose u(w) = w. The first-best action a∗ maximizes

Π = E[q|a] − u⁻¹(c(a) + U) = E[q|a] − c(a) − U

• Then consider the contract


w(q) = q − k

Intuitively, the agent makes an up-front payment of k and then gets to keep all the output
that he creates.

• Under this contract, the agent chooses a to maximize

U = E [u(w(q)) − c(a)|a] = E [q|a] − k − c(a)

Hence they choose a∗ .

• The optimal k is then chosen so that the agent's (IR) binds

k = E[q|a∗ ] − c(a∗ ) − U

which equals the firm’s profits in the first-best contract.
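The sell-the-firm logic is easy to verify numerically. A sketch under illustrative primitives (assumptions, not from the notes): E[q|a] = a, c(a) = a²/2, U = 0.

```python
import numpy as np

# "Selling the firm" to a risk-neutral agent. Illustrative primitives
# (assumptions, not from the notes): E[q|a] = a, c(a) = a^2/2, U = 0.
c = lambda a: a ** 2 / 2
U_bar = 0.0

a_grid = np.linspace(0.0, 2.0, 200_001)

# First-best action maximizes total surplus E[q|a] - c(a) - U.
a_star = a_grid[np.argmax(a_grid - c(a_grid) - U_bar)]

# Up-front payment k making (IR) bind under w(q) = q - k.
k = a_star - c(a_star) - U_bar

# Facing w(q) = q - k, the agent maximizes E[q|a] - k - c(a); the constant -k
# does not affect his choice, so he solves the surplus-maximization problem.
a_agent = a_grid[np.argmax(a_grid - k - c(a_grid))]

print(a_star, a_agent, k)   # the agent replicates the first-best action
```

Here a∗ = 1 and k = 1/2, and the residual-claimant agent indeed chooses a∗ on his own.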

Case 3: The Agent’s Effort Shifts the Support of Output

• The support of output given action a is S(a) = {q : f (q|a) > 0}

• Suppose S(a)\S(a∗) has positive measure for all a ≠ a∗

• The principal can implement first-best by setting



w(q) =
    w∗    if q ∈ S(a∗)
    −∞    otherwise

so there is a large punishment if something “unexpected” happens.

• Example: a ∈ ℝ, and q = a + ε with ε ∼ U[0, 1].

– If a∗ = 10, then output should be in the range q ∈ [10, 11]. If anything else happens, the
principal should punish the agent very heavily.

• Example: a ∈ ℝ, and q = a + ε with ε ∼ N(0, σ²).


– The support here is ℝ, so it doesn't satisfy the shifting support assumption; however, the
support is “almost” shifting, and the first-best contract can “almost” be implemented.

– In particular, suppose a < a∗. Then the likelihood ratio

f(q|a)/f(q|a∗) → ∞  as q → −∞
This means that low outputs are infinitely more likely to come from a cheating agent
than an obedient one.

– In practice, the normal-additive model is used a lot (see HW1, Q4). We typically restrict
the principal to linear contracts, w = α + βq. Without this restriction, first-best is
attainable with an extreme contract that punishes agents in the lower tail.
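The “almost shifting” support can be seen by computing the log likelihood ratio in the lower tail; the values of a, a∗ and σ below are illustrative.

```python
# Log likelihood ratio log[f(q|a)/f(q|a*)] in the normal-additive model
# q = a + eps, eps ~ N(0, sigma^2). Illustrative values: a* = 1, deviation
# a = 0, sigma = 1. Computed in logs to avoid underflow deep in the tail.
sigma, a_star, a_dev = 1.0, 1.0, 0.0

def log_ratio(q):
    # log f(q|a_dev) - log f(q|a_star); the normalizing constants cancel
    return (-(q - a_dev) ** 2 + (q - a_star) ** 2) / (2 * sigma ** 2)

log_ratios = [log_ratio(q) for q in (0.0, -2.0, -5.0, -10.0)]
print(log_ratios)   # [0.5, 2.5, 5.5, 10.5]: grows without bound as q -> -infty
```

The log ratio grows linearly as q → −∞, so low outputs become arbitrarily strong evidence of cheating.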

1.3 Solving the Problem: Two Actions

Setup

• Suppose the agent chooses a ∈ {L, H}, and that c(L) < c(H).

• This has the advantage that the (IC) constraint is simple: we need to stop the agent shirking
and choosing the low action.

• To avoid triviality, assume a∗ = H.

Two step approach

(1) Find the cheapest way to implement each action

(2) Choose which action we prefer.

Step 1a: Implementing a = L.

• If we wish the agent to choose the low action, we need not provide incentives and can thus
pay a constant wage so (IR) binds,

w(q) = u⁻¹(c(L) + U)

• This induces profits


Π = E[q|L] − u⁻¹(c(L) + U)

Step 1b: Implementing a = H


• This is the more interesting case! The principal’s problem is

max E [q − w(q)|H]
w(q)

s.t. (IR) E [u(w(q)) − c(H)|H] ≥ U


(IC) E [u(w(q)) − c(H)|H] ≥ E [u(w(q)) − c(L)|L]

Let the Lagrange multiplier on (IR) be λ, and on (IC) be µ.

CLAIM: The constraints (IR) and (IC) bind. In particular, λ > 0 and µ > 0.

• Suppose (IR) is slack. Then we can lower wages, giving us a contradiction. In particular,
given contract w(q), define a new contract w̃(q) such that u(w̃(q)) = u(w(q)) − ε. By lowering
the utility in each state by the same amount, this leaves (IC) unaffected.

• Suppose (IC) is slack. Then the solution to the problem is a flat wage, w∗ . But then the
optimal action is a = L, giving us a contradiction.

Solving the problem

• The Lagrangian is

L = max_{w(q)} E[q − w(q)|H] + λ E[u(w(q)) − c(H) − U | H] + µ ( E[u(w(q)) − c(H)|H] − E[u(w(q)) − c(L)|L] )

• Ignoring the terms that don't involve w, we get

max_{w(q)}  ∫ [−w(q) + (λ + µ) u(w(q))] dF(q|H) − µ ∫ u(w(q)) dF(q|L)

  = ∫ [ u(w(q)) ( λ + µ ( 1 − f(q|L)/f(q|H) ) ) − w(q) ] dF(q|H)

• Pointwise maximization implies we maximize the concave objective

max_{w(q)}  u(w(q)) [ λ + µ ( 1 − f(q|L)/f(q|H) ) ] − w(q)

• Differentiating,

1/u′(w(q)) = λ + µ ( 1 − f(q|L)/f(q|H) )    (3)

• Interpretation: Consider raising the wage in state q by enough to raise the worker’s utility
by one util. The LHS is the cost to the principal of raising the agent’s utility by a util. On
the RHS, λ represents the benefit of loosening the (IR) constraint, while the second term
represents the impact of the wage increase on the (IC) constraint. The wage boost loosens
the (IC) constraint if f (q|H) > f (q|L), meaning that the output is more likely to occur under
the high action.
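Equation (3) can be illustrated numerically. The sketch below assumes u(w) = √w (so 1/u′(w) = 2√w and w = (RHS/2)²), made-up distributions f(·|H), f(·|L) satisfying MLRP, and illustrative multiplier values λ, µ rather than values solved from the binding constraints.

```python
# Wage schedule from FOC (3): 1/u'(w(q)) = lam + mu * (1 - f(q|L)/f(q|H)).
# Illustrative assumptions: u(w) = sqrt(w), so 1/u'(w) = 2*sqrt(w) and
# w = (RHS/2)^2; the distributions and multipliers (lam, mu) are made up,
# not solved from the binding (IR) and (IC) constraints.
f_H = [0.1, 0.3, 0.6]   # assumed output distribution under a = H
f_L = [0.5, 0.3, 0.2]   # assumed output distribution under a = L (MLRP holds)
lam, mu = 2.0, 0.4

wages = []
for fh, fl in zip(f_H, f_L):
    rhs = lam + mu * (1 - fl / fh)
    wages.append((rhs / 2) ** 2)

ratios = [fh / fl for fh, fl in zip(f_H, f_L)]   # likelihood ratio l(q)
print(wages)     # increasing: higher likelihood ratio -> higher wage
print(ratios)
```

The computed wages are increasing in the likelihood ratio f(q|H)/f(q|L), as claimed in the discussion below.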


Figure 1.1: Non-monotone Wages. In the last picture, ŵ is defined by 1/u′(ŵ) = λ.

Discussion of equation (3)

• Let ℓ(q) = f(q|H)/f(q|L) be the likelihood ratio. This is a sufficient statistic for output.
Thus, we can write

1/u′(w(ℓ)) = λ + µ (1 − 1/ℓ)

• The RHS increases in ℓ. Since the LHS increases in w, this means the wage increases in ℓ.
Interpretation: the more likely the agent took action H, the higher the wage.

• This is like a statistical inference problem. Given output, we try to infer whether the agent
shirked or worked. If the evidence suggests he worked, we pay him more. The weird thing is
that we know that the agent actually worked. The principal punishes him in states that are
indicative of low effort in order to deter deviations.

• This points to a time inconsistency problem. After the agent has chosen the effort, the
principal would like to fully insure him. But if she did this, the agent would exert no effort.
See Fudenberg and Tirole (1990, Ecta).

Are wages monotone?

• Suppose ℓ(q) increases in q; this is called the Monotone Likelihood Ratio Property (MLRP).
Then wages w(q) are increasing in q.

• In general this may not be the case. See Figure 1.1, which is taken from MWG.

Step 2: Choose the action a ∈ {L, H}.


• In principle we can calculate profit from the two actions. However, since we don’t have a
closed form expression for profits from a = H, it’s hard to say much.

• We can say that there is a downward distortion relative to first-best. Profits from a = L are
the same as in first-best, while profits from a = H are lower. Thus we may choose a = L
when a∗ = H.

A sufficient statistics result

• In general q could be binary, continuous or even multi-dimensional. The wage at a given signal
is determined by the likelihood ratio, Pr(q|H)/ Pr(q|L).

• Suppose we have two statistics of firm performance, (q, q̃). For example, q could be revenue,
while q̃ are costs.

• CLAIM: If q is a sufficient statistic for (q, q̃) then we should make the wage dependent on q
alone.

• Proof: q is a sufficient statistic for (q, q̃) if we can write

f (q, q̃|a) = f0 (q, q̃)f1 (q|a)

In which case the likelihood ratio is


f (q, q̃|H) f1 (q|H)
`(q, q̃) = = ,
f (q, q̃|L) f1 (q|L)
which is independent of q̃.

• The general principle is that we should not make the agent’s wage depend on things outside
his control. This introduces risk without introducing incentives.

• See HW1, Q3 for more on this.
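The factorization argument can be checked directly: if the joint density factors as f₀(q, q̃)f₁(q|a), the q̃-component cancels from the likelihood ratio. The component functions below are made up for illustration (and not normalized densities).

```python
# Sufficient-statistic check: if f(q, qt | a) = f0(q, qt) * f1(q | a), then the
# likelihood ratio of the pair (q, qt) depends on q alone. The components f0
# and f1 below are made-up illustrative functions (not normalized densities).

def f0(q, qt):                 # action-independent component
    return 1.0 / (1.0 + q + qt)

def f1(q, a):                  # action-dependent component, a in {1, 2}
    return q ** a

def likelihood_ratio(q, qt):
    joint_H = f0(q, qt) * f1(q, 2)
    joint_L = f0(q, qt) * f1(q, 1)
    return joint_H / joint_L

# For fixed q, the ratio is identical for every value of qt.
vals = [likelihood_ratio(3.0, qt) for qt in (0.0, 1.0, 10.0)]
print(vals)    # every entry equals f1(3,2)/f1(3,1) = 3
```

Varying q̃ leaves the ratio unchanged, so the optimal wage should ignore q̃.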

Exercise: Suppose the principal is also risk averse, obtaining payoffs v(q − w(q)), where v(·) is
strictly concave. How does this affect the first-order condition (3)?

1.4 Solving the Problem: Continuous Actions

• See Segal and Tadelis (Section 5.4)

The principal’s problem:


• Suppose a ∈ ℝ+. The principal maximizes

max E [q − w(q)|a]
a,w(q)

s.t. (IR) E [u(w(q)) − c(a)|a] ≥ U


(IC) a ∈ arg max U (ã) = E [u(w(q)) − c(ã)|ã]
ã∈A

The first-order approach

• We’ll replace the agent’s IC constraint with the first-order condition corresponding to his
optimization problem,
U′(a) = 0  ⇔  ∫ u(w(q)) f_a(q|a) dq − c′(a) = 0    (ICFOC)

• This gives us the Lagrangian


L = ∫ [q − w(q)] dF(q|a) + λ ( ∫ [u(w(q)) − c(a)] dF(q|a) − U ) + µ ( ∫ u(w(q)) f_a(q|a) dq − c′(a) )

where λ ≥ 0 and µ ≷ 0.

Optimal wages

• Focusing on the parts that include wages,


max_{w(q)}  ∫ [ −w(q) + u(w(q)) ( λ + µ f_a(q|a)/f(q|a) ) ] dF(q|a)

• Pointwise maximization means we solve


 
max_w  −w + u(w) ( λ + µ f_a(q|a)/f(q|a) )

• This gives us the FOC


1/u′(w(q)) = λ + µ f_a(q|a)/f(q|a)    (4)
• Below, we show that MLRP implies f_a(q|a)/f(q|a) increases in q and that µ > 0. Thus, the RHS of (4)
increases in q. Since the LHS increases in w, w(q) increases in q.

CLAIM: Under MLRP, f_a(q|a)/f(q|a) increases in q.

• Recall MLRP states that f (q|aH )/f (q|aL ) increases in q for aH > aL .

• Hence log[f(q|aH)/f(q|aL)] = log f(q|aH) − log f(q|aL) increases in q.


• Hence

f_a(q|a)/f(q|a) = (d/da) log f(q|a) = lim_{ε→0} [log f(q|a+ε) − log f(q|a)]/ε

is increasing in q.
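For the normal-additive model, the score f_a(q|a)/f(q|a) = (q − a)/σ² can be checked by numerically differentiating log f in a; the parameter values below are illustrative.

```python
import math

# Score check in the normal model q = a + eps, eps ~ N(0, sigma^2):
# f_a(q|a)/f(q|a) = d/da log f(q|a) = (q - a)/sigma^2, increasing in q.
# sigma and a are illustrative values.
sigma, a = 1.5, 2.0

def log_f(q, a):
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (q - a) ** 2 / (2 * sigma ** 2)

def score(q, h=1e-6):
    # central difference in a; exact here since log f is quadratic in a
    return (log_f(q, a + h) - log_f(q, a - h)) / (2 * h)

qs = [0.0, 1.0, 2.0, 4.0]
scores = [score(q) for q in qs]
print(scores)    # matches (q - a)/sigma^2, increasing in q
```

The numerical derivative reproduces (q − a)/σ² exactly here, since log f is quadratic in a.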

Why is µ > 0? An intuition.

• Consider the consumer problem: maxx u(x) s.t. m − p · x = 0. The Lagrangian is

L = u(x) + λ[m − p · x]

If λ > 0 then this is the same as solving maxx u(x) s.t. m − p · x ≥ 0. Intuitively, the agent
would like to make m − p · x go negative; to ensure this does not happen, she must pay λ utils
if she exceeds her budget.

• Here we are essentially solving max_w Π s.t. U′(a) = 0. The Lagrangian is

L = Π + µ U′(a)

If µ > 0 then this is the same as solving max_w Π s.t. U′(a) ≥ 0. If U(a) is concave and
maximized at a₀, then we can also write this as max_w Π s.t. a ≤ a₀, meaning that if the
principal could choose any action in [0, a₀], she would choose a₀. That is, she'd like the
agent to exert more effort, not less.

• We now wish to prove µ > 0 formally. This requires a preliminary lemma.

LEMMA: Consider two functions v(q) and g(q) such that v(q) is decreasing and g(q) is mean zero
and quasi-increasing.¹ Then the correlation is negative:

Corr = ∫ v(q)g(q) dq ≤ 0

• Since g(q) averages to zero we are putting positive weight on “high q” outcomes (where v is
low) and negative weight on “low q” outcomes (where v is high). The average is thus negative.
¹ The function g(q) is quasi-increasing if g(q) < 0 for q < q̂ and g(q) > 0 for q > q̂. The naming parallels
“quasi-convex”, since the derivative of a quasi-convex function is quasi-increasing, like the derivative of a convex
function is increasing. Such a function is sometimes called single-crossing.


• Formally, let the crossing point of g(q) be q̂ (see Figure 1.2). Then

Corr = ∫_{q<q̂} v(q)g(q) dq + ∫_{q>q̂} v(q)g(q) dq
     ≤ ∫_{q<q̂} v(q̂)g(q) dq + ∫_{q>q̂} v(q̂)g(q) dq
     = v(q̂) ∫ g(q) dq
     = 0

To understand the inequality, note that: (i) When q < q̂, v(q) ≥ v(q̂) and g(q) ≤ 0, so
v(q)g(q) ≤ v(q̂)g(q), and (ii) When q > q̂, v(q) ≤ v(q̂) and g(q) ≥ 0, so v(q)g(q) ≤ v(q̂)g(q).

CLAIM: MLRP implies that µ ≥ 0 and hence wages increase in output.

• By contradiction, suppose that µ < 0, so w(q) decreases in q. Intuitively, this is going to be
terrible for incentives.

• Formally, we wish to show that

U′(a) = ∫ u(w(q)) f_a(q|a) dq − c′(a) < 0    (5)

• To apply the Lemma, let v(q) = u(w(q)) and g(q) = f_a(q|a). By assumption u(w(q)) is
decreasing. By MLRP, f_a(q|a) is quasi-increasing. Moreover, it averages to zero, ∫ f_a(q|a) dq = 0.
Intuitively, a higher effort will make some output more likely and some less likely; this
cancels out on average. Formally,

∫ f_a(q|a) dq = (d/da) ∫ f(q|a) dq = (d/da)[1] = 0

• Inequality (5) follows from the Lemma and the fact that c′(a) > 0.
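The Lemma's inequality can be checked on a grid with an illustrative pair of functions:

```python
import numpy as np

# Grid check of the lemma: v decreasing, g mean-zero and quasi-increasing
# imply int v(q) g(q) dq <= 0. The functions v and g are illustrative choices.
q = np.linspace(0.0, 1.0, 100_001)
dq = q[1] - q[0]

v = 2.0 - q        # decreasing
g = q - 0.5        # crosses zero once at q_hat = 0.5 and integrates to zero

mean_g = np.sum(g) * dq
corr = np.sum(v * g) * dq
print(mean_g)      # ~ 0
print(corr)        # ~ -1/12 < 0, as the lemma predicts
```

For these choices the integral equals −1/12 analytically, and the grid sum matches.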

Can we justify the first-order approach?

• It’s very convenient to replace the (IC) condition with (ICFOC), but this is only valid if the
agent’s problem is concave.

• There are a couple of sufficient conditions that guarantee this, but neither is satisfactory.
In practice, researchers calibrating a moral hazard model typically assume the first-order
approach works and numerically check after the fact.

Sufficient condition 1: Linear Distribution Function

• Suppose A = [0, 1], and there exist FL(q) and FH(q) such that F(q|a) = aFH(q) + (1 − a)FL(q)


Figure 1.2: A Decreasing Function and Quasi-Increasing Function are Negatively Correlated.

• Then

U(a) = ∫ u(w(q)) dF(q|a) − c(a)
     = a ∫ u(w(q)) dFH(q) + (1 − a) ∫ u(w(q)) dFL(q) − c(a)

is linear, hence concave, in a.

• Problem: This is similar to assuming that actions are binary.

Sufficient condition 2: MLRP and Convex Distribution Function

• Suppose F(q|a) is convex in a on support [q̲, q̄].

• Integrating by parts,

U(a) = ∫ u(w(q)) dF(q|a) − c(a)
     = u(w(q̄)) − ∫ u′(w(q)) w′(q) F(q|a) dq − c(a)

which is concave in a. This follows since u′(w) > 0 and, by MLRP, w′(q) > 0, so the term
−u′(w(q)) w′(q) F(q|a) is concave in a whenever F(q|a) is convex in a.

1.5 Mathematical Appendix: Stochastic Orders

• We’ll discuss three stochastic orders. For more on this see Shaked and Shanthikumar (2007)

• Suppose X ∼ F and Y ∼ G.

• For the examples, F is the distribution of men's longevity, and G is the distribution of women's.

Usual Stochastic Order (also known as, first-order stochastic dominance)


• X ≤st Y if F (u) ≥ G(u) for all u.

• Interpretation: Given any age, there are more women alive than men.

• Characterization: X ≤st Y iff E[φ(X)] ≤ E[φ(Y)] for any increasing function φ(·).

Hazard Rate Order


• X ≤hr Y if f(u)/(1 − F(u)) ≥ g(u)/(1 − G(u)) for all u.

• Interpretation: At any age, men are more likely to die than women.

• Characterization: X ≤hr Y iff [X|a ≤ X] ≤st [Y |a ≤ Y ] for all a. That is, if we truncate
the distribution below and renormalize it, then distribution of Y dominates X in the usual
stochastic order.

• Hence the hazard rate order implies the usual stochastic order.

Likelihood Ratio Order

• X ≤lr Y if f (u)/g(u) > f (v)/g(v) for all v > u.

• Interpretation: At lower ages, proportionately more men die. While the distributions of men's
and women's ages at death probably do satisfy the hazard rate order, they do not satisfy this
one, since f/g is higher at age 18 than at age 12.

• Characterization: X ≤lr Y iff [X|a ≤ X ≤ b] ≤st [Y|a ≤ Y ≤ b] for all a, b. That is, if we
truncate the distribution above and below, and renormalize it, then the distribution of Y
dominates that of X in the usual stochastic order.

• Hence the likelihood ratio order implies the hazard rate order.
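The implication chain (likelihood ratio ⇒ hazard rate ⇒ usual order) can be illustrated with two exponential distributions, an assumed example: exponentials ordered by rate satisfy all three orders.

```python
import math

# Three stochastic orders for X ~ Exp(rate 2), Y ~ Exp(rate 1) (illustrative
# choice): these satisfy all three, consistent with lr => hr => st.
lam_x, lam_y = 2.0, 1.0
f = lambda u: lam_x * math.exp(-lam_x * u)      # density of X
g = lambda u: lam_y * math.exp(-lam_y * u)      # density of Y
F = lambda u: 1 - math.exp(-lam_x * u)          # CDF of X
G = lambda u: 1 - math.exp(-lam_y * u)          # CDF of Y

grid = [0.1 * i for i in range(1, 50)]

lr = all(f(u) / g(u) >= f(v) / g(v) for u, v in zip(grid, grid[1:]))   # X <=lr Y
hr = all(f(u) / (1 - F(u)) >= g(u) / (1 - G(u)) for u in grid)         # X <=hr Y
st = all(F(u) >= G(u) for u in grid)                                   # X <=st Y
print(lr, hr, st)   # True True True
```

Here the hazard rates are constant (2 vs 1) and the likelihood ratio f/g = 2e^{−u} is decreasing, so every implication in the chain holds.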


2 Moral Hazard: Applications

2.1 Debt Contracts

• There is a sense that debt contracts motivate managers to work harder. This famously underlay
the leveraged buyout wave of the 1980s (e.g., read “Barbarians at the Gate”)

• Can we formalize this idea? This is based on Innes (1990, JET)

Model

• A risk-neutral entrepreneur (agent) seeks funds I from a competitive market of risk-neutral


investors (principals)

• After receiving funding, the agent chooses action a ∈ R+ at cost c(a).

• Output q ∼ f (q|a) is publicly observed.

• A contract r(q) ∈ [0, q] tells us how much of the output the agent repays.

• Formally, this is a standard moral hazard problem with risk-neutral agent and constraints on
the wage.

The problem

• The agent designs a contract ⟨a, r(q)⟩ to solve

max E[q − r(q)|a] − c(a)


a,r(q)

s.t. (IR) E[r(q)|a] ≥ I


(IC) a ∈ arg max E[q − r(q)|ã] − c(ã)

(F E) 0 ≤ r(q) ≤ q

where “FE” stands for feasibility.

• One may wonder: if the agent is designing the contract, then why do we need (IC)? After all,
isn't the action chosen to maximize his utility? The issue is that the agent is time inconsistent.
Consider the model from Section 1.3. Without (IC), any Pareto efficient contract fully insures
the agent. However, if the agent is fully insured, then he will not work.

Solving the problem

• Let’s use the first-order approach.


• Ignoring the (FE) constraints, we can set up a Lagrangian,


L = ∫ [q − r(q) − c(a)] dF(q|a) + λ ( ∫ r(q) dF(q|a) − I ) + µ ( ∫ [q − r(q)] f_a(q|a) dq − c′(a) )

• Pointwise maximization means we can ignore all the parts that don’t involve r(q),
 
max_r  r [ λ − µ f_a(q|a)/f(q|a) − 1 ]

Denote the term in brackets by η(q).

• Thus

r(q) =
    q    if η(q) > 0
    0    if η(q) < 0

• Suppose MLRP holds. If µ > 0 then η(q) is decreasing in q.² Thus there exists a q̂ such that

r(q) =
    q    if q < q̂
    0    if q > q̂

This “live or die” contract does not look like debt: The agent pays everything if the project
makes less than q̂, but pays nothing if it is successful.

Obtaining debt

• Innes observed that the above contract has a problem. Suppose the agent produced output
q̂ − ε. Then he could borrow 2ε, lower his repayment to zero, and pay back the loan. This
motivated Innes to consider monotone contracts in which r(q) is increasing.

• With this constraint, the optimal contract has the form



r(q) =
    q    if q < D
    D    if q ≥ D

where D can be interpreted as the level of debt.
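Given a distribution of output, the face value D is pinned down by the investors' binding (IR). A sketch assuming, for illustration, q ∼ U[0, 2] held fixed (ignoring the dependence of the distribution on a) and I = 0.5:

```python
# Bisection for the face value of debt D solving the investors' binding (IR):
# E[min(q, D)] = I. Illustrative setup: q ~ U[0, 2] (held fixed, ignoring the
# dependence of the distribution on a) and I = 0.5.

def expected_repayment(D, n=20_000):
    # midpoint-grid approximation of E[min(q, D)] for q ~ U[0, 2]
    total = 0.0
    for i in range(n):
        q = 2.0 * (i + 0.5) / n
        total += min(q, D)
    return total / n

I = 0.5
lo, hi = 0.0, 2.0
for _ in range(40):                 # E[min(q, D)] is increasing in D, so bisect
    mid = 0.5 * (lo + hi)
    if expected_repayment(mid) < I:
        lo = mid
    else:
        hi = mid
D = 0.5 * (lo + hi)
print(D)    # analytic: D - D^2/4 = 0.5  =>  D = 2 - sqrt(2) ~ 0.586
```

For the uniform case the closed form is E[min(q, D)] = D − D²/4, so the bisection recovers D = 2 − √2.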

What is the optimal contract?

• A contract is defined by two numbers: the amount of debt D and the effort level a. There
are two constraints:

² When is it the case that µ > 0? Innes (1990) supposes the agent's utility U(a) is concave and that the agent can
only deviate downwards, so U′(a) ≥ 0 and µ ≥ 0. There are two cases. If µ > 0, we have the analysis in the text. If
µ = 0 then (IC) is irrelevant and we can obtain first-best.


(IR)      ∫ r(q) dF(q|a) = I
(ICFOC)   ∫ (q − r(q)) f_a(q|a) dq = c′(a)

2.2 Multitasking

• Often agents have more than one action. How does a principal incentivize an agent to do the
right thing?

• Motivation

– Teachers choosing to “teach new material” or “test prep”

– Police choosing to “catch criminals” or “manipulate crime statistics”

– A Wells Fargo banker choosing to “help the customer” or “create fake accounts”

• This is based on Holmstrom and Milgrom (1991)

• In terms of modeling style, this is quite different from the generality of Section 1. The aim
here is to design the simplest possible model to get some clean economic insights.

Model

• An agent has two tasks: a1 and a2 .

• Output is q = a1 + f a2 , where the constant f may be positive or negative.

• The principal sees a performance measure m = a1 + ga2 , where the constant g may be positive
or negative.

• For simplicity, assume the principal pays a linear wage, w = α + βm.

• The agent is risk-neutral, U = w − c(a1 ) − c(a2 ) with outside option U . Let c(a) = a2 /2.

• The principal is risk neutral, Π = q − w.

The agent’s problem

• The agent solves


max U = α + β(a1 + ga2 ) − c(a1 ) − c(a2 )
a1 ,a2

• The FOCs are


a1 = β and a2 = gβ


The firm’s problem

• The firm chooses (α, β) to maximize

max_{α,β,a} Π = a1 + f a2 − α − βm

s.t. (IR) α + βm − c(a1) − c(a2) ≥ U

     (ICFOC) a1 = β, a2 = gβ

• We can use the (IR) constraint to eliminate α. The firm thus maximizes welfare,

Π = a1 + f a2 − c(a1) − c(a2) − U
  = β(1 + fg) − (β²/2)(1 + g²) − U

where the second line uses (ICFOC) to eliminate (a1, a2).

• Differentiating, we obtain

β = (1 + fg)/(1 + g²)

Π = (1/2) (1 + fg)²/(1 + g²)

• Intuitively, β captures the degree of alignment between output F = (1, f) and monitoring
G = (1, g). More precisely, let θ be the angle between F and G. Then

β = F·G/|G|² = (|F|/|G|) cos θ = (1/|G|) proj_G F

Example: Suppose f = 1, so both actions are equally productive. Unfortunately, action a2 is not
as easy to measure, g ≤ 1. Also assume U = 0.

• Motivation: Consider a manager who can spend her time finding new leads or team-building. The
firm cares about both, since both affect lifetime profits (q), but team-building doesn't have as
much impact on short-term sales (m).

• Suppose g = 1. Then β = 1, a1 = a2 = 1 and Π = 1. This is the first-best benchmark.

• Suppose g = 1/2. Then β = 6/5, a1 = 6/5, a2 = 3/5 and Π = 9/10. Thus the principal over-incentivizes
action a1 in order to get some a2.

• Suppose g = 0. Then β = 1, a1 = 1, a2 = 0 and Π = 1/2. The principal gives up on a2.
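The closed forms β = (1 + fg)/(1 + g²), a1 = β, a2 = gβ and Π = ½(1 + fg)²/(1 + g²) reproduce the numbers in the example exactly:

```python
from fractions import Fraction as Fr

# Closed forms from the multitasking model, in exact arithmetic:
# beta = (1 + f*g)/(1 + g^2), a1 = beta, a2 = g*beta,
# Pi = (1/2)(1 + f*g)^2/(1 + g^2). Reproduces the f = 1 example.

def solve(f, g):
    beta = (1 + f * g) / (1 + g * g)
    profit = Fr(1, 2) * (1 + f * g) ** 2 / (1 + g * g)
    return beta, beta, g * beta, profit   # (beta, a1, a2, Pi)

f = Fr(1)
for g in (Fr(1), Fr(1, 2), Fr(0)):
    print(g, solve(f, g))
# g = 1   -> beta = 1,   a1 = 1,   a2 = 1,   Pi = 1
# g = 1/2 -> beta = 6/5, a1 = 6/5, a2 = 3/5, Pi = 9/10
# g = 0   -> beta = 1,   a1 = 1,   a2 = 0,   Pi = 1/2
```

Exact rational arithmetic avoids any rounding ambiguity in the fractions above.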

Exercise (Harmful actions): What happens if f = −1, so task 2 is harmful to the firm?


• Motivation: The police can catch bad guys (a1 ) or fake crime statistics (a2 ).

• What happens if g = 1, g = 1/2 or g = 0?

In this model, effort in task 1 does not crowd out effort in task 2.

• But what if the agent only has so many hours in the day?

• See HW2, Q1 for this problem.


3 Multiple Agents

Multiple agents may interact in three main ways:

• They interact via the performance scheme (e.g. a tournament) even though their performance
is unrelated. See Section 3.1.

• They work as a team, and impose externalities on each other. See Section 3.2.

• They are subject to common shocks, so making agent 1's pay depend on agent 2's performance
can lower their risk. See HW2, Q2 for a study of such “relative performance evaluation”.

3.1 Tournaments

Model

• There are two risk-neutral agents with outside option U .

• Each chooses ai at costs c(ai ).

• Each produces output qi = ai + εi, where (εi, εj) are IID with zero mean.

• Notice that there is no inherent relationship between the two agents. They will purely be
linked via the performance scheme.

What is the first-best benchmark for a single agent?

• The principal maximizes

max Π = E[q − w(q)|a]


a,w(q)

s.t. (IR) E[w(q) − c(a)|a] ≥ U

• Substituting in for (IR), we get

Π = E[q|a] − c(a) − U
= a − c(a) − U

• The FOC is thus


c′(a∗) = 1    (6)

A tournament

• Suppose there are prizes wH and wL for first and second place.


• How well can a tournament do?

Agent’s problem

• Agent i wins with probability

Pr(i wins) = Pr(qi > qj) = Pr(ai − aj > εj − εi) = H(ai − aj)

where H(·) is the distribution of εj − εi. This is symmetric around zero.

• Agent i chooses ai to maximize

max Ui = (wH − wL )H(ai − aj ) + wL − c(ai )


ai

• The FOC is
(wH − wL) h(ai − aj) = c′(ai)

• In a symmetric pure strategy equilibrium, ai = aj = a and this becomes

(wH − wL) h(0) = c′(a)    (7)

• Exercise: Prove there are no asymmetric pure strategy equilibria.

• There is a pure strategy equilibrium if utility Ui is quasi-concave in ai. This may well not
hold. For example, if there is no noise, then there is only a mixed NE.

Can we implement first-best?

• We wish to implement a∗ and give the agent utility U .

• We have two variables: wH and wL . Intuitively, we can choose the spread to get the right
incentives, and the level to make the agent participate.

• Equations (6) and (7) imply that we get first-best effort if

wH − wL = 1/h(0)

• We then need (IR) to bind. In the symmetric equilibrium, both agents win with probability
1/2, so

U = (1/2)(wH + wL) − c(a∗) = U
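Putting the two conditions together, a short sketch with illustrative primitives: c(a) = a²/2 (so c′(a∗) = 1 gives a∗ = 1) and εᵢ ∼ N(0, σ²), for which εⱼ − εᵢ ∼ N(0, 2σ²) and h(0) = 1/(2σ√π).

```python
import math

# Tournament prizes implementing first-best. Illustrative primitives:
# c(a) = a^2/2 so c'(a) = a and a* = 1 from (6); eps_i ~ N(0, sigma^2) iid,
# hence eps_j - eps_i ~ N(0, 2 sigma^2) and h(0) = 1/(2 sigma sqrt(pi)).
sigma, U_bar = 1.0, 0.0
a_star = 1.0
c = lambda a: a ** 2 / 2

h0 = 1.0 / (2.0 * sigma * math.sqrt(math.pi))

spread = 1.0 / h0                 # from (7): (wH - wL) h(0) = c'(a*) = 1
level = U_bar + c(a_star)         # from binding (IR): (wH + wL)/2 = U + c(a*)
wH = level + spread / 2
wL = level - spread / 2           # may be negative: an entry fee for the loser

print(wH, wL)
print((wH - wL) * h0)                        # = c'(a*) = 1, so (7) holds at a*
print(0.5 * (wH + wL) - c(a_star) - U_bar)   # = 0, so (IR) binds
```

Note that wL can be negative; with risk-neutral agents this is just a transfer, effectively an entry fee.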


Problem with tournaments:

• Risk aversion. We’re making i’s pay depend on j’s performance. Why introduce any relation-
ship between them?

• Collusion. If the agents interact repeatedly, they might collude on a1 = a2 = 0. See HW2, Q4.

• Sensitivity. The contract requires that the principal knows h(0) exactly. This is unlikely. In
comparison, a piece rate contract is more robust.

Why does anyone use a tournament?

• They naturally arise in the world, so may not be planned.

• They are useful if the principal cannot be trusted to truthfully reveal output (“private evalu-
ations”). With a tournament, the principal always pays out the same amount, and so has no
incentive to lie.

3.2 Free-riding in a Partnership

Model

• N risk-neutral agents form a team.

• Each chooses action ai at cost c(ai). Agent i has reserve utility U.

• Team output depends on all the actions, q(a), where a = (a1, . . . , aN). Assume q(·) is strictly
increasing in its arguments.

• The agents form a partnership, meaning that there is no principal.

• Throughout, we focus on partial implementation. That is, we look for an equilibrium that
implements the desired outcome rather than insisting that all equilibria implement the desired
outcome. See HW2, Q3 for more on this.

Full-information benchmark

• The first-best actions {a∗i } maximize


q(a) − Σi c(ai)

• The FOC is

∂q(a∗)/∂ai = c′(a∗i)    (8)


• Assuming full participation is first-best, we can then split the surplus in any way we like so
as to give each agent at least U .

• In what follows, I focus on implementing the equal split, q(a∗ )/N .

Agent’s problem
• Suppose we use an output sharing rule {ti(q)} such that Σi ti(q) = q.

• In an equilibrium {a0i}, agent i solves

max_{ai}  ti(q(ai, a0−i)) − c(ai)

• This gives us the FOC

t′i(q(a0)) ∂q(a0)/∂ai = c′(a0i)
Can we implement first-best?

• The FOC needs to be satisfied at ai = a∗i . That is,


t′i(q(a∗)) ∂q(a∗)/∂ai = c′(a∗i)    (9)

• Comparing (8) and (9), we require that t′i(q(a∗)) = 1, which means that agent i needs to be
the residual claimant.

• However, there is only one surplus, so not everyone can be the residual claimant. Formally,
differentiating Σi ti(q) = q gives us Σi t′i(q) = 1. We cannot give everyone the marginal
dollar.

• Are there ways of implementing first-best in a team problem?

Case 1: Spotting individual deviations

• Suppose agent 1 is in charge of marketing, and agent 2 in charge of operations. Then we can
see why the team failed (low sales, or high costs), and punish the appropriate person.

• Formally, if q(ai, a∗−i) ≠ q(aj, a∗−j) for all ai ≠ a∗i and aj ≠ a∗j, then we can spot the deviator
and punish them by giving them no payment.

Case 2: Destroying output

• Suppose we set

ti(q) =
    q(a∗)/N    if q = q(a∗)
    0          otherwise

That is, we destroy the team's work if we don't get first-best output.


• There is an equilibrium in which each agent i chooses the first-best.

• However, there are problems:

– This is not renegotiation proof

– There is another equilibrium in which all agents choose ai = 0

– This does not work if output is random

Case 3: Introducing a budget breaker.

• The problem is that we cannot give the marginal dollar to everyone. But suppose we introduce
an outside agent who could chip in the required money.

• Suppose each agent i gets ti(q) = q − F, so that under first-best effort they split the pie,
ti(q(a∗)) = q(a∗)/N. This means F = ((N − 1)/N) q(a∗).

• Agent N + 1 takes the up-front payment of F from the N agents and makes each the residual
claimant,
tN+1(q) = q(a) − Σi ti(q(a)) = N F − (N − 1) q(a)

• Hence the budget-breaker breaks even in equilibrium, but earns money if the agents shirk. This
acts as a commitment device.
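The budget-breaker arithmetic can be verified for a small example (N = 3 and q(a∗) = 9 are illustrative numbers):

```python
from fractions import Fraction as Fr

# Budget-breaker accounting. Illustrative numbers: N = 3 partners, q(a*) = 9.
# Each partner gets t_i(q) = q - F with F = ((N-1)/N) q(a*); the breaker gets
# t_{N+1}(q) = N F - (N-1) q, so all payments sum to q for every q.
N = 3
q_star = Fr(9)
F = Fr(N - 1, N) * q_star          # = 6

def t_i(q):
    return q - F

def t_breaker(q):
    return N * F - (N - 1) * q

eq_share = t_i(q_star)             # q*/N = 3: equal split in equilibrium
breaker_eq = t_breaker(q_star)     # 0: the breaker breaks even on path
breaker_shirk = t_breaker(Fr(6))   # 6 > 0: the breaker profits if the team shirks
total = N * t_i(q_star) + t_breaker(q_star)   # = q*: the budget balances

print(eq_share, breaker_eq, breaker_shirk, total)
```

Every partner is the residual claimant at the margin (t′i = 1), yet total payments always sum to q.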

• One odd feature: The budget breaker really wants the team to fail. This is reminiscent of the
scheme in “The Producers”, whereby the title characters sell the revenue of a play many times
over, and try to stage the world's worst play.
