
Introduction: About the Instructor

I Shota Ichihashi

I Employment: Bank of Canada

I Research interest: Microeconomic Theory, Game Theory

I Education: PhD Stanford Univ. 2018

1 / 54
Introduction

Description:
I This course is the second half of the first-year PhD micro sequence.
I ECON 811 is prerequisite.
Topics (tentative):
1. Moral hazard and the principal-agent theory
2. Monopoly screening and its applications
3. Adverse selection and competitive screening
4. Signaling
5. Market design and matching
6. General equilibrium
7. General equilibrium under uncertainty

2 / 54
A bit more on logistics

I Until February 28, we use Zoom (both for class and office hour).
I We use lecture slides and notes, uploaded on onQ.
I No required textbooks, but MWG is useful (see the syllabus for details).
I Tentative plan for grading:
5 homework assignments (25-30%), midterm (30%), final (40-45%).

3 / 54
Lecture 1: Moral Hazard

ECON 813

Shota Ichihashi

4 / 54
Principal-Agent Models
Principal-agent models: principal designs a mechanism/contract for
agent(s) to participate in. Ask what contracts are optimal.
1. Employment contract: A firm offers an output-contingent wage. After
the worker signs a contract, they exert effort and get paid.
2. Insurance contract: A car insurance company offers an insurance
contract, which specifies payments conditional on a normal event and
on an accident (i.e., premium and deductible). After a person signs a
contract, they decide how carefully to drive.
3. Selling products: A smartphone company designs a menu of
price-quality pairs to supply. The buyer chooses an item from the
menu, pays the price, and gets the phone.
The agent often has private information
(really? At least we assume so in this class)
5 / 54
Moral Hazard & Adverse Selection
Old paradigm: MH = hidden action, AS = hidden info

A better way to understand the distinction is to look at when the private info arises

Moral hazard: Agent has relevant private info that arises after contracting
(symmetric info at the time of contracting)

I A production worker has private info about whether they have shirked
by working slowly or failing to care for equipment (“hidden action”)
I A salesperson gains private information about customer demands and
competitive offerings that affect his sales (“hidden information”)

Adverse Selection: Agent has relevant private info before contracting


I An employee is privately informed about their ability
(“hidden information”)
I Before meeting the firm, the worker had a choice to exert effort, which is
private and affects their relationship (“hidden action”)
6 / 54
Goals for This Lecture

I Study models of Moral Hazard with hidden action.

I Solve the first-best problem (observable action)

I Learn when the principal can/can’t attain the first-best even when
action is unobservable.

I Study several variations that violate the conditions for the first-best
(analyze the principal’s second best contract).

I Learn how moral hazard distorts the second-best contract

I We also learn basics of monotone comparative statics


(e.g., increasing differences, strong set order, supermodularity)

7 / 54
A Simple Model
A principal and an agent (e.g., a firm and a worker).
I The principal offers a contract, w(·). A contract w(·) maps a realized
output x ∈ R to compensation w(x).
I The agent decides whether to sign the contract; if he refuses, he earns
an outside option of u ∈ R.
I If signed, the agent privately chooses an effort level e ∈ E, where E is
the set of possible effort levels.
I The output x = g(e) + ε is realized, where g : E → R
(the output depends also on noise ε).
I The output is observed, and the agent receives w(x).
Payoffs are
I The agent: u(w(x), e)
I The principal: x − w(x) (risk neutral)
8 / 54
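The timing above can be sketched numerically. The primitives below are illustrative assumptions, not part of the model on the slide: g(e) = 2√e, standard normal noise, and a linear contract w(x) = a + bx.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(e):
    return 2.0 * np.sqrt(e)          # assumed output technology g(e)

def simulate(e, a, b, n=200_000):
    """Monte Carlo expected payoffs given effort e and contract w(x) = a + b*x."""
    eps = rng.normal(0.0, 1.0, n)    # output noise, E[eps] = 0
    x = g(e) + eps                   # realized output x = g(e) + eps
    w = a + b * x                    # compensation under the contract
    agent = w.mean() - e             # (risk-neutral) agent payoff: E[w] - e
    principal = (x - w).mean()       # principal payoff: E[x - w]
    return agent, principal

agent, principal = simulate(e=1.0, a=0.5, b=0.5)
```

Whatever (a, b) is, the two payoffs sum to g(e) − e in expectation: the wage is a pure transfer, so the contract only divides the surplus.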
The First-Best Solution

I First-best: no private information; in our context, observable effort


I The principal can directly specify what e the agent should choose
I The contract has to offer the agent a payoff of at least u.
I The principal’s problem is:

max_{e, w(·)}  E[ g(e) + ε − w(g(e) + ε) ],

subject to

(PC)  E[ u(w(g(e) + ε), e) ] ≥ u.

9 / 54
The First-Best Solution

Assume the agent is risk averse: for each e ∈ E, u(w, e) is strictly
increasing and strictly concave in w.

Claim
At the first-best solution, the agent is paid a fixed wage, w(π) = w for all
possible output π = g(e) + ε. Given e, the optimal w solves u(w, e) = u.
Proof 1.
I Otherwise, the principal could pay the agent its certainty-equivalent
wage wC, defined by u(wC, e) = E[u(w(g(e) + ε), e)]
I By Jensen’s inequality (u strictly concave in w), wC ≤ E[w(g(e) + ε)]:
paying wC costs no more than the expected wage, so the switch weakly
increases the principal’s payoff.

10 / 54
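Proof 1 can be checked on a worked instance. The primitives are assumptions for illustration: u(w, e) = √w − e (strictly increasing and concave in w) and a risky two-outcome contract.

```python
import numpy as np

def u(w, e):
    return np.sqrt(w) - e              # assumed concave utility

e = 1.0
wages = np.array([1.0, 9.0])           # a risky contract: w depends on output
probs = np.array([0.5, 0.5])

expected_u = probs @ u(wages, e)       # agent's expected utility under w(.)
# Certainty-equivalent wage wC solves u(wC, e) = E[u(w, e)]; invert sqrt:
wC = (probs @ np.sqrt(wages)) ** 2

expected_wage = probs @ wages          # expected cost of the risky contract
```

Here wC = 4 < 5 = expected_wage: the flat wage wC gives the agent the same expected utility at strictly lower expected cost, exactly as the claim asserts.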
The First-Best Solution
The first-best solution under the risk-averse agent pays a constant wage.

Proof 2. For simplicity, suppose there are finitely many possible outputs
π1 , . . . , πn , and let pj (e) denote the probability of output πj given effort e.
Principal’s first-best problem: choose w1 , . . . , wn , pay wj for output πj
The Lagrangian for the principal’s problem is

L = Σ_{j=1}^{n} p_j(e)(π_j − w_j) + λ [ Σ_{j=1}^{n} p_j(e) u(w_j, e) − u ].

∂L/∂w_j = −p_j(e) + λ p_j(e) ∂u/∂w_j = 0
⇒ (∂u/∂w_j)(w_j∗, e) = 1/λ for every j.

Thus w_1∗ = · · · = w_n∗ (since ∂u/∂w is strictly decreasing in w).


11 / 54
Risk-Neutral Case
From now on, focus on unobservable effort. In general, the principal’s
problem under moral hazard is as follows.

max_{e, w(·)}  E[ g(e) + ε − w(g(e) + ε) ],

subject to

(IC)  e ∈ arg max_{e′ ∈ E} E[ u(w(g(e′) + ε), e′) ]
(PC)  E[ u(w(g(e) + ε), e) ] ≥ u.

Suppose that the agent is risk-neutral, i.e., u(w, e) = w − e. For simplicity,
we also assume E[ε] = 0. Then the problem becomes

max_{e, w(·)}  E[ g(e) − w(g(e) + ε) ],

subject to

(IC)  e ∈ arg max_{e′ ∈ E} E[ w(g(e′) + ε) ] − e′
(PC)  E[ w(g(e) + ε) ] − e ≥ u.

12 / 54
Solution to Risk-Neutral Case: “Sell the Firm”

Recall the notation π = g(e) + ε.


I An optimal contract: let e∗ ∈ arg max_{e∈E} (g(e) − e) and
w(π) = π − p, where p = g(e∗) − e∗ − u.
I The contract (i) maximizes the sum of the expected payoffs g(e) − e,
(ii) makes PC bind, and (iii) e∗ satisfies IC.
I This contract is as if the principal sells the firm to the agent at a price
of p.
I The principal’s net profit is p, independent of π.
I The contract attains the first-best outcome!
From now on: study the second-best contract when “selling the firm”
doesn’t work

13 / 54
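A grid check of the “sell the firm” contract, under assumed primitives (g(e) = 2√e, u = 0, E[ε] = 0) that are not from the slides:

```python
import numpy as np

def g(e):
    return 2.0 * np.sqrt(e)           # assumed output technology

E = np.linspace(0.0, 4.0, 4001)       # effort grid
u_bar = 0.0                           # outside option

surplus = g(E) - E                    # total expected surplus g(e) - e
e_star = E[np.argmax(surplus)]        # first-best effort e*
p = g(e_star) - e_star - u_bar        # "price of the firm"

# Under w(pi) = pi - p the agent's expected payoff from e' is g(e') - e' - p,
# so e* remains the agent's best choice (IC) and PC binds:
agent_payoff = g(E) - E - p
e_agent = E[np.argmax(agent_payoff)]
```

With these primitives e∗ = 1 and p = 1: the agent picks e∗ on his own, his payoff equals u, and the principal earns p regardless of π.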
Remark: First-Best Problem and Efficient Contracts
I We set up the first-best problem as the principals’ profit maximization.
I We can interpret it as the problem of finding efficient contracts.
(no other contracts can improve utilities of P and A simultaneously.)
I Why? Efficient contracts maximize the principal’s utility by fixing
(minimum) utility u of the agent.

[Figure: axes are the agent’s utility (horizontal) and the principal’s utility
(vertical). The set of all outcomes (varying e and w(·)) is drawn with its
Pareto frontier (efficient outcomes). Every efficient contract maximizes P’s
utility subject to a fixed minimum level of A’s utility, such as u.]

14 / 54
Second-Best Analysis

We cover 3 cases where the principal can’t attain the first-best:


Moral hazard (unobservable effort) with
1. Limited liability
2. Risk-aversion
3. Multi-tasking

15 / 54
Limited Liability

I “Selling the firm”: the agent may have to pay a large penalty to the
principal when π takes a negative value.
I The principal cannot use such a contract if the agent is protected by
limited liability.
I We study the second-best contract under limited liability in a simple
setting.

16 / 54
Limited Liability: Formulation
Setting:
I Two outputs: “success,” which is worth π, or “failure,” which is worth 0 to
the principal.
I For any e ≥ 0, the probability of success is g(e) ∈ [0, 1), and the
effort cost is e.
I g is smooth, increasing, and strictly concave and g(0) = 0.
I We can write any compensation function w(·) as (w1 , w0 ), i.e., w1 is
the payment given success, and w0 is the payment given failure.
I The limited liability: w0 , w1 ≥ 0.
The agent’s problem: maxe≥0 {g(e)w1 + (1 − g(e))w0 − e}

The agent’s optimum is characterized by the first-order condition:

g′(e)(w1 − w0) = 1  ⇒  w1 = w0 + 1/g′(e).
17 / 54
The Principal’s Problem

Taking the FOC as the IC constraint, we can write the principal’s problem
as:

max_{e, w0, w1}  g(e)(π − w1) − (1 − g(e))w0

s.t. (IC)  w1 = w0 + 1/g′(e)
(PC)  g(e)w1 + (1 − g(e))w0 − e ≥ u
(LL)  w0, w1 ≥ 0.

If w0, w1 are non-negative in the original problem (without LL), then effort
is first-best, and

w1 = w0 + 1/g′(e∗)  and  w0 = (u + e∗ − g(e∗)w1)/(1 − g(e∗)) = u + e∗ − g(e∗)/g′(e∗).

18 / 54
Low u or High e
I If u = 0, we may find that (LL) is binding and (PC) is not binding.
I w0 = 0 and w1 = 1/g′(e)
I The agent’s rent (premium of utility above the minimum level required):
Rent(e) = g(e)w1 − e = g(e)/g′(e) − e
I The optimal choice of e solves
max_e { πg(e) − e − Rent(e) } = max_e g(e)(π − 1/g′(e)).

[Figure: g(e) and Rent(e) plotted against e.]
I Rent(e) is increasing in e
19 / 54
Reduced Optimal e

I If g(·) is strictly concave, Rent(·) is increasing.


I If w0 = 0, the only way to induce higher effort is to reward success
more.
I The first-best effort maximizes πg(e) − e, whereas the second-best
effort maximizes πg(e) − e − Rent(e).
I Intuitively, the second-best effort level should be lower because the
principal faces an additional cost Rent(e) that increases in e.
I Formalization? We use Monotone Comparative Statics (covered
after this lecture).

20 / 54
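The intuition above can be checked numerically. The technology below is an assumption for illustration: g(e) = e/(1+e), so g′(e) = 1/(1+e)² and Rent(e) = g(e)/g′(e) − e = e².

```python
import numpy as np

pi = 9.0
E = np.linspace(0.0, 5.0, 50_001)    # effort grid

g = E / (1.0 + E)                    # assumed success probability g(e)
rent = E**2                          # Rent(e) = g(e)/g'(e) - e for this g

first_best = pi * g - E              # max_e  pi*g(e) - e
second_best = pi * g - E - rent      # max_e  pi*g(e) - e - Rent(e)

e_fb = E[np.argmax(first_best)]
e_sb = E[np.argmax(second_best)]
```

Analytically e_fb = √π − 1 = 2 here, and the grid search confirms e_sb < e_fb: the increasing rent term lowers the optimal induced effort.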
Recap

I We introduced a wage lower bound to block “selling the firm to the


agent.”
I In a model of binary outcomes, we studied the incentive constraint
and identified the wage contracts (w1, w0) that implement any given e.
I We found the best such wage contract for the principal when the LL
bound is zero.
I We characterized the optimal e in such a case and showed that it is
less than the full-information optimum.

21 / 54
Analytical Approach to Solve the Principal’s Problem

We (roughly) analyzed the limited liability model as follows:


I First, for any e, which w(·) elicits effort e (under IC) and satisfies PC?
I Second, if a given e can be induced by multiple compensation
contracts that satisfy Step 1, which one maximizes the principal’s payoff?
I Third, which e should optimally be elicited?

This is a typical way to analyze the second-best problem (similar to the


cost minimization and profit maximization in producer theory)

We can sometimes get nice economic intuitions just from Step 1.

22 / 54
Second-Best Analysis

We cover 3 cases where the principal can’t attain the first-best:


Moral hazard (unobservable effort) with

1. Limited liability
2. Risk-aversion
3. Multi-tasking

23 / 54
Risk-Averse Agent Revisited

I n ≥ 2 possible outcomes, 1, . . . , n.
I K effort levels, i.e., E = {e1 , . . . , eK }.
I Effort e ∈ E determines p(e) = (p1 (e), . . . , pn (e)), where
pi (e) = Pr(outcome i).
I The value of output i to the principal is πi .
I The expected value of the output: π(e) := Σ_{i=1}^{n} p_i(e)π_i.
I A contract: (e, w1 , . . . , wn ), where wi is the payment given outcome i.
I The agent’s payoff is u(w, e) = u(w) − c(e), where u is smooth and
strictly concave in w.

In the first best (observable e), the optimal contract entails a fixed payment.

24 / 54
Risk-Averse Agent: The Principal’s Problem

max_{e, w1, ..., wn}  π(e) − Σ_{i=1}^{n} p_i(e)w_i

subject to

(IC)  −c(e) + Σ_{i=1}^{n} p_i(e)u(w_i) ≥ −c(e_k) + Σ_{i=1}^{n} p_i(e_k)u(w_i),  k = 1, . . . , K
(PC)  −c(e) + Σ_{i=1}^{n} p_i(e)u(w_i) ≥ 0  (we set u = 0).

How would the optimal contract change under moral hazard?

The first step to solve the problem is to derive the cost-minimizing way of
inducing effort level e.
25 / 54
The Principal’s Cost-Minimization Problem
What is the cost-minimizing way of inducing effort level e? The principal’s
cost-minimization problem is
min_{w1, ..., wn}  Σ_{i=1}^{n} p_i(e)w_i

s.t. (IC)  −c(e) + Σ_{i=1}^{n} p_i(e)u(w_i) ≥ −c(e_k) + Σ_{i=1}^{n} p_i(e_k)u(w_i),  k = 1, . . . , K
(PC)  −c(e) + Σ_{i=1}^{n} p_i(e)u(w_i) ≥ 0.

The Lagrangian is

L = −Σ_{i=1}^{n} p_i(e)w_i + Σ_{k=1}^{K} λ_k [ c(e_k) − c(e) + Σ_{i=1}^{n} (p_i(e) − p_i(e_k))u(w_i) ] + μ [ −c(e) + Σ_{i=1}^{n} p_i(e)u(w_i) ].

26 / 54
The Principal’s Cost-Minimization Problem

L = −Σ_{i=1}^{n} p_i(e)w_i + Σ_{k=1}^{K} λ_k [ c(e_k) − c(e) + Σ_{i=1}^{n} (p_i(e) − p_i(e_k))u(w_i) ] + μ [ −c(e) + Σ_{i=1}^{n} p_i(e)u(w_i) ]

The FOC with respect to w_i (= payment given output i) is

∂L/∂w_i = −p_i(e) + Σ_{k=1}^{K} λ_k (p_i(e) − p_i(e_k)) u′(w_i) + μ p_i(e) u′(w_i) = 0,  (1)

which implies

1/u′(w_i) = μ + Σ_{k=1}^{K} λ_k − ( Σ_{k=1}^{K} λ_k p_i(e_k) ) / p_i(e).  (2)

w_i typically varies with output i.


27 / 54
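A small worked instance of the cost-minimization problem, with assumed numbers (not from the slides): two outcomes, two effort levels, u(w) = ln(w), u = 0, and both IC and PC assumed to bind at the optimum.

```python
import numpy as np

p_high = np.array([0.8, 0.2])   # Pr(success), Pr(failure) under e_H
p_low = np.array([0.4, 0.6])    # same probabilities under the deviation e_L
c_high, c_low = 0.2, 0.0        # assumed effort costs

# With both constraints binding, the utility levels u_i = u(w_i) solve:
#   (IC binds)  (p_high - p_low) @ u = c_high - c_low
#   (PC binds)  p_high @ u = c_high
A = np.vstack([p_high - p_low, p_high])
b = np.array([c_high - c_low, c_high])
u_s, u_f = np.linalg.solve(A, b)

w_s, w_f = np.exp(u_s), np.exp(u_f)   # invert u(w) = ln(w)
expected_wage = p_high @ np.array([w_s, w_f])
```

Here w_s > w_f: unlike the first best, the cost-minimizing contract must let pay vary with output, since the deviation e_L makes failure relatively more likely.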
Moral Hazard with Risk-Averse Agent: So Far

First-best
I Optimal contract: payment is independent of output
I Minimize expected payment given the agent’s risk aversion
Second-best (unobservable e)
I Optimal contract: payment may depend on output
I Otherwise, the principal can only induce the effort that minimizes the
agent’s cost
I Trade-off between risk and incentive: To make agent work hard,
payment should depend on output But doing so is costly when agent
is risk-averse
How exactly does the principal trade off risk and incentive?

28 / 54
A Statistical Interpretation

1/u′(w_i) = μ + Σ_{k=1}^{K} λ_k − ( Σ_{k=1}^{K} λ_k p_i(e_k) ) / p_i(e).  (3)

I The agent’s pay depends on the following “test statistic”:

( Σ_{k=1}^{K} λ_k p_i(e_k) ) / p_i(e)

I The test statistic = the likelihood ratio between “the agent deviated to a
mixed action with probabilities proportional to the λ_k of actions e_k” and
“the agent follows the instruction (choosing e)”
I A higher λ_k (the shadow price of the IC for e_k) means that the agent really
likes e_k
I Interpretation
I An efficient “test” provides efficient effort incentives.
I The test statistic weights only “attractive” choices (i.e., ek with high λk )
I At the solution, the agent’s action is perfectly predicted! (Not the case
that the principal actually conducts hypothesis testing.)
29 / 54
Moral Hazard with Risk-Averse Agent: Recap

I If e observable, constant payment


I If e unobservable, payment may depend on output
I The optimal w(·) (to induce non-trivial e) trades off risk and incentive
(reward outputs that indicate that the agent is taking e)
I We didn’t answer which e the principal would induce (in general,
could be higher or lower than the first-best).

30 / 54
Second-Best Analysis

We cover 3 cases where the principal can’t attain the first-best:


Moral hazard (unobservable effort) with
1. Limited liability
2. Risk-aversion
3. Multi-tasking

31 / 54
Multi-Tasking

In practice, the agent allocates effort across multiple tasks:


1. A teacher may be asked to increase students’ test scores and
communication skills.
2. A production worker may be responsible for producing and taking
care of machines.
3. A CEO may be asked to increase short-term profits and take care of
long-term investment strategy.
4. A worker at a cat cafe may be responsible for the number of visitors
they serve per day and the cats’ health.

32 / 54
Multi-Tasking: Key Idea

Key Idea:
I Holmstrom and Milgrom (1991): Incentivizing an effort for one task
that is easy to measure distorts the effort level of another task that is
hard to measure.
I E.g., paying a teacher according to the test scores of students may be
a bad idea, because the teacher would then lower the effort to teach
students communication skills.
Ingredients:
I The agent’s efforts devoted to multiple tasks are substitutes:
increasing effort on one task makes it harder to do so on other tasks.
I The principal cares about outputs from multiple tasks, not just one.
I Some types of performance are harder to measure than others.

33 / 54
Multi-Tasking: A Simple Model

I The agent has two types of effort to choose, (e1 , e2 ) ∈ R2 .

I The agent is risk neutral and receives a payoff of w − c(e1 , e2 ), where


c is strictly convex.
I The principal’s (expected) value of (e1 , e2 ) is π1 e1 + π2 e2 , where
π1 , π2 > 0.
I The principal can observe e1 but not e2 (no observable output for e2 )
→ offer a fixed wage of w and ask the agent to choose e1 .

34 / 54
Multi-Tasking: A Simple Model

The principal’s problem is

max_{w, e1, e2}  π1 e1 + π2 e2 − w

subject to
(IC)  e2 ∈ arg min_e c(e1, e)
(PC)  w − c(e1, e2) ≥ 0.

IC leads to e2 = e_2^A(e1), and PC binds at the optimum.

This problem is thus equivalent to

max_{e1}  π1 e1 + π2 e_2^A(e1) − c(e1, e_2^A(e1)),  (4)

where e_2^A(e1) is the agent’s cost-minimizing e2 given e1.


35 / 54
Multi-Tasking

The second-best problem is

max_{e1}  π1 e1 + π2 e_2^A(e1) − c(e1, e_2^A(e1)).

The first-best problem is

max_{e1, e2}  π1 e1 + π2 e2 − c(e1, e2)
⇐⇒ max_{e1} [ max_{e2} π1 e1 + π2 e2 − c(e1, e2) ]
⇐⇒ max_{e1}  π1 e1 + π2 e_2^FB(e1) − c(e1, e_2^FB(e1)).  (5)

Difference b/w e_2^A(e1) (agent’s optimum) and e_2^FB(e1) (first-best) given e1

36 / 54
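The gap between the agent's e2 and the first-best e2 can be computed in an example. The quadratic cost below is an assumption for illustration: c(e1, e2) = 0.5e1² + 0.5(e2 − a)² + γ e1 e2, with γ > 0 making the efforts substitutes and a > 0 a "natural" level of e2.

```python
import numpy as np

pi1, pi2, gamma, a = 2.0, 1.0, 0.5, 1.0   # assumed illustrative parameters

def c(e1, e2):
    return 0.5 * e1**2 + 0.5 * (e2 - a)**2 + gamma * e1 * e2

grid = np.linspace(0.0, 3.0, 601)

def e2_agent(e1):
    """Agent's e2 under a fixed wage: minimize own cost given e1 (the IC)."""
    return grid[np.argmin(c(e1, grid))]

# Second best: principal chooses e1, anticipating e2 = e2_agent(e1).
sb = np.array([pi1 * e1 + pi2 * e2_agent(e1) - c(e1, e2_agent(e1))
               for e1 in grid])
e1_sb = grid[np.argmax(sb)]

# First best: principal chooses both efforts directly.
E1, E2 = np.meshgrid(grid, grid, indexing="ij")
fb = pi1 * E1 + pi2 * E2 - c(E1, E2)
i, j = np.unravel_index(np.argmax(fb), fb.shape)
e1_fb, e2_fb = grid[i], grid[j]
```

The agent under-supplies the unmeasured task (e2_agent(e1_sb) < e2_fb), so second-best profit falls short of the first best. In this particular quadratic example the distortion loads onto e2 and profit; for other cost functions the induced e1 can also fall below its first-best level, as the next slides argue.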
Multi-Tasking
The derivative of the principal’s second-best profit

max_{e1}  π1 e1 + π2 e_2^A(e1) − c(e1, e_2^A(e1))  (6)

with respect to e1 is

π1 − c1 + (π2 − c2) (de_2^A/de1)(e1),

where c_i = ∂c/∂e_i. The derivative of (5) with respect to e1 is

π1 − c1.  (7)

Note that e1 changes e_2^FB, but the impact of e1 through e_2^FB is zero.
(Envelope theorem. See Section M.L of MWG.)

The second-best has an additional term (π2 − c2)(de_2^A/de1)(e1).

37 / 54
Multi-Tasking: Optimal Effort Levels

Suppose (π2 − c2)(de_2^A/de1)(e1) < 0, e.g., the principal highly values e2
(π2 > c2) but promoting a high e1 could reduce e2 ((de_2^A/de1)(e1) < 0).

This is the “cost” to the principal of inducing a higher e1.

The second-best e1 can then be lower than the first-best.

The principal chooses not to enforce as much effort as in the first-best,
because providing incentives to increase e1 can divert the agent away from
the other task e2 that the principal values.

38 / 54
Summary

I Moral hazard (private information after contracting) as hidden action


I First-best problem (observable e)
I Risk-neutral agent (and the principal)
⇒ “selling the firm” attains the first best
I Moral hazard creates distortion relative to the first best under
I limited liability for the agent;
I risk-averse agent; or
I multi-tasking.

I General approach to solve moral hazard


(what is the optimal w(·) to induce e? → which e to induce?)

39 / 54
(Important) Appendix

40 / 54
Reminder: Reduced Optimal e Under Limited Liability

Risk-neutral agent with limited liability


I First-best: max_e πg(e) − e

I Second-best: max_e πg(e) − e − Rent(e)

I Rent(e) is increasing in e
I We want to show that e^FB ≥ e^SB

I Classical approach: Take derivative, compare first-order conditions

I Issue: No concavity/differentiability, potentially multiple solutions

I Monotone comparative statics

41 / 54
Monotone Comparative Statics (MCS)

Critical modern tool for economic analysis.

Definition
A function f : R × R → R has increasing differences in (x, θ) if,
whenever xH ≥ xL and θ H ≥ θ L , we have

f (xH , θH ) − f (xL , θH ) ≥ f (xH , θL ) − f (xL , θL ).

The return to choosing a higher value of x is increasing in θ.

This is a form of complementarity between x and θ.

42 / 54
Increasing Differences

Theorem
If f is twice continuously differentiable, then f has increasing differences if
and only if

∂²f(x, θ)/(∂x ∂θ) ≥ 0,  ∀x ∈ X, ∀θ ∈ Θ.

Many ways to verify increasing differences, e.g., ∂f /∂x is increasing in θ .


Even if ∂²f(x, θ)/(∂x ∂θ) ≤ 0, we can still use results based on increasing
differences, because f then has increasing differences in (x, −θ).

43 / 54
Strong Set Order

We will show that if f : R × R → R has increasing differences in (x, θ),


then arg max_x f(x, θ) is “increasing” in θ.

But “increasing” is not well-defined if arg max_x f(x, θ) has multiple solutions.

We want a formal way to compare two sets, such as arg max_x f(x, θ^H) and
arg max_x f(x, θ^L)

44 / 54
Strong Set Order

Definition
A set A ⊂ R is greater than a set B ⊂ R in the strong set order if, for any
a ∈ A and b ∈ B,

max {a, b} ∈ A and min {a, b} ∈ B.

I {y} is greater than {x} iff y ≥ x


I [2, 4] is greater than [1, 3]
I A = [0, 3] is not greater than B = [1, 2]. 0.5 ∈ A and 1 ∈ B, but
min(0.5, 1) is not in B.

45 / 54
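The three examples above can be checked with a finite-set version of the definition (the helper names are my own; the intervals are discretized so the check is finite):

```python
def sso_greater(A, B):
    """Is A greater than B in the strong set order?"""
    return all(max(a, b) in A and min(a, b) in B for a in A for b in B)

def interval(lo, hi, step=0.5):
    """Discretize [lo, hi] on a grid of the given step."""
    n = int(round((hi - lo) / step))
    return {lo + step * k for k in range(n + 1)}
```

sso_greater({4}, {3}) and sso_greater(interval(2, 4), interval(1, 3)) hold, while sso_greater(interval(0, 3), interval(1, 2)) fails: with a = 0.5 and b = 1, min{a, b} = 0.5 is not in B, matching the slide.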
Result

Theorem
For each θ ∈ Θ, define X ∗ (θ) := arg maxx∈X f (x, θ). If f has increasing
differences in (x, θ), then X ∗ (θ) is non-decreasing in the strong set order,
i.e., for any θ H and θ L , X ∗ (θ H ) is greater than X ∗ (θ L ) in the strong set
order.

46 / 54
Going Back to: Reduced Optimal e

I We want to show: the first-best effort level arg max_e πg(e) − e is greater
than the second-best effort level arg max_e πg(e) − e − Rent(e)
(let’s assume uniqueness of these effort levels).
I Previous notation: max_x f(x, θ).
I Now x = e. What about θ?
I Define f(e, θ) = πg(e) − e − (1 − θ)Rent(e)
I ∂f/∂θ = Rent(e) is increasing in e, i.e., f has increasing differences in
(e, θ).
I The effort at θ = 1 is higher than the effort at θ = 0: e^FB ≥ e^SB.

47 / 54
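This MCS argument can be illustrated on a grid, again with the assumed technology g(e) = e/(1+e), so Rent(e) = e², and π = 9: the maximizer of f(e, θ) is nondecreasing in θ, and the endpoints are the second-best (θ = 0) and first-best (θ = 1) efforts.

```python
import numpy as np

pi = 9.0
E = np.linspace(0.0, 5.0, 50_001)

def f(theta):
    # f(e, theta) = pi*g(e) - e - (1 - theta)*Rent(e), evaluated on the grid
    return pi * E / (1.0 + E) - E - (1.0 - theta) * E**2

argmaxes = [E[np.argmax(f(theta))] for theta in np.linspace(0.0, 1.0, 11)]
e_sb, e_fb = argmaxes[0], argmaxes[-1]   # theta = 0 and theta = 1
```

argmaxes increases in θ (the cross difference ∂f/∂θ = Rent(e) is increasing in e), and e_fb = √π − 1 = 2 here, so e_sb ≤ e_fb as claimed.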
MCS with n Choice Variables and m Parameters

Previous theorems generalize to X ⊂ Rn and Θ ⊂ Rm

Two main issues in generalization:


1. What’s “max” or “min” of two vectors?
2. Need complementarity within components of x, not just between x
and θ .

48 / 54
Meet and Join
Relevant notion of min and max are component-wise min and max, also
called meet and join:

x ∧ y = (min{x1 , y1 }, ..., min{xn , yn }) (meet)


x ∨ y = (max{x1 , y1 }, ..., max{xn , yn }) (join).

Definition
A set A ⊂ Rn is greater than a set B ⊂ Rn in the strong set order if, for
any a ∈ A and b ∈ B,

a ∨ b ∈ A, and
a ∧ b ∈ B.

A lattice is a set X ⊂ Rn such that x ∧ y ∈ X and x ∨ y ∈ X for all x, y ∈ X .

Example. A product set X = X1 × · · · × Xn is a lattice.


49 / 54
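The component-wise operations above are one-liners in code (a minimal sketch; the example vectors are my own):

```python
import numpy as np

def meet(x, y):
    return np.minimum(x, y)   # component-wise min (meet)

def join(x, y):
    return np.maximum(x, y)   # component-wise max (join)

x = np.array([1.0, 4.0])
y = np.array([3.0, 2.0])
m, j = meet(x, y), join(x, y)   # (1, 2) and (3, 4)
```

Neither m nor j equals x or y here, which is why the lattice property (a product set contains the meet and join of any two of its points) is a substantive requirement.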
Aside: Meet and Join

meet and join:

x ∧ y = (min{x1 , y1 }, ..., min{xn , yn }) (meet)


x ∨ y = (max{x1 , y1 }, ..., max{xn , yn }) (join).

Really hard to remember which is meet and which is join

Prof. Alvin Roth at Stanford: “meet at the intersection, join the union.”

meet = intersection = ∩ = ∧ = min

join = union = ∪ = ∨ = max

It makes more sense when we define a lattice using ∪ and ∩ (but not in this course)

50 / 54
Increasing Differences

The definition of increasing differences in (x, θ) is the same as before: for
xH ≥ xL and θH ≥ θL (≥ is coordinate-wise), we have

f(xH, θH) − f(xL, θH) ≥ f(xH, θL) − f(xL, θL).

Increasing differences in (x, θ) no longer enough to guarantee X ∗ (θ)


increasing.

Issue: what if increase in θ1 pushes x1 up, but increase in x1 pushes x2


down?

Need complementarity within components of x, not just between x and θ

Called supermodularity of f in x.

51 / 54
Supermodularity

Definition
A function f : X × Θ → R is supermodular in x if, for all x, y ∈ X and
θ ∈ Θ, we have

f (x ∨ y, θ) − f (x, θ) ≥ f (y, θ) − f (x ∧ y, θ). (8)

52 / 54
Supermodularity

Theorem
If f : R^n × R^m → R is twice continuously differentiable, then f has
increasing differences iff

∂²f(x, θ)/(∂x_i ∂θ_j) ≥ 0,  ∀x ∈ X, ∀θ ∈ Θ, i ∈ {1, . . . , n}, j ∈ {1, . . . , m},

and f is supermodular in x iff

∂²f(x, θ)/(∂x_i ∂x_j) ≥ 0,  ∀x ∈ X, ∀θ ∈ Θ, i ≠ j ∈ {1, . . . , n}.

53 / 54
Topkis’ Theorem

Theorem
If X ⊂ R^n is a lattice, Θ ⊂ R^m, and f : X × Θ → R has increasing
differences in (x, θ) and is supermodular in x, then
X∗(θ) = arg max_{x∈X} f(x, θ) is increasing in the strong set order.

54 / 54
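Topkis' theorem can be illustrated on a grid lattice. The objective below is an assumed example: f(x, θ) = θ(x1 + x2) + 0.5 x1 x2 − 0.5(x1² + x2²) is supermodular in x (cross partial 0.5 ≥ 0) and has increasing differences in (x, θ) (cross partials with θ equal 1).

```python
import numpy as np
from itertools import product

grid = np.linspace(0.0, 3.0, 31)   # X = grid x grid is a (finite) lattice

def f(x1, x2, theta):
    return theta * (x1 + x2) + 0.5 * x1 * x2 - 0.5 * (x1**2 + x2**2)

def argmax_set(theta):
    """The set X*(theta) of maximizers of f(., theta) on the grid lattice."""
    vals = {(x1, x2): f(x1, x2, theta) for x1, x2 in product(grid, grid)}
    best = max(vals.values())
    return {x for x, v in vals.items() if v >= best - 1e-12}

def sso_greater(A, B):
    """Strong set order comparison using component-wise join and meet."""
    return all(tuple(np.maximum(a, b)) in A and tuple(np.minimum(a, b)) in B
               for a in A for b in B)

X_low, X_high = argmax_set(0.5), argmax_set(0.8)
```

As Topkis' theorem predicts, X_high is greater than X_low in the strong set order (here the FOCs give x1 = x2 = 2θ, so the maximizers move up with θ).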
