
EconS 526: Mathematical Economics with

Applications

Mark J. Gibson
Washington State University

Fall 2020
Common utility functions

- Constant relative risk aversion:

  u(x) = (x^{1-r} - 1)/(1 - r)  if r ≠ 1;   u(x) = log x  as r → 1

- Exercise: use l'Hôpital's rule to show that

  lim_{r→1} (x^{1-r} - 1)/(1 - r) = log x

- Constant absolute risk aversion:

  u(x) = (1 - e^{-ax})/a  if a ≠ 0;   u(x) = x  as a → 0

- Exercise: use l'Hôpital's rule to show that

  lim_{a→0} (1 - e^{-ax})/a = x
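As a quick numerical sanity check of these two limits, one can evaluate the general formulas near the limiting parameter values; a minimal sketch (function names are my own, not from the slides):

```python
import math

def crra(x, r):
    """CRRA utility: (x^(1-r) - 1)/(1 - r), with log x as the r -> 1 limit."""
    if abs(r - 1.0) < 1e-12:
        return math.log(x)
    return (x**(1 - r) - 1) / (1 - r)

def cara(x, a):
    """CARA utility: (1 - e^(-a*x))/a, with x as the a -> 0 limit."""
    if abs(a) < 1e-12:
        return x
    return (1 - math.exp(-a * x)) / a

# As r -> 1, CRRA utility approaches log x; as a -> 0, CARA utility approaches x
print(crra(2.0, 1 + 1e-8), math.log(2.0))
print(cara(2.0, 1e-8))
```

The printed pairs agree to several decimal places, which is the numerical counterpart of the two l'Hôpital exercises.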
Common production (or utility) functions

- Constant elasticity of substitution:

  f(k, l) = (α k^ρ + (1 - α) l^ρ)^{1/ρ}   if ρ < 1, ρ ≠ 0
  f(k, l) = k^α l^{1-α}                    as ρ → 0
  f(k, l) = min[k/α, l/(1 - α)]            as ρ → -∞

- Exercise: show that the elasticity of substitution between capital and labor is 1/(1 - ρ)
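The Cobb-Douglas case really is the ρ → 0 limit of the CES form, which is easy to confirm numerically; a small illustrative check (values are arbitrary):

```python
def ces(k, l, alpha, rho):
    """CES production: (alpha*k^rho + (1-alpha)*l^rho)^(1/rho), rho < 1, rho != 0."""
    return (alpha * k**rho + (1 - alpha) * l**rho) ** (1 / rho)

def cobb_douglas(k, l, alpha):
    return k**alpha * l**(1 - alpha)

# As rho -> 0, CES output approaches the Cobb-Douglas value
print(ces(2.0, 3.0, 0.3, 1e-6), cobb_douglas(2.0, 3.0, 0.3))
```

For ρ = 10^-6 the two numbers agree to roughly four decimal places.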
Square matrices

- A square matrix A of dimension n is given by

  A = ( a11 ... a1n )
      ( ...     ... )
      ( an1 ... ann )

- The first subscript denotes the row and the second denotes the column
- We will mostly be dealing with square matrices in this course, but we will consider nonsquare matrices later
Determinants

- Let A be an n × n matrix. The determinant of A is denoted by det A or |A|
- Let A_ij be the (n-1) × (n-1) submatrix obtained by deleting row i and column j of A, let

  M_ij ≡ |A_ij|

  be the (i, j)th minor of A, and let

  C_ij ≡ (-1)^{i+j} M_ij

  be the (i, j)th cofactor of A
- The determinant of A is defined as

  |A| = a11 C11 + a12 C12 + ... + a1n C1n
      = a11 M11 - a12 M12 + ... + (-1)^{n+1} a1n M1n

- The definition may seem circular, but the recursion bottoms out at 1 × 1 matrices: |a_ij| = a_ij
Some determinant formulas

- Two of the basic determinant formulas are

  | a11 a12 |
  | a21 a22 | = a11 a22 - a12 a21

  | a11 a12 a13 |
  | a21 a22 a23 | = a11 | a22 a23 | - a12 | a21 a23 | + a13 | a21 a22 |
  | a31 a32 a33 |       | a32 a33 |       | a31 a33 |       | a31 a32 |
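The cofactor expansion above translates directly into a short recursive routine; a sketch for illustration (not an efficient method for large matrices):

```python
def det(A):
    """Determinant by cofactor expansion along the first row.

    A is a list of row lists; the recursion bottoms out at 1 x 1 matrices.
    """
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0.0
    for j in range(n):
        # delete row 0 and column j to form the (i, j) = (0, j) submatrix
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

print(det([[1.0, 2.0], [3.0, 4.0]]))   # 1*4 - 2*3 = -2
print(det([[2.0, 0.0, 1.0],
           [0.0, 3.0, 0.0],
           [1.0, 0.0, 2.0]]))          # 2*6 - 0 + 1*(-3) = 9
```

The 2 × 2 call reproduces the a11 a22 - a12 a21 formula exactly.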
Some properties of determinants

- The determinant of a lower-triangular, upper-triangular, or diagonal matrix is the product of its diagonal entries
- Let A be an n × n matrix and let R be its row echelon form. Then det A = ±det R. If no row interchanges are used to obtain R from A, then det A = det R
- For n × n matrices A and B, det AB = det A det B
- If A is invertible, det A^{-1} = 1/det A
Leading principal submatrices and minors

- The leading principal submatrices of an n × n matrix A are given by

  A1 = a11

  A2 = ( a11 a12 )
       ( a21 a22 )
  ...
  An = ( a11 ... a1n )
       ( ...     ... )
       ( an1 ... ann )

- The leading principal minors are the determinants of the leading principal submatrices
Definiteness of a matrix

- Let A be a symmetric (meaning a_ij = a_ji) n × n matrix
- A is positive definite iff all its n leading principal minors are strictly positive
- A is negative definite iff its n leading principal minors alternate in sign as follows:

  |A1| < 0, |A2| > 0, |A3| < 0, etc.

  (i.e., the kth-order leading principal minor should have the same sign as (-1)^k)
- A is positive semidefinite iff all its principal minors are nonnegative
- A is negative semidefinite iff -A is positive semidefinite
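The leading-principal-minor tests can be checked mechanically; a minimal sketch using NumPy (the helper names are mine):

```python
import numpy as np

def leading_principal_minors(A):
    """Determinants of the upper-left k x k submatrices, k = 1..n."""
    return [np.linalg.det(A[:k, :k]) for k in range(1, A.shape[0] + 1)]

def is_positive_definite(A):
    return all(m > 0 for m in leading_principal_minors(A))

def is_negative_definite(A):
    # the kth leading principal minor must have the same sign as (-1)^k
    return all((-1) ** k * m > 0
               for k, m in enumerate(leading_principal_minors(A), start=1))

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric
print(is_positive_definite(A))            # minors 2 and 3 are both positive
print(is_negative_definite(-A))           # minors of -A are -2 and 3
```

Note that these tests apply only to symmetric matrices, as on the slide.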
Euclidean spaces

- Economists are only concerned with real numbers, so we are primarily working with Euclidean spaces
- Euclidean space in one dimension is the real line, and in two dimensions it is the Cartesian plane
- To maintain generality, economists are mostly concerned with Euclidean space in n dimensions, which is R^n
- Euclidean space in n dimensions consists of vectors x = (x1, ..., xn)
Vector properties

- Vectors can be added and multiplied by scalars
- Vectors satisfy the commutative law for addition: u + v = v + u
- Vectors satisfy the associative law for addition: u + (v + w) = (u + v) + w
- Vectors satisfy the distributive laws: (r + s)u = ru + su and r(u + v) = ru + rv
Vector length and distance

- The Euclidean length of a vector is given by

  ||x|| = (x1^2 + ... + xn^2)^{1/2}

- Euclidean length satisfies the properties of a norm:
  - ||u|| ≥ 0, and ||u|| = 0 only when u = 0
  - ||ru|| = |r| ||u||
  - ||u + v|| ≤ ||u|| + ||v||
- The distance between two vectors is

  ||x - y|| = ((x1 - y1)^2 + ... + (xn - yn)^2)^{1/2}
Inner product

- The Euclidean inner product, or dot product, of vectors x and y is given by

  x · y = x1 y1 + ... + xn yn

- The inner product satisfies the following properties:
  - u · v = v · u
  - u · (v + w) = u · v + u · w
  - u · (rv) = r(u · v) = (ru) · v
  - u · u ≥ 0
  - u · u = 0 implies u = 0
  - (u + v) · (u + v) = u · u + 2(u · v) + v · v
Hyperplanes

- The n-dimensional equivalent of a line in 2 dimensions and a plane in 3 dimensions is known as a hyperplane. A hyperplane can be expressed as

  a1 x1 + ... + an xn = d

  or, using the inner product,

  a · x = d
Example: commodity space and budget sets

- Let there be n commodities and let xi denote the quantity of commodity i. The vector x = (x1, ..., xn) is a commodity bundle
- Since we are only dealing with nonnegative quantities, the commodity space is the nonnegative orthant of R^n, often denoted R^n_+
- If I is the consumer's income and p is the price vector, then the budget set is given by

  {x ∈ R^n_+ : p · x ≤ I}

- Notice that the budget set is bounded by the hyperplane

  p · x = I
Unconstrained optimization

With unconstrained optimization, finding a local max or min of a function involves two basic steps:

1. Use first-order conditions (FOCs) to find the critical points of the function
2. Use second-order conditions (SOCs) to determine whether a critical point is a local max or min
Critical points and FOCs

- A critical point is an interior point where a function's partial derivatives all equal zero
- This matters because a local max or min is a critical point (but a critical point is not necessarily a local max or min)
- Consider a function F : U → R^1, where F is continuously differentiable (C^1) and U ⊆ R^n. If x* is a local max or min of F and x* is an interior point of U, then

  ∂F/∂xi (x*) = 0 for i = 1, ..., n
Critical points and SOCs

- To determine whether a critical point is a local max or min, we use the SOCs, which are represented by the Hessian of F:

  D^2 F(x) = ( ∂^2F/∂x1^2 (x)    ...  ∂^2F/∂x1∂xn (x) )
             ( ...                ...  ...             )
             ( ∂^2F/∂xn∂x1 (x)   ...  ∂^2F/∂xn^2 (x)  ),

  where F is twice continuously differentiable (C^2)
- For a C^2 function, the cross-partials are equal, so the Hessian is a symmetric matrix
- If the Hessian D^2 F(x*) is negative definite, then x* is a strict local max of F
- If the Hessian D^2 F(x*) is positive definite, then x* is a strict local min of F
- If the Hessian D^2 F(x*) is indefinite, then x* is a saddle point of F
One-variable example

- Find the critical points of

  F(x) = 2x^3 - 0.5x^2 + 2

- Determine whether each critical point is a local max, a local min, or an inflection point
One-variable example

- To find the critical points of

  F(x) = 2x^3 - 0.5x^2 + 2,

  we take the FOC:

  F'(x) = 6x^2 - x = 0,

  or

  x(6x - 1) = 0

- The critical points are x = 0, 1/6
- To characterize the critical points, we take the SOC:

  F''(x) = 12x - 1

- Since F''(0) = -1 < 0, x = 0 is a strict local max
- Since F''(1/6) = 1 > 0, x = 1/6 is a strict local min
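The FOC/SOC calculation above can be confirmed with finite differences; a small illustrative sketch:

```python
def F(x):
    return 2 * x**3 - 0.5 * x**2 + 2

def dF(x, h=1e-6):
    """Central-difference first derivative."""
    return (F(x + h) - F(x - h)) / (2 * h)

def d2F(x, h=1e-4):
    """Central-difference second derivative."""
    return (F(x + h) - 2 * F(x) + F(x - h)) / h**2

for x in (0.0, 1 / 6):
    # dF is ~0 at both critical points; d2F < 0 at x = 0 (max), > 0 at x = 1/6 (min)
    print(x, dF(x), d2F(x))
```

The second derivative estimates come out near -1 and +1, matching F''(x) = 12x - 1.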
Two-variable example

- Find the critical points of

  F(x1, x2) = 4x1^2 - x1 x2 + x2^2 - x1^3

- Determine whether each critical point is a local max, local min, or saddle point
Two-variable example

- The FOCs are

  ∂F/∂x1 = 8x1 - x2 - 3x1^2 = 0
  ∂F/∂x2 = -x1 + 2x2 = 0

- The second FOC implies that x1 = 2x2. We can substitute this into the first FOC to obtain 15x2 - 12x2^2 = 0, or 12x2(5/4 - x2) = 0
- The critical points are (x1, x2) = (0, 0), (5/2, 5/4)
Two-variable example

- The SOCs are given by the Hessian

  D^2 F(x1, x2) = ( 8 - 6x1   -1 )
                  ( -1         2 )

- At (x1, x2) = (0, 0), the leading principal minors of the Hessian are 8 > 0 and 15 > 0, so the Hessian is positive definite and (x1, x2) = (0, 0) is a strict local minimum
- At (x1, x2) = (5/2, 5/4), the leading principal minors of the Hessian are -7 < 0 and -15 < 0, so the Hessian is indefinite and (x1, x2) = (5/2, 5/4) is a saddle point
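The minor calculations for this Hessian are easy to verify numerically; a short sketch:

```python
import numpy as np

def hessian_F(x1, x2):
    # Hessian of F(x1, x2) = 4*x1^2 - x1*x2 + x2^2 - x1^3
    return np.array([[8 - 6 * x1, -1.0],
                     [-1.0, 2.0]])

def leading_minors(H):
    return [np.linalg.det(H[:k, :k]) for k in (1, 2)]

print(leading_minors(hessian_F(0.0, 0.0)))    # [8, 15]  -> positive definite
print(leading_minors(hessian_F(2.5, 1.25)))   # [-7, -15] -> indefinite
```

The sign patterns reproduce the classification on the slide: a strict local min at the origin and a saddle at (5/2, 5/4).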
Necessary vs. sufficient conditions

- The previous conditions are sufficient for finding a local max or min
- What conditions must necessarily hold for a local max or min?
- If x* is a local max of F, then (∂F/∂xi)(x*) = 0 for i = 1, ..., n and the Hessian D^2 F(x*) is negative semidefinite
- If x* is a local min of F, then (∂F/∂xi)(x*) = 0 for i = 1, ..., n and the Hessian D^2 F(x*) is positive semidefinite
- One-variable example: F(x) = x^4 clearly has a global minimum at critical point x = 0. The necessary conditions hold: F'(0) = 0 and F''(0) = 0. The sufficient condition (F''(0) > 0) does not
Row echelon form

- A row of a matrix has k leading zeros if the first k elements of the row are zeros and the (k + 1)th element of the row is not zero
- A matrix is in row echelon form if each row has more leading zeros than the row preceding it. If a row is all zeros, all the subsequent rows must contain only zeros
- You can get a matrix into row echelon form using elementary row operations:
  - interchange two rows of a matrix
  - change a row by adding to it a multiple of another row
  - multiply each element of a row by the same nonzero number
Practice

- Go from

  (  1    -0.4   -0.3  )
  ( -0.2   0.88  -0.14 )
  ( -0.5  -0.2    0.95 )

  to the following row echelon form (there are other possibilities):

  (  1   -0.4   -0.3 )
  (  0    0.8   -0.2 )
  (  0    0      0.7 )
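Forward elimination with the row operations listed above produces exactly this echelon form (the signs here are as reconstructed on the previous slide); a minimal sketch:

```python
def row_echelon(M):
    """Forward elimination without row interchanges (assumes nonzero pivots)."""
    A = [row[:] for row in M]
    n = len(A)
    for k in range(n):
        for i in range(k + 1, n):
            # add a multiple of the pivot row to zero out entry (i, k)
            factor = A[i][k] / A[k][k]
            A[i] = [a - factor * b for a, b in zip(A[i], A[k])]
    return A

M = [[1.0, -0.4, -0.3],
     [-0.2, 0.88, -0.14],
     [-0.5, -0.2, 0.95]]
for row in row_echelon(M):
    print(row)   # rows (1, -0.4, -0.3), (0, 0.8, -0.2), (0, 0, 0.7) up to rounding
```

Adding 0.2 times row 1 to row 2, 0.5 times row 1 to row 3, and then 0.5 times the new row 2 to row 3 gives the stated echelon form.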
Rank

- The rank of a matrix is the number of nonzero rows in its row echelon form. If an m × n matrix has rank min{m, n}, it has maximal rank
- For example, the previous matrix,

  ( 1   -0.4   -0.3 )
  ( 0    0.8   -0.2 ) ,
  ( 0    0      0.7 )

  has rank 3, or maximal rank
- What is the rank of

  ( 1  2 )
  ( 2  4 ) ?
Constrained optimization: equality constraints

- Most optimization in economics is subject to constraints
- We will start with equality constraints and use the method of Lagrange
One equality constraint with two variables: FOCs

- Let f and h be C^1 functions of two variables. Let x* = (x1*, x2*) be a solution to

  max f(x1, x2)
  s.t. h(x1, x2) = a,

  where x* is not a critical point of h. Then there is a real number µ* such that (x1*, x2*, µ*) is a critical point of the Lagrangian function

  L(x1, x2, µ) = f(x1, x2) - µ[h(x1, x2) - a]

- That is, evaluated at (x1*, x2*, µ*), we have

  ∂L/∂x1 = 0,  ∂L/∂x2 = 0,  ∂L/∂µ = 0
Constraint qualification

- The condition that x* is not a critical point of h is known as a constraint qualification
- While important in theory, this condition is rarely important in practice
Example

- Solve

  min_{x1, x2} 2x1 + 4x2
  s.t. x1^{0.25} x2^{0.75} = 10

- First we set up the Lagrangian:

  L(x1, x2, µ) = 2x1 + 4x2 - µ(x1^{0.25} x2^{0.75} - 10)

- Then we take the FOCs and solve
Example

- The FOCs are

  ∂L/∂x1 = 2 - 0.25 µ x1^{-0.75} x2^{0.75} = 0
  ∂L/∂x2 = 4 - 0.75 µ x1^{0.25} x2^{-0.25} = 0
  ∂L/∂µ = -(x1^{0.25} x2^{0.75} - 10) = 0

- We can eliminate the Lagrange multiplier by combining the first two FOCs to obtain 2 = 3x1/x2, or x2 = 1.5x1
- Substituting this into the third FOC (the constraint) gives (x1, x2) ≈ (7.38, 11.07)
- Finally, we should verify that the solution is not a critical point of h(x1, x2) = x1^{0.25} x2^{0.75} and that we have, in fact, found a minimum (more on this later)
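A quick numerical check of this solution, substituting x2 = 1.5 x1 into the constraint:

```python
# min 2*x1 + 4*x2  s.t.  x1^0.25 * x2^0.75 = 10
x1 = 10 / 1.5**0.75   # from x2 = 1.5*x1 substituted into the constraint
x2 = 1.5 * x1

print(round(x1, 2), round(x2, 2))        # approximately 7.38 and 11.07
print(x1**0.25 * x2**0.75)               # the constraint holds: equals 10
# FOC ratio: (0.25/0.75) * (x2/x1) should equal the price ratio 2/4 = 0.5
print((0.25 / 0.75) * (x2 / x1))
```

Both the constraint and the tangency condition are satisfied at the reported point.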
Multiple variables and equality constraints: FOCs

- Let f and h1, ..., hm be C^1 functions of n variables. Denote the constraint set by

  C_h ≡ {x = (x1, ..., xn) : h1(x) = a1, ..., hm(x) = am}

- Let x* ∈ C_h be a local max or min of f on C_h that satisfies the nondegenerate constraint qualification that the Jacobian matrix has rank m at x*
- Then there exist Lagrange multipliers µ* = (µ1*, ..., µm*) such that (x*, µ*) is a critical point of the Lagrangian function

  L(x, µ) = f(x) - ∑_{i=1}^{m} µi (hi(x) - ai)

- To tell whether x* is a local max or min, we need to take the SOCs
SOCs with a bordered Hessian

- Let D^2_x L(x*, µ*) be the n × n Hessian and let Dh(x*) be the m × n Jacobian of the constraint matrix
- Then the bordered Hessian is the (n + m) × (n + m) matrix

  H ≡ ( 0          Dh(x*)          )
      ( Dh(x*)^T   D^2_x L(x*, µ*) )

- If the last n - m leading principal minors of H alternate in sign, with the sign of |H| the same as the sign of (-1)^n, then x* is a strict local constrained max
- If the last n - m leading principal minors of H all have the same sign as (-1)^m, then x* is a strict local constrained min
Example

- Recall the problem

  min_{x1, x2} 2x1 + 4x2
  s.t. x1^{0.25} x2^{0.75} = 10

  with FOCs

  ∂L/∂x1 = 2 - 0.25 µ x1^{-0.75} x2^{0.75} = 0
  ∂L/∂x2 = 4 - 0.75 µ x1^{0.25} x2^{-0.25} = 0

- Calculate the bordered Hessian and use it to verify that the solution is a strict local constrained min
Example

- The bordered Hessian is

  (  0                             (1/4) x1^{-3/4} x2^{3/4}        (3/4) x1^{1/4} x2^{-1/4}     )
  (  (1/4) x1^{-3/4} x2^{3/4}     (3/16) µ x1^{-7/4} x2^{3/4}     -(3/16) µ x1^{-3/4} x2^{-1/4} )
  (  (3/4) x1^{1/4} x2^{-1/4}    -(3/16) µ x1^{-3/4} x2^{-1/4}     (3/16) µ x1^{1/4} x2^{-5/4}  )

- In this case, n - m = 2 - 1 = 1, so we only need to verify that the determinant of the bordered Hessian has the same sign as (-1)^m = -1
- Notice that the signs of the elements of the bordered Hessian are as follows:

  ( 0  +  + )
  ( +  +  - )
  ( +  -  + )

- Thus the determinant of the bordered Hessian will be negative, which implies that we have a strict local constrained min
Constrained optimization: maximizing two variables with one inequality constraint

- Consider the problem

  max f(x, y)
  s.t. g(x, y) ≤ b

- We can still use the Lagrangian method, but the Lagrange multiplier (λ) must be nonnegative and we must add a complementary slackness condition:

  λ(g(x, y) - b) = 0
Maximizing two variables with one inequality constraint

- Consider the problem

  max f(x, y)
  s.t. g(x, y) ≤ b

- The Lagrangian is

  L(x, y, λ) = f(x, y) - λ(g(x, y) - b)

- If (x*, y*) is a solution to the above problem (and a constraint qualification holds), then there exists a nonnegative Lagrange multiplier λ* such that (x*, y*, λ*) satisfies

  ∂L/∂x = 0
  ∂L/∂y = 0
  g(x, y) ≤ b
  λ(g(x, y) - b) = 0
Example

- Consider the problem

  max_{x, y} (x + a) y^2
  s.t. bx + cy ≤ m,

  where all the parameters are strictly positive
- The Lagrangian is

  L(x, y, λ) = (x + a) y^2 - λ(bx + cy - m)

- The FOCs are

  y^2 - λb = 0
  2(x + a)y - λc = 0

- We need to check two cases: λ = 0 and λ > 0
Example

- If λ = 0, then the first FOC implies that y = 0. The constraint then gives x ∈ (-∞, m/b], but the value of the maximand is 0 regardless
- If λ > 0, then we can combine the FOCs to obtain

  y / (2(x + a)) = b/c,

  and the constraint is bx + cy = m, so we have x = (m - cy)/b. We combine these and solve to obtain

  (x*, y*) = ( (m - 2ab)/(3b), 2(m + ab)/(3c) ).

  Notice that x* + a > 0, so this is the maximum
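A spot-check of the closed-form maximizer, using illustrative parameter values of my own choosing:

```python
# max (x + a)*y^2  s.t.  b*x + c*y <= m, with hypothetical parameters
a, b, c, m = 1.0, 1.0, 1.0, 10.0

x_star = (m - 2 * a * b) / (3 * b)      # = 8/3
y_star = 2 * (m + a * b) / (3 * c)      # = 22/3

# the constraint binds and the FOC ratio y/(2(x+a)) = b/c holds
print(b * x_star + c * y_star)           # equals m = 10
print(y_star / (2 * (x_star + a)))       # equals b/c = 1
```

Both optimality conditions from the slide hold exactly at the candidate point.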


Constraint qualification

- The constraint qualification for this sort of optimization problem only applies when the constraint is binding (i.e., when the constraint holds with equality)
- The constraint qualification is: if g(x*, y*) = b, then (x*, y*) is not a critical point of g:

  ∂g/∂x (x*, y*) ≠ 0  or  ∂g/∂y (x*, y*) ≠ 0

- What if there are n variables and k constraints: g1(x1, ..., xn) ≤ b1, ..., gk(x1, ..., xn) ≤ bk? Suppose the first k0 constraints are binding. Then the constraint qualification is that the Jacobian matrix of the binding constraints has rank k0
- For example, in the previous problem the constraint was binding, so we check to see that the Jacobian has rank 1
Minimization problems with inequality constraints

- How do minimization problems with inequality constraints differ? Everything is the same except we reverse the inequality. For example:

  min f(x, y)
  s.t. g(x, y) ≥ b

- The Lagrangian is

  L(x, y, λ) = f(x, y) - λ(g(x, y) - b)

- If (x*, y*) is a solution to the above problem, then there exists a nonnegative Lagrange multiplier λ* such that (x*, y*, λ*) satisfies

  ∂L/∂x = 0
  ∂L/∂y = 0
  g(x, y) ≥ b
  λ(g(x, y) - b) = 0
Kuhn–Tucker

- The most widely used approach to optimization in economics is given by the Kuhn–Tucker theorem
- This is because most economic problems involve optimizing subject to inequality constraints and nonnegativity constraints
Kuhn–Tucker maximization

- Consider a maximization problem with n variables that must be nonnegative and k inequality constraints:

  max f(x1, ..., xn)
  s.t. g1(x1, ..., xn) ≤ b1
       ...
       gk(x1, ..., xn) ≤ bk
       x1 ≥ 0, ..., xn ≥ 0

- The Kuhn–Tucker Lagrangian is

  L(x, λ1, ..., λk) = f(x) - ∑_{j=1}^{k} λj (gj(x) - bj)
Kuhn–Tucker maximization

- If x* solves the above maximization problem and the nondegenerate constraint qualification holds, then there exist nonnegative Lagrange multipliers λ1*, ..., λk* such that (x*, λ1*, ..., λk*) satisfies

  ∂L/∂xi ≤ 0,        i = 1, ..., n
  xi ∂L/∂xi = 0,     i = 1, ..., n
  ∂L/∂λj ≥ 0,        j = 1, ..., k
  λj ∂L/∂λj = 0,     j = 1, ..., k

- In this case, the constraint qualification is that the Jacobian matrix with elements ∂gi/∂xj, where the i's vary over the indices of the gi constraints that are binding at x* and the j's vary over the indices for which xj* > 0, has maximal rank
Example

- Consider the following consumer's problem:

  max_{c1, c2} log(1 + c1) + log(1 + c2)
  s.t. c1 + 7c2 ≤ 5
       c1 ≥ 0, c2 ≥ 0

- The Kuhn–Tucker Lagrangian is

  L = log(1 + c1) + log(1 + c2) - λ(c1 + 7c2 - 5)

- Specify the Kuhn–Tucker conditions. How many cases do we need to check?
Example

- The Kuhn–Tucker conditions are

  1/(1 + c1) - λ ≤ 0
  c1 [1/(1 + c1) - λ] = 0
  1/(1 + c2) - 7λ ≤ 0
  c2 [1/(1 + c2) - 7λ] = 0
  5 - c1 - 7c2 ≥ 0
  λ(5 - c1 - 7c2) = 0

- There are 8 cases to check here. The first condition makes it clear that any case with λ = 0 cannot be a solution. The last condition then implies that we can eliminate the case where c1 = c2 = 0. We are left with 3 cases to check
Example

- Case 1: c1 > 0, c2 > 0, λ > 0. The K–T conditions require that

  (1 + c1)/(1 + c2) = 7,

  so

  c1 = 7c2 + 6.

  Plugging this into the budget constraint, we see that c2 = -1/14 < 0, so this cannot be a solution
- Case 2: c1 = 0, c2 > 0, λ > 0. The budget constraint implies that c2 = 5/7, which then implies that λ = 1/12. But this does not satisfy the first K–T condition
- Case 3: c1 > 0, c2 = 0, λ > 0. The budget constraint implies that c1 = 5, which then implies that λ = 1/6. This satisfies all the conditions
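The corner solution (c1, c2) = (5, 0) can also be confirmed by brute force along the budget line; a minimal sketch:

```python
import math

def utility(c1, c2):
    return math.log(1 + c1) + math.log(1 + c2)

# search along the binding budget line c1 = 5 - 7*c2 with c2 in [0, 5/7]
best = max(
    (utility(5 - 7 * c2, c2), 5 - 7 * c2, c2)
    for c2 in [i * (5 / 7) / 1000 for i in range(1001)]
)
print(best)  # maximum utility on the budget line occurs at (c1, c2) = (5, 0)
```

The grid maximum is attained at c2 = 0 with utility log 6, matching the Kuhn–Tucker case analysis.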
Kuhn–Tucker minimization

- Consider a minimization problem with n variables that must be nonnegative and k inequality constraints:

  min f(x1, ..., xn)
  s.t. g1(x1, ..., xn) ≥ b1
       ...
       gk(x1, ..., xn) ≥ bk
       x1 ≥ 0, ..., xn ≥ 0

- The Kuhn–Tucker Lagrangian is

  L(x, λ1, ..., λk) = f(x) - ∑_{j=1}^{k} λj (gj(x) - bj)
Kuhn–Tucker minimization

- If x* solves the above minimization problem and the nondegenerate constraint qualification holds, then there exist nonnegative Lagrange multipliers λ1*, ..., λk* such that (x*, λ1*, ..., λk*) satisfy

  ∂L/∂xi ≥ 0,        i = 1, ..., n
  xi ∂L/∂xi = 0,     i = 1, ..., n
  ∂L/∂λj ≤ 0,        j = 1, ..., k
  λj ∂L/∂λj = 0,     j = 1, ..., k
Square systems of linear equations

- A square system of n linear equations is given by

  a11 x1 + ... + a1n xn = b1
  ...
  an1 x1 + ... + ann xn = bn

  or

  ( a11 ... a1n ) ( x1 )   ( b1 )
  ( ...     ... ) ( .. ) = ( .. )
  ( an1 ... ann ) ( xn )   ( bn )

  or

  Ax = b

- We want to know how many solutions there are (if any) and how to solve for them
Some intuition

- Consider a 2 × 2 linear system. Graphically, it is two straight lines
- Either the lines are parallel and never intersect, the lines are nonparallel and intersect once, or the lines coincide
- Thus there is either no solution, one solution, or an infinite number of solutions
- This is true of any linear system
Square linear systems in matrix form and unique solutions

- For the system of linear equations

  a11 x1 + ... + a1n xn = b1
  ...
  an1 x1 + ... + ann xn = bn,

  the coefficient matrix and augmented matrix are

  A = ( a11 ... a1n )          Â = ( a11 ... a1n | b1 )
      ( ...     ... )    and       ( ...     ... | .. )
      ( an1 ... ann )              ( an1 ... ann | bn )

- If A has maximal rank (rank n), then the system has a unique solution
Solving using reduced row echelon form

- The first nonzero entry in each row of a matrix in row echelon form is called a pivot
- A row echelon matrix in which each pivot is a one and in which each column containing a pivot contains no other nonzero entries is in reduced row echelon form
- Taking an augmented matrix and putting the coefficient matrix in reduced row echelon form provides a solution to a linear system (if one exists)
- Example: Solve the following linear system using elementary row operations:

  2x + 2y - z = 2
  x + y + z = -2
  2x - 4y + 3z = 0
Solving using reduced row echelon form

- The augmented matrix is

  ( 2   2  -1 |  2 )
  ( 1   1   1 | -2 )
  ( 2  -4   3 |  0 )

- The reduced row echelon form is

  ( 1  0  0 |  1 )
  ( 0  1  0 | -1 )
  ( 0  0  1 | -2 )

- Thus the solution is (x*, y*, z*) = (1, -1, -2)
Solving using Cramer's rule

- A second way of solving a linear system is to use Cramer's rule
- Let Ax = b be an n × n linear system and let A be nonsingular
- Then the unique solution is given by

  xi = det Bi / det A,

  where Bi is the matrix A with b replacing the ith column of A
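Cramer's rule is straightforward to implement; a sketch applied to a small 3 × 3 system:

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b by Cramer's rule: x_i = det(B_i)/det(A),
    where B_i is A with column i replaced by b."""
    d = np.linalg.det(A)
    if abs(d) < 1e-12:
        raise ValueError("matrix is singular")
    x = np.empty(len(b))
    for i in range(len(b)):
        Bi = A.copy()
        Bi[:, i] = b
        x[i] = np.linalg.det(Bi) / d
    return x

A = np.array([[2.0, 2.0, -1.0],
              [1.0, 1.0, 1.0],
              [2.0, -4.0, 3.0]])
b = np.array([2.0, -2.0, 0.0])
print(cramer_solve(A, b))  # agrees with np.linalg.solve(A, b)
```

Cramer's rule is mainly of theoretical interest (it gives closed-form comparative statics); Gaussian elimination is far cheaper for large systems.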
Practice: supply and demand with two goods

- Suppose supply and demand for two goods are given by constant-elasticity functions:

  Q1^d = K1 P1^{a11} P2^{a12} Y^{b1}
  Q2^d = K2 P1^{a21} P2^{a22} Y^{b2}
  Q1^s = M1 P1^{n1}
  Q2^s = M2 P2^{n2}

- Here Y is income. What is the interpretation of a_ij, b_i, and n_i?
- Assume that the law of demand holds, so a_ii < 0, and that the law of supply holds or supply is inelastic, so n_i ≥ 0
- Assume that both goods are normal goods, so b_i > 0
- Assume that a change in the price of good i has a larger absolute effect on the demand for good i than on the demand for good j ≠ i, so |a_ii| > |a_ji|
- Log-linearize the system. Show that a solution exists and is unique. Then solve for equilibrium prices using Cramer's rule

Practice: supply and demand with two goods

- In logs, the linearized system is

  ( a11 - n1    a12      ) ( p1 )   ( m1 - k1 - b1 y )
  ( a21         a22 - n2 ) ( p2 ) = ( m2 - k2 - b2 y )

- Given our assumptions, it is easy to see that

  | a11 - n1    a12      |
  | a21         a22 - n2 | = (a11 - n1)(a22 - n2) - a12 a21 > 0,

  so the solution exists and is unique
Practice: supply and demand with two goods

- Using Cramer's rule,

  p1 = | m1 - k1 - b1 y    a12      |
       | m2 - k2 - b2 y    a22 - n2 |  /  [(a11 - n1)(a22 - n2) - a12 a21]

     = [(a22 - n2)(m1 - k1 - b1 y) - a12 (m2 - k2 - b2 y)] / [(a11 - n1)(a22 - n2) - a12 a21]

  p2 = | a11 - n1    m1 - k1 - b1 y |
       | a21         m2 - k2 - b2 y |  /  [(a11 - n1)(a22 - n2) - a12 a21]

     = [(a11 - n1)(m2 - k2 - b2 y) - a21 (m1 - k1 - b1 y)] / [(a11 - n1)(a22 - n2) - a12 a21]
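The closed-form prices can be cross-checked against a direct linear solve; the parameter values below are hypothetical but satisfy the stated assumptions:

```python
import numpy as np

# Illustrative parameters: a_ii < 0, n_i >= 0, b_i > 0, |a_ii| > |a_ji|
a11, a12, a21, a22 = -2.0, 0.5, 0.3, -1.5
n1, n2 = 1.0, 0.5
b1, b2 = 1.0, 1.0
k1, k2, m1, m2, y = 0.0, 0.0, 0.0, 0.0, 1.0

D = (a11 - n1) * (a22 - n2) - a12 * a21
p1 = ((a22 - n2) * (m1 - k1 - b1 * y) - a12 * (m2 - k2 - b2 * y)) / D
p2 = ((a11 - n1) * (m2 - k2 - b2 * y) - a21 * (m1 - k1 - b1 * y)) / D

# direct solve of the log-linearized system
A = np.array([[a11 - n1, a12], [a21, a22 - n2]])
rhs = np.array([m1 - k1 - b1 * y, m2 - k2 - b2 * y])
print(D > 0, np.allclose(np.linalg.solve(A, rhs), [p1, p2]))
```

The determinant is positive, as the assumptions guarantee, and the Cramer's-rule formulas match the direct solution.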
Comparative statics

- Once we have solved an optimization problem, we often want to know how the solution would change if we changed the value of one of the parameters
- This is known as comparative statics
- The envelope theorems are useful here
Envelope theorem for unconstrained problems

- Let f(x; a) be a C^1 function of x ∈ R^n, where a is an exogenous parameter. Let x*(a) be a solution of

  max_x f(x; a)

  and suppose that x*(a) is C^1. Then

  d/da f(x*(a); a) = ∂f/∂a (x*(a); a)

- Notice the difference in notation here. The total derivative is

  d/da f(x*(a); a) = ∂f/∂a (x*(a); a) + ∑_{i=1}^{n} ∂f/∂xi (x*(a); a) ∂xi*/∂a (a),

  so this envelope theorem greatly simplifies the calculation
Example

- Let x*(b) be the solution to

  max_x f(x; b) = -bx^2 + cx + d,

  where b, c > 0
- Find the impact of b on f and x*

Example

- By the envelope theorem,

  df/db = ∂f/∂b = -(x*)^2 < 0

- The FOC is

  -2bx + c = 0,

  so

  x* = c/(2b)

- Thus

  ∂x*/∂b = -c/(2b^2) < 0
Envelope theorem with equality constraints

- Let f(x; a), h1(x; a), ..., hk(x; a) be C^1 functions of x ∈ R^n, where a is an exogenous parameter. Let x*(a) be a solution of

  max_x f(x; a)
  s.t. h1(x; a) = 0, ..., hk(x; a) = 0

- The Lagrangian is

  L(x, µ; a) = f(x; a) - ∑_{j=1}^{k} µj hj(x; a)

- Suppose the relevant constraint qualification holds, let µ(a) be the associated Lagrange multipliers, and suppose that x*(a) and µ(a) are C^1. Then

  d/da f(x*(a); a) = ∂L/∂a (x*(a), µ(a); a)
Example

- Consider the problem

  max_{c1, c2} U = c1^α c2^{1-α}
  s.t. p1 c1 + p2 c2 = I

- Find the effect of p1 on U and c1

Example

- The Lagrangian is

  L(c1, c2, µ) = c1^α c2^{1-α} - µ(p1 c1 + p2 c2 - I)

- By the envelope theorem,

  dU/dp1 = ∂L/∂p1 = -µ c1* < 0

- We can use the FOCs to solve to get c1* = αI/p1. Thus

  ∂c1*/∂p1 = -αI/p1^2 < 0
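The envelope result dU/dp1 = -µ c1* can be verified against a finite-difference derivative of the indirect utility function; a sketch with illustrative parameter values (and using the standard fact that for this problem µ equals the marginal utility of income, V/I):

```python
# Cobb-Douglas consumer: U = c1^alpha * c2^(1-alpha), budget p1*c1 + p2*c2 = I
alpha, p2, I = 0.3, 3.0, 10.0

def indirect_utility(p1):
    c1 = alpha * I / p1          # Cobb-Douglas demands
    c2 = (1 - alpha) * I / p2
    return c1**alpha * c2**(1 - alpha)

p1 = 2.0
c1 = alpha * I / p1
mu = indirect_utility(p1) / I    # marginal utility of income here
h = 1e-6
fd = (indirect_utility(p1 + h) - indirect_utility(p1 - h)) / (2 * h)
print(fd, -mu * c1)              # the two numbers coincide (envelope theorem)
```

The finite-difference slope of maximized utility matches -µ c1* to high precision.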
Example

- Suppose a plant wants to minimize the cost of producing quantity q of output. The plant's technology is given by a standard production function f(k, l). The plant takes as given the wage w and rental rate r. How does a change in the wage affect total cost and the use of capital?
- The plant's problem is

  min_{k, l} c(k, l; r, w, q) = rk + wl
  s.t. f(k, l) = q

- The Lagrangian is

  L(k, l, µ; r, w, q) = rk + wl - µ(f(k, l) - q)

- By the envelope theorem,

  dc/dw = ∂L/∂w = l* > 0
Example

- The FOCs are

  -(f(k, l) - q) = 0
  r - µ fk(k, l) = 0
  w - µ fl(k, l) = 0

- Denote the solution by k*(r, w, q), l*(r, w, q), µ*(r, w, q)
- Differentiate the FOCs with respect to w:

  -fk ∂k/∂w - fl ∂l/∂w = 0
  -fk ∂µ/∂w - µ fkk ∂k/∂w - µ fkl ∂l/∂w = 0
  1 - fl ∂µ/∂w - µ flk ∂k/∂w - µ fll ∂l/∂w = 0
Example

- In matrix notation, we have

  BH ( ∂µ/∂w )   (  0 )
     ( ∂k/∂w ) = (  0 ),
     ( ∂l/∂w )   ( -1 )

  where BH is the bordered Hessian:

  BH = (  0      -fk        -fl     )
       ( -fk     -µ fkk     -µ fkl  )
       ( -fl     -µ flk     -µ fll  )

- Using Cramer's rule,

  ∂k/∂w = |  0     0   -fl     |
          | -fk    0   -µ fkl  |
          | -fl   -1   -µ fll  |  /  |BH|  =  -fk fl / |BH| > 0,

  where |BH| < 0 for a minimum
Homogeneous functions

- For any scalar k, a function f(x1, ..., xn) is homogeneous of degree k if

  f(tx1, ..., txn) = t^k f(x1, ..., xn) for all x1, ..., xn and all t > 0

- Are the following functions homogeneous? If so, of what degree?

  f(x1, x2) = 3x1^5 x2 + 2x1^2 x2^4

  f(x1, x2) = x1^2 x2^2 / (x1^2 + x2^2) + 3

  f(x1, x2) = x1^{3/4} x2^{1/4} - 6x1 + 4
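Homogeneity is easy to spot-check numerically by testing f(tx) = t^k f(x) at a few points; a minimal sketch (sample points and helper name are my own):

```python
def is_homogeneous_of_degree(f, k, t=2.0, points=((1.0, 2.0), (3.0, 0.5))):
    """Spot-check f(t*x) == t^k * f(x) at a few sample points."""
    return all(
        abs(f(t * x1, t * x2) - t**k * f(x1, x2)) < 1e-9 * (1 + abs(f(x1, x2)))
        for x1, x2 in points
    )

f1 = lambda x1, x2: 3 * x1**5 * x2 + 2 * x1**2 * x2**4   # degree 6
f2 = lambda x1, x2: x1**2 * x2**2 / (x1**2 + x2**2) + 3  # the "+ 3" breaks homogeneity
print(is_homogeneous_of_degree(f1, 6), is_homogeneous_of_degree(f2, 2))
```

A few passing sample points do not prove homogeneity, of course, but a single failing point disproves it.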
Homogeneous functions in economics

- Homogeneous functions are widely used throughout economics
- If a production function f is homogeneous, the degree k determines the returns to scale
- Consider a standard consumer's problem:

  max U(x1, ..., xn)
  s.t. p1 x1 + ... + pn xn ≤ I
       x1 ≥ 0, ..., xn ≥ 0

- Denote the solution by x = D(p, I). The demand function is homogeneous of degree 0
Useful properties of homogeneous functions

If f(x) is homogeneous of degree k:

- the first-order partial derivatives of f are homogeneous of degree k - 1
- the tangent planes to the level sets of f have constant slope along each ray from the origin (this property is important for both utility and production functions)
Useful properties of homogeneous functions: utility

- Let U(x) be a utility function on R^n_+ that is homogeneous of degree k. Let x(I) be the solution to the problem of maximizing utility subject to a standard budget constraint with income I. Then, at x(I), the level curve of U is tangent to the budget line:

  Ui/Uj = pi/pj

- The MRS is the slope of the indifference curve (the level curve of U). By homogeneity, it is constant along rays from the origin
- Income expansion paths (bundles chosen for various levels of income) are rays from the origin
- The demand function is homogeneous of degree one in income: x(tI) = t x(I)
- The income elasticity of demand is 1:

  [dxi(I)/dI] [I/xi(I)] = [d(I xi(1))/dI] [I/(I xi(1))] = 1
Useful properties of homogeneous functions: production

- Let f(x) be a production function on R^n_+ that is homogeneous of degree k. Let x(q) be the solution to

  C(q) = min w1 x1 + ... + wn xn
  s.t. f(x1, ..., xn) ≥ q
       x1 ≥ 0, ..., xn ≥ 0

- The MRTS is the slope of the isoquant (the level curve of f). By homogeneity, it is constant along rays from the origin
- The cost function is homogeneous of degree 1/k in q. Show this
Useful properties of homogeneous functions: Euler's theorem

- Let f(x) be a function on R^n_+ that is homogeneous of degree k. Then, for all x,

  x1 ∂f/∂x1 (x) + ... + xn ∂f/∂xn (x) = k f(x)

- Prove that a competitive firm with a production function that is homogeneous of degree one earns zero profits in equilibrium
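Euler's theorem can be checked numerically for any concrete homogeneous function; a sketch using a degree-one Cobb-Douglas production function:

```python
def numerical_gradient(f, x, h=1e-6):
    """Central-difference gradient of f at point x (a list of floats)."""
    grad = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        grad.append((f(xp) - f(xm)) / (2 * h))
    return grad

# f(k, l) = k^0.3 * l^0.7 is homogeneous of degree 1
f = lambda v: v[0]**0.3 * v[1]**0.7
x = [2.0, 3.0]
g = numerical_gradient(f, x)
euler_sum = sum(xi * gi for xi, gi in zip(x, g))
print(euler_sum, f(x))   # Euler: x . grad f(x) = k * f(x) with k = 1
```

With k = 1 this is exactly the zero-profit accounting identity: paying each factor its marginal product exhausts output.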
Homothetic functions

- Homogeneity is a cardinal property: it is not preserved by monotonic transformations
- Example: x1^α x2^β is homogeneous, but α log x1 + β log x2 is not
- Ordinal properties are preserved by monotonic transformations
- Homotheticity is an ordinal property
- A function v : R^n_+ → R is homothetic if it is a monotone transformation of a homogeneous function
- α log x1 + β log x2 is homothetic because it is a monotone transformation of x1^α x2^β
- Utility functions typically are assumed to have ordinal properties
Main property of homothetic functions

- If u is a C^1 function on R^n_+ and is homothetic, then the slopes of the tangent planes to the level sets of u are constant along rays from the origin. That is,

  ui(tx)/uj(tx) = ui(x)/uj(x),  all i, all j, all x ∈ R^n_+, all t > 0

- Essentially, the optimization property we had with homogeneous functions still holds with homothetic functions
Concave functions

- Here we are mainly interested in defining concave functions of more than one variable using calculus
- Let f be a C^1 function on a convex subset U of R^n. Then f is concave on U iff, for all x, y in U,

  f(y) - f(x) ≤ Df(x)(y - x).

  That is, for all x, y in U,

  f(y) - f(x) ≤ f1(x)(y1 - x1) + ... + fn(x)(yn - xn)

- Using the Hessian: Let f be a C^2 function on an open convex subset U of R^n. Then f is a concave (convex) function on U iff the Hessian D^2 f(x) is negative (positive) semidefinite for all x ∈ U
- Exercise: Under what conditions is F(x1, x2) = x1^a x2^b concave?
Maximizing a concave function

- Concavity is a nice way of ensuring we have a global max
- Let f be a concave (convex) function on an open convex subset U of R^n. If x0 ∈ U is a critical point of f (that is, if Df(x0) = 0), then x0 is a global max (min) of f on U
Quasiconcavity

- Concavity is a cardinal property (a monotonic transformation of a concave function need not be concave)
- A useful ordinal property is quasiconcavity
- A function f defined on a convex subset U of R^n is quasiconcave if, for every real number a,

  C_a^+ ≡ {x ∈ U : f(x) ≥ a}

  is a convex set
- A function f defined on a convex subset U of R^n is quasiconvex if, for every real number a,

  C_a^- ≡ {x ∈ U : f(x) ≤ a}

  is a convex set
- This is useful, for example, for having indifference curves with diminishing MRS
- Every concave function is quasiconcave, every convex function is quasiconvex, and every monotonic transformation of a concave function is quasiconcave
Quasiconcavity

The following are equivalent:

- f is a quasiconcave function on U
- for all x, y ∈ U and all t ∈ [0, 1], f(x) ≥ f(y) implies f(tx + (1 - t)y) ≥ f(y)
- for all x, y ∈ U and all t ∈ [0, 1], f(tx + (1 - t)y) ≥ min{f(x), f(y)}
Systems of linear equations

- An m × n system of linear equations is given by

  a11 x1 + ... + a1n xn = b1
  ...
  am1 x1 + ... + amn xn = bm

- We want to know how many solutions there are (if any)
Number of solutions to a linear system: key results

Consider the linear system Ax = b, where A is m × n

- If m < n, then:
  - Ax = 0 has infinitely many solutions
  - for any given b, Ax = b has zero or infinitely many solutions
  - if rank A = m, Ax = b has infinitely many solutions for every b
- If m > n, then:
  - Ax = 0 has one or infinitely many solutions
  - for any given b, Ax = b has zero, one, or infinitely many solutions
  - if rank A = n, Ax = b has zero or one solution for every b
- If m = n, then:
  - Ax = 0 has one or infinitely many solutions
  - for any given b, Ax = b has zero, one, or infinitely many solutions
  - if rank A = m = n, Ax = b has exactly one solution for every b
Practice

- For each of the following coefficient matrices, determine the number of solutions for the case of b = 0 and the case of any given b:

  ( 1  4 )    ( 1  4  3 )    ( 2  1 )
  ( 2  1 ) ,  ( 2  1  0 ) ,  ( 1  4 ) ,
                             ( 0  3 )

  ( 1  4  3 )    ( 1  4  3 )
  ( 2  1  0 ) ,  ( 2  1  0 )
  ( 1  1  1 )    ( 0  7  6 )
Matrix invertibility

- A square matrix can have at most one inverse. If the inverse of a square matrix A exists, we denote it by A^{-1}
- A square matrix is invertible iff it is nonsingular (its determinant is nonzero)
- Exercise: Using the augmented coefficient matrix, find the inverse of

  A = ( a  b )
      ( c  d )
Matrix invertibility

- The inverse of a 2 × 2 matrix is

  A^{-1} = 1/(ad - bc) (  d  -b )
                       ( -c   a )

- Notice that ad - bc is the determinant of A and that A is nonsingular (and therefore invertible) iff ad - bc ≠ 0
Determinants and inverses

- The adjoint of an n × n matrix A is the n × n matrix whose (i, j)th entry is C_ji, the (j, i)th cofactor of A, and is denoted by adj A
- The inverse of A is

  A^{-1} = adj A / det A

- Example: recall that, for

  A = ( a  b )
      ( c  d ),

  A^{-1} = 1/(ad - bc) (  d  -b )
                       ( -c   a )
Nonsquare matrix inverses

- Let A be a k × n matrix
- The n × k matrix B is a right inverse for A if AB = I
- The n × k matrix C is a left inverse for A if CA = I
Linear production systems

- Linear production systems can be represented by input-output matrices
- Let ci be (exogenous) consumer demand for good i = 1, ..., n
- Let xi be (endogenous) gross output of good i
- The technology is linear: aij is the quantity of good i needed to produce one unit of good j
- Thus the market-clearing condition is

  x = Ax + c,

  where A is the technology matrix
- We solve to obtain

  x = (I - A)^{-1} c
Linear production systems

- Under what circumstances is a solution guaranteed to exist?
- Theorem. Let A be an n × n matrix with only nonnegative entries such that the entries in each column sum to less than one. Then (I - A)^{-1} exists and contains only nonnegative entries
- Is this a reasonable restriction? Typically the technology matrix is constructed using data in monetary units, say dollars. The sum of the entries in each column would then be the dollar value of the intermediate inputs needed to make $1 of output
- A simple aggregate version of the input-output model might have three sectors: primaries, manufactures, and services
Linear production systems: computational exercise

- Let the technology matrix be given by

  A = ( 0.7  0.2  0.2 )
      ( 0.1  0.6  0.1 )
      ( 0.1  0.1  0.6 )

- Compute gross output for the following demand vectors:

  c = ( 1 )   ( 2 )   ( 2 )
      ( 1 ) , ( 1 ) , ( 1 )
      ( 1 )   ( 1 )   ( 2 )
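One way to carry out this exercise is to solve (I - A)x = c directly rather than forming the inverse; a minimal sketch:

```python
import numpy as np

A = np.array([[0.7, 0.2, 0.2],
              [0.1, 0.6, 0.1],
              [0.1, 0.1, 0.6]])   # technology matrix from the exercise

demands = [np.array([1.0, 1.0, 1.0]),
           np.array([2.0, 1.0, 1.0]),
           np.array([2.0, 1.0, 2.0])]

I = np.eye(3)
for c in demands:
    x = np.linalg.solve(I - A, c)   # gross output: x = (I - A)^{-1} c
    print(c, "->", x)
```

Note that each column of A sums to 0.9 < 1, so the theorem above guarantees a nonnegative solution; for c = (1, 1, 1) the gross output works out to x = (14, 8, 8).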
