
The Kuhn-Tucker and Envelope Theorems

Peter Ireland
EC720.01 - Math for Economists
Boston College, Department of Economics
Fall 2012
The Kuhn-Tucker and envelope theorems can be used to characterize the solution to a wide range of constrained optimization problems: static or dynamic, and under perfect foresight or featuring randomness and uncertainty. In addition, these same two results provide foundations for the work on the maximum principle and dynamic programming that we will do later on. For both of these reasons, the Kuhn-Tucker and envelope theorems provide the starting point for our analysis. Let's consider each in turn, first in fairly general or abstract settings and then applied to some economic examples.

1 The Kuhn-Tucker Theorem


References: Dixit, Chapters 2 and 3. Simon-Blume, Chapters 18 and 19. Acemoglu, Appendix A.

Consider a simple constrained optimization problem: x ∈ R is the choice variable; F : R → R is the objective function, continuously differentiable; the constraint is c ≥ G(x), with c ∈ R and G : R → R also continuously differentiable. The problem can be stated as:

max_x F(x) subject to c ≥ G(x)

Copyright © 2012 by Peter Ireland. Redistribution is permitted for educational and research purposes, so long as no changes are made. All copies must be provided free of charge and must include this copyright notice.

This problem is simple because it is static and contains no random or stochastic elements that would force decisions to be made under uncertainty. It is also simple because it has a single choice variable and a single constraint. All of these simplifications will make our statement and proof of the Kuhn-Tucker theorem as clean and intuitive as possible. But the results can be generalized along all of these dimensions and, throughout the semester, we will work through examples that do so.

Probably the easiest way to solve this problem is via the method of Lagrange multipliers. The mathematical foundations that allow for the application of this method are given to us by Lagrange's Theorem or, in its most general form, the Kuhn-Tucker Theorem. To prove this theorem, begin by defining the Lagrangian

L(x, λ) = F(x) + λ[c − G(x)]

for any x ∈ R and λ ∈ R.

Theorem (Kuhn-Tucker) Suppose that x* maximizes F(x) subject to c ≥ G(x), where F and G are both continuously differentiable, and suppose that G′(x*) ≠ 0. Then there exists a value λ* such that x* and λ* satisfy the following four conditions:

L_1(x*, λ*) = F′(x*) − λ*G′(x*) = 0,    (1)

L_2(x*, λ*) = c − G(x*) ≥ 0,    (2)

λ* ≥ 0,    (3)

and

λ*[c − G(x*)] = 0.    (4)

Proof Consider two possible cases, depending on whether or not the constraint is binding at x*.

Case 1: Nonbinding Constraint. If c > G(x*), then let λ* = 0. Clearly, (2)-(4) are satisfied, so it only remains to show that (1) must hold. With λ* = 0, (1) holds if and only if

F′(x*) = 0.    (5)

We can show that (5) must hold using a proof by contradiction. Suppose that instead of (5), it turns out that F′(x*) < 0. Then, by the continuity of F and G, there must exist an ε > 0 such that

F(x* − ε) > F(x*) and c > G(x* − ε).

But this result contradicts the assumption that x* maximizes F(x) subject to c ≥ G(x). Similarly, if it turns out that F′(x*) > 0, then by the continuity of F and G there must exist an ε > 0 such that

F(x* + ε) > F(x*) and c > G(x* + ε).

But, again, this result contradicts the assumption that x* maximizes F(x) subject to c ≥ G(x). This establishes that (5) must hold, completing the proof for case 1.

Case 2: Binding Constraint. If c = G(x*), then let λ* = F′(x*)/G′(x*). This is possible, given the assumption that G′(x*) ≠ 0. Clearly, (1), (2), and (4) are satisfied, so it only remains to show that (3) must hold. With λ* = F′(x*)/G′(x*), (3) holds if and only if

F′(x*)/G′(x*) ≥ 0.    (6)

We can show that (6) must hold using a proof by contradiction. Suppose that instead of (6), it turns out that F′(x*)/G′(x*) < 0. One way that this can happen is if F′(x*) > 0 and G′(x*) < 0. But if these conditions hold, then the continuity of F and G implies the existence of an ε > 0 such that

F(x* + ε) > F(x*) and c = G(x*) > G(x* + ε),

which contradicts the assumption that x* maximizes F(x) subject to c ≥ G(x). And if, instead, F′(x*)/G′(x*) < 0 because F′(x*) < 0 and G′(x*) > 0, then the continuity of F and G implies the existence of an ε > 0 such that

F(x* − ε) > F(x*) and c = G(x*) > G(x* − ε),

which again contradicts the assumption that x* maximizes F(x) subject to c ≥ G(x). This establishes that (6) must hold, completing the proof for case 2.

Notes:

a) The theorem can be extended to handle cases with more than one choice variable and more than one constraint: see Dixit, Simon-Blume, Acemoglu, or section 4.1 of the notes below.

b) Equations (1)-(4) are necessary conditions: if x* is a solution to the optimization problem, then there exists a λ* such that (1)-(4) must hold. But (1)-(4) are not sufficient conditions: if x* and λ* satisfy (1)-(4), it does not follow automatically that x* is a solution to the optimization problem.

Despite point (b) listed above, the Kuhn-Tucker theorem is extremely useful in practice. Suppose that we are looking for the solution x* to the constrained optimization problem

max_x F(x) subject to c ≥ G(x).

The theorem tells us that if we form the Lagrangian

L(x, λ) = F(x) + λ[c − G(x)],

then x* and the associated λ* must satisfy the first-order condition (FOC) obtained by differentiating L by x and setting the result equal to zero:

L_1(x*, λ*) = F′(x*) − λ*G′(x*) = 0.    (1)

In addition, we know that x* must satisfy the constraint

c ≥ G(x*).    (2)

We know that the Lagrange multiplier must be nonnegative:

λ* ≥ 0.    (3)

And finally, we know that the complementary slackness condition

λ*[c − G(x*)] = 0    (4)

must hold: if λ* > 0, then the constraint must bind; if the constraint does not bind, then λ* = 0. In searching for the value of x* that solves the constrained optimization problem, we only need to consider values of x that satisfy (1)-(4).

Two pieces of terminology:

a) The extra assumption that G′(x*) ≠ 0 is needed to guarantee the existence of a multiplier λ* satisfying (1)-(4). This extra assumption is called the constraint qualification, and it almost always holds in practice.

b) Note that (1) is a FOC for x, while (2) is like a FOC for λ. In most applications, the second-order conditions (SOC) will imply that x* maximizes L(x, λ*), while λ* minimizes L(x*, λ). For this reason, (x*, λ*) is typically a saddle-point of L(x, λ). Thus, in solving the problem in this way, we are using the Lagrangian to turn a constrained optimization problem into an unconstrained optimization problem, where we seek to maximize L(x, λ) rather than simply F(x).
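Before moving on, it may help to see conditions (1)-(4) checked numerically. The sketch below is not part of the original notes: the functional forms F(x) = ln(1 + x), G(x) = x, and c = 2 are illustrative assumptions, and the multiplier is backed out from condition (1), just as in case 2 of the proof.

```python
# A minimal sketch of the Kuhn-Tucker conditions, under the assumed forms
# F(x) = ln(1 + x) (strictly increasing) and G(x) = x, with c = 2.
import numpy as np
from scipy.optimize import minimize

c = 2.0
F = lambda x: np.log(1.0 + x)
G = lambda x: x
dF = lambda x: 1.0 / (1.0 + x)           # F'(x)
dG = lambda x: 1.0                       # G'(x)

# Maximize F(x) subject to c - G(x) >= 0 (scipy minimizes, so negate F).
res = minimize(lambda v: -F(v[0]), x0=[0.5], bounds=[(0.0, None)],
               constraints=[{"type": "ineq", "fun": lambda v: c - G(v[0])}])
x_star = res.x[0]                        # the constraint binds: x* = c = 2

lam = dF(x_star) / dG(x_star)            # lambda* = F'(x*)/G'(x*), from (1)

print(f"x* = {x_star:.4f}")
print(f"(2) c - G(x*) = {c - G(x_star):.2e} >= 0")
print(f"(3) lambda* = {lam:.4f} >= 0")
print(f"(4) lambda*[c - G(x*)] = {lam * (c - G(x_star)):.2e}")
```

Since F is increasing here, the constraint binds and all four conditions hold, with λ* = 1/3.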

One final note: our general constraint, c ≥ G(x), nests as a special case the nonnegativity constraint x ≥ 0, obtained by setting c = 0 and G(x) = −x. So nonnegativity constraints can be introduced into the Lagrangian in the same way as all other constraints. If we consider, for example, the extended problem

max_x F(x) subject to c ≥ G(x) and x ≥ 0,

then we can introduce a second multiplier μ, form the Lagrangian as

L(x, λ, μ) = F(x) + λ[c − G(x)] + μx,

and write the first-order condition for the optimal x* as

L_1(x*, λ*, μ*) = F′(x*) − λ*G′(x*) + μ* = 0.    (1′)

In addition, analogs to our earlier conditions (2)-(4) must also hold for the second constraint: x* ≥ 0, μ* ≥ 0, and μ*x* = 0. Kuhn and Tucker's original statement of the theorem, however, does not incorporate nonnegativity constraints into the Lagrangian. Instead, even with the additional nonnegativity constraint x ≥ 0, they continue to define the Lagrangian as

L(x, λ) = F(x) + λ[c − G(x)].

In this case, the first-order condition for x* must be modified to read

L_1(x*, λ*) = F′(x*) − λ*G′(x*) ≤ 0, with equality if x* > 0.    (1″)

Of course, in (1′), μ* ≥ 0 in general and μ* = 0 if x* > 0. So a close inspection reveals that these two approaches to handling nonnegativity constraints lead in the end to the same results.
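This equivalence is easy to confirm numerically. In the hypothetical sketch below, the objective F(x) = −(x + 1)² is an assumed form, chosen so that the unconstrained maximizer x = −1 violates x ≥ 0 and the nonnegativity constraint binds.

```python
# Comparing the two treatments of the nonnegativity constraint x >= 0,
# under the assumed objective F(x) = -(x + 1)^2.
from scipy.optimize import minimize

F = lambda x: -(x + 1.0) ** 2
dF = lambda x: -2.0 * (x + 1.0)          # F'(x)

# Solve max F(x) subject to x >= 0.
res = minimize(lambda v: -F(v[0]), x0=[1.0], bounds=[(0.0, None)])
x_star = res.x[0]                        # x* = 0: the constraint binds

# Approach 1: a multiplier mu on x >= 0 inside the Lagrangian.
# With no other constraint, the FOC (1') reads F'(x*) + mu* = 0.
mu = -dF(x_star)
print(f"x* = {x_star:.4f}, mu* = {mu:.4f} >= 0, mu*x* = {mu * x_star:.2e}")

# Approach 2 (Kuhn and Tucker's original statement, condition (1'')):
# F'(x*) <= 0, with equality if x* > 0.
print(f"F'(x*) = {dF(x_star):.4f} <= 0 and x* = 0, so (1'') holds")
```

Both approaches deliver the same solution, x* = 0, with μ* = −F′(0) = 2.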

2 The Envelope Theorem


References: Dixit, Chapter 5. Simon-Blume, Chapter 19. Acemoglu, Appendix A.

In our discussion of the Kuhn-Tucker theorem, we considered an optimization problem of the form

max_x F(x) subject to c ≥ G(x).

Now, let's generalize the problem by allowing the functions F and G to depend on a parameter θ ∈ R. The problem can now be stated as

max_x F(x, θ) subject to c ≥ G(x, θ).

For this problem, define the maximum value function V : R → R as

V(θ) = max_x F(x, θ) subject to c ≥ G(x, θ).

Note that evaluating V requires a two-step procedure: first, given θ, find the value x* that solves the constrained optimization problem; second, substitute this value of x*, together with the given value of θ, into the objective function to obtain

V(θ) = F(x*, θ).

Now suppose that we want to investigate the properties of this function V. Suppose, in particular, that we want to take the derivative of V with respect to its argument θ. As the first step in evaluating V′(θ), consider solving the constrained optimization problem for any given value of θ by setting up the Lagrangian

L(x, λ) = F(x, θ) + λ[c − G(x, θ)].

We know from the Kuhn-Tucker theorem that the solution x* to the optimization problem and the associated value λ* of the multiplier must satisfy the complementary slackness condition

λ*[c − G(x*, θ)] = 0.

Use this last result to rewrite the expression for V as

V(θ) = F(x*, θ) = F(x*, θ) + λ*[c − G(x*, θ)].

So suppose that we tried to calculate V′(θ) simply by differentiating both sides of this equation with respect to θ:

V′(θ) = F_2(x*, θ) − λ*G_2(x*, θ).

But, in principle, this formula may not be correct. The reason is that x* and λ* will themselves depend on the parameter θ, and we must take this dependence into account when differentiating V with respect to θ. However, the envelope theorem tells us that our formula for V′(θ) is, in fact, correct. That is, the envelope theorem tells us that we can ignore the dependence of x* and λ* on θ in calculating V′(θ). To see why, for any θ, let x*(θ) denote the solution to the problem max_x F(x, θ) subject to c ≥ G(x, θ), and let λ*(θ) be the associated Lagrange multiplier.

Theorem (Envelope) Let F and G be continuously differentiable functions of x and θ. For any given θ, let x*(θ) maximize F(x, θ) subject to c ≥ G(x, θ), and let λ*(θ) be the associated value of the Lagrange multiplier. Suppose, further, that x*(θ) and λ*(θ) are also continuously differentiable functions, and that the constraint qualification G_1[x*(θ), θ] ≠ 0 holds for all values of θ. Then the maximum value function defined by

V(θ) = max_x F(x, θ) subject to c ≥ G(x, θ)

satisfies

V′(θ) = F_2[x*(θ), θ] − λ*(θ)G_2[x*(θ), θ].    (7)

Proof The Kuhn-Tucker theorem tells us that for any given value of θ, x*(θ) and λ*(θ) must satisfy

L_1[x*(θ), λ*(θ)] = F_1[x*(θ), θ] − λ*(θ)G_1[x*(θ), θ] = 0    (1)

and

λ*(θ){c − G[x*(θ), θ]} = 0.    (4)

In light of (4),

V(θ) = F[x*(θ), θ] = F[x*(θ), θ] + λ*(θ){c − G[x*(θ), θ]}.

Differentiating both sides of this expression with respect to θ yields

V′(θ) = F_1[x*(θ), θ]x*′(θ) + F_2[x*(θ), θ] + λ*′(θ){c − G[x*(θ), θ]} − λ*(θ)G_1[x*(θ), θ]x*′(θ) − λ*(θ)G_2[x*(θ), θ],

which shows that, in principle, we must take the dependence of x* and λ* on θ into account when calculating V′(θ). Note, however, that

V′(θ) = {F_1[x*(θ), θ] − λ*(θ)G_1[x*(θ), θ]}x*′(θ) + F_2[x*(θ), θ] + λ*′(θ){c − G[x*(θ), θ]} − λ*(θ)G_2[x*(θ), θ],

which by (1) reduces to

V′(θ) = F_2[x*(θ), θ] + λ*′(θ){c − G[x*(θ), θ]} − λ*(θ)G_2[x*(θ), θ].

Thus, it only remains to show that

λ*′(θ){c − G[x*(θ), θ]} = 0.    (8)

Clearly, (8) holds for any θ such that the constraint is binding.

For θ such that the constraint is not binding, (4) implies that λ*(θ) must equal zero. Furthermore, by the continuity of G and x*, if the constraint does not bind at θ, there exists an ε > 0 such that the constraint does not bind for all θ + δ with ε > |δ|. Hence, (4) also implies that λ*(θ + δ) = 0 for all ε > |δ|. Using the definition of the derivative,

λ*′(θ) = lim_{δ→0} [λ*(θ + δ) − λ*(θ)]/δ = lim_{δ→0} 0/δ = 0,

it once again becomes apparent that (8) must hold. Thus,

V′(θ) = F_2[x*(θ), θ] − λ*(θ)G_2[x*(θ), θ],

as claimed in the theorem.

Once again, this theorem is useful because it tells us that we can ignore the dependence of x* and λ* on θ in calculating V′(θ). And once again, the theorem can be extended to apply in more general settings: see Dixit, Simon-Blume, Acemoglu, or section 4.2 of the notes below.
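A quick numerical check of formula (7) may also be convincing. The forms below are hypothetical assumptions, chosen so that both F and G depend on θ: F(x, θ) = θ ln(x) and G(x, θ) = x/θ, with c = 2. The finite-difference derivative of V should match the envelope formula.

```python
# Checking the envelope formula (7) under the assumed forms
# F(x, t) = t*ln(x) and G(x, t) = x/t, with c = 2 (t stands in for theta).
import numpy as np
from scipy.optimize import minimize

c = 2.0

def solve(t):
    """Return (x*, lambda*) for max t*ln(x) subject to c - x/t >= 0."""
    res = minimize(lambda v: -t * np.log(v[0]), x0=[1.0],
                   bounds=[(1e-6, None)],
                   constraints=[{"type": "ineq",
                                 "fun": lambda v: c - v[0] / t}])
    x = res.x[0]
    lam = (t / x) / (1.0 / t)            # lambda* = F_1/G_1, from (1)
    return x, lam

def V(t):
    x, _ = solve(t)
    return t * np.log(x)

t, h = 1.5, 1e-5
x_star, lam = solve(t)

fd = (V(t + h) - V(t - h)) / (2 * h)     # finite-difference V'(theta)
F2 = np.log(x_star)                      # dF/dtheta at (x*, theta)
G2 = -x_star / t ** 2                    # dG/dtheta at (x*, theta)

print(f"finite difference: {fd:.5f}")
print(f"formula (7)      : {F2 - lam * G2:.5f}")  # both approx ln(c*t) + 1
```

Note that dropping the multiplier term would give only F_2 = ln(cθ), missing the contribution −λ*(θ)G_2 that comes from the constraint's own dependence on θ.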

But what is the intuition for why the envelope theorem holds? To obtain some intuition, begin by considering the simpler, unconstrained optimization problem

max_x F(x, θ),

where x is the choice variable and θ is the parameter. Associated with this unconstrained problem, define the maximum value function in the same way as before:

V(θ) = max_x F(x, θ).

To evaluate V for any given value of θ, use the same two-step procedure as before. First, find the value x*(θ) that solves the unconstrained maximization problem for that value of θ. Second, substitute that value of x* back into the objective function to obtain

V(θ) = F[x*(θ), θ].

Now differentiate both sides of this expression through by θ, carefully taking the dependence of x* on θ into account:

V′(θ) = F_1[x*(θ), θ]x*′(θ) + F_2[x*(θ), θ].

But if x*(θ) is the value of x that maximizes F given θ, we know that x*(θ) must be a critical point of F:

F_1[x*(θ), θ] = 0.

Hence, for the unconstrained problem, the envelope theorem implies that

V′(θ) = F_2[x*(θ), θ],

so that, again, we can ignore the dependence of x* on θ in differentiating the maximum value function. And this result holds not because x* fails to depend on θ: to the contrary, in fact, x* will typically depend on θ through the function x*(θ). Instead, the result holds because, since x is chosen optimally, x*(θ) is a critical point of F given θ.

Now return to the constrained optimization problem

max_x F(x, θ) subject to c ≥ G(x, θ)

and define the maximum value function as before:

V(θ) = max_x F(x, θ) subject to c ≥ G(x, θ).

The envelope theorem for this constrained problem tells us that we can also ignore the dependence of x* on θ when differentiating V with respect to θ, but only if we start by adding the complementary slackness condition to the maximized objective function to first obtain

V(θ) = F[x*(θ), θ] + λ*(θ){c − G[x*(θ), θ]}.

In taking this first step, we are actually evaluating the entire Lagrangian at the optimum, instead of just the objective function. We need to take this first step because, for the constrained problem, the Kuhn-Tucker condition (1) tells us that x*(θ) is a critical point, not of the objective function by itself, but of the entire Lagrangian formed by adding the product of the multiplier and the constraint to the objective function.

And what gives the envelope theorem its name? The envelope theorem refers to a geometrical presentation of the same result that we've just worked through. To see where that geometrical interpretation comes from, consider again the simpler, unconstrained optimization problem

max_x F(x, θ),

where x is the choice variable and θ is a parameter. Following along with our previous notation, let x*(θ) denote the solution to this problem for any given value of θ, so that the function x*(θ) tells us how the optimal choice of x depends on the parameter θ. Also, continue to define the maximum value function V in the same way as before:

V(θ) = max_x F(x, θ).

Now let θ_1 denote a particular value of θ, and let x_1 denote the optimal value of x associated with this particular value θ_1. That is, let x_1 = x*(θ_1). After substituting this value of x_1 into the function F, we can think about how F(x_1, θ) varies as θ varies; that is, we can think about F(x_1, θ) as a function of θ, holding x_1 fixed. In the same way, let θ_2 denote another particular value of θ, with θ_2 > θ_1 let's say. And following the same steps as above, let x_2 denote the optimal value of x associated with this particular value θ_2, so that x_2 = x*(θ_2). Once again, we can hold x_2 fixed and consider F(x_2, θ) as a function of θ.

The geometrical presentation of the envelope theorem can be derived by thinking about the properties of these three functions of θ: V(θ), F(x_1, θ), and F(x_2, θ). One thing that we know about these three functions is that for θ = θ_1:

V(θ_1) = F(x_1, θ_1) > F(x_2, θ_1),

where the first equality and the second inequality both follow from the fact that, by definition, x_1 maximizes F(x, θ_1) by choice of x. Another thing that we know about these three functions is that for θ = θ_2:

V(θ_2) = F(x_2, θ_2) > F(x_1, θ_2),

because again, by definition, x_2 maximizes F(x, θ_2) by choice of x.

On a graph, these relationships imply that: at θ_1, V(θ) coincides with F(x_1, θ), which lies above F(x_2, θ); at θ_2, V(θ) coincides with F(x_2, θ), which lies above F(x_1, θ). And we could find more and more values of V by repeating this procedure for more and more specific values of θ_i, i = 1, 2, 3, .... In other words: V(θ) traces out the upper envelope of the collection of functions F(x_i, θ), formed by holding x_i = x*(θ_i) fixed and varying θ. Moreover, V(θ) is tangent to each individual function F(x_i, θ) at the value θ_i of θ for which x_i is optimal, or equivalently:

V′(θ) = F_2[x*(θ), θ],

which is the same analytical result that we derived earlier for the unconstrained optimization problem.

[Figure: The Envelope Theorem. V(θ) is the upper envelope of the curves F(x_1, θ) and F(x_2, θ), tangent to each at the value of θ for which the corresponding x_i is optimal.]
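The upper-envelope and tangency properties are easy to verify numerically. The sketch below assumes the hypothetical objective F(x, θ) = 2xθ − x², for which x*(θ) = θ and V(θ) = θ²; each fixed-x curve lies weakly below V and touches it exactly at θ_i = x_i.

```python
# The value function V(t) = t**2 as the upper envelope of the curves
# F(x_i, t) = 2*x_i*t - x_i**2, each held at a fixed x_i = x*(t_i) = t_i.
import numpy as np

F = lambda x, t: 2 * x * t - x ** 2
V = lambda t: t ** 2                     # V(t) = F(x*(t), t) with x*(t) = t

thetas = np.linspace(0.0, 2.0, 9)
for x_i in (0.5, 1.0, 1.5):
    gaps = V(thetas) - F(x_i, thetas)    # = (t - x_i)**2 >= 0
    assert np.all(gaps >= -1e-12)        # V lies weakly above every curve
    print(f"x_i = {x_i}: smallest gap {gaps.min():.2e} "
          f"at theta = {thetas[gaps.argmin()]:.2f}")  # tangency at t = x_i
```

Each curve touches V only at θ = x_i, where the slopes 2x_i and V′(θ_i) = 2θ_i coincide, exactly the tangency the theorem describes.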

To generalize these arguments so that they apply to the constrained optimization problem

max_x F(x, θ) subject to c ≥ G(x, θ),

simply use the fact that in most cases (where the appropriate second-order conditions hold) the value x*(θ) that solves the constrained optimization problem for any given value of θ also maximizes the Lagrangian function

L(x, λ, θ) = F(x, θ) + λ[c − G(x, θ)],

so that

V(θ) = max_x F(x, θ) subject to c ≥ G(x, θ)
     = max_x L(x, λ, θ).

Now just replace the function F with the function L in working through the arguments from above to conclude that

V′(θ) = L_3[x*(θ), λ*(θ), θ] = F_2[x*(θ), θ] − λ*(θ)G_2[x*(θ), θ],

which is again the same result that we derived before for the constrained optimization problem.

3 Two Examples

3.1 Utility Maximization

A consumer has a utility function defined over consumption of two goods: U(c_1, c_2). Prices are p_1 and p_2, and income is I, so the budget constraint is I ≥ p_1c_1 + p_2c_2 = G(c_1, c_2). The consumer's problem is:

max_{c_1,c_2} U(c_1, c_2) subject to I ≥ p_1c_1 + p_2c_2.

The Kuhn-Tucker theorem tells us that if we set up the Lagrangian

L(c_1, c_2, λ) = U(c_1, c_2) + λ(I − p_1c_1 − p_2c_2),

then the optimal consumptions c_1* and c_2* and the associated multiplier λ* must satisfy the FOC

L_1(c_1*, c_2*, λ*) = U_1(c_1*, c_2*) − λ*p_1 = 0

and

L_2(c_1*, c_2*, λ*) = U_2(c_1*, c_2*) − λ*p_2 = 0.

Move the terms with minus signs to the other side, and divide the first of these FOC by the second to obtain

U_1(c_1*, c_2*)/U_2(c_1*, c_2*) = p_1/p_2,

which is just the familiar condition that says that the optimizing consumer should set the slope of his or her indifference curve, the marginal rate of substitution, equal to the slope of his or her budget constraint, the ratio of prices.

Now consider I as one of the model's parameters, and let the functions c_1*(I), c_2*(I), and λ*(I) describe how the optimal choices c_1* and c_2* and the associated value λ* of the multiplier depend on I.

In addition, define the maximum value function as

V(I) = max_{c_1,c_2} U(c_1, c_2) subject to I ≥ p_1c_1 + p_2c_2.

The Kuhn-Tucker theorem tells us that

λ*(I)[I − p_1c_1*(I) − p_2c_2*(I)] = 0

and hence

V(I) = U[c_1*(I), c_2*(I)] = U[c_1*(I), c_2*(I)] + λ*(I)[I − p_1c_1*(I) − p_2c_2*(I)].

The envelope theorem tells us that we can ignore the dependence of c_1* and c_2* on I in calculating

V′(I) = λ*(I),

which gives us an interpretation of the multiplier λ* as the marginal utility of income.
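As a concrete check, the sketch below assumes Cobb-Douglas utility U(c_1, c_2) = a ln(c_1) + (1 − a) ln(c_2) (an illustrative choice, not part of the notes) and verifies both the MRS condition and the interpretation of λ* as the marginal utility of income.

```python
# Utility maximization under assumed Cobb-Douglas utility
# U(c1, c2) = a*ln(c1) + (1 - a)*ln(c2), budget I >= p1*c1 + p2*c2.
import numpy as np
from scipy.optimize import minimize

a, p1, p2 = 0.3, 2.0, 5.0

def solve(I):
    """Return (c1*, c2*, lambda*)."""
    U = lambda c: a * np.log(c[0]) + (1 - a) * np.log(c[1])
    res = minimize(lambda c: -U(c), x0=[1.0, 1.0],
                   bounds=[(1e-6, None)] * 2,
                   constraints=[{"type": "ineq",
                                 "fun": lambda c: I - p1 * c[0] - p2 * c[1]}])
    c1, c2 = res.x
    lam = (a / c1) / p1                  # lambda* = U_1(c*)/p1, from the FOC
    return c1, c2, lam

def V(I):
    c1, c2, _ = solve(I)
    return a * np.log(c1) + (1 - a) * np.log(c2)

I, h = 10.0, 1e-4
c1, c2, lam = solve(I)

mrs = (a / c1) / ((1 - a) / c2)          # U_1/U_2 at the optimum
print(f"MRS = {mrs:.4f} vs p1/p2 = {p1 / p2:.4f}")
print(f"V'(I) by finite difference = {(V(I + h) - V(I - h)) / (2 * h):.4f}")
print(f"lambda*                    = {lam:.4f}")   # both approx 1/I = 0.1
```

With log utility, λ*(I) = 1/I, so doubling income halves the marginal utility of income, and the finite-difference derivative of V reproduces this.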

3.2 Cost Minimization

The Kuhn-Tucker and envelope conditions can also be used to study constrained minimization problems. Consider a firm that produces output y using capital k and labor l, according to the technology described by f(k, l) ≥ y. Let r denote the rental rate for capital and w the wage rate. Suppose that the firm takes its output y as given, and chooses inputs k and l to minimize costs. Then the firm solves

min_{k,l} rk + wl subject to f(k, l) ≥ y.

If we set up the Lagrangian as

L(k, l, λ) = rk + wl − λ[f(k, l) − y],

where the term involving the multiplier is subtracted rather than added in the case of a minimization problem, the Kuhn-Tucker conditions (1)-(4) continue to apply, exactly as before. Thus, according to the Kuhn-Tucker theorem, the optimal choices k* and l* and the associated multiplier λ* must satisfy the FOC

L_1(k*, l*, λ*) = r − λ*f_1(k*, l*) = 0    (9)

and

L_2(k*, l*, λ*) = w − λ*f_2(k*, l*) = 0.    (10)

Move the terms with minus signs over to the other side, and divide the first FOC by the second to obtain

f_1(k*, l*)/f_2(k*, l*) = r/w,

which is another familiar condition that says that the optimizing firm chooses factor inputs so that the marginal rate of substitution between inputs in production equals the ratio of factor prices. Now suppose that the constraint binds, as it usually will:

y = f(k*, l*).    (11)

Then (9)-(11) represent 3 equations that determine the three unknowns k*, l*, and λ* as functions of the model's parameters r, w, and y. In particular, we can think of the functions k* = k*(r, w, y) and l* = l*(r, w, y) as demand curves for capital and labor: strictly speaking, they are conditional (on y) factor demand functions. Now define the minimum cost function as

C(r, w, y) = min_{k,l} rk + wl subject to f(k, l) ≥ y
= rk*(r, w, y) + wl*(r, w, y)
= rk*(r, w, y) + wl*(r, w, y) − λ*(r, w, y){f[k*(r, w, y), l*(r, w, y)] − y}.

The envelope theorem tells us that in calculating the derivatives of the cost function, we can ignore the dependence of k*, l*, and λ* on r, w, and y.

Hence:

C_1(r, w, y) = k*(r, w, y),

C_2(r, w, y) = l*(r, w, y),

and

C_3(r, w, y) = λ*(r, w, y).

The first two of these equations are statements of Shephard's lemma: they tell us that the derivatives of the cost function with respect to factor prices coincide with the conditional factor demand curves. The third equation gives us an interpretation of the multiplier λ* as a measure of the marginal cost of increasing output.

Thus, our two examples illustrate how we can apply the Kuhn-Tucker and envelope theorems in specific economic problems. The two examples also show how, in the context of specific economic problems, it is often possible to attach an economic interpretation to the multiplier λ*.
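Again, a short numerical sketch can confirm these results. The Cobb-Douglas technology f(k, l) = k^a l^(1−a) below is an assumed form; the code checks Shephard's lemma and the marginal-cost interpretation of λ* by finite differences.

```python
# Cost minimization under the assumed technology f(k, l) = k**a * l**(1 - a).
import numpy as np
from scipy.optimize import minimize

a = 0.4

def solve(r, w, y):
    """Return (k*, l*, C) for min r*k + w*l subject to f(k, l) >= y."""
    f = lambda v: v[0] ** a * v[1] ** (1 - a)
    res = minimize(lambda v: r * v[0] + w * v[1], x0=[1.0, 1.0],
                   bounds=[(1e-6, None)] * 2,
                   constraints=[{"type": "ineq", "fun": lambda v: f(v) - y}])
    k, l = res.x
    return k, l, r * k + w * l

r, w, y, h = 2.0, 3.0, 5.0, 1e-4
k, l, C = solve(r, w, y)

C_r = (solve(r + h, w, y)[2] - solve(r - h, w, y)[2]) / (2 * h)
C_w = (solve(r, w + h, y)[2] - solve(r, w - h, y)[2]) / (2 * h)
C_y = (solve(r, w, y + h)[2] - solve(r, w, y - h)[2]) / (2 * h)

lam = r / (a * k ** (a - 1) * l ** (1 - a))   # lambda* = r/f_1(k*, l*), from (9)

print(f"Shephard: C_1 = {C_r:.4f} vs k* = {k:.4f}")
print(f"          C_2 = {C_w:.4f} vs l* = {l:.4f}")
print(f"marginal cost: C_3 = {C_y:.4f} vs lambda* = {lam:.4f}")
```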

4 Generalizing the Basic Results

4.1 The Kuhn-Tucker Theorem

Our simple version of the Kuhn-Tucker theorem applies to a problem with only one choice variable and one constraint. Section 19.6 of Simon and Blume's book develops a proof for the more general case, with n choice variables and m constraints. Their proof makes repeated, clever use of the implicit function theorem, which makes the arguments surprisingly short but also works to obscure some of the intuition provided by the analysis of the simplest case. Nevertheless, having gained that intuition from working through the simple case, it is useful to see how the result extends.

Simon and Blume (Chapter 15) and Acemoglu (Appendix A) both present fairly general statements of the implicit function theorem. The special case or application of their results that we will need works as follows. Consider a system of n equations in n variables:

H_1(y_1, y_2, ..., y_n) = c_1,
H_2(y_1, y_2, ..., y_n) = c_2,
...
H_n(y_1, y_2, ..., y_n) = c_n.

The functions may have other arguments (exogenous variables), but since these will be held fixed, notation referring to them can be suppressed.

Now evaluate these equations at a specific set of values y_1*, y_2*, ..., y_n* to obtain

H_1(y_1*, y_2*, ..., y_n*) = c_1*,
H_2(y_1*, y_2*, ..., y_n*) = c_2*,
...
H_n(y_1*, y_2*, ..., y_n*) = c_n*.

Suppose that each function H_i, i = 1, ..., n, is continuously differentiable and that the n × n matrix of derivatives

[ ∂H_1/∂y_1 ... ∂H_1/∂y_n ]
[ ∂H_2/∂y_1 ... ∂H_2/∂y_n ]
[    ...    ...    ...    ]
[ ∂H_n/∂y_1 ... ∂H_n/∂y_n ]

is nonsingular at y_1*, y_2*, ..., y_n*.
Then there exist continuously differentiable functions

y_1(c_1, c_2, ..., c_n), y_2(c_1, c_2, ..., c_n), ..., y_n(c_1, c_2, ..., c_n),

defined in an open subset C of R^n containing (c_1*, c_2*, ..., c_n*), such that

H_1(y_1(c_1, ..., c_n), y_2(c_1, ..., c_n), ..., y_n(c_1, ..., c_n)) = c_1,
H_2(y_1(c_1, ..., c_n), y_2(c_1, ..., c_n), ..., y_n(c_1, ..., c_n)) = c_2,
...
H_n(y_1(c_1, ..., c_n), y_2(c_1, ..., c_n), ..., y_n(c_1, ..., c_n)) = c_n

for all (c_1, c_2, ..., c_n) ∈ C.
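Here is a small numerical illustration of the implicit function theorem in exactly this form; the two-equation system is a hypothetical example. Differentiating H(y(c)) = c gives Dy(c) = [DH(y)]^(−1) whenever the Jacobian DH is nonsingular, and the code checks this against finite differences.

```python
# Implicit function theorem sketch for the assumed system
# H1(y1, y2) = y1**2 + y2 = c1 and H2(y1, y2) = y1 - y2 = c2.
import numpy as np
from scipy.optimize import fsolve

H = lambda y: np.array([y[0] ** 2 + y[1], y[0] - y[1]])
DH = lambda y: np.array([[2 * y[0], 1.0],
                         [1.0, -1.0]])   # Jacobian dH_i/dy_j

def y_of_c(c):
    """Solve H(y) = c numerically, starting from (1, 1)."""
    return fsolve(lambda y: H(y) - c, x0=[1.0, 1.0])

c = np.array([3.0, 0.5])
y = y_of_c(c)

# Differentiate y(c) two ways: finite differences vs. the inverse Jacobian.
h = 1e-6
fd = np.column_stack([(y_of_c(c + h * e) - y_of_c(c - h * e)) / (2 * h)
                      for e in np.eye(2)])
print(np.round(fd, 4))
print(np.round(np.linalg.inv(DH(y)), 4))  # the two matrices should agree
```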
With this result in hand, consider the following generalized version of the Kuhn-Tucker theorem we proved earlier. Let there be n choice variables, x_1, x_2, ..., x_n. The objective function F : R^n → R is continuously differentiable, as are the m functions G_j : R^n → R, j = 1, 2, ..., m, that enter into the constraints c_j ≥ G_j(x_1, x_2, ..., x_n), where c_j ∈ R for all j = 1, 2, ..., m. The problem can be stated as:

max_{x_1,x_2,...,x_n} F(x_1, x_2, ..., x_n) subject to c_j ≥ G_j(x_1, x_2, ..., x_n) for all j = 1, 2, ..., m.

Note that, typically, m ≤ n will have to hold in order for there to be a set of values for the choice variables that satisfies all of the constraints.

To define the Lagrangian, introduce the multipliers λ_j, j = 1, 2, ..., m, one for each constraint. Then

L(x_1, x_2, ..., x_n, λ_1, λ_2, ..., λ_m) = F(x_1, x_2, ..., x_n) + Σ_{j=1}^m λ_j[c_j − G_j(x_1, x_2, ..., x_n)].

Theorem (Kuhn-Tucker) Suppose that x_1*, x_2*, ..., x_n* maximize F(x_1, x_2, ..., x_n) subject to c_j ≥ G_j(x_1, x_2, ..., x_n) for all j = 1, 2, ..., m, where F and the G_j's are all continuously differentiable. Suppose (without loss of generality) that the first m̄ ≤ m constraints bind at the optimum and that the remaining m − m̄ ≥ 0 constraints are nonbinding, and assume that the m̄ × n matrix of derivatives

[ G_{1,1}(x_1*, x_2*, ..., x_n*) ... G_{1,n}(x_1*, x_2*, ..., x_n*) ]
[ G_{2,1}(x_1*, x_2*, ..., x_n*) ... G_{2,n}(x_1*, x_2*, ..., x_n*) ]    (12)
[              ...               ...              ...              ]
[ G_{m̄,1}(x_1*, x_2*, ..., x_n*) ... G_{m̄,n}(x_1*, x_2*, ..., x_n*) ]

where G_{j,i} = ∂G_j/∂x_i, has rank m̄. Then there exist values λ_1*, λ_2*, ..., λ_m* that, together with x_1*, x_2*, ..., x_n*, satisfy

L_i(x_1*, ..., x_n*, λ_1*, ..., λ_m*) = F_i(x_1*, ..., x_n*) − Σ_{j=1}^m λ_j*G_{j,i}(x_1*, ..., x_n*) = 0    (13)

for i = 1, 2, ..., n,

L_{n+j}(x_1*, ..., x_n*, λ_1*, ..., λ_m*) = c_j − G_j(x_1*, ..., x_n*) ≥ 0    (14)

for j = 1, 2, ..., m,

λ_j* ≥ 0    (15)

for j = 1, 2, ..., m, and

λ_j*[c_j − G_j(x_1*, ..., x_n*)] = 0    (16)

for j = 1, 2, ..., m.
Proof To begin, set the multipliers λ_{m̄+1}*, λ_{m̄+2}*, ..., λ_m* associated with the nonbinding constraints equal to zero. Since each of the functions G_j, j = m̄ + 1, m̄ + 2, ..., m, is continuously differentiable, sufficiently small adjustments in the choice variables can be made without causing these m − m̄ constraints to become binding.

Next, note that the (m̄ + 1) × n matrix

[ F_1(x_1*, x_2*, ..., x_n*)     ... F_n(x_1*, x_2*, ..., x_n*)     ]
[ G_{1,1}(x_1*, x_2*, ..., x_n*) ... G_{1,n}(x_1*, x_2*, ..., x_n*) ]
[ G_{2,1}(x_1*, x_2*, ..., x_n*) ... G_{2,n}(x_1*, x_2*, ..., x_n*) ]    (17)
[              ...               ...              ...              ]
[ G_{m̄,1}(x_1*, x_2*, ..., x_n*) ... G_{m̄,n}(x_1*, x_2*, ..., x_n*) ]

must have rank m̄ < m̄ + 1. To see why, consider the system of equations

F(x_1, x_2, ..., x_n) = y,
G_1(x_1, x_2, ..., x_n) = c_1,
G_2(x_1, x_2, ..., x_n) = c_2,
...
G_{m̄}(x_1, x_2, ..., x_n) = c_{m̄}.

With y set equal to the maximized value of the objective function,

y* = F(x_1*, x_2*, ..., x_n*),

each of these m̄ + 1 equations holds when the functions are evaluated at x_1*, x_2*, ..., x_n*. If the matrix in (17) had rank m̄ + 1, the implicit function theorem would imply that it is possible to adjust the values of m̄ + 1 of the choice variables so as to find a new set of values x̃_1, x̃_2, ..., x̃_n such that

F(x̃_1, x̃_2, ..., x̃_n) = y* + ε,
G_1(x̃_1, x̃_2, ..., x̃_n) = c_1,
G_2(x̃_1, x̃_2, ..., x̃_n) = c_2,
...
G_{m̄}(x̃_1, x̃_2, ..., x̃_n) = c_{m̄}

for a strictly positive but sufficiently small value of ε. But this contradicts the assumption that x_1*, x_2*, ..., x_n* solves the constrained optimization problem.

Since the matrix in (17) has rank m̄ < m̄ + 1, its m̄ + 1 rows must be linearly dependent. Hence, there exist scalars μ_0, μ_1, ..., μ_{m̄}, at least one of which is nonzero, such that

0 = μ_0[F_1(x_1*, ..., x_n*), ..., F_n(x_1*, ..., x_n*)] + μ_1[G_{1,1}(x_1*, ..., x_n*), ..., G_{1,n}(x_1*, ..., x_n*)] + ... + μ_{m̄}[G_{m̄,1}(x_1*, ..., x_n*), ..., G_{m̄,n}(x_1*, ..., x_n*)].    (18)

Moreover, in (18), μ_0 ≠ 0, since otherwise the matrix in (12) would have rank less than m̄.
Thus, for j = 1, 2, ..., m̄, set λ_j* = −μ_j/μ_0. With these settings for λ_1*, λ_2*, ..., λ_{m̄}*, plus the settings λ_{m̄+1}* = λ_{m̄+2}* = ... = λ_m* = 0 chosen earlier, (18) implies that (13) must hold for all i = 1, 2, ..., n. Clearly, (14) and (16) are satisfied for all j = 1, 2, ..., m, and (15) holds for all j = m̄ + 1, m̄ + 2, ..., m. So it only remains to show that (15) holds for j = 1, 2, ..., m̄.

To see that these last conditions must hold, consider the system of equations

G_1(x_1, x_2, ..., x_n) = c_1 − ε,
G_2(x_1, x_2, ..., x_n) = c_2,    (19)
...
G_{m̄}(x_1, x_2, ..., x_n) = c_{m̄},

where ε ≥ 0. These equations hold, with ε = 0, at x_1*, x_2*, ..., x_n*. And since the matrix in (12) has rank m̄, the implicit function theorem implies that there are functions x_1(ε), x_2(ε), ..., x_n(ε) such that the same equations hold for all sufficiently small values of ε.

Since c_1 − ε ≤ c_1, the choices x_1(ε), x_2(ε), ..., x_n(ε) satisfy all of the constraints from the original optimization problem. And since, by assumption, x_1(0) = x_1*, x_2(0) = x_2*, ..., x_n(0) = x_n* maximizes the objective function subject to the constraints, it must be that

dF(x_1(ε), x_2(ε), ..., x_n(ε))/dε |_{ε=0} = Σ_{i=1}^n F_i(x_1*, ..., x_n*)x_i′(0) ≤ 0.    (20)

In addition, the equations in (19) implicitly defining x_1(ε), x_2(ε), ..., x_n(ε) imply

dG_1(x_1(ε), x_2(ε), ..., x_n(ε))/dε |_{ε=0} = Σ_{i=1}^n G_{1,i}(x_1*, ..., x_n*)x_i′(0) = −1    (21)

and

dG_j(x_1(ε), x_2(ε), ..., x_n(ε))/dε |_{ε=0} = Σ_{i=1}^n G_{j,i}(x_1*, ..., x_n*)x_i′(0) = 0    (22)

for j = 2, 3, ..., m̄.

Putting all these results together, (13) implies

0 = F_i(x_1*, ..., x_n*) − Σ_{j=1}^m λ_j*G_{j,i}(x_1*, ..., x_n*)

for all i = 1, 2, ..., n. Multiplying each of these equations by x_i′(0) and summing over all i yields

0 = Σ_{i=1}^n F_i(x_1*, ..., x_n*)x_i′(0) − Σ_{i=1}^n Σ_{j=1}^m λ_j*G_{j,i}(x_1*, ..., x_n*)x_i′(0),

or

0 = Σ_{i=1}^n F_i(x_1*, ..., x_n*)x_i′(0) − Σ_{j=1}^m λ_j* [Σ_{i=1}^n G_{j,i}(x_1*, ..., x_n*)x_i′(0)].

In light of (21) and (22), this last equation simplifies to

0 = Σ_{i=1}^n F_i(x_1*, ..., x_n*)x_i′(0) + λ_1*.

And hence, in light of (20), λ_1* ≥ 0. Analogous arguments show that λ_j* ≥ 0 for j = 2, 3, ..., m̄ as well, completing the proof.
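To see the generalized conditions (13)-(16) at work, here is a hypothetical sketch with n = 2 choice variables and m = 2 constraints (all functional forms are assumptions): F = ln(x_1) + ln(x_2), G_1 = x_1 + x_2 with c_1 = 4, and G_2 = x_1 with c_2 = 3. The second constraint turns out to be slack at the optimum, so its multiplier is set to zero, exactly as in the proof's first step.

```python
# Generalized Kuhn-Tucker conditions with two variables and two constraints,
# under the assumed forms F = ln(x1) + ln(x2), G1 = x1 + x2, G2 = x1.
import numpy as np
from scipy.optimize import minimize

c1, c2 = 4.0, 3.0
F = lambda x: np.log(x[0]) + np.log(x[1])

res = minimize(lambda x: -F(x), x0=[1.0, 1.0],
               bounds=[(1e-6, None)] * 2,
               constraints=[{"type": "ineq",
                             "fun": lambda x: c1 - x[0] - x[1]},
                            {"type": "ineq",
                             "fun": lambda x: c2 - x[0]}])
x1, x2 = res.x                           # x* = (2, 2)

lam2 = 0.0                # constraint 2 is slack, so (16) forces lam2* = 0
lam1 = 1.0 / x1           # from (13) with i = 1: 1/x1 - lam1 - lam2 = 0

print(f"(13), i = 2: {1.0 / x2 - lam1:.2e}")          # also holds
print(f"(14): slacks = {c1 - x1 - x2:.2e}, {c2 - x1:.2f}")
print(f"(15): lam1* = {lam1:.2f} >= 0, lam2* = {lam2:.2f} >= 0")
print(f"(16): products = {lam1 * (c1 - x1 - x2):.2e}, {lam2 * (c2 - x1):.2f}")
```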

4.2 The Envelope Theorem

Proving a generalized version of the envelope theorem requires no new ideas, just repeated applications of the previous ones. Consider, again, the constrained optimization problem with n choice variables and m constraints:

max_{x_1,x_2,...,x_n} F(x_1, x_2, ..., x_n) subject to c_j ≥ G_j(x_1, x_2, ..., x_n) for all j = 1, 2, ..., m.

Now extend this problem by allowing the functions F and G_j, j = 1, 2, ..., m, to depend on a parameter θ ∈ R:

max_{x_1,x_2,...,x_n} F(x_1, x_2, ..., x_n, θ) subject to c_j ≥ G_j(x_1, x_2, ..., x_n, θ) for all j = 1, 2, ..., m.

Just as before, define the maximum value function V : R → R as

V(θ) = max_{x_1,x_2,...,x_n} F(x_1, x_2, ..., x_n, θ) subject to c_j ≥ G_j(x_1, x_2, ..., x_n, θ) for all j = 1, 2, ..., m.

Note that V is still a function of the single parameter θ, since the n choice variables are optimized out. Put another way, evaluating V requires the same two-step procedure as before: first, given θ, find the values x_1*(θ), x_2*(θ), ..., x_n*(θ) that solve the constrained optimization problem; second, substitute these values, together with the given value of θ, into the objective function to obtain

V(θ) = F(x_1*(θ), x_2*(θ), ..., x_n*(θ), θ).

And just as before, the envelope theorem tells us that we can calculate the derivative V′(θ) of the maximum value function while ignoring the dependence of x_1*, x_2*, ..., x_n* and λ_1*, λ_2*, ..., λ_m* on θ, provided we invoke the complementary slackness conditions (16) to add the sum of all of the multipliers times all of the constraints to the objective function before differentiating through by θ.

Theorem (Envelope) Let F and G_j, j = 1, 2, ..., m, be continuously differentiable functions of x_1, x_2, ..., x_n and θ. For any value of θ, let x_1*(θ), x_2*(θ), ..., x_n*(θ) maximize F(x_1, x_2, ..., x_n, θ) subject to c_j ≥ G_j(x_1, x_2, ..., x_n, θ) for all j = 1, 2, ..., m, and let λ_1*(θ), λ_2*(θ), ..., λ_m*(θ) be the associated values of the Lagrange multipliers. Suppose, further, that x_1*(θ), x_2*(θ), ..., x_n*(θ) and λ_1*(θ), λ_2*(θ), ..., λ_m*(θ) are all continuously differentiable functions, and that the m̄(θ) × n matrix of derivatives

[ G_{1,1}(x_1*(θ), ..., x_n*(θ), θ)     ... G_{1,n}(x_1*(θ), ..., x_n*(θ), θ)     ]
[ G_{2,1}(x_1*(θ), ..., x_n*(θ), θ)     ... G_{2,n}(x_1*(θ), ..., x_n*(θ), θ)     ]
[               ...                     ...               ...                     ]
[ G_{m̄(θ),1}(x_1*(θ), ..., x_n*(θ), θ)  ... G_{m̄(θ),n}(x_1*(θ), ..., x_n*(θ), θ)  ]

associated with the m̄(θ) ≤ m binding constraints has rank m̄(θ) for each value of θ. Then the maximum value function defined by

V(θ) = max_{x_1,...,x_n} F(x_1, x_2, ..., x_n, θ) subject to c_j ≥ G_j(x_1, x_2, ..., x_n, θ) for all j = 1, 2, ..., m

satisfies

V′(θ) = F_{n+1}(x_1*(θ), ..., x_n*(θ), θ) − Σ_{j=1}^m λ_j*(θ)G_{j,n+1}(x_1*(θ), ..., x_n*(θ), θ).    (23)

Proof The Kuhn-Tucker theorem implies that for any given value of θ,

F_i(x_1*(θ), ..., x_n*(θ), θ) − Σ_{j=1}^m λ_j*(θ)G_{j,i}(x_1*(θ), ..., x_n*(θ), θ) = 0    (13)

for i = 1, 2, ..., n, and

λ_j*(θ)[c_j − G_j(x_1*(θ), ..., x_n*(θ), θ)] = 0    (16)

for j = 1, 2, ..., m, must hold. In light of (16),

V(θ) = F(x_1*(θ), ..., x_n*(θ), θ) + Σ_{j=1}^m λ_j*(θ)[c_j − G_j(x_1*(θ), ..., x_n*(θ), θ)].

Differentiating both sides of this expression by θ yields

V′(θ) = Σ_{i=1}^n F_i(x_1*(θ), ..., x_n*(θ), θ)x_i*′(θ) + F_{n+1}(x_1*(θ), ..., x_n*(θ), θ)
+ Σ_{j=1}^m λ_j*′(θ)[c_j − G_j(x_1*(θ), ..., x_n*(θ), θ)]
− Σ_{i=1}^n Σ_{j=1}^m λ_j*(θ)G_{j,i}(x_1*(θ), ..., x_n*(θ), θ)x_i*′(θ)
− Σ_{j=1}^m λ_j*(θ)G_{j,n+1}(x_1*(θ), ..., x_n*(θ), θ),

which shows that, in principle, we must take the dependence of x_1*(θ), ..., x_n*(θ) and λ_1*(θ), ..., λ_m*(θ) on θ into account when calculating V′(θ).

Note, however, that (13) implies that the sums in the first and fourth lines of this last expression together equal zero. Hence, to show that (23) holds, it only remains to show that

Σ_{j=1}^m λ_j*′(θ)[c_j − G_j(x_1*(θ), ..., x_n*(θ), θ)] = 0,

and this is true if

λ_j*′(θ)[c_j − G_j(x_1*(θ), ..., x_n*(θ), θ)] = 0    (24)

for all j = 1, 2, ..., m. Clearly, (24) holds for any θ such that constraint j is binding. For θ such that constraint j is not binding, (16) implies that λ_j*(θ) = 0. Furthermore, by the continuity of G_j and x_i*(θ), i = 1, 2, ..., n, if constraint j does not bind at θ, there exists an ε > 0 such that constraint j does not bind for all θ + δ with ε > |δ|. Hence,

λ_j*′(θ) = lim_{δ→0} [λ_j*(θ + δ) − λ_j*(θ)]/δ = lim_{δ→0} 0/δ = 0,

and once again it becomes apparent that (24) must hold. Hence, (23) must hold as well.
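A final numerical check of formula (23), with one binding and one slack constraint, may be useful; all functional forms below are illustrative assumptions, with t again standing in for θ.

```python
# Generalized envelope formula (23) under the assumed forms
# F = t*ln(x1) + ln(x2), G1 = x1 + x2 - t (c1 = 4), G2 = x1 (c2 = 3, slack).
import numpy as np
from scipy.optimize import minimize

c1, c2 = 4.0, 3.0

def solve(t):
    res = minimize(lambda x: -(t * np.log(x[0]) + np.log(x[1])),
                   x0=[1.0, 1.0], bounds=[(1e-6, None)] * 2,
                   constraints=[{"type": "ineq",
                                 "fun": lambda x: c1 - (x[0] + x[1] - t)},
                                {"type": "ineq",
                                 "fun": lambda x: c2 - x[0]}])
    return res.x

def V(t):
    x1, x2 = solve(t)
    return t * np.log(x1) + np.log(x2)

t, h = 1.0, 1e-5
x1, x2 = solve(t)
lam1 = 1.0 / x2           # from (13) with i = 2: 1/x2 = lam1*G_{1,2} = lam1
lam2 = 0.0                # constraint 2 is slack

fd = (V(t + h) - V(t - h)) / (2 * h)
formula = np.log(x1) - lam1 * (-1.0)     # F_theta - lam1*G1_theta, G1_theta = -1
print(f"finite difference: {fd:.5f}")
print(f"formula (23)     : {formula:.5f}")   # both approx ln(2.5) + 0.4
```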

