A Crash Course in Optimization Theory

1 Examples
Why optimize?

1. Microeconomics: The consumer wants to choose the best bundle that she can afford.

2. Finance: Portfolio management (minimize risk subject to a target return).

3. Diet Problem: Find the cheapest combination of food that satisfies all the daily nutritional requirements.

4. Engineering: Design aircraft and aerospace structures of minimum weight subject to constraints, or find the optimal trajectories of space vehicles (dynamic optimization).

5. Physics: Thermodynamics (heat bath), quantum mechanics, etc.

2 Optimization
Optimize = Maximize or Minimize

2.1 Without Constraint


2.1.1 One Variable

Maximize f where f : R → R.

Definition 1 A stationary point x is a point such that f′(x) = 0.

If x∗ is an optimizer, then it is a stationary point. To find an optimizer, we take the first-order condition and solve f′(x) = 0 for x.

To determine whether an optimizer is a maximizer or a minimizer, we check the second-order condition; i.e. we look at the sign of f″(x∗).
If f″(x∗) > 0, then x∗ is a (local) minimizer. If f″(x∗) < 0, then x∗ is a (local) maximizer.
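A minimal sketch of this recipe, assuming Python with sympy is available and using f(x) = x^3 − 3x purely as an invented illustration (not an example from the notes):

import sympy as sp

x = sp.symbols('x', real=True)
f = x**3 - 3*x                               # illustrative choice of f, not from the notes

stationary = sp.solve(sp.diff(f, x), x)      # first-order condition: solve f'(x) = 0
for s in stationary:
    curvature = sp.diff(f, x, 2).subs(x, s)  # second-order condition: sign of f''(x*)
    kind = '(local) minimizer' if curvature > 0 else '(local) maximizer'
    print(s, kind)                           # -1 is a (local) maximizer, 1 is a (local) minimizer

(The sign test says nothing about the degenerate case f″(x∗) = 0, which this example avoids.)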

2.1.2 Multiple Variables

Maximize f where f : Rn → R. Denote x = (x1, x2, . . . , xn).


Definition 2 A stationary point x is a point such that ∂f(x)/∂xi = 0 for all i = 1, 2, . . . , n.

An optimizer must be a stationary point. To find an optimizer, we solve the first-order conditions; this is the system

∂f(x)/∂x1 = 0
...
∂f(x)/∂xn = 0
The second-order condition is more complicated. It involves checking the semi-definiteness
of the Hessian matrix. We will not check it in this class but you must be aware of it.
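A minimal sketch of the multi-variable case, again assuming Python with sympy and using the invented example f(x1, x2) = −x1^2 − x2^2 + x1 x2 + x1 (not from the notes), including the Hessian check mentioned above:

import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
f = -x1**2 - x2**2 + x1*x2 + x1              # illustrative example, not from the notes

grad = [sp.diff(f, v) for v in (x1, x2)]     # the system of first-order conditions
crit = sp.solve(grad, (x1, x2), dict=True)
print(crit)                                  # [{x1: 2/3, x2: 1/3}]

H = sp.hessian(f, (x1, x2))                  # second-order condition via the Hessian
print(H.eigenvals())                         # eigenvalues -3 and -1: negative definite, so a (local) maximizer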

2.2 With Constraints


This is the case when we try to solve

max_{x∈Rn} f(x1, . . . , xn)
s.t. g(x1, . . . , xn) = 0.

In consumer theory (with two goods), f(x) = u(x1, x2) and g(x) = p1 x1 + p2 x2 − m.
Claim 1 If x∗ maximizes a differentiable function f : Rn → R subject to g(x) = 0 (and if a rank condition is satisfied, i.e. x∗ is interior), then there is λ ∈ R such that

∂f(x∗)/∂xi − λ ∂g(x∗)/∂xi = 0,   ∀i = 1, 2, . . . , n.
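For instance, applying the claim to the consumer problem above, where g(x) = p1 x1 + p2 x2 − m so that ∂g(x)/∂xi = pi, the condition becomes

∂u(x∗)/∂x1 − λ p1 = 0 and ∂u(x∗)/∂x2 − λ p2 = 0.

Dividing the two equations (assuming ∂u(x∗)/∂x2 ≠ 0) gives the familiar tangency condition that the marginal rate of substitution equals the price ratio:

[∂u(x∗)/∂x1] / [∂u(x∗)/∂x2] = p1 / p2.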

2.2.1 A Cookbook Procedure

Step 1. Set up the Lagrangean:

L(x1, . . . , xn, λ) = f(x1, . . . , xn) + λ g(x1, . . . , xn)
Step 2. Find the stationary points of L. That is, solve

∂L(x, λ)/∂x1 = 0
...
∂L(x, λ)/∂xn = 0
∂L(x, λ)/∂λ = 0

for (x1, . . . , xn, λ). (Note that ∂L/∂λ = 0 simply recovers the constraint g(x) = 0.)

Step 3. Check the second-order condition to make sure that it is a maximizer (you will not
have to do it in this class).

2.2.2 Examples

1. Maximize u(x1, x2) = x1^a x2^b subject to p1 x1 + p2 x2 = m (worked in the sketch after this list).

2. Maximize x + y subject to (x^2 + y^2)^2 + 5(x^2 − y^2) = 0.
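A minimal sketch of the cookbook applied to Example 1, assuming Python with sympy and picking illustrative numbers a = b = 1, p1 = 1, p2 = 2, m = 12 (my own choices, not from the notes):

import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam', real=True)

p1, p2, m = 1, 2, 12                 # illustrative prices and income
u = x1 * x2                          # u(x1, x2) = x1^a x2^b with a = b = 1
g = p1*x1 + p2*x2 - m                # constraint written as g(x) = 0

L = u + lam * g                      # Step 1: the Lagrangean

foc = [sp.diff(L, v) for v in (x1, x2, lam)]     # Step 2: the first-order conditions
sol = sp.solve(foc, (x1, x2, lam), dict=True)
print(sol)                           # [{x1: 6, x2: 3, lam: -3}]

The solution x1∗ = 6, x2∗ = 3 agrees with the standard Cobb–Douglas demands x1∗ = am/((a + b)p1) and x2∗ = bm/((a + b)p2).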
