Lecture 1


Introduction Unconstrained Optimization Constrained Optimization Final Remarks

Lecture 1: Basic Concepts in Optimization

Prof. Marcelo Escobar

PPGEQ – UFRGS
Prof. Jorge Otávio Trierweiler

September 11, 2011


Prof. Marcelo Escobar Optimization of Chemical Processes



Outline

1 Introduction
Introduction
Mathematical Background
Basic Concepts

2 Unconstrained Optimization

3 Constrained Optimization

4 Final Remarks

Introduction

Introduction

Optimization: given a system or process, find the best solution for that process within constraints.
Objective Function: indicator of the goodness of a solution, e.g. cost, profit, energy, ...
Decision Variables: variables that influence the process behaviour and can be adjusted during the optimization.


Introduction

Optimization Viewpoints

Mathematician: characterization of theoretical properties of optimization: convergence, existence, local convergence rates.
Numerical Analyst: implementation of optimization methods for efficient and "practical" use. Concerned with ease of computation, numerical stability, performance.
Engineer: applies optimization methods to real problems. Concerned with reliability, robustness, efficiency, diagnosis, and recovery from failure.

Introduction

Motivation

Scope of Optimization: provide a systematic framework for searching among a specified space of alternatives to identify an optimal design, i.e., serve as a decision-making tool!

Premise: the conceptual formulation of optimal product and process design corresponds to a mathematical programming problem:

minimize_x f(x)
subject to h(x) = 0
g(x) ≤ 0


Introduction

Applications of Optimization in Process Systems Engineering

Each application area is typically formulated with one or more of the classes LP, MILP, QP, NLP, MINLP:

Process Design & Synthesis: HENs (networks), Reactions, Separation Systems, Flowsheeting
Process Operations: Planning/Scheduling, Supply Chain, Real Time Optimization
Control: Predictive Control, Nonlinear Predictive Control, Hybrid Control


Introduction

Process Optimization

Process Model: h(x) = 0, h : Rn → Rm (n variables and m equations)
Degrees of Freedom: φ = n − m; if φ > 0 → optimization.
Process Specifications: g(x) ≤ 0, g : Rn → Rr
Objective Function: f(x), f : Rn → R, an indicator of goodness!

Process Optimization Problem

minimize_{x ∈ Rn} f(x)
subject to hi(x) = 0, i = 1, …, m
gj(x) ≤ 0, j = 1, …, r


Introduction

Some Remarks

Minimizing f (x) is equivalent to maximizing −f (x);


Inequality constraints can always be rearranged:

x1 − x2 ≥ 0 ⇒ −x1 + x2 ≤ 0

Multiple objective functions: minimize f1(x) and f2(x)

weighted objective function:

f(x) = w1 f1(x) + w2 f2(x)

ε-constraint method:
minimize_x f1(x)
subject to f2(x) ≤ ε


Introduction

Class of Problems

LP - Linear Programming: f(x), h(x), and g(x) are linear;
QP - Quadratic Programming: f(x) is quadratic; h(x) and g(x) are linear;
NLP - Nonlinear Programming: f(x), h(x), and/or g(x) are nonlinear;
MILP - Mixed Integer Linear Programming: linear functions with integer variables;
MINLP - Mixed Integer Nonlinear Programming: nonlinear functions with integer variables;

Introduction

Modern Optimization

Linear Programming: Kantorovich (1939), Dantzig (1947)
Nonlinear Programming: Karush (1939), Kuhn and Tucker (1951)
Integer Programming: R. E. Gomory (1958)


Introduction

Evolution of Mathematical Programming

1950’s: Linear Programming, Nonlinear Programming


1960’s: Integer Programming
1980’s: Interior Point Methods
1990’s: Integer Nonlinear Programming, Modelling Systems
2000’s: Global Optimization, Logic Based Methods

Computational progress:
much faster algorithms/much faster computers
APPENDIX A. OPTIMIZATION OVERVIEW

FIGURE 1: Overview of optimization tools, organized by problem class, solution algorithm, major codes, and modeling-system interfaces. LP: Simplex (Dantzig, 1949) and interior point (Karmarkar, 1984), with codes such as CPLEX, XPRESS, OSL, XA, MOSEK. MILP: branch and bound (Land and Doig, 1960; Dakin, 1965), cutting planes (Gomory, 1960; Balas et al., 1993), and branch and cut (Crowder et al., 1983; Johnson et al., 2000), with codes such as COIN-CBC and COIN-GLPK. NLP: successive linear programming (Fletcher and Sainz de la Maza, 1989), sequential quadratic programming, reduced gradient methods (GRG; Drud, 1992), augmented Lagrangian methods (Conn, Gould and Toint, 1992), and interior point methods (Forsgren, Gill, and Wright, 2002; Wächter, 2002; Byrd et al., 1999), with codes such as SNOPT, NPSOL, MINOS, CONOPT, LANCELOT, KNITRO, IPOPT, LOQO. MINLP: branch and bound (Gupta and Ravindran, 1985; Borchers and Mitchell, 1994; Leyffer, 2001), outer approximation (Duran and Grossmann, 1986; Viswanathan and Grossmann, 1990; Fletcher and Leyffer, 1994), generalized Benders decomposition (Benders, 1962; Geoffrion, 1972), extended cutting plane (Westerlund and Pettersson, 1992, 1995), LP/NLP-based branch and bound (Quesada and Grossmann, 1992), branch and reduce (Sahinidis, 1996), and α-BB underestimators, with codes such as SBB, MINLP_BB, BONMIN, FilMINT, DICOPT, Alpha-ECP, MINOPT, BARON, LaGO, LINDOGlobal. GDP: generalized disjunctive programming with big-M/convex hull relaxations (Balas, 1985; Vecchietti and Grossmann, 1999, 2000; Lee and Grossmann, 2000), with LOGMIP. Random search: simulated annealing, genetic algorithms, tabu search, scatter search, with codes such as OQNLP, MSNLP, OTS, LGO.

Mathematical Background

Linear Algebra

For a given matrix A:

Av = λv

The λ satisfying this relation are the eigenvalues and the corresponding v are the eigenvectors of A.

How to determine them? det(A − λI) = 0

Classification of a symmetric matrix based on its eigenvalues:

all λi > (≥) 0: positive (semi)definite;
all λi < (≤) 0: negative (semi)definite;
some λi < 0 and some λj > 0: indefinite.

Note: if A is symmetric, all eigenvalues are real.
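As a quick numerical illustration (a sketch in NumPy; the helper name `classify_symmetric` is ours, not part of the lecture), the eigenvalue-based classification can be checked directly:

```python
import numpy as np

def classify_symmetric(A, tol=1e-10):
    """Classify a symmetric matrix by the signs of its eigenvalues."""
    lam = np.linalg.eigvalsh(A)  # eigenvalues of a symmetric matrix are real
    if np.all(lam > tol):
        return "positive definite"
    if np.all(lam >= -tol):
        return "positive semidefinite"
    if np.all(lam < -tol):
        return "negative definite"
    if np.all(lam <= tol):
        return "negative semidefinite"
    return "indefinite"

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # eigenvalues 1 and 3
B = np.array([[1.0, 2.0], [2.0, 1.0]])   # eigenvalues -1 and 3
print(classify_symmetric(A))  # positive definite
print(classify_symmetric(B))  # indefinite
```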


Mathematical Background

Calculus: The Gradient Vector

For a given differentiable function f(x) we can define the

Gradient Vector:

∇f(x) = [ ∂f(x)/∂x1, …, ∂f(x)/∂xn ]^T

∇f(x) points in the direction of greatest increase of f(x);
∇f(x*) defines the tangent plane to the surface at the point x*;
∇f(x*) is orthogonal to the contour curves of f(x) at the point x*.

Mathematical Background

Calculus: Surface and Contour Curves of f (x)

FIGURE 4.11: Illustration of a convex set formed by a plane f(x) = k cutting a convex function.

Mathematical Background

Calculus: Surface and Contour Curves in Matlab

Quick Tutorial: See Matlab Function tutorial01.m!


Mathematical Background

Calculus: The Hessian Matrix


For a given twice differentiable function f(x) we can define the
Hessian Matrix:

H(x) = ∇²f(x), with entries [H(x)]ij = ∂²f(x)/(∂xi ∂xj), for i, j = 1, …, n

The Hessian matrix is always symmetric, i.e. H(x) = H(x)^T;

The eigenvalues λ of H(x) describe the shape (curvature) of the surface f(x):
λ < 0 corresponds to hills;
λ > 0 corresponds to valleys;
λ = 0 corresponds to zero curvature.
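A small sketch (in NumPy; the finite-difference helper `hessian_fd` is our own naming, not from the lecture) that estimates the Hessian numerically and reads the curvature off its eigenvalues, here for the saddle surface f(x) = x1² − x2²:

```python
import numpy as np

def hessian_fd(f, x, h=1e-5):
    """Finite-difference estimate of the Hessian of a scalar function f at x."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i = np.zeros(n); e_i[i] = h
            e_j = np.zeros(n); e_j[j] = h
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i) - f(x + e_j) + f(x)) / h**2
    return H

f = lambda x: x[0]**2 - x[1]**2          # saddle at the origin
H = hessian_fd(f, np.array([0.0, 0.0]))
lam = np.linalg.eigvalsh(0.5 * (H + H.T))  # symmetrize against round-off
print(lam)  # approximately [-2, 2]: one hill and one valley direction -> saddle
```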


Mathematical Background

Calculus: Curvature of f (x)

Remember from Calculus I, for an f(x) of a single variable:

If f''(x) > 0 the curvature of the function is upwards;
If f''(x) < 0 the curvature of the function is downwards;
If f''(x) = 0 this is an inflection point;
If f'(x) = 0 this is a critical point.

Multivariable extension:
If all λ > 0 of H(x), the curvature of the function is upwards;
If all λ < 0 of H(x), the curvature of the function is downwards;
For an indefinite Hessian: saddle point;
For ∇f(x) = 0: stationary point.


Mathematical Background

Calculus: Taylor’s Series

Taylor’s series expansion of f (x) about xk :

f(x) ≈ f(xk) + ∇f(xk)^T d + (1/2) d^T H(xk) d

where d = x − xk is the direction vector joining the points x and xk.

Examine the vicinity of xk :


The term ∇f (xk )T d is called the directional derivative;
If ∇f (xk )T d < 0 the function locally decreases in the direction d;
d^T H(xk) d > 0 for d ≠ 0 if the matrix H(xk) is positive definite.
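For a quadratic f the second-order Taylor model is exact, which makes the expansion easy to verify numerically (a sketch; the particular f, expansion point, and direction are our own example, not from the lecture):

```python
import numpy as np

# f(x) = x1^2 + 2 x2^2, with gradient and Hessian known in closed form
f = lambda x: x[0]**2 + 2 * x[1]**2
grad = lambda x: np.array([2 * x[0], 4 * x[1]])
H = np.array([[2.0, 0.0], [0.0, 4.0]])

xk = np.array([1.0, 1.0])
d = np.array([-0.1, -0.05])

# second-order Taylor model about xk
model = f(xk) + grad(xk) @ d + 0.5 * d @ H @ d

print(f(xk + d))     # true value; matches the model since f is quadratic
print(model)
print(grad(xk) @ d)  # -0.4 < 0: the directional derivative says f decreases along d
```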


Mathematical Background

Numerical: Newton’s Method

Suppose that we want to solve a system c(x) = 0 with n equations and n variables. Consider a first-order approximation about xk:

c(x) ≈ c(xk ) + ∇c(xk )(x − xk ) = 0

Solving for x:
xk+1 = xk − ∇c(xk )−1 c(xk )

Newton Step: d = xk+1 − xk
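The iteration above can be sketched in a few lines (a minimal sketch; the example system and the function names are ours, not from the lecture). Note that the code solves the linear system J d = c rather than forming the inverse explicitly:

```python
import numpy as np

def newton(c, jac, x0, tol=1e-10, max_iter=50):
    """Newton's method for c(x) = 0: x_{k+1} = x_k - J(x_k)^{-1} c(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(jac(x), c(x))  # solve J step = c instead of inverting J
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# example system: x1^2 + x2^2 = 2 and x1 - x2 = 0, with solution (1, 1)
c = lambda x: np.array([x[0]**2 + x[1]**2 - 2, x[0] - x[1]])
jac = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
print(newton(c, jac, [2.0, 0.5]))  # converges to approximately [1, 1]
```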



Mathematical Background

Vectorial Algebra

Consider matrices A and B and vectors x and y; the following properties hold:
(A + B)^T = A^T + B^T
(AB)^T = B^T A^T
x^T y = y^T x

For functions f(x), g(x) and h(x):

∇(Ax) = A
∇(x^T A) = A
∇(x^T A x) = (A + A^T) x
∇f(g(x)) = ∇g(x)^T ∇g f(g(x))
∇(f(x) g(x)) = g(x)∇f(x) + f(x)∇g(x)
∇(g(x)^T h(x)) = ∇g(x)^T h(x) + ∇h(x)^T g(x)
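The identity ∇(x^T A x) = (A + A^T)x can be sanity-checked against central finite differences (a sketch; the random matrix, seed, and tolerance are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
x = rng.normal(size=3)

# analytic gradient of the quadratic form x^T A x
g_analytic = (A + A.T) @ x

# central finite-difference gradient, component by component
h = 1e-6
g_fd = np.zeros(3)
for i in range(3):
    e = np.zeros(3); e[i] = h
    g_fd[i] = ((x + e) @ A @ (x + e) - (x - e) @ A @ (x - e)) / (2 * h)

print(np.max(np.abs(g_analytic - g_fd)))  # tiny: the identity checks out
```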

Basic Concepts

Nonlinear Programming

minimize_x f(x)
subject to h(x) = 0
g(x) ≤ 0
where:
f (x): objective function
h(x): equality constraints
g(x): inequality constraints
x: decision variables

Feasible Region:
FR = { x | h(x) = 0, g(x) ≤ 0 }


Basic Concepts

What is a solution?

Global Minimum:
A point x* is a global minimizer if f(x*) ≤ f(x) ∀ x ∈ FR

Local Minimum:
A point x* is a local minimizer if f(x*) ≤ f(x) ∀ x ∈ N = { x | ‖x − x*‖ ≤ δ }



Basic Concepts

Convexity: Convex Regions

A set is convex if it contains all the line segments connecting any pair of its
points.

A feasible region F R is convex only if:

All equality constraints h(x) = 0 are linear;


All inequality constraints g(x) ≤ 0 are convex;


Basic Concepts

Convexity: Convex Functions

A function f(x) is called convex on the interval [x1, x2] if:

f(αx1 + (1 − α)x2) ≤ αf(x1) + (1 − α)f(x2), α ∈ [0, 1]

The function is convex if its Hessian is positive semidefinite (and strictly convex if it is positive definite).


Basic Concepts

Convexity: Convex Optimization Problem

An optimization problem is convex only if:


All equality constraints h(x) = 0 are linear;
All inequality constraints g(x) ≤ 0 are convex;
The objective function f (x) is convex

For a convex problem any local minimum is a global minimum!!!



Unconstrained Optimization

Unconstrained Optimization
General form:
minimize_x f(x)

What conditions characterize an optimal solution x*?

Necessary conditions for an unconstrained local minimum:
∇f(x*) = 0
p^T ∇²f(x*) p ≥ 0 for all p ∈ Rn (positive semi-definite)

Sufficient conditions for an unconstrained local minimum:
∇f(x*) = 0
p^T ∇²f(x*) p > 0 for all p ∈ Rn (positive definite)

For smooth functions, the contours of f(x) around the optimum are elliptical: the Taylor series in n dimensions about x* is dominated by the quadratic term p^T ∇²f(x*) p.

Unconstrained Optimization

Optimality Conditions

For a function f (x) of a single variable x:

First order Necessary condition: f 0 (x ∗ ) = 0


Second Order Sufficient Condition: f 00 (x ∗ ) > 0

For a multivariable function f (x):

First order Necessary condition: ∇f (x ∗ ) = 0


Second Order Necessary Condition: d^T ∇²f(x*) d ≥ 0, ∀ d ≠ 0;
Second Order Sufficient Condition: d^T ∇²f(x*) d > 0, ∀ d ≠ 0;


Unconstrained Optimization

What characterizes a solution?

f(x) = x²: minimum; f'(0) = 0, f''(0) = 2
f(x) = x³: saddle; f'(0) = 0, f''(0) = 0
f(x) = −x²: maximum; f'(0) = 0, f''(0) = −2
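The three cases above can be folded into a tiny second-derivative test (a sketch; `classify` is our own helper name, not from the lecture):

```python
def classify(f1, f2):
    """Classify a critical point from the values f'(x*) and f''(x*)."""
    assert f1 == 0, "not a critical point"
    if f2 > 0:
        return "minimum"
    if f2 < 0:
        return "maximum"
    return "inconclusive (zero curvature)"

# f(x) = x^2:  f'(0) = 0, f''(0) = 2
print(classify(0, 2))    # minimum
# f(x) = x^3:  f'(0) = 0, f''(0) = 0 -> test is inconclusive (saddle/inflection at 0)
print(classify(0, 0))    # inconclusive (zero curvature)
# f(x) = -x^2: f'(0) = 0, f''(0) = -2
print(classify(0, -2))   # maximum
```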

Unconstrained Optimization

What characterizes a solution?

f(x) = −x1² − x2²: maximum; λ1 = −2, λ2 = −2
f(x) = x1² − x2²: saddle; λ1 = 2, λ2 = −2


Unconstrained Optimization

Types of Stationary Points


Unconstrained Optimization

Contours of Stationary Points


Unconstrained Optimization

Proof: First Order Condition

A descent direction d = x − x ∗ is such that: ∇f (x ∗ )T d < 0


For a sufficiently small step (‖d‖ ≤ δ), f(x) can be approximated by:

f(x) ≈ f(x*) + ∇f(x*)^T d

Suppose that x* is a local minimum but ∇f(x*) ≠ 0. Take the direction d = −∇f(x*):

f(x) ≈ f(x*) − ∇f(x*)^T ∇f(x*)

The subtracted term ∇f(x*)^T ∇f(x*) is strictly positive unless ∇f(x*) = 0, so f would decrease along d, contradicting local minimality. Hence at a local minimum:

∇f(x*) = 0


Unconstrained Optimization

Proof: Second Order Condition

Since ∇f(x*) = 0, the function f(x) can be approximated by:

f(x) ≈ f(x*) + (1/2) d^T H(x*) d

The function increases (f(x) > f(x*)) for any d ≠ 0 only if

d^T H(x*) d > 0

So if the Hessian is positive definite: x* is a local minimum!


Constrained Optimization

Constrained Optimization Problem

General form:
minimize_x f(x)
subject to hi (x) = 0, i = 1, . . . , m
gj (x) ≤ 0, j = 1, . . . , r

where:
f (x): objective function
h(x): equality constraints
g(x): inequality constraints

Sufficient Conditions for a Unique Optimum:

Convexity: f(x) and g(x) convex, and h(x) linear.


Constrained Optimization

Equality Constrained Optimization Problem


Consider the equality constrained mathematical program:

minimize_x f(x)
subject to hi(x) = 0, i = 1, …, m

First Order Necessary Conditions:


Stationarity (linear dependence of gradients):

∇f(x) + Σi=1..m λi ∇hi(x) = 0

Feasibility:
h(x) = 0


Constrained Optimization

Sketch of Proof: First Order Condition

An improvement direction must lie in the region R3 = R1 ∩ R2:

Descent region: R1 = { d | ∇f(x*)^T d < 0 }
Feasible region: R2 = { d | ∇h(x*)^T d = 0 }

The region R3 is empty only if ∇f(x*) and ∇h(x*) are parallel.

For multiple equality constraints, the gradients must be linearly dependent.


Constrained Optimization

General Constrained Optimization Problem


Consider the general constrained mathematical program:

minimize_x f(x)
subject to hi(x) = 0, i = 1, …, m
gj(x) ≤ 0, j = 1, …, r

First Order Necessary Conditions:

Stationarity (linear dependence of gradients):

∇f(x) + Σi=1..m λi ∇hi(x) + Σj=1..r µj ∇gj(x) = 0

Feasibility:
h(x) = 0; g(x) ≤ 0

Complementarity:
µj gj(x) = 0, µj ≥ 0


Constrained Optimization

Active Set: Active and Inactive Constraints


For a given solution x* we can define:
Active constraints: gj(x*) = 0, for which µj > 0;
Inactive constraints: gj(x*) < 0, for which µj = 0.

The set A of active constraints is called the Active Set. If we knew the active set in advance we could write the

First Order Necessary Conditions:
Stationarity (linear dependence of gradients):

∇f(x) + Σi=1..m λi ∇hi(x) + Σj∈A µj ∇gj(x) = 0

Feasibility:
h(x) = 0; gA(x) = 0

The linear dependence of gradients holds only for the active constraints.



Constrained Optimization

Sketch of Proof: First Order Condition


An improvement direction must lie in the region R3 = R1 ∩ R2:

Descent region: R1 = { d | ∇f(x*)^T d < 0 }
Feasible region: R2 = { d | ∇g(x*)^T d ≤ 0 }

If all g(x) are inactive, the first order conditions reduce to:

∇f(x*) = 0

On the other hand, if one constraint is active, the region R3 is empty only if
∇f(x*) and ∇g(x*) are parallel and point in opposite directions:

∇f(x*) + µ∇g(x*) = 0, µ ≥ 0

For multiple active constraints, the gradients must be linearly dependent.


Constrained Optimization

Geometrical Interpretation
Optimal solution for an inequality constrained problem:

min f(x)  s.t. g(x) ≤ 0

∇f(x*) + µ1 ∇g1(x*) = 0, µ1 ≥ 0

Analogy: a ball rolling down a valley, pinned by a fence.
Note: balance of forces (∇f, ∇g1).


Constrained Optimization

Geometrical Interpretation
Optimal solution for a general constrained problem:

Problem: min f(x)  s.t. h(x) = 0, g(x) ≤ 0

∇f(x*) + λ∇h(x*) + µ1 ∇g1(x*) = 0, µ1 ≥ 0

Analogy: a ball rolling on a rail, pinned by fences.
Balance of forces: ∇f, ∇g1, ∇h.


Constrained Optimization

First Order Optimality Conditions - KKT

Stationarity:

∇f(x) + Σi=1..m λi ∇hi(x) + Σj=1..r µj ∇gj(x) = 0

Feasibility:

hi(x) = 0 ; gj(x) ≤ 0

Complementarity:

µj gj(x) = 0
µj ≥ 0


Constrained Optimization

Karush Kuhn Tucker Conditions - KKT

Stationarity:

∇f (x) + ∇h(x)T λ + ∇g(x)T µ = 0

Feasibility:

h(x) = 0 ; g(x) ≤ 0
Complementarity (elementwise):

µj gj(x) = 0 for each j
µ ≥ 0
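As a concrete check of the three KKT conditions (our own small example, not from the lecture): for min x1² + x2² subject to 1 − x1 − x2 ≤ 0, the candidate x* = (0.5, 0.5) with multiplier µ = 1 satisfies stationarity, feasibility, and complementarity:

```python
import numpy as np

# min x1^2 + x2^2  s.t.  g(x) = 1 - x1 - x2 <= 0; candidate x* = (0.5, 0.5), mu = 1
x = np.array([0.5, 0.5])
mu = 1.0
grad_f = 2 * x                       # gradient of the objective at x*
grad_g = np.array([-1.0, -1.0])      # gradient of the constraint
g = 1 - x[0] - x[1]                  # constraint value (active: g = 0)

stationarity = grad_f + mu * grad_g  # should be the zero vector
print(stationarity)                                # [0. 0.]
print(g <= 1e-12, abs(mu * g) < 1e-12, mu >= 0)    # feasibility, complementarity, sign
```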

Constrained Optimization

The Lagrangean Function


Lagrangean function: L(x, λ, µ) = f(x) + λ^T h(x) + µ^T g(x)

Stationarity:
∇x L(x, λ, µ) = 0

Feasibility:
∇λ L(x, λ, µ) = 0 ; ∇µ L(x, λ, µ) ≤ 0

Complementarity:
µj gj(x) = 0 for each j
µ ≥ 0


Constrained Optimization

Sufficient Conditions for Optimality

For a KKT point (x*, λ*, µ*) it is not enough to examine H(x*); we need to consider the curvature of the constraints as well:

Positive curvature in all non-zero allowable directions:

p^T ∇²L(x*, λ*, µ*) p > 0

where the allowable directions p satisfy:

∇h(x*)^T p = 0
∇gA(x*)^T p ≤ 0

Note: if there are no allowable directions, the solution is defined entirely by the active constraints, and the second order condition is vacuously satisfied!


Constrained Optimization

The Role of the Lagrangean Multipliers


Consider a small direction vector d such that:

d^T ∇f(x) = δf,  d^T ∇hi(x) = δhi,  d^T ∇gj(x) = δgj

If we premultiply the stationarity condition by d^T, it follows that:

δf + Σi=1..m λi δhi + Σj=1..r µj δgj = 0

The Lagrangean multipliers therefore represent sensitivities of the objective to the constraints:

λi = −∂f(x)/∂hi(x),  µj = −∂f(x)/∂gj(x)

Also known as:
Shadow Prices; Dual Variables; Lagrange Multipliers.


Constrained Optimization

Procedure for determining Karush-Kuhn-Tucker point



Let J = { j | gj(x) = 0 } be the index set of active inequalities.
1. Assume no active inequalities: set J = ∅, µj = 0;
2. Solve the KKT conditions for x, λ, and µ:

∇f(x) + Σi=1..m λi ∇hi(x) + Σj∈J µj ∇gj(x) = 0
hi(x) = 0, i = 1, …, m
gj(x) = 0, j ∈ J

3. If gj(x) ≤ 0 and µj ≥ 0 ∀ j, then STOP: solution found!
4. If any gj(x) > 0 (violated) or any µj < 0 (wrong sign), then:

Drop the active constraint with the wrong multiplier sign (largest magnitude);
Add to J the most violated constraint;
Return to Step 2.


Constrained Optimization

Active Set Strategy

Consider the following example:


minimize_x f(x) = (1/2)(x1² + x2²) − 3x1 − x2
subject to g1(x): −x1 + x2 ≤ 0
g2(x): x1 − (1/2)x2 − 2 ≤ 0
g3(x): −x2 ≤ 0

Then:

∇f(x) = [x1 − 3, x2 − 1]^T,  ∇g1(x) = [−1, 1]^T,  ∇g2(x) = [1, −1/2]^T,  ∇g3(x) = [0, −1]^T

Let's start with an empty Active Set: µ1 = µ2 = µ3 = 0.


Constrained Optimization

Active Set Strategy


Iteration 1: all constraints inactive, KKT: ∇f(x) = 0

[x1 − 3, x2 − 1]^T = 0  ⇒  (x1, x2) = (3, 1)

g1(x) = −2 (OK), g2(x) = 1/2 (violated), and g3(x) = −1 (OK)

Iteration 2: µ1 = µ3 = 0, µ2 ≥ 0
KKT conditions: ∇f(x) + µ2 ∇g2(x) = 0; g2(x) = 0

x1 − 3 + µ2 = 0
x2 − 1 − (1/2)µ2 = 0
x1 − (1/2)x2 − 2 = 0
⇒  (x1, x2, µ2) = (2.6, 1.2, 0.4)

For which all constraints are satisfied!

Check the linear dependence of the gradients.
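Iteration 2 amounts to a single linear solve, which can be reproduced numerically (a sketch in NumPy of the 3-by-3 KKT system for x1, x2, and µ2):

```python
import numpy as np

# KKT system with g2 active:
#   x1 - 3 + mu2        = 0
#   x2 - 1 - (1/2) mu2  = 0
#   x1 - (1/2) x2 - 2   = 0
A = np.array([[1.0,  0.0,  1.0],
              [0.0,  1.0, -0.5],
              [1.0, -0.5,  0.0]])
b = np.array([3.0, 1.0, 2.0])
x1, x2, mu2 = np.linalg.solve(A, b)
print(x1, x2, mu2)  # (2.6, 1.2, 0.4): mu2 >= 0, all constraints satisfied
```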


Final Remarks

Final Remarks

An optimum point x* is necessarily a stationary point;

For an unconstrained problem, a stationary point is such that ∇f(x*) = 0;
For a constrained problem, a stationary point satisfies the KKT conditions;
For sufficient conditions, check the curvature at the solution;
Finding the Active Set is the main issue in constrained optimization.


Final Remarks

Further Readings

Numerical Optimization. Nocedal and Wright (1999): Chapters 1 and 15;
Optimization of Chemical Processes. Edgar and Himmelblau (1995): Chapters 4 and 8;
Linear and Nonlinear Programming. Luenberger (2008): Chapter 1;
Nonlinear Programming. Biegler (2010): Chapters 1, 2 and 4;
Systematic Methods of Chemical Process Design. Biegler, Grossmann and Westerberg (1997): Chapter 9.


Final Remarks

Help, Comments, Suggestions


Personal Information

Just in case, contact:

Marcelo Escobar Aragão
(Teaching Assistant)

Department of Chemical Engineering
Federal University of Rio Grande do Sul
Porto Alegre - RS
Phone: 55 51 3308 4163
Mobile: 55 51 9684 4213
email: escobar029@hotmail.com

Presented by: Marcelo Escobar, 22/08/2011


Thank You for your attention!!!!

