DyEEP Class 12 - Optimization, Part 2


20/05/2020

Process Design

Topic: Optimization in Chemical Process Design

Part 2

Course 2020

M. S. Diaz DyEEP 2020

Aspects of Optimization in Design

5. Sub-optimization

[Diagram: the optima of Sub-plant A, Subset B, and Sub-plant C versus the overall optimum]

Optimizing sub-problems usually does not lead to the overall optimum.


Aspects of Optimization in Design

5. Sub-optimization
But large projects often have options that can be handled as incremental problems.

[Diagram: a minimum project plus incremental Options A, B, C, and D]

It sometimes makes sense to break these out as separate sub-projects and evaluate them independently.

Optimization of a Single Decision Variable


$$\min\ z = f(x) \qquad \text{s.t.}\quad x \ge x^L,\ \ x \le x^U$$

1. Solve $f'(x) = \frac{df}{dx} = 0$ and check $\frac{d^2 f}{dx^2} > 0$ for a minimum; each solution of $f'(x) = 0$ is a local stationary point.

2. The minimum is the lowest stationary point (with bound constraints, the minimum may also lie at $x^L$ or $x^U$, so the bounds should be checked as well).

3. If f is convex, a local minimum is also a global minimum.

4. If f(x) is discontinuous, the value of z on either side of the discontinuity must also be found.

[Plots: z vs. x for discontinuous objective functions, showing the values of z on either side of each discontinuity]
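As a minimal sketch of this procedure (the function f and its bounds are illustrative assumptions, not from the lecture), SciPy's bounded scalar minimizer can locate the interior minimum, and the bound values can then be compared explicitly:

```python
# Minimal sketch: bounded single-variable minimization.
# f, x_lo, x_hi are illustrative choices, not from the lecture.
from scipy.optimize import minimize_scalar

def f(x):
    return (x - 2.0) ** 2 + 1.0  # convex, so the local minimum is global

x_lo, x_hi = 0.0, 5.0  # bounds x^L <= x <= x^U

res = minimize_scalar(f, bounds=(x_lo, x_hi), method="bounded")

# Compare the interior candidate with the bound values, since with
# bound constraints the minimum may lie at x^L or x^U.
candidates = [(res.x, res.fun), (x_lo, f(x_lo)), (x_hi, f(x_hi))]
x_star, z_star = min(candidates, key=lambda c: c[1])
print(f"minimum z = {z_star:.4f} at x = {x_star:.4f}")
```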


Newton's Method
Single-variable function: approximate f with a quadratic function

$$f(x) \approx f(x^k) + f'(x^k)(x - x^k) + \frac{f''(x^k)(x - x^k)^2}{2!}$$

Set $\frac{df}{dx} = 0$: find the zero of the derivative to determine stationary points:

$$0 = f'(x^k) + f''(x^k)(x^{k+1} - x^k)$$

$$x^{k+1} = x^k - \frac{f'(x^k)}{f''(x^k)}$$

Advantages: local quadratic convergence

Disadvantages: requires f' and f''; if f'' → 0, convergence is slow; the iteration can oscillate
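A minimal sketch of this iteration; the test function, its derivatives, and the starting point are illustrative assumptions:

```python
# Minimal sketch of Newton's method for a 1-D stationary point:
# x_{k+1} = x_k - f'(x_k) / f''(x_k)
# Test function f(x) = x**4 - 3*x**2 + x is an illustrative choice.

def fp(x):   # f'(x)
    return 4 * x**3 - 6 * x + 1

def fpp(x):  # f''(x)
    return 12 * x**2 - 6

x = 2.0  # starting guess
for k in range(50):
    step = fp(x) / fpp(x)
    x -= step
    if abs(step) < 1e-10:  # converge on step length
        break

# Check f''(x) > 0 to confirm the stationary point is a local minimum.
print(f"stationary point x = {x:.6f}, f''(x) = {fpp(x):.3f}")
```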

Optimization of a Single Decision Variable

What if the form of f(x) is not known, or is too complex to obtain the derivative f'?

Use search methods (direct or indirect), e.g. golden section search (sketched below) or the finite difference Newton method.
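A minimal golden section sketch, assuming f is unimodal on the bracket; the test function and bracket are illustrative choices:

```python
# Minimal sketch of golden section search on [a, b];
# assumes f is unimodal on the interval.
import math

def golden_section(f, a, b, tol=1e-8):
    invphi = (math.sqrt(5) - 1) / 2  # 1/phi ~ 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while (b - a) > tol:
        if fc < fd:              # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                    # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return (a + b) / 2

x_star = golden_section(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0)
print(f"minimum near x = {x_star:.6f}")
```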


Finite Difference Newton Method

Single-variable function: approximate the derivatives by finite differences

k 1 k ( f ( x k  h )  f ( x k  h )) /( 2h )
(x x )
( f ( x k  h )  2 f ( x k )  f ( x k  h )) / h 2

Disadvantages: the choice of h; additional evaluations of f at each iteration
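A minimal sketch of this derivative-free iteration; the test function, the step h, and the starting point are illustrative assumptions:

```python
# Minimal sketch of the finite difference Newton iteration.
# Central differences approximate f' and f''.

def f(x):
    return (x - 2.0) ** 2 + 1.0  # illustrative test function

x, h = 5.0, 1e-4
for k in range(100):
    fp  = (f(x + h) - f(x - h)) / (2 * h)          # central difference f'
    fpp = (f(x + h) - 2 * f(x) + f(x - h)) / h**2  # central difference f''
    step = fp / fpp
    x -= step
    if abs(step) < 1e-8:
        break
print(f"minimum near x = {x:.6f}")
```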

Unconstrained Optimization with Two Variables


$$\min\ z = f(x_1, x_2)$$

We can visualize the parameter space in 2-D and plot contour lines of z.

[Contour plot over (x1, x2) showing a global minimum and a local minimum, with contour values 10-50]


General Two-Dimensional Search Algorithm

1. Find a feasible solution (x1, x2)

2. Determine a search direction (e.g. Newton's method)

3. Determine the step lengths Δx1, Δx2 (line search)

4. Evaluate z = f(x1 + Δx1, x2 + Δx2)

5. Repeat steps 2-4 until the convergence criteria are satisfied (optimum found); a sketch of this loop follows.
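A minimal sketch of this loop, using the steepest descent direction in place of a Newton direction and a backtracking line search for step 3; the objective, gradient, and starting point are illustrative assumptions:

```python
# Minimal sketch of the general 2-D search loop: steepest descent
# direction plus a backtracking (Armijo) line search.
import numpy as np

def f(x):
    return (x[0] - 1) ** 2 + 10 * (x[1] + 2) ** 2

def grad(x):
    return np.array([2 * (x[0] - 1), 20 * (x[1] + 2)])

x = np.array([5.0, 5.0])           # step 1: starting point
for k in range(200):
    d = -grad(x)                   # step 2: search direction (steepest descent)
    if np.linalg.norm(d) < 1e-8:   # step 5: convergence criterion
        break
    t = 1.0                        # step 3: backtracking line search
    while f(x + t * d) > f(x) - 1e-4 * t * (d @ d):
        t *= 0.5
    x = x + t * d                  # step 4: take the step and re-evaluate
print(f"optimum near x = {x}, z = {f(x):.6f}")
```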

Constrained Optimization with Two Variables. Convexity
We can also plot constraint boundaries:

[Plots: a convex feasible region and a non-convex feasible region in (x1, x2), each showing points xa and xb joined by a line segment]

The feasible region is convex if

$$x = \alpha x_a + (1 - \alpha)\, x_b \in FR \quad \forall\ x_a, x_b \in FR,\ \ 0 < \alpha < 1$$

Convex problems can be solved to the global optimum (non-convex problems are prone to local minima).


Convex Optimization Problem

- Objective function: convex
- Equality constraints: linear
- Inequality constraints g(x) ≤ 0: convex

A local optimum of a convex optimization problem is a global optimum.
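Written in standard form (restating the three conditions above):

```latex
% Standard form of a convex optimization problem:
% convex objective, linear equalities, convex inequalities.
\begin{aligned}
\min_{x}\quad     & f(x)        && f \ \text{convex} \\
\text{s.t.}\quad  & Ax = b      && \text{(linear equality constraints)} \\
                  & g(x) \le 0  && g \ \text{convex}
\end{aligned}
```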

Most Common Problems

[Plots: three contour maps over (x1, x2), one per failure mode below]

- Convergence to a local optimum: stochastic methods help
- Slow convergence: indirect & gradient-based methods help
- Non-convex feasible region: stochastic methods help
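As one concrete version of the stochastic remedy for local optima, a minimal multi-start sketch (random restarts feeding a local gradient-based solver); the multimodal test function is an illustrative choice:

```python
# Minimal multi-start sketch: random restarts + a local solver,
# one common stochastic remedy for convergence to local optima.
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Himmelblau's function: multimodal, with four global minima
    return (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2

rng = np.random.default_rng(0)
best = None
for _ in range(20):
    x0 = rng.uniform(-5, 5, size=2)       # random restart point
    res = minimize(f, x0, method="BFGS")  # local gradient-based search
    if best is None or res.fun < best.fun:
        best = res
print(f"best local optimum found: x = {best.x}, z = {best.fun:.4f}")
```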


Multivariable Optimization

- Harder to visualize than with 2 variables
- Same procedure
- Same issues of initialization, convergence, and local / global optima
- Numerical methods for multivariable optimization are the subject of Operations Research

The ChE design course is not, and should not be, a substitute for a proper course on Optimization!

So what follows is just an overview.

Multivariable Optimization: Linear Programming (LP)

[Plot: a convex feasible region defined by linear constraints in (x1, x2), with linear objective contours z]

- A set of linear constraints always defines a convex FR
- If the objective function is also linear and xi ≥ 0 ∀ i, then the problem can be written as an LP


Linear Programming

[Plot: LP feasible region in (x1, x2) with linear objective contours z]

- Solve to global optimum
- Optimum must always be on the boundary
- Optimum always lies at an intersection of constraints; these constraints are "active": g(x) = 0

Linear Programming

z = 120 x1 + 100 x2

Introduce slack & surplus variables:

5 x1 + 3 x2 ≤ 100   →   5 x1 + 3 x2 + S1 = 100
x1 + x2 ≤ 30        →   x1 + x2 + S2 = 30
x1 ≥ 0              →   x1 - S3 = 0
x2 ≥ 0              →   x2 - S4 = 0

- Solve the set of equalities to get a feasible solution
- Improve the objective by searching the vertices of the FR (e.g. using Dantzig's Simplex Algorithm, not the same as simplex search); a solver sketch follows
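A minimal sketch solving this example with a modern LP solver; we assume z is to be maximized (the slide does not state the direction), so the negated objective is minimized:

```python
# Minimal sketch solving the LP above with SciPy's HiGHS backend.
from scipy.optimize import linprog

c = [-120, -100]                 # minimize -z  <=>  maximize z
A_ub = [[5, 3],                  # 5*x1 + 3*x2 <= 100
        [1, 1]]                  #   x1 +   x2 <= 30
b_ub = [100, 30]
bounds = [(0, None), (0, None)]  # x1 >= 0, x2 >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(f"x1 = {res.x[0]:.2f}, x2 = {res.x[1]:.2f}, z = {-res.fun:.2f}")
```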


Linear Programming: Problems

Degenerate LP problems will not solve:

[Plots: three degenerate cases: unbounded FR; no FR (infeasible); objective function parallel to a constraint]

Nonlinear Programming (NLP)

When the objective function and/or constraints are non-linear:

- Successive Linear Programming (SLP)
  - Linearize f(x), g(x), h(x) at the initial point
  - Solve the LP for an improved solution and repeat
  - The LP solution may be outside the FR, so move back to the FR & repeat
  - No guarantee of convergence or global optimality
- Successive Quadratic Programming (SQP); a solver sketch follows this list
  - Approximate f(x) as quadratic and the constraints as linear at each iteration
  - Convergence is generally better than with SLP
- Reduced Gradient Method (e.g. MINOS)
  - Solve a sequence of sub-problems with linearized constraints
  - Uses a quasi-Newton method in the reduced space of the independent variables
  - Reduces to the simplex algorithm if all functions are linear
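A minimal SQP-type sketch using SciPy's SLSQP method (a sequential least squares QP solver); the objective, constraints, and starting point are illustrative assumptions, not the lecture's example:

```python
# Minimal sketch of an SQP-type solve: quadratic objective,
# one linear equality and one linear inequality constraint.
import numpy as np
from scipy.optimize import minimize

def f(x):
    return (x[0] - 1) ** 2 + (x[1] - 2.5) ** 2

constraints = [
    {"type": "ineq", "fun": lambda x: 6 - x[0] - 2 * x[1]},  # x1 + 2*x2 <= 6
    {"type": "eq",   "fun": lambda x: x[0] - x[1] + 1},      # x1 - x2 + 1 = 0
]
bounds = [(0, None), (0, None)]  # x1, x2 >= 0

res = minimize(f, x0=np.array([2.0, 0.0]), method="SLSQP",
               bounds=bounds, constraints=constraints)
print(f"x = {res.x}, z = {res.fun:.4f}")
```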


Nonlinear Programming (NLP)


SQP vs. GRG

SQP:
- Fewer iterations
- Better when gradients require numerical estimation
- Best for < 50 variables
- Best for highly non-linear problems
- Use, e.g., in optimizing a process simulation

GRG:
- Less computation per iteration
- Better when analytical expressions for the gradients are known
- Better for a large number of variables
- Best for a large number of linear constraints
- Use, e.g., in optimizing a large MS Excel model
