
OPTIMIZATION TECHNIQUES IN MANUFACTURING

Optimization literature

Journals:

1. Engineering Optimization
2. ASME Journal of Mechanical Design
3. AIAA Journal
4. ASCE Journal of Structural Engineering
5. Computers and Structures
6. International Journal for Numerical Methods in Engineering
7. Structural Optimization
8. Journal of Optimization Theory and Applications
9. Computers and Operations Research
10. Operations Research and Management Science

Optimization

Basic Information

• Instructor: Assoc. Professor Pelin Gundes (http://atlas.cc.itu.edu.tr/~gundes/)
• E-mail: gundesbakir@yahoo.com
• Office Hours: TBD by email appointment
• Website: http://atlas.cc.itu.edu.tr/~gundes/teaching/Optimization.htm
• Lecture Time: Wednesday 13:00 - 16:00
• Lecture Venue: M 2180

Optimization

Course Schedule:

1. Introduction to Optimization
2. Classical Optimization Techniques
3. Linear programming and the Simplex method
4. Nonlinear programming - One Dimensional Minimization Methods
5. Nonlinear programming - Unconstrained Optimization Techniques
6. Nonlinear programming - Constrained Optimization Techniques
7. Global Optimization Methods - Genetic algorithms
8. Global Optimization Methods - Simulated Annealing
9. Global Optimization Methods - Coupled Local Minimizers

Optimization literature

Textbooks:

1. Nocedal J. and Wright S.J., Numerical Optimization, Springer Series in Operations Research, Springer, 636 pp., 1999.
2. Spall J.C., Introduction to Stochastic Search and Optimization: Estimation, Simulation and Control, Wiley, 595 pp., 2003.
3. Chong E.K.P. and Zak S.H., An Introduction to Optimization, Second Edition, John Wiley & Sons, New York, 476 pp., 2001.
4. Rao S.S., Engineering Optimization: Theory and Practice, John Wiley & Sons, New York, 903 pp., 1996.
5. Gill P.E., Murray W. and Wright M.H., Practical Optimization, Elsevier, 401 pp., 2004.
6. Goldberg D.E., Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, Reading, Mass., 1989.
7. Boyd S. and Vandenberghe L., Convex Optimization, Cambridge University Press, 2004 (available at http://www.stanford.edu/~boyd/cvxbook/).

Optimization

Course Prerequisite:

• Familiarity with MATLAB. If you are not familiar with MATLAB, please visit:

  http://www.ece.ust.hk/~palomar/courses/ELEC692Q/lecture%2006%20-%20cvx/matlab_crashcourse.pdf

  http://www.ece.ust.hk/~palomar/courses/ELEC692Q/lecture%2006%20-%20cvx/official_getting_started.pdf

Optimization

• 70% attendance is required!

• Grading:
  – Homeworks: 15%
  – Mid-term projects: 40%
  – Final Project: 45%

1. Introduction

• Operational research (in the UK) or operations research (OR, in the US), known as yöneylem araştırması in Turkish, is an interdisciplinary branch of mathematics which uses methods like:
  – mathematical modeling
  – statistics
  – algorithms
  to arrive at optimal or good decisions in complex problems which are concerned with optimizing the maxima (profit, faster assembly line, greater crop yield, higher bandwidth, etc.) or minima (cost, loss, lowering of risk, etc.) of some objective function.

• The eventual intention behind using operations research is to elicit a best possible solution to a problem mathematically, which improves or optimizes the performance of the system.

Optimization

• There will also be lab sessions for MATLAB exercises!

1. Introduction

• Optimization is the act of obtaining the best result under given circumstances.

• Optimization can be defined as the process of finding the conditions that give the maximum or minimum of a function.

• The optimum seeking methods are also known as mathematical programming techniques and are generally studied as a part of operations research.

• Operations research is a branch of mathematics concerned with the application of scientific methods and techniques to decision making problems and with establishing the best or optimal solutions.

1. Introduction

Historical development

• Isaac Newton (1642-1727) (The development of differential calculus methods of optimization)

• Joseph-Louis Lagrange (1736-1813) (Calculus of variations, minimization of functionals, method of optimization for constrained problems)

• Augustin-Louis Cauchy (1789-1857) (Solution by direct substitution, steepest descent method for unconstrained optimization)

1. Introduction

Historical development

• Leonhard Euler (1707-1783) (Calculus of variations, minimization of functionals)

• Gottfried Leibnitz (1646-1716) (Differential calculus methods of optimization)

1. Introduction

• Mathematical optimization problem:

  minimize f0(x)
  subject to gi(x) ≤ bi,  i = 1, ..., m

• f0 : R^n → R: the objective function
• x = (x1, ..., xn): the design variables (the unknowns of the problem; they must be linearly independent)
• gi : R^n → R (i = 1, ..., m): the inequality constraints

• The problem is a constrained optimization problem.
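As a minimal MATLAB sketch of this standard form (toy data of my own; fmincon requires the Optimization Toolbox), take f0(x) = x1² + x2² with a single constraint x1 + x2 ≥ 1. fmincon expects inequalities as c(x) ≤ 0, so gi(x) ≤ bi is rewritten as gi(x) - bi ≤ 0:

f0 = @(x) x(1)^2 + x(2)^2;                 % objective f0 : R^2 -> R
nonlcon = @(x) deal(1 - x(1) - x(2), []);  % inequality g(x) <= 0, no equalities
x0 = [0; 0];                               % starting guess for the design variables
x = fmincon(f0, x0, [], [], [], [], [], [], nonlcon);
% expected minimizer: x = [0.5; 0.5]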

1. Introduction

Historical development

• George Bernard Dantzig (1914-2005) (Linear programming and the Simplex method (1947))

• Richard Bellman (1920-1984) (Principle of optimality in dynamic programming problems)

• Harold William Kuhn (1925-) (Necessary and sufficient conditions for the optimal solution of programming problems, game theory)

1. Introduction

• If a point x* corresponds to the minimum value of the function f(x), the same point also corresponds to the maximum value of the negative of the function, -f(x). Thus optimization can be taken to mean minimization, since the maximum of a function can be found by seeking the minimum of the negative of the same function.
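A one-line illustration of this duality in MATLAB (the function is my own toy example; fminsearch is a base-MATLAB derivative-free minimizer):

f = @(x) -(x - 2).^2 + 3;                      % function to maximize
[xstar, negfmax] = fminsearch(@(x) -f(x), 0);  % minimize -f instead
fmax = -negfmax;                               % recovers the maximum: 3 at x = 2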

1. Introduction

Historical development

• Albert William Tucker (1905-1995) (Necessary and sufficient conditions for the optimal solution of programming problems, nonlinear programming, game theory; his PhD student was John Nash)

• John von Neumann (1903-1957) (Game theory)

1. Introduction

Constraints

• Behaviour constraints: Constraints that represent limitations on the behaviour or performance of the system are termed behaviour or functional constraints.

• Side constraints: Constraints that represent physical limitations on the design variables, such as manufacturing limitations.

1. Introduction

Constraint Surface

• For illustration purposes, consider an optimization problem with only inequality constraints gj(X) ≤ 0. The set of values of X that satisfy the equation gj(X) = 0 forms a hypersurface in the design space and is called a constraint surface.

• Note that this is an (n-1)-dimensional subspace, where n is the number of design variables. The constraint surface divides the design space into two regions: one in which gj(X) < 0 and the other in which gj(X) > 0.

• Thus the points lying on the hypersurface satisfy the constraint gj(X) = 0 critically, whereas the points lying in the region where gj(X) > 0 are infeasible or unacceptable, and the points lying in the region where gj(X) < 0 are feasible or acceptable.

• In the figure below, a hypothetical two-dimensional design space is depicted where the infeasible region is indicated by hatched lines. A design point that lies on one or more constraint surfaces is called a bound point, and the associated constraint is called an active constraint.

• Design points that do not lie on any constraint surface are known as free points.

• Depending on whether a particular design point belongs to the acceptable or unacceptable region, it can be identified as one of the following four types:
  – Free and acceptable point
  – Free and unacceptable point
  – Bound and acceptable point
  – Bound and unacceptable point

1. Introduction

• Conventional design procedures aim at finding an acceptable or adequate design which merely satisfies the functional and other requirements of the problem.

• In general, there will be more than one acceptable design, and the purpose of optimization is to choose the best one of the many acceptable designs available.

• Thus a criterion has to be chosen for comparing the different alternative acceptable designs and for selecting the best one.

• The criterion with respect to which the design is optimized, when expressed as a function of the design variables, is known as the objective function.

1. Introduction

• The locus of all points satisfying f(X) = c = constant forms a hypersurface in the design space, and for each value of c there corresponds a different member of a family of surfaces. These surfaces, called objective function surfaces, are shown in a hypothetical two-dimensional design space in the figure below.

1. Introduction

• In civil engineering, the objective is usually taken as the minimization of cost.

• In mechanical engineering, the maximization of mechanical efficiency is the obvious choice of an objective function.

• In aerospace structural design problems, the objective function for minimization is generally taken as weight.

• In some situations, there may be more than one criterion to be satisfied simultaneously. An optimization problem involving multiple objective functions is known as a multiobjective programming problem.

1. Introduction

• Once the objective function surfaces are drawn along with the constraint surfaces, the optimum point can be determined without much difficulty. The main problem is that as the number of design variables exceeds two or three, the constraint and objective function surfaces become too complex even to visualize, and the problem has to be solved purely as a mathematical problem.

1. Introduction

• With multiple objectives there arises a possibility of conflict, and one simple way to handle the problem is to construct an overall objective function as a linear combination of the conflicting multiple objective functions.

• Thus, if f1(X) and f2(X) denote two objective functions, construct a new (overall) objective function for optimization as:

  f(X) = α1 f1(X) + α2 f2(X)

  where α1 and α2 are constants whose values indicate the relative importance of one objective function relative to the other.

Example

Design a uniform column of tubular section to carry a compressive load P = 2500 kgf for minimum cost. The column is made of a material that has a yield stress of 500 kgf/cm², a modulus of elasticity (E) of 0.85×10⁶ kgf/cm², and a density (ρ) of 0.0025 kgf/cm³. The length of the column is 250 cm. The stress induced in the column should be less than the buckling stress as well as the yield stress. The mean diameter of the column is restricted to lie between 2 and 14 cm, and columns with thicknesses outside the range 0.2 to 0.8 cm are not available in the market. The cost of the column includes material and construction costs and can be taken as 5W + 2d, where W is the weight in kilograms force and d is the mean diameter of the column in centimeters.

Example

• The design variables are the mean diameter (d) and the tube thickness (t):

  X = (x1, x2) = (d, t)

• The objective function to be minimized is given by:

  f(X) = 5W + 2d = 5ρlπdt + 2d = 9.82 x1 x2 + 2 x1

• The behaviour constraints can be expressed as:

  stress induced ≤ yield stress
  stress induced ≤ buckling stress

• The induced stress is given by:

  induced stress = σi = P/(πdt) = 2500/(π x1 x2)

• The buckling stress for a pin-connected column is given by:

  buckling stress = σb = (Euler buckling load)/(cross-sectional area) = π²EI/(l² πdt)

  where I is the second moment of area of the cross section of the column. With outer diameter do = d + t and inner diameter di = d - t:

  I = (π/64)(do⁴ − di⁴)
    = (π/64)(do² + di²)(do + di)(do − di)
    = (π/64)[(d+t)² + (d−t)²][(d+t) + (d−t)][(d+t) − (d−t)]
    = (π/8) d t (d² + t²)
    = (π/8) x1 x2 (x1² + x2²)

• Thus, the behaviour constraints can be restated as:

  g1(X) = 2500/(π x1 x2) − 500 ≤ 0

  g2(X) = 2500/(π x1 x2) − π²(0.85×10⁶)(x1² + x2²)/(8(250)²) ≤ 0

• The side constraints are given by:

  2 ≤ d ≤ 14
  0.2 ≤ t ≤ 0.8

• The side constraints can be expressed in standard form as:

  g3(X) = −x1 + 2 ≤ 0
  g4(X) = x1 − 14 ≤ 0
  g5(X) = −x2 + 0.2 ≤ 0
  g6(X) = x2 − 0.8 ≤ 0

• For a graphical solution, the constraint surfaces are to be plotted in a two-dimensional design space where the two axes represent the two design variables x1 and x2. To plot the first constraint surface, we have:

  g1(X) = 2500/(π x1 x2) − 500 ≤ 0,  i.e.  x1 x2 ≥ 1.593

• Thus the curve x1 x2 = 1.593 represents the constraint surface g1(X) = 0. This curve can be plotted by finding several points on the curve. The points on the curve can be found by giving a series of values to x1 and finding the corresponding values of x2 that satisfy the relation x1 x2 = 1.593, as shown in the table below:

  x1:   2       4       6       8      10      12      14
  x2:   0.7965  0.3983  0.2655  0.199   0.1593  0.1328  0.114
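A quick MATLAB check of this table (a sketch; the values follow directly from x2 = 1.593/x1):

x1 = 2:2:14;          % the series of values given to x1
x2 = 1.593 ./ x1;     % 0.7965  0.3983  0.2655  0.1991  0.1593  0.1328  0.1138
disp([x1; x2])        % matches the table above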

Example

• The infeasible region, represented by g1(X) > 0 or x1 x2 < 1.593, is shown by hatched lines in the figure. The points from the table are plotted and a curve P1Q1 passing through all of them is drawn.

• Similarly, the second constraint g2(X) ≤ 0 can be expressed as:

  x1 x2 (x1² + x2²) ≥ 47.3

• The points lying on the constraint surface g2(X) = 0 can be obtained as follows (these points are plotted as curve P2Q2):

  x1:   2     4      6      8       10      12      14
  x2:   2.41  0.716  0.219  0.0926  0.0473  0.0274  0.0172

Example

• The plotting of the side constraints is simple, since they represent straight lines.

• After plotting all six constraints, the feasible region is determined as the bounded area ABCDEA.

• Next, the contours of the objective function are to be plotted before finding the optimum point. For this, we plot the curves given by:

  f(X) = 9.82 x1 x2 + 2 x1 = c = constant

  for a series of values of c. By giving different values to c, the contours of f can be plotted with the help of the following points:

• For f(X) = 9.82 x1 x2 + 2 x1 = 50.0:

  x2:   0.1    0.2    0.3    0.4   0.5   0.6   0.7
  x1:  16.77  12.62  10.10   8.44  7.24  6.33  5.64

• For f(X) = 9.82 x1 x2 + 2 x1 = 40.0:

  x2:   0.1    0.2    0.3    0.4   0.5   0.6   0.7
  x1:  13.40  10.10   8.08   6.75  5.79  5.06  4.51

• For f(X) = 9.82 x1 x2 + 2 x1 = 31.58 (passing through the corner point C):

  x2:   0.1    0.2    0.3    0.4   0.5   0.6   0.7
  x1:  10.57   7.96   6.38   5.33  4.57  4.00  3.56

• For f(X) = 9.82 x1 x2 + 2 x1 = 26.53 (passing through the corner point B):

  x2:   0.1    0.2    0.3    0.4   0.5   0.6   0.7
  x1:   8.88   6.69   5.36   4.48  3.84  3.36  2.99
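These tables can be regenerated in MATLAB (a sketch with my own loop and variable names). On a contour f(X) = c, solving for x1 gives x1 = c/(9.82 x2 + 2):

x2 = 0.1:0.1:0.7;
for c = [50 40 31.58 26.53]
    x1 = c ./ (9.82*x2 + 2);                          % e.g. c = 50, x2 = 0.1 gives x1 = 16.77
    fprintf('c = %5.2f:', c); fprintf(' %6.2f', x1); fprintf('\n');
end
% Points on curve P2Q2, where g2(X) = 0, i.e. x1*x2*(x1^2 + x2^2) = 47.3,
% can be found numerically, here with fzero for each fixed x1:
x1 = 2:2:14;
x2q = arrayfun(@(a) fzero(@(t) a*t*(a^2 + t^2) - 47.3, 0.5), x1);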

Example

• These contours are shown in the figure below, and it can be seen that the objective function cannot be reduced below a value of 26.53 (corresponding to point B) without violating some of the constraints. Thus, the optimum solution is given by point B with d* = x1* = 5.44 cm and t* = x2* = 0.293 cm, with fmin = 26.53.
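The graphical optimum can be cross-checked numerically. The sketch below uses fmincon (Optimization Toolbox) with the constraints g1 and g2 and the side bounds from the preceding slides; the starting point is my own choice:

f = @(x) 9.82*x(1)*x(2) + 2*x(1);                     % cost f = 5W + 2d
g = @(x) [2500/(pi*x(1)*x(2)) - 500;                  % g1: yield stress limit
          2500/(pi*x(1)*x(2)) ...
          - pi^2*0.85e6*(x(1)^2 + x(2)^2)/(8*250^2)]; % g2: buckling stress limit
nonlcon = @(x) deal(g(x), []);                        % inequality constraints only
lb = [2 0.2];  ub = [14 0.8];                         % side constraints
[xopt, fopt] = fmincon(f, [6 0.4], [], [], [], [], lb, ub, nonlcon);
% should reproduce the graphical optimum: xopt ≈ [5.44 0.293], fopt ≈ 26.53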

Examples

Design of civil engineering structures:
• variables: width and height of member cross-sections
• constraints: limit stresses, maximum and minimum dimensions
• objective: minimum cost or minimum weight

Analysis of statistical data and building empirical models from measurements:
• variables: model parameters
• constraints: physical upper and lower bounds for model parameters
• objective: minimum prediction error

Classification of optimization problems

Classification based on:

• Permissible values of the design variables
  – Integer programming problems
  – Real-valued programming problems

• Deterministic nature of the variables
  – Stochastic programming problem
  – Deterministic programming problem

Classification of optimization problems

Classification based on:

• Constraints
  – Constrained optimization problem
  – Unconstrained optimization problem

• Nature of the design variables
  – Static optimization problems
  – Dynamic optimization problems

• Separability of the functions
  – Separable programming problems
  – Non-separable programming problems

• Number of objective functions
  – Single-objective programming problem
  – Multiobjective programming problem

Classification of optimization problems

Classification based on:

• Physical structure of the problem
  – Optimal control problems
  – Non-optimal control problems

• Nature of the equations involved
  – Nonlinear programming problem
  – Geometric programming problem
  – Quadratic programming problem
  – Linear programming problem

Geometric Programming

• A geometric programming problem (GMP) is one in which the objective function and constraints are expressed as posynomials in X.
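As a concrete illustration (my own example, not from the slides): a posynomial is a sum of terms with positive coefficients and arbitrary real exponents, and can be coded directly in MATLAB:

% A hypothetical posynomial in X = (x1, x2):
% f(X) = 5*x1*x2^(-2) + 2*x1^(0.5)*x2 + 7
f = @(x1, x2) 5*x1.*x2.^(-2) + 2*x1.^(0.5).*x2 + 7;
f(1, 2)   % sample evaluation: 5/4 + 2*2 + 7 = 12.25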

Optimal Control Problem

• An optimal control (OC) problem is a mathematical programming problem involving a number of stages, where each stage evolves from the preceding stage in a prescribed manner.

• It is usually described by two types of variables: the control (design) variables and the state variables. The control variables define the system and govern the evolution of the system from one stage to the next, and the state variables describe the behaviour or status of the system at any stage.

• The problem is to find a set of control or design variables such that the total objective function (also known as the performance index) over all stages is minimized, subject to a set of constraints on the control and state variables.

• An OC problem can be stated as follows:

  Find X which minimizes f(X) = Σi fi(xi, yi),  i = 1, ..., l

  subject to the constraints

  qi(xi, yi) + yi = y_{i+1},  i = 1, 2, ..., l
  gj(xj) ≤ 0,  j = 1, 2, ..., l
  hk(yk) ≤ 0,  k = 1, 2, ..., l

  where xi is the ith control variable, yi is the ith state variable, and fi is the contribution of the ith stage to the total objective function; gj, hk, and qi are functions of xj, yk, and xi and yi, respectively, and l is the total number of stages.

Quadratic Programming Problem

• A quadratic programming problem is a nonlinear programming problem with a quadratic objective function and linear constraints. It is usually formulated as follows:

  F(X) = c + Σi qi xi + Σi Σj Qij xi xj,  i, j = 1, ..., n

  subject to

  Σi aij xi ≤ bj,  j = 1, 2, ..., m
  xi ≥ 0,  i = 1, 2, ..., n

  where c, qi, Qij, aij, and bj are constants.

Integer Programming Problem

• If some or all of the design variables x1, x2, ..., xn of an optimization problem are restricted to take on only integer (or discrete) values, the problem is called an integer programming problem.

• If all the design variables are permitted to take any real value, the optimization problem is called a real-valued programming problem.
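A minimal sketch of the quadratic programming form above using quadprog (Optimization Toolbox; the data are hypothetical). quadprog minimizes 0.5*x'*H*x + f'*x, so the formulation above maps to H = 2Q and f = q, with the constant c dropped since it does not affect the minimizer:

Q = [2 0.5; 0.5 1];   q = [-4; -6];   % quadratic and linear coefficients
A = [1 1];  b = 3;                    % linear constraint: x1 + x2 <= 3
lb = [0; 0];                          % nonnegativity: xi >= 0
x = quadprog(2*Q, q, A, b, [], [], lb, []);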

Stochastic Programming Problem

• A stochastic programming problem is an optimization problem in which some or all of the parameters (design variables and/or preassigned parameters) are probabilistic (nondeterministic or stochastic).

• In other words, stochastic programming deals with the solution of optimization problems in which some of the variables are described by probability distributions.

Separable Programming Problem

• A function f(X) is said to be separable if it can be expressed as the sum of n single-variable functions f1(x1), f2(x2), ..., fn(xn), that is,

  f(X) = Σi fi(xi),  i = 1, ..., n

• A separable programming problem is one in which the objective function and the constraints are separable and can be expressed in standard form as:

  Find X which minimizes f(X) = Σi fi(xi)

  subject to

  gj(X) = Σi gij(xi) ≤ bj,  j = 1, 2, ..., m

  where bj is a constant.
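Separability matters computationally: when the constraints do not couple the variables (for example, simple bounds), each single-variable term can be minimized independently. A sketch with hypothetical fi, using base MATLAB's fminbnd:

f1 = @(x) (x - 3).^2;       % first separable term
f2 = @(x) exp(x) - 2*x;     % second separable term
x1 = fminbnd(f1, 0, 10);    % minimize f1 alone on [0, 10]: x1 = 3
x2 = fminbnd(f2, 0, 10);    % minimize f2 alone on [0, 10]: x2 = log(2)
fmin = f1(x1) + f2(x2);     % the overall minimum is the sum of the parts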

Multiobjective Programming Problem

• A multiobjective programming problem can be stated as follows:

  Find X which minimizes f1(X), f2(X), ..., fk(X)

  subject to

  gj(X) ≤ 0,  j = 1, 2, ..., m

  where f1, f2, ..., fk denote the objective functions to be minimized simultaneously.
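Tying this to the weighted-sum construction f(X) = α1 f1(X) + α2 f2(X) introduced earlier, a minimal MATLAB sketch with two hypothetical objectives:

f1 = @(x) (x(1) - 1)^2 + x(2)^2;     % first objective
f2 = @(x) x(1)^2 + (x(2) - 2)^2;     % second objective
a1 = 0.7;  a2 = 0.3;                 % relative importance weights
f  = @(x) a1*f1(x) + a2*f2(x);       % overall (scalarized) objective
xstar = fminsearch(f, [0 0]);        % expected minimizer: [0.7 0.6]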

Review of mathematics

Concepts from linear algebra:

Positive definiteness

• Test 1: A matrix A will be positive definite if all its eigenvalues are positive; that is, all the values of λ that satisfy the determinantal equation

  |A − λI| = 0

  should be positive. Similarly, the matrix A will be negative definite if all its eigenvalues are negative.
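Test 1 is easy to apply in MATLAB with eig (a sketch; the sample matrix is my own):

A = [4 1; 1 3];                % a sample symmetric matrix
lambda = eig(A);               % roots of the determinantal equation |A - lambda*I| = 0
isPosDef = all(lambda > 0);    % positive definite: all eigenvalues positive
isNegDef = all(lambda < 0);    % negative definite: all eigenvalues negative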
