Optimization Methods: Introduction and Basic Concepts

Historical Development and Model Building

Introduction
In this lecture, the historical development of optimization methods is reviewed. Apart from the major developments, some recently developed novel approaches, such as goal programming for multi-objective optimization, simulated annealing, genetic algorithms, and neural network methods, are briefly mentioned, tracing their origins. Engineering applications of optimization with different modeling approaches are then surveyed, giving a broad picture of the multitude of applications of optimization techniques.

Historical Development
The existence of optimization methods can be traced to the days of Newton, Lagrange, and Cauchy. The development of differential calculus methods for optimization was possible because of the contributions of Newton and Leibniz to calculus. The foundations of the calculus of variations, which deals with the minimization of functionals, were laid by Bernoulli, Euler, Lagrange, and Weierstrass. The method of optimization for constrained problems, which involves the addition of unknown multipliers, became known by the name of its inventor, Lagrange. Cauchy made the first application of the steepest descent method to solve unconstrained optimization problems. By the middle of the twentieth century, high-speed digital computers made implementation of complex optimization procedures possible and stimulated further research on newer methods. Spectacular advances followed, producing a massive literature on optimization techniques. This advancement also resulted in the emergence of several well-defined new areas in optimization theory.

Some of the major developments in the area of numerical methods of optimization are outlined here with a few milestones.

• Development of the simplex method by Dantzig in 1947 for linear programming problems.
• The enunciation of the principle of optimality by Bellman in 1957 for dynamic programming problems.

• Work by Kuhn and Tucker in 1951 on the necessary and sufficient conditions for the optimal solution of programming problems, which laid the foundation for later research in non-linear programming.
• The contributions of Zoutendijk and Rosen to nonlinear programming during the early 1960s, which have been very significant.
• Work of Carroll and of Fiacco and McCormick, which made it possible to solve many difficult problems using the well-known techniques of unconstrained optimization.
• Geometric programming, developed in the 1960s by Duffin, Zener, and Peterson.
• Gomory's pioneering work in integer programming, one of the most exciting and rapidly developing areas of optimization, since most real-world applications fall under this category of problems.
• Stochastic programming techniques, developed by Dantzig and by Charnes and Cooper, who solved problems by assuming the design parameters to be independent and normally distributed.

The necessity to optimize more than one objective or goal while satisfying the physical limitations led to the development of multi-objective programming methods. Goal programming is a well-known technique for solving specific types of multi-objective optimization problems. Goal programming was originally proposed for linear problems by Charnes and Cooper in 1961. The foundation of game theory was laid by von Neumann in 1928, and since then the technique has been applied to solve several mathematical, economic and military problems. Only during the last few years has game theory been applied to solve engineering problems.

Simulated annealing, genetic algorithms, and neural network methods represent a new class of mathematical programming techniques that have come into prominence during the last decade. Simulated annealing is analogous to the physical process of annealing of metals and glass. Genetic algorithms are search techniques based on the mechanics of natural selection and natural genetics. Neural network methods are based on solving the problem using the computing power of a network of interconnected 'neuron' processors.

Engineering applications of optimization

To indicate the widespread scope of the subject, some typical applications in different engineering disciplines are given below.

• Design of civil engineering structures such as frames, foundations, bridges, towers, chimneys and dams for minimum cost.
• Design of minimum-weight structures for earthquake, wind and other types of random loading.
• Optimal plastic design of frame structures (e.g., to determine the ultimate moment capacity for minimum weight of the frame).
• Design of water resources systems for obtaining maximum benefit.
• Design of optimum pipeline networks for the process industry.
• Design of aircraft and aerospace structures for minimum weight.
• Finding the optimal trajectories of space vehicles.
• Optimum design of linkages, cams, gears, machine tools, and other mechanical components.
• Selection of machining conditions in metal-cutting processes for minimizing the product cost.
• Design of material handling equipment such as conveyors, trucks and cranes for minimizing cost.
• Design of pumps, turbines and heat transfer equipment for maximum efficiency.
• Optimum design of electrical machinery such as motors, generators and transformers.
• Optimum design of electrical networks.
• Optimum design of control systems.
• Optimum design of chemical processing equipment and plants.
• Selection of a site for an industry.
• Planning of maintenance and replacement of equipment to reduce operating costs.
• Inventory control.
• Allocation of resources or services among several activities to maximize the benefit.
• Controlling the waiting and idle times in production lines to reduce the cost of production.
• Planning the best strategy to obtain maximum profit in the presence of a competitor.

• Designing the shortest route to be taken by a salesperson visiting various cities in a single tour.
• Optimal production planning, controlling and scheduling.
• Analysis of statistical data and building empirical models to obtain the most accurate representation of the statistical phenomenon.

This list, of course, is far from complete.

Art of Modeling: Model Building


Development of an optimization model can be divided into five major phases.
• Data collection
• Problem definition and formulation
• Model development
• Model validation and evaluation of performance
• Model application and interpretation

Data collection may be time-consuming but is the fundamental basis of the model-building process. The availability and accuracy of data can have considerable effect on the accuracy of the model and on the ability to evaluate the model.

The problem definition and formulation includes the steps of identifying the decision variables, formulating the model objective(s), and formulating the model constraints. In performing these steps, the following are to be considered.
• Identify the important elements that the problem consists of.
• Determine the number of independent variables, the number of equations required to describe the system, and the number of unknown parameters.
• Evaluate the structure and complexity of the model.
• Select the degree of accuracy required of the model.

Model development includes the mathematical description, parameter estimation, input development, and software development. The model development phase is an iterative process that may require returning to the model definition and formulation phase.
The model validation and evaluation phase checks the performance of the model as a whole. Model validation consists of validating the assumptions and parameters of the model. The performance of the model is to be evaluated using standard performance measures such as the root mean squared error and the R² value. A sensitivity analysis should be performed to test the model inputs and parameters. This phase is also an iterative process and may require returning to the model definition and formulation phase. One important aspect of this process is that in most cases the data used in the formulation process should be different from those used in validation. Another point to keep in mind is that no single validation process is appropriate for all models.
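As a concrete illustration of these performance measures, the following minimal Python sketch computes the RMSE and R² of hypothetical observed and predicted values (the data and function names are illustrative, not from the text):

```python
# Minimal sketch of the validation metrics mentioned above (RMSE and R^2),
# assuming observed and model-predicted values are available as arrays.
import numpy as np

def rmse(observed, predicted):
    """Root mean squared error between observations and model predictions."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return np.sqrt(np.mean((observed - predicted) ** 2))

def r_squared(observed, predicted):
    """Coefficient of determination: fraction of variance explained by the model."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    ss_res = np.sum((observed - predicted) ** 2)        # residual sum of squares
    ss_tot = np.sum((observed - observed.mean()) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

# Hypothetical validation data (not from the text):
obs = [2.1, 3.9, 6.2, 7.8]
pred = [2.0, 4.1, 6.0, 8.0]
print(rmse(obs, pred), r_squared(obs, pred))
```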
Model application and implementation include the use of the model in the particular area of the solution and the translation of the results into operating instructions, issued in understandable form to the individuals who will administer the recommended system.
Different modeling techniques have been developed to meet the requirements of different types of optimization problems. Major categories of modeling approaches are: classical optimization techniques, linear programming, nonlinear programming, geometric programming, dynamic programming, integer programming, stochastic programming, evolutionary algorithms, etc. These modeling approaches will be discussed in subsequent modules of this course.

Optimization Problem and Model Formulation

Introduction

In the previous discussion we studied the evolution of optimization methods and their engineering applications. A brief introduction was also given to the art of modeling. In this lecture we will study the optimization problem, its various components, and its formulation as a mathematical programming problem.

Basic components of an optimization problem:

An objective function expresses the main aim of the model, which is either to be minimized or maximized. For example, in a manufacturing process, the aim may be to maximize the profit or minimize the cost. In comparing the data predicted by a user-defined model with the observed data, the aim is to minimize the total deviation of the model predictions from the observed data. In designing a bridge pier, the goal is to maximize the strength while minimizing the size.

A set of unknowns or variables controls the value of the objective function. In the manufacturing problem, the variables may include the amounts of different resources used or the time spent on each activity. In the data-fitting problem, the unknowns are the parameters of the model. In the pier design problem, the variables are the shape and dimensions of the pier.

A set of constraints allows the unknowns to take on certain values but excludes others. In the manufacturing problem, one cannot spend a negative amount of time on any activity, so one constraint is that the "time" variables be non-negative. In the pier design problem, one would probably want to limit the breadth of the base and to constrain its size.

The optimization problem is then to find values of the variables that minimize or maximize
the objective function while satisfying the constraints.

Objective Function

As already stated, the objective function is the mathematical function one wants to maximize or minimize, subject to certain constraints. Many optimization problems have a single objective function. (When they do not, they can often be reformulated so that they do.) The two exceptions are:

• No objective function. In some cases (for example, the design of integrated circuit layouts), the goal is to find a set of variables that satisfies the constraints of the model. The user does not particularly want to optimize anything, and so there is no reason to define an objective function. This type of problem is usually called a feasibility problem.

• Multiple objective functions. In some cases, the user may wish to optimize a number of different objectives concurrently. For instance, in the optimal design of a panel of a door or window, it would be good to minimize weight and maximize strength simultaneously. Usually, the different objectives are not compatible; the variables that optimize one objective may be far from optimal for the others. In practice, problems with multiple objectives are reformulated as single-objective problems by either forming a weighted combination of the different objectives or by treating some of the objectives as constraints, as sketched below.
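A minimal sketch of the weighted-combination reformulation, with two hypothetical objectives for the panel example (the functions and weights are illustrative, not from the text):

```python
# Two hypothetical objectives (weight, and negated strength so that both are
# minimized) collapsed into one objective by a weighted sum.
def weight(x):
    return 2.0 * x[0] + 3.0 * x[1]       # hypothetical weight model

def neg_strength(x):
    return -(5.0 * x[0] + x[1])          # maximize strength == minimize its negative

def combined(x, w1=0.7, w2=0.3):
    """Single objective formed as a weighted sum of the two goals."""
    return w1 * weight(x) + w2 * neg_strength(x)

print(combined([1.0, 2.0]))
```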

Statement of an optimization problem

An optimization or a mathematical programming problem can be stated as follows:

To find

$$X = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}$$

which minimizes f(X)    (1.1)

Subject to the constraints

$$g_i(X) \le 0, \qquad i = 1, 2, \ldots, m$$

$$l_j(X) = 0, \qquad j = 1, 2, \ldots, p$$

where X is an n-dimensional vector called the design vector, f(X) is called the objective function, and $g_i(X)$ and $l_j(X)$ are known as inequality and equality constraints, respectively. The number of variables n and the number of constraints m and/or p need not be related in any way. This type of problem is called a constrained optimization problem.
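As a concrete illustration of form (1.1), the following minimal sketch solves a small constrained problem with scipy.optimize.minimize. Note that SciPy's 'ineq' convention is fun(x) ≥ 0, so a constraint g(X) ≤ 0 from the statement above is passed as −g(X) ≥ 0 (the objective and constraint functions are illustrative, not from the text):

```python
# Minimal constrained-optimization sketch matching the form of (1.1).
from scipy.optimize import minimize

f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2   # objective f(X)
g1 = lambda x: x[0] + x[1] - 2.0                       # inequality g1(X) <= 0
l1 = lambda x: x[0] - x[1]                             # equality   l1(X) = 0

res = minimize(
    f, x0=[0.0, 0.0], method="SLSQP",
    constraints=[{"type": "ineq", "fun": lambda x: -g1(x)},  # SciPy wants >= 0
                 {"type": "eq", "fun": l1}],
)
print(res.x, res.fun)   # optimum at (1, 1) for this toy problem
```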

If the locus of all points satisfying f(X) = c (a constant) is considered, it forms a family of surfaces in the design space called the objective function surfaces. When these are drawn together with the constraint surfaces, as shown in Fig 1, the optimum point (here a maximum) can be identified. This is possible graphically only when the number of design variables is two. When there are three or more design variables, because of the complexity of the objective function surface, the problem has to be solved as a mathematical problem and this visualization is not possible.

Fig 1: Objective function contours f = C1, C2, C3, C4, ..., Cn (with C1 > C2 > C3 > C4 > ... > Cn) drawn with the constraint surfaces; the optimum point lies where the best contour meets the feasible region.

Optimization problems can be defined without any constraints as well.

To find

$$X = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}$$

which minimizes f(X)    (1.2)

Such problems are called unconstrained optimization problems. The field of unconstrained
optimization is quite a large and prominent one, for which a lot of algorithms and software
are available.

Variables
These are essential. If there are no variables, we cannot define the objective function and the problem constraints. In many practical problems, one cannot choose the design variables arbitrarily; they have to satisfy certain specified functional and other requirements.

Constraints

Constraints are not essential. It has been argued that almost all problems really do have constraints. For example, any variable denoting the "number of objects" in a system can only be useful if it is less than the number of elementary particles in the known universe! In practice though, answers that make good sense in terms of the underlying physical or economic criteria can often be obtained without putting constraints on the variables.

Design constraints are restrictions that must be satisfied to produce an acceptable design.

Constraints can be broadly classified as:

1) Behavioral or Functional constraints: these represent limitations on the behavior or performance of the system.

2) Geometric or Side constraints: these represent physical limitations on design variables such as availability, fabricability, and transportability.

For example, for the retaining wall design shown in Fig. 2, the base width W cannot be taken smaller than a certain value due to stability requirements. The depth D below the ground level depends on the soil pressure coefficients Ka and Kp. Since these constraints depend on the performance of the retaining wall, they are called behavioral constraints. The number of anchors provided along a cross section, Ni, cannot be any real number but has to be a whole number. Similarly, the thickness of reinforcement used is controlled by supplies from the manufacturer. Hence these are side constraints.

Fig. 2: Retaining wall design, with base width W, depth D below ground level, and Ni anchors along a cross section.

Constraint Surfaces

Consider the optimization problem presented in eq. (1.1) with only inequality constraints gi(X) ≤ 0. The set of values of X that satisfy the equation gi(X) = 0 forms a boundary surface in the design space called a constraint surface. This will be an (n−1)-dimensional subspace, where n is the number of design variables. The constraint surface divides the design space into two regions: one with gi(X) < 0 (feasible region) and the other in which gi(X) > 0 (infeasible region). The points lying on the hypersurface satisfy gi(X) = 0. The collection of all the constraint surfaces gi(X) = 0, i = 1, 2, …, m, which separates the acceptable region, is called the composite constraint surface.
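The following minimal sketch (with illustrative constraint functions, not from the text) classifies a design point as free or bound, and acceptable or unacceptable, by evaluating the gi(X) values within a tolerance:

```python
# Feasibility test implied above: a point X is acceptable if every g_i(X) <= 0,
# and "bound" when it lies on at least one constraint surface g_i(X) = 0.
def classify(x, constraints, tol=1e-8):
    values = [g(x) for g in constraints]
    feasible = all(v <= tol for v in values)
    active = [i for i, v in enumerate(values) if abs(v) <= tol]  # active constraints
    kind = "bound" if active else "free"
    return kind, ("acceptable" if feasible else "unacceptable"), active

g = [lambda x: x[0] + x[1] - 2.0,   # g1(X) <= 0
     lambda x: -x[0]]               # g2(X) <= 0
print(classify([1.0, 1.0], g))      # lies on g1: bound and acceptable
print(classify([0.5, 0.5], g))      # interior point: free and acceptable
```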

Fig 3 shows a hypothetical two-dimensional design space where the feasible region is denoted by hatched lines. Here the two-dimensional design space is bounded by straight lines, which is the case when the constraints are linear. Constraints may be nonlinear as well, and the design space will then be bounded by curves. A design point that lies on one or more constraint surfaces is called a bound point, and the associated constraint is called an active constraint. Free points are those that do not lie on any constraint surface. The design points that lie in the acceptable or unacceptable regions can be classified as follows:

1. Free and acceptable point

2. Free and unacceptable point



3. Bound and acceptable point

4. Bound and unacceptable point.

Examples of each case are shown in Fig. 3.

Fig. 3: A hypothetical two-dimensional design space with behavior constraints g1 ≤ 0 and g2 ≤ 0, a side constraint g3 ≥ 0, the feasible and infeasible regions, and examples of free acceptable, free unacceptable, bound acceptable and bound unacceptable points.

Formulation of design problems as mathematical programming problems

In mathematics, the term optimization, or mathematical programming, refers to the study of problems in which one seeks to minimize or maximize a real function by systematically choosing the values of real or integer variables from within an allowed set. This problem can be represented in the following way:

Given: a function f : A → R from some set A to the real numbers

Sought: an element x0 in A such that f(x0) ≤ f(x) for all x in A ("minimization") or such that
f(x0) ≥ f(x) for all x in A ("maximization").

Such a formulation is called an optimization problem or a mathematical programming problem (a term not directly related to computer programming, but still in use, for example, in linear programming). Many real-world and theoretical problems may be modeled in this general framework.

Typically, A is some subset of the Euclidean space Rn, often specified by a set of constraints,
equalities or inequalities that the members of A have to satisfy. The elements of A are called
candidate solutions or feasible solutions. The function f is called an objective function, or cost
function. A feasible solution that minimizes (or maximizes, if that is the goal) the objective
function is called an optimal solution. The domain A of f is called the search space.

Generally, when the feasible region or the objective function of the problem does not present convexity (refer to Module 2), there may be several local minima and maxima, where a local minimum x* is defined as a point for which there exists some δ > 0 so that for all x such that

$$\|x - x^*\| \le \delta$$

we have

$$f(x^*) \le f(x);$$

that is to say, on some region around x* all the function values are greater than or equal to the value at that point. Local maxima are defined similarly.

A large number of algorithms proposed for solving non-convex problems – including the
majority of commercially available solvers – are not capable of making a distinction between
local optimal solutions and rigorous optimal solutions, and will treat the former as the actual
solutions to the original problem. The branch of applied mathematics and numerical analysis
that is concerned with the development of deterministic algorithms that are capable of
guaranteeing convergence in finite time to the actual optimal solution of a non-convex
problem is called global optimization.

Problem formulation

Problem formulation is normally the most difficult part of the process. It is the selection of
design variables, constraints, objective function(s), and models of the discipline/design.

Selection of design variables

A design variable, which takes a numeric or binary value, is controllable from the point of view of the designer. For instance, the thickness of a structural member can be considered a design variable. Design variables can be continuous (such as the length of a cantilever beam), discrete (such as the number of reinforcement bars used in a beam), or Boolean. Design problems with continuous variables are normally solved more easily.

Design variables are often bounded, that is, they have maximum and minimum values.
Depending on the adopted method, these bounds can be treated as constraints or separately.

Selection of constraints

A constraint is a condition that must be satisfied to render the design feasible. An example of a constraint in beam design is that the resistance offered by the beam at points of loading must be equal to or greater than the weight of the structural member and the load supported. In addition to physical laws, constraints can reflect resource limitations, user requirements, or bounds on the validity of the analysis models. Constraints can be used explicitly by the solution algorithm or can be incorporated into the objective using Lagrange multipliers.

Objectives

An objective is a numerical value that is to be maximized or minimized. For example, a designer may wish to maximize profit or minimize weight. Many solution methods work only with single objectives. When using these methods, the designer normally weights the various objectives and sums them to form a single objective. Other methods allow multi-objective optimization (Module 8), such as the calculation of a Pareto front.

Models

The designer also has to choose models to relate the constraints and the objectives to the design variables. These models depend on the discipline involved. They may be empirical models, such as a regression analysis of aircraft prices; theoretical models, such as those from computational fluid dynamics; or reduced-order models of either of these. In choosing the models, the designer must trade off fidelity against the time required for analysis.

The multidisciplinary nature of most design problems complicates model choice and implementation. Often several iterations are necessary between the disciplines' analyses in order to find the values of the objectives and constraints. As an example, the aerodynamic loads on a bridge affect the structural deformation of the supporting structure. The structural deformation in turn changes the shape of the bridge and hence the aerodynamic loads. Thus, it can be considered a cyclic mechanism. Therefore, in analyzing a bridge, the aerodynamic and structural analyses must be run a number of times in turn until the loads and deformation converge.

Representation in standard form

Once the design variables, constraints, objectives, and the relationships between them have been chosen, the problem can be expressed as shown in equation (1.1).

Maximization problems can be converted to minimization problems by multiplying the objective by −1. Constraints can be reversed in a similar manner. Equality constraints can be replaced by two inequality constraints, as sketched below.
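A minimal sketch of these conversions, with illustrative functions (the names and numbers are assumptions, not from the text):

```python
# Standard-form conversions: maximization becomes minimization by negating the
# objective, and an equality h(x) = 0 becomes the pair h(x) <= 0 and -h(x) <= 0.
def to_min(objective):
    """Wrap a maximization objective as a minimization one."""
    return lambda x: -objective(x)

def equality_as_inequalities(h):
    """Replace h(x) = 0 by two inequality constraints in g(x) <= 0 form."""
    return [lambda x: h(x), lambda x: -h(x)]

profit = lambda x: 10.0 * x[0] - x[0] ** 2          # hypothetical objective
neg_profit = to_min(profit)                          # minimize -profit instead
g_pair = equality_as_inequalities(lambda x: x[0] - 3.0)
print(neg_profit([2.0]), [g([3.0]) for g in g_pair])
```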

Problem solution

The problem is normally solved by choosing appropriate techniques from those available in the field of optimization. These include gradient-based algorithms, population-based algorithms, and others. Very simple problems can sometimes be expressed linearly; in that case the techniques of linear programming are applicable.

Gradient-based methods

• Newton's method

• Steepest descent

• Conjugate gradient

• Sequential quadratic programming
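As an illustration of the gradient-based family, the following minimal steepest-descent sketch uses a fixed step size on an illustrative quadratic; practical implementations normally choose the step length by a line search:

```python
# Minimal steepest-descent sketch: repeatedly step opposite to the gradient.
import numpy as np

def steepest_descent(grad, x0, step=0.1, tol=1e-6, max_iter=1000):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        d = -grad(x)                 # descent direction
        if np.linalg.norm(d) < tol:  # stationary point reached
            break
        x = x + step * d
    return x

# f(x) = (x0 - 3)^2 + 2*(x1 + 1)^2, so grad f = [2(x0 - 3), 4(x1 + 1)]
grad_f = lambda x: np.array([2.0 * (x[0] - 3.0), 4.0 * (x[1] + 1.0)])
print(steepest_descent(grad_f, [0.0, 0.0]))  # approaches (3, -1)
```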

Population-based methods

• Genetic algorithms

• Particle swarm optimization

Other methods

• Random search

• Grid search

• Simulated annealing

Most of these techniques require a large number of evaluations of the objectives and the constraints. The disciplinary models are often very complex and can take a significant amount of time for a single evaluation. The solution can therefore be extremely time-consuming.

Many of the optimization techniques are adaptable to parallel computing. Much of the current
research is focused on methods of decreasing the computation time.

The following steps summarize the general procedure used to formulate and solve
optimization problems. Some problems may not require that the engineer follow the steps in
the exact order, but each of the steps should be considered in the process.

1) Analyze the process itself to identify the process variables and specific characteristics
of interest, i.e., make a list of all the variables.

2) Determine the criterion for optimization and specify the objective function in terms of
the above variables together with coefficients.

3) Develop via mathematical expressions a valid process model that relates the input-
output variables of the process and associated coefficients. Include both equality and
inequality constraints. Use well known physical principles such as mass balances,
energy balance, empirical relations, implicit concepts and external restrictions.
Identify the independent and dependent variables to get the number of degrees of
freedom.

4) If the problem formulation is too large in scope:

• break it up into manageable parts, or

• simplify the objective function and the model.

5) Apply a suitable optimization technique to the mathematical statement of the problem.

6) Examine the sensitivity of the result to changes in the values of the parameters in the problem and the assumptions.

Classification of Optimization Problems

Introduction

In the previous discussion we studied the basics of an optimization problem and its formulation as a mathematical programming problem. In this lecture we look at the various criteria for classification of optimization problems.

Optimization problems can be classified based on the existence of constraints, the nature of the design variables, the physical structure of the problem, the nature of the equations involved, the deterministic nature of the variables, the permissible values of the design variables, the separability of the functions, and the number of objective functions. These classifications are briefly discussed below.

Classification based on existence of constraints

Under this category, optimization problems can be classified into two groups as follows:

Constrained optimization problems, which are subject to one or more constraints.

Unconstrained optimization problems, in which no constraints exist.

Classification based on the nature of the design variables

There are two broad categories in this classification.

(i) In the first category, the objective is to find a set of design parameters that makes a prescribed function of these parameters minimum or maximum subject to certain constraints. For example, the problem of finding the minimum-weight design of the strip footing with two loads shown in Fig 1(a), subject to a limitation on the maximum settlement of the structure, can be stated as follows.

Find $X = \begin{pmatrix} b \\ d \end{pmatrix}$ which minimizes

$$f(X) = h(b, d)$$

Subject to the constraints

$$\delta_s(X) \le \delta_{\max}, \qquad b \ge 0, \qquad d \ge 0$$

where $\delta_s$ is the settlement of the footing. Such problems are called parameter or static optimization problems.
It may be noted that, for this particular example, the length of the footing (l), the loads P1 and P2, and the distance between the loads are assumed to be constant, and the required optimization is achieved by varying b and d.

(ii) In the second category of problems, the objective is to find a set of design parameters, which are all continuous functions of some other parameter, that minimizes an objective function subject to a set of constraints. If the cross-sectional dimensions of the rectangular footing are allowed to vary along its length, as shown in Fig 1(b), the optimization problem can be stated as:

Find $X(t) = \begin{pmatrix} b(t) \\ d(t) \end{pmatrix}$ which minimizes

$$f(X) = g(b(t), d(t))$$

Subject to the constraints

$$\delta_s(X(t)) \le \delta_{\max}, \qquad 0 \le t \le l$$

$$b(t) \ge 0, \qquad 0 \le t \le l$$

$$d(t) \ge 0, \qquad 0 \le t \le l$$

The length of the footing (l), the loads P1 and P2, and the distance between the loads are assumed to be constant, and the required optimization is achieved by varying b and d along the length l.

Here the design variables are functions of the length parameter t. This type of problem, where each design variable is a function of one or more parameters, is known as a trajectory or dynamic optimization problem.

Figure 1: Strip footing carrying loads P1 and P2: (a) constant cross-section of width b and depth d; (b) cross-section b(t) × d(t) varying along the length l, where t is the length parameter.

Classification based on the physical structure of the problem

Based on the physical structure, optimization problems are classified as optimal control and
non-optimal control problems.

(i) Optimal control problems

An optimal control (OC) problem is a mathematical programming problem involving a number of stages, where each stage evolves from the preceding stage in a prescribed manner. It is defined by two types of variables: the control (or design) variables and the state variables. The control variables define the system and control how one stage evolves into the next. The state variables describe the behavior or status of the system at any stage. The problem is to find a set of control variables such that the total objective function (also known as the performance index, PI) over all stages is minimized, subject to a set of constraints on the control and state variables. An OC problem can be stated as follows:
Find X which minimizes

$$f(X) = \sum_{i=1}^{l} f_i(x_i, y_i)$$

Subject to the constraints

$$q_i(x_i, y_i) + y_i = y_{i+1}, \qquad i = 1, 2, \ldots, l$$

$$g_j(x_j) \le 0, \qquad j = 1, 2, \ldots, l$$

$$h_k(y_k) \le 0, \qquad k = 1, 2, \ldots, l$$

where $x_i$ is the ith control variable, $y_i$ is the ith state variable, and $f_i$ is the contribution of the ith stage to the total objective function; $g_j$, $h_k$, and $q_i$ are functions of $x_j$, $y_k$, and $x_i$ and $y_i$, respectively; and $l$ is the total number of stages. The control and state variables $x_i$ and $y_i$ can be vectors in some cases.
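To make the stage-wise structure concrete, the following minimal sketch (with hypothetical stage functions, not from the text) propagates the state through the transition above and accumulates the performance index for a given sequence of controls:

```python
# Minimal multistage evaluation: y_{i+1} = y_i + q_i(x_i, y_i), and the
# performance index is the sum of the stage contributions f_i(x_i, y_i).
def evaluate_oc(controls, y0, f_stage, q_stage):
    """Return the performance index and state trajectory for given controls."""
    y, total, trajectory = y0, 0.0, [y0]
    for x in controls:
        total += f_stage(x, y)       # stage contribution to the objective
        y = y + q_stage(x, y)        # state transition to the next stage
        trajectory.append(y)
    return total, trajectory

f_stage = lambda x, y: x ** 2 + 0.5 * y ** 2   # hypothetical stage cost
q_stage = lambda x, y: x - 0.1 * y             # hypothetical state change
print(evaluate_oc([1.0, 0.5, 0.2], y0=0.0, f_stage=f_stage, q_stage=q_stage))
```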

(ii) Problems which are not optimal control problems are called non-optimal control
problems.

Classification based on the nature of the equations involved

Based on the nature of equations for the objec tive function and the constraints, optim ization
problems can be classified as linear, nonlin ear, geometric and quadratic programm ing
problems. The classification is very useful from a com putational point of view since m any
Optimization Methods: Introduction and Basic Concepts 19

predefined special m ethods are available for effective solution of a particular type of
problem.

(i) Linear programming problem

If the objective function and all the constraints are linear functions of the design variables, the optimization problem is called a linear programming problem (LPP). A linear programming problem is often stated in the standard form:

Find $X = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}$ which maximizes

$$f(X) = \sum_{i=1}^{n} c_i x_i$$

Subject to the constraints

$$\sum_{i=1}^{n} a_{ij} x_i = b_j, \qquad j = 1, 2, \ldots, m$$

$$x_i \ge 0, \qquad i = 1, 2, \ldots, n$$

where $c_i$, $a_{ij}$, and $b_j$ are constants.
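As a small illustration, the following sketch solves a textbook-style LP with scipy.optimize.linprog. Note that linprog minimizes, so the maximization objective is negated; this example uses inequality constraints, and all coefficients are illustrative, not from the text:

```python
# Minimal LPP sketch: maximize 3*x1 + 5*x2 subject to linear inequalities.
from scipy.optimize import linprog

c = [-3.0, -5.0]                # negate to turn maximization into minimization
A_ub = [[1.0, 0.0],             # x1          <= 4
        [0.0, 2.0],             # 2*x2        <= 12
        [3.0, 2.0]]             # 3*x1 + 2*x2 <= 18
b_ub = [4.0, 12.0, 18.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)          # optimum (2, 6) with objective value 36
```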

(ii) Nonlinear programming problem

If any of the functions among the objective and constraint functions is nonlinear, the problem is called a nonlinear programming (NLP) problem. This is the most general form of a programming problem, and all other problems can be considered as special cases of the NLP problem.

(iii) Geometric programming problem

A geometric programming (GMP) problem is one in which the objective function and constraints are expressed as posynomials in X. A function h(X) is called a posynomial (with m terms) if h can be expressed as

$$h(X) = c_1 x_1^{a_{11}} x_2^{a_{21}} \cdots x_n^{a_{n1}} + c_2 x_1^{a_{12}} x_2^{a_{22}} \cdots x_n^{a_{n2}} + \cdots + c_m x_1^{a_{1m}} x_2^{a_{2m}} \cdots x_n^{a_{nm}}$$



where $c_j$ ($j = 1, \ldots, m$) and $a_{ij}$ ($i = 1, \ldots, n$; $j = 1, \ldots, m$) are constants with $c_j > 0$ and $x_i > 0$.

Thus GMP problems can be posed as follows:

Find X which minimizes

$$f(X) = \sum_{j=1}^{N_0} c_j \left( \prod_{i=1}^{n} x_i^{a_{ij}} \right), \qquad c_j > 0, \quad x_i > 0$$

subject to

$$g_k(X) = \sum_{j=1}^{N_k} a_{jk} \left( \prod_{i=1}^{n} x_i^{q_{ijk}} \right) > 0, \qquad a_{jk} > 0, \quad x_i > 0, \quad k = 1, 2, \ldots, m$$

where $N_0$ and $N_k$ denote the number of terms in the objective function and in the kth constraint function, respectively.
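As an illustration, the following sketch evaluates a function of this posynomial form for given coefficient and exponent arrays (all values are illustrative, not from the text):

```python
# Evaluate h(X) = sum_j c_j * prod_i x_i^(a_ij) for given arrays.
import numpy as np

def h(x, c, a):
    """c holds one coefficient per term; column j of `a` holds that term's exponents."""
    x = np.asarray(x, dtype=float)
    terms = [c[j] * np.prod(x ** a[:, j]) for j in range(len(c))]
    return sum(terms)

c = np.array([2.0, 0.5])                  # c_j > 0
a = np.array([[1.0, -1.0],                # exponents a_ij (row i: variable, column j: term)
              [2.0, 0.5]])
print(h([1.5, 2.0], c, a))                # 2*x1*x2^2 + 0.5*x1^-1*x2^0.5
```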

(iv) Quadratic programming problem

A quadratic programming problem is the best-behaved nonlinear programming problem, with a quadratic objective function and linear constraints, and it is concave (for maximization problems). It can be solved by suitably modifying the linear programming techniques. It is usually formulated as follows:

$$F(X) = c + \sum_{i=1}^{n} q_i x_i + \sum_{i=1}^{n} \sum_{j=1}^{n} Q_{ij} x_i x_j$$

Subject to

$$\sum_{i=1}^{n} a_{ij} x_i = b_j, \qquad j = 1, 2, \ldots, m$$

$$x_i \ge 0, \qquad i = 1, 2, \ldots, n$$

where $c$, $q_i$, $Q_{ij}$, $a_{ij}$, and $b_j$ are constants.
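A minimal sketch of such a problem in code, solved here with SciPy's general-purpose SLSQP method rather than a dedicated QP solver (all coefficients are illustrative, not from the text):

```python
# Minimal QP sketch: quadratic objective in the form F(X) = c + q'x + x'Qx,
# with one linear inequality constraint and non-negativity bounds.
import numpy as np
from scipy.optimize import minimize

c0 = 0.0
q = np.array([-4.0, -6.0])               # linear coefficients q_i
Q = np.array([[1.0, 0.0], [0.0, 1.0]])   # quadratic coefficients Q_ij

F = lambda x: c0 + q @ x + x @ Q @ x     # matches the form of F(X) above
cons = [{"type": "ineq", "fun": lambda x: 3.0 - x[0] - x[1]}]  # x1 + x2 <= 3

res = minimize(F, x0=[0.0, 0.0], method="SLSQP", constraints=cons,
               bounds=[(0, None), (0, None)])
print(res.x, res.fun)                    # optimum near (1, 2) with F = -11
```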

Classification based on the permissible values of the decision variables

Under this classification, optimization problems can be classified as integer and real-valued programming problems.

(i) Integer programming problem

If some or all of the design variables of an optimization problem are restricted to take only integer (or discrete) values, the problem is called an integer programming problem. For example, suppose the optimization problem is to find the number of articles needed for an operation with least effort. With minimization of the effort required for the operation as the objective, the decision variables, i.e. the numbers of articles used, can take only integer values. Other restrictions on the minimum and maximum numbers of usable resources may also be imposed.

(ii) Real-valued programming problem

A real-valued programming problem is one in which it is sought to minimize or maximize a real function by systematically choosing the values of real variables from within an allowed set. When the allowed set contains only real values, it is called a real-valued programming problem.

Classification based on deterministic nature of the variables

Under this classification, optimization problems can be classified as deterministic or stochastic programming problems.

(i) Stochastic programming problem

In this type of optimization problem, some or all of the design variables are expressed probabilistically (non-deterministic or stochastic). For example, estimating the life span of a structure from probabilistic inputs such as the concrete strength and the load capacity is a stochastic programming problem, as one can only estimate the life span of the structure stochastically.

(ii) Deterministic programming problem

In this type of problem, all the design variables are deterministic.

Classification based on separability of the functions

Based on the separability of the objective and constraint functions, optimization problems can be classified as separable and non-separable programming problems.

(i) Separable programming problems

In this type of problem, the objective function and the constraints are separable. A function is said to be separable if it can be expressed as the sum of n single-variable functions $f_1(x_1), f_2(x_2), \ldots, f_n(x_n)$, i.e.

$$f(X) = \sum_{i=1}^{n} f_i(x_i)$$

and a separable programming problem can be expressed in standard form as:

Find X which minimizes

$$f(X) = \sum_{i=1}^{n} f_i(x_i)$$

subject to

$$g_j(X) = \sum_{i=1}^{n} g_{ij}(x_i) \le b_j, \qquad j = 1, 2, \ldots, m$$

where $b_j$ is a constant.

Classification based on the number of objective functions

Under this classification, optimization problems can be classified as single-objective and multi-objective programming problems.

(i) Single-objective programming problem, in which there is only a single objective function.

(ii) Multi-objective programming problem

A multiobjective programming problem can be stated as follows:

Find X which minimizes $f_1(X), f_2(X), \ldots, f_k(X)$

Subject to

$$g_j(X) \le 0, \qquad j = 1, 2, \ldots, m$$

where $f_1, f_2, \ldots, f_k$ denote the objective functions to be minimized simultaneously.

For example, in some design problems one might have to minimize the cost and weight of the structural member for economy and, at the same time, maximize the load-carrying capacity under the given constraints.

Classical and Advanced Techniques for Optimization

In the previous lectures we studied the various classifications of optimization problems. Let us now move on to the classical and advanced optimization techniques.

Classical Optimization Techniques

The classical optimization techniques are useful in finding the optimum solution, or unconstrained maxima or minima, of continuous and differentiable functions. These are analytical methods that make use of differential calculus in locating the optimum solution. The classical methods have limited scope in practical applications, as many practical problems involve objective functions that are not continuous and/or differentiable. Yet, the study of these classical techniques of optimization forms a basis for developing most of the numerical techniques that have evolved into advanced techniques more suitable to today's practical problems. These methods assume that the function is twice differentiable with respect to the design variables and that the derivatives are continuous. Three main types of problems can be handled by the classical optimization techniques, viz., single-variable functions, multivariable functions with no constraints, and multivariable functions with both equality and inequality constraints. For problems with equality constraints, the Lagrange multiplier method can be used. If the problem has inequality constraints, the Kuhn-Tucker conditions can be used to identify the optimum solution. These methods lead to a set of nonlinear simultaneous equations that may be difficult to solve. These classical methods of optimization are further discussed in Unit 2.
The other methods of optimization include

• Linear programming: studies the case in which the objective function f is linear and the set A is specified using only linear equalities and inequalities (A is the design variable space).
• Integer programming: studies linear programs in which some or all variables are constrained to take on integer values.
• Quadratic programming: allows the objective function to have quadratic terms, while the set A must be specified with linear equalities and inequalities.
• Nonlinear programming: studies the general case in which the objective function or the constraints or both contain nonlinear parts.
• Stochastic programming: studies the case in which some of the constraints depend on random variables.
• Dynamic programming: studies the case in which the optimization strategy is based on splitting the problem into smaller sub-problems.
• Combinatorial optimization: is concerned with problems where the set of feasible solutions is discrete or can be reduced to a discrete one.
• Infinite-dimensional optimization: studies the case when the set of feasible solutions is a subset of an infinite-dimensional space, such as a space of functions.
• Constraint satisfaction: studies the case in which the objective function f is constant (this is used in artificial intelligence, particularly in automated reasoning).

Most of these techniques will be discussed in subsequent modules.

Advanced Optimization Techniques

• Hill climbing
Hill climbing is a graph search algorithm in which the current path is extended with a successor node that is closer to the solution than the end of the current path.

In simple hill climbing, the first closer node is chosen, whereas in steepest ascent hill climbing all successors are compared and the one closest to the solution is chosen. Both forms fail if there is no closer node. This may happen if there are local maxima in the search space which are not solutions. Steepest ascent hill climbing is similar to best-first search, but the latter tries all possible extensions of the current path in order, whereas steepest ascent only tries one.

Hill climbing is used widely in artificial intelligence fields, for reaching a goal state from a starting node. The choice of the next node and of the starting node can be varied to give a number of related algorithms.
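A minimal steepest-ascent hill-climbing sketch over an integer neighborhood (the objective and neighborhood are illustrative, not from the text):

```python
# Steepest-ascent hill climbing: compare all neighbors, take the best one,
# and stop when no neighbor improves (a local maximum).
def hill_climb(score, start, neighbors, max_iter=1000):
    current = start
    for _ in range(max_iter):
        best = max(neighbors(current), key=score, default=current)
        if score(best) <= score(current):   # no closer node: stop
            return current
        current = best
    return current

score = lambda x: -(x - 7) ** 2             # single peak at x = 7
neighbors = lambda x: [x - 1, x + 1]        # integer neighborhood
print(hill_climb(score, start=0, neighbors=neighbors))  # -> 7
```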

• Simulated annealing
The name and inspiration come from the annealing process in metallurgy, a technique involving the heating and controlled cooling of a material to increase the size of its crystals and reduce their defects. The heat causes the atoms to become unstuck from their initial positions (a local minimum of the internal energy) and wander randomly through states of higher energy; the slow cooling gives them more chances of finding configurations with lower internal energy than the initial one.

In the simulated annealing method, each point of the search space is compared to a state of some physical system, and the function to be minimized is interpreted as the internal energy of the system in that state. The goal is therefore to bring the system, from an arbitrary initial state, to a state with the minimum possible energy.

• Genetic algorithms
A genetic algorithm (GA) is a search technique used in computer science to find approximate solutions to optimization and search problems. Specifically, it falls into the category of local search techniques and is therefore generally an incomplete search. Genetic algorithms are a particular class of evolutionary algorithms that use techniques inspired by evolutionary biology such as inheritance, mutation, selection, and crossover (also called recombination).

Genetic algorithms are typically implemented as a computer simulation in which a population of abstract representations (called chromosomes) of candidate solutions (called individuals) to an optimization problem evolves toward better solutions. Traditionally, solutions are represented in binary as strings of 0s and 1s, but different encodings are also possible. The evolution starts from a population of completely random individuals and occurs in generations. In each generation, the fitness of the whole population is evaluated, multiple individuals are stochastically selected from the current population (based on their fitness) and modified (mutated or recombined) to form a new population. The new population is then used in the next iteration of the algorithm.
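A minimal GA sketch matching this description, using binary chromosomes, fitness-weighted selection, single-point crossover, and bit-flip mutation on the toy "one-max" problem (all parameters are illustrative, not from the text):

```python
# Minimal genetic algorithm: evolve binary strings toward all ones.
import random

def ga(n_bits=20, pop_size=30, generations=100, p_mut=0.05):
    fitness = lambda c: sum(c)                      # one-max: count the 1s
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            # Stochastic, fitness-weighted selection of two parents.
            p1, p2 = random.choices(pop, weights=[fitness(c) + 1 for c in pop], k=2)
            cut = random.randrange(1, n_bits)       # single-point crossover
            child = p1[:cut] + p2[cut:]
            # Bit-flip mutation with probability p_mut per bit.
            child = [b ^ 1 if random.random() < p_mut else b for b in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

print(ga())  # tends toward the all-ones chromosome
```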

• Ant colony optimization


In the real world, ants (initially) wander randomly and, upon finding food, return to their colony while laying down pheromone trails. If other ants find such a path, they are likely not to keep traveling at random, but instead to follow the trail laid by earlier ants, returning and reinforcing it if they eventually find any food.

Over time, however, the pheromone trail starts to evaporate, thus reducing its attractive strength. The more time it takes for an ant to travel down the path and back again, the more time the pheromones have to evaporate. A short path, by comparison, gets marched over faster, and thus the pheromone density remains high, as it is laid on the path as fast as it can evaporate. Pheromone evaporation also has the advantage of avoiding convergence to a locally optimal solution. If there were no evaporation at all, the paths chosen by the first ants would tend to be excessively attractive to the following ones. In that case, the exploration of the solution space would be constrained.

Thus, when one ant finds a good (short) path from the colony to a food source, other ants are more likely to follow that path, and such positive feedback eventually leads to all the ants following a single path. The idea of the ant colony algorithm is to mimic this behavior with "simulated ants" walking around the search space representing the problem to be solved.

Ant colony optimization algorithms have been used to produce near-optimal solutions to the traveling salesman problem. They have an advantage over simulated annealing and genetic algorithm approaches when the graph may change dynamically: the ant colony algorithm can be run continuously and can adapt to changes in real time. This is of interest in network routing and urban transportation systems.

References / Further Reading:

1. Deb K., Multi-Objective Optimization using Evolutionary Algorithms, John Wiley & Sons
Pvt Ltd.

2. Deb K., Optimization for Engineering Design – Algorithms and Examples, Prentice Hall
of India Pvt. Ltd., New Delhi.

3. Rao S.S., Engineering Optimization – Theory and Practice, Third Edition, New Age
International Limited, New Delhi.
