
Optimisation Techniques for Engineering Design

Prof. Sanjib Kumar Acharyya

Department of Mechanical Engineering

Jadavpur University
Presentation: 2
Classification of Optimisation Methods/Techniques
&
Classical Techniques for Optimisation
Classification of Optimization Methods:
Classification Based on the Number of Design Variables:
• Single variable
• Multi-variable

Classification Based on the Nature of the Design Variables


• Parameter or static optimization problems
• Trajectory or dynamic optimization problems

Classification Based on the Physical Structure of the Problem


• Optimal Control Problem
• Non-optimal control problems

Classification Based on the Nature of the Equations Involved


• Linear, nonlinear, geometric, and quadratic programming

Classification Based on the Permissible Values of the Design Variables:


• Integer programming problems
• Real-valued programming problems
Classification Based on the Deterministic Nature of the Variables:

• Deterministic programming problems.

• Stochastic Programming Problem

Classification Based on the Separability of the Functions:

• Separable

• Non-separable

Classification Based on the Number of Objective Functions :

• single-objective programming

• multi-objective programming

Classification Based on constraints :

• Constrained Optimisation Problem

• Unconstrained Optimisation Problem


Classification Based on optimisation method:

• Direct Search

• Gradient based

Classification Based on nature of solution

• Local

• Global

Classification Based on search technique

• Traditional

• Non-traditional (evolutionary, nature-inspired)


OPTIMIZATION TECHNIQUES

• Various techniques are available for the solution of different types of optimization problems.

• Classical methods of differential calculus are used to find the unconstrained maxima and minima of a function of several variables.

• For problems with equality constraints, the Lagrange multiplier method can be used.

• For problems with inequality constraints, the Kuhn–Tucker conditions can be used.

• Techniques of nonlinear, linear, geometric, quadratic, or integer programming are numerical techniques in which an approximate solution is sought by proceeding iteratively from an initial solution (see the sketch after this list).

• The dynamic programming technique is useful for optimal control problems.

• Stochastic programming deals with the variables described by probability distributions.

• The modern methods of optimization, including genetic algorithms, simulated annealing, particle swarm optimization, and ant colony optimization, are nature-inspired techniques.

• Multi-objective optimization techniques can handle several objectives simultaneously.
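
The iterative character of the numerical techniques listed above can be illustrated with a minimal sketch, assuming SciPy is available; the quadratic test function and starting point below are illustrative and not part of the lecture material.

# Minimal sketch: iterative numerical optimization starting from an initial solution.
# The test function and starting point are assumptions for illustration only.
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Simple two-variable test function with its minimum at (1, 2).
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

x0 = np.array([5.0, -3.0])   # initial solution
result = minimize(f, x0)     # proceeds iteratively from x0 toward the optimum
print(result.x)              # approximately [1., 2.]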


Classical techniques for optimization

Condition for Minima/Maxima of a Function

Single-variable function

Necessary Condition:

If a function f (x) is defined in the interval a ≤ x ≤ b and has a relative minimum/maximum at x = x∗, where a < x∗ < b, and if the derivative df (x)/dx = f ′(x) exists as a finite number at x = x∗, then f ′(x∗) = 0.
The theorem does not say what happens if a minimum or maximum
occurs at a point x∗ where the derivative fails to exist.

The theorem does not say that the function necessarily will have a
minimum or maximum at every point where the derivative is zero.

Sufficient Condition:

Let f ′(x∗) = f ′′(x∗) = · · · = f (n−1)(x∗) = 0, but f (n)(x∗) ≠ 0.

Then f (x∗) is (i) a minimum value of f (x), if f (n)(x∗) > 0 and n is even;

(ii) a maximum value of f (x) if f (n)(x∗) < 0 and n is even;

(iii) neither a maximum nor a minimum if n is odd.
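
The necessary and sufficient conditions above can be checked symbolically. The following is a minimal sketch assuming SymPy is available; the test functions and the helper classify are illustrative, not part of the lecture.

# Sketch: necessary condition f'(x*) = 0 and the n-th derivative (sufficient) test.
import sympy as sp

x = sp.symbols('x')

def classify(f, x_star):
    # Find the first non-vanishing derivative at the stationary point x_star.
    n, d = 1, sp.diff(f, x)
    while d.subs(x, x_star) == 0:
        n += 1
        d = sp.diff(f, x, n)
    if n % 2 == 1:
        return "neither a minimum nor a maximum"
    return "relative minimum" if d.subs(x, x_star) > 0 else "relative maximum"

f = x**4
print(sp.solve(sp.diff(f, x), x))   # necessary condition f'(x*) = 0 gives [0]
print(classify(f, 0))               # n = 4 even, f''''(0) = 24 > 0 -> relative minimum
print(classify(-x**6, 0))           # n = 6 even, derivative < 0    -> relative maximum
print(classify(x**3, 0))            # n = 3 odd                     -> neither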


MULTIVARIABLE OPTIMIZATION WITH NO CONSTRAINTS

The matrix of second partial derivatives, [∂²f/∂xi ∂xj], is called the Hessian matrix of f (X).

Saddle Point
In the case of a function of two variables, f (x, y), the Hessian matrix may be neither positive nor negative definite at a point (x∗, y∗) at which

∂f/∂x = ∂f/∂y = 0

In such a case, the point (x∗, y∗) is called a saddle point.
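
As a sketch of this check, the following evaluates the Hessian of an illustrative function f (x, y) = x² − y² at its stationary point; the function is an assumption chosen to exhibit a saddle point, not taken from the slides.

# Sketch: detecting a saddle point from the Hessian at a stationary point.
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 - y**2

grad = [sp.diff(f, v) for v in (x, y)]
stationary = sp.solve(grad, (x, y), dict=True)   # [{x: 0, y: 0}]

H = sp.hessian(f, (x, y))
eigs = list(H.subs(stationary[0]).eigenvals())   # [2, -2]

# Mixed-sign eigenvalues: the Hessian is neither positive nor negative definite,
# so (0, 0) is a saddle point rather than a minimum or maximum.
print(eigs)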


MULTIVARIABLE OPTIMIZATION WITH EQUALITY CONSTRAINTS

Optimization of continuous functions subject to equality constraints:


Minimize f = f (X)

subject to
gj (X) = 0, j = 1, 2, . . . ,m
Solution by Direct Substitution

n: number of variables, m: number of equality constraints

Solve the m equality constraints simultaneously and express any set of m variables in terms of the remaining n − m variables.

This yields a new objective function involving only n − m variables. The new objective function is not subject to any constraint, and hence its optimum can be found by using unconstrained optimization techniques.

The constraint equations are nonlinear for most practical problems, and it often becomes impossible to solve them and express any m variables in terms of the remaining n − m variables.


Example 2.6: Find the dimensions of a box of largest volume that can be inscribed in a sphere of unit radius.

SOLUTION :
Let the origin of the Cartesian coordinate system x1, x2, x3 be at the center of the sphere and the sides of the box be 2x1, 2x2, and
2x3.
The volume of the box is given by f (x1, x2, x3) = 8 x1 x2 x3
Since the corners of the box lie on the surface of the sphere of unit radius, x1, x2, and x3 have to satisfy the constraint

x1² + x2² + x3² = 1

This problem has three design variables and one equality constraint. Hence the equality constraint can be used to eliminate any one of the design variables from the objective function. If we choose to eliminate x3, the constraint gives

x3 = (1 − x1² − x2²)^(1/2)

Thus the objective function becomes

f (x1, x2) = 8 x1 x2 (1 − x1² − x2²)^(1/2)

which can be maximized as an unconstrained function in two variables.
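
A short symbolic sketch of this direct-substitution solution, assuming SymPy is available; maximizing the squared volume is used as a convenience to avoid the square root, and the variable names follow the example.

# Sketch of Example 2.6 by direct substitution.
import sympy as sp

x1, x2 = sp.symbols('x1 x2', positive=True)

# Constraint x1**2 + x2**2 + x3**2 = 1 eliminates x3.
V = 8 * x1 * x2 * sp.sqrt(1 - x1**2 - x2**2)   # unconstrained objective in x1, x2
V2 = sp.expand(V**2)                            # 64*x1**2*x2**2*(1 - x1**2 - x2**2)

stationary = sp.solve([sp.diff(V2, x1), sp.diff(V2, x2)], (x1, x2), dict=True)
print(stationary)                               # x1 = x2 = 1/sqrt(3), hence x3 = 1/sqrt(3)
print(sp.simplify(V.subs(stationary[0])))       # maximum volume 8*sqrt(3)/9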


Solution by the Method of Lagrange Multipliers
The necessary conditions given by Eqs. (2.34) to (2.36) are more commonly generated by constructing a function L, known as
the Lagrange function, as
L(x1, x2, λ ) = f (x1, x2) + λ g(x1, x2)

By treating L as a function of the three variables x1, x2, and λ, the necessary conditions for its extremum are given by

∂L/∂x1 = 0, ∂L/∂x2 = 0, ∂L/∂λ = 0

The Lagrange function, L, in the generalized case is defined by introducing one Lagrange multiplier λj for each constraint gj (X) as

L(x1, x2, . . . , xn, λ1, λ2, . . . , λm) = f (X) + λ1 g1(X) + λ2 g2(X) + · · · + λm gm(X)

By treating L as a function of the n + m unknowns (x1, x2, . . . , xn, λ1, λ2, . . . , λm), the necessary conditions for the extremum of L, which also correspond to the solution of the original problem, are given by

∂L/∂xi = 0, i = 1, 2, . . . , n
∂L/∂λj = gj (X) = 0, j = 1, 2, . . . , m
Sufficient condition: evaluate the quadratic form of second partial derivatives of L with respect to the design variables,

Q = Σi Σj (∂²L/∂xi ∂xj) dxi dxj,

at the extreme point X∗ for variations dxi that satisfy the constraints.

Minima: Q > 0

Maxima: Q < 0
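
The following sketch applies the Lagrange-multiplier necessary conditions to the same box-in-a-unit-sphere problem as Example 2.6, assuming SymPy; the symbol lam stands for the single multiplier λ and the setup is illustrative.

# Sketch: Lagrange-multiplier necessary conditions for Example 2.6.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', positive=True)
lam = sp.symbols('lam')                    # the Lagrange multiplier (may be negative)

f = 8 * x1 * x2 * x3                       # objective (volume of the box)
g = x1**2 + x2**2 + x3**2 - 1              # equality constraint g(X) = 0
L = f + lam * g                            # Lagrange function

# dL/dx1 = dL/dx2 = dL/dx3 = dL/dlam = 0 gives n + m = 4 equations.
eqs = [sp.diff(L, v) for v in (x1, x2, x3, lam)]
sol = sp.solve(eqs, (x1, x2, x3, lam), dict=True)
print(sol)                                 # x1 = x2 = x3 = 1/sqrt(3)
print(f.subs(sol[0]))                      # same maximum volume, 8*sqrt(3)/9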
