Optimisation (Part 2)
Jadavpur University
Presentation 2
Classification of optimisation methods/techniques
&
Classical techniques for optimisation
Classification of optimization methods:

Based on the number of design variables:
• Single-variable
• Multi-variable
Based on the separability of the functions:
• Separable
• Non-separable
Based on the number of objective functions:
• Single-objective programming
• Multi-objective programming
Based on the use of derivative information:
• Direct search
• Gradient-based
Based on the scope of the search:
• Local
• Global
Based on the nature of the algorithms:
• Traditional (classical)
• Modern (non-traditional)
• Various techniques are available for the solution of different types of optimization problems.
• Classical methods of differential calculus are used for the unconstrained maxima and minima of a function of several variables.
• Techniques of nonlinear, linear, geometric, quadratic, or integer programming are numerical techniques wherein an approximate solution is sought by proceeding in an iterative manner, starting from an initial solution.
• The modern methods of optimization include genetic algorithms, simulated annealing, particle swarm optimization, and ant colony optimization.
Necessary Condition:
If a function f(x) has a relative minimum or maximum at x = x∗, and if the derivative df(x)/dx = f′(x) exists as a finite number at x = x∗, then f′(x∗) = 0.
The theorem does not say what happens if a minimum or maximum occurs at a point x∗ where the derivative fails to exist.
The theorem does not say that the function necessarily will have a minimum or maximum at every point where the derivative is zero.
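The condition is easy to check numerically. A minimal sketch, with an illustrative function of my own choosing (not from the slides), approximates f′ by a central finite difference and confirms that it vanishes at the minimizer:

```python
def derivative(f, x, h=1e-6):
    # Central finite-difference approximation of f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

# Illustrative function (not from the slides): minimum at x* = 2.
f = lambda x: (x - 2) ** 2 + 1

print(abs(derivative(f, 2.0)))  # ~0: f'(x*) = 0 at the minimum
print(derivative(f, 3.0))       # ~2: nonzero away from the extremum
```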
Sufficient Condition:
Let f ′(x∗) = f ″(x∗) = · · · = f (n−1)(x∗) = 0, but f (n)(x∗) ≠ 0. Then f (x∗) is (i) a minimum value of f (x) if f (n)(x∗) > 0 and n is even; (ii) a maximum value of f (x) if f (n)(x∗) < 0 and n is even; (iii) neither a minimum nor a maximum (a point of inflection) if n is odd.
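For polynomials this test can be carried out exactly by differentiating the coefficient list. A sketch (the helper names are my own, not from the slides) that classifies a stationary point by the first nonvanishing derivative:

```python
def poly_deriv(coeffs):
    # coeffs[i] is the coefficient of x**i; return the derivative's coefficients.
    return [i * c for i, c in enumerate(coeffs)][1:]

def poly_eval(coeffs, x):
    return sum(c * x ** i for i, c in enumerate(coeffs))

def classify(coeffs, x_star):
    # Apply the sufficient condition: find the first derivative that does not
    # vanish at x_star (assumed to be a stationary point, f'(x_star) = 0).
    n = 0
    while coeffs:
        coeffs = poly_deriv(coeffs)
        n += 1
        val = poly_eval(coeffs, x_star)
        if abs(val) > 1e-12:
            if n % 2 == 1:
                return "inflection point"      # n odd: neither min nor max
            return "minimum" if val > 0 else "maximum"
    return "constant"

print(classify([0, 0, 0, 0, 1], 0.0))  # x**4: minimum (n = 4, f''''(0) = 24 > 0)
print(classify([0, 0, 0, 1], 0.0))     # x**3: inflection point (n = 3, odd)
print(classify([0, 0, -1], 0.0))       # -x**2: maximum (n = 2, f''(0) = -2 < 0)
```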
The matrix of second partial derivatives, [∂²f/∂xi ∂xj], is called the Hessian matrix of f (X).
Saddle Point
In the case of a function of two variables, f (x, y), the Hessian matrix may be neither positive definite nor negative definite at a point (x∗, y∗) at which ∂f/∂x = ∂f/∂y = 0. Such a point is called a saddle point: f (x∗, y∗) is a minimum with respect to one variable and a maximum with respect to the other. For example, f (x, y) = x² − y² has a saddle point at the origin.
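A minimal sketch using the classic example f(x, y) = x² − y², whose Hessian at the stationary point (0, 0) is indefinite; for a symmetric 2×2 matrix, definiteness can be read off the leading principal minors:

```python
# Hessian of f(x, y) = x**2 - y**2 at the stationary point (0, 0),
# computed analytically: d2f/dx2 = 2, d2f/dy2 = -2, d2f/dxdy = 0.
H = [[2.0, 0.0], [0.0, -2.0]]

# A symmetric 2x2 matrix is positive definite iff H[0][0] > 0 and det(H) > 0,
# and negative definite iff H[0][0] < 0 and det(H) > 0.
det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
pos_def = H[0][0] > 0 and det > 0
neg_def = H[0][0] < 0 and det > 0

print(pos_def, neg_def)  # False False: neither, so (0, 0) is a saddle point
```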
Multivariable Optimization with Equality Constraints
Minimize f = f (X)
subject to
gj (X) = 0, j = 1, 2, . . . , m
where X = (x1, x2, . . . , xn)T and m ≤ n.
Solution by Direct Substitution
Solve the m equality constraints simultaneously and express any set of m variables in terms of the remaining n − m variables. Substituting these expressions into the objective function gives a new objective function involving only n − m variables. The new objective function is not subjected to any constraint, and hence its optimum can be found by using unconstrained optimization techniques. However, the constraint equations will be nonlinear for most practical problems, and it often becomes impossible to solve them analytically.
Example: Find the dimensions of a rectangular box of the largest volume that can be inscribed in a sphere of unit radius.
SOLUTION:
Let the origin of the Cartesian coordinate system x1, x2, x3 be at the center of the sphere and the sides of the box be 2x1, 2x2, and
2x3.
The volume of the box is given by f (x1, x2, x3) = 8 x1 x2 x3
Since the corners of the box lie on the surface of the sphere of unit radius, x1, x2, and x3 have to satisfy the constraint
x1² + x2² + x3² = 1
This problem has three design variables and one equality constraint, so the equality constraint can be used to eliminate any one of the design variables from the objective function. If we choose to eliminate x3, the constraint gives
x3 = (1 − x1² − x2²)^1/2
and the objective function becomes f (x1, x2) = 8 x1 x2 (1 − x1² − x2²)^1/2, which can be maximized as an unconstrained function of two variables.
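Assuming the elimination yields x3 = (1 − x1² − x2²)^1/2, the reduced two-variable problem is unconstrained and can be maximized directly; a crude grid-search sketch:

```python
import math

def volume(x1, x2):
    # Objective after eliminating x3 via the constraint x1^2 + x2^2 + x3^2 = 1:
    # f(x1, x2) = 8 * x1 * x2 * sqrt(1 - x1^2 - x2^2)
    r = 1.0 - x1 * x1 - x2 * x2
    return 8.0 * x1 * x2 * math.sqrt(r) if r > 0 else 0.0

# Crude grid search over the reduced two-variable problem (step 0.005).
step = 0.005
best = max((volume(i * step, j * step), i * step, j * step)
           for i in range(1, 200) for j in range(1, 200))
f_best, x1, x2 = best
print(round(x1, 3), round(x2, 3), round(f_best, 4))
# Optimum near x1 = x2 = 1/sqrt(3) ~ 0.577, f* = 8/(3*sqrt(3)) ~ 1.5396
```

The search confirms the symmetric solution, a cube with x1 = x2 = x3.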
For a problem with two variables and a single constraint g(x1, x2) = 0, the Lagrange function is defined as
L(x1, x2, λ) = f (x1, x2) + λ g(x1, x2)
By treating L as a function of the three variables x1, x2, and λ, the necessary conditions for its extremum are given by
∂L/∂x1 = ∂f/∂x1 + λ ∂g/∂x1 = 0
∂L/∂x2 = ∂f/∂x2 + λ ∂g/∂x2 = 0
∂L/∂λ = g(x1, x2) = 0
The Lagrange function L in the generalized case is defined by introducing one Lagrange multiplier λj for each constraint gj (X):
L(x1, x2, . . . , xn, λ1, λ2, . . . , λm) = f (X) + λ1 g1(X) + λ2 g2(X) + · · · + λm gm(X)
By treating L as a function of the n + m unknowns x1, x2, . . . , xn, λ1, λ2, . . . , λm, the necessary conditions for the extremum of L, which also correspond to the solution of the original problem, are given by
∂L/∂xi = ∂f/∂xi + λ1 ∂g1/∂xi + · · · + λm ∂gm/∂xi = 0, i = 1, 2, . . . , n
∂L/∂λj = gj (X) = 0, j = 1, 2, . . . , m
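As a check, the necessary conditions can be evaluated at the symmetric candidate point of the box-in-sphere example; a sketch, with the point and multiplier derived by hand assuming x1 = x2 = x3:

```python
import math

# Box-in-unit-sphere example: f = 8*x1*x2*x3, g = x1^2 + x2^2 + x3^2 - 1 = 0,
# Lagrange function L = f + lam * g.
x = 1.0 / math.sqrt(3.0)   # candidate point x1 = x2 = x3 = 1/sqrt(3), by symmetry
lam = -4.0 * x             # from dL/dx1 = 8*x2*x3 + 2*lam*x1 = 0 with x2 = x3 = x1

dL_dx = 8.0 * x * x + 2.0 * lam * x   # identical for x1, x2, x3 by symmetry
dL_dlam = 3.0 * x * x - 1.0           # the constraint g(X) = 0

print(dL_dx, dL_dlam)   # both ~0: the necessary conditions hold
print(8.0 * x ** 3)     # maximum volume, 8/(3*sqrt(3)) ~ 1.5396
```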
Minima: f ″(x∗) > 0 (for several variables, the Hessian matrix of f positive definite at X∗)
Maxima: f ″(x∗) < 0 (for several variables, the Hessian matrix of f negative definite at X∗)