
Under the guidance of:

Dr. J. S Dhillon
(Professor)
Department of Electrical
and Instrumentation Engineering
Presented by:
Om Prakash
(PG/ICE/088209)

Sant Longowal Institute Of Engineering And Technology, Longowal


 Optimization:
 The act of obtaining the best result under the
given circumstances.
 Design, construction, and maintenance of engineering
systems involve decision making at both the managerial
and the technological level.
 Goals of such decisions:
– To minimize the effort required, or
– To maximize the desired benefit
Optimization:
The process of finding the conditions that give the
minimum or maximum value of a function, where the
function represents the effort required or the desired
benefit.
Real-world problem → Algorithm, model, or solution technique → Numerical method → Computer implementation
(each stage supported by validation, analysis, sensitivity analysis, and verification)
 Basic components:
 An objective function expresses the main aim of the model, which is either to be minimized or maximized.
 A set of unknowns or variables which control the value of the objective function.
 A set of constraints that allow the unknowns to take on certain values but exclude others.
 Definition:
Find values of the variables that minimize or maximize the
objective function while satisfying the constraints.
The objective function is the mathematical function one wants
to maximize or minimize, subject to certain constraints. Many
optimization problems have a single objective function.

 Multiple objective functions: In practice, problems with
multiple objectives are reformulated as single-objective
problems, either by forming a weighted combination of the
different objectives or by treating some of the objectives
as constraints.
To find X = [x1, x2, …, xn]^T which maximizes f(X),

subject to the constraints

gi(X) ≤ 0,  i = 1, 2, …, m
lj(X) = 0,  j = 1, 2, …, p

Where,
– X is an n-dimensional vector called the design vector
– f(X) is called the objective function, and
– gi(X) and lj(X) are known as inequality and equality constraints,
respectively.
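To make the formulation concrete, a minimal Python sketch with a hypothetical two-variable problem (the objective and constraints below are illustrative placeholders, not from the slides):

    # Hypothetical example of the general formulation above.
    def f(x):                                  # objective function f(X)
        return (x[0] - 1)**2 + (x[1] - 2)**2

    def g(x):                                  # inequality constraints g_i(X) <= 0
        return [x[0] + x[1] - 4]

    def l(x):                                  # equality constraints l_j(X) = 0
        return [x[0] - x[1]]

    def is_feasible(x, tol=1e-6):
        return all(gi <= tol for gi in g(x)) and all(abs(lj) <= tol for lj in l(x))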
 Optimization ⇒ maximum of a desired quantity, or minimum of an undesired quantity
 Objective function = formula to be optimized = f(X)
 Decision variables = variables about which we can make decisions: X = (x1, x2, x3, …, xn)
[Figure: a curve F(x) versus X with stationary points A, B, C, D, E]
• B and D are maxima
• A, C and E are minima
 By calculus:
If F(X) is continuous and differentiable (analytic):
 Primary (necessary) conditions for maxima and minima:
∂F(X) / ∂Xi = 0 ∀i
(the symbol ∀ means: "for all i")
 Secondary (sufficient) conditions:
∂²F(X) / ∂Xi² < 0 ⇒ maximum (B, D)
∂²F(X) / ∂Xi² > 0 ⇒ minimum (A, C, E)
These determine whether a point of no change in F is a
maximum or a minimum.
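These conditions can also be checked numerically; a minimal sketch using central differences on an illustrative function:

    # First- and second-derivative tests for a 1-D function, via central differences.
    def dF(F, x, h=1e-5):
        return (F(x + h) - F(x - h)) / (2 * h)

    def d2F(F, x, h=1e-4):
        return (F(x + h) - 2 * F(x) + F(x - h)) / h**2

    F = lambda x: x**3 - 3 * x                  # stationary points at x = -1 and x = +1
    for x0 in (-1.0, 1.0):
        kind = "maximum" if d2F(F, x0) < 0 else "minimum"
        print(x0, abs(dF(F, x0)) < 1e-4, kind)  # -1 is a maximum, +1 is a minimum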
 Involves the optimization of a process subject to constraints

 Types
 Equality constraints -- some factors must exactly equal the constraint values
 Inequality constraints -- some factors must be less than or greater than the constraints (these are "upper" and "lower" bounds)
 To solve situations of increasing complexity
(for example, those with equality and inequality constraints):
 Transform the more difficult situation into one we
know how to deal with:
 "constrained" optimization becomes "unconstrained"
optimization
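One standard way to make this transformation (the slides do not name a specific method) is the quadratic penalty function; a minimal sketch, with r as an assumed penalty parameter:

    # Quadratic-penalty transformation: constrained f, g, l -> unconstrained F.
    def penalized(f, g, l, r=100.0):
        def F(x):
            p  = sum(max(0.0, gi)**2 for gi in g(x))  # violated g_i(x) <= 0
            p += sum(lj**2 for lj in l(x))            # violated l_j(x) = 0
            return f(x) + r * p
        return F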
 Traditional Methods
 Fast, deterministic, give exact solutions
 Limitations
 Can only be applied to a limited class of problems.
 Often too time-consuming to solve large real-world
problems, or they get stuck at local optima.
 The problem must be well-defined
 Modern Heuristics:
 Generally applicable, and give good approximate
solutions to complex problems in reasonable time
 Limitations
 Not deterministic (except Tabu Search)
 Do not promise to find the optimal solution.
Examples: Simulated Annealing, Tabu Search,
Evolutionary Algorithms (EAs), and Particle Swarms
There are several methods implemented to solve
optimization problems. A classification of these methods
is given below.

Optimization Algorithms
– Gradient descent methods
– Population-based methods
– Direct search methods

Examples include Genetic Algorithms, Evolutionary Programming, Evolution Strategies, Simulated Annealing, Ant Colony Optimization, and Particle Swarm Optimization.
 Design of structural units in construction, machinery and in
space vehicles.
 Maximizing benefit / minimizing product costs in various
manufacturing and construction processes.
 Optimal path finding in road networks / freight-handling
processes.
 Optimal production planning, controlling and scheduling.
 Optimal allocation of resources or services among several
activities to maximize the benefit.
History and main idea
Process of PSO
Swarm search
Algorithm
Simulation
My project
Advantages and
disadvantages
History and Main idea

Swarm
A large number of small
animals or insects: “A
swarm of bees”
• Developed in 1995 by James
Kennedy and Russ Eberhart.
• It was inspired by social
behavior of bird flocking or fish
schooling.
• PSO applies the concept of
social interaction to problem
solving.
• It searches for a global optimum.
• PSO is a robust stochastic
optimization technique based
on the movement and
intelligence of swarms.
• It uses a number of agents
(particles) that constitute a
swarm moving around in the
search space looking for the
best solution.
• Each particle is treated as a
point in a d-dimensional space
which adjusts its “flying”
according to its own flying
experience as well as the flying
experience of other particles.
• Each particle keeps track of its
coordinates in the solution space, which
are associated with the best solution
(fitness) it has achieved so far. This
value is called the personal best, pbest.
• Another best value tracked by the
PSO is the best value obtained so far by
any particle in the neighborhood of that
particle. This value is called gbest.
• The basic concept of PSO lies in
accelerating each particle toward its pbest
and gbest locations, with a random
weighted acceleration at each time step.
Process of PSO

• Initialize Swarm
• Move Swarm
• Calculate pbest and gbest
• Adjust Velocities
– Cognitive
– Social
• Convergence
The modification of the particle's position can be
mathematically modeled according to the following
equations:

 Random initial positions and random velocity vectors

 Velocity update:

vid = w·vid + c1·r1·(Pid − xid) + c2·r2·(Pgd − xid)

 Position update:

xid = xid + vid

where
xid – current value of dimension "d" of individual "i",
vid – current velocity of dimension "d" of individual "i",
Pid – best value of dimension "d" found by individual "i" so far (pbest),
Pgd – best value of dimension "d" found by the swarm so far (gbest),
c1, c2 – acceleration coefficients,
r1, r2 – uniform random numbers in [0, 1], and
w – inertia weight factor.
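A minimal Python sketch of these update equations for a single particle; names mirror the symbols above, and the vmax clamping option is an assumption:

    import random

    # One PSO update step for a single particle, following the equations above.
    def pso_update(x, v, pbest, gbest, w, c1=2.0, c2=2.0, vmax=None):
        new_x, new_v = [], []
        for d in range(len(x)):
            r1, r2 = random.random(), random.random()
            vd = (w * v[d]
                  + c1 * r1 * (pbest[d] - x[d])   # cognitive component
                  + c2 * r2 * (gbest[d] - x[d]))  # social component
            if vmax is not None:                  # optional velocity clamping
                vd = max(-vmax, min(vmax, vd))
            new_v.append(vd)
            new_x.append(x[d] + vd)               # position update
        return new_x, new_v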
 In PSO, particles never die!
 Particles can be seen as simple agents that fly through the
search space and record (and possibly communicate) the best
solution that they have discovered.
 Initially the values of the velocity vectors are randomly
generated within the range [−Vmax, Vmax], where Vmax is the
maximum value that can be assigned to any vid.
 Once a particle computes its new position Xi, it evaluates the
new location: if the x-fitness is better than the p-fitness,
then Pi = Xi and p-fitness = x-fitness.
 When using PSO, it is possible for the magnitude of the
velocities to become very large. Performance can suffer if
Vmax is inappropriately set.
 Several methods have been developed for controlling the
growth of velocities (two are sketched below):
 A dynamically adjusted inertia factor,
 Dynamically adjusted acceleration coefficients,
 Re-initialization of stagnated particles…
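The first two mechanisms in a minimal sketch; the linear 0.9-to-0.4 schedule matches the parameter settings used later in the experiments:

    # Velocity clamping and a linearly decreasing inertia weight.
    def clamp(v, vmax):
        return max(-vmax, min(vmax, v))

    def inertia(t, t_max, w_start=0.9, w_end=0.4):
        # w decreases linearly from w_start to w_end over the run
        return w_start - (w_start - w_end) * t / t_max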
Flowchart of the PSO algorithm:

1. Start: initialize a population of particles with random positions and velocities, and set the PSO parameters.
2. Update the position and velocity of each particle.
3. Evaluate the fitness of each particle according to the objective function:
   if fitness(xid^t) is better than fitness(xi^pbest), then xi^pbest ← xid^(t+1);
   if fitness(xid^t) is better than fitness(x^gbest), then x^gbest ← xid^(t+1).
4. Determine and store the local best and global best.
5. If iteration < max. iterations, go to step 2; otherwise output the global best and f(X), and stop.
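A compact Python sketch of this flowchart, assuming minimization, uniform random initialization in [xmin, xmax], and the linearly decreasing inertia weight described earlier:

    import random

    def pso(f, dim, xmin, xmax, swarm=50, iters=200, c1=2.0, c2=2.0):
        X = [[random.uniform(xmin, xmax) for _ in range(dim)] for _ in range(swarm)]
        V = [[0.0] * dim for _ in range(swarm)]
        P = [x[:] for x in X]                     # personal bests (pbest)
        G = min(P, key=f)[:]                      # global best (gbest)
        for t in range(iters):
            w = 0.9 - 0.5 * t / iters             # inertia weight: 0.9 -> 0.4
            for i in range(swarm):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    V[i][d] = (w * V[i][d]
                               + c1 * r1 * (P[i][d] - X[i][d])
                               + c2 * r2 * (G[d] - X[i][d]))
                    X[i][d] = min(xmax, max(xmin, X[i][d] + V[i][d]))
                if f(X[i]) < f(P[i]):             # update pbest
                    P[i] = X[i][:]
                    if f(P[i]) < f(G):            # update gbest
                        G = P[i][:]
        return G, f(G)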


[Figures: successive snapshots of the swarm converging on the fitness landscape over the iterations; axes x and y, with fitness shown from min to max.]
My Project

OBJECTIVES
The objectives of my thesis are outlined below:
• To study the different test problems of optimization.
• To gain fundamental insight into the particle swarm
optimization (PSO) algorithm.
• To utilize the particle swarm optimization algorithm to solve
constrained and unconstrained optimization problems.
• To implement the particle swarm optimization algorithm to
solve the economic dispatch problem, an application of power
system optimization.
APPROACH
In attaining each objective, a different approach is followed:
• Standard constrained and unconstrained optimization problems
are selected for study.
• A particle swarm optimization algorithm is developed, and its
solutions of the standard test optimization problems are verified.
• The particle swarm optimization algorithm is applied to solve
the economic load dispatch problem.
UNCONSTRAINED OPTIMIZATION PROBLEMS

Test Problem 1 (Sphere). This problem is defined by

F(x) = Σi=1..n xi²

where n is the dimension of the problem. The global minimizer is x* = (0, …, 0) with
F(x*) = 0.

Test Problem 2 (Rastrigin). This problem is defined by

F(x) = Σi=1..n [xi² − 10·cos(2π·xi) + 10]

where n is the dimension of the problem. The global minimizer is x* = (0, …, 0) with
F(x*) = 0.

Test Problem 3 (Generalized Rosenbrock). This problem is defined by

F(x) = Σi=1..n−1 [100·(xi+1 − xi²)² + (1 − xi)²]

where n is the dimension of the problem. The global minimizer is x* = (1, …, 1) with
F(x*) = 0.

Test Problem 4 (A quadratic function). This problem is defined by a quadratic function
whose global minimizer is x* = (1, 3) with F(x*) = 0.

Test Problem 5 (Beale's function). This problem is defined by

F(x) = (1.5 − x1 + x1·x2)² + (2.25 − x1 + x1·x2²)² + (2.625 − x1 + x1·x2³)²

with global minimizer x* = (3, 0.5) and F(x*) = 0.

Particle swarm optimization is used to find the control variables that minimize each
function. For each test problem the swarm size is taken as 50, the dimension as 2,
c1 = c2 = 2, and w is varied from 0.9 to 0.4. The test functions are sketched in code below.
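A sketch of the named test functions in Python (TP4 is omitted, since its definition does not survive in the slides); these can be passed directly to a PSO routine such as the one sketched earlier:

    import math

    def sphere(x):        # TP1, minimum F = 0 at x = (0, ..., 0)
        return sum(xi**2 for xi in x)

    def rastrigin(x):     # TP2, minimum F = 0 at x = (0, ..., 0)
        return sum(xi**2 - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

    def rosenbrock(x):    # TP3, minimum F = 0 at x = (1, ..., 1)
        return sum(100 * (x[i + 1] - x[i]**2)**2 + (1 - x[i])**2
                   for i in range(len(x) - 1))

    def beale(x):         # TP5, minimum F = 0 at x = (3, 0.5)
        x1, x2 = x
        return ((1.5 - x1 + x1 * x2)**2
                + (2.25 - x1 + x1 * x2**2)**2
                + (2.625 - x1 + x1 * x2**3)**2)

    # e.g. best, value = pso(sphere, dim=2, xmin=-100, xmax=100)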


Table 4.1: Parameters for the unconstrained optimization problems

S. No  Problem  (Xmin, Xmax)   ε
1      TP1      (−100, 100)    10^−6
2      TP2      (−30, 30)      10^−6
3      TP3      (−5.12, 5.12)  10^−7
4      TP4      (−30, 30)      10^−6
5      TP5      (−15, 15)      10^−5


Table 4.2: Results of PSOA for the unconstrained problems

                Ideal result             Obtained result
S. No  Problem  x1  x2   F(x)    x1        x2         F(x)
1      TP1      0   0    0.00    0.0011    0.0010     0.00000226
2      TP2      0   0    0.00    0.00004   −0.00002   0.00000381
3      TP3      1   1    0.00    0.995800  0.992083   0.000013
4      TP4      1   3    0.00    0.752117  3.4999086  0.00001
5      TP5      3   0.5  0.00    2.990513  0.496756   0.00003


Fig. 4.1: Variation of function with respect to iteration for test problem 1

Fig. 4.2: Variation of function with respect to iteration for test problem 2
Fig. 4.3: Variation of function with respect to iteration for test problem 3

Fig. 4.4: Variation of function with respect to iteration for test problem 4
Fig. 4.5: Variation of function with respect to iteration for test problem 5
EQUALITY-CONSTRAINED OPTIMIZATION PROBLEM

Minimize f(x1, x2) = x1² + (x2 − 1)²

subject to h1(x1, x2) = x2 − x1² = 0,

−1 ≤ xi ≤ 1 (i = 1, 2)

X = [x1, x2]^T

Particle swarm optimization is used to find the control variables that minimize the
function. The swarm size is taken as 30, the penalty parameter is set to 100, and the
error goal is 10^−5 for this problem. The results obtained for different numbers of
iterations are given in Table 4.3; the penalized objective is sketched below.
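A sketch of the penalized objective for this problem, using a quadratic penalty with the stated parameter 100:

    # Equality-constrained problem recast as an unconstrained minimization.
    def f_eq(x):
        x1, x2 = x
        h = x2 - x1**2                      # equality constraint h(x) = 0
        return x1**2 + (x2 - 1)**2 + 100.0 * h**2

    # e.g. best, value = pso(f_eq, dim=2, xmin=-1, xmax=1, swarm=30)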
Table 4.3: Result of the equality constraint optimization problem

S. No  Iterations  x1        x2        f(X)
1      5           0.384777  0.161084  0.86881
2      10          0.549719  0.317520  0.79147
3      15          0.711679  0.511302  0.74763
4      20          0.690876  0.482266  0.74781
5      25          0.709950  0.509364  0.74760

Fig. 4.6: Variation of function with respect to iterations for equality constraint problem
INEQUALITY-CONSTRAINED OPTIMIZATION PROBLEM

Maximize f(X) = 170 − 14x1 − 22x2

subject to g1(X) = 20 − 4x1 − x2 ≥ 0,

g2(X) = 73 − 2x1 − 12x2 ≥ 0,

0 ≤ xi ≤ 6 (i = 1, 2).

X = [x1, x2]^T

The swarm size is taken as 200, the penalty parameter is set to 100, and the error goal
is 10^−5 for this problem. The penalized objective is sketched below.
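A sketch of the corresponding penalized objective; maximizing f is treated as minimizing −f, with a quadratic penalty (parameter 100) on violated constraints:

    # Inequality-constrained maximization recast as a penalized minimization.
    def f_ineq(x):
        x1, x2 = x
        g1 = 20 - 4 * x1 - x2               # must satisfy g1 >= 0
        g2 = 73 - 2 * x1 - 12 * x2          # must satisfy g2 >= 0
        penalty = 100.0 * (min(0.0, g1)**2 + min(0.0, g2)**2)
        return -(170 - 14 * x1 - 22 * x2) + penalty

    # e.g. best, value = pso(f_ineq, dim=2, xmin=0, xmax=6, swarm=200)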
Table 4.4: Result of the inequality constraint optimization problem

S. No  Iterations  x1        x2        f(X)
1      25          3.607857  5.463054  −0.69719
2      50          3.622584  5.463054  −0.90337
3      75          3.627593  5.463054  −0.97349
4      100         3.630115  5.463054  −1.00881
5      125         3.631635  5.463054  −1.03008
6      150         3.632650  5.463054  −1.04430
7      175         3.633377  5.463054  −1.05447
8      200         3.633923  5.463054  −1.06211
9      225         3.634347  5.463054  −1.06804

Fig. 4.7: Variation of function with respect to iteration for inequality constraint problem
ECONOMIC LOAD DISPATCH PROBLEM
The economic load dispatch problem can be described as an optimization (minimization) process with the
objective:

f(X) = Σi=1..n Fi(xi)

Subject to:
 Power balance constraint:

Σi=1..n xi = PD + PL

 Generating capacity constraints:

xi^min ≤ xi ≤ xi^max (i = 1, 2, …, n)

Where
Fi(xi) is the fuel cost function of the ith unit,
PD is the system load demand,
PL is the transmission loss, and
xi^min and xi^max are the minimum and maximum power outputs of the ith unit.

Fuel cost function without valve-point effect is written as:

Fi(xi) = ai·xi² + bi·xi + ci

For more accurate modeling, a cost function based on the ripple curve is used; it contains
higher-order nonlinearity and discontinuity due to the valve-point effect, which is modeled
by a rectified sine term. The fuel cost function can therefore be expressed as:

F(x) = Σi=1..n [ai·xi² + bi·xi + ci + |ei·sin(fi·(xi^min − xi))|]

where
ai, bi, and ci are the fuel-cost coefficients of the ith unit, and
ei and fi are the fuel-cost coefficients of the ith unit with valve-point effects.
A code sketch of this objective follows.
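A sketch of an ELD objective with the valve-point term and a quadratic penalty on the power balance; the penalty coefficient r and the neglect of transmission loss PL are assumptions, not from the slides:

    import math

    # Penalized ELD cost: fuel cost with valve-point effect + power-balance penalty.
    def eld_cost(x, a, b, c, e, f, xmin, PD, r=1000.0):
        cost = sum(a[i] * x[i]**2 + b[i] * x[i] + c[i]
                   + abs(e[i] * math.sin(f[i] * (xmin[i] - x[i])))
                   for i in range(len(x)))
        return cost + r * (PD - sum(x))**2  # penalize sum(x) != PD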

3-GENERATORS
Table 4.5: Fuel-cost coefficients: 3-generators

Gen  Xi^min (MW)  Xi^max (MW)  ai        bi    ci   ei   fi
1    100          600          0.001562  7.92  561  300  0.0315
2    50           200          0.004820  7.97  78   150  0.063

Table 4.6: Result of the economic load dispatch problem for 3-generators

S. No  Iterations  f(X)
1      25          8315.885
2      50          8316.604
3      75          8436.338
4      100         8234.867
5      125         8234.185
6      150         8234.867

Fig. 4.8: Variation of function with respect to the number of iterations for the 3-generator power system
13-GENERATORS
Table 4.7: Fuel-cost coefficients: 13-generators

Gen  Xi^min (MW)  Xi^max (MW)  ai       bi    ci   ei   fi
1    0            680          0.00028  8.10  550  300  0.035
2    0            360          0.00056  8.10  309  200  0.042
3    0            360          0.00056  8.10  307  200  0.042
4    60           180          0.00324  7.74  240  150  0.063
5    60           180          0.00324  7.74  240  150  0.063
6    60           180          0.00324  7.74  240  150  0.063
7    60           180          0.00324  7.74  240  150  0.063
8    60           180          0.00324  7.74  240  150  0.063
9    60           180          0.00324  7.74  240  150  0.063
10   40           120          0.00284  8.6   126  100  0.084
11   40           120          0.00284  8.6   126  100  0.084
12   55           120          0.00284  8.6   126  100  0.084


Table 4.8: Result of economic load dispatch problem for 13-generators

S.N0 Iterations f(X)

1 50 18581.34

2 100 18655.89

3 150 18633.17

4 200 18574.99

5 250 18641.51

Fig. 4.9: Variation of cost with respect to the number of iterations for the 13-generator power system
Advantages and disadvantages
Some advantages of the PSOA are:
• it is gradient-free,
• it is easy to parallelize,
• simple concept,
• easy implementation,
• fast computation,
• robust search ability.

The major drawback of PSO, as with other heuristic optimization
techniques, is that it somewhat lacks a solid mathematical foundation for
analysis. It suffers from dependency on the initial point and parameters,
difficulty in finding optimal design parameters, and the stochastic
character of its final outputs.
