
Particle Swarm Optimization (PSO)
and its applications


The Inventors (1)

Prof. Russell Eberhart
Purdue School of Engg. & Tech.
eberhart@engr.iupui.edu
The Inventors (2)

James Kennedy
Kennedy_Jim@bls.gov

Graduated in psychology; not a professional researcher by training.
Particle Swarm Optimization

PSO emulates the swarm behavior of insect swarms, animal herds,
bird flocks, and fish schools, where the swarm searches for food
in a collaborative manner.
Each member of the swarm adapts its search pattern by learning
from its own experience and from the experience of other members.
Comparison with other Evolutionary Computation Techniques

Unlike genetic algorithms, PSO has no selection operation.

• All particles in PSO are kept as members of the population
through the course of the run.

• PSO is the only algorithm that does not implement survival of
the fittest.

• There is no crossover operation in PSO.

• Velocity updating resembles mutation in evolutionary programming (EP).

• The balance between global and local search is achieved by an
inertia weight factor (w) in the velocity-update equation.
Particle Swarm Optimization
• Each candidate solution is called a PARTICLE and represents
one individual of a population.
• The population is a set of vectors and is called a SWARM.
• The particles change their components and move (fly) in the
search space.
• They can evaluate their current position using the function to
be optimized, called the FITNESS FUNCTION.
• Particles also compare themselves to their neighbors and
imitate the best of those neighbors.
Initialization

Search space: D-dimensional

Xi = [xi1, …, xiD]^T = ith particle of the swarm
Vi = [vi1, …, viD]^T = velocity of the ith particle
Pi = [pi1, …, piD]^T = best previous position of the ith particle
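The initialization above can be sketched as follows; the search-space bounds, swarm size, and zero initial velocities are assumptions for illustration, since the slides do not fix them:

```python
import random

def init_swarm(n, D, lo=-5.0, hi=5.0, seed=0):
    """Randomly initialize n particles in a D-dimensional box [lo, hi]^D."""
    rng = random.Random(seed)
    # X: positions, V: velocities, P: best previous positions
    X = [[rng.uniform(lo, hi) for _ in range(D)] for _ in range(n)]
    V = [[0.0] * D for _ in range(n)]          # start at rest (an assumption)
    P = [xi[:] for xi in X]                    # pbest starts at the initial position
    return X, V, P
```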
PSO Contd …

• A swarm of particles flies through the parameter
space, searching for the optimum.
• Each particle is characterized by
• Position vector ….. xi(t)
• Velocity vector ….. vi(t)
[Figure: particle i shown with position xi(t) and velocity vi(t) on a surface plot of the fitness function.]
Swarming:

Swarming Contd …

Bingo!
Why Social Insects?
Problem-solving benefits include:
– Flexibility
– Robustness
– Self-organization
– No clear leader
– Use of past memory
– Swarming nature
– Colonial life
General PSO Algorithm:

• Each particle has
– Individual knowledge, pbest:
its own best-so-far position.

[Figure: particle i with velocity vi(t), position xi(t), and its pbest.]
General PSO Algorithm Contd …

• Each particle has
– Individual knowledge, pbest:
its own best-so-far position.
– Social knowledge, gbest:
the pbest of its best neighbor.

[Figure: particle i with vi(t), xi(t), its pbest, and the swarm's gbest.]
General PSO Algorithm Contd …

• Velocity update:
vi(t+1) = w×vi(t)
        + c1×rand×(pbest_i(t) − xi(t))
        + c2×rand×(gbest(t) − xi(t))

[Figure: the three velocity components acting on particle i at xi(t).]
General PSO Algorithm Contd …

• Velocity update:
vi(t+1) = w×vi(t)
        + c1×rand×(pbest_i(t) − xi(t))
        + c2×rand×(gbest(t) − xi(t))
• Position update:
xi(t+1) = xi(t) + vi(t+1)

[Figure: particle i moved from xi(t) by the combined velocity vi(t+1).]
General PSO Algorithm Contd …

• Velocity update:
vi(t+1) = w×vi(t)
        + c1×rand×(pbest_i(t) − xi(t))
        + c2×rand×(gbest(t) − xi(t))
• Position update:
xi(t+1) = xi(t) + vi(t+1)

where
w > (1/2)(c1 + c2) − 1,  0 < w < 1
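The convergence condition quoted on the slide can be checked with a small helper; this is a sketch of the stated bound only, with example parameter values that are assumptions, not values fixed by the slides:

```python
def satisfies_stability(w, c1, c2):
    """Check the slide's condition: w > (1/2)(c1 + c2) - 1 with 0 < w < 1."""
    return 0.0 < w < 1.0 and w > 0.5 * (c1 + c2) - 1.0

print(satisfies_stability(0.7, 1.5, 1.5))  # True:  0.7 > 0.5
print(satisfies_stability(0.7, 2.0, 2.0))  # False: 0.7 is not > 1.0
```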
Flow chart depicting the General PSO Algorithm:

Start
↓
Initialize particles with random position and velocity vectors.
↓
Loop until max iterations:
  For each particle (until all particles are exhausted):
    Evaluate fitness at the particle's position p.
    If fitness(p) is better than fitness(pbest), then pbest = p.
  Set the best of the pbests as gbest.
  Update each particle's velocity and position.
↓
Stop: return gbest as the optimal solution.

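The flow above can be sketched end to end as follows. The objective (a sphere function), the parameter values (w, c1, c2, Vmax), and the bounds are illustrative assumptions; the slides do not prescribe them:

```python
import random

def fitness(x):
    """Assumed objective for illustration: sphere function, minimum at 0."""
    return sum(v * v for v in x)

def pso(dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0, vmax=1.0, seed=1):
    rng = random.Random(seed)
    # Initialize particles with random positions and zero velocities.
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pbest_f = [fitness(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Velocity update: inertia + cognitive + social components.
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                v[i][d] = max(-vmax, min(vmax, v[i][d]))  # Vmax clamp
                x[i][d] += v[i][d]                        # position update
            f = fitness(x[i])
            if f < pbest_f[i]:                            # update pbest
                pbest[i], pbest_f[i] = x[i][:], f
                if f < gbest_f:                           # update gbest
                    gbest, gbest_f = x[i][:], f
    return gbest, gbest_f
```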
Maximum Velocity?
 Maximal velocity
• The velocity must be limited.
• This prevents swarm explosion.
• Vmax: if a particle's velocity is greater than Vmax (or less than
−Vmax), it is clamped to Vmax (or −Vmax).
• Vmax is the saturation point of the velocity.
 Comments on the inertia weight factor:
A large inertia weight (w) facilitates a global search, while a small
inertia weight facilitates a local search. Linearly decreasing the
inertia weight from a relatively large value to a small value through
the course of the PSO run gives better performance than fixed
inertia weight settings.
Larger w ----------- greater global search ability
Smaller w ---------- greater local search ability.
PSO with linearly decreasing inertia weight, w

The linearly decreasing inertia weight w is given by

w(k) = w0 − (w0 − w1) × k / I

where
k = search number (current iteration)
I = max. no. of iterations
w0 = 0.9
w1 = 0.4

Proposed by Shi and Eberhart.
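The schedule above can be written as a one-line helper; w0 = 0.9 and w1 = 0.4 follow the slide, and the printed values show the weight decaying from 0.9 at the start of the run to 0.4 at the final iteration:

```python
def inertia(k, I, w0=0.9, w1=0.4):
    """Linearly decreasing inertia weight: w(k) = w0 - (w0 - w1) * k / I."""
    return w0 - (w0 - w1) * k / I

print(inertia(0, 100))    # 0.9
print(inertia(100, 100))  # 0.4
```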


PSO with constriction factor
Introduced by Clerc and Kennedy.
The expression for velocity is modified as

Vi(d) = K×Vi(d) + c1×rand1i(d)×(pi(d) − xi(d)) + c2×rand2i(d)×(pg(d) − xi(d))

K = 2 / | 2 − φ − √(φ² − 4φ) |,   φ = c1 + c2,   φ > 4

φ is set to 4.1, which gives c1 = c2 = 2.05 and K = 0.729.

As a result, each of the (pi − xi) terms is multiplied by
0.729 × 2.05 = 1.49445.
The constriction factor guarantees convergence and improves the
convergence speed.
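The constriction coefficient can be computed directly from the formula above; with φ = 4.1 (c1 = c2 = 2.05, as on the slide) it evaluates to approximately 0.7298, which the slides round to 0.729:

```python
import math

def constriction(c1, c2):
    """Clerc-Kennedy constriction: K = 2 / |2 - phi - sqrt(phi^2 - 4*phi)|."""
    phi = c1 + c2
    assert phi > 4, "constriction requires phi = c1 + c2 > 4"
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

print(round(constriction(2.05, 2.05), 4))  # 0.7298
```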
Repulsive PSO
RPSO is a global optimization algorithm; it belongs to the class
of stochastic evolutionary global optimizers and is a variant of
particle swarm optimization (PSO).

vk+1 = ω×vk + a×R1×(x̂ − xk) + b×R2×ω×(ŷ − xk) + c×R3×ω×z

where
R1, R2, R3 : random numbers between 0 and 1
ω : inertia weight between 0.01 and 0.7
x̂ : best position of the particle
ŷ : best position of a randomly chosen other particle from
within the swarm
z : a random velocity vector
a, b, c : constants
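One RPSO velocity step for a single dimension can be sketched as below; the default values of ω and the constants a, b, c are illustrative assumptions (the slides only state that a, b, c are constants and that ω lies between 0.01 and 0.7):

```python
import random

def rpso_velocity(v, x, x_hat, y_hat, z, w=0.5, a=1.0, b=1.0, c=0.5,
                  rng=random):
    """One repulsive-PSO velocity update for one dimension, per the slide."""
    R1, R2, R3 = rng.random(), rng.random(), rng.random()
    return (w * v
            + a * R1 * (x_hat - x)       # attraction to the particle's own best
            + b * R2 * w * (y_hat - x)   # term toward another particle's best
            + c * R3 * w * z)            # random velocity component
```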
Comprehensive Learning PSO
Overcomes premature convergence.
Conventional PSO
– Learns simultaneously from pbest and gbest.
– May become trapped in local minima.
CLPSO
– Learns from the gbest of the swarm, the pbest of the particle, and
the pbests of other particles, on different dimensions.

– If D is the total number of dimensions of the optimization
problem, then m dimensions are randomly chosen to learn from
the gbest; out of the remaining D − m dimensions, some learn
from a randomly selected particle's pbest, and the remaining
dimensions learn from the particle's own pbest.
Initialize all particle velocities and positions randomly;
Set the initial positions as pbest (pi,d) and find gbest (pg,d);
For n = 1 to the max. number of generations,
  For i = 1 to the population size,
    For d = 1 to the problem dimensionality,
      Update the velocity according to the following equations:
      If ai(d) == 1
        Vi,d = wn × Vi,d + rand × (gbest,d − Xi,d)
      Else if bi(d) == 1
        Vi,d = wn × Vi,d + rand × (pbestf,d − Xi,d)
      Else
        Vi,d = wn × Vi,d + rand × (pbest,d − Xi,d)
      End if
      Limit the velocity;
      Update the position according to the equation
      Xi,d = Xi,d + Vi,d
    End-of-d;
    Compute the fitness of the particle;
    Update the historical information regarding pbest_i and gbest if
    needed;
  End-of-i;
  Terminate if the requirements are met;
End-of-n;
where ai(d) and bi(d) decide which randomly chosen dimensions learn
from random particles, gbest is the global best value, pbest,d is the
personal best of the ith particle in the dth dimension, and pbestf,d is
the personal best of the fth particle in the dth dimension.
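The per-dimension branching in the CLPSO velocity rule can be sketched as follows; the flags ai and bi select the exemplar for this dimension, and the default inertia weight is an assumption for illustration:

```python
import random

def clpso_velocity(v, x, gbest_d, pbest_own_d, pbest_rand_d, ai, bi,
                   w=0.5, rng=random):
    """One CLPSO velocity update for one dimension of one particle."""
    if ai == 1:
        return w * v + rng.random() * (gbest_d - x)       # learn from gbest
    elif bi == 1:
        return w * v + rng.random() * (pbest_rand_d - x)  # from another particle
    else:
        return w * v + rng.random() * (pbest_own_d - x)   # from own pbest
```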
Clonal PSO
The best features of PSO and the clonal selection principle of the
Artificial Immune System are combined.

The local and global best positions of particles are not directly used
in deciding their new positions.

Rather, the velocity of each particle is updated from knowledge of the
previous velocity and the variable inertia weights.

The location of each particle is then changed in the next search using
its updated velocity information.

The fitness values of all particles are then evaluated and the overall
best location is selected.
In the next search, each particle, following the clonal selection
principle, occupies the best position achieved so far, and then the
particles diversify.
As the search process continues, all particles proceed towards the best
possible location, which then represents the desired solution.
The velocity update equation of CPSO becomes

vid(k+1) = H(k) × vid(k)

where
H(k) = H0 − (H0 − H1) × k / I
k = search number (current iteration)
I = max. no. of iterations
H0 = 0.9
H1 = 0.4

Position update: xid(k+1) = xid(k) + vid(k+1)


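The CPSO update above reduces to two short functions; H0 = 0.9 and H1 = 0.4 follow the slide, and `cpso_step` applies one velocity-then-position update for a single dimension:

```python
def H(k, I, H0=0.9, H1=0.4):
    """Decreasing weight: H(k) = H0 - (H0 - H1) * k / I."""
    return H0 - (H0 - H1) * k / I

def cpso_step(x, v, k, I):
    """One CPSO step for one dimension: v <- H(k)*v, then x <- x + v."""
    v_new = H(k, I) * v
    return x + v_new, v_new
```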
Comparison of CPU time used during training operation

Nonlinear system              PSO (in seconds)   GA (in seconds)
Example-1 trained at -30 dB   0.3200             13.5090
Example-1 trained at -10 dB   0.3300             12.4080
Example-2 trained at -30 dB   0.3500             12.7990
Example-2 trained at -10 dB   0.3700             13.8390

Conclusion
The PSO-based nonlinear system identification involves:
(1) Low computational complexity
(2) Faster convergence during the training operation
compared to the GA-based approach
(3) Avoidance of local minima during the training of
weights (derivative-free training)
