
Int. J. Mathematical Modelling and Numerical Optimisation, Vol. 9, No. 4, 2019

A novel particle swarm optimisation with search space tuning parameter to avoid premature convergence

Raja Chandrasekaran* and R. Agilesh Saravanan
Department of ECE,
KL University,
Vijayawada, India
Email: rajachandru82@yahoo.co.in
Email: agileshece@gmail.com
*Corresponding author

D. Ashok Kumar
Department of Biomedical Engineering,
SRMIST,
Kattankulathur, Chennai, India
Email: ashok.d@ktr.srmuniv.ac.in

N. Gangatharan
Department of ECE,
R.M.K. College of Engineering and Technology,
Puduvoyal, Chennai, India
Email: n.gangatharan@yahoo.com

Abstract: Particle swarm optimisation (PSO) is a popular optimisation technique
inspired by the navigational intelligence of bird flocks. The technique has
remained popular among researchers for several decades because every generation
of the population is guided by its zonal (local) and universal (global) best
members. In several trials on mathematical benchmarks and real-time
applications, PSO has been found to outperform several other optimisation
techniques. However, the simple orientation style of the algorithm often leads
the population to premature convergence. The inertia weight parameter is used
to tune the explorability of the population. In this paper, an inertia weight
tuning scheme based on a zonal monitor (success in the recent iterations) is
redressed by including a universal monitor (success from a universal fitness
perspective). The proposed algorithm outperforms both the conventional PSO and
the PSO with zonal monitor alone. The inertia weight of the PSO with zonal
monitor is also not dynamic, whereas the inertia weight of the proposed PSO is
found to be more dynamic, tuning the explorability with regard to both the
zonal and universal context of fitness.

Copyright © 2019 Inderscience Enterprises Ltd.


Keywords: particle swarm optimisation; PSO; adaptive inertia weight; search space tuning.

Reference to this paper should be made as follows: Chandrasekaran, R.,
Saravanan, R.A., Kumar, D.A. and Gangatharan, N. (2019) ‘A novel particle
swarm optimisation with search space tuning parameter to avoid premature
convergence’, Int. J. Mathematical Modelling and Numerical Optimisation,
Vol. 9, No. 4, pp.382–399.

Biographical notes: Raja Chandrasekaran received his PhD from the Anna
University, Chennai in the Faculty of Information and Communication
Engineering in 2016, specialising in medical image processing, his MTech in
Biomedical Signal Processing and Instrumentation in 2005 from the SASTRA
University and his BE in Electronics and Communication Engineering in 2003
from the Bharathidasan University. From 2005 to date, he has been working
as an Assistant Professor in various engineering colleges in India. His
present affiliation is with the KL University, Vijayawada, where he is
designated as a Professor in ECE. To his credit, he has published five
research papers in SCI- and SCIE-indexed international journals and
international-level conferences. He has also published in other reputed
journals and conferences. His research interests are digital image
processing, wavelets, optimisation (swarm intelligence) and deep learning.

R. Agilesh Saravanan has been pursuing his PhD at the K.L. University,
Vijayawada in the Faculty of Electronics and Communication Engineering
since 2017, specialising in signal processing. He received his Master of
Engineering in Applied Electronics from the Velammal Engineering College,
Anna University, Chennai in 2014 and his BE in Electronics and Communication
Engineering from the Chettinad College of Engineering and Technology,
Anna University, Chennai in 2012. He has been working as an Assistant
Professor in various engineering colleges since 2014. His present
affiliation is with the KL University, Vijayawada, where he is designated
as an Assistant Professor in ECE. His research interests are ultrasound
imaging, sensor array processing and compressed sensing.

D. Ashok Kumar received his PhD from the SRM University, Chennai
in the Department of Biomedical Engineering, Faculty of Engineering
and Technology in 2016. He received his MTech in Biomedical Signal
Processing and Instrumentation in 2005 from the SASTRA University and
his BE in Electrical and Electronics Engineering in 2003 from the Bharathiar
University. From 2006 to date, he has been working as an Assistant
Professor in various engineering colleges in India. His present affiliation
is with the SRM IST, Chennai, where he is designated as an Associate
Professor in BME. To his credit, he has published 14 research papers in
Scopus-indexed international journals and international-level conferences. He
has also published in other reputed journals and conferences. His research
interests are digital image processing, biomechanics, and rehabilitation
engineering.

N. Gangatharan received his PhD from the Multimedia University in 2007,
his MBA and ME in Microwave and Optical Engineering in 1999 and 1990
respectively and his BE in Electronics & Communication Engineering in 1988,
all from the Madurai Kamaraj University, and his ME in Computer Science &
Engineering from the Manonmaniam Sundaranar University in 1997. From
1988 to 2010, he served in various positions in reputed institutions in India
and abroad. Since 2010, he has been working as Head, Department of ECE
at RMK College of Engineering & Technology, Chennai. To his credit, he
has published more than 50 papers. His research interests include signal and
image processing.

1 Introduction

Particle swarm optimisation (PSO) is a popular search- and population-based
optimisation algorithm, and it has been applied in many science and
engineering applications. The PSO's explorability is specified by its
parameters, including the inertia weight and the fitness (acceleration)
coefficients. Most recent enhancements are based on adjusting the inertia
weight while keeping the other accelerating parameters of PSO constant (Majid
and Arsad, 2017). In PSO, each individual represents a latent solution and is
termed a 'particle'. A population of potential solutions is then evolved
through successive iterations. The most significant advantages of PSO,
compared to other optimisation strategies, lie in its fast convergence
towards a universal best, its easily implementable code and its few
parameters to set.
Clustering is presently becoming a necessary data handling technique in data
mining (Cohen and de Castro, 2006). Clustering is a significant procedure for
data mining, and the technique aims to separate the existing data into a
specified number of similar groups based on the correlation among the data.
Serapio et al. (2016) state that the merits of PSO in clustering are its
ability to exploit solutions with high resolving power, its simplicity of
implementation, its low computational cost and the small number of parameters
to set. Santos et al. (2017) reported that PSO-based clustering is more
consistent and requires less computational effort than a hybrid proposal
combining PSOC and the K-means algorithm.
Apart from clustering, PSO is used in stochastic resonance algorithms. Owing
to the troubles with the stochastic resonance algorithm (Benzi et al., 1981),
a PSO-optimised stochastic resonance algorithm with a mutation operator has
been proposed. The outcomes indicated that when the mutation operator acted
on the SR optimisation parameters, divergence was ruled out and the stability
of the iterative algorithm was improved. By adding an inertia weight
depression strategy to the PSO algorithm (Tong et al., 2018), the iteration
speed was improved at the same time.
The basic PSO (Kennedy and Eberhart, 1995) is not the finest tool for all
engineering problems, as it is time-consuming in some instances and converges
to zonal optima (Kessentini et al., 2011) in others. The new PSO algorithm
(w-PSO) (Kessentini and Barchiesi, 2015) introduces a new adaptive parametric
setting and is easy to implement, as the acceleration coefficients are
constant and the inertia weight is dynamically updated using a simple
assessment of the particles' best positions.
Accelerating the convergence speed and keeping off the zonal optima have
become the two most significant and appealing goals of PSO research. A number
of variant PSO algorithms have, therefore, been proposed to attain these two
goals (Ciuprina et al., 2002; Liang et al., 2006). It is considered difficult
to accomplish both ends simultaneously. The efficiency of the PSO algorithm
with a flexible exponential inertia weight strategy (FEPSO) is corroborated
on a suite of benchmark problems with different dimensions (Amoshahy et al.,
2016).
Tizhoosh (2005a) introduced the idea of opposition-based learning (OBL) in
machine intelligence, where the key opinion is to incorporate counter-ideas
and opposite numbers. Inclusion of the OBL technique enhanced the outcomes.
Multiple utilities of OBL are discussed in Tizhoosh (2005a) and Tizhoosh
(2005b). In Han and He (2007), an innovative algorithm called opposition-based
PSO (OPSO) was proposed, incorporating OBL for population initialisation,
during the evolution, and for enhancing the population with a universal best
measure. The objective of OPSO was to tackle distorted trade-offs. Wang et al.
(2007) proposed an OPSO technique employing opposition-based start-up,
evolution over varying zones, and a dynamic Cauchy mutation operator. The
inclusion of GOBL aids speedy convergence, whereas the Cauchy mutation aids
the escape of particles trapped in zonal optima. The results of GOPSO were
promising in many optimisation problems. The authors also presented a
generalised opposition-based PSO with dynamic population size (DP-GOPSO), an
extension of the GOPSO algorithm where a dynamic population was included. A
multi-start OPSO algorithm with an adaptive velocity was proposed in
Massimiliano (2013); a re-initialisation technique based on two diversity
measures for the swarm was employed to vanquish the issue of early
convergence and stagnation.
While all of the above are based on mutation, tuning the inertia weight to
trade off between exploration and exploitation is more efficient. Shahzad
et al. (2014) introduced a probabilistic opposition-based PSO with velocity
clamping and inertia weights, named OvcPSO. In OvcPSO the main areas in
consideration are velocity clamping and inertia weights. In Si et al. (2014),
OpbestPSO was introduced using the OBL methodology. In the OpbestPSO
algorithm, the population is initially declared chaotically and the opposites
of its members are calculated. From the two swarms (i.e., the chaotically
declared swarm and its opposite swarm), the best members are retained as the
swarm for the rest of the algorithm.
Farooq et al. (2017) integrated GOBL to initialise the PSO swarm, and a
modified inertia weight pattern was used. The objective is to obtain an
initial swarm with fit particles. The intention behind applying the above
methodology for the inertia weight is twofold: to trade off between
exploration and exploitation, and to subjugate the trouble of trapping in
zonal optima.
PSO has been widely used in past decades as an optimisation method for
unimodal, multimodal, separable and non-separable optimisation problems. A
popular variant of PSO is PSO-W (inertia weight PSO). Attempts have been made
to modify the PSO (Gupta et al., 2017) with selective multiple inertia
weights (SMIWPSO) to enhance the searching capability of PSO.
Some other works have taken PSO-based approaches associated with the inertia
weight. Shi and Eberhart (1998a) introduced the fixed inertia weight into the
conventional PSO. Subsequently, Shi and Eberhart (1998b) used a linearly
decreasing inertia weight PSO. It tends to have more universal search
capability at first and more zonal search capability around the later
generations. But fixed or linearly decreasing inertia weight PSOs have very
little effect when employed to track nonlinear dynamic systems in many
practical usages. Chaos has also been applied to PSOs. Chaos is aperiodic
behaviour in a deterministic system that is sensitive to the initial
conditions, so that long-term prediction becomes impossible (Strogatz, 2001).
This makes chaos (Cheng et al., 2017) greatly beneficial for solving the
problems of nonlinear dynamic systems.
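The chaos-based inertia weights cited above are commonly driven by the logistic map. The following is a minimal illustrative sketch, not the scheme of any specific cited work; the scaling of the map value into a weight range and the bounds [0.4, 0.9] are our assumptions:

```python
def logistic_map_weights(n_iters, z0=0.3, w_min=0.4, w_max=0.9):
    """Generate a chaotic inertia-weight sequence from the logistic map
    z_{t+1} = 4 z_t (1 - z_t), scaled into [w_min, w_max]."""
    z, weights = z0, []
    for _ in range(n_iters):
        z = 4.0 * z * (1.0 - z)              # fully chaotic regime (r = 4)
        weights.append(w_min + (w_max - w_min) * z)
    return weights
```

Because the sequence is aperiodic yet deterministic, it injects variation into the search without the cost of a random number generator.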
The performance of PSO is linked not just to the inertia weight, but also to
the population density. For a higher-order case, a study of the swarm size
and inertia weight selection for PSO performance in system identification is
given by Xueyan and Zheng (2015), who, through simulations of different sets
of the two parameters, give a selection of the two parameters that yields an
enhanced PSO output.
Recognising the impact of PSO's parameter settings, many trials of parameter
tuning have been attempted. In some articles, such as Massimiliano (2013),
Meng and Jia (2008) and Abdelbar et al. (2005), researchers suggest linearly
decreasing the inertia weight from 0.9 to 0.4 as the algorithm progresses.
Shi and Eberhart (2001) introduced an adaptive fuzzy PSO method. Fuzzy logic
has also been applied to enhance PSO's performance in articles such as
Noroozibeyrami and Meybodi (2008). One of the principal defects of
optimisation algorithms such as PSO is trapping in zonal minima, and this
problem becomes more severe as the dimension of the search space increases
(Boussaid et al., 2013). To counter this problem, revised models of PSO such
as cooperative PSO (CPSO) have been proposed (Eberhart and Kennedy, 1995).
Consequently, Gholamian and Meybodi (2015), in order to grow and enhance the
algorithm, take advantage of cooperative PSO, comprehensive learning PSO and
fuzzy logic, while leveraging the benefits of roles and procedures such as
the zonal search function and the cloning procedure, and put forward the
enhanced comprehensive learning cooperative particle swarm optimisation with
fuzzy inertia weight (ECLCFPSO-IW) algorithm. With this algorithm, the
researchers attempt to improve the mentioned deficiencies.
Biswas et al. (2013) proposed a strategy for particle movement with random
weightings. The researchers brought out a strategy to restrain particles
from gathering in unfavourable zones, as any wrong move can lead particles
to further inappropriate moves. Particles thus avoid any misguidance
projected by the zonal and universal best. Execution on benchmark functions
shows significant improvement with the above approach.
A new PSO algorithm known as Gompertz increasing inertia weight (GIIW) is
implemented by Ahmed et al. (2013) and compared in simulation with the
standard PSO. It shows that PSO with GIIW gives good performance, with quick
convergence capability and aggressive movement narrowing towards the solution
region.
The modified version of the swarm intelligence technique called particle
swarm optimisation with improved inertia weight (PSOIIW) (Saha et al., 2012)
is applied to the IIR adaptive system identification problem. It performs an
organised random search for an unknown parameter by manipulating a population
to converge to an optimal solution. In this technique, an iteration-based
inertia weight is calculated individually for each particle, which results in
a better search within the multidimensional search space. The exploration and
exploitation of the entire search space can be handled efficiently in PSOIIW,
along with the benefit of overcoming premature convergence.
Taherkhani and Safabaksh (2015) have shown improvements over the conventional
PSO by increasing the inertia weight (exploration) after consecutive failures
in fitness, and vice versa. As this takes account only of the zonal
improvement (or decay) in fitness, it neglects to consider the universal view
of the expected fitness for a particular iteration; i.e., if the expected
fitness after t iterations is fixed as f, the PSO proposed by Taherkhani and
Safabaksh (2015) favours zonal convergence if the fitness improves by small
increments in the last three iterations alone. This may be an avenue for
premature convergence. We therefore refer to the PSO proposed by Taherkhani
and Safabaksh (2015) as the PSO with zonal monitor.
We propose a PSO with universal monitor, in which conditions for a universal
monitor are included along with the zonal monitor; i.e., an ideal expected
value for a particular iteration is calculated. Even though a particular
member passes the conditions of the zonal monitor, with improvement in the
last few iterations, zonal exploitation is allowed only if the member also
passes the universal condition, i.e., only if the expected fitness of the
iteration has been reached.

2 Proposed methodology

In this study, the PSO is analysed for its trade-off between exploration and
exploitation, achieved by adaptively tuning the inertia weight. The basic PSO
update is given by equation (1), and the flow of the proposed method is shown
in Figure 1. In equation (1), w is the inertia weight parameter that tunes
the size of the search space.

xi(t + 1) = xi(t) + vi(t + 1)
vi(t + 1) = w vi(t) + R1 c1 (Pi − xi(t)) + R2 c2 (Pg − xi(t))            (1)
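Equation (1) translates directly into code. Below is a minimal pure-Python sketch of one iteration over the whole swarm; the function and variable names are ours, and a full PSO would also evaluate fitness and update Pi and Pg after this step:

```python
import random

def pso_step(x, v, p_best, g_best, w=0.7, c1=1.5, c2=1.5):
    """One update of equation (1) for every particle i and dimension d:
    v_i(t+1) = w v_i(t) + R1 c1 (P_i - x_i(t)) + R2 c2 (P_g - x_i(t))
    x_i(t+1) = x_i(t) + v_i(t+1)"""
    for i in range(len(x)):
        for d in range(len(x[i])):
            r1, r2 = random.random(), random.random()   # R1, R2 in [0, 1)
            v[i][d] = (w * v[i][d]
                       + r1 * c1 * (p_best[i][d] - x[i][d])
                       + r2 * c2 * (g_best[d] - x[i][d]))
            x[i][d] += v[i][d]
    return x, v
```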

As discussed in the literature survey, various researchers have adopted
strategies such as constant inertia weights, random inertia weights and
linearly decreasing inertia weights. The strategy of tuning the inertia
weight with respect to the fitness stands at the cutting edge. Taherkhani and
Safabaksh (2015) tuned the inertia weight to decrease or increase according
to whether the fitness has improved or degraded in the last two iterations,
as given by equation (2):

          {  1,   if fit(xi(t + 1)) < fit(Pi(t))
δi(t) =   {                                                              (2)
          { −1,   otherwise

As in the conventional PSO, the population (i = 1 : m) is initialised
randomly and the fitness of the mathematical benchmark function is assessed
for each member. The universal and zonal best assignments, and the
orientation of the other members towards the zonal and universal best, are
executed as in the conventional PSO, whereas the inertia weight w is tuned
based on both the zonal monitor, equation (2), and the universal monitor,
equation (3).

Figure 1 PSO with zonal and universal monitor


Considering the fitness improvement in recent iterations alone (zonal
monitoring), without a proposed fitness for the current iteration in a
universal context (universal monitoring), will lead to premature convergence.
Hence we establish a proposed fitness from a universal perspective, and only
if the universal proposed fitness is reached along with the zonal success is
exploitation of a zonal region allowed; i.e., the universal monitor permits
exploitation only if equation (3) is satisfied.

(fit)iter < (prop-fit)iter                                               (3)

where (fit)iter is the actual fitness of the current iteration and
(prop-fit)iter is the proposed fitness of the current iteration.
The proposed fitness is modelled by equation (4) and shown in Figure 2. The
proposed fitness is modelled to improve exponentially, with high variance at
the initial iterations and less at the later iterations.

proposed fitness = ((itermax − iter) / (2σ²)) + k                        (4)

where σ (sigma) is the slope-regulating parameter, k is a correlation factor,
itermax is the total number of iterations and iter is the current iteration.
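As a sketch, the schedule of equation (4) can be computed per iteration as below; the functional form follows our reading of equation (4), and the default parameter values are illustrative only:

```python
def proposed_fitness(iter_t, iter_max, sigma=4.9, k=0.0):
    """Target fitness for iteration iter_t, per equation (4):
    ((iter_max - iter_t) / (2 * sigma**2)) + k.
    The target relaxes towards k as iter_t approaches iter_max;
    sigma regulates the slope of the schedule."""
    return (iter_max - iter_t) / (2.0 * sigma ** 2) + k
```

For a minimisation problem, a particle whose actual fitness is below this target for the current iteration satisfies the universal monitor of equation (3).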


In equation (2), the zonal monitor registers success when the fitness has
improved in the last two iterations; this is the condition considered for
exploitation of a zonal place, i.e., for decreasing w. In equation (3), the
universal monitor is imposed as an 'AND' condition alongside the zonal
monitor of equation (2). The universal monitor puts the additional
stipulation that the actual fitness of the current iteration should be better
than the proposed fitness for the current iteration, as given by
equation (3).
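The combined rule can be sketched as follows. Minimisation is assumed (lower fitness is better), and the step size and bounds for w are illustrative assumptions, not values taken from the paper:

```python
def update_inertia(w, zonal_success, fit_iter, prop_fit_iter,
                   step=0.05, w_min=0.1, w_max=0.9):
    """Decrease w (exploit a zonal region) only when BOTH monitors pass:
    the zonal monitor (recent improvement, equation (2)) AND the
    universal monitor fit_iter < prop_fit_iter (equation (3)).
    Otherwise increase w to keep exploring."""
    if zonal_success and fit_iter < prop_fit_iter:
        return max(w_min, w - step)     # both monitors succeed: exploit
    return min(w_max, w + step)         # otherwise: explore
```

The 'AND' of the two conditions is what prevents a run of small zonal improvements from shrinking the search space while the swarm is still far from the universal target.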

Figure 2 Expected fitness model (see online version for colours)

3 Results and discussion

The parameter settings corresponding to each benchmark function, for the
execution of our hybrid models, are derived from the literature, as shown in
Table 1. The last column of the table shows the parameter settings adopted
for our hybrid models. We have adopted the parameters by reference to the
other implementations for the sake of comparison.
Figure 3 G-best evolution for (a) Sphere function, (b) Rastrigin function,
(c) Griewank function, (d) Step function, (e) Ackley function, (f) Branin
function, (g) Shubert function (see online version for colours)

Figure 4 Fitness in-terms of accuracy (see online version for colours)

Figure 4 shows the universal best attainments for three benchmark functions
(Sphere, Rastrigin and Griewank). The universal attainment trend is also
better for the other four benchmark functions, as can be inferred from the
normalised best fitness values in Table 2. It can be inferred from
Figures 3(a)–3(g) (labelled as PSO1, the PSO with zonal monitor, and PSO2,
the PSO with universal monitor) that the PSO with universal monitor reaches a
better best fitness than the PSO with zonal monitor. For the Sphere,
Rastrigin and Griewank functions, it is deduced from the shape of the
PSO-with-zonal-monitor curves that there is no convergence/improvement in
fitness after the first ten iterations. For the Step function, there is no
improvement in fitness from that PSO throughout the complete run. Hence we
reason that the PSO with zonal monitor alone is highly prone to the premature
convergence problem. In contrast, the PSO with universal monitor has fitness
improving throughout the iterations. With respect to the Ackley, Branin and
Shubert functions, the conventional PSO provides significant improvement in
fitness through the iterations, but the performance of the proposed PSO is
much better.

Table 1 Algorithm settings

Function    Number of    Length of      Minimum and          New model
            members      each member    maximum value        settings*
Sphere      60^b         30^b,c         [–100, 100]^c        60, 30, [–100, 100]
Rastrigin   60^b         30^b,c         [–5.12, 5.12]^b,c    60, 30, [–5.12, 5.12]
Griewank    50^c         30^c           [–600, 600]^c        60, 30, [–600, 600]
Step        –            30^a           [–100, 100]^a        60, 30, [–100, 100]
Ackley      –            30^a           [–32, 32]^a          60, 30, [–32, 32]
Branin      –            2^a            [–5, 10]^a           60, 20, [–5, 10]
Shubert     –            2^a            [–10, 10]^a          60, 30, [–100, 100]

Notes: *number of members, length of each member, min and max values.
^a Taherkhani and Safabaksh (2015), ^b Yan and Shi (2011) and ^c He et al. (2009).

Table 2 Statistical analysis of the outputs

                                         PSO (zonally adaptive   PSO (universally adaptive
Function    Statistical analysis         inertia weight)         inertia weight)
Sphere      Min (best)                   3.8053e+06              1.6621e+06
            Max (worst)                  6.0203e+06              6.4227e+06
            Normalised best-fitness      0.6321 (36.79%)         0.2588 (74.12%)
            Mean                         3.8385e+06              1.8945e+06
            Standard deviation           2.8868e+05              5.2864e+05
            Variance                     8.3335e+10              2.7946e+11
Rastrigin   Min (best)                   3.5089e+06              7.5050e+05
            Max (worst)                  6.9027e+06              7.3295e+06
            Normalised best-fitness      0.5083 (49.17%)         0.1024 (89.76%)
            Mean                         3.5384e+06              1.0632e+06
            Standard deviation           2.6135e+05              7.4679e+05
            Variance                     6.8303e+10              5.5770e+11
Griewank    Min (best)                   1.1067e+03              298.7784
            Max (worst)                  1.7172e+03              1.8234e+03
            Normalised best-fitness      0.6445 (35.55%)         0.1639 (83.61%)
            Mean                         1.1129e+03              352.4827
            Standard deviation           51.4288                 182.8284
            Variance                     2.6449e+03              3.3426e+04
Step        Min (best)                   6.1761e+04              3.4219e+04
            Max (worst)                  6.1761e+04              7.2260e+04
            Normalised best-fitness      1                       0.4736
            Mean                         6.1761e+04              3.5181e+04
            Standard deviation           8.0848e-11              5.2459e+03
            Variance                     6.5364e-21              2.7520e+07
Ackley      Min (best)                   3.2161e-08              2.1493e-08
            Max (worst)                  11.1061                 9.0506
            Normalised best-fitness      2.8958e-09              2.3748e-09
            Mean                         0.0363                  1.5847
            Standard deviation           0.4380                  2.9588
            Variance                     0.1919                  8.7547
Branin      Min (best)                   0.3979                  0.3979
            Max (worst)                  0.9805                  1.6421
            Normalised best-fitness      0.4058                  0.2423
            Mean                         0.4353                  0.4154
            Standard deviation           0.1353                  0.1246
            Variance                     0.0183                  0.0155
Shubert     Min (best)                   –1.0933e+23             –5.0974e+19
            Max (worst)                  –4.0634e+16             –1.4619e+17
            Normalised best-fitness      2.6905e+06              348.6717
            Mean                         –4.7821e+22             –4.1918e+19
            Standard deviation           5.0285e+22              1.7067e+19
            Variance                     2.5286e+45              2.9129e+38

A quantitative comparison of both models for all the test functions is given
in Table 2. The values show that the best fitness for all seven benchmark
functions is accomplished by the PSO with universally adaptive inertia
weight. Table 2 clearly depicts that the normalised best fitness for the PSO
with the proposed universal monitor versus the PSO with zonal monitor alone
is 0.2588 versus 0.6321, 0.1024 versus 0.5083 and 0.1639 versus 0.6445,
respectively, for the Sphere, Rastrigin and Griewank functions. Likewise,
better fitness is achieved with the proposed model for the other four
benchmark functions. Relative to the corresponding start-up values, the PSO
with the proposed universal monitor has reached 74.12%, 89.76% and 83.61% of
the ideal values for the first three benchmark functions. The comparison of
the same for the PSO with zonal monitor alone is shown in Figure 4.

Figure 5 Fitness and inertia weight evolution for (a) PSO with zonal monitor,
Sphere function, (b) PSO with universal monitor, Sphere function, (c) PSO
with zonal monitor, Rastrigin function, (d) PSO with universal monitor,
Rastrigin function, (e) PSO with zonal monitor, Griewank function, (f) PSO
with universal monitor, Griewank function (see online version for colours)

Figures 5(a) and 5(b) show the fitness attainment (top) and the inertia
weight (bottom) of a particular member in the PSO with zonal monitor and with
universal monitor, respectively. In the former, the fitness does not improve
after around 50 iterations, whereas in the latter there are slight
improvements in fitness around the 300th and 450th iterations. With regard to
the inertia weights, the inertia weight is maintained constant at 0.15, as
shown in Figure 5(a), a value taken on before the 50th iteration. As the
search space would have been confined to a very small region with an inertia
weight of 0.15, there would be no further scope for improvement in fitness.
By contrast, in Figure 5(b) there is a steep increase in the inertia weight
around the 110th iteration, which restores the ability to search a large
space and gives scope for the slight improvements in fitness around the 300th
and 450th iterations.
Figure 5(c) portrays the non-improvement of fitness and the absence of any
alteration in the inertia weight, even though the fitness has not reached
satisfactory levels, in the PSO with zonal monitor. From the normalised
fitness value of 0.5083 in Table 2, it is understood that the PSO with zonal
monitor converges to a zonal optimum half-way. The PSO with universal monitor
changes the size of the search space (at the 219th iteration, w increases to
0.3621) until the end of the iterations, even though there is not much
improvement in fitness.
Figures 5(e) and 5(f) show, for the PSO with zonal and universal monitor
respectively, the fitness and the inertia weight changes of a particular
member for the Griewank function. For the PSO with zonal monitor, though the
fitness is not convincing in the first iterations, the swarm converges to a
zonal optimum at that point in time, i.e., there is no further explorability.
In the PSO with universal monitor, on the other hand, there is a convincing
trade-off between exploration and exploitation (increasing and decreasing
values of the inertia weight), providing scope for improvement in fitness
until the last iteration.

Table 3 Sigma tuning

                                         PSO with universally adaptive inertia weight
Function    Statistical analysis         Sigma = –1       Sigma = 2        Sigma = 4.9
Sphere      Min (best)                   8.2347e+05       8.1309e+05       1.6621e+06
            Max (worst)                  8.2347e+05       1.2616e+06       6.4227e+06
            Normalised best-fitness      1 (0%)           0.64 (36%)       0.2588 (74.12%)
Rastrigin   Min (best)                   6.2376e+05       6.2305e+05       7.5050e+05
            Max (worst)                  6.2991e+05       9.0297e+05       7.3295e+06
            Normalised best-fitness      0.99 (1%)        0.69 (31%)       0.1024 (89.76%)
Griewank    Min (best)                   0.7044e+03       0.2471e+03       298.7784
            Max (worst)                  0.7337e+03       4.7519e+02       1.8234e+03
            Normalised fitness           0.96 (4%)        0.52 (48%)       0.1639 (83.61%)

3.1 Sigma tuning

As shown in equation (4), σ (sigma) is the parameter that regulates the slope
of the exponent. When sigma is chosen as –1, there is almost no improvement
in fitness: the maximum gain in fitness is 4%, for the Griewank function,
whereas the Sphere function shows no improvement at all. This is because of
uncontrolled variation in the value of the inertia weight, which leads to
zonal convergence before even a minimal fitness improvement. This unstable
movement of the swarms causes undesirable results. With sigma taken as 2, the
swarms show an indication of coming out of the instability, but still
converge prematurely in the first few iterations. With sigma equal to 4.9,
the swarms exhibit reasonable stability, i.e., convergence is observed only
after attainment of 89.76% of the ideal value.

4 Conclusions

The PSO with universal monitor reaches the best fitness compared to the PSO with zonal
monitor. Likewise it is deduced from the shape of the PSO with zonal monitor, there is
no convergence/ improvement in fitness after a few iterations itself. Hence we reason
that the PSO with zonal monitor alone is highly prone to the premature convergence
problem. Relatively, the PSO with universal monitor has fitness improving all over the
iterations. The normalised best fit for the PSO with the proposed universal monitor
versus PSO with zonal monitor alone 74.12%, 89.76% and 83.61% of the ideal values
for the three benchmark functions.
In the PSO with the zonal monitor, the inertia weight remains constant after the first
few iterations. Since the small inertia weight confines the search space to a very small
area, there is no further scope for improvement in fitness. In the PSO with the universal
monitor, by contrast, the inertia weight can increase even after the mid-iterations,
which restores explorability over a larger space and gives scope for fitness
improvements in later iterations. In future research, the authors intend to tune the
inertia weight using the evolutionary state of the swarms in addition to fitness
conditions.
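The contrast described above can be sketched as two toy update rules. The step size, bounds, and exact triggering conditions below are illustrative assumptions, not the paper's equations; the point is only the qualitative difference: a zonal monitor can only shrink the weight, whereas a universal monitor can raise it again when the universal best stagnates.

```python
def zonal_update(w, improved_recently, step=0.05, w_min=0.4):
    """Zonal monitor (sketch): the inertia weight only ever shrinks on
    local stagnation, so the search space stays confined once reduced."""
    return w if improved_recently else max(w_min, w - step)

def universal_update(w, universal_best_improved, step=0.05,
                     w_min=0.4, w_max=0.9):
    """Universal monitor (sketch): stagnation of the universal best
    raises the weight again, re-expanding the search space so fitness
    can still improve in later iterations."""
    if universal_best_improved:
        # Progress: shrink the weight to exploit the current region.
        return max(w_min, w - step)
    # Stagnation: enlarge the weight to explore a wider region.
    return min(w_max, w + step)
```

Iterating `zonal_update` under stagnation pins the weight at `w_min` permanently, while `universal_update` lets it climb back toward `w_max`, which is the dynamic behaviour credited to the proposed monitor.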

References

Abdelbar, A.M., Abdelshahid, S. and Wunsch, D.C. (2005) ‘Fuzzy PSO: a generalization of particle
swarm optimization’, in Proceedings of International Joint Conference on Neural Networks,
pp.1086–1091.
Ahmed, W., Shirazi, M.F., Jamil, O.M. and Abbasi, M.H. (2013) ‘PSO with Gompertz increasing
inertia weight’, 2013 IEEE 8th Conference on Industrial Electronics and Applications (ICIEA),
pp.923–928.
Amoshahy, M.J., Shamsi, M. and Sedaaghi, M.H. (2016) ‘A novel flexible inertia weight particle
swarm optimization algorithm’, PLOS ONE, 25 August, DOI: 10.1371/journal.pone.0161558.
Benzi, R., Sutera, A. and Vulpiani, A. (1981) ‘The mechanism of stochastic resonance’, J. Phys. A,
Math. Gen., Vol. 14, No. 11, pp.453–457.
Biswas, A., Lakra, A.V., Kumar, S. and Singh, A. (2013) ‘An improved random inertia
weighted particle swarm optimization’, International Symposium on Computational and Business
Intelligence, IEEE, pp.96–99, 978-0-7695-5066-4/13, DOI: 10.1109/ISCBI.2013.
Boussaid, I., Lepagnot, J. and Siarry, P. (2013) ‘A survey on optimization metaheuristics’, Information
Sciences, Vol. 237, pp.82–117.
Cheng, Y-H., Kuo, C-N. and Lai, C-M. (2017) ‘Comparison of the adaptive inertia weight PSOs based
on chaotic logistic map and tent map’, Proceedings of the 2017 IEEE International Conference
on Information and Automation (ICIA), Macau SAR, China.
Ciuprina, G., Ioan, D. and Munteanu, I. (2002) ‘Use of intelligent-particle swarm optimization in
electromagnetics, magnetics’, IEEE Transactions, Vol. 38, No. 2, pp.1037–1040.
Cohen, S.C.M. and de Castro, L.N. (2006) ‘Data clustering with particle swarms’, in IEEE Congress
on Evolutionary Computations, p.19921798.
Eberhart, R.C. and Kennedy, J. (1995) ‘A new optimizer using particle swarm theory’, in Proceedings
of IEEE 6th International Symposium on Micro Machine and Human Science, pp.39–43.
Farooq, M.U., Ahmad, A. and Hameed, A. (2017) ‘Opposition-based initialization and a modified
pattern for inertia weight (IW) in PSO’, IEEE, 978-1-5090-5795-5/17.
Gholamian, M. and Meybodi, M.R. (2015) ‘Enhanced comprehensive learning cooperative particle
swarm optimization with fuzzy inertia weight (ECLCFPSO-IW)’, IEEE, 978-1-4799-8733-7/15.
Gupta, I.K., Choubey, A. and Choubey, S. (2017) ‘Particle swarm optimization with selective multiple
inertia weights’, IEEE-40222, 8th ICCCNT 2017, 3–5 July.
Han, L. and He, X. (2007) ‘A novel opposition-based particle swarm optimization for noisy
problems’, in Third International Conference on Natural Computation, ICNC 2007, IEEE, Vol. 3,
pp.624–629.
He, S., Wu, Q.H. and Saunders, J.R. (2009) ‘Group search optimizer: an optimization algorithm
inspired by animal searching behavior’, IEEE Transactions on Evolutionary Computation,
Vol. 13, No. 5, pp.973–990.
Jabeen, H., Jalil, Z. and Baig, A.R. (2009) ‘Opposition based initialization in particle swarm
optimization (o-pso)’, in Proceedings of the 11th Annual Conference Companion on Genetic and
Evolutionary Computation Conference: Late Breaking Papers, ACM, pp.2047–2052.
Kennedy, J. and Eberhart, R.C. (1995) ‘Particle swarm optimization’, in Proc. IEEE International
Conference on Neural Networks, Perth, Australia, pp.1942–1948.
Kessentini, S. and Barchiesi, D. (2015) ‘Particle swarm optimization with adaptive inertia weight’,
International Journal of Machine Learning and Computing, October, Vol. 5, No. 5, pp.368–373.
Kessentini, S., Barchiesi, D., Grosges, T.A.L and Chapelle, M.L. (2011) ‘Particle swarm optimization
and evolutionary methods for plasmonic biomedical applications’, in Proc. IEEE Congress on
Evolutionary Computation (CEC’11), New Orleans, LA, pp.2315–2320.
Liang, J.J., Qin, A.K., Suganthan, P.N. and Baskar, S. (2006) ‘Comprehensive learning particle swarm
optimizer for global optimization of multimodal functions’, IEEE Transactions on Evolutionary
Computation, Vol. 10, No. 3, pp.281–295.
Majid, M.H.A. and Arsad, A.M.R. (2017) ‘An analysis of PSO inertia weight effect on swarm robot
source searching efficiency’, 2017 IEEE 2nd International Conference on Automatic Control and
Intelligent Systems (I2CACIS 2017), Kota Kinabalu, Sabah, Malaysia, 21 October.
Mandal, B. and Si, T. (2015) ‘Opposition based particle swarm optimization with exploration
and exploitation through gbest’, in 2015 International Conference on Advances in Computing,
Communications and Informatics (ICACCI), IEEE, pp.245–250.
Kaucic, M. (2013) ‘A multi-start opposition-based particle swarm optimization algorithm with
adaptive velocity for bound constrained global optimization’, Journal of Global Optimization,
Vol. 55, No. 1, pp.165–188.
Meng, X. and Jia, L. (2008) ‘A new kind of PSO convergent fuzzy particle swarm optimization
and performance analysis’, 4th International Conference on Networked Computing and Advanced
Information Management, pp.102–107.
Noroozibeyrami, M.H. and Meybodi, M.R. (2008) ‘Improving particle swarm optimization using fuzzy
logic’, in Proceedings of the Second Iranian Data Mining Conference, Amir Kabir University
of Technology, Tehran, Iran, 21–22 September.
Saha, S.K., Mandal, D., Kar, R., Saha, M. and Ghoshal, S.P. (2012) ‘IIR system identification using
particle swarm optimization with improved inertia weight approach’, 2012 Third International
Conference on Emerging Applications of Information Technology (EAIT), IEEE, pp.43–46,
978-1-4673-1827-3/12.
Santos, P., Macedo, M. and Ellaikin, F. (2017) ‘Application of PSO-based clustering algorithms on
educational databases’, IEEE, 978-1-5386-3734-0/17.
Serapio, A.B., Corrêa, G.S., Gonçalves, F.B. and Carvalho, V.O. (2016) ‘Combining k-means and
k-harmonic with fish school search algorithm for data clustering task on graphics processing
units’, Applied Soft Computing, Vol. 41, pp.290–304.
Shahzad, F., Masood, S. and Khan, N.K. (2014) ‘Probabilistic opposition-based particle swarm
optimization with velocity clamping’, Knowledge and Information Systems, Vol. 39, No. 3,
pp.703–737.
Shi, Y. and Eberhart, R.C. (1998a) ‘A modified particle swarm optimizer’, in The 1998 IEEE
International Conference on IEEE World Congress on Computational Intelligence, IEEE,
pp.69–73.
Shi, Y. and Eberhart, R.C. (1998b) ‘Parameter selection in particle swarm optimization’ in Porto,
V.W., Saravanan, N., Waagen, D., and Eiben, A.E. (Eds.): Evolutionary Programming VII. EP
1998. Lecture Notes in Computer Science, Vol. 1447, Springer, Berlin, Heidelberg.
Shi, Y. and Eberhart, R.C. (2001) ‘Fuzzy adaptive particle swarm optimization’, in Proceedings of
Congress on Evolutionary Computation, Vol. 1, pp.101–106.
Si, T., De, A. and Bhattacharjee, A.K. (2014) ‘Particle swarm optimization with generalized opposition
based learning in particle’s pbest position’, in 2014 International Conference on Circuit, Power
and Computing Technologies (ICCPCT), IEEE, pp.1662–1667.
Strogatz, S. (2001) Nonlinear Dynamics and Chaos: with Applications to Physics, Biology, Chemistry
and Engineering, 1st ed., CRC Press.
Taherkhani, M. and Safabakhsh, R. (2015) ‘A novel stability-based adaptive inertia weight for
particle swarm optimization’, Applied Soft Computing, Vol. 38, pp.281–295.
Tian, D.P. and Li, N.Q. (2009) ‘Fuzzy particle swarm optimization algorithm’, International Joint
Conference on Artificial Intelligence, pp.263–267.
Tizhoosh, H.R. (2005) ‘Opposition-based learning: a new scheme for machine intelligence’, in IEEE,
pp.695–701.
Tizhoosh, H.R. (2005) ‘Reinforcement learning based on actions and opposite actions’, in
International Conference on Artificial Intelligence and Machine Learning, Vol. 414.
Tong, L., Li, X., Hu, J. and Ren, L. (2018) ‘A PSO optimization scale-transformation
stochastic-resonance algorithm with stability mutation operator’, IEEE Access, Vol. 6,
pp.1167–1176.
Wang, H., Li, H., Liu, Y., Li, H. and Zeng, S. (2007) ‘Opposition based particle swarm algorithm with
cauchy mutation’, in IEEE Congress Evolutionary Computation, CEC 2007, IEEE, pp.4750–4756.
Wang, H., Wu, Z., Rahnamayan, S., Liu, Y. and Ventresca, M. (2011) ‘Enhancing particle swarm
optimization using generalized opposition-based learning’, Information Sciences, Vol. 181,
No. 20, pp.4699–4714.
Wu, Z., Ni, Z., Zhang, C. and Gu, L. (2008) ‘Opposition based comprehensive learning particle swarm
optimization’, in 3rd International Conference on Intelligent System and Knowledge Engineering,
ISKE 2008, IEEE, Vol. 1, pp.1013–1019.
Xueyan, L. and Zheng, X. (2015) ‘Swarm size and inertia weight selection of particle swarm optimizer
in system identification’, 2015 4th International Conference on Computer Science and Network
Technology (ICCSNT 2015), Harbin, China, pp.1554–1556.
Yan, X. and Shi, H. (2011) ‘A hybrid algorithm based on particle swarm optimization and group
search optimization’, Seventh International Conference on Natural Computation, pp.13–17.