
Chaos, Solitons and Fractals 42 (2009) 2688–2695

Contents lists available at ScienceDirect

Chaos, Solitons and Fractals


journal homepage: www.elsevier.com/locate/chaos

A dynamic global and local combined particle swarm optimization algorithm ☆
Bin Jiao *, Zhigang Lian, Qunxian Chen
Shanghai DianJi University, Shanghai 200240, China

a r t i c l e  i n f o

Article history: Accepted 31 March 2009
Communicated by Prof. L. Marek-Crnjac

a b s t r a c t

The particle swarm optimization (PSO) algorithm has been developing rapidly and many results have been reported. PSO has shown some important advantages by providing a high speed of convergence on specific problems, but it has a tendency to get stuck in near-optimal solutions, from which it may be difficult to improve solution accuracy by fine tuning. This paper presents a dynamic global and local combined particle swarm optimization (DGLCPSO) algorithm to improve the performance of the original PSO, in which all particles dynamically share the best information of the local particle, global particle and group particles. It is tested on a set of eight benchmark functions with different dimensions and compared with the original PSO. Experimental results indicate that the DGLCPSO algorithm significantly improves the search performance on the benchmark functions and show the effectiveness of the algorithm in solving optimization problems.

© 2009 Elsevier Ltd. All rights reserved.

1. Introduction

The particle swarm concept originated as a simulation of a simplified social system. The original intent was to graphically simulate the choreography of a bird flock. However, it was found that the particle swarm model could be used as an optimizer. Eberhart and Kennedy originally formulated the particle swarm in 1995 [1] to study the social behavior of bird flocking in a simplified simulation. The particle swarm optimization (PSO) algorithm has been developing rapidly and has been applied widely since it was introduced, as it is easily understood and realized. PSO has been successfully applied in many areas: function optimization, artificial neural network training, fuzzy system control, and other areas where GA can be applied. Clerc and Kennedy [3] have researched particle swarm explosion, stability and convergence in a multi-dimensional complex space. Eberhart and Shi have investigated PSO developments, applications and resources in [4] and have presented a modified particle swarm optimizer in [5,6]. He et al. have put forward a particle swarm optimizer with passive congregation in [7]. Shi and Eberhart have researched parameter selection in particle swarm optimization [8]. Kennedy and Mendes [10] investigated the impact of population structures on the search performance of SPSO. Other investigations on improving PSO's performance were undertaken using cluster analysis [9]. In 1998, Angeline presented an improved PSO model, called HPSO, using the selection operation of evolutionary computing [11]. In 2000, Lovbjerg et al. presented an HPSO model that introduced the crossover operation of evolutionary computing. In 1999, Suganthan set up a PSO model with a neighborhood operator [12], and papers [13,14] brought forward a kind of cooperative PSO algorithm. Kennedy and Eberhart developed a binary PSO

☆ This work is supported by The National Natural Science Foundation of Shanghai Municipal Science and Technology Commission (Grant No. 08ZR1408500).
* Corresponding author.
E-mail addresses: binjiaocn@163.com (B. Jiao), lllzg@163.com (Z. Lian), chenqx@sdju.edu.cn (Q. Chen).

0960-0779/$ - see front matter © 2009 Elsevier Ltd. All rights reserved.
doi:10.1016/j.chaos.2009.03.175

algorithm based on the original PSO. Recently, several investigations have been undertaken to improve the performance of the original PSO [15–21] and have obtained rich results. In [16], PSO was combined with chaos to enhance searching efficiency and greatly improve search quality. In [17], a quantum-behaved PSO used a chaotic mutation operator to diversify the population and improve PSO's performance. However, our studies indicate that the original PSO frequently gets trapped in local solutions, especially when the problem size is medium or large. To address this shortcoming of the original PSO, a dynamic global and local combined particle swarm optimization algorithm is presented in this paper, and simulation shows that it is efficacious.

This work differs from the existing ones in at least three aspects. Firstly, it proposes the iterative formulas of a dynamic global and local combined particle swarm optimization algorithm, in which all particles dynamically share the best information of the local particle, global particle and group particles. Secondly, it finds the best dynamic combination parameters of DGLCPSO for optimization problems of different sizes. Thirdly, it compares the original PSO with DGLCPSO and shows that the latter is more efficacious for optimization problems. The rest of the paper is organized as follows: the next section introduces the original PSO. The iteration formulation of the DGLCPSO algorithm is presented in Section 3. In Section 4, we describe the test functions and experimental settings, and compare the experimental results of PSO with those of the DGLCPSO algorithm. Finally, Section 5 summarizes the contributions of this paper and concludes.

2. Original particle swarm algorithm

In PSO, each solution, called a "particle", flies in the problem search space looking for the optimal position to land. The PSO system combines a local search method (through self experience) with global search methods (through neighboring experience), attempting to balance exploration and exploitation. All particles have fitness values, evaluated by the fitness function to be optimized, and velocities, which direct their flight. The system is initialized with a population of random solutions and searches for optima by updating generations. Like other population-based algorithms, PSO as an optimization tool can solve a variety of difficult optimization problems. However, unlike GA, PSO has no evolution operators such as crossover and mutation. Compared with GA, the advantages of PSO are that it is easy to implement and there are few parameters to adjust.

In every search iteration, each particle is updated by following two "best" values. The first is the best solution (fitness) it has achieved so far; this value is called pbest. The other "best" value tracked by the particle swarm optimizer is the best value obtained so far by any particle in the population; this is a global best, called gbest. After finding the two best values, the particle updates its velocity and position with the following formulas [1,2]:

v_{id}(k+1) = w v_{id}(k) + c_1 r_1 (p_{id}(k) - x_{id}(k)) + c_2 r_2 (p_{gd}(k) - x_{id}(k)),    (2.1)
x_{id}(k+1) = x_{id}(k) + v_{id}(k+1)    (i = 1, 2, ..., m;  d = 1, 2, ..., D).    (2.2)

In (2.1), P_i = (p_{i1}, p_{i2}, ..., p_{iD}) is the best previous position of the ith particle (also known as pbest). According to the different definitions of P_g, there are two versions of PSO. If P_g = (p_1, p_2, ..., p_D) is the best position among all the particles in the swarm (also known as gbest), the version is called the global version. If P_g is taken from some smaller number of adjacent particles of the population (also known as lbest), the version is called the local version. In (2.1) and (2.2), k is the iteration number, and the constants c_1, c_2 are learning factors, usually c_1 = c_2 = 2, which control how far a particle moves in a single iteration. r_1 ~ U(0, 1) and r_2 ~ U(0, 1) are random numbers, and w is an inertia weight, typically initialized in the range [0, 1]. A larger inertia weight facilitates global exploration, while a smaller inertia weight tends to facilitate local exploration to fine-tune the current search area (Shi and Eberhart [16]). The termination criterion for the iterations is whether the maximum generation or a designated value of the fitness of P_g is reached. The original PSO frequently gets trapped in the optimization problem's local solutions, especially when the problem size is medium or large. To address this shortcoming of the original PSO, a dynamic global and local combined particle swarm optimization algorithm is presented in the following.
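To make the update rules concrete, here is a minimal Python/NumPy sketch of the global version of PSO following Eqs. (2.1) and (2.2); the objective, bounds, swarm size and coefficient values are illustrative assumptions rather than settings prescribed by this section.

import numpy as np

def pso(f, dim, m=30, max_gen=1000, w=0.7, c1=2.0, c2=2.0, bound=100.0, seed=0):
    """Global-version PSO per Eqs. (2.1)-(2.2); minimizes f (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-bound, bound, (m, dim))        # particle positions
    v = rng.uniform(-bound, bound, (m, dim))        # particle velocities
    pbest = x.copy()                                # best position of each particle
    pbest_fit = np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_fit)].copy()      # global best position
    for k in range(max_gen):
        r1 = rng.random((m, dim))
        r2 = rng.random((m, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # Eq. (2.1)
        x = x + v                                                   # Eq. (2.2)
        fit = np.apply_along_axis(f, 1, x)
        better = fit < pbest_fit                    # update pbest where improved
        pbest[better], pbest_fit[better] = x[better], fit[better]
        gbest = pbest[np.argmin(pbest_fit)].copy()  # update gbest
    return gbest, pbest_fit.min()

# Example: minimize the Sphere model f(x) = sum(x_i^2) in 10 dimensions.
best_x, best_f = pso(lambda z: np.sum(z**2), dim=10)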

3. Dynamic global and local combined particle swarm optimization algorithm (DGLCPSO)

The model of the original PSO algorithm is based on the following two factors: (1) the autobiographical memory, which remembers the best previous position P_i of each individual in the swarm; and (2) the publicized knowledge, which is the best solution P_g currently found by the population. This scheme may make the algorithm lose diversity, and it is more likely to confine the search around local minima if the swarm commits too early to the global best found so far.

3.1. Description of DGLCPSO algorithm

In the original PSO algorithm, the information of the individual best and the global best is shared by the next generation of particles. In this paper we present a dynamic global and local combined particle swarm optimization (DGLCPSO) algorithm, in which all particles dynamically share the best information of the local particle, global particle and group particles. The details are given in the following.

Suppose that the search space is D-dimensional and m particles form the population. The ith particle is represented by a D-dimensional vector X_i = (x_{i1}, x_{i2}, ..., x_{iD}), i = 1, 2, ..., m, meaning that the ith particle is located at X_i in the search space; the position of each particle is a potential solution. We calculate a particle's fitness by substituting its position into a designated objective function; the lower the fitness, the better the corresponding X_i. The ith particle's "flying" velocity is also a D-dimensional vector, denoted V_i = (v_{i1}, v_{i2}, ..., v_{iD}). Denote the best position of the ith particle as P_i = (p_{i1}, p_{i2}, ..., p_{iD}), the best position of the local neighborhood as P_l = (p_{l1}, p_{l2}, ..., p_{lD}), and the best position of the global space as P_g = (p_1, p_2, ..., p_D), respectively. After finding these three best values, a DGLCPSO particle updates its velocity and position with the following formulas:

v_{id}(k+1) = w v_{id}(k) + r_1 (a + 1/(endgen + 1 - k)) (p_{id}(k) - x_{id}(k))
            + (b - 1/(endgen + 1 - k)) (p_{ld}(k) - x_{id}(k)) + c r_2 (p_{gd}(k) - x_{id}(k)),    (3.1)
x_{id}(k+1) = x_{id}(k) + v_{id}(k+1),    (3.2)
P_l(k+1) = P_l(k-1, k-2, ..., 1) ∪ P_l(k),    (3.3)

where w is an inertia weight, typically initialized in the range [0, 1]. By (3.3), the best local particles of the (k+1)th generation are composed of the best particles of the preceding (k-1) generations together with the best particles of the kth generation. a, b ∈ [0.6, 1.2] are weight indices chosen according to the optimization problem; they reflect the relative importance of the best position of the ith particle and the best position of the kth-generation colony particle. endgen denotes the maximum iteration number. The constant c is an acceleration constant, which controls how far a particle moves in a single iteration; the other parameters are the same as in (2.1) and (2.2).
The search is a repeated process, and the stopping criteria are that the maximum iteration number is reached or a minimum error condition is satisfied; the stopping condition depends on the problem to be optimized. In the DGLCPSO algorithm, each particle of the swarm shares mutual information globally and benefits from the discoveries and previous experiences of all other colleagues during the search process.
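As a minimal sketch under the same array conventions as the PSO sketch in Section 2, the dynamic velocity update (3.1) could be written as follows; the default parameter values are illustrative assumptions.

import numpy as np

def dglcpso_velocity(v, x, pbest, lbest, gbest, k, endgen,
                     w=0.3, a=1.005, b=1.005, c=2.0, rng=None):
    """One DGLCPSO velocity update per Eq. (3.1) (illustrative sketch).

    v, x, pbest are (m, dim) arrays; lbest, gbest are (dim,) vectors.
    The term 1/(endgen + 1 - k) grows with k, so the pbest coefficient
    (a + ...) increases while the lbest coefficient (b - ...) decreases
    as the search approaches the final generation.
    """
    rng = rng if rng is not None else np.random.default_rng()
    m, dim = x.shape
    r1, r2 = rng.random((m, dim)), rng.random((m, dim))
    t = 1.0 / (endgen + 1 - k)              # dynamic combination term
    return (w * v
            + r1 * (a + t) * (pbest - x)    # individual (pbest) term
            + (b - t) * (lbest - x)         # local neighborhood (lbest) term
            + c * r2 * (gbest - x))         # global (gbest) term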

3.2. Pseudo code of DGLCPSO

Step 1: Initialize parameters, including the swarm size PS, the maximum generation endgen, and the other parameters used in the DGLCPSO algorithm.
Step 2: Assignment and initialization:
 - Generate a stochastic initial population and velocities;
 - Evaluate each particle's fitness;
 - Initialize the gbest position with the lowest-fitness particle in the whole swarm;
 - Initialize each pbest position with a copy of the particle itself;
 - Initialize the lbest position with the best particle of the initial population;
 - k := 0;
 While (the maximum generation endgen is not reached)
 {
  - k := k + 1;
  - Generate the next swarm by Eqs. (3.1)-(3.3);
  - Evaluate the swarm:
   - Compute each particle's fitness in the swarm;
   - Find the new P_g and P_l of the swarm and the P_i of each particle by comparison, and update P_i and P_g;
   - Update P_l using (3.3);
 }
Step 3: Output the optimization results.
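Putting the pseudocode and Eqs. (3.1)-(3.3) together, one possible runnable sketch of the whole loop is given below. Where the paper leaves details open, illustrative assumptions are made: lbest is taken as the best particle of the current generation, standing in for the union archive of Eq. (3.3).

import numpy as np

def dglcpso(f, dim, ps=150, endgen=2000, w=0.3, a=1.005, b=1.005,
            c=2.0, bound=100.0, seed=0):
    """Sketch of Steps 1-3 of the DGLCPSO pseudocode; minimizes f."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-bound, bound, (ps, dim))       # Step 2: initial population
    v = rng.uniform(-bound, bound, (ps, dim))       # and velocities
    fit = np.apply_along_axis(f, 1, x)
    pbest, pbest_fit = x.copy(), fit.copy()         # pbest: copy of each particle
    gbest = x[np.argmin(fit)].copy()                # gbest: lowest-fitness particle
    lbest = gbest.copy()                            # lbest: best of initial population
    for k in range(1, endgen + 1):                  # while endgen not reached
        t = 1.0 / (endgen + 1 - k)
        r1, r2 = rng.random((ps, dim)), rng.random((ps, dim))
        v = (w * v + r1 * (a + t) * (pbest - x)     # Eq. (3.1)
             + (b - t) * (lbest - x) + c * r2 * (gbest - x))
        x = x + v                                   # Eq. (3.2)
        fit = np.apply_along_axis(f, 1, x)          # evaluate the swarm
        better = fit < pbest_fit                    # update each P_i
        pbest[better], pbest_fit[better] = x[better], fit[better]
        lbest = x[np.argmin(fit)].copy()            # update P_l, cf. Eq. (3.3)
        gbest = pbest[np.argmin(pbest_fit)].copy()  # update P_g
    return gbest, pbest_fit.min()                   # Step 3: output results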

4. Numerical simulation

4.1. Test functions

To illustrate the effectiveness and performance of the DGLCPSO algorithm on optimization problems, a set of eight representative benchmark functions with different dimensions was employed to evaluate it in comparison with the original PSO. These functions have been widely used by many authors to test algorithms.

Sphere model: f_1(x) = \sum_{i=1}^{n} x_i^2, where x_i \in [-100, 100].

Schwefel's problem 1.2: f_2(x) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2, where x_i \in [-100, 100].

Schwefel's problem 2.21: f_3(x) = \max_i \{ |x_i|, 1 \le i \le n \}, where x_i \in [-100, 100].

Generalized Rosenbrock's function: f_4(x) = \sum_{i=1}^{n-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right], where x_i \in [-100, 100].

Ackley's function: f_5(x) = -20 \exp\left( -0.2 \sqrt{ \frac{1}{n} \sum_{i=1}^{n} x_i^2 } \right) - \exp\left( \frac{1}{n} \sum_{i=1}^{n} \cos 2\pi x_i \right) + 20 + e, where x_i \in [-32, 32].

Generalized Griewank function: f_6(x) = \frac{1}{4000} \sum_{i=1}^{n} (x_i - 100)^2 - \prod_{i=1}^{n} \cos\left( \frac{x_i - 100}{\sqrt{i}} \right) + 1, where x_i \in [-600, 600].

Generalized penalized functions:

f_7(x) = \frac{\pi}{n} \left\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 \left[ 1 + 10 \sin^2(\pi y_{i+1}) \right] + (y_n - 1)^2 \right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)

and

f_8(x) = 0.1 \left\{ \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 \left[ 1 + 10 \sin^2(\pi y_{i+1}) \right] + (y_n - 1)^2 \right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4),

where

u(x_i, a, k, m) = \begin{cases} k(x_i - a)^m, & x_i > a, \\ 0, & -a \le x_i \le a, \\ k(-x_i - a)^m, & x_i < -a, \end{cases} \qquad y_i = 1 + \frac{1}{4}(x_i + 1), \qquad x_i \in [-50, 50].

They can be grouped into unimodal functions (f_1–f_3) and multimodal functions (f_4–f_8), where the number of local minima increases exponentially with the problem dimension.
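Written directly from the definitions above, here is a sketch of some of these benchmarks in Python/NumPy (assuming x is a one-dimensional NumPy array):

import numpy as np

def sphere(x):                                      # f1, unimodal
    return np.sum(x**2)

def ackley(x):                                      # f5, multimodal
    n = len(x)
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20.0 + np.e)

def shifted_griewank(x):                            # f6, shifted by 100
    i = np.arange(1, len(x) + 1)
    z = x - 100.0
    return np.sum(z**2) / 4000.0 - np.prod(np.cos(z / np.sqrt(i))) + 1.0

def u(x, a, k, m):                                  # penalty term of f7 and f8
    return np.where(x > a, k * (x - a)**m,
                    np.where(x < -a, k * (-x - a)**m, 0.0))

def penalized_f7(x):                                # f7
    n = len(x)
    y = 1.0 + (x + 1.0) / 4.0
    return (np.pi / n * (10.0 * np.sin(np.pi * y[0])**2
                         + np.sum((y[:-1] - 1.0)**2
                                  * (1.0 + 10.0 * np.sin(np.pi * y[1:])**2))
                         + (y[-1] - 1.0)**2)
            + np.sum(u(x, 10.0, 100.0, 4.0)))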

4.2. Experimental results and comparison

To evaluate the performance of the proposed DGLCPSO algorithm, OPSO (the original PSO) is used for comparison. The experimental results of each algorithm on each test function are listed in Table A, and the parameters of the PSO and DGLCPSO algorithms are listed there as well. To obtain the average performance of the DGLCPSO algorithm, ten runs were performed on each problem instance and the solution quality was averaged. The best solutions and parameters found by DGLCPSO are shown in bold, and the best average solutions in italic.

Remark 1. In Table A, Fun and Dim denote a function and its dimension, respectively; Best is the function's optimum value; PS and EG indicate the population size and the generation at which the algorithm terminates.

From Table A one can observe that w \in [0.2, 0.4] gives the best performance, since these values yield a smaller minimum and a smaller arithmetic mean than the solutions obtained with the other values; in particular, w \approx 0.3 has the better search efficiency. In the DGLCPSO algorithm, the parameters a and b should differ for different problems but are, on the whole, close to 1. From the simulation results we conclude that the DGLCPSO algorithm is clearly better than the original PSO for continuous non-linear function optimization problems. The convergence curves of the most effective DGLCPSO runs compared with the original PSO on the eight instances (f_1–f_8) are shown in Fig. 1.

From Fig. 1 we can see that the convergence rate of the DGLCPSO algorithm is clearly faster than that of OPSO on every benchmark function; in particular, the DGLCPSO algorithm is more efficacious than OPSO for medium- and large-size optimization problems. Accordingly, we can state that the DGLCPSO algorithm is more effective than the OPSO algorithm.

5. Conclusions and perspectives

Our simulation experiments showed that the original PSO frequently gets trapped in local solutions, especially when the problem size is medium or large. To address this shortcoming of OPSO, a dynamic global and local combined particle swarm optimization algorithm was presented, and simulations show that it is efficacious. The performance of the new approach was evaluated in comparison with the results obtained from OPSO on eight representative instances with different dimensions, and the obtained results show the effectiveness of the proposed approach. From this point of view, the DGLCPSO algorithm proposed in this paper can be considered an effective mechanism.

Table A
The comparison results of the OPSO algorithm and the DGLCPSO algorithm. Entries are Min/Max/Average (e-n means ×10^-n).

f1 (dim = 150, best = 0; PS = 150, EG = 2500)
  w = 0.1: OPSO 3.8429e+5/4.2823e+5/4.066e+5 | DGLCPSO a,b = 1.05: 0.1982/14.7794/2.8025; 1.005: 2.6277e-4/0.0348/0.0059; 1.0005: 4.925e-4/0.0057/0.0024
  w = 0.2: OPSO 3.572e+5/4.1782e+5/3.9208e+5 | DGLCPSO a,b = 1.05: 3.0063e-4/0.0016/9.5387e-4; 1.005: 4.2905e-4/0.2736/0.0496; 1.0005: 4.0993e-5/4.0973/0.5341
  w = 0.3: OPSO 3.5046e+5/4.1953e+5/3.9114e+5 | DGLCPSO a,b = 1.05: 3.7230e-4/0.1886/0.0283; 1.005: 6.3122e-5/75.6828/7.7969; 1.0005: 7.6686e-4/1.3493/0.1605
  w = 0.4: OPSO 3.0257e+5/4.2026e+5/3.8764e+5 | DGLCPSO a,b = 1.05: 5.1832e-4/0.0123/0.0039; 1.005: 5.9024e-4/0.3181/0.0366; 1.0005: 6.1701e-4/0.1334/0.0348
  w = 0.5: OPSO 3.7389e+5/4.2181e+5/3.985e+5 | DGLCPSO a,b = 1.05: 0.2379/2.3211/0.8059; 1.005: 0.0177/0.1442/0.0904; 1.0005: 0.0095/0.3515/0.0686

f2 (dim = 30, best = 0; PS = 150, EG = 2000)
  w = 0.1: OPSO 30.669/180.9234/83.5532 | DGLCPSO a,b = 1.05: 0.067/1.4762/0.3843; 1.005: 5.1626e-5/0.0025/5.0881e-4; 1.0005: 8.1569e-6/2.8557e-4/1.2709e-4
  w = 0.2: OPSO 0.1688/3.9563/1.2373 | DGLCPSO a,b = 1.05: 4.043e-7/5.7787e-5/1.1049e-5; 1.005: 6.796e-10/3.2623e-8/1.1518e-8; 1.0005: 9.0871e-11/1.9264e-8/4.7725e-9
  w = 0.3: OPSO 0.0054/0.1383/0.0667 | DGLCPSO a,b = 1.05: 2.8277e-8/1.6266e-6/4.5192e-7; 1.005: 6.873e-11/1.5354e-9/4.9353e-10; 1.0005: 2.8173e-12/6.0698e-10/1.3635e-10
  w = 0.4: OPSO 0.0858/2.2729/0.5672 | DGLCPSO a,b = 1.05: 3.6930e-7/1.3538e-5/3.3332e-6; 1.005: 1.9309e-10/1.9648e-8/4.1595e-9; 1.0005: 5.6931e-11/4.4894e-9/1.2756e-9
  w = 0.5: OPSO 1.1788/19.2952/9.3177 | DGLCPSO a,b = 1.05: 0.0013/0.0181/0.0081; 1.005: 8.1204e-7/2.5602e-4/3.6059e-5; 1.0005: 2.6829e-7/1.6722e-5/4.4751e-6

f3 (dim = 30, best = 0; PS = 150, EG = 2000)
  w = 0.1: OPSO 4.2425/24.4099/9.4056 | DGLCPSO a,b = 1.05: 0.0223/0.1089/0.0607; 1.005: 2.8191e-4/0.0026/8.0810e-4; 1.0005: 1.4937e-4/0.0012/5.8688e-4
  w = 0.2: OPSO 0.0873/1.9018/0.5246 | DGLCPSO a,b = 1.05: 8.9939e-6/1.025e-4/3.3987e-5; 1.005: 2.8676e-7/4.7388e-6/1.9362e-6; 1.0005: 1.0582e-7/1.4371e-6/5.5200e-7
  w = 0.3: OPSO 0.0131/0.047/0.0292 | DGLCPSO a,b = 1.05: 2.8342e-7/2.3586e-6/1.1557e-6; 1.005: 2.9949e-9/2.562e-7/8.0897e-8; 1.0005: 6.7076e-9/1.1112e-7/4.3497e-8
  w = 0.4: OPSO 0.0092/0.2557/0.0821 | DGLCPSO a,b = 1.05: 8.4512e-8/1.626e-6/6.8515e-7; 1.005: 5.8754e-9/1.389e-7/5.0735e-8; 1.0005: 3.5974e-9/2.2156e-7/6.0589e-8
  w = 0.5: OPSO 0.1705/3.7161/0.9893 | DGLCPSO a,b = 1.05: 1.6211e-5/1.3532e-4/5.1309e-5; 1.005: 8.7457e-8/2.9193e-6/1.3604e-6; 1.0005: 2.1065e-7/1.151e-6/5.6077e-7

f4 (dim = 25, best = 0; PS = 150, EG = 2500)
  w = 0.1: OPSO 1.3099/615.6739/76.101 | DGLCPSO a,b = 1.05: 0.3313/613.7611/70.9135; 1.005: 0.1083/68.7209/14.2012; 1.0005: 0.0273/15.3811/7.7501
  w = 0.2: OPSO 4.8236/601.12/134.683 | DGLCPSO a,b = 1.05: 0.005/1.3236e+3/142.945; 1.005: 6.7641e-4/516.6827/54.4519; 1.0005: 0.0013/9.6205/2.7677
  w = 0.3: OPSO 7.673/766.0687/96.9065 | DGLCPSO a,b = 1.05: 0.0042/23.6548/6.9247; 1.005: 8.7599e-5/180.2582/29.2493; 1.0005: 1.277e-4/73.2979/8.9506
  w = 0.4: OPSO 1.774/17.5648/11.9558 | DGLCPSO a,b = 1.05: 0.0207/9.7482/5.5441; 1.005: 0.0013/208.5932/23.5978; 1.0005: 0.0117/384.7673/44.3407
  w = 0.5: OPSO 14.4675/591.1482/141.4655 | DGLCPSO a,b = 1.05: 1.2259/2.4329e+3/269.9333; 1.005: 1.2334/1.0526e+3/124.8679; 1.0005: 1.0874/37.4239/9.7424

f5 (dim = 30, best = 0; PS = 150, EG = 2000)
  w = 0.1: OPSO 2.2167e-8/4.4818e-4/5.4888e-5 | DGLCPSO a,b = 1.05: 4.4409e-15/7.9936e-15/7.2831e-15; 1.005: 4.4409e-15/7.9936e-15/5.1514e-15; 1.0005: 4.4409e-15/7.9936e-15/5.862e-15
  w = 0.2: OPSO 9.9612e-9/1.6462/0.6883 | DGLCPSO a,b = 1.05: 4.4409e-15/7.9936e-15/5.5067e-15; 1.005: 4.4409e-15/1.5017/0.2842; 1.0005: 4.4409e-15/7.9936e-15/5.862e-15
  w = 0.3: OPSO 4.4409e-15/0.9313/0.0931 | DGLCPSO a,b = 1.05: 4.4409e-15/7.9936e-15/6.2172e-15; 1.005: 4.4409e-15/1.0271/0.1027; 1.0005: 4.4409e-15/7.9936e-15/4.7962e-15
  w = 0.4: OPSO 4.4409e-15/7.9936e-15/7.6383e-15 | DGLCPSO a,b = 1.05: 4.4409e-15/7.9936e-15/5.5067e-15; 1.005: 4.4409e-15/7.9936e-15/4.7962e-15; 1.0005: 4.4409e-15/7.9936e-15/4.7962e-15
  w = 0.5: OPSO 7.9936e-15/1.5099e-14/9.4147e-15 | DGLCPSO a,b = 1.05: 4.4409e-15/7.9936e-15/6.9278e-15; 1.005: 4.4409e-15/7.9936e-15/5.5067e-15; 1.0005: 4.4409e-15/7.9936e-15/5.5067e-15

f6 (dim = 150, best = 0; PS = 150, EG = 2500)
  w = 0.1: OPSO 3.498e+3/3.9613e+3/3.7207e+3 | DGLCPSO a,b = 1.05: 0.1245/0.8141/0.4515; 1.005: 2.6242e-4/0.0396/0.0126; 1.0005: 1.0617e-4/0.3178/0.0645
  w = 0.2: OPSO 3.4594e+3/4.0547e+3/3.8437e+3 | DGLCPSO a,b = 1.05: 4.9132e-5/0.4217/0.1144; 1.005: 0.0735/2.7501/0.6116; 1.0005: 0.007/1.0688/0.2768
  w = 0.3: OPSO 3.0914e+3/3.9497e+3/3.576e+3 | DGLCPSO a,b = 1.05: 8.6279e-5/0.1798/0.0614; 1.005: 0.0018/0.7482/0.2652; 1.0005: 4.8522e-4/0.6222/0.2074
  w = 0.4: OPSO 0.0085/3.9859e+3/3.2703e+3 | DGLCPSO a,b = 1.05: 0.0012/0.194/0.0375; 1.005: 0.0015/1.2531/0.218; 1.0005: 8.9362e-4/0.3206/0.0666
  w = 0.5: OPSO 3.091e+3/4.089e+3/3.669e+3 | DGLCPSO a,b = 1.05: 0.0996/0.6491/0.2777; 1.005: 0.0024/0.3437/0.0641; 1.0005: 0.0064/0.1462/0.0583

f7 (dim = 100, best = 0; PS = 150, EG = 2500)
  w = 0.1: OPSO 2.1764e+9/2.8882e+9/2.5162e+9 | DGLCPSO a,b = 1.05: 0.7438/8.5406e+8/8.5406e+7; 1.005: 3.9134e-6/0.1603/0.0525; 1.0005: 3.745e-7/0.0316/0.0097
  w = 0.2: OPSO 2.2157e+9/2.8836e+9/2.4478e+9 | DGLCPSO a,b = 1.05: 1.7773e-9/0.1557/0.0312; 1.005: 1.4184e-9/0.1244/0.0311; 1.0005: 4.3758e-9/0.1868/0.0312
  w = 0.3: OPSO 1.6374e+9/2.701e+9/2.3623e+9 | DGLCPSO a,b = 1.05: 5.2805e-9/0.1557/0.0405; 1.005: 1.7829e-9/0.1557/0.0654; 1.0005: 8.8324e-9/0.1557/0.0218
  w = 0.4: OPSO 1.9752e+9/2.787e+9/2.2897e+9 | DGLCPSO a,b = 1.05: 2.4022e-9/0.1249/0.0407; 1.005: 2.069e-8/0.4991/0.1122; 1.0005: 1.392e-9/0.1557/0.0373
  w = 0.5: OPSO 1.9117e+9/2.655e+9/2.2745e+9 | DGLCPSO a,b = 1.05: 0.4208/305.3568/39.7429; 1.005: 3.1497e-5/0.4391/0.0869; 1.0005: 3.7702e-7/0.1868/0.0584

f8 (dim = 100, best = 0; PS = 150, EG = 2500)
  w = 0.1: OPSO 2.1056e+9/2.6565e+9/2.4556e+9 | DGLCPSO a,b = 1.05: 1.835/1.4751e+9/1.596e+8; 1.005: 1.3198e-6/0.0035/7.0096e-4; 1.0005: 1.7858e-6/0.0012/2.3392e-4
  w = 0.2: OPSO 2.0456e+9/2.5792e+9/2.3794e+9 | DGLCPSO a,b = 1.05: 3.8907e-8/0.0017/2.6983e-4; 1.005: 4.203e-8/1.8129e-4/1.9719e-5; 1.0005: 1.8542e-8/7.7555e-4/1.1958e-4
  w = 0.3: OPSO 1.7314e+9/2.6019e+9/2.2416e+9 | DGLCPSO a,b = 1.05: 2.5809e-4/0.4975/0.0513; 1.005: 4.849e-8/3.1364e-5/3.5142e-6; 1.0005: 1.6349e-9/0.0997/0.02
  w = 0.4: OPSO 2.2086e+9/2.6699e+9/2.427e+9 | DGLCPSO a,b = 1.05: 8.1551e-7/0.3981/0.04; 1.005: 2.5907e-8/6.0320e-4/6.9838e-5; 1.0005: 2.7638e-10/4.0027e-4/4.204e-5
  w = 0.5: OPSO 2.0547e+9/2.4671e+9/2.2353e+9 | DGLCPSO a,b = 1.05: 0.0047/111.8781/26.0855; 1.005: 7.5824e-5/0.2047/0.0567; 1.0005: 4.0932e-6/0.4093/0.0421

There are a number of research directions that can be considered useful extensions of this work. Although the proposed algorithm was tested on eight representative instances, a more comprehensive computational study should be carried out to test the efficiency of the proposed technique. Regarding the parameters of the DGLCPSO algorithm, this paper only studied the effectiveness of a and b over a small region, so the effect of varying a and b must be researched in future work. The convergence properties of the DGLCPSO algorithm should also be investigated, and the algorithm should be applied to other discrete combinatorial optimization problems such as the FSSP, JSSP, TSP, etc.

[Fig. 1 comprises eight panels, "Convergence figure of the DGLCPSO and OPSO for f_1" through f_8, each plotting fitness value against evolvement generation for the OPSO optimum and the DGLCPSO optimum.]

Fig. 1. The convergence figures of DGLCPSO compared with the original PSO for function optimization problems.

References

[1] Eberhart R, Kennedy J. A new optimizer using particle swarm theory. In: Proceedings of the sixth international symposium on micro machine and
human science, Nagoya, Japan; 1995. p. 39–43.
[2] Kennedy J, Eberhart R. Particle swarm optimization. In: IEEE international conference on neural networks, Perth, Australia; 1995. p. 1942–48.
[3] Clerc M, Kennedy J. The particle swarm: explosion stability and convergence in a multi-dimensional complex space. IEEE Trans Evol Comput
2002;6(1):58–73.

[4] Eberhart RC, Shi Y. Particle swarm optimization: developments, applications and resources. In: Proceedings of IEEE international conference on
evolutionary computation; 2001. p. 81–6.
[5] Shi Y, Eberhart RC. A modified particle swarms optimiser. In: Proceedings of the IEEE international conference on evolutionary computation; 1997. p.
303–8.
[6] Shi Y, Eberhart R. A modified particle swarm optimizer. In: IEEE world congress on computational intelligence; 1998. p. 69–73.
[7] He S, Wu QH, Wen JY, Saunders JR, Paton RC. A particle swarm optimizer with passive congregation. BioSystems 2004;78:135–47.
[8] Shi Y, Eberhart RC. Parameter selection in particle swarm optimization. In: Evolutionary programming VII: proceedings of the seventh annual
conference on evolutionary programming, New York; 1998. p. 591–600.
[9] Kennedy J. Stereotyping: improving particle swarm performance with cluster analysis. In: Proceedings of the IEEE international conference on
evolutionary computation; 2000. p. 1507–12.
[10] Kennedy J, Mendes R. Population structure and particle swarm performance. In: Proceedings of the 2002 congress on evolutionary computation
CEC2002, IEEE Press; 2002. p. 1671–6.
[11] Angeline PJ. Using selection to improve particle swarm optimization. In: IEEE international conference on evolutionary computation, Anchorage,
Alaska; 1998. p. 231–8.
[12] Lovbjerg M, Rasmussen TK, Krink T. Hybrid particle swarm optimization with breeding and subpopulations. In: IEEE international conference on
evolutionary computation, San Diego; 2000. p. 1217–22.
[13] Van den Bergh F, Engelbrecht AP. Training product unit networks using cooperative particle swarm optimizer. In: Proceedings of the third genetic and evolutionary computation conference (GECCO), San Francisco, USA; 2001.
[14] Van den Bergh F, Engelbrecht AP. Effects of swarm size on cooperative particle swarm optimizers. In: Proceedings of GECCO, San Francisco, USA; 2001.
[15] Shi Y, Eberhart RC. A new particle swarm optimizer. In: IEEE World Congress on Computational Intelligence; 1998. p. 69–73.
[16] Liu Bo et al. Improved particle swarm optimization combined with chaos. Chaos, Solitons & Fractals 2005;25:1261–71.
[17] Coelho Leandro dos Santos. A quantum particle swarm optimizer with chaotic mutation operator. Chaos, Solitons & Fractals 2008;37:1409–18.
[18] Coelho Leandro dos Santos, Coelho Antonio Augusto Rodrigues. Model-free adaptive control optimization using a chaotic particle swarm approach.
Chaos, Solitons & Fractals 2008.
[19] Alatas Bilal, Akin Erhan. Chaotically encoded particle swarm optimization algorithm and its applications. Chaos, Solitons & Fractals 2008.
[20] Alatas Bilal, Akin Erhan, Bedri Ozer A. Chaos embedded particle swarm optimization algorithms. Chaos, Solitons & Fractals 2008.
[21] Coelho Leandro dos Santos, Mariani Viviana Cocco. A novel chaotic particle swarm optimization approach using Henon map and implicit filtering local
search for economic load dispatch. Chaos, Solitons & Fractals 2007.
