Comprehensive Learning Particle Swarm Optimizer For Solving Multiobjective Optimization Problems: Research Articles
This article presents an approach to integrate a Pareto dominance concept into the comprehensive learning particle swarm optimizer (CLPSO) to handle multiobjective optimization problems. The multiobjective comprehensive learning particle swarm optimizer (MOCLPSO) also integrates an external archive technique. Simulation results (obtained using the codes made available on the Web at http://www.ntu.edu.sg/home/EPNSugan) on six test problems show that the proposed MOCLPSO, for most problems, is able to find a much better spread of solutions and faster convergence to the true Pareto-optimal front compared to two other multiobjective optimization evolutionary algorithms. © 2006 Wiley Periodicals, Inc.
1. INTRODUCTION
2. RELATED WORK
Liang et al.13 proposed an improved PSO called CLPSO, which uses a novel learning strategy in which the historical best information of all particles is used to update a particle's velocity. This strategy preserves the diversity of the swarm and thereby discourages premature convergence. It has been applied successfully to real-world problems.14
In CLPSO, each particle learns from the gbest of the swarm, its own pbest, and the pbests of all other particles, so that every particle learns from the elite, from itself, and from the rest of the swarm. In this version, m dimensions are randomly chosen to learn from the gbest, some of the remaining D − m dimensions are randomly chosen to learn from randomly selected particles' pbests, and the remaining dimensions learn from the particle's own pbest. The pseudocode for CLPSO is given in Table I. After 10 iterations, the learning dimensions are randomly reorganized.
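The per-dimension assignment described above can be sketched as follows. This is a minimal illustration, not the authors' code: the function name and the tagging scheme are ours, and we assume the "some of the remaining dimensions" choice is governed by the learning probability Pc that appears in the experimental settings of Section 4.

```python
import random

def assign_exemplars(i, m, D, NP, pc=0.1):
    """Sketch of the dimension-wise learning assignment.

    Returns, for each dimension d, a tag saying where particle i's
    dimension d learns from: the swarm's gbest, another particle's
    pbest, or particle i's own pbest (illustrative representation).
    """
    dims = list(range(D))
    random.shuffle(dims)
    exemplar = {}
    # m randomly chosen dimensions learn from the gbest
    for d in dims[:m]:
        exemplar[d] = ("gbest", None)
    # each remaining dimension learns, with probability pc, from a
    # randomly chosen other particle's pbest; otherwise from its own pbest
    for d in dims[m:]:
        if random.random() < pc:
            j = random.choice([k for k in range(NP) if k != i])
            exemplar[d] = ("pbest", j)
        else:
            exemplar[d] = ("pbest", i)
    return exemplar
```

Reorganizing the learning dimensions every 10 iterations then amounts to simply calling this function again for each particle.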
3. MULTIOBJECTIVE CLPSO
In CLPSO the swarm population is fixed in size, and its members cannot be replaced, only adjusted by their pbests and the gbest. When extending CLPSO to handle multiobjective problems, however, there exists a set of nondominated solutions instead of the single global best of the single-objective CLPSO. In addition, there may not be a single previous best individual for each member of the swarm, because two solutions can be nondominated with respect to each other. Selecting an exemplar for each particle is therefore difficult yet important.
There exist several different pbest maintenance and selection strategies in the literature.20 In our proposal, we use the pbest updating strategy of Ref. 12 (see Table II).
Unlike MOPSO,12 we allow the particle to learn from its exemplars until the particle ceases improving for a number of generations (set to 2 in our approach), to ensure that a particle learns from good exemplars and to minimize the time wasted on poor search directions. Then we reassign the exemplars for the particle.
International Journal of Intelligent Systems DOI 10.1002/int
COMPREHENSIVE LEARNING PARTICLE SWARM OPTIMIZER 213
Table II. Updating the pbest.
  if pbest_i dominates X_i, count = count + 1
  if X_i dominates pbest_i, pbest_i = X_i
  if pbest_i and X_i are nondominated with each other,
    if rand < 0.5, pbest_i = X_i; else count = count + 1
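The update rule in Table II can be transcribed directly; the sketch below assumes minimization (as in all the test problems used here) and represents solutions by their objective vectors. Function names are illustrative.

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_pbest(pbest_f, x_f, count):
    """Transcription of Table II.

    pbest_f, x_f: objective vectors of pbest_i and the current position X_i;
    count tracks how long the particle has failed to improve.
    Returns (replace_pbest_with_X_i, new_count).
    """
    if dominates(pbest_f, x_f):
        return False, count + 1          # pbest_i dominates X_i
    if dominates(x_f, pbest_f):
        return True, count               # X_i dominates pbest_i
    # nondominated with each other: accept X_i with probability 0.5,
    # otherwise keep pbest_i and count it as a non-improving step
    if random.random() < 0.5:
        return True, count
    return False, count + 1
```

When `count` reaches the threshold (2 generations in our approach), the particle's exemplars are reassigned.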
1) Initialize
   Randomly initialize the particle positions.
   Initialize the particle velocities: for i = 1 to NP, V_i = 0.
   Evaluate the fitness values of the particles.
2) Optimize
   WHILE the stopping criterion is not satisfied DO
     FOR i = 1 to NP
       Select an exemplar from the external archive.
       Assign each dimension to learn from the gbest, the pbest of this
         particle, or the pbests of other particles, using Equation 2.
       Update the particle velocity using Equations 3a, 3b, 3c.
       Update the particle position using Equation 4.
       Keep the particle within the search space.12
       Update pbest if the current position is better than pbest (Table II).
       Evaluate the fitness values of the particle.
     END FOR
     Update the external archive.
     Increment the generation count.
   END WHILE
m = [Pm × D]   (5)
(3) Inertia weight w. The inertia weight moderates the impact of the previous velocity on the current velocity of each particle and balances the global and local search abilities. Shi and Eberhart21 proposed an inertia weight that decreases linearly with increasing generations. Our proposed MOCLPSO uses this decreasing inertia weight, as described in Equation 1: the weight is initialized to a large value to explore the search space globally and quickly, and is then gradually decreased to perform a finer local search.
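A linearly decreasing schedule of this kind can be written in one line; the linear form is our reading of the schedule (the paper's Equation 1), and the endpoints match the w0 = 0.9 and w1 = 0.4 used in the experiments of Section 4.

```python
def inertia_weight(t, t_max, w0=0.9, w1=0.4):
    """Linearly decreasing inertia weight.

    Starts at w0 (broad, global exploration) at generation t = 0 and
    falls to w1 (fine, local search) at generation t = t_max.
    """
    return w0 - (w0 - w1) * t / t_max
```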
4. EXPERIMENTAL RESULTS
4.1. Methodology
$$\gamma = \frac{\sum_{i=1}^{N} d_i}{N} \qquad (6)$$

$$\Delta = \frac{\sum_{m=1}^{M} d_m^{e} + \sum_{i=1}^{N-1} \left| d_i - \bar{d} \right|}{\sum_{m=1}^{M} d_m^{e} + (N-1)\,\bar{d}} \qquad (7)$$

Here, $d_m^{e}$ is the Euclidean distance between the extreme solution of the Pareto-optimal front and the boundary solution of the obtained nondominated set corresponding to the $m$th objective function. The parameter $d_i$ is the Euclidean distance between neighboring solutions in the obtained nondominated set, and $\bar{d}$ is the mean value of these distances. $\Delta$ is 0 for an ideal distribution, when $d_m^{e} = 0$ and all $d_i$ equal $\bar{d}$. The smaller the value of $\Delta$, the better the diversity of the nondominated set.
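The diversity metric can be computed as sketched below. This is an illustration under simplifying assumptions we state explicitly: a two-objective problem (M = 2), an obtained front already sorted along the first objective, and the two true extreme solutions given in the same order; the function name is ours.

```python
import math

def diversity_metric(front, extremes):
    """Sketch of the diversity metric Delta of Equation 7 for M = 2.

    front:    obtained nondominated solutions, sorted along the front
              (list of objective vectors).
    extremes: the two extreme solutions of the true Pareto-optimal
              front, ordered to match the ends of `front`.
    """
    dist = math.dist
    # d_i: Euclidean distances between neighboring obtained solutions
    d = [dist(front[i], front[i + 1]) for i in range(len(front) - 1)]
    d_bar = sum(d) / len(d)               # mean gap
    # d_m^e: distances from the true extremes to the boundary solutions
    d_e = [dist(extremes[0], front[0]), dist(extremes[1], front[-1])]
    num = sum(d_e) + sum(abs(di - d_bar) for di in d)
    den = sum(d_e) + len(d) * d_bar       # len(d) = N - 1
    return num / den
```

A perfectly uniform front whose ends coincide with the true extremes yields Delta = 0, matching the ideal-distribution condition stated above.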
In our simulations, all MOEAs are run for a maximum of 10,000 fitness function evaluations (FES). MOCLPSO uses the following parameter values: population size NP = 50, archive size A = 100, learning probability Pc = 0.1, and elitism probability Pm = 0.4. MOPSO uses a population size of 50, a repository size of 100, and 30 divisions for the adaptive grid, with mutation as presented in Ref. 12. For these two approaches, the inertia weight decreases linearly over time with w0 = 0.9, w1 = 0.4, and c1 = c2 = 2; we use all members in the archive after 10,000 fitness evaluations to calculate the performance metrics. For NSGA-II (real-coded), we use a population size of 100, a crossover probability of 0.9, a mutation probability of 1/D (where D is the number of decision variables), and distribution indexes of ηc = 20 and ηm = 20 for the crossover and mutation operators, as presented in Ref. 5. The population obtained at the end of 100 generations is used to calculate the performance metrics. The results presented in Tables IV–IX are
obtained by running each algorithm 10 times on each problem. The best average results are emphasized in boldface.
$$\text{Minimize } f_1(\mathbf{x}) = 1 - \exp\!\left(-\sum_{i=1}^{3}\left(x_i - \frac{1}{\sqrt{3}}\right)^{2}\right)$$

$$\text{Minimize } f_2(\mathbf{x}) = 1 - \exp\!\left(-\sum_{i=1}^{3}\left(x_i + \frac{1}{\sqrt{3}}\right)^{2}\right)$$

where $n = 3$ and $x_i \in [-5, 5]$. For the optimal solutions, see Ref. 25.
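These two objectives are straightforward to evaluate; the sketch below (function name ours) is written for general n, with n = 3 as used here.

```python
import math

def fon(x):
    """The two objectives of the FON test problem as given above."""
    n = len(x)
    s = 1 / math.sqrt(n)
    f1 = 1 - math.exp(-sum((xi - s) ** 2 for xi in x))
    f2 = 1 - math.exp(-sum((xi + s) ** 2 for xi in x))
    return f1, f2
```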
The KUR problem has three disconnected Pareto-optimal regions, which may cause difficulty in finding nondominated solutions in all regions. In Figure 2, both MOPSO and NSGA-II have difficulty finding the entire Pareto front. MOCLPSO, however, performs well, obtaining nondominated solutions spread over all three regions, with the best convergence and diversity metric values, as shown in Table V.
Test Problems 3–6 are chosen from the Zitzler–Deb–Thiele test set.26
Test Problem 3 (ZDT1). Test Problem 3 has a convex Pareto front:
$$\text{Minimize } f_1(\mathbf{x}) = x_1$$

$$\text{Minimize } f_2(\mathbf{x}) = g(\mathbf{x})\left[1 - \sqrt{x_1/g(\mathbf{x})}\right]$$

$$g(\mathbf{x}) = 1 + 9\left(\sum_{i=2}^{n} x_i\right)\Big/(n-1)$$
Figure 1. Pareto fronts obtained by MOCLPSO, MOPSO, and NSGA-II on Test Problem 1 (FON).
Figure 2. Pareto fronts obtained by MOCLPSO, MOPSO, and NSGA-II on Test Problem 2 (KUR).
$$g(\mathbf{x}) = 1 + 9\left(\sum_{i=2}^{n} x_i\right)\Big/(n-1)$$
Figure 3. Pareto fronts obtained by MOCLPSO, MOPSO, and NSGA-II on Test Problem 3 (ZDT1).
Figure 4. Pareto fronts obtained by MOCLPSO, MOPSO, and NSGA-II on Test Problem 4 (ZDT2).
high values of the convergence and diversity metrics in Table VII. The proposed MOCLPSO, however, converges to the Pareto-optimal front, as shown in Figure 4.
Test Problem 5 (ZDT3). The fifth problem has a Pareto-optimal front consisting of several disconnected regions:
222 HUANG, SUGANTHAN, AND LIANG
Figure 5. Pareto fronts obtained by MOCLPSO, MOPSO, and NSGA-II on Test Problem 5 (ZDT3).
$$\text{Minimize } f_1(\mathbf{x}) = x_1$$

$$\text{Minimize } f_2(\mathbf{x}) = g(\mathbf{x})\left[1 - \sqrt{x_1/g(\mathbf{x})} - \frac{x_1}{g(\mathbf{x})}\sin(10\pi x_1)\right]$$

$$g(\mathbf{x}) = 1 + 9\left(\sum_{i=2}^{n} x_i\right)\Big/(n-1)$$
Figure 6. Pareto fronts obtained by MOCLPSO, MOPSO, and NSGA-II on Test Problem 6 (ZDT6).
$$g(\mathbf{x}) = 1 + 9\left[\left(\sum_{i=2}^{n} x_i\right)\Big/(n-1)\right]^{0.25}$$
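The ZDT functions used above share a common structure: f1 depends only on x1, and f2 couples f1 with a g function over the remaining variables. A minimal sketch (function names ours) of the g functions and of ZDT3's objectives, as reconstructed from the equations above:

```python
import math

def g_zdt1(x):
    """g(x) shared by ZDT1 and ZDT3 in the equations above."""
    n = len(x)
    return 1 + 9 * sum(x[1:]) / (n - 1)

def g_zdt6(x):
    """g(x) for ZDT6: the same normalized sum raised to the power 0.25."""
    n = len(x)
    return 1 + 9 * (sum(x[1:]) / (n - 1)) ** 0.25

def zdt3(x):
    """The two ZDT3 objectives (f1 = x1, disconnected front)."""
    f1 = x[0]
    g = g_zdt1(x)
    f2 = g * (1 - math.sqrt(f1 / g) - (f1 / g) * math.sin(10 * math.pi * f1))
    return f1, f2
```

On the Pareto-optimal front the tail variables are all zero, so g = 1 and f2 reduces to the bracketed expression in f1 alone.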
5. CONCLUSIONS
This article presented a novel proposal to extend CLPSO to tackle multiobjec-
tive optimization problems with an external archive. We evaluated the proposed
approach on six test problems currently used in the literature. The results demon-
strate that combining the CLPSO with a crowding distance-based archive mainte-
nance strategy can yield a simple, effective, and stable multiobjective evolutionary
algorithm. The main advantage of MOCLPSO is that it converges fast to the true
Pareto-optimal front with fewer FES, and at the same time maintains good diversity
along the Pareto front. At this point, the proposed MOCLPSO significantly outper-
forms two other representative multiobjective evolutionary algorithms.
References
1. Srinivas N, Deb K. Multiobjective optimization using nondominated sorting in genetic algorithms. Evol Comput 1994;2:221–248.
2. Zitzler E, Thiele L. Multiobjective evolutionary algorithms: A comparative case study and the strength Pareto approach. IEEE Trans Evol Comput 1999;3(4):257–271.
3. Knowles JD, Corne DW. Approximating the nondominated front using the Pareto archived evolution strategy. Evol Comput 2000;8:149–172.