
JID: ESWA

ARTICLE IN PRESS [m5G;July 31, 2015;16:16]

Expert Systems With Applications xxx (2015) xxx–xxx

Contents lists available at ScienceDirect

Expert Systems With Applications


journal homepage: www.elsevier.com/locate/eswa

PS–ABC: A hybrid algorithm based on particle swarm and artificial bee colony for high-dimensional optimization problems

ZhiYong Li*, WeiYou Wang, YanYan Yan, Zheng Li
College of Computer Science and Electronic Engineering, Hunan University, Changsha 410082, China

Keywords: Particle swarm optimization; Artificial bee colony; Hybrid algorithm; High-dimensional optimization problems

Abstract: Particle swarm optimization (PSO) and artificial bee colony (ABC) are new optimization methods that have attracted increasing research interest because of their simplicity and efficiency. However, when applied to high-dimensional optimization problems, the PSO algorithm may be trapped in local optima because of its low global exploration efficiency, and the ABC algorithm converges slowly in some cases because it lacks a powerful local exploitation capacity. In this paper, we propose a hybrid algorithm called PS–ABC, which combines the local search phase of PSO with two global search phases of ABC to find the global optimum. In the iteration process, the algorithm examines the aging degree of pbest for each individual to decide which type of search phase (PSO phase, onlooker bee phase, or modified scout bee phase) to adopt. The proposed PS–ABC algorithm is validated on 13 high-dimensional benchmark functions from the IEEE CEC 2014 competition problems and compared with the ABC, PSO, HPA, ABC–PS and OXDE algorithms. The results show that the PS–ABC algorithm is an efficient, fast-converging and robust optimization method for solving high-dimensional optimization problems.
© 2015 Published by Elsevier Ltd.

1. Introduction

Global optimization can be applied in many areas of science and engineering (Bomze, 1997; Gergel, 1997; Horst & Tuy, 1996; Lin, Ying, Chen, & Lee, 2008). In particular, high-dimensional optimization is a branch of global optimization that has attracted increasing attention in the past few years. A high-dimensional optimization problem can be formulated as a D-dimensional minimization problem as follows (Gergel, 1997; Nguyen, Li, Zhang, & Truong, 2014):

min f(x),
s.t. l ≤ x ≤ u,  (1)

where f(x) is the objective function, x = (x_1, x_2, ..., x_D) is a vector of variables, D corresponds to the problem dimension, and l = (l_1, l_2, ..., l_D) and u = (u_1, u_2, ..., u_D) define the lower and upper limits of the corresponding variables, respectively. In high-dimensional optimization problems, the search space usually becomes more complex as the dimensionality increases; thus, solving high-dimensional problems is a considerable challenge.

Due to practical demands, there have been attempts to use different methods for high-dimensional optimization problems in recent years (Grosan & Abraham, 2009). One method is to use a parallel optimization algorithm. This approach aims to solve specific standard functions. Höfinger, Schindler, and Aszódi (2002) proposed a parallel global optimization algorithm for typical high-dimensional problems. Schutte, Reinbolt, Fregly, Haftka, and George (2004) introduced a parallel particle swarm algorithm for some standard functions (the Griewank and Corona test functions). However, parallel optimization algorithms are limited in some application fields because parallel computing is difficult to implement for high-dimensional optimization problems.

Several metaheuristic algorithms, such as Differential Evolution (DE) (Brest, Greiner, Boskovic, Mernik, & Zumer, 2006; Price, Storn, & Lampinen, 2006; Yang, Tang, & Yao, 2007), the Genetic Algorithm (GA) (Chelouah & Siarry, 2000; Sánchez, Lozano, Villar, & Herrera, 2009), Particle Swarm Optimization (PSO) (Eberhart & Kennedy, 1995; Jiang, Hu, Huang, & Wu, 2007; Karaboga, 2005), and the Artificial Bee Colony (ABC) (Karaboga & Basturk, 2008; Zhang, Ouyang, & Ning, 2010), have shown considerable success in solving high-dimensional optimization problems in the past few years. Among the existing metaheuristics for global optimization, the PSO and ABC methods are highly successful and suitable for some classes of high-dimensional optimization problems. However, the main challenge of the PSO algorithm is that it can easily get stuck in a local optimum when handling complex high-dimensional problems. Moreover, the

* Corresponding author. Tel.: +86 13607436411.
E-mail addresses: zhiyong.li@hnu.edu.cn, zhengha1989@163.com (Z. Li), 1053723695@qq.com (W. Wang), 739954660@qq.com (Y. Yan), 121514277@qq.com (Z. Li).

http://dx.doi.org/10.1016/j.eswa.2015.07.043
0957-4174/© 2015 Published by Elsevier Ltd.

Please cite this article as: Z. Li et al., PS–ABC: A hybrid algorithm based on particle swarm and artificial bee colony for high-dimensional
optimization problems, Expert Systems With Applications (2015), http://dx.doi.org/10.1016/j.eswa.2015.07.043

convergence speed of the ABC algorithm is typically lower than that of other metaheuristic algorithms, such as the DE and PSO algorithms, when solving high-dimensional problems. This is because PSO has a poor exploration ability and ABC has a poor exploitation mechanism. Therefore, several modified PSO or ABC algorithms have been proposed to further balance the exploration and exploitation processes, which results in improved convergence speed and avoidance of local optima. For example, TSai, Pan, Liao, and Chu (2009) improved the exploration ability of ABC by adding the concept of universal gravitation to the onlooker bee phase and applied the interactive ABC (IABC) to five benchmark functions. Jia, Zheng, Qu, and Khan (2011) proposed a novel memetic PSO (CGPSO) algorithm for high-dimensional problems, which combines the canonical PSO with a chaotic and Gaussian local search procedure. Jamian, Abdullah, Mokhlis, Mustafa, and Bakar (2014) proposed a global PSO (GPSO) algorithm for high-dimensional numerical optimization problems. Imanian, Shiri, and Moradi (2014) proposed a modified ABC (VABC) for high-dimensional continuous optimization problems.

Metaheuristic algorithms use different exploration and exploitation strategies for high-dimensional optimization problems. To overcome the poor exploration ability of PSO and the poor exploitation mechanism of ABC, hybrid metaheuristic algorithms have become a new research trend for solving high-dimensional optimization problems and have attracted considerable attention in recent years. In this paper, a hybrid metaheuristic algorithm is a recombination procedure for the hybridization of ABC and PSO. For instance, a novel hybrid swarm intelligence algorithm (IABAP) was developed by Shi et al. (2010) by using information communication between PSO and ABC; the information exchange approach improved the performance of the algorithm. El-Abd (2011) proposed a hybrid approach referred to as ABC–SPSO, based on PSO and ABC, for continuous function optimization. Kıran and Gündüz (2013) proposed a hybrid approach (HPA) based on the PSO and ABC algorithms for continuous optimization problems. Chun-Feng, Kui, and Pei-Ping (2014) proposed a novel ABC algorithm based on the PSO search mechanism (ABC–PS) for global optimization. In these studies, algorithms such as IABAP, ABC–SPSO, HPA and ABC–PS are hybridizations of PSO and ABC. Although exploration and exploitation in these algorithms can be balanced to achieve high-quality results for optimization problems, these techniques cannot solve large-scale global optimization problems that involve high dimensions (Kıran & Gündüz, 2013). For example, in the IABAP, HPA and ABC–PS algorithms, the update rule of ABC is executed in each iteration; thus, these three algorithms retain the characteristics of ABC and have a lower convergence speed on high-dimensional problems. In addition, IABAP and ABC–SPSO have poor global search ability and poor computing power on high-dimensional multimodal problems.

Therefore, we propose a new hybrid procedure (PS–ABC) for the hybridization of PSO and ABC that uses the exploitation ability of PSO and the exploration ability of ABC. Traditional PSO has a strong exploitation ability and fast convergence speed (Jia et al., 2011). By contrast, basic ABC has an effective exploration ability (Zhu & Kwong, 2010). Thus, the proposed method has a fast convergence speed and excellent computing performance for high-dimensional optimization problems. The update status of pbest in the PS–ABC algorithm is characterized by three states: active, aged, and dying. The proposed method determines the optimal solution in the three corresponding phases. An active individual performs the PSO phase to exploit a new solution along the direction of pbest and gbest. In the aged state, the onlooker bee phase uses the most outstanding pbest to explore additional possible solutions in a new search space and escape from the search space of the PSO phase. An optimal solution that cannot be updated indicates that the process is in a dying state, and the modified scout bee phase is then used to explore the whole search space. The performance of PS–ABC is compared with the ABC, PSO, HPA, ABC–PS and OXDE algorithms. The experimental results show that the proposed PS–ABC algorithm is more effective on high-dimensional optimization problems.

The rest of the paper is organized as follows: Section 2 introduces the PSO and ABC algorithms. Section 3 presents the PS–ABC algorithm, including the algorithm details, a search ability analysis, a complexity analysis, and a convergence analysis. Section 4 describes the test problems and parameter settings. Section 5 discusses the simulation results over 13 high-dimensional benchmark functions and the PS–ABC control parameters. Finally, the conclusion is drawn in Section 6.

2. Related work

2.1. PSO algorithm

PSO, which was proposed by Kennedy and Eberhart (Eberhart & Kennedy, 1995), is one of the most recent evolutionary algorithms; it is based on the searching behavior of animals, such as fish schooling and bird flocking. In the PSO model, each individual is composed of three vectors: the velocity v_i, the current position x_i, and the previous best position pbest_i. Suppose that the objective function is D-dimensional; then the velocity and position of the ith particle are represented as v_i = (v_{i1}, v_{i2}, v_{i3}, ..., v_{iD}) and x_i = (x_{i1}, x_{i2}, x_{i3}, ..., x_{iD}), respectively, while its previous best position is stored in pbest_i = (pbest_{i1}, pbest_{i2}, pbest_{i3}, ..., pbest_{iD}). In each generation, the best position discovered among all pbest positions is known as the global best position gbest = (gbest_1, gbest_2, gbest_3, ..., gbest_D). The process of PSO is presented below (Eberhart & Shi, 2001):

Step 1: Initialization
  Assign parameters and create populations. Set iter = 0.
Step 2: Reproduction and updating loop
  for i = 1, 2, ..., N do
    Update the velocity v_i of particle x_i by using (2).
    Update the position of particle x_i by using (3).
    Evaluate the fitness value of the new particle x_i.
    if x_i is better than pbest_i then
      Set x_i to be pbest_i.
    end if
  end for
  Set the particle with the best fitness value to be gbest.
  iter = iter + 1.
Step 3: If the stop criterion is satisfied, the process is terminated. Otherwise, return to Step 2.

The PSO algorithm has three stages: initialization, iteration, and the termination criterion. In the initialization stage, the population is initialized and randomly distributed in the search space. In the iteration stage, the velocities and positions of the particles are updated by Eqs. (2) and (3), respectively. The velocity equation in PSO is

v_i^{t+1} = w · v_i^t + c1 · r1 · (pbest_i^t − x_i^t) + c2 · r2 · (gbest^t − x_i^t),  (2)

and the position equation is

x_i^{t+1} = x_i^t + v_i^{t+1},  (3)

where c1 and c2 are two positive constants that indicate the relative influence of the cognitive and social components, respectively; w is the inertia weight, which provides a balance between local exploitation and global exploration; and r1 and r2 are random real values in the interval [0, 1]. The velocity of the particles on each dimension is clamped to the range [−Vmax, Vmax].

If the termination criterion is satisfied, the algorithm produces the best solution (gbest). Otherwise, the iteration stage is repeated.
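The update rules (2) and (3) can be sketched in a few lines of Python; the function name and the default parameter values below are illustrative, not the authors' settings.

```python
import random

# A minimal sketch of the PSO update rules (2) and (3) for one particle;
# the default parameter values are illustrative assumptions.
def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, vmax=1.0):
    """One PSO iteration for a single D-dimensional particle."""
    new_x, new_v = [], []
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()
        # Velocity update, Eq. (2)
        vd = w * v[d] + c1 * r1 * (pbest[d] - x[d]) + c2 * r2 * (gbest[d] - x[d])
        # Clamp each velocity component to [-Vmax, Vmax]
        vd = max(-vmax, min(vmax, vd))
        new_v.append(vd)
        # Position update, Eq. (3)
        new_x.append(x[d] + vd)
    return new_x, new_v
```

Starting a particle at the origin with pbest = gbest = (1, 1), one call moves the particle toward the attractors while keeping every velocity component inside the clamp.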


2.2. ABC algorithm

Karaboga (2005) proposed the ABC algorithm to optimize numerical problems; it is a swarm intelligence algorithm based on the foraging behavior of honey bee swarms. The colony of artificial bees in the ABC algorithm consists of three groups of bees: employed bees, onlookers, and scouts. Employed bees are responsible for searching for a food source and for sharing this information to recruit onlooker bees. Onlooker bees tend to select better food sources from those of the employed bees and further search the food around the selected food source. If a food source is not improved within a predetermined number of trials (denoted by limit), its employed bee becomes a scout bee that searches randomly for new food sources. The main steps of the ABC algorithm are given below:

Step 1: Initialization
  Assign parameters and create populations. Set trial = 0 for each population member.
Step 2: The employed bee phase
  for i = 1, 2, ..., SN do
    Update a new candidate solution v_i by using (4) for the employed bees.
    Evaluate the fitness value of the candidate solution v_i.
    Apply a greedy selection process between v_i and x_i to select the better one.
    If solution x_i does not improve, trial_i = trial_i + 1; otherwise trial_i = 0.
  end for
Step 3: Calculate the probability P_i by using (5) for the solutions x_i using their fitness values.
Step 4: The onlooker bee phase
  for i = 1, 2, ..., SN do
    if rand(0, 1) ≤ P_i then
      Update a new candidate solution v_i by using (4) for the onlooker bees.
      Evaluate the fitness value of the candidate solution v_i.
      Apply a greedy selection process between v_i and x_i to select the better one.
      If solution x_i does not improve, trial_i = trial_i + 1; otherwise trial_i = 0.
    end if
  end for
Step 5: The scout bee phase
  if max(trial_i) > limit then
    Replace x_i with a new randomly produced candidate solution by using (6).
  end if
Step 6: If the stop criterion is satisfied, stop and output the best solution achieved so far. Otherwise, return to Step 2.

In the ABC algorithm, a food source position represents a potential solution to the problem to be optimized. First, let us suppose that the solution space of the problem is D-dimensional. The ABC algorithm starts by randomly producing food sources, and each solution is represented as x_i = (x_{i1}, x_{i2}, x_{i3}, ..., x_{iD}), i ∈ {1, 2, ..., SN}, where SN is the number of food sources and equals half the population size.

In the employed bee phase, each employed bee performs a modification on the position of its food source by randomly selecting a neighboring food source. A new food source v_i can be generated from the old food source as follows:

v_i^{t+1} = x_i^t + ψ · (x_i^t − x_k^t),  (4)

where k ∈ {1, 2, ..., SN} is a randomly chosen index that must be different from i, and ψ is a random number in the range [−1, 1].

In the onlooker bee phase, each onlooker bee chooses a better-performing food source solution from all the food source solutions of the employed bees. The onlooker bee selects a food source solution depending on the roulette wheel selection mechanism, which is given by

P_i = fit_i / Σ_{j=1}^{SN} fit_j,  (5)

where fit_i is the fitness value of solution x_i. After the selection, the onlooker bee tries to improve the food source solution of the employed bee by using expression (4).

If the food source position of an employed bee cannot be further improved within a given number of steps (limit), this employed bee becomes a scout bee. The new random food source position (scout bee) is calculated from the following equation:

x_i^{t+1} = x_min + rand · (x_max − x_min),  (6)

where x_min and x_max are the lower and upper bounds of the food source position, respectively.

The candidate solution is compared with the old one. If the new food source has a better quality than the old source, then the old source is replaced by the new one. Otherwise, the old source is retained.

3. Hybrid evolutionary algorithm based on particle swarm and artificial bee colony

3.1. Algorithm description

Exploitation and exploration are key search mechanisms in solving high-dimensional optimization problems. The exploitation process applies existing knowledge to seek better solutions, whereas the exploration process is concerned with searching the entire space for an optimal solution. It is obvious from the velocity expression (2) (see Section 2.1) that each particle's velocity is updated from its own best (pbest) and the global best (gbest) experience. A gbest found early in the search process may lie near a poor local minimum and is likely to confine the convergence of solutions to local minima (Suganthan, 1999). This characteristic indicates that PSO has a good exploitation ability but a poor exploration ability. By analyzing the structure of the ABC algorithm (TSai et al., 2009), we notice that onlooker bees move straight to one of the better nectar source areas of the employed bees. The flight direction of the bees changes, so the exploration ability of the bees increases. Furthermore, the scout bees search for a new food source randomly by using expression (6). Although the population is diverse, the convergence rate is reduced in the late iterations.

To avoid the disadvantages of the two algorithms, we propose a hybrid optimization approach (PS–ABC) based on PSO and ABC. The detailed pseudo-code of PS–ABC is presented in Algorithm 1. There are three main phases in PS–ABC: the PSO phase, the onlooker bee phase, and the modified scout bee phase. The onlooker bee and modified scout bee phases are described in Sections 3.2 and 3.3.

There are three stages in PS–ABC: initialization, iteration, and the final stage. In the initialization stage, we define the solution space and assign values to several variables. The algorithm adopts all parameters from PSO and the measurement parameter from ABC (which we call pbestMeasure), and adds two new control parameters (Limit1 and Limit2) to control the algorithm.

We first produce a particular number (N) of solutions randomly in the solution space. In the iteration stage, the iterations are performed until the stopping criteria are matched. Each individual in each iteration needs to be managed, and pbestMeasure records the update status of pbest for each individual. If pbest is updated, then pbestMeasure is reset to 0; otherwise, pbestMeasure is increased by 1. For each individual, if pbestMeasure is less than Limit1, the individual performs the traditional PSO phase. Otherwise, we compare pbestMeasure with Limit2 to decide which type of search to employ. If pbestMeasure is less than Limit2, the individual runs the onlooker bee phase of ABC. Otherwise, the modified scout bee phase of ABC takes place.

If the stopping criteria are matched, the algorithm performs the final stage to produce the best solution (gbest). Otherwise, the iteration stage is repeated.

3.2. Onlooker bee phase

When pbestMeasure is larger than Limit1 and less than Limit2, the corresponding individual performs the onlooker bee phase.
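The ABC update rules (4)–(6) above can be sketched as follows; the function names and the convention that larger fitness is better are illustrative assumptions, and ψ is drawn once per update, matching the vector form of (4).

```python
import random

# A minimal sketch of the ABC update rules (4)-(6); names and the
# higher-is-better fitness convention are illustrative assumptions.
def neighbor_solution(x, i, k):
    """Eq. (4): v_i = x_i + psi * (x_i - x_k), with psi ~ U[-1, 1]."""
    psi = random.uniform(-1.0, 1.0)
    return [x[i][d] + psi * (x[i][d] - x[k][d]) for d in range(len(x[i]))]

def selection_probabilities(fits):
    """Eq. (5): roulette wheel probabilities P_i = fit_i / sum_j fit_j."""
    total = sum(fits)
    return [f / total for f in fits]

def scout_solution(xmin, xmax, dim):
    """Eq. (6): a new random food source inside the bounds."""
    return [xmin + random.random() * (xmax - xmin) for _ in range(dim)]
```

For instance, with fitness values (1, 3) the roulette probabilities are (0.25, 0.75), so the onlooker is three times as likely to follow the better source.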


Algorithm 1. PS–ABC algorithm.

Input: Objective function f(x) and constraints
1. Initialization
2. Parameter initialization: assign parameter values to N, MaxFES, w (inertia weight), c1 (cognitive weight), c2 (social weight), Limit1, Limit2.
3. Population initialization: create particle positions X, velocities V, pbest and gbest.
4. Set pbestMeasure = 0 for each particle and generation iter = 0.
5. Iterations
6. while iter ≤ MaxFES do
7.   for i = 1, 2, ..., N do
8.     if pbestMeasure(i) ≤ Limit1 then
9.       PSO phase().
10.    else
11.      if pbestMeasure(i) ≤ Limit2 then
12.        Onlooker Bee phase().
13.      else
14.        Modified Scout Bee phase().
15.      end if
16.    end if
17.    if x_i is better than pbest_i then
18.      Set x_i to be pbest_i and pbestMeasure(i) = 0.
19.    else
20.      pbestMeasure(i) = pbestMeasure(i) + 1.
21.    end if
22.    iter = iter + 1.
23.  end for
24.  Set the particle with the best fitness value to be gbest.
25. end while
26. The final stage
Output: gbest = (gbest_1, gbest_2, gbest_3, ..., gbest_D)
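The dispatch on pbestMeasure in Algorithm 1 can be sketched as a minimal Python loop for minimization. This is a simplified stand-in, not the authors' code: the PSO branch omits velocity memory and inertia, the roulette-wheel selection of the onlooker phase is reduced to a uniform random choice of a pbest, and all names and parameter values are illustrative; the onlooker and scout moves follow rules (7) and (8) given below.

```python
import random

# A simplified sketch of the PS-ABC dispatch in Algorithm 1 (minimization).
# The phase bodies stand in for the full updates of Sections 2.1, 3.2, 3.3.
def ps_abc(f, dim, n=20, max_fes=2000, limit1=5, limit2=10,
           xmin=-5.0, xmax=5.0):
    x = [[random.uniform(xmin, xmax) for _ in range(dim)] for _ in range(n)]
    pbest = [xi[:] for xi in x]
    measure = [0] * n          # pbestMeasure: aging counter per individual
    gbest = min(pbest, key=f)
    fes = 0
    while fes < max_fes:
        for i in range(n):
            if measure[i] <= limit1:
                # Active: PSO-style move toward pbest and gbest (no velocity memory here)
                r1, r2 = random.random(), random.random()
                cand = [x[i][d] + r1 * (pbest[i][d] - x[i][d])
                                + r2 * (gbest[d] - x[i][d]) for d in range(dim)]
            elif measure[i] <= limit2:
                # Aged: onlooker-style move guided by another pbest, Eq. (7)
                j = random.randrange(n)
                psi = random.uniform(-1.0, 1.0)
                cand = [x[i][d] + psi * (x[i][d] - pbest[j][d]) for d in range(dim)]
            else:
                # Dying: modified scout move between two random pbests, Eq. (8)
                k1, k2 = random.sample(range(n), 2)
                psi = random.uniform(-1.0, 1.0)
                cand = [x[i][d] + psi * (pbest[k1][d] - pbest[k2][d]) for d in range(dim)]
            if f(cand) < f(x[i]):        # greedy selection
                x[i] = cand
            if f(x[i]) < f(pbest[i]):    # pbest update drives the aging counter
                pbest[i] = x[i][:]
                measure[i] = 0
            else:
                measure[i] += 1
            fes += 1
        gbest = min(pbest, key=f)
    return gbest
```

On the sphere function f(x) = Σ x_d², for example, the sketch steadily drives gbest toward the origin, because gbest is monotone non-increasing under the greedy selections.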

Algorithm 2. Onlooker Bee phase().

Input: A particle position x_i and all pbest
1. Select the half of all pbest with the highest fitness values to be employed bees.
2. Calculate the probability values P_j for these selected pbest by using (5).
3. for each of the employed bees do
4.   if rand(0, 1) ≤ P_j then
5.     Update a new produced solution x_i by using (7):
6.       x_i^{t+1} = x_i^t + ψ · (x_i^t − pbest_j^t).  (7)
7.     Apply a greedy selection process to select the better one.
8.   end if
9. end for
Output: the new position x_i

Algorithm 3. Modified scout bee phase().

Input: A particle position x_i and all pbest
1. Randomly choose two positive integers k1 ≠ k2 from {1, 2, 3, ..., N}.
2. Update a new produced solution x_i by using (8):
3.   x_i^{t+1} = x_i^t + ψ · (pbest_{k1}^t − pbest_{k2}^t).  (8)
4. Apply a greedy selection process to select the better one.
Output: the new position x_i

The pseudo-code of the onlooker bee phase is shown in Algorithm 2. The pbest of the PSO phase holds a good result and is used throughout the updating process. When the pbest value has not been updated for some time, the PSO phase is terminated and the onlooker bee phase of ABC is performed.

We first choose the half of all pbest with the highest fitness values as employed bees. Thereafter, the onlooker bees attempt to improve the solutions of the employed bees by using expression (7). Based on the "experience" concept, previous information can be used as a guide for accurate decisions; therefore, other pbest values are selected and used to update the individual. Employed bees (pbest values) are selected by using the probability expression (5) on the basis of the roulette wheel selection mechanism. A greedy selection is applied between x_i^t and x_i^{t+1}, and the better one is selected depending on the fitness value. If the fitness value of x_i^{t+1} is superior to that of x_i^t, then the individual memorizes the new position and omits the old one; otherwise, the previous position is retained in memory.

3.3. Modified scout bee phase

If pbestMeasure is larger than Limit2, the corresponding individual searches as a scout bee for a new food source (the modified scout bee phase). The pseudo-code of the modified scout bee phase is shown in Algorithm 3. First, we randomly select two pbest from all the pbest, and then the scout bee generates a new food source by using expression (8). Once x_i^{t+1} is obtained, it is evaluated and compared with x_i^t. If the fitness value of x_i^{t+1} is better than that of x_i^t, then x_i^{t+1} replaces x_i^t and becomes a new member of the population; otherwise, x_i^t is retained.

In the original ABC, the scout bee phase, which produces a solution randomly, provides diversity in the population, but it reduces the convergence rate in the iteration process. The modified scout bee phase can cover the entire search space and maintain the diversity of the population because the values k1 and k2 (both positive integers) are randomly selected from {1, 2, 3, ..., N}. Meanwhile, the scout bees, which share information between pairs of pbest, are responsible for convergence in the late iterations. Therefore, the modified scout bee phase may contribute both diversity in the population and quick convergence.

3.4. Algorithm search ability analysis

The above phases coordinate the local exploitation and global exploration abilities of the algorithm to improve search performance when solving high-dimensional optimization problems. To analyze the potential search spaces of the original PSO and of PS–ABC, we refer to the "potential search range" in Liang, Qin, Suganthan, and Baskar (2006). Let the search length of the potential space of PSO and PS–ABC for the dth dimension of the ith and jth individuals be r_{ij}^d. The potential search range for the ith and jth individuals is expressed as follows:

r_{ij}^d = |L_i^d − L_j^d| = |L_i^d − pbest_i^d| + |L_i^d − gbest^d|  if i = j,
           |L_i^d − pbest_j^d|  if i ≠ j,
d ∈ {1, 2, ..., D}.  (9)


Fig. 1. The PS–ABC's possible search spaces. (a) x_k is located between pbest_k and gbest, (b) pbest_k is between x_k and gbest, (c) gbest is located between x_k and pbest_k.

Hence, the volume of the potential search space of PS–ABC for the ith individual is

R = Π_{d=1}^{D} r_{ij}^d,  i, j ∈ {1, 2, ..., N}.  (10)

In the original PSO, the current position of each particle learns from its pbest and gbest simultaneously. The gbest is more likely to provide greater guidance information than pbest. When the gbest has not been updated, it may influence the particle to move toward a local optimum region. However, the onlooker bee phase in the PS–ABC algorithm selects outstanding pbest from among all the pbest, and the individual can fly in other directions by learning from these pbests when it falls into a local optimum region. Furthermore, an individual that uses the modified scout bee phase can move randomly in any direction. Hence, the PS–ABC algorithm has the ability to jump out of a local optimum via its two global exploration phases.

Let us consider that all individuals and the possible potential search spaces of the traditional PSO and of PS–ABC on all dimensions are plotted as a line in Fig. 1. We take the kth individual, whose position is x_k, as the reference object. Three different cases are analyzed in Fig. 1: (a) x_k is located between pbest_k and gbest, (b) pbest_k is between x_k and gbest, and (c) gbest is located between x_k and pbest_k. When the individual is active, the particle updates its velocity by using expression (2), and the corresponding exploitation area is S1; thus, the possible potential search area of the original PSO is also S1. When the individual is aged, the bee explores other new optimum areas by using expression (7); the corresponding exploration area is S2. When the individual is dying, the bee explores the entire search space according to expression (8); its exploration area is S3.

Therefore, we observe from Fig. 1 that the two exploration phases of PS–ABC exploit a larger potential search space than that of the traditional PSO. By increasing the potential search space of each individual, the diversity is also increased. Thus, PS–ABC searches more promising regions to find the global optimum in high-dimensional optimization problems.

3.5. Algorithm complexity analysis

Assume that the variable dimension of the optimization problem is D and that the population size is N; the PS–ABC algorithm is described in Section 3.1. Considering the worst case, the time complexity of the iterative process of the PS–ABC algorithm is analyzed as follows:

In Step 1, the main operation is producing the initial population, and the time complexity is O(ND).
In Step 2, judging the stopping criterion takes O(1) time.
In Step 3, the parameter pbestMeasure is examined: if pbestMeasure is less than Limit1, the PSO phase is performed; otherwise, Limit2 is examined, and if pbestMeasure is less than Limit2, the onlooker bee phase is performed, and if not, the modified scout bee phase is performed. The time complexity is O(N).
In Step 4, updating pbest and gbest takes O(N) time.
In Step 5, the iteration continues and returns to Step 2.

Therefore, the time complexity of the PS–ABC algorithm is O(ND).

3.6. Algorithm convergence analysis

The convergence of an algorithm must satisfy two conditions: (1) the population can produce any individual in the search space, and (2) the optimum solution can be preserved. The PS–ABC algorithm is a process that constantly repeats three main phases. Each phase can generate a better individual, which replaces the old one in memory. Therefore, the search process of the PS–ABC algorithm is a Markov chain (Cox & Miller, 1977), and the convergence of the PS–ABC algorithm is analyzed as follows:

Definition 1. Let X* := {x* ∈ X : f(x*) = min(f(x) | x ∈ X)} be the set of optimal solutions of the problem, where X is the search space and f is the objective function. θ(R) := |R ∩ X*| denotes the number of optimal solutions in the population R.

Definition 2. In the PS–ABC algorithm, if lim_{t→∞} P{θ(R(t)) ≥ 1 | R(0) = R_0} = 1 holds for an arbitrary initial population R_0, where t represents the number of iterations, then the algorithm converges with probability 1 to its globally optimal solution.

Theorem 1. The PS–ABC algorithm converges with probability 1 to its globally optimal solution.

Proof. Let P_0(t) = P{θ(R(t)) = 0}; then the probability P_0(t + 1) is

P_0(t + 1) = P{θ(R(t + 1)) = 0}
           = P{θ(R(t + 1)) = 0, θ(R(t)) = 0} + P{θ(R(t + 1)) = 0, θ(R(t)) ≠ 0}.

P{θ(R(t + 1)) = 0 | θ(R(t)) ≠ 0} = 0 is true because of the best-solution storage mechanism.

Hence, P_0(t + 1) = P{θ(R(t + 1)) = 0 | θ(R(t)) = 0} × P{θ(R(t)) = 0}.

Please cite this article as: Z. Li et al., PS–ABC: A hybrid algorithm based on particle swarm and artificial bee colony for high-dimensional
optimization problems, Expert Systems With Applications (2015), http://dx.doi.org/10.1016/j.eswa.2015.07.043

Table 1
13 high-dimensional benchmark functions.

Category I (unimodal):
f1(x) = Σ_{i=1}^{D} (10^6)^{(i−1)/(D−1)} · x_i^2
f2(x) = x_1^2 + 10^6 · Σ_{i=2}^{D} x_i^2
f3(x) = 10^6 · x_1^2 + Σ_{i=2}^{D} x_i^2

Category II (multimodal):
f4(x) = Σ_{i=1}^{D−1} (100 · (x_{i+1} − x_i^2)^2 + (x_i − 1)^2)
f5(x) = −20 exp(−0.2 sqrt((1/D) Σ_{i=1}^{D} x_i^2)) − exp((1/D) Σ_{i=1}^{D} cos(2π x_i)) + 20 + e
f6(x) = Σ_{i=1}^{D} Σ_{k=0}^{kmax} [a^k · cos(2π b^k (x_i + 0.5))] − D · Σ_{k=0}^{kmax} [a^k · cos(2π b^k · 0.5)],  a = 0.5, b = 3, kmax = 20
f7(x) = (1/4000) Σ_{i=1}^{D} x_i^2 − Π_{i=1}^{D} cos(x_i / √i) + 1
f8(x) = Σ_{i=1}^{D} (x_i^2 − 10 · cos(2π x_i) + 10)
f9(x) = 418.9829 × D − Σ_{i=1}^{D} g(z_i),  z_i = x_i + 4.209687462275036e+002, where
  g(z_i) = z_i · sin(|z_i|^{1/2})                                                    if |z_i| ≤ 500,
  g(z_i) = (500 − mod(z_i, 500)) · sin(√|mod(|z_i|, 500) − 500|) − (z_i − 500)^2 / (10000 D)   if z_i > 500,
  g(z_i) = (mod(|z_i|, 500) − 500) · sin(√|mod(|z_i|, 500) − 500|) − (z_i + 500)^2 / (10000 D)  if z_i < −500
f10(x) = (10/D^2) · Π_{i=1}^{D} (1 + i · Σ_{j=1}^{32} |2^j x_i − round(2^j x_i)| / 2^j)^{10/D^{1.2}} − 10/D^2
f11(x) = |Σ_{i=1}^{D} x_i^2 − D|^{1/4} + (0.5 Σ_{i=1}^{D} x_i^2 + Σ_{i=1}^{D} x_i) / D + 0.5
f12(x) = |(Σ_{i=1}^{D} x_i^2)^2 − (Σ_{i=1}^{D} x_i)^2|^{1/2} + (0.5 Σ_{i=1}^{D} x_i^2 + Σ_{i=1}^{D} x_i) / D + 0.5
f13(x) = g(x_1, x_2) + g(x_2, x_3) + … + g(x_{D−1}, x_D) + g(x_D, x_1),
  g(x, y) = 0.5 + (sin^2(√(x^2 + y^2)) − 0.5) / (1 + 0.001(x^2 + y^2))^2
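For concreteness, a few of the Table 1 functions are straightforward to implement directly from their definitions. The sketch below (plain Python written for this discussion, not the authors' MATLAB code) gives the Ackley (f5), Griewank (f7), and Rastrigin (f8) functions, each of which attains its global minimum value 0 at the origin:

```python
import math

def ackley(x):  # f5: multimodal, global minimum 0 at x = 0
    d = len(x)
    s1 = sum(xi * xi for xi in x) / d
    s2 = sum(math.cos(2 * math.pi * xi) for xi in x) / d
    return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e

def griewank(x):  # f7: quadratic sum term minus cosine product term
    s = sum(xi * xi for xi in x) / 4000
    p = 1.0
    for i, xi in enumerate(x, start=1):
        p *= math.cos(xi / math.sqrt(i))
    return s - p + 1

def rastrigin(x):  # f8: many regularly spaced local optima
    return sum(xi * xi - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

zero = [0.0] * 60
print(ackley(zero), griewank(zero), rastrigin(zero))  # all approximately 0
```

Evaluating each function at the zero vector is a quick sanity check that an implementation matches the definitions above.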
P{θ(R(t + 1)) = 1 | θ(R(t)) = 0} > 0 holds because the three main search phases of the PS–ABC algorithm can reach, and then retain, the globally optimal solution. Let λ = min_t P{θ(R(t + 1)) = 1 | θ(R(t)) = 0}, t = 0, 1, 2, …. Then

P{θ(R(t + 1)) = 0 | θ(R(t)) = 0}
= 1 − P{θ(R(t + 1)) ≠ 0 | θ(R(t)) = 0}
= 1 − P{θ(R(t + 1)) ≥ 1 | θ(R(t)) = 0}
≤ 1 − P{θ(R(t + 1)) = 1 | θ(R(t)) = 0}
≤ 1 − λ < 1.

Therefore,

0 ≤ P0(t + 1) = P{θ(R(t + 1)) = 0} ≤ (1 − λ) · P{θ(R(t)) = 0} = (1 − λ) · P0(t),

that is, 0 ≤ P0(t + 1) ≤ (1 − λ) · P0(t). Applying this bound recursively,

0 ≤ P0(t + 1) ≤ (1 − λ)^2 · P0(t − 1) ≤ … ≤ (1 − λ)^{t+1} · P0(0).

Given that lim_{t→∞} (1 − λ)^{t+1} = 0 and 0 ≤ P0(0) ≤ 1, we obtain 0 ≤ lim_{t→∞} P0(t) ≤ lim_{t→∞} (1 − λ)^t · P0(0) = 0, so lim_{t→∞} P0(t) = 0. Then

lim_{t→∞} P{θ(R(t)) ≥ 1 | R(0) = R0} = 1 − lim_{t→∞} P{θ(R(t)) = 0 | R(0) = R0} = 1 − lim_{t→∞} P0(t) = 1.

Therefore, as t → ∞, P{θ(R(t)) ≥ 1} → 1: the PS–ABC algorithm finds the globally optimal solution and converges to it with probability 1. □

4. Test problems and parameter settings

4.1. Test problems

To evaluate the efficiency of the proposed algorithm, PS–ABC was compared with the standard PSO, ABC, HPA, ABC–PS, and OXDE on 13 high-dimensional benchmark functions taken from the IEEE CEC 2014 competition problems (Liang, Qu, & Suganthan, 2013). These benchmark functions are listed in Table 1. They are classified into two categories according to their characteristics: (I) unimodal functions (f1–f3) and (II) multimodal functions (f4–f13). A function is unimodal if it has a single global optimum; unimodal functions are easy to solve compared with multimodal ones. Multimodal functions are difficult to solve because local optima are scattered through the search space, so the search process must be able to explore the whole search space in order to find the global optima.

4.2. Parameter settings

To achieve a fair comparison, we set the same values for the common control parameters of the algorithms, such as the population size (N) and the maximum number of function evaluations (MaxFES). For each function, MaxFES was set to D × 10,000 (D is the dimensionality of the search space) and N was set to 100 in all experiments. Moreover, the search space for each function was [−100, 100]^D, and the dimension (D) was set to 60, 100, and 500 in turn in all comparative experiments. The other algorithm-specific control parameters are presented below.

PSO settings: In the standard PSO (Eberhart & Shi, 2001), the inertia weight w, which balances the local and global search abilities, is set to 0.729. The cognitive and social factors (c1 and

Table 2
Optimization computing results for f1 – f13 functions with D = 60 after 20 runs (the best mean, Std. values
and the ranks are marked in bold).

f D PS–ABC PSO ABC HPA ABC–PS

f1 60 Mean 0 1.9043e+07 3.7988e+05 7.8505e−50 9.6344e−03


Std. 0 1.4561e+06 1.5768e+05 2.3551e−49 3.3245e−02
Rank 1 5 4 2 3
f2 60 Mean 0 2.5388e+09 6.3499e+05 3.4268e−51 2.2342e−02
Std. 0 4.9911e+08 1.3988e+05 6.7922e−51 4.7422e−01
Rank 1 5 4 2 3
f3 60 Mean 0 1.5352e+04 3.6038e+01 3.4446e−55 2.4697e−03
Std. 0 2.0491e+03 3.4286e+01 5.1501e−55 5.7247e−04
Rank 1 5 4 2 3
f4 60 Mean 5.8672e+01 4.2446e+07 1.3386e+03 1.8851e+01 4.8637e−02
Std. 3.6833e−02 1.0242e+07 2.1109e+03 2.2888e+01 1.5012e−01
Rank 3 5 4 2 1
f5 60 Mean 3.1771e−09 2.1874e+01 2.5987e−01 2.1874e−08 3.3370e−11
Std. 2.0798e−09 1.0547e+00 4.2510e−03 2.3507e−09 2.5757e−11
Rank 2 5 4 3 1
f6 60 Mean 5.4677e−14 4.8278e−02 6.5619e−02 3.7219e−05 1.7459e−03
Std. 2.3220e−15 5.4949e−05 2.3904e−03 1.3878e−06 2.0923e−02
Rank 1 4 5 2 3
f7 60 Mean 0 1.3579e+00 9.9726e−02 5.1582e−02 1.6354e−08
Std. 0 1.1014e−01 2.4281e−01 6.7150e−02 3.3446e−07
Rank 1 5 4 3 2
f8 60 Mean 0 4.0728e+03 7.9768e−08 1.8459e−13 6.2663e−06
Std. 0 2.1309e+02 1.2486e−07 3.3259e−12 8.2557e−06
Rank 1 5 3 2 4
f9 60 Mean 7.4167e−04 4.3919e+02 4.3129e+00 7.6950e−04 6.6371e−04
Std. 2.3407e−05 1.1919e+02 7.3719e+00 2.6080e−05 7.3516e−06
Rank 2 5 4 3 1
f10 60 Mean 0 0 0 0 0
Std. 0 0 0 0 0
Rank 1 1 1 1 1
f11 60 Mean 6.7675e−01 2.4447e+01 1.6713e+01 1.9535e+00 5.7008e−01
Std. 1.0973e−01 8.1248e+00 2.2855e+01 4.6253e−01 1.2463e−02
Rank 2 5 4 3 1
f12 60 Mean 4.2917e−01 2.3454e+03 3.2869e+01 7.6691e+00 5.0492e−01
Std. 3.0562e−02 2.6270e+02 2.1428e+01 9.7104e+00 4.8901e−01
Rank 1 5 4 3 2
f13 60 Mean 0 2.2310e+01 1.1856e+01 3.7838e−16 8.3417e−07
Std. 0 9.6393e−01 4.4201e+00 2.2388e−16 7.9841e−07
Rank 1 5 4 2 3
Average rank 1.384 4.615 3.769 2.308 2.154
Overall rank 1 5 4 3 2

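The "Average rank" row in Table 2 is simply the arithmetic mean of the 13 per-function ranks. As a quick check (the rank list below is read off Table 2 for PS–ABC; note the table truncates rather than rounds the result):

```python
# Per-function ranks of PS-ABC read off Table 2 (f1..f13, D = 60)
ps_abc_ranks = [1, 1, 1, 3, 2, 1, 1, 1, 2, 1, 2, 1, 1]
avg_rank = sum(ps_abc_ranks) / len(ps_abc_ranks)
print(avg_rank)  # 18/13 = 1.3846..., reported (truncated) as 1.384
```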
c2) are real constants, which represent the knowledge of the particle itself and the collaboration among the particles, respectively. In our experiments, c1 and c2 are set to 1.4945.

ABC settings: The ABC population consists of 50 employed bees and 50 onlooker bees, because the population size is 100. Limit, which determines the occurrence of scout bees, is calculated as follows (Karaboga & Akay, 2009):

Limit = (D × N) / 2,  (11)

where N is the population size and D is the dimensionality of the search space.

HPA settings: HPA is a recombination algorithm based on PSO and ABC, so the HPA population consists of 50 particles, 25 employed bees, and 25 onlooker bees. The parameters of the ABC component of HPA are consistent with those of the standard ABC algorithm. The cognitive and social parameters (c1 and c2) of the PSO component of HPA are both set to 2 (Kıran & Gündüz, 2013). The inertia weight is defined as follows:

w = (MaxFES − iter) / MaxFES,  (12)

where MaxFES is the maximum evaluation number and iter is the evaluation index.

ABC–PS settings: In Chun-Feng et al. (2014), the inertia weight is defined as follows:

w = ((wmax − wmin) / MaxFES) · iter,  (13)

where wmax and wmin are the maximum and minimum inertia weights, respectively, iter is the evaluation index, and MaxFES is the maximum evaluation number. wmax and wmin are set to 0.9 and 0.4, respectively; c1 and c2 were both set to 1.3.

PS–ABC settings: Our algorithm has five control parameters: the inertia weight w, the cognitive factor c1, the social factor c2, Limit1, and Limit2. The parameters c1 and c2 were both set to 1.4945 in the experiments, and the inertia weight w follows expression (12). The control parameters were set to Limit1 = l1 · (D × N)/2 and Limit2 = l2 · (D × N)/2, where l1 = 0.01 and l2 = 0.05. To investigate the effect of the control parameters l1 and l2 on the PS–ABC algorithm, we tested different l1 and l2 values with D = 60, 100, and 500.

5. Experimental results and discussion

5.1. Experimental setting

All algorithms in this paper are coded in MATLAB 7.8.0, and all experiments were run 20 times with different random seeds on the same personal computer, with an Intel Core i3 G630 2.70 GHz CPU and 2 GB of RAM, under Windows 7.
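The derived control parameters above can be collected into a short sketch. This is a paraphrase of Eqs. (11) and (12) and the stated PS–ABC Limit rules in plain Python, not the authors' MATLAB code:

```python
def abc_limit(D, N):
    """Eq. (11): Limit = (D * N) / 2, the scout trigger in standard ABC."""
    return D * N // 2

def inertia_weight(iter_, max_fes):
    """Eq. (12): linearly decreasing inertia weight used by HPA and PS-ABC."""
    return (max_fes - iter_) / max_fes

def ps_abc_limits(D, N, l1=0.01, l2=0.05):
    """PS-ABC aging thresholds: Limit1 = l1*(D*N)/2, Limit2 = l2*(D*N)/2."""
    base = D * N / 2
    return l1 * base, l2 * base

D, N = 60, 100
max_fes = 10_000 * D                 # MaxFES = D * 10,000
limit1, limit2 = ps_abc_limits(D, N)
print(abc_limit(D, N), limit1, limit2, max_fes)  # 3000 30.0 150.0 600000
```

With the default l1 = 0.01 and l2 = 0.05, an individual at D = 60 is therefore considered "aged" after 30 stagnant updates and "died" after 150, while a standard ABC scout would only appear after 3000.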

Table 3
Optimization computing results for f1 – f13 functions with D = 100 after 20 runs (the best mean, Std. values
and the ranks are marked in bold).

f D PS–ABC PSO ABC HPA ABC–PS

f1 100 Mean 0 1.8860e+08 7.9291e+06 4.8275e−11 2.9990e−02


Std. 0 1.2378e+08 7.6773e+06 5.9398e−11 3.5842e−01
Rank 1 5 4 2 3
f2 100 Mean 0 6.1509e+09 1.9192e+07 5.8956e−09 3.8672e−01
Std. 0 1.2886e+09 1.9686e+07 1.5879e−08 4.6321e−03
Rank 1 5 4 2 3
f3 100 Mean 0 5.2459e+04 8.4910e+02 1.0602e−11 3.9066e−03
Std. 0 1.6172e+04 4.7740e+02 1.6445e−11 1.2206e−02
Rank 1 5 4 2 3
f4 100 Mean 9.8219e+01 1.1910e+09 5.9772e+03 2.9722e+02 8.7094e+00
Std. 9.3594e−01 5.5774e+07 4.1273e+03 7.0215e+01 5.3352e+00
Rank 2 5 4 3 1
f5 100 Mean 6.5777e−11 2.2417e+01 1.9999e+01 2.2017e−08 2.5021e−12
Std. 3.2900e−11 2.0817e+01 4.1409e−01 1.3152e−07 1.2578e−12
Rank 2 5 4 3 1
f6 100 Mean 8.8363e−14 4.8278e−02 6.5619e−02 5.7219e−02 3.4071e−05
Std. 8.9339e−16 5.4949e−05 2.3904e−02 1.3877e−03 3.6532e−06
Rank 1 3 5 4 2
f7 100 Mean 0 2.6103e+00 9.3039e+00 1.8674e−01 1.0229e−08
Std. 0 2.7011e−01 1.0803e+01 2.2738e−01 3.6532e−06
Rank 1 4 5 3 2
f8 100 Mean 0 1.1197e+04 1.6966e+00 4.8713e−12 7.7497e−05
Std. 0 2.7289e+03 2.8824e+00 4.2957e−12 5.5547e−04
Rank 1 5 4 2 3
f9 100 Mean 2.4262e−03 1.3682e+03 4.4694e+00 7.2728e−03 1.2728e−03
Std. 3.2942e−05 3.0280e+02 4.7585e+00 2.1684e−04 4.5433e−04
Rank 2 5 4 3 1
f10 100 Mean 0 0 0 0 0
Std. 0 0 0 0 0
Rank 1 1 1 1 1
f11 100 Mean 8.2625e−01 4.4734e+01 4.2111e+01 1.6003e+00 5.9318e−01
Std. 1.1921e−01 9.7357e+00 3.3966e+01 2.2883e−01 3.2544e−01
Rank 2 5 4 3 1
f12 100 Mean 4.4680e−01 6.1837e+03 7.8231e+01 5.1468e+00 5.0619e−01
Std. 2.9138e−02 9.5735e+02 6.4651e+01 3.8788e+00 6.7441e−01
Rank 1 5 4 3 2
f13 100 Mean 0 4.5550e+03 7.4062e+03 7.0172e−13 7.8203e−07
Std. 0 2.0116e+02 9.0657e+03 7.8939e−13 6.9087e−07
Rank 1 4 5 2 3
Average rank 1.307 4.385 4.000 2.538 2.000
Overall rank 1 5 4 3 2

5.2. Comparison of PS–ABC with ABC, PSO, HPA, and ABC–PS

To test the performance of the proposed PS–ABC on high-dimensional benchmark functions, we compared PS–ABC with the original PSO, ABC, and the hybrid HPA and ABC–PS methods in several experiments. The mean and standard deviation (Std.) of the solutions over 20 independent runs were recorded. Meanwhile, the five algorithms were ranked on each function, and the average ranks for every category are given in Tables 2–4. In the comparison tables, results below 10^−60 and 10^−20 were treated as 0 for unimodal and multimodal functions, respectively. Besides the analysis of the mean and Std. values, we employed a non-parametric statistical test (Derrac, García, Molina, & Herrera, 2011) to detect whether there are significant differences among the results of the algorithms. In addition, we compared the convergence speed of the proposed PS–ABC method with that of the other algorithms for functions f1, f9, and f12 in Fig. 2.

Functions 1–3 are high-dimensional unimodal problems. According to the ranks in Tables 2–4, PS–ABC outperforms the rest of the algorithms, and its mean and Std. values are always lower than those of PSO, ABC, HPA, and ABC–PS. PS–ABC reaches the global optimum 0 for f1, f2, and f3 with D = 60, 100, and 500.

Functions 4–13 are high-dimensional multimodal problems, in which the number of local optima increases exponentially with the dimension of the function. Tables 2–4 give the mean, Std., and rank values for these high-dimensional multimodal functions. According to the rank results in Tables 2–4, PS–ABC performed better than PSO, ABC, HPA, and ABC–PS on functions f5, f6, f7, f8, f11, f12, and f13 with D = 60, 100, and 500, except for f5 and f11 with D = 60 and 100, and f12 with D = 500. However, PS–ABC performs worse than ABC–PS on f4 and f9 with D = 60, 100, and 500, because f4 and f9 are relatively hard test problems: f4 has a very narrow valley from the local optimum to the global optimum, and f9 has many local optima, with its second-best local optimum far from the global optimum.

From the average rank and overall rank shown in Tables 2–4, PS–ABC obtains the highest rank in every dimension case, followed by ABC–PS, HPA, ABC, and PSO. In addition, it should be noted that PS–ABC obtains steadier solutions than the other algorithms on all functions as the dimensionality increases. Thus, PS–ABC is efficient and robust in solving high-dimensional benchmark problems.

Besides the analysis of the mean and Std. values, we employed a non-parametric statistical test to assess the efficiency of the PS–ABC method (we analyze only D = 100). Within non-parametric statistics, the Friedman test is a multiple-comparison test that aims to detect significant differences between the results of two or more algorithms. The Friedman test ranks the algorithms for each problem separately: the best-performing algorithm receives rank 1, the second best rank 2, and so on. Thus, for each problem i, rank values run from 1


Table 4
Optimization computing results for f1 to f13 functions with D = 500 after 20 runs (the best mean, Std. values
and the ranks are marked in bold).

f D PS–ABC PSO ABC HPA ABC–PS

f1 500 Mean 0 1.9102e+09 1.6871e+09 3.8642e−03 8.4648e−02


Std. 0 2.9908e+08 1.8329e+09 1.8772e−04 1.4257e−02
Rank 1 4 5 2 3
f2 500 Mean 0 4.8735e+10 1.9813e+11 2.8695e−04 1.9960e−01
Std. 0 1.0465e+10 3.1987e+11 1.7835e−04 3.5632e−02
Rank 1 4 5 2 3
f3 500 Mean 0 2.2558e+05 3.5366e+05 2.2128e−03 4.0020e−04
Std. 0 1.4845e+05 4.8528e+05 1.4351e−02 2.8712e−03
Rank 1 4 5 3 2
f4 500 Mean 4.0667e+02 2.0301e+09 1.1226e+09 7.0466e+02 9.9222e+01
Std. 8.2411e+01 6.2440e+08 1.6613e+09 5.1551e+02 7.0812e+01
Rank 2 5 4 3 1
f5 500 Mean 3.6480e−10 2.4316e+01 1.8754e+01 2.5433e−07 2.0616e−05
Std. 1.4129e−10 4.3263e+00 1.4328e+01 3.1462e−06 8.0021e−05
Rank 1 5 4 2 3
f6 500 Mean 3.8723e−13 6.8278e−01 4.5619e−01 6.7219e−02 1.4086e−06
Std. 2.7343e−13 5.4949e−05 2.3904e−03 1.3877e−04 3.8702e−07
Rank 1 5 4 3 2
f7 500 Mean 0 1.7221e+01 6.5413e+01 1.0445e−02 3.9642e−07
Std. 0 2.2199e+00 6.0135e+01 4.3240e−03 3.0278e−08
Rank 1 4 5 3 2
f8 500 Mean 0 6.9385e+04 4.5398e+02 1.0658e+02 2.1558e−05
Std. 0 9.7041e+03 3.8844e+02 9.9404e+01 1.0934e−05
Rank 1 5 4 3 2
f9 500 Mean 6.3225e−03 9.1049e+03 2.7401e+03 2.5473e−01 5.4638e−03
Std. 1.3305e−04 1.8124e+03 3.0435e+03 8.7836e−02 3.1404e−02
Rank 2 5 4 3 1
f10 500 Mean 0 0 0 0 0
Std. 0 0 0 0 0
Rank 1 1 1 1 1
f11 500 Mean 1.0247e+00 9.2713e+01 1.7963e+03 4.6218e+00 1.2171e+00
Std. 1.0999e−01 1.1378e+01 2.9469e+03 3.8030e+00 2.0421e+00
Rank 1 4 5 3 2
f12 500 Mean 6.3666e+00 7.1494e+04 1.3850e+04 7.1794e+00 5.0998e−01
Std. 1.7518e+00 7.9859e+03 1.1623e+04 4.0949e+00 4.4094e−02
Rank 2 5 4 3 1
f13 500 Mean 0 2.3254e+04 6.3590e−01 8.5106e+01 4.0040e−07
Std. 0 7.1990e+02 9.8775e−01 1.5544e+01 2.0998e−06
Rank 1 5 3 4 2
Average rank 1.231 4.308 4.077 2.692 1.923
Overall rank 1 5 4 3 2

(best result) to k (worst result). Denote these ranks as r_i^j (1 ≤ j ≤ k). For each algorithm j, the ranks obtained on all problems are averaged to obtain the final rank R_j = (1/n) Σ_i r_i^j, where k is the number of algorithms included in the comparison, n is the number of problems considered, and j and i are their associated indices. Then the Friedman statistic F_f is given by

F_f = (12n / (k(k + 1))) · [Σ_j R_j^2 − k(k + 1)^2 / 4],  (14)

which is compared with a χ^2 distribution with k − 1 degrees of freedom; critical values have been tabulated in Sheskin (2003). In this section, we compute the average rank obtained on the 13 problems for the five algorithms. The ranks achieved by the Friedman test, the statistic value, and the p-value are shown in Table 5.

Table 5 shows PS–ABC as the best-performing algorithm of the comparison, with a Friedman rank of 1.423076. The p-value computed from the Friedman statistic strongly indicates the existence of significant differences among the five algorithms. However, the Friedman test can only detect significant differences over the whole multiple comparison; it cannot establish proper pairwise comparisons between the algorithms. Thus, a control method (i.e., the best-performing algorithm) is singled out through the application of the test. In the multiple-comparison test, we illustrate the use of a family of post-hoc procedures; these post-hoc methods allow us to find which algorithms are significantly better or worse than the control method. The test statistic z for comparing the ith algorithm and the jth algorithm is given by

z = (R_i − R_j) / sqrt(k(k + 1) / (6n)),  (15)

where R_i and R_j are the average Friedman rankings of the two algorithms compared.

The z-value is used to find the corresponding probability (p-value) from the table of the normal distribution N(0, 1), which is then compared with an appropriate level of significance α (Sheskin, 2003). However, when a p-value is considered in a multiple test, it reflects the

Table 5
Ranks achieved by the Friedman test; the computed statistic and the related p-value are also shown (we analyze only D = 100).

Algorithms    Friedman ranks
PS–ABC        1.423076
PSO           4.500000
ABC           4.115385
HPA           2.653846
ABC–PS        2.115384
Statistic     29.792293
p-value       5.39483e−06

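The statistic and p-value reported in Table 5 follow directly from Eq. (14); the sketch below (plain Python, not part of the paper's code) recomputes them from the reported average ranks. For df = k − 1 = 4 the χ² upper tail has the closed form exp(−x/2)·(1 + x/2), so no statistics library is needed:

```python
import math

ranks = {"PS-ABC": 1.423076, "PSO": 4.500000, "ABC": 4.115385,
         "HPA": 2.653846, "ABC-PS": 2.115384}
n, k = 13, len(ranks)          # 13 problems, 5 algorithms

# Eq. (14): Friedman statistic computed from the average ranks
ff = 12 * n / (k * (k + 1)) * (sum(r * r for r in ranks.values())
                               - k * (k + 1) ** 2 / 4)

# Upper tail of chi-square with k - 1 = 4 (even) degrees of freedom
p_value = math.exp(-ff / 2) * (1 + ff / 2)

print(ff, p_value)             # approx. 29.7923 and 5.39e-06, as in Table 5
```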

[Fig. 2 appears here: a 3 × 3 grid of convergence plots of log f(x) versus function evaluations, one column per dimension (D = 60, 100, 500) for each of f1, f9, and f12, with curves for ABC, PSO, ABC–PS, HPA, and PS–ABC.]

Fig. 2. Convergence curves of PSO, ABC, ABC–PS, HPA, and PS–ABC for (a)–(c) f1, (d)–(f) f9, (g)–(i) f12.

Table 6
z-values and adjusted p-values for the Friedman test (PS–ABC is the control method and we only analyze D = 100).

Algorithms z Unadjusted p-value Bonferroni Holm Holland Finner Hochberg

PSO 4.961391 6.9990e−07 2.7996e−06 2.7996e−06 2.7999e−06 2.7999e−06 2.7996e−06


ABC 4.341218 1.4170e−05 5.6680e−05 4.2510e−05 4.1999e−05 2.9999e−05 4.2510e−05
HPA 1.984556 0.0472 0.1888 0.0944 0.0922 0.0624 0.0944
ABC–PS 1.116313 0.2643 1.0000 0.2643 0.2643 0.2643 0.2643

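The entries of Table 6 can be reproduced from the Friedman ranks in Table 5: Eq. (15) gives each z-value, the two-sided tail of N(0, 1) gives the unadjusted p-value, and the Bonferroni–Dunn and Holm corrections are simple transformations of the sorted p-values. A self-contained check (plain Python, using erfc for the normal tail; written for this discussion, not the authors' code):

```python
import math

control = 1.423076                       # Friedman rank of PS-ABC (Table 5)
others = {"PSO": 4.500000, "ABC": 4.115385,
          "HPA": 2.653846, "ABC-PS": 2.115384}
n, k = 13, 5                             # 13 problems, 5 algorithms

se = math.sqrt(k * (k + 1) / (6 * n))    # denominator of Eq. (15)
z = {a: (r - control) / se for a, r in others.items()}
# two-sided tail of N(0, 1): p = erfc(|z| / sqrt(2))
p = {a: math.erfc(abs(v) / math.sqrt(2)) for a, v in z.items()}

# Bonferroni-Dunn: multiply every p-value by the (k - 1) comparisons
bonferroni = {a: min(1.0, (k - 1) * pv) for a, pv in p.items()}

# Holm: step-down pass over the p-values sorted in ascending order
holm, running = {}, 0.0
for j, a in enumerate(sorted(p, key=p.get), start=1):
    running = max(running, (k - j) * p[a])
    holm[a] = min(1.0, running)

print({a: round(v, 6) for a, v in z.items()})   # compare with Table 6
```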

Table 7
Optimization computing results for f1 – f13 functions after 20 runs (the best mean and Std. values are marked in bold).

f D = 60 D = 100 D = 500

PS–ABC OXDE PS–ABC OXDE PS–ABC OXDE

f1 Mean 0 2.5257e−17 0 1.3399e−04 0 1.5992e+07


Std. 0 2.0562e−17 0 4.7189e−05 0 2.0508e+06
f2 Mean 0 1.2387e−14 0 5.8693e−02 0 1.9760e+03
Std. 0 1.0219e−14 0 2.3863e−02 0 3.4299e+03
f3 Mean 0 2.0111e−20 0 7.1141e−08 0 2.9913e−03
Std. 0 1.8100e−20 0 3.5769e−08 0 4.0907e−03
f4 Mean 5.8672e+01 3.9279e+01 9.8219e+01 9.2730e+01 4.0667e+02 1.5340e+03
Std. 3.6833e−02 2.2475e+01 9.3594e−01 3.5371e+00 8.2411e+01 1.9651e+02
f5 Mean 3.1771e−09 2.0717e+01 6.5777e−11 2.1133e+01 3.6480e−10 1.9724e+01
Std. 2.0798e−09 1.8598e−01 3.2900e−11 3.5337e−02 1.4129e−10 4.8627e+01
f6 Mean 5.4677e−14 2.1669e−05 8.8363e−14 5.2933e−05 3.8723e−13 3.1609e−01
Std. 2.3220e−15 5.0741e−06 8.9339e−16 6.8093e−07 2.7343e−13 4.6532e−02
f7 Mean 0 1.4792e−03 0 1.1797e−09 0 3.0348e−06
Std. 0 3.1184e−03 0 6.0722e−10 0 3.5308e−06
f8 Mean 0 5.8873e+01 0 3.2795e+02 0 5.0365e+03
Std. 0 1.1657e+01 0 7.8394e+01 0 7.3559e+02
f9 Mean 7.4167e−04 7.6365e−04 2.4262e−03 1.2728e−03 6.3225e−03 6.6507e−03
Std. 2.3407e−05 0 3.2942e−05 7.8821e−09 1.3305e−04 4.9866e−04
f10 Mean 0 1.1073e+00 0 2.0702e+00 0 4.2632e+01
Std. 0 5.4772e−01 0 5.0188e−01 0 1.9812e+00
f11 Mean 6.7675e−01 4.4004e−01 8.2625e−01 5.6502e−01 1.0247e+00 8.9623e−01
Std. 1.0973e−01 4.5514e−02 1.1921e−01 6.0586e−02 1.0999e−01 7.8977e−02
f12 Mean 4.2917e−01 2.8726e−01 4.4680e−01 3.5976e−01 6.3666e+00 5.7678e−01
Std. 3.0562e−02 4.6103e−02 2.9138e−02 7.7432e−02 1.7518e+00 4.0126e−01
f13 Mean 0 2.6501e+03 0 4.5850e+03 0 2.2925e+04
Std. 0 2.0368e+02 0 9.5869e−13 0 0

Table 8
The different Limit1 and Limit2 values, which control onlooker and scout production, respectively (here Limit1 = l1 · (D × N)/2 and Limit2 = l2 · (D × N)/2).

Different control values
l1: 0.01  0.01  0.01  0.04  0.04  0.07  0.07  0.1  0.1  ∞
l2: 0.05  0.1   0.5   0.09  0.4   0.1   0.7   0.5  1    ∞

Table 9
The threshold values: when the result obtained by PS–ABC falls below the threshold, the method is terminated.

f             D = 60     D = 100    D = 500
f1, f2, f3    1.0e−60    1.0e−60    1.0e−60
f4            6.0e+01    1.0e+02    5.0e+02
f5            5.0e−11    1.0e−10    5.0e−10
f6            5.0e−14    1.0e−13    5.0e−13
f7, f8, f13   1.0e−14    1.0e−13    1.0e−13
f9            1.0e−03    5.0e−03    1.0e−02
f10           1.0e−04    1.0e−03    1.0e−03
f11, f12      1.0e+00    1.0e+00    1.0e+01

probability error of a certain comparison, but it does not take into account the remaining comparisons belonging to the family. To overcome this problem, we introduce adjusted p-values (APVs), including the Bonferroni–Dunn procedure (Dunn, 1961), the Holm procedure (Holm, 1979), the Holland procedure (Holland & Copenhaver, 1987), the Finner procedure (Finner, 1993), and the Hochberg procedure (Hochberg, 1988). The p-value adjustment procedures are given below:

• The Bonferroni–Dunn procedure: It adjusts the value of α in a single step by dividing it by the number of comparisons performed, (k − 1).
  Bonferroni APV_i: min{v, 1}, where v = (k − 1) · p_i.
• The Holm procedure: It adjusts the value of α in a step-down manner.
  Holm APV_i: min{v, 1}, where v = max{(k − j) · p_j : 1 ≤ j ≤ i}.
• The Holland procedure: It also adjusts the value of α in a step-down manner, like Holm's method.
  Holland APV_i: min{v, 1}, where v = max{1 − (1 − p_j)^(k−j) : 1 ≤ j ≤ i}.
• The Finner procedure: It also adjusts the value of α in a step-down manner, like Holm's and Holland's methods.
  Finner APV_i: min{v, 1}, where v = max{1 − (1 − p_j)^((k−1)/j) : 1 ≤ j ≤ i}.
• The Hochberg procedure: It adjusts the value of α in a step-up way.
  Hochberg APV_i: min{(k − j) · p_j : i ≤ j ≤ (k − 1)}.

Here i and j are the associated indices of problems and algorithms, respectively, and p_i and p_j are the corresponding p-values.

We can obtain the z-values, the unadjusted p-values, and these APVs by applying the post-hoc procedures to the ranks in Table 5. As Tables 5 and 6 show, the Friedman test indicates a significant improvement of PS–ABC over PSO, ABC, ABC–PS, and HPA under all of these post-hoc procedures, except for the Bonferroni–Dunn test.

To analyze the convergence speed, convergence curves are drawn in Fig. 2 for some functions (f1, f9, and f12) in a particular run of PSO, ABC, ABC–PS, HPA, and PS–ABC. As shown in Fig. 2(a, b, and c), the convergence speed of PS–ABC on function f1 was very fast, and it quickly found the global optimum 0 (i.e., log f1 → −∞). However, although PS–ABC cannot converge to the global optimum for f9 with D = 500 and f12 with D = 60, 100, and 500, it converges faster than ABC, ABC–PS, and HPA, as shown in Fig. 2

Table 10
Optimization results of the different control values for functions f1–f5 with different dimension values after 20 runs. (Shaded and bold values are the minimum and maximum average FEs, respectively, excluding the case in which Limit1 and Limit2 approach infinity.)

D = 60 D = 100 D = 500

f l1 l2 Mean Std. FEs Mean Std. FEs Mean Std. FEs

f1 0.01 0.05 8.6711e−60 9.3050e−60 154130 7.2298e−60 1.4252e−60 241,810 8.3125e−60 1.2136e−60 709,631
0.01 0.1 6.9551e−60 1.9439e−60 166840 8.2323e−60 7.2251e−60 246,780 8.4638e−60 1.8991e−60 831,911
0.01 0.5 7.2310e−60 1.6102e−60 212140 7.0294e−60 2.0172e−60 318,640 9.1478e−60 2.1467e−60 329,1510
0.04 0.09 8.9956e−60 7.8793e−61 174860 9.2064e−60 6.4219e−60 243,120 8.4238e−60 1.1234e−60 761,821
0.04 0.4 8.8737e−60 5.0883e−61 195930 7.6491e−60 8.1170e−61 286,680 7.6450e−60 3.0456e−61 795,600
0.07 0.1 7.3618e−60 2.2815e−61 208060 8.2168e−60 1.3758e−60 260,580 4.1045e−60 2.8772e−60 814,150
0.07 0.7 6.8953e−60 2.3431e−60 287180 6.6290e−60 1.6588e−60 452,320 6.0047e−60 3.3673e−60 780,711
0.1 0.5 6.2757e−60 2.1693e−60 356950 6.4331e−60 9.0038e−61 709,440 2.8399e−60 5.7503e−60 3463,610
0.1 1 1.2363e+07 3.6121e+07 467220 5.6070e+07 1.6660e+08 850,200 1.0642e+08 1.4781e+08 4875,510
∞ ∞ 8.9514e+06 9.203e+06 600000 2.8038e+07 4.1502e+07 100,0000 5.4601e+08 6.1479e+08 50,00,000
f2 0.01 0.05 8.1153e−60 1.2417e−60 172740 7.9132e−60 1.1863e−60 236,960 9.4115e−60 3.2340e−61 6,61,460
0.01 0.1 7.8515e−60 1.1502e−60 177680 8.2920e−60 1.5530e−60 245,080 8.6755e−60 8.5199e−61 8,60,050
0.01 0.5 7.8076e−60 1.4074e−60 226550 8.3229e−60 1.4774e−60 322,900 7.4678e−60 6.7554e−61 2564251
0.04 0.09 8.5246e−60 1.5617e−60 175820 8.2112e−60 1.1177e−60 244,560 9.6741e−60 1.8012e−61 7,43,550
0.04 0.4 6.6849e−60 2.2434e−60 196470 7.3959e−60 2.1038e−60 285,180 4.4568e−60 4.3504e−61 17,44,581
0.07 0.1 8.0589e−60 1.9310e−60 177060 7.3959e−60 2.1038e−60 285,180 2.0144e−60 1.0045e−60 8,97,120
0.07 0.7 6.5970e−60 1.9758e−60 260530 7.4955e−60 1.4463e−60 412,220 5.6373e−60 3.7326e−61 9,54,350
0.1 0.5 6.5526e−60 2.9054e−60 371020 6.4668e−60 2.4983e−60 590,900 9.3454e−02 4.6530e−01 4754,780
0.1 1 2.9713e+05 8.9139e+05 453980 1.1716e+07 3.51480e+06 849,630 2.0064e+04 2.6312e+03 487,4,120
∞ ∞ 2.1128e+07 1.4798e+07 600000 1.2241e+08 9.5691e+07 1000,000 3.3478e+09 4.7038e+09 50,00,000
f3 0.01 0.05 7.8223e−60 1.3837e−60 173,250 8.7802e−60 1.1439e−60 240,300 9.1260e−60 7.6574e−61 686,731
0.01 0.1 8.0591e−60 1.1956e−60 175,660 8.9251e−60 9.2568e−61 243,600 8.3433e−60 1.1272e−60 778,410
0.01 0.5 8.4507e−60 1.2971e−60 214,250 7.4025e−60 1.4571e−60 322,900 5.3265e−60 1.8534e−61 987,851
0.04 0.09 8.3332e−60 1.2099e−60 173,610 9.1755e−60 7.8396e−60 240,760 9.2576e−60 7.1928e−61 730,780
0.04 0.4 8.0441e−60 1.4608e−60 194,180 8.0116e−60 9.0618e−61 287,180 9.8974e−60 6.0756e−60 1,256,700
0.07 0.1 8.8450e−60 7.6298e−61 173,850 8.6063e−60 1.5434e−60 241,440 5.7058e−60 4.4010e−61 798,520
0.07 0.7 7.2166e−60 1.4531e−60 253,690 7.2344e−60 2.3003e−60 400,260 6.8584e−60 1.7494e−61 963,521
0.1 0.5 8.3982e−60 1.5610e−60 213,240 9.0283e−60 1.2464e−60 318,720 6.2914e−60 4.7003e−61 976,580
0.1 1 6.0843e−60 2.3272e−60 337,550 6.3109e−60 2.5222e−60 536,690 4.9734e+02 3.2145e+03 4,914,210
∞ ∞ 2.3609e+04 1.8042e+04 600,000 3.7044e+04 4.2855e+03 1,000,000 9.6326e+05 7.2199e+04 5,000,000
f4 0.01 0.05 9.6196e+01 1.9497e+00 60,181 9.9895e+01 1.0443e+00 98,981 4.9998e+02 2.3245e+00 304,830
0.01 0.1 9.0914e+01 8.3409e+00 66,131 9.9864e+01 9.7350e+00 109,400 4.9996e+02 4.3245e+00 712,261
0.01 0.5 8.7200e+01 1.1082e+01 159,230 9.9768e+01 1.1919e−01 263,120 5.4691e+02 2.0931e+00 1,014,520
0.04 0.09 9.3156e+01 4.5123e+00 61,171 9.9807e+01 2.0514e−01 101,780 4.9916e+02 1.0053e+00 356,501
0.04 0.4 9.1331e+01 5.3593e+00 120,760 9.9233e+01 1.9318e−01 216,200 4.8940e+02 1.4538e+01 1,263,161
0.07 0.1 9.7472e+01 1.5912e+00 65,391 9.9528e+01 4.4939e−01 299,280 4.9998e+02 1.2104e+01 456,711
0.07 0.7 8.4860e+01 9.7612e+00 224,700 9.9421e+01 3.2312e−01 398,820 5.0414e+02 3.3630e+01 1,220,461
0.1 0.5 7.9731e+01 1.1460e+01 282,520 9.9421e+01 3.2312e−01 398,820 4.8940e+02 1.8047e+01 1,175,101
0.1 1 8.8470e+01 1.0298e+01 247,340 9.9419e+01 3.7358e+01 479,440 4.2024e+02 7.5625e+01 2,441,540
∞ ∞ 3.4615e+06 6.0547e+06 600,000 1.7227e+07 4.2855e+07 1,000,000 5.0670e+08 9.4870e+08 5,000,000
f5 0.01 0.05 7.6883e−11 2.7997e−11 178,080 7.0197e−11 1.3287e−11 369,420 3.7300e−10 1.6544e−10 3,169,500
0.01 0.1 7.8813e−11 2.3376e−11 201,910 7.4582e−11 9.6479e−12 349,080 3.6068e−10 1.7413e−10 2,500,561
0.01 0.5 9.2799e−11 6.4286e−12 259,250 8.7348e−11 3.4345e−12 422,500 3.7341e−10 1.6253e−10 3,169,561
0.04 0.09 8.3512e−11 1.2843e−11 197,340 5.9893e−11 1.4041e−11 389,180 3.7300e−10 2.4346e−11 3,168,800
0.04 0.4 6.3774e−11 1.1750e−11 311,660 5.1677e−11 1.4119e−11 545,000 3.7680e−10 3.7606e−11 3,179,640
0.07 0.1 8.5109e−11 9.7565e−11 186,470 7.6080e−11 1.2138e−11 351,160 1.5851e−10 4.3245e−10 2,951,850
0.07 0.7 7.2895e−11 1.0244e−11 239,510 7.8686e−11 8.4290e−12 389,200 4.8358e−10 2.0416e−11 3,146,570
0.1 0.5 8.5759e−11 1.6234e−11 258,180 8.6991e−11 5.4332e−12 514,000 2.2649e−10 3.5536e−10 3,127,300
0.1 1 6.8601e−11 1.3690e−11 311,690 6.9746e−11 7.0225e−12 516,100 2.2649e−10 1.6432e−11 3,218,400
∞ ∞ 2.3216e+01 1.4087e+01 600,000 2.7683e+01 1.7210e+01 1,000,000 4.8974e+01 1.2426e+01 5,000,000

(f, g, h, and i). From Fig. 2, we conclude that PS–ABC converges faster than the other algorithms on these functions in almost all cases, except for PSO. This is because PSO has a higher convergence speed, and the proposed PS–ABC uses the exploration ability of ABC on top of PSO in the algorithm process. Thus, PS–ABC, like PSO, has a fast convergence speed.

5.3. Comparison of PS–ABC with OXDE

Wang, Cai, and Zhang (2012) proposed an orthogonal crossover based differential evolution (OXDE). The performance of the OXDE algorithm has been examined on 24 test instances from (Noman & Iba, 2008; Suganthan et al., 2005), and extensive experiments have demonstrated that OXDE is more effective than traditional DE and a state-of-the-art DE variant (denoted as ODE). The parameters of OXDE are the same as in the original paper (Wang et al., 2012), that is, scaling factor F = 0.9, crossover control parameter CR = 0.9, and MaxFES = D∗10,000 (D is the dimensionality of the problem), except for the population size (N = 100), which was changed to ensure fairness.

In this section, we compare PS–ABC with OXDE on 13 high-dimensional benchmark functions with D = 60, 100, and 500. The results of 20 independent runs are recorded in Table 7. From the mean and Std. values shown in Table 7, PS–ABC outperforms OXDE in most cases. The details are as follows: PS–ABC performed better than OXDE for functions f1, f2, f3, f5, f6, f7, f8, f10, and f13 with D = 60, 100, and 500. For functions f1, f2, f3, f7, f8, f10, and f13, PS–ABC can obtain the global optimum 0, and the corresponding Std. values are 0. For f5 and f6, PS–ABC gives the best results, although it cannot obtain the global optimum, and its Std. values are the smallest. For f9, PS–ABC outperforms OXDE with D = 60 and 500. However, PS–ABC performs worse than OXDE on f4, f11, and f12 with D = 60, 100, and 500, except for f4 with D = 500. These results indicate that PS–ABC is efficient in solving high-dimensional benchmark problems.
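The evaluation protocol used throughout these comparisons (20 independent runs per function and dimension, reporting the mean and Std. of the best value found) can be sketched as follows. This is an illustrative harness only: `run_once` is a random-search stand-in for PS–ABC or OXDE, `sphere` is a generic test function, and the evaluation budget is scaled down from MaxFES = D∗10,000 so the sketch runs quickly.

```python
import random
import statistics

def run_once(func, dim, max_fes, seed):
    """One independent optimizer run (placeholder: random search,
    standing in for PS-ABC or OXDE). Returns the best objective
    value found within max_fes function evaluations."""
    rng = random.Random(seed)
    best = float("inf")
    for _ in range(max_fes):
        x = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
        best = min(best, func(x))
    return best

def sphere(x):
    # Simple separable test function with global optimum 0.
    return sum(v * v for v in x)

def compare(dim=10, runs=20, fes_per_dim=100):
    # The study uses MaxFES = D * 10,000; scaled down here
    # (fes_per_dim = 100) so the sketch finishes quickly.
    max_fes = dim * fes_per_dim
    results = [run_once(sphere, dim, max_fes, seed) for seed in range(runs)]
    return statistics.mean(results), statistics.stdev(results)

mean, std = compare()
print(f"mean = {mean:.4e}, Std. = {std:.4e}")
```

Seeding each run with its index makes the whole comparison reproducible, which is how mean/Std. tables such as Table 7 can be regenerated exactly.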

Please cite this article as: Z. Li et al., PS–ABC: A hybrid algorithm based on particle swarm and artificial bee colony for high-dimensional
optimization problems, Expert Systems With Applications (2015), http://dx.doi.org/10.1016/j.eswa.2015.07.043

Table 11
Optimization results of the different control values for the f6–f10 functions with different dimension values after 20 runs. (The shaded and bold values are the minimum and maximum average FEs values, respectively, excluding the rows where the values of Limit1 and Limit2 approach infinity.)

D = 60 D = 100 D = 500

f l1 l2 Mean Std. FEs Mean Std. FEs Mean Std. FEs

f6 0.01 0.05 5.4196e−14 4.1219e−15 791 9.0994e−14 4.2876e−15 951 4.5344e−13 4.9286e−15 1331
0.01 0.1 5.3279e−14 5.0169e−15 881 8.6239e−14 2.5085e−15 1041 4.5122e−13 1.3401e−14 1461
0.01 0.5 5.5447e−14 2.8365e−15 831 8.9075e−14 2.5303e−15 981 4.6053e−13 1.6708e−14 1671
0.04 0.09 5.2778e−14 2.5360e−15 781 8.8909e−14 3.4358e−15 901 4.5734e−13 2.3924e−14 1301
0.04 0.4 5.4196e−14 2.7684e−15 831 9.2246e−14 1.7016e−15 961 4.4788e−13 6.3074e−14 1451
0.07 0.1 5.3445e−14 3.6595e−15 821 9.1579e−14 3.6936e−15 1021 4.5372e−13 1.3210e−12 1521
0.07 0.7 5.6198e−14 2.4763e−15 825 9.0243e−14 3.6405e−15 921 4.6290e−13 1.1394e−12 1151
0.1 0.5 5.3195e−14 2.4874e−15 771 8.8408e−14 4.2673e−15 941 4.4788e−13 1.0067e−13 1101
0.1 1 5.2861e−14 2.9154e−15 861 9.1578e−14 4.1379e−15 1001 4.6040e−13 3.5212e−14 2140
∞ ∞ 5.3695e−02 3.0092e−02 600,000 9.1912e−02 5.1863e−02 1,000,000 4.5094e−01 6.4736e−01 5,000,000
f7 0.01 0.05 8.6142e−14 5.3302e−15 100,650 8.3866e−14 1.4738e−14 140,340 8.6264e−14 5.2671e−15 434,361
0.01 0.1 8.4655e−14 1.1532e−14 105,960 7.7782e−14 1.8034e−14 150,680 9.7921e−14 1.0456e−15 763,371
0.01 0.5 7.7126e−14 2.1096e−14 182,050 8.3266e−14 1.6970e−14 290,760 9.1062e−14 2.8710e−14 2,475,210
0.04 0.09 8.6741e−14 7.5134e−15 101,080 8.8062e−14 1.2134e−14 144,900 9.1075e−14 3.5183e−15 485,901
0.04 0.4 7.0210e−14 2.5069e−14 138,290 8.1468e−14 2.2369e−14 224,280 9.2371e−14 3.7430e−15 1,453,301
0.07 0.1 8.4910e−14 1.2581e−14 103,650 5.1314e−14 3.0782e−14 206,900 7.2053e−14 6.5040e−14 558,401
0.07 0.7 3.5416e−14 2.9238e−14 232,340 4.0989e−14 1.5923e−14 397,080 6.4608e−13 4.6321e−12 1,754,220
0.1 0.5 4.6862e−14 2.0619e−14 223,790 2.2493e−14 1.7356e−14 502,040 3.3046e−13 4.0708e−12 1,942,850
0.1 1 2.8699e−14 3.3421e−14 441,210 4.3709e−14 2.9682e−14 750,910 2.8499e−10 5.1425e−11 3,547,110
∞ ∞ 9.4947e−01 2.9754e−01 600,000 1.8191e−01 2.3498e−01 1,000,000 8.6740e−01 1.2257e+00 5,000,000
f8 0.01 0.05 7.3896e−14 2.6409e−14 150,600 8.1001e−14 1.5812e−14 223,850 6.2764e−14 2.9736e−14 547,861
0.01 0.1 8.7752e−14 1.0180e−14 149,730 8.5975e−14 1.1042e−14 219,520 4.5001e−14 2.7416e−14 811,131
0.01 0.5 7.5495e−14 2.0210e−14 193,660 6.9988e−14 2.4680e−14 301,220 6.5421e−14 4.8532e−14 2,541,780
0.04 0.09 7.6738e−14 1.8182e−14 123,580 8.7752e−14 1.0575e−14 167,660 7.2238e−14 2.2484e−14 713,250
0.04 0.4 7.6916e−14 1.8306e−14 155,390 7.6027e−14 1.3453e−14 246,060 6.5725e−14 1.0698e−14 1,288,741
0.07 0.1 5.6843e−14 2.8698e−14 201,930 5.5067e−14 2.6704e−14 440,140 1.7764e−14 3.0724e−11 1,340,250
0.07 0.7 4.9915e−14 3.1681e−14 282,860 6.2527e−14 3.1883e−14 472,920 5.0120e−11 1.4162e−11 2,714,660
0.1 0.5 2.5401e−14 2.5084e−14 413,950 7.1022e+01 1.4204e+02 840,740 4.2548e−11 5.0974e−10 3,563,560
0.1 1 1.0186e+02 1.2702e+02 493,720 8.3468e+01 1.0693e+02 853,160 5.8498e+01 3.5325e+01 4,572,140
∞ ∞ 1.1187e+03 5.7760e+02 600,000 2.2212e+03 8.4883e+03 1,000,000 3.5237e+04 4.7613e+04 5,000,000
f9 0.01 0.05 9.6469e−04 3.8063e−05 73,581 4.3253e−03 4.9301e−04 98,291 9.8367e−03 1.5119e−04 325,730
0.01 0.1 9.6547e−04 3.2686e−05 79,701 4.4348e−03 3.0755e−04 107,040 9.9082e−03 6.4270e−05 498,971
0.01 0.5 9.3147e−04 5.1378e−05 160,790 3.6974e−03 8.8560e−04 263,820 6.5435e−03 2.4622e−05 2,407,801
0.04 0.09 9.8027e−04 1.1720e−05 73,511 4.8257e−03 1.3565e−04 99,841 9.7115e−03 1.4428e−04 393,131
0.04 0.4 8.4416e−04 5.5603e−05 126,920 3.1662e−03 1.1942e−03 212,940 7.8505e−03 1.7501e−04 1,598,710
0.07 0.1 9.3971e−04 7.8786e−05 105,441 3.9218e−03 1.3793e−03 141,140 7.0894e−03 5.9082e−02 2,110,420
0.07 0.7 8.4859e−04 6.5559e−05 234,260 3.7724e−03 1.1566e−03 418,160 1.3541e−02 1.5017e−02 1,524,100
0.1 0.5 8.1945e−04 8.1047e−05 262,730 2.5986e−03 1.2884e−03 463,260 1.0630e−02 1.0788e−02 1,150,440
0.1 1 7.8474e−04 3.0052e−05 457,920 6.1351e−03 1.7499e−03 875,570 2.4635e−02 1.3425e−02 4,125,740
∞ ∞ 1.7938e+02 2.3482e+02 600,000 1.1219e+03 1.7114e+03 1,000,000 1.3039e+03 1.8432e+03 5,000,000
f10 0.01 0.05 6.1475e−04 2.2868e−04 601 6.3253e−04 2.2215e−04 531 7.9402e−04 1.8154e−04 781
0.01 0.1 5.2565e−04 2.8521e−04 591 6.9675e−04 2.5552e−04 581 8.0470e−04 2.0363e−04 811
0.01 0.5 4.5176e−04 3.0432e−04 597 7.1666e−04 8.9564e−05 601 7.3257e−04 1.4872e−04 1010
0.04 0.09 5.2516e−04 2.7924e−04 561 5.5991e−04 1.9277e−04 561 4.7734e−04 1.1750e−04 801
0.04 0.4 4.2168e−04 2.1773e−04 574 7.7318e−04 1.3844e−04 571 4.0187e−04 2.5877e−04 921
0.07 0.1 3.4963e−04 2.2768e−04 601 7.8462e−04 1.3406e−04 561 8.7875e−04 2.3881e−04 790
0.07 0.7 5.9162e−04 3.3333e−04 571 5.3056e−04 3.2894e−04 541 3.8132e−04 2.5363e−04 831
0.1 0.5 4.4411e−04 2.6805e−04 621 5.6053e−04 1.6727e−04 581 8.7178e−04 4.0904e−04 850
0.1 1 5.7022e−04 3.0429e−04 651 5.3842e−04 2.1836e−04 571 8.5517e−04 1.1750e−04 1171
∞ ∞ 1.7938e+02 2.3482e+02 600,000 1.7938e+02 2.3482e+02 1,000,000 1.7938e+02 2.3482e+02 5,000,000

5.4. Experiments to analyze the PS–ABC control parameters

PS–ABC has two control parameters that affect its performance: Limit1 and Limit2, which control the production of onlooker and scout bees, respectively. To analyze the effect of these control parameters, 20 runs were performed for functions f1 to f13 with different dimension values. For each function, MaxFES was set to D∗10,000 and N was set to 100 in all experiments. In this section, Limit1 = l1 · (D × N)/2 and Limit2 = l2 · (D × N)/2. The different Limit1 and Limit2 values are determined by l1 and l2, and the corresponding values are presented in Table 8. As shown in Table 8, as the values of l1 and l2 approach 1, the total numbers of onlookers and scouts produced gradually decrease. When l1 and l2 equal infinity (i.e. ∞), the total number of onlookers and scouts produced goes to zero, which means that the PS–ABC algorithm runs only the PSO phase, exploiting the search space over the entire iteration process.

The convergence speed and the optimal solution are controlled by the different Limit1 and Limit2 (i.e. l1 and l2) values in the PS–ABC algorithm. To verify this hypothesis, a set of experiments was performed; the results are compared in Tables 10–12. The values given there are the mean, the Std., and the average number of FEs needed to reach the acceptable-solution thresholds. FEs is the function-evaluation count, which indicates the convergence speed of the algorithm: the smaller the FEs value, the faster the algorithm converges. The threshold values for the different dimensions are presented in Table 9.

As mentioned before, when the values of l1 and l2 equal infinity, the PS–ABC algorithm has only the PSO phase and obtains poor results in Tables 10–12. For functions f1, f2, f3, f4, f7, and f12 with l1 = 0.01 and l2 = 0.05, the PS–ABC algorithm obtained the minimum average FEs values in all dimensions (D = 60, 100, and 500). For functions f9 and f10, PS–ABC obtained the minimum average FEs values with l1 = 0.01 and l2 = 0.05 at D = 100 and
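A minimal sketch of these control-parameter formulas follows. The `choose_phase` rule is our illustrative reading of how the aging counter of pbest might be compared against Limit1 and Limit2 to select a search phase; it is not the paper's exact pseudocode.

```python
def limits(l1, l2, dim, pop_size):
    """Limit1 = l1 * (D x N) / 2 and Limit2 = l2 * (D x N) / 2."""
    base = dim * pop_size / 2
    return l1 * base, l2 * base

def choose_phase(age, limit1, limit2):
    """Hypothetical reading of the aging rule: a particle whose pbest
    has stagnated for fewer than Limit1 iterations stays in the PSO
    phase, switches to the onlooker-bee phase up to Limit2, and is
    reinitialized by the modified scout-bee phase beyond that."""
    if age < limit1:
        return "pso"
    if age < limit2:
        return "onlooker"
    return "scout"

# Setting from the parameter study: l1 = 0.01, l2 = 0.05, N = 100.
for dim in (60, 100, 500):
    limit1, limit2 = limits(0.01, 0.05, dim, 100)
    print(f"D = {dim:3d}: Limit1 = {limit1:g}, Limit2 = {limit2:g}")
# D =  60: Limit1 = 30, Limit2 = 150
# D = 100: Limit1 = 50, Limit2 = 250
# D = 500: Limit1 = 250, Limit2 = 1250
```

With l1 = 0.01 and l2 = 0.05 the two thresholds stay small and close to each other, so the onlooker and scout phases are triggered frequently, which matches the observation that such settings give the fastest convergence.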


Table 12
Optimization results of the different control values for the f11–f13 functions with different dimension values after 20 runs. (The shaded and bold values are the minimum and maximum average FEs values, respectively, excluding the rows where the values of Limit1 and Limit2 approach infinity.)

D = 60 D = 100 D = 500

f l1 l2 Mean Std. FEs Mean Std. FEs Mean Std. FEs

f11 0.01 0.05 9.2401e−01 6.9768e−02 64,571 9.7794e−01 9.9113e−02 113,150 8.9366e+00 9.2379e−01 862,100
0.01 0.1 9.5838e−01 5.1719e−02 124,230 9.2925e−01 5.2735e−02 106,780 8.9171e+00 1.2041e−01 244,630
0.01 0.5 9.7299e−01 3.8319e−02 152,051 9.8321e−01 2.6431e−02 393,380 1.2098e+01 1.3091e+00 1,164,360
0.04 0.09 9.7363e−01 3.1165e−02 65,091 9.8420e−01 3.2234e−02 278,280 8.9252e+00 2.8572e−01 245,950
0.04 0.4 9.5772e−01 2.9734e−02 179,101 9.8465e−01 3.5267e−02 484,840 7.9547e+00 4.0276e−01 1,240,781
0.07 0.1 9.8732e−01 3.9775e−02 78,450 1.0063e+00 2.3819e−02 634,700 8.5988e+00 7.1025e+00 754,100
0.07 0.7 9.8383e−01 2.4092e−02 116,861 9.7383e−01 2.1146e−02 108,260 9.5373e+00 4.0142e+00 292,240
0.1 0.5 9.8306e−01 9.7520e−02 172,411 9.9047e−01 4.3861e−02 460,120 9.0479e+00 3.6406e+00 257,500
0.1 1 9.9942e−01 6.4637e−02 222,420 9.8570e−01 2.0075e−02 284,220 9.4772e+00 2.1432e+00 962,010
∞ ∞ 1.0072e+00 9.6395e−02 176,941 1.0115e+00 4.1768e+00 893,590 2.3527e+01 6.1245e+00 1,736,334
f12 0.01 0.05 8.9673e−01 9.7099e−02 54,801 8.7049e−01 1.1546e−01 80,321 9.2046e+00 8.1947e−01 179,500
0.01 0.1 7.6934e−01 1.5307e−01 60,981 7.9452e−01 1.0304e−01 88,821 9.9468e+00 5.2198e−01 468,561
0.01 0.5 7.6808e−01 1.3664e−01 148,800 8.1738e−01 1.4368e−01 229,020 3.7280e+01 4.7455e+00 954,510
0.04 0.09 8.5717e−01 1.2831e−01 55,021 9.0512e−01 1.1514e−01 85,081 9.3893e+00 1.5485e+00 266,140
0.04 0.4 8.8876e−01 1.0314e−01 68,621 9.0501e−01 1.6072e−01 124,121 8.4780e+00 3.0558e+00 897,150
0.07 0.1 8.5920e−01 1.0208e−01 56,771 9.1954e−01 8.0318e−02 90,521 9.3456e+00 1.0128e+00 253,740
0.07 0.7 8.0036e−01 1.4891e−01 61,301 6.8848e−01 1.8789e−01 100,101 7.5437e+00 1.0278e+00 586,420
0.1 0.5 8.9852e−01 7.1733e−01 64,101 8.6193e−01 9.3012e−02 91,881 7.6163e+00 1.5978e+00 244,400
0.1 1 8.3030e−01 1.2581e−01 57,781 8.2845e−01 1.2504e−01 87,341 1.2356e+01 1.5321e+00 659,810
∞ ∞ 8.6506e+02 1.4424e+02 600,000 8.2025e+03 1.2209e+02 1,000,000 9.0041e+03 1.1372e+03 5,000,000
f13 0.01 0.05 1.4976e−14 2.1566e−14 568,900 2.1221e−14 2.6178e−14 948,610 4.7277e−14 2.7607e−14 4,749,061
0.01 0.1 1.2324e−14 1.5134e−14 569,150 1.0158e−14 6.6436e−15 948,520 5.5696e−14 2.6875e−14 4,784,630
0.01 0.5 1.5681e−14 2.4351e−14 568,820 3.8108e−15 2.6066e−15 949,520 1.3345e−13 1.5526e−14 4,894,630
0.04 0.09 1.3106e−14 1.6086e−14 569,320 3.6104e−14 2.2094e−14 949,480 7.8093e−13 5.3938e−13 4,786,331
0.04 0.4 2.6584e−14 2.8335e−14 569,460 1.1157e−14 7.1875e−15 949,680 1.8950e−13 1.5705e−12 4,957,950
0.07 0.1 1.6814e−14 1.9900e−14 569,450 4.2000e−14 2.7689e−14 948,560 6.4948e−14 1.4630e−13 4,758,131
0.07 0.7 1.5803e−14 1.1662e−14 569,230 7.9825e−15 4.3163e−15 950,080 5.6670e−01 3.0104e−01 4,976,430
0.1 0.5 1.4066e−14 1.1885e−14 569,370 2.9032e−15 3.6128e−15 949,340 4.6532e−13 5.7653e−13 4,796,430
0.1 1 1.5204e−14 2.5460e−14 569,892 7.5396e−15 1.6657e−15 949,900 2.2879e−12 2.2351e−11 4,832,140
∞ ∞ 2.7006e+03 2.4697e+02 600,000 4.5162e+03 3.6869e+02 1,000,000 3.2848e+04 2.1684e+01 5,000,000

500, respectively. In addition, when the l1 value is close to l2, PS–ABC obtained the minimum average FEs values, for example for f1, f2, f3, f4, f6, f7, f8, f9, f10, and f12 with D = 60, 100, and 500. As seen from Tables 10–12, for all functions except f5 (D = 100), f6 (D = 60 and 100), f10 (D = 100), f11 (D = 500), and f12 (D = 60, 100, and 500), PS–ABC obtained the maximum average FEs values when the control parameter values of l1 and l2 are very high. Therefore, it can be concluded that as the values of l1 and l2 increase, the PS–ABC algorithm produces poor results for most functions, because the exploration phase of PS–ABC is gradually weakened as l1 and l2 grow.

Based on the above analysis, the convergence speed and the optimal solution are controlled by the Limit1 and Limit2 values in the PS–ABC algorithm, so suitable Limit1 and Limit2 control parameter values should be chosen for all high-dimensional functions. When the l1 and l2 values are small and close to each other, the algorithm produces better results. In addition, the overall results for l1 = 0.01 and l2 = 0.05 are better than those for the other control parameter values for all functions, as shown in Tables 10–12. Thus, the control parameters were set to Limit1 = l1 · (D × N)/2 and Limit2 = l2 · (D × N)/2, with l1 = 0.01 and l2 = 0.05, in this paper.

6. Conclusion

In this paper, a hybrid algorithm called PS–ABC was proposed to solve high-dimensional optimization problems. This method uses the exploration ability of ABC on top of PSO in the algorithm process. In PS–ABC, the exploitation ability of PSO is used to find the best solution and increase the convergence rate of the algorithm, whereas the exploration ability of ABC is used to search the solution space. The efficiency of the proposed method was examined on 13 high-dimensional benchmark functions from the IEEE-CEC 2014 competition problems. The results showed that PS–ABC is an efficient, fast-converging, and robust optimization method compared with the original PSO, ABC, the hybrid HPA, ABC–PS, and OXDE for solving high-dimensional optimization problems.

Our future study will focus on reducing the influence of some parameters and on testing more high-dimensional complex problems. Furthermore, we will try to extend this algorithm to high-dimensional multi-objective optimization problems in the near future.

Acknowledgments

This work was partially supported by the National Natural Science Foundation of China (No. 61173107, 91320103), the National High-tech R&D Program of China (863 Program) (No. 2012AA01A301-01), the Special Project on the Integration of Industry, Education and Research of Guangdong Province, China (No. 2012A090300003), and the Science and Technology Planning Project of Guangdong Province, China (No. 2013B090700003).

References

Bomze, I. M. (1997). Developments in global optimization: vol. 18. Springer Science & Business Media.
Brest, J., Greiner, S., Boskovic, B., Mernik, M., & Zumer, V. (2006). Self-adapting control parameters in differential evolution: A comparative study on numerical benchmark problems. IEEE Transactions on Evolutionary Computation, 10, 646–657.
Chelouah, R., & Siarry, P. (2000). A continuous genetic algorithm designed for the global optimization of multimodal functions. Journal of Heuristics, 6, 191–213.
Chun-Feng, W., Kui, L., & Pei-Ping, S. (2014). Hybrid artificial bee colony algorithm and particle swarm search for global optimization. Mathematical Problems in Engineering, 2014.
Cox, D. R., & Miller, H. D. (1977). The theory of stochastic processes: vol. 134. CRC Press.
Derrac, J., García, S., Molina, D., & Herrera, F. (2011). A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm and Evolutionary Computation, 1, 3–18.
Dunn, O. J. (1961). Multiple comparisons among means. Journal of the American Statistical Association, 56, 52–64.


Eberhart, R. C., & Kennedy, J. (1995). A new optimizer using particle swarm theory. In Proceedings of the sixth international symposium on micro machine and human science: vol. 1 (pp. 39–43).
Eberhart, R. C., & Shi, Y. (2001). Particle swarm optimization: Developments, applications and resources. In Proceedings of the 2001 Congress on Evolutionary Computation: vol. 1 (pp. 81–86). IEEE.
El-Abd, M. (2011). A hybrid ABC–SPSO algorithm for continuous function optimization. In Proceedings of the IEEE Symposium on Swarm Intelligence (SIS), 2011 (pp. 1–6). IEEE.
Finner, H. (1993). On a monotonicity problem in step-down multiple test procedures. Journal of the American Statistical Association, 88, 920–923.
Gergel, V. P. (1997). A global optimization algorithm for multivariate functions with Lipschitzian first derivatives. Journal of Global Optimization, 10, 257–281.
Grosan, C., & Abraham, A. (2009). A novel global optimization technique for high dimensional functions. International Journal of Intelligent Systems, 24, 421–440.
Höfinger, S., Schindler, T., & Aszódi, A. (2002). Parallel global optimization of high-dimensional problems. In Recent advances in parallel virtual machine and message passing interface (pp. 148–155). Springer.
Hochberg, Y. (1988). A sharper Bonferroni procedure for multiple tests of significance. Biometrika, 75, 800–802.
Holland, B. S., & Copenhaver, M. D. (1987). An improved sequentially rejective Bonferroni test procedure. Biometrics, 417–423.
Holm, S. (1979). A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics, 65–70.
Horst, R., & Tuy, H. (1996). Global optimization: Deterministic approaches. Springer Science & Business Media.
Imanian, N., Shiri, M. E., & Moradi, P. (2014). Velocity based artificial bee colony algorithm for high dimensional continuous optimization problems. Engineering Applications of Artificial Intelligence, 36, 148–163.
Jamian, J. J., Abdullah, M. N., Mokhlis, H., Mustafa, M. W., & Bakar, A. H. A. (2014). Global particle swarm optimization for high dimension numerical functions analysis. Journal of Applied Mathematics, 2014.
Jia, D., Zheng, G., Qu, B., & Khan, M. K. (2011). A hybrid particle swarm optimization algorithm for high-dimensional problems. Computers & Industrial Engineering, 61, 1117–1122.
Jiang, Y., Hu, T., Huang, C., & Wu, X. (2007). An improved particle swarm optimization algorithm. Applied Mathematics and Computation, 193, 231–239.
Karaboga, D. (2005). An idea based on honey bee swarm for numerical optimization. Technical Report TR06, Erciyes University, Engineering Faculty, Computer Engineering Department.
Karaboga, D., & Akay, B. (2009). A comparative study of artificial bee colony algorithm. Applied Mathematics and Computation, 214, 108–132.
Karaboga, D., & Basturk, B. (2008). On the performance of artificial bee colony (ABC) algorithm. Applied Soft Computing, 8, 687–697.
Kıran, M. S., & Gündüz, M. (2013). A recombination-based hybridization of particle swarm optimization and artificial bee colony algorithm for continuous optimization problems. Applied Soft Computing, 13, 2188–2203.
Liang, J., Qu, B., & Suganthan, P. (2013). Problem definitions and evaluation criteria for the CEC 2014 special session and competition on single objective real-parameter numerical optimization. Technical report, Computational Intelligence Laboratory, Zhengzhou University, Zhengzhou, China, and Nanyang Technological University, Singapore.
Liang, J. J., Qin, A. K., Suganthan, P. N., & Baskar, S. (2006). Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Transactions on Evolutionary Computation, 10, 281–295.
Lin, S.-W., Ying, K.-C., Chen, S.-C., & Lee, Z.-J. (2008). Particle swarm optimization for parameter determination and feature selection of support vector machines. Expert Systems with Applications, 35, 1817–1824.
Nguyen, T. T., Li, Z., Zhang, S., & Truong, T. K. (2014). A hybrid algorithm based on particle swarm and chemical reaction optimization. Expert Systems with Applications, 41, 2134–2143.
Noman, N., & Iba, H. (2008). Accelerating differential evolution using an adaptive local search. IEEE Transactions on Evolutionary Computation, 12, 107–125.
Price, K., Storn, R. M., & Lampinen, J. A. (2006). Differential evolution: A practical approach to global optimization. Springer Science & Business Media.
Sánchez, A. M., Lozano, M., Villar, P., & Herrera, F. (2009). Hybrid crossover operators with multiple descendents for real-coded genetic algorithms: Combining neighborhood-based crossover operators. International Journal of Intelligent Systems, 24, 540–567.
Schutte, J. F., Reinbolt, J. A., Fregly, B. J., Haftka, R. T., & George, A. D. (2004). Parallel global optimization with the particle swarm algorithm. International Journal for Numerical Methods in Engineering, 61, 2296–2315.
Sheskin, D. J. (2003). Handbook of parametric and nonparametric statistical procedures. CRC Press.
Shi, X., Li, Y., Li, H., Guan, R., Wang, L., & Liang, Y. (2010). An integrated algorithm based on artificial bee colony and particle swarm optimization. In Proceedings of the sixth international conference on natural computation, ICNC 2010, Yantai, Shandong, China, 10–12 August 2010 (pp. 2586–2590).
Suganthan, P. N. (1999). Particle swarm optimiser with neighbourhood operator. In Proceedings of the 1999 Congress on Evolutionary Computation, CEC 99: vol. 3. IEEE.
Suganthan, P. N., Hansen, N., Liang, J. J., Deb, K., Chen, Y.-P., Auger, A., et al. (2005). Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. KanGAL Report, 2005005.
Tsai, P.-W., Pan, J.-S., Liao, B.-Y., & Chu, S.-C. (2009). Enhanced artificial bee colony optimization. International Journal of Innovative Computing, Information and Control, 5, 5081–5092.
Wang, Y., Cai, Z., & Zhang, Q. (2012). Enhancing the search ability of differential evolution through orthogonal crossover. Information Sciences, 185, 153–177.
Yang, Z., Tang, K., & Yao, X. (2007). Differential evolution for high-dimensional function optimization. In Proceedings of the IEEE Congress on Evolutionary Computation, CEC 2007 (pp. 3523–3530). IEEE.
Zhang, C., Ouyang, D., & Ning, J. (2010). An artificial bee colony approach for clustering. Expert Systems with Applications, 37, 4761–4767.
Zhu, G., & Kwong, S. (2010). Gbest-guided artificial bee colony algorithm for numerical function optimization. Applied Mathematics and Computation, 217, 3166–3173.
