Grey Wolf Optimizer
Seyedali Mirjalili, Seyed Mohammad Mirjalili, Andrew Lewis
School of Information and Communication Technology, Griffith University, Nathan Campus, Brisbane QLD 4111, Australia
Department of Electrical Engineering, Faculty of Electrical and Computer Engineering, Shahid Beheshti University, G.C. 1983963113, Tehran, Iran
Article info
Article history:
Received 27 June 2013
Received in revised form 18 October 2013
Accepted 11 December 2013
Available online 21 January 2014
Keywords:
Optimization
Optimization techniques
Heuristic algorithm
Metaheuristics
Constrained optimization
GWO
Abstract
This work proposes a new meta-heuristic called Grey Wolf Optimizer (GWO) inspired by grey wolves (Canis lupus). The GWO algorithm mimics the leadership hierarchy and hunting mechanism of grey wolves in nature. Four types of grey wolves (alpha, beta, delta, and omega) are employed for simulating the leadership hierarchy. In addition, the three main steps of hunting (searching for prey, encircling prey, and attacking prey) are implemented. The algorithm is then benchmarked on 29 well-known test functions, and the results are verified by a comparative study with Particle Swarm Optimization (PSO), Gravitational Search Algorithm (GSA), Differential Evolution (DE), Evolutionary Programming (EP), and Evolution Strategy (ES). The results show that the GWO algorithm is able to provide very competitive results compared to these well-known meta-heuristics. The paper also considers solving three classical engineering design problems (tension/compression spring, welded beam, and pressure vessel designs) and presents a real application of the proposed method in the field of optical engineering. The results of the classical engineering design problems and real application prove that the proposed algorithm is applicable to challenging problems with unknown search spaces.
© 2013 Elsevier Ltd. All rights reserved.
1. Introduction
Meta-heuristic optimization techniques have become very popular over the last two decades. Surprisingly, some of them such as
Genetic Algorithm (GA) [1], Ant Colony Optimization (ACO) [2],
and Particle Swarm Optimization (PSO) [3] are fairly well-known
among not only computer scientists but also scientists from different fields. In addition to the huge number of theoretical works, such optimization techniques have been applied in various fields of study. There is a question here as to why meta-heuristics have
of study. There is a question here as to why meta-heuristics have
become remarkably common. The answer to this question can be
summarized into four main reasons: simplicity, flexibility, derivation-free mechanism, and local optima avoidance.
First, meta-heuristics are fairly simple. They have been mostly
inspired by very simple concepts. The inspirations are typically related to physical phenomena, animals' behaviors, or evolutionary concepts. The simplicity allows computer scientists to simulate different natural concepts, propose new meta-heuristics, hybridize two or more meta-heuristics, or improve the current meta-heuristics. Moreover, the simplicity assists other scientists to learn meta-heuristics quickly and apply them to their problems.
Second, flexibility refers to the applicability of meta-heuristics to different problems without any special changes in the structure of the algorithm. Meta-heuristics are readily applicable to different problems since they mostly assume problems as black boxes.
Corresponding author. Tel.: +61 434555738.
E-mail addresses: seyedali.mirjalili@griffithuni.edu.au (S. Mirjalili), mohammad.smm@gmail.com (S.M. Mirjalili), a.lewis@griffith.edu.au (A. Lewis).
http://dx.doi.org/10.1016/j.advengsoft.2013.12.007
2. Literature review
Meta-heuristics may be classified into three main classes: evolutionary, physics-based, and swarm intelligence (SI) algorithms. Evolutionary algorithms (EAs) are usually inspired by the concepts of evolution in nature.
3. Grey Wolf Optimizer (GWO)
3.1. Inspiration
Grey wolf (Canis lupus) belongs to the Canidae family. Grey wolves are considered apex predators, meaning that they are at the top of the food chain. Grey wolves mostly prefer to live in a pack. The group size is 5-12 on average. Of particular interest is that they have a very strict social dominance hierarchy, as shown in Fig. 1. The leaders are a male and a female, called alphas. The alpha is mostly responsible for making decisions about hunting, sleeping place, time to wake, and so on. The alpha's decisions are dictated to the pack. However, some kind of democratic behavior has also been observed, in which an alpha follows the other wolves in the pack.
3.2.2. Encircling prey
Grey wolves encircle prey during the hunt. The encircling behavior can be modeled by the following equations:

\vec{D} = |\vec{C} \cdot \vec{X}_p(t) - \vec{X}(t)|    (3.1)

\vec{X}(t+1) = \vec{X}_p(t) - \vec{A} \cdot \vec{D}    (3.2)

where t indicates the current iteration, \vec{A} and \vec{C} are coefficient vectors, \vec{X}_p is the position vector of the prey, and \vec{X} indicates the position vector of a grey wolf. The vectors \vec{A} and \vec{C} are calculated as follows:

\vec{A} = 2\vec{a} \cdot \vec{r}_1 - \vec{a}    (3.3)

\vec{C} = 2 \cdot \vec{r}_2    (3.4)

where the components of \vec{a} are linearly decreased from 2 to 0 over the course of iterations and \vec{r}_1, \vec{r}_2 are random vectors in [0, 1].
A grey wolf in position (X, Y) can update its position according to the position of the prey (X*, Y*); different places around the prey can be reached by adjusting the A and C vectors, as the points illustrated in Fig. 3 show. So a grey wolf can update its position inside the space around the prey in any random location by using Eqs. (3.1) and (3.2).
The same concept can be extended to a search space with n
dimensions, and the grey wolves will move in hyper-cubes (or hyper-spheres) around the best solution obtained so far.
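As a concrete illustration, the encircling update of Eqs. (3.1)-(3.4) can be sketched in a few lines of Python. This is a minimal sketch under our own naming (encircle, wolf_pos, prey_pos); the paper itself only gives the equations:

    import numpy as np

    def encircle(wolf_pos, prey_pos, a, rng):
        r1 = rng.random(wolf_pos.shape)
        r2 = rng.random(wolf_pos.shape)
        A = 2 * a * r1 - a                   # Eq. (3.3): components lie in [-a, a]
        C = 2 * r2                           # Eq. (3.4): components lie in [0, 2]
        D = np.abs(C * prey_pos - wolf_pos)  # Eq. (3.1): distance to the prey
        return prey_pos - A * D              # Eq. (3.2): next position

    rng = np.random.default_rng(0)
    print(encircle(np.array([0.5, -1.0]), np.zeros(2), a=1.2, rng=rng))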
3.2.3. Hunting
Grey wolves have the ability to recognize the location of prey and encircle them. The hunt is usually guided by the alpha. The beta and delta might also participate in hunting occasionally. However, in an abstract search space we have no idea about the location of the optimum (prey). In order to mathematically simulate the hunting behavior of grey wolves, we suppose that the alpha (best candidate solution), beta, and delta have better knowledge about the potential location of prey. Therefore, we save the first three best solutions obtained so far and oblige the other search agents (including the omegas) to update their positions according to the position of the best search agents. The following formulas are proposed in this regard.
\vec{D}_\alpha = |\vec{C}_1 \cdot \vec{X}_\alpha - \vec{X}|, \quad \vec{D}_\beta = |\vec{C}_2 \cdot \vec{X}_\beta - \vec{X}|, \quad \vec{D}_\delta = |\vec{C}_3 \cdot \vec{X}_\delta - \vec{X}|    (3.5)

\vec{X}_1 = \vec{X}_\alpha - \vec{A}_1 \cdot \vec{D}_\alpha, \quad \vec{X}_2 = \vec{X}_\beta - \vec{A}_2 \cdot \vec{D}_\beta, \quad \vec{X}_3 = \vec{X}_\delta - \vec{A}_3 \cdot \vec{D}_\delta    (3.6)

\vec{X}(t+1) = \frac{\vec{X}_1 + \vec{X}_2 + \vec{X}_3}{3}    (3.7)
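Under the same assumptions as the sketch above, Eqs. (3.5)-(3.7) translate into the following update, in which a search agent moves to the mean of three points estimated from the alpha, beta, and delta positions (hunt_step and its arguments are our own naming):

    import numpy as np

    def hunt_step(x, x_alpha, x_beta, x_delta, a, rng):
        new_pos = np.zeros_like(x)
        for leader in (x_alpha, x_beta, x_delta):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            A, C = 2 * a * r1 - a, 2 * r2  # Eqs. (3.3) and (3.4)
            D = np.abs(C * leader - x)     # Eq. (3.5)
            new_pos += leader - A * D      # Eq. (3.6)
        return new_pos / 3.0               # Eq. (3.7): mean of X1, X2, X3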
Fig. 2. Hunting behavior of grey wolves: (A) chasing, approaching, and tracking prey; (B-D) pursuing, harassing, and encircling; (E) stationary situation and attack [47].
Fig. 3. 2D and 3D position vectors and their possible next locations.
[Figure: position updating in GWO. The move of a search agent is determined by D_alpha, D_beta, and D_delta with coefficient pairs (a1, C1), (a2, C2), (a3, C3), within a region of radius R around the estimated position of the prey.]
When random values of A are in [-1, 1], the next position of a search agent can be anywhere between its current position and the position of the prey. Fig. 5(a) shows that |A| < 1 forces the wolves to attack towards the prey.
With the operators proposed so far, the GWO algorithm allows
its search agents to update their position based on the location of
the alpha, beta, and delta; and attack towards the prey. However,
the GWO algorithm is prone to stagnation in local solutions with
these operators. It is true that the encircling mechanism proposed
shows exploration to some extent, but GWO needs more operators
to emphasize exploration.
Fig. 5. Attacking prey versus searching for prey: (a) |A| < 1 forces convergence towards the prey, (b) |A| > 1 forces divergence from the prey.
The random values of C also give GWO a more random behavior throughout optimization, favoring exploration and local optima avoidance. It is worth mentioning here that C is not linearly decreased, in contrast to A. We deliberately require C to provide random values at all times in order to emphasize exploration not only during initial iterations but also during final iterations. This component is very helpful in case of local optima stagnation, especially in the final iterations.
The C vector can also be considered as the effect of obstacles to approaching prey in nature. Generally speaking, obstacles in nature appear in the hunting paths of wolves and in fact prevent them from quickly and conveniently approaching prey. This is exactly what the vector C does. Depending on the position of a wolf, it can randomly give the prey a weight and make it harder and farther for wolves to reach, or vice versa.
To sum up, the search process starts with creating a random population of grey wolves (candidate solutions) in the GWO algorithm. Over the course of iterations, alpha, beta, and delta wolves estimate the probable position of the prey. Each candidate solution updates its distance from the prey. The parameter a is decreased from 2 to 0 in order to emphasize exploration and exploitation, respectively. Candidate solutions tend to diverge from the prey when |A| > 1 and converge towards the prey when |A| < 1. Finally, the GWO algorithm is terminated by the satisfaction of an end criterion.
The pseudo code of the GWO algorithm is presented in Fig. 6.
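Since Fig. 6 is not reproduced in this extraction, the sketch below gives our reading of the loop it describes: random initialization, ranking into alpha/beta/delta, linear decrease of a, and the position update of Eqs. (3.5)-(3.7). All identifiers and default parameter values are our own choices:

    import numpy as np

    def gwo(f, dim, lb, ub, n_wolves=30, max_iter=500, seed=0):
        rng = np.random.default_rng(seed)
        wolves = rng.uniform(lb, ub, size=(n_wolves, dim))
        for t in range(max_iter):
            fitness = np.array([f(w) for w in wolves])
            # fancy indexing copies the three best wolves (minimization)
            x_alpha, x_beta, x_delta = wolves[np.argsort(fitness)[:3]]
            a = 2 - t * (2.0 / max_iter)  # linearly decreased from 2 to 0
            for i in range(n_wolves):
                new_pos = np.zeros(dim)
                for leader in (x_alpha, x_beta, x_delta):
                    r1, r2 = rng.random(dim), rng.random(dim)
                    A, C = 2 * a * r1 - a, 2 * r2       # Eqs. (3.3), (3.4)
                    D = np.abs(C * leader - wolves[i])  # Eq. (3.5)
                    new_pos += (leader - A * D) / 3.0   # Eqs. (3.6), (3.7)
                wolves[i] = np.clip(new_pos, lb, ub)
        fitness = np.array([f(w) for w in wolves])
        return wolves[np.argmin(fitness)], fitness.min()

    best_x, best_f = gwo(lambda x: np.sum(x**2), dim=30, lb=-100, ub=100)
    print(best_f)  # approaches 0 on the sphere function (F1)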
To see how GWO is theoretically able to solve optimization problems, some points may be noted:
- The proposed social hierarchy assists GWO to save the best solutions obtained so far over the course of iterations.
- The proposed encircling mechanism defines a circle-shaped neighborhood around the solutions which can be extended to higher dimensions as a hyper-sphere.
- The random parameters A and C assist candidate solutions to have hyper-spheres with different random radii.
- The proposed hunting method allows candidate solutions to locate the probable position of the prey.
- Exploration and exploitation are guaranteed by the adaptive values of a and A.
- The adaptive values of parameters a and A allow GWO to smoothly transition between exploration and exploitation.
- With decreasing A, half of the iterations are devoted to exploration (|A| >= 1) and the other half are dedicated to exploitation (|A| < 1).
- The GWO has only two main parameters to be adjusted (a and C).
- There are possibilities to integrate mutation and other evolutionary operators to mimic the whole life cycle of grey wolves.
Table 1
Unimodal benchmark functions.

Function | Dim | Range | f_min
f_1(x) = \sum_{i=1}^{n} x_i^2 | 30 | [-100, 100] | 0
f_2(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i| | 30 | [-10, 10] | 0
f_3(x) = \sum_{i=1}^{n} ( \sum_{j=1}^{i} x_j )^2 | 30 | [-100, 100] | 0
f_4(x) = \max_i \{ |x_i|, 1 \le i \le n \} | 30 | [-100, 100] | 0
f_5(x) = \sum_{i=1}^{n-1} [ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 ] | 30 | [-30, 30] | 0
f_6(x) = \sum_{i=1}^{n} ([x_i + 0.5])^2 | 30 | [-100, 100] | 0
f_7(x) = \sum_{i=1}^{n} i x_i^4 + random[0, 1) | 30 | [-1.28, 1.28] | 0
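As a cross-check of the reconstructed formulas, two of the unimodal benchmarks in Table 1 are written out below (the function names are ours):

    import numpy as np

    def f1_sphere(x):      # f1 in Table 1
        return np.sum(x**2)

    def f5_rosenbrock(x):  # f5 in Table 1
        return np.sum(100 * (x[1:] - x[:-1]**2)**2 + (x[:-1] - 1)**2)

    print(f1_sphere(np.zeros(30)), f5_rosenbrock(np.ones(30)))  # both optima: 0.0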
Table 2
Multimodal benchmark functions.

Function | Dim | Range | f_min
F_8(x) = \sum_{i=1}^{n} -x_i \sin(\sqrt{|x_i|}) | 30 | [-500, 500] | -418.9829 x 5
F_9(x) = \sum_{i=1}^{n} [ x_i^2 - 10\cos(2\pi x_i) + 10 ] | 30 | [-5.12, 5.12] | 0
F_10(x) = -20\exp(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}) - \exp(\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)) + 20 + e | 30 | [-32, 32] | 0
F_11(x) = \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n}\cos(\frac{x_i}{\sqrt{i}}) + 1 | 30 | [-600, 600] | 0
F_12(x) = \frac{\pi}{n}\{ 10\sin(\pi y_1) + \sum_{i=1}^{n-1}(y_i - 1)^2[1 + 10\sin^2(\pi y_{i+1})] + (y_n - 1)^2 \} + \sum_{i=1}^{n} u(x_i, 10, 100, 4), with y_i = 1 + \frac{x_i + 1}{4} | 30 | [-50, 50] | 0
F_13(x) = 0.1\{ \sin^2(3\pi x_1) + \sum_{i=1}^{n}(x_i - 1)^2[1 + \sin^2(3\pi x_i + 1)] + (x_n - 1)^2[1 + \sin^2(2\pi x_n)] \} + \sum_{i=1}^{n} u(x_i, 5, 100, 4) | 30 | [-50, 50] | 0
F_14(x) = -\sum_{i=1}^{n} \sin(x_i)(\sin(\frac{i x_i^2}{\pi}))^{2m}, m = 10 | 30 | [0, \pi] | -4.687
F_15(x) = [ e^{-\sum_{i=1}^{n}(x_i/\beta)^{2m}} - 2e^{-\sum_{i=1}^{n} x_i^2} ] \prod_{i=1}^{n}\cos^2 x_i, m = 5 | 30 | [-20, 20] | -1
F_16(x) = \{ [\sum_{i=1}^{n}\sin^2(x_i)] - \exp(-\sum_{i=1}^{n} x_i^2) \} \exp(-\sum_{i=1}^{n}\sin^2\sqrt{|x_i|}) | 30 | [-10, 10] | -1

where the penalty function is u(x_i, a, k, m) = k(x_i - a)^m if x_i > a; 0 if -a < x_i < a; k(-x_i - a)^m if x_i < -a.
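Similarly, two of the multimodal benchmarks in Table 2 can be written out as a cross-check of the reconstructed formulas (the function names are ours):

    import numpy as np

    def f9_rastrigin(x):   # F9 in Table 2
        return np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10)

    def f10_ackley(x):     # F10 in Table 2
        n = x.size
        return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
                - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20 + np.e)

    x = np.zeros(30)
    print(f9_rastrigin(x), f10_ackley(x))  # both global minima: ~0.0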
Table 3
Fixed-dimension multimodal benchmark functions.

Function | Dim | Range | f_min
F_14(x) = ( \frac{1}{500} + \sum_{j=1}^{25} \frac{1}{j + \sum_{i=1}^{2}(x_i - a_{ij})^6} )^{-1} | 2 | [-65, 65] | 1
F_15(x) = \sum_{i=1}^{11} [ a_i - \frac{x_1(b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4} ]^2 | 4 | [-5, 5] | 0.00030
F_16 | 2 | [-5, 5] | -1.0316
F_17 | 2 | [-5, 5] | 0.398
F_18 | 2 | [-2, 2] | 3
F_19 | 3 | [1, 3] | -3.86
F_20 | 6 | [0, 1] | -3.32
F_21 | 4 | [0, 10] | -10.1532
F_22 | 4 | [0, 10] | -10.4028
F_23 | 4 | [0, 10] | -10.5363
[Figure: 2-D versions of the benchmark functions F1-F7.]

Table 4
Composite benchmark functions.

Function | Dim | Range
F24-F29 (six composite functions built from the classical functions above) | 10 | [-5, 5]
To further test the performance of the proposed algorithm, three classical engineering design problems and a real problem in optical engineering are employed in the following sections. The GWO algorithm is also compared with well-known techniques to confirm its results.
[Figure: 2-D versions of the benchmark functions F8-F18 and the composite functions F24-F29.]
Table 5
Results of unimodal benchmark functions.

F  | GWO Ave  | GWO Std  | PSO Ave  | PSO Std  | GSA Ave  | GSA Std  | DE Ave  | DE Std  | FEP Ave | FEP Std
F1 | 6.59E-28 | 6.34E-05 | 0.000136 | 0.000202 | 2.53E-16 | 9.67E-17 | 8.2E-14 | 5.9E-14 | 0.00057 | 0.00013
F2 | 7.18E-17 | 0.029014 | 0.042144 | 0.045421 | 0.055655 | 0.194074 | 1.5E-09 | 9.9E-10 | 0.0081  | 0.00077
F3 | 3.29E-06 | 79.14958 | 70.12562 | 22.11924 | 896.5347 | 318.9559 | 6.8E-11 | 7.4E-11 | 0.016   | 0.014
F4 | 5.61E-07 | 1.315088 | 1.086481 | 0.317039 | 7.35487  | 1.741452 | 0       | 0       | 0.3     | 0.5
F5 | 26.81258 | 69.90499 | 96.71832 | 60.11559 | 67.54309 | 62.22534 | 0       | 0       | 5.06    | 5.87
F6 | 0.816579 | 0.000126 | 0.000102 | 8.28E-05 | 2.5E-16  | 1.74E-16 | 0       | 0       | 0       | 0
F7 | 0.002213 | 0.100286 | 0.122854 | 0.044957 | 0.089441 | 0.04339  | 0.00463 | 0.0012  | 0.1415  | 0.3522
Because search agents update their positions with respect to the alpha, beta, and delta locations, there is no direct relation between the search agents and the fitness function. So the simplest constraint handling method, penalty functions, where search agents are assigned big objective function values if they violate any of the constraints, can be employed effectively to handle constraints in GWO. In this case, if the alpha, beta, or delta violate constraints, they are automatically replaced with a new search agent in the next iteration.
Table 6
Results of multimodal benchmark functions.

F   | GWO Ave  | GWO Std  | PSO Ave  | PSO Std  | GSA Ave  | GSA Std  | DE Ave   | DE Std  | FEP Ave  | FEP Std
F8  | -6123.1  | 4087.44  | -4841.29 | 1152.814 | -2821.07 | 493.0375 | -11080.1 | 574.7   | -12554.5 | 52.6
F9  | 0.310521 | 47.35612 | 46.70423 | 11.62938 | 25.96841 | 7.470068 | 69.2     | 38.8    | 0.046    | 0.012
F10 | 1.06E-13 | 0.077835 | 0.276015 | 0.50901  | 0.062087 | 0.23628  | 9.7E-08  | 4.2E-08 | 0.018    | 0.0021
F11 | 0.004485 | 0.006659 | 0.009215 | 0.007724 | 27.70154 | 5.040343 | 0        | 0       | 0.016    | 0.022
F12 | 0.053438 | 0.020734 | 0.006917 | 0.026301 | 1.799617 | 0.95114  | 7.9E-15  | 8E-15   | 9.2E-06  | 3.6E-06
F13 | 0.654464 | 0.004474 | 0.006675 | 0.008907 | 8.899084 | 7.126241 | 5.1E-14  | 4.8E-14 | 0.00016  | 0.000073
Table 7
Results of fixed-dimension multimodal benchmark functions.

F   | GWO Ave  | GWO Std  | PSO Ave  | PSO Std  | GSA Ave  | GSA Std  | DE Ave   | DE Std    | FEP Ave | FEP Std
F14 | 4.042493 | 4.252799 | 3.627168 | 2.560828 | 5.859838 | 3.831299 | 0.998004 | 3.3E-16   | 1.22    | 0.56
F15 | 0.000337 | 0.000625 | 0.000577 | 0.000222 | 0.003673 | 0.001647 | 4.5E-14  | 0.00033   | 0.0005  | 0.00032
F16 | -1.03163 | -1.03163 | -1.03163 | 6.25E-16 | -1.03163 | 4.88E-16 | -1.03163 | 3.1E-13   | -1.03   | 4.9E-07
F17 | 0.397889 | 0.397887 | 0.397887 | 0        | 0.397887 | 0        | 0.397887 | 9.9E-09   | 0.398   | 1.5E-07
F18 | 3.000028 | 3        | 3        | 1.33E-15 | 3        | 4.17E-15 | 3        | 2E-15     | 3.02    | 0.11
F19 | -3.86263 | -3.86278 | -3.86278 | 2.58E-15 | -3.86278 | 2.29E-15 | N/A      | N/A       | -3.86   | 0.000014
F20 | -3.28654 | -3.25056 | -3.26634 | 0.060516 | -3.31778 | 0.023081 | N/A      | N/A       | -3.27   | 0.059
F21 | -10.1514 | -9.14015 | -6.8651  | 3.019644 | -5.95512 | 3.737079 | -10.1532 | 0.0000025 | -5.52   | 1.59
F22 | -10.4015 | -8.58441 | -8.45653 | 3.087094 | -9.68447 | 2.014088 | -10.4029 | 3.9E-07   | -5.53   | 2.12
F23 | -10.5343 | -8.55899 | -9.95291 | 1.782786 | -10.5364 | 2.6E-15  | -10.5364 | 1.9E-07   | -6.57   | 3.14
Any kind of penalty function can readily be employed in order to penalize search agents based on their level of violation. In this case, if the penalty makes the alpha, beta, or delta less fit than any other wolves, it is automatically replaced with a new search agent in the next iteration. We used simple, scalar penalty functions for the rest of the problems except the tension/compression spring design problem, which uses a more complex penalty function.
Consider \vec{x} = [x_1\ x_2\ x_3] = [d\ D\ N],

Minimize f(\vec{x}) = (x_3 + 2) x_2 x_1^2,

Subject to

g_1(\vec{x}) = 1 - \frac{x_2^3 x_3}{71785 x_1^4} \le 0,

g_2(\vec{x}) = \frac{4x_2^2 - x_1 x_2}{12566(x_2 x_1^3 - x_1^4)} + \frac{1}{5108 x_1^2} - 1 \le 0,

g_3(\vec{x}) = 1 - \frac{140.45 x_1}{x_2^2 x_3} \le 0,

g_4(\vec{x}) = \frac{x_1 + x_2}{1.5} - 1 \le 0.    (5.1)
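As an illustration of the scalar penalty approach described above, the sketch below applies a quadratic penalty to the spring problem of Eq. (5.1). The penalty weight (1e5) and all names are our own choices, not the paper's:

    import numpy as np

    def spring_cost(x):         # f(x) in Eq. (5.1)
        d, D, N = x
        return (N + 2) * D * d**2

    def spring_constraints(x):  # g1..g4 in Eq. (5.1), all required <= 0
        d, D, N = x
        return np.array([
            1 - (D**3 * N) / (71785 * d**4),
            (4 * D**2 - d * D) / (12566 * (D * d**3 - d**4))
                + 1 / (5108 * d**2) - 1,
            1 - (140.45 * d) / (D**2 * N),
            (d + D) / 1.5 - 1,
        ])

    def penalized(x, weight=1e5):
        g = spring_constraints(x)
        return spring_cost(x) + weight * np.sum(np.maximum(g, 0.0)**2)

    # The GWO design from Table 9 is feasible, so the penalty is negligible:
    print(penalized(np.array([0.05169, 0.356737, 11.28885])))  # ~0.012666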
Consider \vec{x} = [x_1\ x_2\ x_3\ x_4] = [h\ l\ t\ b],

Minimize f(\vec{x}) = 1.10471 x_1^2 x_2 + 0.04811 x_3 x_4 (14.0 + x_2),

Subject to

g_1(\vec{x}) = \tau(\vec{x}) - \tau_{max} \le 0,

g_2(\vec{x}) = \sigma(\vec{x}) - \sigma_{max} \le 0,

g_3(\vec{x}) = \delta(\vec{x}) - \delta_{max} \le 0,

g_4(\vec{x}) = x_1 - x_4 \le 0,

g_5(\vec{x}) = P - P_c(\vec{x}) \le 0,

g_6(\vec{x}) = 0.125 - x_1 \le 0,

g_7(\vec{x}) = 1.10471 x_1^2 + 0.04811 x_3 x_4 (14.0 + x_2) - 5.0 \le 0.    (5.2)
Table 8
Results of composite benchmark functions.

F   | GWO Ave  | GWO Std  | PSO Ave | PSO Std | GSA Ave  | GSA Std  | DE Ave   | DE Std   | CMA-ES Ave | CMA-ES Std
F24 | 43.83544 | 69.86146 | 100     | 81.65   | 6.63E-17 | 2.78E-17 | 6.75E-02 | 1.11E-01 | 100        | 188.56
F25 | 91.80086 | 95.5518  | 155.91  | 13.176  | 200.6202 | 67.72087 | 28.759   | 8.6277   | 161.99     | 151
F26 | 61.43776 | 68.68816 | 172.03  | 32.769  | 180      | 91.89366 | 144.41   | 19.401   | 214.06     | 74.181
F27 | 123.1235 | 163.9937 | 314.3   | 20.066  | 170      | 82.32726 | 324.86   | 14.784   | 616.4      | 671.92
F28 | 102.1429 | 81.25536 | 83.45   | 101.11  | 200      | 47.14045 | 10.789   | 2.604    | 358.3      | 168.26
F29 | 43.14261 | 84.48573 | 861.42  | 125.81  | 142.0906 | 88.87141 | 490.94   | 39.461   | 900.26     | 8.32E+02
where

\tau(\vec{x}) = \sqrt{(\tau')^2 + 2\tau'\tau''\frac{x_2}{2R} + (\tau'')^2}, \quad \tau' = \frac{P}{\sqrt{2} x_1 x_2}, \quad \tau'' = \frac{MR}{J}, \quad M = P(L + \frac{x_2}{2}),

R = \sqrt{\frac{x_2^2}{4} + (\frac{x_1 + x_3}{2})^2}, \quad J = 2\{\sqrt{2} x_1 x_2 [\frac{x_2^2}{4} + (\frac{x_1 + x_3}{2})^2]\},

\sigma(\vec{x}) = \frac{6PL}{x_4 x_3^2}, \quad \delta(\vec{x}) = \frac{6PL^3}{E x_3^2 x_4},

P_c(\vec{x}) = \frac{4.013 E \sqrt{x_3^2 x_4^6 / 36}}{L^2} (1 - \frac{x_3}{2L}\sqrt{\frac{E}{4G}}),

P = 6000 lb, L = 14 in., \delta_{max} = 0.25 in., E = 30 \times 10^6 psi, G = 12 \times 10^6 psi.
Coello [64] and Deb [65,66] employed GA, whereas Lee and Geem [67] used HS to solve this problem. Richardson's random method, the Simplex method, the Davidon-Fletcher-Powell method, and Griffith and Stewart's successive linear approximation are the mathematical approaches that have been adopted by Ragsdell and Philips [68] for this problem. The comparison results are provided in Table 10. The results show that GWO finds a design with the minimum cost compared to the others.
The objective of this problem is to minimize the total cost (material, forming, and welding) of a cylindrical vessel, as shown in Fig. 14. Both ends of the vessel are capped, and the head has a hemi-spherical shape. There are four variables in this problem: the thickness of the shell (Ts), the thickness of the head (Th), the inner radius (R), and the length of the cylindrical section (L). The variable ranges are 0 \le x_1 \le 99, 0 \le x_2 \le 99, 10 \le x_3 \le 200, and 10 \le x_4 \le 200.
Fig. 11. Search history and trajectory of the first particle in the first dimension.
Fig. 12. Tension/compression spring: (a) schematic, (b) stress heatmap, (c) displacement heatmap.
In summary, the results on the three classical engineering problems demonstrate that GWO shows high performance in solving
challenging problems. This is again due to the operators that are
designed to allow GWO to avoid local optima successfully and converge towards the optimum quickly. The next section probes the
performance of the GWO algorithm in solving a recent real problem in the field of optical engineering.
Table 9
Comparison of results for tension/compression spring design problem.

Algorithm | d | D | N | Optimum weight
GWO | 0.05169 | 0.356737 | 11.28885 | 0.012666
GSA | 0.050276 | 0.323680 | 13.525410 | 0.0127022
PSO (He and Wang) | 0.051728 | 0.357644 | 11.244543 | 0.0126747
ES (Coello and Montes) | 0.051989 | 0.363965 | 10.890522 | 0.0126810
GA (Coello) | 0.051480 | 0.351661 | 11.632201 | 0.0127048
HS (Mahdavi et al.) | 0.051154 | 0.349871 | 12.076432 | 0.0126706
DE (Huang et al.) | 0.051609 | 0.354714 | 11.410831 | 0.0126702
Mathematical optimization (Belegundu) | 0.053396 | 0.399180 | 9.1854000 | 0.0127303
Constraint correction (Arora) | 0.050000 | 0.315900 | 14.250000 | 0.0128334

Table 10
Comparison results of the welded beam design problem.

Algorithm | h | l | t | b | Optimum cost
GWO | 0.205676 | 3.478377 | 9.03681 | 0.205778 | 1.72624
GSA | 0.182129 | 3.856979 | 10.00000 | 0.202376 | 1.879952
GA (Coello) | N/A | N/A | N/A | N/A | 1.8245
GA (Deb) | N/A | N/A | N/A | N/A | 2.3800
GA (Deb) | 0.2489 | 6.1730 | 8.1789 | 0.2533 | 2.4331
HS (Lee and Geem) | 0.2442 | 6.2231 | 8.2915 | 0.2443 | 2.3807
Random | 0.4575 | 4.7313 | 5.0853 | 0.6600 | 4.1185
Simplex | 0.2792 | 5.6256 | 7.7512 | 0.2796 | 2.5307
David | 0.2434 | 6.2552 | 8.2915 | 0.2444 | 2.3841
APPROX | 0.2444 | 6.2189 | 8.2915 | 0.2444 | 2.3815
DBP = \Delta t \times \Delta f    (6.1)

NDBP = \bar{n}_g \frac{\Delta\omega}{\omega_0}    (6.2)

n_g = \frac{c}{v_g} = c \frac{dk}{d\omega}    (6.3)

\bar{n}_g = \frac{1}{\Delta\omega} \int_{\omega_L}^{\omega_H} n_g(\omega)\, d\omega    (6.4)
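Eqs. (6.2) and (6.4) can be checked numerically. The sketch below averages a synthetic group-index curve over a band and forms the NDBP; the curve, the +/-10% band rule, and all names are illustrative assumptions, not the paper's data:

    import numpy as np

    omega = np.linspace(0.9, 1.1, 201)  # normalized angular frequency samples
    ng = 20 + 2 * np.sin(10 * omega)    # stand-in group-index curve, n_g0 ~ 20
    omega_0 = 1.0

    band = (ng > 0.9 * 20) & (ng < 1.1 * 20)  # samples within +/-10% of n_g0
    w, g = omega[band], ng[band]
    d_omega = w[-1] - w[0]                    # bandwidth Delta-omega
    # Trapezoidal rule for Eq. (6.4), then Eq. (6.2):
    ng_bar = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(w)) / d_omega
    ndbp = ng_bar * d_omega / omega_0
    print(ng_bar, ndbp)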
Fig. 13. Structure of welded beam design: (a) schematic, (b) stress heatmap, (c) displacement heatmap.
Fig. 14. Pressure vessel: (a) schematic, (b) stress heatmap, (c) displacement heatmap.
Table 11
Comparison results for pressure vessel design problem.

Algorithm | Ts | Th | R | L | Optimum cost
GWO | 0.812500 | 0.434500 | 42.089181 | 176.758731 | 6051.5639
GSA | 1.125000 | 0.625000 | 55.9886598 | 84.4542025 | 8538.8359
PSO (He and Wang) | 0.812500 | 0.437500 | 42.091266 | 176.746500 | 6061.0777
GA (Coello) | 0.812500 | 0.434500 | 40.323900 | 200.000000 | 6288.7445
GA (Coello and Montes) | 0.812500 | 0.437500 | 42.097398 | 176.654050 | 6059.9463
GA (Deb and Gene) | 0.937500 | 0.500000 | 48.329000 | 112.679000 | 6410.3811
ES (Montes and Coello) | 0.812500 | 0.437500 | 42.098087 | 176.640518 | 6059.7456
DE (Huang et al.) | 0.812500 | 0.437500 | 42.098411 | 176.637690 | 6059.7340
ACO (Kaveh and Talataheri) | 0.812500 | 0.437500 | 42.103624 | 176.572656 | 6059.0888
Lagrangian multiplier (Kannan) | 1.125000 | 0.625000 | 58.291000 | 43.6900000 | 7198.0428
Branch-bound (Sandgren) | 1.125000 | 0.625000 | 47.700000 | 117.701000 | 8129.1036

The BSPCW optimization problem over the super cell is formulated as:

Consider: \vec{x} = [x_1\ x_2\ x_3\ x_4\ x_5\ x_6\ x_7\ x_8] = [\frac{R_1}{a}\ \frac{R_2}{a}\ \frac{R_3}{a}\ \frac{R_4}{a}\ \frac{R_5}{a}\ \frac{l}{a}\ \frac{w_h}{a}\ \frac{w_l}{a}]

Maximize: f(\vec{x}) = NDBP = \frac{\bar{n}_g \Delta\omega}{\omega_0}    (6.5)

where \omega_L = \omega(k_n^L)\,|_{n_g = 0.9 n_{g0}}, k_n = 2k\pi/a, \Delta\omega = \omega_H - \omega_L, and the band is centered on \omega_0 (\lambda_0 = 1550 nm).
Fig. 15. BSPCW structure with super cell, n_background = 3.48 and n_filled = 1.6.
Table 12
Comparison of GWO results with Wu et al. [81].

Parameter | Wu et al. [81] | GWO
R1 | - | 0.33235a
R2 | - | 0.24952a
R3 | - | 0.26837a
R4 | - | 0.29498a
R5 | - | 0.34992a
l | - | 0.7437a
Wh | - | 0.2014a
Wl | - | 0.60073a
a (nm) | 430 | 343
ng | 23 | 19.6
Delta-lambda (nm) | 17.6 | 33.9
Order of magnitude of beta_2 (a/2\pi c^2) | 10^3 | 10^3
NDBP | 0.26 | 0.43
[Fig. 16(a) plots normalized frequency versus wavevector (ka/2pi), marking the light line of SiO2 and the guided mode; Fig. 16(b) plots the group index over the guided band.]
Fig. 16. (a) Photonic band structure of the optimized BSPCW structure; (b) the group index (ng) of the optimized BSPCW structure.
[Optimized BSPCW structure: R1 = 114 nm, R2 = 84 nm, R3 = 92 nm, R4 = 101 nm, R5 = 120 nm, l = 255 nm, wl = 69 nm, wh = 206 nm.]
7. Conclusion
This work proposed a novel SI optimization algorithm inspired by grey wolves. The proposed method mimicked the social hierarchy and hunting behavior of grey wolves. Twenty-nine test functions were employed in order to benchmark the performance of the proposed algorithm in terms of exploration, exploitation, local optima avoidance, and convergence. The results showed that GWO was able to provide highly competitive results compared to well-known heuristics such as PSO, GSA, DE, EP, and ES. First, the results on the unimodal functions showed the superior exploitation of the GWO algorithm. Second, the exploration ability of GWO was confirmed by the results on multimodal functions. Third, the results of the composite functions showed high local optima avoidance.
References
[1] Bonabeau E, Dorigo M, Theraulaz G. Swarm intelligence: from natural to artificial systems. OUP USA; 1999.
[2] Dorigo M, Birattari M, Stutzle T. Ant colony optimization. IEEE Comput Intell Mag 2006;1:28–39.
[3] Kennedy J, Eberhart R. Particle swarm optimization. In: Proceedings of the IEEE international conference on neural networks; 1995. p. 1942–1948.
[4] Wolpert DH, Macready WG. No free lunch theorems for optimization. IEEE Trans Evolut Comput 1997;1:67–82.
[5] Kirkpatrick S, Gelatt CD Jr., Vecchi MP. Optimization by simulated annealing. Science 1983;220:671–680.
[6] Beni G, Wang J. Swarm intelligence in cellular robotic systems. In: Robots and biological systems: towards a new bionics? Springer; 1993. p. 703–712.
[7] Basturk B, Karaboga D. An artificial bee colony (ABC) algorithm for numeric function optimization. In: IEEE swarm intelligence symposium; 2006. p. 12–14.
[8] Olorunda O, Engelbrecht AP. Measuring exploration/exploitation in particle swarms using swarm diversity. In: IEEE congress on evolutionary computation (CEC 2008); 2008. p. 1128–1134.
[9] Alba E, Dorronsoro B. The exploration/exploitation tradeoff in dynamic cellular genetic algorithms. IEEE Trans Evolut Comput 2005;9:126–142.
[10] Lin L, Gen M. Auto-tuning strategy for evolutionary algorithms: balancing between exploration and exploitation. Soft Comput 2009;13:157–168.
[11] Mirjalili S, Hashim SZM. A new hybrid PSOGSA algorithm for function optimization. In: International conference on computer and information application (ICCIA); 2010. p. 374–377.