A Proposed Genetic Algorithm Selection Method
Abstract
Genetic algorithms (GAs) are stochastic search methods that mimic natural biological evolution. Genetic
algorithms are broadly used in optimisation problems. They offer a good alternative in problem areas where the
number of constraints is too large for humans to evaluate efficiently.
This paper presents a new selection method for genetic algorithms. The new method is tested and compared with
the Geometric selection method. Simulation studies show remarkable performance of the proposed GA selection
method. The proposed selection method is simple to implement, and it has a notable ability to reduce the effect of
premature convergence compared to other methods.
The proposed GA selection method is expected to help in solving hard problems quickly, reliably, and accurately.
1. Introduction
Genetic algorithms were developed by John Holland at
the University of Michigan in the early 1970s [1].
Genetic algorithms are theoretically and empirically
proven to provide robust search in complex spaces
(Goldberg, 1989)[2].
Genetic algorithms are stochastic search methods that
mimic natural biological evolution. Genetic algorithms
operate on a population (a group of individuals) of
potential solutions applying the principle of survival of the
fittest to generate improved estimations to a solution. At
each generation, a new set of approximations is created by
the process of selecting individuals according to their
level of fitness and breeding them together using genetic
operators inspired by natural genetics. This process leads
to the evolution of better populations than the previous
populations [3].
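The generate-evaluate-select-breed cycle described above can be sketched as follows. This is a minimal illustration only; the population size, rates, and truncation-style survivor selection here are made-up values, not the configuration used in this paper:

```python
import random

def evolve(pop_size=50, n_genes=2, generations=100,
           fitness=lambda ind: -sum(x * x for x in ind)):
    """Minimal generational GA: evaluate, select, crossover, mutate.
    All parameters are illustrative; fitness defaults to the negated
    sphere function so that higher fitness means a better solution."""
    pop = [[random.uniform(-5.12, 5.12) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]          # survival of the fittest
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            w = random.random()                    # arithmetic crossover
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]
            if random.random() < 0.05:             # mutate 5% of offspring
                i = random.randrange(n_genes)
                child[i] = random.uniform(-5.12, 5.12)
            children.append(child)
        pop = children
    return max(pop, key=fitness)
```

Each generation replaces the population with offspring bred from the fitter half, so good gene combinations accumulate over time.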
The fitness function is the measure of the quality of an
individual. It should be designed to assess the
performance of an individual in the current population.
In selection, the individuals that will produce offspring are
chosen. The selection step is preceded by fitness
assignment, which is based on the objective value; this
fitness is then used for the actual selection process.
There are many types of selection methods used in
genetic algorithms, including:
Local selection
Truncation selection
Tournament selection
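As an illustration of one of these alternatives (not a method used in this paper's experiments), tournament selection can be sketched as follows; the tournament size k is an arbitrary choice:

```python
import random

def tournament_select(population, fitness, k=3):
    """Tournament selection: draw k individuals at random and
    return the fittest of them. Larger k increases selection
    pressure; k=3 is just an illustrative default."""
    contenders = random.sample(population, k)
    return max(contenders, key=fitness)
```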
2. Method
Genetic algorithms start with a population of
elements representing a number of potential solutions to
the problem at hand. By applying crossover and mutation
to some of the genotypes, an element or group of elements
is expected to emerge as the optimal (best) solution to the
problem. There are many types of crossover and
mutation methods used in genetic algorithms, and some of
them are specific to the problem at hand (Goldberg,
1989) [2]. In this research we implement the most basic
type of crossover (arithmetic crossover).
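Arithmetic crossover can be sketched as below: each child is a convex combination of the two parents, so offspring always lie on the segment joining them (and therefore inside any convex feasible region containing both parents). The random weighting shown is one common variant:

```python
import random

def arithmetic_crossover(parent_a, parent_b):
    """Arithmetic crossover: children are complementary convex
    combinations of the parents, weighted by a random w in [0, 1)."""
    w = random.random()
    child1 = [w * a + (1 - w) * b for a, b in zip(parent_a, parent_b)]
    child2 = [(1 - w) * a + w * b for a, b in zip(parent_a, parent_b)]
    return child1, child2
```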
Mutation is another recombination technique. It is
used to make sure the elements in a population do not
become homogeneous, so that diversity is maintained.
Mutation can help genetic algorithms escape local optima
in the search for the global optimum. The mutation rate
usually ranges no higher than 30% [7]. The MultiNonUniform
mutation method is used in our experiments at a rate of 5%.
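A multi-non-uniform mutation applies Michalewicz's non-uniform perturbation to every gene: perturbations shrink as the run progresses, shifting the search from exploration to fine-tuning. The sketch below follows that general scheme; the shape parameter b and the 50/50 direction choice are illustrative assumptions, not necessarily the exact settings of the toolbox routine used in the experiments:

```python
import random

def multi_non_uniform_mutation(ind, gen, max_gen, bounds, b=3.0):
    """Non-uniform mutation applied to all genes. The step size
    decays with the generation count, so early mutations explore
    widely and late mutations make small local adjustments."""
    def delta(y):
        # random fraction of the distance y, shrinking as gen -> max_gen
        r = random.random()
        return y * (1 - r ** ((1 - gen / max_gen) ** b))
    mutant = []
    for x, (lo, hi) in zip(ind, bounds):
        if random.random() < 0.5:
            mutant.append(x + delta(hi - x))   # perturb toward upper bound
        else:
            mutant.append(x - delta(x - lo))   # perturb toward lower bound
    return mutant
```

Because delta(y) never exceeds y, mutated genes always stay within their bounds.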
The selection function [8, 9, 10, 11] chooses parents for
the next generation based on their fitness. In our
experiments we take the traditional Geometric selection
method as the baseline for comparison.
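One common form of geometric (ranking) selection, similar to the normalized geometric ranking routine found in GA toolboxes such as GAOT, can be sketched as follows. The parameter q (the probability of selecting the best-ranked individual) is an illustrative value, not the setting used in this paper:

```python
import random

def geometric_select(population, fitness, q=0.08):
    """Normalized geometric ranking selection: individuals are ranked
    by fitness, and the r-th best is chosen with probability
    proportional to q * (1 - q)**(r - 1), normalized to sum to 1."""
    ranked = sorted(population, key=fitness, reverse=True)
    n = len(ranked)
    norm = q / (1 - (1 - q) ** n)        # normalization constant
    probs = [norm * (1 - q) ** r for r in range(n)]
    return random.choices(ranked, weights=probs, k=1)[0]
```

Because selection pressure depends only on rank, not on raw fitness values, this scheme is insensitive to fitness scaling.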
3. Benchmarking Functions

3.1 De Jong's function 1 (sphere model)
f(x) = Σ_{i=1..N} x_i²,  −5.12 ≤ x_i ≤ 5.12.
Global minimum: f(x) = 0 at x_i = 0, i = 1:N.

3.2 Rosenbrock's valley
f(x) = Σ_{i=1..N−1} [100·(x_{i+1} − x_i²)² + (1 − x_i)²],  −2.048 ≤ x_i ≤ 2.048.
Global minimum: f(x) = 0 at x_i = 1, i = 1:N.

3.3 De Jong's function 3 (step function)
f(x) = Σ_{i=1..N} integer(x_i),  −5.12 ≤ x_i ≤ 5.12.
Global minimum at the lower bound x_i = −5.12, i = 1:N.

3.4 Six-hump camelback function
f(x1, x2) = (4 − 2.1·x1² + x1⁴/3)·x1² + x1·x2 + (−4 + 4·x2²)·x2²,  −3 ≤ x1 ≤ 3, −2 ≤ x2 ≤ 2.
Global minimum: f(x1, x2) = −1.0316 at (x1, x2) = (−0.0898, 0.7126) and (0.0898, −0.7126).

3.5 De Jong's function 4 (quartic with noise)
f(x) = Σ_{i=1..N} i·x_i⁴ + Gauss(0, 1),  −1.28 ≤ x_i ≤ 1.28.
Global minimum (of the noise-free part): f(x) = 0 at x_i = 0, i = 1:N.

3.6 Easom's function
f(x1, x2) = −cos(x1)·cos(x2)·exp(−((x1 − π)² + (x2 − π)²)),  −100 ≤ x_i ≤ 100, i = 1:2.
Global minimum: f(x1, x2) = −1 at (x1, x2) = (π, π).

3.7 Branin's Rcos function
f(x1, x2) = (x2 − (5.1/(4π²))·x1² + (5/π)·x1 − 6)² + 10·(1 − 1/(8π))·cos(x1) + 10,
−5 ≤ x1 ≤ 10, 0 ≤ x2 ≤ 15.
Global minimum: f(x1, x2) = 0.397887 at (x1, x2) = (−π, 12.275), (π, 2.275), and (9.42478, 2.475).

3.8 Rastrigin's function
f(x) = 10·N + Σ_{i=1..N} (x_i² − 10·cos(2π·x_i)),  −5.12 ≤ x_i ≤ 5.12.
Global minimum: f(x) = 0 at x_i = 0, i = 1:N.

3.9 Schwefel's function
f(x) = Σ_{i=1..N} −x_i·sin(√|x_i|),  −512 ≤ x_i ≤ 512.
Global minimum: f(x) = −N·418.9829 at x_i = 420.9687, i = 1:N.

3.10 Michalewicz's function
f(x) = −Σ_{i=1..N} sin(x_i)·[sin(i·x_i²/π)]^(2m),  0 ≤ x_i ≤ π.
Global minimum: f(x) = −4.00391571 at x = (1.57080, 1.57104, 1.57104, 1.57104, 1.57104).

3.11 Paviani's function
f(x) = Σ_{i=1..N} [ln²(x_i − 2) + ln²(10 − x_i)] − (Π_{i=1..N} x_i)^0.2,  2 ≤ x_i ≤ 10.
Global minimum at x_i = 9.350266, i = 1:N.

3.12 Shekel's function
f(x) = −Σ_{i=1..10} 1 / [(x − A_i)·(x − A_i)^T + c_i],
where A_i is the i-th row of
A = [4 4 4 4; 1 1 1 1; 8 8 8 8; 6 6 6 6; 3 7 3 7; 2 9 2 9; 5 5 3 3; 8 1 8 1; 6 2 6 2; 7 3.6 7 3.6]
and c = (0.1, 0.2, 0.2, 0.4, 0.4, 0.6, 0.3, 0.7, 0.5, 0.5).
Global minimum: f(x) = −10.4614373402 at x = (−1.51573, 1.11506, −1.10393, −0.74712).

3.13 Goldstein-Price's function
f(x1, x2) = [1 + (x1 + x2 + 1)²·(19 − 14x1 + 3x1² − 14x2 + 6x1x2 + 3x2²)]
            · [30 + (2x1 − 3x2)²·(18 − 32x1 + 12x1² + 48x2 − 36x1x2 + 27x2²)],
−2 ≤ x_i ≤ 2, i = 1:2.
Global minimum: f(x1, x2) = 3 at (x1, x2) = (0, −1).

3.14 Ackley's function
f(x) = −20·exp(−0.2·√((1/N)·Σ_{i=1..N} x_i²)) − exp((1/N)·Σ_{i=1..N} cos(2π·x_i)) + 20 + e,
−32.768 ≤ x_i ≤ 32.768.
Global minimum: f(x) = 0 at x_i = 0, i = 1:N.
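To make the definitions above concrete, a few of the benchmarks can be implemented directly (a sketch; the dimension N is taken from the length of the input vector):

```python
import math

# Sphere (De Jong 1): unimodal, global minimum 0 at the origin.
def de_jong_1(x):
    return sum(xi ** 2 for xi in x)

# Rastrigin: highly multimodal, global minimum 0 at the origin.
def rastrigin(x):
    return 10 * len(x) + sum(xi ** 2 - 10 * math.cos(2 * math.pi * xi)
                             for xi in x)

# Schwefel: deceptive (second-best regions far from the optimum),
# global minimum -N*418.9829 near x_i = 420.9687.
def schwefel(x):
    return sum(-xi * math.sin(math.sqrt(abs(xi))) for xi in x)
```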
4. Results
Using the selection methods and benchmark functions described
in the previous sections, we measured their effect on the
performance of a GA. Our programs were implemented with
the help of the GAOT library.
Table 1. Number of function evaluations required by the Geometric and the proposed HLF (HighLowFit) selection methods. Mean% is the HLF mean as a percentage of the Geometric mean.

Function        | Geometric Mean | Geo. Min | Geo. Max  | HLF Mean | HLF Min | HLF Max | Mean%
De Jong 1-2     | 5,875          | 982      | 23,611    | 1,033    | 308     | 2,688   | 17.6%
Branin's Rcos   | 41,219         | 1,766    | 76,849    | 2,582    | 644     | 6,818   | 6.3%
Easom           | 15,548         | 1,570    | 41,700    | 3,273    | 910     | 11,101  | 21.1%
Six-hump        | 16,386         | 1,178    | 45,290    | 1,260    | 518     | 2,562   | 7.7%
Rosenbrock      | 519,248        | 1,540    | 936,393   | 122,859  | 588     | 205,647 | 23.7%
Rastrigin       | 977,250        | 732,425  | 1,243,422 | 139,147  | 107,268 | 165,340 | 14.2%
Schwefel        | 681,134        | 320,132  | 2,394,518 | 144,379  | 38,598  | 343,720 | 21.2%
De Jong 3-5     | 266,399        | 81,441   | 464,080   | 51,657   | 38,384  | 80,968  | 19.4%
De Jong 4-8     | 26,597         | 13,707   | 46,981    | 4,589    | 3,024   | 5,726   | 17.3%
Ackley 4        | 75,652         | 23,846   | 110,765   | 16,801   | 7,210   | 27,392  | 22.2%
Michalewicz     | 152,176        | 73,984   | 330,578   | 104,920  | 32,336  | 338,724 | 68.9%
Shekel-4        | 54,102         | 29,349   | 96,589    | 53,946   | 6,692   | 164,454 | 99.7%
Paviani-10      | 181,581        | 85,821   | 334,206   | 29,463   | 20,412  | 46,336  | 16.2%
Goldstein-Price | 46,199         | 1,956    | 90,805    | 3,714    | 588     | 8,372   | 8.0%
Average         | 218,526        | 97,836   | 445,413   | 48,545   | 18,391  | 100,703 | 22.2%
[Figure 1: Bar charts of the number of function evaluations (Geometric vs. HLF) and the Mean% ratios for each benchmark function, Table 1 data.]

Table 2 continued: Number of function evaluations for the Geometric and HLF selection methods in the second set of runs. Mean% is the HLF mean as a percentage of the Geometric mean.

Function        | Geometric Mean | Geo. Min | Geo. Max  | HLF Mean | HLF Min | HLF Max | Mean%
De Jong 1-2     | 3,682          | 1,350    | 13,164    | 482      | 378     | 630     | 13.1%
Branin's Rcos   | 23,954         | 969      | 89,825    | 580      | 364     | 684     | 2.4%
Easom           | 24,459         | 2,330    | 52,023    | 949      | 798     | 1,090   | 3.9%
Six-hump        | 4,850          | 1,174    | 21,148    | 559      | 462     | 714     | 11.5%
Rosenbrock      | 216,353        | 1,376    | 1,391,803 | 689      | 574     | 1,064   | 0.3%
Rastrigin       | 776,933        | 494,494  | 1,395,963 | 72,085   | 47,880  | 98,126  | 9.3%
Schwefel        | 315,859        | 196,013  | 556,822   | 37,546   | 26,208  | 48,006  | 11.9%
De Jong 3-5     | 210,954        | 69,885   | 447,029   | 23,296   | 8,722   | 35,868  | 11.0%
De Jong 4-8     | 32,732         | 20,555   | 41,802    | 4,025    | 2,772   | 5,348   | 12.3%
Ackley 4        | 85,479         | 22,155   | 168,321   | 7,759    | 3,948   | 12,698  | 9.1%
Michalewicz     | 145,522        | 66,809   | 295,501   | 27,692   | 4,214   | 71,653  | 19.0%
Shekel-4        | 128,074        | 41,873   | 415,960   | 130,156  | 7,980   | 348,304 | 101.6%
Paviani-10      | 170,367        | 118,503  | 264,158   | 20,719   | 15,092  | 29,288  | 12.2%
Goldstein-Price | 28,946         | 1,738    | 102,283   | 739      | 560     | 1,232   | 2.6%
Average         | 154,869        | 74,230   | 375,414   | 23,377   | 8,568   | 46,765  | 15.1%
[Figure 2: Bar charts of the number of function evaluations (Geometric vs. HighLowFit) and the Mean% ratios for each benchmark function, Table 2 data.]

5. Discussion
Numerous research efforts have been made to establish
the relative suitability of the various selection techniques that
can be applied in genetic algorithms. A major problem in
any attempt at ranking performance has been the apparent
bias of different operators towards different tasks. Hence, we

6. Conclusion
References
[1] Holland, J. H., Adaptation in natural and artificial systems.
Ann Arbor: The University of Michigan Press, 1975.
[2] Goldberg, D. E., Genetic Algorithms in Search, Optimization,
and Machine Learning. Reading, Mass.: Addison-Wesley, 1989.
[3] A. E. Eiben, R. Hinterding, and Z. Michalewicz, "Parameter
Control in Evolutionary Algorithms", IEEE Transactions on
Evolutionary Computation, IEEE, 3(2), 1999, pp. 124-141.
[4] Luke, S., "Modification point depth and genome growth in
genetic programming". Evolutionary Computation, 1998,
11(1),67-106.
[5] Burke, D. S., De Jong, K. A., Grefenstette, J. J., & C. L.
Ramsey, "Putting more Genetics into Genetic Algorithms".
Evolutionary Computation 6(4), 387-410.