Neural Network Optimization Based On Improved Diploid Genetic Algorithm
KE-YONG SHAO, FEI LI, BEI-YAN JIANG, NA WANG, HONG-YAN ZHANG, WEN-CHENG LI
School of Electrical and Information Engineering, Daqing Petroleum Institute, Heilongjiang Daqing 163318, China
E-MAIL: shaokeyong@tom.com, lifei96485@tom.com
Abstract:

In this paper, an improved diploid genetic algorithm (DGA) that does not consider the dominance and recessiveness of alleles is presented, aimed at the disadvantages of the DGA, which easily falls into premature convergence and has low efficiency in late-stage local search. The genetic operation process is improved by imitating the reproductive process of diploid organisms and adopting gamete recombination and homologous-chromosome chiasma. Uniting the advantages of the genetic algorithm and the neural network, a new neural network structure closely coupled with the diploid genetic algorithm is designed. The scheme combines the strong global search capability of the genetic algorithm with the self-learning ability of the neural network, and is applied to complex multi-peak function optimization. Simulation results show that the improved algorithm keeps the population diverse and suppresses premature convergence effectively; the neural network optimization based on the diploid genetic algorithm increases convergence speed and accuracy and helps ensure the global optimum.

Keywords:

Neural network; Genetic algorithm; Diploid; Function optimization

1. Introduction

Artificial neural networks (ANN) have a strong ability for nonlinear mapping approximation and parallel information processing, as well as good robustness and fault tolerance, and have been applied widely in data mining, industrial control systems and especially complex nonlinear object modeling [1-5]. There are many algorithms for training a network, for example the BP learning algorithm, which is widely used in multilayer feed-forward neural networks. But it is essentially a gradient-descent algorithm and thus inevitably has defects, such as easily sinking into local optima, slow training, and the requirement that the indicator function be differentiable [3-5].

The genetic algorithm (GA) is an optimization method that simulates biological evolution; its optimization process has nothing to do with gradient information or with the continuity and differentiability of the function being optimized. Therefore, ANN and GA have complementary advantages. However, the simple GA (SGA) has many shortcomings, such as poor local search capability and premature convergence [6-8]. To improve its performance, researchers have put forward many improved measures, such as ecological-niche techniques and adaptive GAs [7-9]. Among them, Bagley put forward the DGA, which has better adaptability and memory, but it does not allow mutation of dominant values and therefore still leads to premature convergence. Hollstien studied a diploid ternary coding. In [9], He Fei proposed a fully dominant-recessive diploid gene encoding method that plays an active role in keeping diversity; yet the encoding operation is cumbersome, which not only increases the computation but also introduces a certain randomness.

This paper avoids the dominance and recessiveness of genes, makes alleles play an equal role in individual performance, and improves the DGA by imitating the reproductive process of diploid organisms. Finally, the strong global search capability and highly efficient learning ability of the DGA are used to optimize the weights of the NN. Simulation results show that the genetic algorithm can effectively restrain premature convergence and achieve fast speed and high precision.

2. New type neural network structure

2.1. Back-Propagation neural network (BPNN)

The Back-Propagation Neural Network, comprised of an input layer, a certain number of hidden layers and an output layer, is a feed-forward network. If the numbers of neurons in the input, hidden and output layers are n, q and m respectively, the network can be expressed as BP(n, q, m). The network can realize a nonlinear mapping from an n-dimensional input vector to an m-dimensional output vector. Normally, increasing the number of hidden layers does not exert a strong influence on the precision and expressive ability of the network. The basic BPNN structure has three layers, whose nodes employ S-type (sigmoid) activation functions. The structure is shown in Figure 1.

[Figure 1. Three-layer BPNN structure: input u, input-hidden weights wij, hidden-output weights vjk, output y]

xi = 0.618xi1 + 0.382xi2    (1)

The specific operation of the improved genetic algorithm is as follows:

Step 1: Sort the parent population by individual fitness.

Step 2: Homologous-chromosome crossover. Cross the two homologous chromosomes of every individual; after the crossover operation the two chromatids are called gametes.

Step 3: Gamete recombination. Select a parent individual according to the sort order, and match the two gametes of this individual with the two gametes of another, randomly selected parent individual; four new offspring individuals are generated (see Figure 2). In addition, the former M/5 parent individuals are given two chances to reproduce.
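The three steps above can be sketched in code. A minimal illustration, assuming real-coded chromosomes and treating equation (1) as the per-gene crossover rule; the function names and the symmetric rule for the second gamete are our assumptions, not the paper's:

```python
import random

def homologous_crossover(chrom1, chrom2):
    """Step 2: cross the two homologous chromosomes of one individual.

    Each gene pair is mixed by the golden-section rule of equation (1);
    the mirrored weighting for the second gamete is an assumed variant.
    """
    g1 = [0.618 * a + 0.382 * b for a, b in zip(chrom1, chrom2)]
    g2 = [0.382 * a + 0.618 * b for a, b in zip(chrom1, chrom2)]
    return g1, g2

def gamete_recombination(parent_a, parent_b):
    """Step 3: pair the gametes of two parents into four diploid offspring."""
    a1, a2 = homologous_crossover(*parent_a)
    b1, b2 = homologous_crossover(*parent_b)
    return [(a1, b1), (a1, b2), (a2, b1), (a2, b2)]

def reproduce(population):
    """Steps 1-3: sort by fitness, then recombine gametes.

    `population` is a list of (fitness, (chrom1, chrom2)) tuples.
    The top M/5 parents are given two chances to reproduce.
    """
    ranked = sorted(population, key=lambda ind: ind[0], reverse=True)
    elite_cut = max(1, len(ranked) // 5)
    offspring = []
    for rank, (_, parent) in enumerate(ranked):
        chances = 2 if rank < elite_cut else 1
        for _ in range(chances):
            mate = random.choice(ranked)[1]   # a randomly selected parent
            offspring.extend(gamete_recombination(parent, mate))
    return offspring
```

Each mating yields four offspring, so a population of M parents produces 4*(M + M/5) candidates before selection.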
1471
Proceedings of the Ninth International Conference on Machine Learning and Cybernetics, Qingdao, 11-14 July 2010
∂J/∂w_jl = Σ_{s=1}^{m} (∂J/∂O_3s) · g′(IN_3s) · v_js · g′(IN_2j) · O_3l(k−1)

Figure 3. Three-layer DNN structure diagram
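The chain-rule structure of this gradient can be checked numerically. A simplified sketch for a plain three-layer sigmoid network: the delayed feedback term O_3l(k−1) of the DNN is replaced here by an ordinary input x_l, which is our simplification, and all weight values are arbitrary:

```python
import math

def g(x):
    """Sigmoid activation; note g'(x) = g(x) * (1 - g(x))."""
    return 1.0 / (1.0 + math.exp(-x))

# Tiny BP(2, 2, 1) network with arbitrary fixed weights.
w = [[0.5, -0.3], [0.1, 0.8]]   # w[j][l]: input l -> hidden j
v = [[0.7], [-0.2]]             # v[j][s]: hidden j -> output s
x, t = [1.0, 0.5], [1.0]        # input vector and target

def forward(w):
    IN2 = [sum(w[j][l] * x[l] for l in range(2)) for j in range(2)]
    O2 = [g(a) for a in IN2]
    IN3 = [sum(v[j][s] * O2[j] for j in range(2)) for s in range(1)]
    O3 = [g(a) for a in IN3]
    return IN2, O2, IN3, O3

def loss(w):
    O3 = forward(w)[3]
    return 0.5 * sum((O3[s] - t[s]) ** 2 for s in range(1))

# Analytic gradient for hidden weight w[j][l], mirroring the equation:
#   dJ/dw_jl = sum_s dJ/dO_3s * g'(IN_3s) * v_js * g'(IN_2j) * x_l
IN2, O2, IN3, O3 = forward(w)
j, l = 0, 1
analytic = sum((O3[s] - t[s]) * O3[s] * (1 - O3[s]) * v[j][s]
               for s in range(1)) * O2[j] * (1 - O2[j]) * x[l]

# Central finite-difference check of the same derivative.
eps = 1e-6
w[j][l] += eps; J_plus = loss(w)
w[j][l] -= 2 * eps; J_minus = loss(w)
w[j][l] += eps
numeric = (J_plus - J_minus) / (2 * eps)
assert abs(analytic - numeric) < 1e-6
```

The assertion passing confirms that the summed chain-rule product matches the true derivative of J for this simplified case.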
we choose binary coding in this paper.

3.2. Genetic operators

(1) Selection: This paper uses the ratio (fitness-proportional) selection operator; namely, the probability of choosing any individual is proportional to its fitness.

In order to avoid the reduction of population diversity and the premature convergence caused by excessive selection pressure, this paper presents a limited selection strategy: however high its fitness, the number of copies of an individual shall not exceed a given selection limit.

For the purpose of preventing the elimination of the highest-fitness individual, we adopt the optimal-preservation strategy: the best individual gets into the next generation directly, without genetic operation.

If the population is insufficient after selection, part of the balance is filled with randomly generated new individuals, to add new genes; the other part is left to the optimal individuals, to increase the ratio of excellent genes in the population.

(2) Crossover: Crossover is the genetic recombination of two individuals. It is an important feature distinguishing the genetic algorithm from other traditional optimization methods. To improve computational efficiency and stability, unlike previous genetic algorithms, this paper presents a crossover strategy of homologous chromosomes (see section 2.2).

(3) Recombination: This paper introduces the new concept of gamete recombination (see section 2.2), imitating the reproduction process of diploid organisms to improve the traditional algorithm.

Quit the search process when either of the two conditions is met, and go to Step 7.

Step 4: Carry out the improved selection operation.

Step 5: Conduct the diploid reproduction operation (including the crossover and recombination processes) on the selected population, and get the offspring population.

Step 6: Execute the mutation operation on the population; go to Step 2.

Step 7: At the end of the DGA evolution, take the optimization results as the initial weights of the DNN.

Step 8: Continue to optimize with the DNN until the precision requirement is met.

The flowchart of DGANN is as follows:

[Figure: DGANN flowchart — Begin; Initialize population; Calculate fitness; Meet termination conditions? (Yes/No); Genetic operation; Reach limited step? (Yes/No)]
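The selection measures of section 3.2 can be sketched as follows; the `copy_limit` value, the `new_individual` factory and the assumption of positive fitness values are ours, not the paper's:

```python
import random

def limited_selection(population, fitness, copy_limit, new_individual):
    """Roulette selection with a per-individual copy cap (the limited
    selection strategy), elitist preservation of the best individual,
    and refill of any shortfall: half random newcomers to inject new
    genes, half extra copies of the best individual.

    Assumes fitness(ind) > 0 for every individual.
    """
    size = len(population)
    best = max(population, key=fitness)
    selected = [best]                        # elitism: best passes unchanged
    copies = {id(best): 1}
    total = sum(fitness(ind) for ind in population)
    # One roulette-wheel spin per remaining slot, honouring the copy cap.
    for _ in range(size - 1):
        r = random.uniform(0.0, total)
        acc = 0.0
        for ind in population:
            acc += fitness(ind)
            if acc >= r:
                break
        if copies.get(id(ind), 0) < copy_limit:
            copies[id(ind)] = copies.get(id(ind), 0) + 1
            selected.append(ind)
    # Refill the shortfall left by the cap: new blood plus the optimum.
    shortfall = size - len(selected)
    selected += [new_individual() for _ in range(shortfall // 2)]
    selected += [best] * (shortfall - shortfall // 2)
    return selected
```

The cap trades a little selection pressure for diversity, which is exactly the premature-convergence countermeasure described above.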
these functions can fully evaluate the search performance of the algorithm and show its effectiveness clearly.

[Figure: 3-D surface plot of the test function over x, y ∈ [−2, 2]]

TABLE 1. OPTIMIZATION PERFORMANCE CONTRAST FOR FUNCTION 1

Algorithm | Precision | Satisfaction rate | Average iterations | Average convergence time
BPNN      | 1         | 2                 | 1000               | --
DGANN     | 1e-3      | 92                | 90.4               | 0.64s

TABLE 2. OPTIMIZATION PERFORMANCE CONTRAST FOR FUNCTION 2

Algorithm | Precision | Satisfaction rate | Average iterations | Average convergence time
BPNN      | 1e-1      | 0                 | 1000               | --
DGANN     | 1e-2      | 76                | 240.6              | 1.71s

[Table: Average x | Average y | Average goal (values not recovered)]
[Figure: convergence curve of the optimal value (vertical axis 11.65 to 12)]

Acknowledgements

This work is supported by the colleges and universities youth academic backbone support project (1152G001).

References