
Genetic Algorithm Applications

Assignment #2 for Dr. Z. Dong


Mech 580, Quantitative Analysis, Reasoning and Optimization Methods in CAD/CAM
and Concurrent Engineering

Yingsong Zheng
Sumio Kiyooka
Nov. 5, 1999

Table of Contents
INTRODUCTION ..............................................................................................................4
APPLICATIONS...............................................................................................................5
Minimum Spanning Tree ................................................................................................. 5
Traveling Salesman........................................................................................................... 6
Reliability Optimization................................................................................................... 7
Job-Shop Scheduling ........................................................................................................ 7
Transportation .................................................................................................................. 7
Facility Layout Design...................................................................................................... 8
Obstacle Location Allocation........................................................................................... 9
AN EXAMPLE OF APPLYING THE GENETIC ALGORITHM...............................10
1. Optimization Problem ................................................................................................. 10
2. Representation............................................................................................................. 10
3. Initial Population........................................................................................................ 11
4. Evaluation ................................................................................................................... 11
5. Create a new population ............................................................................................. 12
5.1 Reproduction........................................................................................................... 12
5.2 Selection and Crossover ......................................................................................... 12
5.3 Mutation.................................................................................................................. 14
6. Results.......................................................................................................................... 15
RELATED MATERIAL ..................................................................................................17
CONCLUSION................................................................................................................17
REFERENCES ...............................................................................................................18
APPENDIX ......................................................................................................................19
Filename: zys_ga.m ....................................................................................................... 19
Filename: G_Pfunction.m.............................................................................................. 20
Filename: evalpopu.m.................................................................................................... 21
Filename: evaleach.m .................................................................................................... 21
Filename: bit2num.m..................................................................................................... 22
Filename: blackbg.m...................................................................................................... 22
Filename: nextpopu.m ................................................................................................... 22
Filename: G_Pfunction2 ................................................................................................ 24
Filename: matlabv.m ..................................................................................................... 24

Introduction
The intended audience of this report is those who wish to know which applications can be
solved with Genetic Algorithms and how to apply the algorithm. Knowledge of the
Genetic Algorithm is assumed but a short overview will be given.
The Genetic Algorithm is based on Darwin's theory of evolution. By starting with a set of
potential solutions and changing them over several iterations, the Genetic Algorithm
attempts to converge on the fittest solution. The process begins with a set of potential
solutions or chromosomes (usually in the form of bit strings) that are randomly generated
or selected. The entire set of these chromosomes comprises a population. The
chromosomes evolve over several iterations or generations. New generations (offspring)
are created using the crossover and mutation operators. Crossover splits two parent
chromosomes at a common point and recombines the first part of one parent with the
second part of the other. Mutation flips a single bit of a chromosome. The chromosomes
are then evaluated using a fitness criterion and the best ones are kept while the others are
discarded. This process repeats until one chromosome has the best fitness and is taken as
the best solution of the problem.
There are several advantages to the Genetic Algorithm. It works well for global
optimization, especially where the objective function is discontinuous or has several
local minima. These advantages come with a corresponding disadvantage: since it does
not use extra information such as gradients, the Genetic Algorithm converges slowly on
well-behaved objective functions.
The Genetic Algorithm can be used in both unconstrained and constrained optimization
problems. It can be applied to nonlinear programming, stochastic programming (the case
where mathematical programming problems have random variables thus introducing a
degree of uncertainty), and combinatorial optimization problems such as the Traveling
Salesman Problem, Knapsack Problem, Minimum Spanning Tree Problem, Scheduling
Problem, and many others.

Applications
Since the Genetic Algorithm can be used to solve both unconstrained and constrained
problems, it is simply another way of obtaining a solution to a standard optimization
problem. Thus it can be used to solve classic optimization problems such as maximizing
the volume of a container while minimizing the amount of material required to produce it.
By applying the Genetic Algorithm to linear and nonlinear programming problems it is
possible to solve typical problems such as the diet problem (choosing the cheapest diet
from a set of foods that must meet certain nutritional requirements). Another area where
Genetic Algorithms can be applied is combinatorial optimization, which includes several
common computer science problems such as the knapsack, traveling salesman, and job
scheduling problems. The following sections discuss several common applications and
how the Genetic Algorithm can be applied to them.
Minimum Spanning Tree
The Minimum Spanning Tree Problem is a classic problem in graph theory. Imagine the
problem of laying fiber-optic cable between several cities scattered over some
geographical area. Assume that the cost of laying the cable between each city has been
already determined. The goal of the problem is to find the layout that connects all the
cities to the network of fiber-optic cable and costs the least for layout. The layout has to
be such that it is possible for a data packet to get between any two cities although the
distance that the packet travels is not important. Mapping this example to graph theory
identifies the cities as vertices and the cable as edges where each edge has an associated
cost. The problem can be formulated as follows:
Let G = (V, E) be a connected, undirected graph. The set of vertices is denoted V =
{v1, v2, ..., vn} and the set of edges is denoted E = {e1, e2, ..., em}. Each edge has an
associated non-negative weight, and the weights are denoted W = {w1, w2, ..., wm}.
A spanning tree is the minimal set of edges that can connect all the vertices in the graph.
By definition of being a tree there cannot exist any cycles or solitary vertices. Several
greedy algorithms such as those developed by Kruskal and Prim [1] can solve the
Minimum Spanning Tree Problem in O(E log V) time.
While using Genetic Algorithms to solve the Minimum Spanning Tree Problem the
primary concern is how to encode the tree. This can be done using edge encoding, vertex
encoding, or a combination of both. It is important for any encoding scheme to have
the following properties:
1. All trees must be able to be represented.
2. Each distinct tree must have the same number of possible encodings as any other.
3. It must be easy to translate between the encoded form and the conventional form.
This is important when evaluating fitness.
4. Small changes to the encoding should represent small changes to the tree.
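
As one illustration of a vertex-based encoding commonly used in the GA literature, a
spanning tree on n vertices can be represented by a Prufer number (a string of n-2 vertex
labels), which satisfies most of the properties above. The MATLAB sketch below is a
hypothetical helper, not part of this report's code; it decodes such a string into the n-1
edges of the corresponding tree:

function edges = prufer_decode(P, n)
% PRUFER_DECODE  Decode a Prufer number P (a vector of n-2 vertex labels)
% into the n-1 edges of the corresponding spanning tree on vertices 1..n.
degree = ones(1, n);                  % each vertex appears (count in P) + 1 times
for k = 1:length(P)
    degree(P(k)) = degree(P(k)) + 1;
end
edges = zeros(n-1, 2);
for k = 1:length(P)
    leaf = min(find(degree == 1));    % smallest-numbered remaining leaf
    edges(k, :) = [leaf, P(k)];       % connect it to the next label in P
    degree(leaf) = degree(leaf) - 1;
    degree(P(k)) = degree(P(k)) - 1;
end
rest = find(degree == 1);             % two vertices remain; they form the last edge
edges(n-1, :) = rest;

For example, prufer_decode([2 2 4], 5) returns the four edges of a five-vertex tree. Small
changes to the Prufer string generally produce small changes to the tree, and translation
between the two forms is cheap, which is why this encoding is popular.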

Traveling Salesman
The Traveling Salesman Problem is of particular note because it is the classic example of
a non-deterministic polynomial (NP) complete problem, for which, so far, no
polynomial-time algorithm is known.
Problems can be classed as either solvable or unsolvable (such as the Halting
Problem). The solvable problems can be further subclassed as computationally
complex or not. The Traveling Salesman Problem is the classic computationally complex
problem. Imagine that you are a sales agent and you need to visit potential clients in a
certain number of cities using the shortest possible route. This problem can be solved
using a computer. If there are n cities then the maximum number of possible itineraries
between all the cities is (n - 1)!. An algorithm can be created which simply examines all
the possible routes and reports the shortest one. However, the catch is that the amount
of time required by the algorithm grows at an enormous rate as the number of cities
increases. If there are 25 cities then the algorithm must examine 24! itineraries.
24! is approximately 6.2 x 10^23. Using a computer that can examine one million itineraries
per second it would still take about 6.2 x 10^23 / 10^6 = 6.2 x 10^17 seconds to solve the
problem. This is over 1.96 x 10^10 years!
It can be said that a problem can be solved in a certain amount of time. Take for example
matrix multiplication. Assume both matrices are square and of the same size, say n by n.
The number of multiplications between single elements that is required is n^3. Therefore
the problem of matrix multiplication can be solved in n^3 time. It can be shown that by
using more complicated machines it is possible to reduce the time needed to solve a
problem (for example, by using a different Turing machine model, a problem that normally
takes n^3 time can be solved in n time [5]). As a result the important question is not the
degree of the polynomial measuring the time that it takes to solve a problem but instead
whether or not the problem can be solved in polynomial time [5].
It is generally accepted that problems that can be solved in polynomial time can be
feasibly solved using a computer. The solvable problems for which no polynomial-time
algorithm is known, such as the Traveling Salesman Problem, are said to be solvable in
non-deterministic polynomial time.
The big question that remains open is whether or not the class of polynomial time
problems equals the class of non-deterministic polynomial time problems. Many problems
such as the Traveling Salesman Problem, which are easily shown to fall into the
non-deterministic category, may or may not also fall into the polynomial time category.
This fundamental question of computer science can be used to gauge how computationally
complex other problems are. For example, a similar process can be used to show that a
problem is computationally complex. Show that if the current problem could be solved in
polynomial time then the Traveling Salesman Problem could also be solved in polynomial
time (perhaps by reducing the Traveling Salesman Problem to the current problem). Since
no polynomial-time algorithm is known for the Traveling Salesman Problem, a
computationally feasible algorithm will not exist for the current problem unless one for
the Traveling Salesman Problem is discovered.

When using Genetic Algorithms to solve the Traveling Salesman Problem a couple of
feasible encodings exist. One is a permutation representation where each city to be
visited is assigned a number. For example, a 5-city tour may be encoded as (5, 2, 1, 4, 3).
Any crossover operator, including partial-mapped crossover, position-based crossover,
heuristic crossover, cycle crossover, order crossover, etc., must ensure that the resulting
encoding is still a valid tour (i.e. cities cannot be skipped and a city cannot be visited
twice). Several mutation operators exist, including inversion, insertion, displacement,
reciprocal exchange, and heuristic mutation.
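
As a minimal sketch of working with this permutation encoding (the distance matrix D
below is hypothetical, not taken from this report's code), the length of a tour can be
evaluated and a reciprocal-exchange mutation applied as follows:

% Evaluate the length of a tour (a permutation of city indices) given a
% symmetric distance matrix D, then apply a reciprocal exchange mutation.
D = [0 2 9 10 7; 2 0 6 4 3; 9 6 0 8 5; 10 4 8 0 6; 7 3 5 6 0];  % hypothetical distances
tour = [5 2 1 4 3];                       % the example tour from the text
len = 0;
for k = 1:length(tour)-1
    len = len + D(tour(k), tour(k+1));
end
len = len + D(tour(end), tour(1));        % close the loop back to the first city
% Reciprocal exchange mutation: swap two randomly chosen cities; the
% offspring is guaranteed to remain a valid tour.
idx = randperm(length(tour));
tour([idx(1) idx(2)]) = tour([idx(2) idx(1)]);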
Reliability Optimization
The reliability of a system can be defined as the probability that the system has operated
successfully over a specified interval of time under stated conditions [1]. Many systems
play a critical role in various operations and if they are down the consequences can be
quite severe. Measures of reliability for systems such as communication switches are
desired in order to assess current reliability and also to determine areas where reliability
can be improved. Optimization in this field often involves finding the best way to allocate
redundant components to a system. Components are assigned probabilities to effectively
gauge their reliability.
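
For a series system of parallel stages, a common model in redundancy allocation (a
general sketch, not a formula given in this report), a stage holding x identical components
of reliability p has reliability 1 - (1 - p)^x, and the system reliability is the product over
the stages:

% Evaluate the reliability of a series system of parallel stages for one
% candidate redundancy allocation (a hypothetical chromosome).
p = [0.90 0.85 0.95];            % reliability of a single component at each stage
x = [2 3 1];                     % number of redundant components allocated per stage
stage_rel = 1 - (1 - p).^x;      % a parallel stage fails only if all its copies fail
system_rel = prod(stage_rel)     % series system: product of the stage reliabilities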
Job-Shop Scheduling
Imagine there is a sequence of machines that each performs a small task in a production
line. These machines are labeled from 1 to m. For a single job to be completed work
must be done first with machine 1, then machine 2, etc., all the way to machine m. There
are a total of n jobs to be done and each job requires a certain amount of time on each
machine (note that the amount of time required on one machine may vary from one job to
another). A machine can only work on one job at any given time and once a machine
starts work it cannot be interrupted until it has completed its task. The objective is to find
the ideal schedule so that the total time to complete all n jobs is minimized.
There are two main ways of encoding a schedule for applying the Genetic Algorithm to
the Job-Shop Scheduling Problem. They are the direct approach and indirect approach.
In the direct approach the entire schedule is encoded into a chromosome. With the
indirect approach the chromosome instead holds a set of dispatching rules which
determine how the jobs will be scheduled. As the Genetic Algorithm comes up with
better dispatching rules, the schedule improves as well. Dispatch rules are executed
using properties of the jobs themselves. For example one could select the job with the
shortest overall processing time to be next, or one could select the job with the greatest
amount of remaining time to be next.
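
As a small illustration of the indirect approach (the job data below are hypothetical), two
common dispatching rules can be evaluated on the set of jobs currently waiting at one
machine:

% Two simple dispatching rules applied to the jobs waiting at one machine.
proc_time = [4 7 2 9];                 % processing time of each waiting job on this machine
remaining = [10 15 3 20];              % total remaining work of each job
[junk, next_spt] = min(proc_time);     % shortest processing time rule
[junk, next_mwr] = max(remaining);     % most work remaining rule
fprintf('SPT picks job %d, MWR picks job %d\n', next_spt, next_mwr);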
Transportation
The Transportation Problem involves shipping a single commodity from suppliers to
consumers to satisfy demand at minimum cost. Assume that the supply equals the
demand. There are m suppliers and n consumers. The cost of shipping one unit from a
single supplier to each consumer is known. The problem is to find the best allocation of

the commodity at the suppliers so that the demand can be satisfied and the lowest costs
are incurred.
A matrix representation can be encoded into each chromosome. The suppliers are listed
vertically and the consumers are listed horizontally. Element xij holds the amount of
the commodity shipped from supplier i to consumer j.
During both the crossover and mutation operators, it is important to ensure that the
amount of commodity being shipped remains constant since the amount of supply and
demand must remain equal. Mutation involves randomly selecting a smaller sub-matrix
consisting of a random number of rows and columns (greater than one) and then
redistributing the values. The values are redistributed in such a way so that the sum of all
values still remains constant (i.e. the same as before the mutation operator).
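
A minimal sketch of such a sum-preserving mutation (with a hypothetical 2-by-3 allocation
matrix) picks two rows and two columns and moves an amount d around the resulting
2-by-2 sub-matrix so that every row and column sum is unchanged:

% Sum-preserving mutation for a transportation chromosome X (rows: suppliers,
% columns: consumers).  Only a 2-by-2 sub-matrix is redistributed here.
X = [5 0 3; 2 4 1];                        % hypothetical allocation matrix
r = randperm(size(X,1)); r = r(1:2);       % two random rows
c = randperm(size(X,2)); c = c(1:2);       % two random columns
d = floor(rand * min(X(r(1),c(1)), X(r(2),c(2))));   % feasible amount to move
X(r(1),c(1)) = X(r(1),c(1)) - d;   X(r(1),c(2)) = X(r(1),c(2)) + d;
X(r(2),c(1)) = X(r(2),c(1)) + d;   X(r(2),c(2)) = X(r(2),c(2)) - d;
% Row sums and column sums of X are the same as before the mutation.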
Facility Layout Design
Facility Layout Design Problems include the decisions made when deciding where to
place equipment or other resources (such as departments) in a configuration that allows
optimal performance according to certain criteria. Such decisions can be complicated
since equipment may often be used during the manufacturing of a variety of different
products. Each product has its own special requirements and so it is imperative that the
equipment is placed so that the total cost of production for all products is optimally
minimal. Layout decisions must be made early and poor decisions will end up costing a
lot during the setting up of the equipment and during production itself.
Commonly during facility layout design a single robot moves parts from one machine to
another. The robot may be fixed to a stationary point and only revolve around an axis, it
may move along a linear track with machines on one or both sides, or it may move in a
planar direction and be able to access machines in multiple rows. According to the
motion of the robot the machines may be oriented in one of four different layouts: linear
single-row, linear double-row, circular single-row, or multiple-row [1]. In attempting to
solve the Facility Layout Design Problem the circular single-row is a special case of
linear single-row. Also the linear double-row is a subset of the multiple row problem.
When using Genetic Algorithms on single-row problems the representation is simple.
Each chromosome is a permutation of the machines. As in the Traveling Salesman
Problem, several crossover and mutation operators exist that will ensure that any
offspring is a valid permutation.
For multiple rows the bulk of the chromosome is a permutation of the machines but at the
beginning there has to be a marker to specify when one row ends and the next one begins.
Imagine there are 9 machines to be placed into two rows. Let the first element of the
chromosome indicate the position in the permutation at which the second row starts. This
value can vary from 1 to 10. For example, the following layout:
[Layout figure: the robot track runs between two rows of machines, with machines 2, 6,
8, 3 in the first row and machines 9, 5, 1, 4, 7 in the second row]
would be encoded as { 5, 2, 6, 8, 3, 9, 5, 1, 4, 7 }. The machine in the fifth position of the
permutation starts the second row, and thus a 5 is encoded at the beginning of the
chromosome. Since this representation is still mostly a permutation, the crossover and
mutation techniques are essentially the same as the ones for the single-row problem.
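
A minimal MATLAB sketch of decoding this two-row chromosome, under the reading that
the first gene gives the position in the permutation where the second row begins, is:

% Decode the example two-row chromosome from the text.
chrom = [5 2 6 8 3 9 5 1 4 7];
split = chrom(1);            % position in the permutation where row two starts
perm  = chrom(2:end);        % permutation of the 9 machines
row1  = perm(1:split-1)      % machines in the first row:  2 6 8 3
row2  = perm(split:end)      % machines in the second row: 9 5 1 4 7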
Obstacle Location Allocation
Imagine you are building an expansive civilization consisting of a plethora of cities
dispersed throughout the known earth. As the wise ruler the time has now come for you
to choose the locations of new regional capital cities. In order to keep your kingdom's
happiness at the highest level it is important to minimize the total length of road needed
to connect all of your cities to a regional capital.
Regarding constraints, the main one for this simple problem is that each city can only be
served by one capital. Advanced problems may have more constraints. For example, it
is possible that each capital may only be able to serve up to a set number of cities. In the
case of Obstacle Location Allocation, another factor to consider is physical obstacles that
affect the placement of roads. Building roads over mountains and across lakes may be
impossible. Two types of obstacles exist [1]. The first prohibits the placement of capital
cities (e.g. in a lake). The second prohibits the placement of routes (e.g. over tall
mountains).
Representation of each potential solution is straightforward. The location of each capital
can be expressed using x-y coordinates. Evaluation of fitness poses much more difficulty,
especially when obstacles are involved. First of all, the feasibility of a solution must be
ascertained. Potential locations of capitals must be checked against all obstacles to
ensure that they are not located within one. Afterwards the shortest path between a city
and its capital must be calculated. This can be accomplished by representing the problem
as a graph and then applying Dijkstra's shortest path algorithm [1].
New chromosomes produced during crossover and mutation have the possibility of being
infeasible. There are a few approaches that can be used to solve this problem. The first
is to discard any infeasible chromosomes. However this seems to decrease the efficiency
of the algorithm [1]. The second approach is to add a penalty to each infeasible
chromosome. The final approach is to adjust any infeasible chromosome so that it
becomes feasible. An easy way to accomplish this is to replace the location of any
infeasible capital with the location of the nearest vertex of the obstacle that the capital
currently lies within.
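
A minimal sketch of this repair step for a single candidate capital and one rectangular
obstacle (the coordinates below are hypothetical) is:

% Repair an infeasible capital location by snapping it to the nearest vertex
% of the rectangular obstacle it lies within.
cx = 3.2; cy = 1.5;                     % candidate capital location (one gene pair)
obs = [3 1; 5 1; 5 2; 3 2];             % obstacle vertices (a rectangle)
inside = cx > min(obs(:,1)) & cx < max(obs(:,1)) & ...
         cy > min(obs(:,2)) & cy < max(obs(:,2));
if inside
    d2 = (obs(:,1) - cx).^2 + (obs(:,2) - cy).^2;   % squared distance to each vertex
    [junk, k] = min(d2);
    cx = obs(k,1); cy = obs(k,2);       % move the capital to the nearest vertex
end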

An Example of Applying the Genetic Algorithm


In this section, the Genetic Algorithm will be applied to a simple example and explained
in detail.
1. Optimization Problem
The numerical example of a constrained optimization problem (the Goldstein and Price
Function [3]) is given as follows:
Minimize: z = [1 + (x+y+1)^2 (19 - 14x + 3x^2 - 14y + 6xy + 3y^2)] [30 + (2x-3y)^2 (18 - 32x + 12x^2 + 48y - 36xy + 27y^2)]
Constraints: -2.0 <= x <= 2.0; -2.0 <= y <= 2.0

A three-dimensional plot of the objective function is shown in Figure 1.

Figure 1: The Goldstein and Price Function (MATLAB file: G_Pfunction.m)


2. Representation
First, we need to encode the decision variables into binary strings. Here 16 bits are used to
represent each variable. The mapping from a binary string to a real number for a variable x
or y is done as follows:

x = -2.0 + x' * 4.0/(2^16 - 1)
y = -2.0 + y' * 4.0/(2^16 - 1)

Here x' and y' denote the decimal values of the substrings for decision variables x and y.
For example, assuming the total length of a chromosome is 32 bits:
1110011011001100 0110101110000010

The corresponding values for variables x and y are given below:

Variable    Binary number        Decimal number
x           1110011011001100     59076
y           0110101110000010     27522

x = -2.0 + 59076 * 4/(2^16 - 1) = 1.606
y = -2.0 + 27522 * 4/(2^16 - 1) = -0.320
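
This decoding matches the bit2num.m routine listed in the appendix; as a small usage
check, the y substring above can be decoded as follows:

% Check the decoding of the y substring with bit2num.m from the appendix.
y_bits = [0 1 1 0 1 0 1 1 1 0 0 0 0 0 1 0];
y = bit2num(y_bits, [-2 2])      % returns approximately -0.320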
3. Initial Population
In each generation the population size is set to 20.
The initial population is randomly generated as follows:
V1 =[1 1 1 0 0 1 1 0 1 1 0 0 1 1 0 0 0 1 1 0 1 0 1 1 1 0 0 0 0 0 1 0 ]=[ 1.606256, -0.320165 ]
V2 =[1 1 0 0 0 0 0 0 0 0 1 1 0 0 0 1 1 0 1 1 1 1 0 0 0 1 0 1 0 1 0 1 ]=[ 1.003037, 0.942733 ]
V3 =[1 1 0 0 0 0 0 1 0 1 0 0 0 0 1 0 1 0 1 1 0 1 0 1 0 0 0 1 1 0 0 0 ]=[ 1.019699, 0.829633 ]
V4 =[0 1 0 0 1 0 0 0 1 1 1 1 0 0 0 1 1 0 0 1 1 1 0 1 1 0 0 0 1 1 0 0 ]=[ -0.860273, 0.461707 ]
V5 =[0 0 1 1 0 0 1 1 0 1 1 1 1 1 1 0 0 1 0 1 1 1 0 1 1 0 0 0 1 0 1 0 ]=[ -1.195422, -0.538430 ]
V6 =[0 1 0 0 0 1 1 0 1 1 1 1 1 0 0 0 0 0 0 0 0 1 1 1 1 0 0 1 1 0 0 0 ]=[ -0.891096, -1.881346 ]
V7 =[0 1 1 0 1 0 0 0 0 0 0 1 1 1 1 0 1 0 0 0 0 1 1 0 0 1 1 1 1 0 1 0 ]=[ -0.373144, 0.101228 ]
V8 =[0 0 0 1 0 1 1 0 0 0 0 1 0 1 1 1 1 0 1 0 1 1 1 1 1 1 1 1 0 0 0 0 ]=[ -1.654841, 0.749065 ]
V9 =[1 0 1 1 0 1 1 1 1 1 0 1 1 1 1 1 1 0 1 0 1 1 0 0 0 0 1 1 1 1 1 1 ]=[ 0.873030, 0.691386 ]
V10 =[0 0 0 0 0 0 0 0 0 1 0 1 1 1 1 1 1 1 0 0 1 1 0 1 0 1 1 1 1 1 1 1 ]=[ -1.994202, 1.210925 ]
V11 =[1 0 0 1 1 0 0 1 1 0 1 0 1 0 1 0 0 0 0 0 1 1 1 0 1 1 0 1 0 1 0 0 ]=[ 0.401038, -1.768307 ]
V12 =[0 0 1 1 1 0 0 0 0 0 0 0 1 0 0 1 1 1 1 1 1 0 1 0 1 0 1 1 0 0 1 0 ]=[ -1.124437, 1.917174 ]
V13 =[0 1 1 1 0 1 1 1 1 1 0 0 1 0 1 0 1 1 1 1 1 0 1 1 0 1 1 0 1 0 1 1 ]=[ -0.128267, 1.928466 ]
V14 =[0 0 0 1 0 1 1 0 1 1 0 1 1 1 1 1 0 0 1 0 0 0 1 0 1 1 1 1 1 1 1 0 ]=[ -1.642634, -1.453239 ]
V15 =[1 0 0 0 1 1 0 1 0 0 1 0 0 1 0 0 1 1 1 0 0 1 1 1 1 0 0 0 1 1 1 0 ]=[ 0.205356, 1.618097 ]
V16 =[1 0 0 1 0 1 0 0 0 1 1 1 0 1 1 1 0 0 1 0 0 0 0 1 1 1 0 0 1 0 0 0 ]=[ 0.319799, -1.472160 ]
V17 =[1 1 1 1 0 0 1 0 0 1 0 0 1 0 1 1 1 0 1 0 0 1 1 1 0 1 0 0 0 0 1 0 ]=[ 1.785885, 0.613443 ]
V18 =[1 1 0 1 1 0 0 1 1 1 1 1 1 1 1 1 0 1 0 1 1 1 0 0 0 1 0 0 0 1 1 1 ]=[ 1.406241, -0.558145 ]
V19 =[0 0 0 0 0 1 1 0 1 0 1 0 1 0 1 0 0 0 0 1 1 0 1 1 1 1 1 1 0 0 0 1 ]=[ -1.895872, -1.563409 ]
V20 =[1 1 0 1 1 1 0 1 0 1 0 0 1 0 0 1 0 1 0 0 0 0 1 1 0 1 0 0 0 1 1 0 ]=[ 1.457633, -0.948836 ]

4. Evaluation
The first step after creating a generation is to calculate the fitness value of each member
in the population. The process of evaluating the fitness of a chromosome consists of the
following three steps:
1. Convert the chromosome's genotype to its phenotype. This means converting the
binary string into the corresponding real values.
2. Evaluate the objective function.
3. Convert the value of the objective function into a fitness value. Here, in order to make
the fitness values positive, the fitness of each chromosome equals the maximum of the
objective function over the population minus the objective function value of that
chromosome.
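
This is exactly the scaling performed in nextpopu.m (appendix); a tiny check using three
of the objective values listed below:

% Fitness scaling used in this example (see nextpopu.m in the appendix).
F = [2907.700814; 703150.129106; 730.102530];   % F(V1), F(V12), F(V16) from below
fitness = max(F) - F             % gives 700242.43, 0.00, 702420.03, matching Eval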

The objective function values F and the fitness values Eval of the above chromosomes (the
first population) are as follows:
F(V1 ) = F(1.606256, -0.320165)=2907.700814;
F(V2 ) = F(1.003037, 0.942733)=1470.604014;
F(V3 ) = F(1.019699, 0.829633)=998.466596;
F(V4 ) = F(-0.860273, 0.461707)=9680.870631;
F(V5 ) = F(-1.195422, -0.538430)=1439.880786;
F(V6 ) = F(-0.891096, -1.881346)=11273.574224;
F(V7 ) = F(-0.373144, 0.101228)=951.091206;
F(V8 ) = F(-1.654841, 0.749065)=8068.650332;
F(V9 ) = F(0.873030, 0.691386)=982.663173;
F(V10 ) = F(-1.994202, 1.210925)=45585.613158;
F(V11 ) = F(0.401038, -1.768307)=8488.275825;
F(V12 ) = F(-1.124437, 1.917174)=703150.129106;
F(V13 ) = F(-0.128267, 1.928466)=234882.112971;
F(V14 ) = F(-1.642634, -1.453239)=14013.064752;
F(V15 ) = F(0.205356, 1.618097)=84257.482260;
F(V16 ) = F(0.319799, -1.472160)=730.102530;
F(V17 ) = F(1.785885, 0.613443)=890.983919;
F(V18 ) = F(1.406241, -0.558145)=5332.051371;
F(V19 ) = F(-1.895872, -1.563409)=21833.496910;
F(V20 ) = F(1.457633, -0.948836)=26032.543455;

Eval(V1 ) = Fmax-F(V1 ) = 700242.428


Eval(V2 ) = Fmax-F(V2 ) = 701679.525
Eval(V3 ) = Fmax-F(V3 ) = 702151.663
Eval(V4 ) = Fmax-F(V4 ) = 693469.258
Eval(V5 ) = Fmax-F(V5 ) = 701710.248
Eval(V6 ) = Fmax-F(V6 ) = 691876.555
Eval(V7 ) = Fmax-F(V7 ) = 702199.038
Eval(V8 ) = Fmax-F(V8 ) = 695081.479
Eval(V9 ) = Fmax-F(V9 ) = 702167.466
Eval(V10 ) = Fmax-F(V10 ) = 657564.516
Eval(V11 ) = Fmax-F(V11 ) = 694661.853
Eval(V12 ) = Fmax-F(V12 ) = 0.000
Eval(V13 ) = Fmax-F(V13 ) = 468268.016
Eval(V14 ) = Fmax-F(V14 ) = 689137.064
Eval(V15 ) = Fmax-F(V15 ) = 618892.647
Eval(V16 ) = Fmax-F(V16 ) = 702420.027
Eval(V17 ) = Fmax-F(V17 ) = 702259.145
Eval(V18 ) = Fmax-F(V18 ) = 697818.078
Eval(V19 ) = Fmax-F(V19 ) = 681316.632
Eval(V20 ) = Fmax-F(V20 ) = 677117.586

It is clear that in the first generation chromosome V16 is the best one and that
chromosome V12 is the poorest one.
5. Create a new population
After evaluation, we have to create a new population from the current generation. Here
the three operators (reproduction, crossover, and mutation) are used.
5.1 Reproduction
The two chromosomes (strings) with the best and second-best fitness are allowed to live
and produce offspring in the next generation. For example, in the first population,
chromosomes V16 and V17 are carried over into the second population.
5.2 Selection and Crossover
The cumulative probability is used to decide which chromosomes will be selected for
crossover. The cumulative probability is calculated in the following steps:
1. Calculate the total fitness of the population:

   F_total = Eval(V1) + Eval(V2) + ... + Eval(V_pop_size)

2. Calculate the selection probability Pi for each chromosome:

   Pi = Eval(Vi) / F_total

3. Calculate the cumulative probability Qi for each chromosome:

   Qi = P1 + P2 + ... + Pi

For example, the Pi and Qi of each chromosome in the above population are as follows:
P1 =0.054;   Q1 =0.054;      P2 =0.054;   Q2 =0.109;
P3 =0.055;   Q3 =0.163;      P4 =0.054;   Q4 =0.217;
P5 =0.054;   Q5 =0.272;      P6 =0.054;   Q6 =0.325;
P7 =0.055;   Q7 =0.380;      P8 =0.054;   Q8 =0.434;
P9 =0.055;   Q9 =0.488;      P10 =0.014;  Q10 =0.539;
P11 =0.054;  Q11 =0.593;     P12 =0.000;  Q12 =0.593;
P13 =0.036;  Q13 =0.630;     P14 =0.054;  Q14 =0.683;
P15 =0.048;  Q15 =0.731;     P16 =0.055;  Q16 =0.786;
P17 =0.055;  Q17 =0.840;     P18 =0.054;  Q18 =0.895;
P19 =0.053;  Q19 =0.947;     P20 =0.053;  Q20 =1.000

The crossover used here is the one-cut-point method, which randomly selects one cut-point
and exchanges the right-hand parts of two parents to generate offspring.
1. Generate a random number r from the range [0,1].
2. If Q(i-1) < r <= Qi, select the ith chromosome Vi as parent one.
3. Repeat steps 1 and 2 to select parent two.
4. Generate another random number r from the range [0,1]. If r is less than the probability
of crossover (chosen here as 1.0), crossover is performed; the cut-point is placed after
the gene whose position is the smallest integer greater than or equal to r(length - 1),
where the length is 32 in this case.
5. Repeat steps 1 to 4 nine times altogether to complete the crossover. The 18 offspring
created, plus the 2 chromosomes kept by reproduction, keep the population size the
same in each generation, in this case 20.
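
This is how nextpopu.m (appendix) implements selection and the cut-point; a condensed,
self-contained sketch using the first three fitness values above as stand-in data:

% Roulette-wheel selection and one-cut-point crossover, condensed from
% nextpopu.m.  The fitness vector and population here are stand-in data.
fitness = [700242.428; 701679.525; 702151.663];   % Eval(V1), Eval(V2), Eval(V3)
popu = rand(3, 32) > 0.5;                          % stand-in 0-1 population
P = fitness / sum(fitness);                        % selection probabilities Pi
Q = cumsum(P);                                     % cumulative probabilities Qi
tmp = find(Q - rand > 0);  parent1 = popu(tmp(1), :);
tmp = find(Q - rand > 0);  parent2 = popu(tmp(1), :);
string_leng = 32;
xover_point = ceil(rand * (string_leng - 1));      % cut-point between 1 and 31
child1 = [parent1(1:xover_point) parent2(xover_point+1:string_leng)];
child2 = [parent2(1:xover_point) parent1(xover_point+1:string_leng)];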
For example, the first crossover on the above chromosomes is as follows:
xover_point = 9
parent1
01001000111100011001110110001100
parent2
11011001111111110101110001000111
new_popu1
01001000111111110101110001000111
new_popu2
11011001111100011001110110001100

The population after performing selection and crossover on the above chromosomes is:
10010100011101110010000111001000
11110010010010111010011101000010
01001000111111110101110001000111
11011001111100011001110110001100
11100110110011000110101110000010
11100110110011000110101110000010

11011100010010111010011101000010
11110011010010010100001101000110
00000110101010100101110001000111
11011001111111110001101111110001
10110010010010111010011101000010
11110111110111111010110000111111
11000001010000101011010100111111
10110111110111111010110000011000
10011001111111110101110001000111
11011001101010100000111011010100
00010100011101110010000111001000
10010110110111110010001011111110
00110011011111100101110110001010
00110011011111100101110110001010

5.3 Mutation
Mutation is performed after crossover. Mutation alters one or more genes with a
probability equal to the mutation rate. (In this example, the mutation rate is set to 0.01.)
1. Generate a sequence of random numbers rk (k = 1, ..., 640). (Here the number of bits
in the whole population is 20 x 32 = 640.)
2. If rk is less than the mutation rate, flip the kth bit in the whole population, i.e. change
it from 1 to 0 or from 0 to 1.
3. The chromosomes kept by reproduction are not subject to mutation, so after mutation
they are restored.
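
This is implemented in nextpopu.m with a random mask and an exclusive-or; a condensed
sketch on a stand-in population:

% Bit-flip mutation, condensed from nextpopu.m, applied to a stand-in
% population of 20 strings of 32 bits.
mutate_rate = 0.01;
popu = rand(20, 32) > 0.5;                   % stand-in 0-1 population
mask = rand(size(popu)) < mutate_rate;       % 1 marks the bits to flip
mutated = xor(popu, mask);                   % flip exactly those bits
% In nextpopu.m the two elite chromosomes are then copied back unchanged.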
The population created after doing mutation on the above population is as follows:
10010100011101110010000111001000
11110010010010111010011101000000
01001100111111110101110001010111
11011001111100011001110110001100
11100110110011000110101110000010
11100110100011000110101110000010
11011100010010111010011101000010
11110011010010010100001101000110
00000010101010100101110000000111
11011001111111110001101111110001
10110010010000111010011101000010
11110111110111111010110000111111
11000001010000101011010100111111
10110111110111111010111000011000
10011001111111100101110001000111
11011001101010100000111011010100
00010100011101110010000111001000
10010110110111110010001011111110
00110011011111100101110110001010
00110011011111100101110110001010

A new population is created as a result of completing one iteration of the Genetic
Algorithm. The procedure can be repeated as many times as desired. In this example,
the test run is terminated after 50 generations. The best value of the objective function in
each generation is listed:
Generation 1: f(0.319799, -1.472160)=730.102530
Generation 2: f(0.406165, -0.558145)=162.226980
Generation 3: f(-0.397681, -0.453223)=57.549628

Generation 4: f(-0.397681, -0.453223)=57.549628


Generation 5: f(-0.390661, -0.557168)=38.178596
Generation 6: f(-0.390661, -0.557168)=38.178596
Generation 7: f(-0.390661, -0.557168)=38.178596
Generation 8: f(-0.390661, -0.557168)=38.178596
Generation 9: f(-0.225620, -0.886580)=24.916053
Generation 10: f(-0.225620, -0.886580)=24.916053
Generation 11: f(-0.225620, -0.886580)=24.916053
Generation 12: f(-0.225620, -0.886580)=24.916053
Generation 13: f(-0.225620, -0.886519)=24.913849
Generation 14: f(-0.218784, -0.886580)=23.439072
Generation 15: f(-0.204318, -0.964401)=21.311702
Generation 16: f(-0.100618, -0.886702)=10.690835
Generation 17: f(-0.100618, -0.980026)=6.364413
Generation 18: f(-0.100618, -0.980026)=6.364413
Generation 19: f(-0.100618, -0.980026)=6.364413
Generation 20: f(-0.100618, -0.980026)=6.364413
Generation 21: f(-0.046418, -0.980026)=3.866719
Generation 22: f(-0.046418, -0.980026)=3.866719
Generation 23: f(-0.046418, -0.980087)=3.865357
Generation 24: f(-0.046418, -0.980453)=3.857242
Generation 25: f(-0.038117, -0.977584)=3.716293
Generation 26: f(-0.038605, -0.980453)=3.663052
Generation 27: f(-0.046418, -0.994125)=3.619268
Generation 28: f(-0.015167, -0.978378)=3.314212
Generation 29: f(-0.015167, -0.978378)=3.314212
Generation 30: f(-0.015167, -0.978378)=3.314212
Generation 31: f(-0.015167, -0.978378)=3.314212
Generation 32: f(-0.015167, -0.980453)=3.273351
Generation 33: f(-0.015167, -0.980453)=3.273351
Generation 34: f(-0.015167, -0.996078)=3.076353
Generation 35: f(-0.014679, -0.996078)=3.072320
Generation 36: f(-0.014679, -0.996078)=3.072320
Generation 37: f(-0.015167, -0.999985)=3.057946
Generation 38: f(-0.015167, -0.999985)=3.057946
Generation 39: f(-0.006867, -0.996078)=3.024020
Generation 40: f(-0.006867, -0.996078)=3.024020
Generation 41: f(-0.006867, -0.996078)=3.024020
Generation 42: f(-0.006867, -0.999985)=3.011883
Generation 43: f(-0.006867, -0.999985)=3.011883
Generation 44: f(-0.006867, -0.999985)=3.011883
Generation 45: f(-0.006867, -0.999985)=3.011883
Generation 46: f(-0.006867, -0.999985)=3.011883
Generation 47: f(-0.006867, -0.999985)=3.011883
Generation 48: f(-0.005158, -0.999924)=3.006779
Generation 49: f(-0.005158, -0.999924)=3.006779
Generation 50: f(-0.005158, -0.999924)=3.006779

6. Results
The results are listed in Table 1. Results for REQPAL and SIMULA are copied from [3].

Table 1: Initial values, results and evaluation of the example problem [3]

Items                  REQPAL          SIMULA          GA (first time)    GA (second time)
Start Point            x = -1.0        x = -1.0
                       y = 1.0         y = 1.0
Iterations             23              246001          50                 50
CPU Time: T (sec.)     0.22E-10        29.5            50.15              26.14
Optimum                X = 0.000104    X = 0.000038    X = -0.005158      X = 0.003998
                       Y = -0.99999    Y = -1.00000    Y = -0.999924      Y = -0.991135
Minimum function Z     3.000000        2.99999         3.006779           3.030143

Figure 2(a): contour plot of the objective function, with the initial population

Figure 2(b): contour plot of the objective function, with the population after the 20th generation

Figure 2(c): contour plot of the objective function, with the population after the 50th generation

Figure 3: Performance of the GA across generations (the poorest, average, and best values
of the objective function, on a scale of x 10^5, plotted against the generation number)

Figure 2(a) is the contour plot of the Goldstein and Price Function, with the initial
population locations denoted by circles. The result after the 20th generation is shown in
Figure 2(b). The result after the 50th generation is shown in Figure 2(c). Figure 3 is a plot
of the best, average, and poorest values of the objective function across 50 generations.
Since we are using reproduction to keep the best two individuals at each generation, the
best curve is monotonically decreasing with respect to generation numbers. The erratic
behavior of the poorest curve is due to the mutation operator, which explores the
landscape in a somewhat random manner.
One suggestion about using the GA: because random numbers are used throughout, there
may be minor differences between the results of different runs, as seen in the GA (first
time) and GA (second time) columns. It is suggested to run the same problem several
times with the same file and then compare the results to obtain the best one.

Related Material

(1) Books on Genetic Algorithms


Holland, J., Adaptation in Natural and Artificial Systems, University of Michigan
Press, Ann Arbor, 1975.
Goldberg, D., Genetic Algorithms in Search, Optimization and Machine Learning,
Addison-Wesley, Reading, MA, 1989.
Michalewicz, Z., Genetic Algorithms + Data Structures = Evolution Programs, 2nd ed.,
Springer-Verlag, New York, 1994.
Mitsuo Gen and Runwei Cheng, Genetic Algorithms and Engineering Design, John
Wiley & Sons, Inc., New York, 1997.

(2) Journals and Special Issues on Genetic Algorithms


Journal of Evolutionary Computation ( http://www-mitpress.mit.edu)
IEEE Transactions on Evolutionary Computation

(3) Publicly accessible Internet services for Genetic Algorithm information


http://www.aic.nrl.navy.mil/galist/
ftp://ftp-bionik.fb10.tu-berlin.de/pub/EC/

Conclusion
The Genetic Algorithm is a relatively simple algorithm that can be implemented in a
straightforward manner. It can be applied to a wide variety of problems including
unconstrained and constrained optimization problems, nonlinear programming, stochastic
programming, and combinatorial optimization problems. An advantage of the Genetic
Algorithm is that it works well for global optimization, especially with poorly behaved
objective functions such as those that are discontinuous or have many local minima. It
also performs adequately with computationally hard problems such as the Traveling
Salesman Problem.

References
1. M. Gen, R. Cheng. Genetic Algorithms and Engineering Design. John Wiley &
Sons, Inc., 1997.
2. M. Gen, Y. Tsujimura, E. Kubota. Solving Job-Shop Scheduling Problem Using
Genetic Algorithms. Proceedings of the 16th International Conference on Computers
and Industrial Engineering, Ashikaga, Japan, 1994.
3. Z. Dong. Mech 580 Course Notes, 1999.
4. J.-S. R. Jang, C.-T. Sun, E. Mizutani. Neuro-Fuzzy and Soft Computing: A
Computational Approach to Learning and Machine Intelligence. Prentice Hall, 1997.
5. H. R. Lewis, C. H. Papadimitriou. Elements of the Theory of Computation. Prentice
Hall, 1991.

Appendix
The MATLAB files used in the example problem were copied from [4] and then modified.
Modifications included using a different objective function, changing the content of the
output, etc.
Filename: zys_ga.m
generation_n = 50;      % Number of generations
popuSize = 20;          % Population size
xover_rate = 1.0;       % Crossover rate
mutate_rate = 0.01;     % Mutation rate
bit_n = 16;             % Bit number for each input variable
global OPT_METHOD       % optimization method
OPT_METHOD = 'ga';      % This is used for display in G_Pfunction2
figure;
blackbg;
obj_fcn = 'G_Pfunction2';  % Objective function
var_n = 2;                 % Number of input variables
range = [-2, 2; -2, 2];    % Range of the input variables
% Plot Goldstein and Price function (g_p function)
g_pfunction;
colormap((jet+white)/2);
% Plot contours of g_p function
figure;
blackbg;
[x, y, z] = g_pfunction;
pcolor(x,y,z); shading interp; hold on;
contour(x, y, z, 20, 'r');
hold off; colormap((jet+white)/2);
axis square; xlabel('X'); ylabel('Y');
t=cputime;
% Initial random population
popu = rand(popuSize, bit_n*var_n) > 0.5;
fprintf('Initial population.\n');
for i=1:popuSize
for j=1:bit_n*var_n
fprintf('%1.0f ',popu(i,j));
end
fprintf('\n');
end
upper = zeros(generation_n, 1);
average = zeros(generation_n, 1);
lower = zeros(generation_n, 1);
% Main loop of GA
for i = 1:generation_n;
k=i;
% delete unnecessary objects
delete(findobj(0, 'tag', 'member'));
delete(findobj(0, 'tag', 'individual'));
delete(findobj(0, 'tag', 'count'));
% Evaluate objective function for each individual
fcn_value = evalpopu(popu, bit_n, range, obj_fcn);
if (i==1),
fprintf('Initial population\n ');
for j=1:popuSize
fprintf('f(%f, %f)=%f\n', ...
bit2num(popu(j, 1:bit_n), range(1,:)), ...
bit2num(popu(j, bit_n+1:2*bit_n), range(2,:)), ...
fcn_value(j));
end
end
% Fill objective function matrices

upper(i) = max(fcn_value);
average(i) = mean(fcn_value);
lower(i) = min(fcn_value);
% display current best
[best, index] = min(fcn_value);
fprintf('Generation %i: ', i);
fprintf('f(%f, %f)=%f\n', ...
bit2num(popu(index, 1:bit_n), range(1,:)), ...
bit2num(popu(index, bit_n+1:2*bit_n), range(2,:)), ...
best);
% generate next population via selection, crossover and mutation
popu = nextpopu(popu, fcn_value, xover_rate, mutate_rate,k);
if(i==1|i==10|i==20|i==30|i==40)
fprintf('Population after the %d th generation.\n',i);
fprintf('Press any key to continue...\n');
pause;
end
end
e=cputime-t;
fprintf('the CPU Time for the whole calculation=%10.5f\n',e);
figure;
blackbg;
x = (1:generation_n)';
plot(x, upper, 'o', x, average, 'x', x, lower, '*');
hold on;
plot(x, [upper average lower]);
hold off;
legend('Poorest', 'Average', 'Best');
xlabel('Generations'); ylabel('Fitness');

Filename: G_Pfunction.m
function [xz,y,z] = G_PFunction(arg1,arg2);
%G_PFunction  A sample function of two variables (the Goldstein and Price function),
%   which is useful for demonstrating MESH, SURF, PCOLOR, CONTOUR, etc.
%   There are several variants of the calling sequence:
%
%       Z = G_PFunction;
%       Z = G_PFunction(N);
%       Z = G_PFunction(V);
%       Z = G_PFunction(X,Y);
%
%       G_PFunction;
%       G_PFunction(N);
%       G_PFunction(V);
%       G_PFunction(X,Y);
%
%       [X,Y,Z] = G_PFunction;
%       [X,Y,Z] = G_PFunction(N);
%       [X,Y,Z] = G_PFunction(V);
%
%   The first variant produces a 33-by-33 matrix.
%   The second variant produces an N-by-N matrix.
%   The third variant produces an N-by-N matrix where N = length(V).
%   The fourth variant evaluates the function at the given X and Y,
%   which must be the same size.  The resulting Z is also that size.
%
%   The next four variants, with no output arguments, do a SURF
%   plot of the result.
%
%   The last three variants also produce two matrices, X and Y, for
%   use in commands such as PCOLOR(X,Y,Z) or SURF(X,Y,Z,DEL2(Z)).
%
%   If not given as input, the underlying matrices X and Y are
%       [X,Y] = MESHGRID(V,V)
%   where V is a given vector, or V is a vector of length N with
%   elements equally spaced from -2 to 2.  If no input argument is
%   given, the default N is 33.

if nargin == 0
dx = 1/8;
[x,y] = meshgrid(-2:dx:2);
elseif nargin == 1
if length(arg1) == 1
[x,y] = meshgrid(-2:4/(arg1-1):2);
else
[x,y] = meshgrid(arg1,arg1);
end
else
x = arg1; y = arg2;
end
z=(1+(x+y+1).^2.*(19-14*x+3*x.^2-14*y+6*x.*y+3*y.^2)).*...
(30+(2*x-3*y).^2.*(18-32*x+12*x.^2+48*y-36*x.*y+27*y.^2));
if nargout > 1
xz = x;
elseif nargout == 1
xz = z;
else
% Self demonstration
disp(' ')
disp(' z=(1+(x+y+1).^2.*(19-14*x+3*x.^2-14*y+6*x.*y+3*y.^2)).*... ')
disp(' (30+(2*x-3*y).^2.*(18-32*x+12*x.^2+48*y-36*x.*y+27*y.^2)) ')
surf(x,y,z)
axis([min(min(x)) max(max(x)) min(min(y)) max(max(y)) ...
min(min(z)) max(max(z))])
xlabel('x'), ylabel('y'), title('G-P Function')
end

Filename: evalpopu.m
function fitness = evalpopu(popu, bit_n, range, obj_fcn)
%EVALPOPU Evaluation of the population's fitness values.
%   popu:    0-1 matrix of popu_n by string_leng
%   bit_n:   number of bits used to represent an input variable
%   range:   range of input variables, a var_n by 2 matrix
%   obj_fcn: objective function (a MATLAB string)
global count
pop_n = size(popu, 1);
fitness = zeros(pop_n, 1);
for count = 1:pop_n,
fitness(count) = evaleach(popu(count, :), bit_n, range, obj_fcn);
end

Filename: evaleach.m
function out = evaleach(string, bit_n, range, obj_fcn)
% EVALEACH Evaluation of each individual's fitness value.
%   string:  bit string representation of an individual
%   bit_n:   number of bits for each input variable
%   range:   range of input variables, a var_n by 2 matrix
%   obj_fcn: objective function (a MATLAB string)
var_n = length(string)/bit_n;
input = zeros(1, var_n);
for i = 1:var_n,
input(i) = bit2num(string((i-1)*bit_n+1:i*bit_n), range(i, :));
end
out = feval(obj_fcn, input);

Filename: bit2num.m
function num = bit2num(bit, range)
% BIT2NUM Conversion from bit string representations to decimal numbers.
%   BIT2NUM(BIT, RANGE) converts a bit string representation BIT (a 0-1
%   vector) to a decimal number, where RANGE is a two-element vector
%   specifying the range of the converted decimal number.
%
%   For example:
%       bit2num([1 1 0 1], [0, 15])
%       bit2num([0 1 1 0 0 0 1], [0, 127])
integer = polyval(bit, 2);
num = integer*((range(2)-range(1))/(2^length(bit)-1)) + range(1);

Filename: blackbg.m
function blackbg
% Change figure background to black
%   Issue this to change the background to black (V4 default)
tmp = version;
if str2num(tmp(1))==5, clf; colordef(gcf, 'black');
end

Filename: nextpopu.m
function new_popu = nextpopu(popu, fitness, xover_rate, mut_rate,k)
new_popu = popu;
popu_s = size(popu, 1);
string_leng = size(popu, 2);
% ====== ELITISM: find the best two and keep them
tmp_fitness = fitness;
[junk, index1] = min(tmp_fitness); % find the best
tmp_fitness(index1) = max(tmp_fitness);
[junk, index2] = min(tmp_fitness); % find the second best
new_popu([1 2], :) = popu([index1 index2], :);
% rescaling the fitness
fitness = max(fitness) - fitness;% keep it positive
total = sum(fitness);
if(k==1)
fprintf('the fitnesses after minus\n');
for i=1:popu_s
fprintf('%10.3f \n',fitness(i));
end
fprintf('the sum of fitnesses %10.5f\n',total);
end
if total == 0,
fprintf('=== Warning: converge to a single point ===\n');
fitness = ones(popu_s, 1)/popu_s;% sum is 1
else
fitness = fitness/sum(fitness);
% sum is 1
end
cum_prob = cumsum(fitness);
if(k==1)
fprintf('the probability of each chromosome, and the cumulative sum \n');
for i=1:popu_s
fprintf('%10.3f %10.3f\n',fitness(i),cum_prob(i));
end
end
% ====== SELECTION and CROSSOVER
for i = 2:popu_s/2,
% === Select two parents based on their scaled fitness values
tmp = find(cum_prob - rand > 0);

parent1 = popu(tmp(1), :);


tmp = find(cum_prob - rand > 0);
parent2 = popu(tmp(1), :);
% === Do crossover
if rand < xover_rate,
% Perform crossover operation
xover_point = ceil(rand*(string_leng-1));
new_popu(i*2-1, :) = ...
[parent1(1:xover_point) parent2(xover_point+1:string_leng)];
new_popu(i*2, :) = ...
[parent2(1:xover_point) parent1(xover_point+1:string_leng)];
end
if(k==1)
fprintf('xover_point = %d \n', xover_point);
fprintf('parent1\n');
for j=1:string_leng
fprintf('%d ',parent1(j));
end
fprintf('\n');
fprintf('parent2\n');
for j=1:string_leng
fprintf('%d ',parent2(j));
end
fprintf('\n');
fprintf('new_popu1\n');
for j=1:string_leng
fprintf('%d ',new_popu(i*2-1,j))
end
fprintf('\n');
fprintf('new_popu2\n');
for j=1:string_leng
fprintf('%d ',new_popu(i*2,j))
end
fprintf('\n');
%       disp(new_popu(i*2-1, :));
%       disp(new_popu(i*2, :));
    end
%   keyboard;
end
if(k==1)
fprintf('the result after crossover of the first population\n');
for i=1:popu_s
for j=1:string_leng
fprintf('%d ',new_popu(i,j))
end
fprintf('\n');
fprintf('\n');
end
end
% ====== MUTATION (elites are not subject to this.)
mask = rand(popu_s, string_leng) < mut_rate;
new_popu = xor(new_popu, mask);
if(k==1)
fprintf('the result after mutation of the first population\n');
for i=1:popu_s
for j=1:string_leng
fprintf('%d ',new_popu(i,j))
end
fprintf('\n');
fprintf('\n');
end
end
% restore the elites
new_popu([1 2], :) = popu([index1 index2], :);

Filename: G_Pfunction2
function z = G_Pfunction2(input)
%G_PFUNCTION2 The Goldstein and Price function.
%   G_Pfunction2(INPUT) returns the value of the Goldstein and Price function at INPUT.
global OPT_METHOD   % optimization method
global PREV_PT      % previous data point, used by the simplex method
x= input(1); y = input(2);
z=(1+(x+y+1)^2*(19-14*x+3*x^2-14*y+6*x*y+3*y^2))...
*(30+(2*x-3*y)^2*(18-32*x+12*x^2+48*y-36*x*y+27*y^2));
if matlabv==4,
property='linestyle';
elseif matlabv==5,
property='marker';
else
error('Unknown MATLAB version!');
end
% Plotting ...
if strcmp(OPT_METHOD, 'ga'), % plot each member; for GA
line(x, y, property, 'o', 'markersize', 15, ...
'clipping', 'off', 'erase', 'xor', 'color', 'w', ...
'tag', 'member', 'linewidth', 2);
else
% plot input point for simplex method
line(x, y, property, '.', 'markersize', 10, ...
'clipping', 'off', 'erase', 'none', 'color', 'k', ...
'tag', 'member');
if ~isempty(PREV_PT),% plotting traj
line([PREV_PT(1) x], [PREV_PT(2) y], 'linewidth', 1, ...
'clipping', 'off', 'erase', 'none', ...
'color', 'k', 'tag', 'traj');
else
% plotting starting point
%     line(x, y, property, 'o', 'markersize', 10, ...
%        'clipping', 'off', 'erase', 'none', ...
%        'color', 'w', 'tag', 'member', 'linewidth', 3);
end
PREV_PT = [x y];
end
drawnow;

Filename: matlabv.m
function ver = matlabv
% MATLAB major version
tmp = version;
ver = str2num(tmp(1));
