IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL. 5, NO. 1, FEBRUARY 2001 41
Brief Papers
An Orthogonal Genetic Algorithm with Quantization for Global Numerical
Optimization
Yiu-Wing Leung, Senior Member, IEEE, and Yuping Wang
Abstract—We design a genetic algorithm called the orthogonal genetic algorithm with quantization for global numerical optimization with continuous variables. Our objective is to apply methods of experimental design to enhance the genetic algorithm, so that the resulting algorithm can be more robust and statistically sound. A quantization technique is proposed to complement an experimental design method called orthogonal design. We apply the resulting methodology to generate an initial population of points that are scattered uniformly over the feasible solution space, so that the algorithm can evenly scan the feasible solution space once to locate good points for further exploration in subsequent iterations. In addition, we apply the quantization technique and orthogonal design to tailor a new crossover operator, such that this crossover operator can generate a small, but representative sample of points as the potential offspring. We execute the proposed algorithm to solve 15 benchmark problems with 30 or 100 dimensions and very large numbers of local minima. The results show that the proposed algorithm can find optimal or close-to-optimal solutions.

Index Terms—Evolutionary computation, experimental design methods, genetic algorithms, numerical algorithms, numerical optimization, orthogonal array, orthogonal design.

I. INTRODUCTION

… robust and statistically sound. In [11], they applied an experimental design method called orthogonal design to enhance the crossover operator for a zero–one integer programming problem, and they called the resulting algorithm the orthogonal genetic algorithm (OGA). Numerical results demonstrated that the OGA had a significantly better performance than the traditional GA on the problems studied [11].

Orthogonal design is applicable to discrete variables, but it is not applicable to continuous variables [9], [10]. In [11], the zero–one integer programming problem only involves discrete variables, and hence the orthogonal design can be applied to enhance the crossover operator for this problem. However, the orthogonal design is not directly applicable to enhance the genetic algorithm for optimization with continuous variables.

In this paper, we design a genetic algorithm called the orthogonal genetic algorithm with quantization (OGA/Q) for global numerical optimization with continuous variables. We propose a quantization technique to complement the orthogonal design, so that we can apply the resulting methodology to enhance the genetic algorithm for optimization with continuous variables. In particular, we propose the following two enhancements.
Manuscript received December 22, 1998; revised October 15, 1999 and April 10, 2000. This work was supported by the HKBU Faculty Research Grant FRG/97-98/II-08.
Y.-W. Leung is with the Department of Computer Science, Hong Kong Baptist University, Kowloon Tong, Hong Kong (e-mail: ywleung@comp.hkbu.edu.hk).
Y. Wang is with the Department of Applied Mathematics, Xidian University, Xi'an 710071, P.R. China.
Publisher Item Identifier S 1089-778X(01)00999-7.

II. PRELIMINARY

A. Problem Definition

We consider the following global optimization problem:

minimize f(x), x = (x_1, x_2, …, x_N)
subject to x_i ∈ [l_i, u_i], i = 1, 2, …, N.
1089–778X/01$10.00 © 2001 IEEE
TABLE I
EXPERIMENTAL DESIGN PROBLEM WITH THREE FACTORS AND THREE LEVELS PER FACTOR
In this subsection, we design a simple permutation method to construct orthogonal arrays of this class.

We denote the jth column of the orthogonal array L_M(Q^N) by a_j. Columns a_j for j = (Q^(k−1) − 1)/(Q − 1) + 1, k = 1, 2, …, J, are called the basic columns, and the others are called the nonbasic columns. We first construct the basic columns, and then construct the nonbasic columns. The details are as follows.

Algorithm 1: Construction of Orthogonal Array L_M(Q^N):
Step 1: Construct the basic columns as follows:
FOR k = 1 TO J DO
BEGIN
    j = (Q^(k−1) − 1)/(Q − 1) + 1
    FOR i = 1 TO M DO
        a_(i,j) = ⌊(i − 1)/Q^(J−k)⌋ mod Q
END.
Step 2: Construct the nonbasic columns as follows:
FOR k = 2 TO J DO
BEGIN
    j = (Q^(k−1) − 1)/(Q − 1) + 1
    FOR s = 1 TO j − 1 DO
        FOR t = 1 TO Q − 1 DO
            a_(j+(s−1)(Q−1)+t) = (a_s × t + a_j) mod Q
END.

Example 1: Suppose Q = 3 and J = 2, and we want to construct the orthogonal array L_9(3^4). The execution of Algorithm 1 is as follows.
Step 1: Construct the basic columns a_1 and a_2 as follows. When k = 1, j is found to be 1 and a_1 can be found to be (0, 0, 0, 1, 1, 1, 2, 2, 2)^T. When k = 2, j is found to be 2 and a_2 can be found to be (0, 1, 2, 0, 1, 2, 0, 1, 2)^T.
Step 2: Construct the nonbasic columns a_3 and a_4 as follows. Since J = 2, j is always equal to 2, and hence s = 1 and t ∈ {1, 2}. When t = 1, a_3 is given by (a_1 + a_2) mod 3 = (0, 1, 2, 1, 2, 0, 2, 0, 1)^T. When t = 2, a_4 is given by (2a_1 + a_2) mod 3 = (0, 1, 2, 2, 0, 1, 1, 2, 0)^T.
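The permutation construction of Algorithm 1 can be sketched in Python. This is an illustrative sketch, not the paper's code: the function name `construct_oa` and the list-of-lists representation are assumptions, and the column formulas follow the basic/nonbasic construction for arrays with M = Q^J rows and N = (Q^J − 1)/(Q − 1) columns.

```python
def construct_oa(Q, J):
    """Construct the orthogonal array L_M(Q^N) with M = Q**J rows and
    N = (Q**J - 1)//(Q - 1) columns by the permutation method of Algorithm 1."""
    M = Q ** J
    N = (Q ** J - 1) // (Q - 1)
    a = [[0] * N for _ in range(M)]          # a[i][j], 0-based indices
    # Step 1: basic columns
    for k in range(1, J + 1):
        j = (Q ** (k - 1) - 1) // (Q - 1) + 1    # 1-based column index
        for i in range(1, M + 1):
            a[i - 1][j - 1] = ((i - 1) // Q ** (J - k)) % Q
    # Step 2: nonbasic columns, built from pairs of earlier columns
    for k in range(2, J + 1):
        j = (Q ** (k - 1) - 1) // (Q - 1) + 1
        for s in range(1, j):
            for t in range(1, Q):
                col = j + (s - 1) * (Q - 1) + t   # 1-based
                for i in range(M):
                    a[i][col - 1] = (a[i][s - 1] * t + a[i][j - 1]) % Q
    # Step 3: increment every entry by one, so levels run 1..Q
    return [[v + 1 for v in row] for row in a]

oa = construct_oa(3, 2)   # the 9 x 4 array L_9(3^4) of Example 1
```

For Q = 3 and J = 2 this yields the 9 × 4 array L_9(3^4) used in Example 1; in any two columns, each ordered pair of levels appears exactly once.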
Fig. 2. Quantization for a two-dimensional optimization problem; the domains of x_1 and x_2 are [1.0, 9.0] and [1.0, 7.0], respectively. Each factor is quantized into five levels. After quantization, the feasible solution space contains 25 chromosomes.
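The quantization illustrated in Fig. 2, combined with cost-based selection of an initial population, can be sketched in Python. The function names, the toy sphere objective, and the small full-factorial array standing in for an orthogonal array are illustrative assumptions, not the paper's implementation.

```python
def quantize(l, u, Q1):
    """Quantize each domain [l[i], u[i]] into Q1 evenly spaced levels."""
    return [[l[i] + j * (u[i] - l[i]) / (Q1 - 1) for j in range(Q1)]
            for i in range(len(l))]

def initial_population(f, l, u, oa, pop_size):
    """Map each row of an orthogonal array (1-based levels) to a chromosome
    via the level table, then keep the pop_size cheapest chromosomes."""
    Q1 = max(max(row) for row in oa)
    levels = quantize(l, u, Q1)
    chroms = [[levels[i][row[i] - 1] for i in range(len(l))] for row in oa]
    chroms.sort(key=f)
    return chroms[:pop_size]

# Toy usage: a 2-D sphere function on [-5, 5]^2 with three levels per factor.
# For two factors at three levels, the 9-point full factorial is itself an
# orthogonal array, so it stands in for L_9 here.
oa = [[a, b] for a in (1, 2, 3) for b in (1, 2, 3)]
pop = initial_population(lambda x: sum(v * v for v in x),
                         [-5.0, -5.0], [5.0, 5.0], oa, 4)
```

Because the levels are evenly spaced, the sampled points cover the box uniformly, and the cheapest of them (here the origin, the sphere minimum) survives into the initial population.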
Step 3: Increment a_(i,j) by one for all i and j. Concatenate a_1, a_2, a_3, and a_4 to form L_9(3^4), which is shown in (2).

III. ORTHOGONAL GENETIC ALGORITHM WITH QUANTIZATION

We define x = (x_1, x_2, …, x_N) to be a chromosome with cost f(x). The optimization problem is equivalent to finding a chromosome of minimal cost.

A. Generation of Initial Population

Before an optimization problem is solved, we may have no information about the location of the global minimum. It is desirable that the chromosomes of the initial population be scattered uniformly over the feasible solution space, so that the algorithm can explore the whole solution space evenly.

We observe that an orthogonal array specifies a small number of combinations that are scattered uniformly over the space of all the possible combinations. Therefore, orthogonal design is a potential method for generating a good initial population.

We define x_i to be the ith factor, so that each chromosome has N factors. These factors are continuous, but the orthogonal design is applicable to discrete factors only. To overcome this issue, we quantize each factor into a finite number of values. In particular, we quantize the domain [l_i, u_i] of x_i into Q_1 evenly spaced levels α_(i,1), α_(i,2), …, α_(i,Q_1), where the design parameter Q_1 is odd and α_(i,j) is given by

α_(i,j) = l_i + (j − 1)(u_i − l_i)/(Q_1 − 1)  for j = 1, 2, …, Q_1.  (4)

After quantization, x_i has Q_1 possible values, and hence the feasible solution space contains Q_1^N points. We apply orthogonal design to select a small sample of points that are scattered uniformly over the feasible solution space.

We first construct a suitable orthogonal array. Recall that Algorithm 1 can only construct L_M(Q^N'), where M = Q^J and N' = (Q^J − 1)/(Q − 1) for a positive integer J [see (3)]. Since the problem dimension N is given, there may not exist J and N' fulfilling this condition. We bypass this restriction as follows. We choose the smallest J such that

(Q_1^J − 1)/(Q_1 − 1) ≥ N.  (5)

We execute Algorithm 1 to construct an orthogonal array with N' = (Q_1^J − 1)/(Q_1 − 1) factors, and then delete the last N' − N columns to get an orthogonal array with N factors. The details are given in the following algorithm.

Algorithm 2: Construction of L_(M_1)(Q_1^N):
Step 1: Select the smallest J fulfilling (Q_1^J − 1)/(Q_1 − 1) ≥ N.
Step 2: If (Q_1^J − 1)/(Q_1 − 1) = N, then N' = N; else N' = (Q_1^J − 1)/(Q_1 − 1).
Step 3: Execute Algorithm 1 to construct the orthogonal array L_(M_1)(Q_1^(N')), where M_1 = Q_1^J.
Step 4: Delete the last N' − N columns of L_(M_1)(Q_1^(N')) to get L_(M_1)(Q_1^N).

Among these M_1 potential chromosomes, we select the chromosomes having the smallest cost as the initial population, where the number selected is the population size. In this manner, we have evenly scanned the feasible solution space once to locate potentially good points for further exploration in subsequent iterations.

When the feasible solution space is large, it may be desirable to generate more potential chromosomes for a better coverage. However, the value of M_1 depends on Q_1 and J, and it cannot be increased arbitrarily. To overcome this issue, we divide the feasible solution space into S subspaces, where S is a design parameter. In particular, we choose the ith dimension and divide the space along this dimension into the subspaces [l(1), u(1)], [l(2), u(2)], …, [l(S), u(S)] given by (8a) and (8b), where e_i is an N-dimensional vector whose ith element is one and all of the other elements are zero. We apply L_(M_1)(Q_1^N) to each subspace to generate M_1 chromosomes, so that we get a total of S·M_1 potential chromosomes for the initial population. We select the chromosomes having the smallest cost as the initial population.

The details for generating an initial population are given as follows.

Algorithm 3: Generation of Initial Population:
Step 1: Divide the feasible solution space into S subspaces based on (8a) and (8b).
Step 2: Quantize each subspace based on (4), and apply L_(M_1)(Q_1^N) to select M_1 potential chromosomes from it based on (7).
Step 3: Among the S·M_1 potential chromosomes, select the chromosomes having the smallest cost as the initial population.

In Example 3, the subspace [l(1), u(1)] is quantized based on (4), and L_9(3^4) is applied to select nine chromosomes based on (7). Fig. 3 shows the distribution of the nine selected chromosomes in the space [l(1), u(1)]; we proceed in a similar manner for the other four subspaces.

B. Orthogonal Crossover with Quantization

We design a new crossover operator called orthogonal crossover with quantization. It acts on two parents. It quantizes the solution space defined by these parents into a finite number
Fig. 3. Distribution of the nine selected chromosomes in the space [l(1), u(1)] in Example 3.
of points, and then applies orthogonal design to select a small, but representative sample of points as the potential offspring.

Consider any two parents x = (x_1, x_2, …, x_N) and y = (y_1, y_2, …, y_N). They define the solution space [l_p, u_p], where

l_p = (min(x_1, y_1), min(x_2, y_2), …, min(x_N, y_N)) and u_p = (max(x_1, y_1), max(x_2, y_2), …, max(x_N, y_N)).  (9)

We quantize each domain of [l_p, u_p] into Q_2 levels such that the difference between any two successive levels is the same. Specifically, we quantize the domain [min(x_i, y_i), max(x_i, y_i)] of the ith dimension into the levels

β_(i,j) = min(x_i, y_i) + (j − 1)(max(x_i, y_i) − min(x_i, y_i))/(Q_2 − 1), j = 1, 2, …, Q_2.  (10)

We denote this quantized solution space by [l_p, u_p]. As the population is being evolved and improved, the population members are getting closer to each other, so that the solution space defined by two parents is becoming smaller. Since Q_2 is fixed, the quantized points are getting closer, and hence we can get increasingly precise results.

After quantizing [l_p, u_p], we apply orthogonal design to select a small, but representative sample of points as the potential offspring, and then select the ones with the smallest cost to be the offspring. Each pair of parents should not produce too many potential offspring in order to avoid a large number of function evaluations during selection. For this purpose, we divide the N variables into F groups, where F is a small design parameter, and each group is treated as one factor. Consequently, the corresponding orthogonal array has a small number of combinations, and hence a small number of potential offspring are generated. Specifically, we randomly generate F − 1 integers k_1, k_2, …, k_(F−1) such that 1 < k_1 < k_2 < … < k_(F−1) < N, and then create the following F factors for any chromosome z = (z_1, z_2, …, z_N):

f_1 = (z_1, …, z_(k_1)), f_2 = (z_(k_1+1), …, z_(k_2)), …, f_F = (z_(k_(F−1)+1), …, z_N).  (11)
Since x_1, x_2, …, x_N have been quantized, we define the following Q_2 levels for the ith factor f_i, where each level of a factor combines the corresponding quantized values of its variables:

level j of f_i = (β_(k_(i−1)+1, j), β_(k_(i−1)+2, j), …, β_(k_i, j)), j = 1, 2, …, Q_2,  (12)

with k_0 = 0 and k_F = N.

Algorithm 4: Construction of L_(M_2)(Q_2^F):
Step 1: Select the smallest J fulfilling (Q_2^J − 1)/(Q_2 − 1) ≥ F.
Step 2: If (Q_2^J − 1)/(Q_2 − 1) = F, then F' = F; else F' = (Q_2^J − 1)/(Q_2 − 1).
Step 3: Execute Algorithm 1 to construct the orthogonal array L_(M_2)(Q_2^(F')), where M_2 = Q_2^J.
Step 4: Delete the last F' − F columns of L_(M_2)(Q_2^(F')) to get L_(M_2)(Q_2^F).

We apply L_(M_2)(Q_2^F) to generate M_2 of the Q_2^F possible chromosomes, as given by (13).

The details of orthogonal crossover with quantization are given as follows.

Algorithm 5: Orthogonal Crossover with Quantization:
Step 1: Quantize [l_p, u_p] based on (10).
Step 2: Randomly generate F − 1 integers k_1, k_2, …, k_(F−1) such that 1 < k_1 < k_2 < … < k_(F−1) < N. Create F factors based on (11).
Step 3: Apply L_(M_2)(Q_2^F) to generate M_2 potential offspring based on (13).

Example 4: Consider a five-dimensional optimization problem. Let the two parents be x = (x_1, …, x_5) and y = (y_1, …, y_5). These parents define the solution space [l_p, u_p] in (9). We choose values for Q_2 and F. The execution of Algorithm 5 is as follows.
Step 1: Quantize [l_p, u_p] based on (10).

C. Orthogonal Genetic Algorithm with Quantization

We generate a good initial population, and then evolve and improve the population iteratively. In each iteration, we apply orthogonal crossover with quantization and mutation to generate a set of potential offspring. Among these potential offspring and the parents, we select the chromosomes with the least cost to form the next generation. We let P(gen) be the population of chromosomes in the gen-th generation. The details of the overall algorithm are as follows.

Orthogonal Genetic Algorithm with Quantization:
Step 1: Initialization
    Step 1.1: Execute Algorithm 2 to construct L_(M_1)(Q_1^N).
    Step 1.2: Execute Algorithm 3 to generate an initial population. Initialize the generation number gen to 0.
    Step 1.3: Execute Algorithm 4 to construct L_(M_2)(Q_2^F).
Step 2: Population Evolution
    WHILE (stopping condition is not met) DO
    BEGIN
        Step 2.1: Orthogonal Crossover with Quantization—Each chromosome is selected for crossover with probability p_c.
TABLE V
PERFORMANCE OF OGA/Q

TABLE VI
COMPARISON BETWEEN OGA/Q AND CGA, WHERE CGA IS EXECUTED TO SOLVE THE TEST FUNCTIONS IN THE CONTROL EXPERIMENT

TABLE VIII
COMPARISON BETWEEN OGA/Q AND ESA, WHERE THE RESULTS FOR ESA ARE OBTAINED FROM [4]

TABLE X
COMPARISON BETWEEN OGA/Q AND CEP/GMO, WHERE THE RESULTS FOR CEP/GMO ARE OBTAINED FROM [18]; IN THIS EXPERIMENT, THE DIMENSION OF f IS 30, THE SAME AS THE DIMENSION ADOPTED IN [18]
ACKNOWLEDGMENT
The authors sincerely thank the reviewers for their construc-
tive comments, and Dr. D. B. Fogel for his suggestions on im-
proving the presentation.
REFERENCES
[1] W. W. Hager, W. Hearn, and P. M. Pardalos, Eds., Large Scale Optimiza-
tion: State of the Art. Norwell, MA: Kluwer, 1994.
[2] C. A. Floudas and P. M. Pardalos, Eds., State of the Art in Global Op-
timization: Computational Methods and Applications. Norwell, MA:
Kluwer, 1996.
[3] M. A. Styblinski and T.-S. Tang, "Experiments in nonconvex optimization: Stochastic approximation and function smoothing and simulated annealing," Neural Networks, vol. 3, pp. 467–483, 1990.
[4] P. Siarry, G. Berthiau, F. Durbin, and J. Haussy, “Enhanced simulated
annealing for globally minimizing functions of many-continuous vari-
ables,” ACM Trans. Math. Software, vol. 23, pp. 209–228, June 1997.
[5] X. Yao and Y. Liu, "Fast evolution strategies," in Evolutionary Programming VI, P. J. Angeline, R. Reynolds, J. McDonnell, and R. Eberhart, Eds. Berlin, Germany: Springer-Verlag, 1997, pp. 151–161. (Available at ftp://www.cs.adfa.edu.au/pub/xin/yao_liu_ep97.ps.gz)
[6] D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning. Reading, MA: Addison-Wesley, 1989.
[7] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, 3rd ed. Berlin, Germany: Springer-Verlag, 1996.
[8] Y. W. Leung and Q. Zhang, "Evolutionary algorithms + experimental design methods: A hybrid approach for hard optimization and search problems," Res. Grant Proposal, Hong Kong Baptist Univ., 1997. (A short version is available at http://www.comp.hkbu.edu.hk/~ywleung/prop/EA_EDM.doc)
[9] D. C. Montgomery, Design and Analysis of Experiments, 3rd ed. New York: Wiley, 1991.
[10] C. R. Hicks, Fundamental Concepts in the Design of Experiments, 4th ed. TX: Saunders College Publishing, 1993.
[11] Q. Zhang and Y. W. Leung, "An orthogonal genetic algorithm for multimedia multicast routing," IEEE Trans. Evol. Comput., vol. 3, pp. 53–62, Apr. 1999.
[12] Q. Wu, "On the optimality of orthogonal experimental design," Acta Math. Appl. Sinica, vol. 1, no. 4, pp. 283–299, 1978.
[13] Math. Stat. Res. Group, Chinese Acad. Sci., Orthogonal Design (in Chinese). Beijing: People Education Pub., 1975.
[14] J. Yen and B. Lee, "A simplex genetic algorithm hybrid," in Proc. 1997 IEEE Int. Conf. Evolutionary Computation, IN, Apr. 1997, pp. 175–180.
[15] J. Renders and H. Bersini, "Hybridizing genetic algorithms with hill-climbing methods for global optimization: Two possible ways," in Proc. 1st IEEE Conf. Evolutionary Computation, Orlando, FL, June 1994, pp. 312–317.
[16] H. P. Schwefel, Evolution and Optimum Seeking. New York: Wiley, 1995.
[17] P. J. Angeline, "Evolutionary optimization versus particle swarm optimization: Philosophy and performance differences," in Proc. Evol. Prog. VII, V. W. Porto, N. Saravanan, D. Waagen, and A. E. Eiben, Eds. Berlin, Germany: Springer-Verlag, 1998, pp. 601–610.
[18] K. Chellapilla, "Combining mutation operators in evolutionary programming," IEEE Trans. Evol. Comput., vol. 2, pp. 91–96, Sept. 1998.