
Differential Evolution for Multi-Objective Optimization

B.V. Babu*
Assistant Dean - ESD & Head - Chemical Engineering & Engg. Tech. Depts.
B.I.T.S., Pilani 333 031 (India)

M. Mathew Leenus Jehan
Chemical Engineering Department
B.I.T.S., Pilani 333 031 (India)

Abstract- Two test problems on multi-objective optimization (one simple general problem and the second one an engineering application of a cantilever design problem) are solved using Differential Evolution (DE). DE is a population based search algorithm, which is an improved version of Genetic Algorithm (GA). The simulations carried out involved solving (1) both problems using the penalty function method, and (2) the first problem using the weighing factor method and finding the Pareto optimum set for the chosen problem. DE is found to be robust and faster in optimization. To consolidate the power of DE, the classical Himmelblau function, with bounds on the variables, is also solved using both DE and GA. DE is found to give the exact optimum value within fewer generations compared to simple GA.

1 Introduction

Optimization is a procedure of finding and comparing feasible solutions until no better solution can be found. Solutions are termed good or bad in terms of an objective, which is often the cost of fabrication, amount of harmful gases, efficiency of a process, product reliability, or other factors (Deb, 2001). Most real world problems involve more than one objective, making the multiple conflicting objectives interesting to solve. Classical optimization methods are inconvenient for solving multi-objective optimization problems, as they could at best find one solution in one simulation run.

As real world problems involve the simulation and optimization of multiple objectives, the results and solutions of these problems are conceptually different from those of single objective function problems. In multi-objective optimization, there may not exist a solution that is best with respect to all objectives. Instead, there are several equally good solutions, which are known as Pareto optimal solutions. A Pareto optimal set of solutions is such that when we go from any one point to another in the set, at least one objective function improves and at least one other worsens (Yee et al., 2003). None of these solutions dominates any other, and all the sets of decision variables on the Pareto front are equally good.

However, Evolutionary Algorithms (EAs) can find multiple optimal solutions in one single simulation run due to their population-based search approach. Thus, EAs are ideally suited for multi-objective optimization problems. A detailed account of multi-objective optimization using evolutionary algorithms and some of the applications using genetic algorithms can be found in the literature (Deb, 2001; Rajesh et al., 2000; Rajesh et al., 2001; Oh et al., 2002).

2 Differential Evolution (DE)

Differential Evolution (Price and Storn, 1997) is an improved version of Genetic Algorithm (Goldberg, 1989) for faster optimization.

Genetic Algorithm (GA) is a search technique developed by Holland (1975) which mimics the principle of natural evolution. In this technique (simple GA), the decision variables are first coded into binary strings of 0s and 1s to create a population pool. Each of these vectors, generally called chromosomes, is then mapped into its real value using specified lower and upper bounds. A model of the process then computes an objective function value for each chromosome, which gives the fitness of the chromosome.

The optimization search proceeds through three operators: reproduction, crossover and mutation. The reproduction (selection) operator selects good strings in a population and forms the mating pool. The chromosomes are copied based on their fitness value; no new strings are produced in this operation. Crossover allows a new string to be formed by exchanging some portion of a string (chosen randomly) with the string of another chromosome, generating child chromosomes in the mating pool. If the child chromosomes are less fit than the parent chromosomes, they will slowly die out in subsequent generations. The effect of crossover can be detrimental or beneficial, hence not all the strings are used for crossover. A crossover probability, pc, is used, so that only 100*pc percent of the strings in the mating pool are involved in the crossover operation, while the rest continue unchanged to the next generation. Mutation is the last operation. It is used to further perturb the child vector using a mutation probability, pm. The mutation alters the string locally to create a better string. Mutation is needed to create a point in the neighborhood of the current point, thereby achieving a local search and maintaining the diversity of the population. The entire process is repeated till some termination criterion is met. A detailed description of GA is documented in Holland (1975) and Goldberg (1989).
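As an illustration of the simple-GA operators just described, the following C++ fragment is a minimal sketch of binary decoding, one-point crossover applied with probability pc, and bitwise mutation with probability pm. The data types, helper functions and the choice of one-point crossover are illustrative assumptions; the paper itself does not give an implementation.

    // Minimal sketch (not from the paper) of the simple-GA operators described above:
    // binary decoding, one-point crossover with probability pc, bitwise mutation with
    // probability pm. A short chromosome (fewer than 32 bits) is assumed.
    #include <algorithm>
    #include <cstdlib>
    #include <vector>

    typedef std::vector<int> Chromosome;              // binary string of 0/1 genes

    double rand01() { return std::rand() / (double) RAND_MAX; }

    // Map a binary chromosome to its real value between the given bounds.
    double decode(const Chromosome& c, double lower, double upper) {
        double value = 0.0;
        for (size_t i = 0; i < c.size(); ++i) value = 2.0 * value + c[i];
        double maxValue = (1u << c.size()) - 1.0;
        return lower + (upper - lower) * value / maxValue;
    }

    // One-point crossover, applied with probability pc; otherwise the parents pass on unchanged.
    void crossover(Chromosome& a, Chromosome& b, double pc) {
        if (rand01() > pc) return;
        size_t cut = 1 + std::rand() % (a.size() - 1); // random cut point
        for (size_t i = cut; i < a.size(); ++i) std::swap(a[i], b[i]);
    }

    // Bitwise mutation with probability pm per gene (local perturbation of the string).
    void mutate(Chromosome& c, double pm) {
        for (size_t i = 0; i < c.size(); ++i)
            if (rand01() < pm) c[i] = 1 - c[i];
    }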



Unlike simple GA, which uses binary coding for representing problem parameters, Differential Evolution (DE) uses real coding of floating point numbers. Among DE's advantages are its simple structure, ease of use, speed and robustness.

The simple adaptive scheme used by DE ensures that these mutation increments are automatically scaled to the correct magnitude. Similarly, DE uses a non-uniform crossover in that the parameter values of the child vector are inherited in unequal proportions from the parent vectors. For reproduction, DE uses a tournament selection where the child vector competes against one of its parents.

The overall structure of the DE algorithm resembles that of most other population based searches. The parallel version of DE maintains two arrays, each of which holds a population of NP D-dimensional, real valued vectors. The primary array holds the current vector population, while the secondary array accumulates vectors that are selected for the next generation. In each generation, NP competitions are held to determine the composition of the next generation. Every pair of vectors (Xa, Xb) defines a vector differential: Xa - Xb. When Xa and Xb are chosen randomly, their weighted differential is used to perturb another randomly chosen vector Xc. This process can be mathematically written as X'c = Xc + F(Xa - Xb). The scaling factor F is a user supplied constant in the range (0 < F <= 1.2). The optimal value of F for most functions lies in the range of 0.4 to 1.0 (Price & Storn, 1997). Then, in every generation, each primary array vector Xi is targeted for crossover with a vector like X'c to produce a trial vector Xt. Thus the trial vector is the child of two parents, a noisy random vector and the target vector against which it must compete. The non-uniform crossover is used with a crossover constant CR in the range 0 <= CR <= 1. CR actually represents the probability that the child vector inherits the parameter values from the noisy random vector. When CR = 1, for example, every trial vector parameter is certain to come from X'c. If, on the other hand, CR = 0, all but one trial vector parameter comes from the target vector. To ensure that Xt differs from Xi by at least one parameter, the final trial vector parameter always comes from the noisy random vector, even when CR = 0. Then the cost of the trial vector is compared with that of the target vector, and the vector with the lower cost of the two survives into the next generation. In all, just three factors control evolution under DE: the population size, NP; the weight applied to the random differential, F; and the crossover constant, CR.

2.1 Different strategies of DE

Different strategies can be adopted in the DE algorithm depending upon the type of problem to which DE is applied. The strategies can vary based on the vector to be perturbed, the number of difference vectors considered for perturbation, and finally the type of crossover used. The following are the ten different working strategies proposed by Price & Storn (2003):

1. DE/best/1/exp
2. DE/rand/1/exp
3. DE/rand-to-best/1/exp
4. DE/best/2/exp
5. DE/rand/2/exp
6. DE/best/1/bin
7. DE/rand/1/bin
8. DE/rand-to-best/1/bin
9. DE/best/2/bin
10. DE/rand/2/bin

The general convention used above is DE/x/y/z. DE stands for Differential Evolution, x represents a string denoting the vector to be perturbed, y is the number of difference vectors considered for perturbation of x, and z stands for the type of crossover being used (exp: exponential; bin: binomial). Thus, the working algorithm outlined above is the seventh strategy of DE, i.e. DE/rand/1/bin. Hence the perturbation can be either in the best vector of the previous generation or in any randomly chosen vector. Similarly, for perturbation either single or two vector differences can be used. For perturbation with a single vector difference, out of the three distinct randomly chosen vectors, the weighted vector differential of any two vectors is added to the third one. Similarly, for perturbation with two vector differences, five distinct vectors, other than the target vector, are chosen randomly from the current population. Out of these, the weighted vector difference of each pair of any four vectors is added to the fifth one for perturbation. In exponential crossover, the crossover is performed on the D variables in one loop as long as it is within the CR bound. The first time a randomly picked number between 0 and 1 goes beyond the CR value, no further crossover is performed and the remaining D variables are left intact. In binomial crossover, the crossover is performed on each of the D variables whenever a randomly picked number between 0 and 1 is within the CR value. So for high values of CR, the exponential and binomial crossovers yield similar results. The strategy to be adopted for each problem is to be determined separately by trial and error. A strategy that works out to be the best for a given problem may not work well when applied to a different problem.

Price & Storn (1997) gave the working principle of DE with a single strategy. Later on, they suggested ten different strategies of DE (Price & Storn, 2003). A strategy that works out to be the best for a given problem may not work well when applied to a different problem. Also, the strategy and key parameters to be adopted for a problem are to be determined by trial & error. However, strategy-7 (DE/rand/1/bin) is the most successful and the most widely used strategy. The key parameters of control in DE are: NP - the population size, CR - the crossover constant, and F - the weight applied to the random differential (scaling factor). Babu et al. (2002) proposed a new concept called 'nested DE' to automate the choice of DE key parameters.
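To make the exp/bin distinction in the naming convention concrete, the fragment below sketches both crossover variants in C++ for a single target vector and its noisy (mutated) vector. The function and variable names are assumptions made for illustration; only the logic of the two crossover rules follows the description above.

    // Sketch (illustrative names, not the authors' code) of the two DE crossover
    // variants in the strategy convention: bin = binomial, exp = exponential.
    // 'noisy' is the mutated vector Xa + F*(Xb - Xc); 'target' is the vector Xi.
    #include <cstdlib>
    #include <vector>

    double rand01() { return std::rand() / (double) RAND_MAX; }

    // Binomial crossover: each of the D parameters is tested against CR independently;
    // index jrand guarantees that at least one parameter comes from the noisy vector.
    std::vector<double> binomialCrossover(const std::vector<double>& target,
                                          const std::vector<double>& noisy, double CR) {
        std::vector<double> trial = target;
        int D = (int) target.size();
        int jrand = std::rand() % D;
        for (int j = 0; j < D; ++j)
            if (rand01() < CR || j == jrand) trial[j] = noisy[j];
        return trial;
    }

    // Exponential crossover: parameters are copied from the noisy vector in one run,
    // starting at a random position, until a random number first exceeds CR.
    std::vector<double> exponentialCrossover(const std::vector<double>& target,
                                             const std::vector<double>& noisy, double CR) {
        std::vector<double> trial = target;
        int D = (int) target.size();
        int j = std::rand() % D;
        for (int count = 0; count < D; ++count) {
            trial[j] = noisy[j];
            j = (j + 1) % D;
            if (rand01() >= CR) break;                // the first failure ends the copy run
        }
        return trial;
    }

For CR close to 1, both variants copy nearly all parameters from the noisy vector, which is why, as noted above, they yield similar results at high CR.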

In addition, some new strategies have been proposed and successfully applied to the optimization of an extraction process (Babu & Angira, 2003a).

As detailed above, the crucial idea behind DE is a scheme for generating trial parameter vectors. Basically, DE adds the weighted difference between two population vectors to a third vector. Price & Storn (2003) have given some simple rules for choosing the key parameters of DE for any given application. Normally, NP should be about 5 to 10 times the dimension (number of parameters in a vector) of the problem. As for F, it lies in the range 0.4 to 1.0. Initially F = 0.5 can be tried, and then F and/or NP is increased if the population converges prematurely. A good first choice for CR is 0.1, but in general CR should be as large as possible.

DE has been successfully applied in various fields. Some of the successful applications of DE include: digital filter design (Storn, 1995), batch fermentation process (Chiou & Wang, 1999; Wang & Cheng, 1999), estimation of heat transfer parameters in a trickle bed reactor (Babu & Sastry, 1999), optimal design of heat exchangers (Babu & Munawar, 2000; 2001), synthesis & optimization of heat integrated distillation systems (Babu & Singh, 2000), optimization of an alkylation reaction (Babu & Gaurav, 2000), scenario-integrated optimization of dynamic systems (Babu & Gautam, 2001), optimization of non-linear functions (Babu & Angira, 2001a), optimization of thermal cracker operation (Babu & Angira, 2001b), global optimization of MINLP problems (Babu & Angira, 2002a), optimization of non-linear chemical processes (Babu & Angira, 2002b), global optimization of non-linear chemical engineering processes (Angira & Babu, 2003), optimization of water pumping system (Babu & Angira, 2003b), optimization of biomass pyrolysis (Babu & Chaurasia, 2003), etc. Many engineering applications using various evolutionary algorithms have been reported in the literature (Dasgupta & Michalewicz, 1997; Onwubolu & Babu, 2003). DE applications on multi-objective optimization are scarce (Abbass 2001; 2002). In this study, DE is applied to two test problems of multi-objective optimization and three test problems of single-objective optimization with bounds on variables. The results are compared with those obtained using GA.

3 Pseudo Code for DE

The pseudo code of DE used in the present study is given below:

Choose a seed for the random number generator.
Initialize the values of D, NP, CR, F and MAXGEN (maximum number of generations).
Initialize all the vectors of the population randomly. The variables are normalized within the bounds; hence generate a random number between 0 and 1 for all the design variables for initialization.
    for i = 1 to NP
        { for j = 1 to D
            Xi,j = lower bound + random number * (upper bound - lower bound) }
All the vectors generated should satisfy the constraints. The penalty function approach, i.e., penalizing a vector by giving it a large value, is followed only for those vectors which do not satisfy the constraints.
Evaluate the cost of each vector. Profit here is the value of the objective function to be maximized, calculated by a separate function defunct.profit().
    for i = 1 to NP
        Ci = defunct.profit()
Find the vector with the maximum profit, i.e. the best vector so far.
    Cmax = C1 and best = 1
    for i = 2 to NP
        { if (Ci > Cmax)
            then Cmax = Ci and best = i }
Perform mutation, crossover, selection and evaluation of the objective function for a specified number of generations.
While (gen < MAXGEN)
{ for i = 1 to NP
  {
    For each vector Xi (target vector), select three distinct vectors Xa, Xb and Xc (select five, if two vector differences are to be used) randomly from the current population (primary array), other than the vector Xi:
        do
        { r1 = random number * NP
          r2 = random number * NP
          r3 = random number * NP
        } while ((r1 = i) OR (r2 = i) OR (r3 = i) OR (r1 = r2) OR (r2 = r3) OR (r1 = r3))
    Perform crossover for each target vector Xi with its noisy vector Xn,i and create a trial vector Xt,i. The noisy vector is created by performing mutation.
    If CR = 0, inherit all the parameters from the target vector Xi, except one which should come from Xn,i.
    For binomial crossover:
        { p = random number
          for j = 1 to D
          { if (p < CR)
              { Xn,i = Xa,i + F (Xb,i - Xc,i)
                Xt,i = Xn,i }
            else Xt,i = Xi,j
          }
        }
    Again, the NP noisy random vectors that are generated should satisfy the constraints, and the penalty function approach is followed as mentioned above.
    Perform selection for each target vector Xi by comparing its profit with that of the trial vector Xt,i; whichever has the maximum profit will survive to the next generation.
        Ct,i = defunct.profit()
        if (Ct,i > Ci)
            new Xi = Xt,i
        else new Xi = Xi
  }
} /* for i = 1 to NP */
Print the results (after the stopping criterion is met).
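The pseudo code above translates fairly directly into C++. The following is a minimal sketch of the DE/rand/1/bin loop for a small maximization problem; the objective profit(), the bounds, the random seed and the parameter settings are placeholder assumptions rather than the values used in this study, and the penalty-based constraint handling is only indicated in a comment.

    // Minimal C++ sketch of the DE/rand/1/bin loop in the pseudo code above (maximization).
    #include <cstdlib>
    #include <vector>

    double rand01() { return std::rand() / (double) RAND_MAX; }

    // Placeholder objective to be maximized (stands in for defunct.profit()); infeasible
    // vectors would be penalized here, following the penalty function approach above.
    double profit(const std::vector<double>& x) { return -(x[0] * x[0] + x[1] * x[1]); }

    int main() {
        const int D = 2, NP = 20, MAXGEN = 100;
        const double F = 0.5, CR = 0.9;
        const double lower[D] = {0.0, 0.0}, upper[D] = {3.0, 3.0};

        std::srand(12345);                            // seed for the random number generator

        // Initialize NP vectors randomly within the bounds and evaluate them.
        std::vector<std::vector<double> > pop(NP, std::vector<double>(D));
        std::vector<double> cost(NP);
        for (int i = 0; i < NP; ++i) {
            for (int j = 0; j < D; ++j)
                pop[i][j] = lower[j] + rand01() * (upper[j] - lower[j]);
            cost[i] = profit(pop[i]);
        }

        for (int gen = 0; gen < MAXGEN; ++gen) {
            for (int i = 0; i < NP; ++i) {
                // Select three distinct vectors, all different from the target i.
                int r1, r2, r3;
                do { r1 = std::rand() % NP; } while (r1 == i);
                do { r2 = std::rand() % NP; } while (r2 == i || r2 == r1);
                do { r3 = std::rand() % NP; } while (r3 == i || r3 == r1 || r3 == r2);

                // Mutation and binomial crossover produce the trial vector.
                std::vector<double> trial = pop[i];
                int jrand = std::rand() % D;          // at least one parameter is perturbed
                for (int j = 0; j < D; ++j)
                    if (rand01() < CR || j == jrand)
                        trial[j] = pop[r1][j] + F * (pop[r2][j] - pop[r3][j]);

                // Greedy selection: the trial replaces the target if its profit is higher.
                double trialCost = profit(trial);
                if (trialCost > cost[i]) { pop[i] = trial; cost[i] = trialCost; }
            }
        }
        return 0;
    }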

The stopping criteria may be of two kinds. One may be some convergence criterion stating that the error in the minimum or maximum between two previous generations should be less than some specified value (the standard deviation may be used). The other may be an upper bound on the number of generations. The stopping criterion may also be a combination of the two. In the present study, test problem-1 is solved using the second criterion, whereas test problem-2 is solved using the first criterion.

4 Test Problem-1 solved using Penalty function

This problem (Belegundu and Chandrupatla, 2002) has two objective functions. One objective function is used as a constraint. A single optimal solution is obtained after 40 iterations. The Penalty Function Method (Deb, 2001; Belegundu and Chandrupatla, 2002) is implemented to handle the constraint using the DE algorithm.

4.1 Problem Statement
Maximize 3x1 + x2 + 1
Maximize -x1 + 2x2
Subject to 0 ≤ x1 ≤ 3, 0 ≤ x2 ≤ 3

4.2 Parameters Used
Penalty parameter (r) = 4.0
Number of population points (NP) = 20
Number of Iterations = 10
DE Key Parameters:
Scaling Factor (F) = 0.45
Cross-over Constant (CR) = 0.9

4.3 Simulation Results
A single optimum is found after 40 iterations:
x1 = 1.875
x2 = 3.0

4.4 Discussion
In this problem the second objective function is taken as a constraint and normalized as below:
-x1 + 2x2 ≥ c
(-x1 + 2x2)/c - 1 ≥ 0
Normalizing constraints in this manner has an additional advantage. Since all normalized constraint violations are of more or less the same order of magnitude, they can all simply be added as the overall constraint violation, and thus only one penalty parameter r is needed to make the overall constraint violation of the same order as the objective function.

Here c is a constant set by the user. By changing the value of c, we can get different single optimum solutions from this program. Here the value of c can take any value between 3 and 6, because the minimum and maximum values of the second objective function are 3 and 6 when the first objective function is maximized. Then higher-level information is used to decide on a single value of c to end up with a single optimal value of x1 and x2.

It is also important to note that for maximizing both objective functions, the x2 value should take the maximum possible value (3.0), i.e., for any chosen value of 'c' we should end up with an x2 value of 3.0.

5 Test Problem-1 solved using Weighing factor

The above problem (Belegundu and Chandrupatla, 2002) is also solved using the weighing factor method. The weighted sum method scalarizes a set of objectives into a single objective by premultiplying each objective with a user supplied weight (Deb, 2001). This is the most widely used classical and simplest approach. The value of a weight depends on the importance of each objective in the context of the problem. The weight of an objective is usually chosen in proportion to the objective's relative importance in the problem. A composite objective function can then be formed by summing the weighted objectives, and the multi-objective optimization is thereby converted to a single objective optimization. It is usual practice to choose the weights such that their sum is one.

A set of Pareto optimal solutions is obtained after 100 iterations. The DE algorithm with the Weighing Factor Method (Deb, 2001) is used for this work.

5.1 Problem Statement
Maximize 3x1 + x2 + 1
Maximize -x1 + 2x2
Subject to 0 ≤ x1 ≤ 3, 0 ≤ x2 ≤ 3

5.2 Parameters Used
Weighing factors: w1 = 0.25; w2 = 0.75
Number of Population points (NP) = 20
Number of Iterations = 100
DE Key Parameters:
Scaling Factor (F) = 0.45
Cross-over Constant (CR) = 0.9

5.3 Simulation Results
A set of Pareto optimum solutions is obtained after 100 iterations using the differential evolution algorithm. Computer code is developed in C++ for this work. It gives a graphical display of the Pareto optimum solutions in each iteration.
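For the problem statement and weights given above, the composite objective of the weighted sum method can be written as F(x1, x2) = w1*(3x1 + x2 + 1) + w2*(-x1 + 2x2) with w1 + w2 = 1. A minimal C++ sketch follows; the function name is an assumption made for illustration.

    // Sketch of the composite objective for the weighted sum method of Section 5
    // (weights and objectives taken from Sections 5.1-5.2; function name assumed).
    double compositeObjective(double x1, double x2) {
        const double w1 = 0.25, w2 = 0.75;            // user supplied weights, w1 + w2 = 1
        double f1 = 3.0 * x1 + x2 + 1.0;              // first objective (to be maximized)
        double f2 = -x1 + 2.0 * x2;                   // second objective (to be maximized)
        return w1 * f1 + w2 * f2;                     // scalarized objective to be maximized
    }

Varying the weights changes which point of the Pareto optimal set the composite objective favours.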

5.4 Discussion

The penalty function method is simple and a single optimum is obtained, as one of the two objective functions has been considered as a constraint (the second objective function in the present case). The convergence is obtained within 40 iterations.

[Fig. 3.2: Pareto optimum set obtained after 100 iterations using DE (f2 versus f1).]

The weighing factor method, on the other hand, is a popular and simple way to solve a multi-objective optimization problem, and gives a set of Pareto optimum solutions. Though it took more iterations (100 iterations for the chosen problem in this study), the concept is intuitive and easy to use. For problems having a convex Pareto-optimal front, this method guarantees finding solutions on the entire Pareto-optimal set (Deb, 2001). From those multiple optimal solutions, we can choose the good Pareto optimum set. In this study, the two weighing factors w1 and w2 are first found using a trial and error method. It is also found that any other combination of weighing factors ends up with the optimization of a single objective function.

6 Test Problem-2 solved using Penalty function

A cantilever design problem (Deb, 2001) with two decision variables, diameter (d) and length (l), is considered. The beam has to carry an end load P. Let us consider two conflicting objectives of the design, i.e., minimization of weight f1 and minimization of end deflection f2. The first objective will resort to an optimum solution having the smaller dimensions of d and l, so that the overall weight of the beam is minimum. Since the dimensions are small, the beam will not be adequately rigid and the end deflection of the beam will be large. On the other hand, if the beam is minimized for end deflection, the dimensions of the beam are expected to be large, thereby making the weight of the beam large. We consider two constraints for our discussion here: the developed maximum stress is less than the allowable strength (Sy), and the end deflection (δ) is smaller than a specified limit δmax. With all of the above considerations, the following two-objective optimization problem is formulated:

Minimize f1(d, l) = ρ (π d²/4) l
Minimize f2(d, l) = δ = 64 P l³ / (3 E π d⁴)
subject to σmax ≤ Sy
           δ ≤ δmax

where the maximum stress is calculated as follows:

σmax = 32 P l / (π d³)

and E = Young's Modulus (GPa).

The following parametric values are used:
ρ = 7800 kg/m³
P = 1 kN
E = 207 GPa
Sy = 300 MPa
δmax = 5 mm

The upper and lower bounds for l and d are:
200 ≤ l ≤ 1000 mm
10 ≤ d ≤ 50 mm

6.1 Parameters Used
Penalty parameter (r) = 1000.0
Number of population points (NP) = 20
Standard Deviation = 0.0001
DE Key Parameters:
Scaling Factor (F) = 0.6
Cross-over Constant (CR) = 0.85

6.2 Simulation Results
A single optimum is found after 49 iterations:
l = 200.14 mm
d = 21.651 mm
f1 = 0.577 kg
f2 = 1.194 mm

6.3 Discussion
In this problem also, the second objective function is taken as a constraint and normalized as below:
f2 = 64 P l³ / (3 E π d⁴) ≤ c
f2 / c - 1 ≤ 0
where c = 5 mm is the maximum allowable deflection of the beam. The value of c has to be defined by the user. The importance of normalizing the function has already been explained in section 4.4.

As mentioned earlier, the first convergence criterion is used as the condition for termination, because it gives the exact generation number at which the convergence of all the population points to a single value occurs. Obviously, as there would be no betterment in the obtained global solution, it is needless to go for further iterations (generations). However, in the case of the second criterion, which may be used for comparison purposes across different algorithms, the iterations continue even after convergence to a global optimum, at the expense of computational time.
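To make the cantilever formulation concrete, the following C++ fragment evaluates the two objectives, the maximum stress and the normalized constraint forms discussed above, using the parametric values listed in Section 6. Working in SI units (with l and d in metres) and the function names are assumptions of this sketch; the penalty term itself is only indicated in a comment.

    // Sketch evaluating the cantilever design objectives and constraints of Section 6.
    #include <cmath>

    const double PI   = 3.14159265358979;
    const double RHO  = 7800.0;      // density, kg/m^3
    const double P    = 1000.0;      // end load, N (1 kN)
    const double E    = 207.0e9;     // Young's modulus, Pa (207 GPa)
    const double SY   = 300.0e6;     // allowable strength, Pa (300 MPa)
    const double DMAX = 0.005;       // allowable end deflection, m (5 mm)

    double weight(double d, double l)     { return RHO * PI * d * d * l / 4.0; }                 // f1, kg
    double deflection(double d, double l) { return 64.0 * P * l * l * l
                                                   / (3.0 * E * PI * std::pow(d, 4.0)); }        // f2, m
    double maxStress(double d, double l)  { return 32.0 * P * l / (PI * std::pow(d, 3.0)); }     // sigma_max, Pa

    // Normalized constraint forms (<= 0 when feasible), as in Sections 4.4 and 6.3;
    // for an infeasible vector a penalty such as r * violation^2 would be added to the cost.
    double stressViolation(double d, double l)     { return maxStress(d, l) / SY - 1.0; }
    double deflectionViolation(double d, double l) { return deflection(d, l) / DMAX - 1.0; }

At the reported optimum (l = 200.14 mm, d = 21.651 mm) these expressions give a weight of about 0.57 kg and an end deflection of about 1.2 mm, consistent with the results in Section 6.2.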

7 Himmelblau Function

An objective function was taken and simulated using both Differential Evolution (DE) and the Simple Genetic Algorithm (GA). In DE, all points in the population converged and gave the exact solution after 60 iterations; GA gives some good points after 100 generations.

7.1 Problem Statement
Minimize f(x1, x2) = (x1² + x2 - 11)² + (x1 + x2² - 7)²
0 ≤ x1, x2 ≤ 6

7.2 Parameters Used in DE
Number of population points (NP) = 20
Maximum Number of Iterations = 100
DE Key Parameters:
Scaling Factor (F) = 0.4
Cross-over Constant (CR) = 0.9

7.3 Simulation Results Using DE
A single optimum is found after 60 iterations:
x1 = 3.0
x2 = 2.0

7.4 Parameters Used in GA
Number of population points (NP) = 20
Maximum Number of Generations = 120
GA Key Parameters:
Cross-over Probability (pc) = 0.75
Mutation Probability (pm) = 0.02

7.5 Simulation Results Using GA
The best result found after 100 generations is:
x1 = 3.0025
x2 = 2.0

7.6 Discussion
As is evident from the results of both simple GA and DE, simple GA took 100 generations as against 60 in the case of DE. Also, DE converged to the global solution with more accuracy than the simple GA.

Babu et al. (2002) reported in their study on the optimal design of an auto-thermal ammonia synthesis reactor using DE that, irrespective of the values of SD (1.0, 0.5, 0.25, 0.1, 0.01 & 0.01) and the corresponding CR & F values, they obtained almost the same values of the objective function (4848383.0 $/year) and reactor length (6.79 m). This consolidates the robustness of the DE algorithm. The wider the range of values of CR, F and SD for which the same values of the objective function and reactor length are obtained, the more robust is the algorithm. Also, the code of 'Nested DE' was run with another strategy, DE/rand/1/exp, and it was found that the results are exactly the same. Irrespective of the parameters used for the outer DE, it was found that the optimum key parameters (CR & F) of the inner DE were obtained to be the same. This also proves DE's power and robustness.

8 Conclusions

In this study, DE, an improved version of GA, is used for solving two problems: (1) a multi-objective optimization problem (with two objective functions to be maximized) using the penalty function method and the weighing factor method, and (2) the classical Himmelblau function.

In the first problem on multi-objective optimization, the simulation results showed that a single optimum could be obtained for a multi-objective problem by converting one objective function into a constraint in the penalty function method. A good Pareto optimum set of solutions was obtained by considering both objective functions and applying the weighing factor method.

In the second problem on the classical Himmelblau function optimization, the simulation results indicated that, compared to simple GA, the DE algorithm gives the exact optimum with a smaller number of iterations.

Bibliography

Abbass, H.A. (2002). A Memetic Pareto Evolutionary Approach to Artificial Neural Networks. In: Lecture Notes in Artificial Intelligence, Vol. 2256. Springer-Verlag.

Abbass, H.A., Sarker, R., and C. Newton (2001). PDE: a Pareto-frontier differential evolution approach for multi-objective optimization problems. In: Proceedings of the 2001 Congress on Evolutionary Computation, 27-30 May 2001, Seoul, South Korea, Vol. 2, pp. 971-978. IEEE, Piscataway, NJ, USA. ISBN 0-7803-6657-3.

Angira, R. and B.V. Babu (2003), "Evolutionary Computation for Global Optimization of Non-Linear Chemical Engineering Processes". Proceedings of International Symposium on Process Systems Engineering and Control (ISPSEC '03) - For Productivity Enhancement through Design and Optimization, IIT Bombay, Mumbai, January 3-4, 2003, Paper No. FMAZ, pp. 87-91. (Also available via Internet as .pdf file at http://bvbabu.50megs.com/custom.html)

Babu, B.V. and A.S. Chaurasia (2003), "Optimization of Pyrolysis of Biomass Using Differential Evolution Approach". To be presented at The Second International Conference on Computational Intelligence, Robotics, and Autonomous Systems (CIRAS-2003), Singapore, December 15-18, 2003.

Babu, B.V. and C. Gaurav (2000), "Evolutionary Computation Strategy for Optimization of an Alkylation Reaction". Proceedings of International Symposium & 53rd Annual Session of IIChE (CHEMCON-2000), Calcutta, December 18-21, 2000. (Also available via Internet as .pdf file at http://bvbabu.50megs.com/custom.html and as Application No. 19, Homepage of Differential Evolution: http://www.icsi.berkeley.edu/~storn/code.html)

Babu, B.V. and K. Gautam (2001), "Evolutionary Computation for Scenario-Integrated Optimization of Dynamic Systems". Proceedings of International Symposium & 54th Annual Session of IIChE (CHEMCON-2001), Chennai, December 19-22, 2001. (Also available via Internet as .pdf file at http://bvbabu.50megs.com/custom.html and as Application No. 21, Homepage of Differential Evolution: http://www.icsi.berkeley.edu/~storn/code.html)

Babu, B.V. and K.K.N. Sastry (1999), "Estimation of Heat-transfer Parameters in a Trickle-bed Reactor using Differential Evolution and Orthogonal Collocation". Computers & Chemical Engineering, 23, 327-339. (Also available via Internet as .pdf file at http://bvbabu.50megs.com/custom.html and as Application No. 13, Homepage of Differential Evolution: http://www.icsi.berkeley.edu/~storn/code.html)

Babu, B.V. and R. Angira (2001a), "Optimization of Non-linear Functions using Evolutionary Computation". Proceedings of 12th ISME Conference, India, January 10-12, 153-157, 2001. (Also available via Internet as .pdf file at http://bvbabu.50megs.com/custom.html)

Babu, B.V. and R. Angira (2001b), "Optimization of Thermal Cracker Operation using Differential Evolution". Proceedings of International Symposium & 54th Annual Session of IIChE (CHEMCON-2001), Chennai, December 19-22, 2001. (Also available via Internet as .pdf file at http://bvbabu.50megs.com/custom.html and as Application No. 20, Homepage of Differential Evolution: http://www.icsi.berkeley.edu/~storn/code.html)

Babu, B.V. and R. Angira (2002a), "A Differential Evolution Approach for Global Optimization of MINLP Problems". Proceedings of 4th Asia Pacific Conference on Simulated Evolution and Learning (SEAL-2002), Singapore, November 18-22, Vol. 2, pp. 880-884, 2002. (Also available via Internet as .pdf file at http://bvbabu.50megs.com/custom.html)

Babu, B.V. and R. Angira (2002b), "Optimization of Non-Linear Chemical Processes Using Evolutionary Algorithm". Proceedings of International Symposium & 55th Annual Session of IIChE (CHEMCON-2002), OU, Hyderabad, December 19-22, 2002. (Also available via Internet as .pdf file at http://bvbabu.50megs.com/custom.html)

Babu, B.V. and R. Angira (2003a), "New Strategies of Differential Evolution for Optimization of Extraction Process". To be presented at International Symposium & 56th Annual Session of IIChE (CHEMCON-2003), Bhubaneswar, December 19-22, 2003.

Babu, B.V. and R. Angira (2003b), "Optimization of Water Pumping System Using Differential Evolution Strategies". To be presented at The Second International Conference on Computational Intelligence, Robotics, and Autonomous Systems (CIRAS-2003), Singapore, December 15-18, 2003.

Babu, B.V. and R.P. Singh (2000), "Synthesis & Optimization of Heat Integrated Distillation Systems Using Differential Evolution". Proceedings of All-India Seminar on Chemical Engineering Progress on Resource Development: A Vision 2010 and Beyond, IE (I), Bhubaneswar, India, March 11, 2000.

Babu, B.V. and S.A. Munawar (2000), "Differential Evolution for the Optimal Design of Heat Exchangers". Proceedings of All-India Seminar on Chemical Engineering Progress on Resource Development: A Vision 2010 and Beyond, IE (I), Bhubaneswar, India, March 11, 2000. (Also available via Internet as .pdf file at http://bvbabu.50megs.com/custom.html)

Babu, B.V. and S.A. Munawar (2001), "Optimal Design of Shell & Tube Heat Exchanger by Different Strategies of Differential Evolution". PreJournal.com - The Faculty Lounge, Article No. 003873, posted on the website http://www.prejournal.com. (Also available via Internet as .pdf file at http://bvbabu.50megs.com/custom.html and as Application No. 18, Homepage of Differential Evolution: http://www.icsi.berkeley.edu/~storn/code.html)

Babu, B.V., R. Angira, and A. Nilekar (2002), "Differential Evolution for Optimal Design of an Auto-Thermal Ammonia Synthesis Reactor". Communicated to Computers & Chemical Engineering.

Belegundu, A.D. and T.R. Chandrupatla (2002), Optimization Concepts and Applications in Engineering. First Indian Reprint, Pearson Education (Singapore) Pte. Ltd., Indian Branch, New Delhi.

Chiou, J.P. and F.S. Wang (1999), "Hybrid Method of Evolutionary Algorithms for Static and Dynamic Optimization Problems with Application to a Fed-batch Fermentation Process". Computers & Chemical Engineering, 23, 1277-1291.

Dasgupta, D. and Z. Michalewicz (1997), Evolutionary Algorithms in Engineering Applications, 3-23. Springer, Germany.

Deb, K. (2001), Multi-Objective Optimization using Evolutionary Algorithms. John Wiley & Sons Limited, New York.

Goldberg, D.E. (1989), Genetic Algorithms in Search, Optimization, and Machine Learning. Reading, MA, Addison-Wesley.

Holland, J.H. (1975), Adaptation in Natural and Artificial Systems. Ann Arbor, Michigan: The University of Michigan Press.

Oh, P.P., G.P. Rangaiah, and A.K. Ray (2002), "Simulation and Multi-objective Optimization of an Industrial Hydrogen Plant Based on Refinery Off-gas". Industrial and Engineering Chemistry Research, 41, 2248-2261.

Onwubolu, G.C. and B.V. Babu (2003), New Optimization Techniques in Engineering. Springer-Verlag, Germany (In Print).

Price, K. and R. Storn (1997), "Differential Evolution - A Simple Evolution Strategy for Fast Optimization". Dr. Dobb's Journal, 22 (4), 18-24 and 78.

Price, K. and R. Storn (2003), Web site of DE as on July 2003, the URL of which is: http://www.icsi.berkeley.edu/~storn/code.html

Rajesh, J.K., S.K. Gupta, G.P. Rangaiah, and A.K. Ray (2000), "Multi-objective Optimization of Steam Reformer Performance Using Genetic Algorithm". Industrial and Engineering Chemistry Research, 39, 707-717.

Rajesh, J.K., S.K. Gupta, G.P. Rangaiah, and A.K. Ray (2001), "Multi-objective Optimization of Industrial Hydrogen Plants". Chemical Engineering Science, 56, 999.

Storn, R. (1995), "Differential Evolution Design of an IIR-filter with Requirements for Magnitude and Group Delay". International Computer Science Institute, TR-95-026.

Wang, F.S. and W.M. Cheng (1999), "Simultaneous Optimization of Feeding Rate and Operation Parameters for Fed-batch Fermentation Processes". Biotechnology Progress, 15 (5), 949-952.

Yee, A.K.Y., A.K. Ray, and G.P. Rangaiah (2003), "Multiobjective Optimization of an Industrial Styrene Reactor". Computers & Chemical Engineering, 27, 111-130.

