
Electrical Power and Energy Systems 24 (2002) 277-283

www.elsevier.com/locate/ijepes

A Cauchy-based evolution strategy for solving the reactive power dispatch problem

J.R. Gomes, O.R. Saavedra 1,*

Departamento de Engenharia de Eletricidade, Grupo de Sistemas de Energia Elétrica, Universidade Federal do Maranhão, São Luís 65085-580 MA, Brazil

Received 27 November 2000; accepted 26 March 2001

* Corresponding author. Fax: +55-98-217-8241. E-mail address: osvaldo@dee.ufma.br (O.R. Saavedra).
1 www.dee.ufma.br.

Abstract

This work presents a new proposal for solving the reactive power dispatch problem. The approach is based on the (μ + λ)-ES paradigm, improved by the control of mutations and by the use of Cauchy-based mutations rather than the classical Gaussian Mutations (GMs). Other variants are also implemented and a comparative study is performed. Good and reliable performance has been achieved, and validation tests using the standard IEEE118 system are reported. © 2002 Elsevier Science Ltd. All rights reserved.

Keywords: Evolutionary computation; Reactive dispatch; Optimization; Artificial intelligence; Cauchy distribution

1. Introduction

Darwin's theory of natural selection is considered the fundamental unifying theory of life. Evolution refers to temporal changes of any kind, whereas natural selection specifies one particular way in which these changes are brought about [1]. Natural selection is the most important agent of evolutionary change because it involves the confrontation between organisms and their environment. Classic Darwinian theory, combined with the selectionism of Weismann and the genetics of Mendel, forms the accepted set of arguments universally known as the neo-Darwinian paradigm [2]. This paradigm states that the history of the vast majority of life is fully accounted for by only a very few statistical processes acting on and within populations and species [3]. These processes are reproduction, mutation, competition and selection. The optimizing tendency of evolutionary processes has motivated the formulation of approaches that emulate these mechanisms in a crude form. In particular, the results obtained from applying simulated evolution to complex engineering problems show that search processes based on natural evolution are robust and can be used to solve optimization problems in a wide variety of domains [4,6].

There are three main approaches into which the majority of current implementations are classified: Genetic Algorithms, Evolution Strategies (ES) and Evolutionary Programming (EP).

Each of these mainstream algorithms has clearly demonstrated its capability to yield good approximate solutions even for complicated multimodal, discontinuous, non-differentiable, noisy or moving response surfaces of optimization problems [7]. In these approaches, a population of individuals is initialized and then evolves through the search space by a stochastic process of selection, mutation and, in some cases, recombination. However, these methods differ in terms of representation, operators and selection process.

While Genetic Algorithms emphasize chromosomal operators based on genetic mechanisms, i.e. crossover and mutation, ES and EP emphasize behavioral links between parents and offspring. In particular, ES emphasizes behavioral changes at the level of the individual, while EP stresses behavioral change at the level of the species [6].

The original EP was introduced by Fogel in 1962 [12] and later extended by Burgin, Atmar and Fogel [4]. The goal of EP is to achieve intelligent behavior through simulated evolution. While the original EP was proposed to operate on finite state machines and the corresponding discrete representations, most present variants are used for continuous parameter optimization problems.
More recently, the technique has been extended and applied to diverse real-valued continuous optimization problems. Rather than finite state machines, representations are chosen based on the problem at hand, and mutation is the main operator used in generating new trials. The latest version, called meta-EP, incorporates parameter self-adaptation per individual, quite similar to ES [5].

ES were developed in 1960 by Rechenberg and Schwefel in Germany and extended by other authors, such as Rudolph [9] and Herdy [10]. Early ES were mainly applied to solve various optimization problems with continuously changeable parameters. The first ES versions operated on the basis of only two individuals (one parent, one offspring), using mutation as the only search operator.

1.1. Evolutionary techniques in reactive optimization

Algorithms based on the principles of natural evolution have been applied successfully to solve complicated optimization problems, such as those found in reactive optimization, distribution systems planning, the expansion of transmission systems, etc. The literature presents an extensive list of works on the application of evolutionary techniques to power system problems [13-18].

Lai and Ma [13] presented a modified EP to solve the reactive power dispatch, obtaining good results. Other authors [15,16] have applied the same algorithm to other power system problems, reporting results using the IEEE30 system. A simplified ES was used in Ref. [16] and compared with genetic algorithms and with the Lai and Ma algorithm. In Ref. [17], a proposal quite similar to Ref. [13] was presented.

More recently, an evolution strategy-based approach was proposed and compared with the proposal of Lai and Ma [22]. The self-adaptation of parameters is controlled by dynamic limits and no recombination is performed. Due to the probabilistic nature of evolutionary algorithms, a comparative statistical analysis was performed. The approach was tested using the IEEE57 system, achieving feasible solutions with loss reduction with probability 1.

In spite of these efforts, evolutionary techniques have not yet been explored completely for power system applications. Algorithms that reach high-quality solutions with little computational effort are expected, and intensive efforts are being directed towards this goal.

1.2. This paper

In the classical ES, an individual is mutated by adding a Gaussian number with zero mean and standard deviation σ (Gaussian mutation, GM). On the other hand, for practical applications, simulated evolution-based methods are heavy CPU time consumers. An attempt to improve the convergence rate was proposed in Ref. [20]. In that approach, Yao et al. replaced GM by Cauchy-based mutation on a (μ + λ)-Evolution Strategy without recombination. They carried out many empirical studies with both GM and Cauchy mutation (CM) on several applications. Notice that the variance of the Cauchy distribution is infinite, so mutation values greater than those obtained with a Gaussian distribution can be expected, which increases the probability of escaping from a local optimum. The beneficial effect of augmenting the variance has been observed in other algorithms [19,20,23].

This work presents a new proposal for solving the reactive power dispatch problem. The approach is based on the (μ + λ)-ES paradigm, improved by the control of mutations and by the use of Cauchy-based mutations rather than the classical GMs. Other variants are also included, and validation tests using the standard IEEE118 bus system are reported.

This article is organized as follows. First, ES and Cauchy-based mutation are reviewed. Secondly, the proposed approach and some variants are presented. Next, validation tests and a comparative analysis are performed. Finally, relevant conclusions and comments are presented.

2. Optimal reactive power dispatch

The goal of optimal reactive power dispatch is to minimize real power losses and improve the voltage profile by setting generator bus voltages, VAR compensators and transformer taps. The problem can be written in penalized form as follows:

min f = f_ℓ + f_p    (1)

s.t.:

P_di − P_i(V, θ) = 0,  i ∈ N_{B−1}    (2)

Q_di − Q_i(V, θ) = 0,  i ∈ N_PQ    (3)

with

f_p = Σ_{i∈N_PV} r_qi (Q_gi − Q_gi^l)² + Σ_{i∈N_PQ} r_vi (V_i − V_i^l)²    (4)

where f_ℓ represents the system losses; N_B and N_{B−1} represent the set of system nodes and the set of system nodes excluding the slack bus, respectively; N_PQ and N_PV represent the sets of PQ-buses and PV-buses, respectively. Eqs. (2) and (3) represent the load flow equations. The generator bus voltages and the transformer tap-settings are the control variables. r_qi and r_vi are penalty factors for reactive power violations and voltage violations, respectively. Q_gi^l and V_i^l represent the violated limits. P_di and Q_di represent the active and reactive power demand at node i, respectively. Penalty parameters are chosen empirically, in accordance with experience and the particular application.
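To make the penalized formulation of Eqs. (1)-(4) concrete, the following sketch evaluates the fitness of one candidate control vector. It is illustrative only: the load-flow routine, the data structures and all names are assumptions of this example, not part of the original implementation.

```python
import numpy as np

def penalized_objective(controls, run_load_flow, limits, r_q=1e6, r_v=1e8):
    """Evaluate f = f_loss + f_p (Eqs. (1) and (4)) for one candidate solution.

    controls      : generator bus voltages and transformer tap settings
    run_load_flow : hypothetical solver enforcing Eqs. (2)-(3); returns
                    (losses, q_gen, v_pq) or None if the load flow diverges
    limits        : dict with arrays 'Qg_min', 'Qg_max', 'V_min', 'V_max'
    """
    result = run_load_flow(controls)
    if result is None:                 # diverged load flow: reject the candidate
        return np.inf
    losses, q_gen, v_pq = result

    # Violation magnitudes (zero when the quantity is inside its limits)
    q_viol = np.maximum(q_gen - limits["Qg_max"], 0.0) + np.maximum(limits["Qg_min"] - q_gen, 0.0)
    v_viol = np.maximum(v_pq - limits["V_max"], 0.0) + np.maximum(limits["V_min"] - v_pq, 0.0)

    f_p = r_q * np.sum(q_viol ** 2) + r_v * np.sum(v_viol ** 2)   # Eq. (4)
    return losses + f_p                                            # Eq. (1)
```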
3. Evolution strategies

In ES, the components of a trial solution are viewed as behavioral traits of an individual, not as genes along a chromosome. A genetic source for these traits is assumed, but the nature of the linkage is not detailed. Thus, an individual is represented as a pair of float-valued vectors,
i.e. a = (x, σ), where x represents a point in the search space. The second component is the vector of standard deviations: it provides instructions on how to mutate a and is itself subject to mutation. In other words, both components, x and σ, are submitted to the evolutionary process through the application of the mutation and recombination operators. Thus, a suitable adjustment and diversity of mutation parameters should be provided under arbitrary circumstances.

The first ES focused on a single-parent, single-offspring search [11]. In this model, termed (1 + 1)-ES, a single offspring is created from a single parent and both are placed in competition for survival, with selection discarding the poorer solution. Rechenberg proposed in 1973 the use of multiple parents but only a single offspring, the (μ + 1)-ES. More recently, two approaches have been explored, denoted by (μ + λ)-ES and (μ, λ)-ES [8]. In the former, μ parents generate λ offspring and all solutions compete for survival, with the best μ individuals being selected as parents of the next generation. In the latter, only the λ offspring compete for survival and the μ parents are completely replaced in each generation; the life of an individual is thus limited to a single generation. Notice that this mechanism allows the best member at generation k + 1 to perform worse than the best individual at generation k. In other words, the method is not elitist, which helps the strategy to accept temporary deterioration that may allow it to leave the region of attraction of a local minimum and reach a better optimum [8].

The process of ES is described in Ref. [8]. The following pseudocode summarizes the components of the (μ + λ)-ES evolutionary algorithm, where each individual is characterized by a pair a = (x, σ):

t ← 0
1. Initialize P(t) ← {a_1(0), …, a_μ(0)} ∈ I^μ, where I = R^(n+n) and a_k = (x_i, σ_i), ∀i ∈ {1, …, n}
2. Evaluate P(t): {F(a_1(t)), …, F(a_μ(t))}, where F(a_k(t)) = f(x_k(t))
while termination criterion not fulfilled do
   3. Recombine: a'_k(t) ← r(P(t)), ∀k ∈ {1, …, λ}
   4. Mutate: a''_k(t) ← m(a'_k(t)), ∀k ∈ {1, …, λ}
   5. Evaluate P'(t) ← {a''_1(t), …, a''_λ(t)}: {F(a''_1(t)), …, F(a''_λ(t))}
   6. Select: P(t + 1) ← S_d(P'(t) ∪ Q); t ← t + 1
end do

where Q ∈ {∅, P(t)} depends on the selection scheme: Q = ∅ for (μ, λ) selection and Q = P(t) for (μ + λ) selection.

The operators r(·), m(·) and S_d(·) define the application of recombination, mutation and deterministic selection to their respective arguments. Search points in ES are n-dimensional vectors x ∈ R^n, and the fitness value of an individual is identical to its objective function value, i.e. F(a) = f(x), where x is the object variable component of a; each individual includes up to n different variances σ_i (i ∈ {1, …, n}).
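The pseudocode above maps directly onto a short program. The following is a minimal (μ + λ)-ES loop given for illustration; the fitness, recombine, mutate and init_pop callables are hypothetical stand-ins for the operators detailed in Sections 3.1-3.3, not the authors' code.

```python
import numpy as np

def mu_plus_lambda_es(fitness, recombine, mutate, init_pop, mu=20, lam=100, generations=500):
    """Minimal (mu + lambda)-ES skeleton; individuals are (x, sigma) pairs, lower fitness is better."""
    population = init_pop(mu)                                   # step 1
    scores = np.array([fitness(x) for x, _ in population])      # step 2

    for _ in range(generations):                                # termination criterion
        offspring = [mutate(recombine(population)) for _ in range(lam)]   # steps 3-4
        off_scores = np.array([fitness(x) for x, _ in offspring])         # step 5

        # step 6: (mu + lambda) selection keeps the best mu out of parents plus offspring
        pool = population + offspring
        pool_scores = np.concatenate([scores, off_scores])
        best = np.argsort(pool_scores)[:mu]
        population = [pool[i] for i in best]
        scores = pool_scores[best]

    return population[0], scores[0]
```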
3.1. Recombination

In ES, new individuals are generated using the mutation and recombination operators. In contrast with genetic algorithms, the recombination operator creates only one offspring. Different recombination mechanisms are used to produce one new individual from a set of randomly selected parent individuals. Basically, recombination works by choosing ρ (1 ≤ ρ ≤ μ) parent vectors from P(t) ∈ I^μ with uniform probability. Next, characteristics of the ρ parents are mixed to create a new individual. When ρ = 2, recombination is called bisexual, and if ρ > 2, it is called multisexual. In particular, if ρ = μ, recombination is called global [8]. Within this class there are two variants (a brief code sketch of both follows this subsection):

• Global discrete recombination. This variant is similar to uniform crossover in genetic algorithms. Each component of an offspring is created by selecting an individual at random from the parent population.
• Global intermediary recombination. Each component of an offspring is generated as the arithmetic average of the corresponding parent components, as follows:

b'_i = (1/ρ) Σ_{k=1}^{ρ} b_{k,i}

where b'_i is the i-th component of the offspring b'.

Notice that recombination is performed on the strategy parameters as well as on the object variables, and the recombination operator may be different for object variables and standard deviations.
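As a rough illustration of the two global recombination variants, the sketch below builds one offspring from a parent population stored as a NumPy array (one row per parent). This representation is an assumption made for the example only.

```python
import numpy as np

rng = np.random.default_rng()

def global_discrete(parents):
    """Each offspring component is copied from a randomly chosen parent (uniform crossover-like)."""
    mu, n = parents.shape
    donor = rng.integers(0, mu, size=n)        # one donor parent per component
    return parents[donor, np.arange(n)]

def global_intermediary(parents):
    """Each offspring component is the arithmetic average of the corresponding parent components."""
    return parents.mean(axis=0)

# Recombination is applied to object variables and strategy parameters alike,
# possibly with a different variant for each, e.g.:
#   x_child     = global_discrete(x_parents)
#   sigma_child = global_intermediary(sigma_parents)
```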
3.2. Mutation

The mutation operator m: I → I (where I = R^(n+n)) yields a mutated individual m(ã) = (x̃', σ̃') by first mutating the standard deviations and then mutating the object variables, as follows:

σ'_ij = σ_ij exp(τ' N(0,1) + τ N_j(0,1))    (5)

x'_ij = x_ij + σ'_ij N(0,1)    (6)

where i = 1, …, λ and j = 1, …, n. N(0,1) represents a Gaussian number with zero mean and variance 1 that is the same for all vector positions. N_j(0,1) also represents a Gaussian number, but this value must be drawn anew for each j. The global factor τ' N(0,1) allows for an overall change of the mutability, whereas τ N_j(0,1) allows for individual changes of σ_j. The factors τ and τ' are defined as 'learning rates' and are suggested by Bäck [21] as τ = (√(2√n))⁻¹ and τ' = (√(2n))⁻¹, respectively, where n represents the problem size.
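A compact sketch of the self-adaptive Gaussian mutation of Eqs. (5) and (6), using the learning rates suggested by Bäck [21], is given below; it assumes individuals are stored as NumPy vectors and is not the original implementation.

```python
import numpy as np

rng = np.random.default_rng()

def gaussian_self_adaptive_mutation(x, sigma):
    """Mutate the step sizes log-normally (Eq. (5)) and then the object variables (Eq. (6))."""
    n = x.size
    tau   = 1.0 / np.sqrt(2.0 * np.sqrt(n))   # per-component learning rate
    tau_p = 1.0 / np.sqrt(2.0 * n)            # global learning rate (tau prime)

    global_draw = rng.standard_normal()       # the same N(0,1) draw for all positions
    sigma_new = sigma * np.exp(tau_p * global_draw + tau * rng.standard_normal(n))  # Eq. (5)
    x_new = x + sigma_new * rng.standard_normal(n)                                   # Eq. (6)
    return x_new, sigma_new
```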
3.3. Selection

In contrast with EP [4], selection in classical ES (S_d) is completely deterministic.² In the case of the (μ + λ)-ES, the μ best individuals are selected from the union of parents and offspring. Thus, this selection is elitist and therefore guarantees a monotonically improving performance. In (μ, λ) strategies, the μ best individuals are selected only from the offspring population and replace the parents in the next generation (non-elitist selection).

² In Ref. [25], Schwefel and Rudolph included tournament selection as an alternative to deterministic selection.
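The two deterministic selection schemes differ only in the pool that is ranked. A small sketch (NumPy-based representation assumed for this example):

```python
import numpy as np

def deterministic_selection(parents, parent_scores, offspring, offspring_scores, mu, plus=True):
    """(mu + lambda): rank parents plus offspring (elitist).
    (mu, lambda): rank offspring only (non-elitist)."""
    if plus:
        pool = parents + offspring
        scores = np.concatenate([parent_scores, offspring_scores])
    else:
        pool = offspring
        scores = np.asarray(offspring_scores)
    best = np.argsort(scores)[:mu]            # minimization: lowest objective survives
    return [pool[i] for i in best], scores[best]
```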
4. Fast evolution strategies

The Cauchy is a symmetrical, long-tailed distribution; it is plotted in Fig. 1 together with a normal distribution. Its characteristics are particularly interesting for engineering applications, where practical implementations must supply solutions within permissible times. Because the Cauchy distribution is more spread out than the Gaussian distribution, it allows, probabilistically speaking, larger mutations and in this way generates more diverse individuals. This means that a larger portion of the solution space is covered, increasing the probability of escaping from a local optimum. Some authors have suggested the use of CM rather than GM [20,23,24].

Fig. 1. Cauchy and Gaussian distributions (t = 1).

In Ref. [23], an ES with Cauchy-based mutation, called Fast Evolution Strategy, has been proposed, where mutation is the primary search operator (recombination is not considered). This proposal follows the same general algorithm of Section 3, except that Eq. (6) is replaced by

x'_ij = x_ij + σ_ij δ_j    (7)

where δ_j is a random number drawn from a Cauchy distribution with scale parameter t = 1. This number must be obtained anew for each value of j.

5. Extended Cauchy-based (μ + λ)-ES

The extended Cauchy-based Evolution Strategy (ECES) proposed in this article has the following characteristics:

(a) The use of CM rather than GM; Eq. (7) substitutes Eq. (6).
(b) Functional limits for σ mutations.
(c) (μ + λ) selection.
(d) Global intermediate recombination.

In (d), two alternatives have been considered:

GR1: a single random number is generated to choose the pair of components x_i, σ_i in the parent population used to create each offspring component.
GR2: independent random numbers are generated to choose the parent components x_i and σ_i; thus, for n-dimensional vectors, 2n random numbers are required.

5.1. Offspring creation

The creation of an offspring is performed taking into account the feasible range of each variable, similarly to the Lai and Ma proposal [13,15], and using CM, as follows:

x'_i = x_i + σ'_i (x_i^max − x_i^min) δ    (8)

This equation substitutes Eq. (6) in the mutation operation. x_i^max and x_i^min are the limits of the control variable x_i. If x_i exceeds a limit, it is set to the limit value. Moreover, offspring must satisfy Eqs. (2) and (3); cases where the load flow diverges are discarded and new offspring are generated.

5.2. Dynamic limits

This modification is intended to limit σ mutations by introducing dynamic upper and lower bounds, as suggested in Ref. [22]. Dynamic limits force σ mutations to fall between an upper and a lower limit, both decreasing exponentially with time, as follows:

σ_max(t) = σ⁰_max exp(−t/T₁)    (9)

σ_min(t) = σ⁰_min exp(−t/T₂)    (10)

where σ⁰_max and σ⁰_min are the initial values of each function and t denotes the generation; T₁ and T₂ are time constants calculated from the final values desired for σ^f_max and σ^f_min, respectively. If a dynamic limit is violated, σ(t) is set to the average of the current values of the two bounding functions. Eqs. (9) and (10) allow 'large' mutations in the initial generations and 'small' mutations at the end. In other words, in the first iterations diversity is emphasized, while the last generations are dominated by a refined search process (small mutations).
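Putting Eqs. (7)-(10) together, a hedged sketch of the ECES offspring creation step with dynamically bounded step sizes could look as follows. The handling of violated σ values (replacement by the average of the two bounding functions) follows the description above; the remaining names and structure are assumptions of this example, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng()

def dynamic_bounds(t, sigma0_max, sigma0_min, T1, T2):
    """Eqs. (9)-(10): exponentially shrinking upper and lower limits for sigma."""
    return sigma0_max * np.exp(-t / T1), sigma0_min * np.exp(-t / T2)

def eces_offspring(x, sigma_new, x_min, x_max, t, sigma0_max, sigma0_min, T1, T2):
    """Create one offspring per Eq. (8), with Cauchy mutation and sigma kept inside the dynamic limits."""
    s_max, s_min = dynamic_bounds(t, sigma0_max, sigma0_min, T1, T2)
    # Step sizes outside the dynamic limits are replaced by the average of the two bounds
    sigma_new = np.where((sigma_new > s_max) | (sigma_new < s_min), 0.5 * (s_max + s_min), sigma_new)

    delta = rng.standard_cauchy(x.size)               # Cauchy draws (Eq. (7)), one per component
    x_new = x + sigma_new * (x_max - x_min) * delta   # Eq. (8): range-scaled Cauchy mutation
    x_new = np.clip(x_new, x_min, x_max)              # variables exceeding a limit take the limit value
    return x_new, sigma_new

# In the full algorithm the candidate is then checked against the load flow (Eqs. (2)-(3));
# if the load flow diverges, the offspring is discarded and a new one is generated.
```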
5.3. Implementation

To validate this proposal, several variants have been implemented; they are described next. In all the variants, σ mutations are controlled by Eqs. (9) and (10):

1. SES - Simple (μ + λ) Evolution Strategy: it uses the classical GM without recombination.
2. EES1 - Extended (μ + λ) Evolution Strategy with global intermediate recombination and strategy GR1.
3. EES2 - Similar to the above variant, but using strategy GR2.
4. EES3 - Similar to EES2, but using (μ, λ) selection.
5. ECES - Extended Cauchy-based (μ + λ) Evolution Strategy: similar to EES2, but it uses CM rather than GM.

5.4. Parameters used

The fitness function F corresponds to the penalized objective function f given by Eq. (1). The control vector x is formed by the generator bus voltages and the transformer tap-settings. The objective function penalty factors used were r_qi = 10^6 and r_vi = 10^8.

The initial population, formed by μ individuals, was generated at random, although satisfying Eqs. (2) and (3). The parameters used in the dynamic limits of Eqs. (9) and (10) were the following:

1. for variants (1)-(4): σ⁰_max = 1, σ⁰_min = 10⁻³, σ^f_max = 10⁻² and σ^f_min = 10⁻⁴;
2. for ECES: σ⁰_max = 10⁻², σ⁰_min = 10⁻⁴, σ^f_max = 10⁻³ and σ^f_min = 10⁻⁵.

Tests have been performed over 500 generations, using μ = 20 and λ = 100. The stopping criterion has been the maximum number of generations.
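For convenience, the run configuration described above can be gathered in one place. The dictionary below merely restates the reported parameters; its structure is this example's, and the derivation of the time constants assumes the bounds are meant to reach their desired final values at the last generation.

```python
import math

# Parameters reported in Section 5.4 for the ECES variant
ECES_CONFIG = {
    "mu": 20, "lambda": 100, "generations": 500,
    "penalty_reactive": 1e6,   # r_qi
    "penalty_voltage": 1e8,    # r_vi
    # Dynamic sigma limits (Eqs. (9)-(10)): initial and desired final values
    "sigma0_max": 1e-2, "sigma0_min": 1e-4,
    "sigmaf_max": 1e-3, "sigmaf_min": 1e-5,
}

# One consistent way to obtain T1 and T2 from the desired final values:
# sigma(t_final) = sigma0 * exp(-t_final / T)  =>  T = t_final / ln(sigma0 / sigmaf)
T1 = ECES_CONFIG["generations"] / math.log(ECES_CONFIG["sigma0_max"] / ECES_CONFIG["sigmaf_max"])
T2 = ECES_CONFIG["generations"] / math.log(ECES_CONFIG["sigma0_min"] / ECES_CONFIG["sigmaf_min"])
```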
6. Test results

Tests have been performed using the IEEE118 standard system. The network consists of 54 generator buses, 64 load buses and 179 branches, of which nine are load-tap-setting transformer branches. Thus, the dimension of the control vector is 63. All power and voltage quantities are per-unit values.

Table 1 shows the MVAr limits at the PV buses. Voltage limits have been set to 0.90-1.10 pu for PV-buses and 0.95-1.05 pu for PQ-buses. Tap limits have been assumed to be 0.90-1.10. Threshold values of 0.10% and 10⁻³% are considered to accept or discard reactive and voltage violations, respectively. The reactive power limits of the slack bus have been assumed equal to the greatest and least capacity values observed at the generation buses.

Table 1
Reactive power generation limits of the IEEE118 bus system (MVAr)

Bus       1     4     6     8    10    12    15
Qg_min    0  -100    -5  -100  -100   -30    -5
Qg_max    8   300    30   300   300    75    23

Bus      18    19    24    25    26    27    31
Qg_min   -5    -5  -100  -100  -300  -100  -100
Qg_max   23    15   300   300   600   300   300

Bus      32    34    36    40    42    46    49
Qg_min   -5    -5    -5  -100  -100  -100  -100
Qg_max   23    23    15   300   300   300   300

Bus      54    55    56    59    61    62    65
Qg_min -100    -5    -5  -100  -100     0  -100
Qg_max  300    11    11   300   300     8   300

Bus      66    69    70    72    73    74    76
Qg_min -100  -300     0  -100  -100     0     0
Qg_max  300   600     8   300   300     4     8

Bus      77    80    85    87    89    90    91
Qg_min  -10  -100     0  -100  -100  -100  -100
Qg_max   38   300     8   300   300   300   300

Bus      92    99   100   103   104   105   107
Qg_min    0  -100  -100    -5    -5    -5  -100
Qg_max    8   300   300    15    15    15   300

Bus     110   111   112   113   116     -     -
Qg_min    0  -100  -100  -100  -250     -     -
Qg_max    1   300   300   300   525     -     -

The base case has violations in voltage and reactive generation. It was observed that 39 PQ buses have voltage violations; values fall in the range of 0.8983 pu (bus 5) to 1.1003 pu (bus 118). Reactive violations have been observed at 13 PV buses, the worst being at bus 92 (18.90 MVAr, lower limit). The active losses of the base case are 1.5237 pu.

Due to the probabilistic characteristics of evolutionary algorithms, the results reported here correspond to averages over 20 trials. From a practical point of view, we are interested in reliable software tools that supply good solutions every time. Thus, to evaluate the quality of the proposals, dispersion measures have been used.

Fig. 2 shows the performance of the SES, EES1 and EES2 algorithms. The best results were obtained with the EES2 version.

Fig. 2. Objective function average for SES, EES1 and EES2 methods.

Fig. 3 presents the performance of the EES2, EES3 and ECES
algorithms (to make the comparison easier, EES2 has been repeated). The basic difference between EES2 and EES3 is the selection type. In the application of this paper, (μ + λ) selection performs better than (μ, λ) selection. On the other hand, when both (μ + λ) selection and CM are considered (the ECES algorithm), the global performance is clearly improved.

Fig. 3. Objective function average for EES2, EES3 and ECES methods.

Fig. 4 shows, in more detail, the last generations for the same variants plotted in Fig. 3.

Fig. 4. Zoom of the last generations for EES2, EES3 and ECES methods.

Tables 2 and 3 present the comparative performance of the five implemented variants in terms of objective function and loss minimization, respectively. Table 2 focuses on the ability of the variants to find feasible solutions, while Table 3 emphasizes loss minimization within the feasible solution set. Tests have been performed for 500 generations. Each value corresponds to the average (over 20 trials) of the best fitness at the current generation. Columns 2-5 in Table 2 give the minimum, maximum, average and standard deviation of the objective function, respectively; column 6 gives the number of feasible solutions (NVS). Similarly, columns 2-5 in Table 3 give the minimum, maximum, average and standard deviation of the losses, respectively; column 6 gives the average losses relative to the base-case losses.

Table 2
Statistical performance of the approaches implemented in 20 trials (performance in terms of objective function)

Method   f_ℓ^min    f_ℓ^max    f̄_ℓ       σ_ℓ        NVS (%)
SES      1.4164     824.1360   42.6039   179.2957   95
EES1     1.3874     38.9229    3.3118    8.1698     95
EES2     1.3800     1.4679     1.4321    0.0254     100
EES3     1.3966     1.5281     1.4661    0.0322     100
ECES     1.3485     1.4199     1.3818    0.0184     100

Table 3
Statistical performance in terms of losses for the cases without violations

Method   f_ℓ^min    f_ℓ^max    f̄_ℓ      σ_ℓ       f_ℓ^r (%)
SES      1.4164     1.5661     1.4706   2.4391    96.52
EES1     1.3874     1.4681     1.4375   1.3660    94.35
EES2     1.3800     1.4679     1.4321   1.7759    93.99
EES3     1.3966     1.5281     1.4661   2.1931    96.22
ECES     1.3485     1.4199     1.3818   1.3347    90.69
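The dispersion measures reported in Tables 2 and 3 are ordinary summary statistics over the 20 trials; a minimal sketch with hypothetical trial values is:

```python
import numpy as np

# Hypothetical final objective values from independent trials of one variant (20 in practice)
trial_results = np.array([1.3818, 1.3485, 1.4199, 1.3900, 1.3750])

summary = {
    "min":  trial_results.min(),
    "max":  trial_results.max(),
    "mean": trial_results.mean(),
    "std":  trial_results.std(ddof=1),   # sample standard deviation
}
# NVS (%) is simply the percentage of trials that ended with a feasible solution.
```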
Table 2 shows the beneficial effect of the use of global intermediate recombination. In general, it has been observed that strategy GR2 is beneficial in terms of feasibility. It is present in the EES2, EES3 and ECES variants; notice that in all these cases no violations have been observed over the 20 simulations (NVS equal to 100%).

In terms of loss minimization, (μ + λ) selection (the EES2 algorithm) performs better than (μ, λ) selection. Similarly to the analysis above, the ECES algorithm presents the best performance.

Up to now, the results of the algorithms have been analyzed in terms of optimality and feasibility. However, the inclusion of sophisticated recombination strategies, a new mutation type and other improvements has a computational cost. Table 4 shows the CPU time of the implemented algorithms, where t_eval and t_op give the percentage of time per generation consumed in objective function evaluations and in evolutionary operations, respectively; t_ger gives the average CPU time of one generation, in seconds. The last column gives the average CPU time, in minutes, at the 500th generation. The ECES algorithm presents the best solution, but with the highest CPU time (i.e. quality has a price). However, notice that ECES provides solutions better than those of the EES2 and EES3 algorithms already at the 350th generation, and more quickly (see Fig. 4).

Table 4
Statistical performance in terms of CPU time

Method   t_eval (%)   t_op (%)   t_ger (s)   CPU time to final solution (min)
SES      88.03        11.97      1.4755      12.29
EES1     83.31        16.69      1.5493      12.91
EES2     77.46        22.54      1.6350      10.90
EES3     78.55        21.45      1.7362      13.02
ECES     77.60        22.40      1.7323      8.66

7. Conclusions

This work has presented a new proposal for solving the reactive power dispatch problem. The approach is based on the (μ + λ)-ES paradigm, improved by the control of mutations and by the use of Cauchy-based mutations rather than the classical GMs. Due to the probabilistic nature of the algorithms, a
statistical analysis has been presented. The comparative study has shown that the ECES algorithm performs better than the other approaches. In 100% of the tests, feasible solutions with loss reduction have been achieved. The beneficial effect of CMs in terms of speed and solution quality has been observed. Exhaustive tests were performed and reported in this work using the standard IEEE118 system.

Acknowledgements

The authors wish to acknowledge the support of Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) and Superintendência de Desenvolvimento da Amazônia (SUDAM), Brazil.

References

[1] Pianka ER. Evolutionary ecology. 5th ed. New York: HarperCollins College Publishers, 1994.
[2] Fogel DB. Evolutionary computation: toward a new philosophy of machine intelligence. New York: IEEE Press, 1995.
[3] Hoffman A. Arguments on evolution: a paleontologist's perspective. New York: Oxford University Press, 1989.
[4] Fogel DB. Evolutionary optimization. In: Chen RR, editor. Proceedings of the 26th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, California, 1992. p. 409-14.
[5] Fogel DB, Fogel LJ, Atmar JW. Meta-evolutionary programming. In: Chen RR, editor. Proceedings of the 25th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, California, 1991. p. 540-5.
[6] Fogel DB. An introduction to simulated evolutionary optimization. IEEE Trans Neural Networks 1994;5(1):3-14.
[7] Bäck T, Schwefel HP. An overview of evolutionary algorithms for parameter optimization. Evolutionary Comput 1993:1-27.
[8] Bäck T, Hammel U, Schwefel HP. Evolutionary computation: an overview. Proceedings of the Third IEEE Conference on Evolutionary Computation. Piscataway, NJ: IEEE Press, 1996. p. 20-9.
[9] Rudolph G. Global optimization by means of distributed evolution strategies. Parallel problem solving from nature, Proceedings of the First Workshop PPSN I (Lecture Notes in Computer Science), vol. 496. Berlin: Springer, 1991. p. 209-13.
[10] Herdy M. Reproductive isolation as strategy parameter in hierarchically organized evolution strategies. Parallel problem solving from nature 2. Amsterdam: Elsevier, 1992. p. 207-17.
[11] Rechenberg I. Cybernetic solution path of an experimental problem. Royal Aircraft Establishment, Library Translation No. 1122, August 1965.
[12] Fogel LJ. Autonomous automata. Ind Res 1962;4:14-9.
[13] Ma JT, Lai LL. Optimal reactive power dispatch using evolutionary programming. IEEE/KTH Stockholm Power Technology Conference, Sweden, July 1995. p. 662-7.
[14] Yeh EC, Venkata SS, Sumic Z. Improved distribution system planning using computational evolution. IEEE Trans Power Syst 1996;11(2):668-74.
[15] Lai LL, Ma JT. Application of evolutionary programming to reactive power planning - comparison with nonlinear programming approach. IEEE Trans Power Syst 1997;12(1):198-206.
[16] Lee KY, Yang FF. Optimal reactive planning using evolutionary algorithms: a comparative study for evolutionary programming, evolutionary strategy, genetic algorithm and linear programming. IEEE Trans Power Syst 1998;13(1):101-8.
[17] Park Y, Won J, Park J, Kim D. Generation expansion planning based on an advanced evolutionary programming. IEEE Trans Power Syst 1999;14(1):299-305.
[18] Lai LL. Intelligent system applications in power engineering - evolutionary programming and neural networks. New York: Wiley, 1998.
[19] Fishman GS, Kulkarni VG. Improving Monte Carlo efficiency by increasing variance. Management Sci 1992;38(10):1432-44.
[20] Yao X, Liu Y. Fast evolutionary programming. Proceedings of the Fifth Annual Conference on Evolutionary Programming (EP'96), San Diego, USA. Cambridge, MA: MIT Press, 1996. p. 451-60.
[21] Bäck T, Rudolph G, Schwefel HP. Evolutionary programming and evolution strategies: similarities and differences. Proceedings of the European Conference on ALife, Granada, Spain, 1995.
[22] Gomes JR, Saavedra OR. Optimal reactive power dispatch using evolutionary computation: new extended algorithms. IEE Proceedings - Generation, Transmission and Distribution 1999;146(6):586-92.
[23] Yao X, Liu Y. Fast evolution strategies. Evolutionary Programming VI: Proceedings of the Sixth Annual Conference on Evolutionary Programming (EP97), Lecture Notes in Computer Science, vol. 1213. Berlin: Springer, 1997. p. 151-61.
[24] Yao X, Liu Y, Lin G. Evolutionary programming made faster. IEEE Trans Evolutionary Comput 1999;3:82-101.
[25] Schwefel HP, Rudolph G. Contemporary evolution strategies. Proceedings of the European Conference on ALife, Granada, Spain, 1995.