Effects of Adaptive Social Networks On The Robustness of Evolutionary Algorithms
JAMES M. WHITACRE
Birmingham University, School of Computer Science
Edgbaston, Birmingham, B15 2TT, UK
j.m.whitacre@cs.bham.ac.uk
RUHUL A. SARKER
University of New South Wales at the Australian Defence Force Academy
School of Information Technology and Electrical Engineering,
Canberra 2600, Australia
r.sarker@adfa.edu.au
Q. TUAN PHAM
University of New South Wales, School of Chemical Sciences and Engineering,
Sydney, 2052 Australia
tuan.pham@unsw.edu.au
Abstract—Biological networks are structurally adaptive and take on non-random topological properties
that influence system robustness. Studies are only beginning to reveal how these structural features
emerge; however, the influence of component fitness and community cohesion (modularity) has attracted
interest from the scientific community. In this study, we apply these concepts to an evolutionary
algorithm and allow its population to self-organize using information that the population receives as it
moves over a fitness landscape. More precisely, we employ fitness and clustering based topological
operators for guiding network structural dynamics, which in turn are guided by population changes
taking place over evolutionary time. To investigate the effect on evolution, experiments are conducted on
six engineering design problems and six artificial test functions and compared against cellular genetic
algorithms and panmictic evolutionary algorithm designs. Our results suggest that a self-organizing
topology evolutionary algorithm can exhibit robust search behavior with strong performance observed
over short and long time scales. More generally, the coevolution between a population and its topology
may constitute a promising new paradigm for designing adaptive search heuristics.
1. INTRODUCTION
Local interaction constraints have a strong influence on the global dynamics of complex
systems. Restricting interactions in population-based evolutionary simulations has been found
to promote robustness against parasitic invasion1,2, enhance speciation rates3, sustain
population diversity in rugged fitness landscapes4, facilitate the emergence of cooperative
2 James Whitacre, Ruhul Sarker, and Tuan Pham
behavior5, enhance robustness towards local failures6, and may influence system evolvability,
i.e. a system’s propensity to adapt7.
Parallel developments have taken place in population based search heuristics such as
evolutionary algorithms, where restricting interactions in the competition and mating of
individuals in a population has been found to influence many facets of algorithm behavior.
This has been reported in several seemingly disparate studies involving age restrictions in
genetic algorithms8, genealogical and phenotypic restrictions through Deterministic Crowding
(DC)9, limited interactions between heterogeneous subpopulations10, and explicit static
topologies for constraining interactions in cellular genetic algorithms (cGA)11-16.
Fig. 1: Examples of networks. The networks on the top row represent common EA population structures and are
known as (from left to right) panmictic, island model, and cellular population structures. Networks on the bottom
row have been developed with one or more characteristics of biological networks and are classified as (from left to
right) Self-Organizing Networks (presented here), Hierarchical Networks17, and Small World Networks18. Fig. 1e is
reprinted with permission from AAAS.
The ratio of neighborhood size (i.e. number of connections per node) to system size (i.e. total
number of nodes) provides one measurement of locality that decreases in the networks from
left to right on the top row of Fig. 1. However, these networks also share important
similarities. Within each network (Fig. 1a-c), nodes have the same number of interactions and
the same types of interactions, i.e. regular graphs, and each of the networks is static and
predefined. These properties are notably distinct from those of biological networks. As seen
in metabolic pathways, cell signaling, protein-protein interactions, and gene regulation, most
biological networks have evolved several similar topological characteristics19 and some of
these have been found to support robustness towards certain types of perturbations1,2,6.
While the structure of biological networks has developed slowly over evolutionary time, at
shorter timescales it also supports robust autonomous responses to internal and environmental
perturbations, e.g. through the dynamic formation of modular units. Such evolutionarily-
constrained structural plasticity is observed at every scale in biology including protein
interactions (e.g. molecular assemblies), cellular functions (e.g. lymphocyte avidity and
formation of the immunological synapse), neural rewiring in the brain, morphological
plasticity of multi-cellular organisms, and food web rewiring within ecosystems (e.g. adaptive
foraging). Structural adaptation in these networks changes how information is processed from
the environment and subsequently alters the system-wide traits that emerge from the
integrated actions of their constituent elements. In this study, we investigate whether
mimicking the structural plasticity of biological systems can influence the performance
characteristics of an evolutionary search process.
For instance, robustness measures are sometimes used to quantify the sensitivity of a solution
towards noise or errors in fitness evaluations, the sensitivity of a search process towards local
attractors within a fitness landscape, sensitivity towards initial conditions of the population, or,
more generally, the sensitivity of an algorithm's performance over multiple runs, i.e.
performance reliability. Finally, a robust algorithm framework might also be described as one
that is reliable across problems with somewhat unique fitness landscape properties. Proxies
for many of these types of robustness are evaluated in this study.
1.3. SOTEA
In this paper, we investigate evolutionary algorithms with a population topology that changes
in response to interactions between the population and fitness landscape; what we have
referred to previously as Self-Organizing Topology Evolutionary Algorithms (SOTEA)4.
Although some studies have investigated the search characteristics of EAs with non-regular
population topologies14-16, few have investigated the behavior of EAs that evolve on an
adaptive network. One exception is seen in23, where the grid shape of a cellular GA adapts in
response to performance data using a predefined adaptive strategy. In that system, structural
changes are globally controlled using statistics on system behavior and topological changes
do not deviate from a lattice structure. In contrast, the algorithms in this study adapt to
(topologically) local conditions through a coevolution of network states and network
structure.
Previous SOTEA research: In previous research4, we developed a SOTEA model using
simple rules that allowed a population’s structure to coevolve with EA population dynamics.
Structural modifications were driven by a contextual definition of fitness ranking and the
topological changes were designed to loosely mimic the process of gene duplication and
divergence in genetic evolution. This resulted in population topologies exhibiting some
characteristics that were similar to biological networks and more importantly, a capacity to
sustain genetic diversity within rugged fitness landscapes. An example of a network which
evolved using this algorithm is shown in Fig. 1d.
This earlier SOTEA algorithm was developed to explore theoretical topics related to
evolution on rugged fitness landscapes and was not easily modified for practical optimization
purposes. For instance, the genetic diversity observed in the first SOTEA did not persist in
correlated fitness landscapes (a prominent feature in optimization problems)4 and the
algorithm did not appear to be easily amenable to sexual reproduction. In contrast, the
present study focuses on improving the optimization search characteristics of evolutionary
algorithms with an adaptable population topology. What we report in this paper is the
development of an evolutionary algorithm framework that achieves robust performance
characteristics through the creation and exploitation of structural properties.
In the next section, we briefly review common topological properties of complex networks as
well as network models that can recreate some of these properties in silico. Section 3
presents the SOTEA adaptive network and Section 4 describes our experiments including
Adaptive Networks and Robustness in Evolutionary Algorithms 5
pseudocode and a summary of the SOTEA algorithm. Results are provided in Sections 5
and 6, with discussion and conclusions in Sections 7 and 8.
Degree Distribution: The degree distribution has been found to closely approximate a
power law for many biological systems, while power-law and exponential distributions often
fit abiotic complex systems25. Networks whose degree distribution follows a power law are
often referred to as scale-free networks, in reference to the scale invariance of k.
Clustering Coefficient: Many complex biological systems have high levels of modularity
which is typically indicated by the clustering coefficient. The clustering coefficient for a node
ci is a measure of how well the neighbors of a given node are locally interconnected. More
specifically, ci is defined as the ratio between the number of connections ei among the ki
neighbors of node i and the maximum possible number of connections between these
neighbors which is ki(ki-1)/2. The clustering coefficient for a network c is simply the average
ci value.
ci = 2ei / (ki(ki − 1))    (2)
Although in practice, more efficient calculation methods are used, ei can be formally defined
using the adjacency matrix J as shown in eq. (3).
ei = ∑j=1..N Jij ∑k=1..N Jik Jjk ,   i ≠ j ≠ k    (3)
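As a concrete illustration (a minimal sketch, not the authors' code), eqs. (2)-(3) can be evaluated directly from a 0/1 adjacency matrix:

```python
def clustering_coefficient(J, i):
    """c_i = 2*e_i / (k_i*(k_i - 1)) (eq. 2), with e_i the number of
    links among the k_i neighbours of node i; J is a 0/1 adjacency matrix."""
    nbrs = [j for j, x in enumerate(J[i]) if x]
    k = len(nbrs)
    if k < 2:
        return 0.0            # c_i is undefined for k < 2; report 0
    # e_i via eq. (3): the double sum visits every neighbour pair twice,
    # so it equals 2*e_i and cancels the factor 2 in eq. (2)
    two_e = sum(J[j][m] for j in nbrs for m in nbrs if j != m)
    return two_e / (k * (k - 1))

# Triangle (nodes 0-2) plus a pendant node 3 attached to node 0:
# node 0 sees neighbours {1, 2, 3} with one link among them, c_0 = 1/3.
J = [[0, 1, 1, 1],
     [1, 0, 1, 0],
     [1, 1, 0, 0],
     [1, 0, 0, 0]]
print(clustering_coefficient(J, 0))   # 0.333...
```

The network-wide coefficient c is then the average of c_i over all nodes.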
Clustering-Degree Correlations: A common feature of biological and social systems is the
existence of a hierarchical architecture. Such an architecture is believed to require that
sparsely connected nodes form tight modular units or clusters and communication paths
between these modular units are maintained via the presence of a few highly connected
hubs26. Fig. 1e shows a network with these hallmark signs of modularity and hierarchy which
was grown using the deterministic models presented in17.
The existence of hierarchy in a network is typically measured by evaluating the correlation
between the clustering coefficient and the node degree. Based on the description given above,
a hierarchical network is expected to exhibit higher connectivity for nodes with low clustering
(i.e. hubs) and vice versa. Furthermore, for the feature of hierarchy to be a scale invariant
property of the system, c should have a power law dependence on k.
Degree-Degree Correlations: For many complex networks, there exist degree correlations
such that the probability that a node of degree k is connected to another node of degree k′
depends on k. This correlation is typically measured by first calculating the average nearest
neighbors degree kNN,i.
kNN,i = (1/ki) ∑j=1..N Jij kj    (4)
Networks are classified as assortative if kNN increases with k or disassortative if kNN decreases
with k. Degree correlations are often reported as the value of the slope υ for kNN as a linear
function of k.
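Eq. (4) and the assortativity check can be sketched in a few lines (illustrative code, assuming an unweighted network with no isolated nodes):

```python
def avg_nearest_neighbour_degree(J):
    """k_NN,i = (1/k_i) * sum_j J_ij * k_j (eq. 4), for every node;
    assumes no isolated nodes (k_i > 0)."""
    k = [sum(row) for row in J]
    return [sum(J[i][j] * k[j] for j in range(len(J))) / k[i]
            for i in range(len(J))]

# Star network: the hub (degree 3) sees only degree-1 leaves, and each
# leaf sees only the hub -- the disassortative extreme (k_NN falls as k rises).
J = [[0, 1, 1, 1],
     [1, 0, 0, 0],
     [1, 0, 0, 0],
     [1, 0, 0, 0]]
print(avg_nearest_neighbour_degree(J))   # [1.0, 3.0, 3.0, 3.0]
```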
Random Networks: Thus far, only qualitative statements have been given regarding the
topological properties of complex networks. In many cases, when topological properties are
mentioned as being large or small (as has been mentioned above), the statements are referring
to property values in relation to those values observed in random graphs and particularly the
models developed by Erdös and Rényi27,28. As reviewed in19, random graphs have i) a
characteristic path length LRand similar to that observed in complex networks and
approximated by eq. (5), ii) a Poisson degree distribution (as opposed to the fat tailed degree
distribution in complex networks), and iii) a clustering coefficient cRand given by eq. (6) which
is orders of magnitude smaller than what is typically seen in complex networks18. Random
graphs also do not exhibit any degree correlations or correlations between the degree and the
clustering coefficient.
LRand ≈ ln(N) / ln(kAve)    (5)

cRand = kAve / N    (6)
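The baselines of eqs. (5)-(6) are easy to compute; a small sketch showing why a random graph is a useful yardstick (the example values are hypothetical):

```python
import math

def random_graph_baselines(n, k_ave):
    """Erdos-Renyi reference values used to judge whether a measured path
    length or clustering coefficient counts as 'large' or 'small':
    L_Rand ~ ln(N)/ln(k_Ave) (eq. 5) and c_Rand = k_Ave/N (eq. 6)."""
    return math.log(n) / math.log(k_ave), k_ave / n

# For N = 1000 nodes with 10 links each on average, a random graph has
# short paths (~3 hops) and very low clustering (0.01); a comparable
# complex network with, say, c = 0.2 is twenty times more clustered.
L_rand, c_rand = random_graph_baselines(1000, 10)
print(L_rand, c_rand)
```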
Modularity is a structural feature that often contributes to the robustness of natural systems. Importantly, the
dynamic construction of modularity can alter the behavior of constituent elements to be based
largely on interactions with other members. This not only encourages specialization and
efficiency, it also can protect other parts of a system, e.g. from error propagation. In a search
process, dynamically constructed modularity may help to focus individuals on promising
regions of a solution space while reducing sensitivity to local attractors at the population
level. In other words, dynamically constructed modularity may help to facilitate both
exploitation and exploration within a distributed search process. To encourage modularity, we
use a combination of fitness measures and measures of network clustering (described in
Topological Driving Forces). The network dynamics are implemented by rewiring local
regions of the network (described in Topological Operators).
KSet,i = KMin + (KMax − KMin) ((N − Ranki) / N)²    (8)

Max c*i = 2e*i / (ki(ki − 1))    (9)

e*i = ∑j=1..N Jij ∑k=1..N Jik Jjk Wjk ,   i ≠ j ≠ k    (10)

Wjk = (Rankj × Rankk) / N²    (11)
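The two driving forces of eqs. (8)-(11) can be sketched as follows (an illustrative fragment, assuming rank 1 denotes the fittest individual, consistent with the statement below that high-fitness nodes are driven toward increased connectivity):

```python
def k_set(rank, n, k_min, k_max):
    """Connectivity target from fitness rank (eq. 8).  Assuming rank 1 is
    the fittest individual, the best nodes are driven toward k_max and
    the worst (rank n) toward k_min."""
    return k_min + (k_max - k_min) * ((n - rank) / n) ** 2

def weighted_clustering(J, ranks, i):
    """Rank-weighted local clustering c*_i (eqs. 9-11); J is a 0/1
    adjacency matrix and ranks[j] is the fitness rank of node j."""
    n = len(ranks)
    nbrs = [j for j, x in enumerate(J[i]) if x]
    k = len(nbrs)
    if k < 2:
        return 0.0
    # eq. (10): the double sum counts each neighbour pair twice, playing
    # the role of 2*e*_i in eq. (9); W_jk as in eq. (11)
    two_e_star = sum(J[j][m] * ranks[j] * ranks[m] / n ** 2
                     for j in nbrs for m in nbrs if j != m)
    return two_e_star / (k * (k - 1))
```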
Add Link Rule: For a selected node N1 with kN1 < KSet, a two step random walk is taken,
moving from N1 to N2 to N3. If N3 is not already connected to N1 (JN1,N3 = 0)
and N3 wants to increase its number of links (kN3 < KSet), then a link is added between N1 and
N3.
Remove Link Rule: For a selected node N1 with kN1 > KSet, a two step random walk is taken,
moving from N1 to N2 to N3. If N3 is already connected to N1 (JN1,N3 =1) and kN3 > KSet then
remove the link between N1 and N3. Notice the presence of N2 with JN2,N1 = JN2,N3 = 1 ensures
that connections removed using this rule do not result in network fragmentation.
Transfer Link Rule: For a selected node N1 a two step random walk is taken, moving from
N1 to N2 to N3. If kN3 < KSet, then the connection between N1 and N2 is transferred to now be
between N1 and N3 (i.e. JN1,N2 = 1, JN1,N3 = 0 changes to JN1,N2 = 0, JN1,N3 = 1). To determine if
the transfer will be kept, the local modularity is calculated using (9) for N1, N2, and N3 both
BEFORE and AFTER the connection transfer occurs. If (c*N1 + c*N2 + c*N3) increases after the
connection transfer, then the transfer is kept; otherwise it is reversed. In this way connections
are only rewired when they strengthen the weighted clustering metric and do not cause a net
increase in KSet violations.
Fig. 2 Topological Operators: A selected node N1 will attempt to add, remove or transfer its connections based on
the satisfaction of constraints and the improvement of properties. Add Rule: The dotted line represents a feasible
new connection in the network assuming nodes N1 and N3 both would like to increase their number of connections.
Remove Rule: The gray dotted line represents a feasible connection to remove in the network assuming nodes N1
and N2 both have an excess of connections. Transfer Rule: The connection between N1 and N2 (gray dotted line) is
transferred to now connect N1 and N3 (black dotted line) if this action results in an overall improvement to local
clustering. There are several constraints that each rewiring rule must satisfy in order to be executed. Consequently,
in each instance of rule usage, we make up to ten attempts to satisfy the conditions for executing a rule, i.e. ten
stochastic walks starting from a node N1.
The topological operators determine how connections are added and removed in the network.
These operators were developed based on several considerations. First, unlike systems that
operate in a physical space, there are no a priori constraints on topological changes and it was
thus necessary to determine how stochastic interactions between nodes should take place.
When defining operators for modifying a network topology, we felt it was important to: 1)
maintain the notion of locality that is implied by the network (i.e. prohibit long-range
interactions), 2) ensure that the network does not fragment into disconnected sub-networks,
and 3) keep the rules as simple as possible. These were the primary considerations that guided
the development of these topological operators.
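To make the locality and anti-fragmentation constraints concrete, the Remove Link Rule can be sketched as follows (an illustrative Python fragment, not the authors' implementation; the adjacency structure and the ten-attempt budget follow the descriptions above):

```python
import random

def remove_link_rule(adj, n1, k_target):
    """Remove Link Rule sketch: from an over-connected node n1, take a
    two-step random walk n1 -> n2 -> n3; if n3 is also a neighbour of n1
    and is itself over-connected, drop the n1-n3 link.  Because n2 remains
    linked to both n1 and n3, the removal cannot fragment the network.
    `adj` maps each node to its set of neighbours; up to ten walks are
    attempted, mirroring the rule-usage budget described in the text."""
    if len(adj[n1]) <= k_target[n1]:
        return False                        # n1 has no excess links
    for _ in range(10):
        n2 = random.choice(sorted(adj[n1]))
        n3 = random.choice(sorted(adj[n2]))
        if n3 != n1 and n3 in adj[n1] and len(adj[n3]) > k_target[n3]:
            adj[n1].discard(n3)
            adj[n3].discard(n1)
            return True
    return False
```

Only links that close a triangle through n2 are eligible for removal, which is exactly what guarantees the network stays connected.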
4. Experimental Setup
4.1.1. SOTEA
A high level pseudocode for SOTEA is provided below. The algorithm starts by defining the
initial population P on a ring topology with each node connected to exactly two others (e.g.
Fig. 1c). For a given generation t, each node N1 is subjected to both topological and genetic
operators. Once the topological operators are executed (defined in Section 3.2), N1 is
selected as a parent and a second parent N2 is selected by conducting a two step stochastic
walk across the network. An offspring is created using these parents and a single search
operator that is selected at random from Tab. 1. The fitter of the offspring and N1 is
stored in a temporary list Temp(N1) while the topological and genetic operators are repeated
on the remaining nodes in the population. The population is then updated with the temporary
list to begin the next generation. This sequence of steps is repeated until a stopping criterion
is met. In all experiments, the stopping criterion is a maximum of 150,000 objective
function evaluations.
The two-step stochastic walk mating scheme is used to maintain consistency with the
topological operators. This both simplifies our model and allows for a more intuitive
understanding of system dynamics. This mating scheme is expected to generate a weak
selection pressure in most EAs; however, this is not necessarily the case for SOTEA. Because
high fitness nodes are driven towards increased connectivity, they are more likely to be
encountered in a stochastic walk across the network. Hence, the selection pressure becomes a
locally defined property that can be much stronger than stochastic walk mating would
otherwise create for panmictic or cellular EAs.
Pseudocode for SOTEA
t=0
Initialize P(t) (at random)
Initialize population topology (ring structure) [Fig. 1c]
Evaluate P(t)
Do
    For each N1 in P(t)
        Add Link Rule(N1) [Section 3.2]
        Remove Link Rule(N1) [Section 3.2]
        Transfer Link Rule(N1) [Section 3.2]
        Select N1 as first parent
        Select N2 by two step stochastic walk from N1
        Select Search Operator (at random from Tab. 1)
        Create and evaluate offspring
        Temp(N1) = Best_of(offspring, N1)
    Next N1
    t++
    P(t) = Temp()
Loop until stopping criteria
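The two-step stochastic walk used for mate selection can be sketched as a small Python fragment (an illustration under stated assumptions; the self-mating fallback is ours, not specified in the text):

```python
import random

def stochastic_walk_mate(adj, n1, steps=2, attempts=10):
    """Pick a mate for n1 via a fixed-length random walk over the
    population topology (`adj`: node -> set of neighbours).  High-degree
    nodes are visited more often, so on SOTEA's heterogeneous networks
    this walk concentrates mating pressure on well-connected (and, by
    construction, fitter) individuals.  Walks that return to n1 are
    retried a few times; self-mating remains a fallback (an assumption)."""
    node = n1
    for _ in range(attempts):
        node = n1
        for _ in range(steps):
            node = random.choice(sorted(adj[node]))
        if node != n1:
            break
    return node
```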
4.1.2. cellular GA
SOTEA is compared with cellular and panmictic EAs. The cellular GA used in these
experiments is identical to SOTEA except for two design changes (see pseudocode). First,
the cGA does not implement any topological operators and maintains a static ring topology.
The second change is that during mating, the second parent N2 is selected among all
neighbors within a radius R from N1 using linear ranking selection. This additional departure
from SOTEA was made based on experimental evidence that it enhances the performance of
the cGA. In experiments where mating took place using random walks of length R (i.e. the
mating scheme in SOTEA), the cGA displayed markedly worse performance across all
problems in this study. Moreover, in a thorough study on the performance of distributed and
non-distributed GA designs34, the cGA we use (referred to in34 as “ci”) frequently exhibited
the best performance.
Pseudocode for cGA
t=0
Initialize P(t) (at random)
Initialize population topology (ring structure) [Fig. 1c]
Evaluate P(t)
Do
    For each N1 in P(t)
        Select N1 as first parent
        Select N2 from Neighborhood(N1,R)
        Select Search Operator (at random from Tab. 1)
        Create and evaluate offspring
        Temp(N1) = Best_of(offspring, N1)
    Next N1
    t++
    P(t) = Temp()
Loop until stopping criteria
Eight ES designs are tested which vary by the use of Generational (with elitism) vs. Pseudo
Steady State population updating, the use of Binary Tournament Selection vs. Truncation
Selection, and by the number of search operators. Details are given below for each of the
design conditions.
Population updating: The generational EA design (with elitism for retaining the best parent)
has the parameter settings N=λ=2µ, κ=1 (κ=∞ for best individual). The pseudo steady state
EA design has the parameter settings N=λ=µ, κ=∞.
Tab. 1: The seven search operators used in the cellular GA, SOTEA, and selected Panmictic EA designs are listed
below. More information on each of the search operators can be found in35.
Search Operators
Wright's Heuristic Crossover
Simple Crossover
Extended Line Crossover
Uniform Crossover
BLX-α
Differential Evolution Operator
Single Point Random Mutation
GA Designs: The previous algorithmic framework invokes selection after offspring are
generated and in this way is most similar to evolution strategies. To include experiments with
the more commonly used genetic algorithm, we use the pseudocode below. In this case, κ=∞,
λ=1 (Steady State) and selection from P occurs using either Linear Ranking (Lin) or Binary
Tournament Selection.
Pseudocode for Panmictic EA (GA)
Initialize P
Evaluate P
Do
    P' = {}
    For i = 1 to λ
        {p1, p2} = Select individuals from P
        c = Create an offspring from {p1, p2}
        Add c to P'
    Next i
    P = Replacement(P ∪ P')
    Evaluate P
Loop until stopping criteria
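The two parent-selection schemes used by the GA designs can be sketched as follows (illustrative Python with Baker-style linear-ranking weights; the pressure parameter s is our assumption, not taken from the text):

```python
import random

def binary_tournament(pop, fitness):
    """Binary tournament: draw two individuals at random, keep the fitter
    (maximisation of `fitness` assumed)."""
    a, b = random.sample(range(len(pop)), 2)
    return pop[a] if fitness[a] >= fitness[b] else pop[b]

def linear_ranking(pop, fitness, s=1.5):
    """Linear ranking: selection probability rises linearly with rank.
    s in (1, 2] sets the pressure; s = 2 gives the worst individual zero
    probability.  The value of s here is chosen only for illustration."""
    n = len(pop)
    order = sorted(range(n), key=lambda i: fitness[i])    # worst ... best
    weights = [(2 - s) + 2 * r * (s - 1) / (n - 1) for r in range(n)]
    return pop[random.choices(order, weights=weights, k=1)[0]]
```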
Constraint Handling: Each of the engineering design case studies involves nonlinear
inequality constraints. Solution feasibility is addressed by defining fitness using the
stochastic ranking method presented in36. Parameter settings for stochastic ranking were
taken from recommendations found in36.
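A sketch of the stochastic-ranking idea (after the method cited as36; the value pf = 0.45 is a commonly used setting assumed here, not quoted from the text):

```python
import random

def stochastic_ranking(objs, viols, pf=0.45):
    """Bubble-sort-like ranking for constrained minimisation: adjacent
    individuals are compared by objective value when both are feasible
    (zero constraint violation) or, otherwise, with probability pf; in
    the remaining cases they are compared by constraint violation.
    Returns population indices ordered best-first."""
    n = len(objs)
    idx = list(range(n))
    for _ in range(n):                     # at most n sweeps
        swapped = False
        for j in range(n - 1):
            a, b = idx[j], idx[j + 1]
            if (viols[a] == 0 == viols[b]) or random.random() < pf:
                out_of_order = objs[a] > objs[b]
            else:
                out_of_order = viols[a] > viols[b]
            if out_of_order:
                idx[j], idx[j + 1] = b, a
                swapped = True
        if not swapped:
            break
    return idx
```

The occasional objective-only comparisons let slightly infeasible but promising solutions survive near the constraint boundary.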
5. Performance Results
The experimental results are evaluated using several metrics and statistical tests in order to
gain a clearer picture of the strengths and weaknesses of SOTEA. In concluding the section,
we summarize these results and relate them back to different concepts of algorithm
robustness. A summary of our methods for analyzing algorithm performance is given below
followed by a summary of results for each problem.
Performance profiles: Performance profiles comparing SOTEA and cGA are provided in
Figure 3. Each algorithm searches for up to a maximum of 150,000 objective function
evaluations. Experiments with SOTEA test different settings of Kmax while the cellular GA
was run with different settings of neighborhood radius R. Performance for each EA is
reported as the median objective function value over 30 runs. The caption text in Figure 3
includes optimal (Fopt) or best known (Fbest) objective function values for each problem.
Statistical Tests: To compare performance between specific algorithm designs that are
“tuned” for a particular problem, we take the best algorithm from each class and calculate the
confidence in algorithm performance superiority using a non-parametric statistical test (i.e.
the Mann-Whitney U-Test). To compare algorithm classes, U tests are conducted using all of
the performance results from each class. Tab. 3 provides p values for these tests with
confidence levels under 99% (p>0.01) listed as statistically insignificant.
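For reference, the two-sided U test can be computed with the normal approximation as below (a self-contained sketch; in practice a library routine such as SciPy's `mannwhitneyu` would normally be used):

```python
import math

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U test via the normal approximation
    (reasonable for the 30-run samples used here); ties get midranks."""
    data = sorted([(v, 0) for v in x] + [(v, 1) for v in y])
    values = [v for v, _ in data]
    n = len(data)
    rank_sum_x = 0.0
    i = 0
    while i < n:                         # walk over groups of tied values
        j = i
        while j < n and values[j] == values[i]:
            j += 1
        midrank = (i + j + 1) / 2.0      # average of 1-based ranks i+1..j
        rank_sum_x += midrank * sum(1 for t in range(i, j) if data[t][1] == 0)
        i = j
    n1, n2 = len(x), len(y)
    u = rank_sum_x - n1 * (n1 + 1) / 2.0
    z = (u - n1 * n2 / 2.0) / math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return u, p
```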
Figure 3 Performance profiles for the pressure vessel (Fopt=5850.38), alkylation process (Fopt=1772.77), heat
exchanger network (Fopt=7049.25), gear train (Fbest=2.70E-12, reported in37), tension compression spring
(Fbest=0.01270, reported in38), and welded beam (Fbest=1.7255, reported in39) design problems.
Tab. 2: Performance results for six engineering design problems are shown for twelve Evolutionary Algorithms run
for 3000 generations with algorithm designs varying by the use of Generational (Gen) or Pseudo Steady State (SS)
population updating, the use of Binary Tournament Selection (Tour) or Truncation Selection (Trun), and the number
of search operators (Nops). Performance is presented as the single best objective function value found in 30 runs
FBest as well as the average objective function value over 30 runs FAve. All EAs listed below obtained a feasible
solution within 3000 generations. The single best fitness values found for each problem are in bold.
EA Gen Sel Nops Pressure Vessel Heat Exchanger Alkylation Process
FBest FAve FBest FAve FBest FAve
ES SS Tour 7 6059.70 6190.31 7053.47 7109.20 1771.35 1750.38
ES SS Trun 7 6059.73 6214.31 7056.09 7179.02 1760.77 1630.90
ES Gen Tour 7 5953.06 6123.22 7116.72 7213.38 1711.00 1667.34
ES Gen Trun 7 5964.23 6174.55 7186.97 7250.82 1641.47 1495.13
ES SS Tour 2 5867.87 6382.61 7070.57 7233.18 1756.00 1708.38
ES SS Trun 2 5857.39 6449.57 7093.12 7269.02 1748.95 1661.17
ES Gen Tour 2 6144.69 6340.23 7235.69 7412.11 1621.77 1510.93
ES Gen Trun 2 6188.86 6391.15 7184.51 7398.23 1501.24 1343.48
GA SS Tour 7 5903.55 6418.48 7092.00 7399.75 1767.22 1649.42
GA SS Lin 7 5853.21 6390.27 7050.31 7303.13 1759.20 1533.20
GA SS Tour 2 6091.55 6491.42 7063.97 7290.57 1764.93 1675.21
GA SS Lin 2 6074.73 6617.18 7094.76 7332.24 1751.35 1554.77
EA Gen Sel Nops Gear Train Tension Compression Welded Beam
FBest FAve FBest FAve FBest FAve
ES SS Tour 7 2.70E-12 2.62E-10 0.012665 0.012758 1.72485 1.74602
ES SS Trun 7 2.70E-12 7.70E-10 0.012665 0.012778 1.72494 1.80945
ES Gen Tour 7 2.70E-12 2.70E-12 0.012679 0.012710 1.75465 1.77920
ES Gen Trun 7 2.70E-12 1.09E-11 0.012687 0.012725 1.76485 1.79732
ES SS Tour 2 2.70E-12 1.12E-09 0.012701 0.013861 1.73570 1.96193
ES SS Trun 2 2.31E-11 1.81E-09 0.012804 0.015078 1.73060 2.06087
ES Gen Tour 2 2.70E-12 4.74E-12 0.012739 0.013035 1.83742 1.93124
ES Gen Trun 2 2.70E-12 2.70E-12 0.012694 0.012864 1.75302 1.88472
GA SS Tour 7 2.31E-11 1.12E-09 0.012665 0.012969 1.72599 1.96120
GA SS Lin 7 2.70E-12 6.39E-10 0.012665 0.012906 1.72673 1.89600
GA SS Tour 2 2.31E-11 2.98E-09 0.012879 0.015302 1.72830 2.06871
GA SS Lin 2 2.70E-12 3.14E-09 0.013073 0.015830 1.82331 2.21587
Tab. 3 Mann-Whitney U tests comparing best algorithms from each design class (first entry) and comparing all data
from design classes (second entry). For best in class comparisons (first entry), the best algorithm from a design class
is determined based on median performance after 150,000 evaluations. Winner of test is indicated along with p
value. “insig” indicates p > 0.05.
Problem           PEA vs. SOTEA                        cGA vs. SOTEA                       PEA vs. cGA
Pressure Vessel   SOTEA (p<0.0001), SOTEA (p<0.0001)   SOTEA (p=0.008), SOTEA (p<0.0001)   cGA (p<0.0001), cGA (p<0.0001)
Heat Exchanger    SOTEA (p<0.0001), SOTEA (p<0.0001)   SOTEA (p=0.001), SOTEA (p<0.0001)   cGA (p<0.0001), cGA (p<0.0001)
Welded Beam       SOTEA (p<0.0001), SOTEA (p<0.0001)   insig, SOTEA (p=0.01)               cGA (p<0.0001), cGA (p<0.0001)
Tension Comp.     SOTEA (p<0.0001), SOTEA (p<0.0001)   insig, insig                        cGA (p<0.0001), cGA (p<0.0001)
Fig. 4 Pressure Vessel Drawing. Parameters of the problem include the thickness of the shell Ts, the thickness of the
head Th, the inner radius of the vessel R and the length of the cylindrical section of the vessel L. This figure is taken
from38 and is reprinted with permission from IEEE (© 1999 IEEE).
Results: All but one of the SOTEA algorithms outperformed all of the cGA designs (Figure 3)
and the best tuned algorithm was also a SOTEA design (Tab. 3). Performance tended to
improve as network connectivity was reduced for both SOTEA and the cGA. In light of this
trend, it is not surprising to see the PEA designs performed very poorly on this problem (see
Tab. 3 and Tab. 2). Comparing results between Figure 3 and Tab. 2, the best final solution for a
PEA design is beaten by all SOTEA designs after only 300 generations. Comparisons to
previous studies (Tab. 4) highlight the strong performance of both cGA and SOTEA. Of the
eight studies cited in41, including41 itself, only one other algorithm was able to reach the
objective function values obtained by the distributed EA designs employed here.
Tab. 4 Comparison of results for the pressure vessel design (minimization) problem. Results from other studies were
reported in41. Results are also reported in39; however, their solution violates integer constraints for the 3rd and 4th
parameters, making their final solution infeasible. It should also be mentioned that the equations defining the
problem contain errors in38 and41. The best solution found in these experiments was (F, x1, x2, x3, x4) = (5850.37,
38.8601, 221.365, 12, 6).
Reference Fitness Ranking
Sandgren, 199040 8129.80 11
Fu, 199142 8084.62 10
Kannan and Kramer, 199443 7198.04 9
Cao, 199744 7108.62 8
Deb, 199745 6410.38 7
Lin 199937 6370.70 6
Coello, 199938 6288.74 5
Zeng et al., 200239 5804.39 --
Li et al., 200241 5850.38 3
SOTEA (This Work) 5850.37 1
cGA (This Work) 5850.37 1
Panmictic EA (This Work) 5853.21 4
Results: All but one of the SOTEA algorithms outperformed all cGA designs (Figure 3) and
the best tuned algorithm was a SOTEA design (Tab. 3). For this problem there was no clear
trend between performance and network connectivity. PEA algorithms performed relatively
poorly on this problem (Tab. 2). Comparisons to studies from previous authors (see Tab. 5)
highlight the strong performance of the distributed EAs. Of the stochastic search methods
described in the five studies referenced in47 including their own differential evolution
algorithms, none reached the fitness values obtained by the distributed EA designs employed
here. However, two αBB (Branch and Bound non-linear programming) algorithms were cited
that did find the global optimum and did so more consistently than SOTEA or cGA.
Tab. 5 Comparison of results for the alkylation process design problem (maximization problem). Results from other
authors were reported in47. The best solution found in these experiments was (F, x1, x2, x3, x4, x5, x6, x7) = (1772.77,
1698.18, 53.66, 3031.3, 90.11, 95, 10.5, 153.53).
Fig. 6 Heat Exchanger Network Design involves 1 cold stream that exchanges heat with three hot streams.
Parameters to optimize include heat exchange areas (x1, x2, x3) and stream temperatures (x4, x5, x6, x7, x8).
Results: All of the SOTEA algorithms outperformed the cGA designs (Figure 3) and the best
tuned algorithm was a SOTEA design (Tab. 3). Performance tended to improve as network
connectivity increased in both SOTEA and cGA. Such a trend seems to suggest that
interaction constraints are not as important for this problem which makes the poor
performance of the PEA designs (Tab. 2) somewhat unexpected. Comparing results between
Figure 3 and Tab. 2, the best final result for a Panmictic EA design is beaten by all SOTEA
designs after only 400 generations. Comparisons to other work are less favorable for this
problem. In47, a differential evolution algorithm is introduced that finds the optimal solution 100% of the
time in under 40,000 evaluations. None of the algorithms employed here were able to
obtain that level of performance for this problem. In fact, the best algorithm (SOTEA with
Kmax = 7) was only able to find the optimal solution 65% of the time in 150,000 evaluations.
To make a fair comparison to the results in47, our results were also analyzed at 40,000
evaluations and under these conditions only two of the SOTEA algorithms (and none of the
cellular GAs) were able to find an optimal solution in that amount of time (with the optimal
being found only 10% of the time). Interestingly, this was one of the simplest engineering
design problems tested with only a marginal level of epistasis between parameters35.
Tab. 6 Comparison of results for the heat exchanger network design problem (minimization problem). Results from
other authors were reported in47. The best solution found in these experiments was (F, x1, x2, x3, x4, x5) = (7049.25,
579.19, 1360.13, 5109.92, 182.01, 295.60).
Reference Fitness Ranking
Angira and Babu, 200353 7049.25 1
Babu and Angira, 200647 7049.25 1
SOTEA (This Work) 7049.25 1
cGA (This Work) 7049.25 1
Panmictic EA (This Work) 7050.31 5
Tab. 7 Comparison of results for the gear train design problem (minimization problem). Results from other authors
are reported in37. The best solution found in this study was (F, x1, x2, x3, x4) = (2.70 x10-12, 19, 16, 43, 49).
22 James Whitacre, Ruhul Sarker, and Tuan Pham
Fig. 7 Diagram of Tension Compression Spring. Parameters of the problem include the mean coil diameter D, the
wire diameter d and the number of active coils N which is represented by the number of loops of wire in the
diagram. Forces acting on the spring are shown as P. This figure is taken from38 and is reprinted with permission
from IEEE (© 1999 IEEE).
Results: All but one of the distributed EA designs converge to similar values (Figure 3).
Comparing the results from previous studies, we find strong performance from both
distributed EAs. Of the three earlier studies referenced in Tab. 8 (including38), none found
solutions as good as those reported in this study.
Tab. 8 Comparison of results for the tension compression spring problem (minimization problem). Results from
other authors were reported in38. The best solution found in these experiments was (F, x1, x2, x3) = (0.0126652303,
0.051689, 0.356732, 11.2881).
Reference Fitness Ranking
Belegundu,198254 0.0128334375 6
Arora, 198955 0.0127302737 5
Coello, 199938 0.0127047834 4
SOTEA (This Work) 0.0126652303 1
cGA (This Work) 0.0126652303 1
Panmictic EA (This Work) 0.0126652593 3
Fig. 8: Diagram of a welded beam. The beam load is defined as P with all other parameters shown in the diagram
defining dimensional measurements relevant to the problem. This figure is taken from38 and is reprinted with
permission from IEEE (© 1999 IEEE).
Results: Each of the distributed EA designs converge to similar values (Figure 3) and both
strongly outperformed the PEA (Tab. 2). Comparisons to work from previous authors
highlight the strong performance of both of the distributed EAs. Of the three earlier studies
referenced in Tab. 9 (including39), none found solutions as good as those reported in this
study.
Tab. 9 Comparison of results for the welded beam design problem (minimization problem). Results from other
authors were reported in39. The best solution found in these experiments was (F, x1, x2, x3, x4) = (1.72485,
0.205729, 3.47051, 9.03662, 0.2057296).
landscapes. Information regarding the fitness landscape properties of these problems as well
as formal problem definitions can be found in35.
Fig. 9 Performance for FM (Fopt=0), ECC (shifted from Fopt=0.067416 to Fopt=0), system of linear equations (Fopt=0),
Rastrigin (Fopt=0), Griewangk (Fopt=0), and Watson’s (Fopt=0.01714) test functions.
Tab. 10: Performance results for all six artificial test problems are shown for twelve Evolutionary Algorithms run
for 3000 generations with algorithm designs varying by the use of Generational (Gen) or Pseudo Steady State (SS)
population updating, the use of Binary Tournament Selection (Tour) or Truncation Selection (Trun), and the number
of search operators (Nops). Performance is presented as the single best objective function value found in 20 runs
FBest as well as the average objective function value over 20 runs FAve.
EA Gen Sel Nops Freq. Mod. Error Correcting Code Sys. of Lin. Eq.
FBest FAve FBest FAve FBest FAve
ES SS Tour 7 0.00 15.36 3.53E-03 4.32E-03 8.53E-14 2.12E-05
ES SS Trun 7 6.69 18.28 3.68E-03 4.29E-03 3.16E-05 1.32
ES Gen Tour 7 23.07 26.95 2.47E-03 3.75E-03 10.90 14.58
ES Gen Trun 7 22.87 25.97 3.44E-03 4.13E-03 2.45 5.27
ES SS Tour 2 8.98 15.87 2.70E-07 3.84E-03 1.67 3.54
ES SS Trun 2 0.55 16.49 3.43E-03 3.96E-03 4.26 5.90
ES Gen Tour 2 23.35 26.33 4.18E-03 4.77E-03 50.21 74.11
ES Gen Trun 2 21.95 26.77 2.70E-07 3.17E-03 35.69 51.75
GA SS Tour 7 9.02 16.23 4.03E-03 4.47E-03 0.03 1.88
GA SS Lin 7 0.68 17.74 3.49E-03 4.30E-03 0.04 2.32
GA SS Tour 2 0.22 15.92 3.90E-03 4.55E-03 2.41 4.78
GA SS Lin 2 3.04 16.44 3.59E-03 4.43E-03 3.97 6.25
Rastrigin Griewangk Watson
FBest FAve FBest FAve FBest FAve
ES SS Tour 7 1.25E-10 1.65E-06 0.012 0.052 1.716E-02 2.025E-02
ES SS Trun 7 4.24E-02 1.26E-01 0.049 0.158 1.728E-02 2.922E-02
ES Gen Tour 7 6.33E-01 9.17E-01 0.615 0.751 1.778E-02 1.941E-02
ES Gen Trun 7 8.82E-02 1.96E-01 0.348 0.508 1.730E-02 1.828E-02
ES SS Tour 2 3.10E-02 6.92E-02 0.131 0.216 1.804E-02 4.887E-02
ES SS Trun 2 1.64E-01 2.83E-01 0.154 0.366 1.829E-02 4.369E-02
ES Gen Tour 2 7.82 10.51 1.476 2.729 2.444E-02 5.673E-02
ES Gen Trun 2 4.89 7.53 1.474 2.199 2.205E-02 4.111E-02
GA SS Tour 7 8.99E-02 2.79E-01 0.046 0.212 1.716E-02 4.406E-02
GA SS Lin 7 9.38E-03 1.52E-01 0.089 0.167 1.730E-02 2.957E-02
GA SS Tour 2 1.54E-01 2.93E-01 0.212 0.407 1.901E-02 6.413E-02
GA SS Lin 2 1.00E-01 1.99E-01 0.236 0.431 1.821E-02 5.189E-02
Tab. 11 Mann-Whitney statistical tests comparing best algorithms from each design class (first entry)
and comparing all data from design classes (second entry). For best in class comparisons (first entry),
the best algorithm from a design class is determined based on median performance after 150,000
evaluations. Winner of test is indicated along with p value. “insig” indicates p > 0.05.
Problem            PEA vs. SOTEA                        cGA vs. SOTEA                        PEA vs. cGA
ECC                PEA (p=0.002), insig                 insig, SOTEA (p=0.01)                PEA (p=0.0008), insig
Freq. Mod.         insig, SOTEA (p<0.0001)              insig, insig                         insig, cGA (p<0.0001)
Rastrigin          SOTEA (p<0.0001), SOTEA (p<0.0001)   SOTEA (p<0.0001), SOTEA (p<0.0001)   PEA (p<0.0001), insig
Griewangk          insig, SOTEA (p<0.0001)              insig, insig                         insig, cGA (p<0.0001)
Watson's           SOTEA (p<0.0001), SOTEA (p<0.0001)   SOTEA (p<0.0001), SOTEA (p<0.0001)   cGA (p=0.009), cGA (p<0.0001)
Sys. of Lin. Eq.   SOTEA (p<0.0001), SOTEA (p<0.0001)   SOTEA (p=0.008), SOTEA (p<0.0001)    cGA (p<0.0001), cGA (p<0.0001)
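The comparisons in Tab. 11 use the Mann-Whitney U test on final objective values from repeated runs. As a rough sketch of the test itself (pure Python, two-sided, normal approximation with average ranks for ties and no continuity or tie-variance correction; an illustration, not the authors' implementation):

```python
import math

def mann_whitney_u(a, b):
    """Two-sided Mann-Whitney U test via the normal approximation."""
    n1, n2 = len(a), len(b)
    pooled = sorted([(x, 0) for x in a] + [(y, 1) for y in b])
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1
        for t in range(i, j):              # tied values share their average rank
            ranks[t] = (i + 1 + j) / 2.0
        i = j
    r1 = sum(r for r, (_, grp) in zip(ranks, pooled) if grp == 0)
    u1 = r1 - n1 * (n1 + 1) / 2.0
    u = min(u1, n1 * n2 - u1)              # the smaller of the two U statistics
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu) / sigma
    return u, math.erfc(abs(z) / math.sqrt(2.0))  # two-sided p value

# Example: clearly separated samples give a small p value.
u, p = mann_whitney_u([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
```

For the tests in Tab. 11, each sample would hold the 20 final objective values of one algorithm, and p < 0.05 marks a significant difference.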
Frequency Modulation: SOTEA designs are found to be both the best and worst performers
(compared to the cGA) throughout the optimization runs (Fig. 9).
ECC: Both SOTEA and the cGA designs are able to make steady progress toward the
optimal solution with little difference between the two designs (Fig. 9). One PEA was found to
be the best tuned algorithm as seen in Tab. 11 (this is the only artificial test function where a
PEA dominates).
System of Linear Equations: SOTEA designs strongly outperform the cGA (Fig. 9).
Comparison with results in Tab. 10 finds that both distributed EA designs were able to strongly
outperform the PEAs.
Rastrigin: SOTEA designs strongly outperform the cGA and the PEA. Although both
distributed EA designs have significantly better median performance than the PEA designs,
there is some indication that the PEA can occasionally find good solutions (Tab. 10).
Griewangk: SOTEA designs are very similar in performance to the cellular GA as seen in
Fig. 9 and Tab. 11. Both distributed EA designs perform better than the PEA designs (Tab.
11).
Watson: SOTEA designs strongly outperform the cGA (Fig. 9 and Tab. 11). Both distributed
EA designs perform better than the Panmictic EA designs (Tab. 11).
Tab. 12 Overall performance statistics for the Panmictic EA, the cellular GA, and SOTEA. Statistics in columns 2-4
are averaged over all test problems.
EA Design       % of runs where   % of runs where   U-Test    % of problems where   % of problems where
                EA found best     EA was top 5%     p<0.05    EA was best design    EA found best
Panmictic EA    4.0%              4.8%              failed    8.3%                  16.7%
cellular GA     9.1%              10.4%             failed    12.5%                 66.7%
SOTEA           17.3%             28.5%             passed    79.2%                 83.3%
The last two statistics in Tab. 12 are confined to the best implementations of an EA design
class and thus indicate algorithm effectiveness after parameter tuning. For instance, the fourth
statistic (Tab. 12, column five) measures the proportion of problems where the algorithm
obtained the best median objective function value. This indicates the likelihood of preferring
a given algorithm when it can only be run a small number of times on a problem. The final
statistic (Tab. 12, column six) measures the proportion of problems where the algorithm was
able to find the best known solution at least one time. This indicates likely algorithm
preference when repeated optimization runs are possible. For each of the statistics, and in the
context of the selected test problems, SOTEA is found to be better than any of the other
algorithm design classes. Particularly noteworthy are the results in column five which
indicate that a “tuned” SOTEA design was the best EA design in about 80% of the problems
tested. Moreover, we have greater than 95% confidence that SOTEA is a superior search
method for the problems considered in this study.
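As a concrete illustration of the column-five statistic, the sketch below picks, for each problem, the design class whose tuned variant has the best (lowest) median final objective value, then reports each design's share of wins. The run data here are invented for illustration; only the selection logic mirrors the description above.

```python
from statistics import median

# runs[design][problem] -> final objective values over repeated runs (minimization)
runs = {
    "PEA":   {"p1": [3.0, 2.0, 4.0], "p2": [1.0, 1.2, 1.1]},
    "cGA":   {"p1": [2.0, 2.1, 1.9], "p2": [1.5, 1.4, 1.6]},
    "SOTEA": {"p1": [1.0, 1.1, 0.9], "p2": [1.3, 1.2, 1.4]},
}
problems = ["p1", "p2"]

# Design with the best median on each problem (Tab. 12, column-five logic)
best_design = {p: min(runs, key=lambda d: median(runs[d][p])) for p in problems}
# Share of problems on which each design was the best tuned design
share = {d: sum(best_design[p] == d for p in problems) / len(problems) for d in runs}
```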
6. Topological Analysis
Understanding how SOTEA establishes a robust search process requires a deeper
examination of the spatio-temporal dynamics of SOTEA and how these are
influenced by fitness landscape properties. With this in mind, we conducted a genealogical
analysis using tools described in56 and a topological analysis reported here. The genealogical
analysis evaluated gene takeover dynamics across a population, however these tests did not
provide clear insights into SOTEA search behavior and the results are not presented. In this
section, we report the structural characteristics of SOTEA and compare this with the cellular
GA, Panmictic EA, and values observed in biological systems. Here we find that, unlike
standard EA population topologies, SOTEA obtains several topological characteristics
observed in biological systems that are in some cases potentially useful to a search process.
Fig. 10 Topological properties for SOTEA with different values of Kmax and population sizes of N = 50 (♦), 100 (◼),
and 200 (▲). Characteristics include a) the characteristic path length (L), b) the correlation between c and k (c-k),
c) the slope of the degree correlation (υ), d) the average clustering coefficient cave, and e) the average degree kave.
Tab. 13: Topological characteristics for the Panmictic EA, cGA, and SOTEA. The topological characteristics for
biological systems are taken from24 and references therein. In column five, γ refers to the exponent for k
distributions that fit a power law. Two values for γ are given for the metabolic network and refer to the in/out-
degree exponents (due to this being a directed network). Results for degree correlations are given as the slope υ of
kNN vs k. N is the population size and R is a correlation coefficient for the stated proportionalities.
Network            N       kave        L           k distribution                      cave (crand)                              Degree correlation
Complex Networks   large   kave << N   L ~ log N   Power Law, 2 < γ < 3 (Scale Free)   cave >> crand, Power Law (Hierarchical)   either υ > 0 or υ < 0
Protein            2,115   2.12        6.80        Power Law, γ = 2.4                  0.07 (0.003), Power Law                   υ < 0
Metabolic          778     7.40        3.2         Power Law, γ = 2.2/2.1              0.7 (0.004), Power Law                    υ < 0
Fig. 11 SOTEA Network Visualizations with population sizes N = 50 (top), N = 100 (middle), and N = 200
(bottom).
7. Discussion
EAs in previous studies based on a priori knowledge about a problem’s fitness landscape
properties.
Some studies have suggested that a population that is spatially distributed over a static
topology can enhance some types of robustness in an EA (e.g. see34). How this occurs has not
been fully determined; however, intuition suggests that a distributed population topology
influences population dynamics by creating a weaker coupling across the population. A
weaker coupling can attenuate fast systemic responses to local attractors and may allow for a
more diffuse and explorative search to take place. On the other hand, the use of a static
topology is itself a global and inflexible approach to achieving robustness to local attractors.
Moreover, it is expected to reduce the speed by which any information can be exploited since
it establishes a global predefined tradeoff between exploration and exploitation in the system.
Alternatively, a topology that adapts in response to local attractors has the potential to allow
for qualitative differences in search behavior for different segments of the population.
8. Conclusions
SOTEA Network Model: A Self-Organizing Topology Evolutionary Algorithm (SOTEA)
has been presented with a distributed population structure that coevolves with EA population
dynamics; the first known optimization algorithm with such a coevolving state-structure
relationship. Based on the results of this study as well as theoretical issues raised in the
introduction, we feel that the coevolution of states and structure provides a unique and
interesting extension to the design of search algorithms.
The general framework that allows for this coevolution to be implemented is straightforward.
With the population defined on a network, rules are used to modify the network topology
based on the current state of the population. In particular, structural changes are initiated by a
dynamic state value in each node, e.g. individual fitness. Node state dynamics are a simple
consequence of the genetic operators implemented within the evolutionary search process.
The SOTEA model presented in this paper was designed to structurally adapt to the fitness
landscape based on local network information and local topological changes; features that
were motivated by both practical implementation concerns and theoretical considerations.
Network dynamics were driven by i) an adaptive connectivity where higher fitness individuals
were encouraged to obtain higher levels of connectivity and ii) an adaptive definition of
community that encourages high levels of clustering amongst nodes with low fitness.
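A minimal sketch of these two rules follows. This is a hypothetical illustration of the described mechanism, not the authors' SOTEA code: the function name, the two-candidate fitness comparison in rule i), and the median-fitness threshold in rule ii) are our own assumptions.

```python
import random

def rewire(adj, fitness, n_moves=1, rng=random):
    """One structural step on an undirected population network (dict of sets):
    i) redirect a link toward a fitter node (adaptive connectivity), then
    ii) close a triangle around a low-fitness node (adaptive clustering)."""
    nodes = list(adj)
    for _ in range(n_moves):
        v = rng.choice(nodes)
        if not adj[v]:
            continue
        # i) drop one of v's links and reattach it to the fitter of two
        #    random candidates, so fitter nodes accumulate connectivity
        old = rng.choice(sorted(adj[v]))
        cand = max(rng.sample(nodes, 2), key=fitness.get)
        if cand not in (v, old) and cand not in adj[v]:
            adj[v].discard(old)
            adj[old].discard(v)
            adj[v].add(cand)
            adj[cand].add(v)
        # ii) if v has below-median fitness, link two of its neighbours
        #     to raise clustering in low-fitness communities
        median_f = sorted(fitness.values())[len(nodes) // 2]
        if fitness[v] < median_f and len(adj[v]) >= 2:
            a, b = rng.sample(sorted(adj[v]), 2)
            adj[a].add(b)
            adj[b].add(a)
    return adj
```

Either rule only uses information local to the chosen node, in line with the locality constraint described above.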
Topological Analysis: Self-organization of the population network topology resulted in high
levels of clustering, small characteristic path length, and correlations between the clustering
coefficient and a node’s degree. Each of these characteristics is broadly similar to what is
observed in biological systems.
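These statistics can be reproduced from a population network given as an adjacency list. The sketch below (pure Python with illustrative names; not the analysis code used in the study) computes the average degree kave, the average clustering coefficient cave, the characteristic path length L, and the Pearson correlation between c and k:

```python
from collections import deque

def clustering(adj, v):
    """Fraction of a node's neighbour pairs that are themselves linked."""
    ns = list(adj[v])
    k = len(ns)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k) if ns[j] in adj[ns[i]])
    return 2.0 * links / (k * (k - 1))

def characteristic_path_length(adj):
    """Average shortest-path length over reachable pairs (BFS from each node)."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(d for v, d in dist.items() if v != s)
        pairs += len(dist) - 1
    return total / pairs if pairs else 0.0

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

# Small example: a triangle (0-1-2) with a pendant node (3)
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
k = {v: len(ns) for v, ns in adj.items()}
c = {v: clustering(adj, v) for v in adj}
k_ave = sum(k.values()) / len(adj)
c_ave = sum(c.values()) / len(adj)
L = characteristic_path_length(adj)
ck_corr = pearson([k[v] for v in adj], [c[v] for v in adj])
```

The degree-correlation slope υ reported in Tab. 13 would additionally require regressing the average neighbour degree kNN against k.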
Performance: A number of engineering design problems and artificial test functions were
selected to evaluate the robustness of the new SOTEA algorithm compared with another
distributed design, the cellular GA. Results indicate the SOTEA algorithm often had better
performance and more consistent results compared with the cGA. Both of the distributed
9. References
13. Giacobini, M., M. Tomassini, A.G.B. Tettamanzi, and E. Alba, Selection intensity in
cellular evolutionary algorithms for regular lattices. IEEE Transactions on
Evolutionary Computation, 2005. 9(5): p. 489-505.
14. Preuss, M. and C. Lasarczyk, On the Importance of Information Speed in Structured
Populations. Lecture Notes in Computer Science, 2004: p. 91-100.
15. Giacobini, M., M. Tomassini, and A. Tettamanzi. Takeover time curves in random
and small-world structured populations. in GECCO. 2005: ACM New York, NY,
USA.
16. Giacobini, M., M. Preuss, and M. Tomassini, Effects of Scale-Free and Small-World
Topologies on Binary Coded Self-adaptive CEA. Lecture Notes in Computer
Science, 2006. 3906: p. 86.
17. Ravasz, E., A.L. Somera, D.A. Mongru, Z.N. Oltvai, and A.L. Barabási,
Hierarchical Organization of Modularity in Metabolic Networks. Science, 2002.
297: p. 1551–1555.
18. Watts, D.J. and S.H. Strogatz, Collective dynamics of 'small-world' networks.
Nature, 1998. 393(6684): p. 440-442.
19. Albert, R. and A.L. Barabási, Statistical mechanics of complex networks. Reviews of
Modern Physics, 2002. 74(1): p. 47-97.
20. Kitano, H., Biological robustness. Nature Reviews Genetics, 2004. 5(11): p. 826-
837.
21. Waddington, C.H., Genetic Assimilation of an Acquired Character. Evolution, 1953.
7(2): p. 118-126.
22. Agrawal, A.A., Phenotypic Plasticity in the Interactions and Evolution of Species.
Science, 2001. 294(5541): p. 321-326.
23. Alba, E. and B. Dorronsoro, The Exploration/Exploitation Tradeoff in Dynamic
Cellular Genetic Algorithms. IEEE Transactions on Evolutionary Computation,
2005. 9(2): p. 126-142.
24. Boccaletti, S., V. Latora, Y. Moreno, M. Chavez, and D.U. Hwang, Complex
networks: Structure and dynamics. Physics Reports, 2006. 424(4-5): p. 175-308.
25. Newman, M.E.J., The structure and function of complex networks. SIAM Review,
2003. 45: p. 167-256.
26. Barabási, A.L. and Z.N. Oltvai, Network biology: understanding the cell's functional
organization. Nature Reviews Genetics, 2004. 5(2): p. 101-113.
27. Erdös, P. and A. Rényi, On random graphs. Publ. Math. Debrecen, 1959. 6: p. 290-
297.
28. Erdös, P. and A. Rényi, On the evolution of random graphs. Bulletin of the Institute
of International Statistics, 1961. 38: p. 343-347.
29. Barabási, A.L. and R. Albert, Emergence of Scaling in Random Networks. Science,
1999. 286(5439): p. 509-512.
30. Wagner, A., Evolution of Gene Networks by Gene Duplications: A Mathematical
Model and its Implications on Genome Organization. Proceedings of the National
Academy of Sciences, USA, 1994. 91(10): p. 4387-4391.
31. Caldarelli, G., A. Capocci, P. De Los Rios, and M.A. Muñoz, Scale-Free Networks
from Varying Vertex Intrinsic Fitness. Physical Review Letters, 2002. 89(25): p.
258702.
32. Vazquez, A., Growing network with local rules: Preferential attachment, clustering
hierarchy, and degree correlations. Physical Review E, 2003. 67(5): p. 56104.
33. Pollner, P., G. Palla, and T. Vicsek, Preferential attachment of communities: The
same principle, but a higher level. Europhysics Letters, 2006. 73(3): p. 478-484.
34. Alba, E. and M. Tomassini, Parallelism and evolutionary algorithms. IEEE
Transactions on Evolutionary Computation, 2002. 6(5): p. 443-462.
35. Whitacre, J.M., Adaptation and Self-Organization in Evolutionary Algorithms.
2007, University of New South Wales: PhD Thesis. p. 283.
36. Runarsson, T.P. and X. Yao, Stochastic ranking for constrained evolutionary
optimization. IEEE Transactions on Evolutionary Computation, 2000. 4(3): p. 284-
294.
37. Lin, Y.C., F.S. Wang, and K.S. Hwang, A hybrid method of evolutionary algorithms
for mixed-integer nonlinear optimization problems. Congress on Evolutionary
Computation, 1999. 3.
38. Coello, C.A.C., Self-adaptive penalties for GA-based optimization. Congress on
Evolutionary Computation, 1999. 1.
39. Zeng, S.Y., L.X. Ding, and L.S. Kang, An evolutionary algorithm of contracting
search space based on partial ordering relation for constrained optimization
problems. Proceedings of the Conference on Algorithms and Architectures for
Parallel Processing, 2002: p. 76-81.
40. Sandgren, E., Nonlinear integer and discrete programming in mechanical design
optimization. Journal of Mechanical Design, 1990. 112(2): p. 223–229.
41. Li, Y., L. Kang, H. De Garis, Z. Kang, and P. Liu, A Robust Algorithm for Solving
Nonlinear Programming Problems. International Journal of Computer Mathematics,
2002. 79(5): p. 523-536.
42. Fu, J., R.G. Fenton, and W.L. Cleghorn, A mixed integer-discrete-continuous
programming method and its application to engineering design optimization.
Engineering optimization, 1991. 17(4): p. 263-280.
43. Kannan, B.K. and S.N. Kramer, Augmented Lagrange multiplier based method for
mixed integer discrete continuous optimization and its applications to mechanical
design. ASME, 1993. 65: p. 103-112.
44. Cao, Y.J. and Q.H. Wu, Mechanical design optimization by mixed-variable
evolutionary programming. Proceedings of the Conference on Evolutionary
Computation, 1997: p. 443–6.
45. Deb, K., Optimal design of a welded beam via genetic algorithms. AIAA Journal,
1991. 29(11): p. 2013-2015.
46. Sauer, R.N., A.R. Colville, and C.W. Burwick, Computer Points Way to More
Profits. Hydrocarbon Processing, 1964. 84(2).
47. Babu, B.V. and R. Angira, Modified differential evolution(MDE) for optimization of
non-linear chemical processes. Computers and Chemical Engineering, 2006. 30(6):
p. 989-1002.