
EFFECTS OF ADAPTIVE SOCIAL NETWORKS ON THE ROBUSTNESS OF EVOLUTIONARY ALGORITHMS

JAMES M. WHITACRE
Birmingham University, School of Computer Science
Edgbaston, Birmingham, B15 2TT, UK
j.m.whitacre@cs.bham.ac.uk

RUHUL A. SARKER
University of New South Wales at the Australian Defence Force Academy
School of Information Technology and Electrical Engineering,
Canberra 2600, Australia
r.sarker@adfa.edu.au

Q. TUAN PHAM
University of New South Wales, School of Chemical Sciences and Engineering,
Sydney, 2052 Australia
tuan.pham@unsw.edu.au

Abstract—Biological networks are structurally adaptive and take on non-random topological properties
that influence system robustness. Studies are only beginning to reveal how these structural features emerge; however, the influence of component fitness and community cohesion (modularity) has attracted interest from the scientific community. In this study, we apply these concepts to an evolutionary
algorithm and allow its population to self-organize using information that the population receives as it
moves over a fitness landscape. More precisely, we employ fitness and clustering based topological
operators for guiding network structural dynamics, which in turn are guided by population changes
taking place over evolutionary time. To investigate the effect on evolution, experiments are conducted on
six engineering design problems and six artificial test functions and compared against cellular genetic
algorithms and panmictic evolutionary algorithm designs. Our results suggest that a self-organizing
topology evolutionary algorithm can exhibit robust search behavior with strong performance observed
over short and long time scales. More generally, the coevolution between a population and its topology
may constitute a promising new paradigm for designing adaptive search heuristics.

Keywords: Evolutionary Algorithms; Network Evolution; Optimization; Adaptive Population Topology; Self-Organization

1. INTRODUCTION
Local interaction constraints have a strong influence on the global dynamics of complex
systems. Restricting interactions in population-based evolutionary simulations has been found
to promote robustness against parasitic invasion1,2, enhance speciation rates3, sustain
population diversity in rugged fitness landscapes4, facilitate the emergence of cooperative

behavior5, enhance robustness towards local failures6, and may influence system evolvability,
i.e. a system’s propensity to adapt7.
Parallel developments have taken place in population based search heuristics such as
evolutionary algorithms, where restricting interactions in the competition and mating of
individuals in a population has been found to influence many facets of algorithm behavior.
This has been reported in several seemingly disparate studies involving age restrictions in
genetic algorithms8, genealogical and phenotypic restrictions through Deterministic Crowding
(DC)9, limited interactions between heterogeneous subpopulations10, and explicit static
topologies for constraining interactions in cellular genetic algorithms (cGA)11-16.

1.1. Population Networks for Evolutionary Algorithms


Defining an EA population on a network modifies an EA by localizing its genetic operators,
i.e. restricting mating and selection to occur only among individuals directly connected or
near each other within the network. Three types of population structures commonly studied in
EAs are shown on the top row of Fig. 1. The fully connected graph in Fig. 1a represents the
canonical EA design, which we refer to as the panmictic EA (PEA). In PEA, each individual
(represented by a node in the graph) can interact with every other and no definition of locality
is possible. The network in Fig. 1b represents an island model where individuals reside in
panmictic subgroups or islands. In Fig. 1b, the large arrows represent migrations between
islands that occur every few generations. As a consequence of this topology, locality is
specified on a scale that can be considerably larger than the individual. The final EA
structure shown in Fig. 1c is referred to as a cellular Genetic Algorithm (cGA). In cGA, the
network of interactions takes on a lattice structure with interactions constrained by the
dimensionality of the lattice space. The ring topology in Fig. 1c is an example of a one
dimensional lattice with periodic boundary conditions. With the cGA, each individual has a
unique environment defined by its own set of links, i.e. a neighborhood.

Fig. 1: Examples of networks. The networks on the top row represent common EA population structures and are
known as (from left to right) panmictic, island model, and cellular population structures. Networks on the bottom
row have been developed with one or more characteristics of biological networks and are classified as (from left to
right) Self-Organizing Networks (presented here), Hierarchical Networks17, and Small World Networks18. Fig. 1e is
reprinted with permission from AAAS.

The ratio of neighborhood size (i.e. number of connections per node) to system size (i.e. total
number of nodes) provides one measurement of locality that decreases in the networks from
left to right on the top row of Fig. 1. However, these networks also share important
similarities. Within each network (Fig. 1a-c), nodes have the same number and types of interactions, i.e. the graphs are regular, and each network is static and predefined. These properties are notably distinct from those of biological networks. As seen
in metabolic pathways, cell signaling, protein-protein interactions, and gene regulation, most
biological networks have evolved several similar topological characteristics19 and some of
these have been found to support robustness towards certain types of perturbations1,2,6.
While the structure of biological networks has developed slowly over evolutionary time, at
shorter timescales it also supports robust autonomous responses to internal and environmental
perturbations, e.g. through the dynamic formation of modular units. Such evolutionarily-
constrained structural plasticity is observed at every scale in biology including protein
interactions (e.g. molecular assemblies), cellular functions (e.g. lymphocyte avidity and
formation of the immunological synapse), neural rewiring in the brain, morphological
plasticity of multi-cellular organisms, and food web rewiring within ecosystems (e.g. adaptive
foraging). Structural adaptation in these networks changes how information is processed from
the environment and subsequently alters the system-wide traits that emerge from the
integrated actions of their constituent elements. In this study, we investigate whether
mimicking the structural plasticity of biological systems can influence the performance
characteristics of an evolutionary search process.

1.2. Robustness in Biology and Search Heuristics


In systems biology, robustness typically refers to the capacity to maintain system integrity,
functionality, or phenotypic traits in the face of component changes or failures or in the face
of changes in the external environment. This does not imply that the biological system is
static or in equilibrium but only that the measured system property that is robust has displayed
little sensitivity to encountered perturbations20. Thus we can speak of the robustness of
development in multi-cellular organisms (i.e. developmental canalization21) or the robustness
of an organism’s fitness post-development that is achieved through adaptive phenotypic
plasticity22. In either case, aspects of a system’s morphological structure are driven to new
configurations, which are partly guided by feedback from the external environment and act to
confer stability within higher level traits.
Within the context of a stochastic search process such as an evolutionary algorithm, our
interest is in robustness to what one might call “intermittent imperfections in search bias”.
These intermittent imperfections arise at several distinct scales within optimization. For
instance, robustness measures are sometimes used to quantify the sensitivity of a solution
towards noise or errors in fitness evaluations, the sensitivity of a search process towards local
attractors within a fitness landscape, sensitivity towards initial conditions of the population, or
more generally, the sensitivity of an algorithm’s performance over multiple runs, i.e.
performance reliability. Finally, a robust algorithm framework might also be described as one
that is reliable across problems with somewhat unique fitness landscape properties. Proxies
for many of these types of robustness are evaluated in this study.

1.3. SOTEA
In this paper, we investigate evolutionary algorithms with a population topology that changes
in response to interactions between the population and fitness landscape; what we have
referred to previously as Self-Organizing Topology Evolutionary Algorithms (SOTEA)4.
Although some studies have investigated the search characteristics of EAs with non-regular
population topologies14-16, few have investigated the behavior of EAs that evolve on an adaptive network. One exception is seen in Ref. 23, where the grid shape of a cellular GA adapts in
response to performance data using a predefined adaptive strategy. In that system, structural
changes are globally controlled using statistics on system behavior and topological changes
do not deviate from a lattice structure. In contrast, the algorithms in this study adapt to
(topologically) local conditions through a coevolution of network states and network
structure.
Previous SOTEA research: In previous research4, we developed a SOTEA model using
simple rules that allowed a population’s structure to coevolve with EA population dynamics.
Structural modifications were driven by a contextual definition of fitness ranking and the
topological changes were designed to loosely mimic the process of gene duplication and
divergence in genetic evolution. This resulted in population topologies exhibiting some
characteristics that were similar to biological networks and more importantly, a capacity to
sustain genetic diversity within rugged fitness landscapes. An example of a network which
evolved using this algorithm is shown in Fig. 1d.
This earlier SOTEA algorithm was developed to explore theoretical topics related to
evolution on rugged fitness landscapes and was not easily modified for practical optimization
purposes. For instance, the genetic diversity observed in the first SOTEA did not persist in
correlated fitness landscapes (a prominent feature in optimization problems)4 and the
algorithm did not appear to be easily amenable to sexual reproduction. In contrast, the
present study focuses on improving the optimization search characteristics of evolutionary
algorithms with an adaptable population topology. What we report in this paper is the
development of an evolutionary algorithm framework that achieves robust performance
characteristics through the creation and exploitation of structural properties.
In the next section, we briefly review common topological properties of complex networks as well as network models that can recreate some of these properties in silico. Section 3 presents the SOTEA adaptive network and Section 4 describes our experiments, including pseudocode and a summary of the SOTEA algorithm. Results are provided in Sections 5 and 6, with discussion and conclusions in Sections 7 and 8.

2. Structural Characteristics of Complex Networks

2.1. Properties of real networks


Many natural and manmade systems consist of large networks of interacting components as
seen in biology (e.g. gene regulatory networks, food webs, neural networks), social systems
(e.g. co-authorship, personal relationships, organizations) and manmade systems (e.g. internet,
power grids). Despite the considerable simplifications needed to create network
representations of these systems and despite their inherent differences in scale, environmental
context and functionality, many real networks have surprising similarities in their topological
properties. These similarities include small characteristic path lengths, high clustering
coefficients, fat-tailed degree distributions (e.g. power law), degree correlations, and low
average connectivity. Each of these features is notably distinct from random graphs and
regular lattices. Below we describe and formally define these topological properties, and in
Section 6. we use these properties to characterize the networks evolved in this study.
Comprehensive descriptions of these properties can also be found in19,24,25.

2.2. Topological Metrics


Networks are represented by an adjacency matrix J of size N, such that individual nodes i and
j are connected (not connected) when Jij=1 (Jij=0). All networks discussed in this study are
unweighted and undirected (symmetric).
Characteristic Path Length: The path length is the shortest distance between two nodes in a
network. The characteristic path length L is the average path length over all node pair
combinations in a network. Generally, L grows very slowly with increasing system size (e.g.
population size) N in complex systems. For instance, networks exhibiting the “Small World”
property, such as the network in Fig. 1f, have L proportional to log N (Ref. 26).
Degree Average: The degree ki is the number of connections that node i has with other
nodes in the network. The degree average kave is simply k averaged over all nodes in the
network. The degree average is expected to remain small, even for large networks24.
k_i = \sum_{j=1}^{N} J_{i,j} \qquad (1)

Degree Distribution: The degree distribution has been found to closely approximate a
power law for many biological systems with power law and exponential distributions often
fitting abiotic complex systems25. Networks which display a power law k distribution are
often referred to as scale free networks in reference to the scale invariance of k.

Clustering Coefficient: Many complex biological systems have high levels of modularity
which is typically indicated by the clustering coefficient. The clustering coefficient for a node
ci is a measure of how well the neighbors of a given node are locally interconnected. More
specifically, ci is defined as the ratio between the number of connections ei among the ki
neighbors of node i and the maximum possible number of connections between these
neighbors which is ki(ki-1)/2. The clustering coefficient for a network c is simply the average
ci value.

c_i = \frac{2 e_i}{k_i (k_i - 1)} \qquad (2)
Although in practice, more efficient calculation methods are used, ei can be formally defined
using the adjacency matrix J as shown in eq. (3).
e_i = \sum_{j=1}^{N} \left( J_{ij} \sum_{k=1}^{N} J_{ik} J_{jk} \right), \quad i \neq j \neq k \qquad (3)
Clustering-Degree Correlations: A common feature of biological and social systems is the
existence of a hierarchical architecture. Such an architecture is believed to require that
sparsely connected nodes form tight modular units or clusters and communication paths
between these modular units are maintained via the presence of a few highly connected
hubs26. Fig. 1e shows a network with these hallmark signs of modularity and hierarchy which
was grown using the deterministic models presented in17.
The existence of hierarchy in a network is typically measured by evaluating the correlation
between the clustering coefficient and the node degree. Based on the description given above,
a hierarchical network is expected to exhibit higher connectivity for nodes with low clustering
(i.e. hubs) and vice versa. Furthermore, for the feature of hierarchy to be a scale invariant
property of the system, c should have a power law dependence on k.
Degree-Degree Correlations: For many complex networks, there exist degree correlations
such that the probability that a node of degree k is connected to another node of degree k`
depends on k. This correlation is typically measured by first calculating the average nearest
neighbors degree kNN,i.
k_{NN,i} = \frac{1}{k_i} \sum_{j=1}^{N} J_{i,j}\, k_j \qquad (4)

Networks are classified as assortative if kNN increases with k or disassortative if kNN decreases
with k. Degree correlations are often reported as the value of the slope υ for kNN as a linear
function of k.
Random Networks: Thus far, only qualitative statements have been given regarding the
topological properties of complex networks. In many cases, when topological properties are
described as being large or small (as noted above), the statements are referring
to property values in relation to those values observed in random graphs and particularly the
models developed by Erdös and Rényi27,28. As reviewed in19, random graphs have i) a
characteristic path length LRand similar to that observed in complex networks and
approximated by eq. (5), ii) a Poisson degree distribution (as opposed to the fat tailed degree
distribution in complex networks), and iii) a clustering coefficient cRand given by eq. (6) which
is orders of magnitude smaller than what is typically seen in complex networks18. Random
graphs also do not exhibit any degree correlations or correlations between the degree and the
clustering coefficient.
L_{Rand} \approx \frac{\ln(N)}{\ln(k_{Ave})} \qquad (5)

c_{Rand} = \frac{k_{Ave}}{N} \qquad (6)
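These metrics can all be computed directly from the adjacency matrix J. The following sketch (Python with NumPy; our own illustration rather than code from the original study) implements eqs. (1)-(6) for a small toy network and prints the random-graph baselines for comparison.

import numpy as np
from collections import deque

def degree(J):
    # eq. (1): k_i = sum_j J_ij
    return J.sum(axis=1)

def clustering(J):
    # eqs. (2)-(3): c_i = 2 e_i / (k_i (k_i - 1))
    k = degree(J)
    c = np.zeros(len(J))
    for i in range(len(J)):
        if k[i] < 2:
            continue
        nbrs = np.flatnonzero(J[i])
        e_i = J[np.ix_(nbrs, nbrs)].sum() / 2.0    # links among the neighbours of i
        c[i] = 2.0 * e_i / (k[i] * (k[i] - 1))
    return c

def char_path_length(J):
    # characteristic path length L: average shortest-path distance via breadth-first search
    N, total, pairs = len(J), 0, 0
    for s in range(N):
        dist, queue = {s: 0}, deque([s])
        while queue:
            u = queue.popleft()
            for v in np.flatnonzero(J[u]):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(d for n, d in dist.items() if n != s)
        pairs += len(dist) - 1
    return total / pairs

def nearest_neighbour_degree(J):
    # eq. (4): k_NN,i = (1/k_i) sum_j J_ij k_j
    k = degree(J)
    return (J @ k) / np.maximum(k, 1)

# toy example: a ring of 8 nodes plus one shortcut
N = 8
J = np.zeros((N, N), dtype=int)
for i in range(N):
    J[i, (i + 1) % N] = J[(i + 1) % N, i] = 1
J[0, 4] = J[4, 0] = 1

k = degree(J)
print("k_ave =", k.mean(), " c =", clustering(J).mean(), " L =", char_path_length(J))
print("L_rand ~", np.log(N) / np.log(k.mean()), " c_rand =", k.mean() / N)   # eqs. (5)-(6)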

2.3. Network Growth Models


In many man-made and biological systems, it is generally understood that network
development occurs through a process of constrained growth and coevolution with the
environment. Over the last decade, progress has been made in the design of network growth
models which evolve to display characteristics found in real-world complex systems.
Exemplars of this success can be seen in the Barabasi-Albert (BA) Model29, the Duplication
and Divergence (DD) Model30, the intrinsic fitness models in31 and the stochastic walk models
in32. The emergence of important topological properties often occurs through the use of
simple, locally defined rules that constrain structural dynamics and are guided by state
properties of the nodes. In other words, the connections in the network change and nodes are
added or removed with a bias derived from property values that are assigned or calculated for
each node. Properties that have been used in models include the degree of a node k29,
measures of node modularity33, as well as measures of node fitness31.
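To illustrate the flavour of such growth rules, the sketch below implements the preferential-attachment mechanism of the BA model (Ref. 29), in which each new node links to existing nodes with probability proportional to their current degree. This is a minimal illustration in Python under our own assumptions (the parameter m, the number of links added per new node, is our naming); the other cited models follow the same pattern of locally biased attachment or rewiring.

import random

def barabasi_albert(n_nodes, m=2):
    # Grow a network by preferential attachment: new nodes link to existing
    # nodes with probability proportional to node degree.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]   # small fully connected seed
    targets = [node for e in edges for node in e]              # degree-weighted sampling list
    for new in range(m + 1, n_nodes):
        chosen = set()
        while len(chosen) < m:
            chosen.add(random.choice(targets))                 # bias proportional to degree
        for old in chosen:
            edges.append((new, old))
            targets.extend([new, old])
    return edges

edges = barabasi_albert(200, m=2)
print(len(edges), "links grown")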

3. SOTEA Model Description


Our model aims to adapt an EA population topology in a manner that is inspired by biological
phenomena but also that is relevant in an optimization context. Like the models mentioned in
the previous section, the topological changes in SOTEA are driven by properties that can be
calculated locally within the network. One property that we focus on is modularity; a
structural feature that often contributes to the robustness of natural systems. Importantly, the
dynamic construction of modularity can alter the behavior of constituent elements to be based
largely on interactions with other members. This not only encourages specialization and
efficiency, it also can protect other parts of a system, e.g. from error propagation. In a search
process, dynamically constructed modularity may help to focus individuals on promising
regions of a solution space while reducing sensitivity to local attractors at the population
level. In other words, dynamically constructed modularity may help to facilitate both
exploitation and exploration within a distributed search process. To encourage modularity, we
use a combination of fitness measures and measures of network clustering (described in
Topological Driving Forces). The network dynamics are implemented by rewiring local
regions of the network (described in Topological Operators).

3.1. Topological Driving Forces


The SOTEA network is represented by an adjacency matrix J such that individuals i and j are
connected (not connected) when Jij=1 (Jij=0). This study only deals with undirected networks
such that Jij = Jji. The terms individual and node are used interchangeably to refer to
members of the EA population situated within a network. Also, the terms links and
connections are used interchangeably to refer to directly connected nodes, i.e. individuals that
are neighbors in the population.
Topological driving forces encourage the emergence of (partially) isolated clusters that are integrated with the broader population through high fitness hubs. This is done by: 1) encouraging high fitness nodes to be highly connected, and 2) encouraging clustering between solutions that are not of high fitness.

3.1.1. Fitness-degree correlations


High fitness nodes are driven to achieve higher connectivity k in the following manner. First,
an adaptive set point KSet establishes a node’s desired number of links as defined for node i in
eq. (7). The value for KSet is defined in eq. (8) as a quadratic function of fitness ranking with
a lower bound of KMin = 3 and an upper bound KMax.
Equations (7) and (8) drive node connectivity ki to high values for nodes with
exceptionally good fitness. Enforcing a lower bound of KMin = 3 ensures clustering is feasible
in the lowest fit nodes while the quadratic form of the set point KSet helps ensure that only the
most highly fit nodes are able to attain high connectivity, i.e. hub positions. We treat KMax as
a parameter of the model, which can be used to control the extent that high fitness nodes are
able to influence network dynamics. As seen in the experimental results, the optimal setting
of this parameter changes depending on the problem.

\min \; \left| k_i - K_{Set,i} \right| \qquad (7)



K_{Set,i} = K_{Min} + \left( K_{Max} - K_{Min} \right) \left( \frac{N - Rank_i}{N} \right)^{2} \qquad (8)

3.1.2. Weighted clustering coefficient


To encourage high modularity amongst lower fitness nodes, network rewiring is driven to
maximize a weighted version of the clustering coefficient c as defined for node i in eq. (9).
The more common definition of c was provided in Section 2.2. In eq. (9), a connection's
contribution to c is weighted to give less importance to connections involving nodes of higher
fitness. This weighting factor W is defined in (11) and alters the ei term of the clustering
coefficient in (10). This weighting factor is identical to the intrinsic fitness measure used in
the network growth models in31.

\max \; c_i^{*} = \frac{2 e_i^{*}}{k_i (k_i - 1)} \qquad (9)

e_i^{*} = \sum_{j=1}^{N} J_{ij} \sum_{k=1}^{N} J_{ik} J_{jk} W_{jk}, \quad i \neq j \neq k \qquad (10)

W_{jk} = \frac{Rank_j \times Rank_k}{N^{2}} \qquad (11)

3.2. Topological Operators


Section 3.1 described the driving forces for network structural dynamics. To respond to these
forces, changes to the network take place involving the addition, removal, and transfer of
links. Below we define the rules (topological operators) for executing these structural
changes and also illustrate their implementation in Fig. 2. The add link rule and the remove
link rule are topological operators that allow the k value for each node to reach KSet. The
transfer link rule allows for the improvement of local clustering within the network, but only
if this does not conflict with the desired k settings for each node.
Add Link Rule: Starting with a selected node N1, a two step random walk is taken, moving
from node N1 to node N2 to node N3. If N1 wants to increase its number of links (kN1 < KSet)
and N3 wants to increase its number of links (kN3 < KSet) then a link is added between N1 and
N3.
Remove Link Rule: For a selected node N1 with kN1 > KSet, a two step random walk is taken,
moving from N1 to N2 to N3. If N3 is already connected to N1 (JN1,N3 =1) and kN3 > KSet then
remove the link between N1 and N3. Notice the presence of N2 with JN2,N1 = JN2,N3 = 1 ensures
that connections removed using this rule do not result in network fragmentation.
Transfer Link Rule: For a selected node N1 a two step random walk is taken, moving from
N1 to N2 to N3. If kN3 < KSet, then the connection between N1 and N2 is transferred to now be
between N1 and N3 (i.e. JN1,N2 = 1, JN1,N3 = 0 changes to JN1,N2 = 0, JN1,N3 = 1). To determine if
the transfer will be kept, the local modularity is calculated using (9) for N1, N2 and N3 both BEFORE and AFTER the connection transfer occurs. If \left( c_{N1}^{*} + c_{N2}^{*} + c_{N3}^{*} \right) increases after the
connection transfer then the transfer is kept, otherwise it is reversed. In this way connections
are only added which strengthen the weighted clustering metric and don’t cause a net increase
in KSet violations.

Fig. 2 Topological Operators: A selected node N1 will attempt to add, remove or transfer its connections based on
the satisfaction of constraints and the improvement of properties. Add Rule: The dotted line represents a feasible
new connection in the network assuming nodes N1 and N3 both would like to increase their number of connections.
Remove Rule: The gray dotted line represents a feasible connection to remove in the network assuming nodes N1
and N2 both have an excess of connections. Transfer Rule: The connection between N1 and N2 (gray dotted line) is
transferred to now connect N1 and N3 (black dotted line) if this action results in an overall improvement to local
clustering. There are several constraints that each rewiring rule must satisfy in order to be executed. Consequently,
in each instance of rule usage, we make up to ten attempts to satisfy the conditions for executing a rule, i.e. ten
stochastic walks starting from a node N1.

The topological operators determine how connections are added and removed in the network.
These operators were developed based on several considerations. First, unlike systems that
operate in a physical space, there are no a priori constraints on topological changes and it was
thus necessary to determine how stochastic interactions between nodes should take place.
When defining operators for modifying a network topology, we felt it was important to: 1)
maintain the notion of locality that is implied by the network (i.e. prohibit long-range
interactions), 2) ensure that the network does not fragment into disconnected sub-networks, and
3) keep the rules as simple as possible. These were the primary considerations that guided the
development of these topological operators.
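The sketch below (Python; our own illustration using an adjacency-list representation, with local_clustering_sum as a hypothetical helper standing in for the sum of c* values from eq. (9) over the three nodes) outlines the three rewiring rules. It omits the retry loop of up to ten stochastic walks per rule application mentioned in the caption of Fig. 2.

import random

def two_step_walk(adj, n1):
    # random walk of length two: n1 -> n2 -> n3
    n2 = random.choice(list(adj[n1]))
    n3 = random.choice(list(adj[n2]))
    return n2, n3

def add_link(adj, n1, k_set):
    n2, n3 = two_step_walk(adj, n1)
    if n3 != n1 and n3 not in adj[n1] \
       and len(adj[n1]) < k_set[n1] and len(adj[n3]) < k_set[n3]:
        adj[n1].add(n3); adj[n3].add(n1)

def remove_link(adj, n1, k_set):
    n2, n3 = two_step_walk(adj, n1)
    # n2 keeps n1 and n3 connected, so removal cannot fragment the network
    if n3 in adj[n1] and len(adj[n1]) > k_set[n1] and len(adj[n3]) > k_set[n3]:
        adj[n1].discard(n3); adj[n3].discard(n1)

def transfer_link(adj, n1, k_set, local_clustering_sum):
    n2, n3 = two_step_walk(adj, n1)
    if n3 == n1 or n3 in adj[n1] or len(adj[n3]) >= k_set[n3]:
        return
    before = local_clustering_sum(adj, (n1, n2, n3))
    adj[n1].discard(n2); adj[n2].discard(n1)       # move the N1-N2 link ...
    adj[n1].add(n3); adj[n3].add(n1)               # ... to N1-N3
    if local_clustering_sum(adj, (n1, n2, n3)) <= before:
        adj[n1].discard(n3); adj[n3].discard(n1)   # revert if local clustering did not improve
        adj[n1].add(n2); adj[n2].add(n1)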

4. Experimental Setup

4.1. Algorithm Designs

4.1.1. SOTEA
A high level pseudocode for SOTEA is provided below. The algorithm starts by defining the
initial population P on a ring topology with each node connected to exactly two others (e.g.
Fig. 1c). For a given generation t, each node N1 is subjected to both topological and genetic
operators. Once the topological operators are executed (defined in Section 3.2. ), N1 is
selected as a parent and a second parent N2 is selected by conducting a two step stochastic
walk across the network. An offspring is created using these parents and a single search
operator that is selected at random from Tab. 1. The fitter of the offspring and N1 is stored in a temporary list Temp(N1) while the topological and genetic operators are repeated
on the remaining nodes in the population. The population is then updated with the temporary
list to begin the next generation. This sequence of steps is repeated until a stopping criterion
is met. In all experiments, the stopping criterion is set as a maximum 150,000 objective
function evaluations.
The two-step stochastic walk mating scheme is used to maintain consistency with the
topological operators. This both simplifies our model and allows for a more intuitive
understanding of system dynamics. This mating scheme is expected to generate a weak
selection pressure in most EAs; however, this is not necessarily the case for SOTEA. Because
high fitness nodes are driven towards increased connectivity, they are more likely to be
encountered in a stochastic walk across the network. Hence, the selection pressure becomes a
locally defined property that can be much stronger than stochastic walk mating would
otherwise create for panmictic or cellular EAs.
Pseudocode for SOTEA
t = 0
Initialize P(t) (at random)
Initialize population topology (ring structure) [Fig. 1c]
Evaluate P(t)
Do
    For each N1 in P(t)
        Add Link Rule(N1) [Section 3.2]
        Remove Link Rule(N1) [Section 3.2]
        Transfer Link Rule(N1) [Section 3.2]
        Select N1 as first parent
        Select parent N2 by conducting a two-step stochastic walk from N1
        Select search operator (at random from Tab. 1)
        Create and evaluate offspring
        Temp(N1) = Best_of(offspring, N1)
    Next N1
    t = t + 1
    P(t) = Temp()
Loop until stopping criteria

4.1.2. cellular GA
SOTEA is compared with cellular and panmictic EAs. The cellular GA used in these
experiments is identical to SOTEA except for two design changes (see pseudocode). First,
the cGA does not implement any topological operators and maintains a static ring topology.
The second change is that during mating, the second parent N2 is selected among all
neighbors within a radius R from N1 using linear ranking selection. This additional departure
from SOTEA was made based on experimental evidence that it enhances the performance of
the cGA. In experiments where mating took place using random walks of length R (i.e. the
mating scheme in SOTEA), the cGA displayed markedly worse performance across all
problems in this study. Moreover, in a thorough study on the performance of distributed and
non-distributed GA designs34, the cGA we use (referred to in34 as “ci”) frequently exhibited
the best performance.
Pseudocode for cGA
t = 0
Initialize P(t) (at random)
Initialize population topology (ring structure) [Fig. 1c]
Evaluate P(t)
Do
    For each N1 in P(t)
        Select N1 as first parent
        Select N2 from Neighborhood(N1, R)
        Select search operator (at random from Tab. 1)
        Create and evaluate offspring
        Temp(N1) = Best_of(offspring, N1)
    Next N1
    t = t + 1
    P(t) = Temp()
Loop until stopping criteria
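For comparison, the cGA mate-selection step can be sketched as follows (Python; our own illustration, assuming a ring of N individuals indexed 0..N-1 and a minimization problem): the second parent is drawn from the radius-R neighbourhood using linear ranking, so the best neighbour receives the largest selection probability.

import random

def select_mate_cga(pop_fitness, n1, R, minimizing=True):
    # second parent from the ring neighbourhood of n1 (radius R),
    # chosen by linear ranking selection
    N = len(pop_fitness)
    neighbours = [(n1 + d) % N for d in range(-R, R + 1) if d != 0]
    # order neighbours from worst to best, then weight them 1..len(neighbours)
    ranked = sorted(neighbours, key=lambda i: pop_fitness[i], reverse=minimizing)
    weights = range(1, len(ranked) + 1)
    return random.choices(ranked, weights=weights, k=1)[0]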

4.1.3. Panmictic EAs


SOTEA is compared against 16 distinct Panmictic EA designs that in some cases vary
significantly from the SOTEA and cGA algorithms. The variety of PEA designs
corresponded with a variety of performance outcomes. In particular, the variance in
(performance-based) ranking of PEA designs was typically greater than in SOTEA or cGA
and the best performing PEA typically depended on the problem.
Two PEA design frameworks were considered: one where selection schemes are applied
during mating and one where selection schemes are applied through a culling process. These
are labeled as Evolution Strategies (ES) and Genetic Algorithm (GA) designs respectively.
The core of the ES style Panmictic EA is given by the pseudocode below. For this
pseudocode, the parent population of size µ at generation t is defined by P(t). For each new
generation, an offspring population P`(t) of size λ is created through variation operators and is
evaluated to determine fitness values for each offspring. As was also the case in cGA and
SOTEA, offspring are created by selecting a single operator at random from Tab. 1. The
parent population for the next generation is then selected from P'(t) and Q, where Q is a subset of P(t). Q is derived from P(t) by selecting those in the parent population with an age less
than κ.

Pseudocode for Panmictic EA (ES)

t = 0
Initialize P(t)
Evaluate P(t)
Do
    For i = 1 to λ
        {p1, p2} = Select randomly from P(t)
        c = Create an offspring from {p1, p2}
        Add c to P'(t)
    Next i
    P'(t) = Variation(P'(t))
    Evaluate P'(t)
    P(t+1) = Select(P'(t) ∪ Q)
    t = t + 1
Loop until stopping criteria

Eight ES designs are tested which vary by the use of Generational (with elitism) vs. Pseudo
Steady State population updating, the use of Binary Tournament Selection vs. Truncation
Selection, and by the number of search operators. Details are given below for each of the
design conditions.
Population updating: The generational EA design (with elitism for retaining the best parent)
has the parameter settings N=λ=2µ, κ=1 (κ=∞ for best individual). The pseudo steady state
EA design has the parameter settings N=λ=µ, κ=∞.
Selection: Selection occurs by either binary tournament selection (without replacement) or by truncation selection.
Search Operators: For each EA design, an offspring is created by using a single search
operator. Two designs were considered: i) a seven search operator design and ii) a two search
operator design. For the seven operator case, an offspring is created by an operator that is
selected at random from the list in Tab. 1. For the two operator case, uniform crossover is
used with probability = 0.95 and single point random mutation is used with probability = 0.05.

Tab. 1: The seven search operators used in the cellular GA, SOTEA, and selected Panmictic EA designs are listed
below. More information on each of the search operators can be found in35.

Search Operators
Wright’s Heuristic Crossover
Simple Crossover
Extended Line Crossover
Uniform Crossover
BLX-α
Differential Evolution Operator
Single Point Random Mutation

GA Designs: The previous algorithmic framework invokes selection after offspring are
generated and in this way is most similar to evolution strategies. To include experiments with
the more commonly used genetic algorithm, we use the pseudocode below. In this case, κ=∞,
λ=1 (Steady State) and selection from P occurs using either Linear Ranking (Lin) or Binary
Tournament Selection.
Pseudocode for Panmictic EA (GA)
Initialize P
Evaluate P
Do
    P' = {}
    For i = 1 to λ
        {p1, p2} = Select individuals from P
        c = Create an offspring from {p1, p2}
        Add c to P'
    Next i
    Evaluate P'
    P = Replacement(P' ∪ Q)
Loop until stopping criteria

Constraint Handling: Each of the engineering design case studies involves nonlinear
inequality constraints. Solution feasibility is addressed by defining fitness using the
stochastic ranking method presented in36. Parameter settings for stochastic ranking were
taken from recommendations found in36.
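For reference, the stochastic ranking procedure of Ref. 36 can be sketched as a bubble-sort-like pass over the population in which adjacent solutions are compared by objective value when both are feasible, or with probability Pf, and by constraint violation otherwise. The sketch below is our own illustration (Python), with Pf = 0.45 as the setting commonly cited from Ref. 36.

import random

def stochastic_rank(objectives, violations, p_f=0.45):
    # Return population indices ordered by stochastic ranking (minimization);
    # violations[i] is the summed constraint violation of solution i.
    idx = list(range(len(objectives)))
    for _ in range(len(idx)):                       # at most N sweeps
        swapped = False
        for j in range(len(idx) - 1):
            a, b = idx[j], idx[j + 1]
            # compare by objective if both feasible, or with probability p_f
            if (violations[a] == 0 and violations[b] == 0) or random.random() < p_f:
                worse = objectives[a] > objectives[b]
            else:
                worse = violations[a] > violations[b]
            if worse:
                idx[j], idx[j + 1] = b, a
                swapped = True
        if not swapped:
            break
    return idx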

5. Performance Results
The experimental results are evaluated using several metrics and statistical tests in order to
gain a clearer picture of the strengths and weaknesses of SOTEA. In concluding the section,
we summarize these results and relate them back to different concepts of algorithm
robustness. A summary of our methods for analyzing algorithm performance is given below
followed by a summary of results for each problem.
Performance profiles: Performance profiles comparing SOTEA and cGA are provided in
Figure 3. Each algorithm searches for up to a maximum 150,000 objective function
evaluations. Experiments with SOTEA test different settings of Kmax while the cellular GA
was run with different settings of neighborhood radius R. Performance for each EA is
reported as the median objective function value over 30 runs. The caption text in Figure 3
includes optimal (Fopt) or best known (Fbest) objective function values for each problem.
Statistical Tests: To compare performance between specific algorithm designs that are
“tuned” for a particular problem, we take the best algorithm from each class and calculate the
confidence in algorithm performance superiority using a non-parametric statistical test (i.e.
the Mann-Whitney U-Test). To compare algorithm classes, U tests are conducted using all of
the performance results from each class. Tab. 3 provides p values for these tests with
confidence levels under 99% (p>0.01) listed as statistically insignificant.
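Such pairwise comparisons can be reproduced with standard statistical libraries. The sketch below (Python with SciPy; our own illustration) runs a one-sided Mann-Whitney U test on the final objective values of two algorithms over their independent runs of a minimization problem.

from scipy.stats import mannwhitneyu

def compare_runs(final_values_a, final_values_b, alpha=0.01):
    # one-sided test: does algorithm A yield smaller (better) final values than B?
    stat, p = mannwhitneyu(final_values_a, final_values_b, alternative="less")
    return p, p < alpha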

5.1. Engineering Design Performance Results


This section presents algorithm performance results for six engineering design problems (for
formal problem definitions, see35) that have proven difficult to solve to optimality and have
been used frequently in other studies. For several of these problems, the best known solution
has steadily improved over time, however the optimal solution remains unknown. We
compare the best solutions from previous studies with those obtained in this study. We should
point out however that conclusions derived from comparisons between different studies
should be made with caution. Because the results reported within each study utilize different
computational resources, it is not entirely appropriate to make direct comparisons. With this
caveat in mind, our intention is to compare the best final solutions from amongst the many
algorithms within each study in order to get a sense of the potential capabilities of these “best
in study” algorithms. Importantly, there were a number of engineering problems where
SOTEA obtained the best result ever reported for the problem and we felt that this was useful
information to provide as it demonstrates our algorithm’s potential utility. In the current
study, the maximum number of function evaluations was chosen in order to be similar to these
other studies yet also consistent across all experiments.

Figure 3 Performance profiles for the pressure vessel (Fopt=5850.38), alkylation process (Fopt=1772.77), heat
exchanger network (Fopt=7049.25), gear train (Fbest=2.70E-12, reported in37), tension compression spring
(Fbest=0.01270, reported in38), and welded beam (Fbest=1.7255, reported in39) design problems.

Tab. 2: Performance results for six engineering design problems are shown for twelve Evolutionary Algorithms run
for 3000 generations with algorithm designs varying by the use of Generational (Gen) or Pseudo Steady State (SS)
population updating, the use of Binary Tournament Selection (Tour) or Truncation Selection (Trun), and the number
of search operators (Nops). Performance is presented as the single best objective function value found in 30 runs
FBest as well as the average objective function value over 30 runs FAve. All EAs listed below obtained a feasible
solution within 3000 generations. The single best fitness values found for each problem are in bold.
EA Gen Sel Nops Pressure Vessel Heat Exchanger Alkylation Process
FBest FAve FBest FAve FBest FAve
ES SS Tour 7 6059.70 6190.31 7053.47 7109.20 1771.35 1750.38
ES SS Trun 7 6059.73 6214.31 7056.09 7179.02 1760.77 1630.90
ES Gen Tour 7 5953.06 6123.22 7116.72 7213.38 1711.00 1667.34
ES Gen Trun 7 5964.23 6174.55 7186.97 7250.82 1641.47 1495.13
ES SS Tour 2 5867.87 6382.61 7070.57 7233.18 1756.00 1708.38
ES SS Trun 2 5857.39 6449.57 7093.12 7269.02 1748.95 1661.17
ES Gen Tour 2 6144.69 6340.23 7235.69 7412.11 1621.77 1510.93
ES Gen Trun 2 6188.86 6391.15 7184.51 7398.23 1501.24 1343.48
GA SS Tour 7 5903.55 6418.48 7092.00 7399.75 1767.22 1649.42
GA SS Lin 7 5853.21 6390.27 7050.31 7303.13 1759.20 1533.20
GA SS Tour 2 6091.55 6491.42 7063.97 7290.57 1764.93 1675.21
GA SS Lin 2 6074.73 6617.18 7094.76 7332.24 1751.35 1554.77
Gear Train Tension Compression Welded Beam
FBest FAve FBest FAve FBest FAve
ES SS Tour 7 2.70E-12 2.62E-10 0.012665 0.012758 1.72485 1.74602
ES SS Trun 7 2.70E-12 7.70E-10 0.012665 0.012778 1.72494 1.80945
ES Gen Tour 7 2.70E-12 2.70E-12 0.012679 0.012710 1.75465 1.77920
ES Gen Trun 7 2.70E-12 1.09E-11 0.012687 0.012725 1.76485 1.79732
ES SS Tour 2 2.70E-12 1.12E-09 0.012701 0.013861 1.73570 1.96193
ES SS Trun 2 2.31E-11 1.81E-09 0.012804 0.015078 1.73060 2.06087
ES Gen Tour 2 2.70E-12 4.74E-12 0.012739 0.013035 1.83742 1.93124
ES Gen Trun 2 2.70E-12 2.70E-12 0.012694 0.012864 1.75302 1.88472
GA SS Tour 7 2.31E-11 1.12E-09 0.012665 0.012969 1.72599 1.96120
GA SS Lin 7 2.70E-12 6.39E-10 0.012665 0.012906 1.72673 1.89600
GA SS Tour 2 2.31E-11 2.98E-09 0.012879 0.015302 1.72830 2.06871
GA SS Lin 2 2.70E-12 3.14E-09 0.013073 0.015830 1.82331 2.21587

Tab. 3 Mann-Whitney U tests comparing best algorithms from each design class (first entry) and comparing all data
from design classes (second entry). For best in class comparisons (first entry), the best algorithm from a design class
is determined based on median performance after 150,000 evaluations. Winner of test is indicated along with p
value. “insig” indicates p > 0.05.
Problem          | PEA vs. SOTEA                      | cGA vs. SOTEA                     | PEA vs. cGA
Pressure Vessel  | SOTEA (p<0.0001), SOTEA (p<0.0001) | SOTEA (p=0.008), SOTEA (p<0.0001) | cGA (p<0.0001), cGA (p<0.0001)
Heat Exchanger   | SOTEA (p<0.0001), SOTEA (p<0.0001) | SOTEA (p=0.001), SOTEA (p<0.0001) | cGA (p<0.0001), cGA (p<0.0001)
Welded Beam      | SOTEA (p<0.0001), SOTEA (p<0.0001) | insig, SOTEA (p=0.01)             | cGA (p<0.0001), cGA (p<0.0001)
Tension Comp.    | SOTEA (p<0.0001), SOTEA (p<0.0001) | insig, insig                      | cGA (p<0.0001), cGA (p<0.0001)
Alkylation Proc. | SOTEA (p<0.0001), SOTEA (p<0.0001) | SOTEA (p=0.01), SOTEA (p<0.0001)  | cGA (p=0.0001), cGA (p<0.0001)
Gear Train       | PEA (p=0.003), insig               | insig, insig                      | PEA (p=0.003), insig

5.2. Pressure Vessel Design Problem


The pressure vessel design problem, originally defined by40, has the goal of minimizing the
cost of a pressure vessel as calculated based on material, forming and welding costs. The
design is subject to dimensional constraints which are set to meet ASME standards for
pressure vessels. As shown in Fig. 4, there are four design parameters to optimize consisting
of the thickness of the shell Ts, the thickness of the head Th, the inner radius R and the length
of the cylindrical section of the vessel L. Ts and Th take on integer values indicating the
number of rolled steel plates (where each steel plate is 0.0625 inches thick) and R and L are
continuous variables.

Fig. 4 Pressure Vessel Drawing. Parameters of the problem include the thickness of the shell Ts, the thickness of the
head Th, the inner radius of the vessel R and the length of the cylindrical section of the vessel L. This figure is taken
out of38 and is reprinted with permission from IEEE (© 1999 IEEE).

Results: All but one of the SOTEA algorithms outperformed all of the cGA designs (Figure 3)
and the best tuned algorithm was also a SOTEA design (Tab. 3). Performance tended to
improve as network connectivity was reduced for both SOTEA and the cGA. In light of this
trend, it is not surprising to see the PEA designs performed very poorly on this problem (see
Tab. 3 and Tab. 2). Comparing results between Figure 3 and Tab. 2, the best final solution for a
PEA design is beaten by all SOTEA designs after only 300 generations. Comparisons to
previous studies (Tab. 4) highlight the strong performance of both cGA and SOTEA. Of the
eight studies referenced in and including41, only one other algorithm was able to reach the
objective function values obtained by the distributed EA designs employed here.

Tab. 4 Comparison of results for the pressure vessel design (minimization) problem. Results from other studies were reported in41. Results are also reported in39; however, their solution violates integer constraints for the 3rd and 4th parameters, making their final solution infeasible. It should also be mentioned that the equations defining the problem contain errors in38 and41. The best solution found in these experiments was (F, x1, x2, x3, x4) = (5850.37, 38.8601, 221.365, 12, 6).
Reference                    | Fitness | Ranking
Sandgren, 1990 [40]          | 8129.80 | 11
Fu, 1991 [42]                | 8084.62 | 10
Kannan and Kramer, 1994 [43] | 7198.04 | 9
Cao, 1997 [44]               | 7108.62 | 8
Deb, 1997 [45]               | 6410.38 | 7
Lin, 1999 [37]               | 6370.70 | 6
Coello, 1999 [38]            | 6288.74 | 5
Zeng et al., 2002 [39]       | 5804.39 | --
Li et al., 2002 [41]         | 5850.38 | 3
SOTEA (This Work)            | 5850.37 | 1
cGA (This Work)              | 5850.37 | 1
Panmictic EA (This Work)     | 5853.21 | 4

5.3. Alkylation Problem


The alkylation process design problem, originally defined in46, has the goal of improving the
octane number of an olefin feed stream through a reaction involving isobutene and acid. The
reaction product stream is distilled with the lighter hydrocarbon fraction recycled back to the
reactor. The objective function considers maximizing alkylate production minus the material
(i.e. feed stream) and operating (i.e. recycle) costs. Design parameters all take on continuous
values and include the olefin feed rate x1 (barrels/day), acid addition rate x2 (thousands of
pounds/day), alkylate yield x3 (barrels/day), acid strength x4 (wt. %), motor octane number x5,
external isobutene to olefin ratio x6, and the F-4 performance number x7.

Fig. 5 Simplified diagram of an alkylation process (recreated from47)

Results: All but one of the SOTEA algorithms outperformed all cGA designs (Figure 3) and
the best tuned algorithm was a SOTEA design (Tab. 3). For this problem there was no clear
trend between performance and network connectivity. PEA algorithms performed relatively
poorly on this problem (Tab. 2). Comparisons to studies from previous authors (see Tab. 5)
highlight the strong performance of the distributed EAs. Of the stochastic search methods
described in the five studies referenced in47 including their own differential evolution
algorithms, none reached the fitness values obtained by the distributed EA designs employed
here. However, two αBB (Branch and Bound non-linear programming) algorithms were cited
that did find the global optimum and did so more consistently than SOTEA or cGA.

Tab. 5 Comparison of results for the alkylation process design problem (maximization problem). Results from other
authors were reported in47. The best solution found in these experiments was (F, x1, x2, x3, x4, x5, x6, x7) = (1772.77,
1698.18, 53.66, 3031.3, 90.11, 95, 10.5, 153.53).

Reference                        | Fitness | Ranking
Bracken and McCormick, 1968 [48] | 1769    | 6
Maranas and Floudas, 1997 [49]   | 1772.77 | 1
Adjiman et al., 1998 [50]        | 1772.77 | 1
Edgar and Himmelblau, 2001 [51]  | 1768.75 | 7
Babu and Angira, 2006 [47]       | 1766.36 | 8
SOTEA (This Work)                | 1772.77 | 1
cGA (This Work)                  | 1772.77 | 1
Panmictic EA (This Work)         | 1771.35 | 5

5.4. Heat Exchanger Network (HEN) Problem


The Heat Exchanger Network design problem, originally defined by52, has the goal of
minimizing the total heat exchange surface area for a network consisting of one cold stream
and three hot streams. As shown in Fig. 6, there are eight design parameters consisting of the
heat exchanger areas (x1, x2, x3), intermediate cold stream temperatures (x4, x5) and hot stream
outlet temperatures (x6, x7, x8). The problem is presented below in a reformulated form taken
from53 where a variable reduction method has been used to eliminate equality constraints.

Fig. 6 Heat Exchanger Network Design involves 1 cold stream that exchanges heat with three hot streams.
Parameters to optimize include heat exchange areas (x1, x2, x3) and stream temperatures (x4, x5, x6, x7, x8).

Results: All of the SOTEA algorithms outperformed the cGA designs (Figure 3) and the best
tuned algorithm was a SOTEA design (Tab. 3). Performance tended to improve as network
connectivity increased in both SOTEA and cGA. Such a trend seems to suggest that
interaction constraints are not as important for this problem which makes the poor
performance of the PEA designs (Tab. 2) somewhat unexpected. Comparing results between
Figure 3 and Tab. 2, the best final result for a Panmictic EA design is beaten by all SOTEA
designs after only 400 generations. Comparisons to other work are less favorable for this
problem. In Ref. 47, a differential evolution algorithm is introduced that finds the optimal solution 100% of the time in under 40,000 evaluations. None of the algorithms employed here were able to
obtain that level of performance for this problem. In fact, the best algorithm (SOTEA with
Kmax = 7) was only able to find the optimal solution 65% of the time in 150,000 evaluations.
To make a fair comparison to the results in47, our results were also analyzed at 40,000
evaluations and under these conditions only two of the SOTEA algorithms (and none of the
cellular GAs) were able to find an optimal solution in that amount of time (with the optimal
being found only 10% of the time). Interestingly, this was one of the simplest engineering
design problems tested with only a marginal level of epistasis between parameters35.

Tab. 6 Comparison of results for the heat exchanger network design problem (minimization problem). Results from
other authors were reported in47. The best solution found in these experiments was (F, x1, x2, x3, x4, x5) = (7049.25,
579.19, 1360.13, 5109.92, 182.01, 295.60).
Reference                  | Fitness | Ranking
Angira and Babu, 2003 [53] | 7049.25 | 1
Babu and Angira, 2006 [47] | 7049.25 | 1
SOTEA (This Work)          | 7049.25 | 1
cGA (This Work)            | 7049.25 | 1
Panmictic EA (This Work)   | 7050.31 | 5

5.5. Gear Train Design Problem


The gear train design problem, originally defined in Ref. 40, consists of optimizing a gear train such that the gear ratio approaches 1/6.931 as closely as possible. There are four design
parameters consisting of integer values for the number of teeth for each gear.
Results: For the gear train design problem, there was no clear distinction in final performance
between the cellular GA and SOTEA. One of the cGA designs (R=12) was found to have
better median performance than any SOTEA design (Figure 3) however there was no
statistically significant difference found in the performance distributions based on the U test
(Tab. 3). Although differences in final performance between the cellular GA, SOTEA, and the PEA design classes were generally small, a PEA was determined to be the best tuned algorithm in Tab. 3
(the only engineering problem where this occurs). Of the studies referenced in and
including37, only one previous algorithm found the solutions reported in this study.

Tab. 7 Comparison of results for the gear train design problem (minimization problem). Results from other authors
are reported in37. The best solution found in this study was (F, x1, x2, x3, x4) = (2.70 x 10^-12, 19, 16, 43, 49).

Reference                | Fitness       | Ranking
Cao and Wu, 1997 [44]    | 2.36 x 10^-9  | 5
Lin et al., 1999 [37]    | 2.70 x 10^-12 | 1
SOTEA (This Work)        | 2.70 x 10^-12 | 1
cGA (This Work)          | 2.70 x 10^-12 | 1
Panmictic EA (This Work) | 2.70 x 10^-12 | 1

5.6. Tension Compression Spring Design Problem


The Tension Compression Spring problem, shown in Fig. 7, has the goal of minimizing the
weight of a tension/compression spring subject to constraints on minimum deflection, shear
stress, surge frequency, and dimensional constraints38. There are three design parameters to
optimize consisting of the mean coil diameter D, the wire diameter d and the number of active
coils N.

Fig. 7 Diagram of Tension Compression Spring. Parameters of the problem include the mean coil diameter D, the
wire diameter d and the number of active coils N which is represented by the number of loops of wire in the
diagram. Forces acting on the spring are shown as P. This figure is taken out of38 and is reprinted with permission
from IEEE (© 1999 IEEE).

Results: All but one of the distributed EA designs converge to similar values (Figure 3).
Comparing the results from previous studies, we find strong performance from both
distributed EAs. Of the three studies referenced in and including38, no previous method has
been able to find the solutions reported in this study.

Tab. 8 Comparison of results for the tension compression spring problem (minimization problem). Results from
other authors were reported in38. The best solution found in these experiments was (F, x1, x2, x3) = (0.0126652303,
0.051689, 0.356732, 11.2881).
Reference                | Fitness      | Ranking
Belegundu, 1982 [54]     | 0.0128334375 | 6
Arora, 1989 [55]         | 0.0127302737 | 5
Coello, 1999 [38]        | 0.0127047834 | 4
SOTEA (This Work)        | 0.0126652303 | 1
cGA (This Work)          | 0.0126652303 | 1
Panmictic EA (This Work) | 0.0126652593 | 3

5.7. Welded Beam Design Problem


The Welded beam design problem has the goal of minimizing the cost of a weight bearing
beam subject to constraints on shear stress τ, bending stress σ, buckling load on the bar Pc,
and dimensional constraints38. There are four design parameters to optimize consisting of the
dimensional variables h, l, t, and b shown in Fig. 8.

Fig. 8: Diagram of a welded beam. The beam load is defined as P with all other parameters shown in the diagram
defining dimensional measurements relevant to the problem. This figure is taken out of38 and is reprinted with
permission from IEEE (© 1999 IEEE).

Results: Each of the distributed EA designs converge to similar values (Figure 3) and both
strongly outperformed the PEA (Tab. 2). Comparisons to work from previous authors
highlight the strong performance of both of the distributed EAs. Of the three studies
referenced in and including39, no previous method has been able to find the solutions reported
in this study.

Tab. 9 Comparison of results for the welded beam design problem (minimization problem). Results from other
authors were reported in39. The best solution found in these experiments was (F, x1, x2, x3, x4) = (1.72485,
0.205729, 3.47051, 9.03662, 0.2057296).

Reference                | Fitness    | Ranking
Deb, 1991 [45]           | 2.43311600 | 6
Coello, 1999 [38]        | 1.74830941 | 5
Zeng et al., 2002 [39]   | 1.72553637 | 4
SOTEA (This Work)        | 1.72485217 | 1
cGA (This Work)          | 1.72485217 | 1
Panmictic EA (This Work) | 1.72485218 | 3

5.8. Artificial Test Function Results


This section presents results from experiments conducted on six artificial test functions. This
suite of problems was chosen in order to evaluate performance over a broad range of fitness
landscapes. Information regarding the fitness landscape properties of these problems as well
as formal problem definitions can be found in35.

Fig. 9 Performance for FM (Fopt=0), ECC (shifted from Fopt=0.067416 to Fopt=0), system of linear equations (Fopt=0),
Rastrigin (Fopt=0), Griewangk (Fopt=0), and Watson’s (Fopt=0.01714) test functions.

Tab. 10: Performance results for all six artificial test problems are shown for twelve Evolutionary Algorithms run
for 3000 generations with algorithm designs varying by the use of Generational (Gen) or Pseudo Steady State (SS)
population updating, the use of Binary Tournament Selection (Tour) or Truncation Selection (Trun), and the number
of search operators (Nops). Performance is presented as the single best objective function value found in 20 runs
FBest as well as the average objective function value over 20 runs FAve.
EA Gen Sel Nops Freq. Mod. Error Correcting Code Sys. of Lin. Eq.
FBest FAve FBest FAve FBest FAve
ES SS Tour 7 0.00 15.36 3.53E-03 4.32E-03 8.53E-14 2.12E-05
ES SS Trun 7 6.69 18.28 3.68E-03 4.29E-03 3.16E-05 1.32
ES Gen Tour 7 23.07 26.95 2.47E-03 3.75E-03 10.90 14.58
ES Gen Trun 7 22.87 25.97 3.44E-03 4.13E-03 2.45 5.27
ES SS Tour 2 8.98 15.87 2.70E-07 3.84E-03 1.67 3.54
ES SS Trun 2 0.55 16.49 3.43E-03 3.96E-03 4.26 5.90
ES Gen Tour 2 23.35 26.33 4.18E-03 4.77E-03 50.21 74.11
ES Gen Trun 2 21.95 26.77 2.70E-07 3.17E-03 35.69 51.75
GA SS Tour 7 9.02 16.23 4.03E-03 4.47E-03 0.03 1.88
GA SS Lin 7 0.68 17.74 3.49E-03 4.30E-03 0.04 2.32
GA SS Tour 2 0.22 15.92 3.90E-03 4.55E-03 2.41 4.78
GA SS Lin 2 3.04 16.44 3.59E-03 4.43E-03 3.97 6.25
Rastrigin Griewangk Watson
FBest FAve FBest FAve FBest FAve
ES SS Tour 7 1.25E-10 1.65E-06 0.012 0.052 1.716E-02 2.025E-02
ES SS Trun 7 4.24E-02 1.26E-01 0.049 0.158 1.728E-02 2.922E-02
ES Gen Tour 7 6.33E-01 9.17E-01 0.615 0.751 1.778E-02 1.941E-02
ES Gen Trun 7 8.82E-02 1.96E-01 0.348 0.508 1.730E-02 1.828E-02
ES SS Tour 2 3.10E-02 6.92E-02 0.131 0.216 1.804E-02 4.887E-02
ES SS Trun 2 1.64E-01 2.83E-01 0.154 0.366 1.829E-02 4.369E-02
ES Gen Tour 2 7.82 10.51 1.476 2.729 2.444E-02 5.673E-02
ES Gen Trun 2 4.89 7.53 1.474 2.199 2.205E-02 4.111E-02
GA SS Tour 7 8.99E-02 2.79E-01 0.046 0.212 1.716E-02 4.406E-02
GA SS Lin 7 9.38E-03 1.52E-01 0.089 0.167 1.730E-02 2.957E-02
GA SS Tour 2 1.54E-01 2.93E-01 0.212 0.407 1.901E-02 6.413E-02
GA SS Lin 2 1.00E-01 1.99E-01 0.236 0.431 1.821E-02 5.189E-02

Tab. 11 Mann-Whitney statistical tests comparing best algorithms from each design class (first entry)
and comparing all data from design classes (second entry). For best in class comparisons (first entry),
the best algorithm from a design class is determined based on median performance after 150,000
evaluations. Winner of test is indicated along with p value. “insig” indicates p > 0.05.
Problem          | PEA vs. SOTEA                      | cGA vs. SOTEA                      | PEA vs. cGA
ECC              | PEA (p=0.002), insig               | insig, SOTEA (p=0.01)              | PEA (p=0.0008), insig
Freq. Mod.       | insig, SOTEA (p<0.0001)            | insig, insig                       | insig, cGA (p<0.0001)
Rastrigin        | SOTEA (p<0.0001), SOTEA (p<0.0001) | SOTEA (p<0.0001), SOTEA (p<0.0001) | PEA (p<0.0001), insig
Griewangk        | insig, SOTEA (p<0.0001)            | insig, insig                       | insig, cGA (p<0.0001)
Watson's         | SOTEA (p<0.0001), SOTEA (p<0.0001) | SOTEA (p<0.0001), SOTEA (p<0.0001) | cGA (p=0.009), cGA (p<0.0001)
Sys. of Lin. Eq. | SOTEA (p<0.0001), SOTEA (p<0.0001) | SOTEA (p=0.008), SOTEA (p<0.0001)  | cGA (p<0.0001), cGA (p<0.0001)
Frequency Modulation: SOTEA designs are found to be both the best and worst performers
(compared to the cGA) throughout the optimization runs (Fig. 9).
ECC: Both SOTEA and the cGA designs are able to make steady progress toward the
optimal solution with little difference between the two designs (Fig. 9). One PEA was found to
be the best tuned algorithm as seen in Tab. 11 (this is the only artificial test function where a
PEA dominates).
System of Linear Equations: SOTEA designs strongly outperform the cGA (Fig. 9).
Comparison with results in Tab. 10 finds that both distributed EA designs were able to strongly
outperform the PEAs.
Rastrigin: SOTEA designs strongly outperform the cGA and the PEA. Although both
distributed EA designs have significantly better median performance than the PEA designs,
there is some indication that the PEA can occasionally find good solutions (Tab. 10).
Griewangk: SOTEA designs are very similar in performance to the cellular GA as seen in
Fig. 9 and Tab. 11. Both distributed EA designs perform better than the PEA designs (Tab.
11).
Watson: SOTEA designs strongly outperform the cGA (Fig. 9 and Tab. 11). Both distributed
EA designs perform better than the Panmictic EA designs (Tab. 11).

5.9. General Performance Statistics


Algorithm performance has thus far been evaluated through comparisons between
algorithms on individual optimization problems. Here we investigate whether more general
conclusions can be made about the EA design classes (PEA, cGA, SOTEA) using metrics
presented in Tab. 12. The first statistic (Tab. 12, column two) measures the proportion of runs
where an EA design class found the best known solution. This value is averaged over all test
problems and indicates the tendency of an algorithm class to converge to the optimal (or best
known) solution. The first statistic provides a measure of run consistency and can thus be seen
as a proxy for an algorithm’s robustness to initial conditions. The second statistic (Tab. 12,
column three) measures the proportion of runs where an EA design class finds a solution that
ranks in the top 5% of all solutions found in these experiments. The purpose of this metric
is to relax the criterion used in the previous statistic and obtain a broader sense of EA design class
performance. The third statistic (Tab. 12, column four) is a p value for the Mann-Whitney U-
test where the statistical hypothesis is that the given EA design class is superior to the other
two EA design classes.

Tab. 12 Overall performance statistics for the Panmictic EA, the cellular GA, and SOTEA. The first three statistics
are computed over all test problems.
EA Design      % of runs where      % of runs where EA    U-Test    % of problems where EA   % of problems where EA
               EA found best        was in top 5%         p<0.05    was best design          found best
Panmictic EA   4.0%                 4.8%                  failed    8.3%                     16.7%
cellular GA    9.1%                 10.4%                 failed    12.5%                    66.7%
SOTEA          17.3%                28.5%                 passed    79.2%                    83.3%
The last two statistics in Tab. 12 are confined to the best implementations of an EA design
class and thus indicate algorithm effectiveness after parameter tuning. For instance, the fourth
statistic (Tab. 12, column five) measures the proportion of problems where the algorithm
obtained the best median objective function value. This indicates the likelihood of preferring
a given algorithm when it can only be run a small number of times on a problem. The final
statistic (Tab. 12, column six) measures the proportion of problems where the algorithm was
able to find the best known solution at least one time. This indicates likely algorithm
preference when repeated optimization runs are possible. For each of the statistics, and in the
context of the selected test problems, SOTEA is found to be better than any of the other
algorithm design classes. Particularly noteworthy are the results in column five which
indicate that a “tuned” SOTEA design was the best EA design in about 80% of the problems
tested. Moreover, we have greater than 95% confidence that SOTEA is a superior search
method for the problems considered in this study.
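
For concreteness, the following sketch (written in Python; not the code used in this study) illustrates how the
proportion-of-runs statistics and the one-sided Mann-Whitney comparison of Tab. 12 could be computed from raw run
data. The data structures results, best_known, and top5_thresholds are hypothetical placeholders.

# Hypothetical sketch of the Tab. 12 statistics (assumes minimization).
# results[design][problem] -> list of final objective values, one per run
# best_known[problem]      -> best known objective value for the problem
# top5_thresholds[problem] -> objective value marking the top 5% of all solutions found
from scipy.stats import mannwhitneyu

def run_statistics(results, best_known, top5_thresholds, tol=1e-6):
    stats = {}
    for design, problems in results.items():
        found_best = in_top5 = total = 0
        for problem, run_values in problems.items():
            for f in run_values:
                total += 1
                found_best += abs(f - best_known[problem]) <= tol   # run reached best known
                in_top5 += f <= top5_thresholds[problem]            # run finished in top 5%
        stats[design] = {"found_best": found_best / total, "in_top5": in_top5 / total}
    return stats

def superiority_test(values_design, values_other_designs):
    # One-sided Mann-Whitney U-test: the design class yields lower (better) objective values.
    _, p = mannwhitneyu(values_design, values_other_designs, alternative="less")
    return p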

5.10. Summary of Performance Results


The aim of this section was to evaluate several aspects of SOTEA performance robustness.
For a single run of a search algorithm, robustness can be related directly to the search process
and reflect the competitiveness of an algorithm over different timescales. In a rugged fitness
landscape for instance, this requires the capacity to both exploit new information and also
explore the fitness landscape. Robustness can also be defined at other resolutions.
Over multiple optimization runs, the robustness of a search process could refer to
performance consistency, which is reflected by a small variance in final solution quality. At
yet a higher resolution, robustness might refer to the ability of a search algorithm to achieve
the previously mentioned forms of robustness but in different fitness landscapes and with
minimal changes to algorithm design parameters. In this section we have found SOTEA to
be highly robust based on each of these definitions.

6. Topological Analysis
To understand the basis by which SOTEA establishes a robust search process requires a
deeper understanding of the spatio-temporal dynamics of SOTEA and how these are
influenced by fitness landscape properties. With this in mind, we conducted a genealogical
analysis using tools described in56 and a topological analysis reported here. The genealogical
analysis evaluated gene takeover dynamics across a population, however these tests did not
provide clear insights into SOTEA search behavior and the results are not presented. In this
section, we report the structural characteristics of SOTEA and compare this with the cellular
GA, Panmictic EA, and values observed in biological systems. Here we find that, unlike
standard EA population topologies, SOTEA obtains several topological characteristics
observed in biological systems that are in some cases potentially useful to a search process.
6.1. SOTEA Topological Analysis


Methods: In SOTEA, network structural changes are driven by node fitness; however, because node
fitness is constantly evolving (due to population dynamics), the SOTEA network never converges to a
stable structure. To make general statements about topological characteristics, measurements are
therefore averaged over samples taken every 50 generations, with SOTEA run 10 times for 1000
generations each. To consider the impact of system size,
topological properties for population sizes of N = 50, 100 and 200 have been measured with
results shown in Fig. 10. Here it is seen that most properties show little dependency on the
population size except for L which is generally smaller for smaller systems. Fig. 10 also
indicates that the topological properties of SOTEA are sensitive to the setting of KMax which is
the only parameter of the SOTEA design. The topological property values for SOTEA with
N=50 are reported in Tab. 13, which are taken as an average over all KMax settings considered
in this study (KMax = 3, 5, 7, 9).
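
A minimal sketch of this sampling protocol is given below, assuming the SOTEA population network is available as a
NetworkX graph; build_initial_network and run_one_generation are hypothetical stand-ins for the actual algorithm and
are not defined in the paper.

import networkx as nx
import numpy as np

def measure_topology(graph):
    # Sample the characteristic path length L (assumes a connected network),
    # the average clustering coefficient, and the average degree.
    degrees = [d for _, d in graph.degree()]
    return {"L": nx.average_shortest_path_length(graph),
            "c_ave": nx.average_clustering(graph),
            "k_ave": float(np.mean(degrees))}

def averaged_properties(build_initial_network, run_one_generation,
                        n_runs=10, n_generations=1000, sample_every=50):
    # Average topological properties sampled every 50 generations over several independent runs.
    samples = []
    for _ in range(n_runs):
        population, network = build_initial_network()
        for gen in range(1, n_generations + 1):
            population, network = run_one_generation(population, network)
            if gen % sample_every == 0:
                samples.append(measure_topology(network))
    return {key: float(np.mean([s[key] for s in samples])) for key in samples[0]}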

Fig. 10 Topological properties for SOTEA with different values of KMax and population sizes of N = 50 (♦), 100(◼),
and 200(▲ ). Characteristics include a) the characteristic path length (L), b) the correlation between c and k (c-k), c)
the slope of the degree correlation (υ), d) the average clustering coefficient cave and e) the degree average kave.

Tab. 13: Topological characteristics for the Panmictic EA, cGA, and SOTEA. The topological characteristics for
biological systems are taken from24 and references therein. In column five, γ refers to the exponent for k
distributions that fit a power law. Two values for γ are given for the metabolic network and refer to the in/out-
degree exponents (due to this being a directed network). Results for degree correlations are given as the slope υ of
kNN vs k. N is the population size and R is a correlation coefficient for the stated proportionalities.

System            N       L           kave         k dist.                  cave (crand)       c-k           k-kNN
Panmictic EA      50      L = 1       kave = N-1   k = N-1                  1 (1)              no            no
cellular GA       50      L ~ N       kave = 2     k = 2                    0 (0.04)           no            no
SOTEA             50      5.97        3.6          Poisson                  0.687 (0.07)       c = -4.75k    υ = 11.8
Complex Networks  Large   L ~ log N   kave << N    Power Law, 2<γ<3         cave >> crand      Power Law     either υ > 0
                                                   (Scale Free Network)     (Hierarchical)                   or υ < 0
Protein           2,115   2.12        6.80         Power Law, γ = 2.4       0.07 (0.003)       Power Law     υ < 0
Metabolic         778     7.40        3.2          Power Law, γ = 2.2/2.1   0.7 (0.004)        Power Law     υ < 0

6.1.1. Topological Properties of SOTEA


Here we comment on some of the topological properties of SOTEA and discuss potential
causes. Some topological properties such as the assortative character of the SOTEA networks
(υ > 0) and the linear relation between c and k are not discussed as they are not easily
interpreted within the context of algorithm search behavior.
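Although not interpreted further here, the c-k and k-kNN entries reported in Tab. 13 can be estimated with standard
network analysis tools; the sketch below is an assumed reconstruction (not the original analysis code) that fits
least-squares slopes of clustering versus degree and of the average neighbour degree kNN versus degree.

import networkx as nx
import numpy as np

def ck_and_knn_slopes(graph):
    # Slope of c vs k, and slope (υ) of kNN vs k, estimated by least squares.
    nodes = list(graph.nodes())
    clustering = nx.clustering(graph)           # per-node clustering coefficient c_i
    knn = nx.average_neighbor_degree(graph)     # per-node average neighbour degree kNN_i
    k = np.array([graph.degree(n) for n in nodes], dtype=float)
    c = np.array([clustering[n] for n in nodes])
    k_nn = np.array([knn[n] for n in nodes])
    c_k_slope = np.polyfit(k, c, 1)[0]
    knn_slope = np.polyfit(k, k_nn, 1)[0]       # υ > 0 indicates an assortative network
    return c_k_slope, knn_slope
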
Characteristic Path Length L: The total distance genetic material must travel across the
network is always small as indicated by small L. Although this seems to imply that the
population will be more tightly coupled, we provide arguments below as to why this is not the
case.
Clustering Coefficient: The clustering coefficient is an order of magnitude larger than what
is observed in random networks, which indicates that the SOTEA driving forces were
successful in achieving this topological property. This topological feature encourages random
walk interactions (e.g. for mating) to remain within clusters, a behavior that would be
straightforward to confirm using the network analysis methods described in57. A related
consequence of this topological feature may be that it acts to slow down communication
between clusters, which could dampen the rate of genetic transfer that would otherwise be
observed in undirected random networks with similar values for L.
Degree Average: The low value for kave suggests the SOTEA network maintains a sparsely
connected architecture with high levels of locality similar to that of the cellular GA.
Degree distribution: k approximates a Poisson distribution, which differs from the fat-tailed
distributions observed in complex systems and from the distributions observed in the first
SOTEA algorithm developed in4. These distribution results suggest that relatively little
heterogeneity in k is present, such that the level of locality is fairly uniform within the system.
Previous studies, reviewed in19, have found that placing upper bounds on k can result in strong
deviations from a power law. This SOTEA model introduces tight constraints on the values
of k (e.g. upper and lower bounds, quadratic set point) so the k distribution results are not
unexpected.
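
The comparison between the observed degree distribution and a Poisson reference can be checked with a short
calculation; the sketch below (an assumption, not part of the original study) tabulates the empirical degree
frequencies against a Poisson distribution with the same mean degree.

from collections import Counter
import numpy as np
from scipy.stats import poisson

def degree_distribution_vs_poisson(graph):
    # Compare the empirical degree frequencies with a Poisson law of equal mean.
    degrees = np.array([d for _, d in graph.degree()])
    counts = Counter(degrees.tolist())
    n = len(degrees)
    k_ave = degrees.mean()
    return [(k, counts[k] / n, poisson.pmf(k, k_ave)) for k in sorted(counts)]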
6.1.2. SOTEA Scaling


A visual analysis can often provide useful insights into network structure. Fig. 11 shows
SOTEA networks after 500 generations of evolution with varying population sizes (N=50,
100, 200) and with either KMax = 7 or KMax = 5. One noticeable consequence of the SOTEA
model is that many nodes are found in four-node neighborhood clusters and, in particular, there
appears to be a “kite” motif present in the network. It is expected that this is in part due to the
degree lower bound of KMin = 3 in SOTEA. In the network visualizations, node size is
increased to reflect individuals with better fitness. Because fitness can change with each
generation, we see that nodes with high fitness are not always associated with hub positions.
More generally, the emergence of these properties is limited by the fact that population
members are always in flux and thus the driving forces for cluster and hub formation are also
subject to change over time. As population size increases, one can also notice residual ring-
like structures in the network, even after 500 generations. This indicates that initial
topological bias continues to impact the network over long periods of time for larger
populations. Further investigation is needed to determine how this historical structural bias
(in conjunction with initial genetic bias) can influence algorithm search behavior.

Fig. 11 SOTEA network visualizations with population sizes N = 50 (top), N = 100 (middle), and N = 200
(bottom); left column: KMax = 7, right column: KMax = 5.

7. Discussion

7.1. SOTEA Network Model


Network dynamics were modeled in SOTEA based on a few guiding principles. First, we felt
it was necessary to have topological changes guided by interactions with the fitness landscape,
with structural changes enacted on local regions of the network. This local restructuring not
only occurs in many real-world complex systems but is also necessary for an efficient parallel
implementation of the algorithm. This led to the use of network rewiring rules based on short
stochastic walks and node property values that are based on local information.
Second, we wanted to couple the structural dynamics of the network to the dynamics of the
EA population in a way that could promote modularity and allow for high levels of
exploration and exploitation in different regions of the population. To be effective, such
modularity could not be imposed on the population but instead needed to emerge and adapt
based on information gathered during the search process. SOTEA’s superior performance
across most problems provides evidence that coevolution between a population and its
topology is a readily exploitable feature of natural systems and can be effectively utilized in
nature-inspired population based search heuristics.

7.2. Distributed EA research


Considerable research efforts have been devoted to the study of distributed evolutionary
algorithms. These efforts include the study of fine-grained (e.g. cellular grids), coarse-
grained (e.g. island models), and hybrid structures (e.g. hierarchical). The highly modular
topology of the SOTEA model, combined with short stochastic walk interactions, is likely to create
virtual islands in which interactions within a cluster are much more frequent than interactions
between clusters. Quantifying the prevalence of such behavior is possible by calculating the
characteristic residence time of random walks on local regions of the network, e.g. using methods
described in Section 2.3 of57 (a simple simulation-based estimate is sketched after this paragraph). Assuming that
clusters do become relatively isolated from other clusters, this would allow for a more nature-
inspired approach to the integration of fine-grain and coarse-grain structures within an EA
population (compared with explicitly defined hierarchical topologies). Of course, how these
clusters form using fitness information contained in the population will greatly influence
algorithm behavior. With the topological operators presented in this study, SOTEA appears
to generate a robust search process that evolves as an emergent property of the system
(through its interaction with the problem). This should be contrasted with surrogates of
robustness (e.g. diversity preservation, niching, crowding) that have been incorporated into
EAs in previous studies based on a priori knowledge about a problem’s fitness landscape
properties.
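
A simple simulation-based estimate of such residence times (a sketch under assumptions, not the method of Section 2.3
in57) is to start random walkers inside a cluster and record the number of steps taken before they first leave it:

import random

def mean_residence_time(graph, community, n_walks=1000, max_steps=10000):
    # graph: a networkx.Graph (or any object providing neighbors(node));
    # community: the set of nodes forming one cluster.
    community = set(community)
    exit_times = []
    for _ in range(n_walks):
        node = random.choice(tuple(community))
        for step in range(1, max_steps + 1):
            node = random.choice(list(graph.neighbors(node)))
            if node not in community:
                exit_times.append(step)   # first step outside the cluster
                break
    # Walks that never leave within max_steps are excluded from the average.
    return sum(exit_times) / len(exit_times) if exit_times else float("inf")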
Some studies have suggested that a population that is spatially distributed over a static
topology can enhance some types of robustness in an EA (e.g. see34). How this occurs has not
been fully determined; however, intuition suggests that a distributed population topology
influences population dynamics by creating a weaker coupling across the population. A
weaker coupling can attenuate fast systemic responses to local attractors and may allow for a
more diffuse and explorative search to take place. On the other hand, the use of a static
topology is itself a global and inflexible approach to achieving robustness to local attractors.
Moreover, it is expected to reduce the speed by which any information can be exploited since
it establishes a global predefined tradeoff between exploration and exploitation in the system.
Alternatively, a topology that adapts in response to local attractors has the potential to allow
for qualitative differences in search behavior for different segments of the population.

8. Conclusions
SOTEA Network Model: A Self-Organizing Topology Evolutionary Algorithm (SOTEA)
has been presented with a distributed population structure that coevolves with EA population
dynamics; the first known optimization algorithm with such a coevolving state-structure
relationship. Based on the results of this study as well as theoretical issues raised in the
introduction, we feel that the coevolution of states and structure provides a unique and
interesting extension to the design of search algorithms.
The general framework that allows for this coevolution to be implemented is straightforward.
With the population defined on a network, rules are used to modify the network topology
based on the current state of the population. In particular, structural changes are initiated by a
dynamic state value in each node, e.g. individual fitness. Node state dynamics are a simple
consequence of the genetic operators implemented within the evolutionary search process.
The SOTEA model presented in this paper was designed to structurally adapt to the fitness
landscape based on local network information and local topological changes; features that
were motivated by both practical implementation concerns and theoretical motivations.
Network dynamics were driven by i) an adaptive connectivity rule, in which higher-fitness individuals
were encouraged to obtain higher levels of connectivity, and ii) an adaptive definition of
community, which encouraged high levels of clustering amongst nodes with low fitness.
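For illustration only, the fragment below sketches what one local, fitness-driven rewiring step in this spirit might
look like; it is a hypothetical simplification rather than the exact SOTEA operators, and the bounds k_min and k_max
mirror the KMin and KMax parameters discussed earlier.

import random

def local_rewire_step(graph, fitness, k_min=3, k_max=7):
    # One illustrative step: shift a link toward a fitter neighbour while
    # keeping node degrees within [k_min, k_max].
    node = random.choice(list(graph.nodes()))
    neighbours = list(graph.neighbors(node))
    if not neighbours:
        return
    best = max(neighbours, key=lambda n: fitness[n])
    if fitness[best] <= fitness[node] or graph.degree(best) >= k_max:
        return
    candidates = [n for n in neighbours if n != best and not graph.has_edge(best, n)]
    if candidates and graph.degree(node) > k_min:
        other = random.choice(candidates)
        graph.remove_edge(node, other)   # the lower-fitness node gives up one link...
        graph.add_edge(best, other)      # ...which is re-attached to its fitter neighbour
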
Topological Analysis: Self-organization of the population network topology resulted in high
levels of clustering, small characteristic path length, and correlations between the clustering
coefficient and a node's degree. Each of these characteristics is broadly similar to
what is observed in biological networks.
Performance: A number of engineering design problems and artificial test functions were
selected to evaluate the robustness of the new SOTEA algorithm compared with another
distributed design, the cellular GA. Results indicate the SOTEA algorithm often had better
performance and more consistent results compared with the cGA. Both of the distributed
Evolutionary Algorithms strongly outperformed a suite of 16 other Evolutionary Algorithms
tested. From these results, we propose that the coevolution between a population and its
topology may constitute a promising new paradigm for designing adaptive search heuristics.

9. References

1. Sayama, H., M.A.M. Aguiar, Y. Bar-Yam, and M. Baranger, Spontaneous pattern
formation and genetic invasion in locally mating and competing populations.
Physical Review E, 2002. 65(5): p. 51919.
2. Boerlijst, M.C. and P. Hogeweg, Spiral wave structure in pre-biotic evolution:
hypercycles stable against parasites. Physica D, 1991. 48(1): p. 17-28.
3. Eppstein, M.J., J.L. Payne, and C. Goodnight. Sympatric speciation by self-
organizing barriers to gene flow in simulated populations with localized mating. in
Proceedings of the Genetic and Evolutionary Computation Conference. 2006.
Seattle.
4. Whitacre, J.M., R.A. Sarker, and Q.T. Pham, The Self-Organization of Interaction
Networks for Nature-Inspired Optimization. IEEE Transactions on Evolutionary
Computation, 2008. 12(2): p. 220-230.
5. Santos, F.C. and J.M. Pacheco, Scale-Free Networks Provide a Unifying Framework
for the Emergence of Cooperation. Physical Review Letters, 2005. 95(9): p. 98104.
6. Gómez-Gardenes, J., Y. Moreno, and L.M. Floriá, On the robustness of complex
heterogeneous gene expression networks. Biophysical Chemistry, 2005. 115: p. 225-
228.
7. Kauffman, S.A., Requirements for evolvability in complex systems: orderly
components and frozen dynamics. Physica D, 1990. 42: p. 135–152.
8. Hornby, G.S. ALPS: the age-layered population structure for reducing the problem
of premature convergence. in Proceedings of the Genetic and Evolutionary
Computation Conference. 2006.
9. Mahfoud, S.W., A Comparison of Parallel and Sequential Niching Methods.
Conference on Genetic Algorithms, 1995: p. 136-143.
10. Alba, E., F. Luna, A.J. Nebro, and J.M. Troya, Parallel heterogeneous genetic
algorithms for continuous optimization. Parallel Computing, 2004. 30(5-6): p. 699-
719.
11. Sarma, J. and K.A. De Jong, An analysis of the effects of neighborhood size and
shape on local selection algorithms. Parallel Problem Solving from Nature, 1996.
1141: p. 236–244.
12. Dorronsoro, B., E. Alba, M. Giacobini, and M. Tomassini, The influence of grid
shape and asynchronicity on cellular evolutionary algorithms. CEC2004 Congress
on Evolutionary Computation, 2004. 2.
13. Giacobini, M., M. Tomassini, A.G.B. Tettamanzi, and E. Alba, Selection intensity in
cellular evolutionary algorithms for regular lattices. IEEE Transactions on
Evolutionary Computation, 2005. 9(5): p. 489-505.
14. Preuss, M. and C. Lasarczyk, On the Importance of Information Speed in Structured
Populations. Lecture Notes in Computer Science, 2004: p. 91-100.
15. Giacobini, M., M. Tomassini, and A. Tettamanzi. Takeover time curves in random
and small-world structured populations. in GECCO. 2005: ACM New York, NY,
USA.
16. Giacobini, M., M. Preuss, and M. Tomassini, Effects of Scale-Free and Small-World
Topologies on Binary Coded Self-adaptive CEA. Lecture Notes in Computer
Science, 2006. 3906: p. 86.
17. Ravasz, E., A.L. Somera, D.A. Mongru, Z.N. Oltvai, and A.L. Barabási,
Hierarchical Organization of Modularity in Metabolic Networks. Science, 2002.
297: p. 1551–1555.
18. Watts, D.J. and S.H. Strogatz, Collective dynamics of 'small-world' networks.
Nature, 1998. 393(6684): p. 440-442.
19. Albert, R. and A.L. Barabási, Statistical mechanics of complex networks. Reviews of
Modern Physics, 2002. 74(1): p. 47-97.
20. Kitano, H., Biological robustness. Nature Reviews Genetics, 2004. 5(11): p. 826-
837.
21. Waddington, C.H., Genetic Assimilation of an Acquired Character. Evolution, 1953.
7(2): p. 118-126.
22. Agrawal, A.A., Phenotypic Plasticity in the Interactions and Evolution of Species.
Science, 2001. 294(5541): p. 321-326.
23. Alba, E. and B. Dorronsoro, The Exploration/Exploitation Tradeoff in Dynamic
Cellular Genetic Algorithms. IEEE Transactions on Evolutionary Computation,
2005. 9(2): p. 126-142.
24. Boccaletti, S., V. Latora, Y. Moreno, M. Chavez, and D.U. Hwang, Complex
networks: Structure and dynamics. Physics Reports, 2006. 424(4-5): p. 175-308.
25. Newman, M.E.J., The structure and function of complex networks. SIAM Review,
2003. 45: p. 167-256.
26. Barabási, A.L. and Z.N. Oltvai, Network biology: understanding the cell's functional
organization. Nature Reviews Genetics, 2004. 5(2): p. 101-113.
27. Erdös, P. and A. Rényi, On random graphs. Publ. Math. Debrecen, 1959. 6: p. 290-
297.
28. Erdös, P. and A. Rényi, On the evolution of random graphs. Bulletin of the Institute
of International Statistics, 1961. 38: p. 343-347.
29. Barabási, A.L. and R. Albert, Emergence of Scaling in Random Networks. Science,
1999. 286(5439): p. 509-512.
30. Wagner, A., Evolution of Gene Networks by Gene Duplications: A Mathematical
Model and its Implications on Genome Organization. Proceedings of the National
Academy of Sciences, USA, 1994. 91(10): p. 4387-4391.
31. Caldarelli, G., A. Capocci, P. De Los Rios, and M.A. Muñoz, Scale-Free Networks
from Varying Vertex Intrinsic Fitness. Physical Review Letters, 2002. 89(25): p.
258702.
32. Vazquez, A., Growing network with local rules: Preferential attachment, clustering
hierarchy, and degree correlations. Physical Review E, 2003. 67(5): p. 56104.
33. Pollner, P., G. Palla, and T. Vicsek, Preferential attachment of communities: The
same principle, but a higher level. Europhysics Letters, 2006. 73(3): p. 478-484.
34. Alba, E. and M. Tomassini, Parallelism and evolutionary algorithms. IEEE
Transactions on Evolutionary Computation, 2002. 6(5): p. 443-462.
35. Whitacre, J.M., Adaptation and Self-Organization in Evolutionary Algorithms.
2007, University of New South Wales: PhD Thesis. p. 283.
36. Runarsson, T.P. and X. Yao, Stochastic ranking for constrained evolutionary
optimization. IEEE Transactions on Evolutionary Computation, 2000. 4(3): p. 284-
294.
37. Lin, Y.C., F.S. Wang, and K.S. Hwang, A hybrid method of evolutionary algorithms
for mixed-integer nonlinear optimization problems. Congress on Evolutionary
Computation, 1999. 3.
38. Coello, C.A.C., Self-adaptive penalties for GA-based optimization. Congress on
Evolutionary Computation, 1999. 1.
39. Zeng, S.Y., L.X. Ding, and L.S. Kang, An evolutionary algorithm of contracting
search space based on partial ordering relation for constrained optimization
problems. Proceedings of the Conference on Algorithms and Architectures for
Parallel Processing, 2002: p. 76-81.
40. Sandgren, E., Nonlinear integer and discrete programming in mechanical design
optimization. Journal of Mechanical Design, 1990. 112(2): p. 223–229.
41. Li, Y., L. Kang, H. De Garis, Z. Kang, and P. Liu, A Robust Algorithm for Solving
Nonlinear Programming Problems. International Journal of Computer Mathematics,
2002. 79(5): p. 523-536.
42. Fu, J., R.G. Fenton, and W.L. Cleghorn, A mixed integer-discrete-continuous
programming method and its application to engineering design optimization.
Engineering optimization, 1991. 17(4): p. 263-280.
43. Kannan, B.K. and S.N. Kramer, Augmented Lagrange multiplier based method for
mixed integer discrete continuous optimization and its applications to mechanical
design. ASME, 1993. 65: p. 103-112.
44. Cao, Y.J. and Q.H. Wu, Mechanical design optimization by mixed-variable
evolutionary programming. Proceedings of the Conference on Evolutionary
Computation, 1997: p. 443–6.
45. Deb, K., Optimal design of a welded beam via genetic algorithms. AIAA Journal,
1991. 29(11): p. 2013-2015.
46. Sauer, R.N., A.R. Colville, and C.W. Burwick, Computer Points Way to More
Profits. Hydrocarbon Processing, 1964. 84(2).
47. Babu, B.V. and R. Angira, Modified differential evolution(MDE) for optimization of
non-linear chemical processes. Computers and Chemical Engineering, 2006. 30(6):
p. 989-1002.
48. Bracken, J. and G.P. McCormick, Selected Applications of Nonlinear Programming.
1968: Wiley, New York.
49. Maranas, C.D. and C.A. Floudas, Global optimization in generalized geometric
programming. Computers and Chemical Engineering, 1997. 21(4): p. 351-369.
50. Adjiman, C.S., I.P. Androulakis, and C.A. Floudas, A Global Optimization Method,
αBB, for General Twice-Differentiable Constrained NLPs: II. Implementation and
Computational Results. Computers and Chemical Engineering, 1998. 22: p. 1159-
1179.
51. Edgar, T.F., D.M. Himmelblau, and L.S. Lasdon, Optimization of Chemical
Processes. 2001: McGraw-Hill.
52. Floudas, C.A. and P.M. Pardalos, A Collection of Test Problems for Constrained
Global Optimization Algorithms. 1990: Springer.
53. Angira, R. and B.V. Babu. Evolutionary Computation for Global Optimization of
Non-Linear Chemical Engineering Processes. in Proceedings of International
Symposium on Process Systems Engineering and Control (ISPSEC’03)-For
Productivity Enhancement through Design and Optimization 2003. Mumbai.
54. Belegundu, A. and J. Arora, A study of mathematical programming methods for
structural optimization. part i: theory. International Journal for Numerical Methods
in Engineering, 1985. 21: p. 1583-1599.
55. Arora, J.S., Introduction to Optimum Design. 1989: McGraw-Hill.
56. Whitacre, J.M., R.A. Sarker, and Q.T. Pham, Making and breaking power laws in
evolutionary algorithm population dynamics. Memetic Computing, 2009. 1(2): p.
125.
57. Lesne, A., Complex Networks: from Graph Theory to Biology. Letters in
Mathematical Physics, 2006. 78(3): p. 235-262.
