
In: Water Resources Research Progress ISBN 1-60021-973-x

Editor: Liam N. Robinson, pp. 67-99


© 2008 Nova Science Publishers, Inc.

Chapter 2

AI TECHNIQUES FOR HYDROLOGICAL
MODELING AND MANAGEMENT.
II: OPTIMIZATION
G.B. Kingston, G.C. Dandy and H.R. Maier
School of Civil & Environmental Engineering, The University of Adelaide,
Adelaide, Australia

Abstract
This article is the second of a two-part review of artificial intelligence (AI) based
techniques used in hydrologic applications. The first part of this series presented an
overview of several AI methods that could be used for prediction and simulation of
hydrological systems. In this part, the focus is on AI-based optimization techniques.
Hydrological modeling and management problems are often difficult to solve for various reasons, and AI-based optimization methods tend to be more suited to such problems than traditional optimization or problem-solving techniques. The main reasons for this are that they are population-based, meaning that they search from a population
of possible solutions rather than a single point; they can handle any type of objective
function and constraints and do not require these functions to be continuous or dif-
ferentiable; they are flexible in their application; and can be implemented on parallel
hardware. However, there are a number of these optimization methods available and,
by discussing their advantages, limitations and previous applications in the field of
hydrology and water resources management, this review attempts to provide guidance
as to which methods are best suited to which problems.

1. Introduction
As stated by Solomatine [1],
“Many issues related to water resources require the solution of optimization
problems. These include reservoir optimization, problems of optimal alloca-
tion of resources and planning, calibration of models, and many others.”
However, factors including multimodality, complex constraints, large dimensionality, non-
linearity, noise and time varying objective functions are common features of water resource

systems and the models used to simulate them, and these factors often lead to difficult or un-
solvable problems [2]. Problem solving techniques aim to explore various possible permu-
tations of decision variables (i.e. the controllable variables which influence the outcome)
until the best solution, according to the problem objectives, is found. The most straight-
forward way of doing this involves complete enumeration of the search space, where all
possible solutions are checked and the global optimum identified. However, the size of
real-world problems generally makes complete enumeration prohibitive, or at best, cum-
bersome. Therefore, most search techniques also exploit information about the solution
surface (i.e. objective function response surface) or the solutions themselves, such that
attention is focused on only a relatively small subset of the search space and the (near)
optimum is found in a reasonable amount of time.
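As a toy illustration of complete enumeration (the two-variable objective below is hypothetical, not taken from the text), a brute-force search over a discrete search space might be sketched as:

```python
from itertools import product

def enumerate_optimum(objective, domains):
    """Exhaustively evaluate every combination of decision-variable
    values and return the best (here, maximum-objective) solution."""
    best_x, best_f = None, float("-inf")
    for x in product(*domains):
        f = objective(x)
        if f > best_f:
            best_x, best_f = x, f
    return best_x, best_f

# Illustrative objective with two discrete decision variables,
# maximized at x = (3, 5)
obj = lambda x: -(x[0] - 3) ** 2 - (x[1] - 5) ** 2
best_x, best_f = enumerate_optimum(obj, [range(10), range(10)])
```

Even this tiny example evaluates 100 candidate solutions; the cost grows exponentially with the number of decision variables, which is why complete enumeration is generally prohibitive for real-world problems.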
Given reasonable knowledge of the problem at hand, it may be possible to select several
‘good’ alternative solutions (e.g. alternative management scenarios, model parameter val-
ues) for analysis. The analysis itself involves building a simulation model of the problem
and using the model to evaluate the alternatives. A quasi-optimal solution is then selected
based on the comparative performances of the alternatives, as determined according to the
problem objective(s). A limitation of this method, however, is that the analysis is often too
limited and based too heavily on the past experience of the modeler or decision maker, and
as a result, more optimal solutions may remain undiscovered. Furthermore, there may not
be sufficient knowledge of the system to select good alternative solutions in the first place.
Formal search or optimization procedures, on the other hand, may be used to automat-
ically determine and evaluate the alternatives considered. There are numerous traditional
optimization procedures available, including gradient-based search methods, linear pro-
gramming, dynamic programming and simulated annealing, to name a few. Yet, while each
of these methods generally works well when applied to the types of problems they were de-
signed for (e.g. those with a single optimum; continuous, differentiable objective functions;
linear and recursive problems, etc.), none are robust under a wide range of conditions [3].
Furthermore, water resources modeling and management problems generally do not fall
within the strict limits in which these optimization techniques can be effectively applied.
Rather, as mentioned above, these problems tend to be intractable, with numerous dimensions, many local optima, and possible discontinuities. An example of such a solution
surface is shown in Figure 1 for a problem with two decision variables.
AI-based optimization techniques have a number of common properties that make them
more suitable for hard-to-solve problems. Unlike traditional methods, which base further
exploration of the search space on a single ‘current best’ solution, AI optimization methods
are population-based and evolutionary in nature, meaning they search for optimal solutions
from a number of different locations in many different directions, whilst making use of
information contained in the population to find better solutions. In comparison to tradi-
tional optimization methods, this makes AI-based techniques much less likely to restrict
the search to local optima [4]. Furthermore, AI methods base their search on evaluations
of the objective function, rather than on information about this function itself (e.g. gradient
information, derivatives), which allows them to be applied to any type of problem, given
that it can be simulated.
A common feature of AI-based optimization techniques is that they evolve from one
population to the next by means of random variation and selection. However, they differ in
Figure 1. Example nonlinear, multimodal, discontinuous solution surface (objective function response plotted over two decision variables, x1 and x2).

the way these operators are applied. They also differ in their representation of individual solutions: some embed memory about the problem in the solutions themselves, while others contain this information in an environment which can be modified by the solutions [5]; and
some use actual values of the decision variables, while others use encoded values. While AI
optimization methods are, in general, very flexible and versatile, through the use of differ-
ent types of representation, variation and selection operators, these methods have generally
been designed to be more suitable for some problems than others. This review aims to de-
scribe some of the more popular AI-based optimization techniques that are applicable to
hydrology and water resources problems, including evolutionary computation, swarm intelligence and evolutionary multi-objective optimization. A review of their hydrology-related
applications is presented, together with a discussion of the strengths and limitations of the
various optimization techniques.

2. Evolutionary Computation
2.1. Introduction
Evolutionary computation (EC) is a branch of AI that was inspired by Darwin’s theory of
natural selection and survival of the fittest, where a population evolves over time by selec-
tively sharing information among the ‘fittest’ members. Techniques belonging to this area
are used to explore a solution space from multiple solutions, or a ‘population’ of solutions,
simultaneously. Furthermore, the solutions within the population compete to survive and
contribute to future generations. As in nature, the better, or ‘fitter’, solutions are more likely
to do this and, as such, information about the better solutions is passed on from one gen-
eration to the next, analogous to the way in which the genetic material of parents is passed
on to their children. Two popular EC methods often used for optimization include genetic
algorithms (GAs) and the shuffled complex evolution algorithm developed at the University
of Arizona (SCE-UA).

2.2. Genetic Algorithms


GAs are a general purpose stochastic search technique that can be used to solve complex
optimization problems. To do this, they employ genetics-inspired operators such as selec-
tion, crossover and mutation to evolve from one population of artificial ‘chromosomes’ to a
new one. The evolutionary process starts from a completely random population and occurs
over a number of successive iterations, or ‘generations’, where the new population formed
in one generation becomes the population from which a new population is evolved in the
next, and so forth. The main steps in a GA are described as follows:

STEP 1: To initiate a GA, an initial population of chromosomes is randomly generated


within the defined search space. Each chromosome, which represents a candidate
solution to the optimization problem, is made up of a number of ‘genes’, which con-
tain encoded values of the decision variables. There are many types of encoding
techniques available for representing the variables, the most popular being binary en-
coding, where chromosomes are represented by strings of 1s and 0s [6]. However, it
is generally better to encode numerical variables directly as integers or real values.
An example of an integer encoded chromosome is shown in Figure 2. In this exam-
ple, each gene in the chromosome represents a decision variable, with the values of
the genes corresponding to particular discrete options for the variables.

2 4 1 1 3 6 2 3 2 1 1 4 6 3

Figure 2. Single integer encoded chromosome.

STEP 2: In this step, the ‘fitness’ of each individual chromosome, which measures the
chromosome’s performance in relation to the problem being solved, is evaluated. As
GAs are essentially an unconstrained optimization procedure, an important consid-
eration is how to appropriately incorporate constraints into the fitness function. The
most commonly used approach for doing this is to use penalty functions [7], where
the objective function f (x) used to evaluate the performance of the candidate solu-
tion x is extended to incorporate a penalty term as follows:

fitnessi = f(xi) − Qi        (1)


where the penalty Qi is subtracted (since the best solution is that with the maximum
fitness) when a solution is infeasible (i.e. Qi = 0 when the solution is feasible). Ide-
ally, the penalty term should be small enough such that it does not inhibit exploration
of the feasible search space, but just large enough to ensure that infeasible solutions
are not considered feasible.
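A minimal sketch of this penalty approach follows; note that Eq. (1) leaves the form of Qi open, so the linear penalty term and the coefficient used here are illustrative assumptions only:

```python
def fitness(f_value, constraint_violation, penalty_coeff=1000.0):
    """Penalty-based fitness as in Eq. (1): subtract a penalty Q from
    the objective value when the solution is infeasible; Q = 0 when
    the solution is feasible. The linear penalty form and the
    coefficient of 1000 are illustrative choices, not prescriptive."""
    Q = penalty_coeff * max(0.0, constraint_violation)
    return f_value - Q
```

For example, a feasible solution (zero violation) keeps its raw objective value, while an infeasible one is pushed down in proportion to how badly it violates the constraints, so that selection steers the population back toward the feasible region.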

STEP 3: A number of ‘parent’ chromosomes are then selected from the population to fill
the mating pool which contributes offspring to the next generation. Different types
of selection operator may be used to fill the mating pool; however, since the aim
is to maximize fitness, fitter chromosomes typically have a greater chance of being
selected. The size of the mating pool needs to be the same as the initial population,
which means that fitter chromosomes are generally included in the mating pool more
than once, whereas less fit chromosomes may not be included at all. This process is
analogous to natural selection, where fitter individuals are more likely to survive and
breed, whereas weaker individuals die out and become extinct.
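One common realization of this idea is fitness-proportionate (roulette wheel) selection; the sketch below is illustrative (the text does not prescribe a particular selection operator) and assumes non-negative fitness values:

```python
import random

def roulette_selection(population, fitnesses, pool_size):
    """Fitness-proportionate selection: each chromosome's chance of
    entering the mating pool is proportional to its fitness, so fitter
    chromosomes may appear more than once and weak ones may be absent.
    Assumes all fitness values are non-negative."""
    total = sum(fitnesses)
    cumulative, running = [], 0.0
    for f in fitnesses:
        running += f
        cumulative.append(running)
    pool = []
    for _ in range(pool_size):
        r = random.uniform(0.0, total)   # spin the wheel
        for chrom, c in zip(population, cumulative):
            if r <= c:
                pool.append(chrom)
                break
    return pool
```

Sampling is done with replacement until the pool reaches the population size, matching the description above.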

STEP 4: Once the parent chromosomes have been selected, a genetic crossover operator
is applied between pairs of parents to produce offspring, which form the next gener-
ation of chromosomes. The parents are paired by randomly selecting two chromo-
somes from the mating pool, without replacement. There are a number of different
forms of crossover operator, all of which are designed to combine the information
contained in the parent chromosomes. This is generally done by exchanging portions
of the two parent chromosomes after a (randomly selected) single crossover point, or
between multiple crossover points, to produce two new offspring. This is illustrated
in Figure 3. The crossover operator is assigned a probability, or crossover rate, which
determines whether or not crossover between a pair of parents will occur. Crossover
among parent chromosomes is a common natural process [8]; therefore, it is tradi-
tionally given a relatively high probability in a GA ranging from 0.6 to 1.0 [9].

Parent chromosome A: 2 4 1 1 3 6 2 | 3 2 1 1 4 6 3     Offspring A: 2 4 1 1 3 6 2 | 1 3 1 2 5 5 2
Parent chromosome B: 1 5 3 2 2 4 1 | 1 3 1 2 5 5 2     Offspring B: 1 5 3 2 2 4 1 | 3 2 1 1 4 6 3

Figure 3. Single point crossover applied to integer encoded chromosomes (the vertical bar marks the crossover point).
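The single-point crossover of Figure 3 might be implemented along these lines (the crossover-rate check and the default rate of 0.8 are illustrative, chosen from within the 0.6-1.0 range quoted above):

```python
import random

def single_point_crossover(parent_a, parent_b, crossover_rate=0.8):
    """Exchange the gene segments after a randomly chosen crossover
    point. Crossover occurs with probability `crossover_rate`;
    otherwise the parents are copied into the offspring unchanged."""
    if random.random() > crossover_rate:
        return parent_a[:], parent_b[:]
    point = random.randint(1, len(parent_a) - 1)   # random cut point
    child_a = parent_a[:point] + parent_b[point:]
    child_b = parent_b[:point] + parent_a[point:]
    return child_a, child_b
```

Multi-point variants differ only in choosing several cut points and alternating segments between them.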

STEP 5: Mutation, which is the occasional random alteration of the value of a gene [10],
is the final step in the generation of offspring chromosomes. This operator ensures
that the evolution does not become trapped in unpromising regions of the search
space by introducing new information into the search. Similar to the selection and
crossover operators, there are a number of alternative mutation operators available.
In Figure 4, mutation of two genes in an integer encoded chromosome is illustrated,
where the values of the randomly selected genes are altered in some fashion, such
that they take on a new random value within the feasible range of the corresponding
decision variables. A mutation rate is also assigned to the mutation operator, and this
is generally applied to a chromosome on a gene by gene basis. The bulk of a GA’s
processing power can be attributed to selection and crossover; therefore, mutation
plays a secondary role in the algorithm [10]. As mutation in nature is a rare process,
the mutation rate per gene is generally set to a small value (e.g. less than 0.1) [9].

Before: 2 4 1 1 3 6 2 3 2 1 1 4 6 3     After: 2 4 1 1 4 6 2 3 2 1 1 4 2 3

Figure 4. Random mutation of individual genes within an integer encoded chromosome.
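Gene-by-gene mutation of the kind shown in Figure 4 can be sketched as follows (the default rate of 0.05 is an illustrative value below the 0.1 guideline mentioned above):

```python
import random

def mutate(chromosome, gene_ranges, mutation_rate=0.05):
    """Per-gene random mutation: each gene is replaced by a new random
    value from its feasible (integer) range with probability
    `mutation_rate`, which is kept small since mutation in nature is
    a rare process."""
    mutant = chromosome[:]
    for i, (low, high) in enumerate(gene_ranges):
        if random.random() < mutation_rate:
            mutant[i] = random.randint(low, high)
    return mutant
```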

STEP 6: Steps 2–5 are repeated for many generations until some stopping criterion has
been met. The process is shown in Figure 5.

Figure 5. Schematic of a GA outlining the main steps performed: initialize a population of N chromosomes sampled at random from the feasible search space (Step 1); evaluate the fitness of each chromosome (Step 2); if the stopping criterion is met, stop (Step 6); otherwise, select parent chromosomes for the mating pool (Step 3), crossover parent chromosomes to generate offspring (Step 4), mutate random genes in the offspring chromosomes (Step 5), replace the current chromosome population with the new population, and return to Step 2.
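Putting Steps 1-6 together, a minimal GA loop might look like the sketch below; the tournament-style selection and all parameter defaults are illustrative choices, not prescriptions from the text:

```python
import random

def run_ga(fitness_fn, gene_ranges, pop_size=20, generations=50,
           crossover_rate=0.8, mutation_rate=0.05):
    """Minimal integer-coded GA: pairwise tournament selection,
    single-point crossover and per-gene random mutation, repeated
    until the generation limit (the stopping criterion) is reached."""
    def random_chrom():
        return [random.randint(lo, hi) for lo, hi in gene_ranges]

    pop = [random_chrom() for _ in range(pop_size)]            # STEP 1
    for _ in range(generations):                               # STEP 6
        fits = [fitness_fn(c) for c in pop]                    # STEP 2

        def select():                                          # STEP 3
            a, b = random.sample(range(pop_size), 2)
            return pop[a] if fits[a] >= fits[b] else pop[b]

        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = select()[:], select()[:]
            if random.random() < crossover_rate:               # STEP 4
                pt = random.randint(1, len(p1) - 1)
                p1, p2 = p1[:pt] + p2[pt:], p2[:pt] + p1[pt:]
            for child in (p1, p2):                             # STEP 5
                for i, (lo, hi) in enumerate(gene_ranges):
                    if random.random() < mutation_rate:
                        child[i] = random.randint(lo, hi)
            new_pop.extend([p1, p2])
        pop = new_pop[:pop_size]
    return max(pop, key=fitness_fn)
```

For instance, maximizing the sum of four genes each in the range 0-9 should drive the population toward chromosomes summing close to the maximum of 36.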

2.3. Shuffled Complex Evolution


The SCE-UA algorithm [11] has become a popular optimization technique in recent years,
primarily used for calibrating, or optimizing the parameters of, conceptual watershed mod-

els. While not strictly based on Darwin’s theory of evolution and survival of the fittest, it
combines the strengths of several existing optimization methods including simplex search,
controlled random search and complex shuffling with competitive evolution, where com-
munities are evolved through a ‘reproduction’ process. Like the GA, the SCE-UA method
begins with an initial random population of points, which represent candidate solutions to
the problem. However, unlike GAs, decision variables must be continuous and the feasible
search space must be specified by placing upper and lower limits on these variables. The
population of candidate solutions is divided into several communities, or complexes, which
are then evolved independently, through a ‘reproduction’ process, where each member in
a complex is a potential ‘parent’ with the ability to participate in the reproduction process.
At periodic stages of the evolution, the entire population is shuffled before points are reas-
signed to complexes (i.e. the communities are mixed and new communities formed). This
promotes the sharing of information gained by each community in order to direct the en-
tire population toward the neighborhood of a global optimum. The main steps carried out
during the algorithm are as follows:
STEP 1: To initialize the process, a random sample of points is generated within the de-
fined feasible search space. The size of the sample s is equal to the number of com-
plexes p, multiplied by the number of points in each complex m (i.e. s = m × p).
STEP 2: The fitness of each point, f(x1), ..., f(xs), is evaluated in this step. The s points are then sorted in order of decreasing fitness and stored in the array D = {xi, f(xi), i = 1, ..., s}, such that i = 1 represents the candidate solution with the highest fitness.
STEP 3: The array D is partitioned into p complexes A1, ..., Ap, each containing m points, such that the first complex contains every [p(j − 1) + 1] ranked point, the second complex contains every [p(j − 1) + 2] ranked point, and so on, where j = 1, ..., m (i.e. Ak = {xkj, f(xkj) | xkj = x(k+p(j−1)), j = 1, ..., m}).

STEP 4: The complexes are evolved using the competitive complex evolution (CCE) algo-
rithm [11]. In this algorithm, a number of subcomplexes are selected from a complex,
where a subcomplex acts as a pair of parents, although it may contain more than two
members. A probability is assigned to the members of the complex such that better
points have a greater chance of becoming parents, similar to the selection operator
described for the GA. The downhill simplex method [12] is then applied to each sub-
complex to produce most of the offspring, where reflection and contraction processes
are applied to direct the evolution in an improvement direction. Offspring are also oc-
casionally randomly introduced to ensure that the evolution does not become trapped
in an unpromising region. This is analogous to the mutation operator used in a GA.
Each new offspring produced by a subcomplex then replaces the worst point in the
subcomplex.
STEP 5: Once the complexes have been evolved, they are shuffled by combining all of the
points in the evolved complexes into a single population.
STEP 6: Steps 2 to 5 are repeated until a stopping criterion has been met. A schematic of
the SCE-UA algorithm is shown in Figure 6, outlining the above steps.
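The sorting, partitioning and shuffling of Steps 2, 3 and 5 can be sketched as follows; the CCE evolution of Step 4 (simplex reflection and contraction) is omitted here for brevity:

```python
def sce_partition(points, fitness_fn, p, m):
    """SCE-UA Steps 2-3: sort the s = p * m points in order of
    decreasing fitness, then deal them into p complexes so that
    complex k receives the points ranked k, k + p, k + 2p, ...
    (1-indexed: [p(j - 1) + k] for j = 1, ..., m)."""
    ranked = sorted(points, key=fitness_fn, reverse=True)
    return [[ranked[k + p * j] for j in range(m)] for k in range(p)]

def sce_shuffle(complexes):
    """SCE-UA Step 5: recombine the evolved complexes into a single
    population before fitness re-evaluation and repartitioning."""
    return [pt for cx in complexes for pt in cx]
```

With six points of fitness 0-5 and p = 2, m = 3, the first complex receives the points ranked 1st, 3rd and 5th and the second those ranked 2nd, 4th and 6th, matching the interleaved assignment described in Step 3.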
2.4. Advantages
The advantages of EC optimization methods are discussed below. GAs are the original and
most popular population-based, evolutionary search procedure; therefore, the advantages
presented here are relative to classical optimization techniques. The advantages of tech-
niques presented in later sections of this review, on the other hand, are considered relative
to GAs.

• EC optimization techniques can handle any type of objective function. They do not
depend on gradient information and are therefore suitable for problems where such
information is unavailable (e.g. problems with discontinuous objective functions),
or is very costly to estimate (e.g. complex problems with many interacting decision
variables) [13]. It is also possible for evolutionary algorithms to deal with prob-
lems where no explicit objective function is available (e.g. scheduling, multiobjective
problems). These features make them much more robust than many traditional search
algorithms, such as gradient based methods or dynamic programming.

• One of the main advantages of EC approaches is their domain independence and


the fact that they generally do not require an in-depth mathematical understanding
of the problems to which they are applied. This means that given an appropriate
representation of evolving structures, EC methods can evolve almost anything and,
furthermore, they are relatively cheap and quick to implement [14, 15]. GAs are

Figure 6. Schematic of the SCE-UA algorithm outlining the main steps carried out: initialize the population by randomly sampling s = m × p points from the feasible search space (Step 1); evaluate the fitness of each point and sort the points in order of decreasing fitness (Step 2); if the stopping criterion is met, stop (Step 6); otherwise, partition the population into p complexes, each containing m points (Step 3), evolve each complex using the CCE algorithm (selection, reflection, contraction and mutation) (Step 4), shuffle the complexes by combining them into a single population (Step 5), and return to Step 2.
easily hybridized with other methods and can be used to carry out optimization in
conjunction with any simulation model (as can the SCE-UA algorithm, given that
the decision variables are continuous). This flexibility and general applicability has
allowed EC approaches to be adopted in a wide range of disciplines [16].

• Evolutionary algorithms deal with a population of solutions simultaneously, rather


than a single point. Furthermore, the use of probabilistic transition rules allows
exploration of the search space in many directions, which means that a number of
promising regions are able to be identified. Many real-world optimization problems
require multiple solutions (e.g. when there are multiple objectives to be met). In
such cases, a GA or the SCE-UA algorithm could be used to provide an entire set
of Pareto-optimal solutions in a single run, rather than having to perform a series of
separate runs to obtain these solutions (see Section 4.). Additionally, from a model
calibration or model fitting point of view, the generation of a number of alternative
near-optimal solutions may be beneficial, as the parameters that give the best fit to
the data are not always those that result in the most physically plausible model, or,
alternatively, they may perform well only in some cases (e.g. peak flows), while other
parameters perform better in others (e.g. low flows).

• To move from one generation to the next, EC methods combine solutions, which
potentially enables long leaps to be taken in the search space [17]. Therefore, unlike
many traditional optimizers, EC algorithms are able to jump from one optimum to
another and are less susceptible to becoming trapped in local minima.

• Whereas most standard problem-solving approaches are serial, the parallel nature
of EC enables such algorithms to be implemented on parallel hardware, which can
significantly reduce the time required to find (near) optimal solutions [18].

• EC methods are adaptable to a dynamic environment. The majority of traditional


optimizers assume a fixed fitness function and any change in that function requires
restarting the algorithm. EC techniques, on the other hand, handle time-varying fit-
ness functions and unexpected events naturally through the evolutionary process [3].
As they are population-based, if an optimal solution suddenly becomes sub-optimal,
or even infeasible, due to a change in the objective function, other solutions in the
population may become optimal, or at least provide a better location to search from.

2.5. Disadvantages
The disadvantages of EC techniques are discussed below.

• The aim of EC algorithms is to seek and find good solutions to a problem; however,
they do not guarantee that an optimum solution will be found. In fact, a number of
features have been identified that cause difficulty to evolutionary algorithms, and may
prevent their convergence to the optimum solution. These include multi-modality,
deception, isolated optima, and collateral noise [19]. Multi-modality may cause the
optimization algorithm to become stuck in a local optimum if appropriate parameters
are not used; deception can cause it to be misled towards deceptive attractors (lo-
cal optima favored by almost the entire search space); an isolated optimum may be
difficult to find if the surrounding search space provides no useful information; and
collateral noise, which comes from the improper evaluation of good partial solutions due to the excessive noise coming from other parts of the solution vector, poses difficulties if the population size is not adequate to distinguish signal from noise [20]. These
problems usually have a ‘rugged’ fitness landscape.

• Although GAs are good at global search and identifying promising regions in the
search space, they are generally inefficient in fine-tuned local search [13]. How-
ever, due to the ease with which GAs may be hybridized with other methods, their
efficiency may be improved by incorporation of a local search procedure into the evo-
lution. The SCE-UA algorithm was developed by combining an evolutionary search
with a local search procedure; however, this increases the complexity and computa-
tional intensity of the algorithm.

• Being randomized search procedures, EC methods are usually computationally in-


tensive. Typically, fitness values will have to be evaluated thousands of times before
a near-optimum solution is identified. For optimization problems using GAs or the
SCE-UA algorithm, evaluating the fitness may be as simple as plugging a trial solu-
tion into an equation; however, it may also be as complicated as running a complete
simulation. In general, EC methods are not well-suited to real-time applications and,
in cases where evaluation of the fitness is complex and time consuming, the use of an
EC method for optimization could be infeasible.

• The parameters used to control the operation of an EC algorithm (e.g. population


size, crossover and mutation rates, number of generations) can greatly influence the
quality of the final solution and the efficiency with which it is found. These param-
eters must be defined before the algorithm is used; however, determining the best
set of parameter values can be extremely difficult and highly problem dependent. Al-
though the importance of determining the best parameter values has been recognized,
no universal rules have yet been found [18]. Generally, parameter values are deter-
mined based on experience and trial-and-error, which can be time consuming and
computationally expensive [16].

• EC methods are poor at handling constraints. It has been noted that optimal solutions
often lie on the boundary of a feasible region and, therefore, many of the solutions
most similar to the optimum will be infeasible [21]. By restricting the search to feasi-
ble regions only, or by imposing severe penalties on infeasible solutions, it is difficult
to generate potential solutions that will drive the population toward the optimum. On
the other hand, if the penalty applied to the objective function is not severe enough,
a significant amount of the search time will be spent in regions that are far from the
feasible region and, consequently, the search will tend to stall outside the feasible
region [22]. While numerous constraint-handling techniques have been proposed for
EC methods, each suffers its own drawbacks [7, 23].

• GAs tend to evaluate too many trial solutions. In the generation of trial solutions,
relatively random parent solutions are combined with little attention paid to the po-
tential fitness values of the children produced. As a result, many infeasible solutions
are generated and evaluated, rather than using an intelligent method to ensure that
only feasible solutions are generated. The SCE-UA method requires a priori specifi-
cation of the feasible search space and therefore overcomes this limitation. However,
the need to prespecify the feasible search space is a limitation in itself.

• The choice of representation (e.g. structure of individual solutions together with


the choice of crossover and mutation operators) and fitness, or objective function,
can have an enormous impact on the way an EC system performs [24]. The fitness
function must be chosen such that the population evolves to what are truly better
quality solutions, which, in some cases, is not a trivial problem.

2.6. Applications
Due to their many advantages over traditional optimization and problem-solving techniques,
evolutionary optimization methods, in particular GAs, have proven very popular in hydrol-
ogy and water resources management. For example, GAs have been used to optimize water
monitoring networks [25–27]; irrigation planning and management practices [28]; reservoir
operations [18, 29–33]; design and operations of water distribution systems [34–36]; and
watershed, river and aquifer management [37–40]. They have also been used in numerous
studies to build, optimize and calibrate hydrology-related models; for example, Bowden
et al. [41–43] used a GA to select the optimal input variables for data driven water
quality prediction models and to divide available hydrological, meteorological and water
quality data into appropriate calibration and validation subsets; Solomatine [1], Franchini
and Galeati [44], Franchini et al. [45], Solomatine et al. [46] and Kingston et al. [47] used
a GA to calibrate conceptual and ANN hydrological models; and Abrahart et al. [48] used
a GA to optimize the structure of an ANN model used for rainfall-runoff modeling.
As mentioned, the SCE-UA algorithm has become popular in recent years for cali-
brating conceptual watershed models. It has been used successfully for this purpose in
numerous applications, and has often been shown to outperform other calibration meth-
ods [11, 45, 47, 49, 50]. This algorithm has also been applied to more general optimiza-
tion problems including urban water supply headworks optimization [51], optimization of
groundwater management [52] and infrastructure works programming [53].

3. Swarm Intelligence
3.1. Introduction
Swarm intelligence (SI) is an area of AI based on the social behavior of a “swarm”, which
is a term used to classify a collection of simple locally interacting organisms with global
adaptive behavior [54]. Examples of swarms found in nature include ant colonies, flocks of
birds, and schools of fish, to name a few. What is interesting about such social organisms is
that, while each individual has its own agenda, the group as a whole is highly organized and
cooperation and coordination among the group is largely self-organized [55]. Furthermore,
while only simple local interactions occur between individuals, the emergent behavior of the
group, which occurs through the social sharing of information, can solve difficult problems,
such as finding the shortest distance to a food source, or finding promising regions on the
landscape during the search for food. Therefore, SI methods provide a suitable approach
for optimizing complex, non-linear functions.
Using SI, the behavior of natural swarms is simulated through the use of multiple
autonomous agents, which may be either candidate solutions to the problem or decision
making entities, and where the behavior of each agent and the interaction between agents is
based on a simple set of rules. The sharing of information between agents is designed such
that each agent can benefit from the discoveries of the rest of the population, or swarm.
Therefore, SI is similar to evolutionary optimization, in that a population-based search for
optimal solutions is performed and the cooperative social behavior of the swarm evolves
through time. There has recently been a growing number of attempts at devising new ways
of applying SI to a diverse range of problems [55]. Two popular SI techniques are ant
colony optimization (ACO), which is inspired by the foraging behavior of ants, and particle
swarm optimization (PSO), which is inspired by the social behavior of flocks of birds.

3.2. Ant Colony Optimization


ACO was developed by Dorigo et al. [56] based on the ability of real ants, which are al-
most blind, to find the shortest route between their nest and a source of food. They do this
via an indirect form of communication that involves individual ants depositing pheromone
along the paths on which they travel, and probabilistically preferring to follow paths rich in
pheromone. The route taken by an individual ant is essentially random; however, when it
encounters a previously deposited pheromone trail, it will follow this path with high proba-
bility, thus reinforcing the trail with its own pheromone [56]. Shorter paths become favored
over time, as these paths take less time to be traversed and are thus reinforced with greater
amounts of pheromone per unit time than alternative longer paths. Consider Figure 7, which
shows a colony of ants preferring the shortest route Nest-B-Food over the alternative longer
route Nest-A-Food. Initially, however, when there is no pheromone trail on either route,
the ants will take each route between the nest and the food source with equal probability.
Yet, as the route Nest-B-Food is shorter than the route Nest-A-Food, the first ant following
this path will reach the food source before an ant traversing the longer path, and will return
to the nest following its pheromone trail. Therefore, the pheromone concentration on the
shorter path Food-B-Nest becomes higher than that on the longer path Food-A-Nest, and a
second ant returning to the nest will also have a greater probability of selecting the shorter
path, resulting in further reinforcement of the pheromone trail. Furthermore, pheromone
evaporates over time and, since fewer ants will select the longer path as more pheromone is
deposited on the shorter path, the longer path will eventually lose all of its pheromone. The
final result is that all of the ants quickly choose the shorter path.
The ACO algorithm was designed specifically for solving discrete combinatorial opti-
mization problems [56] and, in order to apply it as such, the problem under consideration
needs to be mapped onto a graph G = (D, L), where D = {d1 , d2 , . . . , dn } is a set of
points at which decisions have to be made and L = {lij } is the set of options j available at
each decision point i. An example of such a graph is shown in Figure 8 for an optimization
problem with four decision points D = {d1 , d2 , d3 , d4 }, each with four options, except for
d2, which has six.

Figure 7. Ants finding the shortest route around an obstacle between their nest and a food
source.

Figure 8. Graph representation of a combinatorial optimization problem used in ACO.


The number of decision points in D is equal to the number of decision variables asso-
ciated with the problem, while the number of options available at each point is equal to the
number of values that each of the decision variables can take. ACO algorithms can handle
both discrete and continuous variables; however, the range of values taken by continuous
variables must be discretized before ACO can be applied. The number of paths is gener-
ally chosen so as to achieve the desired resolution, and this number may be different for
each decision point. Once the problem has been set up as such, the ACO algorithm can be
applied as follows:

Step 1: The algorithm is initialized by assigning random levels of pheromone to each of
the paths, or options, L = {lij }.

Step 2: A population of artificial ants is then generated, where the path taken by an artificial
ant represents a solution string. Thus, a trial solution is generated once an ant has
chosen which option to take at each of the decision points. This is done stochastically
according to the following equation:

pij (k, t) = [τij (t)]^α [ηij ]^β / Σlij [τij (t)]^α [ηij ]^β        (2)

where pij (k, t) is the probability that option lij is chosen by ant k at time t; τij (t) is
the concentration of pheromone associated with option lij at time t; ηij is a heuristic
factor favoring options that have smaller “local” costs (this is analogous to providing
ants with some “visibility”); α is a parameter that controls the relative importance of
pheromone; and β is a parameter that controls the relative importance of visibility.
The heuristic visibility factor enables ants to favor shorter paths from one decision
point to the next, although this might not necessarily lead to the overall shortest route.

Step 3: The “cost” of the path taken by each ant is then evaluated, which is the inverse of
the fitness of the solution. To incorporate constraints on the feasible search space, the
fitness function given by (1) would be appropriate.

Step 4: Once the cost associated with the solution generated by each ant has been calcu-
lated, the pheromone trails are updated in a way that reinforces fitter solutions. This
is done according to the general formula:

τij (t + 1) = ρτij (t) + ∆τij (3)

where ρ is a pheromone persistence coefficient and ∆τij is the change in pheromone
concentration associated with option lij as a function of the trial solutions found at
time t. The pheromone persistence coefficient simulates pheromone evaporation and
therefore needs to be less than one. This parameter reduces the chances of high cost
solutions being selected in future and prevents premature convergence to sub-optimal
solutions, as it reduces the difference in pheromone concentration between options at
each decision point. The change in pheromone concentration ∆τij can be calculated
in a number of ways, but is generally proportional to the inverse of the cost of ants
selecting option lij . In other words, the better the trial solution, and hence the lower
the cost, the larger the amount of pheromone added. As a result, options that are
chosen by many ants and form part of lower cost solutions receive more pheromone
and are more likely to be selected in future iterations.

Step 5: Steps 2–4 are repeated for many generations until some stopping criterion has been
met. The process is illustrated in Figure 9.
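The five steps above can be condensed into a minimal sketch. This is an illustration only, not the original implementation: it assumes that the cost of a trial solution is simply the sum of the "local" costs of the chosen options (so the visibility factor ηij = 1/cost applies directly), that all local costs are positive, and all names and parameter values are illustrative.

```python
import random

def aco(option_costs, n_ants=20, n_iters=100, alpha=1.0, beta=1.0, rho=0.5, q=1.0):
    """Minimal ACO sketch for a discrete problem. option_costs[i][j] is the
    (positive) local cost of option j at decision point i; a solution's total
    cost is assumed to be the sum of its local costs."""
    # Step 1: assign random initial pheromone to every option
    tau = [[random.uniform(0.1, 1.0) for _ in opts] for opts in option_costs]
    best, best_cost = None, float("inf")
    for _ in range(n_iters):
        solutions = []
        for _ in range(n_ants):
            path = []
            for i, opts in enumerate(option_costs):
                # Step 2: pick option j with probability proportional to
                # tau^alpha * eta^beta, where eta = 1/local cost (visibility)
                weights = [tau[i][j] ** alpha * (1.0 / c) ** beta
                           for j, c in enumerate(opts)]
                r, cum, choice = random.random() * sum(weights), 0.0, 0
                for j, w in enumerate(weights):
                    cum += w
                    if r <= cum:
                        choice = j
                        break
                path.append(choice)
            # Step 3: evaluate the cost of the path taken by this ant
            cost = sum(option_costs[i][j] for i, j in enumerate(path))
            solutions.append((path, cost))
            if cost < best_cost:
                best, best_cost = path, cost
        # Step 4: evaporation, then reinforcement proportional to 1/cost
        tau = [[rho * t for t in row] for row in tau]
        for path, cost in solutions:
            for i, j in enumerate(path):
                tau[i][j] += q / cost
    return best, best_cost
```

For a toy problem with three decision points, e.g. `aco([[4, 1, 3], [2, 5, 1], [3, 2, 4]])`, the colony converges on the cheapest option at each point.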

3.3. Particle Swarm Optimization


The PSO algorithm, developed by Kennedy and Eberhart [57], was inspired by the behav-
ior of a flock of birds as they search for a target of unknown location (e.g. food source,
predator-safe location, migration destination), as illustrated in Figure 10. Each bird in the
flock is called a ‘particle’ and its position in solution space represents a trial solution. The
key concept of PSO is that particles are flown through the search space and are accelerated
towards better or more optimum positions, or solutions. The velocity of each particle to-
wards the optimum particle (i.e. that with the current best position) depends on its current
velocity and position and its own previous best position. This enables particles to learn from
the experience of others in the swarm (global search), as well as from their own experience
(local search).

Figure 9. Schematic of ACO outlining the main steps performed.

Step 1: The algorithm is initialized by randomly generating a population of particles,
which are represented by their position in an n-dimensional space, where n is the
number of decision variables. A randomized velocity is also assigned to each particle
so that it is flown through hyperspace.
Step 2: The fitness of each particle is then evaluated (using some function similar to (1))
and the position of the particle with the highest fitness is stored as the best position
of all particles, Pg . Throughout the simulation, each particle i also keeps track of its
current position Xi = {x1 , . . . , xn }, its current velocity Vi = {v1 , . . . , vn } and its
own previous best position Pi = {p1 , . . . , pn }.
Step 3: The velocities of the particles are updated such that they accelerate towards the best
position of all the particles and towards their own previous best position, as follows:
Vi+1 = ωVi + c1 r1 (Pi − Xi ) + c2 r2 (Pg − Xi )
Vmax ≥ Vi+1 ≥ −Vmax (4)
where ω is an inertia weight used to control the impact of previous velocities on the
new velocity; c1 , c2 are two positive learning factors (usually c1 = c2 = 2); r1 , r2
are random numbers generated from U (0, 1); and Vmax is the maximum allowable
particle velocity. In this equation, the second term represents cognition, while the
third term represents social collaboration [9].
Step 4: Using the new velocity Vi+1 , the position of each particle is updated according to:
Xi+1 = Xi + Vi+1 (5)

Figure 10. Flocking behavior of birds as they search for a target of unknown location.

Step 5: Throughout the simulation the inertia weight ω is linearly decreased from an initial
value of around 1.4 to a final value of approximately 0.5 [58]. This has the effect of
moderating the initial global search, which is conducted with large values of ω, such
that a more local search is carried out towards the end of the simulation, as favored
by small values of ω.

Step 6: Steps 2–5 are repeated for many generations until some stopping criterion has been
met. The process is illustrated in Figure 11.
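Steps 1–6 can be sketched compactly as follows. This is an illustrative sketch assuming minimization of a function over a continuous box; the names and default parameter values (taken from the ranges suggested in the text) are not part of the original algorithm description.

```python
import random

def pso(f, bounds, n_particles=30, n_iters=200,
        c1=2.0, c2=2.0, w_start=1.4, w_end=0.5, v_max=1.0):
    """Minimal PSO sketch minimizing f over a box; bounds is a list of
    (low, high) pairs, one per decision variable."""
    n = len(bounds)
    # Step 1: random initial positions and velocities
    X = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[random.uniform(-v_max, v_max) for _ in range(n)] for _ in range(n_particles)]
    P = [x[:] for x in X]                      # personal best positions
    p_cost = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: p_cost[i])
    Pg, g_cost = P[g][:], p_cost[g]            # global best position and cost
    for t in range(n_iters):
        # Step 5: linearly decrease the inertia weight from w_start to w_end
        w = w_start - (w_start - w_end) * t / (n_iters - 1)
        for i in range(n_particles):
            for d in range(n):
                r1, r2 = random.random(), random.random()
                # Step 3: velocity update (cognition + social terms, Eq. (4))
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (Pg[d] - X[i][d]))
                V[i][d] = max(-v_max, min(v_max, V[i][d]))  # clamp to Vmax
                # Step 4: position update (Eq. (5))
                X[i][d] += V[i][d]
            # Step 2: evaluate fitness, track personal and global bests
            cost = f(X[i])
            if cost < p_cost[i]:
                P[i], p_cost[i] = X[i][:], cost
                if cost < g_cost:
                    Pg, g_cost = X[i][:], cost
    return Pg, g_cost
```

For example, `pso(lambda x: sum(v * v for v in x), [(-5, 5)] * 2)` drives the swarm toward the origin, the minimum of the sphere function.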

3.4. Advantages and Disadvantages


The advantages and disadvantages of SI methods for optimization are much the same as
those of GAs. However, both ACO and PSO algorithms have some advantages over GAs in
certain situations. In GAs, the memory of the system is embedded in the actual trial solu-
tions, whereas, in ACO, system memory is contained in the environment and improved trial
solutions are obtained by modifying this environment. Due to this difference, ACO algo-
rithms may be more useful than GAs in an operational setting, where the system is dynami-
cally changing [5]. As different options are continuously explored, the resulting pheromone
trails are maintained to some extent throughout the simulation; thus, a pool of alternative
portions of solutions is also maintained. Therefore, once a disruption to the system occurs,
weak links can be reinforced quickly and used to replace missing or damaged links [55, 59].
ACO algorithms may also have an advantage in situations where sequential decisions have
to be made in the construction of trial solutions, and where the selection of some component
solutions restricts subsequent choices. In such cases, the graph G = (D, L) may take the
form of a decision tree, and IF . . . THEN operators may be incorporated into the algorithm
to restrict the available choices at each decision point [5]. Thus, unlike in GAs, infeasible
solutions do not have to be evaluated.
The main advantages of the PSO algorithm over a GA are its relative simplicity, and
computationally inexpensive coding. The PSO algorithm has fewer parameters to adjust
than the GA and its implementation is much simpler, making this algorithm more appealing
in some cases [60, 61]. Furthermore, while information is shared between all chromosomes

Figure 11. Schematic of PSO outlining the main steps performed.

in the mating pool using a GA, in PSO, information is only given out from the best particle
to other members of the population. Therefore, the evolution only searches for the best
solutions and, as a result, often locates near-optimal solutions significantly faster than EC
optimization techniques. PSO also uses a highly directional variation operator. The velocity of each particle
is modified in a direction that lies between its personal best and the current global best. As
a result, the performance of PSO may be expected to be better than that of a GA when the
average local gradients point toward the global optimum; however, it will not perform as
well when the local average gradient is constantly changing [62].

3.5. Applications
In recognizing the strengths of the ACO algorithm when applied to discrete combinatorial
optimization problems, ACO has been used to successfully optimize multi-purpose reser-
voir operation [63]; water distribution system design [5, 64]; and hydropower plant mainte-
nance scheduling [65]. It has also been applied for the calibration of a simple rainfall-runoff
model [66], which was a continuous optimization problem. However, it was found that a
GA was more effective and efficient than ACO when applied to this problem. In the field of
water resources modeling and management, the PSO algorithm has been used to train ANNs
applied to rainfall-runoff modeling and river stage forecasting [61, 67, 68]; to optimize se-
lective withdrawal from thermally stratified reservoirs [69]; and to optimize the selection,
sizing, and placement of hydraulic devices for transient protection in a pipe network [70].

While variations of both algorithms have been developed, which allow their application
to both continuous and discrete optimization problems, ACO was designed specifically for
discrete combinatorial optimization, whereas PSO was designed for continuous function
optimization.

4. Evolutionary Multi-Objective Optimization


4.1. Introduction
The optimization methods presented thus far have been discussed in relation to problems for
which there is a single objective, e.g. cost, demand, risk, etc. However, for most real-world
hydrology-related management problems, multiple objectives must be met simultaneously.
For example, management of a reservoir may be required to satisfy objectives relating to
water supply reliability; hydroelectric power generation; environmental conditions in both
the reservoir and downstream catchment; and flood control. One way to handle such prob-
lems is to simplify them to single-criterion problems through the use of penalty functions or
weighted aggregate objective functions and apply traditional optimization methods. How-
ever, while this is by far the most common approach for dealing with multiple objectives,
such simplification may result in an optimization problem unlike that which actually needs
to be solved [71]. Alternatively, multiobjective optimization (also called multicriteria op-
timization or vector optimization) techniques may be applied, which involve defining and
optimizing a problem in terms of several, possibly conflicting, criteria [3]. These tech-
niques are capable of tackling hydrological modeling and management problems in their
true multiobjective form without requiring potentially misleading simplifications.
Multiobjective optimization has been defined by Coello Coello [72] as the problem of
finding

“...a vector of decision variables which satisfies constraints and optimizes a
vector function whose elements represent the objective functions. These functions
form a mathematical description of performance criteria which are usually
in conflict with each other. Hence, the term “optimize” means finding a
solution which would give the values of all the objective functions acceptable
to the designer.”

Mathematically, such a problem may be formulated as follows:

optimize y = f (x) = (f1 (x) , . . . , fn (x))
where x = (x1 , . . . , xm ) ∈ the decision space X
      y = (y1 , . . . , yn ) ∈ the objective space Y        (6)

and m and n are the numbers of decision variables and objectives, respectively. Unlike
single-objective problems, the aim is no longer to find a single optimal solution in terms
of x; rather, a set of optimal trade-offs, known as the Pareto-optimal set, is sought [73].
This set contains decision vectors for which the corresponding objective vectors cannot be
all simultaneously improved [74]; in other words, a Pareto-optimal decision vector, also
known as a nondominated vector, cannot be improved in any objective without causing
degradation in at least one other objective [73]. Such decision vectors must, therefore, be
considered equivalent in the absence of any higher level information and, although they
may be suboptimal in any single objective, they should offer “acceptable” performance in
all objectives [74].
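For a minimization problem, the dominance relation just described can be stated compactly in code. The following is an illustrative sketch, not part of the original chapter; the function names are invented for the example.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization): a is no
    worse than b in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(vectors):
    """Return the nondominated subset of a list of objective vectors."""
    return [v for v in vectors
            if not any(dominates(u, v) for u in vectors if u is not v)]
```

For instance, `pareto_front([(1, 5), (2, 2), (4, 1), (3, 3)])` discards (3, 3), which is dominated by (2, 2), and keeps the three trade-off solutions.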

Figure 12. Example Pareto front for a problem with two objectives.

The set of decision vectors corresponding to the set of Pareto-optimal solutions is gen-
erally referred to as the “Pareto-optimal front”. These vectors lie on the boundary of the
feasible design region, F , and display the trade-off between multiple objectives. This is
shown in Figure 12 for a two-objective minimization problem, where the solid line denotes
the Pareto front. If, for example, objective f1 (x) is risk of system failure and objective
f2 (x) is cost, it can be seen that solutions on the Pareto-optimal front are better than any
other solution found in F . However, generally, only one solution may be implemented;
therefore, higher level information is required for the decision maker to choose between
alternative solutions on the Pareto front. As such, the final solution depends on both op-
timization and decision processes. There are three variants of the decision process, which
depend on whether the preferences of the decision maker are defined before, during or after
the optimization process. More formally, these decision processes are known as [75]:
1. a priori preference articulation - where the decision maker expresses his/her prefer-
ences by combining the differing objectives into a single aggregated objective func-
tion prior to optimization. For example, this may involve specifying the weighting
coefficients of a weighted sum objective function, which may then be optimized using
a single objective optimization method.

2. progressive preference articulation - where the optimization and decision processes
are carried out concurrently. At each step, partial preference information is provided
to the optimizer, which then generates better solutions according to the information
received.

3. a posteriori preference articulation - where the decision maker is presented with a set
of Pareto-optimal solutions and then chooses a compromise solution from this set.
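As a minimal illustration of the a priori approach in point 1, a weighted-sum aggregation can be written as follows (an illustrative sketch; the weights are assumed to be supplied by the decision maker):

```python
def weighted_sum(objective_values, weights):
    """A priori preference articulation: collapse a vector of objective
    values into a single scalar using decision-maker-supplied weights."""
    return sum(w * f for w, f in zip(weights, objective_values))
```

For example, with objective values (cost, risk) = (120.0, 0.3) and weights (0.01, 5.0), the aggregate objective is 1.2 + 1.5 = 2.7, which can then be minimized with any single-objective method.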

Evolutionary optimization algorithms, such as those discussed in this review, are partic-
ularly suited to multiobjective optimization as they simultaneously deal with a population
of possible solutions, which enables them to find an entire set of Pareto-optimal solutions in
a single run, rather than having to perform a series of separate runs, as would be the case for
traditional point-based optimization methods [72]. Furthermore, they are less susceptible
to the shape or continuity of the Pareto front [76]. There are two goals that a multiobjective
optimization algorithm must achieve, which are:
1. to guide the search towards the global Pareto-optimal region; and
2. to maintain population diversity in the Pareto-optimal front [77].
While the first goal is common to all optimization algorithms, fitness assignment and/or the
selection operator used to achieve this goal differ for multiobjective approaches. Before
evolutionary algorithms can be applied, the multiple objectives must first be converted to
a scalar fitness value. It may seem natural to combine all objectives into a single aggre-
gate objective; however, as mentioned, this can be problematic and requires some accurate
information on the range of each objective to prevent one from dominating others [72].
Alternative approaches for handling multiple objectives include Pareto ranking methods,
where candidate solutions are ranked according to their dominance over other solutions,
and non-Pareto based methods, where the different objectives are considered separately.
The second goal noted above is unique to multiobjective approaches and ensures that the
algorithm does not converge to a single point on the front.
Since the mid 1980s, numerous evolutionary algorithms have been developed for mul-
tiobjective optimization and evolutionary multiobjective optimization (EMOO) became a
field in itself. The first multiobjective evolutionary algorithm (MOEA) to be developed
was the Vector Evaluated Genetic Algorithm (VEGA) [78], which was a simple GA with
a modified non-Pareto based selection operator. However, this algorithm had a number of
problems: its performance was fairly poor and it had no explicit mechanism to maintain
diversity, which led to the development of further MOEAs. In his book on GAs, Goldberg
[10] first suggested the notion of Pareto ranking and proposed the following algorithm to
assign a rank to a solution x at generation t, where f (x) is the corresponding objective
function and N is the size of the population:

curr_rank = 1
WHILE N > 0
    DO FOR i = 1 TO N
        IF f (xi ) is nondominated THEN
            rank(xi , t) = curr_rank
    END
    DO FOR i = 1 TO N
        IF rank(xi , t) = curr_rank THEN
            remove xi from population
            N = N − 1
    END
    curr_rank = curr_rank + 1
END WHILE
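This ranking procedure might be transcribed into Python as follows (an illustrative sketch assuming minimization; the helper names are not from the chapter):

```python
def dominates(a, b):
    # a dominates b (minimization): no worse everywhere, better somewhere
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_rank(objectives):
    """Assign Goldberg-style ranks: 1 for the first nondominated front,
    2 for the front remaining after its removal, and so on."""
    remaining = list(range(len(objectives)))
    rank = [0] * len(objectives)
    curr_rank = 1
    while remaining:
        # the current front: solutions not dominated by any remaining solution
        front = [i for i in remaining
                 if not any(dominates(objectives[j], objectives[i])
                            for j in remaining if j != i)]
        for i in front:
            rank[i] = curr_rank
        remaining = [i for i in remaining if i not in front]
        curr_rank += 1
    return rank
```

For example, `pareto_rank([(1, 5), (2, 2), (4, 1), (3, 3), (5, 5)])` places the three trade-off solutions in the first front, (3, 3) in the second and (5, 5) in the third.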

Essentially, this algorithm assigns a rank of 1 to all nondominated vectors in the first front, a
rank of 2 to all nondominated vectors in the second front (once the first Pareto front vectors
are removed) and so on. Goldberg [10] also suggested that a niching technique, such as
fitness sharing, was needed to prevent the GA from converging to a single point on the
Pareto front. The fitness sharing mechanism is given as follows:
(  
dij
1 − σshare if dij < σshare
φ(dij ) = (7)
0 otherwise

where dij is the distance between solutions i and j and σshare is the niche radius, or sharing
threshold. The parameter φ(dij ) is then used to modify the fitness of solution i as follows:

fsi = fi / Σj=1..M φ(dij )        (8)

where M is the number of solutions located within the neighborhood of the ith individual.
In the case where an individual is alone in its niche, its fitness value remains intact [76].
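Equations (7) and (8) can be combined into a short sketch (illustrative only; Euclidean distance between solution vectors is assumed, and the names are invented for the example):

```python
import math

def shared_fitness(positions, fitness, sigma_share):
    """Fitness sharing per Eqs. (7)-(8): each solution's raw fitness is
    divided by its niche count, penalizing solutions in crowded regions."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    shared = []
    for i, fi in enumerate(fitness):
        # niche count: sum of the sharing function over the whole population;
        # phi(d) = 1 - d/sigma_share for d < sigma_share, and 0 otherwise
        m = sum(max(0.0, 1.0 - dist(positions[i], positions[j]) / sigma_share)
                for j in range(len(positions)))
        shared.append(fi / m)
    return shared
```

Note that φ(dii ) = 1, so a solution alone in its niche has a niche count of 1 and its fitness remains intact, as stated above, while two close neighbors each have their fitness discounted.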
MOEAs developed after this publication were influenced by Goldberg’s ideas, including
the Multi-Objective Genetic Algorithm (MOGA) [79], the Nondominated Sorting Genetic
Algorithm (NSGA) [80], and the Niched-Pareto Genetic Algorithm (NPGA) [81]. Of these
algorithms, MOGA showed the greatest performance [76]. With the development of the
Strength Pareto Evolutionary Algorithm (SPEA) [82], the use of elitism in MOEAs became
common practice. In this algorithm, niches were not defined by distance, but rather on
Pareto dominance. The NSGA-II [83, 84], a modified version of NSGA, was then developed
and used a so-called ‘crowding distance’ to maintain population diversity in the Pareto front.
Due to the efficiency and relatively good performance of this algorithm, it has become a
popular approach in recent years [76]. Around the same time, the second generation of
SPEA, SPEA2, was developed [85] with comparable performance to NSGA-II [86]. While
the original MOEAs were primarily based on GAs, new algorithms are continually being
developed, including MOEAs based on the SCE-UA algorithm [87], ACO [88, 89] and PSO
[90–92].

4.2. Advantages
In addition to having the superior optimization capabilities of evolutionary (including SI)
algorithms, the primary advantage of EMOO when applied to solve hydrological modeling
and management problems is that a number of important, yet conflicting, objectives can
be taken into account and traded-off against one another in their ‘real’ form. Using tradi-
tional multiobjective optimization methods, the multiobjective problem is transformed into
a single objective problem which can be solved using nonlinear optimization. This transfor-
mation is commonly done using weighted sum methods, where different components of a
fitness function are simply combined into a single scalar objective, usually via a weighted,
linear sum. However, such a transformation is quite a radical simplification of the prob-
lem, and may be such that the resulting optimization problem is not the problem that really
needs to be solved [71]. Furthermore, not all Pareto-optimal solutions can be found using a
single weighted aggregate unless all objective functions and the feasible region are convex
[69]. Another disadvantage of traditional methods is that the weights used to transform the
problem represent the relative importance of the competing objectives. However, it is often
very difficult to determine the relative importance that different stakeholders place on each
of the objectives, which is further complicated by the dynamic nature of people’s perspec-
tives and attitudes. A great advantage of the Pareto front is that it is independent of the
relative importance of the objectives and, in fact, the Pareto-optimal set includes optimum
solutions for all possible combinations of weighting factors [93]. Thus, determination of
Pareto-optimal solutions is far more valuable to a problem solver or designer than a single
solution to a simplified and different problem [71], and an evolutionary approach is best
suited to obtaining such solutions.

4.3. Disadvantages
The main disadvantages of EMOO techniques are as follows:
• Like single-objective evolutionary algorithms, MOEAs are subject to the problems
caused by multimodality, deception, isolated optimum and collateral noise [20]. As
such, these algorithms may converge to a local Pareto-optimal front, rather than the
global front. Additionally, certain characteristics of the Pareto-optimal front, in-
cluding convexity or nonconvexity, discreteness, and nonuniformity, may prevent a
MOEA from finding diverse Pareto-optimal solutions [20].
• One of the major difficulties associated with EMOO involves evaluating the quality
of a solution. Unless there is some knowledge of the true Pareto front, which is not
possible for most real-world problems, visual inspection may be the only practical
technique available for assessing solutions [72].
• Most of the MOEAs available have been developed and tested on problems with a
small number of objectives (usually 2-3). However, many real-world problems can
have several more objectives, yet the scalability of existing algorithms to problems
with many objectives is uncertain [71].
• In some cases it may be necessary to assign greater importance to certain objectives;
however, unlike weighted sum approaches, MOEAs generally do not provide a means
for automatically doing this and, therefore, it must be done by the decision maker
once solutions have been identified as being Pareto-optimal [72].
• It is difficult to define appropriate stopping criteria for MOEAs as it is not obvious
when the population has reached the point where no further improvement can be
achieved [72].

4.4. Applications
EMOO has been applied to problems in water resources modeling and management since
the mid to late 1990s [94–96]; however, it has only been in the last few years that the
popularity of this approach has begun to increase in this field. This is due to the fact that the
field of EMOO itself is still in its infancy, and it is only now that the efficacy of new
MOEAs is attracting more researchers and practitioners to the area [71].

From the publications presented in the literature thus far, it is clear that EMOO offers
a promising approach to aid design and decision making problems related to hydrologi-
cal systems. Examples of successful applications include the use of EMOO techniques
for water distribution system design and rehabilitation [95, 97–99]; design of groundwater
monitoring systems [100–104]; design of groundwater remediation systems [94, 105]; cali-
bration of hydrological models [86, 96, 104, 106]; optimization of agricultural land use and
management practices within a watershed [40]; optimization of single and multi-reservoir
system operation [107, 108]; and determining the optimal waste load allocation in rivers
[109]. Various MOEAs were used in these studies; however, in the most recent publications,
there has been a shift towards NSGA-II and the Epsilon-Nondominated Sorted Genetic
Algorithm II (ε-NSGA-II) developed by Kollat and Reed [102]. Of the 11 reviewed EMOO
papers published this year in various water and environmental modeling and management
journals, 9 applied some version of the NSGA-II.

5. Conclusions
The AI-based optimization methods presented in this review share a number of common
features that make them suitable for solving water resources modeling and management
problems, which tend to be characterized by multimodality, constraints, large dimensional-
ity, nonlinearity, noise and time varying objective functions. The most important of these
features is the fact that AI optimization methods are population-based and information con-
tained within the population is shared in order to ‘evolve’ from one population to the next.
This allows them to explore the search space without becoming trapped in local optima
and without requiring information about the objective function that may be impossible or
difficult to obtain (e.g. gradient information).
The versatility of AI-based optimization methods enables them to be applied to a range
of different problems; however, in a similar way to the simulation modeling techniques re-
viewed in the first part of this series [110], there is no single optimization method that is
most suitable for all problems. Rather, to obtain high quality solutions in a short enough
period of time, it is important to select the optimization algorithm best suited to the problem
at hand. This requires knowledge of both the problem characteristics and those of the avail-
able optimization tools. A GA is a good ‘first port of call’ choice of optimization method,
as there are now many available versions of this algorithm in terms of its variation and
selection operators and the way in which individual solutions are represented, such that it
can be applied to any type of problem. However, it may not be the most efficient method,
the easiest method to use, nor the best able to achieve the objectives of the search. For
example, ACO may be more efficient (faster to converge with less computational cost) for
discrete combinatorial problems, such as optimal scheduling and planning, since only fea-
sible solutions are considered in the optimization, unlike in GAs, which generate infeasible
solutions and rely on evaluation to determine their feasibility. This method would also be
easier to use, as it was designed for discrete combinatorial problems and, therefore, would
not require the selection of an appropriate representation (i.e. structure of individual so-
lutions, selection and variation operators) to suit. For continuous optimization problems,
PSO may be more suitable if the modeler or decision maker is concerned with simplicity in
implementation and has little knowledge of how the optimization algorithm should behave,
as this method does not require a choice of representation, has fewer parameters that need
to be tuned to the problem at hand, and is less computationally expensive (only one oper-
ator, requiring a few lines of implementation code, is used to evolve solutions). The PSO
algorithm may also be more suitable in a real-time application, as this method only shares
information about the current best solution and, therefore, may find good solutions in a
shorter period of time. The SCE-UA algorithm is likely to be the best choice if finely tuned
solutions are required for continuous decision variables, such as model calibration prob-
lems, since the combination of a local search with global optimization enables this method
to better converge to an exact optimum. If the problem has many competing objectives,
which cannot be effectively combined into a single objective function, an EMOO approach
would be most appropriate, as this technique allows the problem objectives to be properly
represented and traded-off against one another in their real form.
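To make the point about simplicity concrete, the entire inertia-weight form of PSO [58] can be written in a few dozen lines. The sketch below (in Python) is an illustrative implementation only: the function name, parameter values and test objective are our own choices for demonstration, not taken from any of the studies cited.

```python
import random

def pso(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0):
    """Minimal inertia-weight particle swarm optimizer (minimization)."""
    # Random initial positions within [lo, hi] and zero initial velocities.
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in x]               # each particle's best-known position
    pcost = [f(p) for p in pbest]
    g = min(range(n_particles), key=pcost.__getitem__)
    gbest, gcost = pbest[g][:], pcost[g]    # best position found by the swarm
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # The single variation operator: inertia + cognitive + social terms.
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            cost = f(x[i])
            if cost < pcost[i]:             # update the personal best
                pbest[i], pcost[i] = x[i][:], cost
                if cost < gcost:            # and, if warranted, the swarm best
                    gbest, gcost = x[i][:], cost
    return gbest, gcost

# Minimize a simple 3-dimensional sphere function as a stand-in objective.
best, cost = pso(lambda p: sum(xi * xi for xi in p), dim=3)
```

Here w controls each particle's inertia, while c1 and c2 weight the pull towards the particle's own best position and the swarm's best position, respectively; this single velocity-and-position update is the only operator used to evolve the population.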
It is clear that there are many important considerations when selecting an optimization
technique. Not only does the most suitable technique need to be selected, but also the most
suitable parameters, representation and objective function. If future research efforts are
directed towards hybridizing or generalizing AI-based optimization techniques, and towards
automatically tuning or adapting these methods to the problem while it is being solved,
the importance of these considerations will be significantly reduced.

References
[1] D. P. Solomatine. Genetic and other global optimization algorithms - comparison
and use in calibration problems. In V. Babovic and L. C. Larsen, editors, Proceed-
ings of the 3rd International Conference on Hydroinformatics, pages 1021–1028,
Copenhagen, Denmark, 1998. Balkema Publishers.

[2] T. Bäck, U. Hammel, and H.-P. Schwefel. Evolutionary computation: comments on
the history and current state. IEEE Transactions on Evolutionary Computation, 1(1):
3–17, 1997.

[3] Z. Michalewicz and D. B. Fogel. How to Solve It: Modern Heuristics. Springer,
Berlin; New York, 2nd edition, 2004.

[4] A. R. Simpson, G. C. Dandy, and L. J. Murphy. Genetic algorithms compared to
other techniques for pipe optimization. Journal of Water Resources Planning and
Management, 120(4):423–443, 1994.

[5] H. R. Maier, A. R. Simpson, A. C. Zecchin, W. K. Foong, K. Y. Phang, H. Y. Seah,
and C. L. Tan. Ant colony optimization for design of water distribution systems.
Journal of Water Resources Planning and Management, 129(3):200–209, 2003.

[6] M. Negnevitsky. Artificial Intelligence: A Guide to Intelligent Systems. Pearson
Education Limited, Harlow, England, 2002.

[7] C. A. Coello Coello. A survey of constraint handling techniques used with evo-
lutionary algorithms. Technical Report Lania-RI-99-04, Laboratorio Nacional de
Informática Avanzada, Xalapa, Veracruz, Mexico, 1999.

[8] M. Caudill. Evolutionary neural networks. AI Expert, March:28–33, 1991.

[9] E. Elbeltagi, T. Hegazy, and D. Grierson. Comparison among five evolutionary-based
optimization algorithms. Advanced Engineering Informatics, 19(1):43–53, 2005.
[10] D. E. Goldberg. Genetic Algorithms in Search, Optimization and Machine Learning.
Addison-Wesley Pub. Co., Reading, Mass., 1989.
[11] Q. Duan, S. Sorooshian, and V. K. Gupta. Effective and efficient global optimization
for conceptual rainfall-runoff models. Water Resources Research, 28(4):1015–1031,
1992.
[12] J. A. Nelder and R. Mead. A simplex method for function minimization. Computer
Journal, 7(4):308–313, 1965.
[13] X. Yao. Evolving artificial neural networks. Proceedings of the IEEE, 87(9):1423–
1447, 1999.
[14] V. Babovic, M. Keijzer, and M. Stefansson. Chaos theory, optimal embedding and
evolutionary algorithms. Technical report, DHI Water and Environment, Hørsholm,
Denmark, 2001.
[15] G. R. Raidl. Evolutionary computation: an overview and recent trends. ÖGAI Jour-
nal, 24:2–7, 2005.
[16] M. S. Gibbs, H. R. Maier, G. C. Dandy, and J. B. Nixon. The relationship between
problem characteristics and the optimal number of genetic algorithm generations.
IEEE Transactions on Evolutionary Computation, Submitted, 2006.
[17] K. Downing. Using evolutionary computational techniques in environmental mod-
elling. Environmental Modelling and Software, 13(5-6):519–528, 1998.
[18] C. L. Chang, S. L. Lo, and S. L. Yu. Applying fuzzy theory and genetic algorithm to
interpolate precipitation. Journal of Hydrology, 314(1-4):92–104, 2005.
[19] K. Deb, J. Horn, and D. E. Goldberg. Multimodal deceptive functions. Complex
Systems, 7(2):131–153, 1993.
[20] K. Deb. Multi-objective genetic algorithms: problem difficulties and construction of
test problems. Evolutionary Computation, 7(3):205–230, 1999.
[21] W. Siedlecki and J. Sklansky. Constrained genetic optimization via dynamic reward-
penalty balancing and its use in pattern recognition. In J. D. Schaffer, editor, Proceed-
ings of the 3rd International Conference on Genetic Algorithms (ICGA-89), pages
141–150, George Mason University, United States, 1989. Morgan Kaufmann Pub-
lishers Inc.
[22] A. E. Smith and D. W. Coit. Constraint handling techniques – penalty functions.
In T. Bäck, D. B. Fogel, and Z. Michalewicz, editors, Handbook of Evolutionary
Computation, page Chapter C 5.2. Oxford University Press and Institute of Physics
Publishing, Bristol, UK, 1997.

[23] Z. Michalewicz and M. Schoenauer. Evolutionary algorithms for constrained
parameter optimization problems. Evolutionary Computation, 4(1):1–32, 1996.

[24] D. Ashlock. Evolutionary Computation for Modeling and Optimization, volume 200
of Interdisciplinary Applied Mathematics. Springer, New York, 2006.

[25] S. E. Cieniawski, J. W. Eheart, and S. Ranjithan. Using genetic algorithms to solve a
multiobjective groundwater monitoring problem. Water Resources Research, 31(2):
399–409, 1995.

[26] Y. Icaga. Genetic algorithm usage in water quality monitoring networks optimization
in Gediz (Turkey) River Basin. Environmental Monitoring and Assessment, 108(1-
3):261–277, 2005.

[27] S.-Y. Park, J. H. Choi, S. Wang, and S. S. Park. Design of a water quality monitoring
network in a large river system using the genetic algorithm. Ecological Modelling,
199(3):289–297, 2006.

[28] S.-F. Kuo and C.-W. Liu. Simulation and optimization model for irrigation planning
and management. Hydrological Processes, 17(15):3141–3159, 2003.

[29] R. Oliveira and D. P. Loucks. Operating rules for multireservoir systems. Water
Resources Research, 33(4):839–852, 1997.

[30] F.-J. Chang and Y.-C. Chen. Real-coded genetic algorithm for rule-based flood con-
trol reservoir management. Water Resources Management, 12(3):185–198, 1998.

[31] R. Wardlaw and M. Sharif. Evaluation of genetic algorithms for optimal reservoir
system operation. Journal of Water Resources Planning and Management, 125(1):
25–33, 1999.

[32] P. Chaves, T. Kojiri, and Y. Yamashiki. Optimization of storage reservoir considering
water quantity and quality. Hydrological Processes, 17(14):2769–2793, 2003.

[33] C. Jian-Xia, H. Qiang, and W. Yi-Min. Genetic algorithms for optimal reservoir
dispatching. Water Resources Management, 19(4):321–331, 2005.

[34] D. A. Savic and G. A. Walters. Genetic algorithms for least-cost design of water
distribution networks. Journal of Water Resources Planning and Management, 123
(2):67–77, 1997.

[35] M. S. Gibbs, H. R. Maier, and G. C. Dandy. Applying fitness landscape measures
to water distribution optimization problems. In S.-Y. Liong, K.-K. Phoon, and
V. Babovic, editors, Proceedings of the 6th International Conference on
Hydroinformatics, volume 1, pages 795–802, Singapore, 2004. World Scientific
Publishing Company.

[36] D. R. Broad, G. C. Dandy, and H. R. Maier. Water distribution system optimization
using metamodels. Journal of Water Resources Planning and Management, 131(3):
172–180, 2005.

[37] J. A. Vasquez, H. R. Maier, B. J. Lence, B. A. Tolson, and R. O. Foschi. Achieving
water quality system reliability using genetic algorithms. Journal of Environmental
Engineering, 126(10):954–962, 2000.

[38] T. Merabtene, A. Kawamura, K. Jinno, and J. Olsson. Risk assessment for optimal
drought management of an integrated water resources system using a genetic algo-
rithm. Hydrological Processes, 16(11):2189–2208, 2002.

[39] A. Mantoglou, M. Papantoniou, and P. Giannoulopoulos. Management of coastal
aquifers based on nonlinear optimization and evolutionary algorithms. Journal of
Hydrology, 297(1-4):209–228, 2004.

[40] M. K. Muleta and J. W. Nicklow. Decision support for watershed management using
evolutionary algorithms. Journal of Water Resources Planning and Management,
131(1):35–44, 2005.

[41] G. J. Bowden, H. R. Maier, and G. C. Dandy. Input determination for neural net-
work models in water resources applications. Part 1. Background and methodology.
Journal of Hydrology, 301(1-4):75–92, 2005.

[42] G. J. Bowden, H. R. Maier, and G. C. Dandy. Input determination for neural network
models in water resources applications. Part 2. Case study: forecasting salinity in a
river. Journal of Hydrology, 301(1-4):93–107, 2005.

[43] G. J. Bowden, H. R. Maier, and G. C. Dandy. Optimal division of data for neural
network models in water resources applications. Water Resources Research, 38(2):
1010, 2002.

[44] M. Franchini and G. Galeati. Comparing several genetic algorithm schemes for the
calibration of conceptual rainfall-runoff models. Hydrological Sciences Journal, 42
(3):357–378, 1997.

[45] M. Franchini, G. Galeati, and S. Berra. Global optimization techniques for the cali-
bration of conceptual rainfall-runoff models. Hydrological Sciences Journal, 43(3):
443–458, 1998.

[46] D. P. Solomatine, Y. B. Dibike, and N. Kukuric. Automatic calibration of groundwater
models using global optimization techniques. Hydrological Sciences Journal, 44(6):
879–894, 1999.

[47] G. B. Kingston, H. R. Maier, and M. F. Lambert. Calibration and validation of
neural networks to ensure physically plausible hydrological modeling. Journal of
Hydrology, 314(1-4):158–176, 2005.

[48] R. J. Abrahart, L. See, and P. E. Kneale. Using pruning algorithms and genetic algo-
rithms to optimise network architectures and forecasting inputs in a neural network
rainfall-runoff model. Journal of Hydroinformatics, 1(2):103–114, 1999.

[49] G. Kuczera. Efficient subspace probabilistic parameter optimization for catchment
models. Water Resources Research, 33(1):177–185, 1997.

[50] M. Thyer, G. Kuczera, and B. C. Bates. Probabilistic optimization for conceptual
rainfall-runoff models: A comparison of the shuffled complex evolution and simulated
annealing algorithms. Water Resources Research, 35(3):767–773, 1999.

[51] L.-J. Cui and G. Kuczera. Optimizing urban water supply headworks using proba-
bilistic search methods. Journal of Water Resources Planning and Management, 129
(5):380–387, 2003.

[52] M. M. Eusuff and K. E. Lansey. Optimal operation of artificial groundwater recharge
systems considering water quality transformations. Water Resources Management,
18(4):379–405, 2004.

[53] C. Nunoo and D. Mrawira. Shuffled complex evolution algorithms in infrastructure
works programming. Journal of Computing in Civil Engineering, 18(3):257–266,
2004.

[54] M. M. Millonas. Swarms, phase transitions, and collective intelligence. In C. G.
Langton, editor, Artificial Life III, Santa Fe Institute Studies in the Sciences of
Complexity. Addison-Wesley, Reading, MA, 1994.

[55] E. Bonabeau and G. Théraulaz. Swarm smarts. Scientific American, 282(3):72–79,
2000.

[56] M. Dorigo, V. Maniezzo, and A. Colorni. Ant system: optimization by a colony of
cooperating agents. IEEE Transactions on Systems, Man and Cybernetics. Part B:
Cybernetics, 26(1):29–41, 1996.

[57] J. Kennedy and R. Eberhart. Particle swarm optimization. Proceedings of the IEEE
international conference on neural networks, 4:1942–1948, 1995.

[58] Y. Shi and R. Eberhart. A modified particle swarm optimizer. Proceedings of the
IEEE International Conference on Evolutionary Computation, pages 69–73, 1998.

[59] E. Bonabeau, M. Dorigo, and G. Théraulaz. Inspiration for optimization from social
insect behaviour. Nature, 406(6791):39–42, 2000.

[60] D. W. Boeringer and D. H. Werner. Particle swarm optimization versus genetic algo-
rithms for phased array synthesis. IEEE Transactions on Antennas and Propagation,
52(3):771–779, 2004.

[61] K. W. Chau. Particle swarm optimization training algorithm for ANNs in stage pre-
diction of Shing Mun River. Journal of Hydrology, 329(3-4):363–367, 2006.

[62] P. J. Angeline. Evolutionary optimization versus particle swarm optimization:
philosophy and performance differences. In V. W. Porto, N. Saravanan, D. Waagen,
and A. E. Eiben, editors, Proceedings of the 7th Annual Conference on Evolutionary
Programming, volume 1447/1998 of Lecture Notes in Computer Science, pages
601–610, San Diego, California, USA, 1998. Springer.

[63] D. Nagesh Kumar and M. Janga Reddy. Ant colony optimization for multi-purpose
reservoir operation. Water Resources Management, 20(6):879–898, 2006.

[64] A. C. Zecchin, A. R. Simpson, H. R. Maier, M. Leonard, A. J. Roberts, and M. J.
Berrisford. Application of two ant colony optimisation algorithms to water distribution
system optimisation. Mathematical and Computer Modelling, 44(5-6):451–468, 2006.

[65] W. K. Foong, H. R. Maier, and A. R. Simpson. Ant colony optimization for power
plant maintenance scheduling optimization. In H.-G. Beyer, editor, GECCO05, Pro-
ceedings of the Genetic and Evolutionary Computation Conference, volume 1, pages
249–256, Washington, DC, USA, 2005. ACM Press.

[66] R. E. Olarte and N. Obregón. Comparison between a simple GA and an ant sys-
tem for the calibration of a rainfall-runoff model. In S.-Y. Liong, K.-K. Phoon, and
V. Babovic, editors, Proceedings of the 6th International Conference on Hydroin-
formatics, volume 1, pages 842–849, Singapore, 2004. World Scientific Publishing
Company.

[67] K. Chau. Rainfall-runoff correlation with particle swarm optimization algorithm.
In F. Yin, J. Wang, and C. C. Guo, editors, Advances in Neural Networks - ISNN
2004. Proceedings of the International Symposium on Neural Networks, volume
3174/2004 of Lecture Notes in Computer Science, pages 970–975, Dalian, China,
2004. Springer.

[68] K. Chau. River stage forecasting with particle swarm optimization. In B. Orchard,
C. Yang, and M. Ali, editors, Innovations in Applied Artificial Intelligence. Pro-
ceedings of the 17th International Conference on Industrial and Engineering Appli-
cations of Artificial Intelligence and Expert Systems, volume 3029/2004 of Lecture
Notes in Artificial Intelligence, pages 1166–1173, Ottawa, Canada, 2004. Springer.

[69] A. M. Baltar and D. G. Fontane. A generalized multiobjective particle swarm
optimization solver for spreadsheet models: application to water quality. In Proceedings
of the AGU Hydrology Days Conference, pages 1–12, Colorado, USA, 2006.

[70] B. S. Jung and B. W. Karney. Hydraulic optimization of transient protection devices
using GA and PSO approaches. Journal of Water Resources Planning and Management,
132(1):44–52, 2006.

[71] D. W. Corne, K. Deb, P. J. Fleming, and J. D. Knowles. The good of the many
outweighs the good of the one: evolutionary multiobjective optimization. coNNectionS,
1(1):9–13, 2003.

[72] C. A. Coello Coello. A comprehensive survey of evolutionary-based multiobjective
optimization techniques. Knowledge and Information Systems, 1(3):269–308, 1999.

[73] E. Zitzler, K. Deb, and L. Thiele. Comparison of multiobjective evolutionary
algorithms: empirical results. Evolutionary Computation, 8(2):173–195, 2000.

[74] C. M. Fonseca and P. J. Fleming. An overview of evolutionary algorithms in
multiobjective optimization. Evolutionary Computation, 3(1):1–16, 1995.
[75] C. M. Fonseca and P. J. Fleming. Multiobjective optimization and multiple constraint
handling with evolutionary algorithms – part I: a unified formulation. IEEE Transac-
tions on Systems, Man and Cybernetics. Part A: Systems and Humans, 28(1):26–37,
1998.
[76] C. A. Coello Coello. 20 years of evolutionary multiobjective optimization: what has
been done and what remains to be done. In G. Y. Yen and D. B. Fogel, editors, Com-
putational Intelligence: Principles and Practice, pages 73–88. IEEE Computational
Intelligence Society, Vancouver, Canada, 2006.
[77] K. Deb. Evolutionary algorithms for multi-criterion optimization in engineering de-
sign. In K. Miettinen, M. M. Mäkelä, P. Neittaanmäki, and J. Periaux, editors, Evo-
lutionary Algorithms in Engineering and Computer Science, pages 135–161. John
Wiley and Sons, Chichester, UK, 1999.
[78] J. D. Schaffer. Multiple objective optimization with vector evaluated genetic algo-
rithms. In J. J. Grefensttete, editor, Genetic Algorithms and their Applications: Pro-
ceedings of the 1st International Conference on Genetic Algorithms, pages 93–100,
Hillsdale, NJ, 1985. Lawrence Erlbaum Associates.
[79] C. M. Fonseca and P. J. Fleming. Genetic algorithms for multiobjective optimization:
formulation, discussion and generalization. In S. Forrest, editor, Proceedings of the
5th International Conference on Genetic Algorithms, pages 416–423, San Mateo,
California, 1993. Morgan Kaufman Publishers.
[80] N. Srinivas and K. Deb. Multiobjective optimization using Nondominated Sorting in
Genetic Algorithms. Evolutionary Computation, 2(3):221–248, 1994.
[81] J. Horn, N. Nafpliotis, and D. E. Goldberg. A Niched Pareto Genetic Algorithm
for multiobjective optimization. In Proceedings of the 1st IEEE Conference on Evo-
lutionary Computation, IEEE World Congress on Computational Intelligence, vol-
ume 1, pages 82–87, Piscataway, NJ, 1994. IEEE Service Center.
[82] E. Zitzler and L. Thiele. Multiobjective optimization using evolutionary algorithms
– a comparative study. In A. E. Eiben, editor, Parallel Problem Solving from Nature
V, pages 292–301. Springer-Verlag, Amsterdam, 1998.
[83] K. Deb, S. Agrawal, A. Pratab, and T. Meyarivan. A fast elitist Non-dominated Sort-
ing Genetic Algorithm for multi-objective optimization: NSGA-II. In M. Schoe-
nauer, K. Deb, G. Rudolph, X. Yao, E. Lutton, J. J. Merelo, and H.-P. Schwefel,
editors, Proceedings of the Parallel Problem Solving from Nature VI Conference,
volume 1917/2000 of Lecture Notes in Computer Science, pages 849–858, Paris,
France, 2000. Springer.
[84] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan. A fast and elitist multiobjective
genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6
(2):182–197, 2002.

[85] E. Zitzler, M. Laumanns, and L. Thiele. SPEA2: Improving the Strength Pareto
Evolutionary Algorithm. TIK-Report 103, Department of Electrical Engineering,
Swiss Federal Institute of Technology, Zurich, Switzerland, 2001.
[86] Y. Tang, P. Reed, and T. Wagener. How effective and efficient are multiobjective evo-
lutionary algorithms at hydrologic model calibration? Hydrology and Earth System
Sciences, 10(2):289–307, 2006.
[87] J. A. Vrugt, H. V. Gupta, L. A. Bastidas, W. Bouten, and S. Sorooshian. Effective
and efficient algorithm for multiobjective optimization of hydrologic models. Water
Resources Research, 39(8):1214, 2003.
[88] L. M. Gambardella, É. Taillard, and G. Agazzi. MACS-VRPTW: A multiple ant
colony system for vehicle routing problems with time windows. In D. Corne,
M. Dorigo, and F. Glover, editors, New Ideas in Optimization, pages 63–76.
McGraw-Hill, London, 1999.
[89] C. Garcı́a-Martinez, O. Cordón, and F. Herrera. An empirical analysis of multiple
objective ant colony optimization algorithms for the bi-criteria TSP. In M. Dorigo,
M. Birattari, C. Blum, L. M. Gambardella, F. Mondada, and T. Stützle, editors, Pro-
ceedings of the 4th International Workshop on Ant Colony Optimization and Swarm
Intelligence, ANTS 2004, volume 3172/2004 of Lecture Notes in Computer Science,
pages 61–72, Brussels, Belgium, 2004. Springer.
[90] C. A. Coello Coello, G. T. Pulido, and M. S. Lechuga. Handling multiple objectives
with particle swarm optimization. IEEE Transactions on Evolutionary Computation,
8(3):256–279, 2004.
[91] D.-W. Gong, Y. Zhang, and J.-H. Zhang. Multi-objective particle swarm optimiza-
tion based on minimal particle angle. In D.-S. Huang, X.-P. Zhang, and G.-B. Huang,
editors, Advances in Intelligent Computing, Proceedings of the International Confer-
ence on Intelligent Computing, ICIC 2005, volume 3644/2005 of Lecture Notes in
Computer Science, pages 571–580, Hefei, China, 2005. Springer.
[92] M. K. Gill, Y. H. Kaheil, A. Khalil, M. Mckee, and L. A. Bastidas. Multiobjective
particle swarm optimization for parameter estimation in hydrology. Water Resources
Research, 42(7):W07417, 2006.
[93] R. J. Balling, J. T. Taber, M. R. Brown, and K. Day. Multiobjective urban planning
using genetic algorithm. Journal of Urban Planning and Development, 125(2):86–
99, 1999.
[94] V. R. Vemuri and W. Cedeino. A new genetic algorithm for multi-objective opti-
mization in water resource management. In Proceedings of the IEEE International
Conference on Evolutionary Computation, volume 1, pages 495–500, Perth, Aus-
tralia, 1995.
[95] D. Halhal, G. A. Walters, D. Ouazar, and D. A. Savic. Water network rehabilitation
with structured messy genetic algorithm. Journal of Water Resources Planning and
Management, 123(3):137–146, 1997.

[96] P. O. Yapo, H. V. Gupta, and S. Sorooshian. Multi-objective global optimization for
hydrologic models. Journal of Hydrology, 204(1-4):83–97, 1998.

[97] T. D. Prasad and N.-S. Park. Multiobjective genetic algorithms for design of water
distribution networks. Journal of Water Resources Planning and Management, 130
(1):73–82, 2004.

[98] L. S. Vamvakeridou-Lyroudia, G. A. Walters, and D. A. Savic. Fuzzy multiobjective
optimization of water distribution networks. Journal of Water Resources Planning
and Management, 131(6):467–476, 2005.

[99] M. Atiquzzaman, S.-Y. Liong, and X. Yu. Alternative decision making in water
distribution network with NSGA-II. Journal of Water Resources Planning and Man-
agement, 132(2):122–126, 2006.

[100] P. Reed, B. S. Minsker, and D. E. Goldberg. A multiobjective approach to cost effective
long-term groundwater monitoring using an elitist nondominated sorted genetic
algorithm with historical data. Journal of Hydroinformatics, 3(2):71–89, 2001.

[101] P. Reed, J. B. Kollat, and V. K. Devireddy. Using interactive archives in evolutionary
multiobjective optimization: A case study for long-term groundwater monitoring
design. Environmental Modelling and Software, 22(5):683–692, 2007.

[102] J. B. Kollat and P. M. Reed. Comparing state-of-the-art evolutionary multi-objective
algorithms for long-term groundwater monitoring design. Advances in Water
Resources, 29(6):792–807, 2006.

[103] J. B. Kollat and P. M. Reed. A computational scaling analysis of multiobjective
evolutionary algorithms in long-term groundwater monitoring applications. Advances
in Water Resources, 30(3):408–419, 2007.

[104] Y. Tang, P. M. Reed, and J. B. Kollat. Parallelization strategies for rapid and robust
evolutionary multiobjective optimization in water resources applications. Advances
in Water Resources, 30(3):335–353, 2007.

[105] M. Erickson, A. Mayer, and J. Horn. Multi-objective optimal design of groundwater
remediation systems: application of the niched Pareto genetic algorithm (NPGA).
Advances in Water Resources, 25(1):51–65, 2002.

[106] S.-Y. Liong, S.-T. Khu, and W.-T. Chan. Derivation of Pareto front with genetic
algorithm and neural network. Journal of Hydrologic Engineering, 6(1):52–61, 2001.

[107] T. Kim, J.-H. Heo, and C.-S. Jeong. Multireservoir system optimization in the Han
River basin using multi-objective genetic algorithms. Hydrological Processes, 20(9):
2057–2075, 2006.

[108] M. Janga Reddy and D. Nagesh Kumar. Optimal reservoir operation using multi-
objective evolutionary algorithm. Water Resources Management, 20(6):861–878,
2006.

[109] S. R. Murty Yandamuri, K. Srinivasan, and S. Murty Bhallamudi. Multiobjective
optimal waste load allocation models for rivers using Nondominated Sorting Genetic
Algorithm-II. Journal of Water Resources Planning and Management, 132(3):133–
143, 2006.

[110] G. B. Kingston, H. R. Maier, and G. C. Dandy. AI techniques for hydrological
modeling and management. I: simulation. In L. N. Robinson, editor, Hydrology
Research Trends, pages 15–65. Nova Science Publishers, Inc., 2008.
