Handbook of AI-based Metaheuristics
Advances in Metaheuristics
Series Editors:
Anand J. Kulkarni, Symbiosis Center for Research and Innovation, Pune, India
Patrick Siarry, Université Paris-Est Créteil, Laboratoire Images, Signaux et Systèmes Intelligents, Vitry-sur-Seine, France
Edited by
Anand J. Kulkarni
Patrick Siarry
First edition published 2022
by CRC Press
6000 Broken Sound Parkway NW, Suite 300, Boca Raton, FL 33487-2742
© 2022 selection and editorial matter, Patrick Siarry, Anand J. Kulkarni; individual chapters, the contributors
Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot
assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have
attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright hold-
ers if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged
please write and let us know so we may rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or
utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including pho-
tocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission
from the publishers.
For permission to photocopy or use material electronically from this work, access www.copyright.com or contact the
Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. For works that are
not available on CCC please contact mpkbookspermissions@tandf.co.uk
Trademark notice: Product or corporate names may be trademarks or registered trademarks and are used only for
identification and explanation without intent to infringe.
Typeset in Times
by KnowledgeWorks Global Ltd.
Preface
The domain of optimization and computational intelligence is an inherent and essential part of most disciplines concerned with utilizing resources to their fullest, and it is a need of today and of the future. At the heart of the optimization domain are the mathematical modeling of the problem and the solution methodologies. Today, problems are becoming larger and increasingly complex, and such problems are cumbersome to handle with traditional optimization methods. This motivated researchers to resort to artificial intelligence (AI)-based, nature-inspired solution methodologies or algorithms, commonly referred to as metaheuristics. These algorithms are preferred because they are based on simple rules and can handle large, complex real-world problems in a reasonable amount of time. They are broadly classified into bio-inspired, physics-based, socio-inspired, and swarm-based algorithms.
The bio-inspired algorithms are motivated by the genetic evolution of living animals and plants. The major bio-inspired optimization algorithms are evolution based, swarm based, and ecology based. The popular evolution-based algorithms include the artificial neural network, genetic algorithms, differential evolution, genetic programming, evolution strategies, and so forth. The swarm-based methods include particle swarm optimization, ant colony optimization, the artificial bee colony algorithm, the fish swarm algorithm, the intelligent water drops algorithm, the bacterial foraging optimization algorithm, the artificial immune system algorithm, the firefly algorithm, the group search optimizer, the shuffled frog leaping algorithm, cat swarm optimization, the gray wolf optimization algorithm, the elephant search algorithm, the chicken swarm optimization algorithm, moth flame optimization, and so forth. The ecology-based methods include invasive weed colony optimization, biogeography-based optimization, and so forth. The notable socio-inspired algorithms can be broadly classified into methods inspired by socio-political ideologies, competitive behavior in sports, social and evolutionary cooperation/competition, cultural transition in humans, colonization and imperialistic competition, and memetic algorithms.
So far, a wide variety of problems from almost every application domain have been successfully solved by variations as well as hybridized versions of these algorithms. The major domains are supply chain and transportation, packing, medicine and healthcare, robotics and automation, image processing and steganography, machine learning, mechanical engineering design and manufacturing, real-time control systems, construction and planning, and so forth. Any development and improvement associated with metaheuristics will have a huge impact on the above-listed and allied areas, as well as from a societal viewpoint.
Several researchers from industry and academia, as well as master's and PhD students around the globe, are working in the metaheuristics and applications domain. Handbook of AI-based Metaheuristics intends to serve as a complete reference for the theoretical and mathematical formulations of metaheuristics and their testing and validation, along with detailed illustrative solutions, solutions to certain classes of problems, and applications. In addition, this book may also serve as a platform to propose novel metaheuristic algorithms. Based on the overall domains, this book is divided into four sections: Bio-Inspired Methods, Physics- and Chemistry-Based Methods, Socio-Inspired Methods, and Swarm-Based Methods.
algorithm are also listed along with the rationale behind each one. Accordingly, the variations of
the BSO developed for solving single-objective problems such as global-best BSO (GBSO), BSO
with multi-information interaction (MIIBSO), BSO algorithm with flexible search length and
memory-based selection (ASBSO), simplex search-based BSO (Simplex-BSO), vector grouping
learning BSO (VGLBSO), multiple diversity driven BSO (MDBSO), and active learning BSO
(ALBSO) are critically surveyed. Furthermore, a review of the many-objective BSO (MaOBSO),
multi-objective brain storm optimization algorithm based on decomposition (MBSO/D), grid-
based BSO algorithm (GMOBSO), k-means (KMOBSO), and group-based clustering (PMOBSO)
is also provided. Importantly, the listed limitations and possible solutions open doors and avenues for exploring the BSO algorithm further.
Chapter 2 by Filho et al. presents a critical review of the advances and developments of the
Fish School Search algorithm (FSS) in the last decade. It includes theoretical and mathematical discussion of variations such as improved binary FSS (IBFSS), simplified binary FSS (SBFSS), and mixed FSS (MFSS). Three approaches for multi- and many-objective optimization, namely multi-objective FSS (MOFSS), multi-objective binary FSS (MOBFSS), and weight-based many-objective FSS (wmoFSS), are discussed in detail. Moreover, two multimodal optimization approaches, density-based FSS (dFSS) and weight-based FSS (wFSS), are also described. Apart from these variations, two methods for parallel processing, referred to as parallel FSS (pFSS) and multi-threaded FSS (MTFSS) and aimed at accelerating the processing time, are illustrated. A theoretical review of some of the important applications
such as clustering, feature selection, image compression and thresholding, the knapsack prob-
lem, recommendation systems, smart grids, supply chain network planning, electrical impedance
tomography, the mixed model workplace time-dependent assembly line balancing problem, and
finite-element model updating is also provided in the chapter.
Chapter 3 by Liu et al. discusses an algorithm referred to as marriage in honey bees opti-
mization (MBO). The chapter discusses in detail the honey bee colony structure and marriage
process along with its artificial model representing the MBO algorithm. This is accompanied by a detailed illustration of the proposed MBO in the continuous domain, with five heuristics improving the solutions significantly. Experiments are conducted on several benchmark functions to evaluate the search ability of MBO with each single heuristic and with the five heuristics incorporated together. Benchmark functions of several dimensions are chosen. The performance of the algorithm is
validated by comparing the results with contemporary optimization algorithms such as particle
swarm optimization (PSO), ant colony optimization (ACO), and artificial bee colony (ABC). The
computational complexity of the algorithm is also evaluated. Certain limitations of the MBO are
listed, which can be of help for the readers to explore the avenues further.
The genetic algorithm (GA) architecture and its application scenario in real-world engineering
optimization designs, especially in the domain of structural optimization, are of prime focus in
Chapter 4 by R. Desai. Fundamental concepts and working principles with design representations
available in GA are highlighted in the chapter. It also touches on several constraint handling
approaches that have been implemented successfully in the optimization domain. Apart from
these, the widespread applications of GA in various fields of structural optimization are described in great detail. Most importantly, the working of the algorithm is demonstrated in the same detail using a variety of examples, which underscore that the GA offers a powerful search technique with validated performance in almost all design optimizations.
GSA. The chapter provides the GSA procedure along with the mathematical formulation. Most
significantly, eight operators proposed by several researchers, which helped the algorithm evolve
or modify, are illustrated in detail. The operators such as escape, Kepler, black hole, and disrup-
tion are inspired by physical theories and are related to astronomy, gravity, and relativity. The
other operators include crossover, mutation, chaotic, and discrete local search. The detailed theo-
retical and mathematical descriptions of the variations of the GSA such as binary GSA, discrete
GSA, mixed GSA, quantum GSA, constraint GSA, and multimodal GSA are also provided in the
chapter. The review of several applications of the GSA and its modified versions is discussed. The
applications are associated with power engineering, pattern recognition, communication engi-
neering, control and electrical engineering, civil engineering, software engineering, mechanical
engineering, and biology. It is important to mention that the chapter lists the limitations of the
algorithm, opening doors for the readers to explore further.
Chapter 6 by Andrew O. Martin discusses the stochastic diffusion search (SDS) algorithm.
It provides a mathematical formulation involving the discrete dynamical systems model. Detailed analyses of the convergence time, convergence stability, convergence sensitivity, positive feedback, and so forth are discussed. The chapter introduces a formalism that defines standard SDS as a set of interacting functions, which enables variants of SDS to be described in terms of behavior distinct from standard SDS. It demonstrates the modularity of SDS and the
implementation of the variants for use in experiments. The formalism therefore facilitates the
expression of variants of SDS in literature and in algorithmic implementation. In addition, a num-
ber of variants of SDS are also described. For each variant the individual level rules that define
the variant are delineated. The advantages and disadvantages of the behavior of the variants are
also discussed.
review, several potential modifications and applications have been suggested, such as the use of
a variety of cultural paradigms and reducing computational cost using parallelism. In addition, the need to develop a generalized cultural algorithm-based multi-objective optimizer as well as a many-objective optimizer is suggested, with an emphasis on solving dynamic problems.
Chapter 9 by Toğan et al. discusses teaching-learning-based optimization (TLBO) with
detailed elaboration on its theoretical development, while illustrating TLBO through numeri-
cal examples associated with minimization of time and cost of construction projects, under the
category of a multi-objective optimization problem. The chapter describes in detail the original
version of the TLBO as well as the detailed theoretical and mathematical discussion of modified
versions such as the elitist TLBO (eTLBO) and the modified TLBO (mTLBO) approaches. The
application of these TLBO versions in solving the time-cost optimization (TCO) problem encoun-
tered in the field of construction management and execution is described involving mathematical
formulations. The chapter illustrates two computational experiments demonstrating the applica-
tion of TLBO and its variants on TCO type problems. This study lays the groundwork for future
research works on applications of multi-objective optimization problems in various fields.
Chapter 10 by Yue-Jiao Gong introduces a novel optimization approach referred to as social
learning optimization (SLO). The approach mimics the social learning process of humans. The
motivation comes from the social learning behavior of humans that inherently exhibits the high-
est level of intelligence in nature. The chapter describes in detail the SLO method and associ-
ated mechanism with a focus on its major components such as attention, retention, reproduction,
and motivation. In addition to the above four basic elements, human intelligence is addressed
using aspects such as vicarious reinforcement and self-reinforcement. The algorithm of SLO is
validated by solving several unimodal and multimodal functions of different dimensions and
comparing the performance with particle swarm optimization, the genetic algorithm, and the
differential evolution algorithm. In addition, the investigation on parameter setting is also carried
out showing the effectiveness of certain suitable settings.
Real-world problems are inherently constrained in nature. The need for developing a suitable constraint handling technique is underscored in Chapter 11 by Apoorva S. Shastri and Anand J. Kulkarni. In this chapter, an emerging socio-inspired optimization metaheuris-
tic referred to as multi-cohort intelligence (Multi-CI) is incorporated with a probability-based
constraint handling approach. The constrained version of the Multi-CI algorithm is applied for
solving four constrained test problems and two engineering design problems, namely the welded beam design problem and the pressure vessel design problem. The solutions to these problems are
compared with several well-known methods such as ant colony optimization, cultural algorithm,
particle swarm optimization, genetic algorithm, differential evolution algorithm, and so forth.
This contribution opens the door for the constrained version of the algorithm, which could be
applied to real-world problems from the supply chain domain.
Chapter 13 by Magdalene Marinaki and Yannis Marinakis provides a new version of the
bumble bees mating optimization (BBMO) algorithm referred to as the adaptive memory bumble
bees mating optimization (ABBMO). The chapter provides a detailed theoretical and mathemati-
cal explanation of the BBMO and the ABBMO, clearly discussing the underlying differences.
The practical usability of the algorithm is demonstrated by successfully solving the location rout-
ing problem with stochastic demands along with several benchmark instances. The performance
is further successfully validated by comparing the computational results with other contempo-
rary algorithms from the literature.
Chapter 14 by Rapanaki et al. proposes the parallel multi-start multi-objective glowworm
swarm optimization (PMS-GSO) algorithm suitable for multi-objective routing problems. The
proposed PMS-GSO is used for the solution of a complex vehicle routing problem, the multi-
objective route-based energy reduction multi-depot vehicle routing problem (MERMDVRP). Four different versions of the problem are considered, ranging from the simplest, formulated without real-life constraints, to the most difficult, which involves several real-world constraints based mainly on the behavior of the driver, the road, and the weather conditions. The computational performance of the proposed algorithm is validated by comparing the results with three other algorithms, namely the parallel multi-start non-dominated sorting genetic
algorithm II (PMS-NSGA-II), the parallel multi-start multiobjective firefly (PMS-FIREFLY)
algorithm, and the parallel multi-start multiobjective particle swarm optimization (PMS-NSPSO)
algorithm by solving several benchmark instances.
Chapter 15 by Liwen Xie and Gai-Ge Wang theoretically and mathematically describes the
monarch butterfly optimization (MBO) algorithm. A review of the existing algorithms is also
carried out to underscore the distinct underlying principle of the MBO. The chapter critically
reviews the variations of the MBO such as chaotic MBO (CMBO), the opposition-based learning variants OMBO and OPMBO, multi-strategy MBO (MMBO), self-adaptive population MBO (SPMBO), self-adaptive crossover operator MBO (GCMBO), F/T mutation, and so forth. Apart from this, hybridized versions of the MBO with differential evolution (LMBO-DE, DEMBO, DE-LSMBO), harmony search (HMBO), artificial bee colony (HAM, HAMBO-CHLD), the firefly algorithm (MBO-FS), and simulated annealing (SAMBO) are also critically surveyed. Moreover, the variants of the MBO such as binary MBO, discrete
MBO, quantum MBO, and multi-objective MBO are explained in brief. The computational per-
formance of the MBO algorithm is validated by solving several test problems as well as several
large-sized knapsack test cases and further comparing them with several contemporary algo-
rithms. Most significantly, in addition to several proven advantages of the MBO, the chapter also
provided limitations of the algorithm, which may help the researchers in exploring new avenues.
Editors
Anand J. Kulkarni is an Associate Professor in the Computational Intelligence Domain at the Symbiosis Center for Research and Innovation, Symbiosis International (Deemed University), Pune, India.
Patrick Siarry is a Professor of automatics and informatics at the University of Paris-Est Créteil,
where he leads the Image and Signal Processing team in the Laboratoire Images, Signaux et
Systèmes Intelligents (LiSSi).
List of Contributors
Hussein A. Abbass
School of Engineering and Information Technology
University of New South Wales Canberra
Canberra, ACT, Australia

Ma Guadalupe Castillo Tapia
Administration Department
UAM Azcapotzalco
Mexico City, Mexico
CONTENTS
1.1 Introduction
1.2 Brain Storm Optimization
    1.2.1 Description of the Original BSO Algorithm
    1.2.2 Limitations of the Original BSO Algorithm
1.3 Literature Review
    1.3.1 Single-objective Optimization
    1.3.2 Multi-objective Optimization
    1.3.3 Constrained Optimization
1.4 Real-World Application
    1.4.1 Power System Reliability Assessment
    1.4.2 Problem Definition
    1.4.3 Idea Encoding
    1.4.4 Experimental Results
1.5 Conclusions and Future Research Directions
References
1.1 INTRODUCTION
The term population-based metaheuristics refers to one large class of algorithms that has been
successfully applied to solve continuous and discrete optimization problems. Population-based
metaheuristics maintain a population of individuals (i.e., solutions) that are updated, using some
specified mechanism, over a number of iterations (i.e., generations) until some stopping criteria
are met. Population-based metaheuristics are also referred to as nature-inspired metaheuristics
or bio-inspired metaheuristics as their population update mechanism is usually biologically
inspired. This class of algorithms could be further categorized based on the natural and/or
biological inspiration behind their population update mechanism. The categories include [1]
breeding-based evolution, swarm intelligence, physics and chemistry, social human behavior, plant-based, and miscellaneous algorithms.
According to the above categorization, the brain storm optimization (BSO) algorithm [2, 3] is a social human behavior algorithm that mimics brainstorming processes. In a typical brainstorming process, a group
of humans with different backgrounds, expertise, and abilities meet together to collect ideas
or develop solutions for a problem at hand. The collected ideas could be further hybridized
or enhanced to generate more ideas. Since its introduction in 2011, different researchers have
studied BSO behavior, proposed improvements to BSO, hybridized BSO with other techniques,
developed cooperative BSO frameworks, and applied BSO to real-world applications. Previous
literature reviews on BSO can be found in [4, 5].
In this chapter, we explain the mathematical formulation of BSO, conduct a comprehensive
literature survey for the last 3 years, detail how BSO is applied to one real-world application, and
finally highlight promising future research directions.
The rest of this chapter is organized as follows: Section 1.2 gives details about the original
BSO algorithm along with its limitations. Different improvements proposed in the literature, in
the previous 3 years, to improve BSO are covered in Section 1.3. A real-world application for
BSO is presented in Section 1.4. Section 1.5 concludes this chapter and proposes future research
directions.
The clustering step aims to combine similar ideas together and to help the population converge into small regions. The k-means clustering algorithm was used in the original BSO, and the best idea in each cluster was recorded as the cluster center.
The generation step is used to generate new ideas from existing ones. An intermediate individual X_i^n is generated taking information from:

a. The center of a probabilistically selected cluster cr:

   X_i^n = center_cr    (1.1)

b. A randomly selected idea j from a probabilistically selected cluster cr:

   X_i^n = X_cr^j    (1.2)

c. A random combination of the centers of two probabilistically selected clusters cr1 and cr2:

   X_i^n = R × center_cr1 + (1 − R) × center_cr2    (1.3)

d. A random combination of two randomly selected ideas j and k from two probabilistically selected clusters cr1 and cr2:

   X_i^n = R × X_cr1^j + (1 − R) × X_cr2^k    (1.4)

where R is a random number in [0, 1). One of the above four strategies is randomly selected based on the parameters p_one-cluster (i.e., the probability of considering a single cluster), p_one-center (i.e., the probability of considering a single center), and p_two-centers (i.e., the probability of considering two centers). A cluster is probabilistically selected based on its size (i.e., the number of ideas in the cluster).
Afterward, a perturbation step is applied to X_i^n to generate the new individual X_i^new using a step-size parameter ξ and a Gaussian distribution as follows:

ξ = logsig((0.5 × Max_Iterations − Current_Iteration) / k) × rand()    (1.5)

X_i^new = X_i^n + ξ × N(0,1)    (1.6)

where rand is a function that generates a uniformly distributed random number in the range [0,1), N(0,1) represents a Gaussian distribution with mean 0 and a standard deviation of 1, and k is a coefficient to modify the slope of the logsig() function. The detailed BSO generation step is shown in Algorithm 1.2.
As the search progresses, the argument of logsig() decreases and so does the step size ξ, resulting in less and less perturbation being applied to newly generated ideas. Hence, BSO moves from an explorative behavior at the beginning, due to applying large perturbations, into an exploitative behavior toward the end, due to having less perturbation.
Finally, in the replacement step, each newly generated idea X_i^new replaces the current X_i^old if it has a better fitness; otherwise, it will be discarded.
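To make these steps concrete, the following is a minimal Python sketch of one generation-plus-perturbation step, under stated assumptions: the population has already been clustered (e.g., by k-means), and the parameter values (the slope k and the three selection probabilities) are illustrative placeholders rather than the settings recommended in [2, 3].

```python
import numpy as np

def logsig(x):
    # Logistic sigmoid used in the step-size schedule of Eq. (1.5).
    return 1.0 / (1.0 + np.exp(-x))

def generate_idea(clusters, centers, t, max_iter, k=20.0,
                  p_one_cluster=0.8, p_one_center=0.4, p_two_centers=0.5,
                  rng=np.random.default_rng()):
    # clusters: list of 2-D arrays (one row per idea); centers: list of 1-D arrays.
    sizes = np.array([len(c) for c in clusters], dtype=float)
    probs = sizes / sizes.sum()                  # clusters chosen in proportion to size
    if rng.random() < p_one_cluster:             # information from a single cluster
        cr = rng.choice(len(clusters), p=probs)
        if rng.random() < p_one_center:
            x_n = centers[cr].copy()                                    # Eq. (1.1)
        else:
            x_n = clusters[cr][rng.integers(len(clusters[cr]))].copy()  # Eq. (1.2)
    else:                                        # information from two clusters
        cr1, cr2 = rng.choice(len(clusters), size=2, p=probs)
        R = rng.random()
        if rng.random() < p_two_centers:
            x_n = R * centers[cr1] + (1 - R) * centers[cr2]             # Eq. (1.3)
        else:
            x_j = clusters[cr1][rng.integers(len(clusters[cr1]))]
            x_k = clusters[cr2][rng.integers(len(clusters[cr2]))]
            x_n = R * x_j + (1 - R) * x_k                               # Eq. (1.4)
    # Step size decays as the iteration counter t grows (Eq. 1.5).
    xi = logsig((0.5 * max_iter - t) / k) * rng.random()
    return x_n + xi * rng.standard_normal(x_n.shape)                    # Eq. (1.6)
```

A full iteration would cluster the population, call this routine once per idea, and then apply the replacement step, keeping the new idea only if its fitness improves on the old one.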
iterations, it goes into a scout phase where it is randomly (or heuristically) re-initialized in the
search space. Previous work introducing a re-initialization step was reported in [10–12].
Third, having a fixed slope parameter k dictates that the logsig() function will have the same
decay characteristics regardless of the function being optimized. This could result in undesired
behaviors like having a sharp curve decrease, which leads to a quick switch from exploration to
exploitation. This is not desirable for more complicated functions with many local optima and
especially in high dimensions. Earlier attempts to propose dynamic scheduling include the works
in [7, 12, 13].
Fourth, inspecting the generation step of the original BSO reveals that it actually involves two separate steps: hopping then perturbation. A new individual, X_i^new, in BSO is generated using step 21 in Algorithm 1.2. In this equation, the first term, X_i^n, is basically generated using one or more cluster centers, or one or more randomly selected individuals. Afterward, this term is perturbed, and the generated individual, X_i^new, is compared with the old one, X_i^old. This means that the newly generated individual X_i^new does not inherit any traits of the old individual X_i^old with which it is compared. This is unlike other metaheuristics like genetic algorithms (GAs) and differential evolution (DE), through the crossover operator, or particle swarm optimization (PSO), through position updates, in which the new individual is generated while inheriting from or updating old individuals. This renders BSO a sort of memoryless algorithm in which there is no clear trajectory that an individual follows in the search space. Subsequently, this results in BSO being ineffective in exploring the search space. Moreover, in the case of using a single cluster center or
ability of BSO as it does not allow for cooperation or information sharing between different
individuals.
1.3.1 Single-objective Optimization
A global-best BSO (GBSO) was proposed in [14]. In GBSO, a fitness-based grouping strategy was adopted to equally allocate good individuals across different clusters. Moreover, per-variable updates were introduced so that Eqs. (1.1) to (1.4) are repeated for every problem variable, and not only once per individual, allowing more individuals to share information. The same re-initialization and dynamic scheduling techniques as in [12] were incorporated. Finally, the newly generated idea was influenced by the global-best one with an increasing effect facilitated by a new parameter C as follows:
tated by a new parameter C as follows:
Current _ Iteration
C = Cmin + × (Cmax − Cmin ) (1.7)
Max _ Iterations
X new
i = X new
i + rand (1, DimSize) × C × (Global _ Best − X new
i ) (1.8)
Experiments demonstrated the superior performance of GBSO compared with previously improved BSO algorithms. Further experiments demonstrated its highly competitive performance
against other popular global-best heuristic algorithms. Moreover, GBSO exhibited a very desir-
able property by having an improved performance with higher dimensions on the hybrid and
composition functions of the CEC14 benchmarks [15].
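A minimal sketch of this global-best influence may help; the values of c_min and c_max below are hypothetical placeholders, not the settings reported in [14].

```python
import numpy as np

def gbso_global_best_update(x_new, global_best, t, max_iter,
                            c_min=0.2, c_max=0.8,
                            rng=np.random.default_rng()):
    # Eq. (1.7): the influence coefficient C grows linearly with the iteration count.
    C = c_min + (t / max_iter) * (c_max - c_min)
    # Eq. (1.8): a separate random number per variable pulls the new idea
    # toward the global best, with increasing strength as the search proceeds.
    r = rng.random(x_new.shape)
    return x_new + r * C * (global_best - x_new)
```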
The authors in [16] proposed a simple BSO algorithm with a periodic quantum learning strat-
egy (SBSO-PQLS). The clustering process used a simple individual clustering (SIU) strategy
that sorts all individuals according to their fitness and then reasonably allocates them to dif-
ferent clusters. For the update strategy, when a new idea is generated from one cluster, it was
either based on the cluster center or a combination of two randomly selected ideas in that cluster.
However, when generating an idea from two clusters, this was based on a combination of two ran-
domly selected ideas from the two clusters. In either case, when generating a combination of two
ideas, this was independently done for each problem variable depending on a different random number, which is similar to the per-variable update proposed in [14]. Hence, for each problem variable d, X_i^n was defined as:

X_i^n(d) = R_d × X_cr1^j(d) + (1 − R_d) × X_cr2^k(d)

where d represents the dth problem variable, R_d is a random number drawn independently for each variable, cr1 and cr2 are the selected clusters (would be the same if the update is based on a single cluster), and j and k are the randomly selected ideas.
Moreover, the work also adapted the step-size function, for each problem variable independently,
according to the selected update strategy and the minimum and maximum boundaries of the
search space. Finally, the work introduced a quantum-behaved individual update with periodic
learning, which was applied after the above update strategy, to generate new momentum for
similar ideas. SBSO-PQLS was compared with seven other BSO variants, PSO, and DE using the
8 Handbook of AI-based Metaheuristics
CEC13 benchmark functions [17]. The results showed that SBSO-PQLS achieved a better global
search performance and avoided premature convergence.
The work in [18] tested three different versions of BSO for locating multiple optima. The tested
algorithms were the original BSO, BSO in objective space (BSO-OS) [19] with the Gaussian
random variable, and BSO-OS with the Cauchy random variable. Algorithms were modified to
preserve the found global/local optima until the end of the search. Experiments were conducted
on eight benchmark functions and seven nonlinear equation system problems. Results showed
that BSO-OS performed better than the fireworks algorithm (FWA) and PSO (with star topology)
but worse than PSO (with the von Neumann topology) for several problems.
BSO with multi-information interaction (MIIBSO) was introduced in [20]. In MIIBSO, the
clustering step was performed using the RG strategy in [7]. Moreover, MIIBSO adopted three
patterns for their update strategy. For pattern I, when the update strategy is based on a single cluster, MIIBSO combined a cluster center and a random idea, which could be considered as a mixture of steps 5 and 8 in Algorithm 1.2. For pattern II, and to remove redundancy, when updating an individual using information from two clusters, only a combination of two randomly selected ideas is considered. Finally, for pattern III, which is repeated for every dimension j, ∆ random ideas are selected from the population, where 2 ≤ ∆ ≤ N; then problem variable j is extracted from the best idea among the selected ∆ ideas and used to update dimension j in X_i^n. However, to reduce the computational complexity,
pattern III is only adopted as a re-initialization mechanism if a certain individual has not been
improved for a number of iterations. Finally, the step-size function is dynamically updated based
on the search space size. MIIBSO was compared with 11 BSO algorithms and five other algo-
rithms on the CEC13 benchmarks. MIIBSO was ranked number one when applied to problem
sizes of 30 and 50.
In [21], the authors proposed a BSO algorithm with flexible search length and memory-based
selection (ASBSO). To effectively explore the search space, ASBSO used different step lengths
by providing M different values for the parameter k, referred to as different strategies. To adjust
k, ASBSO employed two memory structures, namely success memory and failure memory. Both
memories consist of M columns, one for each strategy, and L rows, for L consecutive itera-
tions. An entry in the success(failure) memory is set to 1(0) when an individual is successfully
improved using strategy m, where 1 ≤ m ≤ M , at iteration l, where 1 ≤ l ≤ L, and 0(1) otherwise.
When an individual was updated, a strategy was probabilistically selected based on the success
rates. When tested using the CEC13 benchmarks, ASBSO outperformed BSO-OS and was com-
petitive with GBSO.
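The memory-based strategy selection can be sketched as follows; the exact selection probability used in [21] may differ, so this is an illustration of the idea rather than the published formula.

```python
import numpy as np

def select_strategy(success_mem, failure_mem, eps=0.01,
                    rng=np.random.default_rng()):
    # success_mem/failure_mem: L x M binary matrices over the last L iterations,
    # one column per step-length strategy (i.e., per candidate value of k).
    s = success_mem.sum(axis=0).astype(float)   # successes per strategy
    f = failure_mem.sum(axis=0).astype(float)   # failures per strategy
    rates = s / (s + f + 1e-12) + eps           # eps keeps every strategy selectable
    probs = rates / rates.sum()
    return rng.choice(len(probs), p=probs)      # index of the chosen strategy
```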
The authors in [22] introduced a simplex search-based BSO (simplex-BSO), which combined
the explorative ability of BSO with the local search ability of the Nelder-Mead simplex (NMS)
method. First, the BSO clustering and generation steps were modified. The clustering step only
generated two clusters referred to as elites (the best 20% individuals) and normals. As for the gen-
eration step, an individual was only generated using elites (one or a combination of two) or nor-
mals (one or a combination of two) but not both. Second, the NMS method was applied in every
iteration to exploit the area around the best idea. The NMS method only consumed 40 × D func-
tion evaluations, where D is the problem size, before returning the best-found solution. Simplex-
BSO was tested using the CEC17 benchmarks [23], with dimensionality 2 and 10. Experimental
results showed that simplex-BSO provided a better performance than BSO and NMS as well as
being the best algorithm at any given computational cost.
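Since the NMS method is available in standard libraries, the exploitation phase can be sketched with SciPy; the 40 × D evaluation budget follows the description above, while the rest of the wiring is an assumption.

```python
import numpy as np
from scipy.optimize import minimize

def nms_exploit(f, best_idea, budget_per_dim=40):
    # Cap the Nelder-Mead simplex search at 40 x D function evaluations,
    # matching the budget described for simplex-BSO.
    D = len(best_idea)
    res = minimize(f, best_idea, method="Nelder-Mead",
                   options={"maxfev": budget_per_dim * D})
    # Return whichever point is better; NMS may stop early without improving.
    return res.x if res.fun < f(best_idea) else np.asarray(best_idea)
```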
A vector grouping learning BSO (VGLBSO) was proposed in [24]. The clustering stage adopted the RG strategy. The update strategy considered the information exchange between subvectors of individuals. This could be considered as an intermediate strategy between two extremes: the entire-individual update in the original BSO and the per-variable update in GBSO. Each individual is stochastically divided into τ subvectors with at least ∆ problem variables each. Two new individual-generating patterns, A and B, were devised to emphasize local search and global search, respectively. In pattern A, one cluster, cr, and one individual from that cluster, X_cr^j, were randomly selected. The new individual, X_i^n, was generated by combining subvectors from the selected cluster center, center_cr, and the selected individual, X_cr^j, or by combining subvectors from the randomly selected individual, X_cr^j, and another randomly selected one, X_cr^k (k changes for every subvector). Here, S(X_i^n, t) denotes subvector t in X_i^n, and S(X_cr^j, t), S(X_cr^k, t), and S(center_cr, t) denote subvector t in X_cr^j, X_cr^k, and the cluster center, respectively. On the other hand, pattern B considered information from individuals belonging to two different clusters as well as from the entire population (i.e., expressed as the mean of all cluster centers). It did not rely only on stochastic combinations as in pattern A, but also included differential information. Hence, pattern B focused on global exploration. Furthermore, to generate X_i^new, a perturbation is applied to X_i^n, either by the step function combined with a Gaussian distribution as in the original BSO or by adding a differential term similar to DE.
ing a differential term that is similar to DE. The CEC13 benchmarks library was used to compare
VGLBSO with another 12 BSO variants for a problem size of 50. Experimental results showed
that VGLBSO ranked number 1 among the compared algorithms followed by GBSO.
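A simplified sketch of the subvector combination in pattern A follows; it uses contiguous subvectors and a 50/50 donor choice for clarity, whereas [24] also enforces a minimum subvector length ∆ and uses its own grouping scheme.

```python
import numpy as np

def subvector_combine(center, x_j, tau=4, rng=np.random.default_rng()):
    # Split the D variables into tau contiguous random subvectors, then copy
    # each subvector from either the cluster center or the selected idea x_j.
    D = len(center)
    cuts = np.sort(rng.choice(np.arange(1, D), size=tau - 1, replace=False))
    bounds = np.concatenate(([0], cuts, [D]))
    x_n = np.empty(D)
    for t in range(tau):
        lo, hi = bounds[t], bounds[t + 1]
        src = center if rng.random() < 0.5 else x_j   # donor chosen per subvector
        x_n[lo:hi] = src[lo:hi]
    return x_n
```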
To enhance the population diversity and achieve an effective exploration/exploitation balance, the work in [25] combined BSO with the sine-cosine algorithm (SCA) [26]. In EBS-SCA, RG is used to cluster ideas. Meanwhile, an exploration phase was followed for the first 80% of the iterations, while an exploitation phase was followed in the last 20%. In general, to generate X_i^n, EBS-SCA combined updating the entire individual as in the original BSO with a per-variable update as in GBSO. For exploration, after generating X_i^n, X_i^new was generated using the sine-cosine update equations of SCA.
A multiple diversity driven BSO (MDBSO) was proposed in [27] to adapt the BSO parameters
based on the population diversity. Two diversity measures were used, namely distance based and
fitness based. Clustering was carried out using the k-means algorithm, while the update equa-
tions were completely replaced by two mutation operators, i.e., BLX-α and Gaussian. One of the
two mutation operations was selected based on a probability that is adapted using the population
diversity. MDBSO was tested using the CEC2017 benchmarks and had the best performance
compared with other BSO variants followed by RGBSO and GBSO.
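The following sketch illustrates the two ingredients just described: a distance-based diversity measure and the probabilistic choice between BLX-α and Gaussian mutation. Both the measure and the parameter values are illustrative, not the exact formulations in [27].

```python
import numpy as np

def distance_diversity(pop):
    # Mean distance of individuals to the population centroid: one common
    # distance-based diversity measure (illustrative only).
    centroid = pop.mean(axis=0)
    return np.linalg.norm(pop - centroid, axis=1).mean()

def diversity_driven_mutation(a, b, p_blx, alpha=0.5, sigma=0.1,
                              rng=np.random.default_rng()):
    # Choose between BLX-alpha and Gaussian mutation with probability p_blx,
    # which MDBSO would adapt from the measured population diversity.
    if rng.random() < p_blx:
        lo, hi = np.minimum(a, b), np.maximum(a, b)
        span = hi - lo
        return rng.uniform(lo - alpha * span, hi + alpha * span)  # BLX-alpha
    return a + sigma * rng.standard_normal(a.shape)               # Gaussian
```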
The work in [28] introduced an improved BSO (RMBSO) algorithm in which a slight relax-
ation selection and ensembles were used to improve the performance. In RMBSO, three update
schemes were utilized. In the first two, X_i^n is generated as in the original BSO algorithm; then, to generate X_i^new, either a current-to-pbest/1 or a current-to-rand/1 perturbation approach was employed. In the third scheme, X_i^new was directly generated using a new triangulation mutation approach. In addition, the replacement step was slightly relaxed by accepting new ideas that are better than either one of the previous two individuals. RMBSO was evaluated using the CEC05
benchmarks [29] and compared with a number of BSO variants. Among the compared variants,
RMBSO was ranked at number 1 followed by GBSO.
In [30], the authors proposed an active learning BSO (ALBSO) with a dynamically changed
cluster cycle. For the clustering stage, the k-means algorithm was only executed after a random
number of generations based on a new parameter dc. For the update stage, when generating X_i^n from within a single cluster, ALBSO used the difference information between the cluster center and the mean individual of the population, or between the cluster center and a random individual. However, when generating X_i^n using two clusters, the difference information between two cluster centers or between the cluster centers and a random individual was used. Although ALBSO showed a better performance than other metaheuristics
when applied to the CEC05 benchmarks, it was not compared with other BSO variants.
Population interaction networks (PINs) were used in [31] to theoretically analyze the perfor-
mance of BSO. The PIN was used to capture the relationships among individuals in BSO and was
updated after every iteration. The work concluded that in lower dimensions, BSO produced good
results as the population interaction of BSO follows a power law distribution. This is deemed effec-
tive for seeking an optimal solution. However, in higher dimensions, the population interaction did
not follow such a law; hence, the performance deteriorated. Through further experiments, the work
proposed a combination of parameter settings that could be used to produce better results. Finally,
the authors proposed that BSO should be modified to make sure that the population completely
obeys a power law distribution to enhance the performance in higher dimensions.
1.3.2 Multi-objective Optimization
The authors in [32] developed an improved BSO algorithm for many-objective optimization. The
many-objective BSO (MaOBSO) algorithm adopted new clustering methods with reference point
allocation to increase the diversity and decision variables clustering to divide the variables into con-
vergence-related and diversity-related variables. Convergence-related variables were necessary to
find Pareto solutions that are close to the Pareto front. On the other hand, diversity-related variables
make the Pareto solutions more uniform. To optimize these variables, different approaches were
taken. Roulette wheel selection and binary crossover were used to optimize convergence-related
variables, while selection and mutation from the original BSO were used to optimize diversity-
related variables. MaOBSO provided a superior performance when compared with NSGA-III [33]
and MOEA/DD [34] when applied to the DTLZ [35] and MaF [36] benchmarks.
A multi-objective BSO algorithm based on decomposition (MBSO/D) was proposed in [37].
Using decomposition, every multi-objective problem is decomposed into a series of subproblems using weight vectors and aggregation functions. In MBSO/D, each given weight vector provided
a cluster, and the best solution of each cluster was determined by the corresponding weight vec-
tor aggregation function. Moreover, the update equations were adopted from MOEA/D [38]
while a new solution was generated using either one or three clusters. MBSO/D showed com-
petitive performance when compared with other decomposition-based algorithms MOEA/D and
MOHS/D [39] using the CEC09 [40] and DTLZ benchmarks.
Dynamic multi-objective optimization was tackled in [41] using a grid-based BSO algo-
rithm (GMOBSO). Grid-based clustering was used to evenly divide the objective space across
each objective and group the individuals located in the same grid into a cluster. A hybrid muta-
tion strategy combining Gaussian-, Cauchy- and chaotic-based mutation operators was used to
enhance the diversity and avoid premature convergence. The algorithm was tested using eight
dynamic multi-objective benchmarks and compared with other BSO versions using different
clustering techniques including k-means (KMOBSO) and group-based clustering (PMOBSO).
Experimental results showed the improved performance of GMOBSO as it could find robust
Pareto optimal solutions approximating the true Pareto-front and had longer survival times.
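Grid-based clustering of the objective space can be sketched as follows; the number of divisions is a placeholder, and [41] may handle boundaries and cell sizing differently.

```python
import numpy as np
from collections import defaultdict

def grid_cluster(objectives, divisions=10):
    # objectives: N x M array of objective values; individuals that fall in
    # the same cell of the evenly divided objective-space grid form a cluster.
    lo, hi = objectives.min(axis=0), objectives.max(axis=0)
    width = np.where(hi > lo, hi - lo, 1.0)        # guard against a flat objective
    cells = np.floor((objectives - lo) / width * divisions).astype(int)
    cells = np.clip(cells, 0, divisions - 1)       # boundary points into the last cell
    clusters = defaultdict(list)
    for idx, cell in enumerate(map(tuple, cells)):
        clusters[cell].append(idx)                 # individual indices per grid cell
    return clusters
```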
1.3.3 Constrained Optimization
To solve constrained optimization problems, the work in [42] combined a modified BSO (MBSO)
algorithm with a simplified constraint consensus (CC) method (MBSO-R+V). The MBSO ver-
sion used a simple grouping technique by randomly selecting cluster seeds from the population and assigning each idea to the cluster of its closest seed. Moreover, the Gaussian
perturbation term is replaced with a differential term while generating some problem variables
randomly. MBSO was shown to be effective in constrained optimization [43]. To handle infea-
sible solutions, a new CC variant called restriction+violated (R+V) was proposed. In R+V, only
the feasibility vector of the hardest constraint in turn is considered. Hence, only the gradient of
that constraint is computed, reducing the computational cost. MBSO-R+V was shown to be com-
petitive with other state-of-the-art algorithms for constrained optimization problems. The work
represented an extension of the same authors’ previous work in [44].
[Figure 1.1: Hierarchical levels in power system reliability analysis: HLI (generation), HLII (generation and transmission), HLIII (generation, transmission, and distribution).]
determine its adequacy to meet the total system load requirements. At this level, the transmission
lines are ignored and considered as completely reliable elements. In HLII studies, the adequacy
analysis is usually termed composite system or bulk transmission system evaluation. The HLIII
studies include all the three functional zones of the power system, starting at generation points
and terminating at the individual consumer load points. In this experimental study, the reliability
indices of generation (HLI) and composite power system (HLII) are calculated.
Both analytical methods and simulation-based methods could be applied to this problem. However, analytical methods are not suitable for large systems, while simulation-based methods have an increased computational burden as the required reliability level increases. Population-based metaheuristics offer a balance when applied to the power system reliability assessment problem. Previous works showed that such methods can reduce the number of sampled states, reduce the computational time, and achieve better coverage [67, 68].
1.4.2 Problem Definition
In a power system reliability study, each element (generation unit or transmission line) has its own forced outage rate (FOR), which is calculated using its failure rate λ and repair rate µ:

FOR_i = λ_i / (λ_i + µ_i)    (1.14)

Each power system state is defined according to the elements' status (1 = on and 0 = off). A probability value PS_j is calculated for each state j as follows:

Availability_i = FOR_i if status_i = 0; 1 − FOR_i otherwise    (1.15)

PS_j = ∏_{i=1}^{N_g + N_t} Availability_i    (1.16)

where N_g is the number of generation units and N_t is the number of transmission lines.
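Eqs. (1.14) to (1.16) translate directly into code; the sketch below assumes per-element failure and repair rates and a binary status vector covering all generation units and transmission lines.

```python
import numpy as np

def forced_outage_rate(lam, mu):
    # Eq. (1.14): FOR_i = lambda_i / (lambda_i + mu_i).
    return lam / (lam + mu)

def state_probability(status, FOR):
    # Eqs. (1.15)-(1.16): an element contributes FOR_i when down (status 0)
    # and 1 - FOR_i when up (status 1); the state probability is the product.
    status, FOR = np.asarray(status), np.asarray(FOR)
    availability = np.where(status == 0, FOR, 1.0 - FOR)
    return availability.prod()
```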
In addition, each system state j should be evaluated as a success or failure state. The state is evaluated as a success if the generated power is greater than the hourly demand; otherwise, it is considered a failure state. In case of a failure state, the difference between the generated power and the demand is referred to as the energy not supplied (ENS). In composite power system reliability studies, however, the evaluation is more complicated, and a linear programming minimization problem based on DC load flow equations is used to evaluate the sampled state (described in detail in the next section). If it is a failure state, the amount of load curtailment for the whole system and the share of each load bus in this amount will be determined.
Finally, after the evaluation of all possible states of the system, reliability indices such as LOLE (loss of load expectation) and EENS (expected energy not supplied) can be calculated for the whole system.
1.4.3 Idea Encoding
GBSO is applied in this work to the power system reliability application. In GBSO, each idea
represents a solution to the problem. Hence, each idea represents a system state in which all gen-
eration units/transmission lines are considered as either on (up state) or off (down state). In this
case, if a generation unit status is set to 1, the generation unit is assumed to generate power equal
to its capacity. If it is 0, then the generation unit is down and no power is generated. Regarding
the transmission line, if the status is 0, this line is considered to be out of service. The fitness of
an idea is the probability calculated according to the availability of units and transmission lines
in Eq. 1.16. One idea representing the state of the system is shown in Figure 1.2. In this state, all of the units are up and only one line (number 8) is down.
To calculate reliability indices, only failure states are considered. Regarding the generation system (HLI), this is a simple stage. The generated power of each system state (idea) is calculated using Eq. 1.17. If the generated power is greater than the load demand, the state is considered a success. If the generated power is less than the load demand, the state is considered a failure and is used to calculate the reliability indices. To calculate the generated power, the following equation is utilized:
G_j = ∑_{i=1}^{N_g} g_i × b_i    (1.17)

where G_j is the generated power of state j, N_g is the number of generation units, g_i is the capacity of unit i, and b_i is the status of unit i.
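For HLI, the state evaluation of Eq. (1.17) reduces to a dot product; the sketch below also returns the ENS for failure states, under the assumptions stated above.

```python
import numpy as np

def evaluate_hli_state(status, capacities, demand):
    # Eq. (1.17): generated power is the capacity-weighted sum of unit statuses.
    generated = float(np.dot(capacities, status))
    is_failure = generated < demand
    ens = max(demand - generated, 0.0)   # energy not supplied for failure states
    return is_failure, ens
```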
In composite power system reliability studies (HLII), the calculation is more complicated. To determine whether a state (idea) is a failure state or not, a linear programming minimization problem is solved. Simply put, an optimized DC load flow is performed to determine the contribution of each bus to the load curtailment, with a focus on minimizing the total power not supplied. If it is a failure state,
the amount of load curtailment (LC) for the whole system and the share of each load bus in this
amount are determined. These values are saved for each sampled state and then used to calculate
adequacy indices for the whole system and for load buses. Each state is evaluated using a linear
programming optimization model based on DC load flow equations. For the sampled state, based
on network configuration and status of transmission lines, the susceptance matrix B is modified.
The amount of available real power generation at each PV bus (generation bus) is also updated
according to the status of generation units installed at such a bus. In this optimization problem,
the power generated at each bus and status of transmission lines along with the configuration of
system and transmission line parameters are known. Moreover, the minimum and maximum
amounts of transmission line capacity and amount of peak load in each bus are given. The power
generated and angle at each bus (except the reference bus) are variables of the problem. The
objective of this optimization linear program is to minimize the total LC for the whole system:

min ∑_{i=1}^{n} LC_i    (1.18)

where LC_i is the load curtailment at bus i and n is the total number of system buses.
This optimization problem is subject to the following constraints:
• Real power balance at each bus:

  PD_i = PG_i − ∑_{j=1}^{k} p_ij + LC_i    (1.19)

  where PD_i is the load demand at bus i at the peak load condition, PG_i is the real power generation at bus i, and p_ij is the power flow from bus i to bus j (for the k lines connected to bus i).
• Real power flow limits on each transmission line:

  −p_ij^max ≤ p_ij ≤ p_ij^max    (1.20)

• Load curtailment limits at each load bus:

  0 ≤ LC_i ≤ PD_i    (1.21)

  where PD_i is the load demand at bus i at the peak load condition.

• Maximum and minimum available real power at each PV bus:

  PG_i^min ≤ PG_i ≤ PG_i^max    (1.22)

• DC load flow on each transmission line:

  p_ij = (θ_i − θ_j) / X_ij    (1.23)

  where θ_i and θ_j are the angles at buses i and j (with bus 1 assumed to be the reference bus with θ_1 = 0) and X_ij is the reactance value for line i to j (which is provided in the transmission line data).
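The LP of Eqs. (1.18) to (1.23) can be assembled, for illustration, with scipy.optimize.linprog; here B is assumed to be the outage-modified susceptance matrix (so that nodal injections equal B·θ), and all function and variable names are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def min_load_curtailment(B, lines, X, p_max, pg_max, pd):
    # Decision variables x = [PG (n), theta (n), LC (n)].
    n = len(pd)
    c = np.concatenate([np.zeros(2 * n), np.ones(n)])   # Eq. (1.18): min sum of LC
    # Eq. (1.19) in matrix form: PG_i - (B @ theta)_i + LC_i = PD_i at every bus.
    A_eq = np.hstack([np.eye(n), -B, np.eye(n)])
    b_eq = np.asarray(pd, dtype=float)
    # Eqs. (1.20)/(1.23): -p_max <= (theta_i - theta_j) / X_ij <= p_max per line.
    A_ub, b_ub = [], []
    for (i, j), x_ij, pmax in zip(lines, X, p_max):
        row = np.zeros(3 * n)
        row[n + i], row[n + j] = 1.0 / x_ij, -1.0 / x_ij
        A_ub.append(row);  b_ub.append(pmax)
        A_ub.append(-row); b_ub.append(pmax)
    bounds = ([(0, g) for g in pg_max]                   # Eq. (1.22), with PG_min = 0
              + [(0, 0)] + [(None, None)] * (n - 1)      # bus 1 is the reference bus
              + [(0, d) for d in pd])                    # Eq. (1.21): 0 <= LC_i <= PD_i
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.fun, res.x[2 * n:]                        # total LC and per-bus LC
```

A positive total curtailment marks the sampled state as a failure state, and the per-bus LC values give each load bus's share in it.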
TABLE 1.1
Reliability Results for the Compared Algorithms—RTS-79

Measure    Unit Addition    GA       GBSO
LOLE       9.355            9.324    8.890
EENS       1168             1163     1085
1.4.4 Experimental Results
In this section, GBSO, PSO, and GA are compared when applied to the problem at hand. All algorithms have a population size of 50. For GBSO, all parameters are set as in [14]. For PSO, w = 0.7, c1 = 1.9, and c2 = 2.1. For GA, the mutation probability Pm = 0.04, and mutation is applied by flipping one randomly selected bit of the chromosome. A one-point crossover operator is applied with a probability Pc = 0.8. All algorithms are run for 20 independent runs with 1000 iterations per run.
GBSO is used to decrease the number of sampled states needed to calculate the reliability indices. In other words, instead of evaluating all 2^(N_t + N_g) possible states, the most probable failure states are sampled and saved using BSO. Each system state is considered as an idea in BSO. First, a set of system states is generated randomly. The fitness of an idea is defined as the state probability PS_j. In the adopted GBSO, fitness-based grouping is used to cluster system states, with the best state (the state with the highest probability) selected as the cluster center. In each iteration, new sample states are generated, and failure states are saved and used to calculate the reliability indices.
GBSO is tested on RTS-79, which consists of 32 generating units. Table 1.1 shows a comparison between the unit addition method, GA [69], and GBSO.
GBSO has also been tested on the Roy Billinton Test System (RBTS), which has 11 generation units, installed at buses 1 and 2, and 9 transmission lines, as shown in Figure 1.3. The results of GBSO are given in Table 1.2. The results are compared with the sequential sampling and state sampling methods in [70] and PSO in [71].
TABLE 1.2
Reliability Results for the Compared Algorithms—RBTS

Measure    Sequential Sampling    State Sampling    GA        PSO        GBSO
LOLE       86.399                 88.58             85.02     85.79      80.41
EENS       1081.01                1082.63           1079.4    1081.02    998.67
• Although BSO has many parameters (population size, cluster size, and probabilities),
only one identified publication attempted to carry out a full sensitivity study to identify the best combination of parameter settings for BSO. An important direction is to develop
an adaptive mechanism for these parameters to alleviate the burden of having to set
these parameters for different problems.
• Only one identified publication attempted to address the identified limitation of the
update equation. More research work could be conducted on improving the update strat-
egies to build on old information.
• The application of BSO algorithms has not been fully explored in large-scale optimization.
• Although many BSO variants emerged in the last 3 years, new variants do not often
compare with all older variants but rather with a small set. A good direction is to compare the most prominent BSO variants to date on different benchmarks to identify the
state-of-the-art algorithms among these variants.
REFERENCES
1. Daniel Molina, Javier Poyatos, Javier Del Ser, Salvador García, Amir Hussain, and Francisco Herrera. Comprehensive taxonomies of nature- and bio-inspired optimization: Inspiration versus algorithmic behavior, critical analysis and recommendations. https://arxiv.org/abs/2002.08136, 2020.
2. Y. Shi. Brain storm optimization algorithm. In Proc. International Conference on Swarm Intelligence,
pages 303–309, 2011.
3. Y. Shi. An optimization algorithm based on brainstorming process. International Journal of Swarm
Intelligence Research, 2(4):35–62, 2011.
4. S. Cheng, Q. Qin, J. Chen, and Y. Shi. Brain storm optimization algorithm: A review. Artificial
Intelligence Review, 46:1–14, 2016.
5. Shi Cheng, Yifei Sun, Junfeng Chen, Quande Qin, Xianghua Chu, Xiujuan Lei, and Yuhui Shi.
A comprehensive survey of brain storm optimization algorithms. In Proc. IEEE Congress on
Evolutionary Computation, pages 1637–1644, 2017.
6. Z. Zhan, J. Zhang, Y. Shi, and H. Liu. A modified brain storm optimization algorithm. In Proc. of
IEEE Congress on Evolutionary Computation, pages 1969–1976, 2012.
7. Z. Cao, Y. Shi, X. Rong, B. Liu, Z. Du, and B. Yang. Random grouping brain storm optimization
algorithm with a new dynamically changing step size. In Proc. International Conference on Swarm
Intelligence, pages 364–387, 2015.
8. J. Chen, S. Cheng, Y. Chen, Y. Xie, and Y. Shi. Enhanced brain storm optimization algorithm for
wireless sensor networks deployment. In Proc. International Conference on Swarm Intelligence,
pages 373–381, 2015.
9. J. Chen, J. Wang, S. Cheng, and Y. Shi. Brain storm optimization with agglomerative hierarchi-
cal clustering analysis. In Proc. 7th International Conference on Swarm Intelligence, ICSI, pages
115–122, 2016.
10. S. Cheng, Y. Shi, Q. Qin, Q. Zhang, and R. Bai. Population diversity maintenance in brain storm
optimization algorithm. Journal of Artificial Intelligence and Soft Computing Research, 4(2):83–97,
2014.
11. S. Cheng, Y. Shi, Q. Qin, T. O. Ting, and R. Bai. Maintaining population diversity in brain
storm optimization algorithm. In Proc. Of IEEE Congress on Evolutionary Computation, pages
3230–3237, 2014.
12. Mohammed El-Abd. Brain storm optimization algorithm with re-initialized ideas and modified step
size. In Proc. IEEE Congress on Evolutionary Computation, pages 2682–2686, 2016.
13. D. Zhou, Y. Shi, and S. Cheng. Brain storm optimization algorithm with modified step-size and
individual generation. In Proc. International Conference on Swarm Intelligence, pages 243–252,
2012.
14. M. El-Abd. Global-best brain storm optimization algorithm. Swarm and Evolutionary Computation,
37:27–44, 2017.
15. J. J. Liang, B-Y. Qu, and P. N. Suganthan. Problem definitions and evaluation criteria for the CEC
2014 special session and competition on single objective real-parameter numerical optimization.
Technical report, Computational Intelligence Laboratory, Zhengzhou University, Zhengzhou, China
and Nanyang Technological University, Singapore, 2014.
16. Zhenshou Song, Jiaqi Peng, Chunquan Li, and Peter X. Liu. A simple brain storm optimization algo-
rithm with a periodic quantum learning strategy. IEEE Access, 6:19968–19983, 2018.
17. J. J. Liang, B. Y. Qu, P. N. Suganthan, and A. G. Hernandez-Diaz. Problem definitions and evaluation
criteria for the CEC2013 special session on real-parameter optimization. Technical Report 201212,
Zhengzhou University, China, 2013.
18. Shi Cheng, Junfeng Chen, Xiujuan Lei, and Yuhui Shi. Locating multiple optima via brain storm
optimization algorithms. IEEE Access, 6:17039–17049, 2018.
19. Yuhui Shi. Brain storm optimization algorithm in objective space. In Proc. IEEE Congress on
Evolutionary Computation, pages 1227–1234, 2015.
20. Chunquan Li, Zhenshou Song, Jinghui Fan, Qiangqiang Cheng, and Peter X. Liu. A brain storm
optimization with multi-information interactions for global optimization problems. IEEE Access,
6:19304–19323, 2018.
21. Yang Yu, Shangce Gao, Yirui Wang, Jiujun Cheng, and Yuki Todo. ASBSO: An improved brain
storm optimization with flexible search length and memory-based selection. IEEE Access,
6:36977–36994, 2018.
22. Chunquan Li, Dujuan Hu, Zhenshou Song, Feng Yang, Zu Luo, Jinghui Fan, and Peter X. Liu. A
vector grouping learning brain storm optimization algorithm for global optimization problems. IEEE
Access, 6:78193–78213, 2018.
23. N. H. Awad, M. Z. Ali, J. J. Liang, B. Y. Qu, and P. N. Suganthan. Problem definitions and evaluation
criteria for the CEC 2017 special session and competition on single objective bound constrained real-
parameter numerical optimization. Technical report, Nanyang Technological University, Singapore,
2017.
24. Wei Chen, Yingying Cao, Shi Cheng, Yifei Sun, Qunfeng Liu, and Yun Li. Simplex search-based
brain storm optimization. IEEE Access, 6:75997–76006, 2018.
25. Chunquan Li, Zu Luo, Zhenshou Song, Feng Yang, Jinghui Fan, and Peter X. Liu. An
enhanced brain storm sine cosine algorithm for global optimization problems. IEEE Access,
7:28211–28229, 2019.
26. Seyedali Mirjalili. SCA: A sine cosine algorithm for solving optimization problems. Knowledge-
Based Systems, 96:120–133, 2016.
27. Yang Yu, Shangce Gao, Yirui Wang, Zhenyu Lei, Jiujun Cheng, and Yuki Todo. A multiple
diversity-driven brain storm optimization algorithm with adaptive parameters. IEEE Access,
7:126871–126888, 2019.
28. Yuehong Sun, Jianxiang Wei, Tingting Wu, Kelian Xiao, Jianyang Bao, and Ye Jin. Brain storm
optimization using a slight relaxation selection and multi-population based creating ideas ensemble.
Applied Intelligence, 50:3137–3161, 2020.
29. P. N. Suganthan, N. Hansen, J. J. Liang, K. Deb, Y.-P. Chen, A. Auger, and S. Tiwari. Problem
definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization.
Technical Report 2005005, IIT Kanpur, India, 2005.
30. Zijian Cao and Lei Wang. An active learning brain storm optimization algorithm with a dynamically
changing cluster cycle for global optimization. Cluster Computing, 22:1413–1429, 2019.
31. Yirui Wang, Shangce Gao, Yang Yu, and Zhe Xu. The discovery of population interaction with a
power law distribution in brain storm optimization. Memetic Computing, 11:65–87, 2019.
32. Yali Wu, Xinrui Wang, Yulong Fu, and Guoting Li. Many-objective brain storm optimization algo-
rithm. IEEE Access, 7:186572–186586, 2019.
33. K. Deb and H. Jain. An evolutionary many-objective optimization algorithm using reference-
point-based nondominated sorting approach, Part I: Solving problems with box constraints. IEEE
Transactions on Evolutionary Computation, 18(4):577–601, 2014.
34. K. Li, K. Deb, Q. Zhang, and S. Kwong. An evolutionary many-objective optimization algorithm
based on dominance and decomposition. IEEE Transactions on Evolutionary Computation,
19(5):694–716, 2015.
35. K. Deb. Multi-objective genetic algorithms: Problem difficulties and construction of test problems.
Evolutionary Computation, 7(3):205–230, 1999.
36. R. Cheng, M. Li, Y. Tian, X. Zhang, S. Yang, Y. Jin, and X. Yao. A benchmark test suite for evolution-
ary many-objective optimization. Complex Intelligent Systems, 3(1):67–81, 2017.
37. Cai Dai and Xiujuan Lei. A multiobjective brain storm optimization algorithm based on decomposi-
tion. Complexity, 2019:5301284.
38. Q. Zhang and H. Li. MOEA/D: a multiobjective evolutionary algorithm based on decomposition.
IEEE Transactions on Evolutionary Computation, 11(6):712–731, 2007.
39. I. A. Doush, M. Q. Bataineh, and M. El-Abd. The hybrid framework for multi-objective evolutionary
optimization based on harmony search algorithm. In Real-Time Intelligent Systems, pages 134–142,
2017.
40. Q. F. Zhang and P. N. Suganthan. Final report on CEC’09 MOEA competition. Technical report,
The School of CS and EE, University of Essex, UK and School of EEE, Nanyang Technological
University, Singapore, 2009.
41. Yinan Guo, Huan Yang, Meirong Chen, Dunwei Gong, and Shi Cheng. Grid-based dynamic robust
multi-objective brain storm optimization algorithm. Soft Computing, 24:7395–7415, 2020.
42. Adriana Cervantes-Castillo and Efren Mezura-Montes. A modified brain storm optimization algo-
rithm with a special operator to solve constrained optimization problems. Applied Intelligence,
50:4145–4161, 2020.
43. Farhad Pourpanah, Ran Wang, Xizhao Wang, Yuhui Shi, and Danial Yazdani. MBSO: A multi-
population brain storm optimization for multimodal dynamic optimization problems. In Proc. IEEE
Symposium Series on Computational Intelligence, pages 673–679, 2019.
44. Adriana Cervantes-Castillo and Efren Mezura-Montes. A study of constraint-handling techniques in
brain storm optimization. In Proc. IEEE Congress on Evolutionary Computation, pages 3740–3746,
2016.
45. Farhad Pourpanah, Chee Peng Lim, Xizhao Wang, Choo Jun Tan, Manjeevan Seera, and Yuhui Shi.
A hybrid model of fuzzy min-max and brain storm optimization for feature selection and data clas-
sification. Neurocomputing, 333:440–451, 2019.
46. Farhad Pourpanah, Yuhui Shi, Chee Peng Lim, Qi Hao, and Choo Jun Tan. Feature selection based
on brain storm optimization for data classification. Applied Soft Computing, 80:761–775, 2019.
47. Farhad Pourpanah, Ran Wang, and Xizhao Wang. Feature selection for data classification based on
binary brain storm optimization. In Proc. 6th International Conference on Cloud Computing and
Intelligent Systems, pages 108–113, 2019.
48. Naoya Otaka, Yoshikazu Fukuyama, Yu Kawamura, Kenya Murakami, Adamo Santana, Tatsuya
Iizaka, and Tetsuro Matsui. Refrigerated showcase fault detection by a correntropy based artifi-
cial neural network using modified brain storm optimization. In Proc. IEEE Symposium Series on
Computational Intelligence, pages 808–814, 2019.
49. S. Thanga Revathi, N. Ramaraj, and S. Chithra. Brain storm-based whale optimization algorithm for
privacy protected data publishing in cloud computing. Cluster Computing, 22:3521–3530, 2019.
50. Zeyuan Yan, Chunquan Li, Zhenshou Song, Liwei Xiong, and Chaoyang Luo. An improved brain
storming optimization algorithm for estimating parameters of photovoltaic models. IEEE Access,
7:77629–77641, 2019.
51. Qian Zhou and Shesheng Gao. 3D UAV path planning using global-best brain storm optimization
algorithm and artificial potential field. In Proc. International Conference on Intelligent Robotics and
Applications, pages 765–775, 2019.
52. Eva Tuba, Ivana Strumberger, Dejan Zivkovic, Nebojsa Bacanin, and Milan Tuba. Mobile robot path
planning by improved brain storm optimization algorithm. In Proc. IEEE Congress on Evolutionary
Computation, pages 1–8, 2018.
53. Cong Zhang, Xiaobin Xu, Yuhui Shi, Yimin Deng, Cong Li, and Haibin Duan. Binocular pose esti-
mation for UAV autonomous aerial refueling via brain storm optimization. In Proc. IEEE Congress
on Evolutionary Computation, pages 254–261, 2019.
54. Alwaleed Aldhafeeri and Yahya Rahmat-Samii. Brain storm optimization for electromagnetic appli-
cations: Continuous and discrete. IEEE Transactions on Antennas and Propagation,
67(4):2710–2722, 2019.
55. Yang Shen, Mingde Liu, Jian Yang, Yuhui Shi, and Martin Middendorf. A hybrid swarm intelligence
algorithm for vehicle routing problem with time windows. IEEE Access, 8:93882–93895, 2020.
56. Shunshun Peng, Hongbing Wang, and Qi Yu. Multi-clusters adaptive brain storm optimization algo-
rithm for QoS-aware service composition. IEEE Access, 8:48822–48835, 2020.
57. Xing-Rui Chen, Jun-Qing Li, Yuyan Han, Ben Niu, Lili Liu, and Biao Zhang. An improved brain
storm optimization for a hybrid renewable energy system. IEEE Access, 7:49513–49526, 2019.
58. Mayuko Sato, Yoshikazu Fukuyama, Mohammed El-Abd, Tatsuya Iizaka, and Tetsuro Matsui. Total
optimization of energy networks in smart city by cooperative coevolution using global-best brain
storm optimization. In Proc. IEEE Congress on Evolutionary Computation, pages 681–688, 2019.
59. Xingsi Xue and Jiawei Lu. A compact brain storm algorithm for matching ontologies. IEEE Access,
8:43898–43907, 2020.
60. C. Narmatha, Sarah Mustafa Eljack, Afaf Abdul Rahman Mohammed Tuka, S. Manimurugan, and
Mohammed Mustafa. A hybrid fuzzy brain-storm optimization algorithm for the classification of
brain tumor MRI images. Journal of Ambient Intelligence and Humanized Computing, 2020.
61. Jian-Hua Hao, Jun-Qing Li, Yu Du, Mei-Xian Song, Peng Duan, and Ying-Yu Zhang. Solving dis-
tributed hybrid flowshop scheduling problems by a hybrid brain storm optimization algorithm. IEEE
Access, 7:66879–66894, 2019.
62. Zhe Xu, Xiaofang Li, Xianglian Meng, and Yanting Liu. A distributed brain storm optimization for
numerical optimization and graph planarization. IEEE Access, 7:39770–39781, 2019.
63. D. Palanikkumar and S. Priya. Brain storm optimization graph theory (BSOGT) and energy resource
aware virtual network mapping (ERVNM) for medical image system in cloud. Journal of Medical
Systems, 43:37, 2019.
64. Z. Cao, X. Hei, L. Wang, Y. Shi, and X. Rong. An improved brain storm optimization with differen-
tial evolution strategy for applications of ANNs. Mathematical Problems in Engineering, 2015:1–18,
2015.
65. Kofi Baah Boamah, Jianguo Du, Daniel Adu, Claudia Nyarko Mensah, Lamini Dauda, and
Muhammad Aamir Shafique Khan. Predicting the carbon dioxide emission of China using a novel aug-
mented hypo-variance brain storm optimisation and the impulse response function. Environmental
Technology, 2020.
66. R. Billinton and R. N. Allan. Reliability Assessment of Large Electric Power Systems. Kluwer
Academic Publishers, Boston, 1988.
67. Mohammed Benidris, Salem Elsaiah, and Joydeep Mitra. Power system reliability evaluation using a
state space classification technique and particle swarm optimisation search method. IET Generation,
Transmission and Distribution, 9(14):1865–1873, 2015.
68. Mohamadreza Gholami, Reza Sharifi, and Hamid Radmanesh. Development of composite power
system effective load duration curves by using a new optimization method for assessment compos-
ite generation and transmission reliability. International Journal of Power and Energy Research,
1(1):41–52, 2017.
69. N. Samaan and C. Singh. Adequacy assessment of power system generation using a modified simple
genetic algorithm. IEEE Transactions on Power Systems, 17(4):974–981, 2002.
70. R. Billinton and A. Sankarakrishnan. A comparison of Monte Carlo simulation techniques for com-
posite power system reliability assessment. In Proc. IEEE Communications, Power, and Computing,
pages 145–150, 1995.
71. Mohammadreza Gholami, Seied Hadi Hoseini, and Meisam Mohamad Taheri. Assessment of power
composite system annualized reliability indices based on improved particle swarm optimization and
comparative study between the behaviour of GA and PSO. In Proc. IEEE International Power and
Energy Conference, 2008.
2 Fish School Search: Account for the First Decade
CONTENTS
2.1 Introduction.........................................................................................................................22
2.2 Original Fish School Search................................................................................................23
2.3 Variants of Fish School Search............................................................................................25
2.3.1 Mono-Objective FSS...............................................................................................25
2.3.1.1 Binary Fish School Search (BFSS)...........................................................25
2.3.1.2 Improved Binary Fish School Search (IBFSS).........................................26
2.3.1.3 Simplified Binary Fish School Search (SBFSS).......................................27
2.3.1.4 Mixed (MFSS)..........................................................................................28
2.3.2 Multi/Many-Objective.............................................................................................29
2.3.2.1 MOFSS.....................................................................................................29
2.3.2.2 MOBFSS...................................................................................................30
2.3.2.3 Weight-Based Many-Objective Fish School Search (wmoFSS)..................31
2.3.3 Multimodal (Niching Algorithms)..........................................................................31
2.3.3.1 Density-Based Fish School Search (dFSS)...............................................31
2.3.3.2 Weight-Based Fish School Search (wFSS)...............................................33
2.3.4 Parallelization..........................................................................................................34
2.3.4.1 Parallel Fish School Search (pFSS)..........................................................34
2.3.4.2 Multi-Threaded Fish School Search (MTFSS).........................................34
2.4 Applications.........................................................................................................................35
2.4.1 Clustering.................................................................................................................35
2.4.2 Feature Selection.....................................................................................................35
2.4.3 Image Compression and Thresholding....................................................................36
2.4.4 Knapsack Problem...................................................................................................36
2.4.5 Recommendation Systems.......................................................................................36
2.4.6 Smart Grids..............................................................................................................36
2.4.7 Supply Chain Network Planning.............................................................................37
2.4.8 Electrical Impedance Tomography..........................................................................37
2.1 INTRODUCTION
Evolutionary and swarm-based algorithms are optimization methods greatly inspired in the the-
ory of evolution of species and gregarious animal behavior, respectively. Both comprise two ver-
satile families of algorithms within the computational intelligence discipline. The bio-inspired
aspect of both families’ population-based techniques brings vast possibilities and capabilities for
solving hard search and optimization tasks in highly nonlinear complex contexts. Applications
excel in discrete and in continuous search spaces and are very popular in an ample range of chal-
lenging domains such as scheduling, routing, complex networks, and others [1, 2].
In particular, swarm-based algorithms profit significantly from the cooperation and interaction of their processing units, the simple reactive particles of the swarm. These properties endow the swarm with fast and effective proactive adaptive responses. Moreover, collectively, the particles can seamlessly carry out an unusual combination of local and global search. The evoked behavior of the swarm results in a powerful abstraction, with a fascinating practical emergence of self-organization and smart communication strategies. The swarm dynamics mimic the group behavior observed in ant colonies, schools of fish, flocks of birds, packs of wolves, and others [3].
Each algorithm in these swarm families has parameters that control the behavior evoked during the search or optimization according to a given fitness function. Tuning these parameters is not an easy task for practitioners, because it is not simple to define parameter values that evoke optimal behavior, nor to understand how each parameter relates to the desired results.
Since its inception, the swarm-based approach has become an unprecedented collection of tools capable of tackling large and challenging problems. This is mostly because of their inner functioning mechanisms, which allow smart internal synergistic gains, so that the well-known curse of dimensionality is somewhat tamed. Consequently, scalability is no longer the Achilles' heel. From a research standpoint, fortunately, many other aspects of solving real-world problems have pressed for new developments, namely: (1) varied data representations (e.g., binary, discrete, continuous, and mixed); (2) multimodal output; (3) multi-objective capabilities; and (4) more effective use of computational resources (e.g., external parallelization).
Fish School Search (FSS) is a swarm-intelligence subfamily of algorithms proposed by Bastos Filho and Lima Neto in 2008 and first published in 2009 [2]. In this algorithm, each fish's weight represents the success obtained during the search and directly influences the individual and school movements. The embedded mechanisms of feeding and coordinated movement make all fish "swim" toward the positive gradient in order to gain weight (and hence find better local and global positions). By design, the heavier the fish, the more influential it is in guiding the search.
This definition is seminal for the swift functioning of FSS, which uses the school's barycenter as a frame of reference to decide overall directions toward better regions of the search space. Another unique feature of FSS, also based on the barycenter, is its ability to switch automatically between exploration and exploitation. The barycenter is also very handy for parallelizing FSS, as it is one of the few global variables that need to be shared in parallel implementations. This chapter accounts for several variations of FSS produced over the last decade, which address the pressing aspects of real-world optimization.
Many bio-inspired algorithms have been proposed in the last decade, and some papers have shown that most of the recent proposals use similar computational mechanisms [4]. Indeed, only a few basic mechanisms can be observed for representing and aggregating the useful information gathered by the simple reactive agents along the search process. Among these, we can cite positional information (deployed in particle swarm-based algorithms), markers in the environment (employed in artificial bee colony-based and ant colony optimization-based algorithms), and indicators that accumulate the degree of success of the simple agents (proposed in FSS and also deployed in Glowworm Swarm Optimization).
The idea of accumulating success along the search is that an agent with a record of success is worth influencing others. FSS works well for continuous optimization problems with a single objective, and it has influenced other researchers to propose variations for other classes of problems, such as binary optimization, multi-objective optimization, many-objective optimization, and multimodal optimization. This chapter presents some proposals for binary optimization, three approaches for multi- and many-objective optimization, and two different multimodal optimization proposals. We also show two different approaches for parallel processing, which aim to reduce the processing time. We close the chapter with examples of applications of these recent approaches to real-world problems.
In the individual movement, each fish moves according to the displacement

$$\Delta x_i(t+1) = step_{ind}(t) \times rand \times direction$$

where $\Delta x_i(t+1)$ is the movement displacement; $step_{ind}(t)$ is the individual step parameter at iteration $t$; and $rand$ and $direction$ are two random values: $rand$ is generated by a uniform distribution in the interval $[0,1]$, and $direction$ is randomly chosen between $-1$ and $1$. The parameter $step_{ind}$ can be decreased linearly to provide better exploitation in the later iterations.
The individual movement is only actually performed if the new temporary position (Eq. 2.3) is better than the current fish position; in that case, the current fish position is updated.

$$temp_i = x_i(t) + \Delta x_i(t+1) \qquad (2.3)$$
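As a concrete illustration, the following minimal Python sketch implements the individual movement with this greedy acceptance rule. It assumes maximization of the fitness f and draws rand and direction independently per dimension (one common reading of the operator); the function name and signature are ours, not the chapter's.

```python
import numpy as np

def individual_movement(x, f, step_ind, rng):
    """Candidate individual move for one fish, with greedy acceptance.

    x        : current position of the fish (1-D NumPy array)
    f        : fitness function to be maximized (assumption)
    step_ind : current value of the individual step parameter
    rng      : a numpy.random.Generator
    Returns the (possibly unchanged) position and the displacement applied.
    """
    # rand in [0, 1] and direction in {-1, +1}, drawn per dimension
    rand = rng.uniform(0.0, 1.0, size=x.shape)
    direction = rng.choice([-1.0, 1.0], size=x.shape)
    delta = step_ind * rand * direction
    temp = x + delta                        # Eq. 2.3
    if f(temp) > f(x):                      # keep the move only if it improves
        return temp, delta
    return x, np.zeros_like(x)              # unsuccessful fish stay put
```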
The collective-instinctive movement is based on the instantaneous success of all fish of the school. In this movement, the fish that were successful in the individual movement pull the school in their direction. Each fish is repositioned using Eq. 2.4:

$$x_i(t+1) = x_i(t) + \frac{\displaystyle\sum_{i=1}^{N} \Delta x_{ind_i}\,\{f[x_i(t+1)] - f[x_i(t)]\}}{\displaystyle\sum_{i=1}^{N} \{f[x_i(t+1)] - f[x_i(t)]\}} \qquad (2.4)$$
where $\Delta x_{ind_i}$ is the displacement of fish $i$ due to the individual movement in the current iteration.
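A vectorized rendering of Eq. 2.4 could look as follows; positions, deltas, and delta_f are illustrative names for the school's positions, the individual displacements, and the corresponding fitness improvements (zero for fish whose individual move was rejected).

```python
import numpy as np

def collective_instinctive(positions, deltas, delta_f):
    """Apply the shared drift of Eq. 2.4 to the whole school.

    positions : (N, D) array of fish positions after the individual movement
    deltas    : (N, D) array of individual displacements (zero if unsuccessful)
    delta_f   : (N,)  array of fitness improvements f[x(t+1)] - f[x(t)]
    """
    denom = delta_f.sum()
    if denom == 0.0:                  # no fish improved: no collective drift
        return positions
    # fitness-weighted average of the successful displacements
    drift = (deltas * delta_f[:, None]).sum(axis=0) / denom
    return positions + drift
```

Note that every fish receives the same drift vector, so the school moves as one body toward the region where the most successful individual moves occurred.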
The collective-volitive movement is the last movement in an FSS iteration, and it is important for improving the exploration capability of the algorithm. It is performed as a small step drift applied to every fish position with regard to the school's barycenter. In summary, if the fish school is increasing its weight, the radius of the school should contract; otherwise, it should dilate. To do that, the barycenter is calculated using Eq. 2.5:

$$Bar(t) = \frac{\displaystyle\sum_{i=1}^{N} x_i(t)\,W_i(t)}{\displaystyle\sum_{i=1}^{N} W_i(t)} \qquad (2.5)$$
If the overall weight of the school increases, the new position is calculated using Eq. 2.6; otherwise, Eq. 2.7 is used:

$$x_i(t+1) = x_i(t) - step_{vol} \times rand \times [x_i(t) - Bar(t)] \qquad (2.6)$$

$$x_i(t+1) = x_i(t) + step_{vol} \times rand \times [x_i(t) - Bar(t)] \qquad (2.7)$$

where $rand$ is a random number uniformly generated in the interval $[0,1]$ and $step_{vol}$ is an FSS parameter that is decreased linearly along the iterations.
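Eqs. 2.5 through 2.7 can be sketched together as below; the flag weight_increased is assumed to come from comparing the school's total weight before and after the feeding step, which this excerpt does not reproduce.

```python
import numpy as np

def collective_volitive(positions, weights, weight_increased, step_vol, rng):
    """Contract toward (Eq. 2.6) or dilate away from (Eq. 2.7) the barycenter."""
    # Eq. 2.5: weight-weighted barycenter of the whole school
    bar = (positions * weights[:, None]).sum(axis=0) / weights.sum()
    rand = rng.uniform(0.0, 1.0, size=positions.shape)
    sign = -1.0 if weight_increased else 1.0   # heavier school => contract
    return positions + sign * step_vol * rand * (positions - bar)
```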
Finally, the pseudocode for FSS is described in Algorithm 2.1.
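Since Algorithm 2.1 is not reproduced in this excerpt, the sketch below assembles the operators described above into one plausible FSS loop. The feeding rule (W_i incremented by delta_f_i / max|delta_f|, bounded by a weight scale) is our assumption based on the usual FSS formulation, not on this chapter's text, and all parameter defaults are illustrative.

```python
import numpy as np

def fss(f, lo, hi, n_fish=30, n_iter=500,
        step_ind0=0.1, step_vol0=0.01, w_scale=500.0, seed=0):
    """Minimal FSS loop (sketch only). Maximizes f over the box [lo, hi]."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi, size=(n_fish, lo.size))
    w = np.full(n_fish, w_scale / 2.0)            # initial weights (assumption)
    w_prev = w.sum()
    best_x, best_f = None, -np.inf
    for t in range(n_iter):
        decay = 1.0 - t / n_iter                  # linear decay of both steps
        step_ind = step_ind0 * decay * (hi - lo)
        step_vol = step_vol0 * decay * (hi - lo)
        # individual movement with greedy acceptance (Eq. 2.3)
        fit = np.array([f(xi) for xi in x])
        temp = np.clip(x + step_ind * rng.uniform(size=x.shape)
                       * rng.choice([-1.0, 1.0], size=x.shape), lo, hi)
        new_fit = np.array([f(xi) for xi in temp])
        ok = new_fit > fit
        deltas = np.where(ok[:, None], temp - x, 0.0)
        delta_f = np.where(ok, new_fit - fit, 0.0)
        x = np.where(ok[:, None], temp, x)
        # feeding (assumed rule): successful fish gain weight
        if delta_f.max() > 0.0:
            w = np.clip(w + delta_f / delta_f.max(), 1.0, w_scale)
        # collective-instinctive drift (Eq. 2.4)
        if delta_f.sum() > 0.0:
            x = x + (deltas * delta_f[:, None]).sum(axis=0) / delta_f.sum()
        # collective-volitive movement (Eqs. 2.5-2.7)
        bar = (x * w[:, None]).sum(axis=0) / w.sum()
        sign = -1.0 if w.sum() > w_prev else 1.0  # heavier school => contract
        w_prev = w.sum()
        x = np.clip(x + sign * step_vol * rng.uniform(size=x.shape)
                    * (x - bar), lo, hi)
        # track the best solution found so far
        fit = np.array([f(xi) for xi in x])
        if fit.max() > best_f:
            best_f = fit.max()
            best_x = x[fit.argmax()].copy()
    return best_x, best_f
```

For example, fss(lambda v: -np.sum(v**2), lo=[-5.0] * 10, hi=[5.0] * 10) searches for the maximum of the inverted 10-dimensional sphere function.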