
Neural Computing and Applications (2023) 35:14275–14378

https://doi.org/10.1007/s00521-023-08481-5

ORIGINAL ARTICLE

A systematic review of the emerging metaheuristic algorithms on solving complex optimization problems
Oguz Emrah Turgut1 • Mert Sinan Turgut2 • Erhan Kırtepe3

Received: 2 November 2022 / Accepted: 8 March 2023 / Published online: 26 March 2023
© The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2023

Abstract
The scientific field of optimization has witnessed an increasing trend in the development of metaheuristic algorithms within the current decade. The vast majority of the proposed algorithms have been proclaimed by their own developers as superior and highly efficient compared to their contemporary counterparts, a claim that should be verified on a set of benchmark cases if it is to give conducive insights into their true capabilities. This study presents a comprehensive investigation of the general optimization capabilities of recently developed nature-inspired metaheuristic algorithms, which have not been thoroughly discussed in past literature studies due to their recent emergence. To overcome this deficiency in the existing literature, optimization benchmark problems with different functional characteristics are solved by some of the widely used recent optimizers. Unconstrained standard test functions, comprising thirty-four scalable unimodal and multimodal optimization problems with varying dimensionalities, have been solved by these competitive algorithms, and the respective estimated solutions have been evaluated through performance metrics defined by the statistical analysis of the predictive results. Convergence curves of the algorithms have been constructed to observe the evolution trends of the objective function values. To further extend the analysis on unconstrained test cases, the CEC 2013 problems have been considered as comparison tools since they resemble the features of complex real-world problems. The optimization capabilities of eleven metaheuristic algorithms have been comparatively analyzed on twenty-eight multidimensional problems. Finally, fourteen complex engineering problems have been optimized by the algorithms to scrutinize their effectiveness in handling the imposed design constraints.

Keywords Algorithm comparison · Algorithm scalability · Metaheuristic algorithms · Real-world design problems

Corresponding author: Oguz Emrah Turgut, oguzemrah.turgut@bakircay.edu.tr

1 Department of Industrial Engineering, Faculty of Engineering and Architecture, Izmir Bakircay University, Menemen, İzmir, Turkey
2 Department of Mechanical Engineering, Faculty of Engineering, Ege University, Bornova, İzmir, Turkey
3 Department of Motor Vehicles and Transportation Technologies, Şırnak University, Şırnak, Turkey

1 Introduction

Recent advances in micro-chip technology allow expensive computations to be carried out within a limited runtime, which minimizes the effort needed to develop efficient algorithms devoted to yielding feasible outcomes without wasting redundant computational resources. Rapid yet controlled improvements in the production of high-performance computer processors enable researchers to focus on developing more versatile, multi-functional, and prolific algorithmic schemes, which paves the way for the proposition of various types of metaheuristic optimization algorithms for solving a variety of optimization problems. Optimization problems are inherently complex, as they involve more than one local solution apart from the global optimum. They can be categorized into distinctive branches depending on the characteristics of the problem to be solved: constrained or unconstrained, discrete or continuous, and single-objective or multi-objective. Metaheuristic algorithms have become popular among the research community as they do not rely on derivative information of the search space to reach the global optimum and do not require initial guess solutions, thanks to the randomness generated by the

responsible search agents, which enables them to eliminate local pitfalls over the solution domain [1]. They can be perceived as a high-level optimization framework applicable to a wide range of problem domains, guided by a set of defined search strategies to develop an efficient heuristic algorithm. They can successfully cope with the challenges of nonlinear or non-convex problems while expending a relatively low computational budget compared to traditional optimizers. This prolific feature of metaheuristic optimizers makes them one step ahead of traditional optimizers such as Newton-based algorithms and gradient descent optimizers. Despite their ease of implementation, conventional optimization methods are easily trapped in local solutions and desperately stagnate at these obstructive points during the course of iterations, leading to inferior solution outcomes. Metaheuristic algorithms come to the rescue in these unprosperous situations where conventional methods collapse, and employ alternative options to overcome the obstacles of the optimization problem.

Metaheuristics can be broadly classified under four main branches: evolutionary algorithms, physical-based algorithms, swarm-based algorithms, and human-based algorithms. Evolutionary Algorithms (EAs) simulate the tendencies of living organisms relying on foundational concepts of Darwinian-like natural selection [2] to develop intelligent optimization techniques. Genetic Algorithms [3] are the most famous member of EAs, imitating different aspects of biological evolution based on the principles of natural selection. Differential Evolution [4] is another reputed optimizer, iteratively adjusting the set of candidate solutions by simulating the basic principles of Darwinian evolution to achieve the optimal answer to the problem. Evolutionary Strategies (ES) [5], Genetic Programming (GP) [6], and Biogeography-Based Optimization (BBO) [7] also belong to the group of well-known EAs. Physical-based algorithms are conceptualized on the fundamental principles of Newton's laws of physics. The Gravitational Search Algorithm (GSA) proposed by Rashedi et al. [8] is one of the pioneers of the physics-based algorithms, imitating the gravitational interactions between masses. The Big Bang–Big Crunch (BB–BC) [9] algorithm is inspired by the evolution stages of the universe. In the incipient phase, the Big Bang occurs, in which trial solutions are generated to be used for manipulation in the later stage. The Big Crunch phase modifies the initial solutions iteratively to retain the global best answer of the problem. The Water Cycle Algorithm (WCA) [10] draws its inspiration from the water cycle process in nature, the formation of rivers and streams, and simulates how they flow toward the sea in the real world. Charged System Search (CSS) [11] is inspired by the governing mechanisms of Coulomb's law, where each search agent is an interactive charged particle influencing the others based on their respective fitness values and separation distances. Swarm Intelligence (SI) algorithms are nature-inspired solution strategies that take their main foundations from the collective behaviors in self-organized and decentralized artificial systems. Development of the Particle Swarm Optimization (PSO) [12] algorithm is one of the first pioneering attempts in the gradual evolution of SI-based algorithms. PSO simulates the cooperative behavior of swarming individuals such as birds, insects, herds, etc. Each particle in the swarm takes a different role relying on its search pattern with a view to obtaining available food sources, and benefits from its previous search experience and cumulative domain knowledge to adjust the most conducive search activity to reach the global optimum solution. Ant Colony Optimization (ACO) [13] mimics the intrinsic foraging behaviors of intelligent ants, which is based on following the intensity of the pheromones left by foraging ants on their way to probe around the available food resources. The Artificial Bee Colony (ABC) [14] algorithm is inspired by the intelligent foraging behaviors of artificial bees, which are, in essence, search agents to be iteratively optimized during the course of consecutive function evaluations. The Salp Swarm Optimization (SSA) [15] algorithm is a swarm intelligence metaphor-based optimizer taking its main inspiration from the swarming proclivities of salp individuals while they are navigating across the ocean during foraging activities. Human-based metaheuristic algorithms are based on mathematical models and intelligently devised procedures mimicking the characteristics of human activities. Teaching–Learning-Based Optimization (TLBO) [16] is one of the most famous members of the human-based optimizer family, metaphorically simulating the mutualist teaching and learning process taking place in a classroom. The Poor and Rich Optimization (PRO) [17] algorithm simulates the tedious efforts between two distinct groups comprised of poor and rich individuals to improve their current wealth situations while sharing useful domain knowledge within the whole population. The Harmony Search Optimization Algorithm (HS) [18] is one of the prominent members belonging to this group, simulating the exhaustive process of a musician trying to find the perfect tune during harmony improvisation. The Imperialist Competitive Algorithm (ICA) [19] is a socio-political human-inspired metaheuristic algorithm conceptualized on the imperialist competition among evolving countries, which are essentially search agents of this multi-agent optimization algorithm.

Despite the undisputable success of the literature metaheuristic algorithms, emerging novel algorithms come into existence to fill the gap for solving complex optimization problems where available existing optimizers collapse and are not able to yield feasible solutions. In addition, the notable No Free Lunch theorem [20], which explains

why there is an unceasing need for developing brand-new algorithms despite the abundance of available optimizers of various types in the literature, states that there is no single optimization algorithm capable of solving all kinds of optimization problems. It requires an exhaustive analysis employing various strategies to decide which algorithm performs well for a given optimization problem. Statistical analysis of the compiled set of objective function values has been consistently utilized in previous literature studies in order to decide the suitability of the associated algorithm for the given problem. Some algorithms get better results for specific optimization problems than other methods. These are the main reasons accounting for the enormous exponential growth in the development of metaheuristic optimizers within the recent two decades. Competitive research studies related to the development of effective metaheuristic algorithms during the era between 2010 and 2020 are mostly inspired by either the swarming behavior of a flocking particle in its natural habitat or by simulating the evolutionary mechanisms of living organisms. In many optimization cases, these newly emerging algorithms are able to provide the best possible answer to the problems, surpassing the former existing optimizers with respect to the considered comparative measures.

As the existing literature suggests, metaheuristic algorithms have been developed, modified, or hybridized with other intelligent algorithms for a period of more than two decades. These types of algorithms find their place in many fields of engineering applications ranging from Proton Exchange Membrane Fuel Cell (PEMFC) design [21] to well placement optimization [22]. They are efficient problem-solving strategies even when the objective function is strictly constrained or characteristically has mixed-integer design variables. Continuous search agents are rounded off to their nearest integer values for mixed-integer optimization problems, while the traditional penalty approach is employed to penalize the infeasible solutions obtained during the iterations with a view to converting the constrained optimization problem into an unconstrained one. Decision parameters to be optimized are bounded to their allowable predefined upper and lower limits prior to commencing the optimization process. These features are common to nearly all metaheuristic algorithms and should be carefully practiced to acquire a feasible solution for the problem.

As the extensive literature survey indicates, metaheuristics have become a commonplace approach for solving different domains of optimization problems. However, a formidable question often arises as to their effectivity and convergence capabilities for a general class of problems. Although there are numerous emerging metaheuristic algorithms developed in the existing literature, only a few researchers discuss and address reasonable answers to this raised issue, as it is hard to deal with the complexities of real-world problems since the global optimum is intractable in most of the cases. The problem-independent feature of metaheuristic algorithms still remains questionable to the community, and arduous efforts have been made by researchers to unravel the true nature of these algorithms since their first emergence. Recent studies try to shed light on this intriguing research subject, yet most of them consider a particular problem domain rather than looking from a broader perspective for investigating the comparative performances of the available metaheuristic algorithms. Kumar et al. [23] compared the performances of six metaheuristic optimizers, including Iterated Local Search (ILS), Simulated Annealing (SA), Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Tabu Search (TS), and Crow Search Algorithm (CSA), on quadratic assignment problems and concluded that TS has the lowest deviation and average runtime among them, declaring its superiority over the contestant optimizers. Lara-Montano et al. [24] comparatively investigated the optimization performances of seven metaheuristic methods on a shell-and-tube heat exchanger design optimization problem having mixed-integer decision parameters. It is observed that Differential Evolution and Grey Wolf Optimization (GWO) algorithms provide the most stable and accurate prediction results. Abdor-Sierra et al. [25] performed a comprehensive evaluation of ten different metaheuristic algorithms on solving the inverse kinematics problem of a robot manipulator. Statistical analysis of the algorithm runs reveals that different PSO variants along with the DE algorithm provide the most accurate estimation values and are highly recommended for solving the inverse kinematics of mobile robots. Sonmez [26] considered eight literature optimizers for performance comparison on the optimal design of space trusses and concluded that, when the dimensionality of the design problem is increased to higher extents, computationally effective solutions are obtained from the Jaya, GWO, and ABC algorithms. Ahmet et al. [27] hybridized three emergent physics-inspired metaheuristic algorithms with a multi-layer perceptron to approximate the amount of streamflow in rivers for the near future, where the compiled streamflow data set is collected from 130 years of periodic water level changes in the High Aswan Dam. Meng et al. [28] comprehensively investigated the optimization capabilities of ten different metaheuristic algorithms on various cases of reliability-based design problems. Assessment of the algorithms is carried out on different performance measures, including global convergence, solution accuracy and robustness, and runtime analysis. Predictive results obtained from the benchmark reliability problems indicate that WCA shows superior adaptability to different benchmark cases and is recommended for solving reliability-based

optimization problems. Katebi et al. [29] independently integrated six intelligent metaheuristic optimizers into the active mechanism of a Wavelet-based Linear Quadratic Regulator to conquer the local optimality problem as well as to enhance general system efficiency. ICA achieves the best responses for the optimal control problem. Naranjo et al. [30] applied three different well-reputed metaheuristic optimizers to the multi-objective optimization of the dimensional synthesis of a spherical parallel manipulator. Simulation results obtained after repetitive runs demonstrate that the Decomposition-based Evolutionary Algorithm obtains estimation results with the lowest deviations and generates a uniform and smooth solution distribution along the Pareto curve. Advanced design optimization of a hydrogen-based microgrid system has been carried out by employing six different metaheuristic algorithms, and their comparative performances are assessed relying on their corresponding fitness function values accounting for the total cost of the integrated sustainable energy system. It is seen that the Moth Flame Optimization algorithm results in a significant reduction of the overall cost expenditure and outperforms the remaining compared optimizers with respect to solution efficiency [31]. Gupta et al. [32] investigated the search behavior of metaheuristic algorithms on real-world mechanical engineering problems with mixed-integer decision variables, binding constraints, and conflicting, highly nonlinear problem objectives. Nine metaheuristic optimizers have been considered for analyzing their comparative performances on solving these mechanical engineering problems in terms of convergence rates and solution qualities. Ezugwu et al. [33] put forward a systematic analysis approach for evaluating the solution consistencies and runtime complexities of twelve different metaheuristic optimizers on continuous unconstrained optimization problems. GA, DE, and PSO algorithms perform well under a variety of optimization test problems, slightly surpassing the Symbiotic Organism Search (SOS) and Cuckoo Search (CS) algorithms concerning the best results.

One can clearly deduce from the extensive survey that previous research studies focus on a particular subject, providing limited insight on the overall capabilities of the implemented metaheuristic algorithms for evaluating their comparative prediction performances. In the domain of evolutionary computation and metaheuristic algorithms, researchers tend to apply their best optimizers among their contemporary alternatives to solve the optimization problem at hand. Selecting the best optimization method out of the compared contestant optimizers for a given set of benchmark problems is decided by the conclusive remarks of the inventor of the proposed algorithm, which may be deceptive and lead to unreasonable inferences regarding the veracity of its search performance. Comprehensive evaluations and assessments should be carried out to comprehend which algorithm performs better or which algorithm is most suitable to the type of problem being solved. Furthermore, it is an undisputable necessity to keep up with the recent impact in the development and implementation of new metaheuristic algorithms along with their successful applications to various real-world problems. In addition, there should be a continuous effort to seek improvement in existing optimizers or to develop state-of-the-art metaheuristic algorithms, relying on the implications of the No Free Lunch theorem, postulating the general belief that there is no available metaheuristic algorithm capable of solving all optimization problems, as previously mentioned. To widen the general perspective on scrutinizing the estimation accuracies of the newly emerging metaheuristic algorithms, this study proposes a more insightful and reasonable performance benchmark strategy. The overall search effectivities of the recently developed eleven metaheuristic optimizers of Runge–Kutta Optimization (RUNGE) [34], Gradient-based Optimizer (GRAD) [35], Poor and Rich Optimization (PRO) [17], Reptile Search Algorithm (REPTILE) [36], Snake Optimizer (SNAKE) [37], Equilibrium Optimizer (EQUIL) [38], Manta Ray Optimization Algorithm (MANTA) [39], African Vultures Optimization Algorithm (AFRICAN) [40], Aquila Optimization Algorithm (AQUILA) [41], Harris Hawks Optimization (HARRIS) [42], and Barnacles Mating Optimizer (BARNA) [43] will be benchmarked against multidimensional optimization problems of various types in this research study. Despite their new emergence and insufficient recognition from the metaheuristic community, there are many existing literature applications regarding their successful employment on real-world design problems. The RUNGE algorithm was previously applied to the optimal design of a photovoltaic system to mitigate partial shading conditions [44]. Optimal parameter estimation of the PEM fuel cell model was conducted by the newly emerged GRAD algorithm, and it is seen that electrical model parameter estimation results for different PEM fuel cell devices obtained from the GRAD optimizer are much better and more accurate than those retained by the compared literature optimizers [45]. A modified version of the PRO algorithm is employed for grouping similar documents by using text classification and outperforms the contestant algorithms of Particle Swarm Optimization, Whale Optimization, Grey Wolf Optimization, and Dragonfly with respect to the clustering accuracy of the text documents [46]. A Levy Flight-assisted Reptile Search algorithm (REPTILE) is developed for tuning proportional-integral-derivative model parameters of a vehicle cruise control [47]. Hu et al. [48] proposed a multi-strategy boosted Snake Inspired Optimizer (SNAKE) developed for multidimensional engineering design problems. They benchmarked the

optimization capability of the proposed method against some of the well-known optimizers, and clear dominance of this developed optimizer is observed. Optimal energy load dispatch in a multi-chiller system was carried out by the Equilibrium Optimizer, and the best results were compared with the previous efforts made by the Genetic Algorithm and the Simulated Annealing optimization method [49]. Hu et al. [50] enhanced the general optimization capability of Manta Ray Foraging Optimization (MANTA) by integrating the search equations of Wavelet mutation and a quadratic interpolation strategy, and applied this ameliorated version of the algorithm to the successful shape optimization of a complex composite cubic generalized ball. Chen et al. [51] utilized the African Vulture Optimization Algorithm (AFRICAN) for the optimal modeling of a combined power system operated in a watersport complex. The main optimization objective is to minimize the total energy losses as much as possible by optimizing the considered design parameters of a number of gas engines, boiler heating capacity, and the cooling capacity of the electric and absorption chillers. The Aquila Optimization Algorithm (AQUILA) is put into practice to effectively design a feedforward proportional-integral controller to achieve the optimum air–fuel ratio in a spark ignition system, which plays an important role in regulating fuel consumption as well as protecting the environment from harmful emissions to some degree [52]. An improved Harris Hawks (HARRIS) optimization algorithm, enhanced with the chaotic Tent map and integrated with an extreme learning machine, was employed for constructing a generic holistic model to predict the intensity level of rock bursts. The developed model reaches a high prediction accuracy of 94.12%, providing a quick convergence rate to its optimum solution [53]. The Barnacles Mating Optimizer (BARNA) combined with a support vector machine is proposed for obtaining precise state-of-charge estimation, which is an important concern for a reliable battery management system [54].

The majority of the members of the metaheuristic optimization community do not have in-depth knowledge on the general optimization performance of the newly emerging nature-inspired metaheuristic algorithms. The underlying novelty brought out by this review paper is to scrutinize the solution effectivity of the recent nature-inspired algorithms on multidimensional constrained and unconstrained optimization problems. One of the demanding goals in this work is to conduct an in-depth analysis of the general behaviors and proclivities of the above-mentioned newly emerged metaheuristic approaches on test functions with different functional characteristics. All algorithms will be analyzed on the same benchmark functions, and their respective pros and cons are comparatively evaluated by the same performance measures. The following contributions are provided to the current literature by this study, which can be concisely listed as:

1. Comprehensive analyses are made on eleven algorithms through the statistical performance on thirty-four unconstrained optimization functions comprised of unimodal and multimodal test functions. Convergence graphs for the compared algorithms are plotted for each unconstrained test problem to observe which algorithm converges faster to its optimum, and the best optimizer among them is decided by its success in obtaining the most accurate solutions within the lowest computation time.
2. The CEC-2013 (Congress on Evolutionary Computation-2013) test suite involving twenty-eight benchmark functions with different modalities is solved by the compared eleven algorithms, and the most successful algorithm among the competitive methods is determined by the corresponding statistical analysis regarding the best, worst, mean, and standard deviation results.
3. Fourteen real-world constrained complex engineering design problems are optimized through these eleven metaheuristics, and their comparative performances are evaluated based on the statistical analysis of the fitness function values.

The remainder of the paper is organized as follows. Section 2 gives a brief description of the compared metaheuristic algorithms and lists some of the high-impact past studies related to the performance comparison of metaheuristic algorithms. Section 3 discusses the contributing motivations behind the comparison between the existing newly emerged metaheuristics. Section 4 provides the statistical comparison results for the standard unconstrained benchmark problems and compares the optimization accuracies of the compared algorithms on the test functions belonging to the CEC 2013 test suite. Section 5 analyses the behavior of the contested algorithms on engineering design problems and decides which algorithm gives the least erroneous predictions satisfying the challenging problem constraints. Section 6 provides a comprehensive discussion on the search tendencies of the compared algorithms and gives an explicit investigation of their algorithmic structure as to why they perform well on some types of test problems and collapse on other types. Section 7 concludes this comprehensive research study with remarkable comments and insightful future directions.
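As a concrete illustration of the statistical protocol referred to in the contribution list above (best, worst, mean, and standard deviation of the final fitness values over repeated independent runs), the following minimal Python sketch shows how such per-problem statistics are typically compiled. The run count, the seed, the placeholder sphere objective, and the random-search stand-in for a metaheuristic are illustrative assumptions and not the exact experimental settings used in this study.

```python
import numpy as np

def sphere(x):
    # Placeholder unimodal benchmark objective (minimum 0 at the origin).
    return float(np.sum(x ** 2))

def random_search(obj, dim, bounds, max_evals, rng):
    # Stand-in for any metaheuristic: returns the best fitness found in one run.
    lo, hi = bounds
    best = np.inf
    for _ in range(max_evals):
        best = min(best, obj(rng.uniform(lo, hi, dim)))
    return best

def summarize(obj, optimizer, dim=30, bounds=(-100.0, 100.0), runs=30, max_evals=10_000):
    # Repeat independent runs and report the statistics used for ranking algorithms.
    rng = np.random.default_rng(42)
    finals = np.array([optimizer(obj, dim, bounds, max_evals, rng) for _ in range(runs)])
    return {"best": finals.min(), "worst": finals.max(),
            "mean": finals.mean(), "std": finals.std(ddof=1)}

print(summarize(sphere, random_search))
```

In practice, each of the eleven optimizers would replace the random-search stand-in, and the resulting statistics would be tabulated per benchmark function and dimensionality.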

2 Metaheuristic algorithms considered for comparative performance evaluations

This section is concerned with giving a brief overview of the previous literature studies about the performance comparison of metaheuristic optimizers and introducing each metaheuristic algorithm used for benchmarking their prediction performance on various test functions from different domains. Due to limited space, only a brief description of the metaheuristic algorithms is provided in this section. Interested readers can find more information about the related algorithm in its original paper.

2.1 Previous works

Recent developments in chip technologies entail a rapid escalation in generating efficient stochastic metaheuristic algorithms whose overall optimization success mainly depends on approaching the global optimum solution within a reasonable computational time. Algorithms draw their inspiration from various sources, such as the intrinsic foraging skills of honey bees [55] or the efforts of a skillful musician seeking to improvise the perfect harmony [18]. Some of the existing algorithms have shown great potential to solve a diverse range of optimization instances, ranging from combinatorial problems to constrained engineering design cases. Moreover, some stochastic algorithms can be utilized along with the metaheuristic optimizers to solve an optimization problem with faster convergence to the optimal solution [56–58]. It is noteworthy to mention that the majority of the metaheuristic optimizers proposed up to now have achieved notable success in solving complex real-world problems from different domains of engineering fields [59–61].

The recent two decades have witnessed the development of a considerable number of metaheuristic algorithms with a wide range of inspiration sources; however, their comparative prediction performances have not been extensively investigated so far. Keeping in mind that it is extremely difficult to systematically and exhaustively evaluate the favorable merits of the emerging metaheuristics within a limited number of optimization benchmark cases, most of the previous literature studies associated with the optimization performance assessment of contemporary metaheuristic algorithms focus on a particular engineering design problem or cover a restricted number of optimization benchmark cases, none of which are sufficiently conducive to provide general insights into the true optimization capabilities of the benchmarked algorithms. There are several attempts in the existing literature to present the comparative performances of algorithms on various fields of test instances. The paragraphs below present an explicit discussion of the mentioned literature works and investigate which test instances they utilized for the performance comparison of the considered metaheuristic algorithms.

One of the early attempts to evaluate the comparative prediction performances of stochastic metaheuristic algorithms was carried out by Ali et al. [62], in which a reasonable procedure is proposed for testing the estimation accuracies of the algorithms through an appropriate selection of the test suite of benchmark problems, and a straightforward methodology is put forward to present the optimization results. They compiled a diverse set of continuous test problems from different domains and investigated the macroscopic search behaviors of five stochastic optimizers, including improving hit-and-run, hide-and-seek, controlled random search, real-coded genetic algorithm, and differential evolution algorithms. Informative performance plots are drawn to observe gradual improvements in the objective function values with an increasing number of iterations. Numerical experiments made on the selected test functions hinge on considering three different maximum numbers of iterations defined for the termination criterion (100D², 10D², and 10D, where D is the dimensionality of the problem). The respective results reveal that careful selection of the maximum number of iterations, which decides the elapsed algorithm runtime, has a significant impact on the optimization behavior of the running algorithms.

Civicioglu and Besdok [63] analyzed the algorithmic concepts of Cuckoo Search, Particle Swarm Optimization, Differential Evolution, and the Artificial Bee Colony optimizer employing different performance measures. The optimization success of these four well-established optimizers has been assessed on fifty different continuous optimization benchmark functions with varying problem dimensionalities, and it is revealed that the overall solution success of the Cuckoo Search algorithm is very close to that of the Differential Evolution algorithm, both of which provide much more robust and accurate prediction outcomes compared to those obtained for the Particle Swarm Optimizer and Artificial Bee Colony algorithms. The total number of function evaluations required for achieving the optimum answer of the problem, along with the runtime complexities of the algorithms, have also been comprehensively evaluated, and it is found that Differential Evolution requires fewer function evaluations without burdening a significant amount of computational load in most of the test cases.

Ma et al. [64] conducted a comprehensive comparative study between some of the prevalent evolutionary stochastic optimizers of Genetic Algorithm (GA), Biogeography-based Optimization (BBO), Differential Evolution (DE), Evolutionary Strategy (ES), and Particle Swarm Optimization (PSO). Firstly, they made a

conceptual discussion on the equivalences of these mentioned algorithms and found out that the basic versions of these methods have similar optimization performances compared to GA with global uniform recombination under specific test conditions. They also discussed the differences based on their biological inspirations and concluded that the enhanced solution diversity of EAs is the direct result of these distinctions. Furthermore, the optimization capabilities of these above-mentioned metaheuristic optimizers are extensively assessed on a set of real-world optimization problems with different functional characteristics. Empirical results obtained from exhaustive numerical experiments reveal that the BBO algorithm gives the best predictive results among the compared standard optimizers. When it comes to examining the improved versions of the algorithms, it is observed that DE and ES provide the best prediction accuracy compared to that of the other remaining algorithms. Ma et al. [65] extended their previous research study by introducing a conceptual comparison between the algorithmic equivalences of some swarm intelligence (SI) optimizers, including Particle Swarm Optimization (PSO), Shuffled Frog Leaping Algorithm (SFLA), Group Search Algorithm (GSO), Firefly Algorithm (FA), Artificial Bee Colony (ABC) Algorithm, and Gravitational Search Algorithm. After exhaustive elaborations on the considered test instances, which cover the unconstrained test functions employed in CEC 2013 competitions and combinatorial knapsack problems, it is seen that the advanced version of the ABC algorithm numerically outperforms the remaining algorithms in terms of solution accuracy and robustness for the CEC 2013 benchmark problems, and improved versions of the SFLA and GSA algorithms yield the best prediction outcomes on combinatorial knapsack problems.

Ezugwu et al. [33] comparatively examined the prediction capabilities and convergence characteristics of twelve metaheuristic optimizers, including the standard Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Firefly Algorithm (FA), Ant Colony Optimization (ACO), Symbiotic Organism Search (SOS), Cuckoo Search (CS), Artificial Bee Colony (ABC), Bat Algorithm (BA), Differential Evolution (DE), Flower Pollination Algorithm (FPA), Invasive Weed Optimization (IWO), and Bee Algorithm (BeeA). The main purpose of their study is to carry out an in-depth analysis that would provide deep insight into the search characteristics of each representative metaheuristic optimization algorithm. All algorithms are evaluated on 36 different standard multidimensional test functions, and a comprehensive statistical analysis has been performed that entails an unbiased and objective assessment of the reflected effectiveness of the algorithms. Furthermore, the minimum function evaluations required to acquire the optimum solution of the problem and the runtime complexities of the algorithms have also been comparatively investigated. They concluded their research with remarkable decisive comments, including some favorable outcomes of the numerical experiments, such that the number of function evaluations defined as the termination criterion has a great influence on the accuracy of the predicted solutions yet increases the computational runtime, which is not a desired situation for an end-user. According to the respective estimation results of the optimization problems with varying domains, it is seen that PSO, DE, and GA show brilliant performances, each obtaining the maximum success ratio of 11%.

This research study is mainly concerned with novel metaheuristic algorithms, particularly optimizers developed after 2019. The main reason behind the consideration of these algorithms is the general confusion, mostly resulting from a surplus of metaheuristic optimizers developed in such a small span of time, as to which algorithms perform well for optimization problems with varying degrees of dimensionality, functional characteristics, and types. To be clear, there is now an ongoing ambiguity in the metaheuristic community over the general optimization capabilities of the recent metaheuristic optimizers. The majority of researchers do not have a conclusive opinion regarding the optimization search efficiency of the related algorithm, and comparative analysis should be made on various types of optimization benchmark functions with various functional features if it is to get clear insight into its overall effectivity. In addition, comparative performance analysis between the newly emerging algorithms is still in question, as there has not been a published literature study concerning this issue. Of course, there are available options in the selection of different algorithms developed between the years 2019 and 2022. However, we consider two important aspects in their selection. The first qualification requires their frequent application to different kinds of design problems, while the second is their general optimization performance compared to the remaining algorithms, considering the two deterministic aspects of average computational burden and solution accuracy obtained after a defined number of algorithm runs. Among twenty-five recently developed metaheuristic algorithms, these eleven metaheuristic optimizers yield the most fruitful outcomes with respect to these above-defined two complementary performance measures. Therefore, we consider these eleven algorithms to fill this gap in the existing literature because of their widespread application ranges across scientific fields compared to the remaining contestant algorithms.

Most of the literature studies concerning the comprehensive survey of metaheuristic algorithms focus on a particular subject, such as optimizing control parameters of PID models [66], optimizing mechanical design problems [67], solving load balancing problems in cloud

environments [68], solving the inverse kinematics of robot manipulators [69], and solving feature selection problems [70]. Furthermore, most of the review papers present in the current literature only report the published studies and their corresponding results without providing comparative solution outcomes between them. This research paper takes advantage of eleven newly emerged nature-inspired metaheuristic optimizers to solve a wide spectrum of constrained engineering design problems and unconstrained benchmark functions, posing extreme challenges to the researchers of the metaheuristic optimization community. To the best knowledge of the authors, this kind of performance assessment has not been conducted in the literature yet. Apart from imparting knowledge on the current trends in metaheuristic algorithm development, this study also provides an exhaustive comparative study on the prediction performance of the newly emerged algorithms, and conclusive remarks will be given with regard to their respective solution accuracy and efficacy based on the estimation results of various benchmark cases.

These eleven algorithms have been previously hybridized with some literature metaheuristic optimizers to compensate for their intrinsic algorithmic deficiencies. Rawa et al. [71] hybridized the Runge Kutta Optimization algorithm with the Gradient-based Optimizer to establish a power system planning model in the presence of renewable energy sources considering the techno-economic aspects of the whole integrated unit. Ewees et al. [72] improved the search efficiency of the Gradient-based Optimizer by using the Slime Mold algorithm [73] and applied this hybrid to feature selection and benchmark problems used in CEC 2017 competitions. It is seen that the proposed hybrid can successfully improve the classification accuracy and yields promising predictions, outperforming the contender algorithms taking place in the competitions with respect to solution efficiency. Almotairi and Abualigah [74] developed a hybrid optimization model integrating the Reptile Search Algorithm and the Remora Optimization Algorithm [75] and tested its effectivity over a set of benchmark cases, including eight data clustering problems and multidimensional unconstrained test problems widely employed in literature studies. Results retrieved from the performance evaluations show that the hybrid algorithm can effectively tackle the complexities of hard-to-solve challenging optimization problems. The Reptile Search Algorithm was hybridized with the Snake Optimizer to determine the optimal features of datasets collected from the UCI repository as well as to optimize two real-world optimization problems. The results show that the hybrid approach can provide practical and accurate solutions within comparatively lower computational runtimes [76]. Rizk-allah and Hassanien [77] proposed a hybrid optimization model composed of the search equations of Equilibrium Optimization and Pattern Search Algorithms [78] to locate the optimum siting places of the wind turbines in a wind farm. A multi-objective Manta Ray Foraging Optimization and SHADE [79] algorithm was proposed for solving structural design problems. The proposed hybrid is applied to six challenging truss optimization problems having discrete design variables of up to 942 parameters, and the corresponding results have been compared to those obtained for nine state-of-the-art metaheuristic optimizers [80]. Xiao et al. [81] combined the governing manipulation equations of the African Vultures Optimization Algorithm and the Aquila Optimizer for solving global optimization problems. Comparative estimation results indicate that the hybrid algorithm can achieve superior solution accuracy and stability. Ramachandran et al. [82] proposed a hybrid optimizer whose integrated components are the Grasshopper Optimization Algorithm [83] and the Harris Hawks Optimizer for solving combined heat and power economic dispatch problems. The Sine–Cosine Algorithm [84] was combined with the Barnacles Mating Optimizer [85] to solve data clustering problems. Experimental results obtained for various clustering cases show that the proposed hybrid provides superior performance improvement resulting from the improved balance between the exploration and exploitation mechanisms.

The following sections provide brief yet explanatory descriptions of these eleven algorithms, and their algorithmic structures will be explained.

2.2 Runge–Kutta optimizer

The Runge–Kutta Optimizer (RUNGE) aims to bring a new dimension to the optimization community by proposing a metaphor-free algorithm, avoiding cliché methods such as mimicking foraging strategies of animals or evolutionary search trends. The RUNGE algorithm depends on the differential equation-solving process and utilizes the slopes employed in computing the iterative solution steps of a differential equation. The algorithm is comprised of two different strategies. The first phase is concerned with the search process governed by the fundamental rules of the RUNGE algorithm and mainly deals with exploration. The second phase is mainly ruled by the "Enhanced Solution Quality (ESQ)" mechanism, focusing on the promising solutions obtained in the first phase of the algorithm. The general mathematical formulation of the algorithm is composed of a set of stages that will be introduced in the following.

In the first stage, population individuals X are initialized within the defined search bounds LB (lower bound) and UB (upper bound) by conducting the below scheme,
X_{i,j} = LB_j + rnd_1 · (UB_j − LB_j),   i = 1, 2, ..., N,   j = 1, 2, ..., D        (1)

where N is the population size, D is the problem dimension, and rnd_1 is a random number between 0 and 1. The RUNGE algorithm employs a novel search mechanism (SM) to update the current solutions by the given scheme,

X_i = X_CF + SF·SM + μ · rnd_2 · X_mc,   if rand ≤ 0.5
X_i = X_mF + SF·SM + μ · rnd_3 · X_ra,   otherwise        (2)

where X_CF = X_c + r_1 · SF · g · X_c, X_mF = X_m + r_2 · SF · g · X_m, X_ra = (X_r1 − X_r2), X_mc = (X_m − X_c), and r_{1,2} ∈ {−1, 1} can be either −1 or 1 and are used to change the direction of the search process. Random numbers g ∈ [0, 2] and μ ∈ [0, 1] help the algorithm probe around the search space more effectively. The adaptive scale factor SF can be defined as,

SF = 2 · (0.5 − rnd_3) · a · exp(−b · rnd_4 · iter / Maxiter)        (3)

where Maxiter is the maximum number of iterations defined for the termination criterion. Parameters X_c and X_m given in Eq. (2) are calculated by the following,

X_c = X_i · rnd_5 + (1 − rnd_5) · X_r1        (4)
X_m = X_b · rnd_6 + (1 − rnd_6) · X_pb        (5)

where X_b and X_pb are, respectively, the best solution obtained so far and the best solution within the current iteration. The SM parameter given in Eq. (2) is computed by the below formula,

SM = (X_RK · ΔX) / 6        (6)

where X_RK can be computed by,

X_RK = k_1 + 2·k_2 + 2·k_3 + k_4
k_1 = (rnd_7 · X_w − u · X_b) / (2ΔX)
k_2 = (rnd_8 · (X_w + rnd_9 · k_1 · ΔX) − UX) / (2ΔX)
k_3 = (rnd_10 · (X_w + rnd_11 · 0.5·k_2 · ΔX) − UX_b) / (2ΔX)
k_4 = (rnd_12 · (X_w + rnd_13 · k_3 · ΔX) − UX_b2) / (2ΔX)
u = round(1 + rnd_14) · (1 − rnd_15)
UX = (u · X_b + rnd_16 · k_1 · ΔX)
UX_b = (u · X_b + rnd_17 · 0.5·k_2 · ΔX)
UX_b2 = (u · X_b + rnd_18 · k_3 · ΔX)        (7)

The numerical value of ΔX is computed by,

ΔX = 2 · rnd_19 · |rnd_20 · X_b − (rnd_21 · X_avg + γ)|        (8)

γ = rnd_22 · (X_i − rnd_23 · (UB − LB)) · exp(−4 · iter / Maxiter)        (9)

where the numerical values of X_b and X_w can be updated by the following algorithmic scheme,

if f(X_i) < f(X_pb)
    X_b = X_i
    X_w = X_pb
else
    X_b = X_pb
    X_w = X_i
end        (10)

The Enhanced Solution Quality (ESQ) phase is concerned with improving the general solution quality by using different mutation operators with a view to avoiding local optimum points in the search space,

X_new,2 = X_new,1 + r · w · |X_new,1 − X_avg + randn_1|,   if w < 1
X_new,2 = (X_new,1 − X_avg) + r · w · |X_na|,              otherwise        (11)

where r ∈ {−1, 0, 1},

X_na = (u · X_new,1 − X_avg + randn_1),   w = rnd(0, 2) · exp(−5 · rnd_24 · iter / Maxiter),
X_avg = (X_r1 + X_r2 + X_r3) / 3,   X_new,1 = rnd_25 · X_avg + (1 − rnd_25) · X_b        (12)

If the fitness value of X_new,2 is not better than that of the ith solution X_i, then the algorithm provides another option to modify and update the current value of X_i by employing the following simple formulation,

X_new,3 = (X_new,2 − rnd_26 · X_new,2) + SF · (rnd_27 · X_RK + 2 · rnd_28 · X_b − X_new,2)        (13)

The below algorithm provides the pseudo-code of the Runge–Kutta optimizer.
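The pseudo-code referred to above appears as an algorithm listing in the original article. As a rough orientation only, the Python sketch below condenses the Runge–Kutta-flavored search mechanism of Eqs. (2)–(8) into a single population update. It is an illustrative simplification under several stated assumptions (minimization, scalar random factors, simple bound clipping, and the ESQ refinement of Eqs. (11)–(13) omitted), not a faithful reimplementation of the published RUNGE optimizer.

```python
import numpy as np

def runge_kutta_step(X, fitness, lb, ub, iter_, max_iter, rng):
    """One simplified RUNGE-style update: each solution is moved by a
    Runge-Kutta-like weighted slope built from the best and worst members."""
    n, d = X.shape
    best = X[np.argmin(fitness)]
    worst = X[np.argmax(fitness)]
    # Adaptive scale factor shrinking with the iteration counter (cf. Eq. (3)).
    sf = 2.0 * (0.5 - rng.random()) * np.exp(-4.0 * rng.random() * iter_ / max_iter)
    X_new = np.empty_like(X)
    for i in range(n):
        dx = 2.0 * rng.random(d) * np.abs(best - X[i]) + 1e-12      # step size (cf. Eq. (8))
        # Four RK-style slopes evaluated around the best/worst solutions (cf. Eq. (7)).
        k1 = (rng.random(d) * worst - best) / (2.0 * dx)
        k2 = (rng.random(d) * (worst + k1 * dx) - best) / (2.0 * dx)
        k3 = (rng.random(d) * (worst + 0.5 * k2 * dx) - best) / (2.0 * dx)
        k4 = (rng.random(d) * (worst + k3 * dx) - best) / (2.0 * dx)
        sm = (k1 + 2.0 * k2 + 2.0 * k3 + k4) * dx / 6.0             # cf. Eq. (6)
        base = np.where(rng.random(d) < 0.5, best, X[i])            # move around best or current
        X_new[i] = np.clip(base + sf * sm + rng.random() * (best - X[i]), lb, ub)
    return X_new
```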

2.3 Gradient-based optimizer

The Gradient-based Optimizer (GRAD) is founded upon a mathematical concept borrowed from Newton's method for solving optimization problems. The algorithm consists of two leading mechanisms: (1) the Gradient Search Rule (GSR) and (2) the Local Escaping Operator. The main elements forming the essential mechanisms of the GRAD algorithm are provided below.

The GRAD algorithm is initialized by generating trial population individuals as follows,

X_{i,j} = LB_j + rnd_1 · (UB_j − LB_j),   i = 1, 2, ..., N,   j = 1, 2, ..., D        (14)

GRAD employs an efficient search mechanism called the Gradient Search Rule (GSR) to explore the search space, which is based on Newton's famous gradient formula [86].

X1_i = x_i − randn · ρ · (2Δx · x_i) / (x_worst − x_best + ε) + rnd_2 · ρ · (x_best − x_i)        (15)
ρ = 2 · rnd_3 · a − a        (16)
a = |β · sin(3π/2 + sin(β · 3π/2))|        (17)
β = β_min + (β_max − β_min) · (1 − (iter / Maxiter)^3)^2        (18)

where β_min and β_max are, respectively, 0.2 and 1.2; ε is a relatively small number between 0 and 0.1; and randn is a normally distributed random number. The parameter Δx can be computed by,

Δx = rnd_4 · |((x_best − x_r1) + δ) / 2|        (19)
δ = 2 · rnd_5 · |(x_r1 + x_r2 + x_r3 + x_r4) / 4 − x_i|        (20)

where r1, r2, r3, and r4 are randomly selected distinct integer numbers within the range [1, N], which are also different from the current population member i. By integrating the so-far-obtained x_best solution with the current population member x_i, the new solution vector X2 is produced by the following scheme,

X2 = x_best − randn · ρ · (2Δx · x_i) / (yp_i − yq_i + ε) + rnd_6 · ρ · (x_r1 − x_r2)        (21)
yp_i = rnd_7 · (|y_{i+1} + x_i| / 2) + rnd_8 · Δx        (22)
yq_i = rnd_9 · (|y_{i+1} + x_i| / 2) − rnd_10 · Δx        (23)
y_{i+1} = x_i − randn · (2Δx · x_i) / (x_worst − x_best + ε) + rnd_11 · ρ · (x_best − x_i)        (24)

Using X1 and X2, the new position of the current solution for the next iteration is calculated by the following expression,

X_i^{iter+1} = rnd_12 · (rnd_13 · X1 + (1 − rnd_13) · X2) + (1 − rnd_12) · X3        (25)
X3 = x_i − ρ · (X2 − X1)        (26)

The Local Escaping Operator (LEO) is a conducive operator to avoid local optimum points over the search space. LEO updates the current solution by considering the contributions of x_best, X1, X2, and two randomly selected trial solutions from the population, x_r1 and x_r2. The below-given manipulation scheme describes the formulation of the LEO mechanism,

if rand < 0.5
    if rand < 0.5
        X_new = x_i + rnd_14 · (f_1 · (u_1 · x_best − u_2 · x_k)) + f_2 · ρ · (u_3 · (X2 − X1)) + u_2 · 0.5 · (x_r1 − x_r2)
    else
        X_new = x_best + rnd_14 · (f_1 · (u_1 · x_best − u_2 · x_k)) + f_2 · ρ · (u_3 · (X2 − X1)) + u_2 · 0.5 · (x_r1 − x_r2)
    end
end        (27)

where f_1 stands for a random number between [−1, 1], and f_2 is a random number drawn from a normal distribution with a standard deviation of 1 and a mean value of 0. Random numbers u_1, u_2, and u_3 are calculated by the following expressions,

u_1 = 2 · rnd_15   if rand < 0.5,   1.0 otherwise        (28)
u_2 = rnd_16       if rand < 0.5,   1.0 otherwise        (29)
u_3 = rnd_17       if rand < 0.5,   1.0 otherwise        (30)

where rand is a random number between 0 and 1. The following scheme is used to calculate the trial solution x_k,

x_k = x_rand   if rand < 0.5,   x_p otherwise        (31)
x_rand = LB + rand · (UB − LB)        (32)

where x_rand is a random solution produced between the upper and lower bounds and x_p is a random solution selected from the trial population. The below algorithm gives the pseudo-code of the Gradient-based Optimization algorithm.

2.4 Poor and rich optimization algorithm

Proposed by Mosavi and Bardsiri in 2019 [17], the Poor and Rich Optimization (PRO) algorithm is a multi-population human-based optimization approach inspired by the social differences of individuals living in a particular community. It is basically conceptualized upon the below-given two decisive points.

• Each poor population member aims to improve his or her social situation by gaining wealth or learning an experience or knowledge from the rich individuals.
• Each rich member of the population aims to broaden the social gap with the poor individuals by grasping their limited wealth.

First, a random population is initialized between the predefined upper and lower bounds to construct trial population members composed of poor and rich individuals. Then, each member is evaluated based on its respective fitness value, and the members are sorted in ascending order based on their corresponding objective function values. As mentioned, there are two distinct subpopulations formed by the poor and rich members, expressed by,

N_main = N_rich + N_poor        (33)

where N_rich and N_poor are the sizes of the rich POP_rich and poor POP_poor subpopulations. The current position of each rich member is updated by the below formulation,

POP_rich,i^new = POP_rich,i^old + rand(0,1) · (POP_rich,i^old − POP_poor,best^old)        (34)

where POP_rich,i^new is the new position of the ith member of the rich population; POP_rich,i^old is the current position of the ith member of the rich population; POP_poor,best^old is the current best population member of the poor population; and rand(0,1) is a randomly generated value between 0 and 1 drawn from a uniform distribution. The new position of the poor population individuals within the search space is updated by the following simple formulation,

POP_poor,i^new = POP_poor,i^old + rand(0,1) · (Pattern − POP_poor,i^old)        (35)

where POP_poor,i^new is the new position of the ith poor member; POP_poor,i^old is the current position of the ith poor member; and the Pattern variable results from the collective contribution of the best, worst, and mean values of the rich population members, expressed by the below-given formulation,

Pattern = (POP_rich,best^old + POP_rich,mean^old + POP_rich,worst^old) / 3        (36)

where POP_rich,best^old is the current best member of the rich population; POP_rich,mean^old is the average member of the rich population; and POP_rich,worst^old is the worst member of the rich population. Sharp and rapid declines or increases may occur in the wealth status of the population members, resulting from unpredictable or unexpected changes in socio-economic affairs. Since it is nearly impossible to predict the ongoing trends of these decisive factors, a mutation operator is applied to poor and rich individuals, which is realized by the implementation of a random number with zero mean and unit variance into the ruling equation through the following expressions,

if rand(0,1) < P_mut
    POP_rich,i^new = POP_rich,i^new + randn        (37)
end

if rand(0,1) < P_mut
    POP_poor,i^new = POP_poor,i^new + randn        (38)
end

where P_mut is the mutation probability whose numerical value is decided by user experience; POP_rich,i^new and POP_poor,i^new are, respectively, the updated position vectors of the rich and poor members of the population after being perturbed by the random parameter randn, which is generated with a mean of 0 and a variance of 1.
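Since Eqs. (33)–(38) fully specify the PRO position updates, a compact Python sketch of one iteration is given below. The half-and-half split into rich and poor subpopulations, the bound clipping, and the way mutation is applied are simplifying assumptions made here for illustration; they do not reproduce every implementation detail of the original PRO paper.

```python
import numpy as np

def pro_iteration(pop, fitness, lb, ub, p_mut, rng):
    """One Poor-and-Rich update following Eqs. (34)-(38); the better half of the
    population is treated as 'rich' and the remainder as 'poor' (an assumption)."""
    order = np.argsort(fitness)                       # ascending: smaller fitness = better
    rich_idx, poor_idx = order[: len(pop) // 2], order[len(pop) // 2:]
    rich, poor = pop[rich_idx], pop[poor_idx]
    best_poor = poor[np.argmin(fitness[poor_idx])]
    # Eq. (34): rich members widen the gap with the best poor member.
    new_rich = rich + rng.random(rich.shape) * (rich - best_poor)
    # Eq. (36): pattern built from the best, mean, and worst rich members.
    pattern = (rich[np.argmin(fitness[rich_idx])]
               + rich.mean(axis=0)
               + rich[np.argmax(fitness[rich_idx])]) / 3.0
    # Eq. (35): poor members move toward the rich pattern.
    new_poor = poor + rng.random(poor.shape) * (pattern - poor)
    # Eqs. (37)-(38): Gaussian perturbation applied with probability p_mut.
    for group in (new_rich, new_poor):
        mask = rng.random(len(group)) < p_mut
        group[mask] += rng.standard_normal((mask.sum(), group.shape[1]))
    return np.clip(np.vstack([new_rich, new_poor]), lb, ub)
```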

2.5 Reptile search algorithm

Reptile Search Algorithm (REPTILE) is a metaheuristic algorithm simulating the social behaviors and food-searching strategies of crocodiles, which include encircling and hunting mechanisms applied to detected prey individuals. It is a population-based, gradient-free algorithm found to be efficient for tackling hard-to-solve engineering problems. Similar to the majority of metaheuristic algorithms, population initialization is the first phase of the algorithm, responsible for generating the set of candidate crocodiles stochastically. Swarm members are produced by the equation

$$X_{i,j} = LB_j + rand(0,1)\left(UB_j - LB_j\right), \quad i \in \{1,...,N\}, \; j \in \{1,...,D\} \quad (39)$$

where $UB_j$ and $LB_j$ are the upper and lower bounds of the jth dimension; $X_{i,j}$ is the jth dimension of the ith population member; rand(0,1) is a uniformly distributed random number within [0,1]; N is the size of the crocodile swarm; and D is the problem dimension. The REPTILE algorithm is based on two governing search mechanisms of exploration and exploitation. These algorithmic features are activated by the movements of the crocodiles while foraging for available food sources or prey. The search activity is segmented into four different stages representing the natural food-search behaviors of foraging crocodiles. The exploration phase of REPTILE is activated by the first two stages, which are associated with the encircling of the prey individuals, facilitating the high walking and belly walking foraging movements. Surrounding the prey members within the predefined search space, with emphasis on extensive exploration over the search domain, is modeled by

$$X_{i,j}^{iter+1} = \begin{cases} Best_j^{iter} - g_{i,j}^{iter} \times \beta - R_{i,j}^{iter} \times rand(0,1), & iter \le Maxiter/4 \\ Best_j^{iter} \times X_{r1,j}^{iter} \times ES^{iter} \times rand(0,1), & (iter > Maxiter/4) \text{ and } (iter \le Maxiter/2) \end{cases} \quad (40)$$

where $Best_j^{iter}$ is the jth dimension of the global best solution obtained so far, rand(0,1) symbolizes a random number between 0 and 1, iter is the current iteration and Maxiter is the maximum number of iterations, $g_{i,j}^{iter}$ is the hunting factor defined for the jth dimension of the ith member calculated by Eq. (41), $\beta$ is a control parameter fixed to 0.1, $R_{i,j}$ accounts for the reduced search space computed by Eq. (42), r1 is a random integer between 1 and N, $X_{r1,j}^{iter}$ is the jth dimension of a random crocodile at position r1, and $ES^{iter}$ is an iteratively decreasing random number between -2 and 2 calculated by Eq. (43),

$$g_{i,j} = Best_j \times P_{i,j} \quad (41)$$

$$R_{i,j} = \frac{Best_j^{iter} - x_{r2,j}}{Best_j^{iter} + \epsilon} \quad (42)$$
$$ES^{iter} = 2 \times rand(-1,1) \times \left(1 - \frac{1}{Maxiter}\right) \quad (43)$$

where $\epsilon$ is a small value fixed to 1E-10, r2 is a random integer between 1 and N, r3 = rand(-1,1) stands for a random value between -1 and 1, and $P_{i,j}$ is the percentage difference between the position of the best crocodile and the current crocodile, computed by

$$P_{i,j} = \alpha + \frac{X_{i,j} - X_{aver,j}}{Best_j \times (UB_j - LB_j) + \epsilon} \quad (44)$$

where $X_{aver,j}$ is the average solution of the jth dimension and $\alpha$ is a sensitivity parameter controlling the exploration accuracy of the algorithm, set to 0.1. The exploitation phase of the REPTILE algorithm is driven by the hunting process related to the cooperation and coordination of the encircling crocodiles. After the completion of the exploration phase, the foraging crocodiles focus on the target prey individuals, and the employed hunting strategies make it easier for the crocodiles to get closer to the target prey. The mathematical model representing the exploitation mechanism taking place in the second phase of the algorithm can be expressed as

$$X_{i,j}^{iter+1} = \begin{cases} Best_j^{iter} \times P_{i,j}^{iter} \times rand(0,1), & (iter < 0.75\,Maxiter) \text{ and } (iter \ge 0.5\,Maxiter) \\ Best_j^{iter} - g_{i,j}^{iter} \times \epsilon - R_{i,j}^{iter} \times rand(0,1), & (iter \le Maxiter) \text{ and } (iter \ge 0.75\,Maxiter) \end{cases} \quad (45)$$

where $Best_j^{iter}$ is the best solution obtained so far until the current iteration, $g_{i,j}^{iter}$ is the mathematical operator structured by the contribution of the current best solution $Best_j^{iter}$ and the $P_{i,j}^{iter}$ parameter calculated by Eq. (41), and $R_{i,j}^{iter}$ is the reduce function defined for the iterative shrinking of the search space, computed by Eq. (42). The simple algorithmic scheme below describes the essential steps of the REPTILE algorithm.
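The four stages of Eqs. (40) and (45) split the iteration budget into quarters. The following Python sketch illustrates one possible per-dimension REPTILE-style update under this schedule; it is only a sketch with our own variable names (`reptile_update`, `alpha`, `beta`), assuming minimization, and is not the authors' reference code.

```python
import numpy as np

rng = np.random.default_rng(0)

def reptile_update(X, best, lb, ub, it, max_iter, alpha=0.1, beta=0.1, eps=1e-10):
    """One illustrative REPTILE-style position update following Eqs. (40)-(45)."""
    N, D = X.shape
    X_new = X.copy()
    ES = 2 * rng.uniform(-1, 1) * (1 - 1 / max_iter)            # Eq. (43)
    for i in range(N):
        for j in range(D):
            P = alpha + (X[i, j] - X[:, j].mean()) / (best[j] * (ub[j] - lb[j]) + eps)  # Eq. (44)
            g = best[j] * P                                      # Eq. (41)
            R = (best[j] - X[rng.integers(N), j]) / (best[j] + eps)                     # Eq. (42)
            if it <= max_iter / 4:                               # high walking
                X_new[i, j] = best[j] - g * beta - R * rng.random()
            elif it <= max_iter / 2:                             # belly walking
                X_new[i, j] = best[j] * X[rng.integers(N), j] * ES * rng.random()
            elif it <= 3 * max_iter / 4:                         # hunting coordination
                X_new[i, j] = best[j] * P * rng.random()
            else:                                                # hunting cooperation
                X_new[i, j] = best[j] - g * eps - R * rng.random()
    return np.clip(X_new, lb, ub)
```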
2.6 Snake optimizer

This nature-inspired metaheuristic algorithm simulates the intrinsic mating behaviors of snakes, which typically occur at high temperatures when food sources are abundant; otherwise, snakes concentrate only on food searching rather than mating. The proposed algorithm is built upon the two complementary mechanisms of exploration and exploitation. The exploration process is influenced by environmental factors: when the surroundings are cold and no food is available, only exhaustive food searching is dominant. The exploitation phase includes many shifts and transitions to obtain the global optimum point. In conditions where food is available but high temperatures are also evident, snake individuals focus only on eating the available food. On the contrary, when food is available under cold environmental conditions, snakes opt for mating. The mating process has two cases, fight mode and mating mode. In fight mode, each male snake fights to mate with the best female snake, while each female snake seeks to select the best male snake. In mating mode, mating occurs between each selected pair depending on the availability of food resources in the habitat.

The algorithm is initialized by generating the trial snake individuals with the below-given equation,

$$X_{i,j} = LB_j + rand(0,1)\left(UB_j - LB_j\right), \quad i = 1,2,...,N, \; j = 1,2,...,D \quad (46)$$

where $X_{i,j}$ is the jth dimension of the ith snake in the swarm, and $LB_j$ and $UB_j$ are, respectively, the lower and upper bounds of the jth dimension of the optimization problem. The algorithm assumes that the whole population is divided into two subgroups consisting of females and males, such that 50% of the population is male while the remaining individuals are female. The snake swarm is divided into two equal subgroups by

$$N_{male} = N/2 \quad (47)$$
$$N_{female} = N - N_{male} \quad (48)$$

where $N_{male}$ and $N_{female}$ are, respectively, the sizes of the male and female snake subgroups. The best individuals in the male and female subpopulations are determined and denoted $Best_{male}$ and $Best_{female}$. In addition, the food position $f_{food}$ is also obtained. The temperature of the surrounding environment (Temp) is calculated by

$$Temp = \exp\left(-\frac{iter}{Maxiter}\right) \quad (49)$$

where iter is the current iteration and Maxiter is the maximum number of iterations defined for the termination condition. The available food quantity (Q) is computed by

$$Q = 0.5 \times \exp\left(\frac{iter - Maxiter}{Maxiter}\right) \quad (50)$$

The exploration phase, in which the snakes only search for food, occurs when the available food quantity Q is lower than the threshold limit of 0.25. To model this phase, the following equations are put into practice,

$$X_{male,i}^{iter+1} = X_{male,rand}^{iter} \pm c_2 \times A_{male} \times \left((UB - LB) \times rand(0,1) + LB\right) \quad (51)$$

where $X_{male,i}$ is the position of the ith male, $X_{male,rand}$ is a random male in the population, rand(0,1) is a uniform random number between 0 and 1, and $A_{male}$ is the ability of the male to find food resources, computed by

$$A_{male} = \exp\left(-\frac{fit_{male,rand}}{fit_{male,i}}\right) \quad (52)$$

where $fit_{male,rand}$ is the fitness value of the random male $X_{male,rand}$ and $fit_{male,i}$ is the fitness value of the ith male,

$$X_{female,i}^{iter+1} = X_{female,rand}^{iter} \pm 0.05 \times A_{female} \times \left((UB_j - LB_j) \times rand(0,1) + LB_j\right) \quad (53)$$

where $X_{female,i}$ is the position of the ith female, $X_{female,rand}$ is the position of a random female, and $A_{female}$ is the female's ability to find food resources, calculated by

$$A_{female} = \exp\left(-\frac{fit_{female,rand}}{fit_{female,i}}\right) \quad (54)$$

where $fit_{female,rand}$ is the fitness value of a random female and $fit_{female,i}$ is the fitness value of the ith female in the population. The exploitation phase takes place when there is an abundant amount of food, sufficient to supply energy for the mating process; it occurs when the available food quantity Q is above the defined threshold limit. Furthermore, if the surrounding environment temperature is higher than the temperature threshold limit of 0.6, the snakes only employ foraging activities, modeled by

$$X_{i,j}^{iter+1} = X_{food} \pm 2 \times Temp \times rand(0,1) \times \left(X_{food} - X_{i,j}^{iter}\right) \quad (55)$$

where $X_{i,j}^{iter}$ is an individual in the snake swarm (female or male) at the current iteration and $X_{food}$ is the food location in the search space, which is in essence the best solution obtained so far. If the environment temperature is lower than the defined threshold limit of 0.6, which indicates that cold conditions prevail, then the snakes perform
fighting or mating activities. Fight mode can be mathematically simulated by the following equations,

$$X_{male,i}^{iter+1} = X_{male,i}^{iter} + c_3 \times FM \times rand(0,1) \times \left(Q \times X_{female,best} - X_{male,i}^{iter}\right) \quad (56)$$

where $X_{male,i}$ is the position of the ith male, $X_{female,best}$ is the location of the best female, and FM is the fighting ability of the male search agent,

$$X_{female,i}^{iter+1} = X_{female,i}^{iter} + c_3 \times FF \times rand(0,1) \times \left(Q \times X_{male,best} - X_{female,i}^{iter}\right) \quad (57)$$

where $X_{female,i}$ is the position of the ith female individual, $X_{male,best}$ is the best male in the population, and FF is the fighting ability of the female agents. FM and FF are, respectively, calculated by

$$FM = \exp\left(-\frac{fit_{female,best}}{fit_i}\right) \quad (58)$$

$$FF = \exp\left(-\frac{fit_{male,best}}{fit_i}\right) \quad (59)$$

where $fit_{female,best}$ is the fitness of the best female, $fit_{male,best}$ is the fitness of the best male, and $fit_i$ is the fitness value of the ith search agent. Mating mode is activated by the equations

$$X_{male,i}^{iter+1} = X_{male,i}^{iter} + c_3 \times M_{male} \times rand(0,1) \times \left(Q \times X_{female,i}^{iter} - X_{male,i}^{iter}\right) \quad (60)$$

$$X_{female,i}^{iter+1} = X_{female,i}^{iter} + c_3 \times M_{female} \times rand(0,1) \times \left(Q \times X_{male,i}^{iter} - X_{female,i}^{iter}\right) \quad (61)$$

where $M_{male}$ and $M_{female}$ refer to the mating abilities of the male and female individuals in the entire population and are calculated by

$$M_{male} = \exp\left(-\frac{fit_{female,i}}{fit_{male,i}}\right) \quad (62)$$

$$M_{female} = \exp\left(-\frac{fit_{male,i}}{fit_{female,i}}\right) \quad (63)$$

If an egg hatches, then the worst male and female are replaced according to

$$X_{worst,male} = LB + rand(0,1) \times (UB - LB) \quad (64)$$
$$X_{worst,female} = LB + rand(0,1) \times (UB - LB) \quad (65)$$

where $X_{worst,male}$ and $X_{worst,female}$ are the worst members of the male and female subgroups in the population. The flag direction operator ± facilitates the mechanism of improving the overall population diversity, enabling an abrupt change in the direction of the responsible search agents to achieve good probing around the search space. The algorithm below provides the pseudo-code of the Snake Optimizer, explaining the step-by-step implementation of the above-defined manipulation equations into the algorithm framework.
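Because the behavior of the Snake Optimizer hinges on the temperature and food-quantity gates of Eqs. (49)–(50), the small Python sketch below shows how the phase selection would typically be implemented. It is an illustrative sketch only; the helper name `snake_mode` and the random fight-versus-mate choice at the end are our own assumptions, not the authors' specification.

```python
import numpy as np

def snake_mode(it, max_iter, rng, q_threshold=0.25, temp_threshold=0.6):
    """Illustrative phase selection of the Snake Optimizer (Eqs. 49-50)."""
    temp = np.exp(-it / max_iter)                      # Eq. (49)
    food = 0.5 * np.exp((it - max_iter) / max_iter)    # Eq. (50)
    if food < q_threshold:
        return "exploration (food search, Eqs. 51-54)"
    if temp > temp_threshold:
        return "exploitation (move toward food, Eq. 55)"
    # cold conditions with enough food: fight or mate (assumed random split)
    return "fight mode (Eqs. 56-59)" if rng.random() < 0.5 else "mating mode (Eqs. 60-63)"

rng = np.random.default_rng(1)
for it in (1, 250, 900):
    print(it, snake_mode(it, 1000, rng))
```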
2.7 Equilibrium optimizer

Equilibrium Optimizer (EQUIL) is a nature-inspired, physics-based algorithm that casts the behavior of reacting particles under equilibrium conditions into a well-established mathematical model. Each search agent is represented by its respective concentration in the control volume in which the prevailing chemical reaction occurs. EQUIL is conceptualized on the basic principles governing the balance between mass and volume of non-reacting chemical elements. The concentration of each particle is iteratively updated by intelligently devised search equations with respect to the optimal solution. The EQUIL algorithm is efficient in balancing the exploration and exploitation mechanisms, avoiding entrapment in local points of the search domain and premature convergence. The search procedure is mainly described in terms of the equilibrium pool ($C_{eq,pool}$), an exponential term (F), and a generation rate (G); the implementation of these characteristic parameters into the base algorithm is described below. The algorithm strives to stabilize the concentration of the reacting particles at the chemical equilibrium point. EQUIL has three major steps: (1) initializing the concentrations, (2) generating the equilibrium pool and candidate solutions, and (3) updating the concentrations to reach the global optimum solution of the problem. Trial solutions representing random particles defined within the search ranges can be expressed by

$$C_{i,j} = C_{min,j} + rand(0,1)\left(C_{max,j} - C_{min,j}\right), \quad i = 1,2,...,N, \; j = 1,2,...,D \quad (66)$$

where $C_{min}$ and $C_{max}$ are the lower and upper concentration limits, $C_{i,j}$ is the jth dimension of the ith trial concentration, N is the population size symbolizing the number of non-reacting particles in the equilibrium, and D is the problem dimension. In the context of the EQUIL algorithm, the optimum solution is achieved when the chemical components reach the equilibrium point. The algorithm employs five different concentrations to approximate the unknown equilibrium state. The equilibrium pool is composed of the four best solutions obtained so far, denoted $C_{eq,1}$, $C_{eq,2}$, $C_{eq,3}$, $C_{eq,4}$, and the average value of these four concentrations, $C_{eq,aver}$,

$$C_{eq,pool} = \left\{C_{eq,1}, C_{eq,2}, C_{eq,3}, C_{eq,4}, C_{eq,aver}\right\} \quad (67)$$

In the above equation, the first four particles are associated with exploration while the remaining one is related to exploitation. EQUIL chooses a random concentration ($C_{eq}$) among the five trial solutions from the equilibrium pool,

$$C_{eq} = randi\left(C_{eq,pool}\right) \quad (68)$$

The exponential term F is crafty in maintaining the balance between the exploration and exploitation phases and is calculated by

$$F = a_1 \times sign(r - 0.5) \times \left(e^{-\lambda t} - 1\right) \quad (69)$$

where $a_1$ is a constant parameter, r and $\lambda$ are uniform random numbers within [0,1], sign() represents the signum function, and t is an iterative parameter that varies with the increasing number of iterations, computed by

$$t = \left(1 - \frac{iter}{Maxiter}\right)^{\left(a_2 \frac{iter}{Maxiter}\right)} \quad (70)$$

where $a_2$ is a constant value assigned by the user, iter is the current iteration, and Maxiter is the maximum number of iterations. Previous experience with solution outcomes obtained from various optimization benchmark functions with different characteristics indicates that when $a_1$ = 2.0 and $a_2$ = 1.0, EQUIL reaches its peak ability to converge to the optimum solution. The generation rate (G) is another important component of EQUIL, which influences the overall search efficiency of the algorithm. This parameter administers the exploitation ability of EQUIL during the course of iterations and is defined as

$$G = GCP \times \left(C_{eq} - \lambda \times C\right) \times F \quad (71)$$

where C is the current concentration of a particle and GCP is a control parameter defined by

$$GCP = \begin{cases} 0.5\,r_1 & r_2 \ge 0.5 \\ 0 & \text{otherwise} \end{cases} \quad (72)$$

where $r_1$ and $r_2$ are random numbers within the range [0,1]. Finally, the general solution update mechanism, taking into account all of the above-defined search components, is activated by the following search scheme,

$$C^{iter+1} = C_{eq}^{iter} + \left(C^{iter} - C_{eq}^{iter}\right) F + \frac{G}{\lambda V}\left(1 - F\right) \quad (73)$$

The algorithm given below provides the constructive steps of the Equilibrium Optimization Algorithm.
2.8 Manta ray foraging optimization algorithm

Manta Ray Foraging Optimization (MANTA) is a recently developed metaheuristic optimizer inspired by the foraging skills of manta rays, providing a versatile optimization tool that has proven its prediction accuracy and consistency over a wide range of optimization benchmark problems. The algorithm focuses on the intrinsic food-search behaviors of manta rays, which are founded upon the different strategies of chain foraging, cyclone foraging, and somersault foraging, to provide favorable solution outcomes for various types of optimization problems. Representative mathematical models of these foraging skills are explained below.

2.8.1 Chain foraging

Artificial manta rays form a solid foraging chain by linking their heads to tails in a line. In the context of MANTA, the plankton location with the highest concentration is assumed to be the best solution obtained through the course of iterations. Manta rays aim to locate the position of the resources where the plankton food source is abundant and swim toward these fertile regions following the leader of the chain. Furthermore, the chain foraging mechanism enables the followers to collect the available plankton missed by the frontier manta ray, so there is a durable continuity in capturing the plankton food source. The mathematical model expressing the chain foraging of manta rays is given by

$$x_i^{iter+1} = \begin{cases} x_i^{iter} + rand_1(0,1)\left(x_{best}^{iter} - x_i^{iter}\right) + \alpha\left(x_{best}^{iter} - x_i^{iter}\right), & i = 1 \\ x_i^{iter} + rand_2(0,1)\left(x_{i-1}^{iter} - x_i^{iter}\right) + \alpha\left(x_{best}^{iter} - x_i^{iter}\right), & i = 2,3,...,N \end{cases} \quad (74)$$

$$\alpha = 2 \times rand_3(0,1) \times \sqrt{\left|\log\left(rand_4(0,1)\right)\right|}$$

where $\alpha$ is the weight coefficient, $rand_j(0,1)$, j = 1,2,3,4, are random numbers defined between 0 and 1, $x_{best}^{iter}$ is the plankton area with the highest density, standing for the best solution obtained so far, and $x_i^{iter}$ is the ith manta ray (search agent) at iteration iter.

2.8.2 Cyclone foraging

Cyclone foraging is another strategy adopted in this algorithm. When the plankton location is detected deep down under the water, manta rays form a spiral foraging chain and approach the food source. Thanks to the robust spiral movement directed to the abundant food resources, each element of the chain follows the one in front of
it and moves toward the highly concentrated plankton areas. The cyclone foraging movement is modeled by the following mathematical expression,

$$x_i^{iter+1} = \begin{cases} x_{best}^{iter} + rand_5(0,1)\left(x_{best}^{iter} - x_i^{iter}\right) + \beta\left(x_{best}^{iter} - x_i^{iter}\right), & i = 1 \\ x_{best}^{iter} + rand_6(0,1)\left(x_{i-1}^{iter} - x_i^{iter}\right) + \beta\left(x_{best}^{iter} - x_i^{iter}\right), & i = 2,3,...,N \end{cases}$$

$$\beta = 2 \exp\left(rand_7(0,1)\,\frac{Maxiter - iter + 1}{Maxiter}\right) \sin\left(2\pi\, rand_8(0,1)\right) \quad (75)$$

It can be observed from Eq. (75) that the best food location is taken as a reference point for this search mechanism, which accounts for the full exploitation of the promising regions obtained by the previous chain foraging mechanism. In addition, the cyclone foraging mechanism makes a significant contribution to the global exploration capability by introducing a random solution taken as a pivot reference point, which is defined by the following,

$$x_i^{iter+1} = \begin{cases} x_{rand}^{iter} + rand_9(0,1)\left(x_{rand}^{iter} - x_i^{iter}\right) + \beta\left(x_{rand}^{iter} - x_i^{iter}\right), & i = 1 \\ x_{rand}^{iter} + rand_{10}(0,1)\left(x_{i-1}^{iter} - x_i^{iter}\right) + \beta\left(x_{rand}^{iter} - x_i^{iter}\right), & i = 2,3,...,N \end{cases}$$

$$x_{rand}^{iter} = LB + rand_{11}(0,1) \times (UB - LB) \quad (76)$$

where LB and UB are, respectively, the lower and upper bounds of the search space.

2.8.3 Somersault foraging

This foraging mechanism considers the food location as a reference point around which each artificial search agent pivots and somersaults to a new fertile region. The manta rays position themselves around the best solution and update their current position by using the below-given mathematical model simulating the somersault movement,

$$x_i^{iter+1} = x_i^{iter} + 2\left(rand_{12}(0,1) \times x_{best}^{iter} - rand_{13}(0,1) \times x_i^{iter}\right), \quad i = 1,2,...,N \quad (77)$$

The algorithm below explains the implementation of the Manta Ray Foraging Optimization algorithm in the form of a descriptive pseudo-code representation.
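The chain and somersault moves of Eqs. (74) and (77) translate directly into array operations. The Python sketch below is illustrative only (the name `manta_chain_step` and the loop structure are ours); cyclone foraging per Eqs. (75)–(76) would follow the same pattern with the reference point replaced by either the best solution or a random position.

```python
import numpy as np

rng = np.random.default_rng(7)

def manta_chain_step(X, best, rng):
    """Illustrative chain-foraging plus somersault move of MANTA (Eqs. 74 and 77)."""
    N, D = X.shape
    X_new = np.empty_like(X)
    for i in range(N):
        r = rng.random(D)
        alpha = 2 * rng.random(D) * np.sqrt(np.abs(np.log(rng.random(D))))
        prev = best if i == 0 else X[i - 1]                            # leader of the chain
        X_new[i] = X[i] + r * (prev - X[i]) + alpha * (best - X[i])    # Eq. (74)
    # somersault around the best plankton location, Eq. (77) with somersault factor S = 2
    X_new += 2 * (rng.random((N, 1)) * best - rng.random((N, 1)) * X_new)
    return X_new

# toy usage: 6 manta rays in a 3-dimensional search space
pop = rng.uniform(-1, 1, (6, 3))
print(manta_chain_step(pop, best=np.zeros(3), rng=rng))
```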
2.9 African vultures optimization algorithm

African Vultures Optimization Algorithm (AFRICAN) is a recently emerged metaheuristic optimizer simulating the foraging behaviors of intelligent African vultures. The algorithm follows the below-defined criteria to solve optimization problems.

- The African vulture population consists of N individuals devoted to solving high-dimensional optimization problems.
- The African vulture population is segmented into three subgroups decided by the fitness quality of each competent African vulture. The best solution belongs to the first subgroup, the second best is involved in the second group, and the remaining vultures take part in the third subgroup.
- Fitness values of population individuals reflect their respective favorable merits and drawbacks in the context of the algorithm, such that the weakest vulture suffering from starvation is the worst vulture, while the healthiest one is the strongest and is considered to be the best solution among them. The AFRICAN algorithm aims to accumulate the candidate solutions near the so-far-obtained best solution to eliminate inferior points faced in the iterations.

The AFRICAN algorithm employs five different foraging strategies, taking into account the above-mentioned assumptions, to mathematically model the tendencies of artificial vultures.

2.9.1 Strategy 1—choosing the best vultures in the group

After forming the trial population, all vultures are evaluated based on their respective fitness values, and the best and the second best vultures are selected, which are, respectively, the sole members of the first and second groups. The current status of the competing best vultures is updated within each iteration, and the entire vulture swarm population is re-evaluated,

$$R_i = \begin{cases} Bestvul_1 & \text{if } p_i = r_1 \\ Bestvul_2 & \text{if } p_i = r_2 \end{cases} \quad (78)$$

Here in Eq. (78), $Bestvul_1$ represents the best vulture in the population while $Bestvul_2$ is the second best vulture among them; the parameters $r_1$ and $r_2$ are randomly defined numbers within the range [0,1] whose total summation is equal to 1. A famous roulette-wheel selection technique is practiced to determine the probability value $p_i$,

$$p_i = \frac{fit_i}{\sum_{i=1}^{n} fit_i} \quad (79)$$

Here $fit_i$ is the fitness value of a vulture that is a member of either the first or the second group, and n is the total number of vulture groups.

2.9.2 Strategy 2—determining the starvation rate of the vultures

The starvation degree of the vultures plays a critical role in employing the adequate manipulation scheme for the group members. If a random vulture in the swarm does not suffer from starvation, it has sufficient life energy to seek food across longer distances. Otherwise, if a vulture lacks energy, it does not have the opportunity to fly long distances to search for food and shows aggressive attacking behavior toward potential prey individuals. The starvation rate should be fairly determined in order to maintain balance between the exploration and exploitation phases. The parameter defining the starvation degree is called the hunger level F, computed by Eq. (80), which is a decisive indicator for shifting from the exploration to the exploitation phase,

$$F = \left(2 \times rand(0,1) + 1\right) \times rand(-1,1) \times \left(1 - \frac{iter}{Maxiter}\right) + t \quad (80)$$

where rand(-1,1) is a random value between -1.0 and 1.0 and t is calculated by

$$t = rand(-2,2) \times \left(\sin^{w}\left(\frac{\pi \, iter}{2\,Maxiter}\right) + \cos\left(\frac{\pi \, iter}{2\,Maxiter}\right) - 1\right) \quad (81)$$

The possibility of performing the exploitation mechanism is determined by assigning a specific value to the user-defined parameter w, and rand(-2,2) is a random value between -2.0 and 2.0. The numerical value of F gradually decreases with increasing iterations according to Eq. (80). When the absolute value of F is higher than 1.0, the algorithm enters the exploration phase to search for possible food locations. Otherwise, the food search process occurs in the vicinity of the best-known food location so far, facilitating the exploitation mechanism.

2.9.3 Strategy 3—performing the exploration phase

In the search condition of high exploration, vultures have the ability to detect poor and unhealthy dying animals as a food source. However, their ultimate task is exhausting and difficult as they need to perform an elaborate scrutinization over the entire range of their living habitat for a certain amount of time and reach long distances to probe around the possible food locations. The AFRICAN algorithm enables two different search strategies to be performed in a
random manner, which is decided by a parameter P1 valued between 0 and 1. To realize the exploration process, a random number within [0,1] is generated. Each vulture chooses its search environment based on its satiation level, which is decided by the procedure

$$X_i^{iter+1} = \begin{cases} R_i^{iter} - D_i^{iter} \times F_i^{iter}, & P_1 \ge rand_1(0,1) \\ R_i^{iter} - F_i^{iter} + rand_2(0,1) \times \left((UB - LB) \times rand_3(0,1) + LB\right), & P_1 < rand_1(0,1) \end{cases} \quad (82)$$

where $X_i^{iter+1}$ is the location of the ith vulture for the next iteration, $rand_i(0,1)$, i = 1,2,3, are uniform random numbers defined within [0,1], $R_i$ is calculated by Eq. (78), $F_i$ is computed by Eq. (80), LB and UB are the lower and upper bounds of the search space, and $D_i$ is the spatial distance between the specific vulture and the current optimal value,

$$D_i^{iter} = \left|rand(0,2) \times R_i^{iter} - X_i^{iter}\right| \quad (83)$$

where rand(0,2) is a random number between 0 and 2.

2.9.4 Strategy 4—performing exploitation phase

To maintain balance between the exploration and exploitation phases, the absolute value of the F parameter is evaluated. If this value is lower than 1.0, AFRICAN enters the exploitation phase, which is composed of two different complementary mechanisms. P2 and P3 are two important decisive parameters responsible for determining the governing search strategy. Parameter P2 is used to choose the search strategy employed in the first exploitation phase, while parameter P3 is utilized to determine the available search strategy practiced in the second phase. The first phase of the exploitation occurs when the numerical value of |F| is between 0.5 and 1.0, in which the two different strategies of rotating flight and siege fight are carried out in a random manner. Parameter P2 is used to decide which strategy is performed at this stage of the algorithm; it should be assigned a value between 0 and 1 before the search operation. A random value between 0 and 1, randP2(0,1), is generated at the initial stage of this phase. If the numerical value of this random number randP2(0,1) is lower than or equal to parameter P2, the siege fight strategy is implemented; otherwise, the rotating flight strategy is employed. This selection process is modeled by

$$X_i^{iter+1} = \begin{cases} \text{Eq. (85)} & P_2 \ge randP_2(0,1) \\ \text{Eq. (88)} & P_2 < randP_2(0,1) \end{cases} \quad (84)$$

Vultures enter the competition-for-food phase when |F| ≥ 0.5, which corresponds to a state where the vultures are satiated and have sufficient life energy. There may be a conflict between the competing vultures on food acquisition when food sources are limited and the detected food area is crowded. In that condition, strong and powerful vultures do not prefer to share their food with the weak ones. On the contrary, weak vultures aim to tire the strong vultures and grasp the collected food from the healthy strong vultures, causing some small conflicts and arguments between them. These attacking behaviors can be modeled by the below-given equations,

$$X_i^{iter+1} = D_i^{iter} \times \left(F + rand(0,1)\right) - d_t \quad (85)$$

$$d_t = R_i - X_i \quad (86)$$

where $D_i$ is calculated by Eq. (83), the satiation rate of the vultures F is calculated by Eq. (80), rand(0,1) is a random number defined between 0 and 1, $R_i$ is the best vulture of one of the two groups computed by Eq. (78), and $X_i$ is the current position of the vulture.

Vultures also employ a spiral attacking movement, which mathematically models the rotational flight movement between all vultures and the best vultures of the two groups,

$$S_1 = R_i \times \left(\frac{rand(0,1) \times X_i}{2\pi}\right) \times \cos(X_i), \qquad S_2 = R_i \times \left(\frac{rand(0,1) \times X_i}{2\pi}\right) \times \sin(X_i) \quad (87)$$

$$X_i^{iter+1} = R_i - \left(S_1 + S_2\right) \quad (88)$$

The second phase of the exploitation commences with the consistent siege and aggressive strife of the accumulated vultures over the previously explored search regions. If the numerical value of |F| is lower than 0.5, a random value between 0 and 1 (randP3) is generated. If this random value is lower than or equal to the user-defined parameter P3, the considered search strategy is to crowd the explored prey location with different types of vultures; otherwise, the aggressive siege fight strategy is employed. The following procedure is used to decide which search strategy is employed between the two above-mentioned alternatives,

$$X^{iter+1} = \begin{cases} \text{Eq. (91)} & \text{if } P_3 \ge randP_3 \\ \text{Eq. (92)} & \text{if } P_3 < randP_3 \end{cases} \quad (89)$$

Artificial vultures accumulate over the possible food sources by examining the movements of all vultures in the population. The formulations below represent the typical foraging behaviors performed by the vultures, facilitating the second phase of the exploitation.
$$A_1 = Bestvul_1^{iter} - \frac{Bestvul_1^{iter} \times X_i^{iter}}{Bestvul_1^{iter} - \left(X_i^{iter}\right)^2} \times F, \qquad A_2 = Bestvul_2^{iter} - \frac{Bestvul_2^{iter} \times X_i^{iter}}{Bestvul_2^{iter} - \left(X_i^{iter}\right)^2} \times F \quad (90)$$

In Eq. (90), $Bestvul_1^{iter}$ and $Bestvul_2^{iter}$ are, respectively, the best vultures of the first and second groups for the current iteration, F is the current satiation rate of the vultures, and $X_i^{iter}$ is the position of the ith vulture for the current iteration. Then, the updated spatial position of the ith vulture can be computed by the following scheme,

$$X_i^{iter+1} = \frac{A_1 + A_2}{2} \quad (91)$$

When |F| < 0.5, vultures become unhealthy and weak due to starvation and do not have the power to deal with the strong vultures in the population, therefore showing aggressive behaviors in their hard quest for food, which is mathematically modeled by the following scheme,

$$X_i^{iter+1} = R_i - \left|d(t)\right| \times F \times Levy(D) \quad (92)$$

In Eq. (92), d(t) is the distance between a vulture in the population and one of the best vultures of the two groups, which is calculated by Eq. (86), and Levy() stands for the levy flight distribution [87], calculated by

$$Levy(D) = 0.01 \times \frac{u \times \sigma}{|v|^{1/\beta}}, \qquad \sigma = \left(\frac{\Gamma(1+\beta) \times \sin\left(\pi\beta/2\right)}{\Gamma\left(\frac{1+\beta}{2}\right) \times \beta \times 2^{\left(\frac{\beta-1}{2}\right)}}\right)^{1/\beta} \quad (93)$$

where D is the problem dimension, u and v are random numbers between 0 and 1, and $\beta$ is a constant value fixed to 1.5. The algorithm below shows the explicit pseudo-code representation of the African Vultures Optimization Algorithm.
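The two quantities that drive the phase switching of AFRICAN, the levy step of Eq. (93) and the hunger level F of Eqs. (80)–(81), are easy to compute directly. The Python sketch below is illustrative only; the default exponent w = 2.5 is our assumption (the study does not list a value for w in Table 4), and the function names are ours.

```python
import numpy as np
from math import gamma, pi

rng = np.random.default_rng(11)

def levy(dim, beta=1.5):
    """Levy-flight step of Eq. (93), used in the |F| < 0.5 attack of Eq. (92)."""
    sigma = ((gamma(1 + beta) * np.sin(pi * beta / 2)) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.random(dim) * sigma
    v = rng.random(dim)
    return 0.01 * u / np.abs(v) ** (1 / beta)

def hunger_level(it, max_iter, w=2.5):
    """Hunger level F of Eqs. (80)-(81); w is the user-defined exponent."""
    t = rng.uniform(-2, 2) * (np.sin(pi * it / (2 * max_iter)) ** w +
                              np.cos(pi * it / (2 * max_iter)) - 1)
    return (2 * rng.random() + 1) * rng.uniform(-1, 1) * (1 - it / max_iter) + t

# |F| > 1 triggers exploration, otherwise exploitation (Strategy 2)
print(hunger_level(10, 1000), hunger_level(900, 1000), levy(3))
```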
2.9.5 Aquila optimization algorithm

Aquila Optimization Algorithm (AQUILA) is a swarm intelligence metaheuristic algorithm simulating the intelligent swarming and foraging behaviors of artificial aquilas, which are skilled and crafty hunters, second only to humans, with strong legs and sharp claws. These physical characteristics enable aquilas to catch various types of prey in their living habitat; they live in high mountains and other elevated locations. The AQUILA algorithm starts by initializing the trial candidate aquila population defined between the prescribed upper and lower bounds. Each candidate solution is generated by

$$X_{i,j} = LB_j + rand(0,1)\left(UB_j - LB_j\right), \quad i = 1,2,...,N, \; j = 1,2,...,D \quad (94)$$

where N is the population size, D is the problem dimension, and UB and LB are correspondingly the upper and lower bounds of the search space. The algorithm imitates the hunting behaviors of foraging aquilas, whose ruling attacking strategies can be categorized into four complementary steps.

2.9.6 Step 1: increased exploration (X1)

This stage of the optimization process is based on soaring high up in the sky, searching for prey way above the ground, and finding the most favorable prey among the suitable alternatives. Once the prey is detected, a smooth vertical dive toward the prey is performed. The mathematical model of this foraging skill can be expressed by

$$X1^{iter+1} = X_{best}^{iter} \times \left(1 - \frac{iter}{Maxiter}\right) + \left(X_{mean}^{iter} - X_{best}^{iter} \times rand(0,1)\right) \quad (95)$$

$$X_{mean}^{iter} = \frac{1}{N}\sum_{i=1}^{N} X_i^{iter}, \quad i = 1,2,...,N \quad (96)$$

where $X1^{iter+1}$ is the location of the foraging aquila for the next iteration obtained by the first search strategy, $X_{best}^{iter}$ is the best solution obtained until the current iteration, which is the estimated spatial location of the prey within the D-dimensional search hyperspace, iter is the current iteration while Maxiter is the maximum number of iterations defined as the termination criterion, and $X_{mean}^{iter}$ is the mean value of all aquilas in the population at the current iteration.

2.9.7 Step 2: narrowed exploration (X2)

The second step of the algorithm takes place when the foraging aquila soars up and detects the prey victims. Following that, aquilas decide to make spiral circles around the detected prey and perform rapid attacks. This attacking strategy is called contour flight with a short glide attack and is simulated by the following mathematical equation,

$$X2^{iter+1} = X_{best}^{iter} \times Levy(D) + X_{rand}^{iter} + (y - x) \times rand(0,1) \quad (97)$$

where $X2^{iter+1}$ is the solution for the next iteration obtained by the second search strategy (X2); D is the search dimension of the problem; Levy() is the function that draws a random number from a levy flight distribution for each problem dimension j = 1,2,...,D; and $X_{rand}^{iter}$ is a randomly chosen aquila from the swarm population. The spiral shape of the attacking movement is expressed by the implementation of the y and x variables into the search scheme, calculated by

$$y = r \times \cos(\theta) \quad (98)$$
$$x = r \times \sin(\theta) \quad (99)$$

where

$$r = n_1 + 0.00565 \times l \quad (100)$$
$$\theta = -0.005 \times l + 1.5\pi \quad (101)$$

The parameter $n_1$ takes a random integer value between 1 and 20; l is an integer number from 1 to the length of the search space (D).

2.9.8 Step 3: expanded exploitation (X3)

The third foraging skill is based on a vertical landing of the attacking aquila when it pinpoints the prey location. This method is called low flight with slow descent attack, which is found to be very effective in exploiting the fertile regions previously explored by the skillful foraging aquilas. This hunting behavior is modeled by the search scheme

$$X3^{iter+1} = \left(X_{best}^{iter} - X_{mean}^{iter}\right) \times 0.1 - rand(0,1) + \left((UB - LB) \times rand(0,1) + LB\right) \times 0.1 \quad (102)$$

where $X3^{iter+1}$ is the solution obtained by the third search strategy for the next iteration, $X_{best}^{iter}$ is the best solution retained until the current iteration, $X_{mean}^{iter}$ is the mean value of the aquila population individuals, UB and LB are the upper and lower bounds of the search space of the given optimization problem, and rand(0,1) is a uniform random value between 0 and 1.

2.9.9 Step 4: narrowed exploitation (X4)

This search strategy is activated when the intelligent foraging aquila gets close to the prey and quickly attacks using random stochastic movements. This method is
called walk and grab prey, which is mathematically modeled by the following expression,

$$X4^{iter+1} = QF \times X_{best}^{iter} - \left(G_1 \times X^{iter} \times rand(0,1)\right) - G_2 \times Levy(D) + rand(0,1) \times G_1 \quad (103)$$

where QF is a quality factor calculated by Eq. (104); $G_1$ represents the various hunting behaviors of aquilas utilized for chasing the prey individuals, calculated by Eq. (105); and $G_2$ represents an iteratively decreasing number from 2 to 0 computed by Eq. (106),

$$QF^{iter} = iter^{\frac{2 \times rand(0,1) - 1}{\left(1 - Maxiter\right)^2}} \quad (104)$$

$$G_1 = 2 \times rand(0,1) - 1 \quad (105)$$

$$G_2 = 2 \times \left(1 - \frac{iter}{Maxiter}\right) \quad (106)$$

Algorithm 9 provides the pseudo-code of the Aquila Optimization algorithm.
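The four AQUILA moves are typically selected by the stage of the run and a coin flip. The Python sketch below illustrates one plausible way to organize Eqs. (95), (97), (102) and (103); the two-thirds exploration/exploitation switch and all function names are our own assumptions, and `levy` refers to a helper such as the one sketched for the AFRICAN algorithm above.

```python
import numpy as np

rng = np.random.default_rng(5)

def aquila_step(X, best, mean, lb, ub, it, max_iter, levy):
    """Illustrative selection among the four AQUILA moves (Eqs. 95, 97, 102, 103)."""
    N, D = X.shape
    G1 = 2 * rng.random() - 1                                   # Eq. (105)
    G2 = 2 * (1 - it / max_iter)                                # Eq. (106)
    QF = it ** ((2 * rng.random() - 1) / (1 - max_iter) ** 2)   # Eq. (104)
    i = rng.integers(N)
    if it <= (2 / 3) * max_iter:                                # exploration steps (assumed split)
        if rng.random() < 0.5:                                  # increased exploration, Eq. (95)
            return best * (1 - it / max_iter) + (mean - best * rng.random())
        l = rng.integers(1, D + 1)                              # narrowed exploration, Eq. (97)
        r = rng.integers(1, 21) + 0.00565 * l                   # Eq. (100)
        theta = -0.005 * l + 1.5 * np.pi                        # Eq. (101)
        y, x = r * np.cos(theta), r * np.sin(theta)
        return best * levy(D) + X[i] + (y - x) * rng.random()
    if rng.random() < 0.5:                                      # expanded exploitation, Eq. (102)
        return (best - mean) * 0.1 - rng.random() + ((ub - lb) * rng.random() + lb) * 0.1
    # narrowed exploitation, Eq. (103)
    return QF * best - (G1 * X[i] * rng.random()) - G2 * levy(D) + rng.random() * G1
```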
2.9.10 Harris Hawks optimizer

Harris Hawks Optimization (HARRIS) is inspired by the collaborative and cooperative hunting behaviors of Harris Hawks; it mimics the prey-chasing style of Harris Hawks in nature called "surprise pounce". This collective attacking strategy is based on the cooperative blitz of several hawks over the detected prey individuals from different directions, with a consistent attempt to confuse them. Harris Hawks facilitate different attacking strategies and employ novel chasing patterns to collect available prey victims. The algorithm has shown competitive performance in solving various types of real-world complex engineering problems in previous literature studies. The HARRIS algorithm has clear advantages over the majority of its contemporaries in that it has a well-devised shifting mechanism between the exploration and exploitation phases.

The exploration phase of the algorithm is activated when the hawk individuals in the population are not able to effectively track the prey. Hawks opt for exploring the search domain and aim to identify the prey within the prevailing exploration phase. In the algorithm context, each trial solution is a hawk in the population, while the best solution obtained so far is the prey victim, which the foraging hawks are striving to catch. Artificial hawks decide to perch at a random location within the search domain and probe around the fertile regions in which prey individuals reside with the highest probability. Equation (107) simulates this hunting strategy through the below-given search scheme,

$$X^{iter+1} = \begin{cases} X_{rand}^{iter} - r_1\left|X_{rand}^{iter} - 2\,r_2\,X^{iter}\right|, & r_5 \ge 0.5 \\ \left(X_{rabbit}^{iter} - X_{mean}^{iter}\right) - r_3\left(LB + r_4\left(UB - LB\right)\right), & r_5 < 0.5 \end{cases} \quad (107)$$

where $X_{rand}^{iter}$ is a random hunting hawk in the population; $X_{rabbit}^{iter}$ is the best solution obtained until iteration iter, standing for the prey rabbit; $r_j$, j = 1,...,5, are uniform random numbers defined in the range [0,1]; UB and LB are, respectively, the upper and lower bounds of the search space; and $X_{mean}^{iter}$ is the mean value of the hawk population for the current iteration. In the above equation, a hunting hawk utilizes its current location, the projected prey position, and the mean position of the entire population to approach a new position when $r_5$ < 0.5. Meanwhile, when $r_5$ ≥ 0.5, the foraging hawks tend to perch on random locations within the predefined search ranges. One of the main advantages of the HARRIS algorithm is the intelligently defined shifting mechanism, which enables a smooth transition between the exploration and exploitation phases, regulated by the decreasing escaping energy level of the prey rabbits while eluding the blitzing hawk attacks,

$$E = 2\,E_0\left(1 - \frac{iter}{Maxiter}\right) \quad (108)$$

where E is the total energy level of the fleeing prey during its challenging escape from the consistent hawk attacks. The exploitation phase of the algorithm is initiated by the surprise pounce of the attacking hawks on the projected prey rabbit, which was previously detected in the exploration phase. As the preys on the radar make desperate attempts to avoid dangerous situations, the hunting hawks perform different attacking strategies and chasing methods to catch the prey. These foraging actions are decided by considering the active escaping behaviors of the fleeing rabbits as well as the chasing strategies of the Harris Hawks. Four different foraging methods are employed in the exploitation phase, which are briefly explained below. In their intrinsic nature, the preys tend to escape from the threatening hawk attacks during the hunting process. Assume that r is the escape probability of the prey rabbit from the surprise pounce attacks. The fleeing rabbit has two chances during the persistent attacks: it can safely elude the inflicted dangerous pounces (r < 0.5) or become an unlucky victim of the blitzing attacks. Hawks perform a hard or soft besiege depending on the current escaping energy level of the prey. A random switch between these two complementary foraging skills of hard and soft besiege is decided by the current escaping energy level, such that when |E| ≥ 0.5 a soft besiege proceeds, and when |E| < 0.5 a hard besiege occurs.

Soft besiege takes place when |E| ≥ 0.5 and r ≥ 0.5, indicating that the prey rabbit has enough energy to escape from the attacks by performing random jumps or misleading movements, yet fails in the end. The foraging hawks make circles around the prey with soft moves, which is modeled by

$$X^{iter+1} = \left(X_{rabbit}^{iter} - X^{iter}\right) - E\left|J \times X_{rabbit}^{iter} - X^{iter}\right| \quad (109)$$

where J is the jumping strength of the prey rabbit, representing its jumping ability throughout the hunting process, calculated by J = 2(1 - rand(0,1)).

A hard besiege attack occurs when |E| < 0.5 and r ≥ 0.5, in conditions where the prey is overly exhausted and has low escaping energy. Additionally, the hawks tightly surround the prey rabbit to perform the surprising pounce attack. This encircling mechanism covering the current situation of the foraging hawks can be modeled by

$$X^{iter+1} = X_{rabbit}^{iter} - E\left|X_{rabbit}^{iter} - X^{iter}\right| \quad (110)$$
Soft besiege with progressive rapid dives takes place when |E| ≥ 0.5 and r < 0.5, where the prey individual still has enough energy to escape from the hawk attacks. In order to confuse the fleeing prey, hawks perform surprising zig-zag moves around the prey rabbit with several spiral dives before the final pounce attack. In the first step of this attacking mechanism, hawks try to approach the prey rabbit by employing the equation

$$Y = X_{rabbit}^{iter} - E\left|J \times X_{rabbit}^{iter} - X^{iter}\right| \quad (111)$$

A conclusive comparison is made between the possible outcome of the current flight and that of the previous flight to gain insight into whether it is reasonable to dive or not. If it does not make sense to dive, the foraging hawks perform rapid and irregularly shaped dives to confuse the prey rabbits based on the levy flight concept through the expression

$$Z = X_{rabbit}^{iter} - E\left|J \times X_{rabbit}^{iter} - X^{iter}\right| + rand(1{:}D) \times Levy(D) \quad (112)$$

where Levy(D) is a D-dimensional vector of random numbers generated by the levy flight distribution and rand(1:D) produces a D-dimensional vector composed of randomly generated numbers. The solution update mechanism associated with this soft besiege phase can be simulated, taking into account the above-mentioned assumptions, through the expression

$$X^{iter+1} = \begin{cases} Y & \text{if } func(Y) < func\left(X^{iter}\right) \\ Z & \text{if } func(Z) < func\left(X^{iter}\right) \end{cases} \quad (113)$$

The hard besiege with progressive rapid dives foraging mechanism occurs when |E| < 0.5 and r < 0.5 and the prey does not have enough energy to escape from the surprise attacks. In this phase, Harris Hawks make persistent attempts to decrease the distance to the prey rabbit and aim to get closer to it by performing rapid dives before the pounce attacks. The attack movements of the hawks in this hard besiege phase can be formulated by the expression

$$X^{iter+1} = \begin{cases} Y_2 & \text{if } func(Y_2) < func\left(X^{iter}\right) \\ Z_2 & \text{if } func(Z_2) < func\left(X^{iter}\right) \end{cases} \quad (114)$$

where
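Since HARRIS chooses among four exploitation moves from the escaping energy E and the escape probability r, a compact dispatch function is a natural way to implement it. The Python sketch below is illustrative only (names such as `harris_exploit` are ours), assumes minimization, and reuses a `levy(D)` helper like the one sketched earlier.

```python
import numpy as np

rng = np.random.default_rng(9)

def harris_exploit(x, rabbit, mean, E, func, levy):
    """Illustrative choice among the four HHO exploitation moves (Eqs. 109-116)."""
    r = rng.random()
    J = 2 * (1 - rng.random())                               # jump strength of the rabbit
    D = x.shape[0]
    if abs(E) >= 0.5 and r >= 0.5:                           # soft besiege, Eq. (109)
        return (rabbit - x) - E * np.abs(J * rabbit - x)
    if abs(E) < 0.5 and r >= 0.5:                            # hard besiege, Eq. (110)
        return rabbit - E * np.abs(rabbit - x)
    ref = x if abs(E) >= 0.5 else mean                       # progressive rapid dives
    Y = rabbit - E * np.abs(J * rabbit - ref)                # Eq. (111) / (115)
    Z = Y + rng.random(D) * levy(D)                          # Eq. (112) / (116)
    if func(Y) < func(x):
        return Y
    if func(Z) < func(x):
        return Z
    return x
```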
$$Y_2 = X_{rabbit}^{iter} - E\left|J \times X_{rabbit}^{iter} - X_{mean}^{iter}\right| \quad (115)$$

$$Z_2 = X_{rabbit}^{iter} - E\left|J \times X_{rabbit}^{iter} - X_{mean}^{iter}\right| + rand(1{:}D) \times Levy(D) \quad (116)$$

A simple pseudo-code of the Harris Hawks optimization algorithm is provided below.

2.9.11 Barnacles mating optimizer

Barnacles Mating Optimization algorithm (BARNA) mimics the characteristic mating principles of barnacles living in their natural habitat. Barnacles are hermaphroditic microorganisms, which means that they have both male and female reproductive organs. One famous physical feature of barnacles is their large penis size relative to their body, which is seven or eight times longer than their total body length, allowing them to cope with challenging environmental conditions of varying difficulty. The mating group of a barnacle consists of all neighboring individuals within reach of its penis size. Variations in the penis length have a significant influence on determining the optimal group size. The basic principles of Hardy and Weinberg are utilized for generating new offspring in the Barnacles Mating Optimization algorithm, whose elementary steps are defined sequentially in the following paragraphs.

The initial population of barnacle individuals is produced by

$$X_{i,j} = LB_j + rand(0,1)\left(UB_j - LB_j\right), \quad i = 1,2,...,N, \; j = 1,2,...,D \quad (117)$$

where $X_{i,j}$ is the jth dimension of the ith barnacle in the population, N is the population size, D is the problem dimension, rand(0,1) is a random number within the range [0,1], and $UB_j$ and $LB_j$, respectively, stand for the jth dimension of the upper and lower search limits. The choice of the barnacles to be mated is decided by the length of the penis, pl. Selection of candidate barnacles for reproduction is based on the assumptions listed below.

- Penis length is the most important factor in selecting random barnacles for reproduction.
- A barnacle in the population is limited in its reproduction, with only one barnacle within each generation, since it receives its sperm from only one other barnacle, not from itself.
- If the same barnacle is considered for the mating procedure at a certain point, the algorithm disregards this individual, and the iterations proceed without employing the reproduction process.
- The sperm cast process occurs if the selection for the current iteration is larger than the penis size pl.

The reproduction process commences with the selection of the parents to be mated, formulated as

$$x_{barna}^{D} = randperm(X) \quad (118)$$

$$x_{barna}^{M} = randperm(X) \quad (119)$$

where $x_{barna}^{D}$ and $x_{barna}^{M}$ are randomly selected parents to be mated, and randperm() randomly shuffles the row elements of the main population matrix X to generate the trial population of the mated parents. The search equations proposed for modeling the reproduction phase of the BARNA algorithm are quite different compared to other evolutionary optimization algorithms in the literature. As there is no specific mathematical model to be employed for the reproduction of offspring, the BARNA algorithm emphasizes the inheritance characteristics of the parents in generating new offspring individuals, taking into account the fundamentals of the Hardy–Weinberg principle. To put it simply, the following search scheme is used to produce new offspring members resulting from the mating parents,

$$x_{new} = p \times x_{barna}^{D} + q \times x_{barna}^{M} \quad (120)$$

where p is a random number between 0 and 1 drawn from a uniform distribution (with q = 1 - p), and $x_{barna}^{D}$ and $x_{barna}^{M}$ are, respectively, the dad and mum of the generated barnacles. The sperm cast process happens when the selected barnacle's choice exceeds the penis length pl, which is a predetermined algorithm parameter assigned to a certain value before the iterative process is commenced. Offspring generation through sperm cast can be modeled by

$$x_{new} = rand(0,1) \times x_{barna}^{M} \quad (121)$$

The pseudo-code of the Barnacles Mating Optimizer is provided in the algorithm below.
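The whole BARNA reproduction step of Eqs. (118)–(121) fits in a few lines once the parent-distance rule is made explicit. The Python sketch below is illustrative only: the use of the index distance between shuffled parents as the pl criterion follows the selection rule described above, but the function name and structure are our own, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(13)

def barna_offspring(X, pl, rng):
    """Illustrative BARNA reproduction (Eqs. 118-121): parents are picked by
    shuffling the population; normal mating or sperm cast is chosen from the
    parent distance relative to the penis length pl."""
    N, D = X.shape
    dad = rng.permutation(N)                       # Eq. (118)
    mum = rng.permutation(N)                       # Eq. (119)
    offspring = np.empty_like(X)
    for k in range(N):
        if abs(dad[k] - mum[k]) <= pl:             # within reach: Hardy-Weinberg mating
            p = rng.random()
            offspring[k] = p * X[dad[k]] + (1 - p) * X[mum[k]]   # Eq. (120), q = 1 - p
        else:                                      # sperm cast, Eq. (121)
            offspring[k] = rng.random() * X[mum[k]]
    return offspring

# toy usage: 8 barnacles in a 4-dimensional space, penis length pl = 2
pop = rng.uniform(0, 1, (8, 4))
children = barna_offspring(pop, pl=2, rng=rng)
```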
3 Main motivation for the current study

This section aims to provide essential insights into why the current research study is performed and what future directions it offers to the existing literature, resulting from the outcomes of the numerical experiments made on the considered test subjects. It also demonstrates some of the key advantages and disadvantages of the compared algorithms in terms of solution efficiency and accuracy.

The recent surge in the rapid development and implementation of nature-inspired metaheuristic algorithms within the last five years is the main motivation behind this research study, as their comparative performances covering a broad range of problem domains have not been elaborately investigated. This not only impedes the spread of general knowledge in the community on the utmost capabilities of these proposed optimization methods but also gives limited ideas or insights into their optimization accuracies for problems in which a variety of dimensional and functional complexities occur. Since numerical investigations of the estimation performances of the proposed algorithms are made on a restricted range of benchmark problems and established under their own experimental conditions, there is no evidential literature approach unfolding the true optimization capabilities of these algorithms. Relying on the postulates of the No Free Lunch (NFL) theorem [20], indicating that there is not a single available metaheuristic algorithm able to solve all optimization problems of varying types and functional characteristics, researchers seek alternative ways to conquer the challenging outcomes of the NFL theorem, such as hybridizing two complementary metaheuristic algorithms [88], implementing chaotic sequences into the base algorithm rather than using uniformly generated random numbers [89], or introducing the fundamentals of reinforcement learning concepts into metaheuristic algorithms [90]. They have also developed novel metaheuristic algorithms utilizing innovative nature-inspired search schemes, which entail a considerable amount of improvement on accurate solutions for complex and large-scale benchmark problems compared to existing literature stochastic optimizers. Therefore, due to the lack of knowledge on the general performances of the recently proposed nature-inspired optimizers, this research study aims to explore the inherent pros and cons of eleven selected algorithms, including the RUNGE, GRAD, PRO, REPTILE, SNAKE, EQUIL, MANTA, AFRICAN, AQUILA, HARRIS, and BARNA optimizers, over a wide range of optimization problems. Despite their recent emergence, these methods have been applied to many engineering design cases covering a diverse set of scientific fields. Table 1 reports the inspirational sources of these algorithms and lists some of their major contributions to the existing literature approaches. Particularly for the HARRIS and EQUIL algorithms, there are plenty of engineering applications available in the existing accumulated literature, a small portion of
Table 1 Some early applications of the compared algorithms

Algorithm | Inspiration | Some literature applications

Runge Kutta optimization (RUNGE) | Mathematical foundations of the Runge–Kutta differential equation solver | Parameter identification of photovoltaic models [93]; multi-hydropower reservoir optimization [94]

Gradient-based Optimizer (GRAD) | Newton's famous gradient-based search method | Multi-objective optimization of real-world structural design optimization problems [95]; parameter estimation of PEM models [45]

Poor and rich optimization (PRO) | Desire to improve the current wealth levels of the poor and rich people in a community | Feature selection for text classification [96]; classification of similar documents [46]

Reptile search algorithm (REPTILE) | Foraging behaviors of crocodiles | Power systems engineering design [97]; selecting the important subsets for churn prediction [98]

Snake optimizer (SNAKE) | Reproduction and hunting behaviors of snakes | Avoiding cascading failures through a transmission expansion planning model [99]

Equilibrium optimizer (EQUIL) | Mathematical models to determine the equilibrium states of non-reacting particles in a control volume | Feature selection [100]; optimal operation of hybrid AC/DC grids [101]

Manta ray foraging optimizer (MANTA) | Hunting behaviors of manta rays | Economic dispatch problems [102]; global optimization and image segmentation [103]; optimal power flow problem [104]

African vultures optimization (AFRICAN) | Hunting styles of African vultures | Shell-and-tube heat exchanger design [105]; tuning PI controllers for hybrid renewable energy systems [106]; parameter estimation of three-diode solar photovoltaic models [107]

Aquila optimization (AQUILA) | Foraging behaviors of intelligent aquilas | Optimizing ANFIS model parameters for oil production forecasting [108]; gene selection in cancer classification [109]; optimal distribution of the generated energy across the grid network [110]

Harris Hawks Optimization (HARRIS) | Foraging behaviors of Harris Hawks | Optimal selection of the most significant chemical descriptors and chemical compound activities for drug design [111]; optimal design of microchannel heat sinks [112]; roller bearing design [113]

Barnacles mating optimizer (BARNA) | Mating behaviors of barnacles | Control of a pendulum system [114]; optimal chiller loading [115]; training a radial basis function neural network for parameter estimation of induction motors [116]
which is reported in Table 1. Although their implementation into a well-organized computer code is somewhat strict and challenging compared to other algorithms, their wide utilization covering different fields of the engineering domain has been noticed and recognized by the researchers of the optimization community.

It is worth emphasizing the actual fact that there are no clear evidential data or reassuring experimental findings on the true optimization performances of these algorithms, since each representative research study evaluating their prediction ability is accomplished with its own methodology and experimental conditions. Therefore, there is no common ground for discussing the weaknesses and strengths of these algorithms, and all decisive inferences regarding their inherent merits are based on a limited number of numerical tests dealing with a particular design case or solving a suite of specific unconstrained benchmark functions, without ever mentioning the effects of the "curse of dimensionality" in most of the comparative cases. One interesting point that should be emphasized is that most of the newly emerging metaheuristic algorithms do not have tunable algorithm parameters employed in the responsible search equations, which results in a significant improvement in the overall prediction accuracies of these algorithms and entails a
Table 2 Multimodal and unimodal test functions considered for benchmarking the optimization accuracies of the compared algorithms

Name | Type | Range | Dimension (D) | Opt. point

f1 Ackley function | C, D, Nsep, M | [-35, 35]^D | 30, 500, 1000 | 0.0
f2 Rastrigin function | C, D, Sep, M | [-5.12, 5.12]^D | 30, 500, 1000 | 0.0
f3 Griewank function | C, D, Nsep, M | [-100, 100]^D | 30, 500, 1000 | 0.0
f4 Zakharov function | C, D, Nsep, M | [-5, 10]^D | 30, 500, 1000 | 0.0
f5 Salomon function | C, D, Nsep, M | [-100, 100]^D | 30, 500, 1000 | 0.0
f6 Alpine function | C, ND, Sep, M | [-10, 10]^D | 30, 500, 1000 | 0.0
f7 Csendes function | C, D, Sep, M | [-1, 1]^D | 30, 500, 1000 | 0.0
f8 Schaffer function | C, D, Nsep, M | [-100, 100]^D | 30, 500, 1000 | 0.0
f9 Xin She Yang 2 function | DC, ND, Nsep, M | [-10, 10]^D | 30, 500, 1000 | 0.0
f10 Inverted Cosine Mixture function | Nsep, M | [-10, 10]^D | 30, 500, 1000 | 0.0
f11 Wavy function | C, D, Sep, M | [-pi, pi]^D | 30, 500, 1000 | 0.0
f12 Xin She Yang 3 function | DC, ND, Nsep, M | [-2pi, 2pi]^D | 30, 500, 1000 | 0.0
f13 Xin She Yang 4 function | DC, ND, Nsep, M | [-10, 10]^D | 30, 500, 1000 | -1.0
f14 Penalized1 function | M | [-50, 50]^D | 30, 500, 1000 | 0.0
f15 Pathological function | C, D, Nsep, M | [-100, 100]^D | 30, 500, 1000 | 0.0
f16 Quintic function | C, D, Sep, M | [-10, 10]^D | 30, 500, 1000 | 0.0
f17 Qing function | C, D, Sep, M | [-500, 500]^D | 30, 500, 1000 | 0.0
f18 Levy function | Nsep, M | [-10, 10]^D | 30, 500, 1000 | 0.0
f19 Sphere function | C, D, U | [0, 10]^D | 30, 500, 1000 | 0.0
f20 Brown function | C, D, Nsep, U | [-1, 4]^D | 30, 500, 1000 | 0.0
f21 Sum of different powers function | C, D, U | [-1, 1]^D | 30, 500, 1000 | 0.0
f22 Bent cigar function | C, D, U | [-100, 100]^D | 30, 500, 1000 | 0.0
f23 Sum of squares function | C, D, U | [-5.12, 5.12]^D | 30, 500, 1000 | 0.0
f24 Dropwave function | C, D, U | [-5.12, 5.12]^D | 30, 500, 1000 | -1.0
f25 Rosenbrock function | C, D, Nsep, U | [-30, 30]^D | 30, 500, 1000 | 0.0
f26 Discus function | C, D, U | [-100, 100]^D | 30, 500, 1000 | 0.0
f27 Dixon-Price function | C, D, Nsep, U | [-10, 10]^D | 30, 500, 1000 | 0.0
f28 Trid function | C, D, Nsep, U | [-10, 10]^D | 30, 500, 1000 | -D(D + 4)(D - 1)/6
f29 Schwefel 2.21 function | C, ND, Sep, U | [-100, 100]^D | 30, 500, 1000 | 0.0
f30 Schwefel 2.23 function | C, ND, Nsep, U | [-10, 10]^D | 30, 500, 1000 | 0.0
f31 Schwefel 2.25 function | C, D, Sep, U | [0, 10]^D | 30, 500, 1000 | 0.0
f32 Schwefel 2.20 function | C, ND, Sep, U | [-100, 100]^D | 30, 500, 1000 | 0.0
f33 Stretched Sine Wave function | C, D, Nsep, U | [-100, 100]^D | 30, 500, 1000 | 0.0
f34 Powell function | C, D, Nsep, U | [-4, 5]^D | 30, 500, 1000 | 0.0

C Continuous, DC Discontinuous, D Differentiable, ND Non-differentiable, Sep Separable, Nsep Non-separable, M Multimodal, U Unimodal
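All of the functions in Table 2 are scalable, that is, they are defined for an arbitrary dimension D and are evaluated here at D = 30, 500, and 1000. As an illustration only (not part of the study's own code), the short Python snippet below defines one of these benchmarks, the Rastrigin function f2, and evaluates it at the three dimensionalities within its search range.

```python
import numpy as np

def rastrigin(x):
    """Rastrigin function (f2 in Table 2): global optimum 0.0 at x = 0."""
    x = np.asarray(x)
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

rng = np.random.default_rng(17)
for dim in (30, 500, 1000):                      # the three dimensionalities used in this study
    x = rng.uniform(-5.12, 5.12, dim)            # search range of f2
    print(dim, rastrigin(x), rastrigin(np.zeros(dim)))
```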
quick and robust convergence, as it avoids the time-consuming and tiresome iterative parameter adjustment process.

4 Experimental methodology

This section presents the comparative investigation of the eleven above-mentioned emerging metaheuristic algorithms, taking into account different benchmark suites. The comparative algorithms are firstly evaluated on thirty-four scalable unconstrained optimization test functions composed of unimodal and multimodal test functions, and the respective predictive results are comparatively analyzed. These test functions have been commonly used by researchers as convenient test beds for evaluating the performance of their proposed algorithms. The functional characteristics, problem dimensionalities, search ranges, and global optimum points of each employed test function are correspondingly reported in Table 2. Unimodal test functions characteristically have no local optimum but only a global optimum point, whereas multimodal test functions locate many local optimum points within their defined
Table 3 Description of CEC 2013 benchmark functions (No. | Function | f*(x))

Unimodal functions:
1 Sphere function | -1400
2 Rotated High Conditioned Elliptic function | -1300
3 Rotated Bent Cigar function | -1200
4 Rotated Discus function | -1100
5 Different Powers function | -1000

Basic multimodal functions:
6 Rotated Rosenbrock function | -900
7 Rotated Schaffers F7 function | -800
8 Rotated Ackley function | -700
9 Rotated Weierstrass function | -600
10 Rotated Griewank function | -500
11 Rastrigin function | -400
12 Rotated Rastrigin function | -300
13 Non-continuous rotated Rastrigin function | -200
14 Schwefel function | -100
15 Rotated Schwefel function | 100
16 Rotated Katsuura function | 200
17 Lunacek Bi-Rastrigin function | 300
18 Rotated Lunacek Bi-Rastrigin function | 400
19 Expanded Griewank plus Rosenbrock function | 500
20 Expanded Schaffer F6 function | 600

Composition functions:
21 Composition function 1 (n = 5, rotated) | 700
22 Composition function 2 (n = 3, unrotated) | 800
23 Composition function 3 (n = 3, rotated) | 900
24 Composition function 4 (n = 3, rotated) | 1000
25 Composition function 5 (n = 3, rotated) | 1100
26 Composition function 6 (n = 5, rotated) | 1200
27 Composition function 7 (n = 5, rotated) | 1300
28 Composition function 8 (n = 5, rotated) | 1400

Search range: [-100, 100]^D
search ranges. Unimodal test functions are efficient benchmark samples only for assessing the exploitation performances of the algorithms, while multimodal test functions are prolific instruments for evaluating the exploration capabilities of the employed algorithm. These thirty-four benchmark functions, comprised of unimodal and multimodal problems with varying dimensionalities of 30, 500, and 1000D, are considered for the overall performance evaluation of the compared algorithms. There are many alternative benchmark cases in the existing literature; some of them are given in the corresponding references [91, 92]. However, the main advantage of using these artificially produced problems is their common and frequent application in benchmarking the optimization effectiveness of developed algorithms in literature approaches, which makes them reliable test alternatives, not only because of their widespread utilization on various types of metaheuristics but also because they establish a credible environment for assessing the search tendencies of the algorithms. The convergence success of these eleven algorithms is investigated and comparatively discussed based on the optimization results of these thirty-four optimization test functions. Following that, the scalabilities of these algorithms are evaluated on their respective results for the 500D and 1000D unimodal and multimodal test functions. The computational runtimes of each algorithm for each test function are compared for 2000 function evaluations, and a decisive conclusion is drawn as to which algorithm imposes the minimum computational load on the processors. The second phase of the comparative investigation between the algorithms is based on the optimization results of the continuous benchmark functions from the 2013 IEEE Congress on
Table 4 Parameter settings of each competitive algorithm

Algorithm Parameters
RUNGE No tunable algorithm parameters involved
GRAD No tunable algorithm parameters involved
PRO No tunable algorithm parameters involved
REPTILE Epsilon parameter used to avoid division-by-zero errors – e = 10^-10
Parameters controlling the exploration accuracy of the algorithm – a = 0.1, b = 0.1
SNAKE Algorithm constant – c1 = 0.5
Algorithm constant – c2 = 0.05
Algorithm constant – c3 = 2.0
EQUIL Parameter controlling the exploration capacity of algorithm a1 = 2.0
Parameter controlling the exploitation capacity of algorithm a2 = 1.0
Parameter balancing the exploration and exploitation capacity of the algorithm – GP = 0.5
MANTA Somersault factor S = 2
AFRICAN No tunable algorithm parameters involved
AQUILA Random algorithm parameter – r1 ∈ [1, 20]; Random algorithm parameter – D1 ∈ [1, D]
Algorithm constant – U = 0.00565; Algorithm constant – ω = 0.005
HARRIS No tunable algorithm parameters involved
BARNA No tunable algorithm parameters involved

Evolutionary Computation [117]. These functions are briefly summarized in Table 3. The evolution of the design variables toward their optimal solutions is iteratively plotted in the convergence curves, each of which is constructed for each of the 28 test instances of the CEC 2013 benchmark problems and for the eleven compared metaheuristic algorithms.
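A minimal sketch of how such convergence curves can be produced is given below; it is an illustrative Python example rather than the original MATLAB implementation, and random_search, sphere, and the plotting choices (such as the logarithmic fitness axis) are placeholder assumptions.

import numpy as np
import matplotlib.pyplot as plt

def sphere(x):
    # simple stand-in objective used only for this illustration
    return float(np.sum(np.asarray(x) ** 2))

def random_search(objective, dim, bounds, iterations, pop_size, rng):
    # placeholder optimizer: yields the best-so-far fitness after every iteration
    best = np.inf
    for _ in range(iterations):
        population = rng.uniform(bounds[0], bounds[1], size=(pop_size, dim))
        best = min(best, min(objective(ind) for ind in population))
        yield best

rng = np.random.default_rng(42)
curve = list(random_search(sphere, dim=30, bounds=(-100, 100),
                           iterations=100, pop_size=20, rng=rng))

plt.semilogy(curve)                          # fitness values usually span many orders of magnitude
plt.xlabel("Iteration")
plt.ylabel("Best objective value found so far")
plt.show()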
For the standard continuous test functions composed of thirty-four unimodal and multimodal problems, a total number of 1000 function evaluations have been performed over 30 independent algorithm runs owing to the stochastic natures of the algorithms. Statistical analysis has been performed on the obtained set of solutions, and the prediction accuracies of the algorithms have been assessed in terms of the best, mean, worst, and standard deviation results of the consecutive runs.
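The sketch below illustrates this statistical summary in Python (used here purely for illustration, since the original experiments were coded in MATLAB): a stochastic optimizer is repeated for 30 independent runs and the final fitness values are reduced to the best, mean, standard deviation, and worst columns reported in the tables; dummy_optimizer is a hypothetical placeholder for any of the compared metaheuristics.

import numpy as np

def collect_run_statistics(optimizer, n_runs=30):
    # repeat the optimizer and summarize the final fitness values of all runs
    finals = np.array([optimizer(seed=run) for run in range(n_runs)])
    return {"best": finals.min(),
            "mean": finals.mean(),
            "std": finals.std(ddof=1),    # sample standard deviation over the 30 runs
            "worst": finals.max()}

def dummy_optimizer(seed):
    # placeholder returning a small noisy value near the known optimum of zero
    rng = np.random.default_rng(seed)
    return abs(rng.normal(loc=0.0, scale=1e-8))

print(collect_run_statistics(dummy_optimizer))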
The compared metaheuristic algorithms are implemented in the MATLAB environment and run on a desktop computer with an Intel Core i5-8300H CPU @ 2.30 GHz and 8.0 GB of RAM. Parameter settings of the algorithms are given in Table 4 and remain constant during the course of iterations. Previous cumulative experiences of the authors, along with the insightful recommendations of the algorithm developers in their respective original articles concerning the accurate values of the algorithm constants, play an important role during the exhaustive parameter setting process. The next section provides an extensive and conducive discussion on the optimization results of the standard scalable test functions.
4.1 Comprehensive analysis on exploration performance of the algorithms

The exploration abilities of the eleven mentioned metaheuristic optimizers are evaluated through multimodal test functions, which are the challenging benchmark problems (f1–f18) defined in Table 2. These test functions include a multitude of local optimum points located in the search space, and the inherent complexities of these functions dramatically increase with increasing problem dimensionality. Therefore, they are efficient test beds for evaluating the local minimum avoidance of optimization algorithms. Tables 5 and 6 report the predictions of the eleven compared metaheuristic algorithms for the 30-dimensional multimodal benchmark functions.

REPTILE algorithm provides the best predictions for 13 out of the 18 multimodal test functions and becomes one of the trailblazing algorithms among the competitive optimizers. MANTA algorithm obtains the best results for the f1, f2, f3, f8, f11, f13, and f16 test functions and becomes one of the successful algorithms regarding the overall estimation performance. AFRICAN algorithm obtains the best results for the f1, f2, f3, f8, f11, f13, and f15 test functions. AQUILA algorithm is another effective method, acquiring the most accurate predictions for f2, f3, f8, f11, f13, f14, f15, and f16. Table 7 reports the ranking points of the algorithms according to their best prediction results for the 30D multimodal test functions, in which the best-performing algorithm obtains a ranking point of 1 while the worst method's ranking point is 11. It is seen that despite the dominating performance of REPTILE algorithm with reaching the global optimum points of twelve multimodal 30D test
Table 5 Statistical results of the compared algorithms for test problems between f1—Ackley and f8—Schaffer
f1- Ackley f2- Rastrigin
Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 8.88E-16 3.66E-10 1.83E-09 1.00E-08 0.00E?00 3.79E-15 2.08E-14 1.14E-13


AQUILA 1.23E-11 1.11E-06 5.78E-06 3.17E-05 0.00E?00 2.45E-10 1.06E-09 5.75E-09
BARNA 1.15E-14 3.59E-09 1.57E-08 8.61E-08 0.00E?00 3.41E-14 1.87E-13 1.02E-12
EQUIL 2.22E-01 4.74E-01 2.12E-01 1.13E?00 4.01E?01 7.97E?01 2.33E?01 1.33E?02
GRAD 8.88E-16 4.45E-14 9.30E-14 4.56E-13 0.00E?00 0.00E?00 0.00E?00 0.00E?00
HARRIS 8.43E-11 2.57E-07 8.23E-07 4.49E-06 0.00E?00 1.29E-10 4.00E-10 2.09E-09
MANTA 8.88E-16 8.88E-16 0.00E?00 8.88E-16 0.00E?00 0.00E?00 0.00E?00 0.00E?00
PRO 8.88E-16 8.88E-16 0.00E?00 8.88E-16 0.00E?00 0.00E?00 0.00E?00 0.00E?00
REPTILE 8.88E-16 8.88E-16 0.00E?00 8.88E-16 0.00E?00 0.00E?00 0.00E?00 0.00E?00
RUNGE 5.05E-09 1.18E-06 1.41E-06 5.66E-06 5.68E-14 1.01E-07 4.68E-07 2.56E-06
SNAKE 5.21E-04 4.97E-01 5.97E-01 2.29E?00 1.78E?00 5.84E?01 6.76E?01 2.54E?02

f3 - Griewank f4 - Zakharov
Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 0.00E?00 7.77E-17 4.26E-16 2.33E-15 2.62E-29 2.57E?01 7.21E?01 3.09E?02


AQUILA 0.00E?00 1.40E-14 4.82E-14 2.07E-13 2.19E-17 2.29E-01 5.97E-01 2.66E?00
BARNA 0.00E?00 0.00E?00 0.00E?00 0.00E?00 3.77E-34 3.20E-16 1.27E-15 6.69E-15
EQUIL 2.39E-03 1.62E-02 2.90E-02 1.55E-01 9.07E?01 2.07E?02 6.99E?01 3.96E?02
GRAD 0.00E?00 0.00E?00 0.00E?00 0.00E?00 2.21E-25 1.85E-16 9.70E-16 5.32E-15
HARRIS 0.00E?00 1.23E-13 6.01E-13 3.30E-12 3.80E-13 1.58E?02 2.70E?02 9.18E?02
MANTA 0.00E?00 0.00E?00 0.00E?00 0.00E?00 7.64E-43 5.21E-33 2.37E-32 1.29E-31
PRO 0.00E?00 0.00E?00 0.00E?00 0.00E?00 7.49E-67 2.27E-59 6.24E-59 2.44E-58
REPTILE 0.00E?00 0.00E?00 0.00E?00 0.00E?00 0.00E?00 2.10E-32 8.91E-32 4.69E-31
RUNGE 0.00E?00 3.54E-13 1.02E-12 4.76E-12 3.91E-04 9.96E-01 2.36E?00 1.04E?01
SNAKE 4.68E-08 8.40E-04 1.95E-03 8.35E-03 8.16E-08 1.37E-02 4.55E-02 2.45E-01

f5 - Salomon f6 - Alpine
Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 5.57E-20 2.36E-04 1.28E-03 7.02E-03 7.39E-18 9.60E-10 4.86E-09 2.67E-08


AQUILA 1.25E-11 1.64E-03 6.04E-03 3.13E-02 4.24E-11 4.25E-04 1.38E-03 7.07E-03
BARNA 2.89E-15 4.11E-10 1.20E-09 5.98E-09 2.69E-16 7.78E-10 3.34E-09 1.81E-08
EQUIL 4.00E-01 5.83E-01 1.05E-01 9.00E-01 1.19E-01 3.74E-01 4.20E-01 2.35E?00
GRAD 3.46E-11 1.67E-02 3.79E-02 9.99E-02 3.37E-18 7.45E-09 4.08E-08 2.23E-07
HARRIS 1.07E-10 3.32E-05 1.76E-04 9.65E-04 2.29E-11 1.68E-07 3.89E-07 1.98E-06
MANTA 1.31E-22 1.14E-05 6.24E-05 3.42E-04 9.00E-24 1.20E-18 3.19E-18 1.35E-17
PRO 1.79E-34 2.09E-27 7.01E-27 3.36E-26 7.65E-35 3.63E-31 7.33E-31 3.51E-30
REPTILE 0.00E?00 0.00E?00 0.00E?00 0.00E?00 0.00E?00 0.00E?00 0.00E?00 0.00E?00
RUNGE 9.99E-02 9.99E-02 5.52E-10 9.99E-02 3.69E-09 8.38E-07 2.26E-06 1.19E-05
SNAKE 5.54E-02 1.58E-01 7.86E-02 4.00E-01 3.24E-04 2.97E-01 1.19E?00 6.53E?00

f7 - Csendes f8 - Schaffer
Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 1.15E-128 4.44E-55 1.74E-54 8.37E-54 0.00E?00 2.09E-12 1.14E-11 6.24E-11


AQUILA 6.47E-70 4.31E-44 2.28E-43 1.25E-42 0.00E?00 1.21E-12 4.20E-12 2.05E-11
BARNA 4.55E-84 3.64E-45 1.95E-44 1.07E-43 0.00E?00 8.25E-16 3.30E-15 1.75E-14
EQUIL 3.87E-04 3.86E-02 6.72E-02 2.87E-01 3.89E?00 5.87E?00 9.87E-01 7.29E?00
GRAD 5.16E-96 8.68E-77 4.66E-76 2.55E-75 0.00E?00 2.15E-12 8.98E-12 4.64E-11
HARRIS 1.14E-59 4.34E-34 2.38E-33 1.30E-32 0.00E?00 2.38E-07 1.30E-06 7.15E-06
MANTA 1.14E-143 2.95E-106 9.18E-106 4.29E-105 0.00E?00 0.00E?00 0.00E?00 0.00E?00
PRO 9.52E-210 4.50E-174 0.00E?00 1.35E-172 0.00E?00 0.00E?00 0.00E?00 0.00E?00
REPTILE 0.00E?00 1.27E-289 0.00E?00 3.80E-288 0.00E?00 0.00E?00 0.00E?00 0.00E?00
RUNGE 5.24E-44 4.27E-23 2.12E-22 1.16E-21 2.02E-03 2.80E?00 1.68E?00 7.26E?00
SNAKE 7.95E-21 2.56E-08 1.20E-07 6.56E-07 5.99E-03 4.39E-01 4.68E-01 1.82E?00
Table 6 Comparison of the eleven optimization algorithms on multimodal test functions from f9 – Yang2 to f18 – Levy
Problem f9- Yang2 f10-inverted cosine mixture
Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 3.67E-12 3.12E-09 1.41E-08 7.72E-08 3.49E-33 7.55E-16 4.12E-15 2.26E-14


AQUILA 3.51E-12 3.52E-12 1.20E-14 3.55E-12 6.90E-22 1.59E-10 8.43E-10 4.62E-09
BARNA 6.00E-13 5.00E-06 6.90E-06 2.56E-05 6.48E-30 1.41E-12 7.73E-12 4.23E-11
EQUIL 1.27E-09 1.12E-06 3.89E-06 2.14E-05 6.48E-01 1.61E?00 4.80E-01 2.46E?00
GRAD 3.52E-12 4.64E-12 2.83E-12 1.67E-11 4.21E-32 1.80E-26 5.56E-26 2.90E-25
HARRIS 3.52E-12 6.17E-08 2.42E-07 1.18E-06 5.32E-20 1.20E-11 4.80E-11 2.61E-10
MANTA 6.12E-11 2.05E-09 3.23E-09 1.47E-08 7.11E-45 3.03E-34 1.62E-33 8.89E-33
PRO 2.86E-08 5.61E-06 1.14E-05 5.59E-05 1.45E-69 1.18E-59 3.25E-59 1.65E-58
REPTILE 9.92E-10 3.74E-07 1.26E-06 6.74E-06 0.00E?00 0.00E?00 0.00E?00 0.00E?00
RUNGE 1.58E-10 2.81E-08 1.20E-07 6.59E-07 1.47E-18 2.58E-10 9.70E-10 5.12E-09
SNAKE 3.51E-12 7.04E-09 3.85E-08 2.11E-07 3.32E-04 1.74E-01 3.65E-01 1.47E?00
f11- Wavy f12- Yang3
Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 0.00E?00 1.31E-12 7.16E-12 3.92E-11 1.05E-23 6.67E-13 2.00E-12 8.00E-12


AQUILA 0.00E?00 5.82E-11 2.80E-10 1.53E-09 7.19E-09 1.97E-04 4.84E-04 1.80E-03
BARNA 0.00E?00 2.46E-15 1.28E-14 7.01E-14 6.02E-19 4.93E-12 1.61E-11 6.74E-11
EQUIL 6.23E-01 7.05E-01 4.36E-02 7.89E-01 1.10E-04 5.43E-02 5.96E-02 2.32E-01
GRAD 0.00E?00 1.16E-14 6.37E-14 3.49E-13 4.51E-18 2.15E-10 8.80E-10 4.80E-09
HARRIS 0.00E?00 3.75E-10 1.96E-09 1.07E-08 6.44E-14 7.51E-08 1.96E-07 9.86E-07
MANTA 0.00E?00 0.00E?00 0.00E?00 0.00E?00 7.78E-24 1.12E-14 4.89E-14 2.61E-13
PRO 0.00E?00 0.00E?00 0.00E?00 0.00E?00 3.12E-35 9.78E-27 5.26E-26 2.88E-25
REPTILE 0.00E?00 0.00E?00 0.00E?00 0.00E?00 0.00E?00 0.00E?00 0.00E?00 0.00E?00
RUNGE 3.80E-10 1.60E-03 6.36E-03 3.18E-02 5.60E-19 1.74E-12 8.30E-12 4.54E-11
SNAKE 4.96E-05 3.91E-01 3.18E-01 8.86E-01 1.63E-06 1.99E-03 3.55E-03 1.67E-02

f13 - Yang4 f14 - Penalized1


Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN - 1.00E?00 - 9.98E-01 5.02E-03 - 9.79E-01 3.57E-05 6.86E-03 8.19E-03 3.15E-02


AQUILA - 1.00E?00 - 9.86E-01 5.27E-02 - 7.36E-01 7.02E-08 2.99E-05 4.71E-05 1.82E-04
BARNA - 1.00E?00 - 1.00E?00 3.77E-07 - 1.00E?00 4.97E-01 7.76E-01 1.64E-01 1.23E?00
EQUIL 1.46E-12 3.12E-12 1.14E-12 5.99E-12 5.62E-02 1.36E-01 6.96E-02 3.78E-01
GRAD - 1.00E?00 - 9.63E-01 1.71E-01 - 8.05E-02 9.09E-06 2.96E-04 5.01E-04 2.25E-03
HARRIS - 1.00E?00 - 1.00E?00 1.06E-05 - 1.00E?00 1.04E-05 1.89E-03 4.44E-03 2.44E-02
MANTA - 1.00E?00 - 1.00E?00 1.59E-05 - 1.00E?00 1.93E-02 6.10E-02 2.59E-02 1.28E-01
PRO - 1.00E?00 - 1.00E?00 0.00E?00 - 1.00E?00 1.07E?00 1.46E?00 1.88E-01 1.67E?00
REPTILE - 1.00E?00 - 9.96E-01 2.13E-02 - 8.83E-01 4.05E-02 3.24E-01 1.90E-01 7.70E-01
RUNGE 1.16E-13 5.63E-13 2.82E-13 1.26E-12 2.34E-03 5.36E-03 2.50E-03 1.11E-02
SNAKE - 9.62E-01 - 2.18E-01 3.08E-01 1.89E-12 3.77E-05 3.99E-02 8.88E-02 4.56E-01

f15- Path f16- Quintic


Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 0.00E?00 5.71E-01 1.85E?00 8.16E?00 1.17E?00 1.43E?01 1.15E?01 4.79E?01


AQUILA 0.00E?00 7.36E-04 1.21E-03 3.91E-03 3.44E-02 6.24E-01 5.94E-01 2.55E?00
BARNA 0.00E?00 1.03E-07 5.60E-07 3.07E-06 7.34E?01 8.63E?01 5.95E?00 9.79E?01
EQUIL 7.13E?00 9.22E?00 8.48E-01 1.06E?01 4.77E?01 6.37E?01 1.21E?01 9.32E?01
GRAD 5.33E-13 1.71E?00 3.47E?00 9.85E?00 1.78E-01 3.08E?00 2.84E?00 9.84E?00
Table 6 (continued)
f15- Path f16- Quintic
Best Mean Std Dev Worst Best Mean Std Dev Worst

HARRIS 1.22E-15 1.39E?00 3.16E?00 9.10E?00 1.99E-01 4.61E?00 3.29E?00 1.25E?01


MANTA 0.00E?00 2.00E?00 3.80E?00 1.03E?01 1.82E?01 3.21E?01 5.77E?00 4.36E?01
PRO 0.00E?00 0.00E?00 0.00E?00 0.00E?00 8.58E?01 1.06E?02 8.95E?00 1.18E?02
REPTILE 0.00E?00 0.00E?00 0.00E?00 0.00E?00 4.00E?01 7.38E?01 2.78E?01 1.18E?02
RUNGE 5.20E?00 8.16E?00 1.23E?00 1.01E?01 1.40E?01 2.13E?01 3.96E?00 2.99E?01
SNAKE 7.18E-07 1.78E-01 6.60E-01 3.62E?00 7.75E-01 2.64E?01 2.78E?01 9.96E?01

f17- Qing f18—Levy


Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 3.99E?02 1.11E?03 4.76E?02 1.95E?03 9.01E-04 4.37E-02 9.41E-02 4.70E-01


AQUILA 1.25E?03 2.44E?03 7.49E?02 4.58E?03 4.03E-05 1.03E-02 2.16E-02 1.14E-01
BARNA 3.19E?03 4.42E?03 6.74E?02 5.47E?03 4.96E?00 6.29E?00 7.08E-01 7.88E?00
EQUIL 6.24E?02 1.27E?03 3.99E?02 2.28E?03 4.48E-01 1.18E?00 4.31E-01 1.97E?00
GRAD 1.26E?03 1.84E?03 2.49E?02 2.35E?03 2.60E-06 1.08E-03 1.43E-03 6.17E-03
HARRIS 8.26E+02 1.48E+03 2.58E+02 1.90E+03 3.39E-05 4.93E-03 6.94E-03 2.45E-02
MANTA 2.61E?02 7.44E?02 2.68E?02 1.28E?03 1.58E-01 5.10E-01 1.77E-01 9.01E-01
PRO 2.82E?03 4.80E?03 9.54E?02 7.17E?03 6.01E?00 1.02E?01 1.39E?00 1.14E?01
REPTILE 2.82E?03 3.83E?03 5.38E?02 5.18E?03 8.71E?00 1.01E?01 7.71E-01 1.14E?01
RUNGE 8.26E?01 5.23E?02 2.97E?02 1.06E?03 6.99E-03 2.89E-02 1.90E-02 8.22E-02
SNAKE 1.86E?03 2.29E?03 3.19E?02 3.71E?03 2.06E-04 5.27E-01 9.03E-01 3.80E?00

functions and becoming the leading algorithm for these cases, the prediction accuracies obtained for the remaining multimodal benchmark functions by this algorithm are so erroneous and deceptive that this optimizer is only placed third in terms of overall best estimation, as observed in Table 7. In this context, MANTA algorithm becomes the best performer with a ranking point of 3.05, followed by AFRICAN algorithm with a respective ranking point of 3.11. Among them, EQUIL algorithm yields the worst estimations with the corresponding ranking point value of 10.16. Table 8 evaluates these competitive metaheuristic algorithms with respect to their ranking points obtained for their mean fitness values. Taking the mean deviation rates into account, it is observed that the best-performing methods are, respectively, MANTA, REPTILE, and PRO, sorted based on their order of prediction success. EQUIL algorithm has the worst mean results for the 30D multimodal problems. According to the descriptive statistical results obtained for the 30D multimodal optimization benchmark problems, which include the best, mean, standard deviation, and worst solutions found by the compared metaheuristic algorithms, Friedman's test analysis for multiple comparisons of the representative algorithms has been performed considering the optimization outcomes of 34 benchmark functions. Relying on the ranking points of the algorithms assigned to their averaged mean fitness values of 30 independent runs for eighteen different multimodal functions given in Table 8, a statistically significant difference between the compared algorithms is observed with the corresponding p-value of 1.64E-12, which is much smaller than the predetermined threshold value of 0.05. This numerical behavior indicates that there are significant differences in the general performances of the algorithms in solving multimodal test problems.
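To make the significance analysis concrete, the following minimal Python sketch (using SciPy, and therefore an assumption rather than the authors' MATLAB code) applies Friedman's test to a matrix of mean fitness values with one row per test function and one column per algorithm, and also computes the average rank of every algorithm; the random matrix is only a stand-in for the values reported in Table 8.

import numpy as np
from scipy.stats import friedmanchisquare, rankdata

rng = np.random.default_rng(0)
mean_fitness = rng.random((18, 11))   # placeholder: 18 multimodal functions x 11 algorithms

# Friedman's test treats each test function as a block and compares the algorithms
stat, p_value = friedmanchisquare(*[mean_fitness[:, j] for j in range(mean_fitness.shape[1])])

# per-function ranks (1 = best) and the average rank of each algorithm
ranks = np.apply_along_axis(rankdata, 1, mean_fitness)
average_rank = ranks.mean(axis=0)

print(f"Friedman p-value: {p_value:.3e}")
print("Average ranks:", np.round(average_rank, 2))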
4.2 Comprehensive analysis on the exploitation capabilities of the compared algorithms

This section aims to analyze and comparatively investigate the general capabilities of the algorithms regarding the intensification of the fertile areas discovered in the preceding exploration phase. Algorithms with strong exploitation capabilities are able to cope with the complexities of the search space, with a view to reaching the one and only global optimum point of the problem. Tables 9 and 10 report the optimal results for 30D standard
Table 7 Ranking points of the algorithms based on the best prediction results of 30 D multimodal functions
AFRICAN AQUILA BARNA EQUIL GRAD HARRIS MANTA PRO REPTILE RUNGE SNAKE

f1 1 7 6 11 1 8 1 1 1 9 10
f2 1 1 1 11 1 1 1 1 1 9 10
f3 1 1 1 11 1 1 1 1 1 1 10
f4 5 7 4 11 6 8 3 2 1 10 9
f5 4 6 5 11 7 8 3 2 1 10 9
f6 5 8 6 11 4 7 3 2 1 9 10
f7 4 7 6 11 5 8 3 2 1 9 10
f8 1 1 1 11 1 1 1 1 1 9 10
f9 6 2 1 10 4 4 7 11 9 8 2
f10 4 7 6 11 5 8 3 2 1 9 10
f11 1 1 1 11 1 1 1 1 1 9 10
f12 4 9 6 11 7 8 3 2 1 5 10
f13 1 1 1 11 1 1 1 1 1 10 9
f14 4 1 10 9 2 3 7 11 8 6 5
f15 1 1 1 11 8 7 1 1 1 10 9
f16 5 1 10 9 2 3 7 11 8 6 4
f17 3 6 11 4 7 5 2 9 9 1 8
f18 5 3 9 8 1 2 7 10 11 6 4
Average rank 3.11 3.88 4.77 10.16 3.55 4.66 3.05 3.94 3.22 7.55 8.27
Overall rank 2 5 8 11 4 7 1 6 3 9 10

Table 8 Ranking points of the algorithms based on their mean deviation results for 30D multimodal test problems
AFRICAN AQUILA BARNA EQUIL GRAD HARRIS MANTA PRO REPTILE RUNGE SNAKE

f1 5 8 6 10 4 7 1 1 1 9 11
f2 5 8 6 11 1 7 1 1 1 9 10
f3 6 7 1 11 1 8 1 1 1 9 10
f4 9 7 5 11 4 10 2 1 3 8 6
f5 6 7 3 11 8 5 4 2 1 9 10
f6 5 9 4 11 6 7 3 2 1 8 10
f7 5 7 6 11 4 8 3 2 1 9 10
f8 6 5 4 11 7 8 1 1 1 10 9
f9 4 1 10 9 2 7 3 11 8 6 5
f10 5 8 6 11 4 7 3 2 1 9 10
f11 6 7 4 11 5 8 1 1 1 9 10
f12 4 9 6 11 7 8 3 2 1 5 10
f13 5 7 1 11 8 1 1 1 6 10 9
f14 5 1 10 8 2 3 7 11 9 4 6
f15 6 4 3 11 8 7 9 1 1 10 5
f16 4 1 10 8 2 3 7 11 9 5 6
f17 3 8 10 4 6 5 2 11 9 1 7
f18 5 3 9 8 1 2 6 11 10 4 7
Average rank 5.22 5.94 5.77 9.94 4.44 6.16 3.22 4.05 3.61 7.44 8.38
Overall rank 5 7 6 11 4 8 1 3 2 9 10
Table 9 Statistical comparison of the eleven algorithms for 30D unimodal functions from f19 – Sphere to f26 – Discus
Problem f19 – Sphere f20 – Brown
Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 1.31E-36 1.95E-17 6.44E-17 3.19E-16 6.23E-34 9.30E-17 3.67E-16 1.79E-15


AQUILA 4.66E-24 1.99E-14 9.18E-14 4.97E-13 8.47E-21 7.06E-13 3.75E-12 2.05E-11
BARNA 3.88E-34 1.35E-12 7.37E-12 4.04E-11 1.51E-28 1.61E-16 7.40E-16 4.06E-15
EQUIL 3.59E-02 1.73E-01 1.49E-01 7.40E-01 5.72E-01 3.92E?00 3.73E?00 1.68E?01
GRAD 6.56E-32 1.76E-27 4.54E-27 2.06E-26 3.16E-33 1.57E-26 7.50E-26 4.12E-25
HARRIS 5.23E-20 2.23E-13 5.03E-13 2.18E-12 1.12E-21 5.09E-13 1.92E-12 1.04E-11
MANTA 9.40E-46 4.18E-36 1.92E-35 1.05E-34 1.09E-44 1.56E-34 7.97E-34 4.37E-33
PRO 2.67E-67 6.87E-61 1.17E-60 3.89E-60 4.78E-67 3.88E-59 1.57E-58 7.92E-58
REPTILE 0.00E?00 0.00E?00 0.00E?00 0.00E?00 0.00E?00 0.00E?00 0.00E?00 0.00E?00
RUNGE 3.55E-16 8.32E-10 4.30E-09 2.36E-08 3.94E-16 5.71E-11 1.76E-10 7.84E-10
SNAKE 8.70E-07 6.71E-03 1.72E-02 9.28E-02 1.05E-11 9.81E-03 2.55E-02 1.35E-01

Problem f21 – Sum of different powers f22 – Bent cigar


Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 1.20E-47 8.87E-29 4.07E-28 2.20E-27 2.29E-28 1.57E-12 8.40E-12 4.60E-11


AQUILA 2.95E-23 3.32E-12 9.11E-12 3.83E-11 1.83E-15 4.16E-06 2.25E-05 1.23E-04
BARNA 1.95E-42 2.99E-20 1.63E-19 8.95E-19 2.46E-22 2.58E-10 1.02E-09 5.02E-09
EQUIL 1.75E-05 6.96E-02 1.81E-01 9.50E-01 5.13E?04 1.61E?05 8.93E?04 4.98E?05
GRAD 7.56E-40 2.53E-32 1.10E-31 5.98E-31 1.39E-27 3.90E-20 1.78E-19 9.68E-19
HARRIS 5.94E-33 3.05E-17 8.54E-17 3.50E-16 3.24E-17 1.09E-07 3.65E-07 1.94E-06
MANTA 1.47E-58 1.94E-49 9.03E-49 4.95E-48 2.08E-39 2.28E-29 1.11E-28 6.05E-28
PRO 4.38E-73 2.98E-62 1.38E-61 7.56E-61 2.39E-63 2.27E-53 1.23E-52 6.76E-52
REPTILE 0.00E?00 0.00E?00 0.00E?00 0.00E?00 0.00E?00 0.00E?00 0.00E?00 0.00E?00
RUNGE 5.92E-39 3.87E-19 2.12E-18 1.16E-17 8.20E-11 9.84E-06 3.38E-05 1.84E-04
SNAKE 5.48E-10 2.53E-04 1.04E-03 5.68E-03 3.87E-01 3.08E?04 1.31E?05 7.14E?05

Problem f23 - Sum of squares f24 - Dropwave


Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 5.50E-37 1.70E-16 9.17E-16 5.03E-15 - 1.00E?00 - 1.00E?00 2.01E-13 - 1.00E?00


AQUILA 1.03E-22 3.66E-10 2.01E-09 1.10E-08 - 1.00E?00 - 9.98E-01 9.07E-03 - 9.52E-01
BARNA 7.58E-27 7.60E-14 4.12E-13 2.26E-12 - 1.00E?00 - 1.00E?00 6.31E-12 - 1.00E?00
EQUIL 7.33E-01 2.16E?00 1.23E?00 5.23E?00 - 4.78E-01 - 2.95E-01 8.33E-02 - 1.53E-01
GRAD 1.58E-30 1.72E-26 6.08E-26 3.12E-25 - 1.00E?00 - 9.98E-01 1.02E-02 - 9.44E-01
HARRIS 9.37E-19 1.08E-11 3.02E-11 1.22E-10 -1.00E+00 -9.98E-01 1.16E-02 -9.36E-01
MANTA 1.63E-44 1.67E-33 7.80E-33 4.27E-32 - 1.00E?00 - 1.00E?00 3.68E-16 - 1.00E?00
PRO 2.96E-68 5.72E-59 2.10E-58 1.15E-57 - 1.00E?00 - 1.00E?00 0.00E?00 - 1.00E?00
REPTILE 0.00E?00 0.00E?00 0.00E?00 0.00E?00 - 1.00E?00 - 1.00E?00 0.00E?00 - 1.00E?00
RUNGE 2.49E-14 4.72E-10 1.91E-09 1.03E-08 - 9.36E-01 - 9.36E-01 7.47E-12 - 9.36E-01
SNAKE 4.35E-06 2.72E-01 7.21E-01 3.48E?00 - 9.36E-01 - 8.32E-01 1.10E-01 - 4.78E-01

Problem f25 – Rosenbrock f26 – Discus


Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 7.63E-01 1.79E?01 1.26E?01 2.88E?01 3.13E-34 1.25E-13 6.46E-13 3.54E-12


AQUILA 8.68E-01 1.75E?01 1.12E?01 2.90E?01 7.68E-14 4.51E?00 1.28E?01 5.36E?01
BARNA 2.88E?01 2.89E?01 4.36E-02 2.90E?01 1.21E-30 3.83E-15 1.76E-14 9.62E-14
EQUIL 4.42E?01 7.80E?01 5.79E?01 3.67E?02 9.28E-02 6.44E-01 3.55E-01 1.89E?00
GRAD 2.62E-02 4.87E-01 7.92E-01 2.84E?00 5.05E-30 1.68E-21 9.17E-21 5.02E-20
HARRIS 4.31E-04 8.50E+00 1.18E+01 2.87E+01 1.12E-15 6.14E-10 2.63E-09 1.43E-08
MANTA 2.73E?01 2.81E?01 3.94E-01 2.88E?01 1.23E-41 4.43E-33 2.36E-32 1.29E-31
PRO 2.89E?01 2.90E?01 1.79E-02 2.90E?01 2.12E-64 6.14E-55 3.22E-54 1.77E-53
REPTILE 2.90E?01 2.90E?01 1.08E-02 2.90E?01 0.00E?00 0.00E?00 0.00E?00 0.00E?00
RUNGE 2.76E?01 2.86E?01 3.72E-01 2.89E?01 1.60E-14 6.54E-10 2.91E-09 1.59E-08
SNAKE 3.94E-03 1.93E?01 1.17E?01 3.16E?01 1.10E-06 9.31E-03 1.37E-02 4.57E-02
Table 10 Statistical results of the compared algorithms for 30D unimodal test functions from f27 – Dixon-Price to f34 – Powell
Problem f27- Dixon-price f28- Trid
Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 2.46E-01 6.09E-01 1.88E-01 9.49E-01 - 1.62E?03 - 1.17E?03 1.77E?02 - 9.65E?02


AQUILA 2.49E-01 3.64E-01 1.78E-01 9.98E-01 - 1.51E?03 - 1.18E?03 1.53E?02 - 9.49E?02
BARNA 6.81E-01 7.70E-01 1.01E-01 9.87E-01 1.17E?01 2.05E?01 2.96E?00 2.51E?01
EQUIL 3.01E?00 8.39E?00 4.08E?00 2.04E?01 5.44E?02 2.99E?03 2.09E?03 8.68E?03
GRAD 1.92E-01 3.56E-01 1.73E-01 6.72E-01 - 1.55E?03 - 9.99E?02 1.64E?02 - 8.73E?02
HARRIS 2.50E-01 5.97E-01 3.12E-01 9.97E-01 -1.66E+03 -1.29E+03 1.68E+02 -9.57E+02
MANTA 6.67E-01 6.70E-01 1.69E-03 6.73E-01 - 2.39E?02 - 9.16E?01 4.74E?01 - 1.39E?01
PRO 9.86E-01 9.97E-01 3.11E-03 1.00E?00 1.53E?01 2.75E?01 3.61E?00 3.00E?01
REPTILE 6.71E-01 9.76E-01 8.21E-02 1.00E?00 1.40E?01 2.85E?01 3.84E?00 3.00E?01
RUNGE 6.67E-01 6.94E-01 8.65E-02 1.00E?00 - 9.02E?02 - 7.42E?02 9.43E?01 - 5.44E?02
SNAKE 6.63E-01 1.03E?00 1.08E-01 1.26E?00 - 9.76E?02 - 7.02E?02 3.25E?02 2.74E?02
Problem f29- Schwefel 2.21 f30- Schwefel 2.23
Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 1.17E-19 2.10E-09 6.41E-09 2.94E-08 5.12E-184 2.37E-83 1.30E-82 7.10E-82


AQUILA 9.11E-14 1.88E-08 7.78E-08 4.28E-07 3.65E-118 3.26E-71 1.78E-70 9.77E-70
BARNA 1.19E-14 3.34E-10 8.07E-10 3.48E-09 7.77E-163 2.64E-78 1.41E-77 7.71E-77
EQUIL 3.25E-01 5.83E-01 1.71E-01 8.78E-01 6.24E-07 1.09E-02 3.24E-02 1.65E-01
GRAD 5.40E-18 3.41E-14 6.49E-14 3.03E-13 5.11E-173 6.04E-128 3.30E-127 1.81E-126
HARRIS 1.77E-10 1.32E-07 2.57E-07 8.96E-07 1.15E-100 1.03E-59 3.85E-59 1.91E-58
MANTA 1.08E-22 4.93E-18 2.34E-17 1.29E-16 6.58E-212 1.30E-174 0.00E?00 3.87E-173
PRO 8.34E-36 2.38E-31 4.12E-31 1.72E-30 0.00E?00 3.85E-283 0.00E?00 1.16E-281
REPTILE 0.00E?00 0.00E?00 0.00E?00 0.00E?00 0.00E?00 0.00E?00 0.00E?00 0.00E?00
RUNGE 2.86E-07 5.69E-05 9.13E-05 4.22E-04 5.17E-70 1.20E-41 6.59E-41 3.61E-40
SNAKE 4.83E-06 5.02E-02 8.36E-02 4.25E-01 1.04E-37 3.11E-13 1.12E-12 4.72E-12

Problem f31 - Schwefel 2.25 f32 - Schwefel 2.20


Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 2.34E-02 4.18E-01 7.83E-01 4.06E?00 2.81E-20 1.87E-09 8.75E-09 4.81E-08


AQUILA 5.58E-05 3.43E-01 3.91E-01 1.73E?00 1.90E-11 1.61E-06 8.67E-06 4.75E-05
BARNA 2.19E?01 2.43E?01 1.01E?00 2.61E?01 4.92E-14 1.64E-09 3.38E-09 1.47E-08
EQUIL 1.54E?01 2.01E?01 2.96E?00 2.83E?01 7.24E-01 1.30E?00 3.43E-01 2.13E?00
GRAD 4.56E-04 6.51E-02 1.25E-01 5.88E-01 3.54E-16 5.12E-13 1.19E-12 5.95E-12
HARRIS 1.35E-03 2.01E-01 2.33E-01 8.94E-01 1.81E-09 1.03E-06 2.48E-06 1.26E-05
MANTA 6.79E?00 1.08E?01 1.77E?00 1.47E?01 5.64E-22 4.15E-17 1.77E-16 9.68E-16
PRO 2.52E?01 2.83E?01 1.00E?00 2.90E?01 5.54E-33 1.10E-29 4.23E-29 2.32E-28
REPTILE 2.68E?01 2.77E?01 5.38E-01 2.88E?01 0.00E?00 0.00E?00 0.00E?00 0.00E?00
RUNGE 3.63E-01 1.36E?00 7.77E-01 3.91E?00 2.65E-08 1.07E-05 4.10E-05 2.25E-04
SNAKE 5.47E-03 4.56E?00 6.92E?00 2.38E?01 1.45E-03 7.21E-01 8.37E-01 2.96E?00

Problem f33 – Stretched Sine Wave f34 – Powell


Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 2.83E-09 1.90E-04 6.03E-04 3.09E-03 8.41E-51 1.18E-25 6.34E-25 3.47E-24


AQUILA 2.84E-03 2.83E-02 3.97E-02 1.52E-01 4.02E-22 3.18E-10 1.74E-09 9.54E-09
BARNA 6.62E-08 5.14E-04 1.16E-03 5.20E-03 4.74E-34 4.01E-23 1.56E-22 7.35E-22
EQUIL 4.58E?00 8.82E?00 1.92E?00 1.27E?01 1.58E-05 1.21E-01 3.57E-01 1.86E?00
GRAD 3.19E-07 2.36E-03 1.93E-03 5.10E-03 1.04E-41 2.15E-31 1.13E-30 6.17E-30

Table 10 (continued)
Problem f33 – Stretched Sine Wave f34 – Powell
Best Mean Std Dev Worst Best Mean Std Dev Worst

HARRIS 2.83E-03 1.34E-02 1.32E-02 5.44E-02 7.95E-30 6.55E-19 1.99E-18 8.44E-18


MANTA 2.21E-11 1.36E-03 1.59E-03 3.63E-03 1.31E-57 1.44E-46 7.69E-46 4.21E-45
PRO 1.57E-16 1.54E-07 7.75E-07 4.24E-06 6.19E-70 9.14E-61 5.00E-60 2.74E-59
REPTILE 0.00E?00 0.00E?00 0.00E?00 0.00E?00 0.00E?00 0.00E?00 0.00E?00 0.00E?00
RUNGE 2.04E-02 1.64E-01 1.30E-01 5.03E-01 1.10E-37 9.15E-26 4.60E-25 2.51E-24
SNAKE 1.65E-01 2.65E?00 1.78E?00 7.27E?00 1.91E-08 1.35E-03 6.61E-03 3.63E-02

Table 11 Comparative performances and ranking points of the competitive algorithms based on their best predictions
AFRICAN AQUILA BARNA EQUIL GRAD HARRIS MANTA PRO REPTILE RUNGE SNAKE

f19 4 7 5 11 6 8 3 2 1 9 10
f20 4 8 6 11 5 7 3 2 1 9 10
f21 4 9 5 11 6 8 3 2 1 7 10
f22 4 8 6 11 5 7 3 2 1 9 10
f23 4 7 6 11 5 8 3 2 1 9 10
f24 1 1 1 11 1 1 1 1 1 9 9
f25 4 5 8 11 3 1 6 9 10 7 2
f26 4 9 5 11 6 7 3 2 1 8 10
f27 2 3 9 11 1 4 6 10 8 6 5
f28 2 4 8 11 3 1 7 10 9 6 5
f29 4 7 6 11 5 8 3 2 1 9 10
f30 4 7 6 11 5 8 3 1 1 9 10
f31 5 1 9 8 2 3 7 10 11 6 4
f32 4 7 6 11 5 8 3 2 1 9 10
f33 4 8 5 11 6 7 3 2 1 9 10
f34 4 9 7 11 5 8 3 2 1 6 10
Average rank 3.62 6.25 6.12 10.81 4.31 5.87 3.75 3.81 3.12 7.93 8.43
Overall rank 2 8 7 11 5 6 3 4 1 9 10

unimodal benchmark problems obtained by the eleven compared algorithms. REPTILE algorithm shows the best performance for 12 unimodal functions out of 16 instances and becomes the leading algorithm for the 30D unimodal test problems. Furthermore, this algorithm approaches the global optimum answers of the f20, f21, f22, f23, f24, f26, f29, f30, f32, f33, and f34 problems. According to the reported ranking results obtained for the best predictions of the compared algorithms in Table 11, AFRICAN algorithm has the second-best average ranking point value of 3.62, yet it only reaches the global optimum value of the f24 function. MANTA is the third-best algorithm concerning the best solutions-based ranking points with the corresponding value of 3.75. Although PRO is the most accurate algorithm for the test functions f24 and f30 and the second-best algorithm for ten test functions, including the f19, f20, f21, f22, f23, f26, f29, f32, f33, and f34 test problems, its performance on the remaining cases is so dissatisfactory that this algorithm is put in the fourth place among the other methods. EQUIL algorithm is again the worst performer, as was the case for the multimodal test functions, with the respective ranking point value of 10.81. Table 12 compares the eleven algorithms with the respective ranking points obtained for the mean deviation rates. REPTILE continues its dominance regarding the mean results, having the best ranking point value of 3.06. It is interesting to see GRAD
Table 12 Evaluation of the prediction successes of the algorithms relying on their mean fitness values
AFRICAN AQUILA BARNA EQUIL GRAD HARRIS MANTA PRO REPTILE RUNGE SNAKE

f19 5 6 8 11 4 7 3 2 1 9 10
f20 5 8 6 11 4 7 3 2 1 9 10
f21 5 9 6 11 4 8 3 2 1 7 10
f22 5 8 6 11 4 7 3 2 1 9 10
f23 5 8 6 11 4 7 3 2 1 9 10
f24 1 6 1 11 6 6 1 1 1 9 10
f25 4 3 8 11 1 2 6 9 9 7 5
f26 6 11 5 10 4 7 3 2 1 8 9
f27 4 2 7 11 1 3 5 9 8 6 10
f28 3 2 8 11 4 1 7 9 10 5 6
f29 6 7 5 11 4 8 3 2 1 9 10
f30 5 7 6 11 4 8 3 2 1 9 10
f31 4 3 9 8 1 2 7 11 10 5 6
f32 6 8 5 11 4 7 3 2 1 9 10
f33 3 8 4 11 6 7 5 2 1 9 10
f34 6 9 7 11 4 8 3 2 1 5 10
Average rank 4.56 6.56 6.06 10.75 3.68 5.93 3.81 3.81 3.06 7.75 9.12
Overall rank 5 8 7 11 2 6 3 3 1 9 10

algorithm in second place for the mean results, as there is not a clear superiority of this method considering the best results. MANTA and PRO algorithms share the third-best ranking points assigned for the mean fitness values. In addition, Friedman's test results obtained for the 30D unimodal test functions indicate a statistical difference among the compared algorithms with the respective p-value of 2.45E-14, which is obtained by considering the mean values of the ranking points for the 30D unimodal test functions tabulated in Table 12.

4.3 Investigation on the scalability of the compared algorithms

This section scrutinizes the optimization performance of the eleven compared metaheuristics on hyper-dimensional 500D and 1000D test functions and makes a comprehensive investigation on how the prediction accuracy of the algorithms is influenced by increasing problem dimensionality, particularly for hyper-dimensional problems. There is a common and strong belief in the optimization community that the vast majority of metaheuristic algorithms suffer from the curse of dimensionality, in which a large number of function evaluations is required to conquer the inherent drawbacks of the increased search space, whose spatial volume grows exponentially with increasing problem dimensionality.
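The scalability issue can be illustrated with any of the scalable benchmarks of Table 2; the sketch below uses the Rastrigin function (f2) under the conventional [-5.12, 5.12] search range, which is an assumption made only for this example, and evaluates exactly the same definition at D = 30, 500, and 1000.

import numpy as np

def rastrigin(x):
    # scalable multimodal benchmark: global minimum 0 at the origin for every dimension D
    x = np.asarray(x, dtype=float)
    return 10.0 * x.size + np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x))

rng = np.random.default_rng(7)
for dim in (30, 500, 1000):                         # the three dimensionalities studied here
    candidate = rng.uniform(-5.12, 5.12, size=dim)  # assumed conventional search range
    print(dim, rastrigin(candidate), rastrigin(np.zeros(dim)))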
Tables 13 and 14 provide the statistical analysis results of the eleven compared algorithms for the 500D variants of the previously investigated test functions, dealing only with the multimodal benchmark functions from f1 – Ackley to f18 – Levy. A total of 30 consecutive and independent algorithm runs are performed for the experimental conditions, covering a predefined population size of N = 20 and a termination criterion fixed at 100 iterations. It is seen that there is no clear deterioration in the solution accuracies of the compared algorithms when the problem dimensionality is increased from 30 to 500. Some of the algorithms are able to reach the global optimum points of the f2, f3, f4, f5, f6, f8, f10, f11, f12, f13, and f15 test problems even after 2000 function evaluations, which is relatively small as far as the high problem dimensionality is concerned. It is noteworthy to mention that PRO shows a consistent prediction performance for the 500D test functions f2, f3, f8, f11, and f15, for which it obtains the same global optimum answer in each algorithm run. REPTILE algorithm again proves its competitiveness in hyper-dimensional multimodal problems in terms of obtaining the most accurate results, as it reaches the best-known solutions of the f2, f3, f4, f5, f6, f8, f10, f11, f12, f13, and f15 benchmark problems. Tables 15 and 16 report the statistical results of the 500D unimodal test functions obtained for the eleven compared algorithms. REPTILE acquires the best solutions of the f19, f20, f21, f22, f23, f24, f26, f29, f30, f32, f33, f34 test
Table 13 Statistical results for 500D multimodal test functions from f1 – Ackley to f8 – Schaffer
f1- Ackley f2- Rastrigin
Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 8.88E-16 3.74E-09 1.73E-08 9.49E-08 0.00E?00 2.21E-12 8.11E-12 3.73E-11


AQUILA 3.88E-13 2.10E-07 7.45E-07 3.91E-06 0.00E?00 3.03E-13 4.36E-13 9.09E-13
BARNA 1.87E-14 6.71E-10 1.82E-09 7.05E-09 0.00E?00 0.00E?00 0.00E?00 0.00E?00
EQUIL 2.91E?00 3.39E?00 3.01E-01 4.06E?00 2.55E?03 3.20E?03 2.63E?02 3.86E?03
GRAD 8.88E-16 3.74E-13 6.59E-13 3.02E-12 0.00E?00 0.00E?00 0.00E?00 0.00E?00
HARRIS 2.54E-10 2.96E-07 6.20E-07 3.19E-06 0.00E?00 3.51E-08 1.56E-07 8.44E-07
MANTA 8.88E-16 8.88E-16 0.00E?00 8.88E-16 0.00E?00 0.00E?00 0.00E?00 0.00E?00
PRO 8.88E-16 8.88E-16 0.00E?00 8.88E-16 0.00E?00 0.00E?00 0.00E?00 0.00E?00
REPTILE 8.88E-16 8.88E-16 0.00E?00 8.88E-16 0.00E?00 0.00E?00 0.00E?00 0.00E?00
RUNGE 6.69E-07 3.36E-05 5.38E-05 2.59E-04 1.14E-09 7.20E-05 2.23E-04 1.05E-03
SNAKE 2.94E-03 7.33E-01 6.85E-01 2.54E?00 9.43E-02 8.11E?02 9.94E?02 4.37E?03
Problem f3- Griewank f4- Zakharov
Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 0.00E?00 1.37E-16 5.31E-16 2.44E-15 5.91E-03 3.33E?03 2.38E?03 8.63E?03


AQUILA 0.00E?00 0.00E?00 0.00E?00 0.00E?00 9.98E-14 5.63E?05 2.72E?06 1.49E?07
BARNA 0.00E?00 0.00E?00 0.00E?00 0.00E?00 5.57E-25 1.70E-10 9.24E-10 5.06E-09
EQUIL 2.70E-01 4.76E-01 9.59E-02 6.88E-01 4.68E?03 1.01E?04 1.24E?04 7.33E?04
GRAD 0.00E?00 0.00E?00 0.00E?00 0.00E?00 2.09E-06 1.09E?00 3.70E?00 1.78E?01
HARRIS 0.00E?00 1.44E-12 7.72E-12 4.23E-11 1.22E?02 1.13E?04 6.01E?03 1.73E?04
MANTA 0.00E?00 0.00E?00 0.00E?00 0.00E?00 4.94E-43 1.78E-31 6.35E-31 3.35E-30
PRO 0.00E?00 0.00E?00 0.00E?00 0.00E?00 1.22E-62 7.61E-55 3.57E-54 1.96E-53
REPTILE 0.00E?00 0.00E?00 0.00E?00 0.00E?00 0.00E?00 7.29E-03 2.42E-02 1.20E-01
RUNGE 1.87E-14 2.05E-09 6.53E-09 3.41E-08 5.67E?02 4.40E?03 2.41E?03 1.14E?04
SNAKE 2.64E-08 1.11E-02 3.08E-02 1.68E-01 2.59E-05 1.70E?00 5.26E?00 2.83E?01

Problem f5 - Salomon f6 - Alpine


Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 5.53E-16 3.37E-03 1.82E-02 9.99E-02 7.52E-18 3.19E-09 9.68E-09 4.81E-08


AQUILA 3.91E-11 4.37E-05 2.14E-04 1.17E-03 5.51E-12 8.63E-04 2.71E-03 1.14E-02
BARNA 2.52E-15 4.47E-09 1.67E-08 8.98E-08 6.01E-15 4.80E-09 1.34E-08 6.77E-08
EQUIL 2.40E?00 2.99E?00 3.04E-01 3.80E?00 6.62E?01 9.81E?01 1.32E?01 1.23E?02
GRAD 1.02E-10 2.73E-02 4.44E-02 9.99E-02 1.26E-15 1.98E-12 3.65E-12 1.40E-11
HARRIS 6.57E-10 6.67E-03 2.53E-02 9.99E-02 8.21E-09 3.17E-06 5.13E-06 2.38E-05
MANTA 1.31E-21 3.33E-03 1.82E-02 9.99E-02 1.11E-21 1.09E-16 5.18E-16 2.85E-15
PRO 8.14E-34 4.99E-28 2.24E-27 1.22E-26 8.76E-33 5.84E-30 1.08E-29 5.21E-29
REPTILE 0.00E?00 6.49E-22 2.49E-21 1.10E-20 0.00E?00 4.72E-61 2.59E-60 1.42E-59
RUNGE 9.99E-02 1.03E-01 1.83E-02 2.00E-01 9.43E-07 2.25E-04 4.20E-04 1.60E-03
SNAKE 3.41E-03 3.39E-01 2.57E-01 1.06E?00 9.57E-03 2.97E?00 9.13E?00 4.82E?01

f7- Csendes f8- Schaffer


Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 8.29E-95 7.39E-43 4.04E-42 2.21E-41 0.00E?00 4.04E-10 2.20E-09 1.21E-08


AQUILA 4.68E-76 1.30E-42 7.14E-42 3.91E-41 0.00E?00 2.28E-13 1.23E-12 6.76E-12
BARNA 1.24E-85 7.89E-38 4.32E-37 2.37E-36 0.00E?00 4.75E-14 2.39E-13 1.31E-12
EQUIL 2.98E?03 1.27E?04 9.21E?03 3.29E?04 2.07E?02 2.14E?02 3.94E?00 2.21E?02
GRAD 2.68E-95 3.27E-70 1.58E-69 8.61E-69 0.00E?00 1.70E-03 9.29E-03 5.09E-02
Table 13 (continued)
f7- Csendes f8- Schaffer
Best Mean Std Dev Worst Best Mean Std Dev Worst

HARRIS 6.65E-65 3.22E-34 1.74E-33 9.55E-33 0.00E?00 1.33E-02 7.31E-02 4.00E-01


MANTA 1.04E-141 2.39E-97 1.31E-96 7.16E-96 0.00E?00 0.00E?00 0.00E?00 0.00E?00
PRO 7.18E-204 9.68E-176 0.00E?00 2.88E-174 0.00E?00 0.00E?00 0.00E?00 0.00E?00
REPTILE 5.98E-317 2.09E-57 1.15E-56 6.28E-56 0.00E?00 0.00E?00 0.00E?00 0.00E?00
RUNGE 1.27E-26 1.28E-12 6.84E-12 3.75E-11 1.92E-01 1.99E?02 5.06E?01 2.24E?02
SNAKE 9.15E-18 3.27E-07 1.64E-06 9.00E-06 1.76E-02 7.31E?00 1.14E?01 5.14E?01

problems, which is a quite successful achievement for a stochastic optimizer, as it is not obviously influenced by the adverse effects of the curse of dimensionality. The other algorithms are also not affected by the increased problem dimensionalities, as most of the optimizers obtain predictions very close to the global optimum solutions of the problems. Only EQUIL algorithm fails to get close to the optimal solutions within 2000 function evaluations, while the other methods show satisfactory convergence tendencies for the 500D problems. Table 17 reports the ranking points of the compared algorithms obtained for the mean results of the 500D multimodal and unimodal test problems. PRO algorithm has the best average point of 3.55 for the multimodal problems and 3.18 for the unimodal test problems, which puts this algorithm in the first place considering the mean deviation results. MANTA algorithm has the second-best overall ranking point of 3.67, with respective average ranking points of 3.94 for the multimodal test functions and 3.37 for the unimodal test functions. Although REPTILE algorithm finds the best-known solutions of 12 benchmark functions out of 16 test instances, the mean solutions acquired by this algorithm are not as successful and persistent as the answers obtained for the best results, since it attains the best mean solution in seven test cases, which puts this algorithm in third place with respect to the overall ranking point, taking into account the average points of the unimodal and multimodal test problems. The indisputable estimation performance of REPTILE is evident from its success in acquiring the best solutions of the multimodal and unimodal optimization problems with the minimum overall ranking point of 3.00, as reported in Table 18. This algorithm obtains the minimum fitness value among the other methods after the completion of the consecutive function evaluations for the multimodal test functions f1, f2, f3, f4, f5, f6, f7, f8, f10, f11, f12, f13, and f15, despite its poor performance in predicting the accurate solutions of the f9, f14, f16, and f17 test problems. For the unimodal test functions, REPTILE only fails in the f25, f27, f28, and f31 benchmark functions while reaching the lowest fitness values for the remaining test functions, which makes it the best optimizer among the compared ones with an average point of 2.87. PRO has the second-best average point of 3.68 for the unimodal test functions, being the second-best algorithm for most of the unimodal optimization problems. However, this relatively satisfactory performance on the unimodal functions does not keep it from falling to fourth place in the overall ranking points. Despite being in third place for the unimodal test functions and having only one best position out of the 16 unimodal test functions, MANTA algorithm occupies the second place with the overall average point of 3.52. AFRICAN algorithm is the third-best method among them considering the general prediction performances obtained for the unimodal and multimodal test problems, with a respective overall average point of 3.61. Friedman test analysis based on the mean results of the 500D test functions shows that there is a statistically significant difference between the algorithms, which is validated by the corresponding p-value of 7.51E-12.

Tables 19 and 20 report the statistical results for the 1000D multimodal test functions for the eleven compared algorithms. Although there is a mammoth increase in the dimensionality of the test problems, which imposes a great deal of complexity on the search domain, there is no clear deterioration in the general solution qualities except for the f17 – Qing function. AFRICAN and REPTILE algorithms share the best seats considering the best results of the 1000D multimodal test functions when their respective average points (3.00) given in Table 23 are examined. GRAD algorithm is the third-best optimizer with an average point of 3.55, closely followed by MANTA algorithm, which is in fourth place. Tables 21 and 22 provide the statistical results obtained for the 1000D unimodal test functions by the compared algorithms. No significant deterioration in the general solution qualities is observed for the algorithms. However, RUNGE algorithm is not able to converge to the
Table 14 Comparison of the statistical results acquired by the compared algorithms for 500D test functions from f9 – Yang2 to f18 – Levy
function
Problem f9- Yang2 f10-Inverted Cosine Mixture
Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 5.94E-213 3.44E-158 1.88E-157 1.03E-156 1.32E-36 6.46E-13 3.54E-12 1.94E-11


AQUILA 4.47E-215 6.78E-215 0.00E?00 2.38E-214 3.37E-21 6.86E-12 2.54E-11 1.08E-10
BARNA 1.33E-120 1.22E-99 6.67E-99 3.65E-98 1.53E-25 1.26E-15 6.08E-15 3.34E-14
EQUIL 1.08E-85 4.69E-65 2.55E-64 1.40E-63 1.50E?02 1.98E?02 2.71E?01 2.57E?02
GRAD 4.73E-215 4.77E-212 0.00E?00 1.42E-210 1.58E-29 8.19E-23 3.87E-22 2.13E-21
HARRIS 4.47E-215 2.32E-194 0.00E?00 6.97E-193 3.62E-17 1.52E-11 5.32E-11 2.89E-10
MANTA 8.55E-124 7.13E-94 3.63E-93 1.99E-92 1.47E-45 4.52E-32 2.41E-31 1.32E-30
PRO 6.41E-119 9.74E-83 5.30E-82 2.90E-81 1.66E-67 2.63E-57 1.03E-56 4.94E-56
REPTILE 7.81E-96 9.10E-73 3.33E-72 1.53E-71 0.00E?00 1.25E-121 6.84E-121 3.75E-120
RUNGE 8.33E-127 1.47E-97 5.17E-97 2.55E-96 2.79E-10 5.19E-05 2.79E-04 1.53E-03
SNAKE 4.55E-215 5.37E-189 0.00E?00 1.61E-187 1.76E-04 3.59E?00 7.07E?00 3.01E?01
Problem f11- Wavy f12- Yang3
Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 0.00E?00 5.50E-08 3.01E-07 1.65E-06 1.37E-21 9.53E-10 5.21E-09 2.85E-08


AQUILA 0.00E?00 2.22E-11 1.21E-10 6.64E-10 3.62E-09 1.90E-04 8.75E-04 4.81E-03
BARNA 0.00E?00 1.88E-16 8.36E-16 4.58E-15 8.06E-18 9.34E-13 1.80E-12 8.02E-12
EQUIL 8.84E-01 9.25E-01 1.83E-02 9.51E-01 1.03E-06 1.31E?14 7.19E?14 3.94E?15
GRAD 0.00E?00 5.40E-17 2.17E-16 1.13E-15 3.06E-17 3.68E-09 1.73E-08 9.48E-08
HARRIS 0.00E?00 5.01E-12 1.19E-11 4.56E-11 1.50E-14 1.09E-07 5.21E-07 2.86E-06
MANTA 0.00E?00 0.00E?00 0.00E?00 0.00E?00 7.60E-24 1.47E-14 4.52E-14 2.24E-13
PRO 0.00E?00 0.00E?00 0.00E?00 0.00E?00 2.29E-35 8.48E-27 3.17E-26 1.63E-25
REPTILE 0.00E?00 0.00E?00 0.00E?00 0.00E?00 0.00E?00 1.32E-19 5.02E-19 2.67E-18
RUNGE 4.97E-11 3.84E-05 1.31E-04 6.64E-04 2.58E-09 Inf NaN Inf
SNAKE 1.99E-04 5.58E-01 3.44E-01 9.57E-01 2.43E-05 Inf NaN Inf

Problem f13 - Yang4 f14 - Penalized1


Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN - 1.00E?00 - 3.46E-01 4.69E-01 1.13E-203 1.04E-04 3.51E-03 5.26E-03 2.27E-02


AQUILA - 1.00E?00 - 8.33E-01 3.56E-01 3.00E-206 4.42E-09 1.37E-05 2.05E-05 8.33E-05
BARNA - 1.00E?00 - 8.58E-01 3.44E-01 1.30E-168 1.13E?00 1.14E?00 9.42E-03 1.16E?00
EQUIL 1.61E-170 7.07E-166 0.00E?00 1.61E-164 1.21E?00 1.38E?00 9.68E-02 1.58E?00
GRAD - 1.00E?00 - 4.16E-01 4.61E-01 2.70E-215 1.90E-06 1.87E-04 2.94E-04 1.36E-03
HARRIS - 1.00E?00 - 8.30E-01 3.27E-01 2.59E-215 6.84E-07 5.80E-04 1.32E-03 7.19E-03
MANTA - 1.00E?00 - 4.94E-01 4.86E-01 5.95E-171 8.42E-01 9.09E-01 3.69E-02 9.71E-01
PRO - 1.00E?00 - 9.87E-01 4.71E-02 - 7.72E-01 1.17E?00 1.20E?00 1.10E-02 1.21E?00
REPTILE - 1.00E?00 - 6.67E-02 2.54E-01 1.13E-173 2.05E-01 9.46E-01 3.07E-01 1.21E?00
RUNGE 3.29E-203 1.15E-174 0.00E?00 1.92E-173 3.33E-02 4.66E-02 6.94E-03 6.37E-02
SNAKE - 7.23E-02 - 2.59E-03 1.32E-02 2.95E-210 9.48E-06 5.87E-02 1.81E-01 9.80E-01

Problem f15- Path f16- Quintic


Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 0.00E?00 3.52E?00 1.65E?01 8.99E?01 3.39E?01 2.15E?02 1.44E?02 6.36E?02


AQUILA 0.00E?00 8.18E-04 2.50E-03 1.00E-02 2.82E-01 1.99E?01 1.94E?01 7.03E?01
BARNA 0.00E?00 5.09E-03 2.65E-02 1.45E-01 1.81E?03 1.90E?03 2.90E?01 1.94E?03
EQUIL 2.24E?02 2.30E?02 2.10E?00 2.33E?02 2.40E?03 3.63E?03 1.07E?03 7.50E?03
Table 14 (continued)
Problem f15- Path f16- Quintic
Best Mean Std Dev Worst Best Mean Std Dev Worst

GRAD 1.49E-12 1.60E?01 5.28E?01 2.30E?02 6.56E?00 4.84E?01 3.79E?01 1.91E?02


HARRIS 3.21E-14 1.55E?01 5.78E?01 2.28E?02 1.31E?01 6.92E?01 4.40E?01 2.06E?02
MANTA 0.00E?00 4.61E?01 9.24E?01 2.32E?02 1.61E?03 1.70E?03 3.19E?01 1.76E?03
PRO 0.00E?00 0.00E?00 0.00E?00 0.00E?00 1.86E?03 1.93E?03 3.14E?01 1.97E?03
REPTILE 0.00E?00 4.38E-12 2.40E-11 1.31E-10 9.19E?02 1.84E?03 3.10E?02 2.00E?03
RUNGE 2.01E?02 2.23E?02 7.94E?00 2.32E?02 5.11E?02 5.69E?02 2.55E?01 6.25E?02
SNAKE 4.94E-05 8.31E-01 2.58E?00 1.41E?01 5.38E?00 4.84E?02 4.66E?02 1.81E?03

Problem f17- Qing f18—Levy


Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 2.76E?07 3.22E?07 1.18E?06 3.38E?07 1.73E-01 1.47E?00 1.35E?00 5.40E?00


AQUILA 3.21E?07 3.27E?07 3.90E?05 3.35E?07 1.06E-05 2.46E-01 3.67E-01 1.48E?00
BARNA 3.13E?07 3.32E?07 7.46E?05 3.40E?07 1.74E?02 1.79E?02 2.09E?00 1.82E?02
EQUIL 3.35E?07 3.38E?07 1.59E?05 3.42E?07 1.83E?02 2.16E?02 2.06E?01 2.67E?02
GRAD 2.24E?07 2.41E?07 1.30E?06 2.74E?07 1.06E-04 4.60E-02 5.91E-02 2.17E-01
HARRIS 2.51E?07 2.76E?07 1.56E?06 3.09E?07 1.72E-03 2.00E-01 3.12E-01 1.42E?00
MANTA 3.10E?07 3.22E?07 6.11E?05 3.33E?07 1.32E?02 1.43E?02 5.44E?00 1.53E?02
PRO 3.31E?07 3.34E?07 1.33E?05 3.36E?07 1.79E?02 1.86E?02 2.02E?00 1.88E?02
REPTILE 3.26E?07 3.29E?07 2.13E?05 3.33E?07 1.83E?02 1.86E?02 1.05E?00 1.88E?02
RUNGE 3.29E?07 3.33E?07 2.19E?05 3.37E?07 4.51E?00 7.14E?00 1.06E?00 1.07E?01
SNAKE 2.18E?07 2.33E?07 1.20E?06 2.64E?07 4.22E-05 3.89E?00 6.45E?00 2.38E?01

optimal solution of the f21 – Sum of Different Powers test function and is labeled as 'N/A', which indicates that no feasible answer is attained throughout the sequential algorithm runs. Furthermore, SNAKE algorithm finds only one feasible solution during the course of the independent runs for this test instance. Similar tendencies of these algorithms are also evident for the unimodal test function f34 – Powell, for which RUNGE algorithm does not find any valid solution after the consecutive runs, and SNAKE algorithm attains a feasible best solution of 2.81E-06. AFRICAN is another algorithm failing to obtain feasible solutions during the algorithm runs, finding only one solution, 1.74E-46, for the f34 – Powell function. REPTILE algorithm is superior to the remaining algorithms in obtaining the global best solutions for the 1000D test functions f19, f20, f21, f22, f23, f24, f26, f29, f30, f32, f33, and f34. PRO algorithm sits on the second-best seat for most of the test cases, which makes this algorithm the second-best performer when its respective average values obtained for the unimodal test functions given in Table 23 are thoroughly examined. The third-best algorithm for the unimodal test problems is MANTA, with a corresponding average point of 3.62. When all ranking points are averaged for the unimodal and multimodal test problems, it is seen that REPTILE is the best performer, while AFRICAN yields the second-best predictions. MANTA algorithm is in the third-best seat, outperforming the PRO algorithm, which has the overall fourth-best predictions when the best results are primarily considered. Table 24 reports the ranking points of the algorithms for the mean results of the 1000D benchmark functions. REPTILE algorithm does not sustain its leadership position when comparisons are made based on the mean solutions and loses its place to PRO algorithm in the overall ranking points. MANTA algorithm yields the second-best mean results, an improvement over its previous ranking order obtained from the comparisons of the best results. REPTILE algorithm's overall average point puts it in third place, which also indicates that its solution consistency is hampered by the increased problem dimensionalities, particularly for hyper-dimensional problems.
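For reference, the short Python sketch below reproduces the ranking-point convention used throughout Tables 7–24, namely rank 1 for the best algorithm on each test function with tied algorithms sharing the lowest rank, followed by averaging over the functions; the results matrix is a random placeholder, not the reported data.

import numpy as np
from scipy.stats import rankdata

def ranking_points(fitness_table):
    # rank the algorithms on every test function (rows) and average the points per algorithm
    points = np.apply_along_axis(lambda row: rankdata(row, method="min"), 1, fitness_table)
    return points, points.mean(axis=0)

rng = np.random.default_rng(1)
best_results = rng.random((16, 11))         # placeholder: 16 unimodal functions x 11 algorithms

points, average_points = ranking_points(best_results)
overall_rank = rankdata(average_points, method="min")   # 1 = best overall performer
print(np.round(average_points, 2), overall_rank)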
Table 15 Estimation results for 500D unimodal test functions from f19—Sphere to f26—Discus
Problem f19- Sphere f20- Brown
Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 4.24E-31 8.73E-13 4.63E-12 2.54E-11 1.45E-46 1.94E-12 1.06E-11 5.82E-11


AQUILA 7.18E-22 6.16E-13 3.03E-12 1.66E-11 4.25E-21 3.19E-13 1.21E-12 4.88E-12
BARNA 4.84E-32 2.08E-15 1.14E-14 6.24E-14 2.40E-27 5.64E-15 3.02E-14 1.65E-13
EQUIL 9.34E?01 1.50E?02 3.94E?01 2.50E?02 4.88E?14 3.54E?49 1.94E?50 1.06E?51
GRAD 1.52E-28 3.68E-24 1.17E-23 6.34E-23 3.80E-27 3.10E-23 7.69E-23 3.81E-22
HARRIS 1.35E-21 3.20E-11 1.43E-10 7.78E-10 1.24E-17 4.38E-11 2.07E-10 1.13E-09
MANTA 1.35E-42 4.50E-33 2.39E-32 1.31E-31 4.99E-44 1.34E-32 7.01E-32 3.84E-31
PRO 5.96E-67 3.14E-60 1.08E-59 5.72E-59 1.78E-67 9.00E-58 3.47E-57 1.73E-56
REPTILE 0.00E?00 2.69E-153 1.47E-152 8.07E-152 0.00E?00 1.19E-46 3.70E-46 1.51E-45
RUNGE 1.20E-11 3.40E-08 7.78E-08 2.90E-07 7.55E-11 8.65E-07 2.07E-06 1.04E-05
SNAKE 3.74E-06 6.35E-01 2.77E?00 1.53E?01 1.27E-05 6.37E-02 1.64E-01 8.73E-01

Problem f21 – Sum of different powers f22 – Bent cigar


Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 1.96E-47 3.31E-24 1.41E-23 7.62E-23 6.53E-36 7.37E-11 2.34E-10 1.05E-09


AQUILA 4.03E-24 1.26E-11 6.11E-11 3.34E-10 2.61E-18 1.32E-05 5.12E-05 2.52E-04
BARNA 8.02E-35 2.70E-18 1.32E-17 7.20E-17 9.80E-22 1.26E-10 3.51E-10 1.42E-09
EQUIL 2.79E-12 3.02E?00 1.65E?01 9.06E?01 1.02E?08 1.54E?08 3.13E?07 2.31E?08
GRAD 5.57E-35 1.06E-29 2.95E-29 1.54E-28 1.06E-23 7.73E-17 4.13E-16 2.26E-15
HARRIS 3.67E-31 3.39E-18 1.35E-17 7.31E-17 4.89E-12 3.70E-06 1.08E-05 5.31E-05
MANTA 1.15E-57 7.80E-50 4.09E-49 2.24E-48 1.57E-36 7.13E-26 3.89E-25 2.13E-24
PRO 6.05E-72 1.77E-61 5.71E-61 2.73E-60 9.86E-62 8.41E-53 2.47E-52 1.18E-51
REPTILE 0.00E?00 6.85E-43 2.41E-42 1.22E-41 0.00E?00 2.71E-98 1.48E-97 8.12E-97
RUNGE 8.95E-23 Inf NaN Inf 4.76E-05 2.16E-01 7.67E-01 4.16E?00
SNAKE 3.83E-08 1.71E?149 9.34E?149 5.12E?150 5.52E-01 6.38E?05 1.92E?06 9.77E?06

Problem f23- Sumsquares f24- Dropwave


Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 1.73E-36 1.29E-14 5.95E-14 3.27E-13 - 1.00E?00 - 1.00E?00 1.21E-04 - 9.99E-01


AQUILA 2.94E-19 5.11E-14 1.28E-13 4.52E-13 - 1.00E?00 - 9.96E-01 1.64E-02 - 9.35E-01
BARNA 2.09E-27 4.64E-12 2.45E-11 1.35E-10 - 1.00E?00 - 1.00E?00 4.53E-11 - 1.00E?00
EQUIL 2.30E?04 3.35E?04 9.00E?03 6.17E?04 - 4.63E-03 - 3.32E-03 6.70E-04 - 1.97E-03
GRAD 1.30E-26 1.79E-21 5.32E-21 2.74E-20 - 1.00E?00 - 9.96E-01 1.62E-02 - 9.36E-01
HARRIS 2.16E-18 1.50E-09 6.97E-09 3.83E-08 - 1.00E?00 - 9.98E-01 1.16E-02 - 9.36E-01
MANTA 2.00E-39 3.07E-31 1.02E-30 5.23E-30 - 1.00E?00 - 1.00E?00 1.74E-12 - 1.00E?00
PRO 1.41E-64 1.55E-55 5.69E-55 2.61E-54 - 1.00E?00 - 1.00E?00 0.00E?00 - 1.00E?00
REPTILE 0.00E?00 6.17E-244 0.00E?00 1.85E-242 - 1.00E?00 - 1.00E?00 0.00E?00 - 1.00E?00
RUNGE 6.70E-09 3.38E-05 1.02E-04 5.43E-04 - 9.36E-01 - 9.27E-01 4.98E-02 - 6.64E-01
SNAKE 2.47E-03 1.09E?02 2.98E?02 1.18E?03 - 9.62E-01 - 4.08E-01 2.67E-01 - 6.02E-02

Problem f25- Rosenbrock f26- Discus


Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 5.86E?00 4.00E?02 2.01E?02 7.17E?02 7.95E-32 4.61E-17 1.44E-16 7.29E-16


AQUILA 1.46E-01 2.75E?02 2.13E?02 4.94E?02 3.08E-17 2.38E-06 1.06E-05 5.69E-05
BARNA 4.99E?02 4.99E?02 2.92E-02 4.99E?02 5.62E-30 5.68E-12 3.11E-11 1.70E-10
EQUIL 3.19E?04 7.97E?04 3.19E?04 1.54E?05 1.44E?02 2.22E?02 4.42E?01 3.44E?02
GRAD 4.98E-02 2.79E?01 9.18E?01 4.95E?02 3.58E-28 1.51E-21 4.82E-21 2.23E-20
HARRIS 4.49E-02 1.14E?02 1.77E?02 4.94E?02 7.64E-18 3.98E-10 1.32E-09 7.09E-09
MANTA 4.98E?02 4.98E?02 8.83E-02 4.99E?02 5.79E-42 1.04E-33 3.52E-33 1.84E-32
PRO 4.99E?02 4.99E?02 1.77E-02 4.99E?02 1.88E-62 1.42E-55 5.42E-55 2.87E-54
REPTILE 4.99E?02 4.99E?02 8.53E-03 4.99E?02 0.00E?00 6.51E-183 0.00E?00 1.95E-181
RUNGE 4.96E?02 4.97E?02 1.08E?00 4.99E?02 4.34E-11 5.28E-06 2.60E-05 1.43E-04
SNAKE 6.19E-02 3.01E?02 2.11E?02 5.23E?02 2.67E-04 3.41E-01 6.72E-01 3.19E?00
Table 16 Statistical results obtained for 500D unimodal functions from f27 – Dixon and Price to f34—Powell
Problem f27- Dixon and Price f28- Trid
Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 6.85E-01 9.67E-01 8.89E-02 1.00E?00 - 2.73E?05 - 1.32E?05 8.86E?04 - 4.21E?02


AQUILA 2.52E-01 9.55E-01 1.53E-01 1.00E?00 - 2.76E?05 - 2.52E?05 5.70E?03 - 2.43E?05
BARNA 8.95E-01 9.96E-01 1.91E-02 1.00E?00 4.86E?02 4.93E?02 2.98E?00 4.98E?02
EQUIL 3.20E?05 6.19E?05 2.46E?05 1.24E?06 6.63E?10 1.02E?11 2.43E?10 1.48E?11
GRAD 3.78E-01 9.79E-01 1.14E-01 1.00E?00 - 2.52E?05 - 2.32E?05 2.56E?04 - 1.36E?05
HARRIS 2.52E-01 9.50E-01 1.90E-01 1.00E?00 - 3.12E?05 - 2.55E?05 1.52E?04 - 2.33E?05
MANTA 6.76E-01 6.87E-01 1.10E-02 7.21E-01 4.31E?02 4.48E?02 7.15E?00 4.65E?02
PRO 1.00E?00 1.00E?00 1.41E-05 1.00E?00 4.95E?02 4.99E?02 1.55E?00 5.00E?02
REPTILE 9.98E-01 1.00E?00 3.01E-04 1.00E?00 4.94E?02 4.98E?02 2.03E?00 5.00E?02
RUNGE 9.97E-01 1.00E?00 1.83E-03 1.01E?00 - 2.16E?02 3.91E?03 5.65E?03 2.05E?04
SNAKE 1.00E?00 1.78E?01 3.96E?01 1.92E?02 - 2.49E?05 2.18E?07 5.97E?07 3.13E?08

Problem f29- Schwefel 2.21 f30- Schwefel 2.23


Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 1.56E-19 9.40E-08 3.75E-07 1.99E-06 2.80E-215 1.36E-62 7.47E-62 4.09E-61


AQUILA 4.08E-12 1.57E-08 7.01E-08 3.84E-07 1.33E-117 2.64E-65 1.44E-64 7.91E-64
BARNA 6.31E-17 1.83E-10 3.17E-10 1.36E-09 5.44E-139 3.14E-73 1.72E-72 9.41E-72
EQUIL 4.07E?00 4.95E?00 5.02E-01 6.09E?00 5.85E?04 2.24E?06 2.88E?06 1.38E?07
GRAD 3.77E-15 1.20E-12 2.79E-12 1.52E-11 5.53E-146 1.35E-113 7.40E-113 4.05E-112
HARRIS 4.41E-10 2.58E-07 5.70E-07 2.12E-06 9.89E-127 3.76E-60 1.88E-59 1.03E-58
MANTA 5.64E-26 1.52E-17 6.09E-17 3.29E-16 7.69E-223 3.42E-158 1.88E-157 1.03E-156
PRO 1.77E-34 4.35E-31 1.13E-30 5.23E-30 0.00E?00 4.10E-293 0.00E?00 1.21E-291
REPTILE 0.00E?00 8.95E-23 4.90E-22 2.68E-21 0.00E?00 3.02E-272 0.00E?00 9.06E-271
RUNGE 3.78E-05 2.04E-03 3.03E-03 1.08E-02 6.94E-50 1.06E-18 5.78E-18 3.17E-17
SNAKE 6.84E-04 7.28E-02 7.93E-02 3.25E-01 2.56E-29 1.40E-12 7.57E-12 4.15E-11

Problem f31- Schwefel 2.25 f32- Schwefel 2.20


Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 3.57E?00 2.28E?01 1.69E?01 6.51E?01 9.76E-17 7.38E-08 1.95E-07 8.37E-07


AQUILA 1.35E-03 4.42E?00 9.52E?00 4.52E?01 3.22E-12 5.38E-06 1.62E-05 8.12E-05
BARNA 4.87E?02 4.91E?02 1.62E?00 4.93E?02 3.44E-12 6.22E-07 2.70E-06 1.47E-05
EQUIL 7.28E?02 1.47E?03 3.74E?02 2.28E?03 1.19E?02 1.54E?02 1.68E?01 1.92E?02
GRAD 2.16E-03 1.04E?00 1.41E?00 5.44E?00 4.50E-14 1.98E-11 3.47E-11 1.44E-10
HARRIS 5.93E-02 2.15E?00 2.04E?00 7.37E?00 3.14E-09 3.18E-05 7.14E-05 3.10E-04
MANTA 4.45E?02 4.59E?02 5.18E?00 4.69E?02 2.53E-21 1.79E-16 4.28E-16 1.87E-15
PRO 4.95E?02 4.98E?02 1.02E?00 4.99E?02 8.76E-33 2.83E-29 5.14E-29 2.33E-28
REPTILE 4.91E?02 4.97E?02 2.18E?00 4.99E?02 0.00E?00 2.66E-45 1.46E-44 7.97E-44
RUNGE 1.39E?02 1.56E?02 1.15E?01 1.87E?02 2.81E-05 7.09E-04 1.07E-03 4.89E-03
SNAKE 3.14E-02 6.86E?01 9.87E?01 3.56E?02 5.26E-02 1.48E?01 2.21E?01 6.94E?01

Problem f33 – Stretched Sine Wave f34 – Powell


Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 2.66E-08 2.92E-03 1.02E-02 5.47E-02 5.43E-53 4.94E-21 2.71E-20 1.48E-19


AQUILA 8.41E-04 5.31E-01 8.93E-01 3.87E?00 1.02E-25 1.41E-10 6.48E-10 3.50E-09
BARNA 8.73E-07 1.66E-03 3.66E-03 1.61E-02 1.26E-42 3.44E-23 1.35E-22 6.56E-22
EQUIL 1.84E?02 2.16E?02 1.62E?01 2.53E?02 1.63E-11 1.33E-02 4.14E-02 1.92E-01
GRAD 9.15E-07 2.01E-03 1.06E-02 5.81E-02 2.29E-36 3.06E-28 1.58E-27 8.68E-27
HARRIS 3.19E-02 2.25E-01 2.43E-01 8.68E-01 3.65E-30 2.50E-16 1.34E-15 7.37E-15
MANTA 1.52E-10 2.64E-03 1.44E-02 7.91E-02 5.13E-59 3.45E-50 1.24E-49 6.58E-49
PRO 8.82E-15 2.60E-09 1.18E-08 6.41E-08 1.25E-67 1.05E-59 5.69E-59 3.12E-58
REPTILE 0.00E?00 1.72E-13 7.94E-13 4.29E-12 0.00E?00 3.14E-41 1.67E-40 9.18E-40
RUNGE 4.83E-02 2.64E-01 1.73E-01 7.36E-01 2.16E-21 Inf NaN Inf
SNAKE 4.63E?00 4.90E?01 2.52E?01 1.08E?02 2.25E-06 Inf NaN Inf
Table 17 Ranking points of the algorithms assigned to the mean fitness results for 500D test functions
AFRICAN AQUILA BARNA EQUIL GRAD HARRIS MANTA PRO REPTILE RUNGE SNAKE

f1 6 7 5 11 4 8 1 1 1 9 10
f2 7 6 1 11 1 8 1 1 1 9 10
f3 7 1 1 11 1 8 1 1 1 9 10
f4 7 11 3 9 5 10 2 1 4 8 6
f5 6 4 3 11 8 7 5 1 2 9 10
f6 5 9 6 11 4 5 3 2 1 8 10
f7 5 6 7 11 3 8 2 1 4 9 10
f8 6 5 4 11 7 8 1 1 1 10 9
f9 5 1 6 11 2 3 8 9 10 7 4
f10 6 7 5 11 4 8 3 2 1 9 10
f11 8 7 5 11 4 6 1 2 3 9 10
f12 5 8 4 9 6 7 3 1 2 10 11
f13 7 3 2 11 6 4 5 1 8 10 9
f14 4 1 9 11 2 3 7 10 8 5 6
f15 6 3 4 11 8 7 9 1 2 10 5
f16 4 1 9 11 2 3 7 10 8 6 5
f17 4 6 8 11 2 3 5 10 7 9 1
f18 4 3 8 11 1 2 7 9 10 6 5
Average point 5.66 4.94 5.00 10.77 3.88 6.00 3.94 3.55 4.11 8.44 7.83
Ranking point 7 5 6 11 2 8 3 1 4 10 9
f19 7 6 5 11 4 8 3 2 1 9 10
f20 7 6 5 11 4 8 3 1 2 9 10
f21 5 8 6 9 4 7 2 1 3 11 10
f22 5 8 6 11 4 7 3 2 1 9 10
f23 5 6 7 11 4 8 3 2 1 9 10
f24 1 6 1 11 8 6 1 1 1 9 10
f25 5 3 8 11 1 2 7 8 8 6 4
f26 5 8 6 11 4 7 3 2 1 9 10
f27 4 3 6 11 5 2 1 7 8 8 8
f28 4 2 6 11 3 1 5 8 7 9 10
f29 7 6 5 11 4 8 3 1 2 9 10
f30 7 6 5 11 4 8 3 1 2 9 10
f31 4 3 8 11 1 2 7 10 9 6 5
f32 5 7 6 11 4 8 3 2 1 9 10
f33 6 9 3 11 4 7 5 2 1 8 10
f34 6 8 5 9 4 7 2 1 3 10 11
Average point 5.18 5.80 5.50 10.75 3.87 6.00 3.37 3.18 3.19 8.68 9.25
Ranking point 5 7 6 11 4 8 3 1 2 9 10
Overall point 5.43 5.34 5.23 10.76 3.88 6.00 3.67 3.37 3.68 8.55 8.49
Overall ranking point 7 6 5 11 4 8 2 1 3 10 9

4.4 Comparison of the runtime complexities of the algorithms

This section is devoted to investigating the runtime complexities of the algorithms on the 30D unimodal and multimodal test functions. This feature of any stochastic metaheuristic algorithm should be thoroughly analyzed in order to gain clear insights into its general performance when solving exhaustive and tedious optimization problems. The computational load imposed by the repeated execution of the search equations of the metaheuristic


Table 18 Performance comparison of the algorithms based on the ranking points obtained for the best results of 500D test functions
AFRICAN AQUILA BARNA EQUIL GRAD HARRIS MANTA PRO REPTILE RUNGE SNAKE

f1 1 7 6 11 1 8 1 1 1 9 10
f2 1 1 1 11 1 1 1 1 1 9 10
f3 1 1 1 11 1 1 1 1 1 9 10
f4 8 5 4 11 6 9 3 2 1 10 7
f5 4 6 5 11 7 8 3 2 1 10 9
f6 4 7 6 11 5 8 3 2 1 9 10
f7 5 7 6 11 4 8 3 2 1 9 10
f8 1 1 1 11 1 1 1 1 1 10 9
f9 5 1 8 11 4 2 7 9 10 6 3
f10 4 7 6 11 5 8 3 2 1 9 10
f11 1 1 1 11 1 1 1 1 1 9 10
f12 4 9 5 10 6 7 3 2 1 8 11
f13 1 1 1 11 1 1 1 1 1 10 9
f14 5 1 9 11 3 2 8 10 7 6 4
f15 1 1 1 11 8 7 1 1 1 10 9
f16 5 1 9 11 3 4 8 10 7 6 2
f17 4 7 6 11 2 3 5 10 8 9 1
f18 5 1 8 10 3 4 7 9 11 6 2
Average point 3.33 3.61 4.66 10.88 3.44 4.61 3.32 3.72 3.11 8.55 7.55
Ranking point 3 5 6 11 4 7 2 6 1 10 9
f19 5 7 4 11 6 8 3 2 1 9 10
f20 3 7 5 11 6 8 4 2 1 9 10
f21 4 8 6 10 5 7 3 2 1 9 11
f22 4 7 6 11 5 8 3 2 1 9 10
f23 4 7 5 11 6 8 3 2 1 9 10
f24 1 1 1 11 1 1 1 1 1 10 9
f25 5 4 8 11 2 1 7 8 8 6 3
f26 4 8 5 11 6 7 3 2 1 9 10
f27 5 1 6 11 3 2 4 9 8 7 10
f28 3 2 8 11 4 1 7 10 9 6 5
f29 4 7 5 11 6 8 3 2 1 9 10
f30 4 8 6 11 5 7 3 1 1 9 10
f31 5 1 8 11 2 4 7 10 9 6 3
f32 4 6 7 11 5 8 3 2 1 9 10
f33 4 7 5 11 6 8 3 2 1 9 10
f34 4 8 5 10 6 7 3 2 1 9 11
Average point 3.93 5.56 5.63 10.87 4.62 5.81 3.75 3.68 2.87 8.37 8.87
Ranking point 4 6 7 11 5 8 3 2 1 9 10
Overall point 3.61 4.52 5.11 10.88 4.00 5.17 3.52 3.70 3.00 8.46 8.17
Overall ranking point 3 6 7 11 5 8 2 4 1 10 9

algorithms needs to be deeply scrutinized if one is to form a firm opinion on the total expended runtime and to decide which algorithms should be preferred for a specific type of optimization problem. Figures 1 and 2 comparatively visualize the elapsed runtimes of each contestant algorithm for each benchmark function. A total of 2000 function evaluations is performed for the unimodal and multimodal test functions, and the execution times of the algorithms are averaged over 30 independent runs.
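As an illustration of how such runtime figures can be collected, the following MATLAB-style sketch times repeated runs of a simple stand-in optimizer under the same fixed evaluation budget; the embedded random search is only a placeholder and is not one of the compared algorithms.

% Minimal timing sketch. The random search below is only a stand-in for
% any of the compared metaheuristics; it is not one of their codes.
numRuns = 30;                      % independent runs
maxFEs  = 2000;                    % function-evaluation budget per run
dim = 30;  lb = -100;  ub = 100;
fun = @(x) sum(x.^2);              % example unimodal test function (Sphere)
elapsed = zeros(numRuns, 1);

for r = 1:numRuns
    rng(r);                        % reproducible seed per run
    tic;
    bestF = inf;
    for fe = 1:maxFEs              % stand-in optimizer: pure random search
        x = lb + (ub - lb) * rand(1, dim);
        bestF = min(bestF, fun(x));
    end
    elapsed(r) = toc;              % wall-clock time of this run
end

fprintf('Average runtime over %d runs: %.4f s\n', numRuns, mean(elapsed));

Averaging the wall-clock times in this way over the independent runs gives the per-algorithm runtime values visualized in Figs. 1 and 2.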


Table 19 Comparison of the competitive algorithms with respect to the statistical results of the 1000D multimodal test functions from f1 –Ackley
to f8—Schaffer
Problem f1- Ackley f2- Rastrigin
Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 8.88E-16 2.05E-08 1.10E-07 6.04E-07 0.00E?00 9.16E-12 5.01E-11 2.75E-10


AQUILA 1.75E-12 6.21E-08 1.70E-07 7.81E-07 0.00E?00 6.86E-11 3.57E-10 1.96E-09
BARNA 4.44E-15 2.60E-09 8.90E-09 3.67E-08 0.00E?00 0.00E?00 0.00E?00 0.00E?00
EQUIL 3.06E?00 3.69E?00 2.48E-01 4.30E?00 5.79E?03 6.69E?03 5.89E?02 8.11E?03
GRAD 4.44E-15 7.45E-13 2.57E-12 1.42E-11 0.00E?00 0.00E?00 0.00E?00 0.00E?00
HARRIS 1.94E-10 2.45E-07 5.07E-07 2.72E-06 0.00E?00 8.41E-08 4.03E-07 2.21E-06
MANTA 8.88E-16 8.88E-16 0.00E?00 8.88E-16 0.00E?00 0.00E?00 0.00E?00 0.00E?00
PRO 8.88E-16 8.88E-16 0.00E?00 8.88E-16 0.00E?00 0.00E?00 0.00E?00 0.00E?00
REPTILE 8.88E-16 8.88E-16 0.00E?00 8.88E-16 0.00E?00 0.00E?00 0.00E?00 0.00E?00
RUNGE 1.27E-06 3.85E-05 8.64E-05 4.31E-04 2.49E-09 2.45E-03 1.08E-02 5.81E-02
SNAKE 6.19E-03 4.87E-01 5.54E-01 2.24E?00 1.31E-01 1.14E?03 1.03E?03 4.50E?03

Problem f3- Griewank f4- Zakharov


Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 0.00E?00 3.33E-17 1.82E-16 9.99E-16 1.70E-03 1.18E?04 2.78E?04 1.56E?05


AQUILA 0.00E?00 2.32E-15 1.01E-14 5.50E-14 3.35E-11 3.20E?07 1.53E?08 8.38E?08
BARNA 0.00E?00 0.00E?00 0.00E?00 0.00E?00 1.01E-22 2.70E-12 1.26E-11 6.89E-11
EQUIL 4.69E-01 7.36E-01 1.09E-01 9.65E-01 9.09E?03 1.47E?05 3.61E?05 1.68E?06
GRAD 0.00E?00 0.00E?00 0.00E?00 0.00E?00 3.43E-03 1.79E?02 4.70E?02 2.11E?03
HARRIS 0.00E?00 4.04E-14 1.34E-13 5.41E-13 4.30E?03 2.59E?04 9.48E?03 3.44E?04
MANTA 0.00E?00 0.00E?00 0.00E?00 0.00E?00 4.04E-42 7.93E-32 2.48E-31 9.83E-31
PRO 0.00E?00 0.00E?00 0.00E?00 0.00E?00 2.22E-65 1.34E-54 4.40E-54 2.25E-53
REPTILE 0.00E?00 0.00E?00 0.00E?00 0.00E?00 3.14E-270 1.15E-02 3.14E-02 1.23E-01
RUNGE 3.54E-13 1.25E-09 2.95E-09 1.36E-08 1.72E?03 1.03E?04 7.92E?03 3.44E?04
SNAKE 2.16E-07 1.77E-02 5.40E-02 2.22E-01 6.96E-07 1.82E?00 4.60E?00 2.19E?01

Problem f5- Salomon f6- Alpine


Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 2.03E-17 8.64E-05 3.24E-04 1.62E-03 1.77E-18 2.92E-05 1.59E-04 8.70E-04


AQUILA 7.77E-11 6.66E-03 2.54E-02 1.00E-01 1.36E-10 2.62E-03 9.77E-03 3.89E-02
BARNA 1.23E-14 4.52E-09 1.54E-08 8.04E-08 1.85E-16 2.28E-08 5.89E-08 2.22E-07
EQUIL 3.20E?00 3.92E?00 3.64E-01 4.60E?00 1.65E?02 2.16E?02 2.88E?01 2.74E?02
GRAD 2.53E-09 3.78E-02 4.85E-02 1.00E-01 3.78E-15 7.40E-12 1.72E-11 7.81E-11
HARRIS 2.05E-08 3.36E-03 1.82E-02 9.99E-02 1.26E-08 2.68E-06 3.82E-06 1.40E-05
MANTA 1.38E-20 6.84E-08 3.74E-07 2.05E-06 2.11E-22 2.81E-17 7.87E-17 4.25E-16
PRO 3.74E-33 2.00E-27 9.44E-27 5.17E-26 4.59E-33 1.59E-28 5.84E-28 2.82E-27
REPTILE 0.00E?00 2.47E-16 9.42E-16 4.22E-15 0.00E?00 7.65E-22 4.19E-21 2.29E-20
RUNGE 9.99E-02 1.24E-01 5.16E-02 3.00E-01 8.00E-06 4.28E-04 6.52E-04 2.69E-03
SNAKE 7.61E-03 4.21E-01 2.97E-01 1.14E?00 3.08E-02 7.88E-01 1.11E?00 5.77E?00

Problem f7- Csendes f8- Schaffer


Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 7.90E-95 1.57E-40 8.52E-40 4.67E-39 0.00E?00 5.32E-10 2.49E-09 1.36E-08


AQUILA 6.50E-70 3.34E-42 1.82E-41 9.99E-41 0.00E?00 1.16E-13 5.96E-13 3.27E-12
BARNA 2.35E-94 7.83E-43 4.29E-42 2.35E-41 0.00E?00 2.29E-14 9.12E-14 4.94E-13
EQUIL 1.37E?04 7.18E?04 4.28E?04 2.19E?05 4.40E?02 4.51E?02 5.40E?00 4.64E?02
GRAD 6.08E-90 1.83E-62 1.00E-61 5.48E-61 0.00E?00 4.24E-13 1.87E-12 1.00E-11
HARRIS 4.63E-65 2.05E-30 1.12E-29 6.13E-29 0.00E?00 7.76E-07 4.25E-06 2.33E-05
MANTA 1.32E-123 3.80E-96 1.66E-95 8.78E-95 0.00E?00 0.00E?00 0.00E?00 0.00E?00
PRO 2.40E-209 2.10E-173 0.00E?00 6.30E-172 0.00E?00 0.00E?00 0.00E?00 0.00E?00
REPTILE 1.56E-278 5.14E-06 2.82E-05 1.54E-04 0.00E?00 0.00E?00 0.00E?00 0.00E?00
RUNGE 3.92E-29 4.75E-13 1.46E-12 6.07E-12 3.50E?02 4.55E?02 1.99E?01 4.65E?02
SNAKE 1.97E-22 5.33E-04 2.10E-03 1.12E-02 1.52E-03 1.67E?01 2.72E?01 1.33E?02


Table 20 Statistical comparison of the optimal results for 1000D multimodal test functions from f9 – Yang to f18—Levy
Problem f9- Yang2 f10- inverted cosine mixture
Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 0.00E?00 1.79E-312 0.00E?00 5.36E-311 3.00E-37 2.15E-15 1.18E-14 6.44E-14


AQUILA 0.00E?00 0.00E?00 0.00E?00 0.00E?00 5.19E-22 9.42E-15 4.95E-14 2.71E-13
BARNA 4.32E-239 1.71E-138 9.22E-138 5.05E-137 3.58E-35 2.05E-14 1.11E-13 6.08E-13
EQUIL 1.75E-155 1.09E-132 5.77E-132 3.16E-131 3.17E?02 5.00E?02 8.13E?01 6.77E?02
GRAD 0.00E?00 0.00E?00 0.00E?00 0.00E?00 2.48E-29 7.34E-23 2.07E-22 1.10E-21
HARRIS 0.00E?00 0.00E?00 0.00E?00 0.00E?00 5.76E-22 3.42E-09 1.84E-08 1.01E-07
MANTA 4.77E-237 1.62E-176 0.00E?00 4.83E-175 3.15E-39 1.68E-29 9.16E-29 5.02E-28
PRO 6.31E-230 7.19E-161 3.94E-160 2.16E-159 4.54E-64 8.83E-58 3.49E-57 1.85E-56
REPTILE 1.76E-164 1.98E-140 7.95E-140 4.14E-139 0.00E?00 3.59E-69 1.97E-68 1.08E-67
RUNGE 1.39E-272 1.32E-187 0.00E?00 3.87E-186 8.42E-10 5.10E-05 2.60E-04 1.43E-03
SNAKE 0.00E?00 0.00E?00 0.00E?00 0.00E?00 1.18E-03 4.72E?00 1.30E?01 6.60E?01
Problem f11- Wavy f12- Yang1
Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 0.00E?00 9.11E-10 4.22E-09 2.28E-08 6.95E-26 1.78E-07 7.78E-07 4.17E-06


AQUILA 0.00E?00 5.23E-15 2.54E-14 1.39E-13 1.14E-11 2.87E-05 6.44E-05 2.73E-04
BARNA 0.00E?00 8.07E-16 4.03E-15 2.21E-14 1.91E-18 9.68E-10 4.99E-09 2.73E-08
EQUIL 8.49E-01 9.43E-01 2.41E-02 9.69E-01 7.75E-12 2.91E-07 8.69E-07 4.77E-06
GRAD 0.00E?00 1.48E-16 7.92E-16 4.34E-15 1.99E-12 3.26E-06 1.13E-05 5.78E-05
HARRIS 0.00E?00 1.16E-11 3.05E-11 1.40E-10 1.24E-15 5.29E-06 2.06E-05 1.02E-04
MANTA 0.00E?00 0.00E?00 0.00E?00 0.00E?00 2.43E-24 3.21E-10 1.65E-09 9.03E-09
PRO 0.00E?00 0.00E?00 0.00E?00 0.00E?00 7.34E-34 1.60E-26 4.25E-26 1.59E-25
REPTILE 0.00E?00 0.00E?00 0.00E?00 0.00E?00 0.00E?00 7.45E-19 2.26E-18 1.08E-17
RUNGE 1.29E-09 5.96E-06 2.80E-05 1.54E-04 N/A N/A N/A N/A
SNAKE 2.04E-04 4.46E-01 3.56E-01 9.75E-01 2.72E-05 N/A N/A N/A

Problem f13- Yang4 f14- Penalized1


Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN - 1.00E?00 - 1.00E-01 3.05E-01 0.00E?00 5.69E-05 7.46E-03 1.19E-02 4.78E-02


AQUILA - 1.00E?00 - 5.63E-01 5.01E-01 0.00E?00 1.45E-07 2.43E-05 2.54E-05 9.62E-05
BARNA - 1.00E?00 - 8.46E-01 3.55E-01 0.00E?00 1.14E?00 1.16E?00 6.38E-03 1.17E?00
EQUIL 0.00E?00 8.40E-323 0.00E?00 2.51E-321 1.35E?00 1.52E?00 1.16E-01 1.82E?00
GRAD 0.00E?00 0.00E?00 0.00E?00 0.00E?00 4.17E-06 1.93E-04 2.73E-04 1.10E-03
HARRIS - 1.00E?00 - 2.54E-01 3.90E-01 0.00E?00 5.25E-06 6.10E-04 7.32E-04 2.74E-03
MANTA - 1.00E?00 - 3.43E-01 4.56E-01 0.00E?00 9.86E-01 1.03E?00 1.86E-02 1.06E?00
PRO - 1.00E?00 - 7.39E-01 3.62E-01 0.00E?00 1.16E?00 1.19E?00 7.92E-03 1.19E?00
REPTILE - 1.00E?00 - 6.67E-02 2.54E-01 0.00E?00 2.10E-01 1.04E?00 3.05E-01 1.19E?00
RUNGE 0.00E?00 0.00E?00 0.00E?00 0.00E?00 4.12E-02 5.61E-02 8.54E-03 7.38E-02
SNAKE - 5.04E-03 - 2.93E-04 1.02E-03 0.00E?00 1.70E-06 9.05E-02 2.10E-01 7.49E-01

Problem f15- Path f16- Quintic


Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 0.00E?00 3.27E?00 9.95E?00 3.59E?01 1.17E?02 6.39E?02 4.54E?02 1.67E?03


AQUILA 0.00E?00 2.78E-03 1.52E-02 8.34E-02 4.96E-01 3.90E?01 3.60E?01 1.32E?02
BARNA 0.00E?00 1.71E-08 9.36E-08 5.13E-07 3.71E?03 3.85E?03 3.97E?01 3.89E?03
EQUIL 4.61E?02 4.71E?02 4.02E?00 4.77E?02 7.32E?03 1.33E?04 4.76E?03 2.75E?04
GRAD 0.00E?00 1.67E?01 7.16E?01 3.89E?02 1.15E?01 8.87E?01 6.71E?01 3.11E?02


Table 20 (continued)
Problem f15- Path f16- Quintic
Best Mean Std Dev Worst Best Mean Std Dev Worst

HARRIS 6.40E-13 7.69E-01 4.07E?00 2.23E?01 2.02E?00 1.48E?02 1.10E?02 5.02E?02


MANTA 0.00E?00 9.43E?01 1.91E?02 4.74E?02 3.49E?03 3.60E?03 4.84E?01 3.69E?03
PRO 0.00E?00 0.00E?00 0.00E?00 0.00E?00 3.78E?03 3.88E?03 5.14E?01 3.94E?03
REPTILE 0.00E?00 0.00E?00 0.00E?00 0.00E?00 1.95E?03 3.90E?03 3.75E?02 4.00E?03
RUNGE 3.97E?02 4.62E?02 1.59E?01 4.75E?02 1.05E?03 1.21E?03 7.81E?01 1.42E?03
SNAKE 9.78E-04 2.01E?01 8.56E?01 4.71E?02 3.75E?01 9.31E?02 1.07E?03 3.70E?03

Problem f17- Qing f18- Levy


Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 2.82E?08 2.94E?08 4.07E?06 3.01E?08 3.37E-01 4.23E?00 5.58E?00 3.11E?01


AQUILA 2.89E?08 2.96E?08 1.97E?06 2.98E?08 5.19E-04 4.89E-01 7.80E-01 3.88E?00
BARNA 2.60E?08 2.69E?08 3.13E?06 2.74E?08 3.59E?02 3.65E?02 2.12E?00 3.69E?02
EQUIL 2.99E?08 3.00E?08 6.19E?05 3.02E?08 3.96E?02 4.92E?02 3.86E?01 5.82E?02
GRAD 2.47E?08 2.56E?08 6.15E?06 2.67E?08 3.25E-04 1.03E-01 1.61E-01 6.06E-01
HARRIS 2.56E?08 2.70E?08 6.32E?06 2.83E?08 2.77E-03 1.85E-01 2.52E-01 1.24E?00
MANTA 2.89E?08 2.93E?08 2.01E?06 2.96E?08 3.14E?02 3.23E?02 5.34E?00 3.35E?02
PRO 2.98E?08 2.99E?08 3.64E?05 3.00E?08 3.71E?02 3.74E?02 1.19E?00 3.75E?02
REPTILE 2.96E?08 2.97E?08 6.21E?05 2.98E?08 3.65E?02 3.73E?02 2.41E?00 3.75E?02
RUNGE 2.96E?08 2.99E?08 1.19E?06 3.00E?08 1.37E?01 1.68E?01 1.86E?00 2.12E?01
SNAKE 2.44E?08 2.51E?08 8.27E?06 2.79E?08 1.39E-03 1.71E?01 5.93E?01 3.24E?02

It is seen from the figures that the expended computational effort for the unimodal and multimodal test functions is quite similar, which can be deduced from the runtimes of the algorithms obtained for the different problems. The RUNGE algorithm consumes a considerable amount of computational resources because its general manipulation scheme includes an excessive number of complementary and well-tailored search equations that metaphorically mimic the algorithmic steps of the Runge-Kutta differential equation solver. Interested readers can examine the governing search equations of the algorithms in the associated sections of this study and comparatively investigate which algorithm requires the most elaborate manipulation equations and therefore bears the highest computational load. The RUNGE algorithm has the highest runtimes, taking on average four or five times longer to arrive at the optimal solution of a problem than the remaining algorithms, except for the BARNA algorithm. The BARNA algorithm is another relatively time-consuming optimizer, which mainly results from the consistent employment of the randperm() function throughout the iterations, responsible for shuffling the current positions of the population members. The BARNA algorithm also requires an ascending sorting of the population of individuals based on their respective fitness values, which is another decisive factor explaining the excessive computational time required to complete the predefined number of iterations. As is evident from the figures, the remaining algorithms accomplish the predefined 2000 function evaluations within a runtime band between 0.02 and 0.03 s, which is much quicker than the time elapsed for the RUNGE and BARNA algorithms.

4.5 Evaluation on the convergence behavior of the algorithms

Convergence curves give a provisional insight to end users on how quickly the algorithm reaches its optimal solution and provide a visual understanding of the tendencies of the iterative declines in fitness values. The ongoing evolution of the objective function value of an optimization problem is directly related to the predefined number of iterations, which means that any increase in the number of iterations leads to an increase in the probability of reaching the global optimum of the problem. A detailed examination of the proclivities of the convergence graphs helps to analyze the gradual decreases (or increases) in the objective function values of the employed optimization problem, which is conducive to fully comprehending and monitoring the ruling search mechanism operated during the course of the iterations.
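To make the construction of these convergence curves concrete, the following MATLAB-style sketch records the best-so-far fitness at each iteration, averages the recorded histories over the independent runs, and plots the resulting mean curve; the inner random-search loop is only a stand-in for an actual metaheuristic and is not taken from any of the compared algorithms.

% Sketch of how the mean convergence curves are assembled (assumption:
% the random-search inner loop stands in for any compared metaheuristic).
numRuns = 30;  maxIter = 1000;
dim = 30;  lb = -100;  ub = 100;
fun = @(x) sum(x.^2);                          % Sphere as an example
history = zeros(numRuns, maxIter);             % best-so-far value per iteration

for r = 1:numRuns
    rng(r);
    bestF = inf;
    for it = 1:maxIter
        x = lb + (ub - lb) * rand(1, dim);     % placeholder search move
        bestF = min(bestF, fun(x));
        history(r, it) = bestF;                % record best-so-far fitness
    end
end

meanCurve = mean(history, 1);                  % average over independent runs
semilogy(1:maxIter, meanCurve, 'LineWidth', 1.5);
xlabel('Iteration');  ylabel('Mean best-so-far fitness');
title('Mean convergence curve over 30 independent runs');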


Table 21 Comparison of the statistical results for 1000D unimodal test functions from f19 – Sphere to f26—Discus
Problem f19- Sphere f20- Brown
Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 6.05E-34 4.05E-16 2.20E-15 1.20E-14 3.86E-34 1.11E-16 5.33E-16 2.92E-15


AQUILA 1.44E-22 1.10E-11 5.77E-11 3.16E-10 1.54E-23 2.56E-12 1.38E-11 7.58E-11
BARNA 1.09E-32 3.99E-16 1.56E-15 7.52E-15 7.17E-32 3.22E-14 1.68E-13 9.22E-13
EQUIL 2.68E?02 3.95E?02 6.49E?01 5.01E?02 4.00E?21 3.09E?73 1.69E?74 9.26E?74
GRAD 1.47E-29 6.93E-23 2.51E-22 1.26E-21 1.73E-27 4.89E-21 2.59E-20 1.42E-19
HARRIS 1.14E-17 2.74E-10 1.26E-09 6.89E-09 7.54E-19 1.22E-11 3.66E-11 1.97E-10
MANTA 1.45E-44 9.48E-34 3.67E-33 1.88E-32 5.76E-43 2.37E-33 8.42E-33 4.45E-32
PRO 5.50E-66 1.81E-57 9.59E-57 5.26E-56 9.70E-67 2.13E-57 6.78E-57 3.07E-56
REPTILE 0.00E?00 1.07E-41 5.86E-41 3.21E-40 0.00E?00 9.31E-41 4.85E-40 2.66E-39
RUNGE 1.00E-10 3.39E-07 8.80E-07 4.12E-06 1.79E-10 9.19E-06 4.31E-05 2.35E-04
SNAKE 8.88E-06 3.26E-01 9.32E-01 4.97E?00 9.06E-06 3.73E-01 8.45E-01 3.30E?00

Problem f21—Sum of difference f22- Bentcigar


Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 2.24E-51 4.84E?200 Inf 1.45E?202 8.77E-26 4.49E-08 2.37E-07 1.30E-06


AQUILA 4.63E-23 1.45E-09 7.90E-09 4.33E-08 5.92E-17 5.20E-08 2.18E-07 1.14E-06
BARNA 9.35E-38 4.20E-20 2.29E-19 1.25E-18 3.22E-26 5.11E-09 2.49E-08 1.36E-07
EQUIL 1.76E-17 6.33E-09 2.95E-08 1.61E-07 2.52E?08 4.01E?08 6.03E?07 5.51E?08
GRAD 3.35E-28 4.38E-11 2.39E-10 1.31E-09 7.28E-23 6.38E-18 2.50E-17 1.36E-16
HARRIS 4.15E-30 4.90E-18 1.35E-17 5.38E-17 7.84E-12 1.78E-03 9.72E-03 5.32E-02
MANTA 3.39E-58 5.47E-48 2.18E-47 1.08E-46 7.83E-39 1.51E-25 8.16E-25 4.47E-24
PRO 3.62E-72 2.80E-59 1.45E-58 7.96E-58 2.40E-61 3.59E-52 1.88E-51 1.03E-50
REPTILE 0.00E?00 1.79E-40 9.39E-40 5.15E-39 0.00E?00 1.85E-44 1.02E-43 5.56E-43
RUNGE N/A N/A N/A N/A 1.45E-04 4.49E-01 1.42E?00 7.52E?00
SNAKE 5.25E-07 N/A N/A N/A 8.21E?01 3.76E?05 1.19E?06 6.52E?06

Problem f23- Sumsquares f24- Dropwave


Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 3.69E-32 2.44E-11 9.65E-11 4.99E-10 - 1.00E?00 - 1.00E?00 6.96E-08 - 1.00E?00


AQUILA 6.51E-20 6.94E-11 3.44E-10 1.89E-09 - 1.00E?00 - 9.98E-01 1.31E-02 - 9.28E-01
BARNA 1.53E-26 2.05E-11 8.80E-11 4.63E-10 - 1.00E?00 - 1.00E?00 7.55E-12 - 1.00E?00
EQUIL 1.20E?05 1.91E?05 4.01E?04 2.77E?05 - 1.97E-03 - 1.42E-03 2.70E-04 - 8.90E-04
GRAD 4.84E-26 4.29E-20 1.66E-19 8.88E-19 - 1.00E?00 - 9.89E-01 2.42E-02 - 9.36E-01
HARRIS 1.21E-13 4.18E-09 1.28E-08 6.73E-08 - 1.00E?00 - 1.00E?00 5.00E-05 - 1.00E?00
MANTA 3.13E-40 1.65E-30 6.10E-30 2.62E-29 - 1.00E?00 - 1.00E?00 4.84E-09 - 1.00E?00
PRO 1.81E-64 2.46E-55 1.07E-54 5.82E-54 - 1.00E?00 - 1.00E?00 0.00E?00 - 1.00E?00
REPTILE 0.00E?00 1.87E-38 1.02E-37 5.61E-37 - 1.00E?00 - 1.00E?00 0.00E?00 - 1.00E?00
RUNGE 4.36E-09 1.61E-04 4.82E-04 2.51E-03 - 9.36E-01 - 9.36E-01 7.54E-11 - 9.36E-01
SNAKE 3.95E-02 2.63E?02 6.54E?02 3.20E?03 - 9.26E-01 - 3.59E-01 2.67E-01 - 2.35E-02

Problem f25- Rosenbrock f26- Discus


Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 3.87E?02 9.20E?02 2.12E?02 1.37E?03 9.55E-32 1.41E-14 5.08E-14 2.45E-13


AQUILA 5.26E?00 7.04E?02 4.06E?02 9.89E?02 1.96E-17 9.90E-07 4.42E-06 2.37E-05
BARNA 9.99E?02 9.99E?02 2.36E-02 9.99E?02 7.04E-36 7.67E-16 3.81E-15 2.09E-14
EQUIL 1.35E?05 2.59E?05 6.49E?04 3.65E?05 3.67E?02 5.70E?02 9.64E?01 7.77E?02
GRAD 2.62E-01 2.29E?01 3.80E?01 1.68E?02 5.62E-27 2.27E-19 9.84E-19 5.26E-18
HARRIS 4.39E-01 2.39E?02 3.87E?02 9.90E?02 6.47E-20 1.06E-09 4.74E-09 2.60E-08
MANTA 9.98E?02 9.98E?02 8.07E-02 9.99E?02 8.13E-44 2.96E-31 1.58E-30 8.67E-30
PRO 9.99E?02 9.99E?02 1.57E-02 9.99E?02 3.95E-65 2.87E-55 1.18E-54 6.23E-54
REPTILE 9.99E?02 9.99E?02 9.63E-03 9.99E?02 0.00E?00 3.48E-75 1.91E-74 1.05E-73
RUNGE 9.93E?02 9.95E?02 1.91E?00 9.99E?02 5.46E-09 6.56E-06 1.58E-05 7.97E-05
SNAKE 1.24E?01 7.90E?02 3.73E?02 1.09E?03 1.33E-05 8.66E-01 2.86E?00 1.32E?01


Table 22 Comparison of the statistical results for 1000D unimodal test functions from f27 – Dixon-Price to f34—Powell
Problem f27- Dixon and Price f28- Trid
Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 7.44E-01 9.90E-01 4.67E-02 1.00E?00 - 1.00E?06 - 4.52E?05 3.86E?05 2.47E?02


AQUILA 2.50E-01 8.52E-01 2.69E-01 1.00E?00 - 1.11E?06 - 9.77E?05 1.62E?05 - 1.28E?05
BARNA 9.98E-01 1.00E?00 4.00E-04 1.00E?00 9.87E?02 9.94E?02 2.89E?00 9.98E?02
EQUIL 1.17E?06 5.18E?06 2.10E?06 1.03E?07 2.79E?12 3.91E?12 6.87E?11 5.52E?12
GRAD 3.75E-01 9.63E-01 1.42E-01 1.00E?00 - 1.00E?06 - 8.94E?05 1.36E?05 - 5.53E?05
HARRIS 3.43E-01 9.78E-01 1.20E-01 1.00E?00 - 1.09E?06 - 9.92E?05 4.71E?04 - 8.62E?05
MANTA 6.76E-01 6.92E-01 1.53E-02 7.45E-01 9.36E?02 9.50E?02 7.28E?00 9.63E?02
PRO 1.00E?00 1.00E?00 3.14E-06 1.00E?00 9.94E?02 9.99E?02 1.34E?00 1.00E?03
REPTILE 9.11E-01 9.97E-01 1.62E-02 1.00E?00 9.97E?02 9.99E?02 7.88E-01 1.00E?03
RUNGE 9.96E-01 1.00E?00 7.65E-03 1.04E?00 5.05E?04 6.66E?06 2.13E?07 9.03E?07
SNAKE 1.11E?00 1.02E?02 1.69E?02 5.03E?02 - 9.98E?05 2.93E?08 9.09E?08 4.96E?09

Problem f29- Schwefel 2.21 f30- Schwefel 2.23


Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 5.26E-20 2.47E-07 1.17E-06 6.38E-06 7.10E-177 3.59E-60 1.97E-59 1.08E-58


AQUILA 5.27E-14 1.24E-09 2.61E-09 1.09E-08 1.59E-127 3.78E-71 2.07E-70 1.13E-69
BARNA 6.57E-14 6.46E-10 1.80E-09 9.01E-09 5.75E-150 2.89E-76 1.25E-75 6.62E-75
EQUIL 4.80E?00 5.88E?00 4.50E-01 6.82E?00 2.38E?06 2.09E?07 1.72E?07 6.64E?07
GRAD 5.63E-16 2.34E-12 4.47E-12 1.80E-11 1.12E-147 3.18E-115 1.58E-114 8.62E-114
HARRIS 1.65E-10 7.54E-08 1.44E-07 5.88E-07 9.21E-104 4.44E-54 2.43E-53 1.33E-52
MANTA 5.45E-24 5.32E-18 1.67E-17 7.61E-17 4.52E-218 3.82E-169 0.00E?00 5.79E-168
PRO 1.21E-33 4.37E-31 1.00E-30 5.02E-30 0.00E?00 3.85E-296 0.00E?00 1.15E-294
REPTILE 0.00E?00 5.00E-21 2.74E-20 1.50E-19 0.00E?00 1.34E-161 7.33E-161 4.02E-160
RUNGE 1.35E-04 3.61E-03 4.88E-03 2.32E-02 9.15E-46 1.22E-19 6.69E-19 3.66E-18
SNAKE 5.40E-05 7.87E-02 1.29E-01 5.88E-01 7.79E-29 3.31E-08 1.65E-07 9.02E-07

Problem f31- Schwefel 2.25 f32- Schwefel 2.20


Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 1.68E?01 5.73E?01 4.29E?01 2.16E?02 7.02E-17 1.00E-06 2.91E-06 1.20E-05


AQUILA 5.43E-02 2.16E?01 2.42E?01 7.25E?01 1.72E-09 2.56E-06 9.24E-06 4.81E-05
BARNA 9.83E?02 9.90E?02 2.47E?00 9.94E?02 4.13E-14 1.64E-07 5.02E-07 2.58E-06
EQUIL 2.46E?03 4.20E?03 1.39E?03 8.32E?03 2.80E?02 3.60E?02 3.44E?01 4.16E?02
GRAD 2.80E-03 1.72E?00 2.50E?00 1.18E?01 2.38E-13 3.52E-11 6.05E-11 2.53E-10
HARRIS 2.96E-02 5.17E?00 8.44E?00 4.40E?01 3.07E-10 1.68E-05 2.29E-05 9.97E-05
MANTA 9.40E?02 9.54E?02 6.47E?00 9.67E?02 3.07E-19 9.92E-16 1.99E-15 9.01E-15
PRO 9.94E?02 9.98E?02 1.01E?00 9.99E?02 5.17E-32 4.69E-28 1.11E-27 4.65E-27
REPTILE 9.95E?02 9.98E?02 8.19E-01 9.99E?02 0.00E?00 6.43E-39 3.50E-38 1.92E-37
RUNGE 3.06E?02 3.46E?02 2.61E?01 4.22E?02 2.50E-05 2.65E-03 5.56E-03 2.51E-02
SNAKE 9.64E-04 1.38E?02 2.67E?02 1.00E?03 6.50E-02 2.79E?01 3.46E?01 1.38E?02

Problem f33- Stretched Sine Wave f34- Powell


Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 1.74E-08 3.65E-03 1.30E-02 6.96E-02 1.74E-46 N/A N/A N/A


AQUILA 9.73E-02 5.98E-01 8.71E-01 4.35E?00 2.64E-22 2.65E-10 1.09E-09 5.88E-09
BARNA 1.53E-06 1.11E-02 2.08E-02 7.01E-02 1.91E-41 3.40E-22 1.29E-21 6.93E-21
EQUIL 3.89E?02 4.31E?02 2.23E?01 4.67E?02 2.78E-19 2.17E-09 1.10E-08 6.05E-08
GRAD 3.30E-06 3.48E-02 7.29E-02 2.42E-01 8.64E-30 7.72E-08 3.84E-07 2.10E-06
HARRIS 3.51E-03 3.26E-01 3.08E-01 9.48E-01 1.89E-34 1.33E-16 4.96E-16 2.37E-15
MANTA 3.08E-10 3.65E-07 7.79E-07 3.91E-06 1.60E-59 2.27E-48 6.55E-48 2.99E-47
PRO 3.61E-15 8.88E-09 4.78E-08 2.62E-07 3.41E-71 1.12E-59 4.23E-59 2.13E-58
REPTILE 0.00E?00 3.16E-11 1.61E-10 8.80E-10 0.00E?00 2.01E-37 1.10E-36 6.04E-36
RUNGE 1.17E-01 5.37E-01 3.57E-01 1.53E?00 N/A N/A N/A N/A
SNAKE 2.27E?01 1.11E?02 7.95E?01 3.18E?02 2.81E-06 N/A N/A N/A


Table 23 Ranking points of the compared algorithms for 1000D test functions relying on the best results of the predictions
AFRICAN AQUILA BARNA EQUIL GRAD HARRIS MANTA PRO REPTILE RUNGE SNAKE

f1 1 7 5 11 5 8 1 1 1 9 10
f2 1 1 1 11 1 1 1 1 1 9 10
f3 1 1 1 11 1 1 1 1 1 9 10
f4 7 5 4 11 8 10 3 2 1 9 6
f5 4 6 5 11 7 8 3 2 1 10 9
f6 4 7 5 11 6 8 3 2 1 9 10
f7 4 7 5 11 6 8 3 2 1 9 10
f8 1 1 1 11 1 1 1 1 1 10 9
f9 1 1 7 11 1 1 8 9 10 6 1
f10 4 7 5 11 6 8 3 2 1 9 10
f11 1 1 1 11 1 1 1 1 1 9 10
f12 3 9 5 8 7 6 4 2 1 11 10
f13 1 1 1 10 4 1 1 1 1 11 8
f14 5 1 9 11 3 4 8 10 7 6 2
f15 1 1 1 11 1 8 1 1 1 10 9
f16 5 1 9 11 3 2 8 10 7 6 4
f17 5 6 4 11 2 3 7 10 8 9 1
f18 5 2 8 11 1 4 7 10 9 6 3
Average point 3.00 3.61 4.27 10.78 3.55 4.61 3.57 3.77 3.00 8.72 7.33
Ranking point 1 5 7 11 3 8 4 6 1 10 9
f19 4 7 5 11 6 8 3 2 1 9 10
f20 4 7 5 11 6 8 3 2 1 9 10
f21 4 8 5 9 7 6 3 2 1 11 10
f22 5 7 4 11 6 8 3 2 1 9 10
f23 4 7 5 11 6 8 3 2 1 9 10
f24 1 1 1 11 1 1 1 1 1 9 10
f25 5 3 8 11 1 2 7 8 8 6 4
f26 5 8 4 11 6 7 3 2 1 9 10
f27 5 1 8 11 3 2 4 9 6 7 10
f28 3 1 7 11 4 2 6 8 9 10 5
f29 4 6 7 11 5 8 3 2 1 10 9
f30 4 7 5 11 6 8 3 2 1 9 10
f31 5 4 8 11 2 3 7 9 10 6 1
f32 4 8 5 11 6 7 3 2 1 9 10
f33 4 8 5 11 6 7 3 2 1 9 10
f34 4 8 5 9 7 6 3 2 1 11 10
Average point 4.06 5.68 5.43 10.75 4.87 5.69 3.62 3.56 2.81 8.87 8.68
Ranking point 4 7 6 11 5 8 3 2 1 10 9
Overall point 3.49 4.58 4.81 10.76 4.17 5.11 3.59 3.67 2.91 8.79 7.96
Overall ranking point 2 6 7 11 5 8 3 4 1 10 9


Table 24 Prediction performances of the algorithms based on the ranking points obtained for mean results of 1000D test functions
AFRICAN AQUILA BARNA EQUIL GRAD HARRIS MANTA PRO REPTILE RUNGE SNAKE

f1 6 7 5 11 4 8 1 2 3 9 10
f2 6 7 1 11 1 8 1 1 1 9 10
f3 6 7 1 11 1 8 1 1 1 9 10
f4 8 11 3 10 6 9 2 1 4 7 5
f5 5 7 3 11 8 6 4 1 2 9 10
f6 7 9 5 11 4 6 3 1 2 8 10
f7 6 5 4 11 3 7 2 1 9 8 10
f8 7 5 4 10 6 8 1 1 1 11 9
f9 5 1 10 11 1 1 7 8 9 6 1
f10 5 6 7 11 4 8 3 2 1 9 10
f11 8 6 5 11 4 7 1 1 1 9 10
f12 5 9 4 6 7 8 3 1 2 10 11
f13 6 3 1 11 9 5 4 2 8 9 8
f14 4 1 9 11 2 3 7 10 8 5 6
f15 6 4 3 11 7 5 9 1 2 10 8
f16 4 1 8 11 2 3 7 9 10 6 5
f17 6 7 3 11 2 4 5 9 8 10 1
f18 4 3 8 11 1 2 7 10 9 5 6
Average point 5.77 5.50 4.66 10.61 4.00 5.88 3.77 3.44 4.50 8.27 9.43
Ranking point 7 6 6 11 3 8 2 1 4 9 10
f19 6 7 5 11 4 8 3 1 2 9 10
f20 5 7 6 11 4 8 3 1 2 9 10
f21 9 7 4 8 6 5 2 1 3 10 11
f22 6 7 5 11 4 8 3 1 2 9 10
f23 6 7 5 11 4 8 3 1 2 9 10
f24 1 7 1 11 8 1 1 1 1 9 10
f25 5 3 8 11 1 2 7 9 10 6 4
f26 6 8 5 11 4 7 3 2 1 9 10
f27 5 2 7 11 3 4 1 8 6 9 10
f28 4 2 6 11 3 1 5 7 8 9 10
f29 8 6 5 11 4 7 3 1 2 9 10
f30 7 6 5 11 4 8 2 1 3 9 10
f31 4 3 8 11 1 2 7 9 10 6 5
f32 6 7 5 11 4 8 3 2 1 9 10
f33 4 9 5 11 6 7 3 2 1 8 10
f34 9 6 4 7 8 5 2 1 3 10 11
Average point 5.68 5.87 5.25 10.56 4.25 5.56 3.18 3.00 3.56 8.68 9.4
Ranking point 7 8 5 11 4 6 2 1 3 9 10
Overall point 5.72 5.67 4.93 10.58 4.11 5.73 3.49 3.23 4.05 8.46 9.41
Overall ranking point 7 6 5 11 4 8 2 1 3 9 10

The general trend in the evolution of the convergence curves is based on sudden and rapid changes in the early phases of the iterations, which are controlled by the search schemes responsible for the exploration mechanism. Next, the variational declines in the fitness values are administered by the search mechanism of the exploitation phase, in which the search agents focus on the promising areas meticulously located in the previous phase. In order to deeply analyze the convergence performance of these eleven metaheuristic algorithms, the decreases in the fitness values are plotted against the increasing number of iterations for the 30D unimodal and multimodal test problems, as depicted in Figs. 3, 4, 5, 6, 7, 8. The convergence curves for the compared algorithms are illustrated in these figures, which are obtained for mean


Fig. 1 Elapsed runtimes of the algorithms for 30D multimodal test functions

Fig. 2 Computational runtimes of compared algorithms for 30D unimodal test problems

values of 30 independent algorithm runs and 1000 function evaluations. Figures 3, 4, 5 depict the convergence curves plotted for the 30D multimodal test functions by the eleven metaheuristic optimizers. The compared algorithms show different convergence behaviors for different test functions. The REPTILE algorithm shows gradual decreases in the earliest phases and completes the iterations with rapid and sudden declines for the multimodal test functions f1, f2, f3, f5, f6, f7, f8, f10, f11, f12, and f15. This convergence behavior is the direct result of the influences of the search equations


Fig. 3 Evolution histories of the 30D multimodal problems from f1 – Ackley to f6- Alpine

Fig. 4 Evolution plots of the compared algorithms for 30D multimodal test functions from f7 -Csendes to f12 -Yang1

associated with the exploration mechanism in the early stages, which is followed by intensification on the fruitful regions discovered in the preceding iterations. As can be noticed from the search equations of the REPTILE algorithm, the responsible search agents tend to probe the domain around the so-far-obtained best solutions, emphasizing exploitation rather than exploration of the unvisited regions of the search space. This tendency gives rise to an acceleration in the general convergence speed of the algorithm and results in a quick arrival at the global optimum point. However, this behavior may not be conducive, and can even be detrimental, for some test instances, as seen from the evolution plots obtained for the f14, f16, f17, and f18 functions. The solution space of these benchmark problems


Fig. 5 Convergence histories of the competitive algorithms for 30D multimodal test functions from f13-Yang4 to f18-Levy

Fig. 6 Iterative evolution of the fitness function values for 30D unimodal benchmark functions from f19-Sphere to f24-Dropwave

needs more inquisitive exploration instead of consistent intensification of the promising regions, which explains the relatively unsuccessful prediction performance of REPTILE for these test functions. The remaining algorithms perform stepwise and gradual decreases throughout the iteration process for most of the multimodal test functions, which indicates that the exploration mechanism is more dominant and prevalent for these algorithms. Too much emphasis on probing around the feasible regions of the search space not only entails redundant diversification over the solution space, disregarding the necessary intensification when it is needed, but also consumes an excessive amount of computational resources, yielding longer than expected algorithm runtimes. If we are to


Fig. 7 Convergence plots for 30D unimodal test problems from f25-Rosenbrock to f30-Schwefel 2.23

Fig. 8 Evolution histories of the objective function values for test problems from f31-Schwefel 2.25 to f34-Powell

summarize the general convergence behavior of the algorithms for the multimodal test functions in a nutshell, when the overall convergence performance is averaged over the eighteen test functions, it can be conveniently stated that all compared algorithms can capitalize on the promising search regions previously explored in the early iterations to effectively pinpoint the exact locations of the optimum solution of the problem in most of the cases.

Similar search tendencies are also observed for the 30D unimodal test functions, whose convergence graphs are visualized in Figs. 6, 7, 8. The REPTILE algorithm is again able to superiorly maintain a proper balance between


Table 25 Error comparison of the algorithms with respect to the statistical results of CEC-2013 test functions
CEC 01 CEC 02
Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN - 1.40E?03 - 1.40E?03 2.89E-07 - 1.40E?03 4.13E?06 1.12E?07 6.65E?06 4.02E?07


AQUILA - 1.39E?03 - 1.38E?03 9.56E?00 - 1.35E?03 2.68E?07 5.64E?07 1.63E?07 9.18E?07
BARNA 2.54E?04 3.28E?04 4.26E?03 4.14E?04 2.68E?08 4.73E?08 1.66E?08 9.73E?08
EQUIL - 1.40E?03 - 1.40E?03 2.63E-11 - 1.40E?03 2.49E?06 5.18E?06 1.45E?06 8.22E?06
GRAD 3.89E?02 3.49E?03 2.45E?03 9.26E?03 6.04E?07 1.66E?08 6.27E?07 3.19E?08
HARRIS - 1.40E?03 - 1.40E?03 9.92E-01 - 1.39E?03 1.35E?07 5.54E?07 5.80E?07 2.94E?08
MANTA - 1.40E?03 - 1.40E?03 9.06E-13 - 1.40E?03 3.72E?05 1.34E?06 6.31E?05 2.98E?06
PRO 3.49E?04 5.94E?04 9.53E?03 6.82E?04 2.21E?08 7.46E?08 3.27E?08 1.91E?09
REPTILE 1.88E?04 3.33E?04 9.49E?03 5.54E?04 1.43E?08 4.20E?08 1.47E?08 8.56E?08
RUNGE - 1.40E?03 - 1.40E?03 4.93E-04 - 1.40E?03 8.33E?05 3.04E?06 9.77E?05 5.26E?06
SNAKE 2.00E?04 3.04E?04 3.72E?03 3.74E?04 1.61E?08 5.09E?08 2.12E?08 9.47E?08

CEC 03 CEC 04
Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 2.49E?07 8.85E?08 8.19E?08 2.95E?09 5.02E?04 5.66E?04 3.41E?03 6.50E?04


AQUILA 1.28E?10 3.23E?10 1.79E?10 8.83E?10 3.64E?04 4.85E?04 4.39E?03 5.64E?04
BARNA 4.95E?12 7.43E?14 2.03E?15 1.08E?16 4.48E?04 5.63E?04 3.88E?03 6.31E?04
EQUIL 8.60E?06 5.33E?08 4.97E?08 1.75E?09 1.27E?04 2.17E?04 4.60E?03 3.53E?04
GRAD 2.90E?10 2.38E?13 9.50E?13 5.02E?14 3.38E?04 5.02E?04 7.56E?03 6.34E?04
HARRIS 2.32E?09 2.14E?13 1.17E?14 6.41E?14 2.39E?04 3.39E?04 5.26E?03 4.60E?04
MANTA 5.95E?06 3.59E?08 5.06E?08 2.49E?09 3.05E?03 8.00E?03 3.54E?03 1.63E?04
PRO 5.04E?10 2.79E?19 1.52E?20 8.32E?20 5.12E?04 6.43E?04 6.31E?03 7.86E?04
REPTILE 9.68E?10 5.97E?14 2.38E?15 1.31E?16 4.69E?04 5.68E?04 5.07E?03 6.78E?04
RUNGE 4.04E?07 1.77E?09 2.40E?09 1.09E?10 - 8.42E?02 6.24E?01 8.03E?02 3.17E?03
SNAKE 1.18E?11 5.33E?15 1.41E?16 5.46E?16 5.30E?04 6.56E?04 4.17E?03 6.92E?04

CEC 05 CEC 06
Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN - 1.00E?03 - 1.00E?03 1.03E-03 - 1.00E?03 - 8.93E?02 - 8.35E?02 2.75E?01 - 7.71E?02


AQUILA - 8.31E?02 - 6.35E?02 1.49E?02 - 1.51E?02 - 8.13E?02 - 7.08E?02 6.53E?01 - 5.51E?02
BARNA 6.12E?03 1.72E?04 5.56E?03 2.60E?04 1.43E?03 3.78E?03 1.20E?03 6.75E?03
EQUIL - 1.00E?03 - 1.00E?03 1.87E-07 - 1.00E?03 - 8.84E?02 - 8.49E?02 2.43E?01 - 8.19E?02
GRAD 7.00E?02 3.01E?03 1.62E?03 6.10E?03 - 4.74E?02 2.39E?02 8.17E?02 3.26E?03
HARRIS - 9.98E?02 - 9.96E?02 1.35E?00 - 9.93E?02 - 8.33E?02 - 5.49E?02 7.64E?02 2.48E?03
MANTA - 1.00E?03 - 1.00E?03 5.74E-13 - 1.00E?03 - 8.85E?02 - 8.52E?02 2.95E?01 - 8.06E?02
PRO 3.23E?04 1.23E?05 4.99E?04 2.01E?05 4.88E?03 1.52E?04 5.50E?03 2.53E?04
REPTILE 8.08E?03 4.85E?04 4.62E?04 1.50E?05 1.31E?03 3.46E?03 1.30E?03 6.29E?03
RUNGE - 1.00E?03 - 1.00E?03 1.43E-03 - 1.00E?03 - 8.84E?02 - 8.24E?02 3.77E?01 - 7.59E?02
SNAKE 8.50E?03 2.98E?04 1.14E?04 5.89E?04 1.71E?03 4.56E?03 1.51E?03 7.12E?03

CEC 07 CEC 08
Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN - 7.31E?02 - 7.13E?02 2.11E?01 - 6.41E?02 - 6.79E?02 - 6.79E?02 7.90E-02 - 6.79E?02


AQUILA - 6.12E?02 9.12E?03 1.61E?04 6.35E?04 - 6.79E?02 - 6.79E?02 5.00E-02 - 6.79E?02
BARNA - 3.36E?02 2.97E?04 4.43E?04 1.66E?05 - 6.79E?02 - 6.79E?02 7.03E-02 - 6.79E?02
EQUIL - 7.85E?02 - 7.49E?02 1.89E?01 - 7.12E?02 - 6.79E?02 - 6.79E?02 4.22E-02 - 6.79E?02
GRAD - 6.04E?02 5.91E?03 1.52E?04 7.78E?04 - 6.79E?02 - 6.79E?02 7.11E-02 - 6.79E?02
HARRIS - 6.60E?02 2.93E?03 1.12E?04 6.00E?04 - 6.79E?02 - 6.79E?02 6.02E-02 - 6.79E?02
MANTA - 7.36E?02 - 6.95E?02 2.28E?01 - 6.36E?02 - 6.79E?02 - 6.79E?02 5.08E-02 - 6.79E?02
PRO - 3.43E?02 2.24E?06 1.01E?07 5.54E?07 - 6.79E?02 - 6.79E?02 5.66E-02 - 6.79E?02
REPTILE - 2.80E?02 1.74E?04 1.79E?04 6.26E?04 - 6.79E?02 - 6.79E?02 4.39E-02 - 6.79E?02
RUNGE - 7.37E?02 - 5.60E?02 1.97E?02 7.21E?00 - 6.79E?02 - 6.79E?02 3.68E-02 - 6.79E?02
SNAKE - 5.71E?02 6.69E?04 1.59E?05 8.00E?05 - 6.79E?02 - 6.79E?02 5.54E-02 - 6.79E?02


Table 26 Comparison of the error analysis results obtained for CEC-2013 test problems
CEC 09 CEC 10
Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN - 5.82E?02 - 5.73E?02 4.55E?00 - 5.61E?02 - 4.99E?02 - 4.98E?02 1.78E?00 - 4.92E?02


AQUILA - 5.70E?02 - 5.61E?02 3.39E?00 - 5.56E?02 - 4.31E?02 - 3.15E?02 9.46E?01 - 8.20E?01
BARNA - 5.63E?02 - 5.58E?02 1.54E?00 - 5.55E?02 2.86E?03 3.82E?03 6.45E?02 5.30E?03
EQUIL - 5.83E?02 - 5.76E?02 4.38E?00 - 5.67E?02 - 4.99E?02 - 4.99E?02 2.13E-01 - 4.98E?02
GRAD - 5.64E?02 - 5.61E?02 2.16E?00 - 5.57E?02 1.81E?02 9.57E?02 5.12E?02 2.14E?03
HARRIS - 5.71E?02 - 5.63E?02 3.27E?00 - 5.58E?02 - 4.91E?02 - 4.36E?02 1.63E?02 3.79E?02
MANTA - 5.78E?02 - 5.70E?02 4.37E?00 - 5.62E?02 - 4.99E?02 - 4.99E?02 1.46E-05 - 4.99E?02
PRO - 5.62E?02 - 5.59E?02 1.43E?00 - 5.56E?02 4.24E?03 9.97E?03 2.96E?03 1.49E?04
REPTILE - 5.63E?02 - 5.60E?02 1.47E?00 - 5.57E?02 1.23E?03 4.40E?03 2.26E?03 1.45E?04
RUNGE - 5.75E?02 - 5.68E?02 2.78E?00 - 5.62E?02 - 4.99E?02 - 4.99E?02 1.49E-02 - 4.99E?02
SNAKE - 5.62E?02 - 5.58E?02 1.78E?00 - 5.56E?02 2.26E?03 3.99E?03 1.03E?03 6.26E?03
CEC 11 CEC 12
Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN - 3.53E?02 - 3.10E?02 2.50E?01 - 2.41E?02 - 2.07E?02 - 1.36E?02 3.75E?01 - 7.41E?01


AQUILA - 7.60E?01 2.25E?01 5.72E?01 1.58E?02 1.55E?02 3.38E?02 1.14E?02 6.18E?02
BARNA 9.21E?01 1.94E?02 4.54E?01 2.64E?02 1.63E?02 2.68E?02 4.64E?01 3.34E?02
EQUIL - 3.68E?02 - 3.26E?02 3.35E?01 - 2.42E?02 - 2.49E?02 - 2.16E?02 2.53E?01 - 1.49E?02
GRAD - 1.21E?01 1.86E?02 1.08E?02 4.98E?02 6.23E?01 2.61E?02 9.35E?01 5.00E?02
HARRIS - 1.71E?02 - 4.65E?01 7.33E?01 1.34E?02 7.67E?01 2.91E?02 1.19E?02 4.69E?02
MANTA - 2.51E?02 - 1.62E?02 6.95E?01 1.59E?01 - 1.99E?02 1.71E?01 1.26E?02 3.19E?02
PRO 4.55E?02 6.84E?02 1.19E?02 8.66E?02 4.35E?02 7.64E?02 1.53E?02 9.52E?02
REPTILE - 1.36E?01 1.38E?02 7.36E?01 3.56E?02 1.30E?02 2.69E?02 7.61E?01 4.19E?02
RUNGE - 2.62E?02 - 4.77E?01 7.81E?01 8.95E?01 - 1.42E?02 3.66E?01 7.60E?01 1.90E?02
SNAKE 8.60E?01 2.90E?02 9.90E?01 4.84E?02 2.61E?02 3.98E?02 9.61E?01 5.57E?02

CEC 13 CEC 14
Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN - 2.49E?01 5.58E?01 4.25E?01 1.23E?02 9.50E?02 2.14E?03 6.67E?02 3.78E?03


AQUILA 3.32E?02 4.94E?02 1.14E?02 6.98E?02 2.64E?03 4.05E?03 6.42E?02 5.10E?03
BARNA 2.09E?02 3.50E?02 4.96E?01 4.27E?02 7.18E?03 7.75E?03 3.19E?02 8.38E?03
EQUIL - 1.07E?02 - 4.79E?01 3.34E?01 7.98E?00 8.83E?02 1.94E?03 5.93E?02 3.32E?03
GRAD 1.53E?02 3.40E?02 1.03E?02 6.12E?02 5.50E?03 6.46E?03 6.32E?02 7.73E?03
HARRIS 2.64E?02 4.76E?02 1.24E?02 6.99E?02 2.20E?03 3.31E?03 6.89E?02 4.74E?03
MANTA 3.82E?01 1.31E?02 7.00E?01 3.15E?02 1.61E?03 2.89E?03 6.21E?02 4.44E?03
PRO 3.96E?02 8.15E?02 1.74E?02 1.04E?03 5.98E?03 8.13E?03 5.02E?02 8.89E?03
REPTILE 1.97E?02 3.55E?02 7.66E?01 5.29E?02 5.68E?03 6.99E?03 5.06E?02 8.06E?03
RUNGE 5.67E?01 2.40E?02 8.79E?01 4.36E?02 1.87E?03 2.70E?03 4.86E?02 3.61E?03
SNAKE 3.01E?02 4.82E?02 1.03E?02 7.04E?02 6.06E?03 7.59E?03 5.69E?02 8.71E?03

CEC 15 CEC 16
Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 3.26E?03 4.26E?03 5.00E?02 5.19E?03 2.00E?02 2.01E?02 5.13E-01 2.02E?02


AQUILA 3.36E?03 4.80E?03 7.92E?02 6.33E?03 2.01E?02 2.02E?02 6.13E-01 2.03E?02
BARNA 6.75E?03 7.65E?03 3.94E?02 8.22E?03 2.02E?02 2.03E?02 4.06E-01 2.04E?02
EQUIL 3.15E?03 4.26E?03 6.66E?02 5.45E?03 2.01E?02 2.01E?02 3.69E-01 2.02E?02
GRAD 4.74E?03 6.65E?03 7.41E?02 7.88E?03 2.01E?02 2.02E?02 2.73E-01 2.02E?02


Table 26 (continued)
CEC 15 CEC 16
Best Mean Std Dev Worst Best Mean Std Dev Worst

HARRIS 3.88E?03 4.83E?03 5.64E?02 6.16E?03 2.01E?02 2.02E?02 3.25E-01 2.03E?02


MANTA 3.28E?03 4.29E?03 6.42E?02 5.84E?03 2.01E?02 2.02E?02 4.71E-01 2.03E?02
PRO 6.79E?03 8.06E?03 4.34E?02 8.76E?03 2.02E?02 2.03E?02 3.57E-01 2.04E?02
REPTILE 5.64E?03 7.19E?03 6.70E?02 8.29E?03 2.02E?02 2.03E?02 4.78E-01 2.03E?02
RUNGE 3.00E?03 4.17E?03 9.91E?02 6.96E?03 2.01E?02 2.03E?02 4.08E-01 2.03E?02
SNAKE 6.58E?03 7.85E?03 5.12E?02 8.64E?03 2.02E?02 2.03E?02 4.95E-01 2.04E?02

exploration and exploitation phases for the standard unimodal problems, which indicates that it is capable of eliminating the local pitfalls located in the search domain and obtaining the best estimations for the unimodal test functions f19, f20, f21, f22, f23, f24, f26, f29, f30, f33, and f34. This algorithm continues to explore the search domain until the point where the maximum number of iterations is reached. Nearly all algorithms have a suitable convergence speed for the unimodal test problems; however, they experience some difficulties in converging to the optimal solution of the f25-Rosenbrock function, except for the GRAD algorithm. The Rosenbrock test function is a challenging unimodal benchmark case, frequently used for assessing the optimization capabilities of stochastic algorithms, whose global optimum resides in a long, narrow, parabolic-shaped valley that is very hard to locate by any type of optimization algorithm. None of the algorithms is able to converge to the global optimum of the f27-Dixon and Price test function, which is another compelling test case for the algorithms. Again, nearly all optimizers fail to arrive at the optimal solution of the f32-Schwefel 2.25 function and tend to be trapped in local solutions, except the GRAD algorithm, which shows consistent and stepwise decreases throughout the iterations. All compared algorithms prematurely converged to one of the many local optimum points of the f28-Trid function, showing no clear sign of balance between the complementary exploration and exploitation phases.

4.6 Performance assessment on CEC-2013 benchmark problems

This section comparatively investigates the performance of the eleven metaheuristic algorithms by evaluating their prediction accuracies on a test suite of twenty-eight thirty-dimensional benchmark functions employed in the CEC-2013 competition. Multidimensional test functions taking place in CEC competitions are artificially generated benchmark cases composed of shifted, rotated, highly multimodal, and discontinuous benchmark problems that are most likely to simulate the challenges and difficulties posed by real-world optimization problems. They have been consistently employed by metaheuristic algorithm developers to assess their proposed optimizers. When the mathematical characteristics of these test functions are carefully examined in detail, the problems belonging to the test suites used in the competitions organized in 2013, 2014, and 2015 are very much alike; only negligible functional nuances make the difference between them. Test functions utilized in events occurring after 2017 consist of multi-objective or constrained benchmark cases, which are not the main concern of this section dealing with unconstrained optimization problems. Therefore, the authors consider the twenty-eight CEC-2013 test functions for the performance evaluation of the compared algorithms.

Similar to the previous cases, exhaustive comparisons between the competitive algorithms have been realized by recording the error results of the predictions in terms of the best, mean, worst, and standard deviation values for each benchmark function in the suite. The population size of each algorithm is set to N = 20, and a total of 3000 iterations is performed for each algorithm on each test function. Statistical results are obtained after 50 independent algorithm runs. For all problems, the 30D search space is restricted between the lower bound of -100 and the upper bound of 100. During the numerical experiments, the same algorithm parameters have been considered for the competitive algorithms as were previously utilized for the thirty-four unconstrained test functions. As mentioned, the competitive algorithms have been run many times, and their prediction capabilities on the twenty-eight test functions, composed of unimodal, multimodal, and composition cases, have been evaluated in terms of well-organized performance metrics. The optimization performance of the algorithms is compared with respect to the statistical results obtained after the consecutive algorithm runs. Furthermore, the Friedman mean ranks for each optimized test function are tabulated in the respective tables to decide the degree of statistical significance between the algorithms.
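As an illustration of how such mean ranks can be derived from the recorded results, the following MATLAB-style sketch computes Friedman-style average ranks from a matrix of mean errors; the random matrix is only a placeholder for the real 28-function-by-11-algorithm results, and ties are resolved by simple ordering rather than by averaged ranks.

% Sketch of the Friedman mean-rank computation (assumption: the random
% matrix stands in for the recorded mean errors; rows = test functions,
% columns = algorithms; lower error is better).
rng(1);
numFuns  = 28;                         % CEC-2013 test functions
numAlgos = 11;                         % compared metaheuristics
meanErr  = rand(numFuns, numAlgos);    % placeholder for the real results

ranks = zeros(numFuns, numAlgos);
for f = 1:numFuns
    [~, order] = sort(meanErr(f, :), 'ascend');   % best algorithm first
    r = zeros(1, numAlgos);
    r(order) = 1:numAlgos;                        % plain ranks (ties not averaged)
    ranks(f, :) = r;
end

friedmanMeanRank = mean(ranks, 1);     % average rank of each algorithm
[~, bestAlgo] = min(friedmanMeanRank); % smallest mean rank = best performer
fprintf('Best average Friedman rank: algorithm #%d (%.2f)\n', ...
        bestAlgo, friedmanMeanRank(bestAlgo));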


Table 27 Statistical results of the compared algorithms for CEC 2013 test problems
Problem CEC 17 CEC 18
Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 3.80E?02 4.25E?02 2.09E?01 4.79E?02 4.84E?02 5.25E?02 2.67E?01 6.24E?02


AQUILA 6.15E?02 7.43E?02 7.04E?01 9.34E?02 6.66E?02 8.35E?02 6.95E?01 1.04E?03
BARNA 8.69E?02 9.47E?02 5.01E?01 1.10E?03 9.41E?02 1.03E?03 5.35E?01 1.11E?03
EQUIL 3.53E?02 4.21E?02 4.13E?01 5.24E?02 4.72E?02 5.13E?02 2.71E?01 5.69E?02
GRAD 7.79E?02 9.33E?02 8.34E?01 1.08E?03 8.73E?02 1.05E?03 1.05E?02 1.23E?03
HARRIS 6.91E?02 8.40E?02 6.05E?01 9.90E?02 7.68E?02 9.32E?02 7.92E?01 1.07E?03
MANTA 4.87E?02 6.82E?02 1.11E?02 9.72E?02 6.19E?02 7.87E?02 1.22E?02 1.15E?03
PRO 1.05E?03 1.28E?03 7.70E?01 1.39E?03 1.21E?03 1.41E?03 5.62E?01 1.48E?03
REPTILE 7.34E?02 9.11E?02 8.68E?01 1.09E?03 8.07E?02 1.02E?03 1.11E?02 1.36E?03
RUNGE 5.16E?02 7.26E?02 1.21E?02 9.42E?02 7.18E?02 8.21E?02 7.62E?01 9.81E?02
SNAKE 9.09E?02 1.14E?03 7.69E?01 1.25E?03 1.06E?03 1.21E?03 7.38E?01 1.32E?03
CEC 19 CEC 20
Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 5.03E?02 5.07E?02 2.58E?00 5.17E?02 6.11E?02 6.12E?02 2.32E-01 6.12E?02


AQUILA 5.27E?02 5.41E?02 9.54E?00 5.68E?02 6.12E?02 6.12E?02 2.75E-01 6.13E?02
BARNA 2.43E?04 9.60E?04 6.24E?04 3.36E?05 6.12E?02 6.12E?02 2.71E-01 6.13E?02
EQUIL 5.03E?02 5.09E?02 4.16E?00 5.23E?02 6.08E?02 6.09E?02 5.10E-01 6.10E?02
GRAD 1.17E?03 1.69E?04 2.65E?04 1.35E?05 6.11E?02 6.12E?02 3.77E-01 6.13E?02
HARRIS 5.26E?02 5.42E?02 1.05E?01 5.77E?02 6.11E?02 6.12E?02 2.74E-01 6.12E?02
MANTA 5.06E?02 5.17E?02 5.76E?00 5.31E?02 6.10E?02 6.11E?02 5.41E-01 6.12E?02
PRO 7.76E?05 1.44E?06 3.56E?05 1.97E?06 6.12E?02 6.13E?02 2.62E-01 6.13E?02
REPTILE 8.53E?03 3.25E?05 3.30E?05 1.53E?06 6.11E?02 6.12E?02 5.46E-01 6.13E?02
RUNGE 5.34E?02 5.57E?02 1.42E?01 5.83E?02 6.10E?02 6.11E?02 5.44E-01 6.12E?02
SNAKE 8.53E?04 2.79E?05 1.25E?05 5.15E?05 6.12E?02 6.13E?02 2.91E-01 6.13E?02

CEC 21 CEC 22
Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 1.00E?03 1.07E?03 4.66E?01 1.10E?03 1.64E?03 2.94E?03 6.43E?02 4.35E?03


AQUILA 1.04E?03 1.08E?03 2.31E?01 1.11E?03 3.84E?03 4.97E?03 6.97E?02 6.30E?03
BARNA 2.76E?03 2.90E?03 7.07E?01 3.05E?03 7.93E?03 8.71E?03 3.83E?02 9.34E?03
EQUIL 1.00E?03 1.02E?03 4.30E?01 1.10E?03 2.05E?03 3.05E?03 6.06E?02 4.18E?03
GRAD 1.48E?03 2.13E?03 4.43E?02 2.81E?03 5.49E?03 7.25E?03 7.59E?02 8.50E?03
HARRIS 1.02E?03 1.06E?03 3.74E?01 1.10E?03 2.82E?03 4.47E?03 8.31E?02 6.26E?03
MANTA 8.00E?02 1.00E?03 8.09E?01 1.10E?03 2.88E?03 4.00E?03 7.15E?02 5.97E?03
PRO 3.31E?03 3.43E?03 4.83E?01 3.47E?03 7.97E?03 9.07E?03 4.73E?02 1.02E?04
REPTILE 2.82E?03 3.03E?03 1.33E?02 3.27E?03 6.72E?03 7.98E?03 5.67E?02 8.83E?03
RUNGE 1.00E?03 1.08E?03 4.04E?01 1.10E?03 2.87E?03 3.73E?03 4.72E?02 5.22E?03
SNAKE 2.77E?03 2.94E?03 8.22E?01 3.12E?03 7.46E?03 8.60E?03 5.11E?02 9.64E?03

CEC 23 CEC 24
Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 3.90E?03 5.60E?03 7.26E?02 7.02E?03 1.24E?03 1.28E?03 1.83E?01 1.31E?03


AQUILA 4.92E?03 6.12E?03 5.98E?02 7.37E?03 1.32E?03 1.36E?03 7.26E?01 1.65E?03
BARNA 7.81E?03 8.57E?03 4.31E?02 9.33E?03 1.34E?03 1.35E?03 1.00E?01 1.37E?03
EQUIL 3.80E?03 4.93E?03 6.42E?02 6.13E?03 1.22E?03 1.25E?03 1.77E?01 1.29E?03
GRAD 6.46E?03 7.80E?03 6.66E?02 9.15E?03 1.30E?03 1.33E?03 1.60E?01 1.36E?03


Table 27 (continued)
CEC 23 CEC 24
Best Mean Std Dev Worst Best Mean Std Dev Worst

HARRIS 4.47E?03 5.78E?03 6.62E?02 6.89E?03 1.30E?03 1.33E?03 1.52E?01 1.37E?03


MANTA 3.83E?03 4.95E?03 6.11E?02 6.37E?03 1.26E?03 1.29E?03 1.43E?01 1.32E?03
PRO 8.42E?03 9.03E?03 3.31E?02 9.93E?03 1.32E?03 1.39E?03 8.15E?01 1.74E?03
REPTILE 6.61E?03 7.94E?03 7.18E?02 9.01E?03 1.33E?03 1.36E?03 1.47E?01 1.39E?03
RUNGE 3.23E?03 4.65E?03 7.43E?02 6.21E?03 1.27E?03 1.29E?03 1.26E?01 1.31E?03
SNAKE 7.84E?03 8.84E?03 3.73E?02 9.46E?03 1.31E?03 1.34E?03 1.70E?01 1.38E?03

Fig. 9 Convergence graphs for CEC-2013 test problems from first to sixth test instances

Similarly, the convergence tendencies of the algorithms have been evaluated and compared through the evolution curves of the objective functions visualized in descriptive graphs. First and foremost, the optimization abilities of the compared methods have been evaluated by performing 50 runs on the unimodal test functions, which are famous for having only one minimum or maximum point residing in a relatively large search domain. A successful algorithm is one having the ability to circumvent the trapping local extremum points and arrive at the global optimum point. The number of potential local optimum points dramatically increases with increasing problem dimensionality, which makes these functions appropriate test beds for evaluating the exploration capabilities of the algorithms. According to the comparative optimum results obtained for the unimodal test functions from CEC-01 to CEC-05 given in Table 25, MANTA achieves the best predictions, while the RUNGE algorithm occupies the second-best seat. MANTA obtains the best optimum solutions for four out of five unimodal test functions and becomes the best performer for the unimodal functions. When the general performance analysis of the compared algorithms is founded upon the multimodal test functions, whose prediction results are given in tabular form in Tables 25, 26, 27, a clear dominance of the EQUIL algorithm is evident in most of the test instances, as it obtains the most accurate estimations in eleven out of fourteen benchmark cases and significantly surpasses the remaining algorithms in terms of solution accuracy. The AFRICAN algorithm attains the second-best predictions for the multimodal test functions. It is also


Fig. 10 Convergence histories of the algorithms for multimodal test problems employed in CEC 2013 competition

Fig. 11 Evolution histories of the compared algorithms for multimodal test function of CEC 2013 optimization benchmark problems

observed that the MANTA algorithm is the third-best method, providing closer results to the global optimum points of the test functions. The experimental results indicate that EQUIL can effectively elude the deceitful local stalls with a higher convergence rate, as seen from the evolution trends of the objective function values depicted in Figs. 9, 10, 11, 12. One can easily interpret from the figures that the vibrant fluctuations in the convergence curves are the direct outcome of the dominant exploration phase activated in the AFRICAN algorithm, in which the responsible search agents roam around the feasible domain to explore the unvisited regions, resulting in shock and sudden changes in the fitness values. It is also worth mentioning that the success of the EQUIL algorithm is a direct consequence of the balanced


Fig. 12 Comparison of the convergence performance of the algorithms based on the composition function employed in CEC 2013 competition

search mechanisms of exploration and exploitation. Feasible predictive solutions of the composite test functions are reported in Tables 27 and 28. Composite benchmark problems combine the functional characteristics of their forming sub-functions, whose successful and accurate solution requires a well-organized task division between the local and global search mechanisms. The predictive results for the composite functions show that EQUIL has the best quality of estimations, providing the lowest error results and obtaining the best answers for six out of nine problems. The AFRICAN and MANTA algorithms correspondingly share the second- and third-best seats for the composite test functions. When the convergence curves constructed for the composite test functions are carefully investigated in Figs. 12 and 13, the faster convergence to the optimum point of the EQUIL algorithm is valid in most of the problem instances, which also verifies the superiority of this algorithm in diversifying the candidate solutions as much as possible. It is also interesting to see the collapse of the REPTILE and PRO algorithms in all departments of the CEC test functions, although they are indisputably victorious optimizers for the standard unimodal and multimodal test problems. This failure can be attributed to the fact that the governing search equations of these two algorithms are mainly controlled by the so-far-obtained best solution throughout the iterations, which can sometimes overemphasize intensification on the regions where the current best search agents are located rather than probing around the unexplored areas in which promising answers may reside. In addition, the glamorous success of the EQUIL algorithm on the CEC 2013 test functions can be conceived as contradictive and confusing given its relatively poor performance on the standard unconstrained problems. The search equations of the EQUIL algorithm are designed and tailored in such an intrinsic way that, collaboratively as a whole, they put too much emphasis on exploration of undiscovered areas and tend to disregard intensification on the promising regions at certain stages of the iterations. This behavior results in a time-consuming search process accompanied by redundant memory usage, without showing any sign of proceeding to the optimal results, due to the unorganized balance between the active exploration and exploitation mechanisms. The average rankings of the algorithms according to the Friedman results have been reported and compared against each other in Table 29. A lower ranking point corresponds to a better optimization performance of the algorithm. The robustness and accuracy of the EQUIL algorithm are undeniably confirmed, since it attains the best average ranking point among the other algorithms, with the respective p-value of 5.91E-08.

5 Comprehensive benchmark analysis on real-world engineering problems

This section analyzes the optimization performances of the investigated metaheuristic optimizers by applying them to some selected constrained engineering design optimization


Table 28 Prediction accuracies of the eleven compared metaheuristic algorithms


CEC 25 CEC 26
Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 1.41E?03 1.42E?03 1.09E?01 1.45E?03 1.40E?03 1.42E?03 5.32E?01 1.56E?03


AQUILA 1.42E?03 1.47E?03 3.18E?01 1.56E?03 1.40E?03 1.56E?03 8.64E?01 1.61E?03
BARNA 1.44E?03 1.46E?03 1.47E?01 1.49E?03 1.43E?03 1.51E?03 7.68E?01 1.63E?03
EQUIL 1.34E?03 1.38E?03 1.59E?01 1.41E?03 1.40E?03 1.50E?03 7.02E?01 1.56E?03
GRAD 1.44E?03 1.46E?03 1.85E?01 1.51E?03 1.40E?03 1.51E?03 9.37E?01 1.61E?03
HARRIS 1.41E?03 1.46E?03 1.84E?01 1.49E?03 1.40E?03 1.49E?03 1.00E?02 1.61E?03
MANTA 1.39E?03 1.41E?03 1.32E?01 1.44E?03 1.40E?03 1.40E?03 2.17E-02 1.40E?03
PRO 1.44E?03 1.46E?03 1.44E?01 1.50E?03 1.41E?03 1.54E?03 7.33E?01 1.62E?03
REPTILE 1.47E?03 1.48E?03 8.61E?00 1.50E?03 1.42E?03 1.47E?03 2.75E?01 1.52E?03
RUNGE 1.40E?03 1.43E?03 1.23E?01 1.46E?03 1.40E?03 1.40E?03 1.66E-02 1.40E?03
SNAKE 1.42E?03 1.44E?03 1.28E?01 1.48E?03 1.42E?03 1.53E?03 8.40E?01 1.62E?03
CEC 27 CEC 28
Best Mean Std Dev Worst Best Mean Std Dev Worst

AFRICAN 2.09E?03 2.33E?03 1.26E?02 2.62E?03 1.50E?03 2.22E?03 9.27E?02 4.57E?03


AQUILA 2.56E?03 2.76E?03 1.12E?02 2.99E?03 5.45E?03 6.36E?03 4.29E?02 7.19E?03
BARNA 2.67E?03 2.79E?03 6.79E?01 2.93E?03 5.31E?03 5.80E?03 3.31E?02 6.40E?03
EQUIL 1.92E?03 2.12E?03 7.51E?01 2.25E?03 1.50E?03 1.78E?03 4.15E?02 3.34E?03
GRAD 2.57E?03 2.70E?03 8.26E?01 2.84E?03 4.16E?03 6.14E?03 7.02E?02 7.58E?03
HARRIS 2.54E?03 2.69E?03 9.23E?01 2.92E?03 4.14E?03 6.07E?03 5.83E?02 7.08E?03
MANTA 2.26E?03 2.48E?03 1.17E?02 2.70E?03 1.50E?03 3.91E?03 1.28E?03 5.34E?03
PRO 2.64E?03 2.78E?03 8.95E?01 3.00E?03 6.06E?03 8.51E?03 1.46E?03 1.07E?04
REPTILE 2.67E?03 2.79E?03 6.34E?01 2.91E?03 5.04E?03 6.04E?03 4.19E?02 6.90E?03
RUNGE 2.29E?03 2.43E?03 7.79E?01 2.62E?03 4.26E?03 4.86E?03 3.56E?02 5.90E?03
SNAKE 2.49E?03 2.75E?03 1.09E?02 2.93E?03 4.82E?03 6.19E?03 6.91E?02 7.70E?03

problems and comparing the outcomes. Fourteen different engineering design optimization problems have been selected with various decision variable, constraint, and objective function characteristics, and the feasible solutions obtained from the Runge Kutta Optimizer (RUNGE), Gradient-based Optimizer (GRAD), Poor and Rich Optimization Algorithm (PRO), Reptile Search Algorithm (REPTILE), Snake Optimizer (SNAKE), Equilibrium Optimizer (EQUIL), Manta Ray Foraging Optimization (MANTA), African Vultures Optimization Algorithm (AFRICAN), Aquila Optimizer (AQUILA), and Harris Hawks Optimization (HARRIS) are benchmarked against each other. The Barnacles Mating Optimizer (BARNA) has not been included in this comprehensive comparative investigation, as it is not able to find any feasible solutions during the consecutive and independent runs for each considered engineering design problem and collapses at certain stages of the iterations. There are many alternative constrained benchmark cases concerning real-world engineering problems available in the literature. The majority of the engineering design problems that have been frequently employed for benchmarking the optimization accuracy of newly developed algorithms were utilized in the CEC 2020 competition [118], which includes a set of fifty-seven challenging constrained engineering problems. These problems can capture a wide range of difficulties that are highly likely to be posed by the challenges of real-world problems. In the CEC'20 competition, state-of-the-art optimizers proposed by the participants were exhaustively tested on these synthetic benchmark cases. Another comprehensive reference providing an extensive database was compiled by Floudas et al. [92], reflecting their long-term efforts in designing challenging test instances composed of non-convex optimization problems with varying degrees of difficulty. After a comprehensive investigation of the past literature studies related to solving constrained optimization problems, the authors of this study chose the most widely employed fourteen complex engineering design

123
14342 Neural Computing and Applications (2023) 35:14275–14378

Fig. 13 Convergence graphs constructed for composition test functions employed in CEC 2013 competition

cases, which were also previously utilized in CEC'2020 [118] and Floudas et al. [92].
All the above-mentioned algorithms are developed and run in the MATLAB programming environment on the same personal computer that was utilized to accomplish the previous benchmarking studies. The algorithms have been independently run 50 times for each engineering design problem, and the outcomes have been statistically evaluated in terms of the best, worst, mean, and standard deviation of the acquired solutions.
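These four statistics can be collected with a few lines of array code. The sketch below is illustrative only (the experiments in this study were carried out in MATLAB), and the run results fed into it here are hypothetical values, not results from the paper.

import numpy as np

def summarize_runs(run_results):
    """Collapse the objective values of independent runs into the
    best, worst, mean, and standard deviation reported in the tables
    (minimization is assumed, so best = minimum)."""
    r = np.asarray(run_results, dtype=float)
    return {"best": r.min(), "worst": r.max(), "mean": r.mean(), "std": r.std()}

# Example with 50 hypothetical run outcomes for one problem
rng = np.random.default_rng(0)
print(summarize_runs(rng.normal(loc=1.72, scale=0.01, size=50)))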
The maximization problems have been transformed into minimization problems by multiplying the objective function by minus one. The equality constraints in the optimization problems have been converted into inequality constraints through $|h(x)| - \varepsilon \le 0$, in which $\varepsilon$ is taken as 1E-10. The Inverse Tangent Constraint Handling method [119] has been employed to deal with the constraints during the optimization process. An optimization problem can be mathematically represented as follows,

\arg\min f(\vec{x}), \quad \vec{x} \in S \subseteq \mathbb{R}^{D}
\text{subject to } g_i(\vec{x}) \le 0, \quad g_i : \mathbb{R}^{D} \rightarrow \mathbb{R}, \quad i = 1, 2, \ldots, k        (122)

where f(x) is the objective function, $\vec{x}$ is the vector of design variables, D is the number of dimensions, S is the search space, and g(x) denotes the inequality constraints. In the Inverse Tangent Constraint Handling method, the infeasible regions in the search space are disregarded through the following adjustment made to the objective function,

\min f(\vec{x}) =
\begin{cases}
h_{\max}(\vec{x}) & \text{if } h_{\max}(\vec{x}) > 0 \\
\arctan[f(\vec{x})] - \dfrac{\pi}{2} & \text{otherwise}
\end{cases}        (123)

where

h_{\max}(\vec{x}) = \max[h_1(\vec{x}), h_2(\vec{x}), \ldots, h_k(\vec{x})]        (124)

where atan() is the arctangent function.
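A minimal sketch of the handling rule in Eqs. (123)-(124) is given below. It assumes the constraints are supplied in the g(x) <= 0 form of Eq. (122), so that positive values measure violation; because arctan(f) - pi/2 is always negative, any feasible candidate automatically ranks better than any infeasible one, without penalty coefficients.

import math

def inverse_tangent_fitness(objective, constraints, x):
    """Penalty-free fitness of Eqs. (123)-(124).

    objective   : callable returning f(x)
    constraints : iterable of callables g_i, each satisfied when g_i(x) <= 0
    """
    h_max = max(g(x) for g in constraints)        # Eq. (124): largest violation
    if h_max > 0:                                  # infeasible: rank by violation
        return h_max
    return math.atan(objective(x)) - math.pi / 2   # feasible: always negative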
Table 30 reports the functional properties and the maximum number of function evaluations performed to solve each engineering design optimization problem. D represents the number of decision variable dimensions, gnum and hnum, respectively, represent the number of inequality and equality constraints in the problem, and NFE stands for the maximum number of function evaluations in Table 30. The total number of function evaluations assigned to a particular problem is directly associated with its functional complexity; that is, a design problem having a nonlinear objective function and a high number of imposed design constraints necessitates a high number of function evaluations to successfully reach its global optimum. Based on the functional characteristics and the degree of difficulty of each problem, a different number of function evaluations is therefore utilized. The authors consider a performance criterion similar to the one previously employed in their past effort on

Table 29 Performance rankings of the algorithms based on the mean deviation results of fifty independent runs
AFRICAN AQUILA BARNA EQUIL GRAD HARRIS MANTA PRO REPTILE RUNGE SNAKE

CEC-01 1 6 9 1 7 1 1 11 10 1 8
CEC-02 4 6 9 3 7 5 1 11 8 2 10
CEC-03 3 5 9 2 7 6 1 11 8 4 10
CEC-04 8 5 7 3 6 4 2 10 9 1 11
CEC-05 1 6 8 1 7 5 1 11 10 1 9
CEC-06 3 5 9 2 7 6 1 11 8 4 10
CEC-07 2 7 9 1 6 5 3 11 8 4 10
CEC-08 1 1 1 1 1 1 1 1 1 1 1
CEC-09 2 6 10 1 6 5 3 9 8 4 10
CEC-10 4 6 8 1 7 5 1 11 10 1 9
CEC-11 2 6 9 1 8 5 3 11 7 4 10
CEC-12 2 9 6 1 5 8 3 11 7 4 10
CEC-13 2 10 6 1 5 8 3 11 7 4 9
CEC-14 2 6 10 1 7 5 4 11 8 3 9
CEC-15 2 5 9 2 7 6 4 11 8 1 10
CEC-16 1 3 7 1 3 3 3 7 7 7 7
CEC-17 2 5 9 1 8 6 3 11 7 4 10
CEC-18 2 5 8 1 9 6 3 11 7 4 10
CEC-19 1 4 8 2 7 5 3 11 10 6 9
CEC-20 4 4 4 1 4 4 2 10 4 2 10
CEC-21 4 5 8 2 7 3 1 11 10 5 9
CEC-22 1 6 10 2 7 5 4 11 8 3 9
CEC-23 4 6 9 2 7 5 3 11 8 1 10
CEC-24 2 9 8 1 5 5 3 11 9 3 7
CEC-25 3 10 6 1 6 6 2 6 11 4 5
CEC-26 3 11 7 6 7 5 1 10 4 1 9
CEC-27 2 8 10 1 6 5 4 9 10 3 7
CEC-28 2 10 5 1 8 7 3 11 6 4 9
Average rank 2.5 6.25 7.78 1.57 6.32 5.0 2.39 10.0 7.78 3.07 8.82
Overall rank 3 6 8 1 7 5 2 11 8 4 10

a research study concerning metaheuristic algorithm design given in [120].
The Inverse Tangent Constraint Handling mechanism is a versatile and easy-to-apply procedure, eliminating the exhaustive and tedious trial-and-error-based penalty value assignment process. This intelligently devised constraint handling tool was found to be the best available mechanism among the possible alternatives according to the outcomes of the research study given in [121]. Tables 31 and 32 show the statistical outcomes of the compared metaheuristic optimizers for each engineering design problem considered in this study.

5.1 Tension/compression spring design problem

The tension/compression spring design problem, first introduced by Arora [122], deals with minimizing the weight of a tension/compression spring by taking several constraints into account, such as shear stress and surge frequency. The design parameters of the problem are the wire diameter (d), mean coil diameter (D), and number of active coils (N). The mathematical representation of the optimization problem is,

Table 30 Functional properties of each engineering design optimization problem

No    Name                                    D    gnum  hnum  NFEs
CP01  Tension/compression spring design        3    4     0    60,000
CP02  Belleville spring design                 4    7     0    60,000
CP03  Optimal design of a flywheel             3    2     0    10,000
CP04  Car side impact design problem          11   10     0    50,000
CP05  Optimal welded beam design               4    7     0    50,000
CP06  Pressure vessel design problem           4    4     0    50,000
CP07  Industrial refrigeration system         14   15     0    50,000
CP08  Multi-spindle automatic lathe           10   14     1    50,000
CP09  Design of a heat exchanger               8    6     0    70,000
CP10  Hydrostatic thrust bearing model         4    7     0    20,000
CP11  Stepped cantilever beam design          10   11     0    60,000
CP12  Optimal operation of alkylation unit     7   14     0    60,000
CP13  Speed reducer design problem             7   11     0    60,000
CP14  Optimal design of a reactor              8    4     0    50,000

\arg\min f(\vec{x}) = (N + 2)\, D\, d^{2}
\text{subject to}
g_1(\vec{x}) = 1 - \frac{D^{3} N}{71785\, d^{4}} \le 0, \qquad
g_2(\vec{x}) = \frac{4D^{2} - dD}{12566\,(D d^{3} - d^{4})} + \frac{1}{5108\, d^{2}} - 1 \le 0
g_3(\vec{x}) = 1 - \frac{140.45\, d}{D^{2} N} \le 0, \qquad
g_4(\vec{x}) = \frac{D + d}{1.5} - 1 \le 0        (125)

where the design parameters are restricted to the following ranges,
0.05 \le d \le 2.00, \quad 0.25 \le D \le 1.3, \quad 2.0 \le N \le 15
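For illustration, the sketch below encodes Eq. (125) in the g(x) <= 0 form expected by the constraint handling rule of Eqs. (123)-(124). Variable order and bounds follow the problem statement; the pairing with any particular optimizer is left open.

def spring_objective(x):
    d, D, N = x                               # wire diameter, coil diameter, active coils
    return (N + 2.0) * D * d**2

def spring_constraints(x):
    d, D, N = x
    return [
        1.0 - (D**3 * N) / (71785.0 * d**4),                                    # g1
        (4*D**2 - d*D) / (12566.0 * (D*d**3 - d**4)) + 1.0/(5108.0*d**2) - 1.0,  # g2
        1.0 - 140.45 * d / (D**2 * N),                                           # g3
        (D + d) / 1.5 - 1.0,                                                     # g4
    ]

SPRING_BOUNDS = [(0.05, 2.0), (0.25, 1.3), (2.0, 15.0)]

# Usage with the inverse tangent rule sketched earlier:
# fitness = inverse_tangent_fitness(spring_objective,
#              [lambda x, j=j: spring_constraints(x)[j] for j in range(4)], x)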
The results reported in Table 31 show that MANTA comes up with the most desirable solution performance compared to the other competing optimizers. Table 33 gives the decision variable, constraint satisfaction, and objective function values of the best results found with each optimizer for the tension/compression spring design problem. It is seen that the same objective function values are found by the SNAKE and MANTA optimizers. Moreover, all competing algorithms successfully determined a solution of the design problem without violating the inequality constraints. Figure 14 depicts the convergence maps of each design variable for the different optimization algorithms. The quick convergence performance of MANTA can be noted in the charts.
5.2 Optimal design of a Belleville spring

The Belleville spring design optimization problem aims to determine the minimum weight by considering four different design variables, namely the thickness t (x1), height h (x2), inner diameter Di (x3), and external diameter De (x4) of the spring. Figure 15 shows the physical representation of the Belleville spring. The objective function and constraints of the problem are given as,

\arg\min f(x) = 0.07075\, \pi \left(D_e^{2} - D_i^{2}\right) t
\text{subject to}
g_1(x) = S - \frac{4 E \delta_{max}}{(1 - \mu^{2})\, \alpha\, D_e^{2}} \left[ \beta \left( h - \frac{\delta_{max}}{2} \right) + \gamma t \right] \ge 0
g_2(x) = \left[ \frac{4 E \delta}{(1 - \mu^{2})\, \alpha\, D_e^{2}} \left( h - \frac{\delta}{2} \right)(h - \delta)\, t + t^{3} \right]_{\delta = \delta_{max}} - P_{max} \ge 0
g_3(x) = \delta_l - \delta_{max} \ge 0, \qquad g_4(x) = H - h - t \ge 0, \qquad g_5(x) = D_{max} - D_e \ge 0
g_6(x) = D_e - D_i \ge 0, \qquad g_7(x) = 0.3 - \frac{h}{D_e - D_i} \ge 0

where

\alpha = \frac{6}{\pi \ln K}\left(\frac{K-1}{K}\right)^{2}, \quad
\beta = \frac{6}{\pi \ln K}\left(\frac{K-1}{\ln K} - 1\right), \quad
\gamma = \frac{6}{\pi \ln K}\left(\frac{K-1}{2}\right)
P_{max} = 5400\ \text{lb}, \; E = 30\text{E}6\ \text{psi}, \; \delta_{max} = 0.2\ \text{in}, \; \mu = 0.3, \; S = 200\ \text{KPsi}
H = 2\ \text{in}, \; D_{max} = 12.01\ \text{in}, \; K = D_e / D_i, \; \delta_l = f(a)\, a, \; a = h / t        (126)

Table 34 lists the changing values of f(a) with respect to a. Once again, the results given in Table 31 reveal that MANTA outperforms the competing optimizers in terms of solution accuracy and consistency. Also, EQUIL is realized as the second-best performing algorithm, with optimization performance comparable to MANTA. Table 35 reports the results of the best solutions determined by the optimizers. Figure 16 shows the variations of the decision variables throughout the iterations for each optimization algorithm. The poor convergence behavior of AFRICAN and the coveted behavior of MANTA and EQUIL can be observed in Fig. 16.

Table 31 Statistical outcomes for the first set of benchmark constrained engineering optimization problems
Problem Tension/compression spring Belleville spring
Best Worst Mean Std Dev Best Worst Mean Std Dev

RUNGE 0.0126655 0.0157459 0.0132063 5.62E-04 1.9798313 2.4877024 2.0378178 0.1249922


GRAD 0.0126690 0.0148831 0.0132552 5.13E-04 1.9927511 2.8510001 2.2927353 0.2149018
PRO 0.0126741 0.0136532 0.0129286 2.25E-04 1.9834002 2.3076660 2.0592726 0.0778693
REPTILE 0.0126980 0.0143113 0.0132977 4.29E-04 2.1283579 3.2167725 2.6888404 0.2285834
SNAKE 0.0126653 0.0161877 0.0132012 7.69E-04 1.9874918 3.2349364 2.3419672 0.2304580
EQUIL 0.0126657 0.0133165 0.0128277 1.53E-04 1.9798231 1.9805509 1.9802833 1.15E-04
MANTA 0.0126653 0.0127050 0.0126781 1.01E-05 1.9796760 1.9803489 1.9797279 9.26E-05
AFRICAN 0.0126654 0.0138503 0.0128252 2.79E-04 2.0029510 3.2167365 2.2866787 0.2157784
AQUILA 0.0133431 0.0214323 0.0158611 0.0018358 2.1330342 3.8669488 3.0975222 0.4046920
HARRIS 0.0126663 0.0158699 0.0133717 7.39E-04 2.0362030 3.1351364 2.4213435 0.1907070

Problem Optimal design of a flywheel Car crashworthiness


Best Worst Mean Std Dev Best Worst Mean Std Dev

RUNGE - 5.684782 - 5.684752 - 5.684774 7.16E-06 23.013975 24.180236 23.384072 0.2073751


GRAD - 5.684269 - 4.635577 - 5.654328 0.1203316 23.344482 26.237443 24.510346 0.6732937
PRO - 5.684702 - 5.652871 - 5.681449 0.0060798 23.047091 25.519270 24.048011 0.5277097
REPTILE - 5.681093 - 4.960125 - 5.541409 0.1419082 23.230407 24.785423 23.953335 0.3260325
SNAKE - 5.684778 - 4.035649 - 5.371802 0.3707310 23.895012 27.566562 25.352635 0.7247717
EQUIL - 5.684783 - 5.684781 - 5.684783 1.67E-07 22.843053 23.579915 23.008573 0.1516327
MANTA - 5.684783 - 5.683855 - 5.684720 1.69E-04 22.843092 23.388643 22.982930 0.2010305
AFRICAN - 5.684783 - 5.652521 - 5.683256 0.0058583 23.052260 24.772757 23.629449 0.3479980
AQUILA - 5.672231 - 4.767308 - 5.561655 0.1118341 23.495077 25.581189 24.376052 0.4849284
HARRIS - 5.684766 - 5.584682 - 5.660700 0.0220117 23.459696 25.676467 24.289256 0.4664434

Problem Optimal welded beam design Pressure vessel design problem


Best Worst Mean Std Dev Best Worst Mean Std Dev

RUNGE 1.7245284 1.8642993 1.7453911 0.0242625 5874.8887 5891.5317 5875.8562 2.7333851


GRAD 1.7259833 2.5913392 1.8231707 0.1397521 5881.1822 6010.3149 5918.2320 26.306616
PRO 1.7247001 2.0831086 1.7912137 0.0982138 5875.3442 5932.7700 5889.9641 13.163446
REPTILE 1.7776817 2.1975705 1.9171648 0.0798489 5918.0124 6061.1684 5972.8295 29.280143
SNAKE 1.7598859 3.0493413 2.1077843 0.2838168 5892.3658 6078.8836 5975.3571 48.828087
EQUIL 1.7245275 1.8095835 1.7269403 0.0103666 5875.2338 5965.7181 5915.4556 21.875586
MANTA 1.7245275 1.7245275 1.7245275 4.35E-16 5874.9418 5898.3653 5877.2474 3.7234545
AFRICAN 1.7247730 1.9205681 1.7511433 0.0310166 5875.0181 6356.4790 5920.5786 70.849160
AQUILA 1.7630874 2.3019916 1.9235292 0.1185979 5946.8282 6540.1063 6117.2101 112.44139
HARRIS 1.7390551 2.3312481 1.8651659 0.0997532 5880.7329 6048.6887 5943.8753 33.397907

Problem Industrial refrigeration system Multi-spindle automatic lathe


Best Worst Mean Std Dev Best Worst Mean Std Dev

RUNGE 0.0365475 0.3600127 0.0877705 0.0501503 - 4396.090 - 3368.883 - 4209.207 168.48935


GRAD 0.0631868 1,072,498.5 55,457.804 205,823.68 - 4029.690 - 2514.817 - 3361.858 298.93365
PRO 3.7994443 600,986.26 67,127.818 163,300.72 - 4082.041 - 2838.410 - 3364.411 273.19764
REPTILE 11.338978 2,639,386.7 408,610.74 630,478.63 - 4010.273 - 2865.929 - 3376.766 246.14437
SNAKE 891,031.82 891,031.82 891,031.82 0.0000000 - 4347.893 - 2856.534 - 3565.494 390.97044
EQUIL 0.0328972 0.0831393 0.0429145 0.0106516 - 4174.764 - 2973.311 - 3448.921 251.96906
MANTA 0.0322144 0.0849842 0.0473375 0.0116252 - 4321.037 - 2848.001 - 3444.883 284.89763
AFRICAN 0.0451270 6.7968186 0.2603458 0.7783040 - 4153.929 - 2914.450 - 3531.240 317.46318
AQUILA 1199.7546 3,406,513.2 695,605.18 871,649.90 - 4216.306 - 2857.298 - 3473.527 310.48199
HARRIS 230.18247 2,209,728.8 386,960.67 547,568.90 - 4143.006 - 2769.173 - 3368.058 293.30269


5.3 Optimal design of a flywheel

The optimal design of a flywheel problem [123] has a nonlinear objective function and is subjected to two inequality constraints. The problem is mathematically represented as,

\arg\min f(\vec{x}) = -(0.0201/10^{7})\, x_1^{4} x_2 x_3^{2}
\text{subject to}
g_1(\vec{x}) = 675 - x_1^{2} x_2 \ge 0
g_2(\vec{x}) = 0.419 - x_1^{2} x_3^{2} / 10^{7} \ge 0        (127)
Allowable search bounds: 0 \le x_1 \le 36, \quad 0 \le x_2 \le 5, \quad 0 \le x_3 \le 125

The reported results in Table 31 show that MANTA, EQUIL, and AFRICAN find the most desirable best results after several runs. However, it is realized that EQUIL outperformed its peers when examining the mean and standard deviation results. Table 36 gives the decision variable and constraint values of the best results for the compared optimization algorithms. Figure 17 shows the variations of the decision variables with the increasing number of iterations.

5.4 Car side impact design problem

This problem, first illustrated by Gu et al. [124], handles the FE (finite element) model of a car. The main objective is to reduce the weight and ameliorate the strength and resilience of the vehicle to a satisfactory level in the case of an instantaneous crash. The procedures and experimental configurations implemented by the European Enhanced Vehicle-Safety Committee (EEVC) have been utilized to accomplish the test studies of the car impact. A regression model has been developed by employing Latin hypercube sampling and quadratic backward-stepwise regression methods. The problem consists of eleven design parameters; these are the B-pillar inner, B-pillar reinforcement, floor side inner, cross members, door beam, door beltline reinforcement, and roof rail thicknesses (x1-x7), the B-pillar inner and floor side inner materials (x8 and x9), and the barrier height and hitting position (x10 and x11). Among them, two of the decision variables are discrete (x8 and x9), and the rest of them are continuous. The optimization problem is shown below,

\arg\min f(x) = 1.98 + 4.90 x_1 + 6.67 x_2 + 6.98 x_3 + 4.01 x_4 + 1.78 x_5 + 2.73 x_7
\text{subject to}
g_1(x) = F_a \le 1.0\ \text{kN}, \quad g_2(x) = V_{Cu} \le 0.32\ \text{m/s}, \quad g_3(x) = V_{Cm} \le 0.32\ \text{m/s}
g_4(x) = V_{Cl} \le 0.32\ \text{m/s}, \quad g_5(x) = D_{ur} \le 32\ \text{mm}, \quad g_6(x) = D_{mr} \le 32\ \text{mm}
g_7(x) = D_{lr} \le 32\ \text{mm}, \quad g_8(x) = F_p \le 4\ \text{kN}, \quad g_9(x) = V_{MBP} \le 9.9\ \text{mm/ms}, \quad g_{10}(x) = V_{FD} \le 15.7\ \text{mm/ms}

where

F_a = 1.16 - 0.3717 x_2 x_4 - 0.00931 x_2 x_{10} - 0.484 x_3 x_9 + 0.01343 x_6 x_{10}
V_{Cu} = 0.261 - 0.0159 x_1 x_2 - 0.1881 x_1 x_8 - 0.019 x_2 x_7 + 0.0144 x_3 x_5 + 0.0008757 x_5 x_{10} + 0.08045 x_6 x_9 + 0.00139 x_8 x_{11} + 0.00001575 x_{10} x_{11}
V_{Cm} = 0.214 + 0.00817 x_5 - 0.131 x_1 x_8 - 0.0704 x_1 x_9 + 0.03099 x_2 x_6 - 0.018 x_2 x_7 + 0.0208 x_3 x_8 + 0.121 x_3 x_9 - 0.00364 x_5 x_6 + 0.0007715 x_5 x_{10} - 0.0005354 x_6 x_{10} + 0.00121 x_8 x_{11} + 0.00184 x_9 x_{10} - 0.02 x_2^{2}
V_{Cl} = 0.74 - 0.61 x_2 - 0.163 x_3 x_8 + 0.001232 x_3 x_{10} - 0.166 x_7 x_9 + 0.277 x_2^{2}
D_{ur} = 28.98 + 3.818 x_3 - 4.2 x_1 x_2 + 0.0207 x_5 x_{10} + 6.63 x_6 x_9 - 7.7 x_7 x_8 + 0.32 x_9 x_{10}
D_{mr} = 33.86 + 2.95 x_3 + 0.1792 x_{10} - 5.057 x_1 x_2 - 11.0 x_2 x_8 - 0.0215 x_5 x_{10} - 9.98 x_7 x_8 + 22.0 x_8 x_9
D_{lr} = 46.36 - 9.9 x_2 - 12.9 x_1 x_8 + 0.1107 x_3 x_{10}
F_p = 4.72 - 0.5 x_4 - 0.0122 x_4 x_{10} + 0.009325 x_6 x_{10} + 0.000191 x_{11}^{2}
V_{MBP} = 10.58 - 0.674 x_1 x_2 - 1.95 x_2 x_8 + 0.02054 x_3 x_{10} - 0.0198 x_4 x_{10} + 0.028 x_6 x_{10}
V_{FD} = 16.45 - 0.489 x_3 x_7 - 0.843 x_3 x_6 + 0.0432 x_9 x_{10} - 0.0556 x_9 x_{11} - 0.000786 x_{11}^{2}

where
0.5 \le x_1, x_2, x_3, x_4, x_5, x_6, x_7 \le 1.5, \qquad -30 \le x_{10}, x_{11} \le 30, \qquad x_8, x_9 \in \{0.192, 0.345\}        (128)
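The paper does not state how the compared optimizers treat the two discrete material variables of Eq. (128), so the repair rule below, which snaps x8 and x9 to the nearest admissible level after every continuous update, is only one plausible choice rather than the procedure actually used.

import numpy as np

MATERIAL_LEVELS = np.array([0.192, 0.345])   # admissible values for x8 and x9

def repair_car_side_solution(x):
    """Map a continuous candidate onto the mixed search space of Eq. (128):
    x1-x7 and x10-x11 stay continuous, while x8 and x9 are snapped to the
    nearest admissible material value."""
    x = np.array(x, dtype=float)
    for i in (7, 8):                                     # zero-based indices of x8, x9
        x[i] = MATERIAL_LEVELS[np.argmin(np.abs(MATERIAL_LEVELS - x[i]))]
    return x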
Similar to the previous case, MANTA and EQUIL outperform the competing optimization algorithms and find the best results f(x) = 22.843092, as reported in Table 31. It is recognized that MANTA performs slightly better over consecutive runs compared to EQUIL when investigating the

Table 32 Statistical results for the second set of benchmark constrained engineering optimization problems
Problem Design of a heat exchanger Hydrostatic thrust bearing model
Best Worst Mean Std Dev Best Worst Mean Std Dev

RUNGE 7411.6897 10,130.485 8140.2322 490.75219 19,505.756 30,470.239 22,765.922 2706.2542


GRAD 7323.5956 13,662.856 9789.5474 1306.9996 20,009.396 34,060.394 24,671.685 3162.4149
PRO 7344.3243 13,941.226 9175.2006 1615.0778 20,519.413 49,291.511 25,382.922 4322.6059
REPTILE 11,812.881 19,072.194 15,949.822 1978.4300 23,457.816 48,249.117 33,337.716 4352.8301
SNAKE 9680.6944 16,374.383 13,088.709 2221.9716 26,596.482 120,228.19 49,855.116 19,042.845
EQUIL 7146.3565 8804.1750 7656.8758 332.21851 20,018.722 29,704.576 22,464.335 1793.0170
MANTA 7067.4473 9021.1629 7424.2728 301.84261 19,541.313 29,029.518 20,806.842 1529.7314
AFRICAN 7328.5675 8734.7933 8018.1150 329.42177 23,739.446 82,167.141 35,719.867 10,073.106
AQUILA 8811.4828 14,587.542 11,699.009 2837.7227 22,991.626 91,412.942 41,050.933 11,180.680
HARRIS 8010.0975 15,370.372 11,191.945 1873.9423 19,521.249 31,901.288 23,600.872 3340.6890
Problem Stepped cantilever beam Optimal operation of alkylation unit
Best Worst Mean Std Dev Best Worst Mean Std Dev

RUNGE 64,578.208 69,277.665 65,121.182 1064.1303 1245.6379 2117.3499 1519.6188 190.23775


GRAD 64,724.229 77,925.184 71,875.794 2375.2440 1349.9752 2222.3185 1556.7339 147.10351
PRO 66,158.642 76,469.398 71,300.283 1974.3993 1278.3725 2916.0061 1602.8508 354.84969
REPTILE 68,828.688 85,320.661 75,128.335 3001.4417 1629.1143 2972.2061 2452.2865 415.32999
SNAKE 68,457.074 78,304.308 72,557.032 1988.8471 1397.4619 2823.4577 1876.1794 408.63850
EQUIL 64,578.194 68,643.431 65,740.851 1676.5524 1235.4962 2139.5592 1381.5739 155.00092
MANTA 64,578.194 69,277.112 67,249.444 1803.3796 1227.4051 1274.3095 1235.0356 8.7295558
AFRICAN 64,579.575 71,085.347 66,954.139 1902.9033 1267.9188 2680.1533 1817.2341 391.65331
AQUILA 65,337.621 85,572.919 71,508.944 3415.1317 1488.8332 3040.4273 2206.7925 408.79305
HARRIS 64,944.385 75,397.840 68,677.812 2079.5299 1270.1414 2867.8027 1613.1782 342.69084

Problem Speed reducer design problem Optimal design of a reactor


Best Worst Mean Std Dev Best Worst Mean Std Dev

RUNGE 2823.6811 2824.8458 2823.8169 0.1555914 3.9636216 4.4107582 4.1637900 0.1253885


GRAD 2828.1824 2884.8669 2844.2099 12.758142 4.1654223 6.8838287 4.5454970 0.3700420
PRO 2824.7120 2838.0842 2829.5876 2.7090358 4.0086553 7.7602229 4.5810918 0.6108497
REPTILE 2858.3486 2925.5761 2890.9390 15.751613 4.4029289 5.3611018 4.8769577 0.2515282
SNAKE 2831.6399 2930.1740 2874.7778 22.281051 4.2959290 9.2004413 5.8464765 1.2328020
EQUIL 2823.6625 2823.6626 2823.6625 0.0000183 3.9526041 4.3009504 4.0780024 0.0948024
MANTA 2823.6625 2823.6625 2823.6625 1.03E-09 3.9515829 4.2922526 4.0136751 0.0791672
AFRICAN 2826.5224 3051.6577 2862.4383 54.512379 3.9710219 4.4091549 4.1616755 0.0982632
AQUILA 2854.6592 3572.3944 2983.8190 105.80110 4.0812704 4.8485287 4.3467733 0.1435924
HARRIS 2828.7357 2924.0439 2861.5659 24.676918 4.0043941 4.8585662 4.3415776 0.1432127

other metrics. Table 37 lists the decision variable and constraint outcomes for the best results found by the optimizers. Figure 18 demonstrates the convergence chart of the design variables for each optimizer.

5.5 Optimal welded beam design problem

The welded beam design optimization problem [125], shown in Fig. 19, tackles the minimization of the production costs of the welded beam. The problem has four design variables, namely the thickness of the weld (h - x1), length of the weld (l - x2), beam height (t - x3), and beam width (b - x4). Furthermore, there are seven inequality

constraints subjected to the shear stress (τ) and bending stress (σ) in the beam, the buckling load on the bar (P), the end deflection of the beam (δ), and a few side constraints. The mathematical formulation of the problem is given as,

\arg\min f(x) = 1.10471\, x_1^{2} x_2 + 0.04811\, x_3 x_4 (14 + x_2)
\text{subject to}
g_1(x) = \tau(x) - \tau_{max} \le 0, \quad g_2(x) = \sigma(x) - \sigma_{max} \le 0, \quad g_3(x) = x_1 - x_4 \le 0
g_4(x) = 0.1047\, x_1^{2} + 0.04811\, x_3 x_4 (14 + x_2) - 5 \le 0, \quad g_5(x) = 0.125 - x_1 \le 0
g_6(x) = \delta(x) - \delta_{max} \le 0, \quad g_7(x) = P - P_c(x) \le 0
0.1 \le x_1, x_4 \le 2, \qquad 0.1 \le x_2, x_3 \le 10

where

\tau(x) = \sqrt{(\tau')^{2} + 2\tau'\tau''\frac{x_2}{2R} + (\tau'')^{2}}, \quad \tau' = \frac{P}{\sqrt{2}\, x_1 x_2}, \quad \tau'' = \frac{MR}{J}
M = P\left(L + \frac{x_2}{2}\right), \quad R = \sqrt{\frac{x_2^{2}}{4} + \left(\frac{x_1 + x_3}{2}\right)^{2}}
J = 2\left\{\sqrt{2}\, x_1 x_2 \left[\frac{x_2^{2}}{12} + \left(\frac{x_1 + x_3}{2}\right)^{2}\right]\right\}, \quad \sigma(x) = \frac{6PL}{x_4 x_3^{2}}, \quad \delta(x) = \frac{4PL^{3}}{E x_3^{3} x_4}
P_c(x) = \frac{4.013\, E \sqrt{x_3^{2} x_4^{6}/36}}{L^{2}} \left(1 - \frac{x_3}{2L}\sqrt{\frac{E}{4G}}\right)
P = 6000\ \text{lb}, \; L = 14\ \text{in}, \; E = 30 \times 10^{6}\ \text{psi}, \; G = 12 \times 10^{6}\ \text{psi}, \; \tau_{max} = 13600\ \text{psi}, \; \sigma_{max} = 30000\ \text{psi}, \; \delta_{max} = 0.25\ \text{in}        (129)
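The intermediate quantities of Eq. (129) can be collected in a small helper; the sketch below follows the definitions above and is meant only to make the nested formulas explicit before they are used inside g1, g6, and g7.

import math

P, L, E, G = 6000.0, 14.0, 30e6, 12e6          # constants of Eq. (129)

def welded_beam_quantities(x):
    """Return tau(x), sigma(x), delta(x), Pc(x) for a candidate x = (x1, x2, x3, x4)."""
    x1, x2, x3, x4 = x
    tau_p = P / (math.sqrt(2.0) * x1 * x2)                              # tau'
    M = P * (L + x2 / 2.0)
    R = math.sqrt(x2**2 / 4.0 + ((x1 + x3) / 2.0)**2)
    J = 2.0 * (math.sqrt(2.0) * x1 * x2 * (x2**2 / 12.0 + ((x1 + x3) / 2.0)**2))
    tau_pp = M * R / J                                                  # tau''
    tau = math.sqrt(tau_p**2 + 2.0 * tau_p * tau_pp * x2 / (2.0 * R) + tau_pp**2)
    sigma = 6.0 * P * L / (x4 * x3**2)
    delta = 4.0 * P * L**3 / (E * x3**3 * x4)
    Pc = (4.013 * E * math.sqrt(x3**2 * x4**6 / 36.0) / L**2) * \
         (1.0 - x3 / (2.0 * L) * math.sqrt(E / (4.0 * G)))
    return tau, sigma, delta, Pc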

The results in Table 31 reveal that EQUIL and MANTA produce comparably better outcomes than their peers, with MANTA slightly outperforming EQUIL in terms of solution consistency. REPTILE is the worst-performing optimization algorithm among the contenders for this problem. Table 38 reports the optimal design variable and constraint values of the best solutions. Figure 20 depicts the variations of the design variables through 2000 iterations.

Table 33 Optimal results for tension/compression spring design problem

5.6 Pressure vessel design problem

The pressure vessel design problem [126] deals with minimizing the total cost of a cylindrical vessel. The vessel is covered at both ends with hemispherical caps, as depicted in Fig. 21. The problem consists of four design parameters and four constraints. The design parameters are the shell thickness (Ts - x1), head thickness (Th - x2), inner radius (R - x3), and length of the cylindrical section of the container (L - x4). The mathematical representation of the problem is given as,

Fig. 14 Convergence map of the design variables for each optimizer

Table 34 Change of f(a) with respect to a

a      <1.4   1.5    1.6    1.7    1.8    1.9    2.0    2.1
f(a)   1      0.85   0.77   0.71   0.66   0.63   0.6    0.58

a      2.2    2.3    2.4    2.5    2.6    2.7    >=2.8
f(a)   0.56   0.55   0.53   0.52   0.51   0.51   0.5
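Table 34 defines the deflection term δl = f(a)·a of Eq. (126) through a stepwise table rather than a closed-form expression. The lookup below assumes that the value attached to the nearest upper breakpoint is used, which is one possible reading of the table; the paper does not state whether any interpolation is applied.

F_A_TABLE = [            # (upper edge of the a-interval, f(a)) from Table 34
    (1.4, 1.00), (1.5, 0.85), (1.6, 0.77), (1.7, 0.71), (1.8, 0.66),
    (1.9, 0.63), (2.0, 0.60), (2.1, 0.58), (2.2, 0.56), (2.3, 0.55),
    (2.4, 0.53), (2.5, 0.52), (2.6, 0.51), (2.7, 0.51), (2.8, 0.50),
]

def delta_l(h, t):
    """delta_l = f(a) * a with a = h / t, using the stepwise values of Table 34."""
    a = h / t
    for edge, value in F_A_TABLE:
        if a <= edge:
            return value * a
    return 0.50 * a          # a greater than 2.8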

Fig. 15 Design configuration of the Belleville spring

\arg\min f(x) = 0.6224\, x_1 x_3 x_4 + 1.7781\, x_2 x_3^{2} + 3.1661\, x_1^{2} x_4 + 19.84\, x_1^{2} x_3
\text{subject to}
g_1(x) = -x_1 + 0.0193\, x_3 \le 0, \qquad g_2(x) = -x_2 + 0.00954\, x_3 \le 0
g_3(x) = -\pi x_3^{2} x_4 - \frac{4}{3}\pi x_3^{3} + 1296000 \le 0, \qquad g_4(x) = x_4 - 240 \le 0        (130)
0.5 \le x_1 \le 1, \quad 0.3 \le x_2 \le 0.5, \quad 40 \le x_3 \le 50, \quad 170 \le x_4 \le 240
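As with the spring problem, Eq. (130) translates directly into a short evaluation routine in the g(x) <= 0 form used throughout this section; the sketch below follows the bounds given above and is again only an illustration of the problem encoding, not code from the study.

import math

def pressure_vessel_objective(x):
    Ts, Th, R, L = x
    return 0.6224*Ts*R*L + 1.7781*Th*R**2 + 3.1661*Ts**2*L + 19.84*Ts**2*R

def pressure_vessel_constraints(x):
    Ts, Th, R, L = x
    return [
        -Ts + 0.0193 * R,                                          # g1
        -Th + 0.00954 * R,                                         # g2
        -math.pi*R**2*L - (4.0/3.0)*math.pi*R**3 + 1296000.0,      # g3 (volume)
        L - 240.0,                                                 # g4
    ]

VESSEL_BOUNDS = [(0.5, 1.0), (0.3, 0.5), (40.0, 50.0), (170.0, 240.0)]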
RUNGE is the best performing algorithm for this case, with the objective function value f(x) = 5874.8886953. MANTA is a close contender for the top spot with the objective function value f(x) = 5874.9418306. AQUILA is the worst performer for this problem. Table 39 reports the optimal decision variable, constraint satisfaction, and objective function values of the optimizers. Figure 22 demonstrates the convergence charts of the design variables.

5.7 Industrial refrigeration system design

The industrial refrigeration system design problem [127] has fourteen design variables and fifteen constraints imposed on the objective function. With a large number of design variables and constraints compared to the other benchmark engineering design problems, this problem can be considered an exemplary case of a constrained engineering design problem. The problem can be mathematically formalized as follows,

\arg\min f(x) = 63098.88\, x_2 x_4 x_{12} + 5441.5\, x_2^{2} x_{12} + 115055.5\, x_2^{1.664} x_6 + 6172.27\, x_2^{2} x_6
  + 63098.88\, x_1 x_3 x_{11} + 5441.5\, x_1^{2} x_{11} + 115055.5\, x_1^{1.664} x_5 + 6172.27\, x_1^{2} x_5
  + 140.53\, x_1 x_{11} + 281.29\, x_3 x_{11} + 70.26\, x_1^{2} + 281.29\, x_1 x_3 + 281.20\, x_3^{2}
  + 14437\, x_8^{1.8812} x_{12}^{0.3424} x_{10} x_{14}^{-1} x_1^{2} x_7 x_9^{-1} + 20479.2\, x_1^{2} x_7^{2.893} x_{11}^{0.316}
\text{subject to}
g_1(x) = 1.524\, x_7^{-1} - 1 \le 0, \qquad g_2(x) = 1.524\, x_8^{-1} - 1 \le 0
g_3(x) = 0.07789\, x_1 - 2\, x_7^{-1} x_9 - 1 \le 0, \qquad g_4(x) = 7.05305\, x_9^{-1} x_1^{2} x_{10} x_8^{-1} x_2^{-1} x_{14}^{-1} - 1 \le 0
g_5(x) = 0.0833\, x_{13} x_{14}^{-1} - 1 \le 0
g_6(x) = 47.136\, x_2^{0.33} x_{10}^{-1} x_{12} - 1.333\, x_8 x_{13}^{2.1195} + 62.08\, x_8^{0.2} x_{10}^{-1} x_{12}^{-1} x_{13}^{2.1195} - 1 \le 0
g_7(x) = 0.0477\, x_8^{1.8812} x_{10} x_{12}^{0.3424} - 1 \le 0, \qquad g_8(x) = 0.0488\, x_7^{1.893} x_9 x_{11}^{0.316} - 1 \le 0
g_9(x) = 0.0099\, x_1 x_3^{-1} - 1 \le 0, \qquad g_{10}(x) = 0.0193\, x_2 x_4^{-1} - 1 \le 0
g_{11}(x) = 0.0298\, x_1 x_5^{-1} - 1 \le 0, \qquad g_{12}(x) = 0.056\, x_2 x_6^{-1} - 1 \le 0
g_{13}(x) = 2\, x_9^{-1} - 1 \le 0, \qquad g_{14}(x) = 2\, x_{10}^{-1} - 1 \le 0, \qquad g_{15}(x) = x_{11}^{-1} x_{12} - 1 \le 0
0.001 \le x_i \le 5.0, \quad i = 1, \ldots, 14        (131)

All competing metaheuristic optimizers successfully find results without violating the constraints imposed on the objective function. Once again, it is realized that MANTA outperforms its peers in terms of the best result found after consecutive runs. EQUIL takes the second spot with performance metrics close to those of MANTA. SNAKE is the worst-performing algorithm for this problem. Table 40 lists the optimal design variable, constraint, and objective function values for the best results of each algorithm. Figure 23 depicts the convergence maps of each design variable.

Table 35 Optimal results for Belleville spring design problem

5.8 Multi-spindle automatic lathe

This problem deals with optimizing a multi-spindle automatic lathe and consists of ten design variables and fifteen nonlinear constraints [123]. The mathematical expression of the problem is as follows,
Fig. 16 Convergence map of the design variables

\arg\min f(x) = -20000\, \frac{0.15\, x_1 + 14\, x_2 - 0.06}{0.002 + x_1 + 60\, x_2}
\text{subject to}
g_1(x) = x_1 - \frac{0.75}{x_3 x_4} \ge 0, \qquad g_2(x) = x_1 - \frac{x_9}{x_4 x_5} \ge 0, \qquad g_3(x) = x_1 - \frac{x_{10}}{x_4 x_6} - \frac{10}{x_4} \ge 0
g_4(x) = x_1 - \frac{0.19}{x_4 x_7} - \frac{10}{x_4} \ge 0, \qquad g_5(x) = x_1 - \frac{0.125}{x_4 x_8} \ge 0
g_6(x) = 10000\, x_2 - 0.00131\, x_9 x_5^{0.6665} x_4^{1.5} \ge 0
g_7(x) = 10000\, x_2 - 0.001038\, x_{10} x_6^{1.6} x_4^{3} \ge 0, \qquad g_8(x) = 10000\, x_2 - 0.000223\, x_7^{0.666} x_4^{1.5} \ge 0
g_9(x) = 10000\, x_2 - 0.000076\, x_8^{3.55} x_4^{5.66} \ge 0, \qquad g_{10}(x) = 10000\, x_2 - 0.000698\, x_3^{1.2} x_4^{2} \ge 0
g_{11}(x) = 10000\, x_2 - 0.00005\, x_3^{1.6} x_4^{3} \ge 0, \qquad g_{12}(x) = 10000\, x_2 - 0.00000654\, x_3^{2.42} x_4^{4.17} \ge 0
g_{13}(x) = 10000\, x_2 - 0.000257\, x_3^{0.666} x_4^{1.5} \ge 0
g_{14}(x) = 30 - 2.003\, x_4 x_5 - 1.885\, x_4 x_6 - 0.184\, x_4 x_8 - 2\, x_4 x_3^{0.803} \ge 0
h_1(x) = x_9 + x_{10} - 0.255 = 0
0 \le x_1 \le 10, \; 0 \le x_2 \le 0.1, \; 0.5\text{E-}4 \le x_3 \le 0.0081, \; 10 \le x_4 \le 1000
0.5\text{E-}4 \le x_5 \le 0.0017, \; 0.5\text{E-}4 \le x_6 \le 0.0013, \; 0.5\text{E-}4 \le x_7 \le 0.0027
0.5\text{E-}4 \le x_8 \le 0.002, \; 0.5\text{E-}4 \le x_9 \le 1.0, \; 0.5\text{E-}4 \le x_{10} \le 1.0        (132)
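The single equality constraint h1(x) of Eq. (132) is the only one in the benchmark set (see Table 30). Under the |h(x)| - ε <= 0 conversion described earlier it reduces to one additional inequality, as sketched below with ε = 1E-10; this is a direct restatement of that rule, not an additional assumption about the optimizers.

EPS = 1e-10   # the epsilon used throughout this study

def h1_as_inequality(x):
    """|h1(x)| - eps <= 0 with h1(x) = x9 + x10 - 0.255 from Eq. (132)."""
    x9, x10 = x[8], x[9]          # zero-based indexing
    return abs(x9 + x10 - 0.255) - EPS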


As seen from the results in Table 31, RUNGE outperforms the competing optimizers in all optimization performance metrics. One of the best performing algorithms so far, MANTA, also shows a desirable performance in this case; however, it is not enough to overthrow RUNGE from the top spot. REPTILE is the worst performer for this case, with the best objective function value of f(x) = -4010.273284. Table 41 shows the design variable, constraint satisfaction, and objective function values for the best results obtained by the optimizers. Figure 24 demonstrates the variations of the decision variables for the competing optimizers.

5.9 Optimal design of a heat exchanger

This problem, first introduced by Hock and Schittkowski [128], tackles the design optimization of a heat exchanger. The problem has six inequality constraints and eight decision variables. The mathematical representation of the problem is as follows,

\arg\min f(x) = x_1 + x_2 + x_3
\text{subject to}
g_1(x) = -1 + 0.0025\,(x_4 + x_6) \le 0
g_2(x) = -1 + 0.0025\,(-x_4 + x_5 + x_7) \le 0
g_3(x) = -1 + 0.01\,(-x_5 + x_8) \le 0
g_4(x) = 100\, x_1 - x_1 x_6 + 833.33252\, x_4 - 83333.333 \le 0
g_5(x) = x_2 x_4 - x_2 x_7 - 1250\, x_4 + 1250\, x_5 \le 0
g_6(x) = x_3 x_5 - x_3 x_8 - 2500\, x_5 + 1250000 \le 0
100 \le x_1 \le 10000, \quad 1000 \le x_2 \le 10000, \quad 100 \le x_3 \le 10000, \quad 10 \le x_i \le 1000, \; i = 4, 5, \ldots, 8        (133)
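Equation (133) contains only linear and bilinear terms, so its evaluation routine is short; the sketch below restates the problem in the g(x) <= 0 form used throughout this section and is, as before, an illustration rather than code from the study.

def heat_exchanger_objective(x):
    return x[0] + x[1] + x[2]                  # f = x1 + x2 + x3

def heat_exchanger_constraints(x):
    x1, x2, x3, x4, x5, x6, x7, x8 = x
    return [
        -1.0 + 0.0025 * (x4 + x6),                       # g1
        -1.0 + 0.0025 * (-x4 + x5 + x7),                 # g2
        -1.0 + 0.01 * (-x5 + x8),                        # g3
        100.0*x1 - x1*x6 + 833.33252*x4 - 83333.333,     # g4
        x2*x4 - x2*x7 - 1250.0*x4 + 1250.0*x5,           # g5
        x3*x5 - x3*x8 - 2500.0*x5 + 1250000.0,           # g6
    ]

HEAT_EXCHANGER_BOUNDS = [(100, 10000), (1000, 10000), (100, 10000)] + [(10, 1000)] * 5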
According to the statistical results reported in Table 32, MANTA takes the first spot with the best objective function value f(x) = 7067.4473363. EQUIL takes the second spot with the best objective function value f(x) = 7146.3564714. REPTILE is the worst performer on this problem. Table 42 reports the decision variable and constraint values for the best results obtained by each algorithm. Figure 25 depicts the convergence map of the decision variables through 2500 iterations.

Table 36 Optimal outcomes for flywheel design problem

5.9.1 Hydrostatic thrust bearing model

This problem aims to minimize the total power loss of a hydrostatic thrust bearing during its operation [129]. The bearing must retain the axial support and cope with the external force applied to it simultaneously during the operation. The physical configuration of the hydrostatic

Fig. 17 Convergence chart of the decision variables

thrust bearing is shown in Fig. 26. The problem has four design variables, namely the bearing step radius R, recess radius Ro, oil viscosity μ, and flow rate Q. There are also seven nonlinear inequality constraints imposed on the objective function, associated with the oil pressure at the inlet, load carrying capacity, oil film thickness, oil temperature rise, and some other physical requirements. The mathematical formalization of the problem is given as follows,

\arg\min f(x) = \frac{Q P_o}{0.7} + E_f
\text{subject to}
g_1(x) = W - W_s \ge 0, \quad g_2(x) = P_{max} - P_o \ge 0, \quad g_3(x) = \Delta T_{max} - \Delta T \ge 0
g_4(x) = h - h_{min} \ge 0, \quad g_5(x) = R - R_o \ge 0
g_6(x) = 0.001 - \frac{\gamma}{g P_o}\left(\frac{Q}{2\pi R h}\right) \ge 0, \quad g_7(x) = 5000 - \frac{W}{\pi\left(R^{2} - R_o^{2}\right)} \ge 0

where

W = \frac{\pi P_o}{2}\, \frac{R^{2} - R_o^{2}}{\ln(R/R_o)}, \quad P_o = \frac{6 \mu Q}{\pi h^{3}} \ln\frac{R}{R_o}, \quad E_f = 9336\, Q\, \gamma\, C\, \Delta T
\Delta T = 2\left(10^{P} - 560\right), \quad P = \frac{\log_{10}\left(\log_{10}(8.122\text{E}6\, \mu + 0.8)\right) - C_1}{n}
h = \left(\frac{2\pi N}{60}\right)^{2} \frac{2\pi\mu}{E_f}\left(\frac{R^{4}}{4} - \frac{R_o^{4}}{4}\right)
\gamma = 0.0307, \; C = 0.5, \; n = 3.35, \; C_1 = 10.04, \; W_s = 101000, \; P_{max} = 1000, \; \Delta T_{max} = 50, \; h_{min} = 0.001, \; g = 386.4, \; N = 750
1 \le R, R_o, Q \le 16, \qquad 1\text{E-}6 \le \mu \le 16\text{E-}6        (134)

RUNGE outperforms the competing algorithms and results in a more desirable objective function value for this case. HARRIS, which has not shown any presence until this case, is the second-best performer with the objective function value f(x) = 19521.2487819. SNAKE finds the least desirable solution for this problem. Table 43 reports the optimal results of the best solutions obtained by the optimizers. Figure 27 depicts the variations of the decision variables for each algorithm.

5.9.2 Stepped cantilever beam design problem

The main aim of this problem is to minimize the volume of a stepped cantilever beam, depicted in Fig. 28, which is subject to a load at its end. As can be seen from the figure, the beam consists of five segments. The decision variables of the problem are the height and width of each segment. Therefore, the problem has ten decision variables, which are the widths {x1, x3, x5, x7, x9} = {b1, b2, b3, b4, b5} and the heights {x2, x4, x6, x8, x10} = {h1, h2, h3, h4, h5}. The first six decision variables are discrete, and the remaining are continuous. The mathematical representation of the optimization problem is given below,

Table 37 Optimal results for car side impact design problem
Method EQUIL MANTA RUNGE PRO AFRICAN REPTILE GRAD HARRIS AQUILA SNAKE

Car side impact design


x1 0.5000001 0.5000067 0.5204184 0.5130266 0.5283519 0.5711456 0.5135188 0.5074228 0.5245048 0.5601427
x2 1.1175506 1.1171732 1.1634763 1.1557417 1.1580013 1.1338720 1.2198405 1.1379911 1.1815605 1.2408260
x3 0.5000006 0.5000059 0.5000002 0.5000416 0.5044043 0.5097193 0.5069730 0.5063674 0.5082857 0.5001031
x4 1.3002459 1.3008608 1.2415288 1.2601296 1.2428136 1.2620844 1.2057099 1.3284775 1.2579703 1.1824449
x5 0.5000000 0.5000050 0.5000033 0.5001268 0.5000204 0.5080387 0.5455699 0.5774253 0.5113291 0.7017471
x6 1.4999997 1.4999905 1.4970505 1.4761108 1.4961024 1.4716622 1.3810503 1.4173367 1.4432764 1.2832200
x7 0.5000000 0.5000042 0.5000001 0.5167865 0.5000021 0.5002779 0.5008135 0.5543835 0.5719905 0.5174247
x8 0.3450000 0.3450000 0.3450000 0.3450000 0.3450000 0.3450000 0.3450000 0.3450000 0.3450000 0.3450000
x9 0.3450000 0.3450000 0.1920000 0.1920000 0.3450000 0.3450000 0.1920000 0.1920000 0.3450000 0.1920000
x10 - 19.3495451 - 19.4164608 - 9.4934556 - 11.4956139 - 9.7496883 - 10.5089797 0.0323625 - 15.6884219 - 7.7015683 7.1793440
x11 0.0096023 - 0.0172372 - 0.3750806 - 2.6808194 1.6949710 - 2.0171247 - 0.2791182 - 5.9686579 - 3.6783333 5.6660122
g1 - 0.6150540 - 0.6158451 - 0.5114176 - 0.5320046 - 0.5126022 - 0.5160530 - 0.4335659 - 0.5814025 - 0.5042783 - 0.3910451
g2 - 0.0782522 - 0.0782745 - 0.0759421 - 0.0771568 - 0.0761363 - 0.0791097 - 0.0738473 - 0.0804319 - 0.0785753 - 0.0710293
g3 - 0.0922116 - 0.0922115 - 0.0942117 - 0.0954564 - 0.0938895 - 0.0970104 - 0.0989525 - 0.0993817 - 0.0988537 - 0.1017876
g4 - 0.0217045 - 0.0217074 - 0.0198678 - 0.0209919 - 0.0197619 - 0.0183122 - 0.0181322 - 0.0235079 - 0.0228011 - 0.0151227
g5 - 3.6767186 - 3.6807606 - 3.1691268 - 3.3114190 - 3.1985382 - 3.4162690 - 2.6953306 - 3.6787518 - 3.2453478 - 2.6158583
g6 - 5.2574951 - 5.2670433 - 3.9304914 - 4.2198744 - 3.9811479 - 4.2220351 - 2.5317045 - 4.8376595 - 3.8504791 - 1.8023764
g7 0.9868502 0.9868550 1.0271489 1.0111574 1.0427664 1.1270751 1.0135121 0.9576905 0.9301117 1.0859198
g8 - 2.28E-07 - 1.14E-07 - 2.37E-06 - 2.02E-06 - 2.87E-05 - 0.0024829 - 4.00E-04 - 4.66E-11 - 0.0059634 - 6.53E-04
g9 - 0.6282749 - 0.6295470 - 0.4257771 - 0.4587161 - 0.4354473 - 0.4614603 - 0.1980935 - 0.5083679 - 0.3798875 - 0.0894056
g10 - 0.1650955 - 0.1653690 - 0.0781101 - 0.0710831 - 0.1051803 - 0.0738047 - 0.0061361 - 0.1715999 - 0.0495418 - 0.1618255
f(x) 22.8430532 22.8430915 23.0139754 23.0470907 23.0522596 23.2304066 23.3444818 23.4596963 23.4950774 23.8950119

Fig. 18 Variations of the design variables throughout the iterations

\arg\min f(x) = 100\left(x_1 x_2 + x_3 x_4 + x_5 x_6 + x_7 x_8 + x_9 x_{10}\right)
\text{subject to}
g_1(x) = \frac{600 P}{x_9 x_{10}^{2}} - 14000 \le 0, \quad g_2(x) = \frac{1200 P}{x_7 x_8^{2}} - 14000 \le 0, \quad g_3(x) = \frac{1800 P}{x_5 x_6^{2}} - 14000 \le 0
g_4(x) = \frac{2400 P}{x_3 x_4^{2}} - 14000 \le 0, \quad g_5(x) = \frac{3000 P}{x_1 x_2^{2}} - 14000 \le 0
g_6(x) = \frac{P l^{3}}{E}\left(\frac{244}{x_1 x_2^{3}} + \frac{148}{x_3 x_4^{3}} + \frac{76}{x_5 x_6^{3}} + \frac{28}{x_7 x_8^{3}} + \frac{4}{x_9 x_{10}^{3}}\right) - \delta_{max} \le 0
g_7(x) = \frac{x_2}{x_1} \le 20, \quad g_8(x) = \frac{x_4}{x_3} \le 20, \quad g_9(x) = \frac{x_6}{x_5} \le 20, \quad g_{10}(x) = \frac{x_8}{x_7} \le 20, \quad g_{11}(x) = \frac{x_{10}}{x_9} \le 20
x_1 \in \{1, 2, 3, 4, 5\}, \quad x_3, x_5 \in \{2.4, 2.6, 2.8, 3.1\}
x_2, x_4 \in \{45, 50, 55, 60\}, \quad x_6 \in \{30, 31, \ldots, 65\}
1 \le x_7, x_9 \le 5, \quad 30 \le x_8, x_{10} \le 65
E = 2.7\text{E}7\ \text{N/cm}^{2}, \quad P = 50000\ \text{N}, \quad l = 100\ \text{cm}, \quad \delta_{max} = 2.7\ \text{cm}        (135)

Fig. 19 Schematic representation of the welded beam

Table 38 Optimal decision variables, constraint satisfaction, and objective function values for the welded beam design problem
Method EQUIL MANTA RUNGE PRO AFRICAN GRAD HARRIS SNAKE AQUILA REPTILE

Optimal welded beam design


x1 0.2057296 0.2057296 0.2057298 0.2057252 0.2055946 0.2053905 0.2057889 0.2066151 0.2062490 0.1921853
x2 3.4704887 3.4704887 3.4704903 3.4717499 3.4731047 3.4750459 3.4493178 3.6436790 3.5931701 3.7800462
x3 9.0366239 9.0366239 9.0366216 9.0366075 9.0373819 9.0443057 9.0888144 8.9779912 9.0750543 9.1686187
x4 0.2057296 0.2057296 0.2057298 0.2057310 0.2057259 0.2057352 0.2068184 0.2084255 0.2075935 0.2070408
g1 - 2.00E-11 0.0000000 - 0.0098068 - 3.6254763 - 1.48E-06 - 0.6406315 - 0.0886841 - 505.0071692 - 449.5138389 - 136.0452955
g2 - 1.08E-08 0.0000000 - 0.0013653 - 0.0961993 - 4.4808763 - 51.7438730 - 499.6751081 0.0000000 - 520.6154616 - 1042.117371
g3 - 8.05E-16 0.0000000 - 4.27E-09 - 5.83E-06 - 1.31E-04 - 3.45E-04 - 0.0010295 - 0.0018104 - 0.0013444 - 0.0148555
g4 - 3.4329838 - 3.4329838 - 3.4329832 - 3.4328633 - 3.4326533 - 3.4312201 - 3.4175536 - 3.4071492 - 3.4009803 - 3.3723492
g5 - 0.0807296 - 0.0807296 - 0.0807298 - 0.0807252 - 0.0805946 - 0.0803905 - 0.0807889 - 0.0816151 - 0.0812490 - 0.0671853
g6 - 0.2355403 - 0.2355403 - 0.2355403 - 0.2355403 - 0.2355437 - 0.2355775 - 0.2358628 - 0.2354459 - 0.2358514 - 0.2362435
g7 - 7.28E-12 - 9.09E-12 - 0.0091262 - 0.1160892 - 9.50E-08 - 3.8357572 - 118.8447906 - 212.2736666 - 181.7512602 - 173.7305111
f(x) 1.7245275 1.7245275 1.7245284 1.7247001 1.7247730 1.7259833 1.7390551 1.7598859 1.7630874 1.7776817

Fig. 20 Convergence map of the design variables for each optimizer

Fig. 21 Physical design of the pressure vessel

MANTA outperforms the other algorithms in terms of solution accuracy for this problem. EQUIL and RUNGE are close contenders for the first place, with the best-found objective function values f(x) = 64578.194021 and f(x) = 64578.208116, respectively. REPTILE is the worst performer for this problem with the objective function value f(x) = 68828.688275. MANTA and EQUIL had already proven their effectiveness in continuous engineering design problems; with this problem they have also shown their ability to tackle mixed-integer problems. Table 44 reports the optimal decision variable and constraint values for the best results acquired from the optimizers. Figure 29 depicts the convergence chart of the decision variables for the competing optimizers.

5.9.3 Optimal operation of alkylation unit

This problem aims to find the optimal decision variables that represent the operating conditions maximizing the profit of an alkylation unit. The alkylation process is widely utilized in the petroleum industry. In the alkylation process, pure butane and recycled and make-up pure isobutene streams are first mixed together with an acid catalyst in a reactor. Then the product is passed to a fractionator, where the alkylate product is separated from the isobutene. The problem was first introduced in Bracken and McCormick [130]; several modifications have been made to the original problem, and it has been transformed into a problem with seven design variables and fourteen inequality constraints. The optimization problem is given below,

\arg\min f(x) = a_1 x_1 + a_2 x_1 x_6 + a_3 x_3 + a_4 x_2 + a_5 - a_6 x_3 x_5
\text{subject to}
g_1(x) = 1 - c_1 x_6^{2} - c_2 x_3 / x_1 - c_3 x_6 \ge 0
g_2(x) = 1 - c_4 x_1 / x_3 - c_5 x_1 x_6 / x_3 - c_6 x_1 x_6^{2} / x_3 \ge 0
g_3(x) = 1 - c_7 x_6^{2} - c_8 x_5 - c_9 x_4 - c_{10} x_6 \ge 0
g_4(x) = 1 - c_{11}/x_5 - c_{12} x_6 / x_5 - c_{13} x_4 / x_5 - c_{14} x_6^{2} / x_5 \ge 0
g_5(x) = 1 - c_{15} x_7 - c_{16} x_2 / (x_3 x_4) - c_{17} x_2 / x_3 \ge 0
g_6(x) = 1 - c_{18}/x_7 - c_{19} x_2 / (x_3 x_7) - c_{20} x_2 / (x_3 x_4 x_7) \ge 0
g_7(x) = 1 - c_{21}/x_5 - c_{22} x_7 / x_5 \ge 0, \qquad g_8(x) = 1 - c_{23} x_5 - c_{24} x_7 \ge 0
g_9(x) = 1 - c_{25} x_3 - c_{26} x_1 \ge 0, \qquad g_{10}(x) = 1 - c_{27} x_1 / x_3 - c_{28}/x_3 \ge 0
g_{11}(x) = 1 - c_{29} x_2 / (x_3 x_4) - c_{30} x_2 / x_3 \ge 0, \qquad g_{12}(x) = 1 - c_{31} x_4 - c_{32} x_3 x_4 / x_2 \ge 0
g_{13}(x) = 1 - c_{33} x_1 x_6 - c_{34} x_1 - c_{35} x_3 \ge 0, \qquad g_{14}(x) = 1 - c_{36} x_3 / x_1 - c_{37}/x_1 - c_{38} x_6 \ge 0

where
1 \le x_1 \le 2000, \; 1 \le x_2 \le 120, \; 1 \le x_3 \le 5000, \; 85 \le x_4 \le 93, \; 90 \le x_5 \le 95, \; 3 \le x_6 \le 12, \; 145 \le x_7 \le 162        (136)

Table 45 lists the values of the a and c parameters utilized in Eq. (136). MANTA and EQUIL dominate the other algorithms in terms of optimization performance once again. MANTA finds a more desirable best objective function value than EQUIL, which is f(x) = 1227.4051154. REPTILE results in the least desirable solution, with an objective value of f(x) = 1629.1143496 for this problem. Table 46 reports the decision variable, constraint violation, and objective function values for the best solutions obtained by the optimizers. Figure 30 depicts the convergence chart of the decision variables for each optimizer considered in this study.

Table 39 Optimal decision variable, constraint satisfaction, and objective function values for the pressure vessel design problem

5.9.4 Speed reducer design problem

This problem investigates the weight minimization of a small propeller-type aircraft speed reducer. The physical representation of the speed reducer is shown in Fig. 31. The objective function is subject to eleven inequality constraints, which mainly stand for the limitations on the gear teeth bending and surface stresses, the transverse deflections of the two shafts, and the shaft stresses. There are seven design parameters in the optimization problem, namely the face width (x1), tooth module (x2), number of pinion teeth (x3), length of the first shaft (l1 - x4), length of the second shaft (l2 - x5), diameter of the first shaft (d1 - x6), and diameter of the second shaft (d2 - x7). Among these seven design variables, x3 takes an integer value and the rest of them


Fig. 22 Variations of each design variable throughout the iterations

take continuous values. The mathematical formalization of the problem is as follows [131],

\arg\min f(x) = 0.7854\, x_1 x_2^{2}\left(3.3333\, x_3^{2} + 14.933\, x_3 - 43.0934\right) - 1.508\, x_1\left(x_6^{2} + x_7^{2}\right) + 7.4777\left(x_6^{3} + x_7^{3}\right) + 0.7854\left(x_4 x_6^{2} + x_5 x_7^{2}\right)
\text{subject to}
g_1(x) = \frac{27}{x_1 x_2^{2} x_3} - 1 \le 0, \qquad g_2(x) = \frac{397.5}{x_1 x_2^{2} x_3^{2}} - 1 \le 0
g_3(x) = \frac{1.93\, x_4^{3}}{x_2 x_3 x_6^{4}} - 1 \le 0, \qquad g_4(x) = \frac{1.93\, x_5^{3}}{x_2 x_3 x_7^{4}} - 1 \le 0
g_5(x) = \frac{\sqrt{\left(745\, x_4/(x_2 x_3)\right)^{2} + 16900000}}{110\, x_6^{3}} - 1 \le 0, \qquad g_6(x) = \frac{\sqrt{\left(745\, x_5/(x_2 x_3)\right)^{2} + 157500000}}{85\, x_7^{3}} - 1 \le 0
g_7(x) = \frac{x_2 x_3}{40} - 1 \le 0, \qquad g_8(x) = \frac{5\, x_2}{x_1} - 1 \le 0, \qquad g_9(x) = \frac{x_1}{12\, x_2} - 1 \le 0
g_{10}(x) = \frac{1.5\, x_6 + 1.9}{x_4} - 1 \le 0, \qquad g_{11}(x) = \frac{1.1\, x_7 + 1.9}{x_5} - 1 \le 0
2.6 \le x_1 \le 3.6, \; 0.7 \le x_2 \le 0.8, \; 17 \le x_3 \le 28, \; 7.3 \le x_4 \le 8.3, \; 7.3 \le x_5 \le 8.3, \; 2.9 \le x_6 \le 3.9, \; 5.0 \le x_7 \le 5.5        (137)

Once again, EQUIL and MANTA outperformed the competing optimizers with their solution accuracy and consistent performances. It is realized that the standard deviation of MANTA is better than that of EQUIL. RUNGE shows a remarkable performance and takes the third spot with the best objective function value f(x) = 2823.6811432. REPTILE continues its poor performance on this problem and finds the worst optimal objective function value f(x) = 2858.3486103. BARNA fails to find an optimal solution to this problem and is excluded from the tables. Table 47 reports the optimal results for the best objective function outcomes obtained by the optimization algorithms. Figure 32 demonstrates the convergence map of the design variables.

5.9.5 Optimal design of a reactor

This geometric programming problem is taken from Dembo [132]. The problem is mathematically shown as follows,

\arg\min f(x) = 0.4\, x_1^{0.67} x_7^{-0.67} + 0.4\, x_2^{0.67} x_8^{-0.67} - x_1 - x_2 + 10
\text{subject to}
g_1(x) = 0.0588\, x_5 x_7 + 0.1\, x_1 \le 1, \qquad g_2(x) = 0.0588\, x_6 x_8 + 0.1\, x_1 + 0.1\, x_2 \le 1
g_3(x) = 4\, x_3 x_5^{-1} + 2\, x_3^{-0.71} x_5^{-1} + 0.0588\, x_3^{-1.3} x_7 \le 1
g_4(x) = 4\, x_4 x_6^{-1} + 2\, x_4^{-0.71} x_6^{-1} + 0.0588\, x_4^{-1.3} x_8 \le 1
0.1 \le x_i \le 10, \quad i = 1, 2, \ldots, 8        (138)

Table 40 Optimal results for the industrial refrigeration system design problem

Method MANTA EQUIL RUNGE AFRICAN GRAD PRO REPTILE HARRIS AQUILA SNAKE

Industrial refrigeration system design


x1 0.0010000 0.0010000 0.0010000 0.0010000 0.0010001 0.0010008 0.0010949 0.0010000 0.0021516 0.0090600
x2 0.0010000 0.0010042 0.0010808 0.0016756 0.0010015 0.0010381 0.0028442 0.0149557 0.0058789 3.6836187
x3 0.0010000 0.0010035 0.0015812 0.0024414 0.0022692 0.0014330 0.0028050 0.3191832 1.5852365 0.0723213
x4 0.0010000 0.0010000 0.0072535 0.0078691 0.0018380 1.5470493 1.3364879 0.3487126 0.6966927 2.9774804
x5 0.0010000 0.0010066 0.0012144 0.0018981 0.0010090 1.3695902 1.8416263 0.0484875 2.8303539 1.3059121
x6 0.0010000 0.0010023 0.0024580 0.0010021 0.0010876 0.1965114 0.1055420 1.8051969 2.5621386 0.8005476
x7 1.5240000 1.5240000 1.5240084 1.5240044 1.5290907 1.5241894 3.2519640 1.6645558 1.7568989 1.7708722
x8 1.5240000 1.5240000 1.5240019 1.5277296 1.5852457 2.3559381 3.1777915 1.8496361 2.9570379 4.2020563
x9 5.0000000 4.9999991 4.9999253 4.5780802 4.1193447 2.1615838 4.9990404 3.0778857 2.6799621 3.8284289
x10 2.0000000 2.0675930 2.0010886 2.0012602 2.0529566 2.0000006 3.1687752 2.0745889 4.8143853 3.9206844
x11 0.0010039 0.0014952 0.0015548 0.0018409 0.0087943 0.2174969 0.0067224 0.0600268 0.6216910 0.4704093
x12 0.0010039 0.0014853 0.0015080 0.0015856 0.0047384 0.0156561 0.0026164 0.0029890 0.0288870 0.0179707
x13 0.0073068 0.0089232 0.0088500 0.0078807 0.0105251 0.0125960 0.0013968 0.0096068 0.0418313 0.0181866
x14 0.0877167 0.1071064 0.1062362 0.0937640 0.1226587 0.1467965 0.0056050 0.0122957 0.1563728 0.0021988
g1 - 4.03E-10 - 9.74E-09 - 5.52E-06 - 2.92E-06 - 0.0033292 - 1.24E-04 - 0.5313601 - 0.0844404 - 0.1325625 - 0.1394071
g2 - 6.39E-11 - 1.93E-08 - 1.27E-06 - 0.0024413 - 0.0386348 - 0.3531239 - 0.5204216 - 0.1760541 - 0.4846194 - 0.6373204
g3 - 7.5616019 - 7.5616007 - 7.5614676 - 7.0078838 - 6.3878887 - 3.8362937 - 4.0743893 - 4.6980687 - 4.0506193 - 5.3230721
g4 - 0.9788958 - 0.9822061 - 0.9838687 - 0.9871544 - 0.9819466 - 0.9817961 - 0.8942126 - 0.9860232 - 0.9784221 - 0.9825803
g5 - 4.37E-11 - 1.33E-04 - 5.79E-05 - 0.0089018 - 0.0292265 - 0.0292041 - 0.6657294 - 0.8933838 - 0.6886098 - 0.9899287
g6 - 4.70E-11 - 3.88E-04 - 3.08E-05 - 0.2542703 - 0.5393765 - 0.7414806 - 0.9860867 - 0.3836139 - 0.2896134 - 0.4270450
g7 - 0.9801733 - 0.9765612 - 0.9771971 - 0.9766928 - 0.9627139 - 0.8847572 - 0.8262511 - 0.9569864 - 0.4753487 - 0.2966188
g8 - 0.9388614 - 0.9306597 - 0.9297989 - 0.9321975 - 0.8993652 - 0.8553496 - 0.5319464 - 0.8379880 - 0.6729428 - 0.5657287
g9 - 0.9901000 - 0.9901344 - 0.9937387 - 0.9959450 - 0.9956369 - 0.9930864 - 0.9961357 - 0.9999690 - 0.9999866 - 0.9987598
g10 - 0.9807000 - 0.9806200 - 0.9971242 - 0.9958905 - 0.9894838 - 0.9999870 - 0.9999589 - 0.9991723 - 0.9998371 - 0.9761228
g11 - 0.9702000 - 0.9703960 - 0.9754615 - 0.9843001 - 0.9704636 - 0.9999782 - 0.9999823 - 0.9993854 - 0.9999773 - 0.9997933
g12 - 0.9440000 - 0.9438956 - 0.9753762 - 0.9063689 - 0.9484334 - 0.9997042 - 0.9984909 - 0.9995361 - 0.9998715 - 0.7423231
g13 - 0.6000000 - 0.5999999 - 0.5999940 - 0.5631357 - 0.5144859 - 0.0747525 - 0.5999232 - 0.3502033 - 0.2537208 - 0.4775925
g14 - 4.05E-09 - 0.0326916 - 5.44E-04 - 6.30E-04 - 0.0257953 - 2.87E-07 - 0.3688413 - 0.0359536 - 0.5845783 - 0.4898850
g15 - 1.22E-08 - 0.0066237 - 0.0300965 - 0.1386707 - 0.4611995 - 0.9280169 - 0.6107897 - 0.9502051 - 0.9535348 - 0.9617977
f(x) 0.0322144 0.0328972 0.0365475 0.0451270 0.0631868 3.7994443 11.3389780 230.1824693 1199.7545529 891,031.82463

Fig. 23 Convergence chart of the decision variables for the industrial refrigeration system design problem

The results reported in Table 32 indicate that MANTA outperforms the competing algorithms in all performance metrics, namely, best, worst, mean, and standard deviation. EQUIL shows a significant performance and takes the second spot with the best objective function value f(x) = 3.9526041. Once again, REPTILE is the worst performer in this problem. Table 48 lists the optimal results of the best objective function values, and Fig. 33 depicts the variations of the design variables throughout the iterations for each algorithm.

6 Discussion and critical analysis

This research study presents a detailed discussion and comprehensive investigation of some newly emerging, well-reputed metaheuristic algorithms, with a supportive emphasis on their motivations, underlying algorithmic concepts, and the advantages and disadvantages of each optimizer's computer implementation. The exploration and exploitation capabilities of the eleven emerging metaheuristic optimizers are compared over eighteen multimodal and sixteen unimodal optimization problems, respectively. The MANTA algorithm proved its dominance with respect to the average ranking points assigned to the competing algorithms for the best and mean results on the multimodal test functions. The REPTILE algorithm took the first seat and proclaimed a clear dominance for the unimodal test functions based on the obtained average ranking points for the best and mean results. The optimizers are also applied to 500D and 1000D hyper-dimensional variants of the same benchmark optimization problems to see how well they scale with increasing dimensions. The PRO algorithm became the best predictor for the multimodal test functions when the problem dimensionality is increased from 30 to 500. REPTILE was again the best performer for the unimodal test functions, even in the hyper-dimensional 500D variant. It is also observed that the REPTILE algorithm shows the fastest convergence to the optimal solution compared to its counterparts in most of the cases for the standard 30D unimodal and multimodal problems, as far as the evolutionary tendencies of the convergence curves are concerned. An algorithm complexity analysis is conducted by assessing the runtimes of the optimizers on the 30D unimodal and multimodal problems. The results revealed that the RUNGE algorithm consumes the most computational resources under the same operational conditions in comparison to the other remaining algorithms. Moreover, the optimization capabilities of the competing optimizers are assessed over the CEC-2013 benchmark optimization problems. Contrary to its absolute failure in the standard test functions with varying dimensionalities, the EQUIL algorithm shows a significant prediction performance for different categories of the CEC-2013 test problems.

Table 41 Optimal results for the multi-spindle automatic lathe design problem
Method RUNGE SNAKE MANTA AQUILA EQUIL AFRICAN HARRIS PRO GRAD REPTILE

Multi-spindle automatic lathe design


x1 0.2606241 0.2158844 0.5550219 1.0423246 1.0965476 0.9302164 1.4284034 2.0581451 1.2326196 2.5279725
x2 0.0968701 0.0784091 0.0936382 0.0916384 0.0845877 0.0741677 0.0904200 0.0979304 0.0648186 0.0955211
x3 0.0050165 0.0066006 0.0035918 0.0048223 0.0018161 0.0046527 0.0068283 0.0010804 0.0017081 0.0020146
x4 704.5007086 708.4688796 599.3104301 729.2698803 909.0685298 528.2978402 527.3199803 368.4397734 749.8889154 305.7679490
x5 0.0016557 0.0010555 0.0002479 0.0010752 0.0005833 0.0011837 0.0006825 0.0011094 0.0011103 0.0004321
x6 0.0007260 0.0008763 0.0006036 0.0007517 0.0006223 0.0010621 0.0007388 0.0007985 0.0006813 0.0008713
x7 0.0012798 0.0015496 0.0010207 0.0026273 0.0002668 0.0015465 0.0005454 0.0005333 0.0003198 0.0008341
x8 0.0009385 0.0009478 0.0003758 0.0008433 0.0016225 0.0008826 0.0001918 0.0002826 0.0002586 0.0002874
x9 0.2260561 0.1614423 0.0613742 0.1491550 0.1058403 0.0413004 0.2319226 0.0093389 0.2163030 0.2536933
x10 0.0289439 0.0935577 0.1936258 0.1045549 0.1491597 0.2136996 0.0230774 0.2456611 0.0386956 0.0013986
g1 - 0.0484084 - 0.0555016 - 0.2066108 - 0.8290608 - 0.6422662 - 0.6250946 - 1.2201099 - 0.1739996 - 0.6470919 - 1.3104151
g2 - 0.0668255 - 4.03E-12 - 0.1418651 - 0.8520966 - 0.8969370 - 0.8641701 - 0.7840256 - 2.0352977 - 0.9728198 - 0.6077274
g3 - 0.1898412 - 0.0510689 - 0.0030715 - 0.8378745 - 0.8218637 - 0.5304384 - 1.3502047 - 1.1960262 - 1.1435442 - 2.4900183
g4 - 0.0356954 - 0.0287052 - 0.2277366 - 0.9294482 - 0.3021745 - 0.6787331 - 0.7488019 - 1.0640035 - 0.4271073 - 1.7502639
g5 - 0.0715646 - 0.0297320 - 2.39E-11 - 0.8390708 - 1.0117989 - 0.6621248 - 0.1925435 - 0.8574611 - 0.5880711 - 1.1053310
g6 - 968.6232300 - 784.0498773 - 936.3772613 - 916.3436153 - 845.8503046 - 741.6693951 - 904.1710700 - 979.3033815 - 648.1231045 - 955.2004277
g7 - 968.6013105 - 783.6483443 - 936.0762352 - 915.9616743 - 845.0140007 - 741.1059424 - 904.1654311 - 979.1632849 - 648.0405030 - 955.2101084
g8 - 968.6516891 - 784.0348557 - 936.3486213 - 916.3002331 - 845.8514955 - 741.6404105 - 904.1816123 - 979.2938895 - 648.1642459 - 955.2000211
g9 - 950.8382903 - 764.9934077 - 936.1043615 - 901.5247850 - 317.9049651 - 738.8602560 - 904.1873711 - 979.2978875 - 647.9238208 - 955.2082617
g10 - 968.0983613 - 783.2441909 - 936.0898361 - 915.7682445 - 845.5804720 - 741.3671344 - 903.7108766 - 979.2781983 - 647.9983349 - 955.1726472
g11 - 965.0429415 - 778.3202267 - 935.0623689 - 912.5748951 - 844.3303474 - 740.3092356 - 901.6873176 - 979.2594593 - 647.3987544 - 955.1411602
g12 - 955.3284334 - 757.4960433 - 933.3462755 - 902.3463284 - 842.5653225 - 738.3206494 - 895.7729710 - 979.2825062 - 646.9063799 - 955.1653694
g13 - 968.5597347 - 783.9203088 - 936.2931746 - 916.2392033 - 845.7716695 - 741.5894932 - 904.0873468 - 979.2850888 - 648.1100450 - 955.1886268
g14 - 6.5176465 - 2.0619207 - 15.9294033 - 7.1655987 - 16.1526161 - 13.4434581 - 9.2926924 - 25.5500627 - 18.3440715 - 25.0321875
h1 0.0000000 0.0000000 0.0000000 0.0012901 0.0000000 3.89E-16 0.0000000 9.44E-09 1.43E-06 9.18E-05
f(x) - 4396.090084 - 4347.893277 - 4321.037272 - 4216.306314 - 4174.764104 - 4153.929410 - 4143.0064863 - 4082.040639 - 4029.689964 - 4010.273284
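The g- and h-rows of Table 41 (and of the later design tables) are feasible when each inequality satisfies g_i(x) ≤ 0 and each equality h_j(x) is approximately zero. How each optimizer enforces these conditions is not restated here; a static penalty formulation, one of the constraint-handling mechanisms of the kind surveyed in [121], is sketched below purely as an illustrative assumption, not as the scheme actually used to produce the tabulated results.

```python
import numpy as np


def penalized_objective(f, gs, hs, x, rho=1e6, eps=1e-4):
    """Static-penalty wrapper (illustrative assumption only).

    f   : objective function
    gs  : inequality constraints, feasible when g(x) <= 0
    hs  : equality constraints, feasible when |h(x)| <= eps
    rho : penalty coefficient
    """
    g_viol = sum(max(0.0, g(x)) ** 2 for g in gs)
    h_viol = sum(max(0.0, abs(h(x)) - eps) ** 2 for h in hs)
    return f(x) + rho * (g_viol + h_viol)


# Tiny usage example on a toy problem: minimize x0 + x1 subject to x0 * x1 >= 1,
# rewritten as g1(x) = 1 - x0 * x1 <= 0.
f = lambda x: x[0] + x[1]
g1 = lambda x: 1.0 - x[0] * x[1]
x_trial = np.array([2.0, 0.4])
print(penalized_objective(f, [g1], [], x_trial))
```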

Fig. 24 Variations of the decision variables for each optimizer

MANTA algorithm is the second-best optimizer considering the accurate predictions, while AFRICAN sits in third place for the CEC-2013 problems. Finally, detailed inspection of the contestant algorithms is hinged upon the exhaustive evaluation on constrained engineering design problems. MANTA algorithm provides the best optimal results, satisfying all imposed design constraints, in six out of fourteen constrained engineering design cases and becomes the best performer for this phase of the comprehensive performance evaluations. EQUIL is the second-best estimator according to the satisfactory optimization accuracies obtained for four engineering design problems. RUNGE algorithm acquires the best results for three different design cases and occupies the third-best seat. It is interesting to see that BARNA algorithm is not able to find any feasible result within the consecutive algorithm runs for each design problem and is therefore removed from the comparative investigations, as seen from the tabulated results. The authors aim to include various types of optimization problems, such as multi-objective or dynamic ones, and to compare the performances of a broader set of metaheuristic optimizers as future work of this study.
It is interesting to see the complete failure of EQUIL algorithm in the unconstrained test cases, given that it is one of the best-performing algorithms for the engineering design problems. This is because of the algorithmic structure of the EQUIL optimizer, which requires a high number of function evaluations to achieve the global answer of the problem. It is observed that the total number of 50 iterations is not sufficient to get close to the optimum solution since EQUIL suffers from an improper balance between the exploration and exploitation phases. Resulting from its unbalanced algorithmic structure, EQUIL places too much emphasis on exploration in the early phases of the iterations, neglecting the intensification of previously visited regions, which results in premature convergence to local points. When the number of function evaluations is increased, as employed for the constrained engineering problems, the overall prediction accuracy of EQUIL is enormously enhanced compared to that observed for the unconstrained problems with varying dimensionalities. Similar probing inclinations are also observed for RUNGE and SNAKE algorithms, both of which give priority to exploration rather than exploitation in the early phases of the iterative process. On the contrary, PRO and REPTILE algorithms are two of the best predictors for the unconstrained test problems; however, their overall prediction accuracy deteriorates when constrained problems are in consideration. This deterioration in optimization ability occurs because the algorithmic design of these two optimizers is predicated upon solving artificially generated optimization test functions, such as the multidimensional Rastrigin, Sphere, Griewank, etc. benchmark problems, disregarding their comparative performance in optimizing constrained engineering design problems or the test cases used in CEC competitions, which can effectively simulate the

Table 42 Optimal results for the heat exchanger design problem
Method MANTA EQUIL GRAD AFRICAN PRO RUNGE HARRIS AQUILA SNAKE REPTILE

Optimal design of a heat exchanger


x1 576.5009773 265.3169286 626.5627719 244.1960388 519.7523482 379.1278458 300.5250290 199.7644298 646.1363954 509.8807756
x2 1588.6240592 1318.0339672 1411.3795294 2049.6872336 1825.0776917 1696.2122167 2686.2590541 3361.1929235 2325.8988717 2083.2365462
x3 4902.3222998 5563.0055756 5285.6532618 5034.6842570 4999.4942742 5336.3496054 5023.3133758 5250.5254258 6708.6591297 9219.7632664
x4 181.7829032 148.2955492 166.5657340 138.2102607 156.9583529 139.5673965 87.1623227 130.7219090 134.4161974 104.6663956
x5 303.9071858 277.4818425 289.1709087 298.6126910 300.0493252 286.5518230 299.2672600 290.2624994 285.8696235 231.7346875
x6 218.2170968 251.6918267 197.8276046 242.0493558 224.0121113 191.3907792 222.8639110 268.6973563 228.0647016 160.0524682
x7 277.8757153 270.8135372 275.1741215 236.6889248 254.9775787 247.8982255 187.1436051 216.8618840 221.3548358 233.9130960
x8 403.9071461 377.4811195 388.9381421 398.6126739 400.0363699 386.5514228 399.1817808 390.1308359 375.8784237 326.9663840
g1 - 1.60E-12 - 3.58E-06 - 0.0469847 - 0.0481516 - 0.1459272 - 0.0230986 - 0.7110261 - 0.1508723 - 0.3669033 - 0.3275552
g2 - 4.61E-12 - 8.88E-09 - 8.03E-05 - 0.0027774 - 6.23E-05 - 5.12E-05 - 0.0068517 - 0.1236185 - 0.0250437 - 0.2265892
g3 - 1.45E-13 - 5.44E-07 - 1.28E-04 - 3.33E-08 - 3.99E-06 - 6.00E-06 - 3.50E-05 - 8.54E-06 - 0.0271698 - 0.0687831
g4 - 2.27E-12 - 3.16E-05 - 0.0890167 - 0.0493510 - 0.0475738 - 0.1726046 - 0.2249344 - 0.0014518 - 0.0937978 - 0.3382028
g5 - 5.17E-09 - 4.24E-07 - 0.0055518 - 0.0072716 - 0.0048286 - 0.0127934 - 0.0018786 - 0.0589938 - 0.0679793 - 0.0975465
g6 - 3.97E-07 - 7.23E-06 - 0.0023277 - 1.72E-07 - 1.30E-04 - 4.00E-06 - 8.55E-04 - 0.0013166 - 0.0999120 - 0.0476830
f(x) 7067.4473363 7146.3564714 7323.5955632 7328.5675295 7344.3243141 7411.6896679 8010.0974590 8811.4827792 9680.6943968 11,812.880588

Fig. 25 Variations of the decision variables with increasing iterations for each optimizer

complexities and challenges posed by complex real-world problems. AFRICAN algorithm shows the third-best predictions for the CEC-2013 problems while yielding comparatively inferior results for the standard benchmark functions. This follows from the stability and robustness of the AFRICAN algorithm, which is the primary requirement for obtaining accurate solutions of CEC-type problems involving different shifted, rotated, discontinuous, and composite test instances. Nevertheless, as occurred for EQUIL algorithm, AFRICAN cannot cope, within fifty iterations, with the nonlinearities of the test functions caused by the huge number of local optimum points in the search domain and tends to collapse at a certain stage of the iterations. This search behavior can be explained by the prevailing dominance of the exploration mechanism at the early phases, which neglects the influence of intensification on promising solutions previously explored. Among the compared algorithms, MANTA algorithm manages to maintain the most proper balance between the exploration and exploitation phases, which can also be verified by the accuracy of the estimation results obtained for the constrained and unconstrained test functions as well as the outcomes of the Friedman analysis.
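The Friedman analysis mentioned above ranks the algorithms on each problem and tests whether the mean ranks differ significantly. A minimal sketch using SciPy is given below; the score matrix is invented purely for illustration (rows are problems, columns are three of the compared optimizers), not taken from the reported results.

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# Hypothetical mean objective values: rows = benchmark problems,
# columns = MANTA, EQUIL, REPTILE (lower is better).
scores = np.array([
    [1.2e-8, 3.4e-3, 5.6e-6],
    [2.3e-5, 1.1e-2, 4.4e-4],
    [7.8e-9, 9.0e-4, 1.2e-5],
    [5.5e-7, 2.0e-3, 3.1e-6],
])

# Average rank per algorithm (rank 1 = best on a given problem).
avg_ranks = rankdata(scores, axis=1).mean(axis=0)
print("average ranks:", avg_ranks)

# Friedman test over the per-problem samples of each algorithm.
stat, p_value = friedmanchisquare(*scores.T)
print(f"Friedman statistic = {stat:.3f}, p-value = {p_value:.4f}")
```

A small p-value indicates that at least one optimizer's average rank differs from the others, which is the kind of evidence the ranking-based comparisons in this study rely on.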

Fig. 26 Schematic representation of the hydrostatic thrust bearing

Table 43 Comparison of the optimal results for the hydrostatic thrust bearing model design optimization problem

As previously mentioned in the related section of this study, MANTA algorithm consists of three complementary search mechanisms, namely the chain foraging, cyclone foraging, and somersault foraging strategies. It can be easily observed that the perturbation equations of MANTA (particularly the equations of the cyclone and chain foraging phases) are inspired by the search equations of Salp Swarm Optimization [15], which are effective tools in generating diverse solution candidates. The first part of the cyclone foraging phase can produce high solution diversity in the population thanks to the randomness created through the commanding parameter xrand, which generates a random individual between the defined search limits. The second part of this phase helps the algorithm to pivot around the best solution obtained so far and to focus on the promising areas located near these fertile regions, which is conducive to promoting exploitation among the population individuals. Chain foraging gives emphasis to diversifying the population individuals as much as possible rather than giving prominence to exploitation. On the contrary, somersault foraging is more effective in the intensification of the updated best results obtained in the chain and cyclone foraging phases instead of random visitations to the unexplored regions of the search domain. The collaborative cooperation of these complementary yet contradictive probing mechanisms yields superior predictive results, producing the most accurate solutions compared to those obtained from the other contestant algorithms over a wide range of benchmark problems, including constrained and unconstrained test instances. One can see the established balance between these phases from the tendencies of the convergence curves obtained for MANTA algorithm. Rather than quick stepwise declines at the early phases of the iterations followed by rapid falls, which eventually entail premature convergence to local optimum solutions, gradual decreases accompanied by quick small declines are observed for most of the benchmark cases for MANTA optimizer, which is clear evidence of a successful balance between the exploration and exploitation phases.
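The three foraging strategies discussed above can be summarized in a compact per-individual update. The sketch below follows the commonly cited forms of the Manta Ray Foraging Optimization equations [39]; it is an illustrative reconstruction under those assumed forms (fitness evaluation and best-solution bookkeeping are omitted), not the exact implementation benchmarked in this study.

```python
import numpy as np


def mrfo_step(pop, best, t, T, lo, hi, rng):
    """One illustrative MRFO-style iteration over a (N, dim) population array."""
    N, dim = pop.shape
    new_pop = pop.copy()
    for i in range(N):
        # Neighbor term: the best solution for the first individual,
        # otherwise the previously updated individual in the chain.
        prev = best if i == 0 else new_pop[i - 1]
        r = rng.random(dim)
        if rng.random() < 0.5:                       # cyclone foraging
            r1 = rng.random(dim)
            beta = 2.0 * np.exp(r1 * (T - t + 1) / T) * np.sin(2.0 * np.pi * r1)
            if t / T < rng.random():                 # exploration: spiral around a random point
                x_rand = rng.uniform(lo, hi, dim)
                new_pop[i] = x_rand + r * (prev - pop[i]) + beta * (x_rand - pop[i])
            else:                                    # exploitation: spiral around the best point
                new_pop[i] = best + r * (prev - pop[i]) + beta * (best - pop[i])
        else:                                        # chain foraging
            alpha = 2.0 * r * np.sqrt(np.abs(np.log(r + 1e-12)))
            new_pop[i] = pop[i] + r * (prev - pop[i]) + alpha * (best - pop[i])
    # Somersault foraging: contract the whole population toward the best-so-far.
    S = 2.0
    r2, r3 = rng.random((2, N, dim))
    new_pop = new_pop + S * (r2 * best - r3 * new_pop)
    return np.clip(new_pop, lo, hi)
```

In this form the cyclone move alternates between a random pivot (diversification) and the incumbent best (intensification), while the somersault move pulls the population toward the best-so-far solution, which is consistent with the balance described above.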
Fig. 27 Convergence chart of the design variables for the hydrostatic thrust bearing design problem

Fig. 28 Physical design of the stepped cantilever beam

7 Conclusive comments

Based on the overall analysis of the competitive algorithms over a variety of test problems with different functional characteristics, the following decisive conclusions can be drawn:

• This research study comparatively investigates the overall optimization performances of the eleven newly emerging metaheuristics, including African Vultures Optimization Algorithm (AFRICAN), Aquila

Optimizer (AQUILA), Barnacles Mating Optimizer (BARNA), Equilibrium Optimizer (EQUIL), Gradient-based Optimizer (GRAD), Harris Hawks Optimization Algorithm (HARRIS), Manta Ray Foraging Optimizer (MANTA), Poor and Rich Optimization Algorithm (PRO), Reptile Search Algorithm (REPTILE), Runge–Kutta Optimizer (RUNGE), and Snake Optimizer (SNAKE) over various benchmark suites having different functional complexities.
• MANTA algorithm is the best optimizer among the compared methods when all optimum results obtained for the different test functions are averaged, relying on its superior search mechanism, which enables it to pinpoint the fertile areas during the course of iterations thanks to the well-balanced exploration and exploitation mechanism of the algorithm.
• EQUIL is the second-best algorithm despite its unsatisfactory performance on the standard test functions, which is compensated by the accurate and robust predictions

Table 44 Best result decision variables, constraint satisfaction, and objective function values for the stepped cantilever beam design problem
Method MANTA EQUIL RUNGE AFRICAN GRAD HARRIS AQUILA PRO SNAKE REPTILE

Stepped cantilever beam design


x1 3.0000000 3.0000000 3.0000000 3.0000000 3.0000000 3.0000000 3.0000000 3.0000000 3.0000000 3.0000000
x2 60.0000000 60.0000000 60.0000000 60.0000000 60.0000000 60.0000000 60.0000000 60.0000000 60.0000000 60.0000000
x3 3.1000000 3.1000000 3.1000000 3.1000000 3.1000000 3.1000000 3.1000000 3.1000000 3.1000000 3.1000000
x4 55.0000000 55.0000000 55.0000000 55.0000000 55.0000000 55.0000000 55.0000000 55.0000000 55.0000000 55.0000000
x5 2.6000000 2.6000000 2.6000000 2.6000000 2.6000000 2.6000000 2.6000000 2.8000000 2.6000000 2.8000000
x6 50.0000000 50.0000000 50.0000000 50.0000000 50.0000000 50.0000000 50.0000000 50.0000000 50.0000000 50.0000000
x7 2.2808874 2.2808874 2.2808860 2.2809290 2.2654172 2.2939434 2.2920966 2.4676595 2.5624042 2.6646506
x8 45.6177483 45.6177483 45.6177159 45.6185129 45.2447343 45.8534010 45.5249091 42.1676874 43.5516311 46.6463792
x9 1.7497570 1.7497570 1.7497631 1.7503382 1.8045581 1.8970779 2.0671577 1.9000401 2.4129447 2.0056417
x10 34.9951402 34.9951402 34.9951769 34.9893298 35.6010094 33.6089233 33.1511825 35.2786894 38.3240675 36.6419298
h1 - 2.31E-06 - 2.83E-08 - 0.0780772 - 9.13E-05 - 883.2646749 - 0.0369767 - 794.6599239 - 1313.740478 - 5534.927130 - 2859.332031
h2 - 1359.050067 - 1359.050069 - 1359.024537 - 1359.704096 - 1062.004927 - 1559.8548911 - 1369.511086 - 325.6576590 - 1654.895731 - 3651.554714
h3 - 153.8461538 - 153.8461538 - 153.8461538 - 153.8461538 - 153.8461538 - 153.8461538 - 153.8461538 - 1142.857143 - 153.8461538 - 1142.857143
h4 - 1203.412423 - 1203.412423 - 1203.412423 - 1203.412423 - 1203.412423 - 1203.4124234 - 1203.412423 - 1203.412423 - 1203.412423 - 1203.412423
h5 - 111.1111111 - 111.1111111 - 111.1111111 - 111.1111111 - 111.1111111 - 111.1111111 - 111.1111111 - 111.1111111 - 111.1111111 - 111.1111111
h6 - 7.28E-12 - 3.57E-12 - 1.13E-09 - 7.32E-10 - 2.14E-04 - 0.0012708 - 1.82E-04 - 2.01E-04 - 0.0523128 - 0.1382296
h7 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000
h8 - 2.2580645 - 2.2580645 - 2.2580645 - 2.2580645 - 2.2580645 - 2.2580645 - 2.2580645 - 2.2580645 - 2.2580645 - 2.2580645
h9 - 0.7692308 - 0.7692308 - 0.7692308 - 0.7692308 - 0.7692308 - 0.7692308 - 0.7692308 - 2.1428571 - 0.7692308 - 2.1428571
h10 - 1.46E-09 - 2.64E-09 - 2.20E-06 - 2.92E-05 - 0.0280782 - 0.0111018 - 0.1383112 - 2.9118696 - 3.0036062 - 2.4943730
h11 - 4.61E-10 - 2.06E-08 - 4.86E-05 - 0.0099605 - 0.2716194 - 2.2838466 - 3.9629159 - 1.4326611 - 4.1173036 - 1.7305702
f(x) 64,578.194018 64,578.194021 64,578.208116 64,579.574882 64,724.228803 64,944.3852349 65,337.621182 66,158.642092 68,457.073663 68,828.688275

Fig. 29 Convergence map of the stepped cantilever beam design problem

Table 45 Parameter values of the optimal operation of alkylation unit problem


a1 = 1.715 a2 = 0.035 a3 = 4.0565
a4 = 10 a5 = 3000 a6 = 0.063
c1 = 0.0059553571 c2 = 0.88392857 c3 = -0.1175625
c4 = 1.1088 c5 = 0.1303533 c6 = -0.0066033
c7 = 0.00066173269 c8 = 0.017239878 c9 = -0.0056595559
c10 = -0.19120592 c11 = 56.85075 c12 = 1.08702
c13 = 0.32175 c14 = -0.03762 c15 = 0.006198
c16 = 2462.3121 c17 = -25.125634 c18 = 161.18996
c19 = 5000 c20 = -489,510 c21 = 44.333333
c22 = 0.33 c23 = 0.022556 c24 = -0.007595
c25 = 0.00061 c26 = -0.0005 c27 = 0.819672
c28 = 0.819672 c29 = 24,500 c30 = -250
c31 = 0.010204082 c32 = 0.000012244898 c33 = 0.0000625
c34 = 0.0000625 c35 = -0.00007625 c36 = 1.22
c37 = 1.0 c38 = -1.0

Table 46 Optimal results for the operation of alkylation unit problem
Method MANTA EQUIL RUNGE AFRICAN HARRIS PRO GRAD SNAKE AQUILA REPTILE

Optimal operation of alkylation unit


x1 1697.2402921 1694.5883477 1693.5125744 1686.4242210 1693.8994497 1695.4944378 1667.9980927 1686.3157475 1551.5227945 1656.1159175
x2 54.1214406 55.9616864 59.9663213 55.5265191 59.6187899 60.7578611 58.7934979 71.3141298 64.2651106 80.8054466
x3 3030.5248268 3028.1218603 3027.3457303 3007.8366620 3027.7862475 3026.5227062 2971.3999374 3011.8723752 2756.5580469 2907.3701241
x4 90.1710409 90.3265697 90.8394454 90.0891344 90.3542413 90.9572797 89.8675361 90.5521336 91.5858912 90.7076945
x5 94.9996224 94.9754475 94.9996911 94.9999423 94.9349034 94.9975352 94.9823577 94.9989713 94.9962329 94.5927093
x6 10.4323750 10.1956399 9.7814951 10.5216390 10.0458745 10.0973498 10.7650973 10.1741622 9.2202223 8.8517483
x7 153.5342102 153.4409761 153.4973193 153.1854959 153.0662757 153.5277166 153.0257288 152.9879588 153.4497479 150.6361253
g1 - 3.50E-07 - 3.49E-05 - 1.99E-05 - 0.0011245 - 1.44E-05 - 0.0020360 - 7.76E-04 - 8.84E-04 - 0.0072152 - 0.0222430
g2 - 0.0198998 - 0.0198784 - 0.0198876 - 0.0192009 - 0.0198911 - 0.0186353 - 0.0194169 - 0.0193511 - 0.0153971 - 0.0058513
g3 - 1.95E-08 - 1.99E-06 - 4.28E-05 - 6.12E-10 - 2.23E-14 - 0.0026313 - 2.75E-04 - 7.52E-04 - 6.53E-04 - 1.30E-07
g4 - 0.0199000 - 0.0198988 - 0.0198744 - 0.0199000 - 0.0199000 - 0.0183253 - 0.0197356 - 0.0194501 - 0.0195095 - 0.0198999
g5 - 0.0094362 - 0.0095263 - 0.0093921 - 0.0098263 - 0.0094306 - 0.0093800 - 0.0065577 - 0.0028491 - 0.0078950 - 0.0102166
g6 - 8.06E-09 - 4.01E-06 - 4.93E-05 - 2.63E-06 - 6.57E-04 - 1.03E-05 - 0.0044480 - 0.0091985 - 0.0019468 - 0.0031070
g7 - 5.57E-10 - 6.94E-05 - 1.29E-04 - 0.0012147 - 9.45E-04 - 5.87E-07 - 0.0015849 - 0.0018907 - 2.58E-04 - 0.0058086
g8 - 0.0232808 - 0.0231180 - 0.0229991 - 0.0206251 - 0.0211867 - 0.0232786 - 0.0198083 - 0.0191467 - 0.0227158 - 0.0104482
g9 - 1.72E-09 - 1.40E-04 - 7.54E-05 - 0.0084317 - 1.14E-07 - 0.0015684 - 0.0214451 - 0.0059157 - 0.0942610 - 0.0545622
g10 - 0.5406736 - 0.5410270 - 0.5412006 - 0.5401564 - 0.5411626 - 0.5405391 - 0.5396005 - 0.5408021 - 0.5383520 - 0.5328109
g11 - 0.6123601 - 0.6075074 - 0.6096469 - 0.5947362 - 0.5834473 - 0.6114011 - 0.5523614 - 0.5131309 - 0.5918161 - 0.4413999
g12 - 0.0180614 - 0.0184518 - 0.0169124 - 0.0209671 - 0.0218296 - 0.0163848 - 0.0273696 - 0.0291696 - 0.0173467 - 0.0344482
g13 - 0.0183596 - 0.0451442 - 0.0896728 - 0.0149494 - 0.0614562 - 0.0548039 - 5.93E-05 - 0.0519574 - 0.2191308 - 0.2019596
g14 - 9.2534023 - 9.0149872 - 8.6000163 - 9.3451044 - 8.8645767 - 8.9190132 - 9.5911692 - 8.9945678 - 8.0520293 - 7.7093913
f(x) 1227.4051154 1235.4962276 1245.6379218 1267.9188010 1270.1414350 1278.3725073 1349.9751877 1397.4618629 1488.8331829 1629.1143496

Fig. 30 Variations of the decision variables throughout the iterations

Fig. 31 Schematic representation of the speed reducer

obtained for the CEC-2013 problems and the engineering design cases.
• REPTILE and PRO algorithms are found to be very competitive algorithms for the standard test functions with varying dimensionalities, yet they are considerably outperformed by the other methods for the CEC-2013 and engineering design problems.
• Most of the algorithms experience difficulties in solving constrained engineering problems due to the nature of the design parameters, such as mixed-integer or continuous decision components, and the various types of problem constraints imposed on the objective function. The exhaustive comparison among the algorithms indicates that MANTA is much better than the other algorithms in terms of solution feasibility and proves its superior ability in coping with the challenging imposed design constraints without any significant violation. Another remarkable conclusion of the comparative study is that metaheuristic optimizers can be an important and indispensable alternative to traditional problem solvers for complex real-world engineering problems, relying on their stochastic nature, which enables them to circumvent the singular points on the

Table 47 Optimal results for speed reducer design optimization problem
Method EQUIL MANTA RUNGE PRO AFRICAN GRAD HARRIS SNAKE AQUILA REPTILE

Optimal design of a speed reducer


x1 3.5000000 3.5000000 3.5000306 3.5002957 3.5004206 3.5011008 3.5017320 3.5034209 3.5226152 3.5053195
x2 0.7000000 0.7000000 0.7000004 0.7000051 0.7000001 0.7001421 0.7002039 0.7000343 0.7004445 0.7009941
x3 17.0000000 17.0000000 17.0000000 17.0000000 17.0000000 17.0000000 17.0000000 17.0000000 17.0000000 17.0000000
x4 7.3000000 7.3000000 7.3000299 7.3832650 7.5280721 7.5374026 7.3472098 7.3601485 7.7747142 7.7039355
x5 7.8000000 7.8000000 7.8000877 7.8026744 7.8112367 7.8416630 7.8644416 7.9041140 7.9969049 8.0058544
x6 3.3502147 3.3502147 3.3502208 3.3506267 3.3520000 3.3512317 3.3502991 3.3650752 3.3636776 3.3602034
x7 5.0000000 5.0000000 5.0000018 5.0000307 5.0000072 5.0004722 5.0030873 5.0001423 5.0147948 5.0311552
g1 - 0.0739153 - 0.0739153 - 0.0739245 - 0.0740069 - 0.0740267 - 0.0745822 - 0.0749124 - 0.0749101 - 0.0810281 - 0.0779414
g2 - 0.1979985 - 0.1979985 - 0.1980065 - 0.1980779 - 0.1980950 - 0.1985761 - 0.1988621 - 0.1988600 - 0.2041583 - 0.2014852
g3 - 0.4991722 - 0.4991722 - 0.4991701 - 0.4820971 - 0.4519179 - 0.4494846 - 0.4895927 - 0.4957224 - 0.4049814 - 0.4191424
g4 - 0.8768557 - 0.8768557 - 0.8768518 - 0.8767330 - 0.8763235 - 0.8749445 - 0.8741262 - 0.8718793 - 0.8689347 - 0.8702981
g5 - 7.31E-13 - 2.51E-12 - 5.44E-06 - 2.29E-04 - 0.0012103 - 5.10E-04 - 1.08E-14 - 0.0130909 - 0.0111585 - 0.0082228
g6 - 0.6236653 - 0.6236653 - 0.6236656 - 0.6236703 - 0.6236588 - 0.6237430 - 0.6243167 - 0.6236223 - 0.6268478 - 0.6304743
g7 - 0.7025000 - 0.7025000 - 0.7024998 - 0.7024979 - 0.7025000 - 0.7024396 - 0.7024133 - 0.7024854 - 0.7023111 - 0.7020775
g8 - 9.61E-13 - 2.89E-13 - 8.11E-06 - 7.73E-05 - 1.20E-04 - 1.11E-04 - 2.03E-04 - 9.28E-04 - 0.0057892 - 9.96E-05
g9 - 0.5833333 - 0.5833333 - 0.5833300 - 0.5833011 - 0.5832833 - 0.5832869 - 0.5832486 - 0.5829465 - 0.5809071 - 0.5832918
g10 - 0.0513258 - 0.0513258 - 0.0513284 - 0.0619407 - 0.0797113 - 0.0810034 - 0.0574043 - 0.0560499 - 0.1066532 - 0.0991221
g11 - 0.0512821 - 0.0512821 - 0.0512925 - 0.0516029 - 0.0526458 - 0.0562564 - 0.0586241 - 0.0637589 - 0.0726069 - 0.0713957
f(x) 2823.6624718 2823.6624718 2823.6811432 2824.7119944 2826.5223933 2828.1824161 2828.7356969 2831.6398701 2854.6591574 2858.3486103

Fig. 32 Fluctuations of the design variables for the speed reducer design problem

search space and helps them to approach the global optimum of the problem more rapidly and accurately.
• In most of the literature regarding the development of a novel metaheuristic algorithm and its associated performance evaluations based on comparative studies, the judgment of selecting the best optimizer among the compared methods is usually decided by the developer's own concluding remarks, lacking a descriptive, thorough analysis as to which algorithm would be a better option for a specific type of problem. It is also observed that there are many factors influencing the optimization accuracies of metaheuristic optimization algorithms, including tunable algorithm parameters, the total number of function evaluations, the design of the algorithmic structure of the optimizer, and its successful implementation into a computer code, to name a few. Researchers should avoid using algorithm-specific tunable parameters, which not only significantly jeopardize the overall optimization performance of the algorithm but also require a trial-and-error-based parameter adjustment procedure, dropping the applicability of the algorithm below a certain level. In addition, most of the algorithms suffer from an imbalance between the exploration and exploitation mechanisms. Reducing the instability between these two contradictive but complementary phases entails a quick and accurate convergence to the optimal solution as well as preventing entrapment in the local pitfalls residing in specific regions of the search space. A final suggestion for the direction of future research is the development of a novel, reliable performance measure that simultaneously takes into account the solution accuracy and the algorithmic runtime of the intended metaheuristic optimizer.
• This research study deals with the performance comparison of the newly emerged nature-inspired algorithms over unconstrained and constrained multidimensional optimization problems. After an exhaustive investigation covering twenty-five recently developed algorithms, eleven best-performing methods are considered based on their success over a wide range of optimization benchmark problems, including a set of unimodal and multimodal test functions along with fourteen complex engineering design cases. However, it would be a more comprehensive and conducive survey if more recently developed optimizers were included and reviewed, but the majority of them are neglected due to space restrictions. Furthermore, much more reliable performance evaluations could be conducted by employing more unconstrained test evaluations. Possible future work regarding the comprehensive optimization performance assessment of the developed metaheuristic algorithms should include a test suite

Table 48 Optimal outcomes for reactor design optimization problem
Method MANTA EQUIL RUNGE AFRICAN HARRIS PRO AQUILA GRAD SNAKE REPTILE

Optimal design of a reactor


x1 6.5126221 6.4606660 6.2687921 5.8001422 5.9718755 6.1325595 4.6470025 5.2303541 7.3722750 7.2322228
x2 2.2146349 2.2862768 2.3354829 2.7579448 2.6352740 2.2262722 3.7239778 2.9426203 1.8445410 1.6258861
x3 0.6659633 0.6676607 0.7697469 0.6587435 0.7516843 0.7375335 0.8321257 0.7880140 0.8524945 0.6995084
x4 0.5948357 0.5954836 0.6383987 0.6244425 0.6732460 0.7178269 0.6228037 0.3853462 0.4918527 0.5925678
x5 5.9246283 5.9334286 6.0131780 6.0603508 6.1267189 6.0117907 6.3863185 6.1703804 6.4642286 6.4786571
x6 5.5214569 5.5176782 5.5542588 5.5579233 5.6493228 5.6556417 5.6182516 6.3303501 5.5023688 6.8850803
x7 1.0010611 1.0144683 1.0552794 1.1785805 1.1181407 1.0936712 1.4051017 1.3132098 0.6913129 0.6742231
x8 0.3920213 0.3862222 0.4273624 0.4412136 0.4139328 0.4934076 0.4880816 0.4904286 0.2420677 0.2799351
g1 - 2.67E-09 - 1.43E-09 - 5.21E-07 - 2.25E-07 - 1.05E-06 - 1.39E-04 - 0.0076623 - 5.08E-04 - 6.78E-06 - 0.0199358
g2 - 2.64E-11 0.0000000 - 3.27E-08 - 1.05E-07 - 0.0017848 - 3.33E-05 - 0.0016626 - 1.53E-04 - 1.11E-16 - 8.59E-04
g3 - 6.87E-09 - 8.73E-08 - 2.50E-04 - 0.0021180 - 0.0141800 - 7.90E-04 - 0.0170750 - 4.92E-05 - 0.0759487 - 0.1071559
g4 - 1.93E-09 - 1.45E-05 - 1.16E-06 - 3.22E-05 - 0.0137544 - 1.87E-04 - 0.0052299 - 0.0350926 - 0.0050813 - 0.2020497
f(x) 3.9515829 3.9526041 3.9636216 3.9710219 4.0043941 4.0086553 4.0812704 4.1654223 4.2959290 4.4029289

Fig. 33 Convergence map of the optimal design of a reactor problem

composed of fifty-seven constrained test problems used in CEC'2020 due to their challenging and complex natures.

Funding The authors received no financial support for the research, authorship and publication of this article.

Data availability Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.

Declarations

Conflict of interest On behalf of all authors, the corresponding author states that there is no conflict of interest.

References

1. Abualigah L, Abd-Elaziz M, Khasawneh AK, Alshinwan M, Ali Ibrahim R, Al-qaness MAAA, Mirjalili S, Sumari P, Gandomi AH (2022) Metaheuristic optimization algorithms for solving real-world mechanical engineering design problems: a comprehensive survey, applications, comparative analysis, and results. Neural Comput Appl 34:4081–4110
2. Yıldız AR, Abderazek H, Mirjalili S (2020) A comparative study of recent non-traditional methods for mechanical design optimization. Arch Comput Methods Eng 27:1031–1048
3. Holland JH (1975) Adaptation in natural and artificial systems. University of Michigan Press, Ann Arbor
4. Storn R, Price K (1995) Differential evolution – a simple and efficient adaptive scheme for global optimization over continuous spaces. Technical Report TR-95-012, ICSI
5. Beyer HG, Schwefel HP (2002) Evolution strategies – a comprehensive introduction. Nat Comput 1:3–52
6. Koza JR (1992) Genetic programming II, automatic discovery of reusable subprograms. MIT Press, Cambridge
7. Simon D (2008) Biogeography-based optimization. IEEE Trans Evol Comput 12:702–713
8. Rashedi E, Neamabadi-pour H, Saryazdi S (2009) GSA: a gravitational search algorithm. Inf Sci 13:2232–2248
9. Erol OK, Eksin I (2006) A new optimization method: big bang-big crunch. Adv Eng Softw 37:106–111
10. Eskendar H, Sadollah A, Bahreininejad A, Hamdi M (2012) Water cycle algorithm – a novel metaheuristic optimization algorithm for solving constrained engineering optimization problems. Comput Struct 110–111:151–166
11. Kaveh A, Talathari S (2010) A novel heuristic optimization method: charged system search. Acta Mech 213:267–289
12. Kennedy J, Eberhart RC (1995) Particle swarm optimization. In: Proceedings of the IEEE international conference on neural networks, Perth, Australia, pp 1942–1948
13. Dorigo M (1992) Optimization, learning and natural algorithms. PhD thesis, Politecnico di Milano, Italy
14. Karaboga D (2005) An idea based on honey bee swarm for numerical optimization. Technical Report TR06, Erciyes Faculty, Computer Engineering Department
15. Mirjalili S, Gandomi AH, Mirjalili SZ, Saremi S, Faris H, Mirjalili SM (2017) Salp swarm algorithm: a bio-inspired optimizer for engineering design problems. Adv Eng Softw 114:163–191
16. Rao RV, Savsani VJ, Vakharia DP (2011) Teaching-learning based optimization: a novel method for constrained mechanical design optimization problems. Comput Aided Des 43:303–315
17. Moosavi SHS, Bardsiri VK (2019) Poor and rich optimization: a new human-based and multi population algorithm. Eng Appl Artif Intel 86:165–181
18. Geem ZW, Kim JH, Loganathan GV (2001) A new heuristic optimization algorithm: harmony search. SIMULATION 76:60–68


19. Atashpaz-Gargari E, Lucas C (2007) Imperialist competitive 37. Hashim FA, Hussien AG (2022) Snake optimizer: a novel
algorithm: an algorithm for optimization inspired by imperialist metaheuristic optimization algorithm. Knowl-Based Syst
competition. In: Proceedings of the 2007 IEEE congress on 242:108320
evolutionary computation, pp 4661–4667 38. Faramarzi A, Heidarinejad M, Stephens B, Mirjalili S (2020)
20. Wolpert DH, Macready WG (1997) No free lunch theorems for Equilibrium optimizer: a novel optimization algorithm. Knowl-
optimization. IEEE Trans Evol Comput 1:67–82 Based Syst 191:105190
21. Diab AAZ, Ali H, Abdul-Ghaffar HI, Abdelselam HA, El Sattar 39. Zhao W, Zhang Z, Wang L (2020) Manta ray foraging opti-
MA (2021) Accurate parameters extraction of PEMFC model mization: An effective bio-inspired optimizer for engineering
based on metaheuristic algorithms. Energy Rep 7:6854–6867 applications. Eng Appl Artif Intel 87:103300
22. Raji S, Dehnamaki A, Somee B, Mahdiani MR (2022) A new 40. Abdollahzadeh B, Gharehchopogh FS, Mirjalili S (2021) Afri-
approach in well placement optimization using metaheuristic can vultures optimization algorithm: a new nature-inspired
algorithms. J Pet Sci Eng 215:110640 metaheuristic algorithm for global optimization problems.
23. Kumar M, Sahu A, Mitra P (2021) A comparison of different Comput Ind Eng 158:107408
metaheuristics for the quadratic assignment problem in accel- 41. Abualigah L, Yousri D, Abd-Elaziz EAA, Al-qaness MAA,
erated systems. Appl Soft Comput 100:106927 Gandomi AH (2021) Aquila optimizer: a novel meta-heuristic
24. Lara-Montano OD, Gomez-Castro FI, Gutierrez-Antonio C optimization algorithm. Comput Ind Eng 157:107250
(2021) Comparison of the performance of different meta- 42. Heidari AA, Mirjalili S, Faris H, Aljarah I, Mafarja M, Chen H
heuristic methods for the optimization of shell-and-tube heat (2019) Harris hawks optimization: algorithm and applciations.
exchangers. Comput Chem Eng 152:107403 Futur Gener Comput Syst 97:849–872
25. Abdor-Sierra JA, Merchan-Crus EA, Rodrigues-Canizo RG 43. Sulaiman MH, Mustaffa Z, Saari MM, Daniyal H (2020) Bar-
(2022) A comparative analysis of metaheuristic algorithms for nacles Mating optimizer: a new bio-inspired algorithm for
solving the inverse kinematics of robot manipulators. Results solving engineering optimization problems. Eng Appl Artif Intel
Eng 16:100597 87:103330
26. Sonmez M (2018) Performance comparison of metaheuristic 44. Nassef AM, Houssein EH, Helmy EB, Fathy A, Alghayti ML,
algorithms for the optimal design of space trusses. Arab J Sci Rezk H (2022) Optimal configuration strategy based on modi-
Eng 43:5265–5281 fied Runge Kutta optimizer to mitigate partial shading condition
27. Ahmed AN, Lam TV, Hung TV, Thieu NV, Kisi O, El-Shafie A in photovoltaic systems. Energy Rep 8(7242):7262
(2021) A comprehensive comparison of recent developed 45. Rezk H, Ferahtia S, Djeroui A, Chouder A, Houari A, Mach-
metaheuristic algorithms for streamflow time series forecasting moum M, Abdelkareem MA (2022) Optimal parameter esti-
problem. Appl Soft Comput 105:107282 mation strategy of PEM fuel cell using gradient-based optimizer.
28. Meng Z, Li G, Wang X, Sait SM, Yıldız AR (2021) Compara- Energy 239:122096
tive study of metaheuristic algorithms for reliability-based 46. Thirumoorthy K, Munesswaran K (2022) (2022) An elitism
design optimization problems. Arch Comput Methods Eng based self-adaptive multi-population poor and rich optimization
28:1853–1869 algorithm for grouping similar documents. J Ambient Intell
29. Katebi J, Shoaei-parchin M, Shariati M, Trung NT, Khorami M Humaniz Comput 13:1925
(2020) Developed comparative analysis of metaheuristic opti- 47. Ekinci S, İzci D (2022) Enhanced reptile search algorithm with
mization algorithms for optimal active control of structures. Eng Levy flight vehicle cruise control system design. Evol Intell.
Comput 36:1539–1558 https://doi.org/10.1007/s12065-022-00745-8
30. Naranjo JAL, Alcaraz JAS, Miguel CRTS, Rojas JCP, Espinal 48. Hu G, Yang R, Abbas M, Wei G (2023) BEESO: multi-strategy
A, Gonzalez HR (2019) Comparison of metaheuristic opti- boosted snake-inspired optimizer for engineering applications.
mization algorithms for dimensional synthesis of a spherical J Bionic Eng. https://doi.org/10.1007/s42235-022-00330-w
parallel manipulator. Mech Mach Theory 140:586–600 49. Sun F, Yu J, Zhao A, Zhou M (2021) Optimizing multi-chiller
31. Mohseni S, Brent AC, Burmester D (2020) A comparison of dispatch in HVAC-system using equilibrium optimization
metaheuristics for the optimal capacity planning of an isolated, algorithm. Energy Rep 7:5997–6013
battery-less, hydrogen-based micro-grid. Appl Energy 50. Hu G, Li M, Wang X, Wei G, Chang CT (2022) An enhanced
259:114224 manta ray foraging optimization algorithm for shape optimiza-
32. Gupta S, Abderazek H, Yıldız BS, Yıldız AR, Mirjalili S, Sait tion of complex CCG-Ball curves. Knowl-Based Syst 24:108071
SM (2021) Comparison of metaheuristic optimization algo- 51. Chen L, Huang H, Tang P, Yao D, Yang H, Ghadimi N (2022)
rithms for solving constrained mechanical design optimization Optimal modeling of combined cooling, heating, and power
problems. Expert Syst Appl 183:115351 systems using developed African vulture optimization: a case
33. Ezugwu AE, Adeleke OJ, Akinyelu AA, Viriri S (2020) A study in watersport complex. Energy Sources A Recov Util
conceptual comparison of several metaheuristic algorithms on Environ Eff 44:4296–4317
continuous optimization problems. Neural Comput Appl 52. Ekinci S, İzci D, Abualigah LA (2023) A novel balanced Aquila
32:6207–6251 optimizer using random learning and Nelder-Mead simplex
34. Ahmadianfar I, Heidari AA, Gandomi AH, Chu X, Chen H search mechanisms for air-fuel ratio system control. J Braz Soc
(2021) RUN beyond the metaphor: an efficient optimization Mech Sci Eng 45:68
algorithm based on Runge Kutta method. Expert Syst Appl 53. Li M, Li K, Qin Q (2023) A rockburst prediction model based
181:115079 on extreme learning machine with improved Harris Hawks
35. Ahmadianfar I, Bozorg-Haddad O, Chu X (2020) Gradient- optimization and its application. Tunn Undergr Sp Tech
based optimizer: A new metaheuristic optimization algorithm. 134:104978
Inf Sci 540:131–159 54. Liu B, Wang H, Tseng ML, Li Z (2022) State of charge esti-
36. Abualigah L, Abd-Elaziz M, Sumari P, Geem ZW, Gandomi AH mation for lithium-ion batteries based on improved barnacle
(2022) Reptile search algorithm (RSA): a nature-inspired meta- mating optimizer and support vector machine. J Energy Storage
heuristic optimizer. Expert Syst Appl 191:116158 55:105830
55. Akay B, Karaboga D, Gorkemli B, Kaya E (2021) A survey on
the Artificial Bee Colony algorithm variants for binary, integer,


and mixed integer programming problems. Appl Soft Comput 75. Jia H, Peng X, Lang C (2021) Remora optimization algorithm
106:107351 expert. Syst Appl 185:115665
56. Lourenço HR, Martin OC, Stützle T (2003) Iterated local search. 76. Al-Shourbaji I, Kachare PH, Alshatri S, Duraibi S, Elnaim B,
In: Handbook of metaheuristics. vol 57, pp 320–353, Springer Abd-Elaziz M (2022) An efficient parallel reptile search algo-
57. Aarts EHL, van Laarhoven PJM (1989) Simulated annealing: an rithm and snake optimizer approach for feature selection.
introduction. Stat Neerl 43:31–52 Mathematics 10:2351
58. Price KV, Storn RM, Lampinen JA (2005) Differential evolu- 77. Rizk-Allah RM, Hassanien AE (2023) A hybrid equilibrium
tion: a practical approach to global optimization. Springer- algorithm and pattern search technique for wind farm layout
Verlag, Berlin optimization problem. ISA Trans 132:402–418
59. Ficarella L, Lamberti L, Degertekin SO (2021) Comparison of 78. Hooke R, Jeeves TA (1961) Direct search solution of numerical
three novel hybrid metaheuristic algorithms for structural opti- and statistical problems. J ACM 8:212–229
mization problems. Comput Struct 244:106395 79. Zhong C, Li G, Meng Z, Li H, He W (2023) Multi-objective
60. Bertolini M, Mezzogori D, Zammori F (2019) Comparison of SHADE with manta ray foraging optimizer for structural design
new metaheuristics, for the solution of an integrated jobs- problems. Appl Soft Comput 134:110016
maintenance scheduling problem. Expert Syst Appl 80. Tanabe R, Fukunaga A (2013) Success-history based parameter
122:118–136 adaptation for Differential Evolution. In: 2013 IEEE congress on
61. Camargo MP, Rueda JL, Erlich I, Ano O (2014) Comparison of evolutionary computation, Cancun, Mexico, pp 71–78
emerging metaheuristic algorithms for optimal hydrothermal 81. Xiao Y, Guo Y, Cui H, Wang Y, Li J, Zhang Y (2022)
system operation. Swarm Evol Comput 18:83–96 IHAOAVOA: A n improved hybrid aquila optimizer and Afri-
62. Ali MM, Khompatraporn C, Zabinsky, (2005) A numerical can vulture optimization algorithm for global optimization
evaluation of several stochastic algorithms on selected contin- problems Math Biosci Eng 19:10963–11017
uous global optimization test problems. J Glob Optim 82. Ramchandran M, Mirjalili S, Heris MN, Parvathysankar DS,
31:635–672 Sundaram A, Gnanakkan CARC (2022) A hybrid grasshopper
63. Civicioglu P, Besdok E (2013) A conceptual comparison of the optimization algorithm and harris hawks optimizer for combined
Cuckoo search, particle swarm optimization, differential evo- heat and power economic dispatch problem. Eng Appl Artif
lution, artificial bee colony algorithms. Artif Intel Rev Intell 111:104753
39:315–346 83. Saremi S, Mirjalili S, Lewis A (2017) Grasshopper optimisation
64. Ma H, Simon D, Fei M, Chen Z (2013) On the equivalences and algorithm: theory and application. Adv Eng Softw 105:30–47
differences of evolutionary algorithms. Eng Appl Artif Intel 84. Mirjalili S (2016) SCA: a sine cosine algorithm for solving
26:2397–2407 optimization problems. Knowl Based Syst 96:120–133
65. Ma H, Ye S, Simon D, Fei M (2017) Conceptual and numerical 85. Abd-Elaziz M, Ewes AA, Al-qaness MAA, Abualigah L, Ibra-
comparisons of swarm intelligence optimization algorithms. him RA (2022) Sine-Cosine-Barnacles algorithm optimizer with
Soft Comput 21:3081–3100 disruption operator for global optimization and automatic data
66. Joseph SB, Dada EG, Abidemi A, Oyewola DO, Khammas BM clustering. Expert Syst Appl 207:117993
(2022) Metaheuristic algorithms for PID controller parameters 86. Ypma TJ (1995) Historical development of the Newton-Raph-
tuning: review, approaches and open problems. Heliyon son method. SIAM Rev 37:531–551
5:e09399 87. Yang XS (2010) Nature-inspired metaheuristic algorithm.
67. Abd Elaziz M, Elsheikh AH, Oliva D, Abualigah L, Lu S, Ewees Luniver Press, Frome
AA (2022) Advanced metaheuristic techniques for mechanical 88. Kaveh A, Mahdavi VR (2015) A hybrid CBO-PSO algorithm for
design problems. Arch Comput Methods Eng 29:695–716 optimal design of truss structures with dynamic constraints.
68. Milan ST, Rajabion L, Ranjbar H, Navimipour NJ (2019) Nature Appl Soft Comput 34:260–273
inspired meta-heuristic algorithms for solving the load balancing 89. Gezici H, Livatyalı H (2022) Chaotic Harris hawks optimization
problem in cloud environments. Comput Oper Res 110:159–187 algorithm. J Comput Des Eng 9:216–245
69. Sierra JAA, Cruz EAM, Canizo RGR (2022) A comparative 90. Seyyedabbasi A, Aliyev R, Kiani F, Gulle MU, Basyildiz H,
analysis of metaheuristic algorithms for solving the inverse Shah MA (2021) Hybrid algorithms based on combining rein-
kinematics of robot manipulators. Results Eng 16:100597 forcement learning and metaheuristic methods to solve global
70. Dokeroglu T, Deniz A, Kiziloz HE (2022) A comprehensive optimization problems. Knowl-Based Syst 223:107044
survey on recent metaheuristics for feature selection. Neuro- 91. Andrei N (2008) An Unconstrained Optimization Test Functions
comput 494:269–296 Collection. Adv Modell Optim 10:147–161
71. Rawa M, AlKubaisy ZM, Alghamdi S, Refaat MM, Ali ZM, 92. Floudas CA, Pardalos PM, Adjiman CS, Esposito WR, Gümüş
Abdel Aleem SHE (2022) A techno-economic planning model ZH, Harding ST, Klepeis JL, Meyer CA, Schweiger CA (1999)
for integrated generation and transmission expansion in modern Handbook of test problems in local and global optimization.
power systems with renewables and energy storage using hybrid Springer
Runge- Kutta – gradient- based optimization algorithms. Energy 93. Shaban H, Houssein EH, Perez-Cisneros M, Oliva Di Yassan
Rep 8:6457–6479 AY, Ismaeel AAK, AbdElminaam DS, DebSaid SM (2021)
72. Ewees AA, Ismail FH, Sahlol AT (2023) Gradient-based opti- Identification of parameters in photovoltaic models through a
mizer improved by Slime Mould Algorithm for global opti- Runge Kutta optimizer. Mathematics 9:2313
mization and feature selection for diverse computation 94. Chen H, Ahmadianfar I, Liang G, Bakhsizadeh H, Azad B, Chu
problems. Expert Syst Appl 213:118872 X (2022) A successful candidate strategy with Rung-Kutta
73. Li S, Chen H, Wang M, Heidari AA, Mirjalili S (2020) Slime optimization for multi-hydropower reservoir optimization.
mould algorithm: A new method for stochastic optimization. Expert Syst Appl 209:118383
Future Gener Comput Syst 111:300–323 95. Premkumar M, Jangir P, Sowmya R (2021) MOGBO: a new
74. Almotairi KH, Abualigah L (2022) Hybrid reptile search algo- multiobjective gradient-based optimizer for real-world structural
rithm and remora optimization algorithm for optimization tasks optimization problems. Knowl-Based Syst 218:106856
and data clustering. Symmetry 14:458


96. Thirumoorthy K, Muneeswaran K (2021) Feature selection optimziation for control design of a pendulum system. In:
using hybrid poor and rich optimization algorithm for text Emerging technology in computing, communication and elec-
classification. Pattern Recognit Lett 147:63–70 tronics (ETTTCE), pp 1–5
97. Ekinci S, Izci D, Abu Zitar R, Alsoud AR, Abualigah L (2022) 115. Sulaiman MH, Mustaffa Z (2022) Optimal chiller loading
Development of Levy flight-based reptile search algorithm with solution for energy conservation using Barnacles mating opti-
local search ability for power systems engineering design mizer algorithm. Res Control Opt 7:100109
problems. Neural Comput Appl 34:20263–20283 116. Rajesh P, Shajin FH, Anand NV (2021) An efficient estimation
98. Al-Shourbaji I, Helian N, Sun Y, Alshatri S, Abd-Elaziz M model for induction motor using BMO-RBFNN technique.
(2022) Boosting ant colony optimization with reptile search Process Integr Optim Sustain 5:777–792
algorithm for churn prediction. Mathematics 10:1031 117. Liao T, Stuetzle T (2013) Benchmark results for a simple hybrid
99. Rawa M (2022) Towards avoiding cascading failures in trans- algorithm on the CEC 2013 benchmark set for real parameter
mission expansion planning of modern active power systems optimization. In: Proceedings of IEEE congress on evolutionary
using hybrid snake-sine cosine optimization algorithm. Mathe- computation, pp 1938–1944.
matics 10:1323 118. Kumar A, Wu G, Ali MZ, Mallipeddi R, Suganthan PN, Das S
100. Ahmed S, Ghosh KK, Mirjalili S, Sarkar S (2021) AIEOU: (2020) A test-suite of non-convex constrained optimization
automata-based improved equilibrium optimizer with U-shaped problems from the real-world and some baseline results. Swarm
transfer function for feature selection. Knowl-Based Syst Evol Comput 56:100693
228:107283 119. Kim TH, Maruta I, Sugie T (2010) A simple and efficient
101. Abdul-hamied DT, Shaheen AM, Salem WA, Gabr WI, El-se- constrained particle swarm optimization and its application to
hiemy RA (2020) Equilibrium optimizer based multi dimension engineering design problems. Proc Inst Mech Eng C J Mech Eng
operation of hybrid AC/DC grids. Alex Eng J 59:4787–4803 Sci 224:389–400
102. Hassan MH, Houssein EH, Mahdy MA, Kamel S (2021) An 120. Turgut OE, Turgut MS (2023) Local search enhanced Aquila
improved Manta ray foraging optimizer for cost-effective optimization algorithm ameliorated with an ensemble of muta-
emission dispatch problems. Eng Appl Artif Intel 100:104155 tion strategies for complex optimization problems. Math Com-
103. Abd-Elaziz M, Yousri D, Al-qaness MAA, AbdelAty AM, put Simul 206:302–374
Radwan AG, Ewees AA (2021) A Grunwald-Letnikov based 121. Golcuk I (2021) A comparative analysis of constraint-handling
Manta ray foraging optimizer for global optimization and image mechanisms for solving engineering design problems. J Ind Eng
segmentation. Eng Appl Artif Intel 98:104105 32:201–216
104. Kahraman HT, Akbel M, Duman S (2022) Optimization of 122. Arora JS (1989) Introduction to optimum design. McGraw-Hill,
optimal power flow problem using multi-objective manta ray New York, US
foraging optimizer. Appl Soft Comput 116:108334 123. Schittkowski K (1987) More test examples for nonlinear pro-
105. Gürses D, Mehta P, Sait SM, Yildiz AR (2022) African vultures gramming codes. In: Lecture notes in economics and mathe-
optimization algorithm for optimization of shell and tube heat matical systems, Springer, Berlin
exchangers. Mater Test 64:1234–1241 124. Gu L, Yang RJ, Cho CH, Makowski M, Faruque M, Li Y (2001)
106. Ghazi GA, Hasanian HM, Al-Ammar EA, Turky RA, Ko W, Optimization and robustness for crashworthiness. Int J Veh Des
Park S, Choi HJ (2022) African vulture optimization algorithm 82:241–256
based PI controllers for performance enhancement of hybrid 125. Coello CAC (2000) Use of a self-adaptive penalty approach for
renewable- energy systems. Sustainability 14:8172 engineering optimization problems. Comput Ind 41:112–127
107. Kumar C, Mary DM (2021) Parameter estimation of three-diode 126. Askari Q, Younas I, Saeed M (2020) Political optimizer: a novel
solar photovoltaic model using an Improved African Vultures socio-inspired metaheuristic for global optimization. Knowl-
optimization algorithm with Newton-Raphson method. J Com- Based Syst 195:105790
put Electron 20:2563–2593 127. Andrei N (2013) Nonlinear optimization applications using the
108. AlRassas AM, Al-qaness MAA, Ewees AA, Ren S, Abd-Elaziz GAMS technology, 1st edn. Springer-Verlag, Berlin
M, Damasevicius R, Krilavicius T (2021) Optimized ANFIS 128. Hock W, Schittkowski K (1981) Test examples for nonlinear
model using aquila optimizer for oil production forecasting. programming codes. In: Lecture notes in economics and math-
Processes 9:1194 ematical systems, Springer, Berlin
109. Pashaei E (2022) Mutation-based Binary Aquila optimizer for 129. Coello CA (2000) Treating constraints as objectives for singe-
gene selection in cancer classification. Comput Biol Chem objective evolutionary optimization. Eng Optim 32:275–308
101:107767 130. Bracken J, McCormick GP (1968) Selected applications of
110. Ali MH, Salawudeen AT, Kamel S, Salau HB, Habil SM (2022) nonlinear programming. Wiley, New York
Single and multi-objective modified aquila optimizer for optimal 131. Datseris P (1982) Weight minimization of a speed reducer by
multiple renewable energy resources in distribution network. heuristic and decomposition technique. Mech Mach Theory
Mathematics 10:2129 17:255–262
111. Houssein EH, Hosney ME, Elhoseny M, Oliva D, Mohamed 132. Dembo RS (1976) A set of geometric programming test prob-
WM, Hasaballah M (2020) Hybrid Harris hawks optimization lems and their solution. Math Program 10:192–213
with cuckoo search for drug design and discovery in chemin-
formatics. Sci Rep 10:14439 Publisher’s Note Springer Nature remains neutral with regard to
112. Abbasi A, Firouzi B, Sendur P (2021) On the application of jurisdictional claims in published maps and institutional affiliations.
Harris hawks optimization (HHO) algorithm to the design of
microchannel heat sinks. Eng Comput 37:1409–1428
Springer Nature or its licensor (e.g. a society or other partner) holds
113. Abbasi A, Firouzi B, Sendur P, Heidari AA, Chen H, Tiwari R
exclusive rights to this article under a publishing agreement with the
(2021) Multi strategy Gaussian Harris hawks optimization for
author(s) or other rightsholder(s); author self-archiving of the
fatigue life of tapered roller bearings. Eng Comput 3:1–27
accepted manuscript version of this article is solely governed by the
114. Razak AAA, Nasir ANK, NMA Ghani, NAM Rizai, MFM
terms of such publishing agreement and applicable law.
Jusof, Muhamad IH (2020) Multi-objective barnacle mating
