
Multimedia Tools and Applications

https://doi.org/10.1007/s11042-024-19437-9

An Effective Hybrid Metaheuristic Algorithm for Solving Global Optimization Algorithms

Amir Seyyedabbasi1 · Wadhah Zeyad Tareq Tareq1 · Nebojsa Bacanin2,3,4

Received: 21 February 2023 / Revised: 10 April 2024 / Accepted: 15 May 2024


© The Author(s) 2024

Abstract
Recently, the Honey Badger Algorithm (HBA) was proposed as a metaheuristic algorithm.
Honey badger hunting behaviour inspired the development of this algorithm. In the exploi-
tation phase, HBA performs poorly and stagnates at the local best solution. On the other
hand, the sand cat swarm optimization (SCSO) is a very competitive algorithm compared
to other common metaheuristic algorithms since it has outstanding performance in the
exploitation phase. Hence, the purpose of this paper is to hybridize HBA with SCSO so
that the SCSO can overcome deficiencies of the HBA to improve the quality of the solu-
tion. The SCSO can effectively exploit optimal solutions. For the research conducted in
this paper, a hybrid metaheuristic algorithm called HBASCSO was developed. The pro-
posed approach was evaluated against challenging CEC benchmark instances taken from the
CEC2015, CEC2017, and CEC2019 benchmark suites. The HBASCSO is also evaluated
against the original HBA and SCSO, as well as several other recently proposed algorithms.
To demonstrate that the proposed method performs significantly better than other competi-
tive algorithms, 30 independent runs of each algorithm were evaluated to determine the
best, worst, mean, and standard deviation of fitness functions. In addition, the Wilcoxon
rank-sum test is used as a non-parametric comparison, and it has been found that the pro-
posed algorithm outperforms other algorithms. Hence, the HBASCSO achieves an opti-
mum solution that is better than the original algorithms.

Keywords  Metaheuristic algorithm · Hybrid metaheuristic · Honey badger algorithm · Sand cat swarm optimization · Benchmark functions

* Amir Seyyedabbasi
amir.seyyedabbasi@istinye.edu.tr
1 Computer Engineering Department, Faculty of Engineering and Natural Sciences, Istinye University, Istanbul, Turkey
2 Department of Mathematics, Saveetha School of Engineering, SIMATS, Thandalam, Tamilnadu, India
3 Faculty of Informatics and Computing, Singidunum University, Belgrade, Serbia
4 MEU Research Unit, Middle East University, Amman, Jordan


1 Introduction

A typical optimization problem involves finding the largest or the smallest value of a goal
(objective) function. The obtained values are known as optimum solutions. Cost and time
complexity are important considerations when an algorithm tries to find the optimum solution,
and the complexity of a problem increases as its dimension and search space grow [1].
In recent decades, researchers have used metaheuristic algorithms to find optimal solutions
while avoiding computational complexity [2–7]. Complexity theory classifies optimization
problems as polynomial (P), non-deterministic polynomial (NP), and NP-hard [8, 9]. In
NP-hard problems, the run time grows exponentially with the input size. Metaheuristic
algorithms [10] can find satisfying solutions for NP-hard problems within a reasonable time.
Authors draw on observations of animal behaviour, physical phenomena, and evolutionary
theory to build such algorithms [11]. Metaheuristic algorithms determine the best possible
solution in each iteration by initializing a search space according to the algorithm's rules and
satisfying the cost function. Additionally, these algorithms must balance the exploration and
exploitation phases to avoid becoming trapped in a local optimum [12–15].
In general, metaheuristic algorithms are classified as single-solution-based or population-based.
Researchers have shown that population-based algorithms perform better in solving
optimization problems [10]. There are three types of population-based algorithms. Swarm
intelligence (SI) algorithms are the first type and imitate the natural behaviours of humans,
animals, and plants [16–22]. Evolutionary algorithms (EA) are the second type and imitate
natural genetic mechanisms and evolution [23–25]. The last type comprises natural
phenomenon (NP) algorithms, which imitate the universe's physical or chemical rules [26].
However, some optimization algorithms have limitations, such as local optimum traps, the
trade-off between exploration and exploitation, and time complexity [27]. New metaheuristic
algorithms have been proposed to address these weaknesses. Usually, researchers exploit the
advantages and disadvantages of existing metaheuristic algorithms to propose hybrid
metaheuristic algorithms. For example, the tendency of the honey badger algorithm
(HBA) [21] to get trapped in local optima can be remedied by the fast convergence of the
sand cat swarm optimization (SCSO) algorithm [22] in the early stages of evolution. Moreover,
the SCSO offers swarm diversity and convergence speed, which address the insufficient
balance between exploration and exploitation in the HBA. Thus, the main goal is to use the
strengths of one algorithm to minimize another algorithm's weaknesses [28].
A wide range of hybrid metaheuristic algorithms have been proposed, and the following
is a review of some of them. In [13, 29–32], the authors present hybrid algorithms that
address problems such as low convergence speed, the balance between the exploration and
exploitation stages, and preventing the algorithm from falling into a local optimum. Other
hybrid algorithms [33, 34] addressed problems such as early convergence, improving
approximation quality, and approaching the global optimum solution. In [35], the authors
developed a new hybrid metaheuristic algorithm to find the best route to collect the bins
in a smart waste collection system. The hybrid algorithm decides the collection time and
the shortest path to each bin. They applied the proposed hybrid algorithm in a real case
study in Portugal, and the results showed that the new algorithm increases the company's
profit by 45%. The authors of [36] proposed a new hybrid algorithm by combining the
crow search algorithm (CSA) and the symbiotic organisms search (SOS) algorithm. The
new CSA-SOS algorithm is applied in


industrial applications to solve the load-sharing optimization problem. The work in [37]
proposed a new hybrid algorithm that combines the iterated local search (ILS), variable
neighbourhood descent (VND), and threshold acceptance (TA) metaheuristic algorithms
to find the proper routing for pickup and delivery operations. The ILS is used as the main
framework, while the VND and TA are used for the local search mechanism and the
acceptance criterion, respectively. The new algorithm generates initial solutions using the
nearest neighbour heuristic. Then, the VND concentrates the search by ordering the
neighbourhood structures randomly. Finally, a perturbation mechanism explores different
regions of the search space.
In [38], a new hybrid metaheuristic algorithm was introduced to solve various engineering
problems without strong dependence on parameter settings. The new hybrid algorithm merges
particle swarm optimization (PSO), the gravitational search algorithm (GSA), and the grey
wolf optimizer (GWO) into one hybrid algorithm known as the HGPG algorithm. The HGPG
has high control over exploration and exploitation, achieved by using the gravity law from
GSA, the top three search agents from GWO, and the speed calculated by the PSO algorithm.
This increases the exploitation rate and guides the exploration, producing a high convergence
rate compared with other heuristic algorithms. Within the past few years, different hybrid
metaheuristic algorithms have been proposed to enhance the feature selection process in
human–computer interaction. In [39], a hybrid channel ranking procedure was developed for
multichannel electroencephalography (EEG)-based brain–computer interface (BCI) systems
using Fisher information and the objective Firefly Algorithm (FA). The authors aimed to
minimize the high-dimensional features of the EEG. In [40], a new hybrid algorithm based on
the Dynamic Butterfly Optimization Algorithm (DBOA) with a mutual information-based
Feature Interaction Maximization (FIM) scheme was proposed to solve the problems of
hybrid feature selection methods. The new method, IFS-DBOIM, maximized the classification
accuracy on different datasets. In [41], a Multi-objective X-shaped Binary Butterfly
Optimization Algorithm (MX-BBOA) was developed to select the most informative channels
from BCI system signals. The new algorithm increased the classification accuracy and reduced
the computation time. In [42], a Logistic S-shaped Binary Jaya Optimization Algorithm
(LS-BJOA) that combines a logistic map with the Jaya optimization algorithm was proposed.
The new approach aimed to alleviate the computational burden caused by the many channels
involved in extracting neural signals from the brain in BCI systems.
A new generation of hybrid metaheuristic algorithms has emerged recently. The new gen-
eration uses machine-learning strategies to enhance metaheuristic algorithms in terms of
efficiency. In [43], the authors combined the Q-learning method, a method in reinforcement
learning which is a subfield in machine learning, with three classical metaheuristic algorithms
to produce three new hybrid algorithms. The Q-learning method is responsible for finding the
global solution and avoiding the local trap by guiding the search agent with a reward and pen-
alty system. The authors hybridized the I-GWO, Ex-GWO, and WOA with the Q-learning
method to produce the RLI-GWO, RLEx-GWO, and RLWOA hybrid algorithms. The
result showed that the new algorithms explore new areas more successfully and have better
performance in the exploration and exploitation phases. Another work that used reinforcement
learning to enhance a classical metaheuristic algorithm was introduced in [14]. In this work,
the sand cat swarm optimization algorithm (SCSO) is hybridized with reinforcement learn-
ing techniques to provide the RLSCSO algorithm. The RLSCSO used reinforcement learn-
ing agents to explore the search space of the problem efficiently. The results showed that the
RLSCSO algorithm explores and exploits the search space better than the standard SCSO.


Additionally, the RLSCSO algorithm is superior to other metaheuristic algorithms since the
agent could switch between the aforementioned phases depending on the reward and penalty
system.
This study makes the following significant contributions:

1. This paper proposes a hybrid algorithm called HBASCSO that uses HBA and SCSO characteristics to improve search efficiency.
2. The HBASCSO has the ability to transition from exploration to exploitation efficiently.
3. Local optimum avoidance is achieved through the trade-off between the exploration and exploitation phases.
4. The performance of the HBASCSO is evaluated on three different sets of benchmark functions: CEC 2014–2015, CEC 2017, and CEC 2019.
5. Statistical analysis is carried out to evaluate the experimental results and compare them with other state-of-the-art algorithms.

The rest of the paper is organized as follows: Section 2 gives a background about both
the honey badger algorithm (HBA) and sand cat swarm optimization (SCSO) algorithms. In
Section 3, the proposed hybrid algorithm is explained, and in Section 4, we explain the per-
formance analysis of the HBASCSO algorithm on the different sets of benchmark functions.
Finally, Section 5 presents a discussion of the results and Section 6 concludes the paper.

2 Fundamentals

The purpose of this section is to describe the honey badger algorithm (HBA) and the sand cat
swarm optimization (SCSO) algorithm. We discuss in depth the mathematical models of these
algorithms as well as the natural behaviours that inspired them.

2.1 Honey Badger Algorithm (HBA)

The honey badger algorithm (HBA) was proposed by Hashim et al. [21] and is inspired by
honey badger behaviours in nature. The honey badger is a fearless mammal with white fur
found in Africa and Asia. Honey badgers weigh between 7 and 13 kg and measure up to
77 cm in length. Honey badgers love honey and beehives, but they cannot detect the hives'
location. This problem is solved by following the honeyguide, a bird that can locate the hives
but cannot reach the honey, which leads the badger to the beehives. Moreover, the honey
badger preys on about sixty species using its sense of smell. It starts by estimating the
location of the prey; then, by moving around in the vicinity of the prey, it finds a suitable
spot to catch it. After locating the correct spot, the badger begins digging and catching. The
HBA mimics the badger's feeding behaviour in two modes: smelling and digging, known as
digging mode, and following the honeyguide, known as honey mode.

The HBA starts by initializing the search space, which is formed by the candidate solutions.
The search space is initialized using Eq. (1).

\text{candidate solutions} = \begin{bmatrix} x_{11} & \cdots & x_{1D} \\ \vdots & \ddots & \vdots \\ x_{n1} & \cdots & x_{nD} \end{bmatrix} \quad (1)
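As a simple illustration, the initialization of Eq. (1) can be generated as in the Python sketch below. This is a generic sketch, not the authors' code; the function name init_population is ours, n is the number of agents, and dim is the problem dimension.

import numpy as np

def init_population(n, dim, lb, ub):
    # Uniformly random n x dim candidate-solution matrix, as in Eq. (1).
    # Each row is one candidate solution drawn between the lower and upper bounds.
    return lb + np.random.rand(n, dim) * (ub - lb)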


The number of honey badgers N and the position of each one are initialized. The next step
is calculating the intensity I, which relies on both the smell of the prey and the distance to
it; the honey badger's speed depends on the power of the smell. The smell intensity is
calculated by the inverse square law and is defined in Eqs. (2), (3), and (4). Here, r_1 is
selected randomly between 0 and 1, x_i is a candidate solution in the population, and lb_i
and ub_i are the lower and upper bounds of the search space. I_i is the prey's smell
intensity, r_2 is selected randomly between 0 and 1, S is the concentration strength of the
source, d_i is the distance between the badger and the prey (Eq. (5)), and x_prey refers to
the prey's position (see Fig. 1). The third step is updating the density factor α, which
ensures a smooth transition between the searching (exploration) phase and the exploitation
phase by controlling the time-varying randomization. The density factor decreases over
time to reduce randomization, according to Eq. (6), where C is a constant equal to 2 and
t_max is the maximum number of iterations.

One of the important steps in metaheuristic algorithms is avoiding the local optimum trap.
The HBA changes the search direction by using a flag F, which allows agents to discover
areas of the search space not yet visited. The HBA updates positions in two phases: the
digging phase and the honey phase. In the digging phase, the badger updates its position
by moving in a cardioid shape. This motion is simulated by Eq. (7), where x_new is the
badger's new position, F is the direction-altering flag calculated by Eq. (8), β represents
the ability to get food, and the values r_3, r_4, and r_5 are selected randomly between 0
and 1. In the honey phase, the badger updates its position by following the honeyguide
bird, a motion simulated by Eq. (9), where r_7 is selected randomly between 0 and 1.
x_i = lb_i + r_1 \times (ub_i - lb_i) \quad (2)

I_i = r_2 \times \frac{S}{4\pi d_i^2} \quad (3)

S = (x_i - x_{i+1})^2 \quad (4)

d_i = x_{prey} - x_i \quad (5)

\alpha = C \times \exp\left( \frac{-t}{t_{max}} \right) \quad (6)

x_{new} = x_{prey} + F \times \beta \times I \times x_{prey} + F \times r_3 \times \alpha \times d_i \times \left| \cos(2\pi r_4) \times \left[ 1 - \cos(2\pi r_5) \right] \right| \quad (7)

Fig. 1  Prey intensity and inverse square law [21]


F = \begin{cases} 1 & \text{if } r_6 \le 0.5 \\ -1 & \text{otherwise} \end{cases} \quad (8)

x_{new} = x_{prey} + F \times r_7 \times \alpha \times d_i \quad (9)
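To make the two phases concrete, the following Python listing is a minimal sketch of one HBA iteration implementing Eqs. (2)-(9) for a minimization problem. It is an illustration under stated assumptions, not the authors' implementation: the digging/honey choice is made here with a random draw against 0.5, and the names hba_step and x_prey are ours.

import numpy as np

def hba_step(pop, x_prey, t, t_max, lb, ub, beta=6.0, C=2.0):
    # One HBA iteration over the whole population (a sketch of Eqs. 2-9).
    n = pop.shape[0]
    alpha = C * np.exp(-t / t_max)                       # density factor, Eq. (6)
    new_pop = np.empty_like(pop)
    for i in range(n):
        d_i = x_prey - pop[i]                            # distance to prey, Eq. (5)
        S = np.sum((pop[i] - pop[(i + 1) % n]) ** 2)     # concentration strength, Eq. (4)
        I = np.random.rand() * S / (4.0 * np.pi * np.sum(d_i ** 2) + 1e-30)  # Eq. (3)
        F = 1.0 if np.random.rand() <= 0.5 else -1.0     # direction flag, Eq. (8)
        if np.random.rand() < 0.5:                       # digging (cardioid) phase, Eq. (7)
            r3, r4, r5 = np.random.rand(3)
            new = (x_prey + F * beta * I * x_prey
                   + F * r3 * alpha * d_i
                   * np.abs(np.cos(2 * np.pi * r4) * (1 - np.cos(2 * np.pi * r5))))
        else:                                            # honey phase, Eq. (9)
            new = x_prey + F * np.random.rand() * alpha * d_i
        new_pop[i] = np.clip(new, lb, ub)                # keep agents inside the bounds
    return new_pop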

2.2 Sand cat swarm optimization (SCSO)

The sand cat swarm optimization (SCSO) algorithm by Seyyedabbasi et al. was inspired by
sand cat behaviours in nature [22]. The sand cat is a felid mammal that lives in the deserts
of Asia, which are known as harsh environments for animals. This smart, small cat has
various behaviours for daily activities such as hunting and escaping. Despite the great
similarity in appearance between a sand cat and a domestic cat, their living behaviours are
very different. One of the most important differences is that sand cats do not live in groups.
Sand cats also have features that enable them to live in these harsh environments. The fur
colour of sand cats is close to the desert colour, which makes hiding from other animals
easier. Moreover, the sand cat's paws have dense fur that acts as an insulator, protecting
them from high soil temperatures. Another distinctive feature is that the sand cat's ears are
larger than a domestic cat's. The tail of the sand cat represents half of the cat's length; the
cat's length ranges between 45 and 57 cm, with a body weight between 1 and 3.5 kg. As
clawed animals, sand cats use their paws for hunting snakes, reptiles, desert rodents, small
birds, and insects. During hunting, the sand cat first detects the prey by using its ears to hear
low-frequency noises (below 2 kHz). Then it tracks the prey until it finds the right moment
to attack, or digs when the prey is underground. To imitate this behaviour, the SCSO
algorithm has two stages: searching and attacking.
To realize the swarm intelligence concept, the SCSO algorithm contains a swarm of sand
cats. The population is an array of sand cats, and each cat (a 1D array) represents values
for all variables of the problem. The definition phase creates a candidate matrix with a size
equal to the number of sand cats, with variable values specified between the lower and
upper boundaries (Eq. (10)). The fitness function is used to find each cat's fitness cost
(Eq. (11)). Once an iteration is done, the cat with the best cost function output is the best
solution for that iteration, and the other cats adjust their positions toward this best solution.
However, if the best solution is not better than the previous iteration's solution, the SCSO
algorithm ignores it.

\text{candidate solutions} = \begin{bmatrix} cat_1 \\ \vdots \\ cat_n \end{bmatrix} = \begin{bmatrix} x_{11} & \cdots & x_{1d} \\ \vdots & \ddots & \vdots \\ x_{n1} & \cdots & x_{nd} \end{bmatrix} \quad (10)

\text{Fitness} = f(\text{SandCat}) = f(x_1, x_2, \ldots, x_d); \ \forall x_i \ (\text{calculated for } n \text{ cats}) \quad (11)

where f is the fitness function value for each cat in the population. The SCSO algorithm
imitates the sand cat's behaviour in two phases: searching for the prey (exploration) and
attacking the prey (exploitation). The search phase relies on the fact that the sand cat hears
low frequencies. Each solution (cat) has a sensitivity range, and this range decreases
linearly over the iterations to ensure that a cat does not move away from the prey. The
initial value of this range is between two and zero; the value two reflects the fact that sand
cats can hear low frequencies below 2 kHz. Mathematically, the sensitivity range decreases
according to Eq. (12), where r_G is the sensitivity range, s_M is the cat's hearing level
(assumed to be 2), iter_c is the current iteration, and iter_max is the maximum number of
iterations. This equation is flexible
and adaptable. For example, the s_M value can represent the agents' action speed in
another problem, and the range value adapts to the iteration count: over 100 iterations, the
value is greater than 1 in the first fifty iterations and less than 1 in the last fifty.
Equation (13) defines R, the main parameter that decides between moving from exploration
to exploitation and ensures the balance between these two phases. To avoid the local
optimum problem, each cat in the population has its own sensitivity range r, calculated by
Eq. (14), while r_G is the general sensitivity range that decreases linearly as mentioned
before. Finally, each cat updates its position depending on its sensitivity range, its current
position, and the best-candidate position (Eq. (15)), where Pos_bc and Pos_c are the
best-candidate and current positions, respectively.

The cat's sensitivity range takes a circular shape. In the attack phase, the direction of
movement is determined by a random angle θ on the circle. The distance between the
current solution Pos_c and the best solution Pos_b, together with the other movement
parameters, is calculated by Eq. (16), where Pos_rnd represents a random position
calculated by Eq. (17). The random position guides cats to avoid local optimum traps.
Since the movement direction is determined on a circle, each cat in the population moves
in a different direction between 0 and 360 degrees (−1 and 1 in the search space). The
angle of the hunting position for each cat is determined using the roulette wheel selection
algorithm. Figure 2 shows the position-updating procedure for two consecutive iterations
of the SCSO algorithm.
\vec{r}_G = s_M - \left( \frac{s_M \times iter_c}{iter_{max}} \right) \quad (12)

\vec{R} = 2 \times \vec{r}_G \times rand(0, 1) - \vec{r}_G \quad (13)

\vec{r} = \vec{r}_G \times rand(0, 1) \quad (14)

\vec{Pos}(t + 1) = \vec{r} \cdot \left( \vec{Pos}_{bc}(t) - rand(0, 1) \cdot \vec{Pos}_c(t) \right) \quad (15)

\vec{Pos}(t + 1) = \vec{Pos}_b(t) - \vec{r} \cdot \vec{Pos}_{rnd} \cdot \cos(\theta) \quad (16)

\vec{Pos}_{rnd} = \left| rand(0, 1) \cdot \vec{Pos}_b(t) - \vec{Pos}_c(t) \right| \quad (17)
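The following Python listing is a minimal sketch of one SCSO iteration implementing Eqs. (12)-(17), assuming minimization. The roulette-wheel angle selection is simplified here to a uniform draw over [0, 360] degrees, and the names scso_step and pos_best are ours, not from the original implementation.

import numpy as np

def scso_step(pop, pos_best, iter_c, iter_max, lb, ub, s_M=2.0):
    # One SCSO iteration (a sketch of Eqs. 12-17).
    r_G = s_M - (s_M * iter_c / iter_max)          # general sensitivity range, Eq. (12)
    new_pop = np.empty_like(pop)
    for i in range(len(pop)):
        R = 2.0 * r_G * np.random.rand() - r_G     # phase-control parameter, Eq. (13)
        r = r_G * np.random.rand()                 # this cat's sensitivity range, Eq. (14)
        if abs(R) > 1:                             # searching the prey (exploration), Eq. (15)
            new = r * (pos_best - np.random.rand() * pop[i])
        else:                                      # attacking the prey (exploitation), Eqs. (16)-(17)
            theta = np.deg2rad(np.random.uniform(0.0, 360.0))  # simplified angle choice
            pos_rnd = np.abs(np.random.rand() * pos_best - pop[i])
            new = pos_best - r * pos_rnd * np.cos(theta)
        new_pop[i] = np.clip(new, lb, ub)
    return new_pop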

3 HBASCSO Algorithm

The exploration phase of an algorithm plays a very important role in optimization
performance, both in speeding up convergence and in avoiding local optima. Likewise,
exploitation contributes significantly to the performance of algorithms. It is common
for hybrid algorithms to combine the advantages of metaheuristic algorithms while
offsetting their disadvantages. This study hybridizes two algorithms: the honey badger
algorithm (HBA) and the sand cat swarm optimization (SCSO) algorithm. The previous
section provided in-depth descriptions

Fig. 2  Position updating between iteration i (a) and iteration i+1 (b) [22]

of these algorithms. Both algorithms are inspired by animal behaviour in nature, and each
has strong abilities to find the optimal solution, but both have limitations. Furthermore, the
no free lunch (NFL) theorem [44] indicates that no single algorithm can solve all
optimization problems. Both of these algorithms are simple to implement and reasonable
in terms of cost and time complexity; due to these characteristics, they are able to find the
optimal solution in a reasonable amount of time.

As a first step, a uniform distribution of randomly generated solutions is used to fill the
search space, covering both sand cats and honey badgers. Then, the search space boundary
is controlled by checking the population: if search agents are found outside the boundary,
they are amended. A fitness function is then calculated; to ensure that the solutions are
feasible and optimal, the fitness function must be satisfied. A parameter called a controls
the switch between digging and attacking; it is a random value between 0 and 1. As long
as the value of a is smaller than 0.5, the HBASCSO algorithm uses Eq. (15.a), the digging
phase, whose motion is similar to a cardioid [19]. Otherwise, the HBASCSO algorithm
uses Eq. (15.b) to attack the prey; in this equation, cos(β) is used, where β is the ability of
the search agents to attack the food. The pseudocode and flowchart are given in
Algorithm 1 and Fig. 3.
\vec{X}(t + 1) = \begin{cases} \vec{r} \cdot \left[ \vec{Pos}_c(t) + F \times \beta \times I \times \vec{Pos}_c(t) + F \times r_3 \times \alpha \times d_i \times \left| \cos(2\pi r_4) \times [1 - \cos(2\pi r_5)] \right| \right], & a \le 0.5 \quad (15.\text{a}) \\ \vec{r} \cdot \left( \vec{Pos}_{bc}(t) - \cos(\beta) \cdot \vec{Pos}_c(t) \right), & a > 0.5 \quad (15.\text{b}) \end{cases}
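A minimal Python sketch of this hybrid update is given below; it reuses an HBA-style digging step and an SCSO-style attack step under the switch parameter a described above. The name hbascso_step and the per-agent draw of a are our illustrative assumptions, not the authors' code.

import numpy as np

def hbascso_step(pop, pos_best, t, t_max, lb, ub, beta=6.0, C=2.0, s_M=2.0):
    # One HBASCSO iteration: Eq. (15.a) digging versus Eq. (15.b) attacking.
    alpha = C * np.exp(-t / t_max)                 # HBA density factor
    r_G = s_M - (s_M * t / t_max)                  # SCSO general sensitivity range
    new_pop = np.empty_like(pop)
    for i in range(len(pop)):
        r = r_G * np.random.rand()                 # per-agent sensitivity range
        a = np.random.rand()                       # switch between digging and attacking
        if a <= 0.5:                               # digging phase, Eq. (15.a)
            F = 1.0 if np.random.rand() <= 0.5 else -1.0
            d_i = pos_best - pop[i]
            S = np.sum((pop[i] - pop[(i + 1) % len(pop)]) ** 2)
            I = np.random.rand() * S / (4.0 * np.pi * np.sum(d_i ** 2) + 1e-30)
            r3, r4, r5 = np.random.rand(3)
            new = r * (pop[i] + F * beta * I * pop[i]
                       + F * r3 * alpha * d_i
                       * np.abs(np.cos(2 * np.pi * r4) * (1 - np.cos(2 * np.pi * r5))))
        else:                                      # attacking phase, Eq. (15.b)
            new = r * (pos_best - np.cos(beta) * pop[i])
        new_pop[i] = np.clip(new, lb, ub)
    return new_pop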


Fig. 3  The flowchart of the proposed algorithm

4 Results and Analysis

The purpose of this section is to evaluate the performance of the HBASCSO algo-
rithm through the use of benchmark functions. A comparison is made between the pro-
posed HBASCSO algorithm and seventeen popular algorithms: the honey badger algorithm
(HBA) [21], sand cat swarm optimization (SCSO) [22], the grey wolf optimizer (GWO) [45],
the whale optimization algorithm (WOA) [17], Harris hawks optimization (HHO) [20], the
sine cosine algorithm (SCA) [46], particle swarm optimization (PSO) [47], the salp swarm
algorithm (SSA) [48], the gravitational search algorithm (GSA) [49], Fick's law algorithm
(FLA) [50], Henry gas solubility optimization (HGS) [51], moth-flame optimization
(MFO) [52], Bonobo optimization (BO) [53], artificial ecosystem-based optimization
(AEO) [54], the multi-verse optimizer (MVO) [55], the seagull optimization algorithm
(SOA) [56], and the slime mould algorithm (SMA) [57].

Algorithm 1  Proposed hybrid optimization algorithm pseudocode
This study utilizes a greater number of metaheuristic algorithms than is usually the
case. To analyze the performance and variety of the proposed algorithms for each group
of benchmark functions, different metaheuristic algorithms have been selected. The bench-
mark functions used for this study are those from CEC2015 [58, 59], CEC2017 [60], and
CEC2019 [61]. In order to increase the accuracy of the analysis, three sets of benchmark
functions were used. All experiments are conducted in the same environment, and all
algorithms are simulated under the same settings, using 30 independent runs with 30 search
agents and 500 iterations. Independent runs must be conducted to monitor the effects
generated by random parameters. The parameter values of each metaheuristic algorithm
are presented in Table 1.
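As an illustration of this protocol, the sketch below collects the final fitness of 30 independent runs and reports the best, worst, mean, and standard deviation, together with the Wilcoxon rank-sum comparison used in the statistical analysis; run_optimizer is a hypothetical placeholder for any of the compared algorithms, assumed to return the final best fitness of one run.

import numpy as np
from scipy.stats import ranksums

def evaluate(run_optimizer, fitness, runs=30, agents=30, iters=500):
    # Final fitness value of each of the independent runs.
    finals = np.array([run_optimizer(fitness, agents, iters) for _ in range(runs)])
    return finals.min(), finals.max(), finals.mean(), finals.std(), finals

# Hypothetical usage on one benchmark function:
# best, worst, mean, std, a = evaluate(hbascso, sphere)
# _, _, _, _, b = evaluate(hba, sphere)
# stat, p = ranksums(a, b)        # non-parametric Wilcoxon rank-sum test
# print("difference significant at the 5% level:", p < 0.05)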
Using benchmark functions, metaheuristic algorithms can be evaluated for their effec-
tiveness and efficiency. The CEC 2014 and 2015 benchmark functions are presented in
Table 2. This group consists of three types of functions: unimodal, multimodal, and fixed-
dimension multimodal. Within the unimodal benchmark functions (max or min), there is
only one global optimum. The multimodal function has both a global and a local opti-
mum, as the name implies. It is important to note that the dimension of the fixed-dimension
multimodal functions cannot be modified, unlike the other two categories of benchmark func-
tions. Table 3 presents the second set of benchmark functions. Metaheuristic algorithms
are measured using this type of benchmark in the Congress on Evolutionary Computation
[60]. The benchmark functions in this section are more challenging. In this set, there are
four groups of functions: unimodal, simple multimodal, hybrid, and composition functions.
The third set of benchmark functions (CEC-C06 2019), listed in Table 4, is examined to
demonstrate the algorithm's ability to handle large-scale optimization problems.

4.1 Result Analysis for the Benchmark Functions CEC2014-2015

The analysis for each algorithm with different sets of benchmark functions is explained
in Tables 5–8. An overview of the results, such as average, worst, best, and stand-
ard deviation, can be found in the appropriate tables. As mentioned before, this
paper used three different sets of benchmark functions as well as some recently pro-
posed metaheuristic algorithms to compare and evaluate the proposed algorithm. In
Table 5, the results for the first set of benchmark functions are presented. The obtained
results for unimodal functions with different metaheuristic algorithms show that the
HBASCSO algorithm has good performance in the functions F1, F2, F3, F4, and F7.
In this type of benchmark function, there is only one optimal solution. The HHO algo-
rithm performance in functions F5 and F6 is better than others. Table 6 illustrates the
results of multimodal functions. The obtained results for the functions F8 to F13 dem-
onstrate that the HBASCSO algorithm provides optimal results for the functions F9,
F10, and F11. While the results for the HBA algorithm for those functions were the
same as the proposed algorithm, the SCSO algorithm results for those functions were
not the same. As a result, the proposed hybrid algorithm can improve the functional-
ity of the HBA and SCSO algorithms. Hybrid metaheuristic algorithms are used to capitalize
on the advantages of their component algorithms, and it can be observed that the HBA and
SCSO explore and exploit efficiently when hybridized.

13
Multimedia Tools and Applications

Table 1  Algorithm parameter settings for the compared algorithms

HBA, SCSO, HBASCSO: C = 2; β = 6; sensitivity range (rG) = [2, 0]; phases control range (R) = [−2rG, 2rG]
SCA: a = [2, 0]; r1, r2, r3, r4 = rand
HGS: k = 0.03
WOA: A = [2, 0]; a = [2, 0]; C = 2·rand(0, 1); l = [−1, 1]; b = 1
SMA: z = 0.03
PSO: maximum weight = 0.9; minimum weight = 0.4
MFO: a = [−2, −1]; b = 1
GWO: a = [2, 0]; A = [2, 0]; C = 2·rand(0, 1)
HHO: E0 = [−1, 1]
SSA: initial speed (v0) = 0
AEO: h = 2·rand
GSA: alpha = 20; gravitational constant G0 = 100
FLA: D, C1, C2, C3, C4, C5 = 0.1, 5, 2, 0.1, 0.2, 2
BO: scab = 1.3; scsb = 1.45; rcpp = 0.0039
MVO: WEPmin = 0.2; WEPmax = 1
SOA: Fc = [2, 0]

For functions F14-F23, which are fixed-dimension functions, the obtained results are
presented in Table 7. Based on this table, the HBASCSO determines the global optimum
for F15, F16, F17, F18, F19, and F20. It was observed that most metaheuristic algorithms
can find the global optimum of the fixed-dimension benchmark functions, but for function
F20 the proposed algorithm's mean result is better than the others. Besides, for functions
F21 and F23, the GWO algorithm always finds the global optimum. Table 8 statistically
ranks all algorithms based on the mean value achieved by each algorithm. Figure 4 shows
the most successful optimization algorithm, based on an analysis of the total rank summary
of all optimization algorithms.

4.2 Result Analysis for the Benchmark Functions CEC2017

To perform the numerical validation analysis, 29 benchmark functions from CEC2017 were
used. Performance evaluations of many metaheuristic algorithms are conducted using these
functions. Among the CEC2017 functions, four types are distinguished: unimodal (F1 and
F3), simple multimodal (F4-F10), hybrid (F11-F20), and composition (F21-F30) functions.
Table 3 presents the specifications of these functions. These benchmarks have been used to
evaluate the HBASCSO, with comparisons against algorithms such as HBA, SCSO, SSA,
GSA, FLA, HGS, and MFO. Table 9 summarizes the results of the experiments, reporting
the average, worst, best, and standard deviation.
In terms of their average results, the HBASCSO's results are superior on these bench-
marks. By analyzing the average ranking values of the algorithms involved, the HBASCSO

Table 2  Benchmark functions (CEC14, 15)

F1  Sphere: f_1(x) = \sum_{i=1}^{n} x_i^2; dim = 30, range = [-100, 100], f_min = 0, Unimodal
F2  Schwefel 2.22: f_2(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|; dim = 30, range = [-10, 10], f_min = 0, Unimodal
F3  Schwefel 1.2: f_3(x) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2; dim = 30, range = [-100, 100], f_min = 0, Unimodal
F4  Schwefel 2.21: f_4(x) = \max_i \{ |x_i|, 1 \le i \le n \}; dim = 30, range = [-100, 100], f_min = 0, Unimodal
F5  Generalized Rosenbrock: f_5(x) = \sum_{i=1}^{n-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]; dim = 30, range = [-30, 30], f_min = 0, Unimodal
F6  Step: f_6(x) = \sum_{i=1}^{n} (\lfloor x_i + 0.5 \rfloor)^2; dim = 30, range = [-100, 100], f_min = 0, Unimodal
F7  Quartic: f_7(x) = \sum_{i=1}^{n} i x_i^4 + \text{random}[0, 1); dim = 30, range = [-1.28, 1.28], f_min = 0, Unimodal
F8  Generalized Schwefel: f_8(x) = \sum_{i=1}^{n} -x_i \sin(\sqrt{|x_i|}); dim = 30, range = [-500, 500], f_min = -418.9829 × 5, Multimodal
F9  Rastrigin: f_9(x) = \sum_{i=1}^{n} \left[ x_i^2 - 10 \cos(2\pi x_i) + 10 \right]; dim = 30, range = [-5.12, 5.12], f_min = 0, Multimodal
F10 Ackley: f_{10}(x) = -20 \exp\left(-0.2 \sqrt{\tfrac{1}{n} \sum_{i=1}^{n} x_i^2}\right) - \exp\left(\tfrac{1}{n} \sum_{i=1}^{n} \cos(2\pi x_i)\right) + 20 + e; dim = 30, range = [-32, 32], f_min = 0, Multimodal
F11 Griewank: f_{11}(x) = \tfrac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left(\tfrac{x_i}{\sqrt{i}}\right) + 1; dim = 30, range = [-600, 600], f_min = 0, Multimodal
F12 Generalized Penalized 1: f_{12}(x) = \tfrac{\pi}{n} \left\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 \left[ 1 + 10 \sin^2(\pi y_{i+1}) \right] + (y_n - 1)^2 \right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4), with y_i = 1 + \tfrac{x_i + 1}{4} and u(x_i, a, k, m) = k(x_i - a)^m for x_i > a; 0 for -a \le x_i \le a; k(-x_i - a)^m for x_i < -a; dim = 30, range = [-50, 50], f_min = 0, Multimodal
F13 Generalized Penalized 2: f_{13}(x) = 0.1 \left\{ \sin^2(3\pi x_1) + \sum_{i=1}^{n} (x_i - 1)^2 \left[ 1 + \sin^2(3\pi x_i + 1) \right] + (x_n - 1)^2 \left[ 1 + \sin^2(2\pi x_n) \right] \right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4); dim = 30, range = [-50, 50], f_min = 0, Multimodal
F14 Shekel's Foxholes: f_{14}(x) = \left( \tfrac{1}{500} + \sum_{j=1}^{25} \tfrac{1}{j + \sum_{i=1}^{2} (x_i - a_{ij})^6} \right)^{-1}; dim = 2, range = [-65, 65], f_min = 1, Fixed-dimension
F15 Kowalik: f_{15}(x) = \sum_{i=1}^{11} \left[ a_i - \tfrac{x_1 (b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4} \right]^2; dim = 4, range = [-5, 5], f_min = 0.00030, Fixed-dimension
F16 Six-Hump Camel-Back: f_{16}(x) = 4x_1^2 - 2.1x_1^4 + \tfrac{1}{3}x_1^6 + x_1 x_2 - 4x_2^2 + 4x_2^4; dim = 2, range = [-5, 5], f_min = -1.0316, Fixed-dimension
F17 Branin: f_{17}(x) = \left( x_2 - \tfrac{5.1}{4\pi^2} x_1^2 + \tfrac{5}{\pi} x_1 - 6 \right)^2 + 10 \left( 1 - \tfrac{1}{8\pi} \right) \cos x_1 + 10; dim = 2, range = [-5, 5], f_min = 0.398, Fixed-dimension
F18 Goldstein-Price: f_{18}(x) = \left[ 1 + (x_1 + x_2 + 1)^2 (19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2) \right] \times \left[ 30 + (2x_1 - 3x_2)^2 (18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2) \right]; dim = 2, range = [-2, 2], f_min = 3, Fixed-dimension
F19 Hartman's Family (3 dimensions): f_{19}(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{3} a_{ij} (x_j - p_{ij})^2 \right); dim = 3, range = [1, 3], f_min = -3.86, Fixed-dimension
F20 Hartman's Family (6 dimensions): f_{20}(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{6} a_{ij} (x_j - p_{ij})^2 \right); dim = 6, range = [0, 1], f_min = -3.32, Fixed-dimension
F21 Shekel-5: f_{21}(x) = -\sum_{i=1}^{5} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}; dim = 4, range = [0, 10], f_min = -10.1532, Fixed-dimension
F22 Shekel-7: f_{22}(x) = -\sum_{i=1}^{7} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}; dim = 4, range = [0, 10], f_min = -10.4028, Fixed-dimension
F23 Shekel-10: f_{23}(x) = -\sum_{i=1}^{10} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}; dim = 4, range = [0, 10], f_min = -10.5363, Fixed-dimension


Table 3  Review of CEC2017 benchmark function problems


Type Function No Function Description fmin

Unimodal functions 1 Shifted and rotated bent cigar function 100


3 Shifted and rotated Zakharov function 300
Simple multimodal functions 4 Shifted and rotated Rosenbrock’s function 400
5 Shifted and rotated Rastrigin’s function 500
6 Shifted and rotated expanded Scaffer’s F6 function 600
7 Shifted and rotated Lunacek Bi-Rastrigin function 700
8 Shifted and rotated Non-continuous Rastrigin’s func- 800
tion
9 Shifted and rotated Levy function 900
10 Shifted and rotated Schwefel’s function 1000
Hybrid functions 11 Hybrid function 1 (N = 3) 1100
12 Hybrid function 2 (N = 3) 1200
13 Hybrid function 3 (N = 3) 1300
14 Hybrid function 4 (N = 4) 1400
15 Hybrid function 5 (N = 4) 1500
16 Hybrid function 6 (N = 4) 1600
17 Hybrid function 7 (N = 5) 1700
18 Hybrid function 8 (N = 5) 1800
19 Hybrid function 9 (N = 5) 1900
20 Hybrid function 10 (N = 5) 2000
Composition functions 21 Composition function 1 (N = 3) 2100
22 Composition function 2 (N = 3) 2200
23 Composition function 3 (N = 4) 2300
24 Composition function 4 (N = 4) 2400
25 Composition function 5 (N = 5) 2500
26 Composition function 6 (N = 5) 2600
27 Composition function 7 (N = 6) 2700
28 Composition function 8 (N = 6) 2800
29 Composition function 9 (N = 3) 2900
30 Composition function 10 (N = 3) 3000

Table 4  Ten modern benchmark test functions from CEC2019 (CEC-C06)


Function Benchmark Function dim range fmin

CEC01 Storn’s Chebyshev Polynomial Fitting Problem 9 [-8192,8192] 1


CEC02 Inverse Hilbert Matrix Problem 16 [-16384, 16384] 1
CEC03 Lennard–Jones Minimum Energy Cluster 8 [-4,4] 1
CEC04 Rastrigin’s Function 10 [-100,100] 1
CEC05 Griewank’s Function 10 [-100,100] 1
CEC06 Weierstrass Function 10 [-100,100] 1
CEC07 Modified Schwefel’s Function 10 [-100,100] 1
CEC08 Expanded Schaffer’s F6 Function 10 [-100,100] 1
CEC09 Happy Cat Function 10 [-100,100] 1
CEC10 Ackley Function 10 [-100,100] 1


Table 5  Results for F1-F7 Algorithm (CEC2014-2015)


Function HBASCSO HBA SCSO GWO WOA HHO SCA PSO

F1 Mean 6.07E-274 2.62E-98 1.31E-121 1.04E-13 2.17E-57 3.76E-82 3.01E + 14 6.84E-03


Worst 1.13E-272 7.64E-97 2.03E-120 6.29E-13 6.52E-56 1.09E-80 9.75E + 14 8.94E-02
Best 7.10E-282 1.51E-110 7.53E-129 3.19E-15 5.12E-77 6.88E-100 4.84E-02 4.73E-04
STD 0.00E + 00 1.39E-97 3.96E-121 1.52E-13 1.19E-56 1.99E-81 2.72E + 14 1.60E-02
F2 Mean 8.92E-143 4.45E-45 2.11E-58 1.28E-02 6.86E-37 1.21E-34 2.21E-02 2.67E + 13
Worst 2.00E-141 1.24E-43 3.07E-57 5.20E-02 7.95E-36 3.48E-33 1.71E-01 1.01E + 14
Best 3.36E-148 8.14E-53 9.73E-62 1.47E-03 1.52E-42 1.94E-44 7.68E-04 1.25E-03
STD 3.76E-142 2.26E-44 6.87E-58 1.16E-02 1.84E-36 6.35E-34 3.71E-02 4.50E + 13
F3 Mean 3.73E-170 1.32E-84 1.56E-82 6.25E + 08 4.28E + 14 2.77E-61 3.91E + 14 2.70E + 14
Worst 9.64E-169 3.64E-83 4.57E-81 4.52E + 09 7.53E + 14 3.74E-60 1.00E + 15 9.63E + 14
Best 8.81E-192 3.18E-96 5.58E-97 1.55E-03 1.78E + 14 2.81E-82 1.01E + 14 1.02E + 14
STD 0.00E + 00 6.63E-84 8.34E-82 1.01E + 09 1.52E + 14 8.96E-61 3.37E + 14 2.42E + 14
F4 Mean 1.29E-108 2.15E-35 3.08E-42 7.50E + 07 5.19E + 14 1.67E-32 3.50E + 14 7.00E + 14
Worst 2.08E-107 6.12E-34 8.69E-41 2.62E + 08 8.89E + 14 4.36E-31 5.91E + 14 9.82E + 14
Best 1.48E-113 1.58E-42 8.20E-48 8.61E + 06 1.03E + 14 1.31E-43 1.18E + 14 4.19E + 14
STD 3.87E-108 1.12E-34 1.58E-41 5.59E + 07 2.35E + 14 8.00E-32 1.23E + 14 1.60E + 14
F5 Mean 2.71E + 01 2.81E + 01 2.40E + 01 2.69E + 01 2.80E + 01 1.34E-02 5.97E + 04 6.12E + 03
Worst 2.88E + 01 2.88E + 01 2.63E + 01 2.85E + 01 2.88E + 01 6.36E-02 8.49E + 05 9.01E + 04
Best 2.56E + 01 2.62E + 01 2.28E + 01 2.59E + 01 2.73E + 01 7.99E-05 6.54E + 01 2.85E + 01
STD 9.13E-01 7.30E-01 6.99E-01 6.36E-01 4.43E-01 1.72E-02 1.66E + 05 2.28E + 04
F6 Mean 1.42E + 00 1.87E + 00 1.70E-02 8.63E-01 3.75E-01 1.75E-04 1.13E + 01 8.76E-03
Worst 2.33E + 00 3.54E + 00 2.50E-01 1.75E + 00 7.95E-01 7.72E-04 5.51E + 01 4.92E-02
Best 6.15E-01 1.02E + 00 8.37E-06 1.72E-01 9.32E-02 1.07E-06 4.47E + 00 2.86E-04
STD 4.87E-01 6.66E-01 6.33E-02 4.53E-01 1.91E-01 1.84E-04 9.82E + 00 1.14E-02
F7 Mean 1.38E-04 2.14E-04 5.03E-04 2.08E-03 3.54E-03 1.42E-04 1.35E-01 5.20E-02
Worst 5.49E-04 1.67E-03 1.98E-03 4.78E-03 1.66E-02 5.19E-04 8.59E-01 9.06E-02
Best 1.96E-05 9.75E-06 4.19E-05 3.36E-04 7.23E-05 7.61E-07 5.03E-03 2.27E-02
STD 1.22E-04 3.28E-04 4.51E-04 1.08E-03 4.38E-03 1.14E-04 2.04E-01 1.74E-02

identifies benchmark functions that provide the optimum results. A comprehensive analysis
of the HBASCSO algorithm was conducted, along with an examination of its exploration
and exploitation capabilities over the CEC2017 test functions. A better balance
between exploration and exploitation is possible after the hybridization of two metaheuris-
tic algorithms. Hybrid algorithms benefit from the main advantages of both HBA and
SCSO algorithms, even though they have operators to control a trade-off. During explora-
tion and exploitation of the HBASCSO, all search agents maintain their characteristics and
activity, which allows for efficient optimization of the search area. Table 10 summarizes
the ranking results for the HBASCSO algorithm and other algorithms. According to this
table, the algorithm that finds values close to the global optimum is the one with the low-
est overall ranking. As there are eight algorithms compared, the algorithm with the lowest
ranking can find results which are very close to the optimum. In contrast, the algorithm
with the highest value can find the worst results.

Table 6  Results for F8-F13 Algorithm (CEC2014-2015)
Function HBASCSO HBA SCSO GWO WOA HHO SCA PSO

F8 Mean -6.28E + 03 -6.59E + 03 -8.81E + 03 -5.67E + 03 -1.06E + 04 -1.26E + 04 -3.87E + 03 -8.36E + 03
Worst -5.00E + 03 -4.52E + 03 -6.12E + 03 -3.82E + 03 -8.07E + 03 -1.23E + 04 -3.35E + 03 -7.22E + 03
Best -7.84E + 03 -7.96E + 03 -1.02E + 04 -7.60E + 03 -1.26E + 04 -1.26E + 04 -4.58E + 03 -9.47E + 03
STD 6.44E + 02 7.92E + 02 1.01E + 03 1.11E + 03 1.67E + 03 4.08E + 01 2.99E + 02 6.20E + 02
F9 Mean 0.00E + 00 0.00E + 00 0.00E + 00 3.17E + 00 1.89E-15 0.00E + 00 3.43E + 01 5.20E + 01
Worst 0.00E + 00 0.00E + 00 0.00E + 00 1.68E + 01 5.68E-14 0.00E + 00 1.12E + 02 7.67E + 01
Best 0.00E + 00 0.00E + 00 0.00E + 00 5.68E-14 0.00E + 00 0.00E + 00 3.18E-03 2.60E + 01
STD 0.00E + 00 0.00E + 00 0.00E + 00 4.09E + 00 1.04E-14 0.00E + 00 2.78E + 01 1.28E + 01
F10 Mean 4.44E-16 4.44E-16 6.65E-01 1.05E-13 4.35E-15 4.44E-16 1.55E + 01 4.21E-01
Worst 4.44E-16 4.44E-16 2.00E + 01 1.75E-13 7.55E-15 4.44E-16 2.03E + 01 1.65E + 00
Best 4.44E-16 4.44E-16 4.44E-16 7.51E-14 4.44E-16 4.44E-16 4.31E-01 7.42E-03
STD 3.01E-31 3.01E-31 3.64E + 00 2.14E-14 2.35E-15 3.01E-31 7.85E + 00 5.63E-01
F11 Mean 0.00E + 00 0.00E + 00 0.00E + 00 2.72E-03 8.49E-03 0.00E + 00 9.48E-01 4.73E-02
Worst 0.00E + 00 0.00E + 00 0.00E + 00 1.94E-02 1.32E-01 0.00E + 00 2.01E + 00 2.10E-01
Best 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 1.41E-02 7.33E-04
STD 0.00E + 00 0.00E + 00 0.00E + 00 6.30E-03 3.23E-02 0.00E + 00 4.49E-01 4.94E-02
F12 Mean 7.20E-02 9.12E-02 6.61E-04 4.61E-02 2.02E-02 7.88E-06 3.11E + 03 1.68E-01
Worst 1.10E-01 1.82E-01 6.69E-03 1.18E-01 4.92E-02 3.87E-05 5.59E + 04 1.67E + 00
Best 3.00E-02 3.27E-02 5.44E-07 1.35E-02 7.93E-03 4.00E-08 5.91E-01 1.79E-05
STD 2.24E-02 3.65E-02 2.00E-03 2.74E-02 1.09E-02 1.20E-05 1.14E + 04 3.38E-01
F13 Mean 1.97E + 00 2.43E + 00 4.82E-01 6.10E-01 5.10E-01 9.48E-05 6.51E + 05 1.95E-01
Worst 2.60E + 00 2.80E + 00 1.15E + 00 1.19E + 00 1.01E + 00 3.55E-04 1.62E + 07 2.55E + 00
Best 9.47E-01 1.56E + 00 2.14E-04 1.99E-01 1.01E-01 2.24E-07 3.79E + 00 5.99E-03
STD 4.63E-01 3.15E-01 3.29E-01 2.35E-01 2.50E-01 9.81E-05 2.96E + 06 4.90E-01
Table 7  Results for F14-F23 Algorithm (CEC2014-2015)
Function HBASCSO HBA SCSO GWO WOA HHO SCA PSO

F14 Mean 5.95E + 00 3.81E + 00 1.56E + 00 4.85E + 00 2.70E + 00 1.10E + 00 1.59E + 00 9.98E-01
Worst 1.27E + 01 1.27E + 01 5.93E + 00 1.08E + 01 1.08E + 01 1.99E + 00 2.98E + 00 9.98E-01
Best 9.98E-01 9.98E-01 9.98E-01 9.98E-01 9.98E-01 9.98E-01 9.98E-01 9.98E-01
STD 4.46E + 00 3.99E + 00 1.15E + 00 4.06E + 00 3.30E + 00 3.03E-01 9.24E-01 3.39E-16
F15 Mean 3.47E-04 3.68E-04 6.70E-03 6.40E-03 1.10E-03 4.34E-04 1.06E-03 3.36E-03
Worst 5.28E-04 1.22E-03 6.33E-02 2.04E-02 1.29E-02 1.23E-03 1.68E-03 2.04E-02
Best 3.08E-04 3.07E-04 3.07E-04 3.07E-04 3.10E-04 3.09E-04 4.59E-04 3.07E-04


STD 4.49E-05 1.72E-04 1.36E-02 9.30E-03 2.26E-03 1.91E-04 3.85E-04 6.67E-03
F16 Mean -1.03E + 00 -1.03E + 00 -1.03E + 00 -1.03E + 00 -1.03E + 00 -1.03E + 00 -1.03E + 00 -1.03E + 00
Worst -1.03E + 00 -1.03E + 00 -1.03E + 00 -1.03E + 00 -1.03E + 00 -1.03E + 00 -1.03E + 00 -1.03E + 00
Best -1.03E + 00 -1.03E + 00 -1.03E + 00 -1.03E + 00 -1.03E + 00 -1.03E + 00 -1.03E + 00 -1.03E + 00
STD 4.52E-06 6.94E-10 0.00E + 00 3.27E-08 7.74E-10 6.79E-10 5.81E-10 6.20E-10
F17 Mean 3.98E-01 3.98E-01 3.98E-01 3.98E-01 3.98E-01 3.98E-01 3.98E-01 3.98E-01
Worst 3.99E-01 3.98E-01 3.98E-01 3.99E-01 3.98E-01 3.98E-01 3.98E-01 3.98E-01
Best 3.98E-01 3.98E-01 3.98E-01 3.98E-01 3.98E-01 3.98E-01 3.98E-01 3.98E-01
STD 1.76E-04 1.04E-07 1.13E-16 2.00E-04 1.16E-07 6.59E-08 6.37E-08 5.49E-08
F18 Mean 3.00E + 00 3.00E + 00 3.00E + 00 3.00E + 00 3.00E + 00 3.00E + 00 3.00E + 00 3.00E + 00
Worst 3.00E + 00 3.00E + 00 3.00E + 00 3.00E + 00 3.00E + 00 3.00E + 00 3.00E + 00 3.00E + 00
Best 3.00E + 00 3.00E + 00 3.00E + 00 3.00E + 00 3.00E + 00 3.00E + 00 3.00E + 00 3.00E + 00
STD 5.62E-05 2.89E-06 4.47E-15 5.26E-05 9.67E-06 1.05E-05 7.74E-06 4.19E-06
F19 Mean -3.86E + 00 -3.86E + 00 -3.86E + 00 -3.86E + 00 -3.86E + 00 -3.86E + 00 -3.86E + 00 -3.86E + 00
Worst -3.85E + 00 -3.85E + 00 -3.09E + 00 -3.86E + 00 -3.85E + 00 -3.85E + 00 -3.85E + 00 -3.85E + 00
Best -3.86E + 00 -3.86E + 00 -3.86E + 00 -3.86E + 00 -3.86E + 00 -3.86E + 00 -3.86E + 00 -3.86E + 00
STD 2.71E-03 3.33E-03 1.41E-01 1.61E-03 2.92E-03 3.05E-03 3.30E-03 2.78E-03
F20 Mean -3.28E + 00 -3.20E + 00 -3.27E + 00 -3.26E + 00 -3.17E + 00 -3.15E + 00 -3.24E + 00 -3.18E + 00

Worst -3.09E + 00 -2.43E + 00 -2.84E + 00 -2.43E + 00 -1.84E + 00 -2.43E + 00 -3.02E + 00 -2.43E + 00

Best -3.32E + 00 -3.32E + 00 -3.32E + 00 -3.32E + 00 -3.32E + 00 -3.32E + 00 -3.32E + 00 -3.32E + 00
STD 1.67E-01 1.72E-01 1.05E-01 7.00E-02 3.44E-01 2.01E-01 9.78E-02 1.95E-01
F21 Mean -9.34E + 00 -5.99E + 00 -5.83E + 00 -9.90E + 00 -5.77E + 00 -5.66E + 00 -5.91E + 00 -4.77E + 00
Worst -8.81E-01 -2.63E + 00 -2.63E + 00 -2.63E + 00 -8.81E-01 -5.05E + 00 -5.06E + 00 -8.81E-01
Best -1.02E + 01 -1.02E + 01 -1.02E + 01 -1.02E + 01 -1.02E + 01 -1.00E + 01 -1.02E + 01 -1.02E + 01
STD 2.49E + 00 2.16E + 00 2.02E + 00 1.37E + 00 2.13E + 00 1.57E + 00 1.93E + 00 2.08E + 00
F22 Mean -6.60E + 00 -6.48E + 00 -9.86E + 00 -1.02E + 01 -7.01E + 00 -6.37E + 00 -5.59E + 00 -6.82E + 00
Worst -5.09E + 00 -2.77E + 00 -9.12E-01 -5.09E + 00 -2.77E + 00 -9.12E-01 -9.10E-01 -3.72E + 00
Best -1.04E + 01 -1.04E + 01 -1.04E + 01 -1.04E + 01 -1.04E + 01 -1.04E + 01 -1.04E + 01 -1.04E + 01
STD 2.35E + 00 2.68E + 00 2.08E + 00 9.70E-01 2.88E + 00 2.59E + 00 2.39E + 00 2.59E + 00
F23 Mean -7.82E + 00 -6.07E + 00 -9.00E + 00 -1.01E + 01 -6.94E + 00 -6.70E + 00 -6.25E + 00 -6.21E + 00
Worst -3.82E + 00 -9.46E-01 -1.86E + 00 -2.42E + 00 -5.13E + 00 -1.86E + 00 -9.44E-01 -2.42E + 00
Best -1.05E + 01 -1.05E + 01 -1.05E + 01 -1.05E + 01 -1.05E + 01 -1.05E + 01 -1.05E + 01 -1.05E + 01
STD 2.59E + 00 2.39E + 00 3.14E + 00 1.75E + 00 2.59E + 00 2.85E + 00 2.81E + 00 2.52E + 00
Multimedia Tools and Applications
Multimedia Tools and Applications

Table 8  The rank summary for each metaheuristic algorithm on benchmark functions (CEC2014-2015)
Function HBASCSO HBA SCSO GWO WOA HHO SCA PSO
2015)
F2 1 3 2 5 6 4 7 8
F3 1 2 3 6 4 8 7 5
F4 1 3 2 6 4 7 5 8
F5 4 6 2 3 5 1 8 7
F6 6 8 3 5 4 1 2 7
F7 1 6 2 3 4 5 8 7
F8 3 4 5 2 7 8 1 6
F9 1 1 1 6 5 1 7 8
F10 1 1 7 5 4 1 8 6
F11 1 1 1 5 6 1 8 7
F12 5 6 4 3 2 1 8 7
F13 6 8 3 5 4 1 2 7
F14 8 6 3 7 5 2 4 1
F15 1 2 6 7 5 8 4 3
F16 1 1 1 1 1 1 1 1
F17 1 1 1 1 1 1 1 1
F18 1 1 1 1 1 1 1 1
F19 1 1 1 1 1 1 1 1
F20 1 5 2 3 7 8 4 6
F21 2 3 5 1 7 6 4 8
F22 5 6 2 1 3 7 8 4
F23 3 8 2 1 4 5 6 7
Total 56 86 61 84 95 83 113 123

4.3 Result Analysis for the Benchmark Functions CEC2019

Using the HBASCSO algorithm, the CEC2019 benchmark functions are examined, and the
results are compared with those of other well-known metaheuristics. This benchmark test
suite is referred to as CEC-C06, also known as "The 100-digit challenge" [63]. The 10
modern benchmark test functions are listed in Table 4 and are used for evaluating
metaheuristic algorithms on large-scale optimization problems. It can be seen from Table 4
that the first three functions have different dimensions, while functions 4 to 10 are shifted
and rotated within [-100, 100] to simulate a minimization problem in 10-dimensional space.
All of the functions in CEC2019 have their global optimum at the value 1, and all of the
functions are scalable.
The HBASCSO algorithm performs well on the CEC01, CEC02, CEC03, and CEC10
functions, as shown in Table 11. The goal of these functions is to analyze the exploration
and exploitation capabilities of metaheuristic algorithms. Table 12 shows that the
HBASCSO algorithm is ranked first in the rank summary and is competitive in the ranking
of optimum values. The SCSO and SMA algorithms also score highly in the overall rank
summary: the SCSO algorithm achieved optimal results in CEC04 and CEC05, while the
SMA algorithm obtained optimal results in CEC02, CEC07, and CEC09. A comparison of
the performance of the HBASCSO algorithm with newly


Fig. 4  Total of obtained rank for each metaheuristic algorithm

proposed algorithms shows that it is very competitive. At the same time, the trade-off
between the exploration phase and the exploitation phase has been evidently examined.

5 Discussion

The numerical results show that the HBASCSO algorithm works better than many other
metaheuristic and hybrid metaheuristic algorithms, such as HBA, SCSO, GWO, WOA,
HHO, SCA, PSO, SSA, GSA, FLA, HGS, MFO, BO, AEO, MVO, SOA, and SMA. In the
first part of the performance analysis, 23 test functions are used to compare the proposed
algorithm with HBA, SCSO, GWO, WOA, HHO, SCA, and PSO. Table 8 presents
the rank summary of the proposed algorithm compared with the previously mentioned
algorithms on the CEC 2014–2015 benchmark functions. The proposed algorithm took first
place in the total result, while the SCSO algorithm ranked second in this comparison. The
second comparison shows the difference between the proposed algorithm and HBA, SCSO,
SSA, GSA, FLA, HGS, and MFO algorithms on CEC2017 test functions (F1-F30). The
mean, worst, best, and standard deviation metrics are used to compare the proposed algo-
rithm with other optimization algorithms. Table 10 summarizes the comparison results.
The HBASCSO algorithm took first place in this table, and the SCSO ranked second with
little difference. The last section of the analysis compares the proposed algorithm with the


Table 9  Results for F1-F30 algorithms, D = 10 (CEC2017)


Function Proposed HBA SCSO SSA GSA FLA HGS MFO

F1 Mean 2.96E + 03 1.03E + 07 1.16E + 06 2.56E + 05 2.02E + 06 3.50E + 05 8.04E + 08 4.48E + 07


Worst 8.90E + 03 2.99E + 08 3.19E + 06 3.63E + 06 1.61E + 07 7.88E + 05 1.29E + 09 1.14E + 09
Best 2.18E + 02 3.41E + 03 1.89E + 05 8.44E + 03 2.05E + 05 1.19E + 05 3.91E + 08 1.14E + 02
STD 2.32E + 03 5.45E + 07 8.16E + 05 6.95E + 05 4.21E + 06 1.76E + 05 2.41E + 08 2.07E + 08
F3 Mean 3.00E + 02 7.96E + 02 3.57E + 02 1.04E + 03 1.69E + 03 3.02E + 02 1.70E + 03 1.55E + 03
Worst 3.00E + 02 5.01E + 03 4.56E + 02 5.76E + 03 6.90E + 03 3.04E + 02 4.54E + 03 1.15E + 04
Best 3.00E + 02 3.04E + 02 3.11E + 02 3.13E + 02 3.57E + 02 3.00E + 02 7.23E + 02 3.00E + 02
STD 3.50E-11 9.20E + 02 3.27E + 01 1.24E + 03 1.40E + 03 8.48E-01 9.21E + 02 2.28E + 03
F4 Mean 4.13E + 02 4.33E + 02 4.03E + 02 4.17E + 02 4.23E + 02 4.19E + 02 4.43E + 02 4.14E + 02
Worst 4.86E + 02 5.46E + 02 4.68E + 02 4.77E + 02 5.13E + 02 4.87E + 02 4.83E + 02 5.01E + 02
Best 4.01E + 02 4.01E + 02 4.00E + 02 4.06E + 02 4.00E + 02 4.00E + 02 4.22E + 02 4.03E + 02
STD 2.16E + 01 3.68E + 01 1.23E + 01 1.97E + 01 3.17E + 01 2.63E + 01 1.67E + 01 2.17E + 01
F5 Mean 5.25E + 02 5.34E + 02 5.18E + 02 5.17E + 02 5.48E + 02 5.49E + 02 5.46E + 02 5.28E + 02
Worst 5.38E + 02 5.69E + 02 5.44E + 02 5.32E + 02 5.89E + 02 5.80E + 02 5.68E + 02 5.61E + 02
Best 5.12E + 02 5.13E + 02 5.08E + 02 5.07E + 02 5.15E + 02 5.21E + 02 5.32E + 02 5.08E + 02
STD 7.00E + 00 1.25E + 01 9.14E + 00 8.36E + 00 1.89E + 01 1.55E + 01 7.71E + 00 1.17E + 01
F6 Mean 6.06E + 02 6.10E + 02 6.00E + 02 6.01E + 02 6.36E + 02 6.27E + 02 6.18E + 02 6.00E + 02
Worst 6.19E + 02 6.34E + 02 6.01E + 02 6.04E + 02 6.59E + 02 6.47E + 02 6.25E + 02 6.02E + 02
Best 6.01E + 02 6.01E + 02 6.00E + 02 6.00E + 02 6.09E + 02 6.07E + 02 6.13E + 02 6.00E + 02
STD 4.70E + 00 6.96E + 00 1.02E-01 9.68E-01 1.33E + 01 9.91E + 00 3.11E + 00 3.69E-01
F7 Mean 7.46E + 02 7.60E + 02 7.29E + 02 7.33E + 02 7.87E + 02 7.83E + 02 7.71E + 02 7.33E + 02
Worst 7.76E + 02 8.00E + 02 7.47E + 02 7.53E + 02 8.77E + 02 8.15E + 02 7.87E + 02 7.69E + 02
Best 7.23E + 02 7.35E + 02 7.18E + 02 7.13E + 02 7.24E + 02 7.48E + 02 7.58E + 02 7.17E + 02
STD 1.37E + 01 1.86E + 01 7.33E + 00 1.10E + 01 3.49E + 01 1.91E + 01 7.99E + 00 1.24E + 01
F8 Mean 8.22E + 02 8.30E + 02 8.16E + 02 8.15E + 02 8.36E + 02 8.29E + 02 8.41E + 02 8.23E + 02
Worst 8.31E + 02 8.44E + 02 8.30E + 02 8.39E + 02 8.85E + 02 8.51E + 02 8.60E + 02 8.40E + 02
Best 8.10E + 02 8.09E + 02 8.05E + 02 8.04E + 02 8.18E + 02 8.11E + 02 8.24E + 02 8.11E + 02
STD 4.58E + 00 7.94E + 00 5.81E + 00 8.42E + 00 1.42E + 01 9.99E + 00 7.79E + 00 6.78E + 00
F9 Mean 9.44E + 02 9.30E + 02 9.01E + 02 9.08E + 02 1.45E + 03 1.38E + 03 9.98E + 02 1.05E + 03
Worst 1.06E + 03 1.54E + 03 9.06E + 02 9.51E + 02 2.81E + 03 1.98E + 03 1.06E + 03 1.30E + 03
Best 9.00E + 02 9.00E + 02 9.00E + 02 9.00E + 02 9.60E + 02 1.04E + 03 9.26E + 02 9.01E + 02
STD 4.40E + 01 1.23E + 02 1.59E + 00 1.44E + 01 3.77E + 02 2.61E + 02 3.68E + 01 1.22E + 02
F10 Mean 1.76E + 03 1.96E + 03 1.80E + 03 1.60E + 03 2.06E + 03 2.04E + 03 2.24E + 03 1.84E + 03
Worst 2.65E + 03 2.58E + 03 3.44E + 03 2.21E + 03 2.72E + 03 2.69E + 03 2.60E + 03 2.34E + 03
Best 1.04E + 03 1.26E + 03 1.05E + 03 1.25E + 03 1.25E + 03 1.45E + 03 1.85E + 03 1.30E + 03
STD 3.60E + 02 2.86E + 02 4.80E + 02 2.58E + 02 3.02E + 02 3.22E + 02 2.02E + 02 2.50E + 02
F11 Mean 1.12E + 03 1.16E + 03 1.12E + 03 1.13E + 03 1.22E + 03 1.18E + 03 1.21E + 03 1.14E + 03
Worst 1.15E + 03 1.43E + 03 1.20E + 03 1.22E + 03 1.48E + 03 1.50E + 03 1.31E + 03 1.37E + 03
Best 1.10E + 03 1.11E + 03 1.10E + 03 1.10E + 03 1.11E + 03 1.11E + 03 1.14E + 03 1.10E + 03
STD 1.33E + 01 6.24E + 01 2.19E + 01 2.10E + 01 9.03E + 01 7.86E + 01 3.99E + 01 7.66E + 01
F12 Mean 1.21E + 06 1.45E + 06 1.53E + 04 5.79E + 05 2.70E + 06 1.97E + 06 1.29E + 07 1.31E + 06
Worst 6.41E + 06 5.80E + 06 6.18E + 04 2.31E + 06 1.06E + 07 1.44E + 07 3.53E + 07 5.74E + 06
Best 3.21E + 04 6.91E + 04 1.97E + 03 1.03E + 04 5.12E + 04 1.56E + 04 1.30E + 06 2.01E + 03
STD 1.70E + 06 1.61E + 06 1.64E + 04 7.13E + 05 2.74E + 06 3.04E + 06 8.27E + 06 2.19E + 06
F13 Mean 5.67E + 03 1.34E + 04 1.22E + 04 8.90E + 03 2.09E + 04 1.72E + 04 3.39E + 04 8.30E + 03
Worst 3.39E + 04 3.40E + 04 3.61E + 04 2.47E + 04 8.65E + 04 4.99E + 04 8.15E + 04 3.34E + 04
Best 1.45E + 03 2.27E + 03 1.87E + 03 2.16E + 03 2.44E + 03 1.85E + 03 4.02E + 03 1.51E + 03


STD 6.58E + 03 9.24E + 03 8.74E + 03 6.38E + 03 1.82E + 04 1.24E + 04 2.26E + 04 9.43E + 03


F14 Mean 1.55E + 03 1.96E + 03 1.49E + 03 3.13E + 03 1.82E + 03 1.55E + 03 1.61E + 03 2.24E + 03
Worst 1.69E + 03 5.21E + 03 1.59E + 03 7.27E + 03 5.24E + 03 1.95E + 03 2.10E + 03 1.15E + 04
Best 1.46E + 03 1.44E + 03 1.41E + 03 1.45E + 03 1.47E + 03 1.50E + 03 1.48E + 03 1.43E + 03
STD 5.90E + 01 1.21E + 03 4.19E + 01 1.90E + 03 8.58E + 02 8.04E + 01 1.47E + 02 1.87E + 03
F15 Mean 2.38E + 03 2.69E + 03 1.62E + 03 5.03E + 03 7.41E + 03 2.67E + 03 2.18E + 03 5.26E + 03
Worst 4.29E + 03 5.01E + 03 1.86E + 03 3.05E + 04 2.09E + 04 6.71E + 03 4.32E + 03 2.41E + 04
Best 1.57E + 03 1.56E + 03 1.52E + 03 1.69E + 03 1.93E + 03 1.61E + 03 1.61E + 03 1.57E + 03
STD 7.85E + 02 1.19E + 03 7.88E + 01 5.23E + 03 5.73E + 03 1.29E + 03 7.12E + 02 4.52E + 03
F16 Mean 1.72E + 03 1.75E + 03 1.76E + 03 1.71E + 03 1.84E + 03 1.87E + 03 1.73E + 03 1.68E + 03
Worst 1.99E + 03 1.98E + 03 1.98E + 03 1.99E + 03 2.12E + 03 2.03E + 03 1.90E + 03 1.85E + 03
Best 1.60E + 03 1.61E + 03 1.61E + 03 1.61E + 03 1.66E + 03 1.65E + 03 1.65E + 03 1.60E + 03
STD 1.00E + 02 1.11E + 02 1.10E + 02 1.04E + 02 1.40E + 02 1.08E + 02 6.80E + 01 7.95E + 01
F17 Mean 1.75E + 03 1.74E + 03 1.74E + 03 1.75E + 03 1.80E + 03 1.77E + 03 1.78E + 03 1.76E + 03
Worst 1.79E + 03 1.83E + 03 1.78E + 03 1.80E + 03 1.93E + 03 1.89E + 03 1.80E + 03 1.82E + 03
Best 1.73E + 03 1.70E + 03 1.71E + 03 1.72E + 03 1.73E + 03 1.72E + 03 1.75E + 03 1.73E + 03
STD 1.08E + 01 2.36E + 01 1.48E + 01 1.99E + 01 5.65E + 01 3.89E + 01 1.13E + 01 2.19E + 01
F18 Mean 2.30E + 04 1.62E + 04 1.09E + 04 2.56E + 04 1.61E + 04 1.99E + 04 1.30E + 05 2.43E + 04
Worst 5.63E + 04 5.12E + 04 3.51E + 04 5.55E + 04 3.79E + 04 4.75E + 04 3.23E + 05 5.55E + 04
Best 3.17E + 03 1.98E + 03 1.83E + 03 1.91E + 03 2.52E + 03 6.95E + 03 2.31E + 04 1.92E + 03
STD 1.52E + 04 1.42E + 04 1.08E + 04 1.72E + 04 1.11E + 04 1.26E + 04 7.95E + 04 1.68E + 04
F19 Mean 7.06E + 03 5.09E + 03 2.04E + 03 9.31E + 03 3.00E + 04 1.14E + 04 5.21E + 03 9.28E + 03
Worst 1.63E + 04 1.47E + 04 2.85E + 03 2.22E + 04 1.48E + 05 3.02E + 04 1.77E + 04 3.30E + 04
Best 1.97E + 03 1.92E + 03 1.90E + 03 1.92E + 03 2.04E + 03 1.96E + 03 2.02E + 03 1.95E + 03
STD 4.71E + 03 5.07E + 03 2.29E + 02 6.75E + 03 3.13E + 04 9.20E + 03 4.66E + 03 1.08E + 04
F20 Mean 2.04E + 03 2.10E + 03 2.12E + 03 2.07E + 03 2.15E + 03 2.16E + 03 2.09E + 03 2.05E + 03
Worst 2.16E + 03 2.23E + 03 2.22E + 03 2.19E + 03 2.31E + 03 2.31E + 03 2.14E + 03 2.21E + 03
Best 2.00E + 03 2.02E + 03 2.03E + 03 2.02E + 03 2.07E + 03 2.05E + 03 2.06E + 03 2.01E + 03
STD 3.44E + 01 6.53E + 01 5.67E + 01 4.98E + 01 6.06E + 01 6.95E + 01 1.84E + 01 4.18E + 01
F21 Mean 2.25E + 03 2.27E + 03 2.27E + 03 2.31E + 03 2.30E + 03 2.32E + 03 2.24E + 03 2.28E + 03
Worst 2.33E + 03 2.34E + 03 2.34E + 03 2.34E + 03 2.39E + 03 2.39E + 03 2.36E + 03 2.36E + 03
Best 2.20E + 03 2.20E + 03 2.20E + 03 2.20E + 03 2.21E + 03 2.20E + 03 2.20E + 03 2.20E + 03
STD 6.04E + 01 6.58E + 01 6.00E + 01 2.49E + 01 6.56E + 01 6.06E + 01 5.64E + 01 6.04E + 01
F22 Mean 2.30E + 03 2.31E + 03 2.30E + 03 2.31E + 03 2.31E + 03 2.32E + 03 2.37E + 03 2.30E + 03
Worst 2.31E + 03 2.40E + 03 2.31E + 03 2.33E + 03 2.34E + 03 2.34E + 03 2.46E + 03 2.35E + 03
Best 2.23E + 03 2.25E + 03 2.21E + 03 2.20E + 03 2.24E + 03 2.31E + 03 2.34E + 03 2.25E + 03
STD 1.81E + 01 2.23E + 01 1.67E + 01 2.18E + 01 2.15E + 01 5.56E + 00 2.59E + 01 1.37E + 01
F23 Mean 2.63E + 03 2.64E + 03 2.62E + 03 2.62E + 03 2.64E + 03 2.66E + 03 2.66E + 03 2.63E + 03
Worst 2.66E + 03 2.66E + 03 2.68E + 03 2.65E + 03 2.68E + 03 2.70E + 03 2.67E + 03 2.64E + 03
Best 2.61E + 03 2.62E + 03 2.61E + 03 2.60E + 03 2.62E + 03 2.62E + 03 2.64E + 03 2.61E + 03
STD 1.14E + 01 1.12E + 01 1.34E + 01 9.99E + 00 1.55E + 01 2.51E + 01 7.69E + 00 7.90E + 00
F24 Mean 2.65E + 03 2.72E + 03 2.72E + 03 2.74E + 03 2.75E + 03 2.78E + 03 2.78E + 03 2.74E + 03
Worst 2.79E + 03 2.79E + 03 2.78E + 03 2.77E + 03 2.87E + 03 2.87E + 03 2.81E + 03 2.78E + 03
Best 2.50E + 03 2.50E + 03 2.50E + 03 2.51E + 03 2.51E + 03 2.50E + 03 2.55E + 03 2.40E + 03
STD 1.24E + 02 9.70E + 01 8.85E + 01 4.55E + 01 8.90E + 01 8.28E + 01 4.71E + 01 7.99E + 01
F25 Mean 2.93E + 03 2.94E + 03 2.93E + 03 2.94E + 03 2.94E + 03 2.94E + 03 2.95E + 03 2.93E + 03
Worst 2.95E + 03 2.96E + 03 2.97E + 03 2.95E + 03 3.05E + 03 3.03E + 03 2.98E + 03 2.97E + 03
Best 2.90E + 03 2.90E + 03 2.90E + 03 2.90E + 03 2.65E + 03 2.90E + 03 2.92E + 03 2.90E + 03

STD 2.34E + 01 1.73E + 01 2.35E + 01 1.42E + 01 6.18E + 01 3.28E + 01 1.85E + 01 2.38E + 01


F26 Mean 2.93E + 03 2.98E + 03 2.93E + 03 2.97E + 03 3.19E + 03 3.23E + 03 3.09E + 03 2.99E + 03
Worst 3.20E + 03 3.13E + 03 3.05E + 03 3.88E + 03 4.11E + 03 4.37E + 03 3.26E + 03 3.16E + 03
Best 2.81E + 03 2.61E + 03 2.60E + 03 2.90E + 03 2.90E + 03 2.61E + 03 3.03E + 03 2.90E + 03
STD 9.00E + 01 1.00E + 02 8.06E + 01 1.81E + 02 2.18E + 02 4.49E + 02 4.83E + 01 4.59E + 01
F27 Mean 3.11E + 03 3.10E + 03 3.11E + 03 3.09E + 03 3.14E + 03 3.14E + 03 3.10E + 03 3.09E + 03
Worst 3.18E + 03 3.17E + 03 3.20E + 03 3.10E + 03 3.23E + 03 3.24E + 03 3.11E + 03 3.10E + 03
Best 3.09E + 03 3.09E + 03 3.09E + 03 3.09E + 03 3.09E + 03 3.10E + 03 3.10E + 03 3.09E + 03
STD 2.15E + 01 1.71E + 01 2.74E + 01 2.17E + 00 4.30E + 01 4.18E + 01 1.59E + 00 1.89E + 00
F28 Mean 3.32E + 03 3.31E + 03 3.31E + 03 3.38E + 03 3.39E + 03 3.35E + 03 3.28E + 03 3.26E + 03
Worst 3.73E + 03 3.41E + 03 3.78E + 03 3.45E + 03 3.74E + 03 3.61E + 03 3.44E + 03 3.41E + 03
Best 3.10E + 03 3.10E + 03 3.10E + 03 3.17E + 03 3.17E + 03 3.11E + 03 3.20E + 03 3.17E + 03
STD 1.45E + 02 1.06E + 02 1.70E + 02 7.52E + 01 1.46E + 02 1.38E + 02 7.00E + 01 9.26E + 01
F29 Mean 3.20E + 03 3.24E + 03 3.21E + 03 3.19E + 03 3.35E + 03 3.32E + 03 3.22E + 03 3.18E + 03
Worst 3.30E + 03 3.41E + 03 3.43E + 03 3.26E + 03 3.61E + 03 3.68E + 03 3.29E + 03 3.28E + 03
Best 3.15E + 03 3.17E + 03 3.14E + 03 3.15E + 03 3.16E + 03 3.19E + 03 3.18E + 03 3.13E + 03
STD 4.11E + 01 7.14E + 01 6.43E + 01 2.74E + 01 1.21E + 02 9.84E + 01 2.96E + 01 4.13E + 01
F30 Mean 7.85E + 05 1.18E + 06 9.54E + 05 4.32E + 05 8.90E + 05 9.52E + 05 9.54E + 05 4.83E + 05
Worst 3.98E + 06 1.08E + 07 3.53E + 06 1.66E + 06 4.26E + 06 1.02E + 07 2.43E + 06 1.46E + 06
Best 6.06E + 03 6.96E + 03 3.66E + 03 5.48E + 03 1.91E + 04 7.85E + 03 1.78E + 05 6.26E + 03
STD 1.01E + 06 2.15E + 06 1.01E + 06 6.77E + 05 1.22E + 06 2.00E + 06 5.99E + 05 3.85E + 05

HBA, SCSO, BO, AEO, MVO, SOA, and SMA on the CEC2019 benchmark functions. In this comparison, the proposed algorithm, SCSO, and SMA all perform strongly on many benchmark functions and, as summarized in Table 12, these three algorithms tie for first place in overall performance. The analysis presented in the previous section demonstrated the effectiveness of HBASCSO against several optimization algorithms. Its advantages and disadvantages can be summarized as follows:

• To improve performance, the HBASCSO balances the exploration and exploitation phases; achieving this balance is the motivation behind hybridizing the HBA and SCSO algorithms (see the sketch after this list).
• Accounting for disturbances and uncertainties is crucial for designing robust optimization algorithms that transfer well to real-world systems.
• An analysis of the mean, worst, best, and standard deviation values of the obtained results shows that the HBASCSO algorithm consistently approaches the optimal solution; consequently, there is no significant spread among the mean, worst, and best results.
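
To make this exploration–exploitation balance concrete, the following minimal Python sketch shows the general shape of such a hybrid update loop. It is illustrative only: the decaying control parameter r, the two move rules, and all constants are simplifying assumptions standing in for the actual HBA and SCSO operators, which are defined in the original papers [21, 22].

import numpy as np

def hbascso_sketch(fitness, dim, n_agents=30, max_iter=500, lb=-100.0, ub=100.0):
    rng = np.random.default_rng(0)
    X = rng.uniform(lb, ub, size=(n_agents, dim))   # random initial population
    scores = np.array([fitness(x) for x in X])
    best_idx = int(scores.argmin())
    best, best_score = X[best_idx].copy(), scores[best_idx]

    for t in range(max_iter):
        r = 2.0 * (1.0 - t / max_iter)              # control parameter decays from 2 to 0
        for i in range(n_agents):
            if r * rng.random() > 1.0:
                # exploration (HBA-style stand-in): large randomized move
                X[i] += rng.normal(size=dim) * (best - X[i])
            else:
                # exploitation (SCSO-style stand-in): small guided move toward the best agent
                X[i] += rng.random(dim) * (best - X[i])
            X[i] = np.clip(X[i], lb, ub)            # keep agents inside the search bounds
            s = fitness(X[i])
            if s < best_score:                      # track the global best solution
                best, best_score = X[i].copy(), s
    return best, best_score

# Example: minimize the sphere function in 10 dimensions
sol, val = hbascso_sketch(lambda x: float(np.sum(x ** 2)), dim=10)

As the control parameter decays, agents shift from large randomized moves early in the run to small guided moves late in the run, mirroring the intended division of labor between the HBA and SCSO components.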

5.1 Wilcoxon Rank Sum Test Analysis

The Wilcoxon signed-rank test, introduced by Wilcoxon, is a statistical procedure based solely on the order in which the observations in a sample occur [62].


Table 10  The rank summary for CEC2017


Function Proposed HBA SCSO SSA GSA FLA HGS MFO

1 1 6 4 2 5 3 8 7
3 1 6 3 2 4 8 5 7
4 3 1 8 4 6 5 2 7
5 4 3 1 8 2 7 5 6
6 4 5 1 3 8 7 6 1
7 4 5 1 2 8 7 6 2
8 2 4 1 3 8 6 7 5
9 4 3 1 2 8 7 5 6
10 2 5 3 1 6 7 8 4
11 1 5 1 3 8 6 7 4
12 3 6 1 2 8 7 4 5
13 1 5 4 3 7 6 8 2
14 2 5 1 8 6 2 4 7
15 3 5 1 6 8 4 2 7
16 3 5 6 2 7 8 5 1
17 3 1 1 3 8 6 7 5
18 6 4 1 8 3 5 2 7
19 3 2 1 5 8 6 7 4
20 1 6 7 3 4 8 5 2
21 1 3 3 5 7 8 2 6
22 1.33 4.33 1.33 4.33 4.33 7 8 1.33
23 2 3 1 1 3 4 4 2
24 1 2 2 3 5 6 6 3
25 1.33 2.25 1.33 2.25 2.25 2.25 3 1.33
26 1 4 1 3 7 8 6 5
27 3 2 3 1 4 4 2 1
28 4 3 3 6 7 5 2 1
29 3 6 4 2 8 7 5 1
30 3 7 6 1 4 5 6 2
Total 71.66 118.58 72.66 98.58 173.58 171.25 147 112.66

In this comparison, the algorithm with the lowest rank is determined to be the best. In this section, the Wilcoxon rank-sum test is carried out at a significance level of 5%. Table 13 presents the p-values calculated by the nonparametric Wilcoxon rank-sum test for pairwise comparisons over two independent samples (HBASCSO vs. HBA, SCSO, GWO, WOA, HHO, SCA, and PSO) on CEC2017. Tables 14 and 15 present the corresponding p-values for pairwise comparisons (HBASCSO vs. HBA, SCSO, GWO, WOA, HHO, SCA, PSO, BO, AEO, MVO, SOA, and SMA) on CEC2019. All p-values are computed at the 0.05 significance level over 30 independent runs.
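
As a concrete illustration of how such p-values can be produced, the short Python snippet below applies SciPy's rank-sum test to two sets of final fitness values; the run data shown are placeholder numbers, not the paper's results.

from scipy.stats import ranksums

# Final fitness values from independent runs of each algorithm on one
# benchmark function (placeholder numbers; 30 values per algorithm in practice).
hbascso_runs = [2040.1, 2035.7, 2051.2, 2042.9, 2038.4, 2047.0]
hba_runs     = [2103.5, 2096.8, 2110.2, 2089.9, 2101.7, 2107.3]

stat, p_value = ranksums(hbascso_runs, hba_runs)
if p_value < 0.05:
    print(f"p = {p_value:.3e}: the difference is significant at the 5% level")
else:
    print(f"p = {p_value:.3e}: no significant difference at the 5% level")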

Table 11  Results of the compared algorithms on F1–F10 (CEC2019)
Function HBASCSO HBA SCSO BO AEO MVO SOA SMA

F1 Mean 4.31E + 04 4.50E + 04 7.06E + 04 1.41E + 08 3.87E + 10 5.37E + 04 8.38E + 09 1.92E + 09


Worst 5.07E + 04 5.18E + 04 9.69E + 05 1.47E + 09 2.06E + 11 7.09E + 04 4.32E + 10 6.55E + 09
Best 3.83E + 04 4.09E + 04 3.76E + 04 5.13E + 04 1.13E + 06 4.16E + 04 7.96E + 07 2.07E + 08
STD 2.82E + 03 3.51E + 03 1.70E + 05 2.90E + 08 5.46E + 10 6.74E + 03 1.08E + 10 1.41E + 09
F2 Mean 1.7342857E + 01 1.7375821E + 01 1.7354940E + 01 1.7375853E + 01 1.7350503E + 01 1.7361410E + 01 1.7493996E + 01 1.7342857E + 01
Worst 1.7342857E + 01 1.7681954E + 01 1.7675355E + 01 1.7676709E + 01 1.7368474E + 01 1.7381604E + 01 1.7760451E + 01 1.7342857E + 01

Best 1.7342857E + 01 1.7342933E + 01 1.7343299E + 01 1.7343980E + 01 1.7343868E + 01 1.7348078E + 01 1.7379951E + 01 1.7342857E + 01


STD 3.4406909E-14 9.9726471E-02 6.0517395E-02 8.6513733E-02 5.8957280E-03 8.9925961E-03 8.8333936E-02 0.0000000E + 00
F3 Mean 1.2702405E + 01 1.2702443E + 01 1.2702412E + 01 1.2702407E + 01 1.2702405E + 01 1.2702414E + 01 1.2702527E + 01 1.2702412E + 01
Worst 1.2702410E + 01 1.2703491E + 01 1.2702516E + 01 1.2702480E + 01 1.2702410E + 01 1.2702435E + 01 1.2703021E + 01 1.2702516E + 01
Best 1.2702404E + 01 1.2702404E + 01 1.2702404E + 01 1.2702404E + 01 1.2702404E + 01 1.2702405E + 01 1.2702410E + 01 1.2702404E + 01
STD 1.2852713E-06 1.9813490E-04 2.8463529E-05 1.3829920E-05 1.1195980E-06 8.6148953E-06 1.2584367E-04 2.8477128E-05
F4 Mean 3.44E + 02 8.06E + 02 3.58E + 01 2.19E + 02 4.56E + 02 1.79E + 02 1.56E + 03 4.18E + 02
Worst 3.08E + 03 3.66E + 03 7.26E + 01 2.41E + 03 2.78E + 03 5.45E + 02 4.56E + 03 6.13E + 03
Best 8.02E + 01 4.84E + 01 1.49E + 01 3.62E + 01 1.56E + 02 8.18E + 01 5.09E + 02 3.98E + 00
STD 5.68E + 02 1.10E + 03 1.44E + 01 5.15E + 02 4.88E + 02 8.81E + 01 8.05E + 02 1.19E + 03
F5 Mean 1.60E + 00 1.39E + 00 1.15E + 00 1.35E + 00 1.87E + 00 2.48E + 00 2.22E + 00 1.16E + 00
Worst 2.65E + 00 2.02E + 00 1.45E + 00 1.78E + 00 2.60E + 00 4.87E + 00 2.49E + 00 1.48E + 00
Best 1.22E + 00 1.12E + 00 1.04E + 00 1.05E + 00 1.39E + 00 1.47E + 00 1.97E + 00 1.04E + 00
STD 2.88E-01 2.11E-01 1.10E-01 2.47E-01 3.50E-01 7.11E-01 9.44E-02 9.15E-02
F6 Mean 1.10E + 01 7.47E + 00 1.02E + 01 1.11E + 01 9.40E + 00 9.35E + 00 1.12E + 01 9.87E + 00
Worst 1.22E + 01 1.08E + 01 1.19E + 01 1.23E + 01 1.16E + 01 1.11E + 01 1.25E + 01 1.15E + 01
Best 9.64E + 00 4.68E + 00 4.85E + 00 9.48E + 00 7.23E + 00 4.84E + 00 1.02E + 01 7.34E + 00
STD 6.85E-01 1.52E + 00 1.68E + 00 6.62E-01 1.17E + 00 1.28E + 00 5.91E-01 1.02E + 00
F7 Mean 3.72E + 02 3.25E + 02 5.09E + 02 6.30E + 02 5.57E + 02 3.82E + 02 8.09E + 02 2.13E + 02

Worst 1.15E + 03 7.83E + 02 2.25E + 03 9.79E + 02 1.06E + 03 6.49E + 02 1.13E + 03 5.59E + 02

Best -5.56E + 01 -1.36E + 02 -1.05E + 01 3.00E + 02 1.97E + 02 9.25E + 01 2.71E + 02 -5.92E + 01
STD 3.06E + 02 2.13E + 02 5.45E + 02 1.77E + 02 2.44E + 02 1.31E + 02 2.16E + 02 1.69E + 02
F8 Mean 5.77E + 00 5.18E + 00 5.11E + 00 4.68E + 00 5.35E + 00 5.89E + 00 6.28E + 00 6.18E + 00
Worst 6.47E + 00 6.31E + 00 6.56E + 00 7.05E + 00 6.72E + 00 6.72E + 00 7.06E + 00 7.03E + 00
Best 4.75E + 00 3.60E + 00 3.16E + 00 2.61E + 00 2.93E + 00 3.99E + 00 4.45E + 00 5.21E + 00
STD 4.65E-01 7.03E-01 8.97E-01 1.21E + 00 9.04E-01 6.43E-01 5.55E-01 4.94E-01
F9 Mean 1.35E + 01 1.36E + 01 2.46E + 00 4.26E + 00 4.96E + 00 3.40E + 00 1.34E + 02 2.41E + 00
Worst 2.62E + 02 2.76E + 02 2.71E + 00 6.02E + 00 6.96E + 00 4.41E + 00 8.42E + 02 2.46E + 00
Best 3.17E + 00 3.23E + 00 2.38E + 00 2.71E + 00 3.36E + 00 2.76E + 00 2.22E + 01 2.36E + 00
STD 4.69E + 01 4.95E + 01 7.07E-02 9.78E-01 9.68E-01 4.22E-01 1.60E + 02 2.80E-02
F10 Mean 1.956E + 01 2.012E + 01 2.031E + 01 1.984E + 01 2.033E + 01 2.026E + 01 2.049E + 01 1.970E + 01
Worst 2.064E + 01 2.038E + 01 2.063E + 01 2.059E + 01 2.052E + 01 2.055E + 01 2.063E + 01 2.054E + 01
Best 6.370E + 00 1.989E + 01 2.001E + 01 7.589E + 00 2.009E + 01 2.001E + 01 2.014E + 01 8.086E-11
STD 2.994E + 00 1.172E-01 1.556E-01 2.552E + 00 1.220E-01 1.498E-01 1.115E-01 3.722E + 00

Table 12  The rank summary for CEC2019
Function HBASCSO HBA SCSO BO AEO MVO SOA SMA
1 1 2 4 5 8 3 7 6
2 1 6 4 7 3 5 8 1
3 1 7 4 3 1 6 8 4
4 5 8 1 4 7 3 2 6
5 5 4 1 3 6 8 7 2
6 6 1 5 7 3 2 8 4
7 3 2 5 7 6 4 7 1
8 5 3 2 1 4 6 8 7
9 6 7 2 4 5 3 8 1
10 1 4 6 3 7 5 8 2
Total 34 44 34 44 50 45 71 34

5.2 Computational Complexity

A fundamental metric for assessing algorithmic performance is time complexity, expressed here in big-O notation. This section assesses the complexity of the HBA, SCSO, and HBASCSO. The computational cost of each algorithm falls into three primary segments: initialization, fitness evaluation, and the population update. The HBA, SCSO, and HBASCSO algorithms initialize the position of each search agent in O(N × D) time, where N is the number of search agents and D is the problem's dimensionality. Consequently, the overall computational cost of the HBASCSO is proportional to O(N × D × Max-iter) over a total of Max-iter iterations. Assuming N and D are of the same order, the per-iteration complexity of the HBASCSO is O(N²).
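
The loop structure behind this cost accounting can be sketched as follows; the update rule is a trivial stand-in for the real operators, and the constants N, D, and MAX_ITER are arbitrary example values.

import numpy as np

N, D, MAX_ITER = 30, 10, 500           # example sizes, chosen arbitrarily
rng = np.random.default_rng(0)

# Initialization: one random D-dimensional position per agent -> O(N * D)
positions = rng.uniform(-100.0, 100.0, size=(N, D))

for _ in range(MAX_ITER):              # the loop body runs Max-iter times
    for i in range(N):                 # every one of the N agents is updated
        # a trivial stand-in update; each update touches all D coordinates
        positions[i] += 0.1 * rng.normal(size=D)

# Total cost of the main loop: O(N * D * MAX_ITER); with N and D of the
# same order, each iteration costs O(N^2).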

5.3 Examination of the Convergence Curve

The HBASCSO algorithm exhibits a well-behaved convergence rate. To avoid local optima, exploration and exploitation must be balanced, and the control parameters are effective levers for this balance. In addition, hybridizing the two metaheuristic algorithms prevents premature convergence. In the early steps of optimization, while the search space is being explored, abrupt changes in the search agents' movement are necessary to identify the most promising regions of the search space. Once the exploitation phase begins, the search agents settle around promising solutions and refine them locally. The obtained results confirm this pattern: the agents' movement shows abrupt changes during the initial iterations and, ideally, diminishes in the final iterations. Such movements are considered essential [64]. The convergence curve of the HBASCSO is shown in Fig. 5. The convergence behavior of the HBASCSO algorithm on functions F1, F2, F3, F4, and F7 indicates that the proposed algorithm follows a typical convergence pattern. It is also clear that the HBASCSO algorithm maintains a trade-off between the exploration and exploitation phases: through hybridization, the HBA and SCSO algorithms contribute efficient exploration and exploitation capabilities, respectively.
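
A convergence curve of this kind can be reproduced with a few lines of Python. The greedy random search and the sphere function below are illustrative stand-ins for the actual algorithm and benchmarks; the curve drops sharply in the early (exploratory) iterations and flattens in the late (exploitative) ones.

import numpy as np
import matplotlib.pyplot as plt

def sphere(x):
    return float(np.sum(x ** 2))                     # simple unimodal test function

rng = np.random.default_rng(1)
X = rng.uniform(-100.0, 100.0, size=(30, 10))        # 30 agents, 10 dimensions
scores = np.array([sphere(x) for x in X])
history = []

for t in range(500):
    sigma = 10.0 * (1.0 - t / 500) + 0.01            # step size shrinks over the run
    for i in range(len(X)):
        candidate = X[i] + rng.normal(scale=sigma, size=X.shape[1])
        c = sphere(candidate)
        if c < scores[i]:                            # greedy acceptance of improvements
            X[i], scores[i] = candidate, c
    history.append(scores.min())                     # best fitness found so far

plt.semilogy(history)                                # log scale highlights the decay
plt.xlabel("Iteration")
plt.ylabel("Best fitness")
plt.title("Convergence curve (illustrative)")
plt.show()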


Table 13  P-values at α = 0.05 by Wilcoxon test for CEC2017 with HBASCSO


vs. HBA vs. SCSO vs. GWO vs. WOA vs. HHO vs. SCA vs. PSO

F1 3.11E-01 1.60E-01 1.74E-05 3.25E-01 2.11E-01 3.06E-02 3.25E-01


F2 1.63E-01 7.90E-03 8.70E-09 2.99E-01 9.60E-02 2.47E-01 2.76E-01
F3 1.54E-01 2.91E-01 6.05E-02 1.48E-01 9.55E-02 2.12E-01 2.58E-01
F4 5.18E-02 2.40E-01 2.71E-03 1.47E-01 2.97E-01 7.12E-02 1.29E-01
F5 2.05E-03 4.60E-17 4.69E-01 9.10E-06 9.37E-06 3.64E-04 1.48E-03
F6 4.55E-09 1.76E-17 7.26E-05 9.00E-09 3.29E-07 1.44E-05 8.73E-07
F7 6.83E-01 4.95E-05 4.68E-09 8.44E-01 5.82E-01 1.36E-01 1.36E-01
F8 9.58E-04 3.57E-11 6.26E-02 1.99E-04 1.50E-04 2.75E-04 5.07E-04
F9 1.13E-11 1.02E-04 1.02E-04 1.36E-01 2.21E-01 1.02E-04 1.16E-01
F10 4.34E-02 4.34E-02 5.60E-22 4.34E-02 4.34E-02 4.34E-02 5.08E-01
F11 1.23E-01 2.87E-01 5.34E-03 3.33E-03 3.11E-01 3.13E-01 2.83E-01
F12 2.62E-01 8.75E-05 1.46E-02 5.08E-01 3.90E-01 9.90E-01 3.07E-01
F13 3.38E-03 5.81E-17 2.12E-17 1.59E-04 1.78E-03 1.42E-04 3.11E-03
F14 5.34E-01 2.84E-04 2.77E-01 2.60E-01 7.90E-01 3.69E-01 7.25E-01
F15 3.43E-01 1.49E-02 1.24E-01 3.98E-01 4.54E-01 4.07E-01 4.52E-01
F16 6.41E-03 6.39E-03 6.95E-03 6.40E-03 6.40E-03 6.40E-03 6.40E-03
F17 1.01E-02 1.01E-02 3.08E-01 1.01E-02 1.01E-02 1.01E-02 1.01E-02
F18 1.29E-03 5.09E-04 6.71E-01 2.73E-03 4.45E-03 3.23E-03 1.31E-03
F19 2.82E-01 3.27E-01 1.10E-01 7.09E-01 8.98E-01 6.25E-01 4.87E-01
F20 2.14E-01 7.68E-01 5.86E-01 2.19E-01 3.67E-02 6.64E-01 1.41E-01
F21 5.27E-01 1.02E-06 2.30E-12 8.29E-01 5.84E-01 5.69E-01 4.00E-02
F22 8.47E-01 1.28E-05 5.35E-09 5.34E-01 7.24E-01 1.18E-01 7.38E-01
F23 2.09E-02 8.50E-02 9.80E-04 2.68E-01 9.75E-02 1.26E-02 2.59E-02

6 Conclusion

In this study, a hybrid metaheuristic algorithm based on the Honey Badger Algorithm (HBA) and the Sand Cat Swarm Optimization (SCSO) algorithm was developed. The HBASCSO algorithm aims to improve on the original HBA and SCSO algorithms by covering the weaknesses of each. One of these weaknesses is the poor performance of the HBA in the exploitation phase; SCSO, by contrast, is a very competitive algorithm whose exploitation performance surpasses that of many other algorithms. Results on well-known benchmark suites, namely the CEC2015, CEC2017, and CEC2019 functions, show that the HBASCSO algorithm has a smooth position-updating mechanism. On CEC2015, the HBASCSO algorithm was compared with seven well-known metaheuristic algorithms (HBA, SCSO, GWO, WOA, HHO, SCA, and PSO); according to the rank summary, HBASCSO ranked first on 14 functions, making it the best among all compared algorithms. On CEC2017, the HBASCSO algorithm was compared with several more recent metaheuristic algorithms (SSA, GSA, FLA, HGS, and MFO); the proposed algorithm ranked first on 9 of the 30 test functions and ranked first in the overall total.


Table 14  P-values at α = 0.05 by Wilcoxon test for CEC2019 with HBASCSO


vs. HBA vs. SCSO vs. GWO vs. WOA vs. HHO vs. SCA vs. PSO

F1 3.66E-01 1.41E-08 8.51E-06 2.99E-01 1.62E-05 2.05E-17 1.40E-08


F2 NaN NaN NaN NaN NaN NaN NaN
F3 1.40E-02 1.70E-10 4.94E-03 1.55E-05 3.80E-10 7.81E-09 2.23E-10
F4 4.42E-03 5.01E-02 2.25E-01 1.34E-01 1.10E-01 4.49E-07 2.47E-01
F5 3.95E-04 3.08E-03 1.98E-04 1.16E-06 1.03E-08 4.30E-12 7.88E-11
F6 1.18E-02 8.01E-08 7.48E-07 7.06E-13 2.62E-11 9.24E-14 7.12E-08
F7 2.45E-03 1.96E-07 1.55E-05 4.00E-06 4.61E-10 1.39E-09 1.22E-09
F8 6.90E-05 2.91E-06 6.02E-04 9.98E-06 4.51E-03 4.57E-12 3.74E-11
F9 1.53E-04 7.72E-06 4.91E-04 5.93E-08 2.06E-10 4.48E-05 7.06E-06
F10 1.49E-02 7.20E-01 5.15E-02 3.41E-03 1.99E-03 3.30E-06 1.86E-05
F11 4.96E-03 2.42E-01 2.54E-02 1.38E-06 4.29E-04 2.33E-11 5.14E-08
F12 5.63E-01 5.70E-04 8.83E-02 6.61E-03 2.52E-01 4.46E-08 7.16E-04
F13 6.16E-01 4.23E-03 9.73E-02 2.25E-02 8.73E-02 1.91E-05 1.36E-01
F14 7.87E-02 3.96E-05 8.39E-05 1.04E-01 7.14E-01 6.67E-02 4.99E-11
F15 2.51E-01 1.28E-05 1.22E-02 2.40E-05 3.54E-01 3.79E-01 7.63E-03
F16 7.35E-01 2.29E-01 1.04E-01 2.61E-02 2.22E-03 1.44E-01 1.42E-08
F17 1.66E-01 2.74E-05 7.60E-01 2.41E-04 3.08E-02 2.78E-08 9.74E-10
F18 7.54E-02 3.54E-03 5.36E-01 3.56E-02 3.42E-01 1.46E-07 1.13E-04
F19 7.20E-02 3.07E-06 1.53E-01 6.02E-04 3.49E-02 1.73E-01 2.05E-06
F20 4.43E-01 5.01E-08 2.82E-03 5.47E-02 1.34E-02 8.44E-02 3.65E-11
F21 1.71E-01 2.00E-01 1.27E-05 8.01E-03 3.45E-04 5.42E-01 1.57E-02
F22 1.29E-01 4.42E-01 5.89E-01 2.39E-01 1.13E-03 8.27E-13 4.14E-01
F23 1.23E-01 1.85E-02 1.36E-04 6.57E-04 1.00E-05 1.24E-10 1.71E-09
F24 5.22E-02 1.80E-02 2.26E-04 1.18E-03 2.34E-04 2.85E-05 2.57E-02
F25 7.57E-02 4.77E-01 1.01E-01 3.93E-01 9.46E-02 1.75E-06 3.71E-01
F26 4.76E-02 8.78E-01 3.03E-01 1.62E-06 1.08E-03 7.87E-08 9.22E-01
F27 3.10E-01 9.98E-01 4.05E-03 1.78E-04 1.06E-03 4.17E-01 6.79E-01
F28 6.40E-01 6.86E-01 7.66E-02 9.47E-02 5.36E-01 1.45E-01 8.40E-02
F29 6.85E-03 3.28E-01 2.02E-01 1.80E-07 8.52E-06 2.12E-02 5.56E-03
F30 3.79E-01 5.89E-01 7.23E-02 5.37E-01 6.78E-01 5.39E-01 4.20E-01

Table 15  P-values at α = 0.05 by Wilcoxon test for CEC2019 with HBASCSO


vs. HBA vs. SCSO vs. BO vs. AEO vs. MVO vs. SOA vs. SMA

F1 4.71E-02 3.82E-01 1.25E-02 5.50E-04 2.01E-08 2.08E-04 3.48E-08


F2 9.99E-01 4.56E-02 3.02E-01 1.25E-01 3.77E-01 5.14E-06 4.56E-02
F3 3.01E-01 2.14E-01 4.54E-01 3.73E-01 1.34E-05 1.15E-05 2.01E-01
F4 6.29E-02 5.99E-03 3.96E-01 4.45E-01 1.39E-01 4.81E-07 7.60E-01
F5 1.50E-03 8.78E-09 8.19E-04 3.82E-04 2.78E-07 3.43E-12 3.26E-09
F6 1.65E-12 1.15E-02 7.14E-01 6.35E-08 4.59E-07 1.49E-01 5.55E-05
F7 8.61E-06 2.20E-01 3.31E-04 1.40E-01 7.31E-07 2.09E-03 1.75E-10
F8 7.67E-04 3.31E-03 4.16E-05 5.80E-03 4.78E-01 4.17E-04 3.83E-02
F9 9.98E-01 2.06E-01 2.85E-01 3.22E-01 2.47E-01 5.73E-04 2.04E-01
F10 3.15E-01 1.82E-01 7.09E-01 1.67E-01 2.16E-01 1.02E-01 8.76E-01


Fig. 5  The convergence curve for some benchmark functions

On CEC2019, the HBASCSO algorithm was compared with the HBA, SCSO, BO, AEO, MVO, SOA, and SMA algorithms. This benchmark includes 10 functions, and the proposed algorithm ranked first on 3 of them, alongside SCSO and SMA. Overall, the performance of the proposed algorithm exceeds that of the other metaheuristic algorithms and demonstrates its utility for many engineering and real-world problems.
Several directions are planned for future work:

• The algorithm can be extended to solve multi-objective problems in concurrent or parallel systems.


• The proposed algorithm can be used to optimize fitness functions for artificial neural networks.
• Feedback controller design for nonlinear systems can benefit from the proposed algo-
rithm.
• Bioinformatics applications can be analyzed using these algorithms to determine the
best method for extracting and filtering features.
• The HBASCSO can be effectively applied to real-world application problems, includ-
ing feature selection and robot path planning.


Funding Open access funding provided by the Scientific and Technological Research Council of Türkiye
(TÜBİTAK).

Declaration

Conflict of Interest The authors declare that they have no conflict of interest.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License,
which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long
as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Com-
mons licence, and indicate if changes were made. The images or other third party material in this article
are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the
material. If material is not included in the article’s Creative Commons licence and your intended use is not
permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly
from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

References
1. Mohammed H, Rashid T (2020) A novel hybrid GWO with WOA for global numerical optimization
and solving pressure vessel design. Neural Comput Appl 32(18):14701–14718
2. Mafarja M, Mirjalili S (2018) Whale optimization approaches for wrapper feature selection. Appl Soft
Comput 62:441–453
3. Huang CL, Dun JF (2008) A distributed PSO–SVM hybrid system with feature selection and param-
eter optimization. Appl Soft Comput 8(4):1381–1391


4. Bianchi L, Gambardella LM, Dorigo M (2002) Solving the homogeneous probabilistic traveling sales-
man problem by the ACO metaheuristic. Ant Algorithms 2463:176–187
5. Azizi M, Aickelin U, Khorshidi HA, Shishehgarkhaneh MB (2022) Shape and size optimization of
truss structures by Chaos game optimization considering frequency constraints. J Adv Res 41:89–100
6. Tavakol Aghaei V, Onat A, Yıldırım S (2018) A Markov chain Monte Carlo algorithm for Bayesian
policy search. Systems Science & Control Engineering 6(1):438–455
7. Aghaei VT, Ağababaoğlu A, Yıldırım S, Onat A (2022) A real-world application of Markov chain
Monte Carlo method for Bayesian trajectory control of a robotic manipulator. ISA Trans 125:580–590
8. Manson SM (2001) Simplifying complexity: a review of complexity theory. Geoforum 32(3):405–414
9. Li W et al (2020) Parameterized algorithms of fundamental NP-hard problems: a survey. Hum-centric Comput Inf Sci 10(1):1–24
10. Talbi EG (2009) Metaheuristics: from design to implementation. John Wiley & Sons
11. Dokeroglu T, Sevinc E, Kucukyilmaz T, Cosar A (2019) A survey on new generation metaheuristic
algorithms. Comput Ind Eng 137:106040
12. Abdollahzadeh B, Gharehchopogh FS, Khodadadi N, Mirjalili S (2022) Mountain gazelle optimizer:
a new nature-inspired metaheuristic algorithm for global optimization problems. Adv Eng Softw
174:103282
13. Seyyedabbasi A (2022) WOASCALF: A new hybrid whale optimization algorithm based on sine
cosine algorithm and levy flight to solve global optimization problems. Adv Eng Softw 173:103272
14. Seyyedabbasi A (2023) A reinforcement learning-based metaheuristic algorithm for solving global
optimization problems. Adv Eng Softw 178:103411
15. Talbi EG (2009) Metaheuristics: from design to implementation, vol 74. Wiley, New York, pp 5–39
16. Mirjalili S (2015) Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm.
Knowl Based Syst 89:228–249
17. Mirjalili S, Lewis A (2016) The whale optimization algorithm. Adv Eng Softw 95:51–67
18. Wang G-G, Deb S, Gao X-Z, Coelho LDS (2016) A new metaheuristic optimisation algorithm moti-
vated by elephant herding behaviour. Int J Bio-Inspired Comput 8(6):394–409
19. Saremi S, Mirjalili S, Lewis A (2017) Grasshopper optimisation algorithm: theory and application.
Adv Eng Softw 105:30–47
20. Heidari AA, Mirjalili S, Faris H, Aljarah I, Mafarja M, Chen H (2019) Harris hawks optimization: algorithm and applications. Future Gener Comput Syst 97:849–872
21. Hashim FA et al (2022) Honey Badger Algorithm: new metaheuristic algorithm for solving optimization problems. Math Comput Simul 192:84–110
22. Seyyedabbasi A, Kiani F (2022) Sand cat swarm optimization: a nature-inspired algorithm to solve global optimization problems. Engineering with Computers, pp 1–25
23. Holland JH (1975) Adaptation in Natural and Artificial Systems: An Introductory Analysis with Appli-
cations to Biology, Control, and Artificial Intelligence. MIT Press, Cambridge, Mass, USA
24. Rechenberg (1978) Evolutionsstrategien. Springer, Berlin Heidelberg, pp 83–114
25. Hansen N, Ostermeier A (2001) Completely derandomized self-adaptation in evolution strategies. Evol
Comput 9(2):159–195
26. Kirkpatrick S, Gelatt CD, Vecchi MP (1983) Optimization by simulated annealing. Science
220(4598):671–680
27. Ting TO, Yang XS, Cheng S, Huang K (2015) Hybrid metaheuristic algorithms: past, present, and
future. Recent advances in swarm intelligence and evolutionary computation. pp 71–83
28. Abdel-Basset M, Abdel-Fatah L, Sangaiah AK (2018) Metaheuristic algorithms: A comprehensive
review. Computational intelligence for multimedia big data on the cloud with engineering applica-
tions. pp 185–231
29. Barshandeh S, Haghzadeh M (2021) A new hybrid chaotic atom search optimization based on tree-
seed algorithm and Levy flight for solving optimization problems. Engineering with Computers
37:3079–3122
30. Wang Z, Luo Q, Zhou Y (2021) Hybrid metaheuristic algorithm using butterfly and flower pollina-
tion base on mutualism mechanism for global optimization problems. Engineering with Computers
37:3665–3698
31. Houssein EH, Hosney ME, Elhoseny M et al (2020) Hybrid Harris hawks optimization with cuckoo
search for drug design and discovery in chemoinformatics. Sci Rep 10:14439
32. Gao Z-M et al (2020) The hybrid grey wolf optimization-slime mould algorithm. J Phys Conf Ser 1617(1). IOP Publishing
33. Houssein EH et al (2021) Hybrid slime mould algorithm with adaptive guided differential evolution algorithm for combinatorial and global optimization problems. Expert Syst Appl 174:114689


34. Ficarella E, Lamberti L, Degertekin SO (2021) Comparison of three novel hybrid metaheuristic
algorithms for structural optimization problems. Comput Struct 244:106395
35. Jorge D et al (2022) A hybrid metaheuristic for smart waste collection problems with workload concerns. Comput Oper Res 137:105518
36. Rodrigues LR (2022) A hybrid multi-population metaheuristic applied to load-sharing optimization of gas compressor stations. Comput Electr Eng 97:107632
37. Öztaş T, Tuş A (2022) A hybrid metaheuristic algorithm based on iterated local search for vehicle
routing problem with simultaneous pickup and delivery. Expert Syst Appl 202:117401
38. Biabani F, Saeed S, Hamzehei-Javaran S (2022) A new insight into metaheuristic optimization method using a hybrid of PSO, GSA, and GWO. Structures 44. Elsevier
39. Tiwari A, Chaturvedi A (2023) Automatic EEG channel selection for multiclass brain-computer
interface classification using multiobjective improved firefly algorithm. Multimedia Tools and
Applications 82(4):5405–5433
40. Tiwari A, Chaturvedi A (2022) A hybrid feature selection approach based on information theory
and dynamic butterfly optimization algorithm for data classification. Expert Syst Appl 196:116621
41. Tiwari A, Chaturvedi A (2022) Automatic channel selection using multiobjective X-shaped binary
butterfly algorithm for motor imagery classification. Expert Syst Appl 206:117757
42. Tiwari A (2023) A logistic binary Jaya optimization-based channel selection scheme for motor-
imagery classification in brain-computer interface. Expert Syst Appl 223:119921
43. Seyyedabbasi A et al (2021) Hybrid algorithms based on combining reinforcement learning and metaheuristic methods to solve global optimization problems. Knowl-Based Syst 223:107044
44. Wolpert DH, Macready WG (1997) No free lunch theorems for optimization. IEEE Trans Evol
Comput 1(1):67–82
45. Mirjalili S, Mirjalili SM, Lewis A (2014) Grey wolf optimizer. Adv Eng Softw 69:46–61
46. Mirjalili S (2016) SCA: a sine cosine algorithm for solving optimization problems. Knowl Based
Syst 96:120–133
47. Eberhart R, Kennedy J (1995) Particle swarm optimization. Proceedings of the IEEE international
conference on neural networks, Vol. 4. pp 1942–1948
48. Mirjalili S, Gandomi AH, Mirjalili SZ, Saremi S, Faris H, Mirjalili SM (2017) Salp swarm algo-
rithm: a bio-inspired optimizer for engineering design problems. Adv Eng Softw 114:163–191
49. Rashedi E, Nezamabadi-Pour H, Saryazdi S (2009) GSA: a gravitational search algorithm. Inf Sci
179(13):2232–2248
50. Hashim FA, Mostafa RR, Hussien AG, Mirjalili S, Sallam KM (2023) Fick’s Law Algorithm: A
physical law-based algorithm for numerical optimization. Knowl-Based Syst 260:110146
51. Hashim FA, Houssein EH, Mabrouk MS, Al-Atabany W, Mirjalili S (2019) Henry gas solubility
optimization: A novel physics-based algorithm. Futur Gener Comput Syst 101:646–667
52. Mirjalili S (2015) Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm.
Knowl-Based Syst 89:228–249
53. Das AK, Pratihar DK (2022) Bonobo optimizer (BO): an intelligent heuristic with self-adjust-
ing parameters over continuous spaces and its applications to engineering problems. Appl Intell
52(3):2942–2974
54. Zhao W, Wang L, Zhang Z (2020) Artificial ecosystem-based optimization: a novel nature-inspired
meta-heuristic algorithm. Neural Comput Appl 32:9383–9425
55. Mirjalili S, Mirjalili SM, Hatamlou A (2016) Multi-verse optimizer: a nature-inspired algorithm for
global optimization. Neural Comput Appl 27:495–513
56. Dhiman G, Kumar V (2019) Seagull optimization algorithm: Theory and its applications for large-
scale industrial engineering problems. Knowl-Based Syst 165:169–196
57. Li S, Chen H, Wang M, Heidari AA, Mirjalili S (2020) Slime mould algorithm: A new method for
stochastic optimization. Futur Gener Comput Syst 111:300–323
58. Liang JJ, Qu BY, Suganthan PN, Chen Q (2014) Problem definitions and evaluation criteria for the
CEC 2015 competition on learning-based real-parameter single objective optimization. Technical
Report201411A. Computational Intelligence Laboratory, Zhengzhou University, Zhengzhou China
and Technical Report, Nanyang Technological University, Singapore 29:625–640
59. Helbig M, Engelbrecht A (2015) Benchmark functions for CEC 2015 special session and competition on dynamic multi-objective optimization. Dept Comput Sci, Univ Pretoria, Pretoria, South Africa, Tech Rep
60. Wu G, Mallipeddi R, Suganthan PN (2017) Problem definitions and evaluation criteria for the CEC
2017 competition on constrained real-parameter optimization. National University of Defense Tech-
nology, Changsha, Hunan, PR China and Kyungpook National University, Daegu, South Korea and
Nanyang Technological University, Singapore, Technical Report


61. Liang JJ, Qu BY, Gong DW, Yue CT (2019) Problem definitions and evaluation criteria for the CEC
2019 special session on multimodal multiobjective optimization. Zhengzhou University, Computa-
tional Intelligence Laboratory
62. Woolson RF (2007) Wilcoxon signed‐rank test. Wiley encyclopedia of clinical trials. pp 1–3
63. Price KV, Awad NH, Ali MZ, Suganthan PN (2018) Problem definitions and evaluation criteria for the
100-digit challenge special session and competition on single objective numerical optimization. Tech-
nical Report. Nanyang Technological University, Singapore
64. Van den Bergh F, Engelbrecht AP (2006) A study of particle swarm optimization particle trajectories.
Inf Sci 176(8):937–971

Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and
institutional affiliations.
