
Accepted Manuscript

A Hybrid-coded Human Learning Optimization for Mixed-Variable Optimization Problems

Ling Wang, Ji Pei, Muhammad Ilyas Menhas, Jiaxing Pi, Minrui Fei, Panos M. Pardalos

PII: S0950-7051(17)30189-2
DOI: 10.1016/j.knosys.2017.04.015
Reference: KNOSYS 3895

To appear in: Knowledge-Based Systems

Received date: 17 September 2016


Revised date: 12 April 2017
Accepted date: 24 April 2017

Please cite this article as: Ling Wang, Ji Pei, Muhammad Ilyas Menhas, Jiaxing Pi, Minrui Fei, Panos M. Pardalos, A Hybrid-coded Human Learning Optimization for Mixed-Variable Optimization Problems, Knowledge-Based Systems (2017), doi: 10.1016/j.knosys.2017.04.015

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service
to our customers we are providing this early version of the manuscript. The manuscript will undergo
copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please
note that during the production process errors may be discovered which could affect the content, and
all legal disclaimers that apply to the journal pertain.

Highlights
- This paper proposes a new hybrid-coded HLO (HcHLO) framework to tackle mix-coded problems more efficiently and effectively.
- A new continuous human learning optimization algorithm is presented based on the linear learning mechanism of humans.
- The results show that HcHLO achieves the best-known overall performance so far on the tested mix-coded problems.


A Hybrid-coded Human Learning Optimization for Mixed-Variable Optimization Problems

Ling Wang1,*, Ji Pei1, Muhammad Ilyas Menhas1, Jiaxing Pi2, Minrui Fei1, and Panos M. Pardalos2

1 Shanghai Key Laboratory of Power Station Automation Technology, School of Mechatronics Engineering and Automation, Shanghai University, Shanghai, 200072, China
2 Center for Applied Optimization, Department of Industrial and Systems Engineering, University of Florida, Gainesville, Florida, 32611, USA

Abstract. Human Learning Optimization (HLO) is an emerging meta-heuristic with promising potential, which is inspired by human learning mechanisms. Although binary algorithms like HLO can be directly applied to mixed-variable problems that contain both continuous values and discrete or Boolean values, the search efficiency and performance of those algorithms may be significantly spoiled by "the curse of dimensionality" caused by the binary coding strategy, especially when the continuous parameters of problems require high accuracy. Therefore, this paper extends HLO and proposes a novel hybrid-coded HLO (HcHLO) framework to tackle mix-coded problems more efficiently and effectively, in which real-coded parameters are optimized by a new continuous HLO (CHLO) based on the linear learning mechanism of humans and the other variables are handled by the binary learning operators of HLO. Finally, HcHLO is adopted to solve 14 benchmark problems and its performance is compared with that of recent meta-heuristic algorithms. The experimental results show that the proposed HcHLO achieves the best-known overall performance so far on the test problems, which demonstrates the validity and superiority of HcHLO.

Keywords: human learning optimization, meta-heuristic, continuous human learning optimization, hybrid-coded problems, mixed-variable problems

1. Introduction

Generally, many human learning activities are similar to the search process of meta-heuristics. For instance, when a person learns how to play Sudoku, he or she repeatedly studies and practices to master new skills, and evaluates his or her performance to guide the following study and play better. Similarly, meta-heuristics iteratively generate new solutions and calculate the corresponding fitness values to adjust the following search and find a better solution. Inspired by this fact, Wang et al. [1] presented a novel Human Learning Optimization (HLO) algorithm based on a simplified human learning model in which three learning operators, i.e. the random learning operator, the individual learning operator, and the social learning operator, are developed to search for the optimal solution. Owing to their strong learning ability and high level of consciousness in studying, human beings are capable of solving a large number of complicated problems that other living beings, such as birds, ants, and bees, can hardly address. Therefore, it is logical to presume that HLO, which is developed based on the learning mechanisms of human beings, may gain an advantage over other nature-inspired meta-heuristics on the optimization problems of our daily life [2]. Previous works [1-4] show that the HLO algorithms outperform recently proposed meta-heuristic variants, such as Differential Evolution (DE), Particle Swarm Optimization (PSO), Harmony Search (HS), and Fruit Fly Optimization, on numerical functions, deceptive functions, and knapsack problems. Most notably, HLO has achieved the best results on two well-studied sets of multi-dimensional knapsack problems, i.e. 5.100 and 10.100, compared to the other publicly reported meta-heuristics [4].

The HLO algorithms [1-4] are binary-coded algorithms. Compared with real-coded or discrete-coded meta-heuristics, binary meta-heuristics are more flexible, as they can solve binary problems, which real-coded or discrete-coded meta-heuristics cannot handle directly, as well as discrete and continuous optimization problems. Besides, binary meta-heuristics may have advantages on some continuous problems, like controller design problems [5, 6], as the binary-coding strategy discretizes the original infinite search space into a finite set of candidate solutions, and consequently the search efficiency is significantly improved, especially when the parameters of problems do not need a high degree of accuracy. Due to these benefits, many researchers have devoted themselves to the research of binary meta-heuristics, and various binary variants of well-known meta-heuristics, like the binary Particle Swarm Optimization [7, 8], the binary Differential Evolution [9, 10], the binary Ant Colony Optimization (ACO) [11], and the binary Harmony Search [6, 12], were developed and successfully applied to diverse optimization problems, such as the design of wind farm configurations, satellite broadcast scheduling problems, point pattern matching problems, epileptic seizure detection and prediction, and rainfall-runoff modeling. Furthermore, more and more new binary meta-heuristics, like the binary artificial bee colony [13], the binary gravitational search algorithm [14], the binary fish swarm algorithm [15], the binary bat algorithm [16], the binary shuffled frog leaping algorithm [17], the binary teaching-learning-based optimization algorithm [18], the binary monkey algorithm [19], and the binary grey wolf optimization [20], have been proposed to better solve various optimization problems.

However, "the curse of dimensionality" may arise when binary meta-heuristics are used to optimize high-dimensional continuous problems, which would significantly spoil the efficiency and performance of binary algorithms because of the exponential growth of the solution space. In this case, using real-coded meta-heuristics instead of binary-coded ones would be a better choice. Therefore, although binary meta-heuristics can be easily used to solve hybrid-coded problems, in which real variables, discrete variables, and/or binary variables are included, recent works focus on studying and developing powerful real-coded algorithms [21-23] to tackle them, as well as discrete or binary problems [24-29], for gaining better results.

As mentioned above, real-coded methods cannot directly optimize discrete and binary problems, and hence discrete or binary variables in hybrid-coded problems need to be encoded and operated on as real ones and then mapped back to discrete or binary values to calculate the fitness [30]. The advantage of using real-coded meta-heuristics is that the ability to optimize real variables is significantly enhanced, which usually brings better results. But adopting real-coded algorithms to optimize binary and discrete variables is not good enough, since there is a fair possibility of losing vital information when mapping a discrete or binary variable into a continuous one and then mapping it back to the original discrete or binary space [30], which would cause severe performance loss, as binary and discrete variables are as important as real variables in hybrid-coded problems. Therefore, this paper presents a novel hybrid-coded Human Learning Optimization (HcHLO) framework to solve hybrid-coded problems more efficiently and effectively, in which a continuous Human Learning Optimization (CHLO) based on the linear learning mechanism of humans is proposed to tackle the real-coded parameters while the other types of variables are handled by HLO. As far as we know, this is the first time that a continuous HLO is presented and that HLO is applied to solve hybrid-coded problems.

The rest of the paper is organized as follows. Section 2 briefly introduces the standard HLO algorithm. The proposed HcHLO method is presented in Section 3, in which the implementation of HcHLO, as well as the continuous HLO, is described in detail. Then the proposed HcHLO is applied to tackle hybrid-coded problems collected from different engineering fields, and the results are discussed and compared with recent works in Section 4. Finally, Section 5 concludes this paper.

2. Human Learning Optimization

As a binary meta-heuristic, the standard HLO adopts the binary-coding framework. An individual in HLO, i.e. a solution, is represented by a binary string as Eq. (1),

x_i = [x_{i1} \; x_{i2} \; \cdots \; x_{ij} \; \cdots \; x_{iM}], \quad x_{ij} \in \{0,1\}, \; 1 \le i \le N, \; 1 \le j \le M    (1)

where x_{ij} is the j-th bit of the i-th individual, and N and M denote the number of individuals in the population and the length of solutions, respectively. Considering that initially there is no prior knowledge of problems, each bit of an individual of HLO is initialized to "0" or "1" stochastically. After initialization, HLO uses three operators, i.e. the random learning operator, the individual learning operator, and the social learning operator, to generate new candidates and search out the optimal solution.

2.1 Learning operators

2.1.1 Random learning operator

Randomness always exists in human learning, as usually there is no, or only partial, prior knowledge of new problems. Besides, humans need to keep exploring new strategies to learn better, in which random learning is unavoidable [3]. Therefore, HLO uses the random learning operator as Eq. (2) to simulate these phenomena of human random learning,

x_{ij} = Rand(0,1) = \begin{cases} 0, & rand \le 0.5 \\ 1, & \text{otherwise} \end{cases}    (2)

where rand is a random number between 0 and 1.

2.1.2 Individual learning operator

Individual learning is the ability of humans to build knowledge through individual reflection on external stimuli and sources [31]. It is very important for humans to learn from their own previous experience and knowledge, so that they can improve the efficiency and effectiveness of learning and avoid mistakes. To emulate this learning ability, HLO defines the individual knowledge database (IKD) as Eqs. (3)-(4) to memorize the personal best solutions,

IKD = \begin{bmatrix} ikd_1 \\ ikd_2 \\ \vdots \\ ikd_i \\ \vdots \\ ikd_N \end{bmatrix}, \quad 1 \le i \le N    (3)

ikd_i = \begin{bmatrix} ikd_{i1} \\ ikd_{i2} \\ \vdots \\ ikd_{ip} \\ \vdots \\ ikd_{iT} \end{bmatrix}
      = \begin{bmatrix}
          ik_{i11} & ik_{i12} & \cdots & ik_{i1j} & \cdots & ik_{i1M} \\
          ik_{i21} & ik_{i22} & \cdots & ik_{i2j} & \cdots & ik_{i2M} \\
          \vdots   & \vdots   &        & \vdots   &        & \vdots   \\
          ik_{ip1} & ik_{ip2} & \cdots & ik_{ipj} & \cdots & ik_{ipM} \\
          \vdots   & \vdots   &        & \vdots   &        & \vdots   \\
          ik_{iT1} & ik_{iT2} & \cdots & ik_{iTj} & \cdots & ik_{iTM}
        \end{bmatrix}, \quad 1 \le p \le T    (4)

where ikd_i is the individual knowledge database of individual i, T denotes the size of the IKDs, and ik_{ipj} represents the j-th bit of the p-th best solution of person i. When HLO performs individual learning, new candidate solutions are yielded as Eq. (5) by the individual learning operator,

x_{ij} = ik_{ipj}    (5)

where p is a random integer between 1 and T.

2.1.3 Social learning operator

Although humans may learn and solve problems by themselves, the learning efficiency would be very low for hard problems. In a social environment, however, the learning efficiency can be significantly improved by sharing knowledge among individuals. To achieve better performance, HLO designs the social learning operator to mimic the social learning behavior of humans. As with the individual learning operation, the social knowledge database (SKD) is defined in HLO as Eq. (6) to store the knowledge of the population,

SKD = \begin{bmatrix} skd_1 \\ skd_2 \\ \vdots \\ skd_q \\ \vdots \\ skd_H \end{bmatrix}
    = \begin{bmatrix}
        sk_{11} & sk_{12} & \cdots & sk_{1j} & \cdots & sk_{1M} \\
        sk_{21} & sk_{22} & \cdots & sk_{2j} & \cdots & sk_{2M} \\
        \vdots  & \vdots  &        & \vdots  &        & \vdots  \\
        sk_{q1} & sk_{q2} & \cdots & sk_{qj} & \cdots & sk_{qM} \\
        \vdots  & \vdots  &        & \vdots  &        & \vdots  \\
        sk_{H1} & sk_{H2} & \cdots & sk_{Hj} & \cdots & sk_{HM}
      \end{bmatrix}, \quad 1 \le q \le H    (6)

where H is the size of the SKD, and skd_q is the q-th social knowledge in the SKD. When HLO runs the social learning operator, new candidate solutions are generated as Eq. (7) based on the knowledge in the SKD,

x_{ij} = sk_{qj}    (7)

where q is a random integer between 1 and H.
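
As an illustration of how the three operators cooperate bit by bit, the following Python sketch generates one bit of a new candidate. The list-based data layout, the function name, and the default pr/pi values (taken from Table 3) are our assumptions rather than code from the paper, and the pr/pi selection rule itself is only formalized later in Eq. (21).

import random

def new_bit(ikd_i, skd, j, pr=0.1, pi=0.85):
    """Sketch: yield the j-th bit of a new candidate solution in HLO.

    ikd_i is a list of the T personal best bit-strings of individual i,
    and skd is a list of the H best bit-strings of the population."""
    r = random.random()
    if r < pr:                              # random learning operator, Eq. (2)
        return 0 if random.random() <= 0.5 else 1
    elif r < pi:                            # individual learning operator, Eq. (5)
        p = random.randrange(len(ikd_i))    # pick a random solution in the IKD
        return ikd_i[p][j]
    else:                                   # social learning operator, Eq. (7)
        q = random.randrange(len(skd))      # pick a random solution in the SKD
        return skd[q][j]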

2.2 Updating of the IKD and SKD

After all the candidate solutions are produced, the fitness of each new solution is calculated. If the fitness of a new individual is better than that of the worst one in the IKD, or the current number of solutions in the IKD is less than the pre-defined value, the corresponding IKD is updated and the new candidate is stored. For the SKD, the same updating mechanism is adopted. However, to better maintain diversity and avoid premature convergence of HLO, the SKD updates no more than one solution in each iteration of the search.
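
A minimal Python sketch of this updating rule, assuming minimization and a list of (solution, fitness) pairs as the storage layout, could look as follows.

def update_ikd(ikd, candidate, fitness, T):
    """Update one IKD: accept unconditionally while it holds fewer than T
    solutions; otherwise replace the worst stored solution only if the new
    candidate has a better (here: smaller) fitness."""
    if len(ikd) < T:
        ikd.append((candidate, fitness))
    else:
        worst = max(range(len(ikd)), key=lambda k: ikd[k][1])
        if fitness < ikd[worst][1]:
            ikd[worst] = (candidate, fitness)

The SKD would be maintained by the same function, with the extra restriction that at most one replacement is accepted per iteration.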

HLO iteratively executes the learning operators to generate fresh solutions and updates the IKD as

well as the SKD until termination criteria are satisfied. The procedure of HLO is described in Fig. 1.

[Fig. 1 shows the flowchart of HLO: set the given parameters and initialize the population; calculate the fitness of the individuals, generate and initialize the IKD and the SKD; then, until the termination criterion is met, yield new candidate solutions by performing the three learning operators, calculate the fitness of the new solutions, choose the better solutions according to fitness, and update the IKD and the SKD; finally, output the results.]

Fig. 1 The flowchart of HLO


3. Hybrid-coded Human Learning Optimization

3.1 Continuous Human Learning Optimization

Although HLO possesses an excellent global search ability and can be used to solve real-coded problems, it is clearly foreseeable that the efficiency of HLO will be greatly reduced on high-dimensional continuous problems, especially those demanding high accuracy, because of "the curse of dimensionality" caused by the binary-coding framework. Therefore, to improve the performance of HLO on hybrid-coded problems, this paper develops a continuous Human Learning Optimization (CHLO) algorithm. Different from HLO, CHLO adopts the real-coding framework, that is, each continuous parameter of a problem is directly represented as a real-coded variable, which is randomly initialized between the lower bound and the upper bound of the problem as Eq. (8),

x_{ij} = x_{min,j} + r \times (x_{max,j} - x_{min,j})    (8)

where x_{min,j} and x_{max,j} are the lower bound and upper bound of variable j, and r is a random number.

Obviously, the learning operators of HLO cannot work in continuous space, and thus new learning operators need to be designed for CHLO. Human learning is diverse and complex. Previous research shows that learning curves, which are a graphical representation of the increase of learning (vertical axis) with experience (horizontal axis), differ across tasks and can be described by the linear function model, the power function model, and many other models [32]. However, it has long been reported in the literature that there is a primacy of linear functions in human function learning. Recent work [33] demonstrates that both aspects of the behavior, the extent and the rate of selection, present evidence that human function learning obeys the Occam's razor principle, which may explain the previous findings on the primacy of linear functions over non-linear functions, since linear models have low parametric complexity. Hence, for simplicity of implementation and reduction of computation, linear learning operators are developed for CHLO.

3.1.1 Linear random learning operator

The random learning operator in HLO is primarily used to keep the diversity of the population and partially to search with various new attempts to find the optimal solution. As mentioned above, because of the lack of prior knowledge of problems, the linear random learning operator of CHLO is set to a random number in the feasible range as Eq. (9),

x_{ij} = x_{min,j} + r_1 \times (x_{max,j} - x_{min,j})    (9)

where r_1 is a random number between 0 and 1.

3.1.2 Linear individual learning operator

The individual learning operator is used to simulate the phenomenon that humans learn from their previous experience. Based on this idea, the linear individual learning operator of CHLO is designed as Eq. (10),

x_{ij} = ik_{ipj} + IL \times r_2 \times (sk_{qj} - ik_{ipj})    (10)

where IL is the linear individual learning factor, r_2 is a random number between -1 and 1, ik_{ipj} is the j-th variable of the p-th solution in the IKD of individual i, and sk_{qj} is the corresponding j-th variable of the q-th solution in the SKD. By performing the linear individual learning operator, CHLO searches based on its previous knowledge ik_{ipj}, i.e. the first term on the right of Eq. (10), with a linear learning mechanism, i.e. the second term on the right of Eq. (10). The search range is dynamically adjusted according to the value of (sk_{qj} - ik_{ipj}), which is the potential range where better solutions may exist, and therefore the search efficiency of the operator is guaranteed.

3.1.3 Linear social learning operator

The social learning operator is an effective way for HLO to absorb useful knowledge from the collective experience of the population. Similar to the linear individual learning operator, a linear social learning operator for CHLO is developed as Eq. (11),

x_{ij} = sk_{qj} + SL \times r_3 \times (sk_{qj} - ik_{ipj})    (11)

where SL is the linear social learning factor, and r_3 is a random number between 0 and 1. The second term on the right of Eq. (11) is the linear learning strategy of the social learning operator of CHLO, while the first term on the right of Eq. (11), different from that of the individual learning operator, is the social best information.

Note that IL and SL determine the search ranges of the individual learning operator and the social learning operator, respectively. A big IL or SL may help the algorithm converge fast, but it may also cause premature convergence and spoil the fine search ability of the algorithm. A small IL or SL can help the algorithm explore the solution space better but may reduce the efficiency of the search. Therefore, these two control parameters need to be carefully set.
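
As a compact sketch of Eqs. (9)-(11), the following Python function generates one real-coded variable of a new CHLO candidate. The signature, the operator-selection thresholds pr and pi (formalized later in Eq. (21)), and the final clipping to the bounds are our assumptions rather than parts of the paper's specification.

import random

def chlo_variable(x_min, x_max, ik, sk, IL=1.0, SL=2.0, pr=0.1, pi=0.85):
    """Produce one real-coded variable of a new candidate; ik and sk are the
    corresponding values taken from a randomly selected solution of the IKD
    and of the SKD, respectively."""
    r = random.random()
    if r < pr:                                    # linear random learning, Eq. (9)
        x = x_min + random.random() * (x_max - x_min)
    elif r < pi:                                  # linear individual learning, Eq. (10)
        x = ik + IL * random.uniform(-1.0, 1.0) * (sk - ik)
    else:                                         # linear social learning, Eq. (11)
        x = sk + SL * random.random() * (sk - ik)
    return min(max(x, x_min), x_max)              # clip to the bounds (an assumption)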


3.2 Implementation of Hybrid-coded Human Learning Optimization

To efficiently and effectively solve mixed-variable optimization problems, the presented Hybrid-coded Human Learning Optimization adopts a binary-real mixed coding method, in which continuous parameters are directly represented as real-coded variables while Boolean or discrete parameters are coded as binary strings. Hence, the population of HcHLO can be described as Eq. (12),

X = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_i \\ \vdots \\ x_N \end{bmatrix}
  = \begin{bmatrix}
      R_{11} & R_{12} & \cdots & R_{1M_r} & B_{11} & B_{12} & \cdots & B_{1M_b} \\
      R_{21} & R_{22} & \cdots & R_{2M_r} & B_{21} & B_{22} & \cdots & B_{2M_b} \\
      \vdots &        &        & \vdots   & \vdots &        &        & \vdots   \\
      R_{i1} & R_{i2} & \cdots & R_{iM_r} & B_{i1} & B_{i2} & \cdots & B_{iM_b} \\
      \vdots &        &        & \vdots   & \vdots &        &        & \vdots   \\
      R_{N1} & R_{N2} & \cdots & R_{NM_r} & B_{N1} & B_{N2} & \cdots & B_{NM_b}
    \end{bmatrix}    (12)

where the first M_r columns form Array(R), which stores all the real-coded variables of the solutions, and the last M_b columns form Array(B), which reserves the binary vectors of the individuals, i.e. all the binary and/or discrete variables of the solutions. N is the size of the population, and M_r and M_b represent the lengths of the real-coded variables and the binary vectors, respectively. The whole dimension of solutions is M, which equals (M_r + M_b). Initially, the elements of each individual in Array(R) and in Array(B) are randomly initialized as Eq. (8) and Eq. (2), respectively.
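
A minimal sketch of this representation and its initialization, assuming NumPy arrays as the storage for Array(R) and Array(B):

import numpy as np

def init_population(N, x_min, x_max, Mb, rng=np.random.default_rng()):
    """Initialize an HcHLO population as the two blocks of Eq. (12):
    R (N x Mr) holds the real-coded variables, drawn uniformly inside their
    bounds as in Eq. (8); B (N x Mb) holds the binary variables, each bit
    set to 0 or 1 with equal probability as in Eq. (2)."""
    x_min = np.asarray(x_min, dtype=float)   # lower bounds, length Mr
    x_max = np.asarray(x_max, dtype=float)   # upper bounds, length Mr
    R = x_min + rng.random((N, x_min.size)) * (x_max - x_min)
    B = rng.integers(0, 2, size=(N, Mb))
    return R, B

A discrete variable with more than two levels would occupy several bits of Array(B) and be decoded before the fitness evaluation.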

3.2.1 Random learning of HcHLO

The random learning operation of HcHLO is composed of the linear random learning operator and the binary random learning operator; that is, the real-coded variables in Array(R) are operated on by Eq. (13), i.e. the linear random learning strategy, while the binary elements in Array(B) are generated by Eq. (14), i.e. the standard random learning strategy of HLO,

R_{ij} = x_{min,j} + r_4 \times (x_{max,j} - x_{min,j})    (13)

B_{ij} = \begin{cases} 0, & 0 \le r_5 \le 0.5 \\ 1, & \text{otherwise} \end{cases}    (14)

where r_4 and r_5 are two independent random numbers between 0 and 1.

3.2.2 Individual learning of HcHLO

To perform the individual learning, the personal best solutions need to be saved in the individual knowledge database of HcHLO, which is represented as Eq. (15),

ikd_i = \begin{bmatrix} ikd_{i1} \\ ikd_{i2} \\ \vdots \\ ikd_{ip} \\ \vdots \\ ikd_{iT} \end{bmatrix}
      = \begin{bmatrix}
          ik^R_{i11} & \cdots & ik^R_{i1M_r} & ik^B_{i11} & \cdots & ik^B_{i1M_b} \\
          ik^R_{i21} & \cdots & ik^R_{i2M_r} & ik^B_{i21} & \cdots & ik^B_{i2M_b} \\
          \vdots     &        & \vdots       & \vdots     &        & \vdots       \\
          ik^R_{ip1} & \cdots & ik^R_{ipM_r} & ik^B_{ip1} & \cdots & ik^B_{ipM_b} \\
          \vdots     &        & \vdots       & \vdots     &        & \vdots       \\
          ik^R_{iT1} & \cdots & ik^R_{iTM_r} & ik^B_{iT1} & \cdots & ik^B_{iTM_b}
        \end{bmatrix}    (15)

where the ik^R columns form Array(ik_i^R) and the ik^B columns form Array(ik_i^B), ik^R_{ipj} is the j-th real-coded knowledge of the p-th best solution of individual i, ik^B_{ipj} denotes the j-th binary knowledge of the p-th best solution gained by person i, and T is the size of the IKDs.

When HcHLO conducts the individual learning, the linear individual learning operator is used to handle the real-coded knowledge in Array(ik_i^R) as Eq. (16), while the standard individual learning operator is adopted to deal with the binary bits as Eq. (17),

R_{ij} = ik^R_{ipj} + IL \times r_5 \times (sk^R_{qj} - ik^R_{ipj})    (16)

B_{ij} = ik^B_{ipj}    (17)

where r_5 is a random number between -1 and 1.

3.2.3 Social learning of HcHLO

Similarly, the best solutions of the population are reserved in the social knowledge database as Eq. (18) for the social learning of HcHLO,

SKD = \begin{bmatrix} skd_1 \\ skd_2 \\ \vdots \\ skd_q \\ \vdots \\ skd_H \end{bmatrix}
    = \begin{bmatrix}
        sk^R_{11} & \cdots & sk^R_{1M_r} & sk^B_{11} & \cdots & sk^B_{1M_b} \\
        sk^R_{21} & \cdots & sk^R_{2M_r} & sk^B_{21} & \cdots & sk^B_{2M_b} \\
        \vdots    &        & \vdots      & \vdots    &        & \vdots      \\
        sk^R_{q1} & \cdots & sk^R_{qM_r} & sk^B_{q1} & \cdots & sk^B_{qM_b} \\
        \vdots    &        & \vdots      & \vdots    &        & \vdots      \\
        sk^R_{H1} & \cdots & sk^R_{HM_r} & sk^B_{H1} & \cdots & sk^B_{HM_b}
      \end{bmatrix}    (18)

where sk^R_{qj} and sk^B_{qj} denote the j-th real-coded knowledge and the j-th binary knowledge of the q-th best solution in the SKD, respectively, and H is the size of the SKD; the sk^R columns form Array(sk^R) and the sk^B columns form Array(sk^B).

When HcHLO performs the social learning operation, the linear social learning operator is chosen to tackle the continuous variables in Array(sk^R) as Eq. (19), while the binary knowledge in Array(sk^B) is handled by the binary social learning operator as Eq. (20),

R_{ij} = sk^R_{qj} + SL \times r_6 \times (sk^R_{qj} - ik^R_{ipj})    (19)

B_{ij} = sk^B_{qj}    (20)

where r_6 is a random number between 0 and 1.

3.2.4 Updating of the IKD and the SKD of HcHLO

After the new population is yielded by the learning operations of HcHLO, the fitness of each candidate solution is computed according to the objective function. A new candidate is directly stored in the IKD regardless of its fitness if the number of reserved solutions in the IKD is less than the pre-defined size T; otherwise, it replaces the worst one in the IKD only if it has a better fitness. The SKD of HcHLO is updated in the same way. However, for the same reason, i.e. maintaining diversity and avoiding premature convergence, HcHLO replaces at most one solution in the SKD at each iteration.

In summary, HcHLO generates new solutions by performing the random learning operation (RLO), the individual learning operation (ILO), and the social learning operation (SLO) as Eq. (21),

x_{ij} = \begin{cases} \text{RLO}, & 0 \le r_7 \le p_r \\ \text{ILO}, & p_r < r_7 \le p_i \\ \text{SLO}, & p_i < r_7 \le 1 \end{cases}    (21)

where p_r and p_i are two control parameters of HcHLO that determine the rates of conducting the three learning operations, and r_7 is a random number between 0 and 1. Specifically, p_r is the probability of executing the random learning, while (p_i - p_r) and (1 - p_i) are the probabilities of performing the individual learning and the social learning, respectively.

HcHLO performs the learning operators and updates the IKD and the SKD iteratively until the termination criteria are met. The pseudo-code of HcHLO is shown in Algorithm 1 as follows:

Algorithm 1 Pseudo-code of HcHLO

1:  Initialize population X.
2:  Calculate fitness function f(X).
3:  Initialize the IKDs and the SKD.
4:  while the stop criterion is not satisfied do
5:    for i = 1 to N do
6:      for j = 1 to M do
7:        if (0 <= r7 and r7 < pr) then
8:          Generate xij as Eqs. (13-14).
9:        else if (pr <= r7 and r7 < pi) then
10:         Generate xij as Eqs. (16-17).
11:       else if (pi <= r7 and r7 <= 1) then
12:         Generate xij as Eqs. (19-20).
13:       end if
14:     end for
15:   end for
16:   Calculate f(X).
17:   Update the IKDs and the SKD.
18: end while
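
To connect the pseudo-code with the operators of Sections 3.2.1-3.2.3, the following Python sketch performs one generation of HcHLO for the special case T = H = 1 used in the experiments of Section 4 (one personal best per individual and a single social best). The vectorized NumPy layout and the boundary clipping are our implementation assumptions.

import numpy as np

def hchlo_step(R, B, ik_R, ik_B, sk_R, sk_B, x_min, x_max,
               pr=0.1, pi=0.85, IL=1.0, SL=2.0, rng=np.random.default_rng()):
    """One generation of HcHLO with IKD/SKD sizes T = H = 1.

    R, B:       current population (N x Mr real block, N x Mb binary block).
    ik_R, ik_B: personal best solutions, same shapes as R and B.
    sk_R, sk_B: the social best solution (length-Mr and length-Mb vectors).
    Each element independently undergoes random, individual, or social
    learning with probabilities pr, pi - pr, and 1 - pi, as in Eq. (21)."""
    r = rng.random(R.shape)
    rand_R = x_min + rng.random(R.shape) * (x_max - x_min)            # Eq. (13)
    ind_R = ik_R + IL * rng.uniform(-1, 1, R.shape) * (sk_R - ik_R)   # Eq. (16)
    soc_R = sk_R + SL * rng.random(R.shape) * (sk_R - ik_R)           # Eq. (19)
    new_R = np.where(r < pr, rand_R, np.where(r < pi, ind_R, soc_R))
    new_R = np.clip(new_R, x_min, x_max)   # clipping to the bounds is an assumption

    rb = rng.random(B.shape)
    rand_B = rng.integers(0, 2, size=B.shape)                         # Eq. (14)
    new_B = np.where(rb < pr, rand_B, np.where(rb < pi, ik_B, sk_B))  # Eqs. (17), (20)
    return new_R, new_B

After the fitness of (new_R, new_B) is evaluated, ik_R/ik_B and sk_R/sk_B would be updated as described in Section 3.2.4.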

4. Results and discussions

In engineering areas, many problems are mix-coded optimization problems, which involve a number of system parameters of which some take on continuous values while others are restricted to a set of discrete values or Boolean values [34, 35]. These discrete sets typically arise because some variables are only allowed to take the sizes of standard or readily available components, while the Boolean variables represent whether the corresponding components are included or excluded. A total of 14 engineering optimization problems from [35, 36] were adopted as the benchmark problems to evaluate the performance of the presented HcHLO. Table 1 lists the global optimum and type of each of these 14 problems, and more details are given in the appendix.

First, HcHLO is compared with the six improved algorithms developed in [35, 36], which, as far as we know, have achieved the best results on these 14 problems so far. The details of these six approaches are listed in Table 2. For a fair comparison, the population of HcHLO is set to 10 × M, where M is the dimension of the problem, and the maximal number of function calculations (MaxNFC) on each problem is set as recommended in [36]. Besides, following the instruction given in [36], if the gap between the theoretical optimum and the found one is less than 10^-6, the search is terminated. Since all the benchmark problems are single-objective problems, the sizes of the IKDs and the SKD were both set to 1 according to [1], and the IKD of HcHLO was re-initialized if the individual best solution was not updated within 100 generations, to avoid being trapped in local optima. Note that the optimal control parameters usually depend on the problems and are unknown without prior knowledge. Therefore, a set of fair parameter values, as listed in Table 3, was chosen for HcHLO by trial and error. HcHLO was applied to solve all the 14 problems with 100 independent runs, and the results are listed in Table 4. For convenient comparison of the performance, the rankings and the average numbers of fitness calculations of the algorithms are summarized in Tables 5 and 6, respectively.

Table 1. The benchmark problems

Problems | Best known  | Type
P1       | 87.5        | continuous-binary mixed
P2       | 7.6672      | continuous-binary mixed
P3       | 4.5796      | continuous-binary mixed
P4       | 2           | continuous-binary mixed
P5       | 2.1247      | continuous-binary mixed
P6       | 1.0765548   | continuous-binary mixed
P7       | 99.245209   | continuous-binary mixed
P8       | 3.557473    | continuous-binary mixed
P9       | -32217.4    | continuous-binary mixed
P10      | -0.808844   | discrete
P11      | -0.974565   | discrete
P12      | -0.999486   | continuous-discrete mixed
P13      | 5850.770    | continuous-discrete mixed
P14      | -75.134137  | continuous-discrete mixed
Table 2. Six improved algorithms proposed in [35, 36]

Algorithm    | Description
MDE'         | modified differential evolution
MA-MDE'      | hybrid of memetic algorithm and modified differential evolution
MDE'-IHS     | hybrid of modified differential evolution and improved harmony search
MDE'-HJ      | hybrid of modified differential evolution and the Hooke and Jeeves method
MDE'-IHS-HJ  | hybrid of modified differential evolution, improved harmony search, and the Hooke and Jeeves method
PSO-MDE'-HJ  | hybrid of particle swarm optimization, modified differential evolution, and the Hooke and Jeeves method

Table 3. The parameter settings of the algorithms

Algorithm | Parameters
HcHLO     | pr=0.1, pi=0.85, IL=1, SL=2
HLO       | pr=5/M, pi=0.85+2/M
MBDE      | CR=0.9
CCPSO     | c=2.0, ω=0.6, P=0.05, G=5
BHTPSO    | ωmin=0.2, ωmax=0.6, c1min=0.5, c1max=2, c2min=1.0, c2max=2.0, c3min=0.5, c3max=1.5
BLDE      | p=max(0.05, min(0.15, 10/n))
SCA       | a=2, r1 = a - t*a/T
ALO       | a=min(x), b=max(x), c=xlb, d=xub
MFO       | flame_no = round(N - l*(N-1)/T), a = -1 - t*(-1)/T, b=1
GWO       | a = 2 - t*2/T, A = 2a*r1 - a, C = 2*r2
BBA       | Qmin=0, Qmax=2, A=0.9, r=0.9
WOA       | a = 2 - t*2/T, a2 = -1 + t*((-1)/T), b=1

Table 4. Results of HcHLO and the compared algorithms on the benchmark problems ("/" marks values not reported in [35, 36])

Problem | Method | SR (%) | MEAN | STD | MinNFC | MaxNFC
P1 | MDE' | 54 | 89.879034 | 2.768746 | 7696 | 15000
P1 | MA-MDE' | 91 | 88.230145 | 1.899683 | 3901 | 15000
P1 | MDE'-IHS | 84 | 87.500000 | 0.002118 | 3731 | 15000
P1 | MDE'-HJ | 100 | / | / | 5859 | 15000
P1 | MDE'-IHS-HJ | 96 | / | / | 6589 | 15000
P1 | PSO-MDE'-HJ | 100 | / | / | 4596 | 15000
P1 | HcHLO | 100 | 87.500000 | 0.000000 | 3543 | 15000
P2 | MDE' | 4 | 7.918619 | 0.047891 | 96070 | 100000
P2 | MA-MDE' | 13 | 7.883841 | 0.098982 | 87422 | 100000
P2 | MDE'-IHS | 17 | 7.848896 | 0.121909 | 85048 | 100000
P2 | MDE'-HJ | 74 | / | / | 28389 | 100000
P2 | MDE'-IHS-HJ | 94 | / | / | 10522 | 100000
P2 | PSO-MDE'-HJ | 73 | / | / | 20910 | 100000
P2 | HcHLO | 100 | 7.667194 | 0.000000 | 6005 | 100000
P3 | MDE' | 91 | 4.661414 | 0.311365 | 7912 | 15000
P3 | MA-MDE' | 93 | 4.579600 | 0.000003 | 13254 | 15000
P3 | MDE'-IHS | 97 | 4.579600 | 0.000005 | 6259 | 15000
P3 | MDE'-HJ | 0 | / | / | 15795 | 15000
P3 | MDE'-IHS-HJ | 48 | / | / | 15116 | 15000
P3 | PSO-MDE'-HJ | 0 | / | / | 15511 | 15000
P3 | HcHLO | 100 | 4.579587 | 0.000000 | 5173 | 15000
P4 | MDE' | 91 | 2.009348 | 0.043579 | 1075 | 5000
P4 | MA-MDE' | 88 | 2.000000 | 0.000000 | 1677 | 5000
P4 | MDE'-IHS | 96 | 2.000001 | 0.000000 | 3290 | 5000
P4 | MDE'-HJ | 99 | / | / | 1787 | 5000
P4 | MDE'-IHS-HJ | 95 | / | / | 1211 | 5000
P4 | PSO-MDE'-HJ | 88 | / | / | 1863 | 5000
P4 | HcHLO | 100 | 2.000000 | 0.000000 | 1287 | 5000
P5 | MDE' | 65 | 2.167894 | 0.132196 | 1987 | 5000
P5 | MA-MDE' | 84 | 2.124574 | 0.000071 | 1241 | 5000
P5 | MDE'-IHS | 100 | 2.124604 | 0.000076 | 1290 | 5000
P5 | MDE'-HJ | 79 | / | / | 1721 | 5000
P5 | MDE'-IHS-HJ | 86 | / | / | 1251 | 5000
P5 | PSO-MDE'-HJ | 49 | / | / | 2776 | 5000
P5 | HcHLO | 100 | 2.124470 | 0.000000 | 407 | 5000
P6 | MDE' | 42 | 1.124453 | 0.075163 | 30030 | 50000
P6 | MA-MDE' | 65 | 1.099805 | 0.055618 | 23462 | 50000
P6 | MDE'-IHS | 38 | 1.094994 | 0.052898 | 45764 | 50000
P6 | MDE'-HJ | 90 | / | / | 15964 | 50000
P6 | MDE'-IHS-HJ | 83 | / | / | 21890 | 50000
P6 | PSO-MDE'-HJ | 71 | / | / | 22929 | 50000
P6 | HcHLO | 100 | 1.076555 | 0.000000 | 12540 | 50000
P7 | MDE' | 97 | 99.245209 | 0.001429 | 426 | 1797
P7 | MA-MDE' | 96 | 99.245209 | 0.001842 | 670 | 1797
P7 | MDE'-IHS | 75 | 99.512250 | 1.485279 | 642 | 1797
P7 | MDE'-HJ | 91 | / | / | 994 | 1797
P7 | MDE'-IHS-HJ | 99 | / | / | 458 | 1797
P7 | PSO-MDE'-HJ | 100 | / | / | 412 | 1797
P7 | HcHLO | 100 | 99.241553 | 0.000000 | 248 | 1797
P8 | MDE' | 54 | 3.599903 | 0.059012 | 27329 | 50000
P8 | MA-MDE' | 85 | 3.564912 | 0.029017 | 20546 | 50000
P8 | MDE'-IHS | 72 | 3.561157 | 0.008381 | 19947 | 50000
P8 | MDE'-HJ | 3 | / | / | 50210 | 50000
P8 | MDE'-IHS-HJ | 81 | / | / | 45821 | 50000
P8 | PSO-MDE'-HJ | 28 | / | / | 49206 | 50000
P8 | HcHLO | 85 | 3.558935 | 0.004882 | 33293 | 50000
P9 | MDE' | 100 | -32217.427262 | 0.002836 | 1023 | 5495
P9 | MA-MDE' | 100 | -32217.427106 | 0.003690 | 1913 | 5495
P9 | MDE'-IHS | 100 | -32217.427780 | 0.000000 | 403 | 5495
P9 | MDE'-HJ | 100 | / | / | 495 | 5495
P9 | MDE'-IHS-HJ | 100 | / | / | 453 | 5495
P9 | PSO-MDE'-HJ | 100 | / | / | 555 | 5495
P9 | HcHLO | 100 | -32217.427780 | 0.000000 | 24 | 5495
P10 | MDE' | 93 | -0.807608 | 0.005615 | 17567 | 50000
P10 | MA-MDE' | 94 | -0.807907 | 0.003077 | 30951 | 50000
P10 | MDE'-IHS | 100 | -0.808844 | 0.000000 | 3955 | 50000
P10 | MDE'-HJ | 47 | / | / | 43090 | 50000
P10 | MDE'-IHS-HJ | 92 | / | / | 13152 | 50000
P10 | PSO-MDE'-HJ | 89 | / | / | 24484 | 50000
P10 | HcHLO | 100 | -0.808844 | 0.000000 | 19600 | 50000
P11 | MDE' | 100 | -0.974565 | 0.000330 | 222 | 1000
P11 | MA-MDE' | 100 | -0.974565 | 0.000977 | 338 | 1000
P11 | MDE'-IHS | 88 | -0.974565 | 0.000000 | 241 | 1000
P11 | MDE'-HJ | 100 | / | / | 285 | 1000
P11 | MDE'-IHS-HJ | 100 | / | / | 221 | 1000
P11 | PSO-MDE'-HJ | 100 | / | / | 288 | 1000
P11 | HcHLO | 100 | -0.974565 | 0.000000 | 263 | 1000
P12 | MDE' | 100 | -0.999631 | 0.000104 | 1460 | 14000
P12 | MA-MDE' | 100 | -0.999638 | 0.000111 | 2524 | 14000
P12 | MDE'-IHS | 100 | -0.999635 | 0.000104 | 1070 | 14000
P12 | MDE'-HJ | 100 | / | / | 1704 | 14000
P12 | MDE'-IHS-HJ | 100 | / | / | 1762 | 14000
P12 | PSO-MDE'-HJ | 100 | / | / | 1414 | 14000
P12 | HcHLO | 100 | -0.999821 | 0.000009 | 1943 | 14000
P13 | MDE' | 17 | 6070.604982 | 109.163780 | 42108 | 50000
P13 | MA-MDE' | 17 | 6040.005940 | 168.603518 | 42632 | 50000
P13 | MDE'-IHS | 8 | 6082.551078 | 185.056741 | 46451 | 50000
P13 | MDE'-HJ | 76 | / | / | 30138 | 50000
P13 | MDE'-IHS-HJ | 50 | / | / | 32618 | 50000
P13 | PSO-MDE'-HJ | 99 | / | / | 18265 | 50000
P13 | HcHLO | 82 | 5908.944814 | 101.135968 | 30663 | 50000
P14 | MDE' | 99 | -75.134137 | 0.000023 | 1603 | 10000
P14 | MA-MDE' | 98 | -75.134137 | 0.000024 | 2856 | 10000
P14 | MDE'-IHS | 100 | -75.134137 | 0.000025 | 2977 | 10000
P14 | MDE'-HJ | 100 | / | / | 3058 | 10000
P14 | MDE'-IHS-HJ | 100 | / | / | 1747 | 10000
P14 | PSO-MDE'-HJ | 100 | / | / | 2419 | 10000
P14 | HcHLO | 100 | -75.134137 | 0.000000 | 2034 | 10000

Table 4 shows that HcHLO achieves the best results on 13 out of the 14 problems, and it is only inferior to PSO-MDE'-HJ on P13. The rankings in Table 5 clearly show that HcHLO outperforms the other 6 algorithms. Specifically, HcHLO finds the global optima (within a 10^-6 error) with a 100% success rate (SR) on 12 out of the 14 problems, while MDE', MA-MDE', MDE'-IHS, MDE'-HJ, MDE'-IHS-HJ, and PSO-MDE'-HJ reach a 100% success rate on only 3, 3, 4, 5, 4, and 6 of the 14 problems, respectively. Besides, Table 6 shows that HcHLO requires the smallest number of fitness calculations, i.e. 8359 on average over all the problems, which is 23.4% fewer than the second-placed MDE'-IHS-HJ. Therefore, it is fair to claim that HcHLO possesses more robust and efficient performance on the test problems.

To further verify the performance of HcHLO, 10 recently proposed algorithms, i.e. the memory based differential evolution algorithm (MBDE) [37], competitive and cooperative particle swarm optimization (CCPSO) [38], memetic binary hybrid topology particle swarm optimization (BHTPSO) [39], binary learning differential evolution (BLDE) [40], the sine cosine algorithm (SCA) [41], the ant lion optimizer (ALO) [42], the moth-flame optimization algorithm (MFO) [43], the grey wolf optimizer (GWO) [44], the binary bat algorithm (BBA) [45], and the whale optimization algorithm (WOA) [46], as well as the standard HLO [1], were adopted to solve these 14 engineering problems. For a fair comparison, the recommended parameter values of these algorithms, also given in Table 3, were used, and the maximal number of function calculations is the same as that of HcHLO. The numerical results and the Wilcoxon signed-rank test (W-test) results are displayed in Table 7, where "1" represents that HcHLO significantly outperforms the compared algorithm at the 95% confidence level, "-1" denotes that HcHLO is significantly worse than the compared one, and "0" indicates that HcHLO is comparable in performance to the counterpart. For clear analysis of the results, the W-test results are summarized in Table 8.

Table 5. Rankings of MDE', MA-MDE', MDE'-IHS, MDE'-HJ, MDE'-IHS-HJ, PSO-MDE'-HJ, and HcHLO

Method       | P1 P2 P3 P4 P5 P6 P7 P8 P9 P10 P11 P12 P13 P14 | Mean
MDE'         |  7  7  4  5  6  6  4  5  1  4   1   1   5   6  | 4.43
MA-MDE'      |  6  6  3  6  4  5  5  1  1  3   1   1   5   7  | 3.86
MDE'-IHS     |  5  5  2  3  1  7  7  4  1  1   7   1   7   1  | 3.71
MDE'-HJ      |  1  3  6  2  5  2  6  7  1  7   1   1   3   1  | 3.28
MDE'-IHS-HJ  |  4  2  5  4  3  3  3  3  1  5   1   1   4   1  | 2.86
PSO-MDE'-HJ  |  1  4  7  6  7  4  1  6  1  6   1   1   1   1  | 3.36
HcHLO        |  1  1  1  1  1  1  1  1  1  1   1   1   2   1  | 1.07

Table 6. The average number of function calculations on the benchmark problems

Problem | MDE'  | MA-MDE' | MDE'-IHS | MDE'-HJ | MDE'-IHS-HJ | PSO-MDE'-HJ | HcHLO
P1      | 7696  | 3901    | 3731     | 5859    | 6589        | 4596        | 3543
P2      | 96070 | 87422   | 85048    | 28389   | 10522       | 20910       | 6005
P3      | 7912  | 13254   | 6259     | 15795   | 15116       | 15511       | 5173
P4      | 1075  | 1677    | 3290     | 1787    | 1211        | 1863        | 1287
P5      | 1987  | 1241    | 1290     | 1721    | 1251        | 2776        | 407
P6      | 30030 | 23462   | 45764    | 15964   | 21890       | 22929       | 12540
P7      | 426   | 670     | 642      | 994     | 458         | 412         | 248
P8      | 27329 | 20546   | 19947    | 50210   | 45821       | 49206       | 33293
P9      | 1023  | 1913    | 403      | 495     | 453         | 555         | 24
P10     | 17567 | 30951   | 3955     | 43090   | 13152       | 24484       | 19600
P11     | 222   | 338     | 241      | 285     | 221         | 288         | 263
P12     | 1460  | 2524    | 1070     | 1704    | 1762        | 1414        | 1943
P13     | 42108 | 42632   | 46451    | 30138   | 32618       | 18265       | 30663
P14     | 1603  | 2856    | 2977     | 3058    | 1747        | 2419        | 2034
Mean    | 16893 | 16671   | 15791    | 14249   | 10915       | 11831       | 8359
Table 7 shows that HcHLO obtains the best results on all the 14 problems, and the W-test results summarized in Table 8 demonstrate that HcHLO is significantly better than HLO, BLDE, BHTPSO, CCPSO, MBDE, SCA, MFO, BBA, GWO, WOA, and ALO on 7, 13, 12, 8, 10, 7, 8, 10, 8, 9, and 8 out of the 14 problems, respectively. Compared with HLO, the proposed HcHLO has more control parameters and its implementation is more complicated, as it uses the continuous learning operators and the binary learning operators to deal with the real-coded variables and the other types of variables, respectively. However, it is fair to conclude that HcHLO is valid and worth using, since it obviously surpasses HLO on 7 out of the 14 problems and has better numerical results on all the problems.

Besides, the experimental results demonstrate the superiority of HLO, as it outperforms the other compared algorithms except SCA. As discussed in the previous work [2], the learning operators of HLO endow the algorithm with more complicated dynamic behaviors. For example, the individual learning operator and the social learning operator of HLO generate a new candidate by copying different bits of the solutions in the IKD and SKD, which is similar to the crossover operator of Genetic Algorithms (GAs). However, the real function of the individual learning operator and the social learning operator is a variable-point crossover, that is, it can be a single-point crossover or a variable multi-point crossover according to the generated random number r_7 in Eq. (21). Therefore, the dynamics of HLO are much more complicated than those of GAs. Besides, as only two values, i.e. "0" and "1", exist in binary space, the random learning operator of HLO was regarded as a mutation operator with the mutation probability p_r/2 in previous works [1-4]. However, the random learning operator works as a mutation operator with the rate p_r/2 only when the corresponding bits of the individual best solution and the social best solution are the same. If the bit of the individual best solution is different from that of the social best solution, like "1" for the individual best solution and "0" for the social best solution, the random learning operator can be regarded as the individual learning operator if its randomly yielded value is "1", or as the social learning operator if its randomly generated value is "0". Therefore, the random learning operator works as a mutation operator in each individual of HLO with a different mutation rate, which is determined by the p_r value and the difference between the corresponding individual best solution and the social best solution. Moreover, with the updating of the IKDs and the SKD, the difference between the individual best solution and the social best solution changes accordingly, which means that the probability of the random learning operator acting as a mutation operator also varies for each individual during the iterations. In short, the random learning operator in HLO may act as a mutation operator with a varied mutation rate for different individuals at different generations, which is much more complex than the standard mutation operator in GAs. Based on the above analysis, it can be seen that the learning operators of HLO have more complicated behaviors than they appear to. Compared with GAs, HLO can obviously search with more different modes and with varied parameter values, which may be the essential reason that HLO possesses an excellent search ability.

It can also be found that the hybrid algorithms proposed in [36], i.e. PSO-MDE'-HJ and MDE'-IHS-HJ, are powerful, as they achieve better results than HLO, MBDE, CCPSO, BLDE, BHTPSO, SCA, MFO, BBA, GWO, WOA, and ALO, which were developed later. Considering that PSO-MDE'-HJ and MDE'-IHS-HJ are specially designed to tackle these hybrid-coded problems, this is reasonable. However, the experimental results show that HcHLO, rather than MDE'-IHS-HJ, now possesses the best-known overall results on these 14 problems, due to the excellent global search ability of HcHLO as well as its reasonable mix-coded framework.

Table 7. Results of HcHLO, HLO, BLDE, BHTPSO, CCPSO, MBDE, SCA, MFO, BBA, GWO, WOA, and ALO on the benchmark problems. For each problem, the four rows give Best, Mean, Std, and the W-test result, with the algorithms in the order HcHLO, HLO, BLDE, BHTPSO, CCPSO, MBDE, SCA, MFO, BBA, GWO, WOA, ALO.

P1
Best:   87.500000 | 87.500000 | 87.500000 | 87.500000 | 87.500000 | 87.500000 | 87.500000 | 87.500000 | 87.500000 | 87.500000 | 87.500000 | 87.500000
Mean:   87.500000 | 87.500000 | 87.767621 | 87.500000 | 87.500000 | 87.547137 | 87.500000 | 87.500000 | 87.500000 | 87.500000 | 87.500000 | 87.500000
Std:    3.02E-08 | 1.22E-07 | 2.46E-01 | 1.84E-06 | 1.27E-06 | 4.71E-01 | 3.10E-07 | 2.91E-07 | 8.24E-07 | 7.06E-07 | 5.35E-07 | 2.24E-07
W-test: / | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0

P2
Best:   7.667181 | 7.667184 | 7.667184 | 7.670735 | 7.667181 | 7.667181 | 7.667182 | 7.667184 | 7.667201 | 7.667519 | 7.667332 | 7.667432
Mean:   7.667194 | 7.667195 | 7.888805 | 7.922762 | 7.667194 | 8.107200 | 7.750253 | 7.873050 | 7.668284 | 7.889777 | 7.831248 | 7.772781
Std:    4.76E-06 | 1.02E-05 | 3.93E-01 | 8.39E-02 | 5.30E-06 | 5.87E-01 | 1.22E-02 | 1.09E-01 | 8.38E-04 | 1.07E-01 | 1.29E-01 | 2.01E-01
W-test: / | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1

P3
Best:   4.579587 | 4.579600 | 4.579598 | 4.579619 | 4.579588 | 4.579587 | 4.579596 | 4.579587 | 4.579623 | 4.579587 | 4.580446 | 4.579596
Mean:   4.579597 | 4.584246 | 4.652072 | 4.648051 | 4.579657 | 4.579597 | 4.579597 | 4.579597 | 4.648051 | 4.579597 | 4.585862 | 4.589512
Std:    3.02E-06 | 2.31E-02 | 9.62E-02 | 8.96E-02 | 5.67E-05 | 2.09E-01 | 3.62E-06 | 3.48E-06 | 8.24E-02 | 1.05E-05 | 5.17E-01 | 4.33E-02
W-test: / | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1

P4
Best:   2.000000 | 2.000000 | 2.000000 | 2.000120 | 2.000000 | 2.000000 | 2.000000 | 2.000000 | 2.000000 | 2.000000 | 2.000000 | 2.000000
Mean:   2.000000 | 2.000000 | 2.122889 | 2.011921 | 2.000000 | 2.073184 | 2.000000 | 2.000000 | 2.000000 | 2.000000 | 2.000000 | 2.000000
Std:    2.81E-07 | 8.52E-07 | 1.24E-01 | 1.19E-03 | 3.00E-07 | 1.10E-01 | 2.78E-07 | 5.17E-07 | 7.65E-07 | 3.32E-07 | 1.99E-07 | 5.49E-07
W-test: / | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0

P5
Best:   2.124470 | 2.124538 | 2.124693 | 2.124546 | 2.124472 | 2.124474 | 2.124470 | 2.124470 | 2.124486 | 2.124470 | 2.124470 | 2.124470
Mean:   2.124470 | 2.124693 | 2.131412 | 2.124692 | 2.124589 | 2.445195 | 2.126509 | 2.141912 | 2.135737 | 2.137581 | 2.124594 | 2.124675
Std:    6.42E-07 | 2.15E-05 | 1.10E-02 | 2.19E-05 | 6.10E-05 | 1.91E-01 | 7.65E-03 | 8.50E-02 | 1.44E-02 | 2.42E-02 | 7.84E-05 | 7.84E-05
W-test: / | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0

P6
Best:   1.076546 | 1.076547 | 1.091516 | 1.077308 | 1.076546 | 1.076546 | 1.076546 | 1.076546 | 1.079033 | 1.076617 | 1.076707 | 1.076546
Mean:   1.081757 | 1.158096 | 1.248415 | 1.231980 | 1.0866196 | 1.099527 | 1.180620 | 1.144197 | 1.231470 | 1.165564 | 1.127812 | 1.175122
Std:    2.97E-02 | 8.49E-02 | 1.58E-02 | 3.74E-02 | 3.17E-01 | 1.81E-01 | 8.53E-02 | 8.50E-02 | 4.74E-02 | 8.29E-02 | 5.03E-02 | 8.03E-02
W-test: / | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1

P7
Best:   99.239635 | 99.243964 | 99.239635 | 99.239635 | 99.239635 | 99.239635 | 99.239635 | 99.239635 | 99.239637 | 99.239635 | 99.239636 | 99.239636
Mean:   99.241553 | 99.597800 | 101.7905895 | 101.7619564 | 99.241803 | 110.919590 | 99.241569 | 99.403991 | 100.610843 | 99.502571 | 99.243185 | 99.353452
Std:    1.71E-03 | 1.39E+00 | 3.64E+00 | 3.69E+00 | 5.633E-03 | 1.47E+02 | 1.79E-03 | 1.84E-02 | 1.97E+00 | 2.96E-02 | 3.47E-03 | 5.47E-02
W-test: / | 1 | 1 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 1

P8
Best:   3.557466 | 3.557502 | 3.557778 | 3.558214 | 3.557466 | 3.557466 | 3.557466 | 3.557472 | 3.569668 | 3.557550 | 3.557755 | 3.557834
Mean:   3.558935 | 3.565095 | 3.605231 | 3.649570 | 3.559793 | 3.560045 | 3.559031 | 3.580417 | 3.708652 | 3.568889 | 3.559104 | 3.572482
Std:    4.88E-03 | 8.17E-03 | 6.29E-02 | 9.51E-02 | 5.63E-02 | 5.81E-01 | 3.17E-03 | 4.80E-02 | 1.23E-01 | 2.08E-02 | 3.45E-03 | 3.59E-02
W-test: / | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 1

P9
Best:   -32217.42778 for all 12 algorithms
Mean:   -32217.42778 for all 12 algorithms
Std:    2.19E-11 for all 12 algorithms
W-test: / | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0

P10
Best:   -0.808844 for all 12 algorithms
Mean:   -0.808844 | -0.808844 | -0.802650 | -0.793711 | -0.757106 | -0.760245 | -0.807014 | -0.808657 | -0.755025 | -0.808844 | -0.787432 | -0.788527
Std:    3.25E-11 | 3.25E-11 | 9.14E-03 | 1.36E-02 | 2.32E-02 | 5.64E-02 | 6.52E-03 | 1.87E-03 | 2.12E-02 | 8.92E-11 | 1.96E-02 | 1.42E-02
W-test: / | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1

P11
Best:   -0.974565 | -0.974565 | -0.974565 | -0.974565 | -0.973054 | -0.971472 | -0.974565 | -0.974565 | -0.974565 | -0.978799 | -0.974565 | -0.974565
Mean:   -0.974565 | -0.974565 | -0.974127 | -0.974367 | -0.972187 | -0.970321 | -0.974565 | -0.974565 | -0.974298 | -0.976103 | -0.973848 | -0.974565
Std:    1.56E-15 | 1.56E-15 | 9.14E-03 | 1.13E-03 | 1.50E-03 | 3.24E-03 | 1.42E-15 | 1.82E-15 | 1.25E-03 | 1.06E-03 | 4.90E-03 | 1.75E-15
W-test: / | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 0

P12
Best:   -0.999892 | -0.999890 | -0.999872 | -0.999872 | -0.999844 | -0.999888 | -0.9998621 | -0.999892 | -0.999544 | -0.999892 | -0.999892 | -0.999862
Mean:   -0.999821 | -0.999626 | -0.999565 | -0.999121 | -0.999594 | -0.999653 | -0.9997353 | -0.999733 | -0.999152 | -0.999746 | -0.997271 | -0.999752
Std:    9.54E-06 | 1.09E-04 | 2.60E-04 | 2.00E-03 | 9.62E-05 | 1.06E-04 | 2.38E-04 | 9.92E-05 | 7.86E-03 | 8.18E-05 | 3.50E-03 | 7.86E-05
W-test: / | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 1 | 0

P13
Best:   5850.438514 | 5850.507633 | 5850.769287 | 5856.841490 | 5850.508182 | 5850.511380 | 5850.549073 | 5850.424435 | 5851.043264 | 5851.395380 | 5850.825749 | 5850.825749
Mean:   5908.944814 | 5956.548160 | 6702.28527 | 6274.611674 | 5974.786682 | 6337.531826 | 5923.822778 | 5965.330069 | 6304.729238 | 5912.724358 | 6291.156549 | 5941.542192
Std:    1.01E+02 | 1.34E+02 | 8.76E+02 | 2.92E+02 | 1.03E+02 | 4.06E+02 | 1.94E+02 | 1.38E+02 | 2.75E+02 | 1.88E+01 | 3.40E+02 | 2.45E+02
W-test: / | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1

P14
Best:   -75.134137 | -75.134137 | -75.134135 | -75.134133 | -75.134137 | -75.134137 | -75.134137 | -75.134137 | -75.133224 | -75.134137 | -75.133869 | -75.133869
Mean:   -75.134137 | -74.550318 | -72.024184 | -74.886291 | -74.533979 | -74.102486 | -74.645972 | -74.483266 | -74.577617 | -75.120090 | -74.557990 | -74.601332
Std:    1.11E-07 | 1.96E+00 | 3.81E+00 | 4.70E-01 | 1.07E-04 | 2.33E+00 | 1.94E+00 | 2.22E-02 | 4.17E+01 | 4.33E-02 | 8.40E-01 | 7.38E-01
W-test: / | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1

Table 8. Summary results of the W-test on the benchmark problems

      | HLO | BLDE | BHTPSO | CCPSO | MBDE | SCA | MFO | BBA | GWO | WOA | ALO
P1    |  0  |  1   |  0     |  0    |  1   |  0  |  0  |  0  |  0  |  0  |  0
P2    |  0  |  1   |  1     |  1    |  0   |  1  |  1  |  1  |  1  |  1  |  1
P3    |  1  |  1   |  1     |  0    |  0   |  0  |  0  |  1  |  0  |  1  |  1
P4    |  0  |  1   |  1     |  0    |  1   |  0  |  0  |  0  |  0  |  0  |  0
P5    |  1  |  1   |  1     |  0    |  1   |  1  |  1  |  1  |  1  |  1  |  0
P6    |  1  |  1   |  1     |  1    |  1   |  1  |  1  |  1  |  1  |  1  |  1
P7    |  1  |  1   |  1     |  0    |  1   |  0  |  1  |  1  |  1  |  0  |  1
P8    |  0  |  1   |  1     |  1    |  1   |  0  |  1  |  1  |  1  |  0  |  1
P9    |  0  |  0   |  0     |  0    |  0   |  0  |  0  |  0  |  0  |  0  |  0
P10   |  0  |  1   |  1     |  1    |  1   |  1  |  1  |  1  |  0  |  1  |  1
P11   |  0  |  1   |  0     |  1    |  1   |  0  |  0  |  0  |  1  |  1  |  0
P12   |  1  |  1   |  1     |  1    |  1   |  1  |  0  |  1  |  0  |  1  |  0
P13   |  1  |  1   |  1     |  1    |  1   |  1  |  1  |  1  |  1  |  1  |  1
P14   |  1  |  1   |  1     |  1    |  0   |  1  |  1  |  1  |  1  |  1  |  1
Total |  7  | 13   | 12     |  8    | 10   |  7  |  8  | 10  |  8  |  9  |  8

5. Conclusions and future work

In engineering areas, many optimization problems are mix-coded problems. Although binary optimization algorithms, like HLO, can be used to solve these problems directly, the efficiency of the search on continuous parameters that require high accuracy may be spoiled by "the curse of dimensionality" caused by the binary coding strategy. Meanwhile, using continuous algorithms to solve mixed-variable problems would incur a significant performance reduction because of the binary or discrete variables of the problems. Thus, this paper extends HLO and, for the first time, presents a hybrid-coded HLO framework to solve mix-coded problems more efficiently and effectively, in which the real-coded parameters are optimized by the continuous linear learning operators of CHLO while the remaining variables of the problems are handled by the binary learning operators of HLO. The experimental results demonstrate the validity and superiority of the proposed HcHLO, as it achieves the best-known overall results so far on these benchmark problems.

HLO is a newly developed meta-heuristic with promising potential. Our future work will further explore the characteristics of HLO and apply it to diverse problems. Besides, HLO is developed based on a simplified human learning model in which only random learning, individual learning, and social learning are simulated, while many sophisticated brain functions and learning mechanisms, which play important roles in the human learning process and are even the key reasons that humans perform better than other animals, are not considered. Thus, the most important direction of our following research is to study and introduce these phenomena into HLO to enhance its search ability, based on the corresponding achievements in cognitive science and learning theories.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (Grants No. 61304031 and 61633016), the Key Project of the Science and Technology Commission of Shanghai Municipality (Grants No. 16010500300, 15220710400, and 14DZ1206302), and a Paul and Heidi Brown Preeminent Professorship in Industrial and Systems Engineering, University of Florida.

References

[1] L. Wang, H. Ni, R. Yang, A simple human learning optimization algorithm, in: M. Fei, C. Peng, Z. Su, Y. Song, Q. Han (Eds.), Computational Intelligence, Networked Systems and Their Applications, LSMS/ICSEE 2014, Communications in Computer and Information Science, vol. 462, Springer, Berlin, Heidelberg, 2014, pp. 56-65.
[2] L. Wang, H. Ni, R. Yang, An adaptive simplified human learning optimization algorithm, Information Sciences. 320 (2015) 126-139.
[3] L. Wang, L. An, J. Pi, A diverse human learning optimization algorithm, Journal of Global Optimization. 67 (1) (2017) 283-323.
[4] L. Wang, R. Yang, H. Ni, A human learning optimization algorithm and its application to multi-dimensional knapsack problems, Applied Soft Computing. 34 (2015) 736-743.
[5] L. Wang, H. Ni, W. Zhou, MBPOA-based LQR controller and its application to the double-parallel inverted pendulum system, Engineering Applications of Artificial Intelligence. 36 (2014) 262-268.
[6] L. Wang, R. Yang, P.M. Pardalos, An adaptive fuzzy controller based on harmony search and its application to power plant control, International Journal of Electrical Power & Energy Systems. 53 (2013) 272-278.
[7] S. Pookpunt, W. Ongsakul, Design of optimal wind farm configuration using a binary particle swarm optimization at Huasai district, Southern Thailand, Energy Conversion and Management. 108 (2016) 160-180.
[8] R. Taormina, K.W. Chau, Data-driven input variable selection for rainfall-runoff modeling using binary-coded particle swarm optimization and Extreme Learning Machines, Journal of Hydrology. 529 (2015) 1617-1632.
[9] A.A. Salman, I. Ahmad, M.G.H. Omran, A metaheuristic algorithm to solve satellite broadcast scheduling problem, Information Sciences. 322 (2015) 72-91.
[10] A.M. Cruz, R.B. Fernández, H.M. Lozano, Automated functional test generation for digital systems through a compact binary differential evolution algorithm, Journal of Electronic Testing. 31 (4) (2015) 361-380.
[11] N.K. Sreeja, A. Sankar, Ant colony optimization based binary search for efficient point pattern matching in images, European Journal of Operational Research. 246 (1) (2015) 154-169.
[12] X. Kong, L. Gao, H. Ouyang, A simplified binary harmony search algorithm for large scale 0-1 knapsack problems, Expert Systems with Applications. 42 (12) (2015) 5337-5355.
[13] C. Ozturk, E. Hancer, D. Karaboga, A novel binary artificial bee colony algorithm based on genetic operators, Information Sciences. 297 (2015) 154-170.
[14] H. Nezamabadi-pour, A quantum-inspired gravitational search algorithm for binary encoded optimization problems, Engineering Applications of Artificial Intelligence. 40 (2015) 62-75.
[15] P.K. Singhal, R. Naresh, V. Sharma, Binary fish swarm algorithm for profit-based unit commitment problem in competitive electricity market with ramp rate constraints, IET Generation, Transmission & Distribution. 9 (13) (2015) 1697-1707.
[16] M. Kang, J. Kim, J.M. Kim, Reliable fault diagnosis for incipient low-speed bearings using fault feature analysis based on a binary bat algorithm, Information Sciences. 294 (2015) 423-438.
[17] B. Crawford, R. Soto, C. Peña, Solving the set covering problem with a shuffled frog leaping algorithm, in: N. Nguyen, B. Trawiński, R. Kosala (Eds.), Intelligent Information and Database Systems, ACIIDS 2015, Lecture Notes in Computer Science, vol. 9012, Springer, Cham, 2015, pp. 41-50.
[18] M. Balvasi, M. Akhlaghi, H. Shahmirzaee, Binary TLBO algorithm assisted to investigate the supper scattering plasmonic nano tubes, Superlattices and Microstructures. 89 (2016) 26-33.
[19] Y. Zhou, X. Chen, G. Zhou, An improved monkey algorithm for a 0-1 knapsack problem, Applied Soft Computing. 38 (2016) 817-830.
[20] E. Emary, H.M. Zawbaa, A.E. Hassanien, Binary grey wolf optimization approaches for feature selection, Neurocomputing. 172 (2016) 371-381.
[21] T. Liao, K. Socha, M. Montes de Oca, Ant colony optimization for mixed-variable optimization problems, IEEE Transactions on Evolutionary Computation. 18 (4) (2014) 503-518.
[22] L. Le-Anh, T. Nguyen-Thoi, V. Ho-Huu, Static and frequency optimization of folded laminated composite plates using an adjusted Differential Evolution algorithm and a smoothed triangular plate element, Composite Structures. 127 (2015) 382-394.
[23] A. Trivedi, D. Srinivasan, S. Biswas, A genetic algorithm-differential evolution based hybrid framework: Case study on unit commitment scheduling problem, Information Sciences. 354 (2016) 275-300.
[24] L. Cui, J. Deng, L. Wang, A novel locust swarm algorithm for the joint replenishment problem considering multiple discounts simultaneously, Knowledge-Based Systems. 111 (2016) 51-62.
[25] X. Lei, Y. Ding, H. Fujita, Identification of dynamic protein complexes based on fruit fly optimization algorithm, Knowledge-Based Systems. 105 (2016) 270-277.
[26] L. Cui, L. Wang, J. Deng, Intelligent algorithms for a new joint replenishment and synthetical delivery problem in a warehouse centralized supply chain, Knowledge-Based Systems. 90 (2015) 185-198.
[27] Y. Shi, C.M. Pun, H. Hu, An improved artificial bee colony and its application, Knowledge-Based Systems. 107 (2016) 14-31.
[28] L. Wang, Z. Wang, S. Liu, An effective multivariate time series classification approach using echo state network and adaptive differential evolution algorithm, Expert Systems with Applications. 43 (2016) 237-249.
[29] L. Cui, L. Wang, J. Deng, A new improved quantum evolution algorithm with local search procedure for capacitated vehicle routing problem, Mathematical Problems in Engineering. 2013 (2013), Article ID 159495, 17 pages.
[30] D. Datta, J.R. Figueira, A real-integer-discrete-coded differential evolution, Applied Soft Computing. 13 (9) (2013) 3884-3893.
[31] M. Magni, C. Paolino, R. Cappetta, Diving too deep: How cognitive absorption and group learning behavior affect individual learning, Academy of Management Learning & Education. 12 (1) (2013) 51-69.
[32] E.M. Dar-El, Human Learning: From Learning Curves to Learning Organizations, Springer Science & Business Media, 2013.
[33] D. Narain, J.B.J. Smeets, P. Mamassian, Structure learning and the Occam's razor principle: a new view of human function acquisition, Frontiers in Computational Neuroscience. 8 (2014) 121.
[34] L. Cui, J. Deng, F. Liu, Investigation of investment in a single retailer two-supplier supply chain with random demand to decrease inventory inaccuracy, Journal of Cleaner Production. 142 (2017) 2018-2044.
[35] T.W. Liao, Two hybrid differential evolution algorithms for engineering design optimization, Applied Soft Computing. 10 (4) (2010) 1188-1199.
[36] H. Yi, Q. Duan, T.W. Liao, Three improved hybrid metaheuristic algorithms for engineering design optimization, Applied Soft Computing. 13 (5) (2013) 2433-2444.
[37] R.P. Parouha, K.N. Das, A memory based differential evolution algorithm for unconstrained optimization, Applied Soft Computing. 38 (2016) 501-517.
[38] Y. Li, Z.H. Zhan, S. Lin, Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems, Information Sciences. 293 (2015) 370-382.
[39] Z. Beheshti, S.M. Shamsuddin, S. Hasan, Memetic binary particle swarm optimization for discrete optimization problems, Information Sciences. 299 (2015) 58-84.
[40] Y. Chen, W. Xie, X. Zou, A binary differential evolution algorithm learning from explored solutions, Neurocomputing. 149 (2015) 1038-1047.
[41] S. Mirjalili, SCA: A Sine Cosine Algorithm for solving optimization problems, Knowledge-Based Systems. 96 (2016) 120-133.
[42] S. Mirjalili, The ant lion optimizer, Advances in Engineering Software. 83 (2015) 80-98.
[43] S. Mirjalili, Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm, Knowledge-Based Systems. 89 (2015) 228-249.
[44] S. Mirjalili, S.M. Mirjalili, A. Lewis, Grey wolf optimizer, Advances in Engineering Software. 69 (2014) 46-61.
[45] S. Mirjalili, S.M. Mirjalili, X.S. Yang, Binary bat algorithm, Neural Computing and Applications. 25 (3-4) (2014) 663-681.
[46] S. Mirjalili, A. Lewis, The whale optimization algorithm, Advances in Engineering Software. 95 (2016) 51-67.


Appendix

Problem 1.
Minimize F = 7.5 y_1 + 6.4 x_1 + 5.5 y_2 + 6.0 x_2
s.t. 0.8 x_1 + 0.67 x_2 \ge 10
     x_1 - 20 y_1 \le 0
     x_2 - 20 y_2 \le 0
where 0 \le x_1, x_2 \le 20 and y_1, y_2 \in \{0, 1\}. The global optimum F* is 87.5 at x* = [12.5006, 0] and y* = [1, 0].

Problem 2.
Minimize F = 2 x_1 + 3 x_2 + 1.5 y_1 + 2 y_2 - 0.5 y_3
s.t. x_1^2 + y_1 = 1.25
     x_2^{1.5} + 1.5 y_2 = 3
     x_1 + y_1 \le 1.6
     1.333 x_2 + y_2 \le 3
     -y_1 - y_2 + y_3 \le 0
where 0 \le x_1, x_2 \le 2 and y_1, y_2, y_3 \in \{0, 1\}. The global optimum F* is 7.667 at x* = [1.118, 1.310] and y* = [1, 0, 1].

Problem 3.
Minimize F = (y_1 - 1)^2 + (y_2 - 2)^2 + (y_3 - 1)^2 - \ln(y_4 + 1) + (x_1 - 1)^2 + (x_2 - 2)^2 + (x_3 - 3)^2
s.t. y_1 + y_2 + y_3 + x_1 + x_2 + x_3 \le 5
     y_3^2 + x_1^2 + x_2^2 + x_3^2 \le 5.5
     y_1 + x_1 \le 1.2
     y_2 + x_2 \le 1.8
     y_3 + x_3 \le 2.5
     y_4 + x_1 \le 1.2
     y_2^2 + x_2^2 \le 1.64
     y_3^2 + x_3^2 \le 4.25
     y_2^2 + x_3^2 \le 4.64
where 0 \le x_1 \le 1.2, 0 \le x_2 \le 1.281, 0 \le x_3 \le 2.062, and y_1, y_2, y_3, y_4 \in \{0, 1\}. The global optimum F* is 4.5796 at x* = [0.2, 0.8, 1.908] and y* = [1, 0, 1, 1].

Problem 4.
Minimize F = 2x + y
s.t. 1.25 - x^2 - y \le 0
     x + y \le 1.6
where 0 \le x \le 1.6 and y \in \{0, 1\}. The global optimum F* is 2 at x* = 0.5 and y* = 1.

Problem 5.
Minimize F = -y + 2 x_1 - \ln(x_1 / 2)
s.t. -x_1 - \ln(x_1 / 2) + y \le 0
where 0.5 \le x_1 \le 1.4 and y \in \{0, 1\}. The global optimum F* is 2.1247 at x_1* = 0.5 and y* = 1.

Problem 6.
Minimize F = -0.7 y + 5 (x_1 - 0.5)^2 + 0.8
s.t. -\exp(x_1 - 0.2) - x_2 \le 0
     x_2 + 1.1 y \le -1.0
     x_1 - 1.2 y \le 0.2
where 0.2 \le x_1 \le 1, -2.22554 \le x_2 \le -1, and y \in \{0, 1\}. The global optimum F* is 1.076543 at x* = [0.94194, -2.1] and y* = 1.

Problem 7.
Minimize F = 7.5 y_1 + 5.5 (1 - y_1) + 7 v_1 + 6 v_2 + \frac{50 y_1}{0.9 [1 - \exp(-0.5 v_1)]} + \frac{50 (1 - y_1)}{0.8 [1 - \exp(-0.4 v_2)]}
s.t. 0.9 [1 - \exp(-0.5 v_1)] - 2 y_1 \le 0
     0.8 [1 - \exp(-0.4 v_2)] - 2 (1 - y_1) \le 0
     v_1 \le 10 y_1
     v_2 \le 10 (1 - y_1)
where 0 \le v_1, v_2 \le 10 and y_1 \in \{0, 1\}. The global optimum F* is 99.245209 at v* = [3.514237, 0] and y_1* = 1.

Problem 8.
Minimize F = (y_1 - 1)^2 + (y_2 - 2)^2 + (y_3 - 1)^2 - \ln(y_4 + 1) + (x_1 - 1)^2 + (x_2 - 2)^2 + (x_3 - 3)^2
s.t. y_1 + y_2 + y_3 + x_1 + x_2 + x_3 \le 5
     y_3^2 + x_1^2 + x_2^2 + x_3^2 \le 5.5
     y_1 + x_1 \le 1.2
     y_2 + x_2 \le 1.8
     y_3 + x_3 \le 2.5
     y_4 + x_1 \le 1.2
     y_2^2 + x_2^2 \le 1.64
     y_3^2 + x_3^2 \le 4.25
     y_2^2 + x_3^2 \le 4.64
where 0 \le x_1 \le 1.2, 0 \le x_2 \le 1.8, 0 \le x_3 \le 2.5, and y_1, y_2, y_3, y_4 \in \{0, 1\}. The global optimum F* is 3.557473 at x* = [0.2, 1.28062, 1.95448] and y* = [1, 0, 1, 1].

Problem 9.
Minimize F = 5.357854 x_1^2 + 0.835689 y_1 x_3 + 37.29329 y_1 - 40792.141
s.t. 85.334407 + 0.0056858 y_2 x_3 + 0.0006262 y_1 x_2 - 0.0022053 x_1 x_3 \le 92
     80.51249 + 0.0071317 y_2 x_3 + 0.0029955 y_1 y_2 + 0.0021813 x_1^2 - 90 \le 20
     9.300961 + 0.0047026 x_1 x_3 + 0.0012547 y_1 x_1 + 0.0019085 x_1 x_2 - 20 \le 5
where 27 \le x_1, x_2, x_3 \le 45, y_1 \in \{78, ..., 102\} integer, and y_2 \in \{33, ..., 45\} integer. The global optimum F* is -32217.4 at x* = [27, any, 27] and y* = [78, any].

Problem 10.
Minimize F = -\prod_{j=1}^{10} [1 - (1 - p_j)^{m_j}]
s.t. \sum_{j=1}^{10} [a_{ij} m_j^2 + c_{ij} m_j] \le b_i, \quad i = 1, 2, 3, 4
[p_j] = (0.81, 0.93, 0.92, 0.96, 0.99, 0.89, 0.85, 0.83, 0.94, 0.92)
[a_{ij}] = \begin{bmatrix} 2 & 7 & 3 & 0 & 5 & 6 & 9 & 4 & 8 & 1 \\ 4 & 9 & 2 & 7 & 1 & 0 & 8 & 3 & 5 & 6 \\ 5 & 1 & 7 & 4 & 3 & 6 & 0 & 9 & 8 & 2 \\ 8 & 3 & 5 & 6 & 9 & 7 & 2 & 4 & 0 & 1 \end{bmatrix}
[c_{ij}] = \begin{bmatrix} 7 & 1 & 4 & 6 & 8 & 2 & 5 & 9 & 3 & 3 \\ 4 & 6 & 5 & 7 & 2 & 6 & 9 & 1 & 0 & 8 \\ 1 & 10 & 3 & 5 & 4 & 7 & 8 & 9 & 4 & 6 \\ 2 & 3 & 2 & 5 & 7 & 8 & 6 & 10 & 9 & 1 \end{bmatrix}
[b_i] = (2.0 \times 10^{13}, 3.1 \times 10^{12}, 5.7 \times 10^{13}, 9.3 \times 10^{12})
where m_j \in [1, 6] integer, j = 1, ..., 10. The global optimum F* is -0.808844 at m* = [2 2 2 1 1 2 3 2 1 2].

Problem 11.
Minimize F = -\prod_{j=1}^{4} R_j
s.t. \sum_{j=1}^{4} d_{1j} m_j^2 \le 100
     \sum_{j=1}^{4} d_{2j} (m_j + \exp(m_j / 4)) \le 150
     \sum_{j=1}^{4} d_{3j} m_j \exp(m_j / 4) \le 160
where
R_1 = 1 - q_1 ((1 - \beta_1) q_1 + \beta_1)^{m_1 - 1}
R_2 = 1 - \frac{\beta_2 q_2 + p_2 q_2^{m_2} (1 - \beta_2)^{m_2}}{p_2 + \beta_2 q_2}
R_3 = 1 - q_3^{m_3}
R_4 = 1 - q_4 ((1 - \beta_4) q_4 + \beta_4)^{m_4 - 1}
[p_j] = (0.93, 0.92, 0.94, 0.91)
[q_j] = (0.07, 0.08, 0.06, 0.09)
[\beta_j] = (0.2, 0.06, 0.0, 0.3)
[d_{ij}] = \begin{bmatrix} 1 & 2 & 3 & 4 \\ 7 & 7 & 5 & 7 \\ 7 & 8 & 8 & 6 \end{bmatrix}
m_j \in [1, 6] integer for j = 1, 2, 4, and m_3 \in [1, 5] integer. The global optimum F* is -0.974565 at m* = [3 3 2 3].

Problem 12.
Minimize F = -\prod_{j=1}^{4} [1 - (1 - p_j)^{m_j}]
s.t. \sum_{j=1}^{4} v_j m_j^2 \le 250
     \sum_{j=1}^{4} \alpha_j (-1000 / \ln(p_j))^{\beta_j} (m_j + \exp(m_j / 4)) \le 400
     \sum_{j=1}^{4} w_j m_j \exp(m_j / 4) \le 500
where
[v_j] = (1, 2, 3, 2)
[w_j] = (6, 6, 8, 7)
[\alpha_j] = (1.0, 2.3, 0.3, 2.3) \times 10^{-5}
[\beta_j] = (1.5, 1.5, 1.5, 1.5)
m_j \in [1, 10] integer and p_j \in [0.5, 1 - 10^{-6}], j = 1, 2, 3, 4. The global optimum F* is -0.999486 at m* = [3 6 3 5] and p* = [0.960592, 0.760592, 0.972646, 0.804660].

Problem 13.
Minimize F = 0.6224 x_1 x_2 x_3 + 1.7781 x_1^2 x_4 + 3.1661 x_2 x_3^2 + 19.84 x_1 x_3^2
s.t. 0.0193 x_1 / x_3 - 1 \le 0
     0.00954 x_1 / x_4 - 1 \le 0
     x_2 / 240 - 1 \le 0
     \frac{1296000 - (4/3) \pi x_1^3}{\pi x_1^2 x_2} - 1 \le 0
where 25 \le x_1 \le 150, 25 \le x_2 \le 240, and x_3, x_4 \in \{0.0625, 0.125, ..., 1.1875, 1.25\}. The global optimum F* is 5850.770 at x* = [38.858, 221.402, 0.750, 0.375].

Problem 14.
Minimize F = -f \cdot d_c
s.t. 0.145 \, d_c^{0.1939} f^{0.7071} M^{-0.2343} \le R_a(\max)
     29.67 \, d_c^{0.4167} f^{0.8333} \le N_c(\max)
where 5 \le d_c \le 30, 8.6 \le f \le 13.4, and M \in \{120, 140, 170, 200, 230, 270, 325, 400, 500\}. The global optimum F* is -75.1341 at d_c* = 5.6070 and f* = 13.4 when R_a(\max) = 0.3 and N_c(\max) = 7.