PII: S0950-7051(17)30189-2
DOI: 10.1016/j.knosys.2017.04.015
Reference: KNOSYS 3895
Please cite this article as: Ling Wang , Ji Pei , Muhammad Ilyas Menhas , Jiaxing Pi , Minrui Fei ,
Panos M. Pardalos , A Hybrid-coded Human Learning Optimization for Mixed-Variable Optimization
Problems, Knowledge-Based Systems (2017), doi: 10.1016/j.knosys.2017.04.015
This is a PDF file of an unedited manuscript that has been accepted for publication. As a service
to our customers we are providing this early version of the manuscript. The manuscript will undergo
copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please
note that during the production process errors may be discovered which could affect the content, and
all legal disclaimers that apply to the journal pertain.
ACCEPTED MANUSCRIPT
Highlights
This paper proposes a new hybrid-coded HLO (HcHLO) framework to tackle
mix-coded problems more efficiently and effectively.
A new continuous human learning optimization algorithm is presented based on
the linear learning mechanism of humans.
The results show that the HcHLO achieves the best-known overall performance
so far on the tested mix-coded problems.
2 Center for Applied Optimization, Department of Industrial and Systems Engineering, University of Florida,
Abstract. Human Learning Optimization (HLO) is an emerging meta-heuristic with promising potential, which is inspired by human learning mechanisms. Although binary algorithms like HLO can be directly applied to mixed-variable problems that contain both continuous values and discrete or Boolean values, the search efficiency and performance of those algorithms may be significantly degraded by "the curse of dimensionality" caused by the binary coding strategy, especially when the continuous parameters of problems require high accuracy. Therefore, this paper extends HLO and proposes a novel hybrid-coded HLO (HcHLO) framework to tackle mix-coded problems more efficiently and effectively, in which real-coded parameters are optimized by a new continuous HLO (CHLO) based on the linear learning mechanism of humans while the other variables are handled by the binary learning operators of HLO. Finally, HcHLO is adopted to solve 14 benchmark problems and its performance is compared with that of recent meta-heuristic algorithms. The experimental results show that the proposed HcHLO achieves the best-known overall performance so far on the test problems.
1. Introduction
Generally, many human learning activities are similar to the search process of meta-heuristics. For
instance, when a person learns how to play Sudoku, he or she repeatedly studies and practices to master
new skills, and evaluates his or her performance for guiding the following study to play better.
Similarly, meta-heuristics iteratively generate new solutions and calculate the corresponding fitness
values for adjusting the following search to find a better solution. Inspired by this fact, Wang et al. [1]
presented a novel Human Learning Optimization (HLO) algorithm based on a simplified human
learning model in which three learning operators, i.e. the random learning operator, the individual
learning operator, and the social learning operator, are developed to search for the optimal solution.
Owing to their strong learning ability and high level of consciousness in studying, human beings are capable of solving a large number of complicated problems that other living beings, such as birds, ants, and bees, can hardly address. Therefore, it is logical to presume that HLO, which is developed based on the learning mechanisms of human beings, may gain an advantage over other nature-inspired meta-heuristics on optimization problems in our daily life [2]. Previous works [1-4] show that the HLO algorithms outperform recently proposed meta-heuristic variants, such as Differential Evolution (DE), Particle Swarm Optimization (PSO), Harmony Search (HS), and Fruit Fly Optimization, on numerical functions, deceptive functions, and knapsack problems. Most notably, HLO has achieved the best results on two well-studied sets of multi-dimensional knapsack problems, i.e. 5.100 and 10.100. Compared with real-coded meta-heuristics, binary meta-heuristics are more flexible, as they can solve binary problems, which real-coded algorithms cannot handle directly, as well as continuous optimization problems. Besides, binary meta-heuristics may have advantages on some continuous problems like the controller design problems [5, 6], as the binary-coding strategy discretizes the original infinite search space into a finite set of candidate solutions, and consequently the search efficiency is significantly improved, especially when the parameters of problems do not need a high degree of accuracy. Due to these benefits, many researchers have devoted themselves to the research of binary meta-heuristics, and various binary variants of well-known meta-heuristics, like the binary Particle Swarm Optimization [7, 8], the binary Differential Evolution [9, 10], the binary Ant Colony Optimization (ACO) [11], and the binary Harmony Search [6, 12], were developed and successfully applied to diverse optimization problems, such as the design of wind farm configurations, satellite broadcast scheduling, point pattern matching, epileptic seizure detection and prediction, and rainfall-runoff modeling. Furthermore, more and more new binary meta-heuristics, like the binary artificial bee colony [13], the binary gravitational search algorithm [14], the binary fish swarm algorithm [15], the binary bat algorithm [16], the binary shuffled frog leaping algorithm [17], the binary teaching-learning-based optimization algorithm [18], the binary monkey algorithm [19], and the binary grey wolf optimization [20], have been proposed to better solve various optimization problems.
However, the "curse of dimensionality" may arise when binary meta-heuristics are used to optimize high-dimensional continuous problems, which would significantly spoil the efficiency and performance of binary algorithms because of the exponential growth of the solution space. In this case, using real-coded meta-heuristics instead of binary-coded ones would be a better choice. Therefore, although binary meta-heuristics can be easily used to solve hybrid-coded problems, in which real variables, discrete variables, and/or binary variables are included, recent works focus on studying and developing powerful real-coded algorithms [21-23] to tackle them, as well as discrete or binary problems [24-29], for gaining better results.
As mentioned above, real-coded methods cannot directly optimize discrete and binary problems, and hence discrete or binary variables in hybrid-coded problems need to be encoded and operated as real ones and then mapped back to discrete or binary variables to calculate fitness values [30]. The advantage of using real-coded meta-heuristics is that the ability to optimize real variables is significantly enhanced, which usually brings better results. However, adopting real-coded algorithms to optimize binary and discrete variables is not good enough, since there is a fair possibility of losing vital information when mapping a discrete or binary variable into a continuous one and then mapping it back to the original discrete or binary space [30]; this can cause severe performance loss, as binary and discrete variables are as important as real variables in hybrid-coded problems. Therefore, this paper presents a novel hybrid-coded Human Learning Optimization (HcHLO) framework to solve hybrid-coded problems more efficiently and effectively, in which a continuous Human Learning Optimization (CHLO) based on the linear learning mechanism of humans is proposed to tackle real-coded parameters while the other types of variables are handled by HLO. To the best of our knowledge, this is the first work to present a continuous HLO as well as to apply HLO to hybrid-coded problems.

The rest of the paper is organized as follows. Section 2 briefly introduces the standard HLO algorithm. The proposed HcHLO method is presented in Section 3, in which the implementation of HcHLO, as well as the continuous HLO, is described in detail. Then the proposed HcHLO is applied to tackle hybrid-coded problems collected from different engineering fields, and the results are discussed and compared with recent works in Section 4. Finally, Section 5 concludes this paper.
2. A brief introduction to standard HLO

As a binary meta-heuristic, the standard HLO adopts the binary-coding framework. An individual in the population is represented as a binary string as Eq. (1),

x_i = (x_{i1}, x_{i2}, ..., x_{ij}, ..., x_{iM}), x_{ij} ∈ {0, 1}, 1 ≤ i ≤ N, 1 ≤ j ≤ M  (1)

where x_{ij} is the j-th bit of the i-th individual, and N and M denote the number of individuals in the population and the length of solutions, respectively. Considering that initially there is no prior knowledge of problems, each bit of each individual of HLO is initialized with "0" or "1" stochastically. After initialization, HLO uses three operators, i.e. the random learning operator, the individual learning operator, and the social learning operator, to generate new candidates and search for the optimal solution.
2.1 Learning operators
Randomness always exists in human learning, as usually there is no or only partial prior knowledge of new problems. Besides, humans need to keep exploring new strategies to learn better, in which random learning is unavoidable [3]. Therefore, HLO uses the random learning operator as Eq. (2) to perform random learning,

x_{ij} = { 0, 0 ≤ r < 0.5; 1, otherwise }  (2)

where r is a random number between 0 and 1.
Individual learning is the ability of humans to build knowledge through individual reflection on external stimuli and sources [31]. It is very important for humans to learn from their own previous experience and knowledge, so that they can improve the efficiency and effectiveness of learning and avoid mistakes. To emulate this learning ability, HLO defines the individual knowledge database (IKD) as Eqs. (3) and (4) to store the personal best solutions,

IKD = [ikd_1, ikd_2, ..., ikd_i, ..., ikd_N]^T, 1 ≤ i ≤ N  (3)

ikd_i = [ik_{i1}, ik_{i2}, ..., ik_{ip}, ..., ik_{iT}]^T, 1 ≤ p ≤ T  (4)
where ikd_i is the individual knowledge database of individual i, T denotes the size of the IKDs, and ik_{ipj} represents the j-th bit of the p-th best solution of person i. When HLO performs individual learning, new candidate solutions are yielded as Eq. (5) by the individual learning operator,

x_{ij} = ik_{ipj}  (5)

where p is a random integer between 1 and T.
Although humans may learn and solve problems by themselves, the learning efficiency would be very low for hard problems, while in a social environment the learning efficiency can be significantly improved by sharing knowledge among individuals. To achieve better performance, HLO designs the social learning operator to mimic the social learning behavior of humans. Like the individual knowledge database, the social knowledge database (SKD) is defined in HLO as Eq. (6) to store the knowledge of the population,

SKD = [skd_1, skd_2, ..., skd_q, ..., skd_H]^T, skd_q = (sk_{q1}, sk_{q2}, ..., sk_{qj}, ..., sk_{qM})  (6)

where H is the size of the SKD, and skd_q is the q-th social knowledge in the SKD. When HLO runs the social learning, new candidate solutions are generated as Eq. (7) based on the knowledge in the SKD,

x_{ij} = sk_{qj}  (7)

where q is a random integer between 1 and H.
After all the candidate solutions are produced, the fitness of each new solution is calculated. If the fitness of a new individual is better than that of the worst one in its IKD, or the current number of solutions in the IKD is less than the pre-defined value, the corresponding IKD is updated and the new candidate is stored. For the SKD, the same updating mechanism is adopted. However, to better maintain diversity and avoid premature convergence of HLO, the SKD updates no more than one solution at each iteration.
HLO iteratively executes the learning operators to generate fresh solutions and updates the IKD as
well as the SKD until termination criteria are satisfied. The procedure of HLO is described in Fig. 1.
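To make the three binary learning operators concrete, the following sketch generates one new candidate bit by bit. The function and parameter names are our own illustrative choices, not code from the paper, and the IKD and SKD sizes are fixed to 1 for brevity.

```python
import random

def hlo_new_bit(pr, pi, ikd_bit, skd_bit):
    """One bit of a new candidate via HLO's three learning operators."""
    r = random.random()
    if r < pr:                         # random learning, Eq. (2)
        return 0 if random.random() < 0.5 else 1
    elif r < pi:                       # individual learning, Eq. (5)
        return ikd_bit
    else:                              # social learning, Eq. (7)
        return skd_bit

# yield a whole M-bit candidate from the stored knowledge
M = 8
ikd = [random.randint(0, 1) for _ in range(M)]   # individual best
skd = [random.randint(0, 1) for _ in range(M)]   # social best
candidate = [hlo_new_bit(0.1, 0.5, ikd[j], skd[j]) for j in range(M)]
```

With pr = 0.1 and pi = 0.5, roughly 10% of bits are random, 40% copy the individual best, and the rest copy the social best, matching the selection scheme of Eq. (21).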
[Fig. 1. The flowchart of HLO: after initializing the population and calculating fitness to build the IKD and the SKD, the three learning operators iteratively yield new candidate solutions, whose fitness is calculated to update the IKD and the SKD, until the termination criterion is met and the results are output.]
3. The proposed HcHLO method

Although HLO possesses an excellent global search ability and can be used to solve real-coded problems, it is clearly foreseeable that the efficiency of HLO will be greatly reduced for high-dimensional continuous problems, especially with a high demand for accuracy, because of "the curse of dimensionality". Therefore, to improve the performance of HLO on hybrid-coded problems, this paper develops a continuous Human Learning Optimization (CHLO) algorithm. Different from HLO, CHLO adopts the real-coding framework, that is, each variable of an individual is encoded as a real value and initialized between the lower bound and the upper bound of the problem as Eq. (8):

x_{ij} = x_{min,j} + r (x_{max,j} - x_{min,j})  (8)

where x_{min,j} and x_{max,j} are the lower bound and upper bound of variable j, and r is a random number between 0 and 1.
Obviously, the learning operators of HLO cannot work in continuous space, and thus new learning operators need to be designed for CHLO. Human learning is diverse and complex. Previous research shows that learning curves, which are a graphical representation of the increase of learning (vertical axis) with experience (horizontal axis), differ across tasks and can be described by the linear function model, the power function model, and many other models [32]. However, it has long been reported in the literature that there is a primacy of linear functions in human function learning. Recent work [33] demonstrates that both aspects of the behavior, extent and rate of selection, present evidence that human function learning obeys Occam's razor, which may explain the previous findings on the primacy of linear functions over non-linear functions, since linear models have low parametric complexity. Hence, for simplicity of implementation and reduction of computation, the linear function model is adopted to design the learning operators of CHLO.
The random learning operator in HLO is primarily used to keep the diversity of the population and partially to search with various new attempts to find the optimal solution. As mentioned above, because of the lack of prior knowledge of problems, the linear random learning operator of CHLO is set to a random re-initialization within the search range as Eq. (9),

x_{ij} = x_{min,j} + r_1 (x_{max,j} - x_{min,j})  (9)

where r_1 is a random number between 0 and 1.
The individual learning operator is used to simulate the phenomenon that humans learn from their previous experience. Based on this idea, the linear individual learning operator of CHLO is designed as Eq. (10),

x_{ij} = ik_{ipj} + IL · r_2 · (sk_{qj} - ik_{ipj})  (10)

where IL is the linear individual learning factor, r_2 is a random number between -1 and 1, ik_{ipj} is the j-th variable of the p-th solution in the IKD of individual i, and sk_{qj} is the corresponding j-th variable of the q-th solution in the SKD. By performing the linear individual learning operator, CHLO searches based on its previous knowledge ik_{ipj}, i.e. the first item on the right of Eq. (10), with a linear learning mechanism, i.e. the second item on the right of Eq. (10). The search range is dynamically adjusted according to the value of (sk_{qj} - ik_{ipj}), which is the potential range where better solutions may exist, and consequently the search efficiency is improved.
The social learning operator is an effective way for HLO to absorb useful knowledge from the collective experience of the population. Similar to the linear individual learning operator, a linear social learning operator is designed for CHLO as Eq. (11),

x_{ij} = sk_{qj} + SL · r_3 · (sk_{qj} - ik_{ipj})  (11)

where SL is the linear social learning factor, and r_3 is a random number between 0 and 1. The second item on the right of Eq. (11) is the linear learning strategy for the social learning operator of CHLO, while the first item on the right of Eq. (11), different from that of the individual learning operator, is the social knowledge sk_{qj}. A large IL or SL may cause premature convergence and spoil the accurate search ability of the algorithm, while a small IL or SL can help the algorithm explore the solution space better but may reduce the efficiency of the search. Therefore, IL and SL should be set to balance the exploration and exploitation abilities of CHLO.
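The three linear operators of Eqs. (9)-(11) can be sketched as standalone functions; the names and default factor values below are illustrative assumptions of ours, not the authors' code.

```python
import random

def linear_random(xmin, xmax):
    # linear random learning, Eq. (9): re-sample within the bounds
    return xmin + random.random() * (xmax - xmin)

def linear_individual(ik, sk, IL=1.0):
    # linear individual learning, Eq. (10): step from the personal best
    # ik, scaled by the gap to the social best sk, with r2 in [-1, 1]
    r2 = random.uniform(-1.0, 1.0)
    return ik + IL * r2 * (sk - ik)

def linear_social(ik, sk, SL=1.0):
    # linear social learning, Eq. (11): step from the social best sk,
    # again scaled by the gap (sk - ik), with r3 in [0, 1]
    r3 = random.random()
    return sk + SL * r3 * (sk - ik)
```

Note that when the personal best and the social best coincide (ik = sk), both linear operators return that common value, so the random learning operator is what keeps the search from stalling.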
Hybrid-coded Human Learning Optimization adopts the binary-real mixed coding method, in which continuous parameters are directly represented as real-coded variables while the Boolean or discrete parameters are coded as binary strings. Hence, the population of HcHLO can be described as Eq. (12),

X = [x_1, x_2, ..., x_i, ..., x_N]^T, x_i = (R_{i1}, R_{i2}, ..., R_{ij}, ..., R_{iM_r} | B_{i1}, B_{i2}, ..., B_{ij}, ..., B_{iM_b})  (12)
         \_____________ Array(R) _____________/  \_____________ Array(B) _____________/

where Array(R) stores all the real-coded variables of solutions and Array(B) reserves the binary vectors of individuals, which denote all the binary and/or discrete variables of solutions. N is the size of the population, and M_r and M_b represent the lengths of the real-coded variables and the binary vectors, respectively. The whole dimension of solutions is M, which equals (M_r + M_b). Initially, the elements of each individual in Array(R) and in Array(B) are randomly initialized as Eq. (8) and Eq. (2), respectively.
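The mixed coding of Eq. (12) maps naturally onto two arrays per individual. The class below is an illustrative sketch (the names are ours) that initializes Array(R) within the variable bounds as in Eq. (8) and Array(B) stochastically as in Eq. (2).

```python
import random

class MixedIndividual:
    """Binary-real mixed coding of Eq. (12)."""

    def __init__(self, bounds, Mb):
        # Array(R): Mr real variables drawn within their bounds, Eq. (8)
        self.R = [lo + random.random() * (hi - lo) for lo, hi in bounds]
        # Array(B): Mb bits initialized stochastically with "0"/"1", Eq. (2)
        self.B = [random.randint(0, 1) for _ in range(Mb)]

# a population of N = 10 individuals with Mr = 2 and Mb = 4
pop = [MixedIndividual([(0.0, 1.0), (-5.0, 5.0)], Mb=4) for _ in range(10)]
```

Keeping the two arrays separate lets the continuous and binary learning operators act on their own part of each solution without any encoding or decoding step.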
The random learning operation of HcHLO is composed of the linear random learning operator and the standard random learning operator, that is, the real-coded variables in Array(R) are operated by Eq. (13), i.e. the linear random learning strategy, while the binary elements in Array(B) are generated by Eq. (14), i.e. the standard random learning strategy of HLO,

R_{ij} = x_{min,j} + r_4 (x_{max,j} - x_{min,j})  (13)

B_{ij} = { 0, 0 ≤ r_5 < 0.5; 1, otherwise }  (14)

where r_4 and r_5 are random numbers between 0 and 1.
To perform the individual learning, the personal best solutions need to be saved in the individual knowledge database of HcHLO, which is represented as Eq. (15),

ikd_i = [ikd_{i1}, ..., ikd_{ip}, ..., ikd_{iT}]^T, ikd_{ip} = (ik^R_{ip1}, ik^R_{ip2}, ..., ik^R_{ipM_r} | ik^B_{ip1}, ik^B_{ip2}, ..., ik^B_{ipM_b})  (15)

where ik^R_{ipj} is the j-th real-coded knowledge of the p-th best solution of individual i, ik^B_{ipj} denotes the j-th binary knowledge of the p-th best solution gained by person i, and T is the size of the IKDs.
When HcHLO conducts individual learning, the linear individual learning operator is used to handle the real-coded knowledge in Array(ik^R) as Eq. (16), while the standard individual learning operator is applied to the binary knowledge as Eq. (17),

R_{ij} = ik^R_{ipj} + IL · r_2 · (sk^R_{qj} - ik^R_{ipj})  (16)

B_{ij} = ik^B_{ipj}  (17)

where r_2 is a random number between -1 and 1 as in Eq. (10).
Similarly, the best solutions of the population are reserved in the social knowledge database of HcHLO as Eq. (18),

skd_q = (sk^R_{q1}, sk^R_{q2}, ..., sk^R_{qM_r} | sk^B_{q1}, sk^B_{q2}, ..., sk^B_{qM_b}), 1 ≤ q ≤ H  (18)

where sk^R_{qj} and sk^B_{qj} denote the j-th real-coded knowledge and the j-th binary knowledge of the q-th best solution in the SKD, respectively, and H is the size of the SKD.

When HcHLO performs the social learning operation, the linear social learning operator is chosen to tackle the continuous variables in Array(sk^R) as Eq. (19), while the binary knowledge in Array(sk^B) is handled by the standard social learning operator of HLO as Eq. (20),

R_{ij} = sk^R_{qj} + SL · r_3 · (sk^R_{qj} - ik^R_{ipj})  (19)

B_{ij} = sk^B_{qj}  (20)

where r_3 is a random number between 0 and 1 as in Eq. (11).
After the new population is yielded by the learning operations of HcHLO, the fitness of each candidate solution is computed according to the objective function. A new candidate is directly stored in the IKD regardless of its fitness if the number of reserved solutions in the IKD is less than the pre-defined size T; otherwise, it replaces the worst one in the IKD only if it has a better fitness. The SKD of HcHLO is updated in the same way. However, for the same reason, i.e. maintaining diversity and avoiding premature convergence, HcHLO replaces at most one solution in the SKD at each iteration.
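The updating rule described above can be sketched as a small helper; this is our own illustrative code, assuming a minimization problem, not the authors' implementation.

```python
def update_kd(kd, candidate, fitness, size, minimize=True):
    """Update a knowledge database (an IKD or the SKD).

    kd is a list of (solution, fitness) pairs.  A candidate is stored
    unconditionally while the database holds fewer than `size` entries;
    otherwise it replaces the worst entry only if it is strictly better.
    Returns True when the database was changed.
    """
    if len(kd) < size:
        kd.append((candidate, fitness))
        return True
    # locate the worst stored entry
    if minimize:
        worst = max(range(len(kd)), key=lambda k: kd[k][1])
        better = fitness < kd[worst][1]
    else:
        worst = min(range(len(kd)), key=lambda k: kd[k][1])
        better = fitness > kd[worst][1]
    if better:
        kd[worst] = (candidate, fitness)
    return better
```

The one-replacement-per-iteration restriction on the SKD would then be enforced by the caller, which stops offering candidates to the SKD after the first successful update in an iteration.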
In summary, HcHLO generates new solutions by performing the random learning operation (RLO), the individual learning operation (ILO), and the social learning operation (SLO) as Eq. (21),

x_{ij} = { RLO, 0 ≤ r_7 < p_r; ILO, p_r ≤ r_7 < p_i; SLO, p_i ≤ r_7 ≤ 1 }  (21)

where p_r and p_i are two control parameters of HcHLO that determine the rates of conducting the three learning operations, and r_7 is a random number between 0 and 1. Specifically, p_r is the probability of executing the random learning, while (p_i - p_r) and (1 - p_i) are the probabilities of performing the individual learning and the social learning, respectively.
HcHLO performs the learning operators and updates the IKD and the SKD iteratively until the termination criteria are met. The pseudo-code of HcHLO is shown in Algorithm 1 as follows:

1: Initialize the population as Eqs. (8) and (2).
2: Calculate the fitness of each individual.
3: Initialize the IKDs and the SKD.
4: while the termination criteria are not met do
5:   for i = 1 to N do
6:     for j = 1 to M do
7:       if (0 ≤ r7 < pr) then
8:         Generate xij as Eqs. (13)-(14).
9:       else if (pr ≤ r7 < pi) then
10:        Generate xij as Eqs. (16)-(17).
11:      else if (pi ≤ r7 ≤ 1) then
12:        Generate xij as Eqs. (19)-(20).
13:      end if
14:    end for
15:  end for
16:  Calculate f(X).
17:  Update the IKDs and SKD.
18: end while
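Under the assumptions that the IKD and SKD sizes are 1 and the objective is minimized, Algorithm 1 can be sketched end to end as follows. All function names and the bound-clipping step are our own illustrative choices, not the paper's implementation.

```python
import random

def hchlo(f, bounds, Mb, N=20, pr=0.1, pi=0.75, IL=1.0, SL=1.0,
          max_nfc=20000):
    """Minimal HcHLO sketch (IKD/SKD sizes fixed to 1, minimization)."""
    Mr = len(bounds)

    def init():
        R = [lo + random.random() * (hi - lo) for lo, hi in bounds]
        B = [random.randint(0, 1) for _ in range(Mb)]
        return R, B

    ikd = []                                   # one best (R, B, fit) per person
    for _ in range(N):
        R, B = init()
        ikd.append((R, B, f(R, B)))
    skd = min(ikd, key=lambda e: e[2])         # single social best
    nfc = N
    while nfc < max_nfc:
        for i in range(N):
            ikR, ikB, _ = ikd[i]
            skR, skB, _ = skd
            R, B = [0.0] * Mr, [0] * Mb
            for j in range(Mr):                # real part: Eqs. (13)/(16)/(19)
                lo, hi = bounds[j]
                r7 = random.random()
                if r7 < pr:
                    R[j] = lo + random.random() * (hi - lo)
                elif r7 < pi:
                    R[j] = ikR[j] + IL * random.uniform(-1, 1) * (skR[j] - ikR[j])
                else:
                    R[j] = skR[j] + SL * random.random() * (skR[j] - ikR[j])
                R[j] = min(max(R[j], lo), hi)  # keep within the bounds
            for j in range(Mb):                # binary part: Eqs. (14)/(17)/(20)
                r7 = random.random()
                if r7 < pr:
                    B[j] = random.randint(0, 1)
                elif r7 < pi:
                    B[j] = ikB[j]
                else:
                    B[j] = skB[j]
            fit = f(R, B)
            nfc += 1
            if fit < ikd[i][2]:
                ikd[i] = (R, B, fit)           # update this person's IKD
            if fit < skd[2]:
                skd = (R, B, fit)              # update the SKD
    return skd

# toy mixed-variable objective: minimize R[0]^2 plus the number of 1-bits
best = hchlo(lambda R, B: R[0] ** 2 + sum(B), bounds=[(-5.0, 5.0)], Mb=3)
```

On this toy problem the global optimum is R = (0,) with B = (0, 0, 0); the sketch reliably drives the stored social best close to it within the evaluation budget.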
4. Experimental results and discussion

In engineering areas, many problems are mix-coded optimization problems, which involve a number of system parameters of which some take on continuous values while others are restricted to a set of discrete values or Boolean values [34, 35]. These discrete sets typically arise because some variables are only allowed to use standard-sized or readily available components, while Boolean variables represent whether the corresponding components are included or excluded. A total of 14 engineering optimization problems from [35, 36] were adopted as the benchmark problems to evaluate the performance of the presented HcHLO. Table 1 lists the global optimum and the type of each of these 14 problems.
First, HcHLO is compared with six improved algorithms developed in [35, 36], which, as far as we know, achieved the best results on these 14 problems so far. The details of these six approaches are listed in Table 2. For a fair comparison, the population size of HcHLO is set to 10×M, where M is the dimension of the problem, and the maximal number of function calculations (MaxNFC) on each problem is set as recommended in [36]. Besides, following the instructions given in [36], the search is terminated if the gap between the theoretical optimum and the found one is less than 10^-6. Since all the benchmark problems are single-objective problems, the sizes of the IKDs and the SKD were both set to 1 according to [1], and the IKD of HcHLO was re-initialized if the individual best solution was not updated in 100 generations, to avoid being trapped in local optima. Note that the optimal control parameters usually depend on the problems and are unknown without prior knowledge. Therefore, a set of fair parameter values, as listed in Table 3, was set for HcHLO by trial and error. HcHLO was applied to solve all the 14 problems with 100 independent runs, and the results are listed in Table 4. For convenient comparison of the performance, the rankings and the average numbers of fitness calculations of the algorithms are summarized in Tables 5 and 6, respectively.
Table 1. The benchmark problems

Problems  Best known  Type
P1        87.5        continuous-binary mixed
P2        7.6672      continuous-binary mixed
P3        4.5796      continuous-binary mixed
P4        2           continuous-binary mixed
P5        2.1247      continuous-binary mixed
Table 3 (excerpt). Parameter settings of the compared algorithms

ALO: a = min(x), b = max(x), c = x_lb, d = x_ub
MFO: flame_no = round(N - l*(N-1)/T), a = -1 - t*(-1)/T, b = 1
GWO: a = 2 - t*2/T, A = 2a*r1 - a, C = 2*r2
BBA: Qmin = 0, Qmax = 2, A = 0.9, r = 0.9
WOA: a = 2 - t*2/T, a2 = -1 + t*((-1)/T), b = 1
Table 4. Results of HcHLO and the compared algorithms on the benchmark problems

Methods      SR (%)  MEAN           STD       MinNFC  MaxNFC
P1
MDE'           54    89.879034      2.768746    7696   15000
MA-MDE'        91    88.230145      1.899683    3901   15000
MDE'-IHS       84    87.500000      0.002118    3731   15000
MDE'-HJ       100    /              /           5859   15000
MDE'-IHS-HJ    96    /              /           6589   15000
PSO-MDE'-HJ   100    /              /           4596   15000
HcHLO         100    87.500000      0.000000    3543   15000
P2
MDE'            4    7.918619       0.047891   96070  100000
P8
MDE'-IHS       72    3.561157       0.008381   19947   50000
MDE'-HJ         3    /              /          50210   50000
MDE'-IHS-HJ    81    /              /          45821   50000
PSO-MDE'-HJ    28    /              /          49206   50000
HcHLO          85    3.558935       0.004882   33293   50000
P9
MDE'          100    -32217.427262  0.002836    1023    5495
MA-MDE'       100    -32217.427106  0.003690    1913    5495
MDE'-IHS      100    -32217.427780  0.000000     403    5495
MDE'-HJ       100    /              /            495    5495
MDE'-IHS-HJ   100    /              /            453    5495
PSO-MDE'-HJ   100    /              /            555    5495
HcHLO         100    -32217.427780  0.000000      24    5495
P10
MDE'           93    -0.807608      0.005615   17567   50000
MA-MDE'        94    -0.807907      0.003077   30951   50000
MDE'-IHS      100    -0.808844      0.000000    3955   50000
MDE'-HJ        47    /              /          43090   50000
MDE'-IHS-HJ    92    /              /          13152   50000
PSO-MDE'-HJ    89    /              /          24484   50000
Table 4 shows that HcHLO achieves the best results on 13 out of 14 problems, and it is only inferior to PSO-MDE'-HJ on P13. The rankings in Table 5 clearly show that HcHLO outperforms the other six algorithms. Specifically, HcHLO finds the global optima (within a 10^-6 error) with a 100% success rate (SR) on 12 out of 14 problems, while MDE', MA-MDE', MDE'-IHS, MDE'-HJ, MDE'-IHS-HJ, and PSO-MDE'-HJ only achieve a 100% success rate on 3, 3, 4, 5, 4, and 6 out of 14 problems, respectively. Besides, Table 6 shows that HcHLO requires the fewest fitness calculations, i.e. 8359 on average over all the problems, which is 23.4% fewer than the second-placed MDE'-IHS-HJ. Therefore, it is fair to claim that HcHLO possesses more robust and efficient performance on the test problems.
To further verify the performance of HcHLO, 10 recently proposed algorithms, i.e. the memory-based differential evolution algorithm (MBDE) [37], competitive and cooperative particle swarm optimization (CCPSO) [38], memetic binary hybrid topology particle swarm optimization (BHTPSO) [39], binary learning differential evolution (BLDE) [40], the sine cosine algorithm (SCA) [41], the ant lion optimizer (ALO) [42], the moth-flame optimization algorithm (MFO) [43], the grey wolf optimizer (GWO) [44], the binary bat algorithm (BBA) [45], and the whale optimization algorithm (WOA) [46], as well as the standard HLO [1], were adopted to solve these 14 engineering problems. For a fair comparison, the recommended parameter values of these algorithms were used, and the maximal number of function calculations is the same as that of HcHLO; these settings are also given in Table 3. The numerical results and the Wilcoxon signed-rank test (W-test) results are displayed in Table 7, where "1" represents that HcHLO significantly outperforms the compared algorithm at the 95% confidence level, "-1" denotes that HcHLO is significantly worse than the compared one, and "0" indicates that HcHLO is comparable in performance to the counterpart. For a clear analysis of the results, the W-test results are summarized in Table 8.
Table 5. Rankings of MDE’, MA-MDE’, MDE’-IHS, MDE’-HJ, MDE’-IHS-HJ, PSO-MDE’-HJ, and HcHLO
P1 P2 P3 P4 P5 P6 P7 P8 P9 P10 P11 P12 P13 P14 Mean
MDE’ 7 7 4 5 6 6 4 5 1 4 1 1 5 6 4.43
MA-MDE’ 6 6 3 6 4 5 5 1 1 3 1 1 5 7 3.86
MDE’-IHS 5 5 2 3 1 7 7 4 1 1 7 1 7 1 3.71
MDE’-HJ 1 3 6 2 5 2 6 7 1 7 1 1 3 1 3.28
MDE’-IHS-HJ 4 2 5 4 3 3 3 3 1 5 1 1 4 1 2.86
PSO-MDE’-HJ 1 4 7 6 7 4 1 6 1 6 1 1 1 1 3.36
HcHLO 1 1 1 1 1 1 1 1 1 1 1 1 2 1 1.07
Table 6 (excerpt). The average numbers of fitness calculations of the algorithms

      MDE'   MA-MDE'  MDE'-IHS  MDE'-HJ  MDE'-IHS-HJ  PSO-MDE'-HJ  HcHLO
P6    30030  23462    45764     15964    21890        22929        12540
P7    426    670      642       994      458          412          248
P8    27329  20546    19947     50210    45821        49206        33293
P9    1023   1913     403       495      453          555          24
P10   17567  30951    3955      43090    13152        24484        19600
P11   222    338      241       285      221          288          263
P12   1460   2524     1070      1704     1762         1414         1943
P13   42108  42632    46451     30138    32618        18265        30663
P14   1603   2856     2977      3058     1747         2419         2034
Mean  16893  16671    15791     14249    10915        11831        8359
Table 7 shows that HcHLO obtains the best results on all the 14 problems, and the W-test results summarized in Table 8 demonstrate that HcHLO is significantly better than HLO, BLDE, BHTPSO, CCPSO, MBDE, SCA, MFO, BBA, GWO, WOA, and ALO on 7, 13, 12, 8, 10, 7, 8, 10, 8, 9, and 8 out of 14 problems, respectively. Compared with HLO, the proposed HcHLO has more control parameters and its implementation is more complicated, as it uses the continuous learning operators and the binary learning operators to deal with the real-coded variables and the other types of variables, respectively. However, it is fair to conclude that HcHLO is valid and worth using, since it clearly surpasses HLO on 7 out of 14 problems and has better numerical results on all the problems.
Besides, the experimental results demonstrate the superiority of HLO, as it outperforms the other algorithms except SCA. As discussed in the previous work [2], the learning operators of HLO endow the algorithm with more complicated dynamic behaviors. For example, the individual learning operator and the social learning operator of HLO generate a new candidate by copying different bits of the solutions in the IKD and the SKD, which is similar to the crossover operator of Genetic Algorithms (GAs). However, the real function of the individual learning operator and the social learning operator is a variable-point crossover, that is, it can be a single-point crossover or a variable multi-point crossover according to the generated random number r_7 in Eq. (21). Therefore, the dynamics of HLO are much more complicated than those of GAs. Besides, as only two values, i.e. "0" and "1", exist in binary space, the random learning operator of HLO is regarded as a mutation operator with the mutation probability p_r/2 in previous works [1-4]. However, the random learning operator works as the mutation operator with the p_r/2 rate only when the corresponding bits of the individual best solution and the social best solution are the same. If the bit of the individual best solution differs from that of the social best solution, like "1" for the individual best solution and "0" for the social best solution, the random learning operator can be regarded as the individual learning operator if its randomly yielded value is "1", or as the social learning operator if its randomly generated value is "0". Therefore, the random learning operator works as a mutation operator in each individual of HLO with a different mutation rate, which is determined by the p_r value and the difference between the corresponding individual best solution and the social best solution. Besides, as the IKDs and the SKD are updated, the difference between the individual best solution and the social best solution changes accordingly, which means that the probability of the random learning operator acting as a mutation operator also varies for each individual during the iterations. In short, the random learning operator in HLO may act as a mutation operator with a varied mutation rate for different individuals at different generations, which is much more complex than the standard mutation operator in GAs. Based on the above analysis, it can be found that the learning operators of HLO have more complicated behaviors than they appear. Comparing HLO with GAs, HLO can obviously search with more different modes and with varied parameter values, which may be the essential reason that HLO possesses an excellent search ability.
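The variable-point crossover interpretation above can be illustrated with a few lines of code (our own illustration, not from the paper): with random learning switched off, each bit of a child copies from either the individual best or the social best, and the number of effective crossover points varies from draw to draw.

```python
import random

def hlo_child(ikd, skd, pi=0.5):
    # per-bit choice between individual and social knowledge (Eq. (21)
    # with pr = 0), i.e. a crossover with a variable number of points
    return [ikd[j] if random.random() < pi else skd[j]
            for j in range(len(ikd))]

def crossover_points(child, ikd, skd):
    # count positions where the copying source switches parent; with
    # complementary parents the source of every bit is unambiguous
    src = ['i' if c == i else 's' for c, i, s in zip(child, ikd, skd)]
    return sum(1 for a, b in zip(src, src[1:]) if a != b)

ikd = [0] * 16          # individual best
skd = [1] * 16          # social best
points = [crossover_points(hlo_child(ikd, skd), ikd, skd) for _ in range(5)]
```

Running this repeatedly shows the point count ranging anywhere from 0 (pure copying of one parent) up to a switch at nearly every position, which is the "variable-point" behavior discussed above.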
It can also be found that the hybrid algorithms proposed in [36], i.e. PSO-MDE'-HJ and MDE'-IHS-HJ, are powerful, as they achieve better results than HLO, MBDE, CCPSO, BLDE, BHTPSO, SCA, MFO, BBA, GWO, WOA, and ALO, which were developed later. Considering that PSO-MDE'-HJ and MDE'-IHS-HJ are specially designed to tackle these hybrid-coded problems, this is reasonable. However, the experimental results show that HcHLO, rather than MDE'-IHS-HJ, possesses the best-known overall results so far on these 14 problems, owing to the excellent global search ability of HcHLO as well as its linear learning mechanism.
Table 7. Results of HcHLO, HLO, BLDE, BHTPSO, CCPSO, MBDE, SCA, MFO, BA, GWO, WOA, and ALO on the benchmark problems
CR
HcHLO HLO BLDE BHTPSO CCPSO MBDE SCA MFO BA GWO WOA ALO
Best 87.500000 87.500000 87.500000 87.500000 87.500000 87.500000 87.500000 87.500000 87.500000 87.500000 87.500000 87.500000
Mean 87.500000 87.500000 87.767621 87.500000 87.500000 87.547137 87.500000 87.500000 87.500000 87.500000 87.500000 87.500000
P1
Std 3.02E-08 1.22E-07 2.46E-01 1.84E-06 1.27E-06 4.71E-01 3.10E-07 2.91E-07 8.24E-07 7.06E-07 5.35E-07 2.24E-07
P2
W-test
Best
Mean
Std
W-test
/
7.667181
7.667194
4.76E-06
/
0
7.667184
7.667195
1.02E-05
0
1
7.667184
7.888805
3.93E-01
1
0
7.670735
7.922762
8.39E-02
1
0
7.667181
7.667194
5.30E-06
1
US1
7.667181
8.107200
5.87E-01
0
0
7.667182
7.750253
1.22E-02
1
0
7.667184
7.873050
1.09E-01
1
0
7.667201
7.668284
8.38E-04
1
0
7.667519
7.889777
1.07E-01
1
0
7.667332
7.831248
1.29E-01
1
0
7.667432
7.772781
2.01E-01
1
AN
Best 4.579587 4.579600 4.579598 4.579619 4.579588 4.579587 4.579596 4.579587 4.579623 4.579587 4.580446 4.579596
Mean 4.579597 4.584246 4.652072 4.648051 4.579657 4.579597 4.579597 4.579597 4.648051 4.579597 4.585862 4.589512
P3
Std 3.02E-06 2.31E-02 9.62E-02 8.96E-02 5.67E-05 2.09E-01 3.62E-06 3.48E-06 8.24E-02 1.05E-05 5.17E-01 4.33E-02
W-test / 1 1 1 0 0 0 0 1 0 1 1
Best 2.000000 2.000000 2.000000 2.000120 2.000000 2.000000 2.000000 2.000000 2.000000 2.000000 2.000000 2.000000
Mean 2.000000 2.000000 2.122889 2.011921 2.000000 2.073184 2.000000 2.000000 2.000000 2.000000 2.000000 2.000000
P4
Std 2.81E-07 8.52E-07 1.24E-01 1.19E-03 3.00E-7 1.10E-1 2.78E-07 5.17E-07 7.65E-07 3.32E-07 1.99E-07 5.49E-07
W-test / 0 1 1 0 1 0 0 0 0 0 0
Best 2.124470 2.124538 2.124693 2.124546 2.124472 2.124474 2.124470 2.124470 2.124486 2.124470 2.124470 2.124470
Mean 2.124470 2.124693 2.131412 2.124692 2.124589 2.445195 2.126509 2.141912 2.135737 2.137581 2.124594 2.124675
P5
Std 6.42E-07 2.15E-05 1.10E-02 2.19E-05 6.10E-05 1.91E-01 7.65E-03 8.5E-02 1.44E-02 2.42E-02 7.84E-05 7.84E-05
W-test / 1 1 1 0 1 1 1 1 1 1 0
Best 1.076546 1.076547 1.091516 1.077308 1.076546 1.076546 1.076546 1.076546 1.079033 1.076617 1.076707 1.076546
Mean 1.081757 1.158096 1.248415 1.231980 1.0866196 1.099527 1.180620 1.144197 1.231470 1.165564 1.127812 1.175122
P6
Std 2.97E-02 8.49E-02 1.58E-02 3.74E-02 3.17E-01 1.81E-01 8.53E-02 8.50E-02 4.74E-02 8.29E-02 5.03E-02 8.03E-02
W-test / 1 1 1 1 1 1 1 1 1 1 1
Best 99.239635 99.243964 99.239635 99.239635 99.239635 99.239635 99.239635 99.239635 99.239637 99.239635 99.239636 99.239636
P7 Mean 99.241553 99.597800 101.7905895 101.7619564 99.241803 110.919590 99.241569 99.403991 100.610843 99.502571 99.243185 99.353452
Std 1.71E-03 1.39E+00 3.64E+00 3.69E+00 5.633E-03 1.47E+02 1.79E-03 1.84E-02 1.97E+00 2.96E-02 3.47E-03 5.47E-02
W-test / 1 1 1 0 1 0 1 1 1 0 1
Best 3.557466 3.557502 3.557778 3.558214 3.557466 3.557466 3.557466 3.557472 3.569668 3.557550 3.557755 3.557834
P8 Mean 3.558935 3.565095 3.605231 3.649570 3.559793 3.560045 3.559031 3.580417 3.708652 3.568889 3.559104 3.572482
Std 4.88E-03 8.17E-03 6.29E-02 9.51E-02 5.63E-02 5.81E-01 3.17E-03 4.80E-02 1.23E-01 2.08E-02 3.45E-03 3.59E-02
W-test / 0 1 1 1 1 0 1 1 1 0 1
Best -32217.42778 -32217.42778 -32217.42778 -32217.42778 -32217.42778 -32217.42778 -32217.42778 -32217.42778 -32217.42778 -32217.42778 -32217.42778 -32217.42778
P9
Mean -32217.42778 -32217.42778 -32217.42778 -32217.42778 -32217.42778 -32217.42778 -32217.42778 -32217.42778 -32217.42778 -32217.42778 -32217.42778 -32217.42778
Std 2.19E-11 2.19E-11 2.19E-11 2.19E-11 2.19E-11 2.19E-11 2.19E-11 2.19E-11 2.19E-11 2.19E-11 2.19E-11 2.19E-11
W-test / 0 0 0 0 0 0 0 0 0 0 0
P10
Best -0.808844 -0.808844 -0.808844 -0.808844 -0.808844 -0.808844 -0.808844 -0.808844 -0.808844 -0.808844 -0.808844 -0.808844
Mean -0.808844 -0.808844 -0.802650 -0.793711 -0.757106 -0.760245 -0.807014 -0.808657 -0.755025 -0.808844 -0.787432 -0.788527
Std 3.25E-11 3.25E-11 9.14E-03 1.36E-02 2.32E-02 5.64E-02 6.52E-03 1.87E-03 2.12E-02 8.92E-11 1.96E-02 1.42E-02
W-test / 0 1 1 1 1 1 1 1 0 1 1
Best -0.974565 -0.974565 -0.974565 -0.974565 -0.973054 -0.971472 -0.974565 -0.974565 -0.974565 -0.978799 -0.974565 -0.974565
Mean -0.974565 -0.974565 -0.974127 -0.974367 -0.972187 -0.970321 -0.974565 -0.974565 -0.974298 -0.976103 -0.973848 -0.974565
P11
Std 1.56E-15 1.56E-15 9.14E-03 1.13E-03 1.50E-03 3.24E-03 1.42E-15 1.82E-15 1.25E-03 1.06E-03 4.9E-03 1.75E-15
W-test / 0 1 0 1 1 0 0 0 1 1 0
Best -0.999892 -0.999890 -0.999872 -0.999872 -0.999844 -0.999888 -0.9998621 -0.999892 -0.999544 -0.999892 -0.999892 -0.999862
Mean -0.999821 -0.999626 -0.999565 -0.999121 -0.999594 -0.999653 -0.9997353 -0.999733 -0.999152 -0.999746 -0.997271 -0.999752
P12
Std 9.54E-06 1.09E-04 2.60E-04 2.00E-03 9.62E-05 1.06E-04 2.38E-04 9.92E-05 7.86E-03 8.18E-05 3.50E-03 7.86E-05
W-test / 1 1 1 1 1 1 0 1 0 1 0
Best 5850.438514 5850.507633 5850.769287 5856.841490 5850.508182 5850.511380 5850.549073 5850.424435 5851.043264 5851.395380 5850.825749 5850.825749
Mean 5908.944814 5956.548160 6702.28527 6274.611674 5974.786682 6337.531826 5923.822778 5965.330069 6304.729238 5912.724358 6291.156549 5941.542192
P13
Std 1.01E+02 1.34E+02 8.76E+02 2.92E+02 1.03E+02 4.06E+02 1.94E+02 1.38E+02 2.75E+02 1.88E+01 3.40E+02 2.45E+02
W-test / 1 1 1 1 0 1 1 1 1 1 1
Best -75.134137 -75.134137 -75.134135 -75.134133 -75.134137 -75.134137 -75.134137 -75.134137 -75.133224 -75.134137 -75.133869 -75.133869
Mean -75.134137 -74.550318 -72.024184 -74.886291 -74.533979 -74.102486 -74.645972 -74.483266 -74.577617 -75.120090 -74.557990 -74.601332
P14
Std 1.11E-07 1.96E+00 3.81E+00 4.70E-01 1.07E-04 2.33E+00 1.94E+00 2.22E-02 4.17E+01 4.33E-02 8.40E-01 7.38E-01
W-test / 1 1 1 1 1 1 1 1 1 1 1
HLO BLDE BHTPSO CCPSO MBDE SCA MFO BBA GWO WOA ALO
P1 0 1 0 0 1 0 0 0 0 0 0
P2 0 1 1 1 0 1 1 1 1 1 1
P3 1 1 1 0 0 0 0 1 0 1 1
P4 0 1 1 0 1 0 0 0 0 0 0
P5 1 1 1 0 1 1 1 1 1 1 0
P6 1 1 1 1 1 1 1 1 1 1 1
P7 1 1 1 0 1 0 1 1 1 0 1
P8 0 1 1 1 1 0 1 1 1 0 1
P9 0 0 0 0 0 0 0 0 0 0 0
P10 0 1 1 1 1 1 1 1 0 1 1
P11 0 1 0 1 1 0 0 0 1 1 0
P12 1 1 1 1 1 1 0 1 0 1 0
P13 1 1 1 1 1 1 1 1 1 1 1
P14 1 1 1 1 0 1 1 1 1 1 1
Total 7 13 12 8 10 7 8 10 8 9 8
5. Conclusions and future work
In engineering areas, many optimization problems are mixed-coded problems. Although binary
optimization algorithms, like HLO, can be used to solve these problems directly, the efficiency of
the search on continuous parameters that require high accuracy may be spoiled due to "the curse of
dimensionality" raised by the binary coding strategy. Meanwhile, using continuous algorithms to
solve mixed-variable problems would incur a significant performance reduction because of the binary
or discrete variables of the problems. Thus, this paper extends HLO and first presents a
hybrid-coded HLO framework to solve mixed-coded problems more efficiently and effectively, in which
the real-coded parameters are optimized by the continuous linear learning operators of CHLO while
the remaining variables of the problems are handled by the binary learning operators of HLO. The
experimental results demonstrate the validity and superiority of the proposed HcHLO, as it achieves
the best-known overall performance so far on the tested mixed-coded problems.
HLO is a newly developed meta-heuristic with promising potential. Our future work will further
explore the characteristics of HLO and apply it to diverse problems. Besides, HLO is developed
based on a simplified human learning model in which only random learning, individual learning, and
social learning are simulated, while many sophisticated brain functions and learning mechanisms,
which play important roles in the human learning process and are even key reasons that humans
outperform other animals, are not considered. Thus, the most important direction of our following
research is to study and introduce these phenomena into HLO to enhance its search ability.
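The hybrid-coded framework summarized above can be illustrated with a minimal, schematic sketch. The operator forms, the learning probabilities, the linear step size, and the demo problem (appendix Problem 4, solved via a penalty function) are illustrative assumptions, not the paper's exact operators: binary genes are updated by random/individual/social learning, while the real-coded gene takes a small linear step around the learned knowledge.

```python
import random

def fitness(x, y):
    # Appendix Problem 4 (assumed form): minimize 2x + y
    # s.t. x^2 + y >= 1.25 and x + y <= 1.6, enforced by penalties.
    pen = 100 * max(0.0, 1.25 - (x * x + y)) + 100 * max(0.0, x + y - 1.6)
    return 2 * x + y + pen

random.seed(7)
POP, ITERS = 20, 300
inds = [[random.uniform(0, 1.6), random.randint(0, 1)] for _ in range(POP)]
ikd = [list(v) for v in inds]                   # individual knowledge database
skd = min(inds, key=lambda v: fitness(*v))[:]   # social knowledge database

for _ in range(ITERS):
    for i in range(POP):
        r = random.random()
        if r < 0.1:      # random learning: resample both genes
            cand = [random.uniform(0, 1.6), random.randint(0, 1)]
        elif r < 0.5:    # individual learning: linear step around own knowledge
            cand = [ikd[i][0] + random.uniform(-0.05, 0.05), ikd[i][1]]
        else:            # social learning: linear step around the population's best
            cand = [skd[0] + random.uniform(-0.05, 0.05), skd[1]]
        cand[0] = min(max(cand[0], 0.0), 1.6)   # keep the continuous gene in bounds
        if fitness(*cand) < fitness(*ikd[i]):
            ikd[i] = cand
    skd = min(ikd, key=lambda v: fitness(*v))[:]

print(round(fitness(*skd), 3))
```

Under these assumptions the sketch settles near the reported optimum of Problem 4 (F = 2 at x = 0.5, y = 1); the point is only to show how binary and continuous genes can be updated by separate operators inside one population.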
Acknowledgments
This work is supported by the National Natural Science Foundation of China (Grant Nos. 61304031 and
61633016), the Key Project of the Science and Technology Commission of Shanghai Municipality (Grant
Nos. 16010500300, 15220710400, and 14DZ1206302), and a Paul and Heidi Brown Preeminent
Professorship.
References
[1] L. Wang, H. Ni, R. Yang, A simple human learning optimization algorithm, In: M. Fei, C. Peng, Z.
Su, Y. Song, Q. Han (eds) Computational Intelligence, Networked Systems and Their Applications,
LSMS/ICSEE 2014. Communications in Computer and Information Science, vol 462. Springer,
Berlin, Heidelberg, 2014, pp. 56-65.
[2] L. Wang, H. Ni, R. Yang, An adaptive simplified human learning optimization algorithm.
Information Sciences. 320 (2015) 126-139.
[4] L. Wang, R. Yang, H. Ni, A human learning optimization algorithm and its application to
multi-dimensional knapsack problems, Applied Soft Computing. 34 (2015) 736-743.
[5] L. Wang, H. Ni, W. Zhou, MBPOA-based LQR controller and its application to the double-parallel
inverted pendulum system, Engineering Applications of Artificial Intelligence. 36 (2014) 262-268.
[6] L. Wang, R. Yang, P.M. Pardalos, An adaptive fuzzy controller based on harmony search and its
application to power plant control, International Journal of Electrical Power & Energy Systems. 53
(2013) 272-278.
[7] S. Pookpunt, W. Ongsakul, Design of optimal wind farm configuration using a binary particle
swarm optimization at Huasai district, Southern Thailand, Energy Conversion and Management.
[8] … binary-coded particle swarm optimization and Extreme Learning Machines, Journal of Hydrology.
529 (2015) 1617-1632.
[9] A.A. Salman, I. Ahmad, M.G.H. Omran, A metaheuristic algorithm to solve satellite broadcast
algorithm, in: N. Nguyen, B. Trawiński, R. Kosala (eds) Intelligent Information and Database
Systems. ACIIDS 2015. Lecture Notes in Computer Science, vol 9012, Springer, Cham, 2015, pp.
41-50.
[18] M. Balvasi, M. Akhlaghi, H. Shahmirzaee, Binary TLBO algorithm assisted to investigate the
supper scattering plasmonic nano tubes, Superlattices and Microstructures. 89 (2016) 26-33.
[19] Y. Zhou, X. Chen, G. Zhou, An improved monkey algorithm for a 0-1 knapsack problem, Applied
Soft Computing. 38 (2016) 817-830.
[20] E. Emary, H.M. Zawbaa, A.E. Hassanien, Binary grey wolf optimization approaches for feature
selection, Neurocomputing. 172 (2016) 371-381.
[21] T. Liao, K. Socha, M. Montes de Oca, Ant colony optimization for mixed-variable optimization
problems, IEEE Transactions on Evolutionary Computation. 18 (4) (2014) 503-518.
[22] L. Le-Anh, T. Nguyen-Thoi, V. Ho-Huu, Static and frequency optimization of folded laminated
composite plates using an adjusted Differential Evolution algorithm and a smoothed triangular
275-300.
[24] L. Cui, J. Deng, L. Wang, A novel locust swarm algorithm for the joint replenishment problem
considering multiple discounts simultaneously, Knowledge-Based Systems. 111 (2016) 51-62.
[25] X. Lei, Y. Ding, H. Fujita, Identification of dynamic protein complexes based on fruit fly
optimization algorithm, Knowledge-Based Systems. 105 (2016) 270-277.
[26] L. Cui, L. Wang, J. Deng, Intelligent algorithms for a new joint replenishment and synthetical
delivery problem in a warehouse centralized supply chain, Knowledge-Based Systems. 90 (2015)
185-198.
[27] Y. Shi, C. M. Pun, H. Hu, An Improved Artificial Bee Colony and Its Application,
Knowledge-Based Systems. 107 (2016) 14-31.
[28] L. Wang, Z. Wang, S. Liu, An effective multivariate time series classification approach using echo
state network and adaptive differential evolution algorithm, Expert Systems with Applications. 43
(2016) 237–249.
[29] L. Cui, L. Wang, J. Deng, A new improved quantum evolution algorithm with local search
procedure for capacitated vehicle routing problem, Mathematical Problems in Engineering. 2013
(2013), Article ID 159495, 17 pages.
[30] D. Datta, J.R. Figueira, A real–integer–discrete-coded differential evolution, Applied Soft
Computing. 13 (9) (2013) 3884-3893.
[31] M. Magni, C. Paolino, R. Cappetta, Diving too deep: How cognitive absorption and group learning
behavior affect individual learning, Academy of Management Learning & Education. 12 (1)
(2013) 51-69.
[32] E.M. Dar-El, Human learning: From learning curves to learning organizations, Springer Science &
Business Media. 2013.
[33] D. Narain, J.B.J Smeets, P. Mamassian, Structure learning and the Occam's razor principle: a new
view of human function acquisition, Frontiers in computational neuroscience. 8 (2014) 121.
[34] L. Cui, J. Deng, F. Liu, Investigation of investment in a single retailer two-supplier supply
chain with random demand to decrease inventory inaccuracy, Journal of Cleaner Production. 142
(2017) 2018-2044.
[35] T.W. Liao, Two hybrid differential evolution algorithms for engineering design optimization.
Applied Soft Computing. 10 (4) (2010) 1188-1199.
[36] H. Yi, Q. Duan, T.W. Liao, Three improved hybrid metaheuristic algorithms for engineering
design optimization, Applied Soft Computing. 13 (5) (2013) 2433-2444.
[37] R.P. Parouha, K.N. Das. A memory based differential evolution algorithm for unconstrained
optimization. Applied Soft Computing. 38 (2016) 501-517.
[38] Y. Li, Z.H. Zhan, S. Lin, Competitive and cooperative particle swarm optimization with
information sharing mechanism for global optimization problems, Information Sciences. 293
(2015) 370-382.
[39] Z. Beheshti, S.M. Shamsuddin, S. Hasan, Memetic binary particle swarm optimization for discrete
optimization problems, Information Sciences. 299 (2015) 58-84.
[40] Y. Chen, W. Xie, X. Zou, A binary differential evolution algorithm learning from explored
[42] S. Mirjalili. The ant lion optimizer, Advances in Engineering Software. 83 (2015) 80-98.
[43] S. Mirjalili, Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm,
Appendix
Problem 1.
Minimize F = 7.5y1 + 6.4x1 + 5.5y2 + 6.0x2
s.t. 0.8x1 + 0.67x2 ≥ 10
x1 − 20y1 ≤ 0
x2 − 20y2 ≤ 0
where 0 ≤ x1, x2 ≤ 20, y1, y2 ∈ {0, 1}. The global optimum F* is 87.5 at x* = [12.5006, 0] and y* = [1, 0].
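A quick numeric check of Problem 1 at its reported optimum. The constraint directions here are assumptions inferred from the fact that the optimum sits on the boundary 0.8x1 + 0.67x2 = 10:

```python
# Evaluate the (reconstructed) Problem 1 objective at x* = [12.5006, 0], y* = [1, 0].
x1, x2, y1, y2 = 12.5006, 0.0, 1, 0
F = 7.5 * y1 + 6.4 * x1 + 5.5 * y2 + 6.0 * x2
assert 0.8 * x1 + 0.67 * x2 >= 10        # demand constraint, binding at the optimum
assert x1 <= 20 * y1 and x2 <= 20 * y2   # big-M switching constraints
print(round(F, 2))
```

The objective evaluates to ≈ 87.5, matching the reported F* (the small excess comes from the rounded x1* = 12.5006).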
Problem 2.
Minimize F = 2x1 + 3x2 + 1.5y1 + 2y2 − 0.5y3
s.t. x1^2 + y1 = 1.25
x2^1.5 + 1.5y2 = 3
x1 + y1 ≤ 1.6
1.333x2 + y2 ≤ 3
−y1 − y2 + y3 ≤ 0
where 0 ≤ x1, x2 ≤ 2, y1, y2, y3 ∈ {0, 1}. The global optimum F* is 7.667 at x* = [1.118, 1.310] and y* = [0, 1, 1].
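A consistency check of Problem 2, assuming the first two constraints are equalities as reconstructed. With y = (0, 1, 1) the equalities yield exactly x* = [1.118, 1.310] and F = 7.667; with y1 = 1 they would instead force x1 = 0.5, so the reported x* is only attainable under this binary assignment:

```python
import math

# Solve the equality constraints for the continuous variables, then evaluate F.
y1, y2, y3 = 0, 1, 1
x1 = math.sqrt(1.25 - y1)          # from x1^2 + y1 = 1.25
x2 = (3 - 1.5 * y2) ** (1 / 1.5)   # from x2^1.5 + 1.5*y2 = 3
F = 2 * x1 + 3 * x2 + 1.5 * y1 + 2 * y2 - 0.5 * y3
assert x1 + y1 <= 1.6 and 1.333 * x2 + y2 <= 3
assert -y1 - y2 + y3 <= 0
print(round(F, 3))  # → 7.667
```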
Problem 3.
Minimize F = (y1 − 1)^2 + (y2 − 2)^2 + (y3 − 1)^2 − ln(y4 + 1) + (x1 − 1)^2 + (x2 − 2)^2 + (x3 − 3)^2
s.t. y1 + y2 + y3 + x1 + x2 + x3 ≤ 5
y3^2 + x1^2 + x2^2 + x3^2 ≤ 5.5
y1 + x1 ≤ 1.2
y2 + x2 ≤ 1.8
y3 + x3 ≤ 2.5
y4 + x1 ≤ 1.2
Problem 4.
Minimize F = 2x + y
s.t. 1.25 − x^2 − y ≤ 0
x + y ≤ 1.6
where 0 ≤ x ≤ 1.6, y ∈ {0, 1}. The global optimum F* is 2 at x* = 0.5 and y* = 1.
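Problem 4 is small enough to verify by brute force, enumerating both binary values and a fine grid on x (constraint directions assumed from the reported optimum):

```python
# Grid search over y in {0,1} and 0 <= x <= 1.6, feasibility-filtered.
best = None
for y in (0, 1):
    for i in range(16001):
        x = 1.6 * i / 16000
        if x * x + y >= 1.25 and x + y <= 1.6:
            F = 2 * x + y
            if best is None or F < best[0]:
                best = (F, x, y)
print(best)
```

The search returns F = 2 at x = 0.5, y = 1, matching the stated optimum; the y = 0 branch bottoms out at ≈ 2.236 (x ≈ 1.118).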
Problem 5.
Minimize F = −y + 2x1 − ln(x1/2)
s.t. −x1 − ln(x1/2) + y ≤ 0
where 0.5 ≤ x1 ≤ 1.4, y ∈ {0, 1}. The global optimum F* is 2.1247 at x1* ≈ 1.3748 and y* = 1.
Problem 6.
Minimize F = 0.7y + 5(x1 − 0.5)^2 + 0.8
Problem 7.
Minimize F = 7.5y1 + 5.5(1 − y1) + 7v1 + 6v2 + 50·y1/(0.9[1 − exp(−0.5v1)]) + 50·(1 − y1)/(0.8[1 − exp(−0.4v2)])
s.t. 0.9[1 − exp(−0.5v1)] − 2y1 ≤ 0
0.8[1 − exp(−0.4v2)] − 2(1 − y1) ≤ 0
v1 ≤ 10y1
v2 ≤ 10(1 − y1)
where 0 ≤ v1, v2 ≤ 10, y1 ∈ {0, 1}. The global optimum F* is 99.245209 at v = [3.514237, 0] and y1 = 1.
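A check of the reconstructed Problem 7 objective at the reported optimizer. With y1 = 1 the second fractional term has numerator zero but an undefined 0/0 form at v2 = 0, so it is skipped by the guard; the result lands at ≈ 99.24, consistent with the best values reported for this problem in Table 7:

```python
import math

v1, v2, y1 = 3.514237, 0.0, 1
F = 7.5 * y1 + 5.5 * (1 - y1) + 7 * v1 + 6 * v2
if y1:          # first fractional term is active only when y1 = 1
    F += 50 * y1 / (0.9 * (1 - math.exp(-0.5 * v1)))
if 1 - y1:      # second term active only when y1 = 0 (avoids 0/0 at v2 = 0)
    F += 50 * (1 - y1) / (0.8 * (1 - math.exp(-0.4 * v2)))
assert 0.9 * (1 - math.exp(-0.5 * v1)) - 2 * y1 <= 0
print(round(F, 4))
```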
Problem 8.
Minimize F = (y1 − 1)^2 + (y2 − 2)^2 + (y3 − 1)^2 − ln(y4 + 1) + (x1 − 1)^2 + (x2 − 2)^2 + (x3 − 3)^2
s.t. y1 + y2 + y3 + x1 + x2 + x3 ≤ 5
y3^2 + x1^2 + x2^2 + x3^2 ≤ 5.5
y1 + x1 ≤ 1.2
y2 + x2 ≤ 1.8
y3 + x3 ≤ 2.5
y4 + x1 ≤ 1.2
Problem 9.
Minimize F = 5.357854x1^2 + 0.835689y1x3 + 37.29329y1 − 40792.141
Problem 10.
Minimize F = −Π_{j=1}^{10} [1 − (1 − p_j)^(m_j)]
s.t. Σ_{j=1}^{10} [a_ij·m_j^2 + c_ij·m_j] ≤ b_i, i = 1, 2, 3, 4
[p_j] = (0.81, 0.93, 0.92, 0.96, 0.99, 0.89, 0.85, 0.83, 0.94, 0.92)
[a_ij] =
2 7 3 0 5 6 9 4 8 1
4 9 2 7 1 0 8 3 5 6
5 1 7 4 3 6 0 9 8 2
8 3 5 6 9 7 2 4 0 1
[c_ij] =
7 1 4 6 8 2 5 9 3 3
4 6 5 7 2 6 9 1 0 8
1 10 3 5 4 7 8 9 4 6
2 3 2 5 7 8 6 10 9 1
2].
Problem 11.
Minimize F = −Π_{j=1}^{4} R_j
s.t. Σ_{j=1}^{4} d_1j·m_j^2 ≤ 100
Σ_{j=1}^{4} d_2j·(m_j + exp(m_j/4)) ≤ 150
where
R1 = 1 − q1·((1 − ρ1)q1 + ρ1)^(m1 − 1)
R2 = 1 − ρ2·q2·(p2 + q2^(m2)·(1 − ρ2)^(m2 − 2))/p2
R3 = 1 − q3^(m3)
[d_ij] =
1 2 3 4
7 7 5 7
7 8 8 6
Problem 12.
Minimize F = −Π_{j=1}^{4} [1 − (1 − p_j)^(m_j)]
s.t. Σ_{j=1}^{4} v_j·m_j^2 ≤ 250
Σ_{j=1}^{4} α_j·(−1000/ln(p_j))^(β_j)·(m_j + exp(m_j/4)) ≤ 400
Σ_{j=1}^{4} w_j·m_j·exp(m_j/4) ≤ 500
where
[v_j] = (1 2 3 2)
[w_j] = (6 6 8 7)
[α_j] = (1.0, 2.3, 0.3, 2.3) × 10^−5
[β_j] = (1.5, 1.5, 1.5, 1.5)
m_j ∈ [1, 10] integer, p_j ∈ [0.5, 1 − 10^−6], j = 1, 2, 3, 4. The global optimum F* is −0.999486 at m = [3 6 3 5], p = [0.960592, 0.760592, 0.972646, 0.804660].
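Evaluating the series-system reliability objective of Problem 12 at the reported optimum (the resource constraints are omitted in this check; the tiny residual difference from the printed F* is attributable to rounding of the listed p_j values):

```python
# F = -prod_{j=1..4} [1 - (1 - p_j)^(m_j)] at the reported optimizer.
m = [3, 6, 3, 5]
p = [0.960592, 0.760592, 0.972646, 0.804660]
prod = 1.0
for mj, pj in zip(m, p):
    prod *= 1.0 - (1.0 - pj) ** mj
F = -prod
print(round(F, 6))
```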
Problem 13.
Minimize F = 0.6224x1x2x3 + 1.7781x1^2·x4 + 3.1661x2x3^2 + 19.84x1x3^2
s.t. 0.0193x1/x3 − 1 ≤ 0
0.00954x1/x4 − 1 ≤ 0
x2/240 − 1 ≤ 0
[1296000 − (4/3)πx1^3]/(πx1^2·x2) − 1 ≤ 0
where 25 ≤ x1 ≤ 150, 25 ≤ x2 ≤ 240, x3, x4 ∈ {0.0625, 0.125, ..., 1.1875, 1.25}. The global optimum F* is 5850.770 at x* = [38.858, 221.402, 0.750, 0.375].
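A feasibility and objective check of Problem 13 (the classic pressure-vessel design) at the reported optimum; the variable roles (x1, x2 continuous dimensions, x3, x4 discrete thicknesses) follow the bounds given above:

```python
import math

x1, x2, x3, x4 = 38.858, 221.402, 0.750, 0.375
F = (0.6224 * x1 * x2 * x3 + 1.7781 * x1**2 * x4
     + 3.1661 * x2 * x3**2 + 19.84 * x1 * x3**2)
# Constraint checks (small tolerances absorb rounding of the printed x*).
assert 0.0193 * x1 / x3 <= 1 + 1e-9
assert 0.00954 * x1 / x4 <= 1 + 1e-9
assert x2 / 240 <= 1
assert (1296000 - (4/3) * math.pi * x1**3) / (math.pi * x1**2 * x2) <= 1 + 1e-3
print(round(F, 2))
```

The objective evaluates to ≈ 5850.77, matching the stated F* and confirming the reconstructed coefficients.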
Problem 14.
Minimize F = −f·dc
where 5 ≤ dc ≤ 30, 8.6 ≤ f ≤ 13.4, M ∈ {120, 140, 170, 200, 230, 270, 325, 400, 500}. The global optimum F* is −75.1341 at dc* = 5.6070 and f* = 13.4 when Ra(max) = 0.3 and Nc(max) = 7.