
2016 IEEE International Conference on Systems, Man, and Cybernetics • SMC 2016 | October 9-12, 2016 • Budapest, Hungary

A Population-Based Simulated Annealing Algorithm for Global Optimization

Alireza Askarzadeh
Department of Energy Management and Optimization, Institute of Science and High Technology and Environmental Sciences, University of Advanced Technology, Kerman, Iran
askarzadeh_a@yahoo.com

Carlos Eduardo Klein
Pós-Graduação em Engenharia de Produção e Sistemas, PPGEPS, Pontifícia Universidade Católica do Paraná, PUCPR, Curitiba, Brazil
carlos.klein@outlook.com.br

Leandro dos Santos Coelho
Pós-Graduação em Engenharia de Produção e Sistemas, PPGEPS, Pontifícia Universidade Católica do Paraná, PUCPR, Curitiba, Brazil
leandro.coelho@pucpr.br

Viviana Cocco Mariani
Pós-Graduação em Engenharia Mecânica, PPGEM, Pontifícia Universidade Católica do Paraná, PUCPR, e Departamento de Engenharia Elétrica, Universidade Federal do Paraná, UFPR, Curitiba, Brazil
viviana.mariani@pucpr.br

Abstract—Simulated annealing (SA) is a solo-search algorithm that simulates the cooling process of molten metals through annealing in order to find the optimum solution of an optimization problem. SA selects a feasible starting solution, produces a new solution in its vicinity, and decides by some rules whether or not to move to the new solution. However, the results found by SA depend on the selection of the starting point and on the decisions SA makes. In this paper, in order to ameliorate these drawbacks, a population-based simulated annealing (PSA) algorithm is proposed. PSA uses the population's ability to seek different parts of the search space, thus hedging against bad luck in the initial solution or the decisions. A set of benchmark functions was used to evaluate the performance of the PSA algorithm. Simulation results accentuate the superior capability of PSA in comparison with other optimization algorithms.

Keywords—optimization, solo-searcher, population-based searcher, simulated annealing.

I. INTRODUCTION

Thanks to their flexibility in solving complex problems, meta-heuristic optimization algorithms have received significant attention and shown remarkable growth over the past few decades. The most common way to classify meta-heuristic algorithms is solo-searchers (simulated annealing (SA), tabu search (TS), hill climbing (HC), etc.) versus population-based searchers, such as the genetic algorithm (GA), particle swarm optimization (PSO), and ant colony optimization (ACO), among others. The former employ a single solution during the search process, while in the latter a population of solutions is used and evolved over a given number of iterations. Population-based algorithms have been found to perform well on many real-world problems. This has led to an effort by researchers to understand and explain this behavior. Five distinct mechanisms by which a population-based algorithm might have an advantage over a solo-algorithm are presented in [1].

Simulated annealing (SA) is a popular generic probabilistic solo-algorithm used for global optimization problems. The name and inspiration stem from annealing in metallurgy, a process involving heating and controlled cooling of a material to increase the size of its crystals and reduce its defects. The atoms become unstuck from their initial positions by heating and wander randomly through states of higher energy; slow cooling then gives them more chance to find configurations with lower internal energy than the initial one. By analogy with this physical process, in SA each feasible solution is analogous to a state of a physical system, and the fitness function to be minimized plays the role of the internal energy of the system in that state. The ultimate goal is to bring the system from an arbitrary (random) initial state to a state in which its energy is minimal. In each stage, SA replaces the current solution by a random nearby solution with a probability that depends both on the difference between the corresponding fitness values and on a parameter named temperature.

The ease of implementation makes SA an extremely popular method for solving large, practical problems such as the travelling salesman problem [2], job-shop scheduling [3], communication systems [4], and continuous optimization [5], among others. However, SA suffers from two main drawbacks: being trapped in local minima and taking a long computational time to find a reasonable solution. Because SA is a solo-searcher, its success depends strongly on the selection of the starting point and the decisions it makes. Hence, any bad luck affects the quality of the results, and a local minimum may be reached instead of the global one, especially when the problem dimension is high and there are many local minima.


Moreover, seeking the search space with a single solution takes a long computational time to discover a reasonable solution. To improve SA performance, various researchers have developed different strategies, such as faster annealing schedules [6], simulated annealing with an adaptive non-uniform mutation (nonu-SA) [7], adaptive simulated annealing (ASA) [8], implementation as distributed/parallel algorithms [9], and hybridization of SA with other techniques, such as genetic algorithms [10], support vector machines [11], artificial neural networks [12], and problem-specific knowledge [13].

In order to improve the SA performance considerably, it is possible to bring the concept of population into the algorithm and develop a novel version of the SA algorithm based on social behavior. Using a population makes the algorithm more capable of seeking different parts of the search space and provides an advantage by hedging against being unlucky. In this paper, we propose a population-based simulated annealing algorithm, named PSA, in which a population of solutions attempts to update their positions into new ones based on the SA rules. In PSA, each solution memorizes its best experience and stores it in the population's memory. The memory contains the best experiences found by the solutions, through which the solutions share their information with each other and update their positions.

A large set of unimodal and multimodal benchmark functions is employed to assess the optimization power of PSA. To study the usefulness of PSA, its performance is compared with those of SA with Gaussian mutation (GSA) and SA with non-uniform mutation (nonu-SA) reported recently in the literature [7]. Because PSA is a population-based algorithm, it is fair to compare its performance against other population-based algorithms. To show the competitive performance of PSA relative to other population-based algorithms, the results are also compared with those of the genetic algorithm (GA), particle swarm optimization (PSO), and the state-of-the-art group search optimizer (GSO) algorithm, inspired by animal search behavior [14].

The remainder of the paper is organized as follows. Section II gives the fundamentals of PSA. In Section III, the simulation results are presented and discussed. The paper is concluded in Section IV.

II. FOUNDATIONS OF PSA ALGORITHM

As widely discussed in the literature, the performance of the SA algorithm depends strongly on the starting point. Moreover, SA may not be able to provide a reasonable solution in an affordable time. For this reason, to enhance the accuracy of the obtained result and speed up the convergence rate, we propose a PSA algorithm which employs a population to seek the search space, has a memory to store the best experiences, and gives solutions the chance to share their information with each other.

II.1. SA Algorithm

The atoms of a molten metal can move freely with respect to each other at high temperatures, but their movements become restricted as the temperature is reduced. The arrangement of atoms eventually forms crystals having the minimum possible energy. However, the cooling rate determines the final energy of the system. If the temperature is reduced at a very fast rate, the crystalline state may not be attained at all; instead, a polycrystalline state may be reached in which the system's energy is higher than that of the crystalline state. As a result, to reach the absolute minimum energy state, the temperature must be reduced at a slow rate. This process of slow cooling is known as annealing.

In the SA algorithm, as the simulation proceeds, the temperature, T, is gradually reduced. The algorithm starts its search with a large enough value of T to wander through a broad region of the search space and terminates with a small value to move downhill according to the steepest descent heuristic. The reduction of the temperature is specified by means of an annealing schedule. In this paper, it is inversely proportional to a logarithmic function of the iteration index, t, as follows:

T(t) = \frac{T_0}{\ln(1 + t)}   (1)

where T_0 is the initial temperature.

At any iteration, the current solution is x(t) and the corresponding fitness function value is f(x(t)). The probability of the next solution, x(t+1), being at x_p (a random solution nearby x(t)) depends both on the difference between the corresponding fitness values, \Delta F = f(x_p) - f(x(t)), and on the temperature. As a result, the position of the next solution is formulated as follows:

x(t+1) = \begin{cases} x_p & \text{if } \exp(-\Delta F / T) > r \\ x(t) & \text{otherwise} \end{cases}   (2)

where r is a uniform random number in [0, 1]. As can be seen, if \Delta F \le 0, x_p is always accepted. There is also a probability of selecting x_p as x(t+1) even though the function value at x_p is worse than that at x(t); this probability depends on the values of \Delta F and T. The process of producing new solutions continues until the maximum number of iterations, t_max, is met.
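To make this basic loop concrete, here is a minimal Python sketch of classical SA under the logarithmic schedule of Eq. (1) and the acceptance rule of Eq. (2). The Gaussian neighborhood move and the function name are illustrative assumptions, not the paper's exact implementation (the authors' code is in MATLAB):

```python
import math
import random

def sa_minimize(f, x0, T0, t_max, step=0.1):
    """Classical SA sketch: Eq. (1) cooling, Eq. (2) acceptance."""
    x, fx = list(x0), f(x0)
    for t in range(1, t_max + 1):
        T = T0 / math.log(1 + t)                 # Eq. (1): logarithmic cooling
        # Random nearby solution; the Gaussian move is an assumed neighborhood.
        xp = [xi + random.gauss(0.0, step) for xi in x]
        dF = f(xp) - fx                          # fitness difference, ΔF
        # Eq. (2): improvements (ΔF <= 0) are always accepted; worse moves are
        # accepted with probability exp(-ΔF/T), which shrinks as T decreases.
        if dF <= 0 or math.exp(-dF / T) > random.random():
            x, fx = xp, fx + dF
    return x, fx
```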

II.2. PSA Algorithm

Because SA is a solo-searcher, its performance depends heavily on the starting point: if the quality of the starting point is poor, the obtained result is not satisfactory. Furthermore, using an individual solution is not well suited to globally exploring the search space, and it also needs a long computational time to find a reasonable solution. These drawbacks become evident when the problem dimension is high and there are many local minima. In order to ameliorate the SA performance, this paper proposes a population-based version of the simulated annealing algorithm, named PSA.
As previously mentioned, PSA has a memory in which the
best experience found by each solution is memorized. In PSA,
each solution produces a new candidate solution in light of its
current position and the experiences stored in the memory.
Each solution evaluates the memory and selects
probabilistically two experiences as its own interesting elite
experiences. As the quality of an experience increases, the
probability of its selection increases, too. The new candidate
solution is produced as follows.
x_{p,j} = x_j(t) + w \left[ r_1 \left( x_j^{e1} - x_j(t) \right) + r_2 \left( x_j^{e2} - x_j(t) \right) \right], \quad j = 1, 2, \ldots, n   (3)

where x_p is the new candidate solution, x(t) denotes the current position, x^{e1} and x^{e2} are the first and second interesting elite experiences, j denotes the dimension index, n is the problem dimension, w is a time-varying weight which controls the importance of the elite experiences, and r_1 and r_2 are random numbers between zero and one. Figure 1 depicts the flowchart of the PSA algorithm.
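As an isolated illustration, a small Python sketch of this candidate-generation step (Eq. (3)) might look as follows. The equation does not say whether r_1 and r_2 are drawn once per candidate or once per dimension, so fresh draws per dimension are an assumption here:

```python
import random

def generate_candidate(x, e1, e2, w):
    """Eq. (3): pull the current position x toward two elite
    experiences e1 and e2, scaled by the time-varying weight w."""
    return [xj + w * (random.random() * (e1j - xj)
                      + random.random() * (e2j - xj))
            for xj, e1j, e2j in zip(x, e1, e2)]
```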
The steps of the proposed algorithm used in this study are listed below; a code sketch of these steps follows Fig. 1.
Step 1: Set the iteration index, t, to 1 and the initial
temperature, T0.
Step 2: Initialize a population of solutions uniformly at random in the search space.
Step 3: Evaluate the fitness of each solution.
Step 4: Set the memory at the current positions.
Step 5: Select interesting elite experiences from the memory
for each solution.
Step 6: Generate new candidate solutions for the population
based on Eq. (3).
Step 7: Evaluate the fitness values.
Step 8: Use Eq. (2) for each solution to determine whether the
old solution is replaced with the new one.
Step 9: Update the memory.
Step 10: Update iteration index and temperature.
Step 11: Repeat Step 5 to Step 10 until the maximum number
of iterations is met.
Step 12: Select the best solution from the memory as the final
result.
Fig. 1. Flowchart of PSA algorithm.
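The authors' implementation is in MATLAB (Section III); the sketch below is a Python reading of Steps 1-12, not their code. The linearly decreasing weight w, the fitness-proportional weighting inside the roulette wheel (described in Section III), and the greedy per-solution memory update are assumptions consistent with, but not dictated by, the text. Bound handling is omitted for brevity.

```python
import math
import random

def psa_minimize(f, n, low, high, pop_size=200, t_max=2000,
                 T0=1.0, w_max=1.25, w_min=0.25):
    """Sketch of PSA (Steps 1-12) for minimizing f on [low, high]^n."""
    # Steps 1-4: iteration counter, uniform random population, and a
    # memory initialized at the current positions.
    pop = [[random.uniform(low, high) for _ in range(n)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    mem = [list(x) for x in pop]
    mem_fit = list(fit)

    def pick_elite():
        # Roulette wheel over the memory: lower (better) fitness gets a
        # larger weight. This particular weighting scheme is an assumption.
        worst = max(mem_fit)
        weights = [worst - mf + 1e-12 for mf in mem_fit]
        return random.choices(mem, weights=weights)[0]

    for t in range(1, t_max + 1):
        T = T0 / math.log(1 + t)                        # Eq. (1)
        w = w_max - (w_max - w_min) * t / t_max         # assumed linear decay
        for i in range(pop_size):
            e1, e2 = pick_elite(), pick_elite()         # Step 5
            xp = [pop[i][j] + w * (random.random() * (e1[j] - pop[i][j])
                                   + random.random() * (e2[j] - pop[i][j]))
                  for j in range(n)]                    # Step 6: Eq. (3)
            fp = f(xp)                                  # Step 7
            dF = fp - fit[i]
            if dF <= 0 or math.exp(-dF / T) > random.random():  # Step 8: Eq. (2)
                pop[i], fit[i] = xp, fp
            if fit[i] < mem_fit[i]:                     # Step 9: update memory
                mem[i] = list(pop[i])
                mem_fit[i] = fit[i]

    best = min(range(pop_size), key=mem_fit.__getitem__)  # Step 12
    return mem[best], mem_fit[best]
```

For example, `psa_minimize(lambda x: sum(v * v for v in x), n=30, low=-100, high=100)` would drive the sphere function (f1 of Section III) toward zero.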

III. SIMULATION RESULTS

A. Benchmark Functions
It has been proven that under certain assumptions, no
single search method is the best on average for all problems
[15]. In order to evaluate the optimization power of the PSA algorithm without biasing the conclusions toward selected
problems, a large set of standard benchmark functions has
been employed. Table I presents the seventeen benchmark functions and their descriptions; a more detailed description of the functions can be found in [7].


The benchmark functions can be categorized into unimodal functions (f1-f7), multimodal functions (f8-f12), and low-dimensional multimodal functions (f13-f17). A function is unimodal if it has only one minimum. Multimodal functions have more than one minimum, and their number of local minima increases exponentially with the problem dimension [16].

TABLE I. BENCHMARK FUNCTIONS AND THEIR DIMENSIONS, SEARCH RANGES, AND GLOBAL MINIMA

Test function | n | S | f_min
f_1(x) = \sum_{i=1}^{n} x_i^2 | 30 | [-100, 100]^n | 0
f_2(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i| | 30 | [-10, 10]^n | 0
f_3(x) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2 | 30 | [-100, 100]^n | 0
f_4(x) = \max_i \{ |x_i|, 1 \le i \le n \} | 30 | [-100, 100]^n | 0
f_5(x) = \sum_{i=1}^{n-1} \left( 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right) | 30 | [-30, 30]^n | 0
f_6(x) = \sum_{i=1}^{n} (\lfloor x_i + 0.5 \rfloor)^2 | 30 | [-100, 100]^n | 0
f_7(x) = \sum_{i=1}^{n} i x_i^4 + \text{random}[0, 1) | 30 | [-1.28, 1.28]^n | 0
f_8(x) = \sum_{i=1}^{n} \left( x_i^2 - 10 \cos(2\pi x_i) + 10 \right) | 30 | [-5.12, 5.12]^n | 0
f_9(x) = -20 \exp\left( -0.2 \sqrt{\frac{1}{n} \sum_{i=1}^{n} x_i^2} \right) - \exp\left( \frac{1}{n} \sum_{i=1}^{n} \cos 2\pi x_i \right) + 20 + e | 30 | [-32, 32]^n | 0
f_{10}(x) = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left( \frac{x_i}{\sqrt{i}} \right) + 1 | 30 | [-600, 600]^n | 0
f_{11}(x) = \frac{\pi}{n} \{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 [1 + 10 \sin^2(\pi y_{i+1})] + (y_n - 1)^2 \} + \sum_{i=1}^{n} u(x_i, 10, 100, 4), where y_i = 1 + \frac{1}{4}(x_i + 1) and u(x_i, a, k, m) = k(x_i - a)^m if x_i > a; 0 if -a \le x_i \le a; k(-x_i - a)^m if x_i < -a | 30 | [-50, 50]^n | 0
f_{12}(x) = 0.1 \{ 10 \sin^2(3\pi x_1) + \sum_{i=1}^{n-1} (x_i - 1)^2 [1 + 10 \sin^2(3\pi x_{i+1})] + (x_n - 1)^2 [1 + \sin^2(2\pi x_n)] \} + \sum_{i=1}^{n} u(x_i, 5, 100, 4) | 30 | [-50, 50]^n | 0
f_{13}(x) = x_1^2 + 2 x_2^2 - 0.3 \cos(3\pi x_1) - 0.4 \cos(4\pi x_2) + 0.7 | 2 | [-100, 100]^n | 0
f_{14}(x) = \left[ \frac{1}{500} + \sum_{j=1}^{25} \frac{1}{j + \sum_{i=1}^{2} (x_i - a_{ij})^6} \right]^{-1} | 2 | [-65.536, 65.536]^n | 0.998
f_{15}(x) = \sum_{i=1}^{11} \left[ a_i - \frac{x_1 (b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4} \right]^2 | 4 | [-5, 5]^n | 3.075e-4
f_{16}(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{n} a_{ij} (x_j - p_{ij})^2 \right) | 3 | [0, 1]^n | -3.863
f_{17}(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{n} a_{ij} (x_j - p_{ij})^2 \right) | 6 | [0, 1]^n | -3.322

B. Experimental Setting

To conduct the numerical experiments, the population size is set to 200. The maximum number of iterations is 2000 for low-dimensional (n ≤ 6) benchmark functions and 5000 for high-dimensional benchmark functions. The time-varying weight, w, is defined as a decreasing function from w_max = 1.25 to w_min = 0.25. A roulette wheel is used as the probabilistic mechanism by which solutions select their interesting elite experiences from the memory. T_0 is calculated by T_0 = -\Delta f_0 / \ln(\chi_0), where \chi_0 = 0.2 is the acceptance rate for the initial worse solution and \Delta f_0 is the initial difference between the worst and the optimal solution. It should be mentioned that a parameter fine-tuning process has been omitted here.

The code of the proposed optimization method is written in the MATLAB computational environment, and 30 independent runs are executed for each benchmark function. For f1-f17, the performance of the PSA algorithm is compared with the results obtained by SA with Gaussian mutation (GSA) and SA with non-uniform mutation (nonu-SA) reported in [7]. Because PSA is a population-based algorithm, it is unfair to compare its performance only with those obtained by solo-searcher algorithms. As a result, to observe the PSA performance against population-based algorithms, we also compare the results of the proposed algorithm with those obtained by the genetic algorithm (GA), particle swarm optimization (PSO), and the group search optimizer (GSO) reported in [14]. The results have been adopted directly from the literature for comparison.
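To make the initial-temperature rule concrete, a small sketch follows; the function name is assumed, and in practice \Delta f_0 would have to be estimated, e.g., from the initial population:

```python
import math

def initial_temperature(f_worst, f_optimal, chi0=0.2):
    """T0 = -Δf0 / ln(χ0): at temperature T0, an initial uphill move of
    size Δf0 is accepted with probability χ0, since exp(-Δf0/T0) = χ0."""
    delta_f0 = f_worst - f_optimal
    return -delta_f0 / math.log(chi0)

# With chi0 = 0.2, ln(0.2) ≈ -1.609, so T0 ≈ 0.62 * Δf0.
```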
C. Unimodal Functions

The PSA algorithm has been tested on a set of unimodal functions and compared with the other algorithms. Table II summarizes the mean and standard deviation of the function values over 30 runs found by PSA, GSA, and nonu-SA, the rank of each algorithm in solving the unimodal functions f1-f7, and the best performance of each algorithm among the runs. The results of the GA, PSO, and GSO algorithms, obtained over 1000 runs, are listed in Table III in comparison with the results found by PSA.

Table II indicates that PSA performs better than GSA and nonu-SA on all the unimodal functions. Observe that PSA noticeably outperforms the other algorithms in solving functions f1, f2, f4, f6, and f7. On function f6, PSA finds the global minimum in all runs. In summary, the search capability of the three SA-based algorithms can be ordered as PSA > nonu-SA > GSA. From Table III, we can see that PSA performs better than GA, PSO, and GSO on all the functions except f3. On f3, though PSA produces better results than GA and GSO, it is outperformed by PSO. Based on these results, the order of the algorithms in terms of their search potential is PSA > PSO > GSO > GA.
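For reference, the unimodal test functions are direct to code; a brief Python sketch of f1 (sphere) and f6 (step), following the Table I definitions (the floor in f6 is the standard reading of the step function):

```python
import math

def f1(x):
    """Sphere function: sum of squares, minimum 0 at the origin."""
    return sum(v * v for v in x)

def f6(x):
    """Step function: squared floored coordinates, minimum 0."""
    return sum(math.floor(v + 0.5) ** 2 for v in x)
```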


TABLE II. COMPARISON OF PSA WITH GSA AND NONU-SA ON BENCHMARK FUNCTIONS F1-F7. ALL RESULTS HAVE BEEN AVERAGED OVER 30 RUNS.

Function | Index | PSA | GSA | nonu-SA
f1 | Mean | 1.17e-45 | 15.88 | 1.95e-8
   | Std. | 2.56e-45 | 2.28 | 2.58e-8
   | Rank | 1 | 3 | 2
   | Best | 1.45e-49 | 10.86 | 1.73e-9
f2 | Mean | 1.44e-18 | 7.51e-5 | 3.32e-4
   | Std. | 7.73e-18 | 4.07e-6 | 9.63e-3
   | Rank | 1 | 2 | 3
   | Best | 5.41e-26 | 14.69 | 1.78e-2
f3 | Mean | 5.19 | 84.31 | 10.65
   | Std. | 3.35 | 20.56 | 5.89
   | Rank | 1 | 3 | 2
   | Best | 1.09 | 57.07 | 2.24
f4 | Mean | 5.46e-8 | 5.70 | 0.38
   | Std. | 1.08e-7 | 4.29 | 0.20
   | Rank | 1 | 3 | 2
   | Best | 1.59e-9 | 1.98 | 0.073
f5 | Mean | 26.04 | 4.75e3 | 53.38
   | Std. | 0.13 | 1.24e3 | 56.86
   | Rank | 1 | 3 | 2
   | Best | 25.71 | 2.82e3 | 19.53
f6 | Mean | 0 | 17.67 | 1.37
   | Std. | 0 | 2.17 | 1.27
   | Rank | 1 | 3 | 2
   | Best | 0 | 12 | 0
f7 | Mean | 4.82e-5 | 27.87 | 4.89e-2
   | Std. | 2.40e-5 | 6.19 | 2.49e-2
   | Rank | 1 | 3 | 2
   | Best | 1.82e-5 | 14.28 | 2.01e-2
Average rank | | 1.00 | 2.86 | 2.14
Final rank | | 1 | 3 | 2

TABLE III. COMPARISON OF PSA WITH GA, PSO, AND GSO ON BENCHMARK FUNCTIONS F1-F7.

Function | Index | PSA | GA | PSO | GSO
f1 | Mean | 1.17e-45 | 3.1711 | 3.6927e-37 | 1.9481e-8
   | Std. | 2.56e-45 | 1.6621 | 2.4598e-36 | 1.1629e-8
   | Rank | 1 | 4 | 2 | 3
f2 | Mean | 1.44e-18 | 0.5771 | 2.9168e-24 | 3.7039e-5
   | Std. | 7.73e-18 | 0.1306 | 1.1362e-23 | 8.6185e-5
   | Rank | 1 | 4 | 2 | 3
f3 | Mean | 5.19 | 9749.9145 | 1.1979e-3 | 5.7829
   | Std. | 3.35 | 2594.9593 | 2.1109e-3 | 3.6813
   | Rank | 2 | 4 | 1 | 3
f4 | Mean | 5.46e-8 | 7.9610 | 0.4123 | 0.1078
   | Std. | 1.08e-7 | 1.5063 | 0.2500 | 3.9981e-2
   | Rank | 1 | 4 | 3 | 2
f5 | Mean | 26.04 | 338.5616 | 37.3582 | 49.8359
   | Std. | 0.13 | 361.497 | 32.1436 | 30.1771
   | Rank | 1 | 4 | 2 | 3
f6 | Mean | 0 | 3.6970 | 0.146 | 1.6000e-2
   | Std. | 0 | 1.9517 | 0.4182 | 0.1333
   | Rank | 1 | 4 | 3 | 2
f7 | Mean | 4.82e-5 | 0.1045 | 9.9024e-3 | 7.3773e-2
   | Std. | 2.40e-5 | 3.6217e-2 | 3.5380e-2 | 9.2557e-2
   | Rank | 1 | 4 | 2 | 3
Average rank | | 1.14 | 4.00 | 2.14 | 2.71
Final rank | | 1 | 4 | 2 | 3

D. Multimodal Functions

D.1. Multimodal Functions with Many Local Minima

In the multimodal functions f8-f12, the number of local minima increases exponentially with the problem dimension. Therefore, multimodal functions are frequently considered difficult to optimize. Table IV presents the results of PSA in comparison with GSA and nonu-SA on f8-f12 over 30 runs. For four of the benchmark functions (f9-f12), PSA finds better results than the other algorithms, and it noticeably outperforms them on these functions. In solving function f10, PSA reaches the global minimum in all runs. In optimizing f8, PSA outperforms nonu-SA but is outperformed by GSA. As a result, the search power of the algorithms can be ordered as PSA > nonu-SA > GSA.

TABLE IV. COMPARISON OF PSA WITH GSA AND NONU-SA ON BENCHMARK FUNCTIONS F8-F12. ALL RESULTS HAVE BEEN AVERAGED OVER 30 RUNS.

Function | Index | PSA | GSA | nonu-SA
f8 | Mean | 1.69 | 2.78e-2 | 44.58
   | Std. | 1.09 | 2.43e-1 | 16.66
   | Rank | 2 | 1 | 3
   | Best | 2.46e-8 | 2.42e-2 | 15.92
f9 | Mean | 4.44e-15 | 19.84 | 0.72
   | Std. | 0 | 0.13 | 0.89
   | Rank | 1 | 3 | 2
   | Best | 4.44e-15 | 19.56 | 3.76e-6
f10 | Mean | 0 | 0.60 | 1.39e-2
    | Std. | 0 | 7.80e-2 | 1.30e-2
    | Rank | 1 | 3 | 2
    | Best | 0 | 0.30 | 1.98e-7
f11 | Mean | 1.57e-32 | 13.28 | 0.44
    | Std. | 5.47e-48 | 4.96 | 0.69
    | Rank | 1 | 3 | 2
    | Best | 1.57e-32 | 4.48 | 1.24e-10
f12 | Mean | 1.35e-31 | 2.29 | 4.03e-3
    | Std. | 6.57e-47 | 0.28 | 5.39e-3
    | Rank | 1 | 3 | 2
    | Best | 1.35e-31 | 1.49 | 0.69e-9
Average rank | | 1.20 | 2.60 | 2.20
Final rank | | 1 | 3 | 2

A comparison between the results obtained by PSA and those of GA, PSO, and GSO is given in Table V. The results of GA, PSO, and GSO have been averaged over 1000 runs. PSA markedly outperforms GA, PSO, and GSO on f9-f12. On f8, PSA produces better results than PSO but is outperformed by GA and GSO. In this case, the overall search performance is PSA > GSO > PSO > GA.

TABLE V. COMPARISON OF PSA WITH GA, PSO, AND GSO ON BENCHMARK FUNCTIONS F8-F12.

Function | Index | PSA | GA | PSO | GSO
f8 | Mean | 1.69 | 0.6509 | 20.7863 | 1.0179
   | Std. | 1.09 | 0.3594 | 5.9400 | 0.9509
   | Rank | 3 | 1 | 4 | 2
f9 | Mean | 4.44e-15 | 0.8678 | 1.3404e-3 | 2.6548e-5
   | Std. | 0 | 0.2805 | 4.2388e-2 | 3.0820e-5
   | Rank | 1 | 4 | 3 | 2
f10 | Mean | 0 | 1.0038 | 0.2323 | 3.0792e-2
    | Std. | 0 | 6.7545e-2 | 0.4434 | 3.0867e-2
    | Rank | 1 | 4 | 3 | 2
f11 | Mean | 1.57e-32 | 4.3572e-2 | 3.9503e-2 | 2.7648e-11
    | Std. | 5.47e-48 | 5.0579e-2 | 9.1424e-2 | 9.1674e-11
    | Rank | 1 | 4 | 3 | 2
f12 | Mean | 1.35e-31 | 0.1681 | 5.0519e-2 | 4.6948e-5
    | Std. | 6.57e-47 | 7.0681e-2 | 0.5691 | 7.001e-4
    | Rank | 1 | 4 | 3 | 2
Average rank | | 1.40 | 3.40 | 3.20 | 2.00
Final rank | | 1 | 4 | 3 | 2


D.2. Multimodal Functions with a Few Local Minima

Table VI describes the results of PSA in comparison with GSA and nonu-SA for functions f13-f17 over 30 runs. It is evident that PSA performs better than the other algorithms on all of these benchmark functions. In solving f13, PSA discovers the global minimum in all runs. The performance of PSA and nonu-SA is similar on f13 and f16. From Table VI, the final rank of the algorithms in solving multimodal functions with a few local minima is PSA > nonu-SA > GSA.

TABLE VI. COMPARISON OF PSA WITH GSA AND NONU-SA ON BENCHMARK FUNCTIONS F13-F17. ALL RESULTS HAVE BEEN AVERAGED OVER 30 RUNS.

Function | Index | PSA | GSA | nonu-SA
f13 | Mean | 0 | 6.38e-3 | 0
    | Std. | 0 | 5.53e-3 | 0
    | Rank | 1 | 3 | 1
    | Best | 0 | 7.29e-4 | 0
f14 | Mean | 0.998 | 11.85 | 1.19
    | Std. | 1.11e-16 | 7.93 | 0.48
    | Rank | 1 | 3 | 2
    | Best | 0.998 | 0.998 | 0.998
f15 | Mean | 6.72e-4 | 5.69e-3 | 7.62e-4
    | Std. | 4.28e-5 | 1.06e-2 | 2.46e-4
    | Rank | 1 | 3 | 2
    | Best | 5.25e-4 | 7.97e-4 | 4.78e-4
f16 | Mean | -3.8628 | -3.8566 | -3.8628
    | Std. | 2.66e-15 | 3.37e-3 | 1.07e-6
    | Rank | 1 | 3 | 1
    | Best | -3.8628 | -3.86 | -3.8628
f17 | Mean | -3.322 | -3.04 | -3.2619
    | Std. | 8.88e-16 | 0.39 | 6.11e-2
    | Rank | 1 | 3 | 2
    | Best | -3.322 | -3.23 | -3.322
Average rank | | 1.00 | 3.00 | 1.60
Final rank | | 1 | 3 | 2

From Table VII we can observe that, in comparison with GA, PSO, and GSO, PSA achieves better results on benchmark functions f14, f16, and f17. On f15, PSA outperforms GA but is outperformed by PSO and GSO. On two of the benchmark functions the performance of PSA and GSO is similar; however, over f14-f17 as a whole, the performance of GSO is better than that of PSA. The results of GA, PSO, and GSO have been averaged over 50 runs. Because function f13 was not solved in [14], the comparison has been made without considering this function. From Table VII we can see that the order of the search performance of the four algorithms is GSO > PSA > GA > PSO.

TABLE VII. COMPARISON OF PSA WITH GA, PSO, AND GSO ON BENCHMARK FUNCTIONS F14-F17.

Function | Index | PSA | GA | PSO | GSO
f14 | Mean | 0.9980 | 0.9989 | 1.0239 | 0.9980
    | Std. | 1.11e-16 | 4.4333e-3 | 0.145 | 0
    | Rank | 1 | 3 | 4 | 1
f15 | Mean | 6.72e-4 | 7.0878e-3 | 3.8074e-4 | 3.7713e-4
    | Std. | 4.28e-5 | 7.8549e-3 | 2.5094e-4 | 2.5973e-4
    | Rank | 3 | 4 | 2 | 1
f16 | Mean | -3.8628 | -3.8624 | -3.8582 | -3.8628
    | Std. | 2.66e-15 | 6.2841e-4 | 3.2127e-3 | 3.8430e-6
    | Rank | 1 | 3 | 4 | 1
f17 | Mean | -3.322 | -3.2627 | -3.1846 | -3.2697
    | Std. | 8.88e-16 | 6.0398e-2 | 6.1053e-2 | 5.9647e-2
    | Rank | 1 | 3 | 4 | 2
Average rank | | 1.50 | 3.25 | 3.50 | 1.25
Final rank | | 2 | 3 | 4 | 1

In order to investigate the convergence rate of the PSA algorithm, the seventeen benchmark test functions were considered and the results averaged over 30 runs. For simplicity, only a subset is presented in Figs. 2 to 10: high-dimensional functions over the first 2000 iterations and low-dimensional functions over the first 200 iterations. The rapid convergence of PSA can be seen in these figures.

Fig. 2. Average results over 30 runs obtained by PSA on f1.

Fig. 3. Average results over 30 runs obtained by PSA on f3.

Fig. 4. Average results over 30 runs obtained by PSA on f4.


Fig. 5. Average results over 30 runs obtained by PSA on f6.

Fig. 6. Average results over 30 runs obtained by PSA on f8.

Fig. 7. Average results over 30 runs obtained by PSA on f9.

Fig. 8. Average results over 30 runs obtained by PSA on f10.

Fig. 9. Average results over 30 runs obtained by PSA on f14.

Fig. 10. Average results over 30 runs obtained by PSA on f17.

The promising results obtained by PSA in comparison with both solo-searchers and population-based searchers accentuate the high optimization capability of the proposed population-based SA algorithm. It is clear that using a population can greatly improve SA performance and considerably overcome its deficiencies. The success of PSA stems not only from employing a population to seek the search space but also from its memory, which provides a social behavior among the solutions. The memory gives the solutions the opportunity to know the search space better and to decide about their new solutions.


IV. CONCLUSION AND FUTURE RESEARCH

The solo-search nature of the SA algorithm makes it prone to becoming trapped in local minima. In order to hedge against being unlucky in the starting point or in the decisions that SA makes, a population-based simulated annealing (PSA) algorithm has been proposed in this paper. PSA utilizes the ability of a population to efficiently seek different parts of the search space. It has a memory by which solutions share their information and determine their new probable positions. In order to evaluate the optimization power of the proposed algorithm, experiments have been conducted on a set of unimodal and multimodal benchmark functions. In comparison with solo-search algorithms, PSA produces promising results. PSA also shows competitive performance when compared with other population-based algorithms. It can be concluded that PSA is a powerful optimization algorithm with which better results than those of other algorithms may be obtained. Further research work related to PSA is being carried out to efficiently optimize other large-scale cases (see the benchmarks in [17], [18]).

REFERENCES

[1] A. Prügel-Bennett, "Benefits of a population: five mechanisms that advantage population-based algorithms," IEEE Transactions on Evolutionary Computation, vol. 14, pp. 500-517, 2010.
[2] V. Černý, "Thermodynamical approach to the traveling salesman problem: an efficient simulation algorithm," Journal of Optimization Theory and Applications, vol. 45, pp. 41-51, 1985.
[3] C. Low, "Simulated annealing heuristic for flow shop scheduling problems with unrelated parallel machines," Computers & Operations Research, vol. 32, pp. 2013-2025, 2005.
[4] C. Paik and S. Soni, "A simulated annealing based solution approach for the two-layered location registration and paging areas partitioning problem in cellular mobile networks," European Journal of Operational Research, vol. 178, pp. 579-594, 2007.
[5] M. Locatelli, "Convergence of a simulated annealing algorithm for continuous global optimization," Journal of Global Optimization, vol. 18, pp. 219-233, 2000.
[6] X. Yao, "A new simulated annealing algorithm," International Journal of Computer Mathematics, vol. 56, pp. 161-168, 1995.
[7] Z. Xinchao, "Simulated annealing algorithm with adaptive neighborhood," Applied Soft Computing, vol. 11, pp. 1827-1836, 2010.
[8] L. Ingber, "Adaptive simulated annealing (ASA): lessons learned," 1996.
[9] A. Bevilacqua, "A methodological approach to parallel simulated annealing on an SMP system," Journal of Parallel and Distributed Computing, vol. 62, pp. 1548-1570, 2002.
[10] O. Cordon, F. Moya, and C. Zarco, "A new evolutionary algorithm combining simulated annealing and genetic programming for relevance feedback in fuzzy information retrieval systems," Soft Computing, vol. 6, pp. 308-319, 2002.
[11] G. Pajares and J. M. De La Cruz, "On combining support vector machines and simulated annealing in stereovision matching," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, pp. 1646-1657, 2004.
[12] S. Salcedo-Sanz, R. Santiago-Mozos, and C. Bousono-Calzon, "A hybrid Hopfield network-simulated annealing approach for frequency assignment in satellite communications systems," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, pp. 1108-1116, 2004.
[13] L. Wang and L. Zhang, "Stochastic optimization using simulated annealing with hypothesis test," Applied Mathematics and Computation, vol. 174, pp. 1329-1342, 2006.
[14] S. He, Q. H. Wu, and J. R. Saunders, "Group search optimizer: an optimization algorithm inspired by animal searching behavior," IEEE Transactions on Evolutionary Computation, vol. 13, pp. 973-990, 2009.
[15] D. H. Wolpert and W. G. Macready, "No free lunch theorems for optimization," IEEE Transactions on Evolutionary Computation, vol. 1, pp. 67-82, 1997.
[16] H.-P. Schwefel, Evolution and Optimum Seeking. New York: Wiley, 1995.
[17] S. Mahdavi, M. E. Shiri, and S. Rahnamayan, "Metaheuristics in large-scale global continues optimization: a survey," Information Sciences, vol. 295, pp. 407-428, 2015.
[18] M. N. Omidvar, X. Li, and K. Tang, "Designing benchmark problems for large-scale continuous optimization," Information Sciences, vol. 316, pp. 419-436, 2015.
