
Journal of Parallel and Distributed Computing 142 (2020) 36–45


Hybridization of firefly and Improved Multi-Objective Particle Swarm Optimization algorithm for energy efficient load balancing in Cloud Computing environments

A. Francis Saviour Devaraj a, Mohamed Elhoseny b, S. Dhanasekaran a, E. Laxmi Lydia c, K. Shankar d

a Department of Computer Science and Engineering, Kalasalingam Academy of Research and Education, India
b Faculty of Computers and Information, Mansoura University, Egypt
c Computer Science and Engineering, Vignan's Institute of Information Technology (Autonomous), India
d Department of Computer Applications, Alagappa University, Karaikudi, India

ARTICLE INFO

Article history:
Received 23 January 2020
Received in revised form 12 March 2020
Accepted 18 March 2020
Available online 11 April 2020

Keywords:
Cloud computing
Firefly
Load balancing
Task scheduling
IMPSO

ABSTRACT

Load balancing, in the Cloud Computing (CC) environment, is defined as the method of splitting workloads and computing resources. It enables enterprises to manage workload or application demands by distributing the resources among computers, networks or servers. In this research article, a new load balancing algorithm is proposed as a hybrid of the firefly and Improved Multi-Objective Particle Swarm Optimization (IMPSO) techniques, abbreviated as FIMPSO. This technique deploys the Firefly (FF) algorithm to minimize the search space whereas the IMPSO technique is implemented to identify the enhanced response. The IMPSO algorithm works by selecting the global best (gbest) particle with a small distance of point to a line. With the application of the minimum distance from a point to a line, the gbest particle candidates can be elected. The proposed FIMPSO algorithm achieved an effective average load and makespan and enhanced essential measures like proper resource usage and response time of the tasks. The simulation outcome showed that the proposed FIMPSO model exhibited an effective performance when compared with other methods. From the simulation outcome, it is understood that the FIMPSO algorithm yielded an effective result with the least average response time of 13.58 ms, maximum CPU utilization of 98%, memory utilization of 93%, reliability of 67% and throughput of 72%, along with a makespan of 148, which was superior to all the other compared methods.

© 2020 Elsevier Inc. All rights reserved.

∗ Corresponding author.
E-mail addresses: saviodev@gmail.com (A. Francis Saviour Devaraj), mohamed_elhoseny@mans.edu.eg (M. Elhoseny), srividhans@gmail.com (S. Dhanasekaran), elaxmi2002@yahoo.com (E. Laxmi Lydia), drkshankar@ieee.org (K. Shankar).
https://doi.org/10.1016/j.jpdc.2020.03.022
0743-7315/© 2020 Elsevier Inc. All rights reserved.

1. Introduction

Cloud Computing (CC) is a rapidly developing concept in the domain of distributed computing which can be applied in diverse fields such as data storage, data analysis and IoT applications [10]. CC is an advanced technology that can change the way traditional businesses work. It offers various facilities to registered clients in the form of online services, which helps avoid user investment in computing architecture. Some of the services offered by CC are Infrastructure-as-a-Service (IaaS), Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS) [17]. For every service, a user should request the Cloud Service Provider (CSP) via the internet. The CSP has to manage the resources in order to satisfy the requests received from the clients. The CSP leverages a scheduling technique to schedule the input requests as well as to control the processing resources in an effective manner. Task scheduling and resource management provide a high economic rate and resource utilization up to the specified limits. The main barriers in the utilization of CC operations are allocating and scheduling the resources. In order to overcome these complexities, many researchers have shown interest in the task scheduling process in CC. The main work of task scheduling is to organize the input requests in a definite way so that every resource is utilized in an efficient manner. Each service is provided to numerous clients and several tasks might be executed at the same time. If the system does not apply a scheduling method, the result is a longer waiting time for the processes to be performed. Unfortunately, a few requests get terminated due to long waiting times that exceed the maximum limit. While scheduling is carried out, the concerned scheduler is required to monitor constraints such as the behavior of the task, the size of the request, the execution time of the task, the available resources and the load induced on the resources.

Task scheduling is the main problem involved in CC. The merit of CC is its efficient usage of all the resources, which

can occur only when the task scheduling is carried out properly [2]. Hence, both task scheduling and resource allocation have been considered mandatory operations for all processes to be performed. At present, people using the internet can gather information from any location at any time without any idea about the host structure. These kinds of hosting infrastructures include different systems and abilities which are controlled by the CSP. CC improves the potential of the host infrastructure that makes use of internet services. The CSP gains efficiency by facilitating the clients through Cloud Services (CS). Clients apply the CS to employ the whole range of processing services, from software to hardware. The services offered in CC follow the pay-as-you-go model. The utilization of resources may increase or decrease depending on the CSP users and their demand for applications. This is considered a merit of CC, though it comes with additional cost. The CS user can choose different services based on the application required. However, this freedom of optional services may lead to challenging issues that need to be detected properly. CC research has two main parts, namely task scheduling and resource allocation. The efficient usage of the resources is based on scheduling and load balancing techniques that avoid the arbitrary allocation of resources. CC aims at resolving difficult operations using scheduling techniques.

Load balancing and task scheduling are important players in the CC environment; thus, the current research work proposes a novel technique in order to obtain better results with the help of hybridizing the firefly and Improved Multi-Objective Particle Swarm Optimization (IMPSO) techniques, abbreviated as FIMPSO. It applies the Firefly (FF) algorithm to minimize the search space whereas the IMPSO technique is implemented to identify the enhanced response. IMPSO works by selecting the global best (gbest) particle with a small distance of point to the line. With the application of the minimum distance from a point to a line, the gbest particle candidates can be elected. The simulation outcome from the proposed model was found to be highly optimized when compared with alternate models [1,3,12,16,24].

The upcoming sections are planned as follows. Relevant studies are discussed in Section 2. The proposed FIMPSO algorithm is described in Section 3 and validated in Section 4. At last, the conclusions are drawn in Section 5.

2. Related works

Cloud Computing (CC) is described as virtual distributed computing that shares the maximum resources among its customers across a large area with the help of the internet. The resources can be requested and applied by many cloudlets simultaneously. Clients can access the centralized resources at any time, anywhere over the internet. In the literature [5], a CloudSim simulator was proposed to simulate the cloudlets on a virtual system. The load balancers allocate the cloudlets' functions to the data centers, while the whole workload and the few classes present in scheduling are explained. Buyya et al. [26] followed a principle to schedule the works from one particle to another, as mentioned. This scheduling process depends upon the energy of a node in managing the workload. If the load is passed on to the Virtual Machines (VM), it results in underbalanced nodes. Khiyaita et al. [15] described various models, their importance and the algorithms of load balancing to resolve the issues caused by imbalanced nodes.

Load scheduling is the process of allocating and operating cloudlets on VMs in an optimal way to decrease the computational cost. It is applied to decrease the transfer time, waiting time, response time and execution time, in addition to the operational cost. Chaudhary and Kumar [6] opined that scheduling might be dynamic or static while allocating the load, as depicted in their tabular analysis. Dynamic techniques apply bio-inspired algorithms for scheduling the workloads. The swarm-based algorithms are employed to allocate the load in CC in order to fix the consecutive objects based on the velocity and location of particles. There are merits and demerits associated with the different load balancing techniques. The scheduling model is enhanced with the round robin algorithm. This task scheduling process works on the basis of meta-heuristic models. The roulette selection model, based on random scheduling, is determined after several iterations and fitness values. The static algorithm does not come under the category of meta-heuristic techniques to manage the load in the cloud. This technique incurs low operational cost when compared to primitive models such as First Come First Serve (FCFS) and Round Robin scheduling. Kennedy and Eberhart [13] presented a meta-heuristic approach called Particle Swarm Optimization (PSO). This work depends on a set of particle methods that follow a flock of birds moving from one source to another. The optimal position in the search space relies on the velocity of the previous particle. In the literature, 11 scheduling techniques, covering static and alternate models of GA, SA, Min–Min, Max–Min, Tabu Search and so on, are described. Pacini et al. [18] discussed the operation of swarm optimization that resulted in improved solutions.

In the literature [27], a constraint-based PSO scheme was used to allocate the tasks in sequential nodes. Under the application of the CC platform, the PSO algorithm was described in the literature [19]. The appropriate particle is evaluated through the fitness function. Then, the velocity of a particle is obtained by the particle's best position (pbest) in the process and the global best position (gbest) in the swarm. When compared to existing works and relevant algorithms, it reduces the total cost of computation. It concentrates on the concurrence of the Ant Colony Optimization (ACO) technique. As the name itself describes, it discusses the role of an ant while it finds its food. It offers the application of a PSO algorithm to distribute the resources on VMs in the cloud.

Garg and Buyya [8] signified the Network Simulator to schedule each load from the system. The study deployed PSO for scheduling cloudlets based on VMs in CloudSim, which depends upon a group of particles in the search space. A new searching mechanism was introduced based on Newton's law of gravity, named the 'Gravitational Search Algorithm' (GSA). It did not consider the operation of storing optimal positions for future applications. Here, the fitness value can be used to compute the size of the particle. The location of the secondary particle is nothing but the sum of the velocity and position of the existing particle. GSA is employed in filter modeling functions. The force is measured using the gravitational constant, which is a significant one. The application of fuzzy segmentation based on the gravitational searching model was done in the literature [11] on a collection of satellite images in order to identify the information.

In the previous study [20], noise filtering of ultrasound images was used to assess the GSA model. A novel element, the Binary Gravitational Search Algorithm (BGSA), was presented to optimize the scheduling operation produced from various platforms. A hybrid GSA was introduced in the literature [14] using orthogonal crossover as well as pattern searching to schedule the load in the CC environment. Two efficient GSA optimization schemes were developed in the previous study [23] to improve the diversity of particles and utilize the memory models in mathematical calculations. It established security measures on the basis of behavioral graphs and implied the concentration on load balance as well as service allotment in the CC platform. The GSA depends upon repulsive and attractive forces to resolve the optimization issue.

There are recent works conducted in this research arena [4,21,22,25], in which the study [4] proposed PROUD, a new approach

Table 1
Summary of related works.
Reference Objective Algorithm used
[6] Deployed a scheduling algorithm for load allocation in a dynamic or static way Swarm intelligence techniques, Roulette selection
[13] Developed a scheduling technique in CC environment PSO algorithm
[18] Analysis of different algorithms Swarm intelligence techniques
[27] To allocate the tasks in sequential nodes Constraint-based PSO scheme
[8] For scheduling cloudlets based on VMs in CloudSim PSO algorithm
[14] Developed a scheduling technique in CC environment Hybrid GSA using orthogonal crossover

to secure the outsourced data designcryption procedure for edge servers so as to lessen the computational overhead on the client side. Besides, a new CLoud scientific wOrkflow SchedUling algoRithm based on attack–defensE game model (CLOSURE) [25] was also proposed in the literature. Table 1 summarizes the reviewed works in terms of their objectives and underlying algorithms. Though several works are available in the literature, some of the methods do not consider the characteristics of the CC environment, and the performance still needs to be improved in several aspects.

3. Proposed system

The proposed FIMPSO algorithm involves the hybridization of the FF and IMPSO algorithms. The presented FIMPSO algorithm reaches an effective average load and makespan and enhances essential measures such as proper resource usage and response time of the tasks. In the following subsections, the processes involved in the proposed algorithm are discussed in detail.
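Before each component is detailed, the overall hybrid flow can be sketched in code. The following is a minimal, illustrative Python sketch, not the paper's MATLAB implementation: the function names, the parameter values (e.g., γ = 0.01, α = 0.1) and the simplified gbest selection (best-so-far instead of the MDPL rule of Section 3.3) are assumptions made for illustration.

```python
import math
import random

def firefly_phase(objective, pop, n_iter=20, beta0=1.0, gamma=0.01, alpha=0.1):
    """FF stage: each firefly moves toward every brighter one, which
    contracts the search space and yields a good initial swarm."""
    for _ in range(n_iter):
        for i in range(len(pop)):
            for j in range(len(pop)):
                if objective(pop[j]) < objective(pop[i]):  # j is brighter (minimization)
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)   # attraction decays with distance
                    pop[i] = [a + beta * (b - a) + alpha * (random.random() - 0.5)
                              for a, b in zip(pop[i], pop[j])]
    return pop

def impso_phase(objective, pop, n_iter=50, w_max=0.9, w_min=0.4, t1=2.0, t2=2.0):
    """IMPSO stage: PSO refinement with inertia decreasing from 0.9 to 0.4.
    The MDPL-based gbest selection is simplified here to 'best found so far'."""
    dim = len(pop[0])
    vel = [[0.0] * dim for _ in pop]
    pbest = [p[:] for p in pop]
    gbest = min(pop, key=objective)[:]
    for z in range(n_iter):
        w = w_max - (w_max - w_min) * z / max(1, n_iter - 1)
        for l in range(len(pop)):
            r1, r2 = random.random(), random.random()
            vel[l] = [w * v + t1 * r1 * (pb - x) + t2 * r2 * (gb - x)
                      for v, x, pb, gb in zip(vel[l], pop[l], pbest[l], gbest)]
            pop[l] = [x + v for x, v in zip(pop[l], vel[l])]
            if objective(pop[l]) < objective(pbest[l]):
                pbest[l] = pop[l][:]
                if objective(pop[l]) < objective(gbest):
                    gbest = pop[l][:]
    return gbest

def fimpso(objective, dim=2, pop_size=20, seed=1):
    """FF narrows the search space, then IMPSO identifies the enhanced response."""
    random.seed(seed)
    pop = [[random.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(pop_size)]
    return impso_phase(objective, firefly_phase(objective, pop))
```

For example, `fimpso(lambda x: sum(v * v for v in x))` drives the sphere objective close to the origin; the two-stage structure mirrors the hybridization described above.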

3.1. FF algorithm

The proposed model is established as a hybrid of FF and IMPSO, which is explained briefly here. Generally, the FF technique is built on three premises [7]:

1. Every FF belongs to the same gender, so one firefly attracts the others regardless of sex.
2. In fireflies, the attraction is proportional to brightness, i.e., a less bright FF is drawn toward a more intense FF. The attraction and brightness of a FF are reduced as the distance between the fireflies increases. If a FF is not brighter than a specific fly, it moves randomly, in such a way that no flies are attracted.
3. An objective function is used to compute the brightness of the fireflies.

The process involved in the FF algorithm is illustrated in Fig. 1. The fundamental concepts involved in the FF algorithm are light intensity and attraction.

Fig. 1. Flowchart of FF algorithm.

A firefly's attraction is measured by defining its intensity, whereas the brightness is acquired from an objective function while optimizing an issue. Light intensity and attraction are estimated through the following equations, where I0 and β0 denote the initial brightness and attraction respectively:

    I = I0 e^(−γ r^2)                                        (1)

    β = β0 e^(−γ r^2)                                        (2)

The distance between firefly i at position xi and firefly j at position xj is given by the Euclidean distance formula, where xik indicates the position of FF i in spatial dimension k:

    rij = ||xi − xj|| = sqrt( Σ_{k=1}^{d} (xik − xjk)^2 )    (3)

The movement of FF i toward the brighter FF j is implied by the following equation:

    xi = xi + β0 e^(−γ r^2) (xj − xi) + α ε                  (4)

The three terms of Eq. (4) capture:
• the present state of the FF,
• the attraction toward the more intense FF under a fixed absorption coefficient,
• the random movement, where α scales a random number ε drawn from a uniform distribution in the interval [0, 1].

3.2. IMPSO algorithm

The flowchart of the proposed IMPSO technique is depicted in Fig. 2. The steps involved in the IMPSO algorithm are given below.

(1) Initialization of PSO:
    (a) Fix the swarm size J as well as the particle dimension D;
    (b) Set a computation range for the variable U(l) values, i.e., U(l)_min and U(l)_max;

Fig. 2. Flowchart of IMPSO algorithm.

    (c) Particle speed control: var(l)_max = U(l)_max − U(l)_min;
    (d) Position of the swarm and initialization of speed: random particle formation, U(l), var(l), l = 1, 2, ..., M.

(2) Parameter evolution:
    (a) Maximum iterations z_max;
    (b) Lower and higher inertia weights, ω_min = 0.4 and ω_max = 0.9;
    (c) Training factors t1 and t2, with t1 = t2 = 2.

(3) Evaluation of the objective function:
    For l = 1 to J (J is the population size)
        For k = 1 to I (I is the number of objective functions)
            (a) Compute fk(ul);
            (b) Particle generalization:

                f'k(ul) = (fk(ul) − min fk(ul)) / (max fk(ul) − min fk(ul))    (5)

(4) Particle position initialization: pbest(l) = U(l) and gbest(l) = the best particle found in U(l).

(5) Archive initialization: save the non-dominated solutions identified in U into the archive.

(6) While the iterations have not reached z_max:
    (b) Explore gbest(l) from the archive:
        Solve the line Im for each non-dominated particle U(m) from the archive.
        Compute the distance dlm from U(l) to line Im.
        If dlk = min {dlm}, particle k in the archive is declared as the GBG, glbest = U(k).
    (c) Update the particle speed as well as the position:

        var_{z+1}(l) = ω × var_z(l) + t1 R1 (pbest_z(l) − U_z(l)) + t2 R2 (gbest_z(l) − U_z(l))    (6)
        U_{z+1}(l) = U_z(l) + var_{z+1}(l),

        where z denotes the number of iterations whereas R1 and R2 represent random values in the range [0 ... 1].

        If U(l) expands over the boundaries, it is confined by fixing the decision variable to the value of the corresponding lower or upper boundary, while the velocity is multiplied by −1 so that the particle searches in the opposite direction.
    (d) Perform mutation on U(l).
    (e) Calculate U(l).
    (f) Update the archive:
        If the new particles are not dominated by the recorded solutions, insert them as new non-dominated solutions in U within the archive. Every solution present in the archive that is dominated by a novel solution is removed. Once the archive is full, the solution to be replaced is computed on the basis of the CD value.
        The personal best solution of every particle in U should also be upgraded: if the present U(l) dominates the pbest(l) location in memory, the position of the particle is updated as pbest(l) = U(l).
    (g) Increment the iteration value z.

(7) The cycle is repeated until the iteration requirements are attained.

3.3. Global best guide (GBG)

As declared earlier, some essential MOPSO techniques exist. In every technique, there is a suggestion to discover the GBG. In this division, several techniques with their benefits and drawbacks are considered, after which a novel technique to discover the GBG is introduced.

3.3.1. Multi-objective PSO
Here, the objective space is separated into hyper-cubes prior to the selection of the GBG for every particle. Then, a fitness value is allotted to every hypercube based upon the number of elite particles inside it: the more elite particles a hypercube holds, the lower its fitness value is, comparatively. Next, roulette-wheel selection is executed over the hypercubes and one is chosen. At last, the GBG is an arbitrary particle selected from the chosen hypercube. Consequently, the GBG is chosen arbitrarily by making use of the roulette-wheel selection technique. Probably, a particle does not choose a proper guide as its global guide.

3.3.2. Multi-objective optimization using dynamic neighborhood PSO
This method applies a dynamic neighborhood strategy. This research article reveals the concept of two-objective optimization; the GBG of a particle is identified in the objective space. Initially, the distances from particle l to the other particles are evaluated with respect to the first objective value, which is termed the fixed objective, and the k local neighbors are found on the basis of this distance calculation. Then the local optimum among these neighbors is measured using the second objective value and is named the GBG gbest(l) for particle l. Therefore, it consists of a fixed objective selection which is conducted utilizing prior knowledge of the objective functions, while the 1-D optimization technique is helpful in handling multi-objective functions. Hence, the selection of the GBG is based on a single objective function.

3.3.3. Crowding Distance in Multi-objective PSO (CD-MOPSO)
The Crowding Distance (CD) value offers an estimation of the density of the solutions surrounding a given solution. Initially, an external archive records the non-dominated solutions identified in prior iterations. These non-dominated solutions are employed as the GBG of the particles in the swarm. Then, the solutions present in the archive are filtered by the CD value, and the GBG of a particle is chosen among the non-dominated solutions which are composed of maximum CD values. Different guides are selected for the particles from a particular portion of the dataset. Thus, the GBG can be selected with the assistance of the CD value; however, it is a random selection process [9].

In order to resolve the demerits involved in the above techniques, MDPL-MOPSO is applied in FIMPSO to identify the global best guide for every particle. The fundamental model, named Minimum Distance of Point to Line (MDPL), is established first. Afterwards, it computes the global best guide for all particles in the population. In a 2-D system, a straight line L can be identified with the help of the origin point O(0, 0) and any point H(a, b). Line L is described below:

    x/a = y/b                                                (7)

For a point P(x0, y0) external to line L, the distance d between the point P and line L is:

    d = |b x0 − a y0| / sqrt(a^2 + b^2)                      (8)

Likewise, in the 3-D coordinate system, a straight line is generated by the origin point O(0, 0, 0) and any point H(a, b, c), described by:

    x/a = y/b = z/c                                          (9)

For a point P(x0, y0, z0) external to line L, the distance d between the point P and line L is:

    d = |OP × OH| / |OH|
      = sqrt( (c y0 − b z0)^2 + (c x0 − a z0)^2 + (b x0 − a y0)^2 ) / sqrt(a^2 + b^2 + c^2)    (10)

By utilizing this basic model of the distance of a point to a line in the objective space, the identification of the GBG gbest among the archive members for particle l of a 2-objective optimized population is as follows.

First, a line Im is drawn through the point H(m) with coordinates (f1m, f2m) in the 2-objective space, where U(m) is the connected non-dominated particle in the archive. Im is described as:

    f1/f1m = f2/f2m    (m = 1, 2, ..., J)                    (11)

Second, the distance dlm is computed from the point P(l) with coordinates (f1l, f2l) of the population particle U(l) to the line Im (m = 1, 2, ..., J):

    dlm = |f2m f1l − f1m f2l| / sqrt(f1m^2 + f2m^2)    (m = 1, 2, ..., J, m ≠ l)    (12)

At last, the archive particle U(k) is regarded as the optimized output, glbest = U(k), when the distance dlk from P(l) to the archive line Ik is the smallest:

    dlk = min {dlm | m = 1, 2, ..., J}                       (13)

Every particle thus chooses the archive member whose line lies at the smallest distance as its GBG. Consequently, MDPL-MOPSO can establish a generally suitable guide as the global guide for every particle in the population.
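For the two-objective case, the MDPL selection of Eqs. (11)–(13) reduces to one distance evaluation per archive member. The following is a minimal illustrative sketch (the function and variable names are assumptions; the paper provides no reference code):

```python
import math

def mdpl_gbest(archive, particle):
    """Pick the global best guide (GBG) for one particle via the Minimum
    Distance of Point to Line rule.

    archive  -- list of (f1m, f2m) objective vectors of the non-dominated
                archive members; member m defines line I_m through the
                origin O and H(m) = (f1m, f2m), as in Eq. (11).
    particle -- (f1l, f2l), objective vector P(l) of the current particle.
    Returns the index k of the archive member minimizing d_lm, Eq. (13)."""
    f1l, f2l = particle
    best_k, best_d = -1, float("inf")
    for k, (f1m, f2m) in enumerate(archive):
        # Eq. (12): distance from P(l) to the line I_m through the origin
        d = abs(f2m * f1l - f1m * f2l) / math.hypot(f1m, f2m)
        if d < best_d:
            best_k, best_d = k, d
    return best_k
```

For instance, with archive members at (1, 4), (2, 2) and (4, 1) and a particle at (3.0, 2.9), the nearest line is the one through (2, 2):

```python
mdpl_gbest([(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)], (3.0, 2.9))  # → 1
```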

3.4. Hybridization of FIMPSO algorithm

The proposed model is a combination of the FF and IMPSO techniques. Both response speed and accuracy can be improved in this new approach, which is deemed to be very effective. The FF technique features fast convergence; however, it degrades in the exploration process. The IMPSO technique, in turn, is sensitive in the beginning phase: the random behavior of particles in IMPSO results in sensitivity to the primary population. If the initial population is not selected properly, the algorithm cannot attain an optimal solution. To obtain a good result in this case, the presented model should initialize an optimized population, which is achieved by the application of the FF algorithm. The evaluation function is described as follows:

    1 − ( αti × (ti − tmin)/(tmax − tmin) + αci × (ci − cmin)/(cmax − cmin) )    (14)

where:
• tmin: minimum run time,
• tmax: maximum run time,
• cmin: minimum input time,
• cmax: maximum input time.

At this point, every implemented procedure is estimated in order to choose an optimal execution order. In order to schedule various operations, the FIMPSO technique obtains two different attributes as input, i.e., arrival time and run time. The estimated measure is explained by:

    1 − ( αti × (ti − tmin)/(tmax − tmin) + αci × (ci − cmin)/(cmax − cmin) )    (15)

Table 2
Size of synthetic datasets.
Type of tasks    Number of tasks    Size of tasks (MI)
Extra-large      800–1000           100000–200000
Large            600–700            70000–100000
Medium           400–500            50000–70000
Small            100–200            30000–50000

Table 3
Type of VM instances [7].
Name of VM instances    CPU capacity (MIPS)    Memory capacity (GB)
Extra-large             35000                  20
Large                   25000                  15
Medium                  20000                  10
Small                   10000                  5

4. Performance validation

The proposed FIMPSO algorithm was simulated using the MATLAB tool. Every task had its login time and run time, which underwent arbitrary initialization and was offered using Job shop software. For comparison purposes, a set of algorithms, namely Round Robin (RR), First Come First Serve (FCFS), Shortest Job First (SJF) and Genetic Algorithm (GA), was employed. Moreover, the set of measures used to investigate the results comprised execution time, resource utilization, reliability, makespan and throughput.

4.1. Implementation setup

Table 2 shows the size of the various synthetic datasets along with the number of tasks linked to them. The table shows that the extra-large task type consists of 800–1000 tasks and the size of the tasks is in the range of 100000–200000 MI. Similarly, the large task type consists of 600–700 tasks whereas the size of the tasks is in the range of 70000–100000 MI. Likewise, the medium-sized task type consists of 400–500 tasks with task sizes ranging between 50000–70000 MI. In the same way, the small-sized task type consists of 100–200 tasks with 30000–50000 MI sized tasks [7]. In addition, the task sizes were created in a random fashion during run time and the size is defined in Millions of Instructions (MI). Besides, the research utilized 80 servers with a variety of resource capacities and loads. Every server hosted different kinds of VM instances with various CPU and memory capacities, as shown in Table 3.

4.2. Analysis of the results in terms of execution time

Table 4 provides a detailed comparison of different scheduling methods with the FIMPSO algorithm in terms of average load, average turnaround time and average response time. The table values indicate that the FIMPSO attained better results over the other scheduling algorithms compared, in a considerable way. When measuring the results in terms of average turnaround time, it can be noted that the maximum average turnaround times, required by the IPSO and FF techniques, were 57.74 ms and 55.54 ms respectively. A slightly lower average turnaround time was required by the RR, FCFS and SJF methods, with the average turnaround times being 41.98 ms, 41.87 ms and 41.56 ms respectively. After that, GA attained manageable results with a moderate average turnaround time of 26.57 ms. At the same time, the IPSO-FF algorithm exhibited a competitive average turnaround time of 22.13 ms. However, the presented FIMPSO algorithm yielded an effective outcome with the least average turnaround time of 21.09 ms. Similarly, when measuring the results in terms of average response time, it can be noted that the maximum average response times, required by the IPSO and FF techniques, were 49.23 ms and 48.87 ms respectively. A slightly lower average response time was required by the RR, FCFS and SJF methods, with the average response times being 30.50 ms, 30.84 ms and 30.24 ms respectively. Next to this, GA attained manageable results with a moderate average response time of 20.30 ms. Simultaneously, the IPSO-FF algorithm exhibited a competitive average response time of 15.21 ms. However, the proposed FIMPSO algorithm yielded an effective outcome with the least average response time of 13.58 ms.

4.3. Analysis of results in terms of CPU utilization

Fig. 3 shows the results attained by the presented model in terms of CPU utilization. The figure indicates that the FIMPSO achieved the maximum CPU utilization under all types of tasks in comparison with the other methods. Under small types of tasks, both the RD and WRR models showed poor CPU utilization by achieving minimum utilization of 45% and 49%, respectively. At the same time, both the DLB and LB-BC methods tried to manage well by attaining a slight increase in CPU utilization, i.e., 52% and 57%. Next to that, both the LB-RC and FF-IPSO algorithms offered closer results to FIMPSO by attaining 67% and 69% CPU utilization respectively. But the presented model exhibited superior performance by attaining the maximum CPU utilization of 71%.

On the other hand, under extra-large types of tasks, both the RD and WRR models showed only poor CPU utilization, with minimum utilization of 75% and 79% respectively. At the same time, both the DLB and LB-BC methods tried to manage well by attaining a mild increase in the CPU utilization, i.e., 84% and 89% respectively. Next to that, both the LB-RC and FF-IPSO algorithms offered closer results to FIMPSO by attaining 94% and 96% CPU utilization respectively. But the presented model yielded superior

Table 4
Comparison between the proposed method and other scheduling methods.
Methods Average load (ms) Average turnaround time (ms) Average response time (ms)
RR 0.430 41.98 30.50
FCFS 0.460 41.87 30.84
SJF 0.495 41.56 30.24
GA 0.310 26.57 20.30
IPSO 0.457 57.74 49.23
Firefly 0.470 55.54 48.87
FF-IPSO 0.259 22.13 15.21
FIMPSO 0.247 21.09 13.58
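The relative gains implied by Table 4 can be recomputed directly. In the snippet below, the values are transcribed from the table; the percentage-reduction metric itself is our illustration, not a measure reported by the paper:

```python
# Average response times (ms) transcribed from Table 4
response = {"RR": 30.50, "FCFS": 30.84, "SJF": 30.24, "GA": 20.30,
            "IPSO": 49.23, "Firefly": 48.87, "FF-IPSO": 15.21, "FIMPSO": 13.58}

def reduction_vs(baseline, method="FIMPSO"):
    """Percentage reduction in average response time of `method` vs `baseline`."""
    return 100.0 * (response[baseline] - response[method]) / response[baseline]

print(round(reduction_vs("IPSO"), 1))     # → 72.4
print(round(reduction_vs("FF-IPSO"), 1))  # → 10.7
```

That is, FIMPSO's 13.58 ms corresponds to roughly a 72% reduction against the weakest method (IPSO) and about an 11% reduction against the closest competitor (FF-IPSO).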

Fig. 3. Comparative results analysis in terms of CPU utilization.

Fig. 4. Comparative results analysis in terms of memory utilization.

performance by attaining the maximum CPU utilization of 98% under extra-large tasks. Therefore, it can be inferred that the presented FIMPSO algorithm is highly effective in terms of CPU utilization under all types of tasks, irrespective of the task sizes. The figure also states that the increase in the number of tasks results in an increase in CPU utilization.

4.4. Analysis of results in terms of memory utilization

A detailed analysis conducted between FIMPSO and the existing methods in terms of memory utilization is shown in Fig. 4. From the figure, it is understood that the FIMPSO achieved the maximum memory utilization under all types of tasks in comparison with the other methods. Under small types of tasks, both the RD and WRR models showed poor memory utilization by achieving only the minimum memory utilization of 40% and 44% respectively. At the same time, both the DLB and LB-BC methods tried to manage well by attaining a slight increase in the memory utilization up to 49% and 53% respectively. Next to that, both the LB-RC and FF-IPSO algorithms achieved closer results to FIMPSO by attaining 57% and 58% memory utilization respectively. Nevertheless, the presented model exhibited superior performance by attaining the maximum memory utilization of 60%. On the other hand, under extra-large types of tasks, both the RD and WRR models showed poor memory

inferred that the presented FIMPSO algorithm is highly effective in terms of memory utilization under all the types of tasks, irrespective of size. The figure also states that the increase in the number of tasks results in an increase of memory utilization.

4.5. Analysis of results in terms of reliability

The reliability analysis of various methods was performed between FIMPSO and the existing methods, as shown in Fig. 5. Under small types of tasks, both the RD and WRR models showed poor reliability by achieving only the minimum reliability of 76% and 81% respectively. At the same time, both the DLB and LB-BC methods tried to manage well by attaining a slight increase in the reliability up to 87% and 92%. Next to that, the LB-RC algorithm yielded closer results to FF-IPSO and FIMPSO by attaining the reliability of 92%. However, the presented and LB-RC models yielded superior performance by attaining the maximum reliability of 100%. On the other hand, under extra-large types of tasks, both the RD and WRR models showed poor reliability by achieving 35% and 39% minimum reliability only. At the same time, both the DLB and LB-BC methods tried to manage well by attaining a slight increase in the reliability up to 45% and 54%. Next to that, both the LB-RC and FF-IPSO algorithms achieved closer results to FIMPSO by attaining an identical 63% reliability each. However, the presented model exhibited the superior performance by attaining the maximum reliability
of 67% under extra large tasks. Therefore, it can be inferred that
utilization by achieving only a minimum memory utilization of
the presented FIMPSO algorithm is highly effective in reliability
72% and 76% respectively.
under all the types of tasks irrespective of the task sizes. It is also
At the same time, both DLB and LB-BC methods tried to man-
inferred that the reliability gets decreased when the number of
age well by attaining a slight increase in the memory utilization tasks increases.
up to 81% and 84%. Next to that, both LB-RC and FF-IPSO algo-
rithms achieved closer results to FIMPSO by attaining 89% and 4.6. Analysis of results in terms of make span
91% memory utilization respectively. But, the presented model
yielded superior performance by attaining the maximum memory Fig. 6 provides a detailed comparison of different scheduling
utilization of 93% under extra large tasks. Therefore, it can be methods with FIMPSO algorithm in terms of make span. The
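Before turning to make span, it helps to state precisely how the reported measures are derived from task timings. The sketch below is a minimal illustration only, not the paper's experimental setup; the per-task record fields `submit`, `start` and `finish` are hypothetical names introduced for this example.

```python
# Minimal sketch of how the reported measures can be derived from
# per-task timing records (all times in ms). Not the paper's simulator;
# the submit/start/finish field names are hypothetical.

def schedule_metrics(tasks):
    n = len(tasks)
    # Make span: time from the first submission to the last completion.
    makespan = max(t["finish"] for t in tasks) - min(t["submit"] for t in tasks)
    # Average response time: mean delay before a task starts executing.
    avg_response = sum(t["start"] - t["submit"] for t in tasks) / n
    # Average turnaround time: mean time from submission to completion.
    avg_turnaround = sum(t["finish"] - t["submit"] for t in tasks) / n
    # Throughput: completed tasks per unit of make span.
    throughput = n / makespan
    return makespan, avg_response, avg_turnaround, throughput

# Three toy tasks.
tasks = [
    {"submit": 0, "start": 2, "finish": 10},
    {"submit": 0, "start": 4, "finish": 14},
    {"submit": 5, "start": 9, "finish": 20},
]
makespan, avg_response, avg_turnaround, throughput = schedule_metrics(tasks)
# For these toy tasks: makespan = 20 ms, avg_turnaround = 13.0 ms
```

Lower make span and response time, and higher throughput, are the directions in which the comparisons in these sections are read.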

Fig. 5. Comparative results analysis in terms of reliability.

4.6. Analysis of results in terms of make span

Fig. 6 provides a detailed comparison of different scheduling methods with the FIMPSO algorithm in terms of make span. The figure indicates that FIMPSO yielded better results over the compared scheduling algorithms in a considerable way. When measuring the results in terms of extra-large tasks and make span, it is noted that the maximum make span of 280 was required by both the RD and WRR techniques. Next, slightly lower make spans were required by the DLB, LB-BC and LB-RC methods, at 273, 261 and 153 respectively. At the same time, the FF-IPSO algorithm required a competitive make span of 150. However, the presented FIMPSO algorithm was the most effective, with the least make span requirement of 148. It is to be noted that the make span increases gradually with the number of tasks.

Fig. 6. Comparative results analysis in terms of make span.

4.7. Analysis of results in terms of average throughput

Finally, an extensive average throughput analysis was performed between FIMPSO and the existing methods, as shown in Fig. 7. Under small types of tasks, both RD and WRR models showed poor average throughput by achieving only a minimum average throughput of 65% and 72% respectively. At the same time, both DLB and LB-BC methods tried to manage well by attaining a slight increase in the average throughput up to 81% and 90%. Next to that, both LB-RC and FF-IPSO algorithms yielded closer results to FIMPSO by attaining the average throughput of 96% and 98% respectively. But the presented model exhibited superior performance by attaining the maximum average throughput of 100% under small tasks. On the other hand, under extra large types of tasks, both RD and WRR models showed poor average throughput by achieving only a minimum average throughput of 30% and 36% respectively. At the same time, both DLB and LB-BC methods tried to manage well by attaining a slight increase in the average throughput up to 44% and 53%. Next to that, both LB-RC and FF-IPSO algorithms achieved closer results to FIMPSO by attaining the average throughput of 65% and 68% respectively. But the presented model excelled through its superior performance by attaining the maximum average throughput of 72% under extra large tasks. Therefore, it can be inferred that the presented FIMPSO algorithm has an effective average throughput under all types of tasks, irrespective of the task sizes. It can also be inferred that the average throughput decreases when the number of tasks increases.

Fig. 7. Comparative results analysis in terms of average throughput.

4.8. Discussion

The results achieved by the proposed model under several aspects are listed below.

• The FIMPSO algorithm yielded an effective outcome with the least average response time of 13.58 ms.
• The proposed model attained the maximum CPU utilization of 98% under extra large tasks.
• The presented model reached the maximum memory utilization of 93% under extra large tasks.
• Besides, the maximum reliability of 67% under extra large tasks was offered by the proposed method, along with a make span of 148. It also attained the maximum average throughput of 72% under extra large tasks.
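The gains listed above can be checked directly against Table 4. The short script below is illustrative only; the response-time values are transcribed from Table 4, and it computes FIMPSO's relative improvement over the strongest of the compared methods.

```python
# Average response times (ms) transcribed from Table 4; the script
# itself is only an illustration of how the reported gain is obtained.
response_ms = {
    "RR": 30.50, "FCFS": 30.84, "SJF": 30.24, "GA": 20.30,
    "IPSO": 49.23, "Firefly": 48.87, "FF-IPSO": 15.21,
}
fimpso_ms = 13.58

best = min(response_ms, key=response_ms.get)  # strongest compared method
gain_pct = (response_ms[best] - fimpso_ms) / response_ms[best] * 100
print(f"FIMPSO improves on {best} by {gain_pct:.1f}% in average response time")
# → FIMPSO improves on FF-IPSO by 10.7% in average response time
```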
5. Conclusion

This paper has presented an energy efficient load balancing algorithm for the cloud environment using the FIMPSO algorithm. The presented algorithm incorporates the benefits of both the FF and IMPSO algorithms. The presented FIMPSO algorithm achieved an effective average load and enhanced essential measures like proper resource usage and the response time of the tasks. For experimentation, the set of measures used to investigate the results comprised execution time, resource utilization, reliability, make span and throughput. The simulation outcome showed that the proposed FIMPSO model excelled in its performance over the compared methods. From the simulation outcome, it is understood that the FIMPSO algorithm achieved effective results with the least average response time of 13.58 ms, maximum CPU utilization of 98%, memory utilization of 93%, reliability of 67% and throughput of 72%, along with a make span of 148, which was superior to all the other compared methods. The future scope includes improving the presented FIMPSO algorithm with data deduplication algorithms.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

CRediT authorship contribution statement

A. Francis Saviour Devaraj: Conceptualization, Writing - original draft, Methodology. Mohamed Elhoseny: Formal analysis, Writing - review & editing, Resources. S. Dhanasekaran: Validation, Methodology, Software. E. Laxmi Lydia: Project administration, Resources, Validation. K. Shankar: Software, Supervision, Visualization.

Acknowledgments

Dr. K. Shankar sincerely acknowledges the financial support of the RUSA–Phase 2.0 grant sanctioned vide Letter No. F. 24-51/2014-U, Policy (TNMulti-Gen), Dept. of Edn. Govt. of India, Dt. 09.10.2018.

Dr. A. Francis Saviour Devaraj is a Professor & Head, Department of CSE, School of Computing, Kalasalingam University, India. In 2011, he completed his Ph.D. in the Department of Computer Science, with specialization in Information Security, at Manonmaniam Sundaranar University, India. He is the author or co-author of more than 20 research publications. His current research interests include Healthcare applications, Cloud computing, Internet of Things, and Soft computing algorithms.

Dr. Mohamed Elhoseny is currently an Assistant Professor at the Faculty of Computers and Information, Mansoura University, where he is also the Director of the Distributed Sensing and Intelligent Systems Lab. Besides, he has been appointed as an ACM Distinguished Speaker from 2019 to 2022. Collectively, Dr. Elhoseny has authored/co-authored over 85 ISI journal articles in high-ranked and prestigious journals such as IEEE Transactions on Industrial Informatics (IEEE), IEEE Transactions on Reliability (IEEE), Future Generation Computer Systems (Elsevier), and Neural Computing and Applications (Springer). Besides, Dr. Elhoseny has authored/edited Conference
Proceedings, Book Chapters, and 10 books published by Springer and Taylor & Francis. His research interests include Smart Cities, Network Security, Artificial Intelligence, Internet of Things, and Intelligent Systems. Dr. Elhoseny serves as the Editor-in-Chief of the International Journal of Smart Sensor Technologies and Applications, IGI Global. Moreover, he is an Associate Editor of many journals, such as IEEE Access (Impact Factor 3.5), IEEE Future Directions, PLOS One (Impact Factor 2.7), Remote Sensing (Impact Factor 3.5), and the International Journal of E-services and Mobile Applications, IGI Global (Scopus indexed). Also, he is an Editorial Board member of reputed journals such as Applied Intelligence, Springer (Impact Factor 1.9). Moreover, he has served as co-chair, publication chair, program chair, and track chair for several international conferences published by IEEE and Springer.

Dr. E. Laxmi Lydia is a Professor of Computer Science Engineering at Vignan’s Institute of Information Technology (Autonomous). She is a big data analytics online trainer for an international training organization and has presented various webinars on big data analytics. She is a Microsoft Certified Solution Developer (MCSD). She has published more than 100 research papers in international journals in the area of big data analytics and data science, and ten research papers in international conference proceedings. She has authored a book on big data analytics, is currently working on a government DST-funded project, and holds patents.

Dr. S. Dhanasekaran started his academic career as a Lecturer in the Department of IT at Arulmigu Kalasalingam College of Engineering (AKCE) in 2008. He is now working as an Associate Professor in the Department of CSE, Kalasalingam University. He completed his Ph.D. (Cloud Computing) in 2017 at Kalasalingam University under the guidance of Dr. V. Vasudevan, Senior Professor & Registrar of Kalasalingam University, Srivilliputtur, Tamilnadu, India. He is a highly motivated, well-disciplined professional with 11 years of teaching experience in the area of Computer Science & Engineering, with flexibility, loyalty and strong motivational skills. He is a lifetime member of ISTE and IEEE. He has received the Best Research Paper Award, Teaching Competence Award, Faculty Advisorship Award and Motivational Awards at Kalasalingam University. He has acted as a resource person and convener for various FDPs, conferences and workshops. Moreover, he has published more than 20 research papers, including six in SCOPUS-indexed journals and two in SCI (Thomson Reuters)-indexed journals with impact factor.

Dr. K. Shankar is currently a Post-Doctoral Fellow at Alagappa University, Karaikudi, India. Collectively, Dr. K. Shankar has authored/co-authored over 40 ISI journal articles (with a total Impact Factor of 102.051) and 148 Scopus-indexed articles. He has guest-edited several special issues of journals published by Inderscience and MDPI. He has served as Guest Editor and Associate Editor for SCI- and Scopus-indexed journals from Elsevier, Springer, Wiley and MDPI. Dr. Shankar has authored/edited conference proceedings, book chapters, and 2 books published by Springer. He has been a part of various seminars, paper presentations, research paper reviews, and has served as convener and session chair of several conferences. He has displayed vast success in continuously acquiring new knowledge and applying innovative pedagogies, and has always aimed to be an effective educator with a global outlook. His current research interests include Healthcare applications, Secret Image Sharing Schemes, Digital Image Security, Cryptography, Internet of Things, and Optimization algorithms.