
Chapter 6

METHODOLOGY

6.1 Present Work

6.1.1 Problem Formulation


The huge volume of computation a Cloud can complete in a given amount of time cannot be
matched even by the best supercomputer. However, Cloud performance can still be improved by
ensuring that all the resources available in the Cloud are utilized optimally through a good load
balancing algorithm. Statistical results reported in the research papers covered in the literature review
indicate that intensive local search improves the quality of solutions found by such dynamic
load balancing algorithms. Moreover, multi-population approaches obtain better quality solutions
with less computational effort. In the existing algorithm, load balancing is done using the ant
colony optimization technique in cloud computing with an autonomous agent approach, in order
to handle issues such as migration of tasks to the available resources when the load of a node
reaches the threshold level. The existing algorithm considers only the load of the machines and
does not address proper resource utilization. In the proposed work, the execution time for
balancing the load will be minimized and the resource utilization will be improved.

6.1.2 Research Gaps
1. In the previous study, load balancing has not been studied separately from the
optimization of resources.
2. The existing study fails to efficiently implement the aggregation of two or more load
balancing techniques.
3. The existing study lacks in optimizing both the Data Centre Service Time and Transfer
Cost.

6.1.3 Objectives
1. To study and analyze various metaheuristic load balancing algorithms.
2. To design an efficient algorithm for providing system load balancing using a Novel Hybrid
(ACO & PSO) based technique.
3. To design an efficient algorithm for providing system load balancing using a Novel GA
based technique.
4. To compare the results of the A2LB Algorithm with the proposed Hybrid (ACO & PSO)
based technique and the Novel GA based technique for load balancing on the basis of:
   • Overall Response Time
   • Data Center Service Time
   • Transfer Cost
5. To minimize the execution time for improving the resource utilization of the balanced
machines by using a Novel Resource Aware Scheduling Algorithm.
6. To compare the results of the existing resource aware algorithm with the proposed Novel
Resource Aware Scheduling algorithm on the basis of:
   • Overall Response Time
   • Data Center Service Time
   • Transfer Cost

6.2 CloudSim: A Cloud Simulation Framework [74]

6.2.1 Introduction to CloudSim

CloudSim is open source software released under the GPL license, developed in the Cloud Computing
and Distributed Systems (CLOUDS) Laboratory at the Computer Science and Software
Engineering Department of the University of Melbourne [75]. The CloudSim simulation framework
enables users to test applications in a controlled and repeatable environment, to find system
bottlenecks without the need for real clouds, and to try different configurations for developing
adaptive provisioning techniques.
CloudSim provides a generalised and extensible simulation framework that enables seamless
modelling and simulation of application performance. By using CloudSim, developers can focus on
the specific system design issues that they want to investigate, without getting concerned about
details related to cloud-based infrastructures and services.
Advances in computing have opened up many possibilities. Previously, the main concern of
application developers was the deployment and hosting of applications, keeping in mind the
acquisition of resources with a fixed capacity to handle the expected traffic due to the demand
for the application, as well as the installation, configuration and maintenance of the whole
supporting stack. With the advent of the cloud, application deployment and hosting has become
flexible, easier and less costly because of the pay-per-use chargeback model offered by cloud
service providers.
Cloud computing is a best fit for applications where users have heterogeneous, dynamic, and
competing quality of service (QoS) requirements. Different applications have different
performance levels, workloads and dynamic application scaling requirements, and these
characteristics, together with the various service and deployment models, make hosting
applications in the cloud complex: the cloud creates complex provisioning, deployment, and
configuration requirements. CloudSim is a library for the simulation of cloud scenarios. It
provides essential classes for describing datacenters, computational resources, virtual machines,
applications, users, and policies for the management of various parts of the system, such as
scheduling and provisioning.

6.2.2. Architecture
The CloudSim model of the Cloud Computing architecture consists of three layers: the system layer,
the core middleware, and the user-level middleware, as shown in Figure 6.1 below. These three
layers correspond to the three layers of the cloud computing architecture, IaaS, PaaS and SaaS,
respectively.
Fig. 6.1: Cloud-simulation-framework [75]

6.2.3. CloudSim Toolkit [74]


CloudSim is an extensible simulation toolkit, or framework, that enables modelling, simulation
and experimentation of Cloud computing systems and application provisioning environments. The
CloudSim toolkit supports both system and behaviour modelling of Cloud system components.

Basic components of CloudSim

1. Datacenter
Datacenter is used to model the core services at the system level of a cloud
infrastructure. It consists of a set of hosts that manage a set of virtual machines, whose
tasks are to handle "low level" processing. At least one datacenter must be created to
start the simulation.
2. Host
This component is used to assign processing capabilities (specified in the millions of
instructions per second (MIPS) that the processor can perform), memory, and a
scheduling policy that allocates the processing cores to the virtual machines in the list
of virtual machines managed by the host.
3. Virtual Machines
This component manages the allocation of virtual machines to different hosts, so
that processing cores can be scheduled (by the host) to the virtual machines. The
configuration depends on the particular application, and the default virtual machine
allocation policy is "first-come, first-served".

4. Datacenter Broker
The responsibility of a broker is to mediate between users and service providers,
depending on the quality of service requirements that the user specifies. In other
words, the broker identifies which service provider is suitable for the user based on
the information it has from the Cloud Information Service, and it then negotiates with the
providers about the resources that meet the requirements of the user. The user of
CloudSim needs to extend this class in order to specify requirements in their experiments.
5. Cloudlet
This component represents the application service whose complexity is modelled in
CloudSim in terms of the computational requirements.

6. CloudCoordinator
This component manages the communication between other Cloud Coordinator services
and brokers, and also monitors the internal state of a datacenter, which is done
periodically in terms of simulation time.
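
The following is a minimal sketch, in the spirit of the CloudSimExample1 program bundled with the toolkit, of how these components fit together in a CloudSim 3.0.x program. The class name and all numeric parameters (MIPS, RAM, costs, cloudlet length) are illustrative placeholders rather than values used in this thesis.

import java.util.ArrayList;
import java.util.Calendar;
import java.util.LinkedList;
import java.util.List;
import org.cloudbus.cloudsim.Cloudlet;
import org.cloudbus.cloudsim.CloudletSchedulerTimeShared;
import org.cloudbus.cloudsim.Datacenter;
import org.cloudbus.cloudsim.DatacenterBroker;
import org.cloudbus.cloudsim.DatacenterCharacteristics;
import org.cloudbus.cloudsim.Host;
import org.cloudbus.cloudsim.Pe;
import org.cloudbus.cloudsim.Storage;
import org.cloudbus.cloudsim.UtilizationModelFull;
import org.cloudbus.cloudsim.Vm;
import org.cloudbus.cloudsim.VmAllocationPolicySimple;
import org.cloudbus.cloudsim.VmSchedulerTimeShared;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;

public class MinimalCloudSimScenario {

    public static void main(String[] args) throws Exception {
        // 1. Initialise the simulation: one cloud user, no network trace events.
        CloudSim.init(1, Calendar.getInstance(), false);

        // 2. Create a datacenter with a single host (one 1000-MIPS core, 2 GB RAM).
        List<Pe> peList = new ArrayList<Pe>();
        peList.add(new Pe(0, new PeProvisionerSimple(1000)));
        List<Host> hostList = new ArrayList<Host>();
        hostList.add(new Host(0, new RamProvisionerSimple(2048),
                new BwProvisionerSimple(10000), 1000000, peList,
                new VmSchedulerTimeShared(peList)));
        DatacenterCharacteristics characteristics = new DatacenterCharacteristics(
                "x86", "Linux", "Xen", hostList, 10.0, 3.0, 0.05, 0.001, 0.0);
        Datacenter datacenter = new Datacenter("Datacenter_0", characteristics,
                new VmAllocationPolicySimple(hostList), new LinkedList<Storage>(), 0);

        // 3. Create a broker, one VM and one cloudlet, and submit them.
        DatacenterBroker broker = new DatacenterBroker("Broker_0");
        Vm vm = new Vm(0, broker.getId(), 1000, 1, 512, 1000, 10000, "Xen",
                new CloudletSchedulerTimeShared());
        Cloudlet cloudlet = new Cloudlet(0, 400000, 1, 300, 300,
                new UtilizationModelFull(), new UtilizationModelFull(),
                new UtilizationModelFull());
        cloudlet.setUserId(broker.getId());

        List<Vm> vmList = new ArrayList<Vm>();
        vmList.add(vm);
        List<Cloudlet> cloudletList = new ArrayList<Cloudlet>();
        cloudletList.add(cloudlet);
        broker.submitVmList(vmList);
        broker.submitCloudletList(cloudletList);

        // 4. Run the simulation and print the finishing time of the cloudlet.
        CloudSim.startSimulation();
        CloudSim.stopSimulation();
        for (Cloudlet c : broker.getCloudletReceivedList()) {
            System.out.println("Cloudlet " + c.getCloudletId()
                    + " finished on VM " + c.getVmId()
                    + " at time " + c.getFinishTime());
        }
    }
}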

6.3. NetBeans IDE


6.3.1. Introduction to NetBeans IDE
NetBeans IDE is a free, open source integrated development environment (IDE) that enables
you to develop desktop, mobile and web applications. The IDE supports application
development in various languages, including Java, HTML5, PHP and C++. The IDE provides
integrated support for the complete development cycle, from project creation through
debugging, profiling and deployment. The IDE runs on Windows, Linux, Mac OS X, and other
UNIX-based systems [65]. The IDE provides comprehensive support for JDK 7 technologies
and the most recent Java enhancements. It is the first IDE that provides support for JDK 7, Java
EE 7, and JavaFX 2. The IDE fully supports Java EE using the latest standards for Java, XML,
Web services, and SQL and fully supports the GlassFish Server, the reference implementation
of Java EE [65].

6.3.2. Features of NetBeans IDE [48]


1. Tools for Java 8 Technologies:
Anyone interested in getting started with lambdas, method references, streams, and
profiles in Java 8 can do so immediately by downloading NetBeans IDE 8. Java hints
and code analyzers help you upgrade anonymous inner classes to lambdas, right across
all your code bases, all in one go.

2. Tools for Java EE Developers:


The code generators for which NetBeans IDE is well known have been beefed up
significantly. Where before you could create bits and pieces of code for various popular
Java EE component libraries, you can now generate complete PrimeFaces applications,
from scratch, including CRUD functionality and database connections.

3. Tools for Maven:


A key strength of NetBeans IDE, and a reason why many developers have started using
it over the past years, is its out-of-the-box support for Maven. There is no need to install a
Maven plugin, as it is a standard part of the IDE, and no need to deal with IDE-specific files,
since the POM provides the project structure.

4. Tools for JavaScript:


Thanks to powerful new JavaScript libraries and frameworks over the years, JavaScript
as a whole has become a lot more attractive for many developers. For some releases
already, NetBeans IDE has been available as a pure frontend environment.

5. Best Support for Latest Java Technologies:


NetBeans IDE is the official IDE for Java 8. With its editors, code analyzers, and
converters, you can quickly and smoothly upgrade your applications to use new Java 8
language constructs [48].

6. Write Bug Free Code:


The cost of buggy code increases the longer it remains unfixed. NetBeans provides static
analysis tools, especially integration with the widely used FindBugs tool, for identifying
and fixing common problems in Java code. In addition, the NetBeans Debugger lets you
place breakpoints in your source code, add field watches, step through your code, step
into methods, take snapshots and monitor execution as it occurs [48].

6.3.3. Working with CloudSim in NetBeans [16]


1. Download the latest version of NetBeans and install it on your device.
2. Download CloudSim 3.0.
3. Extract the CloudSim 3.0 folder using WinRAR or any other extraction tool.
4. Open NetBeans.
5. Go to “File” in the menu bar and click “New Project”.
6. Click Java and then “Java Application”.
7. Click Next and name your project “cloudsimproject1”.
8. Click Next; the project “cloudsimproject1” will be created. Now expand
cloudsimproject1 and you will see two folders: Source Packages and Libraries.
9. The CloudSim jar files have to be added to the Libraries folder: click “Add JAR/Folder”.
10. Select all the jar files and click Open. They will all be added to the Libraries
folder.
11. Now go to the extracted CloudSim 3.0.3 folder in your directory and follow the path
given below: cloudsim-3.0.3 -> examples -> org. Copy the “org” sub-folder which is
inside examples. Once you have copied it, go back to NetBeans and paste it
into Source Packages.
12. You will see that the CloudSim examples get installed.
13. Open any example and run it.
14. You will get the results.

6.4 Algorithms
To understand how CloudSim works and how scheduling is done in it, basic static
algorithms have been implemented first to understand the concept of task scheduling in a
cloud environment using CloudSim. First-come-first-serve and shortest-job-first are the two static
algorithms implemented for scheduling tasks on the cloud. First-come-first-serve can be
implemented either by creating a queue and assigning each task to the next free resource, or
in a round-robin fashion in which the tasks are assigned to resources in a circular order one
after the other, just like the round-robin CPU scheduling algorithm. Static algorithms and
dynamic algorithms differ in the way the input is passed to them and in how the resources are
managed.

6.4.1 Static Algorithms


First Come First Serve
First-come-first-serve is the easiest algorithm to implement. No pre-computation or additional
computation at run time is needed to schedule the tasks. The incoming
cloudlets are mapped to virtual machines (VMs) on the basis of their arrival time: the
cloudlet with the minimum arrival time is assigned to the first virtual machine, and from there on
the rest of the incoming cloudlets are assigned to the subsequent VMs on the basis of their arrival
time. No other parameter is considered for scheduling the incoming cloudlets; the algorithm does
not consider any property of the cloudlet or the virtual machine at scheduling time, only
the arrival time of the cloudlets. The approach followed in this work is to allocate the
tasks to the available resources in a round-robin fashion, in the order in which they
are received from the broker, and no other resource allocation parameter is considered.
Following is the pseudo code for implementing first-come-first-serve; a CloudSim-based sketch is given after it:
Let m be the number of VMs
Let n be the number of cloudlets
Let cloudletList be the list of all cloudlets; the size of cloudletList is n
For each cloudlet i in cloudletList
    Assign cloudlet i to VM (i mod m)
Return the cloudletList // this list will be sent to the broker for execution
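
As an illustration, the modulus-based mapping above can be sketched in Java on top of CloudSim, which exposes DatacenterBroker.bindCloudletToVm(cloudletId, vmId) for static mappings. This is only a hedged sketch: the class and method names are local helpers, and the broker, cloudlet list and VM list are assumed to have been created as in the earlier CloudSim example.

import java.util.List;
import org.cloudbus.cloudsim.Cloudlet;
import org.cloudbus.cloudsim.DatacenterBroker;
import org.cloudbus.cloudsim.Vm;

public class FcfsScheduler {

    /**
     * Assigns cloudlets to VMs in a round-robin fashion based purely on the
     * order in which they arrive, i.e. cloudlet i goes to VM (i mod m).
     */
    public static void scheduleFcfs(DatacenterBroker broker,
                                    List<Cloudlet> cloudletList,
                                    List<Vm> vmList) {
        int m = vmList.size();               // number of VMs
        for (int i = 0; i < cloudletList.size(); i++) {
            Cloudlet cloudlet = cloudletList.get(i);
            Vm vm = vmList.get(i % m);       // modulus-based, round-robin choice
            broker.bindCloudletToVm(cloudlet.getCloudletId(), vm.getId());
        }
        // The bound cloudlet list is then submitted to the broker for execution.
        broker.submitCloudletList(cloudletList);
    }
}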

Shortest Job First

The next scheduling algorithm is also based only on a property of the cloudlet; no property of
the virtual machine is considered while scheduling the cloudlets on the virtual
machines. Initially the cloudlets are sorted in ascending order of their length and
then mapped to the virtual machines (VMs) in that order. Length here
refers to the number of instructions, or the expected execution time of the cloudlet on some standard
virtual machine. Following is the pseudo code for SJF; a CloudSim-based sketch is given after it:

Let m be the number of VMs
Let n be the number of cloudlets
Let cloudletList be the list of all cloudlets; the size of cloudletList is n
The tempList will be used as a temporary cloudlet list for sorting the cloudlets, and sortList will
contain the cloudlets in sorted order.
For each cloudlet in cloudletList
    Add the cloudlet to tempList
Sort tempList and save the sorted list in sortList
Send the sorted cloudlet list, i.e. sortList, to the broker for execution
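
A corresponding hedged Java sketch of the SJF mapping is given below: it sorts the cloudlets by getCloudletLength() and then binds them to VMs in that order. The class and method names are illustrative, not part of CloudSim.

import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import org.cloudbus.cloudsim.Cloudlet;
import org.cloudbus.cloudsim.DatacenterBroker;
import org.cloudbus.cloudsim.Vm;

public class SjfScheduler {

    /** Sorts the cloudlets by length (shortest first) and then maps them to VMs. */
    public static void scheduleSjf(DatacenterBroker broker,
                                   List<Cloudlet> cloudletList,
                                   List<Vm> vmList) {
        // tempList: temporary copy used for sorting, as in the pseudo code above.
        List<Cloudlet> tempList = new ArrayList<Cloudlet>(cloudletList);
        Collections.sort(tempList, new Comparator<Cloudlet>() {
            @Override
            public int compare(Cloudlet a, Cloudlet b) {
                // Length = number of instructions (MI) of the cloudlet.
                return Long.compare(a.getCloudletLength(), b.getCloudletLength());
            }
        });
        // tempList now holds the cloudlets in ascending order of length;
        // map them to VMs in that order and hand the list to the broker.
        int m = vmList.size();
        for (int i = 0; i < tempList.size(); i++) {
            broker.bindCloudletToVm(tempList.get(i).getCloudletId(),
                                    vmList.get(i % m).getId());
        }
        broker.submitCloudletList(tempList);
    }
}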

6.4.2 Ant Colony Optimization

Ant Colony Optimization (ACO) falls under the category of metaheuristic algorithms. The
algorithm is inspired by real ants and the way they search for food. The ants travel from their colony
to the food source, leaving pheromone on the path as they walk. Initially the ants select random
paths. The deposited pheromone also evaporates over time, so shorter paths, which are completed
more quickly and traversed more often, retain a higher pheromone intensity. After some time the
shortest path is therefore the one with the highest pheromone intensity, which leads all the other
ants to follow it; eventually all ants choose that path, which happens to be the shortest one [45].
Following is the pseudo code for the implementation of ACO [63].

Pseudo Code 1: ACO algorithm


//Input
Input: List of Cloudlet (Tasks) and List of VMs

//Output
Output: The best solution for task allocation on VMs

Steps:

//Pseudo Code
1. Initialize:
   Set Current_iteration_t = 1.
   Set Current_optimal_solution = null.
   Set the initial pheromone value τij(t) = c for each path between tasks and VMs.

// random assignment of ants to VMs
2. Place m ants on the starting VMs randomly.

// selection of a VM for each task
3. For k := 1 to m do
       Place the starting VM of the k-th ant in tabuk.
       Do ants_trip while all ants have not ended their trips
           Every ant chooses the VM for the next task.
           Insert the selected VM into tabuk.
       End Do

// updating the optimal solution
4. For k := 1 to m do
       Compute the length Lk of the tour described by the k-th ant according to the tour-length
       equation given in [63].
       Update the current_optimal_solution with the best solution found.

5. For every edge (i, j), apply the local pheromone update.

6. Apply the global pheromone update according to Equation 7.

7. Increment Current_iteration_t by one.

// check if the maximum number of iterations is done
8. If (Current_iteration_t < tmax)
       Empty all tabu lists.
       Go to step 2
   Else
       Print current_optimal_solution.
   End If

9. Return

Pseudo code 2: Scheduling based ACO algorithm

//Input
Input: Incoming Cloudlets and VMs List
//Output
Output: Print “scheduling completed and waiting for more Cloudlets”

Steps:

//Pseudo Code
1. Set Cloudlet List = null and temp_List_of_Cloudlet = null
2. Put any incoming Cloudlets in Cloudlet List in order of their arrival time.
3. do ACO_P while Cloudlet List is not empty or there are more incoming Cloudlets
       Set n = size of VMs list
       // if the no. of cloudlets left is more than the no. of VMs
       if (size of Cloudlet List is greater than n)
           Transfer the first-arrived n Cloudlets from Cloudlet List and put them on
           temp_List_of_Cloudlet
       // if the no. of cloudlets left is less than or equal to the no. of VMs
       else
           Transfer all Cloudlets from Cloudlet List and put them on
           temp_List_of_Cloudlet
       end If
       Execute the ACO procedure with input temp_List_of_Cloudlet and n
   end Do
Fig. 6.2: Flow Diagram of Ant Colony Optimization Scheduling Algorithm
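
Pseudo Code 2 essentially feeds the waiting cloudlets to the ACO procedure in batches whose size is at most the number of VMs. A hedged Java sketch of just this batching loop is shown below; acoTS is a hypothetical placeholder standing in for the ACO procedure of Pseudo Code 1, not a CloudSim method.

import java.util.ArrayList;
import java.util.List;
import org.cloudbus.cloudsim.Cloudlet;
import org.cloudbus.cloudsim.Vm;

public class AcoBatchScheduler {

    /** Feeds waiting cloudlets to the ACO procedure in batches of at most |VMs|. */
    public static void scheduleInBatches(List<Cloudlet> cloudletList, List<Vm> vmList) {
        int n = vmList.size();                        // batch size = number of VMs
        while (!cloudletList.isEmpty()) {
            // Take the first-arrived cloudlets (at most n) into a temporary batch.
            int batchSize = Math.min(n, cloudletList.size());
            List<Cloudlet> tempListOfCloudlet =
                    new ArrayList<Cloudlet>(cloudletList.subList(0, batchSize));
            cloudletList.subList(0, batchSize).clear();   // remove them from the queue
            // Hypothetical helper standing in for the ACO procedure of Pseudo Code 1.
            acoTS(tempListOfCloudlet, vmList);
        }
    }

    private static void acoTS(List<Cloudlet> batch, List<Vm> vmList) {
        // Placeholder: run the ACO task-scheduling procedure on this batch.
    }
}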

6.4.3 A2LB
The Autonomous Agent based Load Balancing (A2LB) algorithm, which is also an ant-based load
balancing algorithm, was proposed by A. Singh et al. in 2015. A2LB tries to address issues such as
optimizing resource utilization, improving throughput, minimizing response time, and dynamic
resource scheduling with scalability and reliability. A2LB works by ensuring that all the resources
are properly utilized and, further, that the resources are used in a manner that keeps the load
balanced. A total of three agents are used to implement the autonomous agent based load
balancing algorithm, namely the migration agent, the load agent and the channel agent. The
algorithm begins with random initialization and allocation of resources to tasks. Once a resource
becomes overloaded, its tasks are then assigned to a similar VM with a smaller load. Ants are
useful agents because they are able to find the shortest path in very little time [62]. Following are
the responsibilities of the three agents [62]:

Load Agent: The major responsibility of the load agent is to calculate the load on every available
virtual machine after the allocation of a new job in the data centre. It maintains all such information
in a table termed the VM_Load_Fitness table.

Channel Agent: The channel agent initiates migration agents on receiving a request from the
load agent. The idea is to search for virtual machines with a similar configuration in other data
centres. It maintains the information received from the migration agent in a table termed the
Response Table.

Migration Agent (MA): The channel agent is responsible for initiating the migration agent.
The migration agent communicates with the load agents of other datacenters to find a compatible VM
whose fitness value is greater than 25.
In case any such VM is found, the channel agent migrates the task to that VM.

The algorithm is based on random selection of virtual machines for tasks. The performance of
the A2LB algorithm depends on the migration agent. Sometimes the tasks have to wait
when the channel agent is not able to find a suitable alternative VM after the VM on
which the task is scheduled becomes overloaded.
6.4.4 Particle Swarm Optimization
Particle Swarm Optimization (PSO) is inspired by the social behaviour of animals, such as a
flock of birds searching for a food source or a school of fish protecting themselves from a predator
[37]. A particle in this algorithm corresponds to one such animal. The PSO algorithm is based on two
factors: position and velocity. Each particle in the swarm is represented by its position, which
keeps changing with time until it finds the best position or solution. The position of a particle in the
solution space represents a solution to the problem [72]. Velocity decides the movement of
each particle in the swarm. The performance of a particle is measured by a fitness value, which
is problem specific. PSO has gained popularity due to its simplicity and its usefulness in a broad
range of applications with low computational cost. PSO was originally developed for
continuous optimization problems; hence, a suitable encoding is needed to solve
discrete optimization problems such as scheduling. The first step in PSO is therefore to encode the
problem.

One method is to represent a particle using a vector that maps each task to a resource
[72]. Velocities can also be represented using a vector. PSO has fewer primitive mathematical
operators than other metaheuristic algorithms, which results in a shorter convergence time. Particle
swarm optimization can be implemented as a load balancer. Initially the tasks are assigned
to different VMs using some random algorithm; the selected algorithm can be static or
dynamic, and the tasks can also be assigned randomly without the use of any task
scheduling algorithm. The idea is to have all tasks assigned to a VM before load balancing
is implemented. Once the task scheduling is achieved, the next step is to compute the load of all
VMs, to identify which VMs are under-loaded and which VMs are over-loaded.
Then the particle swarm optimization load balancing algorithm is used to migrate tasks from the
overloaded VMs to the under-loaded VMs.

The only problem with selecting Particle Swarm Optimization over other metaheuristic
scheduling algorithms is that it can produce a premature schedule by getting trapped in the local
best. It may appear to take less time to yield a result, but that result may not
have explored all the search paths.
PSO Task Scheduling Algorithm [43]
1. Set the particle dimension equal to the size of the set of ready tasks T.
2. Initialize the particle positions randomly from PC = 1.....j and the velocities vi randomly.
3. For each particle, calculate its fitness value.
4. If the fitness value is better than the previous best pbest, set the current fitness value as
the new pbest.
5. Perform Steps 3 and 4 for all particles and select the best particle as gbest.
6. For all particles, calculate the velocity and update their positions.
7. If the stopping criterion or maximum number of iterations is not satisfied, repeat from Step 3.
Fig. 6.3: Flow Diagram of Particle Swarm Optimization Scheduling algorithm
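
The velocity and position updates behind Steps 2 to 7 follow the standard PSO equations v_d = w·v_d + c1·r1·(pbest_d − x_d) + c2·r2·(gbest_d − x_d) and x_d = x_d + v_d. Below is a hedged Java sketch of one such update for the discrete task-to-VM encoding described above; the coefficient values w, c1 and c2 are illustrative and are not taken from the cited works.

import java.util.Random;

public class PsoUpdate {

    /** One velocity-and-position update for a single particle.
     *  position[d] encodes the VM index assigned to task d. */
    public static void updateParticle(double[] position, double[] velocity,
                                      double[] pBest, double[] gBest,
                                      int vmCount, Random rnd) {
        final double w = 0.7;    // inertia weight (illustrative value)
        final double c1 = 1.5;   // cognitive coefficient (illustrative value)
        final double c2 = 1.5;   // social coefficient (illustrative value)
        for (int d = 0; d < position.length; d++) {
            double r1 = rnd.nextDouble();
            double r2 = rnd.nextDouble();
            velocity[d] = w * velocity[d]
                    + c1 * r1 * (pBest[d] - position[d])
                    + c2 * r2 * (gBest[d] - position[d]);
            position[d] = position[d] + velocity[d];
            // Discrete encoding: clamp and round the position to a valid VM index.
            position[d] = Math.max(0, Math.min(vmCount - 1, Math.round(position[d])));
        }
    }
}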
6.4.5 Proposed Hybrid Meta-heuristic task scheduling algorithm
Algorithm

Input:

CloudletList: List of all cloudlets received.


VmList: List of all VMs

Main_Function Hybrid_TS

// Initially schedule a small number of incoming tasks using PSO [the number of tasks can be fixed
// or decided on the basis of a percentage of the total tasks]. In the proposed work, a number of
// cloudlets equal to 2 times the size of the VM list is initially passed to the PSO scheduling algorithm.
1. PSOCloudletList = CloudletList [1 : Sizeof(VmList) * 2]
2. Call PSO_TS (PSOCloudletList, VmList)
3. ACOCloudletList = CloudletList – PSOCloudletList
// Scheduling based on the ACO algorithm
4. temp_List_of_Cloudlet = null, temp_ACO_List_of_Cloudlet = ACOCloudletList and n = size of VMs list
5. while temp_ACO_List_of_Cloudlet is not empty
       if (size of temp_ACO_List_of_Cloudlet is greater than n)
           Transfer the first-arrived n Cloudlets from temp_ACO_List_of_Cloudlet and put them on
           temp_List_of_Cloudlet
       else
           Transfer all Cloudlets from temp_ACO_List_of_Cloudlet and put them on temp_List_of_Cloudlet
       end If
6. Call ACO_TS (temp_ACO_List_of_Cloudlet, VmList)
7. temp_ACO_List_of_Cloudlet = temp_ACO_List_of_Cloudlet – temp_List_of_Cloudlet
8. Compute the degree of imbalance factor between the VMs.
9. If the degree of imbalance factor is greater than the threshold value then
       a. For each VM in the VM list, compute the total length of all cloudlets assigned to
          that VM.
       b. Sort the VMs on the basis of length and create a list UL_VmList of all
          underloaded VMs and a list OL_VM_CloudletList of the Cloudlets in
          the execution lists of the overloaded VMs.
       c. Call PSO_TS (OL_VM_CloudletList, UL_VmList) and transfer Cloudlets from
          the overloaded VMs to the least loaded VMs in the sorted list.
   End While
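
For readability, the Main_Function above can be expressed as a hedged Java-style skeleton. The helper methods psoTS, acoTS, degreeOfImbalance and rebalance are hypothetical placeholders standing in for PSO_TS, ACO_TS, the degree-of-imbalance computation and the migration step; they are not CloudSim API, and each batch is handed to the ACO helper following the batching pattern of Pseudo Code 2.

import java.util.ArrayList;
import java.util.List;
import org.cloudbus.cloudsim.Cloudlet;
import org.cloudbus.cloudsim.Vm;

public class HybridTsDriver {

    public static void hybridTS(List<Cloudlet> cloudletList, List<Vm> vmList,
                                double diThreshold) {
        int n = vmList.size();

        // Steps 1-2: schedule the first 2 * |VMs| cloudlets with PSO.
        int psoCount = Math.min(2 * n, cloudletList.size());
        List<Cloudlet> psoCloudletList =
                new ArrayList<Cloudlet>(cloudletList.subList(0, psoCount));
        psoTS(psoCloudletList, vmList);

        // Step 3: the remaining cloudlets go to the ACO scheduler.
        List<Cloudlet> acoCloudletList =
                new ArrayList<Cloudlet>(cloudletList.subList(psoCount, cloudletList.size()));

        // Steps 4-7: process the ACO cloudlets in batches of at most n.
        while (!acoCloudletList.isEmpty()) {
            int batch = Math.min(n, acoCloudletList.size());
            List<Cloudlet> tempListOfCloudlet =
                    new ArrayList<Cloudlet>(acoCloudletList.subList(0, batch));
            acoCloudletList.subList(0, batch).clear();
            acoTS(tempListOfCloudlet, vmList);

            // Steps 8-9: if the VMs are too unbalanced, migrate cloudlets from
            // overloaded VMs to underloaded VMs using PSO.
            if (degreeOfImbalance(vmList) > diThreshold) {
                rebalance(vmList);
            }
        }
    }

    // Hypothetical helpers standing in for PSO_TS, ACO_TS, DI and the migration step.
    private static void psoTS(List<Cloudlet> cloudlets, List<Vm> vms) { }
    private static void acoTS(List<Cloudlet> cloudlets, List<Vm> vms) { }
    private static double degreeOfImbalance(List<Vm> vms) { return 0.0; }
    private static void rebalance(List<Vm> vms) { }
}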

Function PSO_TS (CloudletList, VmList)


1. Set the particle dimension equal to the size of the set of ready tasks T.
2. Initialize the particle positions randomly from PC = 1.....j and the velocities vi randomly.
3. For each particle, calculate its fitness value.
4. If the fitness value is better than the previous best pbest, set the current fitness value as
the new pbest.
5. Perform Steps 3 and 4 for all particles and select the best particle as gbest.
6. For all particles, calculate the velocity and update their positions.
7. If the stopping criterion or maximum number of iterations is not satisfied, repeat from Step 3.

Function ACO_TS (CloudletList, VmList)


1. Initialize pheromone value for each path between tasks and resources, set optimal
solution to NULL and place m ants on random resources.
2. Repeat for each ant
a. Put the starting resource of first task in tabu list and all other tasks in allowed list.
b. Based on the probability function or transition rule, select the resource for all remaining
tasks in the allowed list.
3. Compute the fitness of all ants, which in this case is the makespan time.
4. Replace the optimal solution with the ant's solution having the best fitness value if it is
better than the previous optimal solution.
5. Update both local and global pheromone.
6. Stop when the termination condition is met and print the optimal solution.
The degree of imbalance (DI) can be computed using Equations 1 and 2. It measures the imbalance
between the VMs [63].

$$T_i = \frac{TL\_Tasks_i}{Pe\_num_i \times Pe\_mips_i} \qquad (1)$$

$$DI = \frac{T_{max} - T_{min}}{T_{avg}} \qquad (2)$$

TL_Tasks_i refers to the total length of all tasks assigned to VM_i, and Pe_num_i and Pe_mips_i are the
number of processing elements of VM_i and their MIPS rating. T_max, T_min and T_avg refer to the
maximum, minimum and average of T_i among all VMs [63].
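
To make Equations (1) and (2) concrete, a small hedged Java sketch is given below; the helper class and the numeric inputs in main are illustrative only and are not values from the thesis experiments.

public class DegreeOfImbalance {

    /** T_i = (total length of tasks on VM i) / (Pe_num_i * Pe_mips_i), Eq. (1). */
    public static double[] completionEstimates(double[] totalTaskLength,
                                               int[] peNum, double[] peMips) {
        double[] t = new double[totalTaskLength.length];
        for (int i = 0; i < t.length; i++) {
            t[i] = totalTaskLength[i] / (peNum[i] * peMips[i]);
        }
        return t;
    }

    /** DI = (Tmax - Tmin) / Tavg, Eq. (2). */
    public static double degreeOfImbalance(double[] t) {
        double max = Double.NEGATIVE_INFINITY, min = Double.POSITIVE_INFINITY, sum = 0;
        for (double ti : t) {
            max = Math.max(max, ti);
            min = Math.min(min, ti);
            sum += ti;
        }
        double avg = sum / t.length;
        return (max - min) / avg;
    }

    public static void main(String[] args) {
        // Example: three VMs with 1000-MIPS single cores and different total task lengths.
        double[] t = completionEstimates(new double[]{40000, 10000, 25000},
                                         new int[]{1, 1, 1},
                                         new double[]{1000, 1000, 1000});
        // T = {40, 10, 25} s  ->  DI = (40 - 10) / 25 = 1.2
        System.out.println("DI = " + degreeOfImbalance(t));
    }
}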

6.4.6 Proposed Improved Resource Aware Hybrid Meta-heuristic algorithm for Load
Balancing

In the ACO algorithm, the pheromone value for all paths between each task and each resource is set
to a constant value at the start of every iteration. This fails to take into account the current load of
each resource arising from the tasks previously assigned to it. In the proposed method, the pheromone
value for each path is instead set based on the degree of imbalance between all resources, computed
from the tasks already assigned to each resource. Initially the tasks are scheduled using
Particle Swarm Optimization; even though this scheduling is done with the help of PSO, the pheromone
values are updated as per the schedule prepared using PSO. Once the set number of tasks has been
scheduled using PSO, the remaining tasks are scheduled using ACO [72]. With this, the ACO is expected
to produce better results as it is based on resource awareness. The pheromone value for each path at
the start of the iterations is set as follows:
For all VMs, i = 0 to sizeof(VmList) − 1, in the sorted VM list,

compute the degree of imbalance $DI_i = \frac{T_i - T_0}{T_{avg}}$ [63]

and set the pheromone value for the paths between each task and resource VM_i to c − DI_i, where c is
some constant value.

All VMs are sorted on the basis of the total length of the cloudlets assigned to them. T_0 therefore has
the minimum length, and the pheromone value for, say, the VM with load T_4 depends on T_0, T_1, T_2,
T_3 and T_4.
Also, the probability (transition) function in ACO is computed as follows [63]:

$$P^{k}_{ij}(t) = \begin{cases} \dfrac{[\tau_{ij}(t)]^{\alpha}\,[\eta_{ij}]^{\beta}}{\sum_{s \in allowed_k} [\tau_{is}(t)]^{\alpha}\,[\eta_{is}]^{\beta}} & \text{if } j \in allowed_k \\ 0 & \text{otherwise} \end{cases}$$

where $\eta_{ij} = 1/d_{ij}$.
d_ij is the total time needed by task_i to finish on resource_j, which is the sum of the expected
execution time of task_i on resource_j and the transfer time. It is expressed as follows [63]:

$$d_{ij} = \frac{TL\_Task_i}{Pe\_num_j \times Pe\_mips_j} + \frac{Input\ File\ Size}{VM\_bw_j}$$

In the equation above, the transfer cost is $\frac{Input\ File\ Size}{VM\_bw_j}$, where Input File Size is the
total length of the input file and $VM\_bw_j$ stands for the bandwidth of resource_j.

To reduce the transfer cost along with the overall execution time, it is important to separate the
expected execution time and the transfer cost as follows:

$$d1_{ij} = \frac{TL\_Task_i}{Pe\_num_j \times Pe\_mips_j}$$

$$d2_{ij} = \frac{Input\ File\ Size}{VM\_bw_j}$$

Now $[\eta_{ij}]^{\beta}$ can be represented as $(1/d1_{ij})^{\beta_1} + (1/d2_{ij})^{\beta_2}$,

and hence $\beta = \beta_1 + \beta_2$.
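
A hedged Java sketch of the resource-aware quantities defined above is given below: the split heuristic components d1 (execution time) and d2 (transfer cost), the composite desirability (1/d1)^β1 + (1/d2)^β2, and the DI-based pheromone initialisation c − DI_i. All method names and parameter choices are illustrative; they are not CloudSim API.

public class ResourceAwareAco {

    /** d1_ij: expected execution time of task i on VM j. */
    public static double executionTime(double taskLength, int peNum, double peMips) {
        return taskLength / (peNum * peMips);
    }

    /** d2_ij: transfer cost of task i to VM j, i.e. input file size / VM bandwidth. */
    public static double transferCost(double inputFileSize, double vmBandwidth) {
        return inputFileSize / vmBandwidth;
    }

    /** Composite heuristic desirability (1/d1)^beta1 + (1/d2)^beta2. */
    public static double heuristic(double d1, double d2, double beta1, double beta2) {
        return Math.pow(1.0 / d1, beta1) + Math.pow(1.0 / d2, beta2);
    }

    /**
     * Initial pheromone for every path ending at VM i, set to c - DI_i where
     * DI_i = (T_i - T_0) / T_avg and T is sorted in ascending order of load.
     */
    public static double[] initialPheromone(double[] sortedT, double c) {
        double sum = 0;
        for (double t : sortedT) {
            sum += t;
        }
        double avg = sum / sortedT.length;
        double[] tau = new double[sortedT.length];
        for (int i = 0; i < sortedT.length; i++) {
            tau[i] = c - (sortedT[i] - sortedT[0]) / avg;   // lighter VMs get more pheromone
        }
        return tau;
    }
}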

Algorithm
Input:
CloudletList: List of all cloudlets received.
VmList: List of all VMs

Main_Function Hybrid_TS
// Initially schedule a small number of incoming tasks using PSO [the number of tasks can be fixed
// or decided on the basis of a percentage of the total tasks]. In the proposed work, a number of
// cloudlets equal to 2 times the size of the VM list is initially passed to the PSO scheduling algorithm.

1. PSOCloudletList = CloudletList [1: Sizeof (VmList)* 2]


2. Call PSO_TS (PSOCloudletList, VmList)
3. ACOCloudletList = CloudletList – PSOCloudletList

// Scheduling based ACO algorithm

4. temp_List_of_Cloudlet = null, temp_ACO_List_of_Cloudlet = ACOCloudletList and n = size of VMs list
5. while temp_ACO_List_of_Cloudlet is not empty
       if (size of temp_ACO_List_of_Cloudlet is greater than n)
           Transfer the first-arrived n Cloudlets from temp_ACO_List_of_Cloudlet and put them
           on temp_List_of_Cloudlet
       else
           Transfer all Cloudlets from temp_ACO_List_of_Cloudlet and put them on
           temp_List_of_Cloudlet
       end If
6. Call ACO_TS (temp_ACO_List_of_Cloudlet, VmList)
7. temp_ACO_List_of_Cloudlet = temp_ACO_List_of_Cloudlet – temp_List_of_Cloudlet
8. Compute the resource utilization (VM utilization), and if it is less than 80%

// the imbalance factor is used to determine if migration is needed


a. Compute the degree of imbalance factor between the VM’s.
i. For each VM in the VM List, compute the length of all cloudlets assigned
to each VM in the VM List.
//sorting is done to find the underloaded and overloaded VMs.
ii. Sort the VMs on the basis of length and create a list UL_VmList of all
underloaded VMs and a list OL_VM_CloudletList of the Cloudlets in the
execution lists of the overloaded VMs.
iii. Call PSO_TS (OL_VM_CloudletList, UL_VmList) and transfer
Cloudlets from the overloaded VMs to the least loaded VMs in the sorted
list.
End While
Function PSO_TS (CloudletList, VmList)

// initialize the particle positions


1. Set the particle dimension equal to the size of the set of ready tasks T.
2. Initialize the particle positions randomly from PC = 1.....j and the velocities vi randomly.
3. For each particle, calculate its fitness value.
4. If the fitness value is better than the previous best pbest, set the current fitness value as
the new pbest.
5. Perform Steps 3 and 4 for all particles and select the best particle as gbest.
// update the velocity and positions
6. For all particles, calculate the velocity and update their positions.
// terminate if the stopping criterion is met
7. If the stopping criterion or maximum number of iterations is not satisfied, repeat from Step 3.
Function ACO_TS (CloudletList, VmList)

1. Sort the VMs on the basis of length, i.e. the total length of the cloudlets assigned to each VM.
2. Initialize pheromone value for each path between tasks and resources as follows:
3. For all VMs, i = 0 to sizeof(VmList) − 1, in the sorted VM list,
       compute the degree of imbalance $DI_i = \frac{T_i - T_0}{T_{avg}}$
       and set the pheromone value for the paths between each task and resource VM_i to c − DI_i,
       where c is some constant value.
4. Set optimal solution to NULL and place m ants on random resources.
5. Repeat for each ant
a. Put the starting resource of the first task in the tabu list and all other tasks in the allowed list.
b. Based on the probability function or transition rule, select the resource for all the remaining
tasks in the allowed list.
6. Compute the fitness of all ants, which in this case is the makespan time.
7. Replace the optimal solution with the ant's solution having the best fitness value if it is
better than the previous optimal solution.
8. Update both local and global pheromone.
9. Stop when the termination condition is met and print the optimal solution.

The degree of imbalance (DI) can be computed using Equations 3 and 4. It measures the imbalance
between the VMs [63].

$$T_i = \frac{TL\_Tasks_i}{Pe\_num_i \times Pe\_mips_i} \qquad (3)$$

$$DI = \frac{T_{max} - T_{min}}{T_{avg}} \qquad (4)$$

TL_Tasks_i refers to the total length of all tasks assigned to VM_i. T_max, T_min and T_avg refer to the
maximum, minimum and average of T_i among all VMs [63].

Resource/VM utilization can be measured using equation below [43]:
