
Accepted Manuscript

Resource provisioning and work flow scheduling in clouds using augmented Shuffled Frog Leaping Algorithm

Parmeet Kaur, Shikha Mehta

PII: S0743-7315(16)30146-0
DOI: http://dx.doi.org/10.1016/j.jpdc.2016.11.003
Reference: YJPDC 3556
To appear in: J. Parallel Distrib. Comput.
Received date: 25 February 2016
Revised date: 25 September 2016
Accepted date: 2 November 2016

Please cite this article as: P. Kaur, S. Mehta, Resource provisioning and work flow scheduling in clouds using augmented Shuffled Frog Leaping Algorithm, J. Parallel Distrib. Comput. (2016), http://dx.doi.org/10.1016/j.jpdc.2016.11.003

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.
Highlights

• Meta-heuristic algorithms explored for workflow scheduling in clouds
• An improvement proposed to the meta-heuristic algorithms
• An augmented variation of the Shuffled Frog Leaping Algorithm (ASFLA) formulated
• Obtained solutions are execution-cost optimal and also meet the deadline constraint
• ASFLA outperforms Particle Swarm Optimization and SFLA

Resource Provisioning and Work flow Scheduling in Clouds using Augmented Shuffled
Frog Leaping Algorithm

Parmeet Kaur, Shikha Mehta
Department of Computer Science
Jaypee Institute of Information Technology
NOIDA, India

Abstract:

The on-demand provisioning and resource availability of cloud computing make it ideal for executing scientific workflow applications: an application can start execution with a minimum number of resources and acquire further resources when required. However, workflow scheduling is an NP-hard problem, and meta-heuristic-based solutions have therefore been widely explored for it. This paper presents an augmented Shuffled Frog Leaping Algorithm (ASFLA) based technique for resource provisioning and workflow scheduling in the Infrastructure as a Service (IaaS) cloud environment. The performance of ASFLA has been compared with the state-of-the-art PSO and SFLA algorithms, and its efficacy has been assessed over several well-known scientific workflows of varied sizes using a custom Java-based simulator. The simulation results show a marked improvement in minimizing execution cost while meeting schedule deadlines.

Keywords: Cloud computing; resource provisioning; scheduling; scientific workflow; shuffled frog
leaping algorithm

1. Introduction

Scientific workflows are a technique to depict and manage activities and computations occurring in
scientific processes. These are used to describe applications that consist of a sequence of computational
tasks which have data- and control-flow dependencies between them [1]. A rapid increase in the size of
scientific data has resulted in an increase in the complexity of analysis as well as computations.
Therefore, in comparison to manual processing, the scientific workflow systems are more appropriate for
automated derivation of information and improved performance in complex processes. The workflow
systems are gaining importance and are present in a variety of complex scientific applications such as weather data analysis and modeling, structural chemistry, bioinformatics, image processing and medical applications [1]. These complex scientific applications may be executed on varied platforms
including local workstations, clusters, supercomputers and grids [2]. Each of these platforms provides
different levels of performance, cost and usability for these applications. Due to the data and compute
intensive nature of these applications, it is necessary that the scientific applications are executed in a cost
effective manner on high performance systems.
A significant solution which has emerged in recent years and garnered notable interest for scientific
applications is the cloud computing platform. The cloud provides scalable and flexible resources on an
elastic pay-per-use model which makes it a suitable candidate for the execution of compute or data
intensive applications [3]. Cloud-computing services are offered according to different models, namely,
Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS).
The Software as a Service (SaaS) model uses the internet to deliver applications that are accessed on the client's side while being managed by a third-party vendor. SaaS is widely used for conventional cloud applications such as social networking websites, web-based e-mail, and services that allow users to store and edit documents online. Another type of service model is Platform as a Service
(PaaS) that provides developers with a framework that they can use to develop or customize applications.
This allows quick and cost-efficient development, testing, and deployment of applications. A third type of
service model is Infrastructure as a Service (IaaS) which provides a pool of resources of varied types that
can be leased by users according to their requirements. The work presented in this paper focuses on task
scheduling and resource provisioning in Infrastructure as a Service (IaaS) clouds in particular. Such
clouds are directly useable by the scientific workflow applications due to the scalability and heterogeneity
of available resources. IaaS cloud providers offer heterogeneous virtualized servers or virtual machines
(VMs) which can be dynamically provisioned, allocated and managed using a pay-per-use model.
The feature of virtualization provided by cloud computing makes the deployment and execution of
scientific workflows easy on cloud. This results from the benefits offered by virtualization such as
customization of services according to user needs, fault tolerance, process migration and performance
isolation. Virtualization has enabled cloud computing platforms to dynamically allocate virtual machines as scalable Internet services (e.g., Amazon EC2/S3). However, the pay-per-use model necessitates the
efficient use of resources and optimization of the execution time of the application programs on the cloud.
Therefore, resource provisioning and task scheduling algorithms are essential for reducing the execution
time, besides improving the resource utilization. In general, determining an optimal task schedule for a set
of dependent tasks is an NP hard problem [4]. The heterogeneity and dynamicity of the cloud computing
environment further complicate the problem. The use of cloud computing for workflow applications is therefore challenging, and there exist very few attempts to utilize IaaS clouds for such applications. Since task scheduling is an NP-hard problem, heuristic and meta-heuristic-based evolutionary algorithms have been employed for it in the literature.
This paper explores the application of the meta-heuristic based Shuffled Frog Leaping Algorithm (SFLA)
to the resource provisioning and task scheduling problem for clouds. In 2003, Eusuff and Lansey [22]
presented the SFLA which aims to model and imitate the behavior of a group of frogs in search of food
located randomly on stones in a pond. It brings together the gains of the genetic-based memetic algorithm
as well as the social behavior-based Particle Swarm Optimization (PSO) algorithm. SFLA has been used to solve many complex optimization problems in areas such as resource distribution, multi-user detection in DS-CDMA communication systems, web document classification and clustering; however, it remains unexplored in the area of resource scheduling and provisioning in IaaS clouds. Further, we propose a new meta-heuristic-based solution which is an enhancement of the basic SFLA and is referred to as the Augmented Shuffled Frog Leaping Algorithm (ASFLA).
ASFLA aims to minimize the total execution cost which is taken as a sum of processing time for the
individual tasks of a workflow and the transfer time of the results of a task to its dependent tasks. Further,
the proposed algorithm ensures that the generated schedules meet the specified deadlines for the
execution of the application. The efficacy of the proposed algorithm has been evaluated with respect to
the Particle Swarm Optimization algorithm and SFLA using a variety of well-known scientific workflows
of different sizes. In another set of experiments, the impact of varying the number of generations as well as the number of resources was studied. The experimental results show that the proposed approach, ASFLA, performs better than PSO and SFLA in minimizing the total execution cost and meeting schedule deadlines.

The contributions of the paper are as follows:

• The application of PSO and SFLA to resource provisioning and workflow scheduling in clouds is explored.
• An augmented variation of the conventional SFLA (ASFLA) is formulated for better results.
• An improvement is proposed to the meta-heuristic algorithms so that the resultant solution is cost-optimal and also meets the deadline constraint.

The paper is organized as follows. Section 2 details the resource provisioning and task scheduling
problem for scientific workflows. Section 3 lists the related work existing in literature. An augmented
variation of the conventional SFLA is presented in Section 4. The proposed solution based on the ASFLA
is put forth in Section 5. The simulation experiments and the results obtained are discussed in Section 6, followed by the conclusion.

2. Problem Formulation

Scientific workflows comprise varying numbers of tasks, and their size may range from a handful of tasks up to a million. As the size of a workflow increases, it is beneficial to distribute the execution of its tasks across different computing resources in order to achieve a reasonable finishing time. The cloud computing environment offers a number of benefits for scientific application execution by providing an unrestricted number of resources. The scheduling algorithms determine a mapping between tasks and
resources in order to meet one or more optimization objectives. The frequently used optimization criteria
are minimization of application execution cost or time; minimization of makespan which denotes the time
at which the last task finishes; maximization of resource utilization; meeting the deadline constraints etc.
Apart from these, a scheduling algorithm is also required to consider the inter-task dependencies in a
workflow. A task can be executed only if its parent tasks in the workflow have finished execution.
Additionally, the cloud computing environment is beneficial for workflows since it enables a user to directly
provision the resources required for an application and schedule the computation tasks with a user-
controlled scheduler. This enables a user to allocate a resource only when it is required and once
allocated, the resource may be used subsequently for the execution of many tasks.
Thus, it is evident that efficient resource provisioning and task scheduling are a major challenge for achieving high performance in clouds [5]. Moreover, the task scheduling problem is NP-hard and does not lend itself to optimal solutions in a short time, given the large solution space in the cloud computing environment. Therefore, meta-heuristic-based solutions have been used to provide near-optimal solutions in a reasonably short time. The techniques of Ant Colony Optimization (ACO), Genetic
Algorithm (GA) and Particle Swarm Optimization (PSO) have been frequently used for developing
scheduling algorithms for cloud environment [6]. These techniques have been found to perform better
than the non-heuristic based approaches in determining efficient scheduling and resource provisioning
algorithms. However, there is scope for further refinement of the solutions; therefore, this work presents an augmented SFLA-based technique for resource provisioning and task scheduling in IaaS clouds.
The workflow is depicted as a directed acyclic task graph (N, E), where the node set N represents the set
of tasks of the application and E represents the set of weighted edges denoting the dependencies amongst
the tasks. Each task has an associated cost of execution on a particular VM. The set of resources R
denotes the different types of resources/VMs available to the user. The processing capacity of a VM is
specified in terms of floating point operations per second (FLOPS) and is available from the VM provider
or can be calculated to an approximation [7]. Based on the processing capacity of a VM, the execution
time of a task on a given VM can be calculated. A directed edge, E1,2 exists from a node N1 to node N2 if
N1 precedes N2 in the task graph, i.e. if the task represented by N1 is required to be completed before the
task represented by N2 can start execution. The weight on edge E1,2 denotes the cost of transfer of output
from N1 to the node N2. The node with no incoming edges represents the entry task and the node with no
outgoing edges represents the exit task. A deadline, Δ associated with each workflow denotes the time by
which the workflow should finish execution. Fig. 1 illustrates an example of a workflow.
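As a concrete illustration, the task-graph model above can be sketched in code. The following is a hypothetical Python representation (not part of the paper's Java simulator): tasks carry sizes in floating-point operations, edges carry transfer times, and a task's execution time on a VM follows from the VM's capacity in FLOPS.

```python
# Hypothetical sketch of the workflow model of Section 2: a DAG of tasks
# with edge weights for transfer time, and execution time derived from
# task size (FLOP) and VM capacity (FLOPS). All names are illustrative.

class Workflow:
    def __init__(self, tasks, edges, deadline):
        self.tasks = tasks        # {task_id: size in FLOP}
        self.edges = edges        # {(parent, child): transfer time}
        self.deadline = deadline  # the deadline Δ of the workflow

    def parents(self, task):
        # A task may start only after all of its parent tasks have finished.
        return [p for (p, c) in self.edges if c == task]

def execution_time(task_size_flop, vm_capacity_flops):
    # Execution time of a task on a given VM, from the VM's capacity.
    return task_size_flop / vm_capacity_flops

wf = Workflow(tasks={1: 8e9, 2: 4e9, 3: 4e9},
              edges={(1, 2): 1.5, (1, 3): 2.0},
              deadline=60.0)
print(wf.parents(2))             # → [1]
print(execution_time(8e9, 2e9))  # → 4.0 (seconds on a 2 GFLOPS VM)
```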
The proposed solution approach combines the problems of resource provisioning and scheduling and uses
SFLA to solve them as an optimization problem. The heuristics for resource provisioning and scheduling
may have different optimization objectives. This work focuses on finding a schedule to execute a
workflow on IaaS computing resources such that the total execution cost is minimized and the specified
deadline is met. Fig. 2 depicts an example of a schedule corresponding to the task graph of Fig. 1.
[Figure: a directed acyclic task graph of nine tasks (1-9); task 1 is the entry node and task 9 the exit node.]

Fig. 1 Example of a workflow

Fig 2 Schedule corresponding to a workflow

It is assumed that various types of VMs are made available by IaaS providers and that users can lease
them on demand based on the requirements. The algorithm generates a schedule which defines the
mapping between tasks and resources. Further, the solution also lists the number of VMs that should be
leased and the duration of time for which they should be leased. Thus, the objective of the proposed scheme is to minimize the Overall Execution Cost (OEC) while keeping the finish time within the deadline.
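The objective above can be sketched as a fitness function over a candidate task-to-VM mapping. This is a minimal, hypothetical Python sketch with illustrative helper names; transfer time is counted only when dependent tasks run on different VMs, and deadline violations are penalized.

```python
# Sketch of the Overall Execution Cost (OEC) objective: processing time
# plus inter-task transfer time, with a penalty past the deadline.

def overall_execution_cost(mapping, task_size, vm_capacity, transfer):
    # mapping: {task: vm}; task_size: FLOP per task;
    # vm_capacity: FLOPS per VM; transfer: {(parent, child): time}
    processing = sum(task_size[t] / vm_capacity[mapping[t]] for t in mapping)
    # Transfer cost applies only when dependent tasks sit on different VMs.
    comm = sum(w for (p, c), w in transfer.items() if mapping[p] != mapping[c])
    return processing + comm

def fitness(mapping, task_size, vm_capacity, transfer, finish_time, deadline):
    oec = overall_execution_cost(mapping, task_size, vm_capacity, transfer)
    # A large penalty keeps deadline-violating schedules from being selected.
    return oec if finish_time <= deadline else oec + 1e9

m = {1: "vm1", 2: "vm2"}
print(overall_execution_cost(m, {1: 4e9, 2: 2e9},
                             {"vm1": 2e9, "vm2": 1e9}, {(1, 2): 3.0}))
# → 7.0  (2.0 s + 2.0 s processing + 3.0 s transfer)
```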
The proposed solution has been designed for applications with strict deadline constraints. Therefore, the
solution is expected to perform best in cloud environments where it can be ensured that VMs belonging to
different users do not interfere with each other. Many existing cloud providers allow single tenancy or
virtual private clouds or creation of dedicated instances. For instance, vCloud Air Dedicated Cloud 1 is a
single-tenant, physically isolated IaaS platform with a dedicated cloud management stack. This is
essentially like a private cloud instance in the public cloud and hence, provides additional flexibility by
assigning resources to separate virtual dedicated clouds with individual user access controls. Amazon
Virtual Private Cloud (Amazon VPC)2 is another example which lets the users provision a logically
isolated section of the Amazon Web Services (AWS) cloud. Besides, it allows the creation of a hardware
Virtual Private Network (VPN) connection between a corporate datacenter and its VPC so that the AWS
cloud can be utilized as an addition to the corporate datacenter. Amazon Web Services also allows the
option of launching dedicated Instances which run on hardware dedicated to a single customer for
additional isolation. Dedicated Instances ensure that no other companies are running on the same physical
host, thus eliminating the interference from other users. A tenancy attribute of a VPC can be set to
‘dedicated’ to ensure that all instances launched in the VPC will run on single-tenant hardware, instead of
the default multi tenant value. Similarly, Microsoft Azure3 presents the opportunity of setting up a VPC
using Virtual Networks. Our proposed solution can also be used for applications where regulatory and
security requirements prohibit physical sharing of hardware.

3. Related Work
The problem of scheduling workflows on distributed systems is NP-hard by reduction from the multiprocessor scheduling problem. Therefore, a polynomial-time optimal solution to the problem is not possible. The conventional techniques for multiprocessor scheduling either provide a globally optimal solution after a long computation time or quickly produce a solution that is not a global optimum.

Footnotes:
1 http://vcloud.vmware.com/au/service-offering/dedicated-cloud
2 https://aws.amazon.com/vpc/
3 https://azure.microsoft.com/en-in/services/virtual-network/

This has led to
the development of various sub-optimal solutions based on heuristics and meta-heuristics for workflow
scheduling in systems such as grid and cloud computing. Of these, the solutions for grids assume that a limited set of computing resources is available; the aim of a scheduling algorithm, in general, is to minimize the time taken for the application's execution, and the cost of execution is not a concern. In comparison, the focus of workflow scheduling algorithms for clouds is on optimizing
the cost or meeting the execution deadlines. Algorithms such as the genetic algorithms (GA), Ant Colony
Optimization (ACO) and Particle Swarm Optimization (PSO) have been utilized for improved solutions
of the problem. However, the results are constrained by efficiency; hence, this paper presents an augmented SFLA for task scheduling.
Two genetic algorithms using additional heuristics have been presented in [8] for reducing the complexity
of the scheduling algorithms and to enhance the system performance. One of the genetic algorithms
applies two fitness functions; the first to minimize the total execution time and the second for load
balancing. The second genetic algorithm utilizes a task duplication technique in order to reduce the
communication overhead between the processors. The algorithms have been found to perform better than
the traditional algorithms.
Workflow scheduling has also been explored in the literature for computing grids. A recent paper [9] introduced three novel techniques for the task scheduling problem in grids. The techniques utilize Directed Search Optimization (DSO): first, task scheduling is framed as an optimization problem to be solved directly by DSO; subsequently, DSO is used to train a three-layer Artificial Neural Network (ANN) and a Radial Basis Function Neural Network (RBFNN). The DSO-trained networks are utilized for task scheduling and achieve better performance than other existing algorithms.
A workflow scheduling algorithm based on dynamic critical path (DCP) [10] establishes an efficient
mapping between tasks and resources by calculating the critical path in the workflow task graph at every
step. A higher priority is given to that task in the critical path which is expected to complete earlier. The
algorithm has been found to be effective for schedule generation in most types of workflows for grids.
An ant colony optimization (ACO) algorithm for scheduling of large-scale workflows is proposed in [11].
The algorithm focuses on the users’ Quality of Service (QoS) parameters or preferences, minimum QoS
thresholds and QoS constraints for a given application. The algorithm aims to determine a solution that
optimizes the QoS parameters while meeting the QoS constraints.
The work in [12] considers the execution cost of applications on grids as the QoS parameter while
scheduling workflows on grids. A budget constraint based scheduling algorithm based on a genetic
algorithm is presented. The algorithm minimizes the time taken for execution while adhering to a
specified budget. The algorithm is aimed at the use of Utility Grids for services over a secure and shared
world-wide network based on a pay-per-use model. A Markov decision process has been used for faster
convergence of the genetic algorithm in case of a very low budget.
A variation of the PSO, a rotary chaotic particle swarm optimization (RCPSO) algorithm, is presented in
[13] for the trustworthy scheduling of workflows in grids. The authors discuss that apart from time and
cost constraints, the issues of security, reliability and availability need to be addressed. The presented
RCPSO algorithm optimizes the scheduling in a large-scale grid with a number of resources. The results
show that RCPSO performs better than GA, ACO and PSO in trustworthy scheduling of workflows in
grids.
The solutions described above depict the challenges and some solutions for workflow scheduling.
However, the performance of the algorithms and their applicability in the dynamic and heterogeneous
cloud environment is still questionable. There have been some attempts at exploring the meta-heuristic
based algorithms for scheduling in clouds.
A dynamic method for workflow scheduling on clouds [14] is presented with an objective to minimize the
execution cost based on the pricing model of the cloud. Various types of VMs may be available to be
leased on demand at different costs. The solution reduces the cost but is not a near-optimal solution.
Another recent work on workflow ensemble developed for clouds is presented by Malawski et al. [15]. A
range of dynamic and static algorithms for workflow scheduling on clouds have been presented in [15]
with the aim to maximize the number of executed workflows, while meeting the deadline and budget
constraints. The algorithms have considered the delays involved in leasing the cloud based VMs and the
solutions consider variations in tasks’ estimated execution time. However, the algorithms do not consider
the heterogeneity of IaaS clouds as all VMs are assumed to be of the same type.
A static algorithm for scheduling a single workflow instance on an IaaS cloud is presented in [16]. The
algorithm considers the critical paths of a workflow and it takes into account the heterogeneity of VMs
and the cost models of clouds. The objective of the algorithm is to minimize the execution cost
considering the deadline of application execution and the availability of resources. However, they do not
have a global optimization technique in place capable of producing a near-optimal solution; instead, they
use a task level optimization and hence fail to utilize the whole workflow structure and characteristics to
generate a better solution. Other authors have used PSO to solve the workflow scheduling problem.
A PSO based algorithm for minimizing the cost of execution of a single workflow as well as load
balancing is presented in [17]. The algorithm assumes the availability of a fixed set of VMs and is similar
in approach to the algorithms for grids.
A variation of PSO, the Revised Discrete Particle Swarm Optimization (RDPSO) is proposed for
scheduling applications among cloud services in [18]. The algorithm considers the cost of computation as
well as the cost of data transmission according to a cloud model. The work aims to minimize the makespan or the cost. However, a fixed set of initial VMs is assumed, thereby not making use of the elasticity of IaaS clouds.
The authors of [19] focus on the requirement for robustness for the scheduling problem in clouds. The
algorithm schedules workflow tasks on heterogeneous Cloud resources with the objective of minimization
of makespan and cost. However, robustness is, in general, achieved by introducing redundancy into the system; accordingly, the presented algorithm increases the robustness of the schedule only with a corresponding increase in budget.
The approach of [20] has been designed to handle the specific characteristics of cloud computing such as
the availability and elasticity of heterogeneous resources. A PSO based strategy for resource provisioning
and scheduling for scientific workflows on IaaS clouds has been presented. The approach aims to
minimize the total cost of execution while adhering to the deadline constraints of the workflow. The
algorithm has been tested on some well-known scientific workflows of varied sizes. The simulation
experiments demonstrate that the PSO based approach performs better than the other contemporary
algorithms.
A taxonomy and description of the scheduling problem in the cloud computing is presented in [6].
Further, a detailed survey of the evolutionary approaches to the problem is discussed as per the taxonomy.
The authors state that not much research has been undertaken in this area and that a lot can be done to improve the state of the art. A comprehensive survey of metaheuristic-based scheduling techniques for the cloud computing environment is available in [21]. The authors support the use of metaheuristics for finding suboptimal solutions in a short period of time in the clouds. The authors analyze algorithms based on the
techniques of Ant Colony Optimization (ACO), Genetic Algorithm (GA), Particle Swarm Optimization
(PSO), League Championship Algorithm (LCA) and BAT algorithm.
It has been observed that there is scope for improvement in the field of scheduling for clouds in order to
enhance the quality of solutions and the speed of convergence. Workflow scheduling in clouds is a
current area of research open to novel and efficient solutions. Recently, a metaheuristic algorithm, SFLA
has been established as an efficient algorithm in various applications such as clustering [23], evolution of
queries [24], sequencing optimization [25] and recommender systems [26]. Hence, this paper aims to assess the suitability of SFLA for resource provisioning and workflow scheduling in IaaS clouds. To further enhance the performance of SFLA, a new algorithm, ASFLA, has been proposed so that the resultant solution is closer to cost-optimal and also meets the deadline constraint.
4. Augmented Shuffled Frog Leaping Algorithm (ASFLA)

Nature has been a rich source of inspiration for the development of computational tools and techniques for solving complex problems. It has guided the research community to observe and learn from the intelligent mechanisms it has evolved, such as the marching of ants in columns, the flocking of birds, the waggle dance of the honeybee, the nest building of social wasps and the schooling of fish. Research in this field has led to the development of search methods, popularly known as nature-inspired techniques, that mimic natural biological evolution to provide solutions to intricate problems. The shuffled frog leaping algorithm (SFLA) is a recent algorithm that integrates the benefits of both the genetic-based memetic algorithm and the social behaviour-based PSO algorithm. The efficacy of SFLA has been established in a wide variety of optimization problems such as clustering, sequencing and recommender systems.
In SFLA, the population consists of a group of frogs searching for food in a pond (Fig. 3). The search for food incorporates two alternating processes: intra-group communication of frogs within a memeplex for local exploration, and inter-group communication between frogs belonging to different memeplexes for global evolution. In conventional SFLA, as shown in Fig. 4, an initial population P (of size n) of frogs Fi is generated randomly. After computing the fitness of all initial solutions, the whole population is sorted in descending order of fitness. Subsequently, all frogs are distributed into m memeplexes MP1, MP2, MP3, …, MPm as follows:

Fig 3 Search Space with Memeplexes in SFLA

 
MPd = { Fkd | Fkd = Fd+m(k−1), k = 1, 2, …, n },  d = 1, 2, …, m    (1)
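The round-robin partition of Eq. (1) — after sorting, memeplex d receives frogs d, d + m, d + 2m, and so on — can be sketched as follows (an illustrative Python fragment, not taken from the paper):

```python
# Sketch of the partition in Eq. (1): after sorting the frogs in
# descending order of fitness, memeplex d gets frogs d, d+m, d+2m, ...

def partition_into_memeplexes(sorted_frogs, m):
    # sorted_frogs must already be ordered best-first.
    return [sorted_frogs[d::m] for d in range(m)]

frogs = ["f1", "f2", "f3", "f4", "f5", "f6"]  # best first
print(partition_into_memeplexes(frogs, 3))
# → [['f1', 'f4'], ['f2', 'f5'], ['f3', 'f6']]
```

Note that each memeplex thereby receives a mixture of strong and weak frogs.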

Within each memeplex, for exploitation, the position of the worst solution (Xw) is improved by adjusting it towards the local best solution (Xlb), as shown in equations 2 and 3:

Change in frog position: Dd = rand() × (Xlb − Xw), with Dmin ≤ Dd ≤ Dmax    (2)
New position: Xw = Xw + Dd    (3)

If the new position is an improvement, it replaces the worst solution; otherwise the worst solution is directed to enhance its position according to the global best frog (Xgb), as in equations 4 and 5:

Change in frog position: Dd = rand() × (Xgb − Xw), with Dmin ≤ Dd ≤ Dmax    (4)
New position: Xw = Xw + Dd    (5)
Algorithm: Shuffled Frog Leaping
1) Set the dimensions of frogs to d.
2) Initialize the population (P) of frogs (solutions) with random
positions. For each frog, compute fitness
3) Sort the population P in descending order of their fitness
4) Determine the fitness of global best frog (Xgb ) as fgb
5) Divide P into m memeplexes
6) For each memeplex m
a) Determine the fitness of the local best frog (Xlb) as flb and of the local worst frog (Xw) as fw
b) Try to improve the position of the worst frog using Eqs. (2) and (3) with respect to the local best frog. If fitness improves, update the position of the worst frog.
c) Else
d) Try to improve the worst frog's position using Eqs. (4) and (5) with respect to the global best frog (Xgb). If the position improves, update the position of the worst frog
e) Else
f) Replace the worst solution with a new randomly generated solution
7) End
8) Combine the evolved memeplexes
9) Sort the population P in descending order of their fitness
10) Check if termination is true
11) End
Fig 4: Conventional SFL algorithm Pseudo-code
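The outer loop of Fig. 4 can be summarized in runnable form. The sketch below is hypothetical: frogs are reduced to scalar positions for brevity, and evolve_memeplex stands in for steps 6a–6f.

```python
import random

# A compact sketch of the outer SFLA loop of Fig. 4 (illustrative names).

def sfla(init_population, fitness, m, generations, evolve_memeplex):
    population = list(init_population)
    for _ in range(generations):
        # Sort in descending order of fitness and record the global best.
        population.sort(key=fitness, reverse=True)
        x_gb = population[0]
        # Round-robin split into m memeplexes; evolve each independently.
        memeplexes = [population[d::m] for d in range(m)]
        evolved = [evolve_memeplex(mp, x_gb) for mp in memeplexes]
        # Shuffle: recombine the evolved memeplexes into one population.
        population = [frog for mp in evolved for frog in mp]
    population.sort(key=fitness, reverse=True)
    return population[0]

best = sfla([random.uniform(-10, 10) for _ in range(12)],
            fitness=lambda x: -abs(x - 3),      # optimum at x = 3
            m=3, generations=5,
            evolve_memeplex=lambda mp, gb: mp)  # identity placeholder
```

With a real evolve_memeplex (the worst-frog update of Eqs. (2)–(6)), repeated shuffling lets information discovered in one memeplex spread to the others.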

At this stage, if the position of the frog improves, it replaces the worst solution; otherwise, a new solution is generated at random to replace the least fit solution (equation 6):

New position: Xw = rand() × Dmax + rand() × Dmin    (6)

where Dmax and Dmin are, respectively, the maximum and minimum allowed changes in a frog's position, and rand() generates a random number between 0 and 1.
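Equations (2)–(6) together describe how a memeplex's worst frog is updated. A minimal sketch (scalar positions and a hypothetical fitness function where larger values are better, consistent with the descending sort above) might look like:

```python
import random

# Sketch of the worst-frog update of Eqs. (2)-(6); illustrative only.

def update_worst(x_w, x_lb, x_gb, fitness, d_min, d_max):
    # Eqs. (2)-(3): leap towards the local best; step clamped to [Dmin, Dmax].
    step = max(d_min, min(d_max, random.random() * (x_lb - x_w)))
    candidate = x_w + step
    if fitness(candidate) > fitness(x_w):
        return candidate
    # Eqs. (4)-(5): otherwise leap towards the global best.
    step = max(d_min, min(d_max, random.random() * (x_gb - x_w)))
    candidate = x_w + step
    if fitness(candidate) > fitness(x_w):
        return candidate
    # Eq. (6): otherwise replace with a randomly generated position.
    return random.random() * d_max + random.random() * d_min
```

The three branches mirror steps 6b, 6d and 6f of Fig. 4: local leap, then global leap, then random replacement.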
Thus, in SFLA only the worst solutions are directed to enhance their positions with respect to the locally/globally best solutions. However, due to the random generation of the initial population, even the global best frogs may not be truly global and may lead to local optima. Since the leaping action of frogs searching for food depends on their individual momentum, there may exist positions with more food close to the best frogs that can be reached by updating the momentum of the best frogs. This approach enables the best frogs to leap to new positions with more food in their vicinity. This technique of restructuring the ideas of the frogs, i.e., adapting the positions of the best solutions along neighborhood positions to improve the convergence of the algorithm, is incorporated in ASFLA as the re-establishment technique (Fig. 6).
1. Set the dimensions of frogs to d
2. Initialize the population (P) of frogs (solutions) with random positions. For each
frog, compute fitness
3. Sort the population P in descending order of their fitness
4. Determine the fitness of global best frog (Xgb ) as fgb
5. Divide P into m memeplexes
6. For each memeplex m
a. Determine the fitness of the local best frog (Xlb) as flb and of the local worst frog (Xw) as fw
b. In alternate iterations (i.e., periodicity n = 2), try to improve the position of the local best frog:
c. New solution Xn = re-establishment(Xlb)
d. If the fitness of the best frog improves, replace the frog with the new frog
e. Try to improve the position of the worst frog using Eqs. (2) and (3) with respect to the local best frog. If fitness improves, update the position of the worst frog.
f. Else
g. Try to improve the worst frog's position using Eqs. (4) and (5) with respect to the global best frog (Xgb). If the position improves, update the position of the worst frog
h. Else
i. Replace the worst solution with a new randomly generated solution
j. End
7. Combine the evolved memeplexes
8. Sort the population P in descending order of their fitness
9. Check if termination is true
10. End

Fig 5: Pseudo-code for Augmented SFL Algorithm

// Re-establishment (Xlb)

1. Begin
2. Compute the fitness of Xlb as fw
3. Select an incremental value d = a * rand()
4. For each dimension j in individual Xlb
   a. new_value(j) = original_value(j) + d
   b. Compute the new fitness of Xlb as fn
   c. If fitness fn is not better than fw then
   d.    new_value(j) = original_value(j) - d
   e. End if
   f. Compute the new fitness of Xlb as fn
   g. If fitness fn is not better than fw then
   h.    new_value(j) = original_value(j)
   i. End if
5. Next j
6. Return frog Xlb
7. End

Fig 6: Pseudo-code for Re-establishment

Thus, evolution in augmented SFLA is carried out along two dimensions: through re-establishment of the best frogs during memetic iterations and through the regular improvement of the positions of the worst frogs via social interaction. Accordingly, the algorithm has been modified as portrayed in Fig 5. Steps 1 to 5 remain the same as in SFLA. In step 6, within each memeplex, the best and the worst solutions are determined in every iteration. On alternate iterations within the memeplex, the best frogs attempt to re-establish themselves to enhance their positions, while the worst frogs endeavor to improve their positions through social interaction in all memetic iterations (Fig 5). Fig 6 depicts the re-establishment strategy adapted here to the task scheduling problem; the strategy may equally be used for any discrete or continuous optimization problem. For task scheduling, re-establishment searches for better positions lying in the neighborhood of the best-fitted solutions: if the fitness improves, the adapted solution is retained; otherwise, the best frog remains in its original position. In Fig 6, 'a' is a constant whose value is chosen to suit the problem.
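The re-establishment step of Fig 6 can be sketched in Python as follows. This is a minimal illustration assuming a minimization objective (lower cost is better); the fitness function is supplied by the caller and the constant `a` corresponds to the problem-dependent constant of Fig 6.

```python
import random

def reestablish(frog, fitness, a=2.0):
    """Re-establishment operator of Fig 6 (minimization): probe each
    dimension of the best frog with a step of +d, then -d, restoring the
    original value if neither move beats the frog's starting fitness."""
    frog = list(frog)
    f_orig = fitness(frog)            # fitness of Xlb before any move
    d = a * random.random()           # incremental value d = a * rand()
    for j in range(len(frog)):
        original = frog[j]
        frog[j] = original + d        # try moving dimension j up
        if fitness(frog) >= f_orig:
            frog[j] = original - d    # try moving dimension j down
            if fitness(frog) >= f_orig:
                frog[j] = original    # neither helped: restore
    return frog
```

Following the pseudo-code literally, every probe is compared against the frog's fitness before re-establishment began, so the returned frog is never worse than the original.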

5. Proposed Solution

The current solutions for resource provisioning and task scheduling have limited efficiency, and there is a need to explore better alternatives. With this objective, this paper presents an ASFLA-based approach for the resource provisioning and task scheduling problem in IaaS clouds. The proposed algorithm is an enhanced version of SFLA and provides better solutions than SFLA. Further, it produces schedules that meet the workflow's deadline constraint. This is in contrast to the basic meta-heuristic algorithms, which provide near-optimal solutions but have a limited capability to handle additional constraints.

5.1 Modeling the scheduling problem as an ASFLA problem

In order to model the scheduling problem as an ASFLA problem, the workflow and its tasks are represented as a frog. Each frog carries a meme consisting of d memotypes: the kth frog, F(k), is denoted as a vector of d memotype values, F(k) = (Mk1, Mk2, ..., Mkd). The number of memotypes (also referred to as the dimensions of the meme), i.e. d, is determined by the number of tasks in the workflow. Further, the number of available resources determines the search space in which the frog is allowed to move in search of food. Hence, the value of a coordinate lies between 0 and the number of resources or VMs available. The frog's position represents the assignment of tasks to resources. For example, the frog in Fig 7 is ten-dimensional and its position is represented by 10 coordinates, since it denotes a workflow with ten tasks. Each coordinate of the frog can take a value in the range 0-4 if 4 resources or VMs are available. Here, the value 3.2 of the seventh coordinate corresponds to task 7 and denotes that task 7 has been mapped to resource 3. In the subsequent discussion, the notation Mi refers to the ith coordinate of a frog and corresponds to task ti in the workflow.

Fig 7 Frog’s Position

Fig 8 Task to Resource mapping
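As an illustration of this encoding, the continuous coordinates can be decoded to a task-to-VM mapping by taking the integer part of each coordinate, consistent with the Fig 7 example (3.2 maps to resource 3). The coordinate values other than 3.2 below are hypothetical.

```python
import math

def decode(position, n_vms):
    """Map each continuous coordinate in [0, n_vms) to a VM index by
    taking its integer part (e.g., coordinate 3.2 -> resource 3)."""
    return [min(int(math.floor(c)), n_vms - 1) for c in position]

# A ten-dimensional frog as in Fig 7; values other than 3.2 are made up.
position = [0.7, 2.1, 3.9, 1.5, 0.2, 2.8, 3.2, 1.1, 0.4, 3.6]
print(decode(position, 4))  # -> [0, 2, 3, 1, 0, 2, 3, 1, 0, 3]
```

The seventh entry (value 3.2) decodes to VM 3, matching the mapping of task 7 described above.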


5.2 Schedule Generation

Obtaining a solution of the scheduling and resource provisioning problem using ASFLA involves generating a schedule in the form of a frog and subsequently computing the frog's fitness using a fitness function. The fitness function is based on the objective of the optimization problem. In this case, the proposed algorithm aims to minimize the overall execution cost of the workflow while adhering to the deadline constraint imposed by the application. We evaluate the Overall Execution Cost (OEC) of each schedule and use it as the fitness function, while the Overall Execution Time (OET) of a schedule is used to check the schedule against the deadline constraint. Cloud computing is characterized by the availability of practically infinite and heterogeneous resources. However, our algorithm assumes that initially v VMs are leased for the application, where v is equal to the number of tasks that may execute in parallel in the workflow. This prevents the search space from becoming too large while still reflecting the availability of sufficient resources to execute the tasks of the workflow in parallel.
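The paper does not detail how v, the number of tasks that may execute in parallel, is obtained. One plausible proxy, sketched below under that assumption, is the widest level of the workflow DAG computed from the dependency matrix Dep.

```python
from collections import Counter

def max_parallel_tasks(dep):
    """Estimate v as the size of the widest level of the DAG.
    dep[i][j] == 1 means task ti depends on task tj; tasks are assumed
    to be indexed in topological order (dependencies have lower index).
    This is a proxy: the true maximum antichain can be larger."""
    n = len(dep)
    level = [0] * n
    for i in range(n):
        preds = [j for j in range(n) if dep[i][j]]
        if preds:
            level[i] = 1 + max(level[j] for j in preds)
    return max(Counter(level).values())

# Diamond-shaped workflow: t0 -> {t1, t2} -> t3
dep = [[0, 0, 0, 0],
       [1, 0, 0, 0],
       [1, 0, 0, 0],
       [0, 1, 1, 0]]
print(max_parallel_tasks(dep))  # -> 2: t1 and t2 can run concurrently
```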
We assume that the set of tasks in the workflow is given by an array T. The time taken by each task ti on each resource in the set Initial_VMs is calculated, to an approximation, using the method of [7]. These times are represented by an n × v matrix, Task_Exe_Time, where n is the number of tasks in the workflow and v is the number of VMs in Initial_VMs. Further, a task in the workflow may depend on other tasks for their outputs. Therefore, the dependencies of the workflow are represented by an n × n matrix, Dep, such that Dep[i][j] = 1 if task ti depends on task tj, and 0 otherwise. The time taken to transfer the output of a task to its dependent task(s) is stored in the n × n matrix, Task_Transfer_Time.
The algorithm for mapping a frog to a schedule evaluates the fitness of the frog by iterating through each memotype. The ith memotype, Mki, corresponds to task ti, and its value corresponds to a VM, given by Initial_VMs[Mki]. The time at which a task begins depends on the finishing times of the tasks on which it depends, as well as on the time at which the required VM becomes available.
The data structures used by the algorithm are presented in Table 1:

Symbol               Description
OEC                  Overall Execution Cost
OET                  Overall Execution Time
Initial_VMs          Set of initially leased VMs
Task_Exe_Time        Matrix of the time taken to execute each task on each resource
Task_Transfer_Time   Matrix of the time taken to transfer the result of one task to another, if a dependency exists between them
Leased_VMs           Set of VMs leased by the application
Dep                  Dependency matrix; Dep[i][j] = 1 if task ti depends on task tj, 0 otherwise
BTi                  Begin Time of task ti
FTi                  Finish Time of task ti
LBT_VM               Lease Begin Time of a VM
LFT_VM               Lease Finish Time of a VM

Table 1: Data structures used by the algorithm
Algorithm for Schedule Generation from a frog:

Input:
  T: Set of d workflow tasks
  Initial_VMs: Set of initial VMs
  F(k) = (Mk1, Mk2, ..., Mkd)

1. Initialize OEC = 0, OET = 0, Leased_VMs = Φ, flag = 0
2. Calculate Task_Exe_Time, Task_Transfer_Time
3. Calculate Dep
4. For i = 0 to (d-1)
   i.    ti = T[i]
   ii.   VM(i) = Initial_VMs[Mi]
   iii.  flag = 0
   iv.   For j = 0 to (d-1)
            If Dep[i][j] == 1, { flag = 1; BTi = max(BTi, FTj, LFT_VM(VM(i))) }
         If flag == 0, BTi = LFT_VM(VM(i))
   v.    exe_time = Task_Exe_Time(ti, VM(i))
   vi.   trans_time = 0
         For j = 0 to (d-1)
            If Dep[j][i] == 1 and VM(j) ≠ VM(i)
               trans_time = trans_time + Task_Transfer_Time[i][j]
   vii.  Tot_time(i, VM(i)) = exe_time + trans_time
   viii. FT(i, VM(i)) = BTi + Tot_time(i, VM(i))
   ix.   If VM(i) ∉ Leased_VMs, add it and set LBT_VM(VM(i)) = BTi
   x.    LFT_VM(VM(i)) = BTi + Tot_time(i, VM(i))
5. For each VM c ∈ Leased_VMs
   i.    OEC = OEC + ((LFT_VM[c] - LBT_VM[c]) * Cost[c])
   ii.   If LFT_VM[c] > OET then OET = LFT_VM[c]
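One possible Python rendering of the schedule-generation algorithm above is sketched here. It assumes that tasks are indexed in topological order and that the lease finish time of a not-yet-leased VM is 0; both are assumptions not spelled out in the pseudo-code.

```python
def evaluate_schedule(frog, exe_time, trans_time, dep, cost):
    """Decode the frog, derive per-task begin/finish times and VM lease
    windows, and return (OEC, OET). exe_time[i][v]: runtime of task ti
    on VM v; trans_time[i][j]: time to move ti's output to tj;
    dep[i][j] == 1 if ti depends on tj; cost[v]: price per time unit."""
    d = len(frog)
    vm = [int(f) for f in frog]                # memotype -> VM index
    FT = [0.0] * d                             # task finish times
    lease_begin, lease_finish = {}, {}
    for i in range(d):
        v = vm[i]
        # begin after the VM is free and after all parents have finished
        BT = max([lease_finish.get(v, 0.0)] +
                 [FT[j] for j in range(d) if dep[i][j]])
        # charge transfer of ti's output to dependents on other VMs
        trans = sum(trans_time[i][j] for j in range(d)
                    if dep[j][i] and vm[j] != v)
        FT[i] = BT + exe_time[i][v] + trans
        lease_begin.setdefault(v, BT)          # lease starts at first use
        lease_finish[v] = FT[i]
    OEC = sum((lease_finish[v] - lease_begin[v]) * cost[v]
              for v in lease_begin)
    return OEC, max(lease_finish.values())
```

For a two-task chain (t1 depending on t0, both on a single VM costing 10 per time unit, runtimes 2 and 3), the function returns an OEC of 50.0 and an OET of 5.0.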

The execution of the algorithm yields the VMs that are required to be leased for the workflow. Each task ti is associated with a VM, VM(i), whose lease start and finish times are given as LBT_VM(VM(i)) and LFT_VM(VM(i)) respectively. Thereafter, the algorithm computes the Overall Execution Cost (OEC) and Overall Execution Time (OET) of the current solution. Thus, the schedule corresponding to a particular frog is given by the task-to-VM mapping along with the associated begin and finish times of the VMs.
Finally, ASFLA and the schedule generation algorithm are combined to generate a near-optimal schedule as described next. In ASFLA, the fitness of a frog is computed in terms of the OEC, which is evaluated by first generating the schedule using the algorithm given above. If the OET of the schedule corresponding to a frog exceeds the deadline, ASFLA replaces the frog with a new randomly generated frog. In this manner, ASFLA retains only the schedules that meet the deadline constraint.
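The replacement of deadline-violating frogs can be sketched as follows; `evaluate` stands for the schedule-generation procedure returning (OEC, OET), and the `max_tries` safeguard is an addition not present in the paper.

```python
import random

def feasible_random_frog(evaluate, dim, n_vms, deadline, max_tries=1000):
    """Generate a replacement frog as ASFLA does when a schedule's OET
    exceeds the deadline: keep drawing random frogs until one whose
    schedule meets the constraint is found."""
    for _ in range(max_tries):
        frog = [random.uniform(0, n_vms) for _ in range(dim)]
        oec, oet = evaluate(frog)
        if oet <= deadline:
            return frog, oec
    raise RuntimeError("no deadline-meeting frog found")
```

Because infeasible frogs are discarded at generation time, every frog that survives into the population carries a deadline-respecting schedule.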

6. Simulation Results

Meticulous experiments were conducted to evaluate the performance of the proposed approach. The performance was evaluated using a custom Java simulator and a range of well-known scientific workflows: Montage, LIGO and CyberShake (Fig 9). Each simulation experiment was performed 25 times and the average results are reported. The element of randomness introduced in Section 4 for ASFLA was also implemented in the basic SFLA and PSO algorithms. As discussed in a previous section, the basic PSO and SFLA may lack constraint-handling ability; therefore, for a fair comparison with ASFLA, this improvement was made in the considered algorithms, since it leads to schedules that meet the desired deadline constraint.

Fig 9 Workflows: a)Montage b) CyberShake c) LIGO

For all the experiments, heterogeneous resources were assumed; therefore, the execution time of a task varies from one resource to another. The deadline of each workflow has been set as the average of its minimum and maximum possible execution times. For calculating the slowest execution time, a single VM of the least cost is leased and all the workflow tasks are executed on it. The fastest execution time is calculated by using one VM of the fastest type for each workflow task.
The focus of the experiments was to analyze the behavior of the algorithms in the following scenarios:
 Sensitivity of ASFLA with respect to periodicity parameter ‘n’
 Effect of size of task graph on the Overall Execution Cost(OEC)
 Analysis of Converging time of ASFLA
 Effect of number of processors on the Overall Execution Cost(OEC)
 Evaluation of scalability of the algorithms by varying the size of task graph

Table 2 lists the parameters used to control the behavior of the various algorithms.

Algorithm   Population size   Number of generations   Control parameters
PSO         100               50                      c1 = c2 = 2
SFLA        100               50                      No. of memeplexes = 4, memetic iterations = 5
ASFLA       100               50                      No. of memeplexes = 4, memetic iterations = 5, n = 2, a = 2

Table 2: Parameters used for the algorithms


Sensitivity of ASFLA with respect to periodicity parameter 'n':
The periodicity parameter 'n' controls the frequency of applying the re-establishment strategy, i.e., the number of times the best frogs attempt to improve their positions. Accordingly, the best frogs re-establish themselves periodically. In order to identify the best value of the control parameter, a study was performed over the values n = 1, 2, 5, 10, 20, 30, 40, 50; n = 1 implies that the best frogs try to reposition themselves in every iteration. Experiments were performed to compute the Overall Execution Cost (OEC) for number of tasks = 100, resources/VMs = 3 and generations = 250. It can be observed from the results (Table 3) that the minimum OEC is obtained for n = 2. Therefore, n = 2 was used for ASFLA in all other studies.

Periodicity (n)                1      2      5      10     20     30     40     50
Overall Execution Cost (OEC)   8000   7940   8000   7980   7980   7980   7980   7980

Table 3: Performance of ASFLA for the periodicity parameter (n)

Effect of size of task graph on the Overall Execution Cost (OEC): This study was performed on three workflows from different scientific areas: Montage, LIGO and CyberShake. These workflows have diverse structures and dissimilar data and computational characteristics. The Montage workflow is an astronomy application used to produce custom mosaics of the sky from a set of input images; the majority of its tasks are I/O intensive and do not require high CPU processing capacity. The LIGO workflow is from the field of physics, with the objective of detecting gravitational waves; it consists mostly of CPU-intensive tasks requiring large memory. CyberShake is used to characterize earthquake hazards by generating synthetic seismograms and is a data-intensive workflow with high CPU and memory requirements. Along with the scientific workflows, two workflows were generated randomly: one with a small number of tasks (tasks = 9) and another with a large number of tasks (tasks = 100). Since heterogeneous resources have been assumed, the costs of the cloud resources were taken as 10, 20 and 30 units in this simulation. The other parameters used in the experiments are listed in Table 4.

Scientific Workflow   No. of tasks   No. of processors/resources
CyberShake            20             3
Montage               25             3
LIGO                  40             3
Random Workflow1      9              3
Random Workflow2      100            3

Table 4: Workflow parameters

Table 5 depicts the OEC obtained with PSO, SFLA and ASFLA. It can be clearly observed that, for both the scientific workflows and the random workflows, SFLA lowers the overall execution cost by 41% on average compared to PSO, while ASFLA improves the OEC by approximately 49% on average with respect to PSO.

Workflow           No. of   OEC                        Improvement of SFLA   Improvement of ASFLA
                   tasks    PSO      SFLA     ASFLA    w.r.t. PSO (%)        w.r.t. PSO (%)
Random Workflow1   9        1840     1240     1200     33                    35
CyberShake         20       9140     5940     4760     35                    48
Montage            25       19060    11020    7660     42                    60
LIGO               40       17860    10110    9660     43                    46
Random Workflow2   100      17570    8070     7940     54                    55

Table 5: OEC for PSO, SFLA and ASFLA
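The improvement columns of Table 5 follow directly from the raw OEC values; a quick arithmetic check:

```python
# Reproduce the improvement columns of Table 5 from the raw OEC values.
rows = {  # workflow: (PSO, SFLA, ASFLA)
    "Random Workflow1": (1840, 1240, 1200),
    "CyberShake":       (9140, 5940, 4760),
    "Montage":          (19060, 11020, 7660),
    "LIGO":             (17860, 10110, 9660),
    "Random Workflow2": (17570, 8070, 7940),
}
sfla_gain = [100 * (p - s) / p for p, s, _ in rows.values()]
asfla_gain = [100 * (p - a) / p for p, _, a in rows.values()]
print(round(sum(sfla_gain) / len(sfla_gain)))    # 41 (% average, SFLA vs PSO)
print(round(sum(asfla_gain) / len(asfla_gain)))  # 49 (% average, ASFLA vs PSO)
```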

Analysis of Converging time of ASFLA: The focus of this experiment was to analyze the behavior of the algorithms over the number of generations. A random workflow of 50 tasks was executed using 4 VMs, with the maximum number of generations varied over 50, 100, 150, 200 and 250.

Fig 10 OEC vs Number of generations

Figure 10 depicts the OEC values of each algorithm over the varied number of generations. The results show that SFLA performs better than PSO, and that ASFLA significantly outperforms both PSO and SFLA for all generation counts. Moreover, an increase in the number of generations does not lead to a marked reduction in the OEC for any of the algorithms, so there is no need to increase the complexity of the algorithms by escalating the number of generations. Therefore, in the subsequent experiments, the number of generations has been fixed at 50.

Effect of number of processors on the Overall Execution Cost (OEC): Rigorous experiments were carried out to study the effect of varying the number of resources on workflows of different sizes. For each workflow, the number of processors was varied from 1 to 10.

Size of task graph: 50, 100, 150, 200
No. of processors: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
Maximum number of iterations: 50

The results of the experiments are shown in Figure 11.
It can be observed from the results that, as expected, the cost of execution increases with the number of resources utilized for workflow execution. The results also show that PSO and SFLA behave inconsistently as the number of resources varies: in some cases PSO performs better than SFLA, and in others the trend is reversed. ASFLA, however, consistently outperforms both PSO and SFLA. Moreover, the OEC does not vary greatly for ASFLA as the number of processors increases. During the experiments, it was noticed that the overall execution time decreased as more resources were utilized for execution; those results have not been included, since all the resultant schedules meet the deadline constraints, which was the objective of the presented work. It can also be inferred from these results that increasing the number of resources does not necessarily decrease the OEC.

[Figure 11 comprises four panels plotting the OEC against the number of processors for workflows of 50, 100, 150 and 200 tasks, comparing PSO, SFLA and ASFLA.]

Fig 11 OEC vs the number of processors utilized

Evaluation of scalability of the algorithms by varying the size of task graph: This experiment analyzed the scalability of the considered algorithms using very large workflows. We considered 100 processors for executing random workflows with sizes of 250, 300, 500 and 1000 tasks. It was observed (Fig 12) that as the size of the workflow increases, PSO and SFLA give comparable performance. In comparison, ASFLA provides substantial performance benefits, reducing the OEC by an average of 77% relative to PSO and SFLA. Thus, it can be established that ASFLA is highly scalable and therefore appropriate for executing large workflow applications in IaaS clouds.

Fig 12 OEC for Large Sized Workflows

Observations regarding Robustness and Overhead: The robustness of the proposed algorithm can be inferred from the fact that it succeeds in meeting the deadline constraints for workflows with varied numbers of tasks and resources. Moreover, ASFLA reduces the overall execution cost by up to 77% compared to the other considered algorithms, PSO and SFLA. However, this reduction comes at the overhead of increased running time of the algorithm itself. It can therefore be inferred that SFLA is more appropriate for problems where a low running time is the priority, while ASFLA yields better results where minimizing the execution cost is the prime concern.

7. Conclusion

This paper explores the use of meta-heuristic optimization techniques for determining a resource provisioning and scheduling strategy for diverse workflows on Infrastructure as a Service (IaaS) clouds. The objective is to minimize the execution cost of an application while meeting its specified deadline. A new Augmented Shuffled Frog Leaping Algorithm (ASFLA), based on the meta-heuristic optimization technique SFLA, has been presented for solving the problem. A rigorous comparison of the performance of ASFLA, SFLA and PSO has been conducted for a variety of workflows in a heterogeneous environment. An element of randomness has been introduced in all the algorithms so that they all produce near-optimal schedules that also meet the deadline constraint. The experimental analysis shows that ASFLA outperforms the other algorithms in reducing the overall execution cost of the considered workflows. Future work includes evaluation of the proposed approaches on real cloud systems.

References

1. Gideon Juve, Ann Chervenak, Ewa Deelman, Shishir Bharathi, Gaurang Mehta, and Karan Vahi, Characterizing and profiling scientific workflows, Future Gener. Comput. Syst. 29, 3 (2013), 682-692.
2. Gideon Juve and Ewa Deelman, Scientific workflows in the cloud, in Grids, Clouds and Virtualization, (2011), 71-91.
3. R. Buyya, C.S. Yeo, S. Venugopal, J. Broberg, and I. Brandic, Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility, Future Gener. Comput. Syst. 25, 6 (2009), 599-616.
4. J.D. Ullman, NP-complete scheduling problems, J. Comput. Syst. Sci. 10, 3 (1975), 384-393.
5. Cui Lin and Shiyong Lu, Scheduling scientific workflows elastically for cloud computing, in IEEE International Conference on Cloud Computing, (2011), 746-747.
6. Zhi-Hui Zhan, Xiao-Fang Liu, Yue-Jiao Gong, Jun Zhang, Henry Shu-Hung Chung, and Yun Li, Cloud computing resource scheduling and a survey of its evolutionary approaches, ACM Comput. Surv. 47, 4, Article 63 (2015), 33 pages.
7. S. Ostermann, A. Iosup, N. Yigitbasi, R. Prodan, T. Fahringer, and D. Epema, A performance analysis of EC2 cloud computing services for scientific computing, in Cloud Computing, Berlin, Germany: Springer, (2010), 115-131.
8. Fatma A. Omara and Mona M. Arafa, Genetic algorithms for task scheduling problem, J. Parallel Distrib. Comput. 70, 1 (2010), 13-22.
9. Binodini Tripathy, Smita Dash, and Sasmita Kumari Padhy, Dynamic task scheduling using a directed neural network, J. Parallel Distrib. Comput. 75 (2015), 101-106.
10. M. Rahman, S. Venugopal, and R. Buyya, A dynamic critical path algorithm for scheduling scientific workflow applications on global grids, in IEEE International Conference on e-Science and Grid Computing, (2007), 35-42.
11. Wei-Neng Chen and Jun Zhang, An ant colony optimization approach to a grid workflow scheduling problem with various QoS requirements, IEEE Trans. Syst., Man, Cybern. C, Appl. Rev. 39, 1 (2009), 29-43.
12. J. Yu and R. Buyya, A budget constrained scheduling of workflow applications on utility grids using genetic algorithms, in Proc. 1st Workshop on Workflows in Support of Large-Scale Science, (2006), 1-10.
13. Q. Tao et al., A rotary chaotic PSO algorithm for trustworthy scheduling of a grid workflow, Comput. Oper. Res. 38, 5 (2011), 824-836.
14. M. Mao and M. Humphrey, Auto-scaling to minimize cost and meet application deadlines in cloud workflows, in Proc. Int. Conf. High Perform. Comput., Netw., Storage Anal., (2011), 1-12.
15. M. Malawski, G. Juve, E. Deelman, and J. Nabrzyski, Cost- and deadline-constrained provisioning for scientific workflow ensembles in IaaS clouds, in Proc. Int. Conf. High Perform. Comput., Netw., Storage Anal., vol. 22 (2012), 1-11.
16. S. Abrishami, M. Naghibzadeh, and D. Epema, Deadline-constrained workflow scheduling algorithms for IaaS clouds, Future Gener. Comput. Syst. 23, 8 (2012), 1400-1414.
17. S. Pandey, L. Wu, S.M. Guru, and R. Buyya, A particle swarm optimization-based heuristic for scheduling workflow applications in cloud computing environments, in Proc. IEEE Int. Conf. Adv. Inf. Netw. Appl., (2010), 400-407.
18. Z. Wu, Z. Ni, L. Gu, and X. Liu, A revised discrete particle swarm optimization for cloud workflow scheduling, in Proc. IEEE Int. Conf. Comput. Intell. Security, (2010), 184-188.
19. D. Poola, S.K. Garg, R. Buyya, Y. Yang, and K. Ramamohanarao, Robust scheduling of scientific workflows with deadline and budget constraints in clouds, in IEEE 28th International Conference on Advanced Information Networking and Applications (AINA), (2014), 858-865.
20. M.A. Rodriguez and R. Buyya, Deadline based resource provisioning and scheduling algorithm for scientific workflows on clouds, IEEE Trans. Cloud Comput. 2, 2 (2014), 222-235.
21. Mala Kalra and Sarbjeet Singh, A review of metaheuristic scheduling techniques in cloud computing, Egyptian Informatics Journal 16, 3 (2015), 275-295.
22. M. Eusuff, K. Lansey, and F. Pasha, Shuffled frog-leaping algorithm: A memetic meta-heuristic for discrete optimization, Engineering Optimization 38, 2 (2006), 129-154.
23. Babak Amiri, Mohammad Fathian, and Ali Maroosi, Application of shuffled frog-leaping algorithm on clustering, Int. J. Adv. Manuf. Technol. 45, 1-2 (2009), 199-209.
24. S. Mehta and H. Banati, Trust aware social context filtering using Shuffled Frog Leaping Algorithm, in International Conference on Hybrid Intelligent Systems, (2012), 342-347.
25. Guang-Yu Zhu and Wei-Bo Zhang, An improved Shuffled Frog-leaping Algorithm to optimize component pick-and-place sequencing optimization problem, Expert Syst. Appl. 41, 15 (2014), 6818-6829.
26. Hema Banati and Shikha Mehta, Improved shuffled frog leaping algorithm for continuous optimisation adapted SEVO toolbox, Int. J. Adv. Intell. Paradigms 5, 1/2 (2013), 31-44.
Author Biographies

Parmeet Kaur received her PhD in Computer Engineering from NIT, Kurukshetra in 2016, her M.Tech. in Computer Science from Kurukshetra University, India in 2008 and her B.E. in Computer Science and Engineering from P.E.C., Chandigarh, India in 1998. She is currently working at Jaypee Institute of Information Technology, NOIDA, India. Her research interests include fault tolerance in mobile systems and scheduling in cloud computing.

Dr Shikha Mehta received her PhD in Computer Science from the University of Delhi in 2013. She is currently working at Jaypee Institute of Information Technology, NOIDA, India. Her research interests include nature-inspired algorithms, soft computing, information retrieval and large-scale global optimization.
