
CHAPTER-3

LITERATURE REVIEW

In this chapter, we explore related work on virtual machine scheduling as a resource allocation approach in cloud computing. The chapter provides the background and the strategies for allocating resources for load balancing to enhance the efficiency of cloud data center systems.

3.1 Literature Review

In the domain of information technology, cloud computing is among the most essential technologies. A fundamental issue in cloud computing is load balancing across numerous cloud servers; it is one of the most crucial challenges in the continuing expansion of cloud computing, and the demand for new, high-speed cloud services is a major concern today. S. Swarnakar et al. [7] presented a new methodology for distributing incoming requests among virtual machines using an improved dynamic load balancing approach (IDLBA) for cloud platforms. The approach was simulated three times using the CloudAnalyst simulator, each time with a different number of tasks of varying lengths. Through the use of dynamic tables in the load balancer, the proposed algorithm improves resource usage by efficiently allocating incoming jobs to virtual machines with different processing speeds across data centers in different locations. As a result, the average makespan decreased, and because the dynamic tables continuously update the list of available VMs, the average response time was also reduced.

By utilizing resource management, cloud computing provides flexible, adaptive, and resource-sharing capabilities. The fundamentals of achieving high-performance resource utilization in cloud computing are resource tracking and anticipation. One of the primary difficulties in cloud computing is resource scheduling; the scheduling policy and methodology affect the efficiency of the cloud system [8]. Because of resource limits, cloud computing has recently introduced high-performance computing capacity, urging cloud providers to utilize resources fully. The authors of [9], [10] propose a Hidden Markov Model to manage cloud-hosted resources. The model is used for resource monitoring; resources are then classified into Less, Average, and Heavy loaded categories according to their availability, and an appropriate scheduling algorithm is selected on demand. The efficiency of the algorithm was calibrated using different kinds of workload scenarios.
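The monitor-classify-select loop described above can be sketched as follows. A real implementation would infer the load state with a trained Hidden Markov Model; here a plain threshold rule stands in for the model's state inference, and the thresholds (0.4, 0.75) and the category-to-policy mapping are illustrative assumptions, not values from [9], [10].

```python
def classify_load(cpu_utilization: float) -> str:
    """Map an observed utilization in [0, 1] to a load category.

    Stand-in for HMM state inference; thresholds are assumed for illustration.
    """
    if cpu_utilization < 0.4:
        return "LESS"
    if cpu_utilization < 0.75:
        return "AVERAGE"
    return "HEAVY"

# Hypothetical mapping from load category to scheduling policy.
POLICY_FOR_LOAD = {
    "LESS": "FCFS",        # light load: simple first-come-first-serve suffices
    "AVERAGE": "Min-Min",  # moderate load: favour short tasks
    "HEAVY": "Max-Min",    # heavy load: keep large tasks from starving
}

def select_policy(utilization: float) -> str:
    """Select a scheduling algorithm on demand from the inferred load state."""
    return POLICY_FOR_LOAD[classify_load(utilization)]
```

The point of the sketch is the structure: monitoring produces an observation, the model maps it to one of the three load states, and the state drives the choice of scheduler.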

Grid computing enables efficient computing and computational resource management, and these services are now frequently available. On the other hand, as the quantity of requested resources grows, the demand on these servers grows, resulting in resource conflicts and network challenges on computational servers. To prevent such faults, Hidden Markov Model based predictive schemes have been proposed and implemented for host load approximation. [11], [12] present a simulation that first computes the host load using the GridSim simulator; a separate tool is then implemented for the next approximation. The obtained results demonstrate the effectiveness of the technique in terms of highly accurate predictive values and low time and space complexity. The presented technique is therefore adoptable for real-world scenarios of host load approximation and fault-tolerance scheme development [13].

Load balancing is an important factor in the performance and stability of the system; consequently, an algorithm that improves system efficiency by distributing workload among VMs is required [14]. To accomplish load balancing and QoS, task allocation methods are used. The authors of [15] introduced the Load Balancing Decision Algorithm (LBDA) to control and keep the load consistent among the virtual machines in a data center while also minimizing completion time and response time. The LBDA method works in three steps. First, it assesses VM storage and load to categorize VM states. Second, it estimates how long each VM will take to complete a task [16]. Finally, depending on the VM state and the required job time, it decides how to allocate jobs among the VMs. The results of the proposed LBDA were compared with Shortest Job First, Max-Min, and Round Robin, and showed that LBDA outperforms these existing algorithms. Cloud computing is an emerging methodology in which data and IT services are delivered via the web by employing remote servers; it represents a novel method of supplying computing resources by permitting on-demand network access.
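The three LBDA steps summarized above can be sketched as follows. The state thresholds and the completion-time formula are assumptions made for this example, not the exact rules from [15].

```python
def vm_state(current_load: float, capacity: float) -> str:
    """Step 1: categorize a VM by its load-to-capacity ratio (thresholds assumed)."""
    ratio = current_load / capacity
    if ratio < 0.5:
        return "UNDERLOADED"
    if ratio < 0.9:
        return "BALANCED"
    return "OVERLOADED"

def completion_time(task_length: float, vm_mips: float, current_load: float) -> float:
    """Step 2: estimated time = queued work plus the new task, over VM speed."""
    return (current_load + task_length) / vm_mips

def allocate(task_length: float, vms: list) -> str:
    """Step 3: pick the non-overloaded VM with the smallest completion time.

    vms: list of dicts with keys "name", "load", "capacity", "mips" (illustrative schema).
    """
    candidates = [v for v in vms if vm_state(v["load"], v["capacity"]) != "OVERLOADED"]
    best = min(candidates or vms,
               key=lambda v: completion_time(task_length, v["mips"], v["load"]))
    best["load"] += task_length  # record the assignment on the chosen VM
    return best["name"]
```

The fallback `candidates or vms` simply routes to the least-bad VM when every machine is overloaded; how the real algorithm handles that case is not specified in the summary above.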

Cloud computing is made up of a number of services, each of which can handle multiple jobs [17]. Because task scheduling is an NP-complete problem, task management is a significant component of cloud computing technologies. Several task scheduling methods have been suggested to improve the efficiency of virtual machines hosted in the cloud. [18] provides a method for resolving the issue optimally while taking into consideration the QoS constraints imposed by the various user demands [19]. This technique, based on the Branch and Bound algorithm, assigns tasks to different virtual machines while ensuring load balance and a better distribution of resources. The experimental results show that the approach gives very promising results for effective task planning [20].

S. Santra and K. Mali [21] developed a circular Round Robin strategy for load balancing in cloud computing and attempted to describe an improved load balancing paradigm. Their efforts result in a dependable and fast working environment for the user-assigned task, and they also established a reliable communication channel between the virtual machine and the broker. The entire system was built with CloudSim and Java. They used the FCFS scheduling mechanism in combination with the Round Robin scheduling technique to produce better results. The major goal of the paper was to use the Round Robin approach in CloudSim to load-balance virtual machines in cloud computing.
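The core of the Round Robin approach described above, as a broker might apply it, can be sketched in a few lines. The task and VM identifiers are illustrative; the actual work in [21] was implemented in CloudSim/Java.

```python
from itertools import cycle

def round_robin_assign(tasks: list, vm_ids: list) -> dict:
    """Hand out tasks to VMs in strict rotation, regardless of task length.

    Returns a task -> VM mapping; the broker cycles through the VM list.
    """
    rotation = cycle(vm_ids)
    return {task: next(rotation) for task in tasks}
```

The simplicity is the point: no load information is consulted, which is why [21] combined it with FCFS dispatch ordering to improve results.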

From conception to deployment, usage to management, O. Kaneria and R.K. Banyal [22] identified numerous stages of cloud implementation. The paper discussed several elements of cloud computing, including protection, speed, and confidentiality. They altered fundamental load balancing algorithms and took advantage of cloud resources, experimenting on CloudSim and updating the algorithms that come standard with it, such as Round Robin and Throttled. The paper's major goal was to assign tasks to different data centers and subsequently allocate resources. The method looks for a vacant host in the data center and, if one is found, allocates a process to that host; if no vacant host is found, it looks for the host with the most processors and allocates the task to that host. As a result, tasks are distributed effectively to the data centers.

The key obstacles and issues of load balancing in cloud computing were examined by S. Joshi and U. Kumari [23]. According to the researchers, cloud computing is evolving rapidly, and as a result a large number of consumers are becoming interested in it. The authors explored a variety of obstacles and issues, including virtualization, cloud service models, load balancing, and a variety of load balancing algorithms, including Round Robin, central manager, max-min, min-min, honey bee, threshold, and so on, examining and contrasting the advantages and disadvantages of each. By comparing the different current load balancing algorithms, the major goal of this research was to analyze appropriate load balancing in terms of reliability, utilization, consistency, and response time.

H. Mehta et al. [24] discussed the main issue of resource scheduling in cloud computing, as it directly impacts the cloud system's efficiency. According to the report, resource management and forecasting are fundamental to achieving high-performance resource use. The authors employed a Hidden Markov Model to keep track of cloud resources, which were then divided into three categories: light, medium, and heavy weighted. After deploying the Hidden Markov Model to analyze the resources, an appropriate resource scheduling technique was employed to assign tasks to those resources. The findings were then contrasted using different algorithms such as FCFS, Min-Min, and Max-Min; different algorithms might be employed for different situations depending on the usage pattern.

The necessity for efficiency in cloud computing was described by F. Alam et al. [25]. According to the researchers, the round robin load balancing algorithm (RLBA) is the most often used method because of its convenience, yet it is inefficient in many cases. The authors therefore developed two new techniques, Adaptive RLBA and Predictive RLBA, and used simulation data to test the usefulness of both. Server load correlation and load variance were used as comparison criteria, and a supervised learning model, the support vector machine, was used in the Predictive RLBA. In both uniform and non-uniform online traffic areas, both modified methods performed better than the old approach.

3.2 Cloud Data Center

The evolution of data center design is at a fork in the road: huge data expansion, difficult economic conditions, and the physical constraints of power, heat, and space are all putting a strain on the business. A data center (also known as a server farm) is a centralized storage, administration, and distribution facility for data and information. A server farm is often a structure that accommodates computing systems and related components, including telecommunication and storage frameworks. Duplicate or backup power sources, redundant data communication links, environmental controls, and security equipment are all common [9].

Physical hard drive memory resources are pooled into storage pools from which "logical storage" is generated, a crucial benefit for the data center. Because most storage systems are diverse, storage hardware from a range of suppliers can be introduced with little or no impact. Multiple computer systems that utilize the same pool of storage space can access these logical storage areas. Beyond the tangible benefits of centralized backups and the need for fewer hard drives overall, one of the most significant advantages of storage virtualization is that data can be copied or transferred to other locations transparently to the server using the logical storage point.

The centralization of all building resources such as HVAC, electrical, network links, wiring, hardware, software, and people is one of the less glamorous or "high-tech" features of the data center. Many businesses have multiple server rooms with replicated services across their whole organization, all running on identical hardware and software components. To lessen redundancy and wasteful costs, several organizations are consolidating their server rooms into private server farms, minimizing the duplication of software, hardware, and infrastructure required to run their company [9].

3.3 Scheduling in Cloud Computing

A real-time system is a controlling system that is typically embedded inside other equipment, making it difficult to tell that it is even a computer. It gathers information from its surroundings, processes it, and responds; a real-time system interacts, responds, and changes its behavior in order to influence the domain in which it is positioned. Scheduling allows the optimal allocation of resources among given tasks in finite time to achieve the desired quality of service. In its most formal form, a scheduling problem contains jobs that must be arranged on resources while complying with certain constraints in order to optimize a certain objective function. The goal is to create a plan that indicates when and on what resource each job will be conducted [11].

In real-time systems, a scheduling policy is responsible not only for ordering the use of system resources; it should also be able to predict the worst-case behavior of the system when the scheduling algorithm is applied.

Task scheduling makes the most of available resources by assigning specific tasks to specific resources, automatically enhancing the service and performance level. The main scheduling algorithms and their properties are as follows [26]; Figure 3.1 depicts the scheduling algorithm categories.

Figure 3.1 Different Scheduling Algorithms

3.3.1 First Come First Serve (FCFS)

In this algorithm, the jobs that arrive earliest are executed first. When jobs are added to the queue, they are placed at the end, and each process is removed one by one from the front of the line. The job that comes first is effectively given the highest priority, while the job that comes last is assigned the lowest priority. There is no preprocessing of the VM list or the job list in this procedure. The algorithm is simple and easy to implement; although the time it takes for an incoming job to be processed can be short, the disadvantage is that VM capacity may be underutilized or, conversely, overloaded [27].

Characteristics-

• There is no prioritizing at all, which means that each process must be completed before another is started.

• This approach does not perform well with delay-sensitive traffic, because both the waiting time and the delay are quite high.

• Because context switches only happen when a process ends, no process queue reorganization is required, and scheduling overhead is minimal.
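The FCFS behavior described above can be sketched directly: each job waits exactly as long as the total burst time of everything queued before it. The burst times in the example are illustrative.

```python
def fcfs_waiting_times(burst_times: list) -> list:
    """Waiting time per job under FCFS, given bursts in arrival order."""
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)   # job waits for all earlier jobs to finish
        elapsed += burst
    return waits
```

Note how a long first job delays every later job, which is exactly the convoy effect that makes FCFS poor for delay-sensitive traffic.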

3.3.2 Shortest Job First (SJF)

The Shortest Job First algorithm identifies the waiting process with the shortest execution time; in its non-preemptive form, the process with the shortest burst time is assigned to the processor next. The algorithm requires the jobs to be preprocessed so that they are ordered according to their lengths: when jobs arrive, the broker re-arranges them by length and sends them to the VMs in the new sequence [28].

Characteristics-

• One of the challenges with the SJF algorithm is that it has to know or estimate the length of the next processor request.

• It minimizes the average waiting time by running smaller processes before longer ones.

• When a system is overburdened with small processes, longer processes can starve.

• It boosts job throughput by prioritizing shorter jobs, which are completed first and have a quicker turnaround time.
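A minimal non-preemptive SJF sketch matching the broker behavior described above: incoming jobs are re-ordered by length before dispatch, and the average waiting time falls out directly. The job names and bursts are illustrative.

```python
def sjf_schedule(jobs: dict) -> tuple:
    """jobs: {name: burst time}; returns (execution order, average waiting time)."""
    order = sorted(jobs, key=jobs.get)   # shortest burst first
    waits, elapsed = {}, 0
    for name in order:
        waits[name] = elapsed            # wait = sum of shorter bursts before it
        elapsed += jobs[name]
    return order, sum(waits.values()) / len(waits)
```

Running short jobs first is what minimizes the average waiting time; the same example under FCFS arrival order would give a larger average.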

3.3.3 Round Robin (RR)

In this algorithm, processes are run as in FIFO, but each is limited to a slice of processor time, termed the time slice or quantum. If a process is not finished before its time slice runs out, the processor moves on to the next process waiting in the queue. The preempted process is then moved to the back of the ready list, and new processes are appended to the tail of the queue.

Characteristics-

• If we use a relatively short time slice or quantum, CPU performance is reduced by frequent context switching.

• If we use a large time slice or quantum, response time becomes slow.

• Because the waiting period can be lengthy, there is only a slight possibility that deadlines will be met.

• A minimum time slice should be set for a particular job after it has been analyzed.

• It suits real-time settings where the system must react to an event within a fixed period of time.
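The quantum-based rotation described above can be sketched as follows: each process runs for at most one quantum and goes to the back of the queue if unfinished. The process names, bursts, and quantum are illustrative.

```python
from collections import deque

def round_robin(bursts: dict, quantum: int) -> list:
    """bursts: {name: burst time}; returns the order in which processes finish."""
    queue = deque(bursts.items())
    finished = []
    while queue:
        name, remaining = queue.popleft()
        if remaining <= quantum:
            finished.append(name)                      # done within this slice
        else:
            queue.append((name, remaining - quantum))  # preempted, requeued at the tail
    return finished
```

With a quantum of 2, a burst of 5 is preempted twice before completing, which illustrates the quantum trade-off: a smaller quantum means more requeue cycles (context switches), a larger one means longer waits for short jobs.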

3.3.4 Priority Scheduling (PS)

In this method, every process is given a priority, and processes are run in order of precedence; FCFS is used among processes with the same priority. This requires a significant amount of preprocessing, such as grouping jobs by their lengths and VMs by their MIPS (Millions of Instructions Per Second) values. As soon as execution begins, the broker assigns the highest-priority task to the highest-priority VM, and so on. When a job comes in, the broker looks for the best available VM and allocates the job to that VM.

Characteristics-

• When there are many processes with the same priority, waiting time is long.

• Processes with a higher priority have a shorter wait time and a shorter delay.

• Low-priority processes may suffer from starvation.

• In priority scheduling, each process is given a value that specifies its priority level.
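The ordering rule described above, including the FCFS tie-break, can be sketched in one function. The convention that a lower number means higher priority is an assumption of this example, as is the job list.

```python
def priority_order(jobs: list) -> list:
    """jobs: list of (name, priority) in arrival order; lower number = higher priority.

    Python's sort is stable, so equal-priority jobs keep their arrival (FCFS) order.
    """
    return [name for name, prio in sorted(jobs, key=lambda j: j[1])]
```

The stable sort is doing the FCFS tie-breaking for free: among jobs with equal priority, whichever arrived first runs first.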

3.4 Virtual Machine (VM) Scheduling

Virtualization technology maps a huge range of physical resources to VMs in a cloud computing environment. The VM scheduling method allocates tasks to virtual machines and then deploys them to various physical machines to accomplish resource sharing while maintaining QoS and system performance. It is therefore a critical technology for successfully scheduling and deploying virtual machines (VMs) according to user requirements in order to maximize resource usage and lower energy costs. Both homogeneous and heterogeneous systems are supported by VM scheduling [29]. Thousands of physical machines, which may be regular PCs or high-performance servers, make up a cloud data center. To provide a collection of VMs, the cloud computing platform virtualizes these physical resources for application programs; these VMs can have the same or different memory, CPU, and bandwidth.

The scheduling of virtual machines is scalable: the data center's resources can be dynamically adjusted. The cloud computing model should be able to manage resources and deliver them quickly, so that resources are available when they are needed. At the same time, VMs can be created at any moment for tasks submitted by users and removed at any time once they have been used. VM scheduling is also sustainable: when a server fails, the VMs executing on it must be moved to other servers as soon as possible to guarantee that job execution continues. The scheduling of virtual machines is a hybrid of distributed and centralized methods [29]: cloud computing provides customers with highly concentrated resources in data centers, allowing them to access the entire resource pool, while the internal VM scheduling process uses a distributed processing mechanism. Finally, the scheduling of virtual machines is dynamic. Users in cloud computing acquire resources as required, and VMs are constructed to meet their needs; however, once constructed, VMs may be unable to accomplish their purpose or may be terminated due to failure. The system should therefore schedule work to other VMs and, if necessary, build new VMs for users.

Static scheduling and dynamic scheduling are the two kinds of scheduling modes used in cloud virtual machine scheduling [30]. Static scheduling focuses on pre-scheduling: the scheduling plan is designed before the actual scheduling takes place. Effective allocation is therefore essential beforehand, and adjustments cannot be made after scheduling, which demands good judgment. A dynamic virtual machine scheduling method, in contrast, can be altered in real time during execution or operation, resulting in high execution efficiency and application value.

3.5 Challenges of Resource Management in Cloud Computing

Cloud resource management [16] is the primary underlying technology of cloud computing: it manages resources from a broad range of devices and delivers a complete overview to users and applications. Its features include a remote and protected interface for creating, removing, customizing, and tracking virtual resources; dynamic resource control; adjustable resource allocation strategies; and elastic delivery of resources depending on the requirements of the business [17]. Developing a highly scalable resource management system that meets the demands of adaptability, performance isolation, and effectiveness is, however, difficult. Because of the poor consolidation ratio achieved by statically assigning one VM to a preset number of physical CPUs and a fixed amount of RAM, cloud management platforms such as Amazon EC2 [19] have poor machine usage efficiency. Furthermore, cloud vendors allow users to configure parameters to control resource requests, and it is critical to make it as simple as possible for users to submit effective resource requests with minimal effort. Finally, as the scale of a cluster grows, current resource management strategies struggle to keep up; the number of VMs that may be launched in the cloud is usually limited by the centralized storage infrastructure. Cloud service providers must therefore create management systems and technologies that support scaling, reliability, dependability, availability, and security in order for cloud services to be adopted quickly and widely.

3.6 Resource Allocation Strategies in Cloud Computing

The input variables of resource allocation strategies, as well as the manner in which resources are allocated, differ depending on the services, infrastructure, and kind of applications that require resources. The resource allocation strategies (RAS) used in the cloud are discussed below:

Execution Time: Many resource allocation strategies are offered in the cloud. Proper job execution time and pre-emptable scheduling are taken into account for resource allocation in [20], which uses several forms of rented computing capacity to resolve resource contention and boost resource usage. However, estimating a job's execution time is a difficult task for a user, and inaccuracies are common.

Policy: Because centralized user and resource management does not provide scalable, user- and organization-level security control over resources [31], the authors added a new layer, called the domain, between the users and the virtualized resources to decentralize user and virtualized resource management for IaaS. The domain layer assigns virtualized resources to users based on role-based access control (RBAC).

Virtual Machine (VM): The authors of [32] propose a system that can autonomously scale its infrastructure resources. The system is made up of a virtual network of virtual machines that can migrate across multiple domains of physical infrastructure in real time. By leveraging dynamic provisioning of infrastructure resources and dynamic application requirements, a virtual compute environment can automatically move itself across the infrastructure and grow its resources. However, this research only covers non-preemptable scheduling policies.

Gossip: Clusters, servers, nodes, their location references, and their capacities vary by cloud environment. The author of [10] addresses resource management in a large-scale cloud system (with over 100,000 servers) and proposes a generic gossip protocol for the equitable distribution of CPU resources to clients.

Utility Function: Many ideas exist for dynamically managing VMs in IaaS by maximizing some target function, such as minimizing cost, maximizing cost-performance, or achieving QoS criteria. The function is represented as a utility property chosen based on response time, the number of QoS objectives met, profit, and other factors [12]. Resource allocation using response time as the metric of the utility function has been suggested for heterogeneous cloud computing systems (multi-tier clouds), taking memory, CPU, and communication resources into account.
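Utility-driven placement as described above can be sketched as follows: each candidate VM gets a utility score derived from its predicted response time, and the task goes to the highest-utility VM. The 1/(1+t) utility shape and the response-time model (task size over MIPS) are illustrative choices for this example, not formulas taken from [12].

```python
def utility(response_time: float) -> float:
    """Illustrative utility: lower response time yields higher utility."""
    return 1.0 / (1.0 + response_time)

def best_vm(task_size: float, vms: dict) -> str:
    """vms: {name: mips}; predicted response time = task_size / mips (assumed model)."""
    return max(vms, key=lambda v: utility(task_size / vms[v]))
```

Other objectives from the paragraph above (cost, profit, QoS objectives met) would slot in by changing only the `utility` function, which is the appeal of the approach.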

3.6.1 Resource Allocation: Advantages and Limitations

Resource allocation is the method of allocating available resources to the cloud applications that require them over the web. If resource allocation is not handled accurately, services are starved; the scheduler solves this problem by enabling service providers to manage the resources for every specific component. A resource allocation strategy is concerned with combining cloud provider activities in order to use and allocate scarce resources within the confines of the cloud environment so as to meet the requirements of the cloud application. It requires knowing the type and quantity of resources needed by each program to fulfill a user task; the order and timing of resource allocation are also factors in determining an appropriate strategy.

Pros-

• The main benefit of resource allocation is that the user does not need to install any hardware or software in order to send requests and host them in the cloud.

• The user does not need to spend a lot of money on hardware or software systems.

• During a resource scarcity, cloud service providers can exchange information over the web.

• There are no boundaries between local and global space: requests and submissions can be sent from anywhere in the world.

Cons-

• Users rent cloud resources from distant servers for their operations but have no authority over these resources.

• Because the actual details of how the cloud environment works depend on the cloud service provider, extensive understanding is needed to control and allocate resources in cloud computing.

• End users' data can be subject to phishing scams and espionage in a deployment paradigm such as the public cloud; because cloud servers are accessible and interlinked, malware can spread easily throughout the network.

• When a user wants to switch to another cloud provider for better outcomes or storage systems, a migration issue occurs: transferring large amounts of data from one vendor to another is a complicated and time-consuming procedure.

3.7 Load Balancing in Cloud Computing

A major challenge in parallel programming of a distributed framework is managing the load balancing and scheduling of a framework that may comprise heterogeneous computers. Load balancing belongs to the field of resource management. With developments in computer technology and the emergence of numerous distributed systems, the issue of load balancing in distributed systems has received much interest. A dynamic load balancing algorithm makes no assumptions about task behavior or the system's overall state; load balancing decisions are based entirely on the system's present state. Many crucial issues must be addressed in the implementation of an appropriate dynamic load balancing algorithm, including load-level comparison, load estimation, performance indices, system stability, the volume of data transmitted among nodes, evaluation of job resource needs, job selection for transmission, remote node selection, and more [15].

The rapid advancement of computer technology has increased the demand for high-speed computing, as well as the need for high scalability, availability, and fast response times. As a result, distributed and parallel processing systems were developed, in which multiple processors process a job at the same time. Effective task distribution over several processors is one of the primary research challenges in parallel and distributed systems. Load balancing is a technique for reducing response time, increasing throughput, and avoiding overload; its goal is to make sure that every processor in the system is doing a similar amount of work at all times [18].

3.7.1 Need of Load Balancing

Load balancing is the technique of spreading workloads over numerous servers, referred to as a server cluster. The basic goal of load balancing is to keep any single server from becoming overburdened and perhaps failing. Work that is local to one machine can be placed dynamically on remote nodes or machines that are underutilized in order to balance the load. This improves the user experience by lowering response time, enhancing resource utilization, lowering task denials, and improving the system's performance ratio.

Pros-

• Load balancing is an easy and low-cost method of ensuring that our system runs seamlessly and effectively.

• A load balancer should be capable of handling unexpected events: it can simply redistribute work to another node in the network if one malfunctions, making the system scalable and adaptable when managing traffic.

• When cloud-based systems deal with large workloads, single servers may get flooded with requests. In such circumstances, a cloud server load balancing system can assure a high standard of service availability and faster response times; businesses can resume critical procedures and operations without delay because load balancers handle these abrupt upsurges efficiently.

• Cloud server load balancers can transfer traffic to different geographical areas in an emergency, or when a particular geographic area is impacted by natural hazards such as tidal waves, earthquakes, and hurricanes that cause cloud servers to become unavailable.
3.7.2 Load Balancing Approaches

Load balancing must be done in such a way that all VMs are balanced, in order to get the most out of their potential and increase system efficiency. There are two techniques for load balancing: static and dynamic. The static technique is based on information gathered prior to the load balancing decision, while the dynamic technique distributes jobs to computing nodes according to their current status. They are described as follows:

i. Static Load Balancing

In static scheduling, tasks are assigned to processors before program execution begins. Information about task execution times and processing resources is presumed to be known at compile time. Static scheduling approaches are processor non-preemptive, meaning that a job is always processed on the processor to which it is allotted, and they often aim to reduce the total execution time of a parallel program while reducing transmission delay.

In the static load balancing technique, load balancing decisions are made deterministically or probabilistically at compile time, depending on the performance of the computing nodes, and remain fixed during runtime; the number of tasks on each node is fixed [33]. Static load balancing solutions are non-preemptive, which means that once a load has been assigned to a node, it cannot be moved to another node.
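A static mapping of this kind can be sketched as a one-shot, compile-time computation that is never revisited at runtime. The greedy longest-task-first heuristic is an illustrative choice for this example, not a method prescribed by [33].

```python
def static_mapping(task_lengths: dict, n_nodes: int) -> dict:
    """Compute a fixed task -> node assignment once, before execution.

    task_lengths: {task name: known execution time}; the mapping never changes
    at runtime, which is what makes the scheme static (and non-preemptive).
    """
    loads = [0.0] * n_nodes
    mapping = {}
    # Place the longest tasks first, each on the currently least-loaded node.
    for task, length in sorted(task_lengths.items(), key=lambda t: -t[1]):
        node = loads.index(min(loads))
        mapping[task] = node
        loads[node] += length
    return mapping
```

The sketch also exposes the stated weakness: if the assumed `task_lengths` turn out wrong at runtime, nothing can be rebalanced, since the mapping is fixed.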

Pros-

• It incurs the least communication latency.

• The algorithms are basic and uncomplicated to execute.

• The system's overhead is reduced to a minimum.

Cons-

• A task cannot be relocated while it is being executed.

• The system's overall efficiency suffers under load variations.

• When tasks have different execution times and nodes are heterogeneous, this method is less beneficial.

ii. Dynamic Load Balancing

Dynamic load balancing algorithms adjust the distribution of work across workstations in real time, based on present or recent load information. There are three broad types of situations in which static load balancing can be impractical or can contribute to load imbalance [33]:

• The first group includes problems where all of the tasks are available at the start of the computation, but the time needed by each task varies.

• The second group includes problems in which tasks are available at the start, but the time needed by each task changes as the computation advances.

• The third group includes problems in which tasks are not available at the start but are created dynamically.

Static load balancing requires too much information about the job and the system before execution, which is not always feasible, as in these three groups of problems. Dynamic load balancing was created to address these limitations.

Dynamic algorithms assign processes dynamically when one of the processors becomes overloaded; otherwise, processes are queued in the primary host's queue and dynamically allocated in response to requests from remote hosts. Dynamic load balancing uses runtime state information to make more informed load balancing judgments during execution: the workload is dispersed across the processors at runtime, and these algorithms track changes in the system's demand and redistribute the load accordingly [34]. Six policies distinguish dynamic load balancing algorithms: initiation, transfer, selection, profitability, location, and information.

i. Initiation Policy: determines who is responsible for initiating the load balancing operation.

ii. Transfer Policy: determines whether or not a node is in a suitable state for load transfer.

iii. Selection Policy: the source node chooses the best job for migration.

iv. Profitability Policy: the load balancing decision is taken based on the system's load imbalance factor at the time.

v. Location Policy: determines which nodes are best for load sharing.

vi. Information Policy: provides a means for computing nodes to share load state information.
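A toy sketch can tie several of the policies above together: an overloaded node (transfer policy) ships its smallest queued job (selection policy) to the least-loaded node (location policy). The threshold, the node schema, and the smallest-job rule are assumptions of this example.

```python
OVERLOAD_THRESHOLD = 0.8  # assumed transfer-policy threshold

def needs_transfer(node: dict) -> bool:
    """Transfer policy: is this node in a state that warrants shedding load?"""
    return node["load"] / node["capacity"] > OVERLOAD_THRESHOLD

def pick_target(nodes: list) -> dict:
    """Location policy: the least-loaded node is the best sharing target."""
    return min(nodes, key=lambda n: n["load"] / n["capacity"])

def rebalance(nodes: list) -> None:
    """One dynamic balancing step; mutates node job queues and loads in place."""
    for node in nodes:
        if needs_transfer(node) and node["jobs"]:
            job = min(node["jobs"])  # selection policy: migrate the smallest job
            target = pick_target([n for n in nodes if n is not node])
            node["jobs"].remove(job)
            node["load"] -= job
            target["jobs"].append(job)
            target["load"] += job
```

The initiation, profitability, and information policies are implicit here (the caller initiates, any transfer is assumed profitable, and loads are read directly); in a real distributed setting each would need its own mechanism.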

Pros-

• Dynamic load balancing works effectively for heterogeneous systems, because tasks can be reallocated to any processor during runtime, reducing overloading and underloading issues.

• It works effectively for tasks that take varying amounts of time to complete.

• The system does not need to know the run-time behavior of programs before they are executed.

Cons-

• As the number of processors increases, transmission overheads grow.

• Because dynamic load balancing techniques are complicated, they are difficult to implement.

• Because it is preemptive, system overhead rises.
