Load Sharing


Base Papers
"Epoch Load Sharing" by Helen D. Karatza and Ralph C. Hilzer. "Effective Load Sharing on Heterogeneous Networks of Workstations" by Li Xiao, Xiaodong Zhang and Yanxia Qu.

Objectives
Define load sharing.
What is epoch load sharing?
Define other load sharing strategies.
Comparison.
Examine load sharing in heterogeneous networks of workstations.
Compare all the available techniques.

Definition
Load balancing divides work evenly among the processors. Load sharing ensures that no processor remains idle while there are other heavily loaded processors in the system.

Algorithms
With sender-initiated algorithms, load-distribution activity is initiated when an over-loaded node (the sender) tries to send a task to an under-loaded node (the receiver). In receiver-initiated algorithms, load distribution is initiated by an under-loaded node (the receiver) when it requests a task from an over-loaded node (the sender).
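The two initiation styles differ only in which side starts the transfer. A minimal sketch, where queue lengths stand in for node load and the HIGH/LOW thresholds are illustrative assumptions (not values from the base papers):

```python
HIGH, LOW = 4, 1  # illustrative: "overloaded" above HIGH, "under-loaded" below LOW

def sender_initiated(queues):
    """Each overloaded node pushes one job to the least-loaded node."""
    for sender, load in enumerate(queues):
        if load > HIGH:
            receiver = min(range(len(queues)), key=lambda j: queues[j])
            if queues[receiver] < LOW:
                queues[sender] -= 1
                queues[receiver] += 1
    return queues

def receiver_initiated(queues):
    """Each under-loaded node pulls one job from the most-loaded node."""
    for receiver, load in enumerate(queues):
        if load < LOW:
            sender = max(range(len(queues)), key=lambda j: queues[j])
            if queues[sender] > HIGH:
                queues[sender] -= 1
                queues[receiver] += 1
    return queues
```

On the same starting state both styles move one job from the heavily loaded node to the idle one; the difference in practice is who pays the probing cost and under which system load each style performs well.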

Scheduling Policies
Static
Dynamic (adaptive)
Probabilistic
Deterministic

Static Policy
Scheduling policies that use information about the average behavior of the system and ignore its current state are called static policies. The principal advantage of static policies is simplicity, since they do not require the maintenance and processing of system state information.

Static Policy
In the probabilistic case, the scheduling policy is described by state-independent branching probabilities: jobs are dispatched randomly to workstations with equal probability. In the deterministic case, routing decisions are based on system state, so jobs join the shortest of all workstation queues.

Dynamic Policy
Policies that react to the system state are called adaptive or dynamic policies. Adaptive policies tend to be more complex, mainly because they require information about the system's current state when making transfer decisions, but they can improve performance beyond what static policies achieve.

Dynamic Policy
When workstations become idle, jobs can migrate from heavily loaded workstation queues to idle workstations. Job Migration can be Receiver initiated or Sender initiated. It balances the job load and can improve overall system performance.

EPOCH LOAD SHARING


With this policy, load is evenly distributed among workstations, and job migration occurs only at the end of predefined intervals. The time interval between successive load sharing transfers is called an epoch.
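The epoch boundary is the only point where migration happens. A sketch of the rebalancing step, called once per epoch (illustrative only; the simulated policy in the base paper also accounts for jobs in service and for migration overhead):

```python
def epoch_rebalance(queues):
    """At an epoch boundary, spread all queued jobs evenly across workstations.

    Between boundaries no migration occurs; jobs stay where they were
    dispatched. `queues` holds the number of jobs at each workstation.
    """
    total = sum(queues)
    n = len(queues)
    base, extra = divmod(total, n)
    # When the total does not divide evenly, the first `extra`
    # workstations receive one additional job.
    return [base + (1 if i < extra else 0) for i in range(n)]
```

The epoch size is the tuning knob: a small epoch approaches the behavior (and overhead) of continuous state-based policies, while a large epoch approaches a static policy.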

Simulation Model

Job Scheduling Policies


Probabilistic (Pr)
Probabilistic with Migration (PrM)
Shortest Queue (SQ)
SQ with Migration (SQM)
Epoch Load Sharing (ELS)

Probabilistic
With this policy, a job is dispatched randomly to one of the workstations with equal probability. Therefore, with this method the scheduler is never activated to make decisions that depend on the system state.

Probabilistic with Migration


Jobs are assigned to processor queues in the same way as in the Pr case. However, when a processor becomes idle and there are jobs waiting at the other processor queues, a job migrates from the most heavily loaded processor to the idle processor.
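The migration step that PrM adds on top of Pr can be sketched as follows (a simplification, assuming queue length is the load measure and one job moves per idle processor):

```python
def migrate_to_idle(queues):
    """PrM migration step: when a processor is idle and jobs are waiting
    elsewhere, move one job from the most heavily loaded queue to it."""
    for idle in range(len(queues)):
        if queues[idle] == 0:
            heavy = max(range(len(queues)), key=lambda j: queues[j])
            if queues[heavy] > 1:  # only migrate if jobs are actually waiting
                queues[heavy] -= 1
                queues[idle] += 1
    return queues
```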

Shortest Queue
With this strategy, a job is assigned to the shortest processor queue. Therefore the scheduler is activated every time a job arrives. SQM is a variation of SQ, where migration takes place in the same way as in PrM.
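The two dispatch rules above contrast directly in code. A minimal sketch (function names are illustrative):

```python
import random

def dispatch_pr(queues):
    """Probabilistic (Pr): pick a workstation uniformly at random,
    ignoring system state entirely."""
    i = random.randrange(len(queues))
    queues[i] += 1
    return i

def dispatch_sq(queues):
    """Shortest Queue (SQ): the scheduler inspects all queues on every
    arrival and sends the job to the shortest one."""
    i = min(range(len(queues)), key=lambda j: queues[j])
    queues[i] += 1
    return i
```

Pr costs nothing per arrival but can leave processors idle; SQ makes the best per-arrival choice but pays for global state collection on every job.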

Conclusions
For all levels of migration overhead, all N, and all epoch sizes, ELS involves much less overhead than the Shortest Queue (SQ) policy, and less overhead than the Probabilistic with Migration (PrM) method, in terms of collecting global system information. For high loads, ELS with a small epoch size is preferred, since it performs very close to the SQ method. For moderate loads, in some cases the PrM method is best, while in other cases ELS with a small epoch size is preferred. For light loads, the SQ method is recommended.

Policies in Heterogeneous Networks


CPU-based load sharing
Memory-based load sharing
CPU-Memory-based load sharing

CPU-Based
The load index in each computing node is represented by the length of the CPU waiting queue, Lj. A CPU threshold on node j, denoted CTj, is set based on the CPU computing capability. For a new job requesting service in a computing node, if the waiting queue is shorter than the CPU threshold (Lj < CTj), the job is executed locally. Otherwise, the load sharing system finds the remote node with the shortest waiting queue to remotely execute this job. This policy is denoted CPU RE.
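The CPU RE rule can be sketched directly from the slide's definition. The dict layout (keys "queue" for Lj and "cpu_threshold" for CTj) is an illustrative assumption, not a structure from the paper:

```python
def cpu_re(node, nodes):
    """CPU RE: run locally if the local CPU queue is below its threshold
    (Lj < CTj); otherwise pick the remote node with the shortest queue."""
    if node["queue"] < node["cpu_threshold"]:
        return node  # execute locally
    return min(nodes, key=lambda n: n["queue"])  # remote execution
```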

Memory Based
Instead of using Lj, we propose using the memory threshold, MTj, to represent the load index. For a new job requesting service in a computing node, if the node memory threshold is smaller than the user memory space (MTj < RAMj), the job is executed locally. Otherwise, the load sharing system finds the remote node with the lightest memory load to remotely execute this job. This policy is denoted MEM RE.
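A sketch of MEM RE mirroring the rule as stated on the slide (MTj < RAMj means execute locally, otherwise pick the lightest memory load). The dict fields are illustrative assumptions:

```python
def mem_re(node, nodes):
    """MEM RE: execute locally while the memory threshold MTj is below the
    available user memory space RAMj; otherwise send the job to the node
    with the lightest memory load."""
    if node["mem_threshold"] < node["ram"]:
        return node  # execute locally
    return min(nodes, key=lambda n: n["mem_load"])  # remote execution
```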

CPU-Memory Based
We have proposed a load index which considers both CPU and memory resources. The basic principle is as follows. When a computing node has sufficient memory space for both running and requesting jobs, the load sharing decision is made by a CPU-based policy. When the node does not have sufficient memory space for the jobs, a memory based policy makes the load sharing decisions.
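The combined rule switches between the two indices described above. A sketch under the same illustrative field-name assumptions ("mem_demand" stands for the memory space the running and requesting jobs need):

```python
def cpu_mem_policy(node, nodes):
    """CPU-Memory-based rule: while the node has enough memory for its
    jobs, decide by the CPU index (as in CPU RE); under memory pressure,
    decide by the memory index (as in MEM RE)."""
    if node["mem_demand"] <= node["ram"]:
        # Sufficient memory: CPU-based decision.
        if node["queue"] < node["cpu_threshold"]:
            return node
        return min(nodes, key=lambda n: n["queue"])
    # Insufficient memory: memory-based decision.
    return min(nodes, key=lambda n: n["mem_load"])
```

The design point is that queue length is a good proxy for load only until paging starts; once memory is the bottleneck, memory load predicts slowdown better than CPU queue length.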

Comparison

Memory-based load sharing is better than CPU-based load sharing because:

the workloads we used are memory-intensive;
memory-based policies are able to identify less powerful nodes.
