
III BSc (Semester – VI) Distributed Systems Unit IV

UNIT IV

Task Assignment Approach, Load Balancing Approach, Load Sharing Approach, Process Migration and Threads.

******************
Introduction:
There are three techniques for scheduling processes of a
distributed system:
Task Assignment Approach, in which each process submitted
by a user for processing is viewed as a collection of related tasks and
these tasks are scheduled to suitable nodes so as to improve
performance.
Load-Balancing Approach, in which all the processes submitted
by the users are distributed among the nodes of the system so as to
equalize the workload among the nodes.
Load-Sharing Approach, which simply attempts to conserve the ability of the system to perform work by ensuring that no node is idle while there are processes waiting to be processed.

1. Explain the Task Assignment Approach in Distributed Systems.


Each process is viewed as a collection of tasks. These tasks are scheduled to suitable processors to improve performance. This is not a widely used approach because:

Ø It requires the characteristics of all the processes to be known in advance.
Ø This approach does not take into consideration the dynamically changing state of the system.
In this approach, a process is considered to be composed of multiple tasks and the goal is to find an optimal assignment policy for the tasks of an individual process. Typical goals of the task assignment approach are:
ü Minimization of IPC costs (this problem can be modeled using a network flow model)
ü Efficient resource utilization
ü Quick turnaround time
ü A high degree of parallelism
Assumptions:
Ø A process has already been split up into pieces called tasks. This
split occurs along natural boundaries (such as a method), so that
each task will have integrity in itself and data transfers among the
tasks are minimized.
Ø The amount of computation required by each task and the speed
of each CPU are known.
Ø The cost of processing each task on every node is known. This is
derived from assumption 2.

Ø The IPC costs between every pair of tasks are known. The IPC
cost is 0 for tasks assigned to the same node. This is usually
estimated by an analysis of the static program. If two tasks
communicate n times and the average time for each inter-task communication is t, the IPC cost for the two tasks is n * t.
Ø Precedence relationships among the tasks are known.
Ø Reassignment of tasks is not possible.
The goal is to assign the tasks of a process to the nodes of a distributed system in such a manner as to achieve the following:
ü Minimization of IPC costs
ü Quick turnaround time for the complete process
ü A high degree of parallelism
ü Efficient utilization of system resources in general
These goals often conflict. E.g., while minimizing IPC costs tends
to assign all tasks of a process to a single node, efficient utilization of
system resources tries to distribute the tasks evenly among the nodes.
Similarly, while quick turnaround time and a high degree of parallelism encourage parallel execution of the tasks, the precedence relationships among the tasks limit their parallel execution.
Also note that in the case of m tasks and q nodes, there are q^m possible assignments of tasks to nodes, since each of the m tasks can be assigned to any one of the q nodes. In practice, however, the actual number of possible assignments may be less than q^m due to the restriction that certain tasks cannot be assigned to certain nodes because of their specific requirements (e.g. the need for a certain amount of memory or a certain data file).
Task Assignment Example:
There are two nodes, {n1, n2} and six tasks {t1, t2, t3, t4, t5,
t6}. There are two task assignment parameters – the task execution cost (x_ab, the cost of executing task a on node b) and the inter-task communication cost (c_ij, the inter-task communication cost between tasks i and j).

Inter-task communication costs (c_ij):

        t1   t2   t3   t4   t5   t6
  t1     0    6    4    0    0   12
  t2     6    0    8   12    3    0
  t3     4    8    0    0   11    0
  t4     0   12    0    0    5    0
  t5     0    3   11    5    0    0
  t6    12    0    0    0    0    0

Execution costs (x_ab):

        n1   n2
  t1     5    5
  t2     2    ∞
  t3     4    4
  t4     6    3
  t5     5    2
  t6     ∞    4

Task t6 cannot be executed on node n1 and task t2 cannot be
executed on node n2 since the resources they need are not available on
these nodes.
Serial Assignment, where tasks t1, t2, t3 are assigned to node n1 and
tasks t4, t5, t6 are assigned to node n2:
Execution cost, x = x11 + x21 + x31 + x42 + x52 + x62 = 5 + 2 + 4
+ 3 + 2 + 4 = 20
Communication cost, c = c14 + c15 + c16 + c24 + c25 + c26 + c34 +
c35 + c36 = 0 + 0 + 12 + 12 + 3 + 0 + 0 + 11 + 0 = 38. Hence total
cost = 58.

Optimal Assignment, where tasks t1, t2, t3, t4, t5 are assigned to
node n1 and task t6 is assigned to node n2.
Execution cost, x = x11 + x21 + x31 + x41 + x51 + x62
= 5 + 2 + 4 + 6 + 5 + 4 = 26
Communication cost, c = c16 + c26 + c36 + c46 + c56
= 12 + 0 + 0 + 0 + 0 = 12
Total cost = 38
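This calculation can be checked mechanically. Below is a small illustrative Java sketch (not part of the original notes) that encodes the two cost tables above and computes the total cost of any assignment; INF is an arbitrarily large constant standing in for ∞.

public class TaskAssignmentCost {
    static final int INF = 1_000_000;          // stands in for "cannot run on this node"

    // Execution cost X[task][node] from the table above (nodes n1 = 0, n2 = 1).
    static final int[][] X = {
        {5, 5}, {2, INF}, {4, 4}, {6, 3}, {5, 2}, {INF, 4}
    };

    // Inter-task communication cost C[i][j]; incurred only if i and j run on different nodes.
    static final int[][] C = {
        {0, 6, 4, 0, 0, 12},
        {6, 0, 8, 12, 3, 0},
        {4, 8, 0, 0, 11, 0},
        {0, 12, 0, 0, 5, 0},
        {0, 3, 11, 5, 0, 0},
        {12, 0, 0, 0, 0, 0}
    };

    // assignment[t] = node (0 or 1) on which task t is placed.
    static int totalCost(int[] assignment) {
        int cost = 0;
        for (int t = 0; t < assignment.length; t++)
            cost += X[t][assignment[t]];                   // execution cost
        for (int i = 0; i < assignment.length; i++)
            for (int j = i + 1; j < assignment.length; j++)
                if (assignment[i] != assignment[j])
                    cost += C[i][j];                       // IPC cost across nodes
        return cost;
    }

    public static void main(String[] args) {
        System.out.println(totalCost(new int[]{0, 0, 0, 1, 1, 1}));  // serial assignment
        System.out.println(totalCost(new int[]{0, 0, 0, 0, 0, 1}));  // optimal assignment
    }
}

Running it prints 58 for the serial assignment and 38 for the optimal assignment, matching the figures above.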

2. Explain Load Balancing Approach in Distributed Systems:


In this, the processes are distributed among nodes to equalize the
load among all nodes. The scheduling algorithms that use this approach
are known as Load Balancing or Load Leveling Algorithms. These
algorithms are based on the intuition that for better resource utilization,
it is desirable for the load in a distributed system to be balanced evenly.
Thus, a load balancing algorithm tries to balance the total system load by
transparently transferring the workload from heavily loaded nodes to
lightly loaded nodes in an attempt to ensure good overall performance
relative to some specific metric of system performance.
We can have the following categories of load balancing
algorithms:

Static: Ignore the current state of the system. E.g. if a node is heavily
loaded, it picks up a task randomly and transfers it to a random node.
These algorithms are simpler to implement but performance may not be
good.

Dynamic: Use the current state information for load balancing. Although there is an overhead involved in collecting state information periodically, dynamic algorithms perform better than static algorithms.

Deterministic: Algorithms in this class use the processor and process characteristics to allocate processes to nodes.

Probabilistic: Algorithms in this class use information regarding static attributes of the system such as number of nodes, processing capability, etc.
Centralized: System state information is collected by a single node.
This node makes all scheduling decisions.

Distributed: Most desired approach. Each node is equally responsible for making scheduling decisions based on the local state and the state information received from other sites.

Cooperative: A distributed dynamic scheduling algorithm. In these algorithms, the distributed entities cooperate with each other to make scheduling decisions. Therefore they are more complex and involve larger overhead than non-cooperative ones, but the stability of a cooperative algorithm is better than that of a non-cooperative one.

Non-Cooperative: A distributed dynamic scheduling algorithm. In these algorithms, individual entities act as autonomous entities and make scheduling decisions independently of the actions of other entities.
Load Balancing Policies:
Transfer Policy: First of all, the state of each machine is determined by calculating its workload. A transfer policy determines
whether a machine is in a suitable state to participate in a task transfer,
either as a sender or a receiver. For example, a heavily loaded machine
could try to start process migration when its load index exceeds a
certain threshold.
Selection Policy: This policy determines which task should be
transferred. Once the transfer policy decides that a machine is in a
heavily-loaded state, the selection policy selects a task for transferring.
Selection policies can be categorized into two types: preemptive and non-preemptive. A preemptive policy selects a partially executed task.
As such, a preemptive policy should also transfer the task state which
can be very large or complex. Thus, transferring operation is expensive.
A non-preemptive policy selects only tasks that have not begun execution and, hence, it does not require transferring the state of the task.
Location Policy: The objective of this policy is to find a suitable transfer partner for a machine, once the transfer policy has decided that the machine is in a heavily-loaded or lightly-loaded state. Common location policies include: random selection, dynamic selection, and state polling.
Information Policy: This policy determines when the information
about the state of other machines should be collected, from where it has
to be collected, and what information is to be collected.
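To make the interplay of these policies concrete, here is a minimal, hypothetical Java sketch of a threshold-based transfer policy combined with a polling location policy; the LoadProbe interface, class names, and threshold values are assumptions for illustration only, not part of the notes.

import java.util.List;
import java.util.Optional;

// Illustrative only: how a node's load index might be queried.
interface LoadProbe {
    double loadOf(String node);        // current load index of a node (e.g. queue length)
}

class ThresholdPolicies {
    private final double highThreshold;   // above this, a node is "heavily loaded" (sender)
    private final double lowThreshold;    // below this, a node is "lightly loaded" (receiver)
    private final LoadProbe probe;

    ThresholdPolicies(double high, double low, LoadProbe probe) {
        this.highThreshold = high;
        this.lowThreshold = low;
        this.probe = probe;
    }

    // Transfer policy: should this node act as a sender?
    boolean shouldSend(String self) {
        return probe.loadOf(self) > highThreshold;
    }

    // Location policy (state polling): poll the given nodes and pick a lightly loaded one.
    Optional<String> findReceiver(List<String> candidates) {
        return candidates.stream()
                         .filter(n -> probe.loadOf(n) < lowThreshold)
                         .findFirst();
    }
}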

3. Explain Load Sharing Approach in Distributed System.


Several researchers believe that load balancing, with its
implication of attempting to equalize workload on all the nodes of the
system, is not an appropriate objective. This is because the overhead
involved in gathering the state information to achieve this objective is
normally very large, especially in distributed systems having a large
number of nodes. In fact, for the proper utilization of resources of a
distributed system, it is not required to balance the load on all the
nodes. It is necessary and sufficient to prevent the nodes from being
idle while some other nodes have more than two processes. This
rectified objective is called Dynamic Load Sharing instead of Dynamic Load Balancing.
The design of load-sharing algorithms requires that proper
decisions be made regarding load estimation policy, process transfer
policy, state information exchange policy, priority assignment policy,
and migration limiting policy. It is simpler to decide about most of these
policies in case of load sharing, because load sharing algorithms do not
attempt to balance the average workload of all the nodes of the system.
Rather, they only attempt to ensure that no node is idle when a node is
heavily loaded. The priority assignment policies and the migration limiting policies for load-sharing algorithms are the same as those of load-balancing algorithms.
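As a minimal illustration of the load-sharing idea (preventing idleness rather than equalizing load), the sketch below lets a node donate a task only when it holds more than two queued processes, as stated above; the class and method names are hypothetical.

// Illustrative load-sharing check: transfer work only to prevent idleness,
// not to equalise load across all nodes.
class LoadSharingNode {
    private final java.util.Deque<Runnable> workQueue = new java.util.ArrayDeque<>();

    synchronized int workQueueLength() { return workQueue.size(); }

    // Receiver-initiated: an idle node asks a busy peer for one task.
    synchronized Runnable donateTaskIfBusy() {
        // Donate only if more than two processes are queued here (the condition in the notes).
        return workQueueLength() > 2 ? workQueue.pollLast() : null;
    }

    synchronized void accept(Runnable task) {
        if (task != null) workQueue.addLast(task);
    }
}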


4. Process Migration in Distributed System.


PROCESS MIGRATION = relocation of a process from its current
node to another node
- Process migration moves a process during its execution, so that it continues on another processor, with continuous access to all its resources.
- It may be initiated without the knowledge of the running process or any other process interacting with it.

Process Migration
- Similar to remote execution, process migration needs to locate and negotiate a remote host, transfer the code image, and initialize the remote operation.
- Since the process transferred to the destination node is pre-empted, its state information must also be transferred.
- The state information that should be transferred consists of two parts: the computation state and the communication state.
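Purely as an illustration of these two parts of the transferred state, the sketch below groups them into hypothetical Java classes; the class and field names are assumptions, not the interface of any real migration facility.

// Hypothetical sketch of the state shipped during process migration.
class ComputationState {
    byte[] addressSpaceImage;   // code, data, heap and stack of the process
    byte[] cpuRegisters;        // program counter, stack pointer, general registers
    byte[] kernelState;         // open file descriptors, signal state, etc.
}

class CommunicationState {
    byte[][] pendingMessages;   // messages sent to the process but not yet delivered
    byte[][] openLinks;         // identities of open links/sockets to other processes
}

class MigrationPayload {
    ComputationState computation;
    CommunicationState communication;
}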
1. Transparency
Ø levels of transparency:
a. object access level - supports non-preemptive process
migration
ü allows free initiation of programs at an arbitrary computer,
ü provides access to objects in location independent manner,
b. system call & interprocess communication level - supports
pre-emptive process migration
- all system calls and interprocess communication should be location independent so that a migrated process does not depend upon its originating node after being migrated.

2. Minimal Interference
• Migration should cause minimal interference to the progress of the process and the whole system.
• This is achieved by minimizing the freezing time, i.e. the time during which the process being migrated is suspended.
3. Minimal Residual Dependencies
• A migrated process should not in any way continue to depend on its previous node once it has started executing on its new node.
4. Efficiency
ü main sources of inefficiency in process migration:
ü the time required for migrating a process,
ü the cost of locating an object,
ü the cost of supporting remote execution once the process is
migrated,
5. Robustness
ü the failure of a node other than the one on which a process is currently running should not in any way affect the accessibility or execution of that process.

Process Migration Policies and Mechanisms:


Process migration can be divided into two main phases:
Process Migration Policy - selection of a process to migrate and determination of when and where it is to be migrated;
• collecting necessary statistics and making decisions relating to process migration, and
Process Migration Mechanism - mechanism to migrate the selected
processes to a destination computer,
• performing process transfer taking into account decisions made
by process migration policy.

A process is an operating system (OS) entity representing a program in execution. Associated with it are an address space and other OS-internal attributes such as the home directory, open file descriptors, user id, program counter, and so on. Process migration is defined as the transfer of a process between different nodes connected by a network.

The motivations for process migration are as follows.

Dynamic Load Balancing: It allows processes to take advantage of


less loaded nodes by migrating from overloaded ones.

Availability: Processes residing on a failing node can be migrated to other healthy nodes.

System Administration: Processes that reside on a node that is to undergo system maintenance can be migrated to other nodes.

Data Locality: Processes can take advantage of locality of data or other
special capabilities of a particular node.
Mobility: Processes can be migrated from a handheld device or laptop computer to a more powerful server computer before the device gets disconnected from the network.

Fault Recovery: The mechanism for saving and transferring a process's state can also be used to restart a process on another node after a node failure.

Although process migration is useful in many contexts, it is not widely deployed nowadays. One of the problems is the complexity of supporting process migration on top of a system whose design does not include supporting facilities. These facilities include network transparency, naming transparency, location transparency and others. Implementing process migration on systems that lack such facilities may lead to degradation in performance and security, complicated implementation and poor reliability.

5. Threads in Distributed System:


Process (program in execution):
Ø A process consists of an execution environment together with one
or more threads.
Ø A thread is the operating system abstraction of an activity.
Ø An execution environment is the unit of resource management: a
collection of local kernel managed resources to which its
threads have access.
ü An execution environment consists of:
ü Address space.
ü Thread synchronization and communication resources (e.g.
semaphores).
ü Computing resources (file systems, windows, etc.)
ü Expensive to create and manage.
Ø Threads (lightweight process):
ü Schedulable activities attached to processes.
ü Arise from the need for concurrent activities to share
resources within one process.
• Enable computation to be overlapped with input and output.
• Allow concurrent processing of client requests in
servers – each request handled by one thread.
ü Easier to create and destroy.
Address Spaces
Ø A unit of management of a process’s virtual memory.
Ø Large and consists of one or more regions, separated by inaccessible areas of virtual memory to allow growth.
Ø A region is an area of contiguous virtual memory accessible by the
threads of the owning process.
Ø Each region is specified by:
ü Lowest virtual address and size.
ü Read/write/execute permissions for the process’s threads.
ü Whether it can be grown upwards or downwards.
Ø Gaps are left between regions to allow for growth, so that regions do not overlap when they are extended in size.
Ø Data files can be mapped into the address space as an array of
bytes in memory.
Ø The need for a number of separate regions in the address space is motivated by several reasons:
ü Need for shared memory between the threads, need for
separate stack for each thread.
ü File mapping.
ü Need for sharing between two processes.
New Process Creation
Ø An indivisible operation provided by the operating system.
ü The UNIX fork system call creates a process with an
execution environment copied from the caller.
Ø But, the creation of a new process in a distributed system can be
separated into two independent aspects:
ü The choice of a target host.
ü The creation of an execution environment.
Ø Choice of process host:
ü Determine the node at which the new process will reside
according to transfer and location policies for sharing the
processing load:
• The transfer policy determines whether to situate a
new process locally or remotely.
• The location policy determines which node should host
a new process.
ü Location policies may be static or adaptive:
• Static location policies operate without regard to the current state of the system; they are based on a mathematical analysis aimed at optimizing the whole system and may be deterministic or probabilistic.
• Adaptive location policies apply heuristics to make the decision based on unpredictable run-time factors on each node.
ü Load-sharing systems may be centralized, hierarchical or decentralized:
• One manager component takes the decisions in a centralized system.
• In a hierarchical system there are several managers organized in a tree structure, and each manager makes decisions as far down the tree as possible.
• Nodes in a decentralized system exchange information with one another directly to make allocation decisions using:
• Sender-initiated algorithm: the node that requires a new process to be created is responsible for initiating the transfer decision (see the sketch below).
• Receiver-initiated algorithm: a node with relatively low load advertises its existence to other nodes so that they can transfer work to it.
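A rough sketch of the sender-initiated case referenced above: the probing limit, the threshold and the RemoteNode interface are assumptions made for illustration only, not part of the notes.

import java.util.List;
import java.util.Random;

// Illustrative sender-initiated location policy: the overloaded node probes a few
// randomly chosen nodes and transfers the new process to the first lightly loaded one.
interface RemoteNode {
    int currentLoad();                 // assumed remote load query
    void startProcess(String program); // assumed remote process creation
}

class SenderInitiated {
    private static final int PROBE_LIMIT = 3;   // assumption: probe at most 3 nodes
    private final Random random = new Random();

    void create(String program, int localLoad, int threshold,
                List<RemoteNode> others, Runnable createLocally) {
        if (localLoad <= threshold) {            // transfer policy: run locally if not overloaded
            createLocally.run();
            return;
        }
        for (int i = 0; i < PROBE_LIMIT && !others.isEmpty(); i++) {
            RemoteNode candidate = others.get(random.nextInt(others.size()));
            if (candidate.currentLoad() < threshold) {   // location policy: found a receiver
                candidate.startProcess(program);
                return;
            }
        }
        createLocally.run();                     // no receiver found: run locally anyway
    }
}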


Ø Creation of a new execution environment:


ü There are two approaches to defining and initializing the address space of a newly created process:
• The address space is of statically defined format and is initialized with zeros.
• The address space is defined with respect to an existing execution environment.
• In the case of UNIX fork, the newly created child process shares the parent’s text region and has its own heap and stack regions.
ü Copy-on-write approach:
• A general approach of inheriting all regions of the
parent process by the child process.
• An inherited region is logically copied from the parent’s region by sharing its frames between the two address spaces.
• A page in a region is physically copied only when one or the other process attempts to modify it.

Threads Performance
Ø Consider a server that has a pool of one or more threads.
Ø Each thread removes a client request from a queue of received
requests and processes it.
Ø Example: (how multi-threading maximize the server throughput)
• Request processing: 2 ms
• I/O delay (no caching): 8 ms
• Single thread:
• 10 ms per requests, 100 requests per second.
• Two threads (no caching):
• 8 ms per request, 125 requests per second
• two threads and caching:
• 75% hit rate
• mean I/O time per request: 0.75 * 0 + 0.25 * 8ms =
2 ms
• 500 requests per second

• increased processing time per request as a result of
caching : 2.5 ms
• 400 requests per second
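The figures above follow a simple bottleneck argument: throughput is roughly 1000 ms divided by the time per request at the busiest stage (CPU or disk), assuming the threads let the two stages overlap. A small illustrative Java sketch of that arithmetic (class and method names are ours, not from the notes):

public class ThroughputEstimate {
    // Requests per second when the server is limited by its slowest stage.
    static int maxThroughput(double cpuMsPerRequest, double ioMsPerRequest, int threads) {
        // With more than one thread, CPU and disk work overlap; the bottleneck stage decides.
        double bottleneckMs = (threads == 1)
                ? cpuMsPerRequest + ioMsPerRequest            // single thread: stages serialise
                : Math.max(cpuMsPerRequest, ioMsPerRequest);  // multi-threaded: stages overlap
        return (int) (1000.0 / bottleneckMs);
    }

    public static void main(String[] args) {
        System.out.println(maxThroughput(2.0, 8.0, 1));        // 100 requests/s
        System.out.println(maxThroughput(2.0, 8.0, 2));        // 125 requests/s
        System.out.println(maxThroughput(2.0, 0.25 * 8.0, 2)); // caching: 500 requests/s
        System.out.println(maxThroughput(2.5, 0.25 * 8.0, 2)); // caching overhead: 400 requests/s
    }
}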

[Figure: Client and server threads]

Multi-threaded Server Architectures


Ø There are various ways of mapping requests to threads within a server.
Ø The threading architectures of various implementations are:
• Worker pool architecture:
• A pool of server threads serves requests from a queue (see the sketch after the figure below).
• Possible to maintain priorities per queue.
• Thread-per-request architecture:
• Thread lives only for the duration of request handling.
• Maximizes throughput (no queueing).
• Expensive overhead for thread creation and
destruction.
• Thread-per-connection/per-object architecture:
• Compromise solution.
• No overhead for creation and deletion of threads.
• Requests may still block, hence throughput is not
maximal.

[Figure: a. Thread-per-request (worker threads), b. Thread-per-connection (per-connection threads), c. Thread-per-object (per-object threads); in each case an I/O thread passes requests from remote objects to the server threads.]
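A compact, illustrative sketch of the worker pool architecture referenced above: an I/O thread enqueues requests and a fixed pool of worker threads serves them. The class name, queue capacity and String-typed requests are assumptions for illustration; java.util.concurrent.BlockingQueue is used here instead of the hand-built monitor shown later in this unit.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Worker pool: an I/O thread enqueues requests; a fixed pool of workers dequeues and serves them.
class WorkerPoolServer {
    private final BlockingQueue<String> requests = new ArrayBlockingQueue<>(100);

    WorkerPoolServer(int workers) {
        for (int i = 0; i < workers; i++) {
            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        String request = requests.take();   // blocks while the queue is empty
                        handle(request);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();      // shut the worker down
                }
            });
            worker.start();
        }
    }

    // Called by the I/O thread for each incoming request.
    void submit(String request) throws InterruptedException {
        requests.put(request);
    }

    private void handle(String request) {
        System.out.println(Thread.currentThread().getName() + " handled " + request);
    }
}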


Ø Threads within clients – threads are also useful on the client side: when a client has two threads, one thread can generate requests while the other sends them to the server, so the first thread can continue with other work instead of blocking.

Threads Programming
Ø Some languages provide direct support for concurrent programming with threads (e.g. Ada95, Modula-3 and Java), while languages such as C rely on thread libraries.
Ø Java provides Thread class that includes the following methods for
creating, destroying and synchronizing threads:
• Thread(ThreadGroup group, Runnable target, String name) – Creates a new thread in the SUSPENDED state, belonging to group and identified as name; the thread will execute the run() method of target.
• setPriority(int newPriority), getPriority() - Set and return the
thread’s priority.
• run() - A thread executes the run() method of its target
object, if it has one, and otherwise its own run() method.
• start() - Change the state of the thread from
SUSPENDED to RUNNABLE.
• sleep(int millisecs) - Cause the thread to enter the
SUSPENDED state for the specified time.
• destroy() - Destroy the thread.
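As a minimal usage sketch of this API (standard java.lang.Thread; the thread name and printed text are arbitrary choices, not from the notes):

public class HelloThread {
    public static void main(String[] args) throws InterruptedException {
        Runnable target = () ->
            System.out.println("running in " + Thread.currentThread().getName());

        Thread worker = new Thread(target, "worker-1");  // created, not yet runnable
        worker.setPriority(Thread.NORM_PRIORITY);        // priorities influence scheduling
        worker.start();                                  // make the thread RUNNABLE; it executes run()

        Thread.sleep(100);                               // suspend the current thread for 100 ms
        worker.join();                                   // wait for the worker to finish
    }
}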

Java Thread Lifetimes (Life Cycle)


Ø A new thread is created in the SUSPENDED state on the same
Java Virtual Machine (JVM) as its creator.
Ø A thread executes its run() method after it is made RUNNABLE with the start() method.
Ø Threads can be assigned a priority and Java implementations will
run a particular thread in preference to any thread with lower
priority.
Ø A thread ends its life when it returns from the run() method or
when its destroy() method is called.
Ø Programs can manage threads in groups:
• Thread group is assigned at the time of its creation.
• Thread groups are useful for shielding various applications running in parallel on one JVM.
• A thread in one group may not interrupt a thread in another group.
• Thread group facilitates control of the relative priorities of
threads.
Thread synchronization:
• The main difficult issues in multi-threaded programming are
the sharing of objects and the techniques used for thread
coordination and cooperation.
• Threads do not have private copies of static (class) variables
or object instance variables.
• Java provides the synchronized keyword, which designates the well-known monitor construct.
Java Thread Synchronization
Ø Each thread’s local variables in methods are private to it.
Ø An object can have synchronized and non-synchronized methods.

• Example: Synchronized addTo() and removeFrom() methods
to serialize requests in worker pool example.
Ø An object can be accessed through only one invocation of any of its synchronized methods at a time.
Ø Threads can be blocked and woken up via condition variables:
• Thread awaiting a certain condition calls an object’s wait()
method.
• Other thread calls notify() or notifyAll() to awake one or all
blocked threads.
• Example:
• When worker thread discovers no requests to be
processed calls wait() on instance of Queue.
• When I/O thread adds request to queue calls notify()
method of queue to wake up worker.
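The coordination just described can be sketched as follows; the addTo() and removeFrom() names come from the worker pool example above, while the rest (class name, String-typed requests) is illustrative.

import java.util.LinkedList;

// Monitor-style request queue: workers block in removeFrom() until the I/O thread adds a request.
class RequestQueue {
    private final LinkedList<String> queue = new LinkedList<>();

    // Called by the I/O thread when a request arrives.
    public synchronized void addTo(String request) {
        queue.addLast(request);
        notify();                       // wake up one waiting worker thread
    }

    // Called by a worker thread; blocks while there is nothing to process.
    public synchronized String removeFrom() throws InterruptedException {
        while (queue.isEmpty()) {
            wait();                     // releases the lock and suspends this worker
        }
        return queue.removeFirst();
    }
}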

Java Thread Scheduling (Priority)


Ø A special yield() method is provided to enable other threads to be scheduled and make progress.
Ø There are two types of scheduling threads:
o Preemptive scheduling
• A thread may be suspended at any point to make way for another thread.
o Non-preemptive scheduling
• A thread runs until it makes a call to the threading system, which may then de-schedule it and schedule another thread to run.
• A code section that contains no threading system call is automatically a critical section.
• Such sections run exclusively and therefore cannot take advantage of a multiprocessor.
• Care must be taken with long-running code sections that do not contain threading system calls.
• Unsuitable for real-time applications, whose events must be processed within absolute times.
Threads Implementation
Ø Many operating system kernels provide support for multi-threaded
processes (e.g. Windows NT, Solaris, and Mach).
• Provide system calls for thread creation and management, and schedule individual threads.
Ø Other operating systems have only a single-threaded process
abstraction.
• Multi-threaded processes are implemented by linking a user
library of procedures to application programs.
• Suffer from the following problems:
• Threads within a process cannot take advantage of a
multiprocessor.
• A thread that takes a page fault blocks the entire
process and all its threads.

• Threads within different processes cannot be scheduled according to a single scheme of relative prioritization.
• But have significant advantages:
• Operations of thread creation are significantly less
costly.
• Allow customizing the thread scheduling module and
support more user-level threads to suit particular
application requirements.
Ø The advantages of user-level and kernel-level threads
implementations can be combined:
• The Mach OS enables user-level code to provide scheduling hints to the kernel’s thread scheduler.
• Solaris 2 adopts hierarchical scheduling that supports both
kernel-level and user-level threads.
• A user-level scheduler assigns each user-level thread
to a kernel-level thread.
• Take the advantage of a multiprocessor.
• Disadvantage: still lacks flexibility:
If a kernel-level thread is blocked, then all user-
level threads assigned to it are also prevented
from running.
• Several research projects have developed hierarchical
scheduling further to provide greater efficiency and
flexibility:
• FastThreads: an implementation of a hierarchic event-based scheduling system:
Consider the main system components are a
kernel running on a computer with one or more
processors and a set of application programs
running on it.
Each application process contains a user-level
scheduler to manage its threads.
The kernel is responsible for allocating virtual
processors to processes.
A process can also give back a virtual processor when it is no longer needed; it can also request extra virtual processors.
The kernel creates an SA (scheduler activation) by loading a physical processor’s registers with a context.
The user-level scheduler has the task of assigning its ready threads to the set of SAs currently allocated to it.


[Figure: A. Assignment of virtual processors to processes; B. Events between the user-level scheduler and the kernel (P added, P idle, P needed, SA preempted, SA blocked, SA unblocked). Key: P = processor; SA = scheduler activation.]

Notifications sent from kernel to process are

1. Virtual processor added: the kernel has assigned a new virtual processor to the process. The scheduler can load the SA with the context of a ready thread.
2. SA (scheduler activation) preempted: the kernel has taken away the specified SA from the process.
3. SA blocked: an SA has blocked in the kernel, and the kernel is using a fresh SA to notify the scheduler. The scheduler sets the SA to run a new ready thread.
4. SA unblocked: an SA that was blocked in the kernel has become unblocked and is ready to execute at user level again. The scheduler can now return the corresponding thread to the ready list.
