
R20 CMR Technical Campus

UNIT-III
Deadlocks-System Model, Deadlocks Characterization, Methods for Handling
Deadlocks, Deadlock Prevention, Deadlock Avoidance, Deadlock Detection, and
Recovery from Deadlock .
Process Management and Synchronization-The Critical Section Problem,
Synchronization Hardware, Semaphores, and Classical Problems of
Synchronization, Critical Regions, Monitors, Inter-process Communication
Mechanisms- IPC between processes on a single computer system, IPC between
processes on different systems, using pipes, FIFOs, message queues, shared memory.

I. DEADLOCKS

1. SYSTEM MODEL
• A system can be modeled as a collection of limited resources that are allocated
among a number of processes competing to execute.
• For example, resources may include memory, printers, CPUs, open files, tape
drives, CD-ROMS, etc.
• Some resources may have a single instance and some of them may have
multiple instances.
• In normal operation, a process must request a resource before using it and
release it when its usage is completed, in the following sequence:
1. Request: Every process must put a request to get the required resource.
If the request cannot be immediately granted, then the process must
wait until the resource(s) it needs become available.
2. Use: The process uses the resource, e.g. prints to the printer or reads from
the file.
3. Release: The process releases the resource when its usage is completed,
so that it becomes available for other processes.
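The request–use–release discipline maps naturally onto a lock. Below is a minimal Python sketch (the `printer` resource and `print_job` helper are illustrative names, not from the text) showing the three steps:

```python
import threading

printer = threading.Lock()   # a single-instance resource, e.g. a printer

def print_job(doc):
    printer.acquire()                 # 1. Request: wait until the resource is free
    try:
        return f"printed {doc}"       # 2. Use: work with the resource
    finally:
        printer.release()             # 3. Release: make it available to others

print_job("report.txt")
```

The `try/finally` guarantees the release happens even if the "use" step fails, so the resource is never held forever.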

Syeda Sumaiya Afreen pg. 1
2. DEADLOCKS CHARACTERIZATION
Deadlock Definition: Deadlock is a situation where the execution of two or more
processes is blocked because each process holds some resource and waits for a
resource held by some other process.

Example:

Fig 2.1 : Deadlock example


• Process P1 holds resource R1 and waits for resource R2 which is held by
process P2.
• Process P2 holds resource R2 and waits for resource R1 which is held by
process P1.
• Neither of the two processes can complete its execution and release its
resource.
• Thus, both the processes keep waiting infinitely.
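The mutual wait in Fig 2.1 can be reproduced with two locks. In the sketch below (the names P1, P2, r1, r2 mirror the figure; the barriers and timeout are added so the demonstration terminates instead of hanging forever), each thread holds one lock and then requests the other:

```python
import threading

r1, r2 = threading.Lock(), threading.Lock()
both_holding = threading.Barrier(2)   # reached once each thread holds its first lock
both_done = threading.Barrier(2)      # keeps locks held until both requests time out
outcome = {}

def worker(name, first, second):
    with first:                        # hold one resource...
        both_holding.wait()            # ...until the other thread holds its resource too
        # ...then request the other resource; a plain acquire() would block forever,
        # so a timeout is used to make the demonstration terminate
        outcome[name] = second.acquire(timeout=0.5)
        if outcome[name]:
            second.release()
        both_done.wait()

t1 = threading.Thread(target=worker, args=("P1", r1, r2))
t2 = threading.Thread(target=worker, args=("P2", r2, r1))
t1.start(); t2.start(); t1.join(); t2.join()
# Neither request can be granted: P1 and P2 are each waiting on the other.
```

Without the timeout, both threads would wait on each other infinitely, exactly as described above.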

Deadlock Necessary Conditions: *****

The four necessary conditions for deadlock to occur are:


(1). Mutual Exclusion: Mutual exclusion means a resource can be held by only one
process at a time. In other words, if a process P1 is using some resource R1 at a
particular instant of time, then some other process P2 cannot hold or use the same
resource R1 at the same instant of time.
(2). Hold and Wait: A process is holding at least one resource and is waiting to
acquire additional resources that are currently being held by other processes.
(3). No preemption: A resource can't be preempted forcefully from a process by
another process. That is, a resource can only be released voluntarily by the
holding process, when its usage is completed.
(4). Circular Wait: Circular wait is a condition when the first process is waiting for


the resource held by the second process, the second process is waiting for the
resource held by the third process, and so on. At last, the last process is waiting
for the resource held by the first process. So, every process is waiting for each
other and forms a circular wait.
Deadlock will occur if all the above four conditions happen simultaneously.

3. METHODS FOR HANDLING DEADLOCKS


The different ways of handling the deadlocks are:
Deadlock Handling Methods:
• Deadlock Prevention
• Deadlock Avoidance
• Deadlock Detection and Recovery
• Deadlock Ignorance

1. Deadlock prevention: In this method, the system will prevent the happening of
at least one of the four necessary conditions of the deadlock.
2. Deadlock avoidance: In this method, the system always wants to be in a safe
state. It maintains a set of data and using that data it decides whether a new
request should be accepted or not. If the system is going into the unsafe state by
taking that new request, then that request is avoided. Otherwise the request is
accepted.
3. Deadlock detection and recovery: The CPU periodically applies the Banker's
algorithm or checks for cycles in the resource allocation graph to detect a
deadlock. If a deadlock is detected, then the system aborts processes one after
another or preempts some resources until the deadlock is gone.
4. Deadlock Ignorance: If deadlocks only occur once a year or so, it may be
better to simply let them happen and reboot as necessary than to incur the constant
overhead and system performance penalties associated with deadlock prevention
or detection. This is the approach that both Windows and UNIX take.


4. DEADLOCK PREVENTION
The deadlock prevention strategy involves designing a system that violates
one of the four necessary conditions for the occurrence of deadlock. This ensures
that the system remains free from the deadlock. The various conditions to violate
necessary conditions are:
(1). Mutual Exclusion:
• To violate this condition, all the system resources must be shareable.
• But in reality, some resources cannot be shared. For example, a printer is
mutually exclusive by nature. So, this condition cannot always be violated.

(2). Hold and Wait: This condition can be violated in the following ways-
Approach-01: Starts with all approach
• A process has to first request for all the resources it requires for execution.
• Once it has acquired all the resources, only then it can start its execution.
• This approach ensures that the process does not hold some resources and wait
for other resources.
• But this approach is less efficient and not implementable, since it is not
possible to predict in advance which resources will be required during
execution.
Approach-02: Start with few and acquire remaining later approach
• A process is allowed to acquire the resources it desires at the current moment.
• After acquiring the resources, it can start execution.
• Later, in order to continue the execution, the process may need other
resources. Now before making any new request, it has to compulsorily release
all the resources that it holds currently.
• This approach is efficient and implementable.
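Approach-01 above can be sketched as a routine that requests every required resource up front and backs off completely if any of them is unavailable (the `run_job` helper and the resource names are hypothetical):

```python
import threading

tape, printer = threading.Lock(), threading.Lock()

def run_job(resources):
    acquired = []
    for r in resources:               # request every resource before starting
        if r.acquire(timeout=1.0):
            acquired.append(r)
        else:                         # any resource unavailable: hold nothing, retry later
            for held in reversed(acquired):
                held.release()
            return "retry later"
    try:
        return "job done"             # execution starts only with all resources in hand
    finally:
        for held in reversed(acquired):
            held.release()

run_job([tape, printer])
```

Because the process either holds all its resources or none of them, the hold-and-wait condition can never arise.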

(3). No Preemption: This condition can be violated by forceful preemption.


• Consider a process that is holding some resources and requests other resources
that cannot be immediately allocated to it.
• The condition can be violated, by forcefully preempting the currently held
resources.
• A process is allowed to forcefully preempt the resources held by some other
process (victim) only if:
➢ It is a high priority process or a system process.
➢ The victim process is in the waiting state.
(4). Circular Wait: To violate this condition, the following approach is followed-
• A natural number is assigned to every resource starting from one.

• Each process is allowed to request the resources either in only increasing or
only decreasing order of the resource number.
• In case increasing order is followed, if a process requires a lower-numbered
resource, then it must release all the resources having larger numbers, and
vice versa.
• This approach is the most practical and implementable approach. However,
this approach may cause starvation but will never lead to deadlock.
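The numbering rule above can be sketched with locks: each resource carries a number, and acquisitions always follow increasing order (the `with_resources` helper is illustrative):

```python
import threading

# Every resource is numbered; requests must follow increasing order,
# so a circular wait can never form.
locks = {1: threading.Lock(), 2: threading.Lock(), 3: threading.Lock()}

def with_resources(resource_ids):
    order = sorted(resource_ids)              # enforce the increasing-order rule
    for rid in order:
        locks[rid].acquire()
    try:
        return f"used resources {order}"
    finally:
        for rid in reversed(order):
            locks[rid].release()

with_resources([3, 1])                        # internally acquired as 1, then 3
```

Even if two processes need the same pair of resources, both acquire them in the same numeric order, so neither can end up waiting on the other in a cycle.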

5. RESOURCE ALLOCATION GRAPH


The resource allocation graph (RAG) is the pictorial representation of the
state of a system that gives the complete information about all the processes which
are holding some resources or waiting for some other resources. It also contains the
information about all the instances of all the resources whether they are available or
being used by the processes.
The symbols used in RAG are:
• A process Pi is represented by a circle.
• A resource Rj is represented by a rectangle. A single dot inside the rectangle
represents a single instance; multiple dots (e.g., three) represent multiple
instances, one dot per instance.
• Pi → Rj (request edge): the process Pi is requesting resource Rj.
• Rj → Pi (assignment edge): the resource Rj is assigned to process Pi.
• Pi ⇢ Rj (claim edge, drawn dashed): a claim edge will be converted into a
request edge later.

Claim edge: When the process Pi needs resource Rj, process Pi first places a claim
edge to resource Rj. Later, the claim edge is converted into a request edge if it
does not form a cycle.
If a process is using (or assigned) a resource, an arrow is drawn from the resource
node to the process node.
If a process is requesting a resource, an arrow is drawn from the process node to
the resource node.

Deadlocks can be described more precisely in terms of a directed graph called a


system resource-allocation graph. This graph consists of a set of vertices V and a set of
edges E. The set of vertices V is partitioned into two different types of nodes: P = {P1,
P2, ..., Pn}, the set consisting of all the active processes in the system, and R = {R1,
R2, ..., Rm}, the set consisting of all resource types in the system.
A directed edge from process Pi to resource type Rj is denoted by Pi → Rj; it signifies
that process Pi has requested an instance of resource type Rj and is currently waiting
for that resource. A directed edge from resource type Rj to process Pi is denoted by
Rj → Pi; it signifies that an instance of resource type Rj has been allocated to process Pi.
A directed edge Pi → Rj is called a request edge; a directed edge Rj → Pi is called an
assignment edge.
The resource-allocation graph shown in Figure depicts the following situation.

Fig 5.1 : Resource-allocation graph.

➢ The sets P, R, and E:


• P={P1,P2,P3}
• R= {R1,R2,R3,R4}
• E = {P1→R1, P2→R3, R1→P2, R2→P2, R2→P1, R3→P3}

➢ Resource instances:
• One instance of resource type R1
• Two instances of resource type R2
• One instance of resource type R3
• Three instances of resource type R4

➢ Process states:
• Process P1 is holding an instance of resource type R2 and is waiting for an
instance of resource type R1.
• Process P2 is holding an instance of R1 and an instance of R2 and is waiting for
an instance of R3.
• Process P3 is holding an instance of R3.

Given the definition of a resource-allocation graph, it can be shown that, if the graph
contains no cycles, then no process in the system is deadlocked. If the graph does
contain a cycle, then a deadlock may exist.
If each resource type has exactly one instance, then a cycle implies that a
deadlock has occurred. If the cycle involves only a set of resource types, each of which
has only a single instance, then a deadlock has occurred. Each process involved in the
cycle is deadlocked. In this case, a cycle in the graph is both a necessary and a
sufficient condition for the existence of deadlock.
If each resource type has several instances, then a cycle does not necessarily
imply that a deadlock has occurred. In this case, a cycle in the graph is a necessary but
not a sufficient condition for the existence of deadlock.
To illustrate this concept, we return to the resource-allocation graph depicted in
(Figure 5.1). Suppose that process P3 requests an instance of resource type R2. Since
no resource instance is currently available, a request edge P3 → R2 is added to the
graph (Figure 5.2). At this point, two minimal cycles exist in the system:

P1 → R1 → P2 → R3 → P3 → R2 → P1
P2 → R3 → P3 → R2 → P2

Processes P1, P2, and P3 are deadlocked. Process P2 is waiting for the resource R3,
which is held by process P3. Process P3 is waiting for either process P1 or

Fig 5.2: Resource-allocation graph with a deadlock


process P2 to release resource R2. In addition, process P1 is waiting for process P2 to
release resource R1.
Now consider the resource-allocation graph in (Figure 5.3)

Fig 5.3 : Resource-allocation graph with a cycle but no deadlock

In this example, we also have a cycle: P1 → R1 → P3 → R2 → P1.

However, there is no deadlock. Observe that process P4 may release its instance of
resource type R2. That resource can then be allocated to P3, breaking the cycle.



In summary if a resource-allocation graph does not have a cycle, then the system is
not in a deadlocked state. If there is a cycle, then the system may or may not be in a
deadlocked state. This observation is important when we deal with the deadlock
problem.

Example 1 (Single-instance RAG): Process P1 holds resource R1 and waits for
R2; process P2 holds resource R2 and waits for R1. Assume R1 and R2 are single
instance. The RAG is given below:

R1 → P1 (P1 is holding R1), P1 → R2 (P1 is waiting for R2)
R2 → P2 (P2 is holding R2), P2 → R1 (P2 is waiting for R1)

If there is a cycle in the resource allocation graph and each resource in the
cycle provides only one instance, then the processes will be in deadlock. Therefore
process P1 and process P2 are in deadlock.

Note: A RAG cannot be used to detect deadlock if the number of instances of any
resource is greater than one.
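For single-instance resources, deadlock detection on a RAG reduces to cycle detection in a directed graph. Below is a sketch using depth-first search, applied to the edges of Example 1 (the `has_cycle` helper is an illustrative name):

```python
def has_cycle(edges):
    """Detects a cycle in a directed graph given as (from, to) pairs."""
    graph = {}
    for u, v in edges:
        graph.setdefault(u, []).append(v)
        graph.setdefault(v, [])
    WHITE, GRAY, BLACK = 0, 1, 2                  # unvisited / on DFS stack / finished
    color = {node: WHITE for node in graph}

    def dfs(node):
        color[node] = GRAY
        for nxt in graph[node]:
            if color[nxt] == GRAY:                # back edge => cycle
                return True
            if color[nxt] == WHITE and dfs(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in list(graph))

# Example 1: assignment edges R1->P1, R2->P2; request edges P1->R2, P2->R1.
deadlocked = has_cycle([("R1", "P1"), ("P1", "R2"), ("R2", "P2"), ("P2", "R1")])
```

With the cycle R1 → P1 → R2 → P2 → R1 present, `has_cycle` reports a deadlock; removing any one edge breaks the cycle and the report clears.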

6. DEADLOCK AVOIDANCE
Deadlock-prevention algorithms, as discussed earlier, prevent deadlocks
by restraining how requests can be made. The restraints ensure that at least one of
the necessary conditions for deadlock cannot occur and, hence, that deadlocks
cannot occur. Possible side effects of preventing deadlocks by this method,
however, are low device utilization and reduced system throughput.
An alternative method for avoiding deadlocks is to require additional
information about how resources are to be requested. For example, in a system
with one tape drive and one printer, the system might need to know that process P
will request first the tape drive and then the printer before releasing both

resources, whereas process Q will request first the printer and then the tape drive.
With this knowledge of the complete sequence of requests and releases for each
process, the system can decide for each request whether or not the process should
wait in order to avoid a possible future deadlock. Each request requires that in
making this decision the system consider the resources currently available, the
resources currently allocated to each process, and the future requests and releases
of each process.
Safe State

• The general idea behind deadlock avoidance is to make sure the system is
always in a safe state and never enters an unsafe state.
• A system is in a safe state if there exists a safe sequence of processes { P0,
P1, P2, ..., Pn } such that all of the resource requests for each Pi can be
satisfied by the currently available resources plus the resources held by all
processes Pj, where j < i in the safe sequence.
• If a safe sequence exists, then the system is not in deadlock.
• If a safe sequence does not exist, then the system is in an unsafe state,
which may lead to deadlock.
• All safe states are deadlock free, but not all unsafe states lead to deadlocks.

Fig 6.0 : Safe, unsafe, and deadlock state spaces (the deadlock region lies
inside the unsafe region, disjoint from the safe region)

➢ Deadlock avoidance can be achieved by two methods. They are


• Checking for a cycle in the Resource Allocation Graph: It is
applicable only if all the resources are of single instance. It uses a
claim edge before converting it into a request edge.
• Banker's algorithm (for both single and multiple instances of resources)

6.1 Resource-Allocation-Graph Algorithm

If we have a resource-allocation system with only one instance of each


resource type, a variant of the resource-allocation graph can be used for deadlock
avoidance. In addition to the request and assignment edges already described, we
introduce a new type of edge, called a claim edge. A claim edge Pi → Rj indicates that
process Pi may request resource Rj at some time in the future. This edge resembles a
request edge in direction but is represented in the graph by a dashed line.
When process Pi requests resource Rj, the claim edge Pi → Rj is
converted to a request edge. Similarly, when a resource Rj is released by Pi, the
assignment edge Rj → Pi is reconverted to a claim edge Pi → Rj. We note that the
resources must be claimed a priori in the system. That is, before process Pi starts
executing, all its claim edges must already appear in the resource-allocation graph. We
can relax this condition by allowing a claim edge Pi → Rj to be added to the graph only
if all the edges associated with process Pi are claim edges.
Suppose that process Pi requests resource Rj. The request can be granted
only if converting the request edge Pi → Rj to an assignment edge Rj → Pi does not
result in the formation of a cycle in the resource-allocation graph. Note that we check
for safety by using a cycle-detection algorithm. An algorithm for detecting a cycle in
this graph requires an order of n² operations, where n is the number of processes in the
system. If no cycle exists, then the allocation of the resource will leave the system in a
safe state. If a cycle is found, then the allocation will put the system in an unsafe state.
Therefore, process Pi will have to wait for its requests to be satisfied.

Fig 6.1.1: Resource-allocation graph for deadlock avoidance

To illustrate this algorithm, we consider the resource-allocation graph of (Figure 6.1.1).



Suppose that P2 requests R2. Although R2 is currently free, we cannot allocate it to P2,
since this action will create a cycle in the graph (Figure 6.1.2). A cycle indicates that the
system is in an unsafe state. If P1 requests R2, and P2 requests R1, then a deadlock will
occur.

Fig 6.1.2 : An unsafe state in a resource-allocation graph.

6.2 BANKER’S ALGORITHM *****:

Banker’s Algorithm is a deadlock avoidance strategy.


The Banker's Algorithm gets its name because it follows the strategy used in a
banking system to assure that when the bank lends out resources it will still be
able to satisfy all its clients. (A banker won't loan out a little money to start
building a house unless they are assured that they will later be able to loan out
the rest of the money to finish the house.)
When a process starts up, it must state in advance the maximum allocation of
resources it may request, up to the amount available on the system.
When a request is made, the scheduler determines whether granting the
request would leave the system in a safe state. If not, then the process must
wait until the request can be granted safely.
To implement banker’s algorithm, following four data structures are used:
(1) Available: It is a single dimensional array. It specifies the number of
instances of each resource type currently available.


(2) Max: It is a two dimensional array. It specifies the maximum number
of instances of each resource type that a process can request.
(3) Allocation: It is a two dimensional array. It specifies the number of
instances of each resource type that has been allocated to the process.
(4) Need: It is a two dimensional array. It specifies the number of
instances of each resource type that a process still requires for execution.
Need[i][j] = Max[i][j] - Allocation [i][j]

In order to apply the Banker's algorithm, we first need an algorithm for
determining whether the system is in a safe state or not. This can be done by
using the safety algorithm.
To implement the safety algorithm, the following two data structures are used:
(1) Work: It is a single dimensional array. It specifies the number of
instances of each resource type currently available. (Work = Available)
(2) Finish: It is a single dimensional array. It specifies whether each
process has finished its execution or not.

6.2.1 Safety Algorithm

The algorithm for finding out whether or not a system is in a safe state
can be described as follows:
Step 1: Let Work and Finish be vectors of length m and n, respectively (m is
the total number of resources and n is the total number of processes).
Initially, Work = Available and Finish[i] = false for i = 0, 1, ..., n - 1.
Step 2: Find an index i such that both of the below conditions are satisfied:
Finish[i] == false
Needi <= Work
If there is no such i, then proceed to step 4.
Step 3: Perform the following:
Work = Work + Allocationi
Finish[i] = true
Go to step 2.
Step 4: If Finish[i] == true for all i, then the system is in a safe state.
That means if all processes can finish, then the system is in a safe state.
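The steps above can be sketched in Python. The snapshot fed to it is the one from the worked problem later in this section; note that a system can have several safe sequences, so the sequence this scan returns need not be the only valid answer:

```python
def is_safe(available, allocation, need):
    """Safety algorithm: returns (is the state safe?, one safe sequence)."""
    n, m = len(allocation), len(available)
    work = list(available)                           # Step 1: Work = Available
    finish = [False] * n                             #         Finish[i] = false
    sequence = []
    progress = True
    while progress:                                  # Steps 2-3: repeat while some Pi can run
        progress = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):                   # Pi finishes and returns its resources
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(f"P{i}")
                progress = True
    return all(finish), sequence                     # Step 4: safe iff all processes finished

# Snapshot from the worked problem below (Need = Max - Allocation).
allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
need       = [[7,4,3], [1,2,2], [6,0,0], [0,1,1], [4,3,1]]
safe, seq = is_safe([3,3,2], allocation, need)
```

This scan happens to find <P1, P3, P4, P0, P2>; the problem's answer <P1, P3, P4, P2, P0> is equally valid, since safe sequences are not unique.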


6.2.2 Resource-Request Algorithm

Let Requesti be the request array for process Pi. Requesti[j] = k means process Pi wants
k instances of resource type Rj. When a request for resources is made by process Pi,
the following actions are taken:
Step 1: If Requesti <= Needi, go to step (2); otherwise, raise an error condition,
since the process has exceeded its maximum claim.
Step 2: If Requesti <= Available, go to step (3); otherwise, Pi must wait, since
the resources are not available.
Step 3: Have the system pretend to have allocated the requested resources to
process Pi by modifying the state as follows:
Available = Available - Requesti
Allocationi = Allocationi + Requesti
Needi = Needi - Requesti

If the resulting resource-allocation state is safe, the transaction is completed, and process
Pi is allocated its resources. However, if the new state is unsafe, then Pi must wait for
Requesti, and the old resource-allocation state is restored.
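A sketch of the resource-request algorithm, including the pretend-allocation and the rollback on an unsafe result (`request_resources` and `_is_safe` are illustrative helper names; the embedded safety check repeats the safety algorithm of 6.2.1):

```python
def request_resources(pid, request, available, allocation, need):
    """Banker's resource-request algorithm: grant only if the result is safe."""
    m = len(available)
    if any(request[j] > need[pid][j] for j in range(m)):      # Step 1
        raise ValueError("process exceeded its maximum claim")
    if any(request[j] > available[j] for j in range(m)):      # Step 2: must wait
        return False
    for j in range(m):                                        # Step 3: pretend to allocate
        available[j] -= request[j]
        allocation[pid][j] += request[j]
        need[pid][j] -= request[j]
    if _is_safe(available, allocation, need):
        return True                                           # safe: grant the request
    for j in range(m):                                        # unsafe: roll back the state
        available[j] += request[j]
        allocation[pid][j] -= request[j]
        need[pid][j] += request[j]
    return False

def _is_safe(available, allocation, need):
    work, finish = list(available), [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, row in enumerate(need):
            if not finish[i] and all(row[j] <= work[j] for j in range(len(work))):
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = progress = True
    return all(finish)

# Example: P1 asks for [1 0 2] against the snapshot from the problem below.
available  = [3, 3, 2]
allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
need       = [[7,4,3], [1,2,2], [6,0,0], [0,1,1], [4,3,1]]
granted = request_resources(1, [1, 0, 2], available, allocation, need)
```

On this snapshot, P1's request [1 0 2] is granted while P4's request [3 3 0] would be refused and rolled back, matching parts (c) and (d) of the worked problem below.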

***** [IMPORTANT] PROBLEM: Consider a system with five processes P0
through P4 and three resource types A, B, C. Resource type A has 10 instances, B
has 5 instances and type C has 7 instances. Suppose at time t0 the following
snapshot of the system has been taken:

Process    Allocation    MAX      Available
           A B C         A B C    A B C
P0         0 1 0         7 5 3    3 3 2
P1         2 0 0         3 2 2
P2         3 0 2         9 0 2
P3         2 1 1         2 2 2
P4         0 0 2         4 3 3

a) What will be the content of the Need matrix?

Solution: Need = Max – Allocation

        7 5 3       0 1 0       7 4 3
        3 2 2       2 0 0       1 2 2
Need =  9 0 2   –   3 0 2   =   6 0 0
        2 2 2       2 1 1       0 1 1
        4 3 3       0 0 2       4 3 1
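The subtraction can be checked mechanically; the element-wise computation below reproduces the Need matrix (variable names are illustrative):

```python
max_demand = [[7,5,3], [3,2,2], [9,0,2], [2,2,2], [4,3,3]]
allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]

# Need[i][j] = Max[i][j] - Allocation[i][j], computed element-wise
need = [[mx - al for mx, al in zip(mrow, arow)]
        for mrow, arow in zip(max_demand, allocation)]
```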

b) Is the system in a safe state? If yes, then what is the safe sequence?

Solution: Use safety algorithm steps to check whether the system is in safe state
or not.

Step 1: Work = Available = [ 3 3 2]


Finish = [ F F F F F ]
Step 2 & 3:

Pi satisfying Needi ≤ Work     Work = Work + Allocationi          Finish vector
and Finish[i] = F (false)
P1                             [3 3 2] + [2 0 0] = [5 3 2]        [F T F F F]
P3                             [5 3 2] + [2 1 1] = [7 4 3]        [F T F T F]
P4                             [7 4 3] + [0 0 2] = [7 4 5]        [F T F T T]
P2                             [7 4 5] + [3 0 2] = [10 4 5]       [F T T T T]
P0                             [10 4 5] + [0 1 0] = [10 5 7]      [T T T T T]

Step 4: Finish[i] = T (true) for all processes.

So, the system is in a safe state. As the system is in a safe state, there is no
deadlock. The safe sequence is <P1, P3, P4, P2, P0>.

c) What will happen if process P1 requests one additional instance of
resource type A and two instances of resource type C?

Solution: P1 requests one additional instance of resource type A and two
instances of resource type C, i.e., P1 request = [1 0 2].
Whether the request made by a process should be accepted or not is decided
by applying the steps of the resource-request algorithm.

Step 1: If Requesti <= Needi then go to step 2; otherwise raise an error.
[1 0 2] ≤ [1 2 2] (true, so go to step 2)

Step 2: If Requesti <= Available then go to step 3; otherwise, Pi must wait.
[1 0 2] ≤ [3 3 2] (true, so go to step 3)

Step 3:
Available = Available – Requesti = [3 3 2] – [1 0 2] = [2 3 0]
Allocationi = Allocationi + Requesti = [2 0 0] + [1 0 2] = [3 0 2]
Needi = Needi – Requesti = [1 2 2] – [1 0 2] = [0 2 0]

As the resource-request algorithm steps are completed, we need to check the
safety algorithm for the existence of a safe sequence. If a safe sequence
exists, then only the changes made by the resource-request algorithm become
permanent. Otherwise, the changes are rejected, which means the new request
is rejected.
After inserting the modifications done above, the Allocation, MAX, Need
and Available values are:

Process    Allocation    MAX      Need     Available
           A B C         A B C    A B C    A B C
P0         0 1 0         7 5 3    7 4 3    2 3 0
P1         3 0 2         3 2 2    0 2 0
P2         3 0 2         9 0 2    6 0 0
P3         2 1 1         2 2 2    0 1 1
P4         0 0 2         4 3 3    4 3 1

Now apply safety algorithm steps to find safe sequence

Step 1: Work = Available = [ 2 3 0]


Finish = [ F F F F F ]


Step 2 & 3:

Pi satisfying Needi ≤ Work     Work = Work + Allocationi         Finish vector
and Finish[i] = F (false)
P1                             [2 3 0] + [3 0 2] = [5 3 2]       [F T F F F]
P3                             [5 3 2] + [2 1 1] = [7 4 3]       [F T F T F]
P4                             [7 4 3] + [0 0 2] = [7 4 5]       [F T F T T]
P0                             [7 4 5] + [0 1 0] = [7 5 5]       [T T F T T]
P2                             [7 5 5] + [3 0 2] = [10 5 7]      [T T T T T]

Step 4: Finish[i] = T (true) for all processes.

So, the system is in a safe state. As the system is in a safe state, there is no
deadlock. The safe sequence is <P1, P3, P4, P0, P2>. As there exists a safe
sequence, the request made by P1 is accepted and the changes done by the
resource-request algorithm are made permanent.

d) If process P4 requests [3 3 0] resources, can it be granted or not?

Solution: P4 request = [3 3 0]
Whether the request made by a process should be accepted or not is decided
by applying the steps of the resource-request algorithm.
Step 1: If Requesti <= Needi then go to step 2; otherwise raise an error.
[3 3 0] ≤ [4 3 1] (true, so go to step 2)

Step 2: If Requesti <= Available then go to step 3; otherwise, Pi must wait.
[3 3 0] ≤ [3 3 2] (true, so go to step 3)

Step 3:
Available = Available – Requesti = [3 3 2] – [3 3 0] = [0 0 2]
Allocationi = Allocationi + Requesti = [0 0 2] + [3 3 0] = [3 3 2]
Needi = Needi – Requesti = [4 3 1] – [3 3 0] = [1 0 1]
As the resource-request algorithm steps are completed, we need to check
the safety algorithm for the existence of a safe sequence. If a safe
sequence exists, then only the changes made by the resource-request
algorithm become permanent. Otherwise, the changes are rejected, which
means the new request is rejected.
After inserting the modifications done above, the Allocation, MAX, Need
and Available values are (only the P4 row and the Available values have
changed; the others are the same):

Process    Allocation    MAX      Need     Available
           A B C         A B C    A B C    A B C
P0         0 1 0         7 5 3    7 4 3    0 0 2
P1         2 0 0         3 2 2    1 2 2
P2         3 0 2         9 0 2    6 0 0
P3         2 1 1         2 2 2    0 1 1
P4         3 3 2         4 3 3    1 0 1

Now apply safety algorithm steps to find safe sequence

Step 1: Work = Available = [ 0 0 2]


Finish = [ F F F F F ]
Step 2 & 3:

Pi satisfying Needi ≤ Work     Work = Work + Allocationi    Finish vector
and Finish[i] = F (false)
P0                             –                            [F F F F F]
P1                             –                            [F F F F F]
P2                             –                            [F F F F F]
P3                             –                            [F F F F F]
P4                             –                            [F F F F F]

Step 4: Finish[i] is not T (true) for all processes.

So, the system is not in a safe state, and a safe sequence does not exist.
As the safe sequence does not exist, the request made by P4 is rejected and
the changes done by the resource-request algorithm are rolled back.

7. DEADLOCK DETECTION

• If deadlocks are not avoided, then another approach is to detect when they
have occurred and recover somehow.
• In addition to the performance hit of constantly checking for deadlocks, a
policy algorithm must be in place for recovering from deadlocks, and there
is potential for lost work when processes must be aborted or have their
resources preempted.

Single Instance of Each Resource Type


• If each resource category has a single instance, then we can use a variation
of the resource-allocation graph known as a wait-for graph.
• A wait-for graph can be constructed from a resource-allocation graph by
eliminating the resources and collapsing the associated edges, as shown in
the figure below.
• An arc from Pi to Pj in a wait-for graph indicates that process Pi is waiting
for a resource that process Pj is currently holding.

Fig 7.1 - (a) Resource allocation graph. (b) Corresponding wait-for graph

• If there is a cycle in the wait-for graph, then it indicates a deadlock.


• This algorithm must maintain the wait-for graph, and periodically search it for
cycles.

Several Instances of a Resource Type

• For multiple instances, the detection algorithm uses the safety-algorithm
steps discussed in the Deadlock Avoidance topic above: if a completion
sequence exists, then there is no deadlock; otherwise there is a deadlock.
• The wait-for graph scheme is not applicable to a resource-allocation system
with multiple instances of each resource type. The algorithm employs
several time-varying data structures that are similar to those used in the
banker's algorithm
Available: A vector of length m indicates the number of available
resources of each type.
Allocation: An n x m matrix defines the number of resources of each type
currently allocated to each process.
Request: An n x m matrix indicates the current request of each process. If
Request[i][j] equals k, then process Pi is requesting k more instances of
resource type Rj.
• The detection algorithm described here simply investigates every possible
allocation sequence for the processes that remain to be completed.
1. Let Work and Finish be vectors of length m and n, respectively. Initialize
Work = Available. For i = 0, 1, ..., n-1, if Allocationi != 0, then
Finish[i] = false; otherwise, Finish[i] = true.
2. Find an index i such that both
a. Finish[i] == false
b. Requesti <= Work
If no such i exists, go to step 4.
3. Work = Work + Allocationi; Finish[i] = true; go to step 2.
4. If Finish[i] == false for some i, 0 <= i < n, then the system is in a
deadlocked state. Moreover, if Finish[i] == false, then process Pi is
deadlocked.
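The detection algorithm can be sketched in Python. The snapshot values below are assumed for illustration (the original tables were figures and did not survive extraction); they are chosen to be consistent with the instance totals stated in the text (seven A, two B, six C) and with the claimed completion sequence:

```python
def detect_deadlock(available, allocation, request):
    """Returns the list of deadlocked processes (empty if none)."""
    n, m = len(allocation), len(available)
    work = list(available)
    # Step 1: a process holding no resources cannot be part of a deadlock
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    progress = True
    while progress:                                   # Steps 2-3
        progress = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):                    # optimistically let Pi complete
                    work[j] += allocation[i][j]
                finish[i] = progress = True
    return [f"P{i}" for i in range(n) if not finish[i]]   # Step 4

# Assumed snapshot (illustrative; allocations sum to 7 A, 2 B, 6 C, Available = 0 0 0):
allocation = [[0,1,0], [2,0,0], [3,0,3], [2,1,1], [0,0,2]]
request    = [[0,0,0], [2,0,2], [0,0,0], [1,0,0], [0,0,2]]
deadlocked = detect_deadlock([0,0,0], allocation, request)   # no deadlock yet
```

With P2's additional request for one instance of C, the same routine reports P1 through P4 deadlocked, matching the claim in the text.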

• This algorithm requires an order of m x n² operations to detect whether the
system is in a deadlocked state. Consider a system with five processes P0
through P4 and three resource types A, B, and C. Resource type A has seven
instances, resource type B has two instances, and resource type C has six
instances. Suppose that, at time T0, we have the following resource-
allocation state:

We claim that the system is not in a deadlocked state. Indeed, if we execute


our algorithm, we will find that the sequence <P0, P2, P3, P1, P4 > results
in Finish[i] == true for all i.

Suppose now that process P2 makes one additional request for an instance
of type C. The Request matrix is modified as follows:

We claim that the system is now deadlocked.

8. DEADLOCK RECOVERY

When a detection algorithm determines that a deadlock exists, then the


system or operator is responsible for handling deadlock problem. There are two
options for breaking a deadlock.
• Process Termination

• Resource preemption

Process Termination:
There are two methods to eliminate deadlocks by terminating a process:
• Abort all deadlocked processes: This method clearly breaks the deadlock cycle
by terminating all the processes, but at great expense: it discards the partial
computations already completed by the processes.
• Abort one process at a time until the deadlock cycle is eliminated: This
method terminates one process at a time, and invokes a deadlock-detection
algorithm to determine whether any processes are still deadlocked.
Resource Preemption:
In resource preemption, the operator or system preempts some resources
from processes and gives them to other processes until the deadlock cycle
is broken. If preemption is required to deal with deadlocks, then three issues
need to be addressed:
• Selecting a victim: The system or operator selects which resources and
which processes are to be preempted, based on cost factors.
• Rollback: The system or operator must roll back the process to some
safe state and restart it from that state.
• Starvation: The system or operator should ensure that resources will
not always be preempted from the same process.

II . PROCESS MANAGEMENT AND


SYNCHRONIZATION

Process Synchronization is the task of coordinating the concurrent execution of
processes in such a way that no two processes can access the same shared data
and resources at the same time.

1. CRITICAL SECTION PROBLEM


A Critical Section is a code segment that accesses shared variables and has to be
executed as an atomic action. It means that in a group of cooperating processes,
at a given point of time, only one process must be executing its critical section. If
any other process also wants to execute its critical section, it must wait until the
first one finishes.

do {
    entry section
    critical section
    exit section
    remainder section
} while (TRUE);

• Entry Section: It decides whether a particular process is allowed to enter.

• Critical Section: This part allows one process to enter and modify the shared
variable.

• Exit Section: Exit section allows the other processes that are waiting in the
Entry Section, to enter into the Critical Sections.

• Remainder Section: All other parts of the code, which are not shared (code
other than the Critical, Entry, and Exit Sections), are known as the Remainder
Section.

A Race condition is a scenario that occurs in a multithreaded environment
when multiple threads share the same resource or execute the same piece of
code. If not handled properly, it can lead to an undesirable situation where the
output depends on the order in which the threads execute. To avoid race
conditions, the shared data is treated as a critical section, and a solution is
provided to control access to it.

Necessary conditions for a solution to the Critical Section Problem: A solution
to the critical section problem must satisfy the following three conditions:

• Mutual Exclusion: Out of a group of cooperating processes, only one


process can be in its critical section at a given point of time.
• Progress: If no process is in its critical section, and one or more processes
want to execute their critical sections, then one of those processes must be
allowed to enter its critical section.
• Bounded Waiting: After a process makes a request to enter its critical
section, there must be a bound on the number of times other processes are
allowed to enter their critical sections before this process is granted entry.
Once that bound is reached, the system must grant the process permission to
enter its critical section.

Peterson’s solution

Peterson’s solution is a software-based solution to the critical section
problem that satisfies all three requirements: Mutual Exclusion, Progress
and Bounded Waiting. It works only for two processes; it does not generalize
to more than two. Let the two processes be P0 and P1. They share two
variables:

boolean flag[2];
int turn;

Initially, flag[0] = flag[1] = false. The variable turn indicates whose turn it
is to enter the critical section. In the code, for process P0, i = 0 and j = 1;
for process P1, i = 1 and j = 0. The structure of Peterson’s solution is given
as:
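Peterson's algorithm can be sketched as follows. This is an illustrative Python version: the two "processes" are threads, the shared variable counter stands in for the critical-section data, and CPython's per-statement atomicity plays the role of the sequentially consistent memory the algorithm assumes (real hardware would need memory barriers):

```python
import threading
import time

flag = [False, False]   # flag[i]: process i wants to enter its critical section
turn = 0                # whose turn it is to defer to
counter = 0             # shared data protected by the critical section

def process(i, iterations=10_000):
    global turn, counter
    j = 1 - i
    for _ in range(iterations):
        flag[i] = True              # entry section: declare intent...
        turn = j                    # ...and give the other process priority
        while flag[j] and turn == j:
            time.sleep(0)           # busy-wait (yield so the peer can run)
        counter += 1                # critical section
        flag[i] = False             # exit section
        # remainder section

threads = [threading.Thread(target=process, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 20000 -- no increment was lost
```

Without the entry/exit protocol, the unprotected counter += 1 could lose updates; with it, every increment survives.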

When process Pi wants to enter its critical section, it first sets
flag[i] = true. It then sets turn = j, deferring to the other process in case
it also wants to enter. If both processes attempt to enter at the same time,
turn is assigned both i and j at roughly the same time, but only the last
assignment survives. So only one process is allowed to enter the critical
section, depending on the final value of turn.

2. SYNCHRONIZATION HARDWARE

To generalize the solution(s) expressed above, each process, when entering
its critical section, must set some sort of lock to prevent other processes
from entering their critical sections simultaneously, and must release the
lock when exiting, to allow other processes to proceed. Obviously it must be
possible to attain the lock only when no other process has already set one.
Specific implementations of this general procedure can get quite complicated,
and may include hardware solutions as outlined in this section.

• One simple solution to the critical section problem is simply to prevent a
process from being interrupted while in its critical section, which is the
approach taken by nonpreemptive kernels. Unfortunately this does not work well
in multiprocessor environments, due to the difficulty of disabling and then
re-enabling interrupts on all processors. There is also a question of how this
approach affects timing if the clock interrupt is disabled.
• Another approach is for hardware to provide certain atomic operations. These
operations are guaranteed to operate as a single instruction, without interruption.

One such operation is "Test-and-Set", which atomically sets a boolean
lock variable to true and returns its previous value.

Another variation on test-and-set is an atomic Swap of two booleans.

3. SEMAPHORES

Semaphores are integer variables that are used to solve the critical section
problem by means of two atomic operations, wait and signal. For mutual
exclusion, the initial value is S = 1.

Wait: The wait operation decrements the value of its argument S. If S is less
than or equal to zero, no decrement is performed and the process waits until S
becomes positive.
wait(S)
{
    while (S <= 0)
        ;   // busy wait
    S--;
}
Signal: The signal operation increments the value of its argument S.
signal(S)
{
S++;
}
Types of Semaphores: There are two main types of semaphores, counting
semaphores and binary semaphores. Details about these are given as follows:
• Counting Semaphores: These are integer-valued semaphores with an
unrestricted value domain. They are used to coordinate resource access,
where the semaphore count is the number of available resources. When a
resource is added the count is incremented, and when a resource is removed
the count is decremented.
Semaphores can also be used to synchronize certain operations between processes.
For example, suppose it is important that process P1 execute statement S1 before
process P2 executes statement S2.

• First we create a semaphore named synch that is shared by the two processes,
and initialize it to zero.

Then in process P1 we insert the code:

S1;
signal( synch );

and in process P2 we insert the code:

wait( synch );
S2;

Because synch was initialized to 0, process P2 will block on the wait until after
P1 executes the call to signal.
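This ordering idiom can be demonstrated with Python's threading.Semaphore, where acquire corresponds to wait and release to signal:

```python
import threading

synch = threading.Semaphore(0)   # initialized to zero
order = []

def p1():
    order.append("S1")           # statement S1
    synch.release()              # signal(synch)

def p2():
    synch.acquire()              # wait(synch): blocks until P1 has signaled
    order.append("S2")           # statement S2

t2 = threading.Thread(target=p2)
t1 = threading.Thread(target=p1)
t2.start()                       # start P2 first, to show it really waits
t1.start()
t1.join()
t2.join()
print(order)  # ['S1', 'S2'] -- S1 always executes before S2
```

Even though P2 is started first, it blocks on the semaphore, so S2 can never run before S1.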

• Binary Semaphores: Binary semaphores are like counting semaphores, but
their value is restricted to 0 and 1. The wait operation succeeds only when
the semaphore is 1, and the signal operation succeeds only when the semaphore
is 0. They can be used to solve the critical section problem as described
above, and can serve as mutexes on systems that do not provide a separate
mutex mechanism. This use is shown in Fig 3.1:

do {
    wait(mutex);
    // critical section
    signal(mutex);
    // remainder section
} while (TRUE);

Fig 3.1 : Mutual-exclusion implementation with semaphores

Semaphore Implementation

The big problem with semaphores as described above is the busy loop in
the wait call, which consumes CPU cycles without doing any useful work. This
type of lock is known as a spinlock, because the lock just sits there and spins
while it waits. While this is generally a bad thing, it has the advantage of
not invoking context switches, so it is sometimes used in multiprocessing
systems when the wait time is expected to be short: one thread spins on one
processor while another completes its critical section on another processor.

• An alternative approach is to block a process when it is forced to wait for
an unavailable semaphore, and swap it out of the CPU. In this implementation
each semaphore maintains a list of processes that are blocked waiting for it,
so that one of them can be woken up and swapped back in when the semaphore
becomes available. (Whether it gets swapped back into the CPU immediately or
has to wait in the ready queue for a while is a scheduling problem.)
• The new definition of a semaphore and the corresponding wait and signal
operations are shown as follows:
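A sketch of this blocking semaphore in Python is given below. BlockingSemaphore is an illustrative name, and a threading.Event stands in for the kernel's block() and wakeup() operations; a negative value records how many processes are on the waiting list:

```python
import threading
from collections import deque

class BlockingSemaphore:
    """Semaphore with a waiting list instead of a busy loop."""

    def __init__(self, value=1):
        self.value = value
        self.waiting = deque()         # list of blocked waiters
        self._lock = threading.Lock()  # makes wait/signal atomic

    def wait(self):
        ev = None
        with self._lock:
            self.value -= 1
            if self.value < 0:         # nothing left: join the waiting list
                ev = threading.Event()
                self.waiting.append(ev)
        if ev is not None:
            ev.wait()                  # block()

    def signal(self):
        with self._lock:
            self.value += 1
            if self.value <= 0:        # someone is blocked: wake one up
                self.waiting.popleft().set()   # wakeup(P)

# Usage: mutual exclusion around a shared counter.
sem = BlockingSemaphore(1)
counter = 0

def worker():
    global counter
    for _ in range(5000):
        sem.wait()
        counter += 1
        sem.signal()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 10000
```

A blocked caller consumes no CPU while it waits, unlike the spinlock version above.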

4. CLASSICAL PROBLEMS OF SYNCHRONIZATION


Semaphore can be used in classical problems of synchronization besides Mutual
Exclusion. The classical problems of process synchronization are:

1. Bounded Buffer (Producer-Consumer) Problem

2. The Readers Writers Problem

3. Dining Philosophers Problem

1. BOUNDED BUFFER (PRODUCER-CONSUMER) PROBLEM


• It contains a finite array of n buffers; each buffer can store one value. Initially
all the buffers are empty. There are two types of processes. A producer process
can produce an item and place it in an empty buffer. The consumer process can
consume (remove) an item from the buffer.
• The conditions are
o At any time only one producer or one consumer allowed to use the buffer.
o When buffer array is empty, consumer is not allowed to consume an item.

o When buffer array is full, producer is not allowed to produce an item.


• Solution to this problem is done by creating two counting semaphores "full" and
"empty" to keep track of the current number of full and empty buffers
respectively. One binary semaphore "mutex" is used to allow only either one
producer or one consumer to access array buffer.
• The initial values of semaphores are mutex = 1, full = 0, empty = n
• The code for producer-consumer problem (Bounded buffer) is:


Fig 4.1 : Structures of Producer and Consumer Process
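The producer and consumer structures can be written out as follows. This Python sketch uses threads in place of processes, with acquire/release playing the roles of wait/signal on the three semaphores described above:

```python
import threading
from collections import deque

n = 5                                 # buffer capacity
buffer = deque()
mutex = threading.Semaphore(1)        # one producer or consumer in the buffer
empty = threading.Semaphore(n)        # counts empty slots (initially n)
full = threading.Semaphore(0)         # counts full slots  (initially 0)
consumed = []

def producer(items=20):
    for item in range(items):
        empty.acquire()               # wait(empty): block if buffer is full
        mutex.acquire()               # wait(mutex)
        buffer.append(item)           # place the produced item
        mutex.release()               # signal(mutex)
        full.release()                # signal(full)

def consumer(items=20):
    for _ in range(items):
        full.acquire()                # wait(full): block if buffer is empty
        mutex.acquire()               # wait(mutex)
        consumed.append(buffer.popleft())   # remove an item
        mutex.release()               # signal(mutex)
        empty.release()               # signal(empty)

tp = threading.Thread(target=producer)
tc = threading.Thread(target=consumer)
tp.start(); tc.start()
tp.join(); tc.join()
print(consumed)  # [0, 1, ..., 19], and the buffer is empty again
```

The full and empty semaphores enforce the two boundary conditions (no consuming from an empty buffer, no producing into a full one), while mutex enforces one-at-a-time access to the array.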



2. THE READERS WRITERS PROBLEM


• In this problem there are some processes called readers that only read the
shared data, and never change it, and there are other processes called writers
who may change the data.
• The conditions are
o At any time either writer or readers allowed, but not both
o At any time only one writer is allowed to write the data
o Multiple readers can read the data simultaneously.
• The solution creates one counting semaphore "wrt" that admits only one
writer at a time (or the group of readers), one binary semaphore "mutex"
that protects the variable "readcount", and one integer variable
"readcount" that keeps track of how many readers are currently reading
the data.
• The initial values are mutex = 1, wrt = 1, readcount = 0.


Fig 4.2 : Structures of Reader and Writer Process
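The reader and writer structures can be sketched in Python, with threads standing in for processes:

```python
import threading

wrt = threading.Semaphore(1)     # held by the writer, or by the reader group
mutex = threading.Semaphore(1)   # protects readcount
readcount = 0
data = 0
seen = []

def writer(value):
    global data
    wrt.acquire()                # wait(wrt)
    data = value                 # writing is performed
    wrt.release()                # signal(wrt)

def reader():
    global readcount
    mutex.acquire()              # wait(mutex)
    readcount += 1
    if readcount == 1:           # the first reader locks out writers
        wrt.acquire()
    mutex.release()              # signal(mutex)
    seen.append(data)            # reading is performed (readers may overlap)
    mutex.acquire()              # wait(mutex)
    readcount -= 1
    if readcount == 0:           # the last reader lets writers back in
        wrt.release()
    mutex.release()              # signal(mutex)

tw = threading.Thread(target=writer, args=(42,))
tw.start(); tw.join()
readers = [threading.Thread(target=reader) for _ in range(3)]
for t in readers:
    t.start()
for t in readers:
    t.join()
print(seen)  # [42, 42, 42]
```

Only the first and last reader touch wrt, so any number of readers can be in the read section at once while every writer waits.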

3. DINING PHILOSOPHERS PROBLEM

There are five philosophers sitting around a table. Five chopsticks are kept
on the table such that one chopstick is placed between every two philosophers.
A bowl of rice is placed in the centre of the table. When a philosopher wants
to eat, he must use two chopsticks: the one on his immediate left and the one
on his immediate right. When a philosopher wants to think, he puts both
chopsticks back in their original places.
• Solution 1: No hold and wait: No philosopher is allowed to hold one
chopstick and wait for second chopstick. A philosopher has to take the
chopsticks if both chopsticks are available.
• Solution 2: Allow up to n-1 philosophers: If at most n-1 (in our case 4)
philosophers are allowed to sit at the table at any time, then at least one
philosopher can always eat.
• Solution 3: Even-odd philosopher: According to the sitting order of the
philosophers, serial numbers are assigned starting from 1. Even-numbered

philosophers take their right chopstick first and then their left chopstick.
Odd-numbered philosophers take their left chopstick first and then their right
chopstick.

do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    // eat
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    // think
} while (TRUE);
Fig 4.3 : The structure of philosopher i.
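Solution 3 (the even-odd ordering) can be sketched in Python: each chopstick is a semaphore initialized to 1, each philosopher is a thread, and because the pick-up order alternates around the table, no circular wait can form. Positions here are 0-based, so philosopher i corresponds to serial number i + 1 in the text:

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]
meals = [0] * N

def philosopher(i, rounds=100):
    left, right = i, (i + 1) % N
    # Serial number i+1 even -> right chopstick first; odd -> left first.
    first, second = (left, right) if i % 2 == 0 else (right, left)
    for _ in range(rounds):
        chopstick[first].acquire()   # wait(first chopstick)
        chopstick[second].acquire()  # wait(second chopstick)
        meals[i] += 1                # eat
        chopstick[second].release()  # signal
        chopstick[first].release()   # signal
        # think

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)  # [100, 100, 100, 100, 100] -- everyone ate; no deadlock
```

With the naive all-left-first order of Fig 4.3, all five threads could each grab one chopstick and wait forever; the alternating order makes that hold-and-wait cycle impossible.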

5. MONITORS

• Semaphores can be very useful for solving concurrency problems, but only if
programmers use them properly. If even one process fails to abide by the
proper use of semaphores, either accidentally or deliberately, then the whole
system breaks down. ( And since concurrency problems are by definition rare
events, the problem code may easily go unnoticed and/or be heinous to debug. )
• For this reason a higher-level language construct has been developed, called
monitors.

Monitor Usage

• A monitor is essentially a class, in which all data is private, and with the special
restriction that only one method within any given monitor object may be active

at the same time. An additional restriction is that monitor methods may only
access the shared data within the monitor and any data passed to them as
parameters. I.e. they cannot access any data external to the monitor.

Figure 5.1: Syntax of a monitor.

Figure 5.2 shows a schematic of a monitor, with an entry queue of
processes waiting their turn to execute monitor operations (methods).

Figure 5.2 : Schematic view of a monitor


• In order to fully realize the potential of monitors, we need to introduce one
additional data type, known as a condition.
o A variable of type condition has only two legal operations, wait and
signal. I.e. if X was defined as type condition, then legal operations
would be X.wait( ) and X.signal( )
o The wait operation blocks a process until some other process calls signal,
and adds the blocked process onto a list associated with that condition.
o The signal operation does nothing if there are no processes waiting on that
condition. Otherwise it wakes up exactly one process from the condition's
list of waiting processes. (Contrast this with counting semaphores, where
a signal call always affects the semaphore.)
• Figure 5.3 below illustrates a monitor that includes condition variables within
its data space. Note that the condition variables, along with the list of processes
currently waiting for the conditions, are in the data space of the monitor - The
processes on these lists are not "in" the monitor, in the sense that they are not
executing any code in the monitor.

Figure 5.3 : Monitor with condition variables


• But now there is a potential problem - If process P within the monitor issues a
signal that would wake up process Q also within the monitor, then there would
be two processes running simultaneously within the monitor, violating the
exclusion requirement. Accordingly there are two possible solutions to this
dilemma:

Signal and wait - When process P issues the signal to wake up process Q, P then
waits, either for Q to leave the monitor or on some other condition.

Signal and continue - When P issues the signal, Q waits, either for P to exit the
monitor or for some other condition.

There are arguments for and against either choice. Concurrent Pascal offers a third
alternative - The signal call causes the signaling process to immediately exit the
monitor, so that the waiting process can then wake up and proceed.
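Python's threading.Condition follows the signal-and-continue discipline, which is why waiters must recheck their condition in a loop after waking. A minimal monitor-style class can be sketched as follows; BoundedCounter is a hypothetical example, not a construct from the text:

```python
import threading

class BoundedCounter:
    """Monitor-style class: one lock guards every method, so at most one
    thread is active inside at a time, and a condition variable lets a
    caller wait inside the monitor until it is signaled."""

    def __init__(self, limit):
        self._lock = threading.Lock()
        self._nonfull = threading.Condition(self._lock)  # condition variable
        self.count = 0
        self.limit = limit

    def increment(self):
        with self._lock:                 # enter the monitor
            while self.count >= self.limit:
                self._nonfull.wait()     # x.wait(): release lock and block
            self.count += 1

    def decrement(self):
        with self._lock:
            self.count -= 1
            self._nonfull.notify()       # x.signal(): wake one waiter

c = BoundedCounter(limit=1)
c.increment()                            # count == 1, at the limit
t = threading.Thread(target=c.increment)
t.start()                                # this caller waits inside the monitor
c.decrement()                            # signal: the waiter may proceed
t.join()
print(c.count)  # 1
```

Because notify lets the signaler keep running (signal and continue), the awakened thread re-tests count >= limit before proceeding, exactly the while-loop idiom above.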

• Java and C# (C sharp) offer monitors built in to the language. Erlang offers
similar but different constructs.

III. INTER PROCESS COMMUNICATION MECHANISMS

Inter Process Communication (IPC) is a mechanism that involves communication of
one process with another process. This communication happens:

 Between two processes on the same system: The operating system acts as the
interface for IPC on the same system. When two processes on the same


system want to communicate, they usually use shared memory.

 Between a process on one system and a process on another system: The
network connecting the two computers acts as the interface for this type
of IPC. In this category, each process interacts with its own operating
system, and the operating systems communicate with each other, usually by
passing messages.

The IPC can also be:

 Between related processes: A parent process creates a child process. They
need to communicate to carry out the task.
 Between unrelated processes: One process may have no parent-child
relationship with the other. Even so, the intermediate output of one process
may be required by the other process.

The Inter Process Communication can be carried out by:

a. Pipes
b. Named pipes or FIFO
c. Message Queues
d. Shared Memory


a. Pipes: Pipes are used for communication between two related processes. The
mechanism is half duplex, meaning one-way communication: the first process
communicates with the second process. To achieve full duplex, i.e., for the
second process to also communicate with the first, a second pipe is required.
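A minimal pipe sketch between related processes, using os.pipe() and fork() on a POSIX system (the message text is illustrative):

```python
import os

r, w = os.pipe()                    # half duplex: data flows w -> r only
pid = os.fork()
if pid == 0:                        # child process: the reader
    os.close(w)                     # close the unused write end
    msg = os.read(r, 1024)
    os.close(r)
    os._exit(0 if msg == b"hello from parent" else 1)
else:                               # parent process: the writer
    os.close(r)                     # close the unused read end
    os.write(w, b"hello from parent")
    os.close(w)
    _, status = os.waitpid(pid, 0)
    ok = (os.WEXITSTATUS(status) == 0)
    print(ok)  # True -- the child received the message intact
```

The child inherits the pipe's file descriptors across fork(), which is exactly why plain pipes only work between related processes.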

b. FIFO: Pipes are meant for communication between related processes. For
communication between unrelated processes, a named pipe, also known as a FIFO
(First-In-First-Out), is used. The mechanism used in a FIFO is full duplex,
meaning the same single named pipe can be used for two-way communication
(server to client and client to server at the same time). The named pipe
supports bi-directional communication.
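A minimal FIFO sketch on a POSIX system. For brevity the "two processes" here are two threads of one program, but an unrelated process could open the same filesystem path by name:

```python
import os
import tempfile
import threading

path = os.path.join(tempfile.mkdtemp(), "demo_fifo")
os.mkfifo(path)                    # create the named pipe in the filesystem

received = []

def server():
    # An unrelated process could perform exactly this open() by name.
    with open(path, "rb") as f:    # blocks until a writer opens the FIFO
        received.append(f.read())

t = threading.Thread(target=server)
t.start()
with open(path, "wb") as f:        # blocks until a reader opens the FIFO
    f.write(b"request")
t.join()
os.remove(path)
print(received)  # [b'request']
```

Unlike an anonymous pipe, the FIFO exists as a named filesystem entry, which is what lets processes with no common ancestor find it.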

c. Message Queues: Message queues support communication between two or more
processes with full duplex capacity. Processes communicate by posting messages
to the queue and retrieving them from it. Once a message has been retrieved,
it is no longer available in the queue.
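Python's standard library does not expose System V or POSIX message queues directly; multiprocessing.Queue provides the same post-and-retrieve model and is used here as a stand-in (the "fork" start method makes this POSIX-only):

```python
from multiprocessing import get_context

ctx = get_context("fork")          # POSIX-only: fork the child directly
requests = ctx.Queue()             # parent -> child messages
replies = ctx.Queue()              # child -> parent messages

def worker(requests, replies):
    msg = requests.get()           # retrieving removes the message
    replies.put(msg.upper())       # post a reply

p = ctx.Process(target=worker, args=(requests, replies))
p.start()
requests.put("ping")               # post a message onto the queue
reply = replies.get()              # blocks until the worker has replied
p.join()
print(reply)  # 'PING'
```

Using one queue per direction keeps the exchange unambiguous: a single shared queue would let either side retrieve its own message.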


d. Shared Memory: Communication between two or more processes is achieved
through a piece of memory shared among all of them. Because every process can
access it directly, access to the shared memory must be synchronized to
protect the processes from one another. This type of communication works only
for IPC between processes on the same system.
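A sketch using Python's multiprocessing.shared_memory (Python 3.8+). Here the parent's join() on the child serves as the synchronization point; a real program with concurrent readers and writers would use a semaphore, as the text notes:

```python
from multiprocessing import get_context, shared_memory

def child(name):
    shm = shared_memory.SharedMemory(name=name)  # attach by name
    shm.buf[:5] = b"hello"                       # write into shared memory
    shm.close()

ctx = get_context("fork")                        # POSIX-only demo
shm = shared_memory.SharedMemory(create=True, size=16)
p = ctx.Process(target=child, args=(shm.name,))
p.start()
p.join()            # joining the child is our (crude) synchronization
data = bytes(shm.buf[:5])
shm.close()
shm.unlink()        # remove the segment
print(data)  # b'hello'
```

Once both processes have attached the segment, data moves between them without any copy through the kernel, which is what makes shared memory the fastest IPC mechanism on a single system.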

********* ALL THE BEST ************
