OS Lab Manual
LABORATORY WORKBOOK
Name ____________________
Roll No ___________________
Marks Obtained ____________
Signature___________________
CONTENTS

S.No. | Object of Experiments | Date | Signature | Remarks
Operating System
LAB-1
Name ____________________
Roll No ___________________
Date ______________________
Signature___________________
Objective
Introduction to algorithms; reading and writing algorithms.
Theory
An algorithm is a finite, well-defined, step-by-step list of instructions for solving a particular problem. Algorithms can contain
mathematical equations, logic, linguistic terms, graphs and flowcharts.
The format for the formal presentation of an algorithm consists of two parts.
The first part is a paragraph stating the purpose of the algorithm, identifying the variables appearing in it, and listing the input data.
The second part consists of the list of steps to be executed.
An algorithm contains the following compulsory elements.
Identification Number
All algorithms have an identification number or a label on them
e.g. Algorithm 4.3
where 4 represents chapter and 3 represents that it is the third algorithm of chapter 4.
Steps, Control, Exit
Starting with Step 1, steps are executed one after the other in sequence; hence control is transferred sequentially.
However, where so defined, control can go from any one step directly to any other step, forward or backward,
e.g. instead of going from Step 1 to Step 2, control can go directly to Step 5.
A step contains one or more statements, separated by commas and ending with a full stop,
e.g. Set K:=1, LOC:=1 and MAX := DATA[1].
Steps and statements are executed left to right.
The algorithm completes when an Exit statement is encountered.
Variable Names
Variable names are always in capital letters,
e.g. MAX, DATA, MIN, etc.
Counter variables may have single-letter names, e.g. K, N, etc.
Assignment Statement
Assignment statements use the dot-equal (:=) notation,
e.g. MAX := DATA [1]
which means assign the value of DATA [1] to MAX.
Input / Output
Input and variable initialization are done using the Read statement,
e.g. Read: K.
Conditions and checks
Almost all algorithms contain conditions and/or checks, which are implemented with the keyword If followed by a condition, e.g.
If K > N then:
Important to note:
1. Algorithms cannot have deadlocks.
2. Algorithms cannot have infinite loops or logic that never terminates.
3. Algorithms do not have unused variables.
Example
The example below shows an algorithm which finds the largest value in an array. Please note that this algorithm is valid
for all positive values of the largest element and a finite array length. Code written in this style is also known as
PSEUDOCODE.
Algorithm
Algorithm 1.1:
(Largest Element in Array) A nonempty array DATA with N numerical values is given.
This algorithm finds the location LOC and MAX of the largest element of DATA. The
Variable K is used as a counter.
Step 1. [Initialize.] Set K := 1, LOC := 1 and MAX := DATA[1].
Step 2. [Increment counter.] Set K := K + 1.
Step 3. [Test counter.] If K > N, then: Write: LOC, MAX, and Exit.
Step 4. [Compare and update.] If MAX < DATA[K], then: Set LOC := K and MAX := DATA[K].
Step 5. [Repeat loop.] Go to Step 2.
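The steps above can be sketched in C. This is a minimal illustration, not part of the manual's listings: the function name find_max_loc is an assumption, while DATA, N, LOC and MAX mirror the pseudocode (LOC keeps the algorithm's 1-based numbering, adjusted for C's 0-based arrays).

```c
/* Sketch of Algorithm 1.1: returns the 1-based location LOC of the
 * largest element of DATA[0..N-1] and stores its value in *max. */
int find_max_loc(const int DATA[], int N, int *max) {
    int K, LOC = 1;
    *max = DATA[0];                  /* Step 1: K:=1, LOC:=1, MAX:=DATA[1] */
    for (K = 2; K <= N; K++) {       /* Steps 2-3: advance and test counter */
        if (*max < DATA[K - 1]) {    /* Step 4: compare and update */
            LOC = K;
            *max = DATA[K - 1];
        }
    }                                /* Step 5: repeat loop, then Exit */
    return LOC;
}
```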
Algorithms can also contain numerical equations. The algorithm below implements the quadratic formula:

x = ( -b ± √(b^2 - 4ac) ) / 2a,   with discriminant D = b^2 - 4ac.

If D is negative, then there is no real solution.
If D = 0, then there is a unique solution, x = -b / 2a.
If D is positive, then there are two real solutions.
Algorithm
Algorithm 1.2:
(Quadratic Equation) This algorithm inputs the coefficients A, B, C of a quadratic
equation and outputs the real solutions, if any.
Step 1. Read: A, B, C.
Step 2. Set D := B^2 - 4AC.
Step 3. If D > 0, then:
    (a) Set X1 := ( -B + √D ) / 2A and X2 := ( -B - √D ) / 2A.
    (b) Write: X1, X2.
Else if D = 0, then:
    (a) Set X := -B / 2A.
    (b) Write: UNIQUE SOLUTION, X.
Else:
    Write: NO REAL SOLUTION.
[End of If structure.]
Step 4. Exit.
Assignment
Write an algorithm which finds the lowest value in an array. Also implement that algorithm in C or Java.
Write an algorithm which generates the prime numbers between 1 and 50. Also implement that algorithm in C or
Java.
Operating System
LAB-2
Name ____________________
Roll No ___________________
Date ______________________
Signature___________________
Objective
Introduction to Scheduling and Algorithm to calculate Average Waiting Time (AWT) and its implementation.
Scheduling is the method by which threads, processes or data flows are given access to system resources (e.g.
processor time, communications bandwidth). This is usually done to load balance a system effectively or achieve a
target quality of service.
The CPU scheduler can be invoked at five different points:
1. When a process switches from the new state to the ready state.
2. When a process switches from the running state to the waiting state.
3. When a process switches from the running state to the ready state.
4. When a process switches from the waiting state to the ready state.
5. When a process terminates.
Theory
Average Waiting Time

T_AWT = ( T_Wt1 + T_Wt2 + ... + T_WtN ) / N

Where T_AWT is the average waiting time, T_Wt is the waiting time of each process (the sum of the burst times of all processes served before it), and N is the number of processes.

Process | Burst Time | Waiting Time Calculation | Waiting Time
Process-1 (P1) | 8 | 0 | 0
Process-2 (P2) | 4 | 0+8 | 8
Process-3 (P3) | 9 | 0+8+4 | 12
Process-4 (P4) | 5 | 8+4+9 | 21
Process-5 (P5) | 1 | 8+4+9+5 | 26
Pseudocode (algorithm) and C language code for calculating the Average Waiting Time for the given set of processes.
Program
Algorithm
Algorithm 2.1: (Average Waiting Time) A nonempty array PROCESS[B] with N processes and a burst
time B for each process is given. WT is the waiting time and AWT is the average
waiting time. The variables K1 and K2 are used as counters. This algorithm
calculates the average waiting time AWT.
Step 1. Read: PROCESS[B].
Step 2. [Initialize.] Set K1 := 1, WT := 0, AWT := 0.
Step 3. Repeat Steps 4 and 5 while K1 ≤ N:
Step 4.     Set K2 := 1.
Step 5.     Repeat while K2 < K1: Set WT := WT + PROCESS[K2] and K2 := K2 + 1. [End of inner loop.] Set K1 := K1 + 1. [End of outer loop.]
Step 6. Set AWT := WT / N.
Step 7. Write: AWT. Exit.
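The promised C code can be sketched as follows, assuming all processes arrive at time 0 and are served in order; burst[] and n stand in for PROCESS[B] and N. For the table's burst times 8, 4, 9, 5, 1 it yields 13.4.

```c
/* Sketch: average waiting time for processes served in arrival order,
 * all arriving at time 0. Each process waits for all earlier bursts. */
double average_waiting_time(const int burst[], int n) {
    int total_wait = 0, wait = 0, k;
    for (k = 0; k < n; k++) {
        total_wait += wait;   /* waiting time of process k */
        wait += burst[k];     /* cumulative burst time so far */
    }
    return (double)total_wait / n;
}
```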
Assignment
Calculate and verify AWT for the following sets of burst times.
o 8,4,5,9,1
o 9,4,7,2,5
o 8,3,6,1,4
Operating System
LAB-3
Name ____________________
Roll No ___________________
Date ______________________
Signature___________________
Objective
Algorithm to calculate Average Turnaround Time (ATT), Time between submission and completion
and its implementation.
Theory
Average Turnaround Time
The turnaround time of a process is the time between its submission and its completion. With all processes arriving at time 0, it equals the waiting time plus the burst time:

T_ATT = [ (T_Wt + T_Bt)1 + (T_Wt + T_Bt)2 + (T_Wt + T_Bt)3 + ... ] / N

Where T_Bt is the burst time of a process, T_Wt is its waiting time, and N is the number of processes.

Process | Burst Time | Turnaround Time Calculation | Turnaround Time
Process-1 (P1) | 8 | 0+8 | 8
Process-2 (P2) | 4 | 0+8+4 | 12
Process-3 (P3) | 9 | 8+4+9 | 21
Process-4 (P4) | 5 | 8+4+9+5 | 26
Process-5 (P5) | 1 | 8+4+9+5+1 | 27

C language code for calculating the Average Turnaround Time for the given set of processes.
Program
Algorithm
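One possible sketch of that C code, under the same assumptions as before (all processes arrive at time 0, FCFS order); the function name is illustrative. For the table's burst times 8, 4, 9, 5, 1 it yields 18.8.

```c
/* Sketch: average turnaround time with all arrivals at time 0. The
 * turnaround of each process equals its completion time. */
double average_turnaround_time(const int burst[], int n) {
    int total = 0, clock = 0, k;
    for (k = 0; k < n; k++) {
        clock += burst[k];    /* completion time of process k */
        total += clock;       /* turnaround = completion - arrival(0) */
    }
    return (double)total / n;
}
```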
Assignment
Calculate and verify ATT for the following sets of burst times.
o 8,4,5,9,1
o 9,4,7,2,5
o 8,3,6,1,4
Operating System
LAB-4
Name ____________________
Roll No ___________________
Date ______________________
Signature___________________
Objective
Populating the Processor Time Line.
Theory
Processor Time Line
The processor time line is a time graph showing which process holds the processor with respect to time.
If the processes in the given table are executed in order, they generate the following time line.
Process | Burst Time
P1 | 3
P2 | 4
P3 | 2
P4 | 5
P5 | 1

TIME:    0  1  2  3  4  5  6  7  8  9  10 11 12 13 14
PROCESS: P1 P1 P1 P2 P2 P2 P2 P3 P3 P4 P4 P4 P4 P4 P5
Program
Explanation
Conditions that apply to the program for the processor time line: all processes arrive at time 0 and are served on a First Come First Serve basis.
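One way to populate the time line in C, as a sketch assuming FCFS order with all arrivals at time 0; fill_timeline and slot are illustrative names.

```c
/* Sketch: fill an FCFS processor time line. slot[t] receives the 0-based
 * index of the process running at time t; returns the total time used.
 * slot[] must have room for the sum of all burst times. */
int fill_timeline(const int burst[], int n, int slot[]) {
    int t = 0, p, u;
    for (p = 0; p < n; p++)
        for (u = 0; u < burst[p]; u++)
            slot[t++] = p;    /* process p occupies the next burst[p] slots */
    return t;
}
```

Printing slot[0..t-1] as "P1 P1 P1 P2 ..." reproduces the time line shown above.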
Assignment
Populate and verify the time line for the following sets of burst times; also calculate their AWT and ATT.
o 8,4,5,9,1
o 9,4,7,2,5
o 8,3,6,1,4
Operating System
LAB-5
Name ____________________
Roll No ___________________
Date ______________________
Signature___________________
Objective
Implementing non-pre-emptive algorithms: First Come First Serve (FCFS), and calculating AWT and ATT.
Theory
In FCFS scheduling, processes are executed strictly in their order of arrival.

Process | Arrival Time | Burst Time
[Table of processes P1-P5 with their arrival and burst times.]

Average Waiting Time:

T_AWT = ( T_Wt1 + T_Wt2 + ... + T_WtN ) / N

Where T_AWT is the average waiting time, T_Wt is the waiting time of each process, and N is the number of processes.

Average Turnaround Time:

T_ATT = [ (T_Wt + T_Bt)1 + (T_Wt + T_Bt)2 + (T_Wt + T_Bt)3 + ... ] / N

Where T_Bt is the burst time of a process, T_Wt is its waiting time, and N is the number of processes.
Pseudocode for the First Come First Serve algorithm, provided all the processes arrive sequentially.
Algorithm
Algorithm 3.1: (First Come First Serve) A nonempty array PROCESS[B] with N processes and a burst
time B for each process is given. WT is the waiting time and TT is the turnaround
time. The variable K is used as a counter. This algorithm finds the average
waiting time AWT and the average turnaround time ATT.
Step 1. Read: PROCESS[B].
Step 2. [Initialize.] Set K := 1, WT := 0, TT := 0, AWT := 0, ATT := 0.
Step 3. Repeat Step 4 while K ≤ N:
Step 4.     Set AWT := AWT + WT, TT := WT + PROCESS[K], ATT := ATT + TT,
            WT := WT + PROCESS[K] and K := K + 1. [End of loop.]
Step 5. Set AWT := AWT / N and ATT := ATT / N. Write: AWT, ATT.
Exit.
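The algorithm above can be sketched in C with a single pass that accumulates both totals; the function name fcfs_times is an assumption, and all processes are assumed to arrive at time 0.

```c
/* Sketch: FCFS average waiting time and average turnaround time
 * in one pass, all processes arriving at time 0. */
void fcfs_times(const int burst[], int n, double *awt, double *att) {
    int wait_sum = 0, turn_sum = 0, elapsed = 0, k;
    for (k = 0; k < n; k++) {
        wait_sum += elapsed;       /* waiting time of process k */
        elapsed += burst[k];       /* completion time of process k */
        turn_sum += elapsed;       /* turnaround time of process k */
    }
    *awt = (double)wait_sum / n;
    *att = (double)turn_sum / n;
}
```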
Waiting time of each process (service start time - arrival time):

Process | Waiting Time
P0 | 0 - 0 = 0
P1 | 5 - 1 = 4
P2 | 8 - 2 = 6
P3 | 16 - 3 = 13
Operating System
LAB-6
Name ____________________
Roll No ___________________
Date ______________________
Marks Obtained ____________
Signature___________________
Objective
Implementing non-pre-emptive algorithms: Shortest Job First (SJF).
Theory
A set of processes arrives in some given order with varying burst times. According to the SJF algorithm, the shortest job
should be executed first; hence the process queue is sorted so that the shortest job is in the first location.
The processor must know in advance how much time each process will take.
Process | Arrival Time | Execute Time | Service Time
[Table of processes P0-P3 with their arrival, execute and service times.]

Waiting time of each process (service time - arrival time):

Process | Waiting Time
P0 | 3 - 0 = 3
P1 | 0 - 0 = 0
P2 | 14 - 2 = 12
P3 | 8 - 3 = 5
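Non-preemptive SJF reduces to FCFS once the queue is sorted by burst time. A C sketch under the assumption that all processes arrive at time 0 (so sorting is done once, up front); the function name sjf_awt is illustrative. Note that burst[] is reordered in place.

```c
/* Sketch: non-preemptive SJF, all arrivals at time 0. Sorts the bursts
 * ascending, then computes the average waiting time as in FCFS. */
double sjf_awt(int burst[], int n) {
    int i, j, tmp, wait_sum = 0, elapsed = 0;
    for (i = 0; i < n - 1; i++)          /* selection sort by burst time */
        for (j = i + 1; j < n; j++)
            if (burst[j] < burst[i]) {
                tmp = burst[i]; burst[i] = burst[j]; burst[j] = tmp;
            }
    for (i = 0; i < n; i++) {
        wait_sum += elapsed;             /* shortest jobs go first */
        elapsed += burst[i];
    }
    return (double)wait_sum / n;
}
```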
Assignment
Calculate the Average Waiting Time (AWT) and Average Turn Around Time (ATT) for the above Algorithm of
Shortest Job First (SJF).
Operating System
LAB-7
Name ____________________
Roll No ___________________
Date ______________________
Marks Obtained ____________
Signature___________________
Objective
Implementing Priority based Scheduling.
Theory
A set of 5 processes arrives randomly, each with a given priority and a varying burst time. According to the priority
scheduling algorithm, the process with the highest priority should be executed first; hence the processes in the queue are sorted
according to their priority.
Algorithm
Algorithm 4.1: (Priority Based Scheduling) A nonempty two-dimensional array PROCESS[P][B] with N
processes is given. Each process has a priority P and a burst time B. This
algorithm sorts the processes according to their priority. The variable K is the outer counter and
Ki is the inner counter.
Step 1. Read: PROCESS[P][B].
Step 2. [Initialize.] Set K := 1, Ki := 1.
Step 3. Repeat while K < N: Set Ki := K + 1, and repeat while Ki ≤ N: if
        PROCESS[Ki][P] > PROCESS[K][P], then interchange PROCESS[Ki] and PROCESS[K].
        Set Ki := Ki + 1. [End of inner loop.] Set K := K + 1. [End of outer loop.]
Step 4. Exit.
Each process is assigned a priority. Process with highest priority is to be executed first and so on.
Processes with same priority are executed on first come first serve basis.
Priority can be decided based on memory requirements, time requirements or any other resource requirement.
Waiting time of each process (service time - arrival time):

Process | Waiting Time
P0 | 0 - 0 = 0
P1 | 3 - 1 = 2
P2 | 8 - 2 = 6
P3 | 16 - 3 = 13
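The sort at the heart of Algorithm 4.1 can be sketched in C. A stable insertion sort keeps equal-priority processes in their arrival order, which matches the first-come-first-serve tie-breaking rule stated above; all identifiers are illustrative, and a larger value of prio[] means higher priority.

```c
/* Sketch: sort parallel arrays so the highest priority value comes
 * first; stable, so equal priorities keep FCFS (arrival) order. */
void sort_by_priority(int prio[], int burst[], int n) {
    int i, j, p, b;
    for (i = 1; i < n; i++) {            /* stable insertion sort */
        p = prio[i];
        b = burst[i];
        for (j = i; j > 0 && prio[j - 1] < p; j--) {
            prio[j] = prio[j - 1];       /* shift lower priorities right */
            burst[j] = burst[j - 1];
        }
        prio[j] = p;
        burst[j] = b;
    }
}
```

After sorting, AWT and ATT follow exactly as in the FCFS lab.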
Assignment
Implement the Priority Based Scheduling and Calculate the Average Waiting Time (AWT) and Average Turn
Around Time (ATT).
Operating System
LAB-8
Name ____________________
Roll No ___________________
Date ______________________
Objective
Implementing Round Robin Algorithm.
Theory
A set of 5 processes arrives in some given order with varying burst times. According to the Round Robin algorithm, each
process in turn executes for at most a fixed time quantum; a process that does not finish within its quantum is preempted and the next process in the queue executes.
Algorithm
Algorithm 5.1: (Round Robin) A nonempty array DATA[B] holding the remaining burst time B of each
of N processes is given. Each process executes for one time unit per turn, and Ke
is used as a counter. DONE counts the finished processes.
Step 1. Read: DATA[B].
Step 2. [Initialize.] Set DONE := 0.
Step 3. Repeat Steps 4 to 8 while DONE < N:
Step 4.     [Start a new pass over the process queue.]
Step 5.
Step 6.     Set Ke := 1, and repeat Steps 7 and 8 while Ke ≤ N:
Step 7.         If DATA[Ke] > 0, then: execute process Ke for one time unit, and
Step 8.         Set DATA[Ke] := DATA[Ke] - 1. If DATA[Ke] = 0, then set DONE := DONE + 1.
                [End of If.] Set Ke := Ke + 1. [End of loops.]
Step 9. Exit.
Once a process has executed for the given time period, it is preempted and another process executes for
the given time period.
Context switching is used to save the states of preempted processes.
Waiting time of each process:

Process | Waiting Time
P0 | (0 - 0) + (12 - 3) = 9
P1 | (3 - 1) = 2
P2 | (6 - 2) + (15 - 9) = 10
P3 | (9 - 3) + (18 - 12) = 12
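The preemptive behaviour described above can be sketched in C. This version assumes all processes arrive at time 0 and that there are at most 32 of them; round_robin, q (the quantum) and wait[] are illustrative names. Each process's waiting time is its completion time minus its burst time.

```c
/* Sketch: Round Robin with quantum q, all arrivals at time 0.
 * Fills wait[] with each process's waiting time. */
void round_robin(const int burst[], int n, int q, int wait[]) {
    int remaining[32];                 /* assumes n <= 32 for this sketch */
    int t = 0, done = 0, i, run;
    for (i = 0; i < n; i++)
        remaining[i] = burst[i];
    while (done < n) {
        for (i = 0; i < n; i++) {      /* one pass over the ready queue */
            if (remaining[i] == 0)
                continue;
            run = remaining[i] < q ? remaining[i] : q;   /* one quantum */
            t += run;
            remaining[i] -= run;
            if (remaining[i] == 0) {   /* finished: waiting = completion - burst */
                wait[i] = t - burst[i];
                done++;
            }
        }
    }
}
```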
Operating System
LAB-9
Name ____________________
Roll No ___________________
Date ______________________
Marks Obtained ____________
Signature___________________
Objective
Study the producer-consumer problem with semaphores. The naive sleep/wakeup solution guards the buffer with a shared counter itemCount; its consumer looks like this:
procedure consumer() {
    while (true) {
        if (itemCount == 0) {
            sleep();
        }
        item = removeItemFromBuffer();
        itemCount = itemCount - 1;
        if (itemCount == BUFFER_SIZE - 1) {
            wakeup(producer);
        }
        consumeItem(item);
    }
}
The problem with this solution is that it contains a race condition that can lead to a deadlock. Consider the following scenario:
1. The consumer has just read the variable itemCount, noticed it's zero and is just about to move inside the if-block.
2. Just before calling sleep, the consumer is interrupted and the producer is resumed.
3. The producer creates an item, puts it into the buffer, and increases itemCount.
4. Because the buffer was empty prior to the last addition, the producer tries to wake up the consumer.
5. Unfortunately the consumer wasn't yet sleeping, and the wakeup call is lost. When the consumer resumes, it goes to
sleep and will never be awakened again. This is because the consumer is only awakened by the producer
when itemCount is equal to 1.
6. The producer will loop until the buffer is full, after which it will also go to sleep.
Since both processes will sleep forever, we have run into a deadlock. This solution therefore is unsatisfactory.
An alternative analysis is that if the programming language does not define the semantics of concurrent accesses to shared
variables (in this case itemCount) without use of synchronization, then the solution is unsatisfactory for that reason, without
needing to explicitly demonstrate a race condition.
Using semaphores
Semaphores solve the problem of lost wakeup calls. In the solution below we use two semaphores, fillCount and emptyCount,
to solve the problem. fillCount is the number of items to be read in the buffer, and emptyCount is the number of available
spaces in the buffer where items could be written. fillCount is incremented and emptyCount decremented when a new item
has been put into the buffer. If the producer tries to decrement emptyCount while its value is zero, the producer is put to sleep.
The next time an item is consumed, emptyCount is incremented and the producer wakes up. The consumer works analogously.
procedure producer() {
    while (true) {
        item = produceItem();
        down(emptyCount);
        putItemIntoBuffer(item);
        up(fillCount);
    }
}

procedure consumer() {
    while (true) {
        down(fillCount);
        item = removeItemFromBuffer();
        up(emptyCount);
        consumeItem(item);
    }
}
The solution above works fine when there is only one producer and consumer. With multiple producers sharing
the same memory space for the item buffer, or multiple consumers sharing the same memory space, this
solution contains a serious race condition that could result in two or more processes reading or writing into the
same slot at the same time. To understand how this is possible, imagine how the
procedure putItemIntoBuffer() can be implemented. It could contain two actions, one determining the next
available slot and the other writing into it. If the procedure can be executed concurrently by multiple producers,
then the following scenario is possible:
1. Two producers decrement emptyCount
2. One of the producers determines the next empty slot in the buffer
3. Second producer determines the next empty slot and gets the same result as the first producer
4. Both producers write into the same slot
To overcome this problem, we need a way to make sure that only one producer is
executing putItemIntoBuffer() at a time. In other words we need a way to execute a critical
section with mutual exclusion. To accomplish this we use a binary semaphore called mutex. Since the value
of a binary semaphore can be only either one or zero, only one process can be executing between
down(mutex) and up(mutex). The solution for multiple producers and consumers is shown below.
Mutual exclusion refers to the problem of ensuring that no two processes or threads are in their critical
section at the same time. Here, a critical section is a period of time in which a process accesses a shared
resource, such as shared memory.
semaphore mutex = 1;
semaphore fillCount = 0;
semaphore emptyCount = BUFFER_SIZE;

procedure producer() {
    while (true) {                     // loop forever
        item = produceItem();          // create a new item to put in the buffer
        down(emptyCount);
        down(mutex);
        putItemIntoBuffer(item);
        up(mutex);
        up(fillCount);
    }
}

procedure consumer() {
    while (true) {                     // loop forever
        down(fillCount);
        down(mutex);
        item = removeItemFromBuffer();
        up(mutex);
        up(emptyCount);
        consumeItem(item);
    }
}
Notice that the order in which different semaphores are incremented or decremented is essential: changing the
order might result in a deadlock.
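The pseudocode above can be made concrete with POSIX semaphores and pthreads. This is a sketch rather than the manual's official program: it assumes a Linux-style environment (unnamed semaphores via sem_init), and the buffer size, item count and all identifiers are illustrative. The consumer returns the sum of the consumed items so the run is checkable.

```c
#include <pthread.h>
#include <semaphore.h>   /* link with -lpthread */

#define BUFFER_SIZE 4
#define NUM_ITEMS   100

static int buffer[BUFFER_SIZE];
static int in_pos = 0, out_pos = 0;   /* next slot to write / read */
static sem_t fill_count, empty_count, mutex;

static void *producer(void *arg) {
    (void)arg;
    for (int item = 1; item <= NUM_ITEMS; item++) {
        sem_wait(&empty_count);       /* down(emptyCount) */
        sem_wait(&mutex);             /* down(mutex): enter critical section */
        buffer[in_pos] = item;
        in_pos = (in_pos + 1) % BUFFER_SIZE;
        sem_post(&mutex);             /* up(mutex) */
        sem_post(&fill_count);        /* up(fillCount) */
    }
    return NULL;
}

static void *consumer(void *arg) {
    long sum = 0;
    for (int i = 0; i < NUM_ITEMS; i++) {
        sem_wait(&fill_count);        /* down(fillCount) */
        sem_wait(&mutex);
        sum += buffer[out_pos];
        out_pos = (out_pos + 1) % BUFFER_SIZE;
        sem_post(&mutex);
        sem_post(&empty_count);       /* up(emptyCount) */
    }
    *(long *)arg = sum;
    return NULL;
}

/* Runs one producer and one consumer; returns the sum of consumed items. */
long run_demo(void) {
    pthread_t p, c;
    long sum = 0;
    sem_init(&fill_count, 0, 0);
    sem_init(&empty_count, 0, BUFFER_SIZE);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, &sum);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return sum;                       /* 1 + 2 + ... + NUM_ITEMS */
}
```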
Assignment:
Operating System
LAB-10
Name ____________________
Roll No ___________________
Date ______________________
Marks Obtained ____________
Signature___________________
Objective
Study the producer-consumer problem with monitors.
Using monitors
The following pseudo code shows a solution to the producer-consumer problem using monitors. Since mutual
exclusion is implicit with monitors, no extra effort is necessary to protect the critical section. In other words, the
solution shown below works with any number of producers and consumers without any modifications. It is also
noteworthy that using monitors makes race conditions much less likely than when using semaphores.
monitor ProducerConsumer {
int itemCount;
condition full;
condition empty;
procedure add(item) {
while (itemCount == BUFFER_SIZE) {
wait(full);
}
putItemIntoBuffer(item);
itemCount = itemCount + 1;
if (itemCount == 1) {
notify(empty);
}
}
procedure remove() {
while (itemCount == 0) {
wait(empty);
}
item = removeItemFromBuffer();
itemCount = itemCount - 1;
if (itemCount == BUFFER_SIZE - 1) {
notify(full);
}
return item;
}
}
procedure producer() {
while (true) {
item = produceItem()
ProducerConsumer.add(item)
}
}
procedure consumer() {
while (true) {
item = ProducerConsumer.remove()
consumeItem(item)
}
}
Note the use of while statements in the above code, both when testing if the buffer is full or empty. With
multiple consumers, there is a race condition where one consumer gets notified that an item has been put into
the buffer but another consumer is already waiting on the monitor so removes it from the buffer instead. If
the while was instead an if, too many items might be put into the buffer or a remove might be attempted on an
empty buffer.
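C has no monitor construct, but the same structure can be sketched with a mutex (playing the monitor's implicit lock) and two condition variables standing in for the full and empty conditions. All identifiers here are assumptions; note the while loops around the waits, for exactly the reason given above.

```c
#include <pthread.h>   /* link with -lpthread */

#define BUFFER_SIZE 4

static int buffer[BUFFER_SIZE];
static int item_count = 0, in_pos = 0, out_pos = 0;
static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

void pc_add(int item) {
    pthread_mutex_lock(&monitor_lock);        /* enter the "monitor" */
    while (item_count == BUFFER_SIZE)         /* while, not if: recheck on wakeup */
        pthread_cond_wait(&not_full, &monitor_lock);
    buffer[in_pos] = item;
    in_pos = (in_pos + 1) % BUFFER_SIZE;
    item_count++;
    pthread_cond_signal(&not_empty);          /* like notify(empty) */
    pthread_mutex_unlock(&monitor_lock);
}

int pc_remove(void) {
    int item;
    pthread_mutex_lock(&monitor_lock);
    while (item_count == 0)
        pthread_cond_wait(&not_empty, &monitor_lock);
    item = buffer[out_pos];
    out_pos = (out_pos + 1) % BUFFER_SIZE;
    item_count--;
    pthread_cond_signal(&not_full);           /* like notify(full) */
    pthread_mutex_unlock(&monitor_lock);
    return item;
}
```

pthread_cond_wait releases the mutex while waiting and reacquires it before returning, which is precisely the monitor behaviour the pseudocode relies on.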
1) The synchronized keyword provides locking, which ensures mutually exclusive access to a shared resource and prevents data race
conditions.
2) The synchronized keyword also prevents reordering of statements by the compiler, which can cause subtle concurrency issues if neither
synchronized nor volatile is used.
3) The synchronized keyword involves locking and unlocking. Before entering a synchronized method or block, a thread must acquire
the lock; at that point it reads data from main memory rather than its cache, and when it releases the lock it flushes its writes to main
memory, which eliminates memory-inconsistency errors.
Mutual exclusion refers to the problem of ensuring that no two processes or threads (henceforth referred to only as
processes) are in their critical section at the same time
Critical section is a piece of code that accesses a shared resource (data structure or device) that must not be
concurrently accessed by more than one thread of execution
Race condition or race hazard is a type of flaw in a system where the output is dependent on the sequence or timing of
other uncontrollable events.
Assignment: Producer-Consumer Problem with wait and notify (using a monitor)
The producer-consumer problem is a classical concurrency problem, and in fact one of the concurrency design patterns. In this program
we use the wait and notify methods from the java.lang.Object class.
Step: 1
Create new Java application with the name ProducerConsumerSolution
Step: 2 import the following classes
import java.util.Vector;
import java.util.logging.Level;
import java.util.logging.Logger;
Step: 3
Step: 4
Write a method named produce which throws InterruptedException:
//wait if queue is full
while (sharedQueue.size() == SIZE) {
    synchronized (sharedQueue) {
        System.out.println("Queue is full " + Thread.currentThread().getName()
                + " is waiting , size: " + sharedQueue.size());
        sharedQueue.wait();
    }
}
//producing element and notify consumers
synchronized (sharedQueue) {
    sharedQueue.add(i);
    sharedQueue.notifyAll();
}
Step: 5
Repeat step 2 for class consumer
Step: 6
Write the overridden run method for the consumer:
while (true) {
    try {
        System.out.println("Consumed: " + consume());
        Thread.sleep(50);
    } catch (InterruptedException ex) {
    }
}
Step: 7
Write a method named consume which throws InterruptedException:
//wait if queue is empty
while (sharedQueue.isEmpty()) {
    synchronized (sharedQueue) {
        System.out.println("Queue is empty " + Thread.currentThread().getName()
                + " is waiting , size: " + sharedQueue.size());
        sharedQueue.wait();
    }
}
//otherwise consume element and notify waiting producer
synchronized (sharedQueue) {
    sharedQueue.notifyAll();
    return (Integer) sharedQueue.remove(0);
}
Step: 8
In the main method, create an object of the Vector class with the name sharedQueue and initialize an integer variable size with 4.
Create two objects of the Thread class by calling the public Thread(Runnable target, String name) constructor, one for the
Producer runnable and one for the Consumer runnable:
public Producer(Vector sharedQueue, int size)
public Consumer(Vector sharedQueue, int size)
Start both threads.
Operating System
LAB-11
Name ____________________
Roll No ___________________
Date ______________________
Marks Obtained ____________
Signature___________________
Operating System
LAB-12
Name ____________________
Roll No ___________________
Date ______________________
Marks Obtained ____________
Signature___________________
Objective
Evaluate a page replacement algorithm by running it on a particular string of memory references (a reference string) and computing
the number of page faults on that string.
In a computer operating system that uses paging for virtual memory management, page replacement algorithms decide
which memory pages to page out (swap out, write to disk) when a page of memory needs to be allocated. Paging
happens when a page fault occurs and a free page cannot be used to satisfy the allocation, either because there are
none, or because the number of free pages is lower than some threshold.
When the page that was selected for replacement and paged out is referenced again it has to be paged in (read in from
disk), and this involves waiting for I/O completion. This determines the quality of the page replacement algorithm: the less
time waiting for page-ins, the better the algorithm. A page replacement algorithm looks at the limited information about
accesses to the pages provided by hardware, and tries to guess which pages should be replaced to minimize the total
number of page misses, while balancing this with the costs (primary storage and processor time) of the algorithm itself.
A page replacement algorithm decides which pages should be written to disk when a new page needs to be
allocated; a good choice of victim page improves system performance. Although a random page could be picked
whenever a page fault occurs, performance is better when the algorithm evicts pages that are unlikely to be
referenced again soon rather than heavily used pages. Page replacement algorithms are studied both
theoretically and through implementations.
Page replacement takes the following procedure: if no frame is free, we find one that is not currently being used and free
it. We free the frame by changing the page table to indicate that the page is no longer in memory.
We can then modify the page fault procedure to include page replacement.
LRU
LRU replacement associates with each page the time of that page's last use. When a page must be replaced, LRU
chooses the page that has not been used for the longest period of time.
OPTIMAL
Replace the page that will not be used for the longest period of time. This design guarantees the lowest page-fault rate for a
fixed number of frames.
FIFO
When a page must be replaced, the oldest page is chosen.
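Fault counting for FIFO and LRU can be sketched in C as follows (assumed function names, at most 16 frames). On the reference string 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1 with 3 frames, these count 15 faults for FIFO and 12 for LRU.

```c
#define MAX_FRAMES 16

/* FIFO: evict frames in the order they were filled. */
int fifo_faults(const int refs[], int n, int frames) {
    int frame[MAX_FRAMES], next = 0, used = 0, faults = 0, i, j, hit;
    for (i = 0; i < n; i++) {
        hit = 0;
        for (j = 0; j < used; j++)
            if (frame[j] == refs[i]) { hit = 1; break; }
        if (!hit) {
            faults++;
            if (used < frames)
                frame[used++] = refs[i];          /* fill a free frame */
            else {
                frame[next] = refs[i];            /* evict the oldest */
                next = (next + 1) % frames;
            }
        }
    }
    return faults;
}

/* LRU: evict the frame whose page was referenced longest ago. */
int lru_faults(const int refs[], int n, int frames) {
    int frame[MAX_FRAMES], last[MAX_FRAMES];      /* last[j]: time of last use */
    int used = 0, faults = 0, i, j, hit, victim;
    for (i = 0; i < n; i++) {
        hit = 0;
        for (j = 0; j < used; j++)
            if (frame[j] == refs[i]) { hit = 1; last[j] = i; break; }
        if (!hit) {
            faults++;
            if (used < frames) {
                frame[used] = refs[i];
                last[used++] = i;
            } else {
                victim = 0;                        /* least recently used */
                for (j = 1; j < frames; j++)
                    if (last[j] < last[victim]) victim = j;
                frame[victim] = refs[i];
                last[victim] = i;
            }
        }
    }
    return faults;
}
```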
Assignment #1
Write a code to find out the number of hits for given page frame for LRU and FIFO.
Operating System
LAB-13
Name ____________________
Roll No ___________________
Date ______________________
Marks Obtained ____________
Signature___________________
Object:
Discuss deadlock avoidance and implement the Banker's Algorithm.
The Banker's algorithm is a resource allocation & deadlock avoidance algorithm developed by Edsger Dijkstra
that tests for safety by simulating the allocation of pre-determined maximum possible amounts of all resources,
and then makes a "safe-state" check to test for possible deadlock conditions for all other pending activities,
before deciding whether allocation should be allowed to continue.
Algorithm
The Banker's algorithm is run by the operating system whenever a process requests resources. The algorithm
prevents deadlock by denying or postponing the request if it determines that accepting the request could put
the system in an unsafe state (one where deadlock could occur). When a new process enters the system, it must
declare the maximum number of instances of each resource type that it may ever request.

Banker's Algorithm Example
Assuming that the system distinguishes between four types of resources, (A, B, C and D), the following is an
example of how those resources could be distributed. Note that this example shows the system at an instant
before a new request for resources arrives. Also, the types and number of resources are abstracted. Real
systems, for example, would deal with much larger quantities of each resource.
Available system resources:
ABCD
3112
Processes (currently allocated resources):
ABCD
P1 1 2 2 1
P2 1 0 3 3
P3 1 1 1 0
Processes (maximum resources):
ABCD
P1 3 3 2 2
P2 1 2 3 4
P3 1 1 5 0
Safe and Unsafe States
A state (as in the above example) is considered safe if it is possible for all processes to finish executing
(terminate). Since the system cannot know when a process will terminate, or how many resources it will have
requested by then, the system assumes that all processes will eventually attempt to acquire their stated
maximum resources and terminate soon afterward. This is a reasonable assumption in most cases since the
system is not particularly concerned with how long each process runs (at least not from a deadlock avoidance
perspective).
Also, if a process terminates without acquiring its maximum resources, it only makes it easier on the system.
Given that assumption, the algorithm determines if a state is safe by trying to find a hypothetical set of requests
by the processes that would allow each to acquire its maximum resources and then terminate (returning its
resources to the system). Any state where no such set exists is an unsafe state.
Pseudo-Code[3]
P - set of processes
Mp - maximal requirement of resources for process p
Cp - current resources allocation process p
A - currently available resources
while (P != ∅) {
    found = False
    foreach (p ∈ P) {
        if (Mp − Cp ≤ A) {
            /* p can obtain all it needs.         */
            /* assume it does so, terminates, and */
            /* releases what it already has.      */
            A = A + Cp
            P = P − {p}
            found = True
        }
    }
    if (!found) return UNSAFE
}
return SAFE
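The safety check above can be sketched in C. NP, NR and all identifiers are illustrative, and the matrices follow the example's layout (rows P1-P3, columns A-D); the function returns 1 for a safe state and 0 for an unsafe one.

```c
#define NP 3   /* number of processes, as in the example */
#define NR 4   /* resource types A, B, C, D */

/* Sketch of the Banker's safety check: can every process still finish? */
int is_safe(int alloc[NP][NR], int maxr[NP][NR], const int avail0[NR]) {
    int avail[NR], finished[NP] = {0}, done = 0, progress = 1, p, r, ok;
    for (r = 0; r < NR; r++)
        avail[r] = avail0[r];                   /* work on a copy of A */
    while (done < NP && progress) {
        progress = 0;
        for (p = 0; p < NP; p++) {
            if (finished[p]) continue;
            ok = 1;                             /* is Mp - Cp <= A? */
            for (r = 0; r < NR; r++)
                if (maxr[p][r] - alloc[p][r] > avail[r]) { ok = 0; break; }
            if (ok) {                           /* assume p runs, terminates, */
                for (r = 0; r < NR; r++)        /* and releases what it holds */
                    avail[r] += alloc[p][r];
                finished[p] = 1;
                done++;
                progress = 1;
            }
        }
    }
    return done == NP;                          /* SAFE iff all can finish */
}
```

Run on the example state (available 3 1 1 2), it reports safe; with the variant where P2 holds one extra unit of B (available 3 0 1 2), it reports unsafe, matching the analysis below.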
Example
We can show that the state given in the previous example is a safe state by showing that it is possible for each
process to acquire its maximum resources and then terminate.
1. P1 acquires 2 A, 1 B and 1 D more resources, achieving its maximum
The system now still has 1 A, no B, 1 C and 1 D resource available
2. P1 terminates, returning 3 A, 3 B, 2 C and 2 D resources to the system
The system now has 4 A, 3 B, 3 C and 3 D resources available
3. P2 acquires 2 B and 1 D extra resources, then terminates, returning all its resources
The system now has all resources: 6 A, 4 B, 7 C and 6 D
4. Because all processes were able to terminate, this state is safe
Note that these requests and acquisitions are hypothetical. The algorithm generates them to check the safety of
the state, but no resources are actually given and no processes actually terminate. Also note that the order in
which these requests are generated (if several can be fulfilled) doesn't matter, because all hypothetical
requests let a process terminate, thereby increasing the system's free resources.
For an example of an unsafe state, consider what would happen if process 2 were holding 1 more unit of
resource B at the beginning.
Requests
When the system receives a request for resources, it runs the Banker's algorithm to determine if it is safe to
grant the request. The algorithm is fairly straight forward once the distinction between safe and unsafe states is
understood.
1. Can the request be granted?
If not, the request is impossible and must either be denied or put on a waiting list
2. Assume that the request is granted
3. Is the new state safe?
If so grant the request
If not, either deny the request or put it on a waiting list
Whether the system denies or postpones an impossible or unsafe request is a decision specific to the operating
system.
Example
Continuing the previous examples, assume process 3 requests 2 units of resource C.
1. There is not enough of resource C available to grant the request
2. The request is denied
On the other hand, assume process 3 requests 1 unit of resource C.
1. There are enough resources to grant the request
2. Assume the request is granted
The new state of the system would be:
Available system resources
ABCD
Free 3 1 0 2
Processes (currently allocated resources):
ABCD
P1 1 2 2 1
P2 1 0 3 3
P3 1 1 2 0
Processes (maximum resources):
ABCD
P1 3 3 2 2
P2 1 2 3 4
P3 1 1 5 0
Determine if this new state is safe
P1 can acquire 2 A, 1 B and 1 D resources and terminate
Then, P2 can acquire 2 B and 1 D resources and terminate
Finally, P3 can acquire 3 C resources and terminate
Therefore, this new state is safe
Since the new state is safe, grant the request
Finally, assume that process 2 requests 1 unit of resource B.
1. There are enough resources
2. Assuming the request is granted, the new state would be:
Available system resources:
ABCD
Free 3 0 1 2
Processes (currently allocated resources):
ABCD
P1 1 2 2 1
P2 1 1 3 3
P3 1 1 1 0
Processes (maximum resources):
ABCD
P1 3 3 2 2
P2 1 2 3 4
P3 1 1 5 0
Is this state safe? Assume P1, P2 and P3 request more of resources B and C:
P1 is unable to acquire enough B resources
P2 is unable to acquire enough B resources
P3 is unable to acquire enough C resources
No process can acquire enough resources to terminate, so this state is not safe.
Since the state is unsafe, deny the request.
Note that in this example, no process was able to terminate. It is possible that some processes will be able to
terminate, but not all of them. That would still be an unsafe state.
Operating System
LAB-14
Name ____________________
Roll No ___________________
Date ______________________
Marks Obtained ____________
Signature___________________
Object:
Discuss election algorithms and the ring algorithm.
Election Algorithms
Many distributed algorithms employ a coordinator process that performs functions needed by the other
processes in the system. These functions include enforcing mutual exclusion, maintaining a global wait-for
graph for deadlock detection, replacing a lost token, and controlling an input or output device in the
system. If the coordinator process fails due to the failure of the site at which it resides, the system can
continue execution only by restarting a new copy of the coordinator on some other site.
The algorithms that determine where a new copy of the coordinator should be restarted are called election
algorithms.
Election algorithms assume that a unique priority number is associated with each active process in the
system. For ease of notation, we assume that the priority number of process Pi is i. To simplify our
discussion, we assume a one-to-one correspondence between processes and sites and thus refer to both
as processes. The coordinator is always the process with the largest priority number. Hence, when a
coordinator fails, the algorithm must elect that active process with the largest priority number. This
number must be sent to each active process in the system. In addition, the algorithm must provide a
mechanism for a recovered process to identify the current coordinator.