
ADVANTAGES OF MULTILEVEL QUEUE SCHEDULING

1) Low scheduling overhead: Since processes are permanently assigned to their respective
queues, the overhead of scheduling is low, as the scheduler only needs to select the appropriate
queue for execution.

2) Fairness: The scheduling algorithm provides a fair allocation of CPU time to different types of
processes, based on their priority and requirements.

3) Customizable: The scheduling algorithm can be customized to meet the specific requirements of
different types of processes.
4) Preemption: Preemption is allowed in Multilevel Queue Scheduling, which means that higher-priority
processes can preempt lower-priority processes, and the CPU is allocated to the higher-priority
process. This helps ensure that high-priority processes are executed first.

5) Prioritization: Priorities are assigned to processes based on their type, characteristics, and
importance, which ensures that important processes are executed in a timely manner.

6) Efficient allocation of CPU time

DISADVANTAGES

1) There are chances of starvation for lower-priority processes. If higher-priority processes keep
coming, then the lower-priority processes won't get an opportunity to go into the running state.

2) Multilevel queue scheduling is inflexible.

3) There may be added complexity in implementing and maintaining multiple queues and
scheduling algorithms.

MULTILEVEL FEEDBACK QUEUE SCHEDULING

In a multilevel queue-scheduling algorithm, processes are permanently assigned to a queue on entry to
the system. Processes do not move between queues. This setup has the advantage of low scheduling
overhead, but the disadvantage of being inflexible.

Multilevel feedback queue scheduling, however, allows a process to move between queues. The idea is
to separate processes with different CPU-burst characteristics. If a process uses too much CPU time, it
will be moved to a lower-priority queue. Similarly, a process that waits too long in a lower-priority queue
may be moved to a higher-priority queue. This helps to prevent starvation.

In general, a multilevel feedback queue scheduler is defined by the following parameters (a simplified sketch follows the list):


1) The number of queues in the system.
2) The method used to determine when to demote a process to a lower-priority queue.
3) The method used to determine when to upgrade a process to a higher-priority queue.
4) The method used to determine which queue a process will enter when it requires service.
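
The following Python sketch is an illustration only, not part of the original notes: the number of queues, the time quanta and the aging limit are assumed values. It shows how the parameters fit together, with each queue given a longer time quantum, a process that uses up its quantum being demoted, and a process that waits too long being promoted (aged) back up.

    from collections import deque

    # Illustrative multilevel feedback queue scheduler (all values assumed).
    QUANTA = [2, 4, 8]     # time quantum per queue; queue 0 is the highest priority
    AGING_LIMIT = 20       # waiting time after which a process is promoted

    class Process:
        def __init__(self, pid, burst):
            self.pid, self.remaining, self.waiting = pid, burst, 0

    def mlfq(processes):
        queues = [deque(processes), deque(), deque()]
        while any(queues):
            level = next(i for i, q in enumerate(queues) if q)   # highest non-empty queue
            proc = queues[level].popleft()
            run = min(QUANTA[level], proc.remaining)
            proc.remaining -= run
            for q in queues:                 # every other process waits while proc runs
                for p in q:
                    p.waiting += run
            if proc.remaining == 0:
                print(f"P{proc.pid} finished")
            elif level < len(queues) - 1:
                queues[level + 1].append(proc)   # used its full quantum: demote
            else:
                queues[level].append(proc)       # already in the lowest queue
            for i in range(1, len(queues)):      # aging: promote long-waiting processes
                for p in list(queues[i]):
                    if p.waiting >= AGING_LIMIT:
                        queues[i].remove(p)
                        p.waiting = 0
                        queues[i - 1].append(p)

    mlfq([Process(1, 5), Process(2, 12), Process(3, 3)])

The exact demotion and promotion rules used here are precisely the kind of parameters listed above.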

ADVANTAGES OF MFQS

1) It is a flexible scheduling algorithm.
2) It allows processes to move between different queues.
3) A process that waits too long in a lower-priority queue may be moved to a higher-priority queue, which helps in preventing starvation.
4) It prevents CPU-bound processes from overloading the CPU, since processes that use too much CPU time are demoted to lower-priority queues.

DISADVANTAGES OF MFQS

1) It is a complex algorithm to implement.
2) Moving processes between the queues causes additional CPU overhead.
3) Selecting the best scheduler configuration requires some other method of tuning its parameters.

NEED

1) This scheduling is more flexible than multilevel queue scheduling.
2) This algorithm helps in reducing the response time.
NOTE: Only the advantages need to be written.
NOTE DIAGRAM IN MODEL

MULTIPLE PROCESSORS SCHEDULING

A multi-processor is a system that has more than one processor but shares the same memory, bus,
and input/output devices. The bus connects the processor to the RAM, to the I/O devices, and to all the
other components of the computer.

The system is a tightly coupled system. This type of system keeps working even if a processor goes down; the
rest of the system continues to work. In multi-processor scheduling, more than one processor (CPU)
shares the load to handle the execution of processes smoothly.

The scheduling process of a multi-processor is more complex than that of a single processor system
because of the following reasons.

1) Load balancing is a problem since more than one processor is present.
2) Processes executing simultaneously may require access to shared data.
3) Cache affinity should be considered in scheduling.
Approaches to Multiple Processor Scheduling

Symmetric Multiprocessing: In symmetric multi-processor scheduling, the processors are self-scheduling.
The scheduler for each processor checks the ready queue and selects a process to execute.
The processors work on the same copy of the operating system and communicate with each other.
If one of the processors goes down, the rest of the system keeps working.

Asymmetric Multiprocessing: In asymmetric multi-processor scheduling, there is a master server,
and the rest are slave servers. The master server handles all the scheduling and I/O
processes, and the slave servers handle the users' processes. If the master server goes down, the
whole system comes to a halt. However, if one of the slave servers goes down, the rest of the system
keeps working.
Note- Processor Affinity – Processor affinity means a process has an affinity for the processor on
which it is currently running.

NOTE: The advantages and disadvantages of a multiprocessor system also need to be written.

DEADLOCK

A deadlock is a situation where each of the computer's processes waits for a resource that is
assigned to some other process. In this situation, none of the processes gets executed, since the
resource each one needs is held by another process that is itself waiting for some other resource to
be released.
Let us assume that there are three processes P1, P2 and P3, and three different resources
R1, R2 and R3. R1 is assigned to P1, R2 is assigned to P2 and R3 is assigned to P3.
After some time, P1 demands R2, which is being used by P2. P1 halts its execution since it can't
complete without R2. P2 in turn demands R3, which is being used by P3, so P2 also stops its
execution because it can't continue without R3. P3 then demands R1, which is being used by P1,
therefore P3 also stops its execution.
In this case, none of the processes is progressing and they are all waiting. The computer becomes
unresponsive since all the processes are blocked.
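
A minimal sketch of this pattern, assuming Python threads and locks in place of processes and resources (and only two of each, rather than the three in the example above): each thread holds one lock and waits for the other, so neither can proceed.

    import threading, time

    # Two "resources" acquired in opposite orders -> circular wait.
    r1, r2 = threading.Lock(), threading.Lock()

    def p1():
        with r1:                       # P1 holds R1
            time.sleep(0.1)
            print("P1 waiting for R2")
            with r2:                   # ...and waits for R2, held by P2
                print("P1 done")

    def p2():
        with r2:                       # P2 holds R2
            time.sleep(0.1)
            print("P2 waiting for R1")
            with r1:                   # ...and waits for R1, held by P1
                print("P2 done")

    t1 = threading.Thread(target=p1, daemon=True)
    t2 = threading.Thread(target=p2, daemon=True)
    t1.start(); t2.start()
    t1.join(timeout=2); t2.join(timeout=2)
    # With the sleeps in place this almost always ends up deadlocked.
    print("Deadlocked!" if t1.is_alive() and t2.is_alive() else "Completed")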
NECESSARY CONDITIONS

Mutual Exclusion There should be a resource that can only be held by one process at a time.

Hold and Wait A process can hold multiple resources and still request more resources from other
processes which are holding them.

No Preemption A resource cannot be preempted from a process by force. A process can
only release a resource voluntarily, once it has completed its execution.

Circular Wait A process is waiting for the resource held by a second process, which is
waiting for the resource held by a third process, and so on, till the last process is waiting
for a resource held by the first process. This forms a circular chain. For example: Process 1
is allocated Resource 2 and is requesting Resource 1. Similarly, Process 2 is allocated
Resource 1 and is requesting Resource 2. This forms a circular wait loop.
Difference between Starvation and Deadlock

1=> Deadlock is a situation where all the processes involved are blocked and no process proceeds.
Starvation is a situation where low-priority processes get blocked while high-priority processes
proceed.

2=> Deadlock is infinite waiting.

Starvation is long waiting, but not infinite.

3=> Every deadlock is also a starvation.

Every starvation need not be a deadlock.

4=> In deadlock, the requested resource is held by another blocked process.

In starvation, the requested resource is continuously being used by higher-priority processes.

5=> Deadlock happens when mutual exclusion, hold and wait, no preemption and circular wait
occur simultaneously.
Starvation occurs due to uncontrolled priority and resource management.

DEAD LOCK PREVENTION

Mutual exclusion- Mutual exclusion, from the resource point of view, means that a resource can
never be used by more than one process simultaneously. This is fair enough, but it is the main
reason behind deadlock: if a resource could be used by more than one process at the
same time, no process would ever have to wait for it.
However, if we can make resources sharable, so that they no longer behave in a mutually exclusive
manner, then deadlock can be prevented.

Eliminate hold and wait- Allocate all required resources to the process before the start of its
execution. This way the hold-and-wait condition is eliminated, but it leads to low device utilization. For
example, if a process requires a printer only at a later time and we allocate the printer before the
start of its execution, the printer will remain blocked until the process has completed its execution.
Alternatively, the process may make a new request for resources only after releasing its current set of
resources. This solution may lead to starvation.

No Preemption- Deadlock arises partly due to the fact that a resource cannot be taken away from a
process once it has been allocated. However, if we take resources away from the process that is
causing the deadlock, then we can prevent deadlock.
This is not a good approach in general, since if we take away a resource that is being used by a
process, all the work it has done so far may become inconsistent.

Circular Wait- To violate circular wait, we can assign a number (an ordering) to each resource and
require that every process requests resources only in increasing order of this numbering. A process
holding a higher-numbered resource can never request a lower-numbered one, so a circular chain of
requests, and hence a cycle, can never be formed.
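
A small Python sketch of this ordering rule (the numbering of resources and the helper names are illustrative assumptions): every caller sorts the resource numbers before locking, so two processes can never end up holding locks in opposite orders and no cycle can form.

    import threading

    # Each resource gets a number; locks are always taken in increasing order.
    resources = {1: threading.Lock(), 2: threading.Lock(), 3: threading.Lock()}

    def acquire_in_order(*resource_ids):
        ordered = sorted(resource_ids)       # enforce the global ordering
        for rid in ordered:
            resources[rid].acquire()
        return ordered

    def release(ordered):
        for rid in reversed(ordered):
            resources[rid].release()

    # Both of these callers end up locking 1 before 2, so no circular wait.
    held = acquire_in_order(2, 1)
    print("acquired in order", held)
    release(held)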
DEADLOCK DETECTION
Deadlock detection algorithms are used to identify the presence of deadlocks in computer systems.
These algorithms examine the system's processes and resources to determine if there is a circular
wait situation that could lead to a deadlock. If a deadlock is detected, the algorithm can take steps
to resolve it and prevent it from occurring again in the future. There are several popular deadlock
detection algorithms.

Purpose of deadlock detection algorithm

1) The purpose of a deadlock detection algorithm is to identify and resolve deadlocks in a computer
system.
2) It does so by identifying the occurrence of a deadlock, determining the processes and resources
involved, taking corrective action to break the deadlock, and restoring normal system operations.
3) The algorithm plays a crucial role in ensuring the stability and reliability of the system by
preventing deadlocks from causing the system to freeze or crash.

DEADLOCK DETECTION ALGORITHMS

1) RESOURCE ALLOCATION GRAPH- The resource allocation graph (RAG) is a pictorial
representation of the state of a system. As its name suggests, the resource allocation graph gives
complete information about all the processes that are holding some resources or waiting for some
resources.

It also contains the information about all the instances of all the resources whether they are available or
being used by the processes.

In a resource allocation graph, vertices are of two types, process and resource, and each is
represented by a different shape: a circle represents a process, while a rectangle represents a resource.

A resource can have more than one instance. Each instance will be represented by a dot inside the
rectangle.

Edges in a RAG are also of two types: one represents the assignment of a resource to a process, and
the other represents a process waiting for a resource.

A resource is shown as assigned to a process if the tail of the arrow is attached to an instance of the
resource and the head is attached to a process.

A process is shown as waiting for a resource if the tail of an arrow is attached to the process while the
head is pointing towards the resource.

Example
Let's consider 3 processes P1, P2 and P3, and two types of resources R1 and R2, each
having a single instance.

According to the graph, R1 is being used by P1, P2 is holding R2 and waiting for R1, and P3
is waiting for both R1 and R2.

The graph is deadlock free since no cycle is formed in the graph.

DETECTION USING RAG

If a cycle is formed in a resource allocation graph where all the resources have a single
instance, then the system is deadlocked.

In the case of a resource allocation graph with multi-instance resource types, a cycle is a necessary
condition for deadlock but not a sufficient one.

The following example contains three processes P1, P2, P3 and three resources R1, R2, R3, each
having a single instance.

If we analyze the graph, we can find that a cycle is formed; since every resource has a single
instance, the system satisfies all four conditions of deadlock and is therefore deadlocked.
Allocation Matrix The allocation matrix can be formed by using the resource allocation
graph of a system. In the allocation matrix, an entry is made for each resource
assigned. For example, in the following matrix, an entry is made in the row of P1 and
below R3, since R3 is assigned to P1.

Process   R1   R2   R3
P1         0    0    1
P2         1    0    0
P3         0    1    0

Request Matrix

In the request matrix, an entry is made for each resource requested. As in the
following example, P1 needs R1, therefore an entry is made in the row of P1 and
below R1.

Process   R1   R2   R3
P1         1    0    0
P2         0    1    0
P3         0    0    1

Available = (0, 0, 0)

There is no resource available in the system, and no process is going to release one.
Each process needs at least one more resource to complete, so each will continuously
hold the resource it already has.

We cannot fulfill the request of even one process using the available resources,
therefore the system is deadlocked, as determined earlier when we detected a cycle in
the graph.
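
One common way to run this check programmatically is sketched below in Python (the matrices and the Available vector are taken from the tables above; the function name and structure are illustrative assumptions): a process whose entire request can be met from Work is assumed to finish and release its allocation, and any process that can never be satisfied is reported as deadlocked.

    # Allocation, Request and Available as in the tables above.
    allocation = [[0, 0, 1],   # P1 holds R3
                  [1, 0, 0],   # P2 holds R1
                  [0, 1, 0]]   # P3 holds R2
    request    = [[1, 0, 0],   # P1 wants R1
                  [0, 1, 0],   # P2 wants R2
                  [0, 0, 1]]   # P3 wants R3
    available  = [0, 0, 0]

    def detect_deadlock(allocation, request, available):
        n, m = len(allocation), len(available)
        work, finish = available[:], [False] * n
        progress = True
        while progress:
            progress = False
            for i in range(n):
                if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                    for j in range(m):           # pretend Pi finishes and releases
                        work[j] += allocation[i][j]
                    finish[i] = True
                    progress = True
        return [f"P{i + 1}" for i in range(n) if not finish[i]]

    print("Deadlocked processes:", detect_deadlock(allocation, request, available))
    # -> ['P1', 'P2', 'P3'], matching the cycle found in the graph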
Advantages

1) Easy to understand and implement


2) Can handle multiple types of resources
3) Helps identify the processes involved in a deadlock

Disadvantages

1) Can be time-consuming for large systems


2) Can give false positives if there are multiple requests for the same resource
3) Assumes that all resources are pre-allocated, which may not be the case in some systems.

2) Wait-for-Graph Algorithm: This is a variant of the resource allocation graph. In this
algorithm, we only have processes as vertices in the graph. If the wait-for graph contains a cycle,
then we can say the system is in a deadlock state. Below we discuss how the resource
allocation graph is converted into a wait-for graph in an algorithmic approach. We need to
remove the resources while converting from the resource allocation graph to the wait-for graph.
Resource Allocation Graph: contains processes and resources.
Wait-for-Graph: contains only processes, after the resources have been removed during the
conversion from the resource allocation graph.

Algorithm:

Step 1: Take the first process (Pi) from the resource allocation graph, check which resource (Ri)
it is waiting to acquire, and start the wait-for graph with that process.
Step 2: Draw an edge in the wait-for graph, with no resource in between, from the current process
(Pi) to the next process (Pj) that holds the resource Pi is waiting for; then, from that process (Pj),
find the resource (Rj) it is waiting for and the process (Pk) that holds it, and continue in the same way.
Step 3: Repeat Step 2 for all the processes.
Step 4: After covering all the processes, if we find a closed loop (cycle), then the system is in a
deadlock state, and a deadlock is detected.

Now we will see the working of this Algorithm with an Example. Consider a Resource Allocation
Graph with 4 Processes P1, P2, P3, P4, and 4 Resources R1, R2, R3, R4. Find if there is a
deadlock in the Graph using the Wait for Graph-based deadlock detection algorithm.
Step 1: First take process P1, which is waiting for resource R1; R1 is acquired by
process P2. Start a wait-for graph for the above resource allocation graph.

Step 2: Now we can observe that there is a path from P1 to P2, as P1 is waiting for R1, which has
been acquired by P2. After removing resource R1, the graph looks as follows.

Step 3: From P2 we can observe a path from P2 to P3, as P2 is waiting for R4, which is acquired
by P3. So we make a path from P2 to P3; after removing resource R4, the graph looks as follows.

Step 4: From P3 we find a path to P4, as P3 is waiting for R3, which is
acquired by P4. After removing R3, the graph looks as follows.
Step 5: Here we can find Process P4 is waiting for R2 which is acquired by P1. So finally the
Wait-for-Graph is as follows:

Step 6: Finally, in this graph we have found a cycle, as the chain from process P4 comes back to
process P1, which is the starting point (i.e., it forms a closed loop). According to the
algorithm, if a closed loop is found, the system is in a deadlock state. So here we
can say the system is in a deadlock state.
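
A rough Python sketch of this detection step (the edges are the ones built in the steps above; the cycle check itself is an ordinary depth-first search, and the names are illustrative):

    # Wait-for graph: edge Pi -> Pj means "Pi waits for a resource held by Pj".
    wait_for = {
        "P1": ["P2"],   # P1 waits for R1, held by P2
        "P2": ["P3"],   # P2 waits for R4, held by P3
        "P3": ["P4"],   # P3 waits for R3, held by P4
        "P4": ["P1"],   # P4 waits for R2, held by P1
    }

    def has_cycle(graph):
        visiting, done = set(), set()
        def dfs(node):
            if node in visiting:      # back edge: closed loop found
                return True
            if node in done:
                return False
            visiting.add(node)
            if any(dfs(nxt) for nxt in graph.get(node, [])):
                return True
            visiting.remove(node)
            done.add(node)
            return False
        return any(dfs(p) for p in graph)

    print("Deadlock detected" if has_cycle(wait_for) else "No deadlock")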

Advantages
1) Can handle multiple types of resources
2) Useful for systems with a large number of processes
3) Provides a clear visualization of the deadlock
Disadvantages
1) Can be time-consuming for large systems
2) May give false positives if there are multiple requests for the same resource
3) Assumes that all resources are pre-allocated, which may not be the case in some
systems.

3) Banker's Algorithm: Banker's algorithm is a deadlock avoidance algorithm. It is named so because this
algorithm could be used in a banking system to determine whether a loan can be granted or not.

Consider there are n account holders in a bank and the sum of the money in all of their
accounts is S. Every time a loan has to be granted by the bank, it subtracts the loan
amount from the total money the bank has and then checks whether that difference is still greater
than S. This is done because only then would the bank have enough money even if all
the n account holders drew all their money at once. Banker's algorithm works in a similar
way in computers.
Whenever a new process is created, it must specify exactly the maximum number of instances of each
resource type that it will need.

CHARACTERISTICS OF BANKER'S ALGORITHM


The characteristics of Banker's algorithm are as follows:
1) If any process requests a resource, then it may have to wait.
2) This algorithm consists of advanced features for maximum resource allocation.
3) There are limited resources in the system.
4) In this algorithm, if a process gets all the resources it needs, then it must return the
resources within a restricted period of time.
5) Various resources are maintained in this algorithm that can fulfill the needs of at least
one process.

Data Structures used to implement the Banker’s Algorithm


Some data structures that are used to implement the banker's algorithm are:

1. Available: It is an array of length m. It represents the number of available resources
of each type. If Available[j] = k, then there are k instances of resource type Rj available.

2. Max: It is an n x m matrix which represents the maximum number of instances of
each resource that a process can request. If Max[i][j] = k, then process Pi can
request at most k instances of resource type Rj.

3. Allocation: It is an n x m matrix which represents the number of resources of each
type currently allocated to each process. If Allocation[i][j] = k, then process Pi is
currently allocated k instances of resource type Rj.

4. Need: It is an n x m matrix which indicates the remaining resource needs of each
process. If Need[i][j] = k, then process Pi may need k more instances of resource type Rj
to complete its task. Need[i][j] = Max[i][j] − Allocation[i][j].

The banker's algorithm is a combination of two algorithms, namely the Safety
Algorithm and the Resource Request Algorithm. These two algorithms together control
the processes and avoid deadlock in the system.

Safety Algorithm

The safety algorithm determines whether or not the system is in a safe state.
It finds a safe sequence in the banker's algorithm using the following steps −
Step 1
 Consider two vectors named Work and Finish, of lengths m and n respectively.
 Initialize: Work = Available
 Finish[i] = false for i = 0, 1, 2, …, n − 1.

Step 2

 Find an index i such that both Finish[i] = false and Needi ≤ Work.
 If no such i exists, then go to step 4.
Step 3
 Work = Work + Allocationi
 Finish[i] = true
 Go to step 2.

Step 4
 If Finish[i] = true for all i, the system is in a safe state; otherwise it is in an unsafe state.
 This algorithm may require on the order of m × n² operations to decide whether the state is safe.
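
A compact Python sketch of this safety check (the function name and the sample Available/Max/Allocation values are assumptions chosen for illustration, not taken from these notes):

    # Safety algorithm: try to build a safe sequence using Work and Finish.
    def is_safe(available, max_need, allocation):
        n, m = len(allocation), len(available)
        need = [[max_need[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
        work, finish = available[:], [False] * n        # Step 1
        safe_sequence = []
        while len(safe_sequence) < n:
            for i in range(n):                          # Step 2: Finish[i] = false and Need_i <= Work
                if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                    for j in range(m):                  # Step 3: Work = Work + Allocation_i
                        work[j] += allocation[i][j]
                    finish[i] = True
                    safe_sequence.append(f"P{i}")
                    break
            else:
                return False, []                        # Step 4: some process can never finish -> unsafe
        return True, safe_sequence

    # Illustrative example state: 5 processes, 3 resource types.
    available  = [3, 3, 2]
    max_need   = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
    allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
    print(is_safe(available, max_need, allocation))
    # -> (True, ['P1', 'P3', 'P0', 'P2', 'P4'])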

Resource Request Algorithm

This algorithm determines how the system behaves when a process makes a request for resources of
each type in the system.
Let us consider a request vector Requesti for a process Pi. When Requesti [j] = k, then the
process Pi requires k instances of resource type Rj.
When a process Pi requests resources, the following sequence is followed −
Step 1
If Requesti ≤ Needi, then go to step 2. Otherwise, raise an error, since the process has exceeded
its maximum claim for the resource.
Step 2
If Requesti ≤ Available, then go to step 3. Otherwise, the process Pi must wait until the
resources are available.
Step 3
The system pretends to allocate the requested resources to process Pi by modifying the state as follows −
Available = Available – Requesti;
Allocationi = Allocationi + Requesti;
Needi = Needi – Requesti;
When the state of the resulting resource allocation is safe, then the requested resources are
allocated to the process, Pi. If the new state is unsafe, then the process Pi must wait for Requesti
and the old resource allocation state is restored.
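
A matching Python sketch of these request steps, reusing is_safe() and the sample state from the sketch above (names and values are again illustrative assumptions):

    def request_resources(i, request, available, max_need, allocation):
        m = len(available)
        need_i = [max_need[i][j] - allocation[i][j] for j in range(m)]
        # Step 1: the request must not exceed the declared maximum claim.
        if any(request[j] > need_i[j] for j in range(m)):
            raise ValueError("process has exceeded its maximum claim")
        # Step 2: if the resources are not currently available, Pi must wait.
        if any(request[j] > available[j] for j in range(m)):
            return False
        # Step 3: pretend to allocate, then keep the new state only if it is safe.
        for j in range(m):
            available[j] -= request[j]
            allocation[i][j] += request[j]
        safe, _ = is_safe(available, max_need, allocation)
        if not safe:
            for j in range(m):                 # unsafe: restore the old state, Pi must wait
                available[j] += request[j]
                allocation[i][j] -= request[j]
            return False
        return True                            # safe: the request is granted

    # Example: P1 requests (1, 0, 2) in the state defined above -> granted (True).
    print(request_resources(1, [1, 0, 2], available, max_need, allocation))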

Advantages
1) Avoids deadlocks by granting a resource request only when the resulting system state is safe
2) Can handle multiple resource types
3) Provides a safe and efficient resource allocation method

Disadvantages
1) May not be feasible for systems with a large number of processes and resources
2) Assumes that resource requirements are known in advance, which may not be the case in
some systems
3) May result in low resource utilization if resources are reserved but not used.

DEADLOCK RECOVERY
1. Process Termination:
To eliminate the deadlock, we can simply kill one or more processes. For this, we use
two methods:
(a). Abort all the Deadlocked Processes:
Aborting all the processes will certainly break the deadlock, but at great expense. The
deadlocked processes may have computed for a long time, and the results of those partial
computations must be discarded and will probably have to be recomputed later.

(b). Abort one process at a time until deadlock is eliminated:

Abort one deadlocked process at a time, until the deadlock cycle is eliminated from the
system. This method may incur considerable overhead, because after
aborting each process, we have to run the deadlock detection algorithm again to check
whether any processes are still deadlocked.

Advantages of Process Termination:


1) It is a simple method for breaking a deadlock.
2) It ensures that the deadlock will be resolved quickly, as all processes involved in the
deadlock are terminated simultaneously.
3) It frees up resources that were being used by the deadlocked processes, making
those resources available for other processes.

Disadvantages of Process Termination:


1) It can result in the loss of data and other resources that were being used by the
terminated processes.
2) It may cause further problems in the system if the terminated processes were critical to
the system’s operation.
3) It may result in a waste of resources, as the terminated processes may have already
completed a significant amount of work before being terminated.

2. Resource Preemption:
To eliminate deadlocks using resource preemption, we preempt some resources from
processes and give those resources to other processes. This method will raise three
issues –
(a). Selecting a victim:
We must determine which resources and which processes are to be preempted, and
also the order of preemption, so as to minimize the cost.

(b). Rollback:
We must determine what should be done with a process from which resources are
preempted. One simple idea is total rollback: abort the process and
restart it.

(c). Starvation:
In a system, it may happen that the same process is always picked as a victim. As a
result, that process will never complete its designated task. This situation is
called starvation and must be avoided. One solution is to ensure that a process can be
picked as a victim only a finite number of times.

Advantages of Resource Preemption:


1) It can help in breaking a deadlock without terminating any processes, thus preserving
data and resources.
2) It is more efficient than process termination as it targets only the resources that are
causing the deadlock.
3) It can potentially avoid the need for restarting the system.

Disadvantages of Resource Preemption:


1) It may lead to increased overhead due to the need for determining which resources
and processes should be preempted.
2) It may cause further problems if the preempted resources were critical to the
system’s operation.
3) It may cause delays in the completion of processes if resources are frequently
preempted.

DEADLOCK AVOIDANCE
Deadlock avoidance is a technique used by the operating system to avoid deadlock.
Before understanding the deadlock avoidance algorithms, we need to understand two states:
Safe state - A safe state is a system state in which resources can be allocated to each process
(in some order) in a way that avoids deadlock. The successful execution of all processes is
achievable, and the system can avoid deadlock.
Unsafe state - An unsafe state is a system state from which a deadlock may occur. The successful
completion of all processes is not assured, and the risk of deadlock is high.
NOTE: For the avoidance algorithms, the Banker's algorithm and the Resource Allocation Graph algorithm need to be written.

HANDLING DEADLOCK

Deadlock Ignorance- Deadlock ignorance is the most widely used approach among all the
mechanisms. It is used by many operating systems, mainly for end-user systems. In this
approach, the operating system assumes that deadlock never occurs; it simply ignores deadlock.
This approach is best suited to a single end-user system where the user uses the system only for
browsing and other routine tasks.

Deadlock prevention- Deadlock happens only when mutual exclusion, hold and wait, no
preemption and circular wait hold simultaneously. If it is possible to violate one of the four
conditions at any time, then deadlock can never occur in the system.

The idea behind the approach is very simple: we have to ensure that at least one of the four
conditions fails. However, there can be a big argument about its physical implementation in the system.

Deadlock avoidance- In deadlock avoidance, the operating system checks whether
the system is in a safe state or an unsafe state at every step it performs. The process
continues as long as the system is in a safe state. Once the system would move to an
unsafe state, the OS has to backtrack one step.

In simple words, the OS reviews each allocation so that the allocation doesn't cause a
deadlock in the system.
Deadlock detection and recovery This approach lets processes fall into deadlock and
then periodically checks whether a deadlock has occurred in the system. If one has occurred, it
applies some recovery method to the system to get rid of the deadlock.
