unit-2 os 2024


CPU scheduling

• CPU scheduling is the process of allowing one process to use the
CPU while the execution of another process is on hold (in the
waiting state) due to the unavailability of a resource such as I/O.
Types of CPU scheduling
Scheduling decisions may take place under four circumstances:
• When a process switches from the running state to the waiting state
• When a process switches from the running state to the ready state
• When a process switches from the waiting state to the ready state
• When a process terminates.
Scheduling under only the first and fourth circumstances is non-preemptive;
otherwise the scheduling is preemptive.
Preemptive scheduling
• Scheduling in which a running process can be interrupted when a
higher-priority process enters the ready queue and is allocated
the CPU is called preemptive scheduling.
Non-preemptive scheduling
• Scheduling in which a running process cannot be interrupted by
any other process is called non-preemptive scheduling.
Scheduling Algorithms

• 1. First-Come, First-Served Scheduling


• 2. Shortest-Job-First Scheduling
• 3. Priority Scheduling
• 4. Round-Robin Scheduling
1. First-Come, First-Served (FCFS) Scheduling
• FCFS is also called the first-in, first-out (FIFO) scheduling method.
• Processes are dispatched according to their arrival time at
the ready queue.
• Easy to understand and implement.
• It is implemented using a FIFO queue.
• FCFS is a non-preemptive CPU scheduling algorithm.
• The algorithm simply schedules jobs according to their arrival time.
• The job which arrives first in the ready queue gets the CPU
first: the earlier the arrival time, the sooner the job gets the CPU.
• If a long job has been assigned to the CPU, the many shorter
jobs behind it all have to wait.
Advantages of FCFS

• Simple
• Easy
• First come, First served
Disadvantages of FCFS
• The scheduling method is non-preemptive; once started, a process
runs to completion.
• Due to the non-preemptive nature of the algorithm, a long-running
process can keep many short processes waiting behind it (the convoy
effect).
• Although it is easy to implement, it performs poorly, since the
average waiting time is higher than under other scheduling algorithms.
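The convoy effect above can be made concrete with a small sketch. The `fcfs` helper and the process tuples below are illustrative, not part of any standard API: each process is a (name, arrival, burst) triple, and waiting time is the gap between arrival and the moment the CPU becomes free.

```python
# A minimal FCFS sketch (hypothetical process list): processes are served
# strictly in arrival order; waiting time = start time - arrival time.
def fcfs(processes):
    """processes: list of (name, arrival, burst) tuples."""
    time, schedule = 0, []
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival)       # CPU may sit idle until arrival
        waiting = time - arrival
        schedule.append((name, waiting, waiting + burst))  # (wait, turnaround)
        time += burst
    return schedule

# A long first job makes every later job wait (the convoy effect):
print(fcfs([("P1", 0, 24), ("P2", 1, 3), ("P3", 2, 3)]))
```

Here P2 and P3 each wait over 20 time units because the long job P1 arrived first.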
2. Shortest-Job-First (SJF) Scheduling
• FCFS schedules processes according to their arrival time; the SJF
scheduling algorithm, by contrast, schedules processes according
to their burst time.
• In SJF scheduling, the process with the lowest burst time,
among the list of available processes in the ready queue, is
scheduled next.
• However, it is very difficult to predict the burst time a process
will need, hence this algorithm is very difficult to implement
in a real system.
Advantages of SJF

• Maximum throughput
• Minimum average waiting and turnaround time
Disadvantages of SJF
• May suffer from starvation: a steady stream of short jobs can
indefinitely postpone a long job.
• It is not directly implementable, because the exact burst time of a
process can't be known in advance.
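A non-preemptive SJF step can be sketched as follows. The `sjf` helper and the data are illustrative (burst times are assumed known, which the slide notes is unrealistic in practice): at each decision point, among the processes that have already arrived, the one with the smallest burst runs to completion.

```python
# A non-preemptive SJF sketch (hypothetical data): at each step, among the
# arrived processes, pick the one with the smallest burst time.
def sjf(processes):
    """processes: list of (name, arrival, burst); returns completion order."""
    remaining = list(processes)
    time, order = 0, []
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                   # CPU idle until the next arrival
            time = min(p[1] for p in remaining)
            continue
        job = min(ready, key=lambda p: p[2])  # shortest burst first
        remaining.remove(job)
        time += job[2]
        order.append(job[0])
    return order

print(sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]))
```

Note that P3, the shortest job, jumps ahead of P2 and P4 even though it arrived after them.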
3. Priority Scheduling
• The priority CPU scheduling algorithm has both preemptive and
non-preemptive variants. It is one of the most common
scheduling algorithms in batch systems.
• A priority is associated with each process, and the CPU is
allocated to the process with the highest priority.
• Under this algorithm, the CPU selects the higher-priority
process first.
• If two processes have the same priority, FCFS scheduling is
applied to break the tie.
• SJF is a special case of priority scheduling in which the
priority is the inverse of the (predicted) next CPU burst.
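The two rules above (highest priority wins, FCFS on ties) can be sketched in a few lines. The helper and its data are hypothetical; following a common convention, a lower priority number is assumed to mean higher priority.

```python
# A non-preemptive priority sketch (hypothetical data; lower number = higher
# priority). Ties are broken FCFS, i.e. by arrival time.
def priority_schedule(processes):
    """processes: list of (name, arrival, burst, priority)."""
    remaining, time, order = list(processes), 0, []
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                   # CPU idle until the next arrival
            time = min(p[1] for p in remaining)
            continue
        # Highest priority first; equal priorities fall back to arrival order.
        job = min(ready, key=lambda p: (p[3], p[1]))
        remaining.remove(job)
        time += job[2]
        order.append(job[0])
    return order

print(priority_schedule([("P1", 0, 5, 3), ("P2", 1, 3, 1), ("P3", 2, 8, 1)]))
```

P2 and P3 share priority 1, so the FCFS tie-break runs P2 (which arrived earlier) before P3.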
4. Round-Robin Scheduling
• In Round-Robin scheduling, each ready task runs in turn, in a
cyclic queue, for a limited time slice. This algorithm also offers
starvation-free execution of processes.
• Round-Robin scheduling is designed for time-sharing systems.
• It is similar to FCFS scheduling, but preemption is added to
switch between processes.
• A time quantum is typically 10 to 100 milliseconds.
• The ready queue is implemented in FIFO manner.
• Round Robin is inherently a preemptive algorithm.
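The cyclic-queue behaviour can be sketched with a FIFO deque. The `round_robin` helper and its burst values are illustrative: each process runs for at most one quantum, and an unfinished process goes to the back of the queue.

```python
from collections import deque

# A Round-Robin sketch (hypothetical data): each process runs for at most
# one time quantum, then is preempted and moved to the back of the queue.
def round_robin(bursts, quantum):
    """bursts: dict name -> burst time; returns list of (name, slice_run)."""
    queue = deque(bursts.items())       # FIFO ready queue
    timeline = []
    while queue:
        name, left = queue.popleft()
        run = min(quantum, left)
        timeline.append((name, run))
        if left > run:                  # unfinished: back of the queue
            queue.append((name, left - run))
    return timeline

print(round_robin({"P1": 5, "P2": 3, "P3": 2}, quantum=2))
```

No process waits more than one full cycle of the queue before running again, which is why Round Robin is starvation-free.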
Multiple-Processor Scheduling
• In multiple-processor scheduling, multiple CPUs are available.
• Load sharing becomes possible, as the load can be distributed
among the available processors.
• Multiple-processor scheduling is more complex than
single-processor scheduling.
• Multiprocessor systems may be heterogeneous (different
kinds of CPUs) or homogeneous (the same CPU). There may
be special scheduling constraints, such as devices connected
via a private bus to only one CPU.
• Multiprocessor scheduling is of two types:
• i) Asymmetric scheduling
• ii) Symmetric scheduling
i) Asymmetric scheduling
• It is used when all the scheduling decisions and I/O processing
are handled by a single processor, called the master server.
The other processors execute only user code.
• This is simple and reduces the need for data sharing; this
entire scenario is called asymmetric multiprocessing.
ii) symmetric scheduling
• It is used where each processor is self-scheduling. All
processes may be in a common ready queue, or each processor
may have its private queue for ready processes.
• The scheduling proceeds further by having the scheduler for
each processor examine the ready queue and select a process
to execute.
System Call interface for process management
• A system call named fork is used to create a new process that is a
duplicate of the calling process.
• The duplicate receives copies of the parent's data, open file
descriptors, and registers.
• The original process is called the parent process and the duplicate
is called the child process.
• The fork call returns a value, which is zero in the child and equal to the
child's PID (Process Identifier) in the parent.
• System calls like exit request the service of terminating a
process.
• Loading a new program, i.e. replacing the original process image,
requires the exec system call.
• The PID helps to distinguish between child and parent processes.
System Call interface for process management

• fork()
• Processes generate clones of themselves using
the fork() system call.
• It is one of the most common ways to create processes in
operating systems.
• After fork(), the parent and child run concurrently; if the
parent calls wait(), its execution is suspended until the child
process completes.
• Once the child process has completed its execution, control is
returned to the parent process.
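The return-value convention can be seen in a short sketch using Python's `os.fork()`, which wraps the underlying Unix system call (so this runs only on a Unix-like system). The `fork_demo` helper name and the exit code 42 are illustrative.

```python
import os

# A minimal fork() sketch: fork() returns 0 in the child and the child's
# PID in the parent, which is how the two copies tell themselves apart.
def fork_demo():
    pid = os.fork()
    if pid == 0:                        # child branch: pid is zero here
        os._exit(42)                    # child terminates with a status code
    # Parent branch: suspend until the child terminates, then reap its status.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)       # the child's exit code

print("child exited with", fork_demo())
```

The parent's call to `os.waitpid` is what suspends it; without that call, parent and child would simply run concurrently.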
exit()
• exit() is a system call that is used to end program
execution.
• This call indicates that execution is complete; in a
multi-threaded environment it terminates every thread of
the process.
• The operating system reclaims the resources held by the
process following the use of the exit() system call.
Wait()
• In some systems, a process may have to wait for another
process to complete its execution before proceeding.
• When a parent process creates a child process, the parent's
execution can be suspended until the child process is
finished.
• The wait() system call is used to suspend the parent process
in this way. Once the child process has completed its execution,
control is returned to the parent process.
waitpid()
• The waitpid() system call suspends execution of the current
process until a child specified by pid argument has changed
state.
• By default, waitpid() waits only for terminated children, but
this behaviour is modifiable via the options argument.
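One such option is WNOHANG, which makes `waitpid()` return immediately instead of suspending when no child has changed state yet. The sketch below (Unix-only; the helper name and sleep durations are illustrative) polls a child rather than blocking on it.

```python
import os
import time

# A waitpid() sketch: with the WNOHANG option, waitpid() returns pid 0
# immediately if the child has not yet changed state, instead of suspending.
def poll_child():
    pid = os.fork()
    if pid == 0:
        time.sleep(0.2)                 # child: stay alive briefly
        os._exit(0)
    reaped = 0
    while reaped == 0:                  # 0 means the child is still running
        reaped, _ = os.waitpid(pid, os.WNOHANG)
        time.sleep(0.05)                # parent does other work between polls
    return reaped == pid

print(poll_child())
```

Without WNOHANG (options = 0), the same call would suspend the parent until the child terminated, as described in the wait() section above.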
exec()

• This system call is invoked when an executable file replaces the
earlier executable file in an already executing process. A new
process is not built: the old process identifier stays, but the new
program replaces the process's text, data, stack, heap, etc.
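The usual fork-then-exec pattern can be sketched with Python's `os.execv` wrapper (Unix-only). The child replaces its own image with `/bin/echo` while keeping its PID; the parent is untouched. The helper name and the echoed message are illustrative.

```python
import os

# An exec() sketch: the child replaces its own image with /bin/echo while
# keeping the same PID. The parent just waits for the child and reaps it.
def run_echo():
    pid = os.fork()
    if pid == 0:
        os.execv("/bin/echo", ["echo", "hello from exec"])
        os._exit(1)                     # reached only if exec itself failed
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)       # echo's exit code (0 on success)

print("echo exited with", run_echo())
```

Note that nothing after a successful `execv` runs in the child: the old program image, including the `os._exit(1)` line, is gone.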
Deadlock – System model

Deadlock
• Every process needs some resources to complete its execution.
However, resources are granted in a sequential order.
• Deadlock is a situation where a set of processes are blocked
because each process is holding a resource and waiting for another
resource acquired by some other process.
• A process requests some resource.
• The OS grants the resource if it is available; otherwise, it lets
the process wait.
• The process uses the resource and releases it on completion.
• A deadlock is a situation where each of the processes waits
for a resource which is assigned to some other process.

Deadlock – System model
• Let us assume that there are three processes P1, P2 and P3.
There are three different resources R1, R2 and R3.
• R1 is assigned to P1, R2 is assigned to P2 and R3 is assigned
to P3.
• After some time, P1 demands R2, which is being used by
P2.
• P1 halts its execution, since it can't complete without R2.
• P2 in turn demands R3, which is being used by P3.
• P2 also stops its execution, because it can't continue without
R3.
• P3 in turn demands R1, which is being used by P1; therefore
P3 also stops its execution.
In this scenario, a cycle is formed among the three processes. None of
the processes makes progress; they are all waiting. The computer
becomes unresponsive, since all the processes are blocked.

Deadlock Characterization (OR)
Necessary Conditions for Deadlock
• A deadlock happens in an operating system when two or more
processes need some resource, held by another process, in order
to complete their execution.
• A deadlock can occur only if the following four conditions hold
simultaneously. They are given as follows −
i) Mutual Exclusion
There should be a resource that can only be held by one process
at a time. In the diagram below, there is a single instance of
Resource 1 and it is held by Process 1 only.
ii) Hold and Wait
• A process can hold multiple resources and still request more
resources that are held by other processes. In the
diagram given below,
• Process 2 holds Resource 2 and Resource 3 and is requesting
Resource 1, which is held by Process 1.
iii) No Preemption
• A resource cannot be forcibly preempted from a process. A
process can only release a resource voluntarily. In the diagram
below,
• Process 2 cannot preempt Resource 1 from Process 1. It will
only be released when Process 1 relinquishes it voluntarily
after its execution is complete.
iv) Circular Wait
• A process is waiting for the resource held by a second
process, which is waiting for the resource held by a third
process, and so on, till the last process is waiting for a resource
held by the first process.
• This forms a circular chain. For example: Process 1 is
allocated Resource 2 and is requesting Resource 1. Similarly,
Process 2 is allocated Resource 1 and is requesting Resource
2. This forms a circular wait loop.
Methods for Handling Deadlocks
• There are three different methods for dealing with the
deadlock problem:
• Ensure that the system will never enter a deadlock
state. (Prevention and Avoidance)
• Allow the system to enter a deadlock state and then
recover. (Detection and Recovery)
• Ignore the problem and pretend that deadlocks never
occur in the system; used by most operating systems,
including UNIX.
Deadlock Prevention
• By ensuring that at least one of these conditions cannot hold,
we can prevent the occurrence of a deadlock
Mutual Exclusion – not required for sharable resources; must
hold for non-sharable resources.
Hold and Wait – must guarantee that whenever a process requests
a resource, it does not hold any other resources.
No Preemption – If a process that is holding some resources
requests another resource that cannot be immediately allocated
to it, then all resources currently being held are released.
Circular Wait – impose a total ordering of all resource types, and
require that each process requests resources in an increasing
order of enumeration
Deadlock Avoidance
• Requires that the system have some additional a priori
information available.
• Simplest and most useful model requires that each process
declare the maximum number of resources of each type that it
may need.
• The deadlock-avoidance algorithm dynamically examines the
resource-allocation state to ensure that there can never be a
circular-wait condition.
• Resource-allocation state is defined by the number of available
and allocated resources, and the maximum demands of the
processes.
i) Safe State
• A state is safe if the system can allocate all resources
requested by all processes ( up to their stated maximums )
without entering a deadlock state.
• More formally, a state is safe if there exists a safe sequence of
processes { P0, P1, P2, ..., PN }
• such that all of the resource requests for Pi can be granted
using the resources currently allocated to Pi and all processes
Pj where j < i.
• ( I.e. if all the processes prior to Pi finish and free up their
resources, then Pi will be able to finish also, using the
resources that they have freed up. )
Figure - Safe, unsafe, and deadlocked state spaces.
• If a safe sequence does not exist, then the system is in an
unsafe state, which MAY lead to deadlock. ( All safe states are
deadlock free, but not all unsafe states lead to deadlocks. )
ii) Resource-Allocation Graph Algorithm

• If resource categories have only single instances of their


resources, then deadlock states can be detected by cycles in
the resource-allocation graphs.
• unsafe states can be recognized and avoided by augmenting
the resource-allocation graph with claim edges, noted by
dashed lines, which point from a process to a resource that it
may request in the future.
• In order for this technique to work, all claim edges must be
added to the graph for any particular process before that
process is allowed to request any resources.
• When a process makes a request, the claim edge Pi->Rj is
converted to a request edge.
Figure - Resource allocation graph for deadlock avoidance
• Similarly when a resource is released, the assignment reverts back to a
claim edge.
• This approach works by denying requests that would produce cycles in the
resource-allocation graph, taking claim edges into effect.
• Consider for example what happens when process P2 requests resource
R2:
Figure - An unsafe state in a resource allocation graph

• The resulting resource-allocation graph would have a cycle in


it, and so the request cannot be granted.
iii) Banker's Algorithm
• For resource categories that contain more than one instance the
resource-allocation graph method does not work, and more
complex ( and less efficient ) methods must be chosen.
• The Banker's Algorithm gets its name because it is a method
that bankers could use to assure that when they lend out
resources they will still be able to satisfy all their clients.
• When a process starts up, it must state in advance the maximum
allocation of resources it may request, up to the amount
available on the system.
• When a request is made, the scheduler determines whether
granting the request would leave the system in a safe state.
• If not, then the process must wait until the request can be
granted safely.
• The banker's algorithm relies on several key data structures:
Banker's Algorithm
• Let n = number of processes, and m = number of resources types.
• Available – Vector of length m indicates the number of available
resources of each type. If Available[ j] = k, then k instances of resource
type Rj are available.
• Max : n × m matrix. Defines the maximum demand of each process. If
Max[i,j] = k, then process Pi may request at most k instances of resource
type Rj .
• Allocation : n × m matrix. Defines the number of resources of each type
currently allocated to each process.
• If Allocation[i,j] = k, then process Pi is currently allocated k instances of
resource type Rj .
• Need : n × m matrix. Indicates the remaining resource need of each
process.
• If Need[i,j] = k, then process Pi may need k more instances of resource
type Rj to complete its task.
• Note that Need[i,j] = Max[i,j] − Allocation[i,j]
a) Safety Algorithm
• 1. Let Work and Finish be vectors of length m and n,
respectively. Initialize:
Work = Available
Finish[i] = false for i = 0, 1, …, n−1
• 2. Find an i such that both:
(a) Finish[i] == false
(b) Needi ≤ Work
If no such i exists, go to step 4.
• 3. Work = Work + Allocationi; Finish[i] = true; go to step 2.
• 4. If Finish[i] == true for all i, then the system is in a
safe state.
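The four steps above can be sketched directly in code. The `is_safe` helper and the Allocation/Need matrices below are illustrative example data (a commonly used five-process, three-resource configuration); the function returns whether the state is safe and, if so, one safe sequence it found.

```python
# A sketch of the safety algorithm: repeatedly find a process whose Need
# fits within Work, pretend it finishes, and reclaim its Allocation.
def is_safe(available, allocation, need):
    m, n = len(available), len(allocation)
    work = list(available)              # step 1: Work = Available
    finish = [False] * n                # step 1: Finish[i] = false
    sequence = []
    progressed = True
    while progressed:                   # steps 2-3: loop until stuck
        progressed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                work = [work[j] + allocation[i][j] for j in range(m)]
                finish[i] = True
                sequence.append(i)
                progressed = True
    return all(finish), sequence        # step 4: safe iff all finished

allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe([3, 3, 2], allocation, need))
```

With Available = (3, 3, 2) this state is safe: P1 can finish first, freeing enough resources for the others in turn.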
b) Resource-Request Algorithm for Process Pi
• Requesti = request vector for process Pi. If Requesti[j] = k, then
process Pi wants k instances of resource type Rj.
• 1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error
condition, since the process has exceeded its maximum claim.
• 2. If Requesti ≤ Available, go to step 3. Otherwise, Pi must wait,
since the resources are not available.
• 3. Pretend to allocate the requested resources to Pi by modifying
the state as follows:
Available = Available – Requesti;
Allocationi = Allocationi + Requesti;
Needi = Needi – Requesti
• If safe ⇒ the resources are allocated to Pi
• If unsafe ⇒ Pi must wait, and the old resource-allocation state is
restored
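The three steps can be sketched as one function. The helper name and the matrices are illustrative (the same example data as is commonly used for this algorithm); the safety check is inlined so the block stands alone.

```python
# A sketch of the resource-request algorithm: validate the request,
# tentatively allocate it, then run the safety check on the new state.
def request_resources(i, request, available, allocation, need):
    m = len(available)
    if any(request[j] > need[i][j] for j in range(m)):      # step 1
        raise ValueError("process exceeded its maximum claim")
    if any(request[j] > available[j] for j in range(m)):    # step 2
        return False                    # Pi must wait: resources unavailable
    # Step 3: pretend to allocate, on copies so the old state is preserved.
    avail = [available[j] - request[j] for j in range(m)]
    alloc = [row[:] for row in allocation]
    nd = [row[:] for row in need]
    for j in range(m):
        alloc[i][j] += request[j]
        nd[i][j] -= request[j]
    # Safety check: can every process still finish in some order?
    work, finish = list(avail), [False] * len(alloc)
    progressed = True
    while progressed:
        progressed = False
        for k in range(len(alloc)):
            if not finish[k] and all(nd[k][j] <= work[j] for j in range(m)):
                work = [work[j] + alloc[k][j] for j in range(m)]
                finish[k] = progressed = True
    return all(finish)                  # grant only if the new state is safe

allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(request_resources(1, [1, 0, 2], [3, 3, 2], allocation, need))
```

A request of (1, 0, 2) by P1 is granted because the resulting state is still safe; a request of (3, 3, 0) by P4 would be denied even though the resources are free, because granting it would leave the system unsafe.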
Deadlock Detection
i) Single Instance of Each Resource Type
• Maintain wait-for graph
• Nodes are processes
• Pi → Pj if Pi is waiting for Pj
• Periodically invoke an algorithm that searches for a cycle in
the graph. If there is a cycle, there exists a deadlock
• An algorithm to detect a cycle in a graph requires on the order of
n² operations, where n is the number of vertices in the graph.
Figure - (a) Resource allocation graph. (b) Corresponding
wait-for graph

• As before, cycles in the wait-for graph indicate deadlocks.


• This algorithm must maintain the wait-for graph, and
periodically search it for cycles.
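The cycle search can be sketched as a depth-first traversal of the wait-for graph. The `has_deadlock` helper and the sample graph are illustrative; an edge Pi → Pj means Pi is waiting for Pj.

```python
# A wait-for graph sketch: an edge Pi -> Pj means Pi waits for Pj; a cycle
# in this graph indicates a deadlock. DFS with a recursion stack finds it.
def has_deadlock(wait_for):
    """wait_for: dict mapping each process to the processes it waits for."""
    visited, on_stack = set(), set()

    def dfs(p):
        visited.add(p)
        on_stack.add(p)                 # processes on the current DFS path
        for q in wait_for.get(p, []):
            if q in on_stack:           # back edge: a cycle exists
                return True
            if q not in visited and dfs(q):
                return True
        on_stack.discard(p)
        return False

    return any(dfs(p) for p in wait_for if p not in visited)

# P1 waits for P2, P2 waits for P3, P3 waits for P1: a deadlock cycle.
print(has_deadlock({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))
```

Each edge and vertex is visited at most once per search, which is consistent with the cost bound quoted above.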
ii) Several Instances of a Resource Type

• Available: A vector of length m indicates the number of


available resources of each type.
• Allocation: An n x m matrix defines the number of resources
of each type currently allocated to each process.
• Request: An n x m matrix indicates the current request of
each process. If Request[i,j] = k, then process Pi is requesting
k more instances of resource type Rj.
iii) Detection Algorithm
• 1. Let Work and Finish be vectors of length m and n, respectively.
Initialize:
(a) Work = Available
(b) For i = 1, 2, …, n, if Allocationi ≠ 0, then Finish[i] =
false; otherwise, Finish[i] = true
• 2. Find an index i such that both:
(a) Finish[i] == false
(b) Requesti ≤ Work
If no such i exists, go to step 4.
• 3. Work = Work + Allocationi; Finish[i] = true; go to step 2.
• 4. If Finish[i] == false for some i, 1 ≤ i ≤ n, then the system is in a
deadlocked state. Moreover, if Finish[i] == false, then Pi is
deadlocked.
• This algorithm requires on the order of O(m × n²) operations to detect
whether the system is in a deadlocked state.
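The detection steps mirror the safety algorithm, with Request in place of Need. The `detect_deadlock` helper and the matrices below are illustrative example data; the function returns the indices of the deadlocked processes (an empty list means no deadlock).

```python
# A sketch of the deadlock-detection algorithm for multiple instances:
# like the safety algorithm, but Request plays the role of Need.
def detect_deadlock(available, allocation, request):
    m, n = len(available), len(allocation)
    work = list(available)
    # Step 1(b): a process holding nothing cannot be part of a deadlock.
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    progressed = True
    while progressed:                   # steps 2-3
        progressed = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                work = [work[j] + allocation[i][j] for j in range(m)]
                finish[i] = progressed = True
    return [i for i in range(n) if not finish[i]]   # step 4: the deadlocked set

allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 3], [2, 1, 1], [0, 0, 2]]
request    = [[0, 0, 0], [2, 0, 2], [0, 0, 0], [1, 0, 0], [0, 0, 2]]
print(detect_deadlock([0, 0, 0], allocation, request))
```

With these matrices no process is deadlocked; but if P2 additionally requested one instance of R3 (Request row [0, 0, 1]), processes P1 through P4 would all be reported as deadlocked.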
Recovery From Deadlock

• There are three basic approaches to recovery from deadlock:
– Inform the system operator, and allow him or her to
intervene manually.
– Terminate one or more processes involved in the deadlock.
– Preempt resources.
i) Process Termination

• Abort all deadlocked processes


• Abort one process at a time until the deadlock cycle is
eliminated
In which order should we choose to abort?
• Priority of the process
• How long process has computed, and how much longer to
completion
• Resources the process has used
• Resources process needs to complete
• How many processes will need to be terminated
• Is process interactive or batch?
ii) Resource Preemption
• Selecting a victim – As in process termination, we must determine the
order of preemption that minimizes cost.
• Cost factors may include such parameters as the number of resources a
deadlocked process is holding and the amount of time the process has thus
far consumed during its execution.
• Rollback – return to some safe state, and restart the process from that
state.
• Starvation – the same process may always be picked as the victim;
include the number of rollbacks in the cost factor.
