
Chapter 4:
CPU Scheduling & Algorithms
CPU Scheduling & Objective
CPU scheduling :

It is a process which allows one process to use the CPU while the execution of another process is on hold (in the waiting state) due to the unavailability of a resource such as I/O, thereby making full use of the CPU. The aim of CPU scheduling is to make the system efficient, fast and fair.

Scheduling Objectives :

1. Fairness : All jobs should get a fair chance to use the CPU.

2. Efficiency : The scheduler should keep the system busy all the time.

3. Response Time : The scheduler should minimize the response time for an interactive user.

4. Throughput : The scheduler should maximize the number of jobs processed per unit time.
CPU and I/O Bursts Cycle
CPU burst cycle : It is a time period when a process is busy with the CPU.
I/O burst cycle : It is a time period when a process is busy working with I/O resources.

• A process execution consists of a cycle of CPU execution and I/O wait.
• A process starts its execution when the CPU is assigned to it, so process execution begins with a CPU burst cycle.
• This is followed by an I/O burst cycle, when the process is busy doing I/O operations.
• A process switches frequently from the CPU burst cycle to the I/O burst cycle and vice versa.
• The complete execution of a process starts with a CPU burst cycle, followed by an I/O burst cycle, then another CPU burst cycle, then another I/O burst cycle, and so on.
• The final CPU burst cycle ends with a system request to terminate execution.
Pre-emptive & Non-Preemptive Scheduling

Pre-emptive Scheduling : Even if the CPU is allocated to one process, the CPU can be preempted and given to another process if that process has a higher priority or satisfies some other criterion.
1. Throughput is less.
2. Only the processes having higher priority are scheduled.
3. It doesn’t treat all processes as equal.
4. Algorithm design is complex.
Circumstances for preemption : a process switches from the running to the ready state, or a process switches from the waiting to the ready state.
For e.g.: Round Robin, Priority algorithms. It is suitable for RTS.

Non-Preemptive Scheduling : Once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state.
1. Throughput is high.
2. It is not suitable for RTS.
3. Processes having any priority can get scheduled.
4. It treats all processes as equal.
5. Algorithm design is simple.
Circumstances for non-preemption : a process switches from the running to the waiting state, or a process terminates.
For e.g.: FCFS algorithm.
CPU Scheduling Criteria
 CPU Utilization : In multiprogramming the main objective is to keep the CPU as busy as possible. CPU utilization can range from 0 to 100 percent.

 Throughput : It is the number of processes that are completed per unit time.

 Turnaround time : The time interval from the submission of a process to the completion of that process is called turnaround time. It is calculated as:
Turnaround Time = Waiting Time + Burst Time, or End Time – Arrival Time
 Waiting time : It is the sum of the time periods a process spends waiting in the ready queue. It is calculated as:
Waiting Time = Turnaround Time – Burst Time (for a non-preemptive schedule this is simply Start Time – Arrival Time)
 Response time : The time period from the submission of a request until the first response is produced is called response time.
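The formulas above can be checked with a small worked example. The sketch below is illustrative only; the process data and the helper name compute_metrics are our own, not from the slides.

```python
# Minimal sketch: computing turnaround and waiting time from the formulas above
# (Turnaround = Completion - Arrival, Waiting = Turnaround - Burst).
# The process data here is hypothetical, just to illustrate the arithmetic.

def compute_metrics(processes):
    """processes: list of dicts with arrival, burst and completion times."""
    for p in processes:
        p["turnaround"] = p["completion"] - p["arrival"]   # End Time - Arrival Time
        p["waiting"] = p["turnaround"] - p["burst"]         # Turnaround - Burst
    return processes

if __name__ == "__main__":
    # Assume a schedule has already been decided and completion times are known.
    procs = [
        {"name": "P1", "arrival": 0, "burst": 5, "completion": 5},
        {"name": "P2", "arrival": 1, "burst": 3, "completion": 8},
    ]
    for p in compute_metrics(procs):
        print(p["name"], "turnaround =", p["turnaround"], "waiting =", p["waiting"])
```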
First-Come, First-Served (FCFS) Scheduling

First come first serve(FCFS) :


• It is a scheduling algorithm in which, as the name suggests, the process that arrives first gets executed first; in other words, the process that requests the CPU first gets the CPU allocated first.
• It is easy to understand and implement.
• A perfect real-life example of FCFS scheduling is buying tickets at a ticket counter.

Example :
First-Come, First-Served (FCFS) Scheduling

Process Burst Time


P1 24
P2 3
P3 3
 Suppose that the processes arrive in the order: P1, P2, P3
The Gantt Chart for the schedule is:
P1 P2 P3

0 24 27 30

 Waiting time for P1 = 0; P2 = 24; P3 = 27


 Average waiting time: (0 + 24 + 27)/3 = 17
FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order: P2, P3, P1
 The Gantt chart for the schedule is:

P2 P3 P1

0 3 6 30

 Waiting time for P1 = 6; P2 = 0; P3 = 3


 Average waiting time: (6 + 0 + 3)/3 = 3
 Much better than previous case
 Convoy effect - short process behind long process
 Consider one CPU-bound and many I/O-bound processes
FCFS Scheduling (Cont.)

FCFS
 Gantt chart:

P1 P2 P3 P4 P5

0 7 11 21 27 35

Process Arrival Time Burst Time Waiting Time

P1 0 7 0-0=0
P2 1 4 7-1=6
P3 2 10 11-2=9
P4 3 6 21-3=18
P5 4 8 27-4=23

Average Waiting Time = (0 + 6 + 9 + 18 + 23)/5 = 56/5 = 11.2
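The table above can be reproduced with a short simulation. This is a minimal sketch, not part of the original slides; the function name fcfs is ours, and it assumes processes are served strictly in arrival order with no CPU idling between them (true for this data set).

```python
# Minimal FCFS sketch: serve processes in arrival order and compute waiting times.
# Reproduces the table above: waiting times 0, 6, 9, 18, 23 -> average 11.2.

def fcfs(processes):
    """processes: list of (name, arrival, burst), assumed sorted by arrival."""
    time = 0
    waits = {}
    for name, arrival, burst in processes:
        time = max(time, arrival)          # CPU may sit idle until the process arrives
        waits[name] = time - arrival       # Waiting Time = Start Time - Arrival Time
        time += burst                      # run to completion (non-preemptive)
    return waits

if __name__ == "__main__":
    procs = [("P1", 0, 7), ("P2", 1, 4), ("P3", 2, 10), ("P4", 3, 6), ("P5", 4, 8)]
    waits = fcfs(procs)
    print(waits)                                 # {'P1': 0, 'P2': 6, 'P3': 9, 'P4': 18, 'P5': 23}
    print(sum(waits.values()) / len(waits))      # 11.2
```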


First-Come, First-Served (FCFS) Scheduling
Q1. Find the average waiting time & turnaround time using the FCFS algorithm

Process Arrival Time Burst Time


P1 0 8
P2 1 4
P3 2 9
P4 3 5

Q2. Find the average waiting time & turnaround time using the FCFS algorithm

Process Burst Time


P1 6
P2 8
P3 7
P4 3
Shortest-Job-First (SJF) Scheduling
 Shortest Job First (SJF): It is an algorithm in which the
process having the smallest execution time is chosen for the
next execution.
 This scheduling method can be preemptive or non-
preemptive.

 It significantly reduces the average waiting time for other processes awaiting execution.

 The full form of SJF is Shortest Job First.


Example of SJF
Shortest-Job-First (SJF) Scheduling
Q1. The jobs are scheduled for execution as follows:
Example of SJF
Process Burst Time
P1 6
P2 8
P3 7
P4 3
(assume all processes arrive at time 0)
 SJF scheduling chart (Gantt chart)
P4 P1 P3 P2

0 3 9 16 24

 Average waiting time = (3 + 16 + 9 + 0) / 4 = 7


Example of SJF
Q1. The jobs are scheduled for execution as follows: (Non-Preemptive SJF)
Process Arrival Time Burst Time
P1 0 7
P2 1 4
P3 2 10
P4 3 6
P5 4 8
Solution :

Process Arrival Time Burst Time Waiting Time


P1 0 7 0
P2 1 4 7-1=6
P3 2 10 25-2=23
P4 3 6 11-3=8
P5 4 8 17-4=13

Average Waiting Time=(0+6+23+8+13)/5=10 ms
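The schedule above can be cross-checked with a small non-preemptive SJF simulation. This is a minimal sketch of our own (the function name sjf_nonpreemptive is not from the slides), assuming ties on burst time are broken by arrival time.

```python
# Minimal non-preemptive SJF sketch: at every scheduling point pick the arrived
# process with the shortest burst. Reproduces the waiting times above
# (0, 6, 23, 8, 13 -> average 10).

def sjf_nonpreemptive(processes):
    """processes: list of (name, arrival, burst)."""
    remaining = sorted(processes, key=lambda p: p[1])   # by arrival time
    time, waits = 0, {}
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                                   # CPU idle until next arrival
            time = min(p[1] for p in remaining)
            continue
        name, arrival, burst = min(ready, key=lambda p: (p[2], p[1]))
        waits[name] = time - arrival
        time += burst                                   # run to completion
        remaining.remove((name, arrival, burst))
    return waits

if __name__ == "__main__":
    procs = [("P1", 0, 7), ("P2", 1, 4), ("P3", 2, 10), ("P4", 3, 6), ("P5", 4, 8)]
    waits = sjf_nonpreemptive(procs)
    print(waits, sum(waits.values()) / len(waits))      # average 10.0
```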


Example of SJF
Q1. The jobs are scheduled for execution as follows: (Preemptive SJF)
Process Arrival Time Burst Time
P1 0 7
P2 1 4
P3 2 10
P4 3 6
P5 4 8

Solution : Gantt Chart

P1 P2 P1 P4 P5 P3

0 1 5 11 17 25 35

Process Arrival Time Burst Time Waiting Time


P1 0 7 0+(5-1)=4
P2 1 4 1-1=0
P3 2 10 25-2=23
P4 3 6 11-3=8
P5 4 8 17-4=13

Average Waiting Time = (4 + 0 + 23 + 8 + 13)/5 = 48/5 = 9.6
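A unit-time simulation of preemptive SJF (shortest remaining time first) reproduces this Gantt chart. The sketch below is ours, not from the slides; it assumes ties on remaining time are broken by arrival time, which is what gives P1 the CPU at t = 5 rather than P4.

```python
# Minimal preemptive SJF (SRTF) sketch: every time unit, run the arrived process
# with the least remaining time. Reproduces the waiting times above
# (4, 0, 23, 8, 13 -> average 9.6).

def srtf(processes):
    """processes: list of (name, arrival, burst)."""
    remaining = {name: burst for name, _, burst in processes}
    arrival = {name: arr for name, arr, _ in processes}
    completion, time = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:
            time += 1
            continue
        # Shortest remaining time first; break ties by arrival time.
        current = min(ready, key=lambda n: (remaining[n], arrival[n]))
        remaining[current] -= 1
        time += 1
        if remaining[current] == 0:
            completion[current] = time
            del remaining[current]
    bursts = {name: burst for name, _, burst in processes}
    return {n: completion[n] - arrival[n] - bursts[n] for n in completion}

if __name__ == "__main__":
    procs = [("P1", 0, 7), ("P2", 1, 4), ("P3", 2, 10), ("P4", 3, 6), ("P5", 4, 8)]
    waits = srtf(procs)
    print(waits, sum(waits.values()) / len(waits))      # average 9.6
```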


Example of Shortest-remaining-time-first
 Now we add the concepts of varying arrival times and preemption to the analysis

Process Arrival Time Burst Time


P1 0 8
P2 1 4
P3 2 9
P4 3 5
 Preemptive SJF Gantt Chart
P1 P2 P4 P1 P3

0 1 5 10 17 26

 Average waiting time = [(10-1) + (1-1) + (17-2) + (5-3)]/4 = 26/4 = 6.5 msec


Priority Scheduling
 A priority number (integer) is associated with each process
 The CPU is allocated to the process with the highest priority
(smallest integer ⇒ highest priority)
 Preemptive
 Nonpreemptive

 SJF is priority scheduling where priority is the inverse of predicted


next CPU burst time
 Problem ⇒ Starvation – low-priority processes may never execute
 Solution ⇒ Aging – as time progresses, increase the priority of the process
Example-1 of Priority Scheduling

Process Burst Time Priority


P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2
 Priority scheduling Gantt Chart

P2 P5 P1 P3 P4

0 1 6 16 18 19

 Average waiting time = 8.2 msec
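The Gantt chart above can be verified with a short non-preemptive priority simulation. This is a minimal sketch of our own, assuming all five processes arrive at time 0 (no arrival times are given) and that a smaller number means higher priority, as stated on the previous slide.

```python
# Minimal non-preemptive priority scheduling sketch (smaller number = higher
# priority, all processes assumed to arrive at time 0). Reproduces the chart
# above: P2, P5, P1, P3, P4 and an average waiting time of 8.2.

def priority_schedule(processes):
    """processes: list of (name, burst, priority)."""
    order = sorted(processes, key=lambda p: p[2])   # highest priority first
    time, waits = 0, {}
    for name, burst, _ in order:
        waits[name] = time                          # waiting time = start time here
        time += burst
    return order, waits

if __name__ == "__main__":
    procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]
    order, waits = priority_schedule(procs)
    print([name for name, _, _ in order])           # ['P2', 'P5', 'P1', 'P3', 'P4']
    print(sum(waits.values()) / len(waits))         # 8.2
```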


Example-2 of Priority Scheduling
Example-3 of Priority Scheduling (non-preemptive)
Round Robin (RR)
 Each process gets a small unit of CPU time (time quantum q),
usually 10-100 milliseconds. After this time has elapsed, the
process is preempted and added to the end of the ready queue.
 If there are n processes in the ready queue and the time quantum
is q, then each process gets 1/n of the CPU time in chunks of at
most q time units at once. No process waits more than (n-1)q
time units.
 Timer interrupts every quantum to schedule next process
 Performance
 q large ⇒ FIFO
 q small ⇒ q must be large with respect to context switch, otherwise overhead is too high
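A small ready-queue simulation of the rule described above (run for at most q time units, then move to the back of the queue) is sketched below; it can be applied to the exercises on the next slides. The function name round_robin is ours, and all processes are assumed to be in the ready queue at time 0.

```python
from collections import deque

# Minimal Round Robin sketch: each process runs for at most `quantum` time units,
# then is preempted and appended to the back of the ready queue.
# All processes are assumed to be available at time 0.

def round_robin(processes, quantum):
    """processes: list of (name, burst). Returns completion time per process."""
    queue = deque((name, burst) for name, burst in processes)
    time, completion = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        time += run
        remaining -= run
        if remaining == 0:
            completion[name] = time
        else:
            queue.append((name, remaining))   # back of the ready queue
    return completion

if __name__ == "__main__":
    # Q1 from the next slide: time quantum = 4
    print(round_robin([("P1", 10), ("P2", 4), ("P3", 9), ("P4", 6)], quantum=4))
```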
Example of RR with Time Quantum = 4

Q1. Time Quantum = 4

Process Burst Time
P1 10
P2 4
P3 9
P4 6

Q2. Time Quantum = 3

Process Burst Time
P1 10
P2 3
P3 7
P4 5

Q3. Time Quantum = 1

Process Burst Time
P1 10
P2 1
P3 2
P4 1
P5 5
Example of RR with Time Quantum = 2

Q4. Time Quantum = 4

Process Id Burst Time
P1 5
P2 3
P3 1
P4 2
P5 3
Multilevel Queue

1. Multilevel queue scheduling classifies processes into different groups. It partitions the ready queue into several separate queues.
2. The processes are permanently assigned to one queue based on some property such as memory size, priority, process type, etc. Each queue has its own scheduling algorithm.
3. In a system there are foreground processes and background processes, so the system can divide processes into two queues: one for foreground and one for background.
4. The foreground queue can be scheduled with the Round Robin algorithm, whereas the background queue can be scheduled with the First Come First Serve algorithm.
5. Scheduling is done for the processes inside each queue as well as among the separate queues, as sketched below.
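A minimal sketch of the two-queue arrangement described above: the foreground queue is served Round Robin and has absolute priority over the background queue, which is served FCFS. The class name, the queue assignment rule, and the simplified dispatch loop are illustrative assumptions, not from the slides.

```python
from collections import deque

# Minimal multilevel-queue sketch: foreground processes are scheduled Round Robin,
# background processes FCFS, and the foreground queue has absolute priority.
# New arrivals and preemption of background work are ignored for simplicity.

class MultilevelQueue:
    def __init__(self, quantum):
        self.quantum = quantum
        self.foreground = deque()   # interactive processes (Round Robin)
        self.background = deque()   # batch processes (FCFS)

    def add(self, name, burst, foreground):
        (self.foreground if foreground else self.background).append([name, burst])

    def run(self):
        time = 0
        while self.foreground or self.background:
            if self.foreground:                       # foreground has priority
                proc = self.foreground.popleft()
                run = min(self.quantum, proc[1])
                proc[1] -= run
                if proc[1] > 0:
                    self.foreground.append(proc)      # Round Robin: back of queue
            else:
                proc = self.background.popleft()
                run = proc[1]                         # FCFS: run to completion
                proc[1] = 0
            time += run
            print(f"t={time}: ran {proc[0]} for {run}")

if __name__ == "__main__":
    mlq = MultilevelQueue(quantum=2)
    mlq.add("fg1", 3, foreground=True)
    mlq.add("fg2", 2, foreground=True)
    mlq.add("bg1", 5, foreground=False)
    mlq.run()
```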
Multilevel Queue Scheduling
What is Deadlock
 In a multiprogramming environment, several processes may compete for a finite number of resources.
 A process requests resources, and if the resources are not available then the process enters the waiting state. Sometimes a waiting process is never again able to change its state because the resources requested by it are held by other waiting processes. This situation is called deadlock.
 When a process requests resources held by another waiting process, which in turn is waiting for resources held by yet another waiting process, and not a single process can execute its task, then deadlock occurs in the system.
Necessary conditions for Deadlock
Necessary Conditions:
 1. Mutual exclusion : At least one resource must be held in a non-sharable mode; that is,
only one process at a time can use the resource.
 2. Hold and Wait : A process must be holding at least one resource and waiting to acquire
additional resources that are currently being held by other processes.
 3. No pre-emption : Resources cannot be pre-empted, i.e. a resource can be released only voluntarily by the process holding it.
 4. Circular wait : A set {P0, P1, …, Pn} of waiting processes must exist such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, …, Pn-1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0. Each process is waiting for resources held by other waiting processes in a circular chain.
Banker’s Algorithm
Banker’s Algorithm :
This algorithm calculates the resources allocated, required and available before allocating resources to any process, in order to avoid deadlock. It maintains two matrices on a dynamic basis. Matrix A contains the resources allocated to the different processes at a given time. Matrix B maintains the resources still required by the different processes at the same time. For each resource request by a process, the OS goes through a trial of imaginary allocation and updating; if after this the system remains in a safe state, the changes are made in the actual matrices.

Each process must claim its maximum resource use a priori. When a process requests a resource it may have to wait.

When a process gets all its resources it must return them in a finite amount of time.
Data Structures for the Banker’s Algorithm
Algorithm (F = vector of free resources) :

Step 1 : When a process requests a resource, the OS allocates it on a trial basis.
Step 2 : After the trial allocation, the OS updates all the matrices and vectors. This updating can be done by the OS in a separate work area in memory.
Step 3 : It compares the F vector with each row of matrix B on a vector-to-vector basis.
Step 4 : If F is smaller than every row in matrix B, i.e. even if all free resources are allocated no process in matrix B can complete its task, then the OS concludes that the system is in an unsafe state.
Step 5 : If F is greater than or equal to some row for a process in matrix B, the OS allocates all required resources for that process on a trial basis. It assumes that after completion the process will release all the resources allocated to it; these resources can then be added to the free vector.
Step 6 : After execution of a process, it removes the row indicating the executed process from both matrices.
Step 7 : The algorithm repeats the procedure from Step 3 for each process in the matrices and checks whether all processes can complete execution without entering an unsafe state.
Resource-Request Algorithm for Process Pi

Requesti = request vector for process Pi. If Requesti[j] = k, then process Pi wants k instances of resource type Rj.

1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the process has exceeded its maximum claim.
2. If Requesti ≤ Available, go to step 3. Otherwise Pi must wait, since the resources are not available.
3. Pretend to allocate the requested resources to Pi by modifying the state as follows:
Available = Available – Requesti;
Allocationi = Allocationi + Requesti;
Needi = Needi – Requesti;
 If safe ⇒ the resources are allocated to Pi
 If unsafe ⇒ Pi must wait, and the old resource-allocation state is restored
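The three steps above can be sketched directly in code. This is our own illustrative sketch, not from the slides; the function names is_safe and request_resources are assumptions, and the safety check is the standard Banker's safety algorithm (a worked safety check on the snapshot from the next slides follows those slides).

```python
# Minimal sketch of the resource-request algorithm for process Pi, following
# steps 1-3 above. Vectors and matrices are plain Python lists.

def is_safe(available, allocation, need):
    """Standard Banker's safety check: can every process still finish?"""
    work, finished = available[:], [False] * len(allocation)
    while True:
        progressed = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progressed = True
        if not progressed:
            return all(finished)

def request_resources(i, request, available, allocation, need):
    if any(r > n for r, n in zip(request, need[i])):
        raise ValueError("process has exceeded its maximum claim")   # step 1
    if any(r > a for r, a in zip(request, available)):
        return False                                                 # step 2: Pi must wait
    # Step 3: pretend to allocate, then run the safety check.
    new_available = [a - r for a, r in zip(available, request)]
    new_allocation = [row[:] for row in allocation]
    new_need = [row[:] for row in need]
    new_allocation[i] = [a + r for a, r in zip(allocation[i], request)]
    new_need[i] = [n - r for n, r in zip(need[i], request)]
    if is_safe(new_available, new_allocation, new_need):
        available[:], allocation[:], need[:] = new_available, new_allocation, new_need
        return True      # safe: the request is granted
    return False         # unsafe: Pi must wait, the original state is left unchanged
```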
Example of Banker’s Algorithm
 5 processes P0 through P4;
3 resource types: A (10 instances), B (5 instances), and C (7 instances)
Snapshot at time T0:

        Allocation    Max      Available
        A B C         A B C    A B C
P0      0 1 0         7 5 3    3 3 2
P1      2 0 0         3 2 2
P2      3 0 2         9 0 2
P3      2 1 1         2 2 2
P4      0 0 2         4 3 3
Example (Cont.)
 The content of the matrix Need is defined to be Max – Allocation

Need
        A B C
P0      7 4 3
P1      1 2 2
P2      6 0 0
P3      0 1 1
P4      4 3 1

 The system is in a safe state since the sequence <P1, P3, P4, P2, P0> satisfies the safety criteria
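The safety check for this snapshot can be reproduced with a short sketch (our own code, not from the slides). It applies the safety algorithm to the Allocation, Max and Available values above; the safe sequence it prints may differ in order from <P1, P3, P4, P2, P0>, but any sequence it finds satisfies the same safety criteria.

```python
# Minimal Banker's safety-algorithm sketch for the snapshot above:
# 3 resource types A, B, C with 10, 5 and 7 instances.

allocation = [[0, 1, 0],   # P0
              [2, 0, 0],   # P1
              [3, 0, 2],   # P2
              [2, 1, 1],   # P3
              [0, 0, 2]]   # P4
maximum    = [[7, 5, 3],
              [3, 2, 2],
              [9, 0, 2],
              [2, 2, 2],
              [4, 3, 3]]
available  = [3, 3, 2]

# Need = Max - Allocation (matches the matrix on this slide).
need = [[m - a for m, a in zip(mrow, arow)] for mrow, arow in zip(maximum, allocation)]

work, finished, sequence = available[:], [False] * 5, []
while len(sequence) < 5:
    for i in range(5):
        if not finished[i] and all(n <= w for n, w in zip(need[i], work)):
            work = [w + a for w, a in zip(work, allocation[i])]   # Pi releases its resources
            finished[i] = True
            sequence.append(f"P{i}")
            break
    else:
        break   # no process could proceed

print("Need:", need)
print("Safe sequence:" if all(finished) else "Unsafe state:", sequence)
```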
End of Chapter 4
