
Unit 4: CPU Scheduling

4.1 Scheduling – Objectives, concept, criteria, CPU and I/O burst cycle.
4.2 Types of Scheduling – Pre-emptive, Non-pre-emptive.
4.3 Scheduling Algorithms – First Come First Served (FCFS), Shortest Job First (SJF), Round Robin (RR), Priority.
4.4 Other Scheduling – Multilevel, Multiprocessor, Real-time.
4.5 Deadlock – System model, principles, necessary conditions, mutual exclusion, critical region.
4.6 Deadlock handling – Prevention and avoidance.

4.1 Scheduling – Objectives, concept, criteria, CPU and I/O burst cycle
Objectives:
CPU scheduling is the basis of multi-programmed operating systems. The
objective of multiprogramming is to have some process running at all times,
in order to maximize CPU utilization. Scheduling is a fundamental
operating-system function. Almost all computer resources are scheduled
before use.
Scheduling of processes/work is done to finish the work on time.
Below are some terminologies with respect to CPU scheduling:
Arrival Time: Time at which the process arrives in the ready queue.
Completion Time: Time at which process completes its execution.
Burst Time: Time required by a process for CPU execution.
Turn Around Time: Time Difference between completion time and arrival time.
Turn Around Time = Completion Time – Arrival Time
Waiting Time (W.T): Time Difference between turnaround time and burst time.
Waiting Time = Turn Around Time – Burst Time
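The two formulas above can be checked with a short sketch (the process values used below are made-up for illustration):

```python
def turnaround_time(completion, arrival):
    # Turn Around Time = Completion Time - Arrival Time
    return completion - arrival

def waiting_time(completion, arrival, burst):
    # Waiting Time = Turn Around Time - Burst Time
    return turnaround_time(completion, arrival) - burst

# Hypothetical process: arrives at t=2, needs 7 ms of CPU, completes at t=12.
print(turnaround_time(12, 2))   # 10
print(waiting_time(12, 2, 7))   # 3
```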

The scheduling criteria include the following:

1. CPU utilisation –
The main objective of any CPU scheduling algorithm is to keep
the CPU as busy as possible. Theoretically, CPU utilisation can
range from 0 to 100 percent, but in a real system it varies from 40
to 90 percent depending on the load on the system.
2. Throughput –
A measure of the work done by the CPU is the number of
processes executed and completed per unit time. This is called
throughput. The throughput may vary depending upon the length
or duration of processes.

3. Turnaround time –
For a particular process, an important criterion is how long it takes
to execute that process. The time elapsed from the time of
submission of a process to the time of completion is known as the
turnaround time. Turn-around time is the sum of times spent
waiting to get into memory, waiting in ready queue, executing in
CPU, and waiting for I/O.

4. Waiting time –
A scheduling algorithm does not affect the time required to
complete the process once it starts execution. It only affects the
waiting time of a process i.e. time spent by a process waiting in
the ready queue.

5. Response time –
In an interactive system, turnaround time is not the best criterion.
A process may produce some output fairly early and continue
computing new results while previous results are being output to
the user. Thus, another criterion is the time taken from the
submission of a request until the first response is produced. This
measure is called response time.

CPU-I/O Burst Cycle

Process execution consists of a cycle of CPU execution and I/O wait.


Processes alternate between these two states. Process execution begins
with a CPU burst. That is followed by an I/O burst, then another CPU
burst, then another I/O burst, and so on. Eventually, the last CPU burst
ends with a system request to terminate execution, rather than with
another I/O burst.
(Figure: alternating sequence of CPU and I/O bursts.)

4.2 Types of Scheduling – Pre-emptive, Non-pre-emptive
Pre-emptive scheduling is a CPU scheduling technique that works by
dividing CPU time into slices and giving one slice to a process at a time.
The time slice given may or may not be enough to complete the whole
process. When the burst time of the process is greater than the time slice,
the process is placed back into the ready queue and executes again in a
later turn. Pre-emption also occurs when a process switches from the
running to the ready state.
Algorithms based on pre-emptive scheduling are Round Robin (RR),
pre-emptive priority, and SRTF (shortest remaining time first).
Non-pre-emptive scheduling is a CPU scheduling technique in which a
process takes the resource (CPU time) and holds it until the process
terminates or moves to the waiting state. No process is interrupted until it
is completed; only then does the processor switch to another process.
Algorithms based on non-pre-emptive scheduling are FCFS, non-pre-emptive
priority scheduling, and non-pre-emptive Shortest Job First (SJF).
Pre-emptive vs Non-Pre-emptive Scheduling

 Resources: In pre-emptive scheduling, resources are allocated to a process for a limited time; in non-pre-emptive scheduling, resources are used and held by the process until it terminates.
 Interruption: In pre-emptive scheduling, a process can be interrupted even before completion; in non-pre-emptive scheduling, a process is not interrupted until its life cycle is complete.
 Starvation: In pre-emptive scheduling, starvation may be caused by the insertion of higher-priority processes into the queue; in non-pre-emptive scheduling, starvation can occur when a process with a large burst time occupies the system.
 Overhead: Pre-emptive scheduling needs storage overhead to maintain the queue and the remaining times; non-pre-emptive scheduling requires no such overhead.

4.3 Scheduling Algorithms – First Come First Served (FCFS), Shortest Job First (SJF), Round Robin (RR), Priority

First Come First Served (FCFS) Scheduling

 With this scheme, the process that requests the CPU first is allocated the
CPU first.
 The implementation of the FCFS policy is easily managed with FIFO queue.
 When a process enters the ready queue, its PCB (Process Control Block) is
linked onto the tail of the queue.
 When the CPU is free, it is allocated to the process at the head of the queue.
The running process is then removed from the queue.
 FCFS scheduling is non-pre-emptive, very simple, and can be implemented
with a FIFO queue. It is not a good choice when burst times vary widely (a
mix of CPU-bound and I/O-bound processes); its drawback is that it causes
short processes to wait behind longer ones (the convoy effect).
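The FIFO behaviour described above can be sketched as follows (the process names and burst times are illustrative, and all processes are assumed to arrive at time 0):

```python
def fcfs_waiting_times(processes):
    """processes: list of (name, burst_time) in arrival order,
    all arriving at time 0. Returns each process's waiting time."""
    t = 0
    waiting = {}
    for name, burst in processes:
        waiting[name] = t   # a process waits until everything ahead of it finishes
        t += burst          # then occupies the CPU for its whole burst
    return waiting

# A long CPU-bound process arriving first delays the short ones behind it.
w = fcfs_waiting_times([("P1", 24), ("P2", 3), ("P3", 3)])
print(w)                           # {'P1': 0, 'P2': 24, 'P3': 27}
print(sum(w.values()) / len(w))    # average waiting time: 17.0
```

If the short processes had arrived first, the average waiting time would be far lower, which is exactly the convoy-effect drawback noted above.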

Shortest Job First (SJF) Scheduling

 When the CPU is available, it is assigned to the process that has the
smallest next CPU burst.
 If two processes have the same length of next CPU burst, FCFS
scheduling is used to break the tie.
 A more appropriate term for this scheduling method would be the shortest
next CPU burst algorithm because scheduling depends on the length of the
next CPU burst of a process rather than its total length.
 SJF schedules on burst times (not total job time). This information is
difficult to obtain, so for short-term scheduling it must be approximated
(for example, from previous bursts); it is used frequently for long-term
scheduling. SJF can be pre-emptive (SRTF) or non-pre-emptive, and is a
special case of priority scheduling.

Pre-emptive and Non-pre-emptive SJF

 The SJF algorithm can either be pre-emptive or non-pre-emptive.


 The choice arises when a new process arrives at the ready queue while a
previous process is still executing.
 The next CPU burst of the newly arrived process may be shorter than what
is left of the currently executing process.
 A pre-emptive SJF algorithm will pre-empt the currently executing process,
whereas a non-pre-emptive algorithm will allow the currently running
process to finish its CPU burst.
 Pre-emptive SJF scheduling is sometimes called shortest-remaining-time-
first (SRTF) scheduling.
Process Table (values reconstructed from the waiting times below):

Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5

The SRTF schedule is: P1 (0–1), P2 (1–5), P4 (5–10), P1 (10–17), P3 (17–26).

Waiting time for P1 = 10 – 1 = 9
Waiting time for P2 = 1 – 1 = 0
Waiting time for P3 = 17 – 2 = 15
Waiting time for P4 = 5 – 3 = 2

Average waiting time = (9 + 0 + 15 + 2) / 4 = 6.5 ms
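These waiting times can be reproduced with a minimal per-millisecond simulation of SRTF (pre-emptive SJF); the arrival/burst values below assume the standard example with arrivals 0, 1, 2, 3 and bursts 8, 4, 9, 5:

```python
def srtf_waiting_times(processes):
    """Pre-emptive SJF (SRTF), simulated one time unit at a time.
    processes: list of (name, arrival_time, burst_time)."""
    arrival = {n: a for n, a, b in processes}
    burst = {n: b for n, a, b in processes}
    remaining = dict(burst)
    completion = {}
    t = 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= t]
        if not ready:
            t += 1                     # CPU idle until the next arrival
            continue
        cur = min(ready, key=lambda n: remaining[n])  # shortest remaining time
        remaining[cur] -= 1
        t += 1
        if remaining[cur] == 0:
            completion[cur] = t
            del remaining[cur]
    # waiting time = turnaround time - burst time
    return {n: completion[n] - arrival[n] - burst[n] for n in burst}

w = srtf_waiting_times([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)])
print(w)                           # {'P1': 9, 'P2': 0, 'P3': 15, 'P4': 2}
print(sum(w.values()) / len(w))    # 6.5
```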

Priority Scheduling

 A priority is associated with each process and the CPU is allocated to the
process with the highest priority, Equal priority processes are scheduled in
FCFS order.
 Conventions differ: a low number may represent either a high or a low
priority, and the convention must be stated in each problem. In this text
we assume that a low number represents a high priority.
 Priority scheduling can be either pre-emptive or non-pre-emptive.
 A pre-emptive priority scheduling algorithm will pre-empt the CPU, if the
priority of the newly arrived process is higher than the priority of the
currently running process.
 A non-pre-emptive priority scheduling algorithm will simply put the new
process at the head of the ready queue.

A major problem with priority scheduling algorithms is indefinite blocking, or
starvation: a low-priority process may wait indefinitely while higher-priority
processes keep arriving. A common solution is aging, which gradually increases
the priority of processes that wait in the system for a long time.
Process Table (values reconstructed from the waiting times below; all
processes arrive at time 0, and a lower number means a higher priority):

Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2

The schedule is: P2 (0–1), P5 (1–6), P1 (6–16), P3 (16–18), P4 (18–19).

Waiting time for P1 = 6
Waiting time for P2 = 0
Waiting time for P3 = 16
Waiting time for P4 = 18
Waiting time for P5 = 1

Average waiting time = (6 + 0 + 16 + 18 + 1) / 5 = 8.2 ms
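A non-pre-emptive priority schedule for such an example can be sketched by simply sorting on priority (the burst/priority values below follow the worked example, assuming all processes arrive at time 0 and lower numbers mean higher priority):

```python
def priority_waiting_times(processes):
    """Non-pre-emptive priority scheduling, all processes arriving at time 0.
    processes: list of (name, burst_time, priority); a lower number
    means a higher priority."""
    waiting, t = {}, 0
    for name, burst, prio in sorted(processes, key=lambda p: p[2]):
        waiting[name] = t   # waits for all higher-priority processes to finish
        t += burst
    return waiting

w = priority_waiting_times(
    [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)])
print(w)                           # {'P2': 0, 'P5': 1, 'P1': 6, 'P3': 16, 'P4': 18}
print(sum(w.values()) / len(w))    # 8.2
```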

Round Robin (RR) Scheduling

 The RR scheduling algorithm is designed especially for time sharing systems.


 It is similar to FCFS scheduling but pre-emption is added to switch between
processes.
 A small unit of time called a time quantum or time slice is defined.
 If the time quantum is too large, RR simply becomes FCFS; if it is too
small, context-switching costs dominate. A rule of thumb is that 80% of
CPU bursts should be shorter than the time quantum.

Process Table (values reconstructed from the waiting times below; all
processes arrive at time 0):

Process   Burst Time
P1        24
P2        3
P3        3

Let's take time quantum = 4 ms. Then the resulting RR schedule is:
P1 (0–4), P2 (4–7), P3 (7–10), after which P1 runs alone in 4 ms slices
until it finishes at 30.

P1 waits for 6 ms (10 – 4), P2 waits for 4 ms, and P3 waits for 7 ms.
Thus,
Average waiting time = (6 + 4 + 7) / 3 ≈ 5.66 ms
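The circular, time-sliced behaviour of RR can be sketched with a FIFO queue (the burst values below follow the worked example, with all processes arriving at time 0):

```python
from collections import deque

def rr_waiting_times(processes, quantum):
    """Round Robin with all processes arriving at time 0.
    processes: list of (name, burst_time)."""
    remaining = {n: b for n, b in processes}
    queue = deque(n for n, _ in processes)
    completion, t = {}, 0
    while queue:
        cur = queue.popleft()
        run = min(quantum, remaining[cur])   # run for at most one time slice
        t += run
        remaining[cur] -= run
        if remaining[cur] == 0:
            completion[cur] = t
        else:
            queue.append(cur)                # unfinished: back of the queue
    # waiting time = turnaround time - burst time
    return {n: completion[n] - b for n, b in processes}

w = rr_waiting_times([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4)
print(w)                           # {'P1': 6, 'P2': 4, 'P3': 7}
print(sum(w.values()) / len(w))    # average ≈ 5.67
```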

4.4 Other Scheduling – Multilevel, Multiprocessor, Real-time

Multilevel Queue Scheduling:

 This scheduling algorithm has been created for situations in which
processes are easily classified into different groups.
 A multilevel queue scheduling algorithm partitions the ready queue into
several separate queues.
 The processes are permanently assigned to one queue, generally based on
some property of the process, such as memory size, process priority or
process type.

Multilevel Feedback Queue Scheduling


 This scheduling algorithm allows a process to move between queues.
 The idea is to separate processes according to the characteristics of their
CPU bursts.
 If a process uses too much CPU time, it will be moved to a lower priority
queue.
 Similarly, a process that waits too long in a lower priority queue may be
moved to a higher priority queue. This form of aging prevents starvation.
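The demotion rule described above can be sketched as a toy simulation (the number of levels and the quanta are made-up parameters; the aging/promotion step is omitted for brevity):

```python
from collections import deque

def mlfq(processes):
    """Toy multilevel feedback queue: three levels with quanta 2, 4, 8 ms.
    A process that uses its whole quantum is demoted one level.
    processes: list of (name, burst_time); all arrive at time 0.
    Returns each process's completion time."""
    quanta = [2, 4, 8]
    queues = [deque(), deque(), deque()]
    remaining = {n: b for n, b in processes}
    for n, _ in processes:
        queues[0].append(n)          # every process starts in the top queue
    t, finish = 0, {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty
        cur = queues[level].popleft()
        run = min(quanta[level], remaining[cur])
        t += run
        remaining[cur] -= run
        if remaining[cur] == 0:
            finish[cur] = t
        else:
            # used its full quantum without finishing: demote one level
            queues[min(level + 1, 2)].append(cur)
    return finish

# A CPU-bound process (P1) drifts down; a short one (P2) finishes up top.
print(mlfq([("P1", 5), ("P2", 1)]))   # {'P2': 3, 'P1': 6}
```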

Multiple-Processor Scheduling

If multiple CPUs are available, the scheduling problem is correspondingly more
complex. We concentrate on homogeneous multiprocessors; even with those systems
we have certain limitations. If several identical processors are available, then
load sharing can occur. It is possible to provide a separate queue for each
processor. In this case, however, one processor could be idle, with an empty
queue, while another processor was very busy. To prevent this situation, we use
a common ready queue: all processes go into one queue and are scheduled onto any
available processor.

In such a scheme, one of two scheduling approaches may be used. In the first,
each processor is self-scheduling: it examines the common ready queue and selects
a process to execute. Each processor must be programmed very carefully, since we
must ensure that two processors do not choose the same process and that processes
are not lost from the queue. The other approach avoids this problem by appointing
one processor as scheduler for the other processors, creating a master-slave
structure. This asymmetric multiprocessing is far simpler than symmetric
multiprocessing, because only one processor accesses the system data structures,
alleviating the need for data sharing.

Real-Time Scheduling
Real-time computing is divided into two types. Hard real-time systems are required to
complete a critical task within a guaranteed amount of time. Generally, a process is
submitted along with a statement of the amount of time in which it needs to complete
or perform I/O. The scheduler then either admits the process, guaranteeing that the
process will complete on time, or rejects the request as impossible. This is known as
resource reservation.

Soft real-time computing is less restrictive. It requires that critical processes receive
priority over less fortunate ones. Implementing soft real-time functionality requires
careful design of the scheduler and related aspects of the operating system. First, the
system must have priority scheduling, and real-time processes must have the highest
priority. The priority of real-time processes must not degrade over time, even though
the priority of non-real-time processes may. Second, the dispatch latency must be
small. The smaller the latency, the faster a real-time process can start executing once it
is runnable. The high-priority process would be waiting for a lower-priority one to
finish. This situation is known as priority inversion. In fact, a chain of processes
could all be accessing resources that the high-priority process needs. This problem can
be solved via the priority-inheritance protocol, in which all these processes (the ones
accessing resources that the high-priority process needs) inherit the high priority until
they are done with the resource in question. When they are finished, their priority
reverts to its original value.
