21CS56 - Operating Systems, Chapter 5 (Module 2): Process Scheduling



Department of Information Science and Engg 1


Chapter 5: Process Scheduling
● Basic Concepts
● Scheduling Criteria
● Scheduling Algorithms
● Multiple-Processor Scheduling

Objectives

● To introduce CPU scheduling, which is the basis for


multiprogrammed operating systems
● To describe various CPU-scheduling algorithms

Basic Concepts
● Maximum CPU utilization
obtained with multiprogramming
● CPU–I/O Burst Cycle – Process
execution consists of a cycle of
CPU execution and I/O wait
● CPU burst followed by I/O burst
● CPU burst distribution is of main
concern

Histogram of CPU-burst Times
Large number of short bursts

Small number of longer bursts

CPU Scheduler
● The CPU scheduler selects from among the processes in the ready queue and
allocates the CPU to one of them
● Queue may be ordered in various ways
● CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates

Preemptive and Nonpreemptive Scheduling
● For situations 1 and 4, there is no choice in terms of scheduling: a new process
(if one exists in the ready queue) must be selected for execution ⇒ nonpreemptive
● Under nonpreemptive scheduling, once the CPU has been allocated to a
process, the process keeps the CPU until it releases it, either by terminating or by
switching to the waiting state.
● For all other situations (2 and 3) there is a choice ⇒ preemptive scheduling,
which can result in race conditions when data are shared among several processes.
● Virtually all modern operating systems, including Windows, macOS, Linux, and UNIX, use
preemptive scheduling algorithms.
● Consider access to shared data
● Consider preemption while in kernel mode
● Consider interrupts occurring during crucial OS activities

Preemptive Scheduling and Race Conditions
▪ Preemptive scheduling can result in race conditions
when data are shared among several processes.
▪ Consider the case of two processes that share data.
While one process is updating the data, it is preempted
so that the second process can run. The second process
then tries to read the data, which are in an inconsistent
state.

Dispatcher
● Dispatcher module gives control of the
CPU to the process selected by the
short-term scheduler; this involves:
● switching context
● switching to user mode
● jumping to the proper location in the
user program to restart that program
● Dispatch latency – time it takes for the
dispatcher to stop one process and
start another running

Scheduling Criteria
● CPU utilization – keep the CPU as busy as possible
● Throughput – # of processes that complete their
execution per time unit
● Turnaround time – amount of time to execute a particular
process
● Waiting time – amount of time a process has been waiting
in the ready queue
● Response time – amount of time it takes from when a
request was submitted until the first response is produced,
not output (for time-sharing environment)
Scheduling Algorithm Optimization Criteria

● Max CPU utilization


● Max throughput
● Min turnaround time
● Min waiting time
● Min response time

FCFS Scheduling
The process that requests the CPU first is allocated
the CPU first.
The implementation is easily done using a FIFO queue.
Procedure:
1. When a process enters the ready-queue, its PCB is
linked onto the tail of the queue.
2. When the CPU is free, the CPU is allocated to the
process at the queue’s head.
3. The running process is then removed from the
queue.

First- Come, First-Served (FCFS) Scheduling
Process Burst Time
P1 24
P2 3
P3 3
● Suppose that the processes arrive in the order: P1 , P2 , P3
The Gantt Chart for the schedule is:

● Waiting times: P1 = 0; P2 = 24; P3 = 27. Turnaround times: P1 = 24; P2 = 27; P3 = 30

● Average waiting time: (0 + 24 + 27)/3 = 17; average turnaround time: (24 + 27 + 30)/3 = 27
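The Gantt chart for this order is P1 (0-24), P2 (24-27), P3 (27-30). As a quick check of the arithmetic, here is a minimal FCFS sketch in Python (the language is chosen purely for illustration; it is not part of the slides):

```python
# FCFS with all processes arriving at time 0, in the order P1, P2, P3.
# Waiting time = time at which the CPU becomes free; turnaround = waiting + burst.
bursts = {"P1": 24, "P2": 3, "P3": 3}   # burst times from the slide

clock, waiting, turnaround = 0, {}, {}
for name, burst in bursts.items():       # dicts preserve insertion order
    waiting[name] = clock                # process waits until the CPU frees up
    clock += burst
    turnaround[name] = clock             # finish time (arrival is 0)

avg_wt = sum(waiting.values()) / len(waiting)         # (0 + 24 + 27)/3 = 17.0
avg_tat = sum(turnaround.values()) / len(turnaround)  # (24 + 27 + 30)/3 = 27.0
```

Rebuilding the dictionary in the order P2, P3, P1 reproduces the average waiting time of 3 shown on the next slide.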

FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order:
P2 , P3 , P1
● The Gantt chart for the schedule is:

● Waiting time for P1 = 6; P2 = 0; P3 = 3


● Average waiting time: (6 + 0 + 3)/3 = 3
● Much better than previous case
● Convoy Effect: Short process behind long process
● Consider one CPU-bound and many I/O –bound Processes

Advantage:
1. Code is simple to write & understand.
Disadvantages:
1. Convoy effect: All other processes wait for one big
process to get off the CPU.
2. Non-preemptive (a process keeps the CPU until it
releases it).
3. Not good for time-sharing systems.
4. The average waiting time is generally not minimal.

Shortest-Job-First (SJF) Scheduling
▪ Associate with each process the length of its next CPU burst
Use these lengths to schedule the process with the shortest
time
▪ SJF is optimal – gives minimum average waiting time for a
given set of processes
▪ Preemptive version called shortest-remaining-time-first
▪ How do we determine the length of the next CPU burst?
Could ask the user
Estimate

Shortest-Job-First (SJF) Scheduling
❑The CPU is assigned to the process that has the smallest next CPU burst.

❑If two processes have the same length CPU burst, FCFS scheduling is used
to break the tie.

❑For long-term scheduling in a batch system, we can use the process time

limit specified by the user, as the ‘length’

❑SJF can't be implemented at the level of short-term scheduling, because

there is no way to know the length of the next CPU burst.

Example of SJF
Process   Arrival Time   Burst Time
P1        0.0            6
P2        2.0            8
P3        4.0            7
P4        5.0            3

● SJF scheduling chart

● Average waiting time = (3 + 16 + 9 + 0)/4 = 7; average turnaround time = (9 + 24 + 16 + 3)/4 = 13
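The stated averages work out if all four processes are treated as available at time 0 (Gantt order P4, P1, P3, P2). A small non-preemptive SJF sketch in Python, under that assumption:

```python
# Non-preemptive SJF, assuming every process is in the ready queue at time 0
# (this is the assumption under which the slide's waiting times 3, 16, 9, 0 hold).
bursts = {"P1": 6, "P2": 8, "P3": 7, "P4": 3}   # burst times from the slide

clock, waiting = 0, {}
for name in sorted(bursts, key=bursts.get):  # shortest next CPU burst first
    waiting[name] = clock                    # waits until the CPU is free
    clock += bursts[name]

turnaround = {p: waiting[p] + bursts[p] for p in bursts}
avg_wt = sum(waiting.values()) / 4           # (3 + 16 + 9 + 0)/4 = 7.0
avg_tat = sum(turnaround.values()) / 4       # (9 + 24 + 16 + 3)/4 = 13.0
```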

Determining Length of Next CPU Burst
● Can only estimate the length – should be similar to the previous one
● Then pick process with shortest predicted next CPU burst

● Can be done by using the length of previous CPU bursts, using exponential averaging:
τn+1 = α tn + (1 − α) τn
where tn is the measured length of the nth CPU burst, τn is the predicted value for the nth burst, and 0 ≤ α ≤ 1
● Commonly, α is set to ½
● Preemptive version called shortest-remaining-time-first

Prediction of the Length of the Next CPU Burst

Examples of Exponential Averaging
• α = 0
τn+1 = τn
Recent history does not count
• α = 1
τn+1 = tn
Only the actual last CPU burst counts
• If we expand the formula, we get:
τn+1 = α tn + (1 − α)α tn−1 + …
+ (1 − α)^j α tn−j + …
+ (1 − α)^(n+1) τ0
• Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor
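The recurrence can be sketched directly; a minimal Python illustration, where the burst history 6, 4, 6, 4 and the initial guess τ0 = 10 follow the textbook's worked prediction figure:

```python
def predict_next_burst(history, tau0=10.0, alpha=0.5):
    """Exponential averaging: tau_{n+1} = alpha*t_n + (1 - alpha)*tau_n."""
    tau = tau0
    for t in history:                    # t is a measured CPU-burst length
        tau = alpha * t + (1 - alpha) * tau
    return tau

# Successive guesses for bursts 6, 4, 6, 4 starting from tau0 = 10:
# 10 -> 8 -> 6 -> 6 -> 5
predict_next_burst([6, 4, 6, 4])         # 5.0
```

With alpha = 0 the prediction never moves off τ0, and with alpha = 1 it equals the most recent burst, matching the two special cases above.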
SJF
❑ The SJF algorithm may be either (1) non-preemptive or (2) preemptive.
❑ 1. Non-preemptive SJF
The current process is allowed to finish its CPU burst.
❑ 2. Preemptive SJF
If the new process has a shorter next CPU burst than
what is left of the executing process, that process is
preempted.
It is also known as SRTF (Shortest-Remaining-Time-First) scheduling.
Example of Shortest-remaining-time-first
● Now we add the concepts of varying arrival times and preemption to the
analysis
Process   Arrival Time   Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
● Preemptive SJF Gantt Chart

Waiting times: P1 = 0 + (10 − 1) = 9, P2 = 1 − 1 = 0, P3 = 17 − 2 = 15, P4 = 5 − 3 = 2
Average WT = (9 + 0 + 15 + 2)/4 = 6.5
Turnaround times: P1 = 17, P2 = 4, P3 = 24, P4 = 7; average TAT = (17 + 4 + 24 + 7)/4 = 13
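A tick-by-tick simulation reproduces these numbers (a Python sketch; the scheduler simply re-evaluates the shortest remaining time every millisecond):

```python
# Preemptive SJF (SRTF) simulated in 1-ms ticks for the slide's example.
procs = {"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)}  # (arrival, burst)
remaining = {p: burst for p, (arrival, burst) in procs.items()}
finish, clock = {}, 0

while remaining:
    ready = [p for p in remaining if procs[p][0] <= clock]
    p = min(ready, key=lambda q: remaining[q])   # shortest remaining time wins
    remaining[p] -= 1                            # run it for one tick
    clock += 1
    if remaining[p] == 0:
        finish[p] = clock
        del remaining[p]

turnaround = {p: finish[p] - procs[p][0] for p in procs}   # completion - arrival
waiting = {p: turnaround[p] - procs[p][1] for p in procs}  # turnaround - burst
```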
Priority Scheduling
❑A priority is associated with each process.
❑ The CPU is allocated to the process with the highest priority.
❑ Equal-priority processes are scheduled in FCFS order.
❑Priority scheduling can be either preemptive or non-preemptive.
❑1.Preemptive
The CPU is preempted if the priority of the newly arrived process is
higher than the priority of the currently running process.
2. Non-preemptive
The new process is put at the head of the ready queue.

Priority Scheduling
● Problem ≡ Starvation – low priority processes may
never execute

● Solution ≡ Aging – as time progresses increase the


priority of the process

Example of Priority Scheduling
Process   Burst Time   Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2

● Priority scheduling Gantt Chart

● Waiting times: P1 = 6, P2 = 0, P3 = 16, P4 = 18, P5 = 1; average WT = (6 + 0 + 16 + 18 + 1)/5 = 8.2

● Turnaround times: P1 = 16, P2 = 1, P3 = 18, P4 = 19, P5 = 6; average TAT = (16 + 1 + 18 + 19 + 6)/5 = 12
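With the textbook's convention that a lower number means higher priority, and all processes available at time 0, the Gantt order is P2, P5, P1, P3, P4. A non-preemptive sketch in Python under those assumptions:

```python
# Non-preemptive priority scheduling; lower number = higher priority,
# all processes available at time 0 (assumptions matching the slide's answer).
procs = {"P1": (10, 3), "P2": (1, 1), "P3": (2, 4), "P4": (1, 5), "P5": (5, 2)}

clock, waiting = 0, {}
for name in sorted(procs, key=lambda p: procs[p][1]):  # highest priority first
    waiting[name] = clock
    clock += procs[name][0]                            # run its full burst

turnaround = {p: waiting[p] + procs[p][0] for p in procs}
avg_wt = sum(waiting.values()) / 5      # (6 + 0 + 16 + 18 + 1)/5 = 8.2
avg_tat = sum(turnaround.values()) / 5  # (16 + 1 + 18 + 19 + 6)/5 = 12.0
```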

Round Robin (RR)
● Each process gets a small unit of CPU time (time quantum q), usually
10-100 milliseconds. After this time has elapsed, the process is
preempted and added to the end of the ready queue.
● If there are n processes in the ready queue and the time quantum is q,
then each process gets 1/n of the CPU time in chunks of at most q time
units at once. No process waits more than (n-1)q time units.
● Timer interrupts every quantum to schedule next process
● Performance
● q large ⇒ RR behaves like FIFO (FCFS)
● q small ⇒ context-switch overhead dominates: q must be large with respect to
the context-switch time, otherwise overhead is too high
Example of RR with Time Quantum = 4
Process Burst Time
P1 24
P2 3
P3 3
● The Gantt chart is:

Turnaround times: P1 = 30, P2 = 7, P3 = 10; average TAT = (30 + 7 + 10)/3 ≈ 15.67

Waiting times: P1 = 30 − 24 = 6, P2 = 7 − 3 = 4, P3 = 10 − 3 = 7; average WT = (6 + 4 + 7)/3 ≈ 5.67
● Typically, higher average turnaround than SJF, but better response
● q should be large compared to context switch time
● q usually 10ms to 100ms, context switch < 10 microseconds
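The RR Gantt chart above (P1, P2, P3 share the first quanta, then P1 runs alone until time 30) can be reproduced with a FIFO ready queue; a Python sketch with q = 4:

```python
from collections import deque

# Round Robin with time quantum q = 4; all three processes arrive at time 0.
bursts = {"P1": 24, "P2": 3, "P3": 3}
q = 4
remaining = dict(bursts)
queue = deque(bursts)                 # ready queue in arrival order
clock, finish = 0, {}

while queue:
    p = queue.popleft()
    run = min(q, remaining[p])        # run for one quantum or until done
    clock += run
    remaining[p] -= run
    if remaining[p] > 0:
        queue.append(p)               # preempted: back to the tail
    else:
        finish[p] = clock

turnaround = dict(finish)                                # arrival time is 0
waiting = {p: turnaround[p] - bursts[p] for p in bursts}
```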

Time Quantum and Context Switch Time

Turnaround Time Varies With The Time
Quantum

80% of CPU bursts should be shorter than q

Multilevel Queue
• The ready queue consists of multiple
queues
• Multilevel queue scheduler defined
by the following parameters:
Number of queues
Scheduling algorithms for each
queue
Method used to determine which
queue a process will enter when that
process needs service
Scheduling among the queues

Multilevel Queue
● The ready queue is partitioned into several separate queues, e.g.:
● foreground (interactive)
● background (batch)
● Processes are permanently assigned to one queue based on some property such as
memory size, process priority, or process type.
● Each queue has its own scheduling algorithm:
● foreground – RR
● background – FCFS
● Scheduling must be done between the queues:
● Fixed-priority preemptive scheduling (i.e., serve all from foreground, then from
background). Possibility of starvation.
● Time slice – each queue gets a certain amount of CPU time which it can schedule
among its processes; e.g., 80% to foreground in RR, 20% to background in FCFS

Multilevel Queue
• With priority scheduling, have separate queues for each priority.
• Schedule the process in the highest-priority queue!

Multilevel Queue Scheduling

Multilevel Feedback Queue
● A process can move between the various queues;
● The basic idea: Separate processes according to the features of their CPU
bursts.
● For example
1. If a process uses too much CPU time, it will be moved to a lower-priority
queue.
This scheme leaves I/O-bound and interactive processes in the higher-priority
queues.
2. If a process waits too long in a lower-priority queue, it may be moved to a
higher-priority queue. This form of aging prevents starvation.

Example of Multilevel Feedback Queue
● Three queues:
● Q0 – RR with time quantum 8 milliseconds
● Q1 – RR time quantum 16 milliseconds
● Q2 – FCFS
● Scheduling
● A new job enters queue Q0, which is served FCFS
● When it gains the CPU, the job receives 8 milliseconds
● If it does not finish in 8 milliseconds, the job is moved to queue Q1
● At Q1 the job is again served FCFS and receives 16 additional milliseconds
● If it still does not complete, it is preempted and moved to queue Q2
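The demotion policy of the three queues above can be sketched as a simple rule on a job's total CPU demand (queue names and quanta follow the slide; the function itself is illustrative, not from the text):

```python
# Which queue does a CPU burst of the given length end up in, under the
# three-queue policy: 8 ms in Q0, then 16 more ms in Q1, then FCFS in Q2?
def final_queue(burst_ms):
    if burst_ms <= 8:          # finishes within its Q0 quantum
        return "Q0"
    if burst_ms <= 8 + 16:     # finishes within the extra 16 ms in Q1
        return "Q1"
    return "Q2"                # demoted all the way to the FCFS queue

# Short interactive bursts stay in the top queue; long CPU-bound bursts sink:
# final_queue(5) -> 'Q0', final_queue(20) -> 'Q1', final_queue(100) -> 'Q2'
```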

Multilevel Feedback Queue
● Multilevel-feedback-queue scheduler defined by the
following parameters:
● number of queues
● scheduling algorithms for each queue
● method used to determine when to upgrade a
process
● method used to determine when to demote a process
● method used to determine which queue a process
will enter when that process needs service

Multiple-Processor Scheduling
● CPU scheduling more complex when multiple CPUs are available
● Homogeneous processors within a multiprocessor
● Asymmetric multiprocessing – only one processor accesses the system data structures,
alleviating the need for data sharing (master-slave)
● Symmetric multiprocessing (SMP) – each processor is self-scheduling, all processes in
common ready queue, or each has its own private queue of ready processes
● Currently, most common
● Processor affinity – a process has affinity for the processor on which it is currently running (SMP):
avoid migrating processes from one processor to another
● Soft affinity – the OS tries to keep a process on the same processor as a matter of policy, but
cannot guarantee it
● Hard affinity – the OS allows a process to specify that it is not to migrate to other processors

Multiple-Processor Scheduling (Contd)

Multiple-Processor Scheduling – Load Balancing
● If SMP, need to keep all CPUs loaded for efficiency
● Important on systems where each processor has its own private queue
of eligible processes to execute.
● Load balancing attempts to keep workload evenly distributed
● Push migration – a specific task periodically checks the load on each processor
and, if an imbalance is found, pushes tasks from the overloaded CPU to
less-busy CPUs
● Pull migration – an idle processor pulls a waiting task from a busy processor
● Push and pull migration are often implemented in parallel on load-balancing
systems

End of Chapter 5
