Chapter 4: CPU Scheduling


Operating System Concepts – 8th Edition Silberschatz, Galvin and Gagne ©2009
Chapter 4: CPU Scheduling
 BASIC CONCEPTS
 SCHEDULING CRITERIA
 SCHEDULING ALGORITHMS
 MULTIPLE-PROCESSOR SCHEDULING

Objectives
 To introduce CPU scheduling, which is the basis for multiprogrammed operating
systems

 To describe various CPU-scheduling algorithms

 To discuss evaluation criteria for selecting a CPU-scheduling algorithm for a particular system

BASIC CONCEPTS
 In a single-processor system, only one process can run at a time; any others must wait
until the CPU is free and can be rescheduled.
 The objective of multiprogramming is to have some process running at all times, to
maximize CPU utilization.
 The idea is relatively simple. A process is executed until it must wait, typically for the
completion of some I/O request. In a simple computer system, the CPU then just sits idle.
 All this waiting time is wasted; no useful work is accomplished. With multiprogramming,
we try to use this time productively.
 Several processes are kept in memory at one time. When one process has to wait, the
operating system takes the CPU away from that process and gives the CPU to another
process.
 This pattern continues. Every time one process has to wait, another process can take
over use of the CPU.
 Scheduling of this kind is a fundamental operating-system function. Almost all computer
resources are scheduled before use.
 The CPU is, of course, one of the primary computer resources. Thus, its scheduling is
central to operating-system design.

Alternating Sequence of CPU and I/O Bursts
 Process execution consists of a cycle of CPU execution and I/O wait: a process alternates between CPU bursts and I/O bursts, beginning and ending with a CPU burst.

CPU Scheduler
 Whenever the CPU becomes idle, the operating system must select one of the processes in
the ready queue and allocate the CPU to it
 The ready queue may be ordered in various ways
 The selection process is carried out by the short-term scheduler (or CPU
scheduler). The scheduler selects a process from the processes in memory that
are ready to execute and allocates the CPU to that process.

CPU Scheduler (cont)
 CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
 Scheduling under 1 and 4 is nonpreemptive or cooperative scheduling
 All other scheduling is preemptive
 Consider access to shared data
 Consider preemption while in kernel mode
 Consider interrupts occurring during crucial OS activities

Dispatcher
 Another component involved in the CPU-scheduling function is the dispatcher.
 Dispatcher module gives control of the CPU to the process selected by the
short-term scheduler; this involves:
 switching context
 switching to user mode
 jumping to the proper location in the user program to restart that program

 Dispatch latency – time it takes for the dispatcher to stop one process and
start another running

SCHEDULING CRITERIA
 Many criteria have been suggested for comparing CPU-scheduling algorithms.
Which characteristics are used for comparison can make a substantial difference in
which algorithm is judged to be best.
 The criteria include the following:
 CPU utilization – We want to keep the CPU as busy as possible. Conceptually, CPU
utilization can range from 0 to 100 percent. In a real system, it should range from 40
percent (for a lightly loaded system) to 90 percent (for a heavily used system).
 Throughput – the number of processes that complete their execution per time unit. If the CPU
is busy executing processes, then work is being done. One measure of work is the
number of processes that are completed per time unit, called throughput. For long
processes, this rate may be one process per hour; for short transactions, it may be ten
processes per second.
 Turnaround time – amount of time to execute a particular process. From the
point of view of a particular process, the important criterion is how long it takes to
execute that process. The interval from the time of submission of a process to the time
of completion is the turnaround time. Turnaround time is the sum of the periods spent
waiting to get into memory, waiting in the ready queue, executing on the CPU, and
doing I/O.

SCHEDULING CRITERIA (cont)
 Waiting time – amount of time a process has been waiting in the ready queue.
The CPU-scheduling algorithm does not affect the amount of time during which a
process executes or does I/O; it affects only the amount of time that a process spends
waiting in the ready queue. Waiting time is the sum of the periods spent waiting in the
ready queue.
 Response time – amount of time it takes from when a request was submitted
until the first response is produced, not output (for time-sharing environment). In
an interactive system, turnaround time may not be the best criterion. Often, a process
can produce some output fairly early and can continue computing new results while
previous results are being output to the user. Thus, another measure is the time from the
submission of a request until the first response is produced. This measure, called
response time, is the time it takes to start responding, not the time it takes to output the
response. The turnaround time is generally limited by the speed of the output device.
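
As a rough illustration of these definitions, here is a minimal sketch in C (not from the slides). It assumes each process has a single CPU burst, does no I/O, and that the first response can be approximated by the first time the process gets the CPU; the numbers are hypothetical.

/* Minimal sketch (hypothetical numbers): turnaround, waiting, and response
 * time for one process, assuming a single CPU burst and no I/O. */
#include <stdio.h>

struct proc {
    const char *name;
    int arrival;      /* time of submission */
    int first_run;    /* time the process first gets the CPU */
    int completion;   /* time the process finishes */
    int burst;        /* total CPU time used */
};

int main(void) {
    struct proc p = { "P1", 0, 5, 30, 20 };     /* hypothetical example */

    int turnaround = p.completion - p.arrival;  /* submission to completion */
    int waiting    = turnaround - p.burst;      /* time spent in the ready queue */
    int response   = p.first_run - p.arrival;   /* submission until first response
                                                   (approximated here by first run) */

    printf("%s: turnaround=%d waiting=%d response=%d\n",
           p.name, turnaround, waiting, response);
    return 0;
}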

SCHEDULING ALGORITHMS

 CPU scheduling deals with the problem of deciding which of the processes in the
ready queue is to be allocated the CPU.
 There are many different CPU-scheduling algorithms. In this section, we describe three of them:
 First-Come, First-Served Scheduling
 Shortest-Job-First Scheduling
 Priority Scheduling

First-Come, First-Served (FCFS) Scheduling
 With this scheme, the process that requests the CPU first is allocated the CPU first.
The implementation of the FCFS policy is easily managed with a FIFO queue.
 When a process enters the ready queue, its PCB is linked onto the tail of the queue.
When the CPU is free, it is allocated to the process at the head of the queue.
 The running process is then removed from the queue. The code for FCFS scheduling
is simple to write and understand.
 Note also that the FCFS scheduling algorithm is nonpreemptive. Once the CPU has
been allocated to a process, that process keeps the CPU until it releases the CPU,
either by terminating or by requesting I/O.
 The FCFS algorithm is thus particularly troublesome for time-sharing systems, where
it is important that each user get a share of the CPU at regular intervals.
 It would be disastrous to allow one process to keep the CPU for an extended period.

First-Come, First-Served (FCFS) Scheduling

Process    Burst Time
P1         24
P2         3
P3         3
 Suppose that the processes arrive in the order: P1 , P2 , P3
The Gantt Chart for the schedule is:

P1 P2 P3

0 24 27 30

 Waiting time for P1 = 0; P2 = 24; P3 = 27


 Average waiting time: (0 + 24 + 27)/3 = 17
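
The waiting times above can be reproduced with a minimal sketch (not part of the slides); it assumes the three processes arrive at time 0 in the order P1, P2, P3 and are simply run back to back.

/* Minimal sketch: FCFS waiting times for the example above, assuming all
 * processes arrive at time 0 in the order P1, P2, P3. */
#include <stdio.h>

int main(void) {
    int burst[] = { 24, 3, 3 };              /* CPU bursts of P1, P2, P3 */
    int n = sizeof(burst) / sizeof(burst[0]);
    int wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waits %d\n", i + 1, wait);
        total_wait += wait;
        wait += burst[i];                    /* the next process waits until this one finishes */
    }
    printf("Average waiting time = %.2f\n", (double)total_wait / n);
    return 0;
}

Running it prints waits of 0, 24, and 27 and the average of 17 shown above.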

Shortest-Job-First (SJF) Scheduling
 This algorithm associates with each process the length of its next CPU burst
 Use these lengths to schedule the process with the shortest time

 SJF is optimal – gives minimum average waiting time for a given set of
processes
 The difficulty is knowing the length of the next CPU request
 Could ask the user
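
A common way to estimate the next burst (described in the textbook, though not shown on this slide) is an exponential average of the measured lengths of previous bursts: tau_next = alpha * t_measured + (1 - alpha) * tau_previous. The sketch below uses sample values chosen only for illustration.

/* Minimal sketch: predicting the next CPU burst with an exponential average.
 * alpha, the initial guess, and the measured bursts are illustrative values. */
#include <stdio.h>

int main(void) {
    double alpha = 0.5;     /* weight given to the most recent measured burst */
    double tau = 10.0;      /* initial guess for the next burst length */
    double measured[] = { 6.0, 4.0, 6.0, 4.0, 13.0, 13.0, 13.0 };
    int n = sizeof(measured) / sizeof(measured[0]);

    for (int i = 0; i < n; i++) {
        printf("predicted %.2f, actually measured %.1f\n", tau, measured[i]);
        tau = alpha * measured[i] + (1.0 - alpha) * tau;   /* update the estimate */
    }
    printf("next prediction: %.2f\n", tau);
    return 0;
}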

Example of SJF
Process    Arrival Time    Burst Time
P1         0.0             6
P2         2.0             8
P3         4.0             7
P4         5.0             3
 For the chart and waiting times below, assume all four processes are in the ready queue at time 0.
 SJF scheduling chart

P4 P1 P3 P2

0 3 9 16 24

 Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
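
The chart and the average of 7 can be reproduced with a minimal sketch (not part of the slides) that sorts the processes by burst length, under the assumption stated above that all four are available at time 0.

/* Minimal sketch: nonpreemptive SJF when all processes are available at time 0. */
#include <stdio.h>
#include <stdlib.h>

struct proc { int id; int burst; };

static int by_burst(const void *a, const void *b) {
    return ((const struct proc *)a)->burst - ((const struct proc *)b)->burst;
}

int main(void) {
    struct proc p[] = { {1, 6}, {2, 8}, {3, 7}, {4, 3} };
    int n = sizeof(p) / sizeof(p[0]);

    qsort(p, n, sizeof(p[0]), by_burst);      /* shortest burst first */

    int wait = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        printf("P%d starts at %d\n", p[i].id, wait);
        total_wait += wait;
        wait += p[i].burst;
    }
    printf("Average waiting time = %.2f\n", (double)total_wait / n);
    return 0;
}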

Priority Scheduling
 A priority number (integer) is associated with each process
 The CPU is allocated to the process with the highest priority (smallest integer = highest
priority)
 Preemptive
 Nonpreemptive
 SJF is priority scheduling where priority is the inverse of predicted next CPU burst
time
 Problem: Starvation – low-priority processes may never execute
 Solution: Aging – as time progresses, increase the priority of processes that have been waiting

Example of Priority Scheduling
Process    Burst Time    Priority
P1         10            3
P2         1             1
P3         2             4
P4         1             5
P5         5             2
 Priority scheduling Gantt Chart

P2 P5 P1 P3 P4

0 1 6 16 18 19

 Average waiting time = 8.2 msec
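
A similar sketch (not part of the slides) reproduces the 8.2 msec figure by running the processes in ascending order of priority number; aging, from the previous slide, could be modeled by periodically decreasing the priority numbers of processes that are still waiting.

/* Minimal sketch: nonpreemptive priority scheduling for the example above;
 * a smaller priority number means a higher priority. */
#include <stdio.h>
#include <stdlib.h>

struct proc { int id; int burst; int priority; };

static int by_priority(const void *a, const void *b) {
    return ((const struct proc *)a)->priority - ((const struct proc *)b)->priority;
}

int main(void) {
    struct proc p[] = { {1, 10, 3}, {2, 1, 1}, {3, 2, 4}, {4, 1, 5}, {5, 5, 2} };
    int n = sizeof(p) / sizeof(p[0]);

    qsort(p, n, sizeof(p[0]), by_priority);   /* highest priority (smallest number) first */

    int wait = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        printf("P%d runs from %d to %d\n", p[i].id, wait, wait + p[i].burst);
        total_wait += wait;
        wait += p[i].burst;
    }
    printf("Average waiting time = %.1f msec\n", (double)total_wait / n);
    return 0;
}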

Multiple-Processor Scheduling
 CPU scheduling more complex when multiple CPUs are available
 Homogeneous processors within a multiprocessor
 Asymmetric multiprocessing – only one processor accesses the system data
structures, alleviating the need for data sharing
 Symmetric multiprocessing (SMP) – each processor is self-scheduling, all
processes in common ready queue, or each has its own private queue of ready
processes
 Currently, most common
 Processor affinity – a process has an affinity for the processor on which it is currently running (a sketch of requesting hard affinity on Linux follows below)
 soft affinity
 hard affinity
 Variations including processor sets
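
As a concrete illustration of hard affinity (Linux-specific; an assumption of this sketch, not something the slides cover beyond the concept), a process can ask the kernel to bind it to a single CPU:

/* Minimal sketch (Linux-specific): requesting hard affinity by binding the
 * calling process to CPU 0. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);                          /* allow execution only on CPU 0 */

    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("pinned to CPU 0\n");
    return 0;
}

With soft affinity, by contrast, the operating system merely tries to keep the process on the same processor but makes no such guarantee.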

NUMA and CPU Scheduling

Note that memory-placement algorithms can also consider affinity

Multicore Processors
 Recent trend to place multiple processor cores on same physical chip
 Faster and consumes less power
 Multiple threads per core also growing
 Takes advantage of memory stalls: while one thread waits for a memory retrieval to complete, the core can make progress on another thread

End of Chapter 4

