
Chapter 5 CPU Scheduling

CPU and I/O Burst Cycle
The execution of a process consists of a cycle of CPU execution and I/O wait. A process begins with a CPU burst, followed by an I/O burst, followed by another CPU burst, and so on. The last CPU burst ends with a system request to terminate the execution. CPU burst durations vary from process to process and from computer to computer. An I/O-bound program typically has many very short CPU bursts, while a CPU-bound program might have a few very long CPU bursts.


Figure: Histogram of CPU-burst times

Types of Scheduling
The key to multiprogramming is scheduling. There are four types of scheduling that an OS has to perform:

o Long-Term Scheduling
Long-term scheduling determines which programs are admitted to the system for processing; thus, it controls the level of multiprogramming. Once admitted, a job or user program becomes a process and is added to the queue for short-term scheduling (in some cases it is added to a queue for medium-term scheduling). Long-term scheduling is performed when a new process is created. The criteria used for long-term scheduling may include first-come, first-served, priority, expected execution time, and I/O requirements.

o Medium-Term Scheduling
Medium-term scheduling is part of the swapping function. It is the decision to add a process to those that are at least partially in main memory and therefore available for execution. The swapping-in decision is based on the need to manage the degree of multiprogramming and on the memory requirements of the swapped-out processes.

o Short-Term Scheduling
Short-term scheduling decides which ready process to execute next.

o I/O Scheduling
I/O scheduling decides which process's pending I/O request shall be handled by an available I/O device.


CPU Scheduler
Whenever the CPU becomes idle, the OS must select one of the processes in the ready queue to be executed. The selection is carried out by the short-term scheduler, or CPU scheduler. The CPU scheduler selects a process from the ready queue and allocates the CPU to it.

CPU scheduling decisions may take place when:
1. The running process switches from the running to the waiting state (the current CPU burst of that process is over).
2. The running process terminates.
3. A waiting process becomes ready (a new CPU burst of that process begins).
4. The running process switches from the running to the ready state (e.g., because of a timer interrupt).


Scheduling under 1 and 2 is nonpreemptive: once a process is in the running state, it continues until it terminates or blocks itself. Scheduling under 3 and 4 is preemptive: the currently running process may be interrupted and moved to the ready state by the OS. Preemption allows for better service, since no single process can monopolize the processor for very long.

Dispatcher
The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
o Switching context
o Switching to user mode
o Jumping to the proper location in the user program to restart that program
Dispatch latency is the time it takes for the dispatcher to stop one process and start another running.


What is a Good Scheduler? Criteria
User-oriented criteria:
o Turnaround time: the time interval from submission of a job until its completion. It includes actual processing time plus time spent waiting for resources, including the processor. Equivalently, it is the time between the moment a process first enters the ready state and the moment it exits the running state for the last time (completion).
o Waiting time: the sum of the periods spent waiting in the ready queue.
o Response time: the time interval from submission of a job to its first response. A process can often begin producing output while it continues to process the request, so from the user's point of view this is a better measure than turnaround time.
o Service time: the amount of time a process needs to be in the running state (holding the CPU) before it is completed.
o Normalized turnaround time: the ratio of turnaround time to service time.
System-oriented criteria:
o CPU utilization: the percentage of time the CPU is busy. CPU utilization may range from 0 to 100%; in a real system, it should range from about 40% to 90%.
o Throughput: the number of jobs completed per unit of time. This depends on the average length of the processes.
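To make these definitions concrete, here is a minimal sketch in Python (with hypothetical arrival, service, and completion times, not taken from the notes) that computes the derived metrics:

```python
# Turnaround = completion - arrival; waiting = turnaround - service;
# normalized turnaround = turnaround / service.
processes = [
    # (name, arrival, service, completion) -- hypothetical FCFS run
    ("A", 0, 5, 5),
    ("B", 1, 3, 8),
    ("C", 2, 8, 16),
]

for name, arrival, service, completion in processes:
    turnaround = completion - arrival
    waiting = turnaround - service
    normalized = turnaround / service
    print(f"{name}: turnaround={turnaround}, waiting={waiting}, "
          f"normalized={normalized:.2f}")
```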

Service time, waiting time, turnaround time, and throughput are some of the metrics used to compare scheduling algorithms. Any good scheduler should:
o Maximize CPU utilization and throughput
o Minimize turnaround time, waiting time, and response time

Goals of a Scheduling Algorithm for Different Systems
All systems:
o Fairness: giving each process a fair share of the CPU
o Policy enforcement: seeing that stated policy is carried out
o Balance: keeping all parts of the system busy
Batch systems:
o Maximize throughput (jobs/hour)
o Minimize turnaround time
o Maximize CPU utilization
Interactive systems:
o Minimize response time (respond to requests quickly)
o Proportionality: meet users' expectations
Real-time systems:
o Meet deadlines
o Predictability: avoid quality degradation, e.g., in multimedia systems


Scheduling Algorithms
CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU. There are many different CPU scheduling algorithms, which we discuss now.

First-Come, First-Served (FCFS) Scheduling
The process that requests the CPU first is allocated the CPU first. It is a nonpreemptive algorithm and can easily be implemented with a FIFO queue: when a process enters the ready queue, its PCB is linked onto the tail of the queue, and when the CPU is free, it is allocated to the process at the head of the queue.

Advantages:
o Very simple
Disadvantages:
o Long average and worst-case waiting times
o Poor dynamic behavior (convoy effect: short processes stuck behind a long process)


Example 1:
Process   Burst Time
P1        24
P2        3
P3        3

Suppose that the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is:

| P1 | P2 | P3 |
0    24   27   30

Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17
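A minimal FCFS sketch in Python (not from the original notes) that reproduces these waiting times, assuming all three processes arrive at time 0 in the given order:

```python
# FCFS: each process waits until the CPU finishes all earlier arrivals.
bursts = {"P1": 24, "P2": 3, "P3": 3}   # insertion order = arrival order

clock = 0
waiting = {}
for name, burst in bursts.items():
    waiting[name] = clock    # time spent waiting in the ready queue
    clock += burst           # nonpreemptive: the whole burst runs to completion

print(waiting)                               # {'P1': 0, 'P2': 24, 'P3': 27}
print(sum(waiting.values()) / len(waiting))  # 17.0
```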

Suppose instead that the processes arrive in the order P2, P3, P1. The Gantt chart for the schedule is:

| P2 | P3 | P1 |
0    3    6    30

Waiting time for P1 = 6; P2 = 0; P3 = 3
Average waiting time: (6 + 0 + 3)/3 = 3

Example 2: Consider the following set of processes:
Process   Arrival Time   Service Time
P1        0              3
P2        2              6
P3        4              4
P4        6              5
P5        8              2

Calculate waiting time, average waiting time, and turnaround time. (To Be Solved in the Class)


Shortest-Job-First Scheduling (SJF)
This algorithm associates with each process the length of its next CPU burst. When the CPU is available, it is assigned to the process that has the smallest next CPU burst. It is a nonpreemptive policy.

Preemptive SJF (Shortest Remaining Time First)
This is the preemptive version of SJF: if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, the currently executing process is preempted and the CPU is allocated to the new process.
Advantages: SJF minimizes the average waiting time.
Problems: How is the length of the next CPU burst determined? Jobs with long CPU bursts may starve.


Example:
Process   Arrival Time   Burst Time
P1        0.0            7
P2        2.0            4
P3        4.0            1
P4        5.0            4

SJF (non-preemptive):

| P1 | P3 | P2 | P4 |
0    7    8    12   16

Average waiting time = (0 + 6 + 3 + 7)/4 = 4

SRT (preemptive SJF):

| P1 | P2 | P3 | P2 | P4 | P1 |
0    2    4    5    7    11   16

Average waiting time = (9 + 1 + 0 + 2)/4 = 3
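A minimal shortest-remaining-time simulation in Python (a sketch, not from the original notes) that reproduces the SRT waiting times above by stepping one time unit at a time:

```python
# SRT (preemptive SJF): at each time unit, run the arrived process
# with the least remaining time.
procs = {"P1": (0, 7), "P2": (2, 4), "P3": (4, 1), "P4": (5, 4)}  # name: (arrival, burst)

remaining = {p: burst for p, (arrival, burst) in procs.items()}
finish = {}
t = 0
while remaining:
    ready = [p for p in remaining if procs[p][0] <= t]
    if not ready:                                 # CPU idle until the next arrival
        t += 1
        continue
    p = min(ready, key=lambda x: remaining[x])    # shortest remaining time first
    remaining[p] -= 1
    t += 1
    if remaining[p] == 0:
        finish[p] = t
        del remaining[p]

waiting = {p: finish[p] - procs[p][0] - procs[p][1] for p in procs}
print(waiting)                               # {'P1': 9, 'P2': 1, 'P3': 0, 'P4': 2}
print(sum(waiting.values()) / len(waiting))  # 3.0
```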



Determining the Length of the Next CPU Burst in SJF
The length of the next CPU burst can only be estimated. This is usually done with an exponential average of the lengths of previous CPU bursts:
1. t_n = actual length of the nth CPU burst
2. \tau_{n+1} = predicted value for the next CPU burst
3. \alpha, where 0 \le \alpha \le 1
4. Define: \tau_{n+1} = \alpha t_n + (1 - \alpha) \tau_n
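A minimal sketch of the exponential-averaging predictor in Python (the burst history and initial guess below are hypothetical, not from the notes):

```python
# Exponential averaging: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n
def predict_next_burst(bursts, alpha=0.5, tau0=10.0):
    """Predict the next CPU burst length.

    bursts: observed burst lengths t_1 .. t_n (hypothetical history)
    alpha:  weight of the most recent measurement, 0 <= alpha <= 1
    tau0:   initial guess used before any history exists
    """
    tau = tau0
    for t in bursts:
        tau = alpha * t + (1 - alpha) * tau
    return tau

print(predict_next_burst([6, 4, 6, 4]))  # 5.0: recent bursts dominate the old guess
```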

Priority Scheduling

In priority scheduling, a priority (an integer) is associated with each process. Priorities can be assigned either externally or internally. The CPU is allocated to the process with the highest priority (here, the smallest integer denotes the highest priority). Priority scheduling can be:
o Preemptive
o Nonpreemptive


Problem: Starvation (indefinite blocking): low-priority processes may never execute.
Solution: Aging: as time progresses, increase the priority of waiting processes.

Figure: Conceptual and implementation view of priority scheduling
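A minimal sketch of nonpreemptive priority scheduling with a simple aging rule, in Python (the process set, priorities, and aging step are hypothetical, not from the notes):

```python
# Nonpreemptive priority scheduling with aging (smaller number = higher priority).
# Each time a process is passed over, its effective priority improves by AGE_STEP.
AGE_STEP = 1

ready = [  # [name, base_priority, burst] -- hypothetical values
    ["P1", 3, 10],
    ["P2", 1, 1],
    ["P3", 4, 2],
]

rounds_waited = {name: 0 for name, _, _ in ready}
order = []
while ready:
    # effective priority = base priority minus the aging credit earned so far
    ready.sort(key=lambda p: p[1] - AGE_STEP * rounds_waited[p[0]])
    name, priority, burst = ready.pop(0)
    order.append(name)
    for other, _, _ in ready:        # every process left behind ages a little
        rounds_waited[other] += 1

print(order)   # ['P2', 'P1', 'P3']
```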

Example:
Process   CPU Burst   Priority
P1        10          3
P2        1           1
P3        2           3
P4        1           4
P5        5           2

(Solution to be given in the class)


Round-Robin Scheduling

Each process gets a small unit of CPU time (called the time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. The ready queue is treated as a circular FIFO queue: the CPU scheduler goes around it, allocating the CPU to each process for an interval of at most one time quantum. New processes are added to the tail of the ready queue; the CPU scheduler picks the first process from the head, sets a timer to interrupt after one time quantum, and dispatches the process. If the process has a CPU burst of less than one time quantum, it releases the CPU voluntarily. Otherwise, the timer goes off and causes an interrupt to the OS; a context switch is performed and the process is put at the tail of the ready queue.


The CPU scheduler then picks the next process from the ready queue. If there are n processes in the ready queue and the time quantum is q, each process gets 1/n of the CPU time in chunks of at most q time units at a time, and no process waits more than (n-1)q time units until its next time quantum. Example: with 5 processes and a time quantum of 20 ms, each process gets up to 20 ms every 100 ms. Typically, RR has a higher average turnaround time than SJF, but better response time.

Example:
Process   Burst Time
P1        53
P2        17
P3        68
P4        24

Time quantum: 20. The Gantt chart is:

| P1 | P2 | P3 | P4 | P1 | P3 | P4 | P1 | P3 | P3 |
0   20   37   57   77   97  117  121  134  154  162
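A minimal round-robin sketch in Python (not from the original notes) that reproduces the schedule above, assuming all four processes arrive at time 0:

```python
from collections import deque

QUANTUM = 20
bursts = {"P1": 53, "P2": 17, "P3": 68, "P4": 24}

queue = deque(bursts)            # FIFO ready queue, initial arrival order
remaining = dict(bursts)
t = 0
schedule = []
while queue:
    p = queue.popleft()
    run = min(QUANTUM, remaining[p])
    schedule.append((p, t, t + run))
    t += run
    remaining[p] -= run
    if remaining[p] > 0:         # quantum expired: back to the tail of the queue
        queue.append(p)

print(", ".join(f"{p}:{start}-{end}" for p, start, end in schedule))
# P1:0-20, P2:20-37, P3:37-57, P4:57-77, P1:77-97, P3:97-117,
# P4:117-121, P1:121-134, P3:134-154, P3:154-162
```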


RR with Context-Switching Overhead
In RR, context-switching overhead should be considered.

RR and the Time Quantum
The performance of RR depends heavily on the size of the time quantum. If the time quantum is very large, the RR policy behaves the same as FCFS. If the time quantum is very small, most of the CPU time is spent on context switching. Turnaround time also depends on the time quantum. A rule of thumb is that about 80% of CPU bursts should be shorter than the time quantum.
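A small sketch (Python, with hypothetical measured burst lengths, not from the notes) of applying this 80% rule of thumb when choosing a quantum:

```python
# Rule of thumb: choose a quantum so that about 80% of observed CPU bursts
# complete within a single quantum.
bursts = [3, 5, 6, 7, 8, 9, 12, 15, 40, 60]   # hypothetical measured burst lengths (ms)

def fraction_within(quantum, bursts):
    return sum(b <= quantum for b in bursts) / len(bursts)

for q in (5, 10, 20, 50):
    print(f"quantum={q:>2} ms: {fraction_within(q, bursts):.0%} of bursts fit")
# quantum=20 ms satisfies the rule: 80% of these bursts fit in one quantum.
```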


Figure: Turnaround time varies with the time quantum

Multilevel Queue Scheduling
The ready queue is partitioned into several separate queues. Each queue may have its own scheduling algorithm.


Figure: Multilevel queue scheduling with absolute priority

Scheduling must also be done between the queues:
o Fixed-priority scheduling (e.g., serve all processes from the foreground queue, then from the background queue). This carries a possibility of starvation.
o Time slicing: each queue gets a certain amount of CPU time, which it can schedule amongst its own processes; e.g., 80% to the foreground queue (RR) and 20% to the background queue (FCFS).


Multilevel Feedback Queue Scheduling
A process can move between the various queues; aging can be implemented this way.

A multilevel-feedback-queue scheduler is defined by the following parameters:
o Number of queues
o Scheduling algorithm for each queue
o Method used to determine when to upgrade a process
o Method used to determine when to demote a process
o Method used to determine which queue a process will enter when it needs service


Example
Three queues:
o Q0: time quantum 8 milliseconds
o Q1: time quantum 16 milliseconds
o Q2: FCFS

Scheduling: A new job enters queue Q0, which is served in FCFS order. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, it is moved to queue Q1. At Q1 the job is again served FCFS and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2.
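A minimal sketch of this three-level feedback queue in Python (the jobs and their CPU demands are hypothetical, not from the notes); a job that exhausts its quantum is demoted one level, and the scheduler always serves the highest nonempty queue:

```python
from collections import deque

# Q0 (quantum 8), Q1 (quantum 16), Q2 (FCFS, modeled as an unbounded quantum).
QUANTA = [8, 16, float("inf")]
jobs = {"J1": 5, "J2": 30, "J3": 12}        # hypothetical total CPU demand (ms)

queues = [deque(jobs), deque(), deque()]    # new jobs enter Q0
remaining = dict(jobs)
t = 0
while any(queues):
    level = next(i for i, q in enumerate(queues) if q)  # highest nonempty queue
    name = queues[level].popleft()
    run = min(QUANTA[level], remaining[name])
    t += run
    remaining[name] -= run
    if remaining[name] > 0:                 # used the full quantum: demote one level
        queues[min(level + 1, 2)].append(name)
    else:
        print(f"{name} finished at t={t}")
# J1 finished at t=5, J3 finished at t=41, J2 finished at t=47
```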

Algorithm Evaluation
Approaches to evaluating scheduling algorithms include:
o Deterministic modeling: takes a particular predetermined workload and defines the performance of each algorithm for that workload.
o Queueing models
o Implementation

