
CS211 Operating System

SP21
Week 7 (22-24 Feb)
Dr. Amritanjali
Shortest Remaining Time (SRT)
• Preemptive version of SPN
• Scheduler chooses the process that has the shortest expected
remaining processing time
• Running process can be preempted if a ready process has a
shorter remaining processing time
• Risk of starvation of longer processes
• Overhead of recording elapsed service time
• Fewer interrupts than RR
SRT Example
SRT Scheduling Characteristics
• Decision Mode- Preemptive
• Throughput- High
• Response Time- Provides good response time
• Overhead- Can be high
• Effect on Processes- Penalizes long processes
• Starvation- Possible
Highest Response Ratio Next (HRRN)
Response Ratio (R)
R = (w + s) / s
w = time spent waiting for the processor
s = expected service time
• Scheduler selects the process with the highest
response ratio
• It accounts for the age of the process: the longer a
process waits, the higher its ratio grows
HRRN Example
HRRN Scheduling Characteristics
• Decision Mode- Non-preemptive
• Throughput- High
• Response Time- Good response time
• Overhead- Can be high
• Effect on Processes- Good balance
• Starvation- No
Feedback Queue Scheduling
• Preemptive scheduling
• Dynamic priority mechanism is used
• When a process first enters the system, it is placed
in RQ0 (the highest priority queue)
• When a process is blocked or preempted, it is
demoted to the next lower-priority queue
• Newer, shorter processes are thus favored over
longer processes
• Within each queue, processes run with a given
time quantum
Feedback Queue Example
Feedback Queue Scheduling Characteristics
• Decision Mode- Preemptive (at time quantum)
• Throughput- Not emphasized
• Response Time- Not emphasized
• Overhead- Can be high
• Effect on Processes- May favour I/O bound processes
• Starvation- Possible
Multiprocessors
• Loosely Coupled Multiprocessors
Collection of autonomous systems
• Tightly Coupled Multiprocessors
Processors are under the control of an integrated system
Independent Parallelism
• Independent tasks
• No explicit synchronization among tasks
Coarse-grained Parallelism
• Synchronization among tasks at a gross level
• Multiprocessing with concurrent processes in a
multiprogramming environment
Medium-grained Parallelism
• Multitasking applications requiring frequent
coordination and interaction
• Scheduling decisions can affect performance
of the application
Fine-grained Parallelism
• Highly parallel applications
• More complex use of parallelism than is found
in the use of threads
Multiprocessor Scheduling
Design Issues
• Assignment of tasks to processors
• Use of Multiprogramming
• Actual Dispatching of Processes
Assignment of Tasks
• Static Assignment
A task is assigned permanently to one processor, from
activation until its completion; each processor keeps a
dedicated ready queue
• Dynamic Assignment
Tasks can be switched to different processors; all
processors share a common ready queue
Assignment of Tasks
Master-Slave Architecture
• One processor (the master) executes the key functions of
the operating system and has control of memory and I/O
devices
• The master assigns tasks to the slave processors
• Slave processors send requests to the master for
access to resources
Peer Architecture
• Kernel can execute on any processor
• Each processor does self-scheduling
Use of Multiprogramming
• Traditional multiprocessors deal with independent
or coarse-grained applications
• There the goal is to optimize processor use through
multiprogramming
• Medium-grained applications run on systems with a
large number of processors
• These try to provide the best performance, on average,
to the applications instead of maximizing processor usage
Process Dispatching
• For multiprogrammed uniprocessor systems,
sophisticated algorithms can be used for selecting
the next process
• For multiprocessors, simple scheduling algorithms
often perform better
Multiprocessor Scheduling
• Load Sharing
Global ready queue
• Gang Scheduling
A related set of threads is co-scheduled on a
set of processors
• Dedicated Processor Assignment
Threads are assigned to specific processors
• Dynamic Scheduling
The number of threads is altered during execution
Load Sharing
• Extension of uniprocessor scheduling to
multiprocessor environment
• Evenly distributes load across all processors
• No centralized scheduler is required: when a
processor is idle, the scheduler executes on that
processor and selects the next thread from the global queue
Scheduling Strategy
• First come first served (FCFS), sketched below
• Smallest number of threads first
Selects a thread of the job with the smallest number of
unscheduled threads; the selected thread is executed
until completion or blocking
• Preemptive smallest number of threads first
Problems with Load Sharing
• The central queue must be accessed in a mutually exclusive
manner; it can become a performance bottleneck with a large
number of processors
• Preempted threads are unlikely to resume execution on the same
processor, making caching inefficient
• If all threads are treated as a shared pool, it is unlikely
that all the threads of a program execute at the same time. If
there is frequent interaction among the threads, the
performance of the program can suffer seriously
Gang Scheduling
• Scheduling a related set of tasks simultaneously on a
set of processors
• If closely related tasks execute in parallel,
synchronization blocking may be reduced
• Scheduling overhead is also reduced, since each decision
affects multiple threads and processors
• It is beneficial for medium to fine grained parallel
applications
• Simultaneous execution of cooperating threads can
reduce time spent in resource allocation (a sketch follows)
Dedicated Processor Assignment
• A group of processors is dedicated to an application for
the duration of that application
• These processors are not multiprogrammed
• This approach is used in highly parallel systems with
hundreds of processors, where processor utilization is
not as important as performance
• Avoiding process switching results in substantial
speedup
• The number of processors allocated is decided by the
number of threads of the application that need to
execute simultaneously for acceptable performance
(sketched below)
Dynamic Scheduling
• With support from the programming language and system
tools, the number of threads in a job can be altered dynamically
• A new job is started with one processor
• As jobs make new requests, the OS allocates processors as
per availability; if a request cannot be satisfied, it is
added to a queue of outstanding requests
• The application decides which of its threads to
suspend or execute within its allocated partition of
processors
• When processors become available, the OS scans the queue,
allocates one processor to each new job, and assigns the
remaining processors to other requests on an FCFS basis
(sketched below)
