Unit II - Process Scheduling
[Figure: Process Control Block fields, including process identifier, state, and accounting information]
Process State - Five-State Model
[Figure: Five-state process model, extended with the Suspend and Suspend Blocked states]
Process Creation
A parent process creates children processes, which, in turn, create other processes, forming a tree of processes.
• Non-preemptive (cooperative) scheduling was used by Microsoft Windows 3.x. Windows 95 introduced preemptive
scheduling, and all subsequent versions of Windows operating systems have used preemptive scheduling.
• Preemptive scheduling can result in race conditions when data are shared among several processes.
Dispatcher
Dispatcher module gives control of the CPU to the process selected by the short-term scheduler;
this involves:
switching context
switching to user mode
jumping to the proper location in the user program to restart that program
Dispatch latency – time it takes for the dispatcher to stop one process and start another running
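As a rough, user-level illustration of "switching context" and "jumping to the proper location", the sketch below uses the POSIX ucontext API to save one execution context and resume another. It is only an analogy for what a kernel dispatcher does; every name in it is illustrative.

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;
static char task_stack[16384];

static void task(void) {
    printf("task: running after being dispatched\n");
    /* returning resumes uc_link, i.e. main_ctx */
}

int main(void) {
    getcontext(&task_ctx);                     /* initialize the task context */
    task_ctx.uc_stack.ss_sp = task_stack;      /* give the task its own stack */
    task_ctx.uc_stack.ss_size = sizeof task_stack;
    task_ctx.uc_link = &main_ctx;              /* where control goes when task returns */
    makecontext(&task_ctx, task, 0);

    printf("dispatcher: switching to the selected task\n");
    swapcontext(&main_ctx, &task_ctx);         /* save current state, run the task */
    printf("dispatcher: control is back\n");
    return 0;
}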
Scheduling Criteria
CPU utilization – keep the CPU as busy as possible
Throughput – number of processes that complete their execution per time unit
Turnaround time – amount of time to execute a particular process
Waiting time – amount of time a process has been waiting in the ready queue
Response time – amount of time it takes from when a request was submitted until the first response is produced, not output (for a time-sharing environment)
Scheduling Algorithm Optimization Criteria
Max CPU utilization
Max throughput
Min turnaround time
Min waiting time
Min response time
First-Come, First-Served (FCFS) Scheduling
Process   Burst Time
P1        24
P2        3
P3        3
Suppose that the processes arrive in the order: P1, P2, P3.
The Gantt chart for the schedule is:
| P1 | P2 | P3 |
0    24   27   30
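Reading waiting times off the chart: P1 = 0, P2 = 24, P3 = 27, so the average waiting time is (0 + 24 + 27) / 3 = 17 ms. A minimal C sketch of the same calculation (burst values taken from the example above):

#include <stdio.h>

int main(void) {
    int burst[] = { 24, 3, 3 };      /* P1, P2, P3 in arrival order */
    int n = 3, wait = 0, total = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waiting time = %d\n", i + 1, wait);
        total += wait;
        wait += burst[i];            /* the next process also waits for this burst */
    }
    printf("average waiting time = %.2f\n", (double)total / n);
    return 0;
}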
Shortest-Job-First (SJF) Scheduling
Associate with each process the length of its next CPU burst
Use these lengths to schedule the process with the shortest time
SJF is optimal – gives minimum average waiting time for a given set of processes
The difficulty is knowing the length of the next CPU request
Could ask the user
SJF scheduling is used frequently in long-term scheduling.
Example of SJF
Process   Burst Time
P1        6
P2        8
P3        7
P4        3
P1 6 2
P2 2 5
P3 8 1
P4 3 0
P5 4 4
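Assuming all four processes in the first table arrive at time 0, SJF runs them in the order P4 (3), P1 (6), P3 (7), P2 (8), with Gantt boundaries 0, 3, 9, 16, 24. The waiting times are P4 = 0, P1 = 3, P3 = 9, P2 = 16, so the average waiting time is (0 + 3 + 9 + 16) / 4 = 7 ms.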
Determining Length of Next CPU Burst
Can only estimate the length – should be similar to the previous one
Then pick the process with the shortest predicted next CPU burst
Can be done by using the length of previous CPU bursts, using exponential averaging:
τ(n+1) = α·t(n) + (1 − α)·τ(n)
where t(n) is the actual length of the nth CPU burst, τ(n+1) is the predicted value of the next burst, and 0 ≤ α ≤ 1
Commonly, α set to ½
Preemptive version called shortest-remaining-time-first
Prediction of the Length of the Next CPU Burst
Examples of Exponential Averaging
α = 0:
τ(n+1) = τ(n)
Recent history does not count
α = 1:
τ(n+1) = t(n)
Only the actual last CPU burst counts
If we expand the formula, we get:
τ(n+1) = α·t(n) + (1 − α)·α·t(n−1) + … + (1 − α)^j·α·t(n−j) + … + (1 − α)^(n+1)·τ(0)
Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor
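A small sketch of this update rule, using α = 1/2, an initial guess τ(0) = 10, and a made-up sequence of observed bursts:

#include <stdio.h>

int main(void) {
    double alpha = 0.5;
    double tau = 10.0;                              /* initial guess tau(0) */
    double t[] = { 6, 4, 6, 4, 13, 13, 13 };        /* observed bursts t(n), sample data */
    int n = sizeof t / sizeof t[0];

    for (int i = 0; i < n; i++) {
        printf("predicted %.2f, observed %.0f\n", tau, t[i]);
        tau = alpha * t[i] + (1 - alpha) * tau;     /* tau(n+1) = a*t(n) + (1-a)*tau(n) */
    }
    printf("next prediction: %.2f\n", tau);
    return 0;
}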
Example of Shortest-remaining-time-first
Now we add the concepts of varying arrival times and preemption to the
analysis
Process Arrival Time Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
Preemptive SJF Gantt Chart
| P1 | P2 | P4 | P1 | P3 |
0    1    5    10   17   26
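From this chart, P1 waits from time 1 to 10 (9 ms), P2 waits 0 ms, P3 waits from its arrival at 2 until 17 (15 ms), and P4 waits from its arrival at 3 until 5 (2 ms), so the average waiting time is (9 + 0 + 15 + 2) / 4 = 6.5 ms.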
Round Robin (RR)
Each process gets a small unit of CPU time (a time quantum q, typically 10-100 milliseconds). After this time has elapsed, the process is preempted and added to the end of the ready queue.
If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n − 1)q time units.
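For a concrete feel, the sketch below simulates round robin on the FCFS example's burst times (24, 3, 3 ms), with an assumed quantum of q = 4 ms and all processes arriving at time 0; the cyclic scan is a simplification of a real ready queue.

#include <stdio.h>

int main(void) {
    int burst[] = { 24, 3, 3 };
    int remaining[] = { 24, 3, 3 };
    int waited[] = { 0, 0, 0 };
    int n = 3, q = 4, clock = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;
            int run = remaining[i] < q ? remaining[i] : q;   /* run at most one quantum */
            for (int j = 0; j < n; j++)                      /* the others keep waiting */
                if (j != i && remaining[j] > 0) waited[j] += run;
            remaining[i] -= run;
            clock += run;
            if (remaining[i] == 0) done++;
        }
    }
    for (int i = 0; i < n; i++)
        printf("P%d waiting time = %d\n", i + 1, waited[i]);
    printf("all processes done at time %d\n", clock);
    return 0;
}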
A process can move between the various queues; aging can be implemented
this way
Multilevel-feedback-queue scheduler defined by the following parameters:
number of queues
scheduling algorithms for each queue
method used to determine when to upgrade a process
method used to determine when to demote a process
method used to determine which queue a process will enter when that
process needs service
Example of Multilevel Feedback Queue
Three queues:
Q0 – RR with time quantum 8 milliseconds
Q1 – RR time quantum 16 milliseconds
Q2 – FCFS
Scheduling
A new job enters queue Q0 which is served FCFS
When it gains CPU, job receives 8 milliseconds
If it does not finish in 8 milliseconds, job is moved to
queue Q1
At Q1 job is again served FCFS and receives 16
additional milliseconds
If it still does not complete, it is preempted and moved to queue Q2
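A hypothetical sketch of the demotion rule just described, with the three queues and quanta above; the structure and names are illustrative, not a real scheduler interface.

#include <stdio.h>

enum queue { Q0, Q1, Q2 };

static const int quantum_ms[] = { 8, 16, -1 };   /* Q2 is FCFS: no quantum */

static enum queue demote(enum queue q) {         /* used full quantum without finishing */
    return (q == Q0) ? Q1 : Q2;
}

int main(void) {
    int remaining = 30;      /* burst length of a sample job, in ms */
    enum queue q = Q0;       /* a new job enters Q0 */

    while (remaining > 0) {
        int slice = quantum_ms[q];
        int run = (slice < 0 || remaining < slice) ? remaining : slice;
        printf("run %2d ms in Q%d\n", run, q);
        remaining -= run;
        if (remaining > 0 && slice > 0)
            q = demote(q);   /* did not finish within its quantum: move down a queue */
    }
    return 0;
}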
Multiple-Processor Scheduling
If multiple CPUs are available, load sharing becomes possible
CPU scheduling more complex when multiple CPUs are available
Homogeneous processors within a multiprocessor
Asymmetric multiprocessing – only one processor accesses the system data structures,
alleviating the need for data sharing
Symmetric multiprocessing (SMP) – each processor is self-scheduling, all processes in
common ready queue, or each has its own private queue of ready processes
Currently, most common
Processor affinity – process has affinity for processor on which it is currently running
soft affinity
hard affinity
Variations including processor sets
NUMA and CPU Scheduling
Multicore Processors
Recent trend to place multiple processor cores on same physical chip
Faster and consumes less power
Multiple threads per core also growing
Takes advantage of memory stall to make progress on another thread while
memory retrieve happens
Multithreaded Multicore System
Real-Time CPU Scheduling
https://forms.gle/B2ZKyp5eHNyXEAab6
Interprocess Communication
Processes within a system may be independent or cooperating
Cooperating process can affect or be affected by other processes, including sharing data
Clearly, any process that shares data with other processes is a cooperating process.
Reasons for cooperating processes:
Information sharing
Computation speedup
Modularity
Convenience
Cooperating processes need Interprocess communication (IPC)
Two models of IPC
Shared memory
Message passing
Communications Models
Implementation issues:
How are links established?
Can a link be associated with more than two processes?
How many links can there be between every pair of communicating processes?
What is the capacity of a link?
Is the size of a message that the link can accommodate fixed or variable?
Is a link unidirectional or bi-directional?
Message Passing (Cont.)
Logical:
Direct or indirect
Synchronous or asynchronous
Automatic or explicit buffering
Indirect Communication
Messages are sent to and received from mailboxes (also referred to as ports)
Each mailbox has a unique id
Processes can communicate only if they share a mailbox
Mailbox sharing
P1, P2, and P3 share mailbox A
P1, sends; P2 and P3 receive
Who gets the message?
Solutions
Allow a link to be associated with at most two processes
Allow only one process at a time to execute a receive operation
Allow the system to select arbitrarily the receiver. Sender is notified who the
receiver was.
Synchronization
Message passing may be either blocking or non-blocking
Blocking is considered synchronous
Blocking send -- the sender is blocked until the message is received
Blocking receive -- the receiver is blocked until a message is available
Non-blocking is considered asynchronous
Non-blocking send -- the sender sends the message and continues
Non-blocking receive -- the receiver receives:
A valid message, or
Null message
Different combinations possible
If both send and receive are blocking, we have a rendezvous
Synchronization (Cont.)
Producer-consumer becomes trivial
message next_produced;
while (true) {
/* produce an item in next produced */
send(next_produced);
}
message next_consumed;
while (true) {
    receive(next_consumed);
    /* consume the item in next_consumed */
}
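The send()/receive() calls above are pseudocode; as one concrete, hedged possibility, POSIX message queues provide this style of message passing. The queue name /os_demo and the attributes below are illustrative assumptions (link with -lrt on some systems).

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    struct mq_attr attr = { .mq_flags = 0, .mq_maxmsg = 10, .mq_msgsize = 64 };

    /* create (or open) the message queue for sending */
    mqd_t mq = mq_open("/os_demo", O_CREAT | O_WRONLY, 0666, &attr);
    if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

    const char *next_produced = "an item";
    /* mq_send blocks if the queue is full (a blocking send) */
    if (mq_send(mq, next_produced, strlen(next_produced) + 1, 0) == -1) {
        perror("mq_send");
        return 1;
    }
    mq_close(mq);
    return 0;
}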
Example program using fork()
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main() {
    fork();   /* create a child process; both parent and child continue from here */
    printf("Called fork() system call\n");   /* printed twice: once by each process */
    return 0;
}
Examples of IPC Systems - POSIX
POSIX Shared Memory
Process first creates shared memory segment
shm_fd = shm_open(name, O_CREAT | O_RDWR, 0666);
Also used to open an existing segment to share it
Set the size of the object
ftruncate(shm_fd, SIZE);
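Putting these calls together, a hedged producer sketch might look as follows; the object name /OS, the size, and the message are illustrative assumptions (link with -lrt on some systems).

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    const char *name = "/OS";     /* name of the shared memory object */
    const int SIZE = 4096;        /* size of the object in bytes */

    /* create (or open) the shared memory object */
    int shm_fd = shm_open(name, O_CREAT | O_RDWR, 0666);
    if (shm_fd == -1) { perror("shm_open"); return 1; }

    /* set the size of the object */
    ftruncate(shm_fd, SIZE);

    /* map the object into this process's address space */
    char *ptr = mmap(NULL, SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, shm_fd, 0);
    if (ptr == MAP_FAILED) { perror("mmap"); return 1; }

    /* write a message that a consumer process could later read */
    strcpy(ptr, "Hello from the producer");
    return 0;
}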
LINUX Process states
Solaris Scheduling
Solaris uses priority-based thread scheduling where each thread belongs to
one of six classes:
1. Time sharing (TS)
2. Interactive (IA)
3. Real time (RT)
4. System (SYS)
5. Fair share (FSS)
6. Fixed priority (FP)
Windows Scheduling
Windows schedules threads using a priority-based, preemptive scheduling algorithm.
The Windows scheduler ensures that the highest-priority thread will always run.
The portion of the Windows kernel that handles scheduling is called the dispatcher.
A thread selected to run by the dispatcher will run until it is preempted by a higher-
priority thread, until it terminates, until its time quantum ends, or until it calls a
blocking system call, such as for I/O.
If a higher-priority real-time thread becomes ready while a lower-priority thread is
running, the lower-priority thread will be preempted. This preemption gives a real-time
thread preferential access to the CPU when the thread needs such access.
The base priorities for each priority class are:
REALTIME PRIORITY CLASS—24
HIGH PRIORITY CLASS—13
ABOVE NORMAL PRIORITY CLASS—10
NORMAL PRIORITY CLASS—8
BELOW NORMAL PRIORITY CLASS—6
IDLE PRIORITY CLASS—4
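As an illustrative sketch (not part of the original notes), a Win32 process can place itself in one of these priority classes with SetPriorityClass; error handling here is minimal.

#include <windows.h>
#include <stdio.h>

int main(void) {
    /* move this process into the HIGH priority class (base priority 13) */
    if (!SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS)) {
        printf("SetPriorityClass failed: %lu\n", GetLastError());
        return 1;
    }
    printf("current priority class: 0x%lx\n", GetPriorityClass(GetCurrentProcess()));
    return 0;
}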
Linux Scheduling
Linux provides a scheduling algorithm that runs in constant time, known as O(1), regardless of the number of tasks on the system.
The Linux scheduler is a preemptive, priority-based algorithm with two separate
priority ranges: a real-time range from 0 to 99 and a nice value ranging from 100 to
140.
Linux assigns higher-priority tasks longer time quanta and lower-priority tasks shorter
time quanta
The kernel maintains a list of all runnable tasks in a runqueue data structure. Each
processor maintains its own runqueue and schedules itself independently. Each
runqueue contains two priority arrays: active and expired.
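As a small, hedged illustration of the nice value mentioned above, a process can voluntarily lower its own priority with the POSIX nice() call; the increment of 10 is an arbitrary example.

#include <errno.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    errno = 0;
    int value = nice(10);              /* add 10 to the nice value: lower priority */
    if (value == -1 && errno != 0) {   /* -1 can be a valid return, so check errno */
        perror("nice");
        return 1;
    }
    printf("new nice value: %d\n", value);
    /* a CPU-bound loop here would now receive a smaller share of CPU time */
    return 0;
}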