
PROCESS (UNIT II)

Dr. Indrajeet Kumar


Department of CSE
Graphic Era Hill University
Dehradun
Introduction
• Early computer systems allowed only one program to be executed at a time.
• That program had complete control of the system and access to all of its resources.
• Modern systems allow multiprogramming: multiple programs are loaded into memory and executed concurrently.
• This evolution required firmer control and management of the various programs, and it led to the notion of a process.
Process (Introduction)

• A process is
–A program under execution.
–The unit of dispatch.
–An instance of a program.
–A locus of control.
Process VS Program
Process:
• Program under execution.
• Lies in main memory.
• Uses resources.
• Active instance.

Program:
• Set of instructions.
• Lies in secondary memory.
• Does not use resources.
• Passive instance.
Process (Cont…)
• A process is more than the program code.
• It includes the current activity, as represented by the value of the program counter and the contents of the processor registers.
• It also includes a process stack, which contains temporary data such as function parameters, return addresses, and local variables.
• A data section contains global variables.
• It may also include a heap, which is memory that is dynamically allocated at run time.
Process layout in memory:
Stack — temporary data: function parameters, return addresses, local variables.
Heap — memory allocated dynamically at run time (dynamic memory allocation).
Data — static and global variables.
Text — program code.

Loading the process
• Two common methods:
– Double-clicking an icon representing the executable file.
– Entering the name of the executable file on the command prompt.

• A program may be run as multiple processes; each has a separate process id, data section, stack, and heap, but all share a common text section.
Operations on a Process
• Creation

• Scheduling

• Execution

• Killing or Termination
States of Process
• A process has the following states:
– New.
– Running.
– Waiting.
– Ready.
– Terminated
Process state diagram
Uni-Programming OS

[State diagram: NEW —admitted→ RUNNING —exit→ TERMINATED; RUNNING ↔ I/O WAIT on I/O or event wait / completion]
Process state diagram
Multi-programming OS
States of Process
• A process has the following states:
– New.
– Running.
– Block & wait.
– Ready.
– Terminated.
– Suspend ready.
– Suspend wait (suspend block).
State transition diagram
Schedulers in OS
• Schedulers in OS are special system
software.
• They help in scheduling the processes in
various ways.
• They are mainly responsible for selecting
the jobs to be submitted into the system
and deciding which process to run.
Types

• Long-term scheduler

• Short-term scheduler

• Medium-term scheduler
Long-term Scheduler
• Long-term scheduler is also known as Job
Scheduler.
• It selects a balanced mix of I/O bound and CPU
bound processes from the secondary memory
(new state).
• Then, it loads the selected processes into the
main memory (ready state) for execution.
• The primary objective of long-term scheduler
is to maintain a good degree of
multiprogramming.
Degree of Multiprogramming
In multiprogramming systems,
• Multiple processes may be present in the ready
state which are all ready for execution.
• Degree of multiprogramming is the maximum
number of processes that can be present in the
ready state.
• Long-term scheduler controls the degree of
multiprogramming.
• Medium-term scheduler reduces the degree of
multiprogramming.
Short-term Scheduler
• Short-term scheduler is also known as
CPU Scheduler.
• It decides which process to execute next
from the ready queue.
• After short-term scheduler decides the
process, Dispatcher assigns the decided
process to the CPU for execution.
• The primary objective of short-term
scheduler is to increase the system
performance.
Medium-term Scheduler
• Medium-term scheduler swaps-out the processes
from main memory to secondary memory to free up
the main memory when required.
• Thus, medium-term scheduler reduces the degree of
multiprogramming.
• After some time when main memory becomes
available, medium-term scheduler swaps-in the
swapped-out process to the main memory and its
execution is resumed from where it left off.
• Swapping may also be required to improve the
process mix.
Attributes of process
• A process has the following attributes:
– Process id
– Program counter
– Process state
– Priority
– General purpose registers
– List of open files
– List of open devices
– Protection information
Process control block (PCB)

• Each process is represented in the OS by a PCB.
• It contains many pieces of information associated with a specific process, including the values of the process attributes.
Structure of PCB
Process id — identifies the process.
Program counter — indicates the address of the next instruction to be executed.
Process state — contains information about the current state.
Priority — contains the priority of the process.
CPU registers — depend on the hardware architecture: accumulator, index registers, stack pointer, general purpose registers.
List of open files.
List of open devices.
Memory protection information.
CPU Switch From Process to Process
Process Scheduling Queues
• Job queue – set of all processes in the
system.
• Ready queue – set of all processes
residing in main memory,
ready and waiting to execute.
• Device queues – set of processes waiting
for an I/O device.
• Process migration between the various
queues.
Ready Queue And Various I/O Device
Queues
Representation of Process
Scheduling
CPU Scheduling
• Maximum CPU utilization is obtained with multiprogramming.
• CPU–I/O Burst Cycle – process execution consists of a cycle of CPU execution and I/O wait.
CPU Scheduling
• Selects among the processes in memory that
are ready to execute, and allocates the CPU
to one of them.
• Non-preemptive: once the CPU is given to a process, it cannot be preempted until the process completes its CPU burst.
• Preemptive: the scheduler may preempt a low-priority running process whenever a high-priority process enters the ready state.
CPU Scheduling
• CPU scheduling decisions may take place when a
process:
1. Switches from running to waiting state.
2. Switches from running to ready state.
3. Switches from waiting to ready.
4. Terminates.
• Scheduling under 1 and 4 is non-preemptive.
• All other scheduling is preemptive.
Scheduling Criteria
• CPU utilization – keep the CPU as busy as possible
• Throughput – # of processes that complete their
execution per time unit
• Turnaround time – amount of time to execute a
particular process
• Waiting time – amount of time a process has been
waiting in the ready queue
• Response time – amount of time it takes from when a
request was submitted until the first response is
produced, not output (for time-sharing environment)
Optimization Criteria
• Max CPU utilization

• Max throughput

• Min turnaround time

• Min waiting time

• Min response time


Different times with respect to a process:
Arrival Time: Time at which the process arrives in the ready
queue.
Completion Time: Time at which process completes its
execution.
Burst Time: Time required by a process for CPU execution.
Turn Around Time: Time Difference between completion
time and arrival time.
Turn Around Time = Completion Time – Arrival Time
Waiting Time(W.T): Time Difference between turn around
time and burst time.
Waiting Time = Turn Around Time – Burst Time
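
As a quick sanity check of these two formulas, here is a minimal sketch in Python (the values are illustrative; they match the P1 row of the FCFS example a few slides below):

```python
# Worked check of the two formulas above (AT = 3, BT = 4, CT = 7).
arrival_time = 3
burst_time = 4
completion_time = 7

turnaround_time = completion_time - arrival_time  # 7 - 3 = 4
waiting_time = turnaround_time - burst_time       # 4 - 4 = 0

print(turnaround_time, waiting_time)              # prints: 4 0
```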
Single Processor Scheduling Algorithms

• First Come, First Served (FCFS)


• Shortest Job First (SJF)
• Priority
• Round Robin (RR)
FCFS Scheduling
Process Burst Time
P1 24
P2 3
P3 3
• With FCFS, the process that requests the CPU first is allocated the CPU
first
• Case #1: Suppose that the processes arrive in the order: P1 , P2 , P3
The Gantt Chart for the schedule is:
P1 P2 P3

0 24 27 30
• Waiting time for P1 = 0; P2 = 24; P3 = 27
• Average waiting time: (0 + 24 + 27)/3 = 17
• Average turn-around time: (24 + 27 + 30)/3 = 27
FCFS Scheduling (Cont.)
• Case #2: Suppose that the processes arrive in the order: P2 , P3 , P1

• The Gantt chart for the schedule is:

P2 P3 P1

0 3 6 30

• Waiting time for P1 = 6; P2 = 0; P3 = 3


• Average waiting time: (6 + 0 + 3)/3 = 3 (Much better than
Case #1)
• Average turn-around time: (3 + 6 + 30)/3 = 13
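
A minimal FCFS sketch in Python (function and field names are my own) that reproduces the Case #2 numbers; with equal arrival times, the queue order of the input list is preserved:

```python
def fcfs(processes):
    """processes: list of (name, arrival, burst), queued in the given order."""
    processes = sorted(processes, key=lambda p: p[1])  # stable: keeps queue order
    time, results = 0, {}
    for name, arrival, burst in processes:
        time = max(time, arrival)                   # CPU idles until the process arrives
        waiting = time - arrival                    # time spent in the ready queue
        time += burst                               # non-preemptive: run to completion
        results[name] = (waiting, time - arrival)   # (waiting, turnaround)
    return results

# Case #2: the processes arrive in the order P2, P3, P1 (all at time 0).
res = fcfs([("P2", 0, 3), ("P3", 0, 3), ("P1", 0, 24)])
print(sum(w for w, _ in res.values()) / 3)  # average waiting: (0 + 3 + 6) / 3 = 3.0
print(sum(t for _, t in res.values()) / 3)  # average turnaround: (3 + 6 + 30) / 3 = 13.0
```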
Example
Process Id   Arrival time   Burst time
P1           3              4
P2           5              3
P3           0              2
P4           5              1
P5           4              3

PID   AT   BT   CT   TAT           WT
P1    3    4    7    7 – 3 = 4     4 – 4 = 0
P2    5    3    13   13 – 5 = 8    8 – 3 = 5
P3    0    2    2    2 – 0 = 2     2 – 2 = 0
P4    5    1    14   14 – 5 = 9    9 – 1 = 8
P5    4    3    10   10 – 4 = 6    6 – 3 = 3
FCFS Scheduling (Cont.)
• Case #1 is an example of the convoy effect; all the
other processes wait for one long-running process to
finish using the CPU
– This problem results in lower CPU and device
utilization; Case #2 shows that higher utilization might
be possible if the short processes were allowed to run
first.
• The FCFS scheduling algorithm is non-preemptive
– Once the CPU has been allocated to a process, that
process keeps the CPU until it releases it either by
terminating or by requesting I/O.
– It is a troublesome algorithm for time-sharing systems
Example 2 – FCFS

Final Gantt Chart

P1 waiting time: 0
P2 waiting time: 24
P3 waiting time: 27
The average waiting time: (0+24+27)/3 = 17
Shortest-Job-First (SJF) Scheduling
• Associate with each process the length of its next CPU
burst. Use these lengths to schedule the process with the
shortest time.
• Two schemes:
– Nonpreemptive – once CPU given to the process it
cannot be preempted until completes its CPU burst.
– Preemptive – if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF).
• SJF is optimal – gives minimum average waiting time for a
given set of processes.
Example of Non-Preemptive SJF
Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
• SJF (non-preemptive)

P1 | P3 | P2 | P4

0    7    8    12    16

• Average waiting time = (0 + 6 + 3 + 7)/4 = 4


Example of Preemptive SJF
Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
• SJF (preemptive)

P1 P2 P3 P2 P4 P1

0 2 4 5 7 11 16

• Average waiting time = (9 + 1 + 0 + 2)/4 = 3
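
A minimal SRTF sketch in Python, stepping one time unit at a time (the data layout is my own); it reproduces the average waiting time of 3 computed above:

```python
def srtf(processes):
    """processes: dict name -> (arrival, burst). Returns waiting time per process."""
    remaining = {n: b for n, (a, b) in processes.items()}
    completion, time = {}, 0
    while remaining:
        # Among the processes that have arrived, pick the shortest remaining time.
        ready = {n: r for n, r in remaining.items() if processes[n][0] <= time}
        if not ready:
            time += 1                      # CPU idle until the next arrival
            continue
        current = min(ready, key=ready.get)
        remaining[current] -= 1            # run the chosen process for one unit
        time += 1
        if remaining[current] == 0:
            completion[current] = time
            del remaining[current]
    # waiting = turnaround - burst = (completion - arrival) - burst
    return {n: completion[n] - a - b for n, (a, b) in processes.items()}

waits = srtf({"P1": (0, 7), "P2": (2, 4), "P3": (4, 1), "P4": (5, 4)})
print(waits)                               # {'P1': 9, 'P2': 1, 'P3': 0, 'P4': 2}
print(sum(waits.values()) / len(waits))    # average waiting time = 3.0
```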


• Consider three processes, all arriving at time zero, with total execution times of 10, 20, and 30 units respectively. Each process spends the first 20% of its execution time doing I/O, the next 70% doing computation, and the last 10% doing I/O again. The operating system uses a shortest-remaining-compute-time-first scheduling algorithm and schedules a new process either when the running process gets blocked on I/O or when the running process finishes its compute burst. Assume that all I/O operations can be overlapped as much as possible. For what percentage of time does the CPU remain idle?

PID          Total Time   I/O Burst   CPU Burst   I/O Burst
Process P1   10           2           7           1
Process P2   20           4           14          2
Process P3   30           6           21          3
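
Worked timeline (a sketch, under the stated assumptions): at t = 0 all three processes start their initial I/O, so the CPU idles during 0–2 until P1's I/O completes. P1 computes during 2–9; its final I/O (9–10) overlaps with P2's compute. P2 (ready since t = 4) computes during 9–23; its final I/O (23–25) overlaps with P3's compute. P3 (ready since t = 6) computes during 23–44 and finishes its final I/O at t = 47. The CPU is therefore idle for 2 + 3 = 5 of 47 units, i.e. roughly 10.6% of the time.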
HW

Process No.   Arrival Time   CPU Burst   I/O Burst   CPU Burst
P1            0              3           2           2
P2            0              2           4           1
P3            2              1           3           2
P4            5              2           2           1
Determining Length of Next CPU Burst
• Can estimate the length of the next CPU burst by simple averaging.
• Can also be done using the lengths of previous CPU bursts, with exponential averaging:

1. t(n) = actual length of the nth CPU burst
2. τ(n+1) = predicted value for the next CPU burst
3. α, 0 ≤ α ≤ 1
4. Define: τ(n+1) = α·t(n) + (1 − α)·τ(n)
Examples of Exponential Averaging
• α = 0
– τ(n+1) = τ(n)
– Recent history does not count.
• α = 1
– τ(n+1) = t(n)
– Only the actual last CPU burst counts.
• If we expand the formula, we get:
τ(n+1) = α·t(n) + (1 − α)·α·t(n−1) + … + (1 − α)^j·α·t(n−j) + … + (1 − α)^(n+1)·τ(0)
• Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor.
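
A minimal sketch of the exponential average in Python (the initial guess τ(0) and the burst history are illustrative values; α = 1/2 weights the most recent burst and the past history equally):

```python
def predict_next_burst(alpha, tau0, bursts):
    """Exponential averaging: tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n)."""
    tau = tau0                   # initial prediction, before any burst is measured
    for t in bursts:             # t = measured length of each successive CPU burst
        tau = alpha * t + (1 - alpha) * tau
    return tau                   # prediction for the next CPU burst

# Illustrative history: guess 10, then measured bursts of 6, 4, 6, 4, 13, 13, 13.
print(predict_next_burst(0.5, 10, [6, 4, 6, 4, 13, 13, 13]))  # 12.0
```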
Priority Scheduling
• The SJF algorithm is a special case of the general priority scheduling
algorithm
• A priority number (integer) is associated with each process
• The CPU is allocated to the process with the highest priority (smallest
integer = highest priority)
• Priority scheduling can be either preemptive or non-preemptive
– A preemptive approach will preempt the CPU if the priority of the
newly-arrived process is higher than the priority of the currently
running process
– A non-preemptive approach will simply put the new process (with
the highest priority) at the head of the ready queue
• SJF is a priority scheduling algorithm where priority is the predicted next
CPU burst time
• The main problem with priority scheduling is starvation, that is, low
priority processes may never execute
• A solution is aging; as time progresses, the priority of a process in the
ready queue is increased
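
A minimal sketch of non-preemptive priority scheduling with aging in Python (the aging step and data layout are my own assumptions; a lower number means a higher priority):

```python
def priority_with_aging(processes, aging_step=1):
    """processes: list of [priority, name, burst]; lower number = higher priority.
    Each time the CPU is handed out, every process still waiting has its
    priority number reduced, so a low-priority process cannot starve forever."""
    ready = [list(p) for p in processes]   # copy so priorities can be mutated
    time, schedule = 0, []
    while ready:
        ready.sort(key=lambda p: p[0])     # pick the highest-priority process
        priority, name, burst = ready.pop(0)
        schedule.append((time, name))
        time += burst                      # non-preemptive: run to completion
        for p in ready:                    # aging: everyone still waiting
            p[0] = max(0, p[0] - aging_step)
    return schedule

print(priority_with_aging([[1, "A", 5], [4, "B", 2], [9, "C", 3]]))
# [(0, 'A'), (5, 'B'), (7, 'C')]
```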
Priority Scheduling
• Consider the set of 4 processes whose arrival
time and burst time are given below-

Process No.   Arrival Time   Priority   CPU Burst   I/O Burst   CPU Burst
P1            0              2          1           5           3
P2            2              3          3           3           1
P3            3              1          2           3           1

• Average turnaround time = (10 + 13 + 6) / 3 = 29 / 3 = 9.67 units
• Average waiting time = (6 + 9 + 3) / 3 = 18 / 3 = 6 units
Round Robin (RR)
• Each process gets a small unit of CPU time (time
quantum), usually 10-100 milliseconds. After this time
has elapsed, the process is preempted and added to the
end of the ready queue.
• If there are n processes in the ready queue and the time
quantum is q, then each process gets 1/n of the CPU
time in chunks of at most q time units at once. No
process waits more than (n-1)q time units.
• Performance
– q large → behaves like FIFO.
– q small → q must be large with respect to the context-switch time; otherwise the overhead is too high.
Example: RR with Time Quantum = 20

Process Burst Time


P1 53
P2 17
P3 68
P4 24
• The Gantt chart is:

P1 P2 P3 P4 P1 P3 P4 P1 P3 P3

0 20 37 57 77 97 117 121 134 154 162

• Typically, higher average turnaround than SJF, but better response.


• Consider the set of 5 processes whose arrival
time and burst time are given below-
Process Id   Arrival time   Burst time
P1           0              5
P2           1              3
P3           2              1
P4           3              2
P5           4              3

If the CPU scheduling policy is Round Robin with time quantum = 2 units, calculate the average waiting time and average turnaround time.

• Ready queue (front at the right; the order of execution reads right to left):
• P5, P1, P2, P5, P4, P1, P3, P2, P1

Process Id   Exit time   Turn Around time   Waiting time
P1           13          13 – 0 = 13        13 – 5 = 8
P2           12          12 – 1 = 11        11 – 3 = 8
P3           5           5 – 2 = 3          3 – 1 = 2
P4           9           9 – 3 = 6          6 – 2 = 4
P5           14          14 – 4 = 10        10 – 3 = 7

• Average turnaround time = (13 + 11 + 3 + 6 + 10) / 5 = 43 / 5 = 8.6 units
• Average waiting time = (8 + 8 + 2 + 4 + 7) / 5 = 29 / 5 = 5.8 units
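
A minimal round-robin sketch in Python that reproduces the exit times above; as in the trace, a process arriving during a time slice is queued ahead of the preempted process (field names are my own):

```python
from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, arrival, burst), sorted by arrival time."""
    remaining = {name: burst for name, arrival, burst in processes}
    pending = deque(processes)                 # processes that have not arrived yet
    ready, time, exit_time = deque(), 0, {}
    while pending or ready:
        while pending and pending[0][1] <= time:
            ready.append(pending.popleft()[0])  # admit new arrivals
        if not ready:
            time = pending[0][1]               # CPU idles until the next arrival
            continue
        name = ready.popleft()
        run = min(quantum, remaining[name])
        time += run
        remaining[name] -= run
        while pending and pending[0][1] <= time:
            ready.append(pending.popleft()[0])  # arrivals during this slice go first
        if remaining[name] > 0:
            ready.append(name)                 # preempted: back of the ready queue
        else:
            exit_time[name] = time
    return exit_time

procs = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 1), ("P4", 3, 2), ("P5", 4, 3)]
print(round_robin(procs, 2))  # {'P3': 5, 'P4': 9, 'P2': 12, 'P1': 13, 'P5': 14}
```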
How a Smaller Time Quantum Increases Context Switches
Turnaround Time Varies With The Time Quantum
Multi-Level Queue Scheduling
• Multi-level queue scheduling is used when
processes can be classified.
• For example, foreground (interactive)
processes and background (batch)
processes.
– The two types of processes have different
response-time requirements and so may have
different scheduling needs.
– Also, foreground processes may have priority
(externally defined) over background processes.
Multi-Level Queue Scheduling
• A multi-level queue scheduling algorithm
partitions the ready queue into several
separate queues.
• The processes are permanently assigned
to one queue, generally based on some
property of the process such as memory
size, process priority, or process type.
Multi-Level Queue Scheduling
• Each queue has its own scheduling algorithm
– The foreground queue might be scheduled
using an RR algorithm.
– The background queue might be scheduled
using an FCFS algorithm.
• In addition, there needs to be scheduling among
the queues, which is commonly implemented as
fixed-priority pre-emptive scheduling
– The foreground queue may have absolute
priority over the background queue
Multi-level Queue Scheduling
• One example of a multi-level queue is a set of five queues such as: system processes, interactive processes, interactive editing processes, batch processes, and student processes.
Multi-level Queue Scheduling
• Each queue has absolute priority over
lower priority queues
• For example, no process in the batch queue can run unless the queues above it are empty.
• However, this can result in starvation for
the processes in the lower priority queues
Multilevel Queue Scheduling
• Another possibility is to time slice among the queues
• Each queue gets a certain portion of the CPU time,
which it can then schedule among its various
processes:
– The foreground queue can be given 80% of the
CPU time for RR scheduling.
– The background queue can be given 20% of the
CPU time for FCFS scheduling.
Multi-Level Feedback Queue Scheduling
• In multi-level feedback queue scheduling, a process
can move between the various queues; aging can be
implemented this way.
• A multilevel-feedback-queue scheduler is defined by
the following parameters:
– Number of queues
– Scheduling algorithms for each queue
– Method used to determine when to upgrade a process to a higher-priority queue.
– Method used to determine when to demote a
process.
– Method used to determine which queue a
process will enter when that process needs
service.
Example of Multilevel Feedback Queue Scheduling
– A new job enters queue Q0 (RR) and is placed at the end. When it
gains the CPU, the job receives 8 milliseconds. If it does not finish in
8 milliseconds, the job is moved to the end of queue Q1.
– A Q1 (RR) job receives 16 milliseconds. If it still does not complete,
it is pre-empted and moved to queue Q2 (FCFS).
[Three queues: Q0 – RR, q = 8 ms → Q1 – RR, q = 16 ms → Q2 – FCFS]
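
A minimal sketch of this three-queue scheme in Python; it assumes all jobs are present at time 0, so the queues can simply be drained in priority order (job names and burst values are illustrative):

```python
from collections import deque

def mlfq(jobs, quanta=(8, 16)):
    """jobs: list of (name, burst), all arriving at time 0.
    Q0 and Q1 are round robin with the given quanta; a job that does not
    finish within its slice is demoted; Q2 runs the leftovers FCFS."""
    remaining = dict(jobs)
    queues = [deque(name for name, _ in jobs), deque(), deque()]
    time, log = 0, []
    for level in (0, 1, 2):                     # Q0 has absolute priority, then Q1, Q2
        while queues[level]:
            name = queues[level].popleft()
            run = remaining[name] if level == 2 else min(quanta[level], remaining[name])
            log.append((time, name, f"Q{level}"))
            time += run
            remaining[name] -= run
            if remaining[name] > 0:
                queues[level + 1].append(name)  # demote to the next queue
    return log

print(mlfq([("A", 5), ("B", 30), ("C", 12)]))
# [(0, 'A', 'Q0'), (5, 'B', 'Q0'), (13, 'C', 'Q0'),
#  (21, 'B', 'Q1'), (37, 'C', 'Q1'), (41, 'B', 'Q2')]
```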
Multiple-Processor Scheduling
• If multiple CPUs are available, load sharing
among them becomes possible; the
scheduling problem becomes more
complex.
• We concentrate in this discussion on systems
in which the processors are identical
(homogeneous) in terms of their functionality.
– We can use any available processor to run
any process in the queue.
• Two approaches: Asymmetric processing
and symmetric processing (see next slide)
Multiple-Processor Scheduling
• Asymmetric Multiprocessing (ASMP)
– One processor handles all scheduling decisions,
I/O processing, and other system activities
– The other processors execute only user code
– Because only one processor accesses the
system data, the need for data sharing is
reduced.
• Symmetric Multiprocessing (SMP)
– Each processor schedules itself
– All processes may be in a common ready
queue or each processor may have its own
ready queue
– Either way, each processor examines the
ready queue and selects a process to execute
Multiple Processor Scheduling
➢Efficient use of the CPUs requires load balancing to keep
the workload evenly distributed
❖In a Push migration approach, a specific task regularly
checks the processor loads and redistributes the
waiting processes as needed
❖In a Pull migration approach, an idle processor pulls a
waiting job from the queue of a busy processor
➢Virtually all modern operating systems support SMP, including Windows XP, Solaris, Linux, and Mac OS X
Real-Time Scheduling
• Hard real-time systems – required to complete a
critical task within a guaranteed amount of time.
• Soft real-time computing – requires that critical
processes receive priority over less fortunate ones.
Algorithm Evaluation
Criteria may include several measures, such as:
❑Maximize CPU utilization under the constraint that
the maximum response time is 1 second.
❑Maximize throughput such that turnaround time is linearly proportional to total execution time.
Technique for Algorithm Evaluation
• Deterministic modelling – takes a
particular predetermined workload and
defines the performance of each algorithm
for that workload.
Deterministic Modelling: Using FCFS scheduling

Process Burst Time


P1 10
P2 29
P3 3
P4 7
P5 12
Avg. Waiting Time = 28 ms.
Deterministic Modelling:
Using non-preemptive SJF scheduling

Process Burst Time


P1 10
P2 29
P3 3
P4 7
P5 12

Avg. Waiting Time = 13 ms.


Deterministic Modelling:
Using round robin scheduling
(Time quantum = 10ms)

Process Burst Time


P1 10
P2 29
P3 3
P4 7
P5 12
Avg. Waiting Time: = 23 ms.
Threads
• A thread, sometimes called a lightweight process (LWP), is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a register set, and a stack.
• If a web server ran as a traditional single-threaded process, it would be able to service only one client at a time.
• It is generally more efficient for one process containing multiple threads to serve the same purpose. This approach multithreads the web-server process, as sketched below.
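
A minimal sketch of the one-thread-per-request idea in Python (the handler and the loop standing in for incoming clients are placeholders, not a real web server):

```python
import threading

def handle_client(client_id):
    # Each request runs in its own thread; all threads share the process's
    # code, data, and open files, but each has a private stack and registers.
    print(f"serving client {client_id} on {threading.current_thread().name}")

# One thread per incoming request, instead of one single-threaded process
# serving clients strictly one at a time.
for client_id in range(3):
    threading.Thread(target=handle_client, args=(client_id,)).start()
```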
Single and Multithreaded Processes
Benefits
• Responsiveness: Multithreading an interactive application may allow a program to continue running even if part of it is blocked or performing a lengthy operation, thereby increasing responsiveness to the user.

• Resource Sharing: Threads share the memory and the resources of the process to which they belong.

• Economy: Allocating memory and resources for process creation is costly; threads share the resources of the process to which they belong, so creating them is cheaper.

• Utilization of multiprocessor architectures: The benefits of multithreading can be greatly increased in a multiprocessor architecture, where each thread may run in parallel on a different processor.
User Threads
• Thread management is done by a user-level threads library.
• User threads are supported above the kernel and implemented by a thread library at the user level.
Kernel Threads
Kernel threads are supported directly by the operating system:
the kernel performs thread creation, scheduling, and management in kernel space.
Multithreading Models
• Many-to-One

• One-to-One

• Many-to-Many
Many-to-One
Many user-level threads mapped to
single kernel thread.
One-to-one
Each user-level thread maps to a kernel thread.
Many-to-Many
Allows many user level threads to be mapped to many kernel
threads
Allows the operating system to create a sufficient number of kernel
threads.
Thank you
