Operating System: Process Management


Dept of Computer Science & Engineering College of Engineering Munnar

Prepared by Shine N Das


Operating System
Process Management

What is a process?
In order to understand process management, let us first understand what a process is and how it
differs from a program. A program does not require any resources, whereas a process does. When a
user wants to execute a program, it is located on the disk and loaded into main memory; at that
point it becomes a process.
Thus a process may be defined as "a program under execution", which competes for the
CPU time and other resources.
Process States
In order to manage switching between processes, the OS defines three basic process states as given
below.

[Figure: the three basic process states - Ready, Run and Wait/Blocked - and the transitions between them]

Running: There is only one process being executed by the CPU at any given moment. This
process is termed the running process.
Ready: A process which is not waiting for any external event such as an I/O operation and which
is not running is said to be in the ready state. The OS maintains a list of all such ready processes and,
when the CPU becomes free, it chooses one of them for execution as per its scheduling policy.
(When you sit at a terminal and give a command to the OS to execute a certain program, the OS
locates the program on the disk, loads it into memory, creates a new process for this program and
enters this process in the list of ready processes.)
Blocked (Waiting): When a process is waiting for an external event such as an I/O operation, the
process is said to be in the blocked state.
The major difference between a blocked and a ready process is that a blocked process cannot be
directly scheduled even if the CPU is free, whereas a ready process can be scheduled if the CPU is
free.

Let us trace the steps that are followed when a running process encounters an I/O instruction.
Assume there are two processes: Process A (running) and Process B (ready).
i) Process A was running and it issues a system call for an I/O operation.
ii) The OS saves the context of Process A in the register save area of Process A.
iii) The OS now changes the state of Process A to blocked and adds it to the list of blocked
processes.
iv) The OS instructs the I/O controller to perform the I/O operation for Process A.
v) The OS now picks up a ready process (say Process B) out of the list of all ready
processes. This is done as per the scheduling algorithm.
vi) The OS restores the context of Process B from the register save area of Process B (we
assume that Process B was an already existing process in the system). If Process B were a
new process, the OS would locate its executable file on the disk.
vii) At this juncture Process B is executing, but the I/O for Process A is also going on.
viii) Eventually the I/O request by Process A is completed. The hardware generates an interrupt
at this point.
ix) As part of the interrupt service routine (ISR), the OS now moves Process B from the running
to the ready state.
x) The OS moves Process A from the blocked to the ready state.
xi) The OS then takes a ready process from the list of ready processes. This is done as per
the scheduling algorithm.
xii) The selected process is dispatched after restoring its context from its register save area.
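
These steps can be mirrored in a few lines of code. The sketch below is purely illustrative (the process dictionaries, the ready_list/blocked_list names and the save_context/restore_context helpers are all made up for this example); a real OS manipulates PCBs and hardware registers instead.

```python
# Minimal sketch of the I/O-blocking scenario described above.
# All names are illustrative only; a real OS works on PCBs and CPU registers.

def save_context(process):
    # Step ii: copy the CPU registers into the process's register save area.
    process["register_save_area"] = dict(process["cpu_registers"])

def restore_context(process):
    # Steps vi/xii: reload the CPU registers from the register save area.
    process["cpu_registers"] = dict(process["register_save_area"])

proc_a = {"name": "A", "state": "running",
          "cpu_registers": {"PC": 100, "SP": 500}, "register_save_area": {}}
proc_b = {"name": "B", "state": "ready",
          "cpu_registers": {}, "register_save_area": {"PC": 200, "SP": 700}}

ready_list, blocked_list = [proc_b], []

# Steps i-iv: Process A issues an I/O system call and is blocked.
save_context(proc_a)
proc_a["state"] = "blocked"
blocked_list.append(proc_a)

# Steps v-vii: a ready process (B) is selected and dispatched.
next_proc = ready_list.pop(0)          # scheduling policy: here simply FIFO
restore_context(next_proc)
next_proc["state"] = "running"

# Steps viii-x: the I/O interrupt arrives; A goes back to the ready list.
blocked_list.remove(proc_a)
proc_a["state"] = "ready"
ready_list.append(proc_a)

print(next_proc["name"], "is running;", proc_a["name"], "is ready again")
```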



State Transition Diagram

[Figure: five-state transition diagram - New, Ready, Run, Wait/Blocked and Halted - with transitions Enter (1), Admit (2), Dispatch (3), Time up (4), I/O request (5), Wake up (6) and Terminate (7)]

Transition 1 (Enter): When we create a process, the OS puts it in the list of new
processes.

Transition 2 (Admit): The OS first introduces a process in the new list and, depending upon the
length of the ready queue, promotes processes from the new list to the ready list.

Transition 3 (Dispatch): When a process's turn comes, the OS dispatches it from the ready state
to the running state by loading the CPU registers from its register save area.

Transition 4 (Time Up): Each process is normally given a certain time to run. This is known
as a time slice. When the time slice for a process is over, it is put in the ready state again.

Transition 5 (I/O Request): Before the time slice is over, if the process wants to perform some
I/O operation, the OS makes this process blocked and takes up the next ready process.

Transition 6 (Wake Up): When the I/O for the original process is over, the hardware generates
an interrupt, whereupon the OS changes this process to the ready state. This process can again be
dispatched when its turn arrives.

Transition 7 (Terminate/Halt): The whole cycle is repeated until the process is terminated.
After termination, it is possible for the OS to put this process into the halted state.

The OS, therefore, provides for at least seven basic system calls or routines.

i) Enter : → New
ii) Admit : New → Ready
iii) Dispatch : Ready → Running
iv) Time up : Running → Ready
v) Block : Running → Blocked
vi) Wake-up : Blocked → Ready
vii) Halt : Running → Halted
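
The seven calls form a small state machine. As an illustration only (the TRANSITIONS table and the apply function are hypothetical, not a real OS interface), the legal transitions could be written down as:

```python
# Legal state transitions for the five-state model described above.
# The state and call names simply mirror the list; they are not a real API.
TRANSITIONS = {
    "enter":    (None,      "new"),
    "admit":    ("new",     "ready"),
    "dispatch": ("ready",   "running"),
    "time_up":  ("running", "ready"),
    "block":    ("running", "blocked"),
    "wake_up":  ("blocked", "ready"),
    "halt":     ("running", "halted"),
}

def apply(call, current_state):
    """Return the next state, or raise if the call is illegal in this state."""
    frm, to = TRANSITIONS[call]
    if frm is not None and frm != current_state:
        raise ValueError(f"cannot {call} from state {current_state}")
    return to

state = apply("enter", None)       # -> "new"
state = apply("admit", state)      # -> "ready"
state = apply("dispatch", state)   # -> "running"
print(state)                       # running
```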



Process Control Block (PCB)
The operating system maintains the information about each process in a record or data structure called
the Process Control Block (PCB). Each user process has a PCB; it is created when the user creates the
process and it is removed from the system when the process is killed or terminated. All these PCBs are
kept in the memory reserved for the OS.

Process ID
Process State
Process Priority
Register save area (PC, IR, SP)
Pointer to other resources
List of open files
Accounting information
Other information (current directory)
Pointer to other PCBs


1) Process ID (PID): This is a number allocated by the OS to the process on creation. The OS
normally sets a limit on the maximum number of processes that it can handle and schedule. Let us
assume that this number is n; this means that the PID can take on values between 0 and n-1.
2) Process State: The information regarding the state of the process is stored in a
codified fashion in the PCB.
3) Process Priority: Some processes are more urgently required to be completed (higher priority)
than others (lower priority). This priority can be set externally by the user/system administrator
or internally by the OS depending on various parameters. The PCB stores the final resultant value
of the priority of the process.
4) Register save area: This stores the contents of all CPU registers on a context switch.
5) Pointer to other resources: This gives pointers to other resources maintained for the process.
6) List of open files: This shows a list of open files. The information is required so that, on
termination, the OS can close any open files not closed explicitly by the process.
7) Accounting information: This gives an account of the usage of resources such as CPU time,
connect time, disk I/O used etc. by the process.
8) Other information: This stores information such as the current directory.
9) Pointers to other PCBs: This gives the address of the next PCB within a specific category
(process state).
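
As a rough sketch of the fields listed above, a PCB could be modelled with a record type such as the following Python dataclass; the field names and types are assumptions made for illustration, and a real kernel's PCB layout differs.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PCB:
    """Sketch of a Process Control Block with the fields described above."""
    pid: int                              # 1) Process ID, 0 .. n-1
    state: str = "new"                    # 2) new/ready/running/blocked/halted
    priority: int = 0                     # 3) resultant priority value
    register_save_area: dict = field(default_factory=dict)   # 4) PC, IR, SP, ...
    resources: list = field(default_factory=list)            # 5) other resources
    open_files: list = field(default_factory=list)           # 6) open files
    accounting: dict = field(default_factory=dict)           # 7) CPU time, I/O, ...
    current_dir: str = "/"                # 8) other information
    next_pcb: Optional[int] = None        # 9) forward pointer (PCB number)
    prev_pcb: Optional[int] = None        #    backward pointer (PCB number)

pcb = PCB(pid=3, state="running", priority=1)
print(pcb.pid, pcb.state)
```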


Process Control Block
Any PCB will be allocated to a running process, a ready process or a blocked process. If a
PCB is not allocated to any of these possible states, then it is unallocated or free. In order
to manage all these, we can imagine that the OS maintains four queues with their
corresponding headers as follows:
one for the running process, one for ready processes, one for blocked processes and one for free
PCBs.

We also know that there can be only one running process at a time. Therefore its header has only
one slot. All the other headers have two slots: one for the PCB number of the first PCB and the
other for the PCB number of the last PCB in the same state.
Each PCB itself has two pointer slots for the forward and backward chains: the first one for
the PCB number of the next process and the second one for the PCB number of the previous
process in the same state.

As an example, suppose the lists are:

Ready   = 13, 4, 14, 7          Running = 3
Blocked = 5, 0, 2, 10, 12
Free    = 8, 1, 6, 11, 9

The queue headers then contain:

Running header : 3
Ready header   : first = 13, last = 7
Blocked header : first = 5,  last = 12
Free header    : first = 8,  last = 9

and each PCB holds its state together with its forward (next) and backward (previous) pointers,
where * means no pointer:

PCB   State     Next   Prev
 0    Blocked     2      5
 1    Free        6      8
 2    Blocked    10      0
 3    Running     *      *
 4    Ready      14     13
 5    Blocked     0      *
 6    Free       11      1
 7    Ready       *     14
 8    Free        1      *
 9    Free        *     11
10    Blocked    12      2
11    Free        9      6
12    Blocked     *     10
13    Ready       4      *
14    Ready       7      4
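
The chained organisation above can be sketched with an array of PCBs plus a header per state. The fragment below (the pcbs/headers/enqueue names are invented for the illustration, and only the pointer fields are modelled) rebuilds the Ready list 13, 4, 14, 7 of the example and walks it through the forward pointers:

```python
# Sketch: PCBs chained into per-state queues via forward/backward pointers.
# Plain dicts stand in for PCBs; list indices play the role of PCB numbers.

NUM_PCBS = 15
pcbs = [{"state": "free", "next": None, "prev": None} for _ in range(NUM_PCBS)]
headers = {"ready": {"first": None, "last": None}}

def enqueue(state, pid):
    """Append PCB `pid` to the tail of the queue for `state`."""
    hdr = headers[state]
    pcbs[pid]["state"] = state
    pcbs[pid]["next"] = None
    pcbs[pid]["prev"] = hdr["last"]
    if hdr["last"] is not None:
        pcbs[hdr["last"]]["next"] = pid
    else:
        hdr["first"] = pid
    hdr["last"] = pid

for pid in (13, 4, 14, 7):          # the Ready list of the example
    enqueue("ready", pid)

# Walk the ready queue through the forward pointers.
pid, order = headers["ready"]["first"], []
while pid is not None:
    order.append(pid)
    pid = pcbs[pid]["next"]
print(order)   # [13, 4, 14, 7]
```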
Scheduling

Scheduling is the activity of sending jobs from the list of jobs to the CPU for execution.
The OS does not perform CPU scheduling in one step.
Instead, scheduling is split into three phases:
Long term scheduling (job scheduler)
Medium term scheduling (swapper)
Short term scheduling (dispatcher)

Long term Scheduler

The long term scheduler decides when to initiate processing of a job which has arrived in the
system.
It admits jobs from the arrived list to the ready list by allocating resources like disk
space and I/O devices, and internal resources like tables and control blocks, to set up the execution
of a process in the system.
Medium term Scheduler

It focuses on various housekeeping actions for scheduling.
The medium term scheduler shares information concerning ready processes with the short term
scheduler, which uses this information to select a process for execution.
The medium term scheduler maintains many lists of processes and moves processes between
these lists as their states change.
E.g.: when a running process gets preempted, the medium term scheduler changes its state
to ready and moves it from the running list to the ready list.
When a process makes a system call for I/O, it changes the state from running to blocked and
moves it from the running list to the blocked list.
The medium term scheduler is also responsible for swapping processes in and out
between main memory and the disk.
When the memory manager swaps out a process, the medium term scheduler changes the state of the
process to swapped out and moves the process to an appropriate list.
Short term Scheduler
The short term scheduler decides which process to execute on the CPU.
It picks one process from the list of ready processes for execution on the CPU, and hands
it to the dispatching mechanism.
The dispatcher loads the state of the selected process (the contents of the PSW and CPU registers)
into the CPU.

In short:

The long term scheduler selects processes which should be considered for scheduling by the
medium term scheduler.
The medium term scheduler selects processes which should be considered for scheduling by
the short term scheduler.
The short term scheduler selects processes which should be executed on the CPU.


Scheduling Philosophy
There are basically two scheduling philosophies: non-preemptive and preemptive.

Non-preemptive: In the early days of computing, scheduling was mostly non-preemptive.
A non-preemptive philosophy means that a running process retains control of the CPU and the
allocated resources until it surrenders control to the OS. This means that even if a higher priority
process enters the system, the running process cannot be forced to give up control. However,
if the running process becomes blocked due to an I/O request, another process can be
scheduled.

Preemptive scheduling: A preemptive scheduling philosophy allows a higher priority process to
replace the currently running process even if its time slice is not over or it has not requested any I/O.
This requires more frequent context switching, which reduces the throughput.

Scheduling Policies/ Algorithms:
CPU scheduling deals with the problem of deciding which of the processes in the ready queue is
to be allocated the CPU.

1) First Come First Served Scheduling (FCFS):
The simplest scheduling algorithm is first come first served (FCFS). In this scheme, the process
that requests the CPU first is allocated the CPU first. It is a non-preemptive policy. The order of
service is determined by a FIFO queue, where new processes are added at the tail of the
ready queue. When the CPU is free, the dispatcher removes the process at the head of the ready
queue.
E.g.: Suppose you go to the supermarket, pick out the food you want, and go to the
checkout counter. You go to the end of the line and wait until everyone in front of
you has been checked out, and then you get checked out.
2) Shortest Job First (SJF):
Another non-preemptive scheduling algorithm is shortest job first. It selects the job with the
shortest expected service time. In case of a tie, first come first served scheduling can be used.
Shortest job first scheduling minimizes the average wait time, because it services small
processes before it serves large ones.
The disadvantage of the SJF policy is that, if the ready list is saturated, processes with large
service times tend to be left in the ready list while small processes receive service.

The reduction in average waiting time can be seen from the following example.

Consider three processes

Process Burst Time
P1 24
P2 3
P3 5

If the processes arrive in the order P1, P2, P3 and are served in FCFS order, the average wait
time is calculated using the Gantt chart:

Gantt chart (FCFS): P1 [0-24], P2 [24-27], P3 [27-32]

The waiting time is 0 milliseconds for P1, 24 milliseconds for P2 and 27 milliseconds
for P3. Thus the average wait time is (0+24+27)/3 = 17 milliseconds.


If the processes are scheduled in SJF order, the Gantt chart becomes:

Gantt chart (SJF): P2 [0-3], P3 [3-8], P1 [8-32]

The waiting time is 0 milliseconds for P2, 3 milliseconds for P3 and 8 milliseconds for P1, so the
average wait time is (0+3+8)/3 ≈ 3.67 milliseconds.
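
These averages can be checked mechanically. The following sketch (the average_wait helper is made up for this illustration and assumes every process arrives at time 0) reproduces the 17 ms and 3.67 ms figures:

```python
# Average waiting time for a fixed, non-preemptive service order,
# assuming every process arrives at time 0.
def average_wait(order, burst):
    waits, clock = [], 0
    for p in order:
        waits.append(clock)     # a process waits until its turn starts
        clock += burst[p]
    return sum(waits) / len(waits)

burst = {"P1": 24, "P2": 3, "P3": 5}

fcfs = average_wait(["P1", "P2", "P3"], burst)            # arrival order
sjf = average_wait(sorted(burst, key=burst.get), burst)   # shortest burst first

print(f"FCFS: {fcfs:.2f} ms, SJF: {sjf:.2f} ms")   # FCFS: 17.00 ms, SJF: 3.67 ms
```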

E.g.: Sometimes at the supermarket checkout counter you will have a full
basket of food and someone will get in line behind you who has only one thing to get
checked out. They might ask you if they can go first, or you might let them go
first. The reason you let the person with few things go first is that people
understand the advantage of letting small checkouts go first. Suppose the person
with one item takes one minute to be checked out, and your groceries take 10 minutes. If
you go first, then you wait 0 minutes and he waits 10 minutes, and the average waiting
time for both of you is 5 minutes. If he goes first, then he waits 0 minutes and you wait 1
minute, and the average wait time is 0.5 minutes.
3) Priority Scheduling:
In priority scheduling, each process is assigned a priority, and the runnable process with the highest
priority is allowed to run. Priority scheduling is best when processes have different levels of
importance.
Priorities are generally drawn from some fixed range of numbers, with a low number
representing a high priority.
Consider the following set of processes, assumed to have arrived at time 0 in the order
P1, P2, ..., P5, with the length of the CPU burst time given in milliseconds.

Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2

Using priority scheduling, we would schedule these processes according to the Gantt chart:

Gantt chart (priority): P2 [0-1], P5 [1-6], P1 [6-16], P3 [16-18], P4 [18-19]

The average wait time is (6+0+16+18+1)/5 = 8.2 ms.
Priority scheduling can be either preemptive or non-preemptive. When a process arrives at
the ready queue, its priority is compared with the priority of the currently running process. A
preemptive priority scheduling algorithm will preempt the CPU if the priority of the newly arrived
process is higher than the priority of the currently running process. A non-preemptive
priority scheduling algorithm will simply put the new process at the head of the ready
queue.
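
The 8.2 ms figure for the non-preemptive case can be reproduced by sorting on the priority number. A minimal sketch, assuming all processes arrive at time 0 and that a lower number means a higher priority:

```python
# Non-preemptive priority scheduling: serve in ascending priority number.
procs = {"P1": (10, 3), "P2": (1, 1), "P3": (2, 4), "P4": (1, 5), "P5": (5, 2)}
#        name: (burst, priority), all arriving at time 0

order = sorted(procs, key=lambda p: procs[p][1])   # highest priority first
clock, waits = 0, {}
for p in order:
    waits[p] = clock            # waiting time = moment this process starts
    clock += procs[p][0]

print(order)                                  # ['P2', 'P5', 'P1', 'P3', 'P4']
print(sum(waits.values()) / len(waits))       # 8.2
```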

4) Round Robin Policy
This is a simple policy which holds all the ready processes in one single queue and
dispatches them one by one. Each process is allocated a certain time slice. If the process
consumes its full time slice and is still not finished, its state is changed from running to
ready and it is pushed to the end of the ready queue. If the running process requests an I/O
before the time slice is over, it is moved to the blocked state. After its I/O is completed, it
is again introduced at the end of the ready queue and eventually dispatched.
Round robin is a preemptive scheduling policy.





Consider the following set of processes that arrive at time 0, with the length
of the CPU burst time given in milliseconds.
Process Burst Time
P1 24
P2 3
P3 3

If we use a time quantum of 4 milliseconds, then process P1 gets the first 4 milliseconds. Since
it requires another 20 milliseconds, it is preempted after the first time quantum and the CPU is
given to the next process in the queue, process P2. Since P2 does not need 4 milliseconds, it
quits before its time quantum expires. The CPU is then given to the next process, P3.
Once each process has received one time quantum, the CPU is returned to process P1 for an
additional time quantum, and so on.


[Figure: round robin scheduling - the ready processes P1, P2 and P3 circulate through the ready queue and take turns on the CPU]
Dept of Computer Science & Engineering College of Engineering Munnar
Prepared by Shine N Das 11
Gantt chart (RR, quantum = 4): P1 [0-4], P2 [4-7], P3 [7-10], P1 [10-14], P1 [14-18], P1 [18-22], P1 [22-26], P1 [26-30]

Average waiting time = ((10-4) + 4 + 7)/3 = 17/3 ≈ 5.7 milliseconds.
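
The same 5.7 ms figure falls out of a small round robin simulation. The sketch below is illustrative only (the round_robin helper is made up, all processes are assumed to arrive at time 0, and each wait is taken as completion time minus burst time):

```python
from collections import deque

def round_robin(burst, quantum):
    """Return the average waiting time; all processes arrive at time 0."""
    remaining = dict(burst)
    queue = deque(burst)            # insertion order = arrival order
    clock, completion = 0, {}
    while queue:
        p = queue.popleft()
        run = min(quantum, remaining[p])
        clock += run
        remaining[p] -= run
        if remaining[p] == 0:
            completion[p] = clock
        else:
            queue.append(p)         # back to the tail of the ready queue
    waits = [completion[p] - burst[p] for p in burst]
    return sum(waits) / len(waits)

print(round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=4))   # ≈ 5.67 ms
```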
Problem: Consider the following set of processes, with the length of the CPU burst time given in
milliseconds.
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 5
P4 1 4
P5 5 2

The processes are assumed to have arrived in the order P1, P2, P3, P4, P5, all at time 0.
a) Draw four Gantt charts illustrating the execution of these processes using FCFS,
SJF, non-preemptive priority and RR (quantum = 1) scheduling.
b) What is the turnaround time of each process for each of the scheduling
algorithms in part a?
c) What is the waiting time of each process for each of the scheduling algorithms in part a?
d) Which of the schedules in part a will have the minimum average waiting time?

5) Multiple queue:
A multiple queue (multilevel queue) scheduling algorithm partitions the ready queue into several
separate queues based on priority. Each queue has its own scheduling algorithm.
For e.g.: separate queues might be used for foreground processes (interactive and other user
processes) and background processes (batch jobs, system processes etc.).
The foreground queue might be scheduled by an RR algorithm, while the background
queue is scheduled by an FCFS algorithm.

Also, there is time slicing between the queues: each queue is given a portion of the CPU time,
which it schedules among its own processes.
For example, the foreground queue can be given 80% of the CPU time for RR scheduling,
whereas the background queue receives 20% of the CPU time.

[Figure: multilevel queue with system processes at the highest priority, then interactive processes, then batch processes at the lowest priority]
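
A very rough sketch of the 80%/20% split between a foreground round robin queue and a background FCFS queue is given below; the queue contents, the 10-tick cycle and the tick-level granularity are all assumptions made purely for illustration.

```python
from collections import deque

foreground = deque(["F1", "F2", "F3"])   # interactive/user processes (RR)
background = deque(["B1", "B2"])         # batch/system processes (FCFS)

schedule = []
for tick in range(10):                   # one hypothetical 10-tick cycle
    if tick < 8 and foreground:          # ~80% of CPU time to the foreground
        p = foreground.popleft()
        schedule.append(p)
        foreground.append(p)             # round robin: back to the tail
    elif background:
        schedule.append(background[0])   # FCFS: head runs until it finishes
                                         # (completion is not modelled here)
print(schedule)
```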








Problem:
Consider the following set of processes, with the length of the CPU burst time given in
milliseconds.
Process Burst Time
P1 10
P2 29
P3 3
P4 7
P5 12

The processes are assumed to have arrived in the order P1, P2, P3, P4, P5, all at time 0.
Consider the FCFS, SJF and RR (quantum = 10 milliseconds) algorithms for this set of
processes. Which algorithm would give the minimum average wait time?

Answer:
1) FCFS : Avg. wait time = (0+10+39+42+49)/5 = 28 ms
2) SJF : Avg. wait time = (10+32+0+3+20)/5 = 13 ms
3) RR : Avg. wait time = (0+(52-20)+20+23+(50-10))/5
= (0+32+20+23+40)/5 = 23 ms
SJF therefore gives the minimum average wait time.
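
These three figures can be double-checked with the same kind of helpers sketched earlier (again assuming all processes arrive at time 0; the helper names are made up for the illustration):

```python
from collections import deque

burst = {"P1": 10, "P2": 29, "P3": 3, "P4": 7, "P5": 12}

def avg_wait(order):                      # non-preemptive, arrivals at time 0
    clock, total = 0, 0
    for p in order:
        total += clock                    # waiting time = start time
        clock += burst[p]
    return total / len(order)

def rr_avg_wait(quantum):
    remaining, queue, clock, done = dict(burst), deque(burst), 0, {}
    while queue:
        p = queue.popleft()
        run = min(quantum, remaining[p])
        clock += run
        remaining[p] -= run
        if remaining[p] == 0:
            done[p] = clock               # completion time
        else:
            queue.append(p)
    return sum(done[p] - burst[p] for p in burst) / len(burst)

print(avg_wait(list(burst)))                        # FCFS: 28.0
print(avg_wait(sorted(burst, key=burst.get)))       # SJF : 13.0
print(rr_avg_wait(10))                              # RR  : 23.0
```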
