BCA SY 2023-24
UNIT - III
Process Management
Early computer systems allowed only one program to be executed at a time.
Current day computer systems allow multiple programs to be loaded into memory
and executed concurrently.
* Process Concept –
a) The Process –
A program is a set of instructions to perform a specific task. A process is a program
in execution. A program depends on the user's logic; a process depends on a program.
In a multiprogramming environment, multiple processes can reside in main memory at
the same time, so process management is an important task of the operating system.
A program is a passive entity, whereas a process is an active entity. A program
becomes a process when an executable file is loaded into memory.
b) Process States –
As a process executes, it changes state. The state of a process is defined in part by
the current activity of that process. Each process may be in one of the following states,
as shown in the following figure:
• New – The process is being created.
• Ready – The process is waiting to be assigned to a processor.
• Running – Instructions are being executed.
• Waiting – The process is waiting for some event to occur (such as completion of I/O).
• Terminated – The process has finished execution.
Each process is represented in the operating system by a process control block (PCB),
which contains information associated with the process, including:
• Process State – It denotes the current state of the process: new, ready,
running, waiting, or terminated.
• I/O Status Information – This information includes the list of I/O devices
allocated to the process.
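The state transitions and PCB fields above can be sketched as a small data structure. This is a minimal illustration only; the field names are simplified stand-ins, not an actual operating system layout:

```python
from dataclasses import dataclass, field

# Allowed process states, as listed above.
STATES = {"new", "ready", "running", "waiting", "terminated"}

@dataclass
class PCB:
    """A simplified process control block (illustrative only)."""
    pid: int
    state: str = "new"
    program_counter: int = 0
    io_devices: list = field(default_factory=list)  # I/O status information

    def set_state(self, new_state: str) -> None:
        if new_state not in STATES:
            raise ValueError(f"unknown state: {new_state}")
        self.state = new_state

# A new process moves new -> ready -> running as it is admitted and dispatched.
p = PCB(pid=1)
p.set_state("ready")
p.set_state("running")
```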
* Process Scheduling –
The process scheduler performs the task of selecting an available process for
execution on the CPU.
a) Scheduling Queues –
Processes in the system are kept in scheduling queues, commonly represented by a
queueing diagram. Each rectangular box represents a queue; each circle represents the
resource that serves the queue; and the arrows indicate the flow of processes in the
system. A new process is initially put in the ready queue, where it waits until it is
selected for execution, or dispatched.
Once the process is allocated the CPU and is executing, one of several events could
occur:
• The process could finish its execution and terminate.
• The process could issue an I/O request and then be placed in an I/O queue.
b) Schedulers –
In a system, more processes are submitted than can be executed immediately.
These processes are spooled to a mass storage device (typically a disk), where they are
kept for later execution.
The long term scheduler or job scheduler selects processes from this pool and
loads them into memory for execution. The short term scheduler or CPU scheduler
selects from among the processes that are ready to execute and allocates the CPU to
one of them.
Most processes can be described as either I/O bound or CPU bound. An I/O-bound
process is one that spends more of its time doing I/O than computation. A CPU-bound
process spends more of its time doing computations.
c) Context Switch –
Multiprogramming increases the utilization of the CPU by always having something
for the CPU to execute. When the CPU switches from one process to another, the time
required for switching is called "context switching". During a context switch, the
contents of the old process are stored in a specific memory area called the
"register save area".
Consider two processes created by the operating system from the above program,
i.e. P1 for main() and P2 for add(). First, process P1 is allocated the CPU for
execution; the CPU executes instructions one by one from process P1.
When it comes to the instruction c = add(a, b); execution stops, because the
result of this instruction depends on the result of another process, i.e. P2. Thus,
process P2 must execute before process P1 can continue; so the CPU stores the contents of
the current process, i.e. P1, in the register save area and sends it back to main memory.
The CPU then accepts another process, i.e. P2, executes it, and sends the result
to the suspended process, i.e. P1. The CPU then takes up the suspended process P1
again, restores the contents of P1 from the register save area, resumes execution of
process P1, and completes it.
When the CPU switches from process P1 to P2, or from process P2 to P1, the time
required for switching is called context switching.
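The P1/P2 scenario can be mimicked in a few lines: a dictionary stands in for the register save area, and the "registers" of the suspended process are saved and later restored. This only models the bookkeeping described above, not how a real kernel switches contexts; all names are invented for illustration:

```python
# The "register save area": holds the saved contents of suspended processes.
register_save_area = {}

def context_switch_out(pid, registers):
    """Save the contents of the current process before switching away."""
    register_save_area[pid] = dict(registers)

def context_switch_in(pid):
    """Restore a suspended process's saved contents so it can resume."""
    return register_save_area.pop(pid)

# P1 reaches "c = add(a, b)" and must wait for P2, so P1 is switched out.
p1_registers = {"a": 2, "b": 3, "c": None}
context_switch_out("P1", p1_registers)

# P2 runs: it computes add(a, b) and produces a result for P1.
result = register_save_area["P1"]["a"] + register_save_area["P1"]["b"]

# P1 is resumed: its contents are restored and the result is filled in.
p1_registers = context_switch_in("P1")
p1_registers["c"] = result
```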
* Basic Concepts –
In a single-processor system, only one process can run at a time; any others must
wait until the CPU is free. The objective of multiprogramming is to have some process
running at all times, to maximize CPU utilization.
2) CPU Scheduler –
Whenever the CPU becomes idle, the operating system must select one of the
processes in the ready queue to be executed. The selection process is carried out by
the short-term scheduler (or CPU scheduler).
The scheduler selects a process from the processes in memory that are ready to
execute and allocates the CPU to that process.
3) Preemptive Scheduling –
CPU-scheduling decisions may take place under the following three situations:
1. When a process switches from the running state to the waiting state
2. When a process switches from the waiting state to the ready state
3. When a process terminates
Under nonpreemptive scheduling, once the CPU has been allocated to a process,
the process keeps the CPU until it releases the CPU either by terminating or by switching
to the waiting state.
Preemptive algorithms are driven by the notion of prioritized computation. The
process with the highest priority should always be the one currently using the
processor.
If a process is currently using the processor and a new process with a higher
priority enters the ready list, the running process should be removed from the
processor and returned to the ready list, where it waits until it is once again the
highest-priority process in the system.
4) Dispatcher –
The dispatcher is the module that gives control of the CPU to the process selected
by the short-term scheduler.
The dispatcher should be as fast as possible, since it is invoked during every
process switch. The time it takes for the dispatcher to stop one process and start another
running is known as the dispatch latency.
* Scheduling Criteria –
Different CPU scheduling algorithms have different properties and are suited to
different situations. In choosing which algorithm to use in a particular situation,
the following criteria are considered:
• CPU utilization – To keep the CPU as busy as possible. CPU utilization can range
from 0 to 100 percent.
• Waiting time – The amount of time a process spends waiting in the ready queue;
it is the sum of all the periods spent waiting there.
• Response time – It is the time from submission of a request until the first
response is produced. It is the amount of time taken to start responding.
* Scheduling Algorithms –
CPU scheduling deals with the problem of deciding which of the processes in
the ready queue is to be allocated the CPU. There are many different CPU scheduling
algorithms.
1) First Come First Served (FCFS) Scheduling –
With FCFS scheduling, the process that requests the CPU first is allocated the CPU
first. When a process enters the ready queue, its PCB is linked onto the tail of the
queue. When the CPU is free, it is allocated to the process at the head of the queue.
The running process is then removed from the queue. The average waiting time under
the FCFS policy is often quite long.
The FCFS scheduling algorithm is nonpreemptive.
Consider the following set of processes that arrive at time 0, with the length of
the CPU burst given in seconds:

Process   Burst Time
P1        24
P2        3
P3        3

If the processes arrive in the order P1, P2, P3, and are served in FCFS order, we get the
result shown in the following Gantt chart:

| P1 | P2 | P3 |
0    24   27   30

The waiting time is 0 seconds for process P1, 24 seconds for process P2, and 27
seconds for process P3. Thus, the average waiting time is (0 + 24 + 27)/3 = 17 seconds.
If the processes arrive in the order P2, P3, P1, however, the results will be as
shown in the following Gantt chart:

| P2 | P3 | P1 |
0    3    6    30

The waiting time is now 6 seconds for process P1, 0 seconds for P2, and 3 seconds
for P3; the average waiting time is (6 + 0 + 3)/3 = 3 seconds.
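The two FCFS runs can be checked with a short simulation; the burst values (P1 = 24, P2 = 3, P3 = 3 seconds) are the ones implied by the stated waiting times:

```python
def fcfs_waiting_times(bursts):
    """FCFS waiting times: each process waits for the sum of the
    bursts of all the processes ahead of it in the queue."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)
        elapsed += burst
    return waits

# Order P1, P2, P3 (bursts 24, 3, 3): waits 0, 24, 27 -> average 17.
order1 = fcfs_waiting_times([24, 3, 3])
avg1 = sum(order1) / len(order1)

# Order P2, P3, P1 (bursts 3, 3, 24): waits 0, 3, 6 -> average 3.
order2 = fcfs_waiting_times([3, 3, 24])
avg2 = sum(order2) / len(order2)
```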
2) Shortest Job First (SJF) Scheduling –
The SJF algorithm associates with each process the length of its next CPU burst.
When the CPU is available, it is assigned to the process that has the smallest next
CPU burst; ties are broken with FCFS scheduling. Consider the following set of
processes that arrive at time 0:

Process   Burst Time
P1        6
P2        8
P3        7
P4        3

Using SJF scheduling, the processes run in the order P4, P1, P3, P2:

| P4 | P1 | P3 | P2 |
0    3    9    16   24

The waiting time is 3 seconds for process P1, 16 seconds for process P2, 9 seconds
for process P3, and 0 seconds for process P4. Thus, the average waiting time is (3 + 16 +
9 + 0)/4 = 7 seconds.
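Nonpreemptive SJF with all processes arriving at time 0 is simply FCFS applied to the processes sorted by burst length. A sketch, using burst values consistent with the waiting times above (P1 = 6, P2 = 8, P3 = 7, P4 = 3):

```python
def sjf_waiting_times(bursts):
    """Nonpreemptive SJF, all processes arriving at time 0:
    run in order of increasing burst; return waits in original order."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, elapsed = [0] * len(bursts), 0
    for i in order:
        waits[i] = elapsed
        elapsed += bursts[i]
    return waits

waits = sjf_waiting_times([6, 8, 7, 3])    # P1..P4
average = sum(waits) / len(waits)          # (3 + 16 + 9 + 0)/4 = 7
```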
When a new process arrives at the ready queue while a previous process is
still executing, the next CPU burst of the newly arrived process may be shorter than
what is left of the currently executing process.
A preemptive SJF algorithm will preempt the currently executing process, whereas
a nonpreemptive SJF algorithm will allow the currently running process to finish its CPU
burst.
Consider the following four processes:

Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5

The resulting preemptive SJF schedule is:

| P1 | P2 | P4 | P1 | P3 |
0    1    5    10   17   26

The average waiting time for this example is ((10 - 1) + (1 - 1) + (17 - 2) + (5 - 3))/4
= 26/4
= 6.5 seconds.
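Preemptive SJF (also called shortest remaining time first) can be simulated one time unit at a time. The arrival and burst values used below (P1: 0/8, P2: 1/4, P3: 2/9, P4: 3/5) are the values that reproduce the arithmetic above:

```python
def srtf_waiting_times(arrivals, bursts):
    """Preemptive SJF: at each time unit, run the arrived process with
    the shortest remaining burst. Wait = completion - arrival - burst."""
    n = len(bursts)
    remaining = list(bursts)
    finish = [0] * n
    time = 0
    while any(remaining):
        # Candidates: processes that have arrived and are not finished.
        ready = [i for i in range(n) if arrivals[i] <= time and remaining[i] > 0]
        if not ready:              # CPU idle until the next arrival
            time += 1
            continue
        i = min(ready, key=lambda j: remaining[j])
        remaining[i] -= 1
        time += 1
        if remaining[i] == 0:
            finish[i] = time
    return [finish[i] - arrivals[i] - bursts[i] for i in range(n)]

waits = srtf_waiting_times([0, 1, 2, 3], [8, 4, 9, 5])
average = sum(waits) / len(waits)   # 26/4 = 6.5
```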
3) Priority Scheduling –
The SJF algorithm is a special case of the general priority scheduling algorithm. A
priority is associated with each process, and the CPU is allocated to the process with
the highest priority. Equal-priority processes are scheduled in FCFS order.
Priorities are generally indicated by some fixed range of numbers, such as 0 to 7
or 0 to 4,095. However, there is no general agreement on whether 0 is the highest or
lowest priority.
Some systems use low numbers to represent low priority; others use low numbers
for high priority.
Consider the following set of processes that arrive at time 0 (here, a low number
represents a high priority):

Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2

Using priority scheduling, the processes run in the order P2, P5, P1, P3, P4:

| P2 | P5 | P1 | P3 | P4 |
0    1    6    16   18   19

The waiting times for processes P1, P2, P3, P4, and P5 are 6, 0, 16, 18, and 1 seconds,
and the average waiting time is 41/5 = 8.2 seconds.
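Nonpreemptive priority scheduling for processes arriving at time 0 is again FCFS over a sorted order, this time sorted by priority. The burst and priority values below are consistent with the stated waiting times (low number = high priority; P4's burst of 1 is an assumption, as it does not affect the waiting times):

```python
def priority_waiting_times(bursts, priorities):
    """Nonpreemptive priority scheduling, all arriving at time 0:
    run in order of priority (lower number = higher priority)."""
    order = sorted(range(len(bursts)), key=lambda i: priorities[i])
    waits, elapsed = [0] * len(bursts), 0
    for i in order:
        waits[i] = elapsed
        elapsed += bursts[i]
    return waits

# P1..P5: bursts and priorities as in the example (P4's burst assumed).
waits = priority_waiting_times([10, 1, 2, 1, 5], [3, 1, 4, 5, 2])
average = sum(waits) / len(waits)   # 41/5 = 8.2
```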
Priority scheduling can be either preemptive or nonpreemptive. When a process
arrives at the ready queue, its priority is compared with the priority of the currently
running process.
A preemptive priority scheduling algorithm will preempt the CPU if the priority of
the newly arrived process is higher than the priority of the currently running process. A
priority scheduling algorithm can leave some low-priority processes waiting
indefinitely; this problem is known as starvation.
4) Round Robin (RR) Scheduling –
The round robin (RR) scheduling algorithm is designed especially for time-sharing
systems. It is similar to FCFS scheduling, but preemption is added: each process is
allocated the CPU for a small unit of time called a time slice (or time quantum).
Consider the following Gantt chart:

| P1 | P2 | P3 | P1 |
0    1    2    4    13

The waiting times for processes P1, P2, and P3 are 0 + 3 = 3, 0, and 0 seconds, and
the average waiting time is 3/3 = 1 second.
To implement RR scheduling, we keep the ready queue as a FIFO queue of
processes. New processes are added to the tail of the ready queue. The CPU
scheduler picks the first process from the ready queue, sets a timer to interrupt after 1
time slice, and dispatches the process.
One of two things will then happen. The process may have a CPU burst of less than
1 time slice. In this case, the process itself will release the CPU. The scheduler will then
proceed to the next process in the ready queue.
Otherwise, if the CPU burst of the currently running process is longer than 1 time
slice, the timer will go off and will cause an interrupt to the operating system. A context
switch will be executed, and the process will be put at the tail of the ready queue. The
CPU scheduler will then select the next process in the ready queue.
In the RR scheduling algorithm, no process is allocated the CPU for more than
1 time slice in a row.
The performance of the RR algorithm depends heavily on the size of the time slice.
If the time slice is extremely large, the RR policy is the same as the FCFS policy; if the
time slice is extremely small (say, 1 millisecond), the RR approach is called processor
sharing.
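The RR mechanics described above can be simulated with a FIFO ready queue. The processes below (bursts 24, 3, 3 with a time slice of 4, all arriving at time 0) are an assumed example for illustration, not one from the text:

```python
from collections import deque

def rr_waiting_times(bursts, quantum):
    """Round robin, all processes arriving at time 0. Waiting time =
    completion - burst, since every arrival time is 0."""
    n = len(bursts)
    remaining = list(bursts)
    finish = [0] * n
    queue = deque(range(n))        # FIFO ready queue
    time = 0
    while queue:
        i = queue.popleft()        # pick the process at the head
        run = min(quantum, remaining[i])
        time += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)        # time slice expired: back to the tail
        else:
            finish[i] = time       # burst finished within its slice
    return [finish[i] - bursts[i] for i in range(n)]

waits = rr_waiting_times([24, 3, 3], quantum=4)
average = sum(waits) / len(waits)
```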