
BSc CS Fy 2023-24 UNIT – III Process Management

UNIT – III
Process Management
Early computer systems allowed only one program to be executed at a time.
Current day computer systems allow multiple programs to be loaded into memory
and executed concurrently.

* Process Concept –
a) The Process –
A program is a set of instructions to perform a specific task. A process is a program
in execution; in other words, the executing part of a program is called a process. A
program depends on the user's logic, while a process depends on a program. In a
multiprogramming environment, multiple processes can reside in main memory at the
same time, so process management is an important task of the operating system.
A program is a passive entity, whereas a process is an active entity. A program
becomes a process when an executable file is loaded into memory.

b) Process States –
As a process executes, it changes state. The state of a process is defined in part by
the current activity of that process. Each process may be in one of the following states
as shown in following figure,

Figure – Process State Model
• New – The process is being created.


• Ready – The process is waiting to be assigned to a processor.
• Running – Instructions are being executed.
• Waiting – The process is waiting for some event to occur (such as an I/O
completion).
• Terminated – The process has finished execution.

Ms.S.B.Munde COCSIT, Latur

c) Process Control Block (PCB) –


Each process is represented in the operating system by a process control block
(PCB). It is also called a task control block, as shown in the following figure:

Figure – Process Control Block (PCB)

• Pointer – It points to the next PCB in the ready queue.
• Process ID – A number allocated by the operating system on creation to
uniquely identify the process.
• Process State – The current state of the process. It may be new, ready,
running, or waiting.
• Process Priority – The importance of the process to the system.
• Memory Management Information – Information such as the values of the
base and limit registers.
• Accounting Information – Information such as the amount of CPU time used.
• I/O Status Information – The list of I/O devices allocated to the process.

* Process Scheduling –
The following tasks are performed by the process scheduler:

• Keep track of the status of processes.
• Decide which process gets the processor, and when.
• Allocate processors to processes.
• Deallocate processors from processes.
a) Scheduling Queues –
As processes enter the system, they are put into a job queue. The processes
that are residing in main memory and are ready and waiting to execute are kept on
a list called the ready queue.
Processes waiting for a device are kept in that device's queue (I/O queue); each
device has its own device queue. A common representation of process scheduling is a
queueing diagram, as shown in the following figure:

Figure – Queuing diagram representation of process scheduling

Each rectangular box represents a queue. Each circle represents the resource that
serves the queue and the arrows indicate the flow of processes in the system. A new
process is initially put in the ready queue. It waits there until it is selected for execution
or is dispatched.

Once the process is allocated the CPU and is executing, one of several events could
occur:
• The process could finish.
• The process could issue an I/O request and then be placed in an I/O queue.


b) Schedulers –
In a system, more processes are submitted than can be executed immediately.
These processes are spooled to a mass storage device (typically a disk), where they are
kept for later execution.
The long term scheduler or job scheduler selects processes from this pool and
loads them into memory for execution. The short term scheduler or CPU scheduler
selects from among the processes that are ready to execute and allocates the CPU to
one of them.
Most processes can be classified as either I/O bound or CPU bound. An I/O-bound
process spends more of its time doing I/O than computation. A CPU-bound process
spends more of its time doing computations.

c) Context Switch –
Multiprogramming is a concept of increasing utilization of CPU by always
having something for CPU to execute. When CPU switches from one process to
another process, the time which is required for switching called as, c͞ ontext switching͟.
While context switching, the contents of old process are stored in specific memory
area called as, ͞register save area͟.

Consider the following example,

main()
{
    ….
    c = add(a, b);
    ….
}

int add(int x, int y)
{
    return (x + y);
}

Consider that two processes are created by the operating system from the above
program, i.e. P1 for main() and P2 for add(). First, process P1 is allocated the CPU for
execution; the CPU executes instructions one by one from process P1.
When it reaches the instruction c = add(a, b); it stops execution, because the
result of this instruction depends on the result of another process, i.e. P2. Thus, process
P2 must execute before process P1 can continue; so, the CPU stores the contents of the
current process, i.e. P1, in the register save area and sends it back to main memory.
The CPU then accepts another process, i.e. P2, executes it, and sends the result
to the suspended process P1. The CPU then accepts the suspended process P1 again,
restores its contents from the register save area, resumes its execution, and completes it.
When the CPU switches from process P1 to P2 or from P2 back to P1, the time
required for the switch is called context switching.

* Basic Concepts –
In a single-processor system, only one process can run at a time; any others must
wait until the CPU is free. The objective of multiprogramming is to have some process
running at all times, to maximize CPU utilization.

1) CPU-I/O Burst Cycle –


The success of CPU scheduling depends on an observed property of processes:
Process execution consists of a cycle of CPU execution and I/O wait. Processes alternate
between these two states.
Process execution begins with a CPU burst. That is followed by an I/O burst, which
is followed by another CPU burst, then another I/O burst, and so on. The final CPU burst
ends with a system request to terminate execution.

2) CPU Scheduler –
Whenever the CPU becomes idle, the operating system must select one of the
processes in the ready queue to be executed. The selection process is carried out by
the short-term scheduler (or CPU scheduler).
The scheduler selects a process from the processes in memory that are ready to
execute and allocates the CPU to that process.

3) Preemptive Scheduling –
CPU-scheduling decisions may take place under the following four situations:

1. When a process switches from the running state to the waiting state
2. When a process switches from the running state to the ready state (for
   example, when an interrupt occurs)
3. When a process switches from the waiting state to the ready state
4. When a process terminates

Under nonpreemptive scheduling, once the CPU has been allocated to a process,
the process keeps the CPU until it releases the CPU either by terminating or by switching
to the waiting state.
Preemptive algorithms are driven by the notion of prioritized computation: the
process with the highest priority should always be the one currently using the
processor.
If a process is currently using the processor and a new process with a higher
priority enters the ready list, the process on the processor should be preempted and
returned to the ready list until it is once again the highest-priority process in the system.
4) Dispatcher –
The dispatcher is the module that gives control of the CPU to the process selected
by the short-term scheduler.
The dispatcher should be as fast as possible, since it is invoked during every
process switch. The time it takes for the dispatcher to stop one process and start another
running is known as the dispatch latency.

* Scheduling Criteria –
Different CPU scheduling algorithms have different properties and are applicable
in different situations. In choosing which algorithm to use in a particular situation, the
following criteria are considered:

• CPU utilization – Keep the CPU as busy as possible. CPU utilization can range
from 0 to 100 percent.
• Throughput – The number of processes completed per time unit. It may range
from 1 process per hour up to 10 processes per second.
• Turnaround time – The interval between submission of a process and its
completion. Turnaround time is the sum of the periods spent waiting to get into
memory, waiting in the ready queue, executing on the CPU, and doing I/O.
• Waiting time – The amount of time a process spends waiting in the ready queue.
It is the sum of the periods spent waiting in the ready queue.
• Response time – The time from submission of a request until the first response
is produced, i.e. the amount of time taken to start responding.

We want to maximize CPU utilization and throughput, and to minimize turnaround
time, waiting time, and response time.
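The two time-based criteria follow directly from the definitions above: turnaround time is completion time minus submission time, and waiting time is turnaround time minus the total CPU burst. A minimal Python sketch (an illustration, not part of the original notes; the function names and numbers are my own):

```python
def turnaround_time(completion, submission):
    # Turnaround time = completion time - submission time.
    return completion - submission

def waiting_time(turnaround, burst):
    # Waiting time = turnaround time - total CPU burst time.
    return waiting if (waiting := turnaround - burst) >= 0 else 0

# A process submitted at t=0 that completes at t=27 with a 3-second CPU burst:
t = turnaround_time(27, 0)   # 27 seconds
w = waiting_time(t, 3)       # 24 seconds spent waiting in queues
print(t, w)                  # prints: 27 24
```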


* Scheduling Algorithms –
CPU scheduling deals with the problem of deciding which of the processes in
the ready queue is to be allocated the CPU. There are many different CPU scheduling
algorithms.

1) First-Come, First-Served Scheduling (FCFS) –


The simplest CPU-scheduling algorithm is the first-come, first-served (FCFS)
scheduling algorithm. The process that requests the CPU first is allocated the CPU first.
The implementation of the FCFS policy is easily managed with a FIFO queue.

When a process enters the ready queue, its PCB is linked onto the tail of the queue. When
the CPU is free, it is allocated to the process at the head of the queue. The running
process is then removed from the queue. The average waiting time under the FCFS
policy is often quite long.
The FCFS scheduling algorithm is nonpreemptive.

Consider the following set of processes that arrive at time 0, with the length of
the CPU burst given in seconds:

Process Burst Time


P1 24
P2 3
P3 3

If the processes arrive in the order P1, P2, P3, and are served in FCFS order, we get the
result shown in the following Gantt chart:

P1 P2 P3
0 24 27 30
The waiting time is 0 seconds for process P1, 24 seconds for process P2, and 27
seconds for process P3. Thus, the average waiting time is (0 + 24 + 27)/3 = 17 seconds.

If the processes arrive in the order P2, P3, P1, however, the results will be as
shown in the following Gantt chart:

P2 P3 P1
0 3 6 30


The waiting times are now 6, 0, & 3 seconds for processes P1, P2, & P3; the
average waiting time is (6 + 0 + 3)/3 = 3 seconds.
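Since FCFS serves processes strictly in arrival order, each process waits exactly as long as the sum of the bursts ahead of it. A minimal Python sketch of this (the function name is my own, not from the notes):

```python
def fcfs_waiting_times(bursts):
    # Processes are served in the order given; each process waits for
    # every earlier burst to finish before it starts.
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)
        elapsed += burst
    return waits

# Order P1, P2, P3 (bursts 24, 3, 3): waits 0, 24, 27 -> average 17 seconds
print(fcfs_waiting_times([24, 3, 3]))   # prints: [0, 24, 27]
# Order P2, P3, P1: waits 0, 3, 6 -> average 3 seconds
print(fcfs_waiting_times([3, 3, 24]))   # prints: [0, 3, 6]
```

Note how the same three processes produce very different averages depending only on arrival order, which is exactly the convoy effect the text describes.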

2) Shortest-Job-First Scheduling (SJF) –


A different approach to CPU scheduling is the shortest-job-first (SJF) scheduling
algorithm. This algorithm associates with each process the length of the process's next
CPU burst.
When the CPU is available, it is assigned to the process that has the smallest next
CPU burst. If the next CPU bursts of two processes are the same, FCFS scheduling is
used to break the tie.
As an example of SJF scheduling, consider the following set of processes, with the
length of the CPU burst given in seconds:

Process Burst Time


P1 6
P2 8
P3 7
P4 3

Using SJF scheduling, we would schedule these processes according to the
following Gantt chart:

P4 P1 P3 P2
0 3 9 16 24

The waiting time is 3 seconds for process P1, 16 seconds for process P2, 9 seconds
for process P3, and 0 seconds for process P4. Thus, the average waiting time is (3 + 16 +
9 + 0)/4 = 7 seconds.
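When all processes arrive at time 0, nonpreemptive SJF is simply FCFS applied after sorting the processes by burst length. A short Python sketch (my own illustration; Python's stable sort conveniently gives the FCFS tie-breaking the text mentions):

```python
def sjf_waiting_times(bursts):
    # All processes arrive at time 0; serve the shortest burst first.
    # sorted() is stable, so equal bursts keep their original (FCFS) order.
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, elapsed = [0] * len(bursts), 0
    for i in order:
        waits[i] = elapsed
        elapsed += bursts[i]
    return waits

# P1..P4 with bursts 6, 8, 7, 3: waits 3, 16, 9, 0 -> average 7 seconds
print(sjf_waiting_times([6, 8, 7, 3]))   # prints: [3, 16, 9, 0]
```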

The SJF algorithm can be either preemptive or nonpreemptive.

A new process may arrive at the ready queue while a previous process is still
executing, and the next CPU burst of the newly arrived process may be shorter than
what is left of the currently executing process.
A preemptive SJF algorithm will preempt the currently executing process, whereas
a nonpreemptive SJF algorithm will allow the currently running process to finish its CPU
burst.

Consider the following example, with the arrival time and the length of the CPU
burst given in seconds:

Process Arrival Time Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5



Using preemptive SJF scheduling, we would schedule these processes according
to the following Gantt chart:

P1 P2 P4 P1 P3
0 1 5 10 17 26

Process P1 is started at time 0, since it is the only process in the queue.


Process P2 arrives at time 1. The remaining time for process P1 (7 seconds) is larger
than the time required by process P2 (4 seconds), so process P1 is preempted, and
process P2 is scheduled.

The average waiting time for this example is ((10 - 1) + (1 - 1) + (17 - 2) + (5 - 3))/4
= 26/4 = 6.5 seconds.
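Preemptive SJF (often called shortest-remaining-time-first) can be simulated one second at a time: at every tick, run the ready process with the least remaining work. A sketch in Python (my own illustration, not the notes' method; waiting time is computed as turnaround minus burst):

```python
def srtf_waiting_times(arrivals, bursts):
    # Preemptive SJF: at each one-second tick, pick the arrived, unfinished
    # process with the smallest remaining burst.
    n = len(bursts)
    remaining, finish, time = list(bursts), [0] * n, 0
    while any(remaining):
        ready = [i for i in range(n) if arrivals[i] <= time and remaining[i]]
        if not ready:
            time += 1          # CPU idles until the next arrival
            continue
        i = min(ready, key=lambda j: remaining[j])
        remaining[i] -= 1
        time += 1
        if remaining[i] == 0:
            finish[i] = time
    # waiting time = turnaround time - burst time
    return [finish[i] - arrivals[i] - bursts[i] for i in range(n)]

# The example above: arrivals 0, 1, 2, 3 and bursts 8, 4, 9, 5
print(srtf_waiting_times([0, 1, 2, 3], [8, 4, 9, 5]))  # prints: [9, 0, 15, 2]
```

The per-process waits 9, 0, 15, and 2 match the terms (10 - 1), (1 - 1), (17 - 2), and (5 - 3) in the calculation above.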

3) Priority Scheduling –
The SJF algorithm is a special case of the general priority scheduling algorithm. A
priority is associated with each process, and the CPU is allocated to the process with
the highest priority. Equal-priority processes are scheduled in FCFS order.
Priorities are generally indicated by some fixed range of numbers, such as 0 to 7
or 0 to 4,095. However, there is no general agreement on whether 0 is the highest or
lowest priority.
Some systems use low numbers to represent low priority; others use low numbers
for high priority.

Here, we assume that low numbers represent high priority. As an example,
consider the following set of processes, assumed to have arrived at time 0 in the order
P1, P2, P3, P4, & P5, with the length of the CPU burst given in seconds:

Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2

Using priority scheduling, we would schedule these processes according to the
following Gantt chart:

P2 P5 P1 P3 P4
0 1 6 16 18 19

The waiting times for processes P1, P2, P3, P4, & P5 are 6, 0, 16, 18, & 1 seconds,
and the average waiting time is 41/5 = 8.2 seconds.
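With all processes arriving at time 0, nonpreemptive priority scheduling is the same serving loop as SJF, except processes are ordered by priority rather than by burst. A minimal Python sketch (my own, not from the notes; low number = high priority, as assumed above):

```python
def priority_waiting_times(bursts, priorities):
    # All processes arrive at time 0; serve in order of priority
    # (lowest number first). Stable sort keeps FCFS order for equal priorities.
    order = sorted(range(len(bursts)), key=lambda i: priorities[i])
    waits, elapsed = [0] * len(bursts), 0
    for i in order:
        waits[i] = elapsed
        elapsed += bursts[i]
    return waits

# P1..P5: bursts 10, 1, 2, 1, 5 with priorities 3, 1, 4, 5, 2
print(priority_waiting_times([10, 1, 2, 1, 5], [3, 1, 4, 5, 2]))
# prints: [6, 0, 16, 18, 1]  -> average 41/5 = 8.2 seconds
```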
Priority scheduling can be either preemptive or nonpreemptive. When a process
arrives at the ready queue, its priority is compared with the priority of the currently
running process.
A preemptive priority scheduling algorithm will preempt the CPU if the priority of
the newly arrived process is higher than the priority of the currently running process. A
priority scheduling algorithm can leave some low-priority processes waiting indefinitely;
this problem is known as starvation.

As an example, consider the following set of processes,

Process Burst Time Priority Arrival Time


P1 10 3 0
P2 1 1 1
P3 2 2 2

Using priority scheduling, we would schedule these processes according to the


following Gantt chart:

P1 P2 P3 P1
0 1 2 4 13

The waiting times for processes P1, P2, & P3 are 3 (P1 is preempted at time 1 and
resumes at time 4), 0, & 0 seconds, and the average waiting time is 3/3 = 1 second.
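The preemptive variant can be simulated tick by tick, like preemptive SJF but with priority as the selection key. A Python sketch of this example (my own illustration; low number = high priority):

```python
def preemptive_priority_waiting(arrivals, bursts, priorities):
    # Preemptive priority scheduling, simulated one second at a time:
    # at each tick, run the arrived, unfinished process with the best
    # (numerically lowest) priority.
    n = len(bursts)
    remaining, finish, time = list(bursts), [0] * n, 0
    while any(remaining):
        ready = [i for i in range(n) if arrivals[i] <= time and remaining[i]]
        if not ready:
            time += 1
            continue
        i = min(ready, key=lambda j: priorities[j])
        remaining[i] -= 1
        time += 1
        if remaining[i] == 0:
            finish[i] = time
    # waiting time = turnaround time - burst time
    return [finish[i] - arrivals[i] - bursts[i] for i in range(n)]

# The example above: P1(burst 10, pri 3, arr 0), P2(1, 1, 1), P3(2, 2, 2)
print(preemptive_priority_waiting([0, 1, 2], [10, 1, 2], [3, 1, 2]))
# prints: [3, 0, 0]  -> average 3/3 = 1 second
```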

4) Round-Robin Scheduling (RR) –


It is similar to FCFS scheduling, but preemption is added to switch between
processes. A small unit of time, called a time quantum or time slice, is defined.
A time slice is generally from 10 to 100 milliseconds. The ready queue is
treated as a circular queue. The CPU scheduler goes around the ready queue, allocating
the CPU to each process for a time interval of up to 1 time slice.

To implement RR scheduling, we keep the ready queue as a FIFO queue of
processes. New processes are added to the tail of the ready queue. The CPU
scheduler picks the first process from the ready queue, sets a timer to interrupt after 1
time slice, and dispatches the process.
One of two things will then happen. The process may have a CPU burst of less than
1 time slice. In this case, the process itself will release the CPU. The scheduler will then
proceed to the next process in the ready queue.
Otherwise, if the CPU burst of the currently running process is longer than 1 time
slice, the timer will go off and will cause an interrupt to the operating system. A context
switch will be executed, and the process will be put at the tail of the ready queue. The
CPU scheduler will then select the next process in the ready queue.

Consider the following set of processes,

Process Burst Time


P1 24
P2 3
P3 3
If we use a time slice of 4 milliseconds, then process P1 gets the first 4
milliseconds. Since it requires another 20 milliseconds, it is preempted after the first
time slice, and the CPU is given to the next process in the queue, process P2.
Since process P2 does not need 4 milliseconds, it quits before its time slice expires.
The CPU is then given to the next process, process P3. Once each process has received
1 time slice, the CPU is returned to process P1 for an additional time slice. The resulting
RR schedule is,

P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30

The average waiting time is 17/3 = 5.66 milliseconds.
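The RR mechanism described above, a FIFO ready queue plus a timer, maps naturally onto a queue data structure. A Python sketch of the example (my own illustration; all processes arrive at time 0, so preempted processes always rejoin at the tail behind the existing queue):

```python
from collections import deque

def rr_waiting_times(bursts, quantum):
    # Round-robin with all processes arriving at time 0: pop the head of
    # the ready queue, run it for up to one time slice, and requeue it at
    # the tail if it still has work left.
    n = len(bursts)
    remaining, finish, time = list(bursts), [0] * n, 0
    queue = deque(range(n))
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])  # one slice, or less if it finishes
        remaining[i] -= run
        time += run
        if remaining[i]:
            queue.append(i)               # preempted: back to the tail
        else:
            finish[i] = time
    # waiting time = completion time - burst time (all arrivals at t=0)
    return [finish[i] - bursts[i] for i in range(n)]

# Bursts 24, 3, 3 with a 4 ms slice: waits 6, 4, 7 -> average 17/3 ms
print(rr_waiting_times([24, 3, 3], 4))   # prints: [6, 4, 7]
```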

In the RR scheduling algorithm, no process is allocated the CPU for more than
1 time slice in a row.
The performance of the RR algorithm depends heavily on the size of the time slice.
If the time slice is extremely large, the RR policy is the same as the FCFS policy; if the
time slice is extremely small (say, 1 millisecond), the RR approach is called processor
sharing.
