
UNIT II

Process Management: Process concepts, process scheduling, Operations on processes,

Interprocess communication, CPU Scheduling: Scheduling-criteria, scheduling algorithms, Thread

scheduling: Multiple processor scheduling, algorithm evaluation, Multithreaded programming,

Multi-core Programming, Multi-threading Models, Thread Libraries.


Process

 A process is basically a program under execution.

 The execution of a process must progress in a sequential fashion.

 When a program is loaded into memory, it becomes a process, which can be divided into four

sections ─ stack, heap, text and data.


 Stack

The process stack contains temporary data such as method/function parameters, the return
address, and local variables.

 Heap

This is memory dynamically allocated to the process during its run time.

 Text

This includes the compiled program code, along with the current activity represented by the value
of the program counter and the contents of the processor's registers.

 Data

This section contains the global and static variables.


Process Elements

Two essential elements of a process are:

 Program code

which may be shared with other processes that are executing the same program

 A set of data associated with that code

When the processor begins to execute the program code, we refer to this executing entity as a

process.
Attributes of a process

 The attributes of a process are used by the operating system to create the process control block

(PCB) for it.

 PCB is a data structure maintained by the Operating System for every process.

 A PCB keeps all the information needed to keep track of a process.

 The architecture of a PCB is completely dependent on Operating System and may contain different

information in different operating systems.

 The PCB is maintained for a process throughout its lifetime, and is deleted once the process

terminates.
 Process State

The current state of the process, i.e., whether it is new, ready, running, waiting, or terminated.

 Process privileges

This is required to allow/disallow access to system resources.

 Process ID

Unique identification for each of the process in the operating system.

 Pointer

A pointer to parent process.

 Program Counter

Program Counter is a pointer to the address of the next instruction to be executed for this
process.
 CPU registers

The various CPU registers whose contents must be saved when the process leaves the running state, so that execution can later resume correctly.

 CPU Scheduling Information

Process priority and other scheduling information which is required to schedule the process.

 Memory management information

This includes information such as the page table, memory limits, and segment table, depending on
the memory system used by the operating system.

 Accounting information

This includes the amount of CPU time used for process execution, time limits, execution IDs, etc.

 IO status information

This includes a list of I/O devices allocated to the process.


Process Life Cycle

 When a process executes, it passes through different states.

 The sequence of state changes a process goes through during its execution is called the process life
cycle.

 The process life cycle can be described by a state diagram, which has states representing the
execution status of the process at various times, and transitions between them.

 The current state of the process tells us about the current activity of the process.

 The state of a process may change due to events like I/O requests, interrupt routines, synchronization
of processes, process scheduling algorithms, etc.

 To maintain the information about process, the operating system uses the process control block


(PCB).
Process States

 Two-State Process Model

 Three-State Process Model

 Five-State Process Model

 Six State Process Model

 Seven-State Process Model


Two-State Process Model
Two State Process Model consists of two states:
 Not-running State: Process waiting for execution.

 Running State: Process currently executing.

 When a process is first created, the OS initializes the process control block for it, and the new
process enters the system in the Not-running state.

 After some time, the currently running process will be interrupted by some events, and the OS will
move the currently running process from Running state to Not-running state.

 The dispatcher then selects one process from Not-running processes and moves the process to the
Running state for execution.

 Dispatcher: Dispatcher is a program that gives the CPU to the process selected by the CPU
scheduler.
Three State Process Model
 There is one major drawback of the two-state process model.

 When the dispatcher brings a new process from the not-running state to the running state, the process
might still be waiting for some event or I/O request.

 So, the dispatcher must traverse the queue and find a not-running process that is ready for
execution.

 To overcome this problem, we split the not-running state into two states: Ready State and Waiting
(Blocked) State.
 Ready State: The process in the main memory that is ready for execution.

 Waiting or Blocked State: The process in the main memory that is waiting for some event.

 The OS maintains a separate queue for both Ready State and Waiting State. A process moves
from Waiting State to Ready State once the event it’s been waiting for completes.
Five-State Process Model
States

 New: The process is just being created. The process control block has already been made, but

the program is not yet loaded into main memory.

 Ready: A process that is waiting to be executed.

 Running: The currently executing process.

 Waiting/Blocked: Process waiting for some event such as completion of I/O operation, waiting

for other processes, synchronization signal, etc.

 Terminated/Exit: A process that is finished or aborted due to some reason.


State Transitions

 New -> Ready: The long term scheduler picks up a new process from secondary memory and

loads it into the main memory when there are sufficient resources available. The process is now in

ready state, waiting for its execution.

 Ready -> Running: The short term scheduler or the dispatcher moves one process from ready

state to running state for execution.

 Running -> Terminated: The OS moves a process from running state to terminated state if the

process finishes execution or if it aborts.


 Running -> Ready: This transition occurs when the process has run for its allotted amount of time

without interruption; the scheduler then preempts it so another process can run.

 Running -> Waiting: A process is put in the waiting state if it must wait for some event. For

example, the process may request some resources or memory which might not be available.

 Waiting -> Ready: A process moves from waiting state to ready state if the event the process has

been waiting for, occurs. The process is now ready for execution.
Six State Process Model
 The six-state process model adds a suspend state.

 The processor is much faster than I/O devices.

 Therefore, a situation may occur where the processor executes so fast that all of the processes move to
the waiting state and no process is in the ready state.

 The CPU sits idle until at least one process finishes its I/O operation. This leads to low CPU
utilization.

 To prevent this, if all the processes in main memory are in the waiting state, the OS suspends a process
in the waiting/blocked state and moves it to secondary memory.

 All suspended processes are kept in a queue. The memory of the swapped-out process is freed.

 CPU can now bring some other process in the main memory.

 There are two options: bring in a brand-new process, or bring a process from the suspend queue
back into main memory.
State Transitions

 Waiting -> Suspend: The OS moves a process from the waiting state to a suspend state if all the

processes in the main memory are in the waiting state.

 Suspend -> Ready: When sufficient memory is available, the OS moves a process from the

suspend state back to the main memory for execution.

 Suspend -> Waiting: The process brought by OS from secondary memory to main memory might

still be waiting for some event.


Seven State Process Model
 The seven-state process model has two suspended states.

 In the six-state model, the OS doesn't know which process in the suspend queue is ready for execution.

 It may therefore swap a process that is still waiting for event completion from secondary memory back

to the main memory.

 There is no point in moving a blocked process back to the main memory, so performance suffers.

To avoid this, we divide the suspend state into 2 states:

 Blocked/Suspend: The process is in secondary memory but not yet ready for execution.

 Ready/Suspend: The process is in secondary memory and ready for execution.


State Transitions

 Waiting -> Blocked/Suspend: If all the processes in the main memory are in the waiting state, the

OS swaps out at least one waiting process to secondary memory to free memory to

bring in another process.

 Blocked/Suspend -> Waiting: This transition might look unreasonable but if the priority of a

process in Blocked/Suspend state is greater than processes in Ready/Suspend state then CPU may

prefer process with higher priority.

 Blocked/Suspend -> Ready/Suspend: The process moves from Blocked/Suspend to

Ready/Suspend state if the event, the process has been waiting for occurs.
 Ready/Suspend -> Ready: The OS moves a process from secondary memory to the main

memory when there is sufficient space available. Also, if there is a high priority process in

Ready/Suspend state, then OS may swap it with a lower priority process in the main memory.

 Ready -> Ready/Suspend: The OS moves a process from the ready state to ready/suspended to

free main memory for a higher priority process.

 New -> Ready/Suspend: The OS may move a new process to Ready/Suspended if the main

memory is full.
Schedulers

Schedulers are special system software that handle process scheduling in various ways. Their

main task is to decide which process to run. Schedulers are of three types −

 Long-Term Scheduler

 Short-Term Scheduler

 Medium-Term Scheduler
 Long Term Scheduler

 It is also called a job scheduler.

 A long-term scheduler determines which programs are admitted to the system for processing.

 It selects processes from the queue and loads them into memory, where they become available
for CPU scheduling.

 Medium Term Scheduler

 Medium-term scheduling is a part of swapping.

 It removes processes from main memory, placing them in secondary memory (and later swaps them back in).

 The medium-term scheduler is in charge of handling the swapped-out processes.


 Short Term Scheduler

 It is also called the CPU scheduler.

 Its main objective is to increase system performance.

 It handles the transition of processes from the ready state to the running state.

 The CPU scheduler selects a process from among the processes that are ready to execute and
allocates the CPU to it.
 Short-term schedulers, also known as dispatchers, make the decision of which process to
execute next.
 Short-term schedulers are faster than long-term schedulers.
Context Switch

 A context switch is the mechanism to store and restore the state of a CPU in Process Control block

so that a process execution can be resumed from the same point at a later time.

 Using this technique, a context switcher enables multiple processes to share a single CPU.

 Context switching is an essential feature of a multitasking operating system.

 When the scheduler switches the CPU from executing one process to executing another, the state

of the currently running process is stored in its process control block.
When the process is switched, the following information is stored for later use.

 Program Counter

 Scheduling information

 Base register value

 Currently used register

 Changed State

 I/O State information

 Accounting information
Operations on processes
The execution of a process is a complex activity. It involves various operations. Following are
the operations that are performed while execution of a process

 Creation: This is the initial step of the process execution activity. Process creation means the
construction of a new process for execution. This might be performed by the system, a user, or an
existing process.

 Scheduling/Dispatching: The event or activity in which the state of the process is changed from
ready to running.

 Blocking: When a process invokes an input-output system call, the call blocks the process and the
operating system puts it in blocked mode. Blocked mode is basically a mode where the process waits
for input-output.
 Preemption: When a timeout occurs, meaning the process hasn't finished in the allotted

time interval and the next process is ready to execute, the operating system preempts the process.

This operation is only valid where CPU scheduling supports preemption.

 Termination: Process termination is the activity of ending the process. In other words, process

termination is the release of the computer resources taken by the process for its execution.
Interprocess communication
 Interprocess communication is the mechanism provided by the operating system that allows
processes to communicate with each other.

 Processes executing concurrently in the operating system may be either independent processes or
cooperating processes.

 A process is independent if it cannot affect or be affected by the other processes executing in the
system.

 Any process that does not share data with any other process is independent.

 A process is cooperating if it can affect or be affected by the other processes executing in the
system.

 Any process that shares data with other processes is a cooperating process.
 The communication between these processes can be seen as a method of co-operation between

them.

 Processes can communicate with each other through both:

 Shared Memory

 Message passing
 Shared Memory:

 Shared memory is the memory that can be simultaneously accessed by multiple processes.

 This is done so that the processes can communicate with each other.

 Message Queue:

 Multiple processes can read and write data to the message queue without being connected to

each other.

 Messages are stored in the queue until their recipient retrieves them.

 Message queues are quite useful for Interprocess communication and are used by most

operating systems.
CPU Scheduling

 It is the process of determining which process will own the CPU for execution while the execution of

another process is on hold (in the waiting state) due to the unavailability of some resource, such as I/O.

 The aim of CPU scheduling is to make the system efficient, fast and fair.

 Whenever the CPU becomes idle, the operating system must select one of the processes in the ready

queue to be executed.

 The selection process is carried out by the short-term scheduler (or CPU scheduler).

 The scheduler selects one from set of processes in memory that are ready to execute, and allocates the

CPU to one of them.


Types

Two kinds of Scheduling methods

 Preemptive Scheduling

 Non- Preemptive Scheduling


Preemptive Scheduling

 The tasks are usually assigned with priorities.

 At times it is necessary to run a task with a higher priority before another task, even though that

task is currently running.

 Therefore, the running task is interrupted for some time and resumed later when the priority task

has finished its execution.

Non- Preemptive Scheduling

 Once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU

either by terminating or by switching to the waiting state.


Is scheduling preemptive or non-preemptive?

CPU scheduling decisions take place under the following four circumstances:

1. A process switches from the running state to the waiting state.

2. A process switches from the running state to the ready state.

3. A process switches from the waiting state to the ready state.

4. A process finishes its execution and terminates.

If scheduling takes place only under circumstances 1 and 4, the scheduling scheme is non-preemptive;
otherwise, it is preemptive.


Scheduling Criteria
There are many different criteria to consider when choosing the "best" scheduling algorithm:

 CPU Utilization

To make the best use of the CPU and not waste any CPU cycles, the CPU should be kept working most of
the time (ideally 100% of the time).

 Throughput

It is the total number of processes completed per unit time; in other words, the total amount of work
done in a unit of time.

 Turnaround Time

It is the amount of time taken to execute a particular process, i.e., the interval from the time of
submission of the process to the time of its completion (wall-clock time).
 Waiting Time

The amount of time a process has been waiting in the ready queue to get control of the
CPU.

 Load Average

It is the average number of processes residing in the ready queue waiting for their turn to get
into the CPU.

 Response Time

Amount of time it takes from when a request was submitted until the first response is produced.
Scheduling Algorithms

To decide which process to execute first and which process to execute last to achieve maximum
CPU utilization, we have some algorithms, they are:

 First Come First Serve(FCFS) Scheduling

 Shortest-Job-First(SJF) Scheduling

 Priority Scheduling

 Round Robin(RR) Scheduling

 Multilevel Queue Scheduling

 Multilevel Feedback Queue Scheduling


First Come First Serve(FCFS)

 The process which arrives first, gets executed first, or we can say that the process which requests

the CPU first, gets the CPU allocated first.

 First Come First Serve is just like a FIFO (First In First Out) queue data structure: the data

element added to the queue first is the first one to leave the queue.

 It is a non-preemptive scheduling algorithm.

 It is poor in performance, as the average waiting time is high.


Wait time of each process is as follows −

Process   Wait Time : Service Time - Arrival Time
P0        0 - 0 = 0
P1        5 - 1 = 4
P2        8 - 2 = 6
P3        16 - 3 = 13

Average Wait Time: (0 + 4 + 6 + 13) / 4 = 5.75


Shortest Job Next (SJN)

 SJF (Shortest Job First) is a scheduling algorithm in which the process with the shortest
execution time is selected for execution next.

 This scheduling method can be preemptive or non-preemptive. It significantly reduces the average
waiting time for other processes awaiting execution.

Characteristics of SJF Scheduling

 Each job has an associated unit of time in which to complete (its burst time).

 In this method, when the CPU is available, the next process or job with the shortest completion
time will be executed first.
Non Pre-emptive Shortest Job First

Process   Arrival Time   Execution Time   Service Time
P0        0              5                0
P1        1              3                5
P2        2              8                14
P3        3              6                8

Waiting time of each process is as follows −

Process   Waiting Time
P0        0 - 0 = 0
P1        5 - 1 = 4
P2        14 - 2 = 12
P3        8 - 3 = 5

Average Wait Time: (0 + 4 + 12 + 5) / 4 = 21 / 4 = 5.25


Pre-emptive Shortest Job First:
In Preemptive Shortest Job First scheduling, jobs are put into the ready queue as they arrive; when
a process with a shorter burst time arrives, the currently executing process is preempted, and the
shorter job is executed first.
Priority Based Scheduling

 Priority scheduling is a method of scheduling processes based on priority.

 In this method, the scheduler selects the tasks to work as per the priority.

 Priority scheduling also allows the OS to assign priorities to processes explicitly.

 The processes with higher priority should be carried out first, whereas jobs with equal priorities

are carried out on a round-robin or FCFS basis.

 Priority can be decided based on memory requirements, time requirements, etc.


Process   Arrival Time   Execution Time   Priority   Service Time
P0        0              5                1          0
P1        1              3                2          11
P2        2              8                1          14
P3        3              6                3          5

Waiting time of each process is as follows −

Process   Waiting Time
P0        0 - 0 = 0
P1        11 - 1 = 10
P2        14 - 2 = 12
P3        5 - 3 = 2

Average Wait Time: (0 + 10 + 12 + 2) / 4 = 24 / 4 = 6


A second example, using preemptive priority scheduling (here a lower priority number means higher
priority):

Process   Arrival Time   Burst Time   Priority
P1        0              8            3
P2        1              1            1
P3        2              3            2
P4        3              2            3
P5        4              6            4

Average waiting time (AWT)

= ((5 - 1) + (1 - 1) + (2 - 2) + (12 - 3) + (14 - 4)) / 5 = 23 / 5 = 4.6
Round Robin Scheduling

 Round Robin is a preemptive process scheduling algorithm.

 Each process is given a fixed time to execute, called a quantum (or time slice).

 Once a process has executed for the given time period, it is preempted, and another process executes for

its time period.

 Context switching is used to save the states of preempted processes.


Wait time of each process (with a time quantum of 3) is as follows −

Process   Wait Time : Service Time - Arrival Time
P0        (0 - 0) + (12 - 3) = 9
P1        (3 - 1) = 2
P2        (6 - 2) + (14 - 9) + (20 - 17) = 12
P3        (9 - 3) + (17 - 12) = 11

Average Wait Time: (9 + 2 + 12 + 11) / 4 = 8.5


Multilevel Queue Scheduling:

 According to the priority of a process, processes are placed in different queues.

 Generally, high-priority processes are placed in the top-level queue.

 Only after completion of the processes in the top-level queue are the processes in the lower-level

queues scheduled.

Multi level Feedback Queue Scheduling:

 It allows the process to move in between queues.

 The idea is to separate processes according to the characteristics of their CPU bursts.

 If a process uses too much CPU time, it is moved to a lower-priority queue.


Algorithm Evaluation

 Selecting an algorithm can be difficult. The first problem is defining the criteria to be used in
selecting an algorithm.

 Criteria are often defined in terms of CPU utilization, response time, or throughput.

 To select an algorithm, criteria may include several measures, such as:

 Maximizing CPU utilization under the constraint that the maximum response time is 1 second

 Maximizing throughput such that turnaround time is (on average) linearly proportional to total
execution time
Thread

 A thread is a path of execution within a process.

 A thread is also called a lightweight process.

 A process can contain multiple threads.

 Threads provide a way to improve application performance through parallelism by dividing a process

into multiple threads

 A thread shares with its peer threads some information, such as the code segment, data segment, and open files.

 When one thread alters a shared memory item, all other threads see that change.
Advantages

 Threads minimize the context switching time.

 Use of threads provides concurrency within a process.

 Efficient communication.

 It is more economical to create and context switch threads.

 Threads allow utilization of multiprocessor architectures to a greater scale and efficiency.


Types of Thread

Threads are implemented in following two ways −

 User Level Threads − User managed threads.

 Kernel Level Threads − Operating System managed threads acting on kernel, an operating

system core.
Multithreading Models

 Multiple threads within the same application can run in parallel on multiple processors and a

blocking system call need not block the entire process.

 There are three multithreading models:

Many to many relationship.

Many to one relationship.

One to one relationship.


Many to Many Model
 The many-to-many model multiplexes any number of user threads onto an equal or smaller
number of kernel threads.

 For example, 6 user-level threads may be multiplexed over 6 (or fewer) kernel-level threads.
Many to One Model

 Many-to-one model maps many user level threads to one Kernel-level thread.

 Thread management is done in user space by the thread library.

 When a thread makes a blocking system call, the entire process is blocked.

 Only one thread can access the Kernel at a time, so multiple threads are unable to run in parallel
on multiprocessors.
One to One Model
 There is one-to-one relationship of user-level thread to the kernel-level thread.

 This model provides more concurrency than the many-to-one model.

 It also allows another thread to run when a thread makes a blocking system call.

 It allows multiple threads to execute in parallel on multiprocessors.


Multiple-Processor Scheduling

 A multiprocessor system consists of several processors that share memory.

 In a multiprocessor, there is more than one processor in the system.

 In a multiprocessor system, all the processors operate under a single operating system.

 Multiple-processor scheduling is more complex than single-processor scheduling.

 In multiple-processor scheduling, if the processors are identical (homogeneous) in terms of

their functionality, we can use any available processor to run any process in the queue.
Approaches

 Asymmetric Multiprocessing: all scheduling decisions and I/O processing are handled by a single

processor, called the master server, and the other processors execute only user code.

 Symmetric Multiprocessing (SMP): each processor is self-scheduling. All processes may be

in a common ready queue, or each processor may have its own private queue of ready processes.
Multi core programming
 A processor that has more than one core is called Multicore Processor

 These cores can individually read and execute program instructions, making it feel as if the computer
system has several processors, though in reality they are cores, not separate processors.

 The processor can run instructions on separate cores at the same time. This increases the overall
speed of program execution in the system.

 Multicore systems support multithreading and parallel computing.

 Software that can run in parallel is preferred, because we want to achieve parallel execution with
the help of multiple cores.

 Systems that we create using multicore programming have multiple tasks executing at the same time;
this is known as parallel execution.
Thread Libraries

A thread library provides the programmer an API for creating and managing threads. There are
two primary ways of implementing a thread library.
 The first approach is to provide a library entirely in user space with no kernel support. All
code and data structures for the library exist in user space. This means that invoking a
function in the library results in a local function call in user space and not a system call.
 The second approach is to implement a kernel-level library supported directly by the OS. In
this case, code and data structures for the library exist in kernel space. Invoking a function in
the API for the library typically results in a system call to the kernel.
Three main thread libraries are in use today:

 POSIX Pthreads: Pthreads, the threads extension of the POSIX standard, may be provided as

either a user- or kernel-level library.

 Win32: The Win32 thread library is a kernel-level library available on Windows systems.

 Java: The Java thread API allows thread creation and management directly in Java programs.

However, because in most instances the JVM is running on top of a host OS, the Java thread

API is typically implemented using a thread library available on the host system.
