Ch3 Osy Notes

Operating System

Chapter 3
Process Management
MRS A.S.KHANDAGALE

1
2
• Process-structure
• Program
• Difference between Process and Program
• Process states (process state transition diagram)
• Process Control Block

3
Process

• A process is basically a program in execution. The execution of a process must progress in a sequential fashion.
• A process is defined as an entity which represents the basic unit of work to be implemented in the system.
• To put it in simple terms, we write our computer programs in a text file, and when we execute this program, it becomes a process which performs all the tasks mentioned in the program.

4
• When a program is loaded into memory, it becomes a process, and a process is the basic unit of work to be implemented in the system.
• A process can be divided into four sections: stack, heap, text and data.
The following figure shows a simplified layout of a process inside main memory.

5
• Each process has:

1) Text: The text section contains the program code. This includes the current activity, represented by the value of the program counter and the contents of the processor's registers.
2) Data: This section contains the global and static variables.
3) Heap: The heap is used for memory dynamically allocated to a process during its run time. It is managed via calls to new, delete, malloc and free.
4) Stack: The stack is used for local variables. The process stack contains temporary data such as method/function parameters, return addresses and local (temporary) variables. Space on the stack is reserved for local variables when they are declared, and the space is freed when the variables go out of scope (a small C sketch follows below).
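As a small illustrative sketch (not part of the original notes), the following C fragment marks which section each object typically lives in; all names are hypothetical.

#include <stdlib.h>

int global_counter = 0;                  /* data section: global variable            */
static double ratio = 1.5;               /* data section: static variable            */

int main(void)                           /* compiled code of main(): text section    */
{
    int local = 42;                      /* stack: local variable, freed on return   */
    int *buf = malloc(16 * sizeof *buf); /* heap: dynamic allocation at run time     */

    (void)global_counter; (void)ratio; (void)local;
    free(buf);                           /* heap space returned via free()           */
    return 0;
}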

6
Difference between Process and Program
Program:
• A program is an executable file which contains a set of instructions written to complete a specific job on your computer. For example, chrome.exe for the Google Chrome browser is an executable file which stores the set of instructions that allow you to view web pages.
• Programs are not stored in the primary memory of your computer. Instead, they are stored on a disk or in secondary memory on your PC or laptop. They are read into primary memory and executed by the kernel.

7
Process:
• A process is an execution of a specific program. It is considered an active entity that carries out the purpose of the application. Multiple processes may be related to the same program.
• For example, if you double-click the Google Chrome browser icon on your PC or laptop, you start a process which runs the Google Chrome program. When you open another instance of Chrome, you essentially create a second process.

8
Program vs. Process
1) Definition
   Process: An executing part of a program is called a process.
   Program: A program is a group of ordered operations to achieve a programming goal.
2) Resource management
   Process: The resource requirement is quite high in the case of a process.
   Program: The program only needs memory for storage.
3) Lifespan
   Process: The process has a shorter and very limited lifespan, as it gets terminated after the completion of its task.
   Program: A program has a longer lifespan, as it is stored in secondary memory until it is manually deleted.
4) Required resources
   Process: A process holds resources like the CPU, a memory address space, disk, I/O, etc.
   Program: The program is stored on disk in some file and does not require any other resources.
5) Entity type / Nature
   Process: A process is a dynamic or active entity.
   Program: A program is a passive or static entity.
6) Contains
   Process: A process contains many resources, such as a memory address space, disk, printer, etc.
   Program: A program needs memory space on disk to store all its instructions.
9
Process States
 A process state is the condition of the process at a specific instant of time. It also defines the current position of the process.

 Different process states: New, Ready, Running, Waiting, Terminated

 Processes in the operating system can be in any of the following states:

10
1. New: The process is being created. Every new operation requested from the system is known as a new-born process.
2. Ready: The process is ready to execute but is waiting for the CPU; it is waiting to be assigned to a processor.
3. Running: Instructions are being executed. When the program is being executed by the CPU, the process is said to be running.
4. Waiting (blocked): The process cannot execute until some event occurs, such as an I/O completion. The process is waiting for some event to occur (such as an I/O completion or the reception of a signal).
5. Terminated: The process has finished execution. The process is terminated, and the memory allocated for the process is deallocated.
11
Process State Transition Diagram
When a process executes, it passes through different states. An active process is normally
in one of the five states in the diagram. The arrows show how the process changes states.

Figure: Process State Transition Diagram


12
Process Control Block (PCB)
• Each process is represented in the OS by a Process Control Block (PCB), also called a task control block.
• The PCB defines a process to the OS and contains all the information about that process.
• When a process is created, the OS creates a corresponding PCB and releases it when the process terminates.
• A Process Control Block is a data structure maintained by the operating system for every process. The PCB is identified by an integer process ID (PID).

13
• A PCB keeps all the information needed to keep track of a process, as listed below.

14
1) Process ID: Each process is identified by its process number, called the process identification number. Each process has a unique process ID through which it is identified. The process ID is assigned by the OS.
2) Process priority: Each process is assigned a certain level of priority, which shows the preference of one process over another for execution. Priority may be given by the user/system manager, or it may be assigned internally by the OS. This field stores the priority of a particular process.
3) Pointer: The pointer points to another process control block. Pointers are used for maintaining the scheduling lists.
4) Process state: The current state of the process, i.e. whether it is new, ready, running, waiting or terminated.
15
5) Program counter: The program counter is a pointer to the address of the next instruction to be executed for this process.
6) CPU registers: The registers vary in number and type, depending on the computer architecture. They include accumulators, index registers, stack pointers, general-purpose registers and any condition-code information.
7) CPU scheduling information: Process priority and any other scheduling information required to schedule the process.
8) Memory management information: This includes information such as page tables, memory limits and segment tables, depending on the memory system used by the operating system.

16
9) Accounting information: This includes the amount of CPU time used, job or process numbers, and so on.
10) I/O status information: This includes the list of I/O devices allocated to the process, a list of open files, and so on.
11) File management: This includes information about all open files, access rights, etc.

Thus the PCB simply serves as the repository for any information that may vary from process to process (an illustrative C sketch follows below).
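As an illustrative sketch only (no real operating system defines its PCB exactly this way), the fields above can be pictured as a C structure; every name below is hypothetical.

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;              /* 1) process ID                              */
    int             priority;         /* 2) process priority                        */
    struct pcb     *next;             /* 3) pointer used to link scheduling lists   */
    enum proc_state state;            /* 4) process state                           */
    unsigned long   program_counter;  /* 5) address of the next instruction         */
    unsigned long   registers[16];    /* 6) saved CPU registers                     */
    int             time_slice;       /* 7) CPU scheduling information              */
    void           *page_table;       /* 8) memory management information           */
    unsigned long   cpu_time_used;    /* 9) accounting information                  */
    int             open_files[16];   /* 10/11) I/O status and file management info */
};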

17
Operating System
Chapter 3
Process Management
3.2 Process Scheduling

MRS A.S.KHANDAGALE

1
Contents
Process Scheduling
Scheduling Queues
Queuing Diagram
Schedulers
 Long-Term Scheduler
 Short-Term Scheduler
 Medium-Term Scheduler

2
Process Scheduling
Scheduling is a fundamental operating-system function.
Almost all computer resources are scheduled before use.
We can run many programs at a time on a computer, but there is a single CPU; to run all the programs concurrently (or apparently simultaneously), we use scheduling.
Processes are the small programs that are executed according to the user's request.
The CPU executes all the processes according to some rules or some schedule.
Scheduling ensures that each process gets some amount of CPU time.

3
Definition: Process Scheduling
Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.
Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.

4
Process Scheduling Queues
Scheduling queues refer to queues of processes or devices.
For a uniprocessor system, there will never be more than one running process.
If there is more than one process, the rest will have to wait until the CPU is free and can be rescheduled.
The OS maintains all PCBs in process scheduling queues.
The OS maintains a separate queue for each of the process states, and the PCBs of all processes in the same execution state are placed in the same queue.
When the state of a process changes, its PCB is unlinked from its current queue and moved to its new state queue.
The processes which are ready and waiting to execute are kept in a list called the ready queue. The list is generally a linked list.

5
The operating system maintains the following important process scheduling queues:
1) Job queue: This queue keeps all the processes in the system.
2) Ready queue:
This queue keeps a set of all processes residing in main memory, ready and waiting to execute.
The processes which are ready and waiting to execute are kept in a list called the ready queue.
The list is generally a linked list.
A ready-queue header contains pointers to the first and last PCBs (Process Control Blocks) in the list.
Each PCB has a pointer field which points to the next process in the ready queue (a short C sketch of this linked list follows after the device queues).
6
3) Device queues:
When a process is allocated the CPU, it executes for a while and eventually quits, is interrupted, or waits for the occurrence of a particular event, such as the completion of an I/O request.
The I/O request may be to a dedicated device such as a tape drive, or to a shared device such as a disk.
Since there are many processes in the system, the disk may be busy with the I/O request of some other process.
The process therefore may have to wait for the disk. The list of processes waiting for a particular I/O device is called a device queue.
Thus the processes which are blocked due to the unavailability of an I/O device constitute this queue.
Each device has its own device queue.
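Returning to the ready queue described above, a minimal C sketch of such a linked list of PCBs (reusing the hypothetical struct pcb from the earlier sketch, with its next pointer) might look like this:

#include <stddef.h>

struct ready_queue {
    struct pcb *head;    /* pointer to the first PCB in the ready queue */
    struct pcb *tail;    /* pointer to the last PCB in the ready queue  */
};

/* Link a PCB at the tail when its process becomes ready. */
void ready_enqueue(struct ready_queue *q, struct pcb *p)
{
    p->next = NULL;
    if (q->tail == NULL)
        q->head = p;               /* queue was empty */
    else
        q->tail->next = p;
    q->tail = p;
}

/* Unlink the PCB at the head when the CPU scheduler dispatches its process. */
struct pcb *ready_dequeue(struct ready_queue *q)
{
    struct pcb *p = q->head;
    if (p != NULL) {
        q->head = p->next;
        if (q->head == NULL)
            q->tail = NULL;
    }
    return p;
}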
7
8
9
Queuing Diagram

Figure: Queueing diagram showing the job queue, ready queue and device queues.

10
In the above diagram:
• A rectangle represents a queue.
• A circle denotes a resource.
• An arrow indicates the flow of the process.
1. Every new process is first put in the ready queue. It waits in the ready queue until it is selected for execution, i.e. dispatched.
2. One of the processes is allocated the CPU and starts executing.
3. The executing process may issue an I/O request;
4. it is then placed in an I/O queue.
5. The process may create a new subprocess
6. and wait for the subprocess's termination.
7. The process may be removed forcibly from the CPU as a result of an interrupt; once the interrupt is handled, it is put back in the ready queue.
11
Summary: Queuing Diagram
A process enters the system from the outside world and is put in the ready queue.
It waits in the ready queue until it is selected for the CPU.
After running on the CPU, it waits for an I/O operation by moving to an I/O queue.
Eventually it is served by the I/O device and returns to the ready queue.
A process continues this CPU/I/O cycle until it finishes, and then it exits from the system.
The above queueing diagram represents process scheduling.
Each rectangle represents a queue.
Two types of queues are present: the ready queue and a set of device queues.
The circles represent the resources that serve the queues, and the arrows indicate the flow of processes in the system.
12
A new process is initially put in the ready queue. It waits there until it is selected for execution, or dispatched.
Once the process is assigned the CPU and is executing, one of several events could occur:
1. The process could create a new sub-process and wait for the termination of the sub-process.
2. The process could issue an I/O request and then be placed in an I/O queue.
3. The process could be removed forcibly from the CPU, as a result of an interrupt, and be put back in the ready queue.
In the first two cases, the process eventually switches from the waiting state back to the ready state.
A process continues this cycle until it terminates.
When the process terminates, it is removed from all queues, and its PCB and resources are de-allocated.

13
Schedulers
Schedulers are special system software which handle process
scheduling in various ways.
Their main task is to select the jobs to be submitted into the
system and to decide which process to run.
Schedulers are of three types −
• Long-Term Scheduler
• Short-Term Scheduler
• Medium-Term Scheduler

14
Schedulers

15
16
Long Term Scheduler:
It is also called a job scheduler.
A long-term scheduler determines which programs are admitted to the system for processing.
It selects processes from the job queue and loads them into the ready queue in main memory for execution.
It loads processes into memory for CPU scheduling.
Time-sharing operating systems have no long-term scheduler.
When a process changes state from new to ready, the long-term scheduler is used.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O-bound and CPU (processor)-bound jobs.
The long-term scheduler must therefore make a careful selection.
17
An I/O-bound process spends more of its time doing I/O than it spends doing computation.
A CPU-bound process, on the other hand, spends more of its time doing computations.
If all the processes are I/O bound, the ready queue will almost always be empty and the short-term scheduler will have little to do.
If all processes are CPU bound, the I/O waiting queues will almost always be empty, devices will go unused, and again the system will be unbalanced.
The system with the best performance will have a balanced mix of CPU-bound and I/O-bound processes.
The long-term scheduler executes much less frequently.
The long-term scheduler controls the degree of multiprogramming, i.e. the number of processes kept in main memory.
18
Short Term Scheduler (CPU Scheduler)
It is also called the CPU scheduler.
Its main objective is to increase system performance in accordance with the chosen set of criteria.
It handles the change of a process from the ready state to the running state.
The CPU scheduler selects a process from among the processes in main memory that are ready to execute and allocates the CPU to one of them.
Short-term schedulers, also known as dispatchers, make the decision of which process to execute next.
Short-term schedulers are faster than long-term schedulers.
The short-term scheduler makes scheduling decisions much more frequently than the other schedulers.
A scheduling decision must be made at least after every time slice, and time slices are very short.
19
This scheduler can be preemptive, i.e. capable of forcibly removing processes from the CPU when it decides to allocate the CPU to another process, or non-preemptive, in which case the scheduler is unable to force processes off the CPU.
The short-term scheduler is invoked very frequently (milliseconds), so it must be very fast.
The long-term scheduler is invoked very infrequently (seconds or minutes), so it may be slow.
The short-term scheduler makes the fine-grained decision of which process to execute next.

20
21
Medium Term Scheduler
Medium-term scheduling is a part of swapping.
It removes processes from memory.
It reduces the degree of multiprogramming.
The medium-term scheduler is in charge of handling the swapped-out processes.

22
A running process may become suspended if it makes an I/O request.
A suspended process cannot make any progress towards completion.
 In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage. This is called swapping, and the process is said to be swapped out or rolled out.
Swapping may be necessary to improve the process mix.

23
• Thus medium-term schedulers are those schedulers whose decisions have a mid-term effect on the performance of the system. The medium-term scheduler is responsible for swapping a process from main memory to secondary memory and vice versa.
• It is helpful in maintaining a balance between the I/O-bound and CPU-bound processes. It reduces the degree of multiprogramming.

24
Long-term vs Short-term vs Medium-term Schedulers

25
Operating System
Chapter 3
Process Management
3.3 Inter Process Communication (IPC)
Message Passing System, Shared Memory System

MRS A.S.KHANDAGALE

1
Contents
Inter Process Communication (IPC)
Definition
Purposes of IPC
Communication Models
1)Message passing system
 Direct communication
 Indirect communication
Symmetric and asymmetric addressing
 Process Synchronization
- Blocking send, non-blocking send, blocking receive, non-blocking receive
Buffering
- Zero capacity, bounded capacity, unbounded capacity
2) Shared memory model - producer-consumer problem example
- Unbounded buffer, bounded buffer
2
Inter Process Communication (IPC)
Def: An exchange of information among processes is called IPC.

Inter Process Communication (IPC) refers to a mechanism by which the operating system allows various processes to communicate, i.e. exchange data with each other.

This involves synchronizing their actions and managing shared data.

IPC is particularly useful in a distributed environment, where the communicating processes may reside on different computers connected by a network.

3
Purposes of IPC
Data transfer
Sharing data
Event notification
Resources sharing and synchronization
Process control

4
 Processes executing concurrently in the operating system may be either independent or cooperating processes.
Reasons for providing an environment that allows process cooperation:
1) Information sharing: Several users may be interested in the same piece of information.
2) Computational speed-up: A process can be divided into subtasks to run faster; speed-up can be achieved if the computer has multiple processing elements.
3) Modularity: Dividing the system functions into separate processes or threads.
4) Convenience: Even an individual user may work on many tasks at the same time.
5
Processes can communicate with each other, i.e. perform IPC, through two fundamental models:
1. Shared memory
Processes can exchange information by reading and writing data in the shared region.
This is faster than message passing, as it can be done at memory speeds within a computer.
System calls are required only to establish the shared memory regions.
2. Message passing
Data or information is exchanged in the form of messages.
This mechanism allows processes to communicate and synchronize their actions without sharing the same address space.
It is particularly useful in a distributed environment.
6
Communication Models

7
Message Passing System
• Messages are collections of data objects and their structures.
• Messages have a header containing system-dependent control information, and a message body that can be of fixed or variable size.
• When a process interacts with another, two requirements have to be satisfied: synchronization and communication.
• Fixed-length messages:
- Easy to implement.
- Minimize processing and storage overhead.
• Variable-length messages:
- Require dynamic memory allocation, so fragmentation could occur.

8
• Message passing provides a mechanism that allows processes to communicate and synchronize their actions without sharing the same address space. It is useful in a distributed environment, where the communicating processes may reside on the same computer or on different computers connected by a network.
• For example, a chat program used on the World Wide Web could be designed so that chat participants communicate with one another by exchanging messages.

9
• Communication among user processes is accomplished through the passing of messages. An IPC facility provides two operations:
• two generic message-passing primitives for sending and receiving messages:
1. send (destination, message)
2. receive (source, message)

source or destination = { process name, link, mailbox, port }

If processes A and B want to communicate, they must send messages to and receive messages from each other; a communication link must exist between them.
This link is implemented not physically but logically (a small pipe-based sketch follows below).
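The send/receive primitives above are abstract. As one concrete illustration (a minimal sketch, not the mailbox scheme described later), the program below uses a POSIX pipe as the communication link between a parent and a child process:

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) == -1)                     /* create the communication link */
        return 1;

    if (fork() == 0) {                      /* child process: the receiver   */
        char msg[64] = {0};
        close(fd[1]);                       /* child only reads              */
        read(fd[0], msg, sizeof msg - 1);   /* receive(link, message)        */
        printf("child received: %s\n", msg);
        close(fd[0]);
        return 0;
    }

    close(fd[0]);                           /* parent only writes            */
    write(fd[1], "hello from parent", 18);  /* send(link, message)           */
    close(fd[1]);
    wait(NULL);                             /* wait for the child to finish  */
    return 0;
}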

10
• Processes that want to communicate must have a way to refer to or address each other.
• Addressing can be done by either:
1) Direct communication
2) Indirect communication
1) Direct communication
With direct communication, each process that wants to communicate must explicitly name the recipient or sender of the communication:
i) send(A, message): send a message to process A.
ii) receive(B, message): receive a message from process B.
11
Direct communication has the following properties:
1) A link is established automatically between every pair of processes that want to communicate. The processes need to know only each other's identity to communicate.
2) A link is associated with exactly two processes.
3) Exactly one link exists between each pair of processes.
12
Indirect Communication
• With indirect communication, messages are sent to and received from mailboxes or ports.
• A mailbox is the place or queue where messages can be placed by processes and from which messages can be removed.
 Each mailbox has a unique identification. Two processes can communicate with one another only if they share a mailbox.
The send and receive primitives are defined as follows:
i) send(A, message): send a message to mailbox A.
ii) receive(A, message): receive a message from mailbox A (a sketch using a POSIX message queue follows below).
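A mailbox can be realized, for example, with a POSIX message queue; the sketch below is only an illustration (the queue name and sizes are arbitrary) of creating mailbox "A", sending to it and receiving from it. On Linux it is linked with -lrt.

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };

    /* Create (or open) mailbox "A". */
    mqd_t a = mq_open("/A", O_CREAT | O_RDWR, 0600, &attr);
    if (a == (mqd_t)-1)
        return 1;

    /* send(A, message) */
    mq_send(a, "hello", strlen("hello") + 1, 0);

    /* receive(A, message): the buffer must be at least mq_msgsize bytes. */
    char msg[64];
    if (mq_receive(a, msg, sizeof msg, NULL) > 0)
        printf("received from mailbox A: %s\n", msg);

    mq_close(a);
    mq_unlink("/A");    /* delete the mailbox */
    return 0;
}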
13
• In this scheme, a communication link has the following properties:
i) A link is established between a pair of processes only if both members of the pair have a shared mailbox.
ii) A link may be associated with more than two processes. Between each pair of communicating processes, there may be a number of different links, with each link corresponding to one mailbox.
Suppose processes P1, P2 and P3 all share mailbox A. Process P1 sends a message to A, while P2 and P3 each execute a receive from A. A mailbox owned by the operating system is independent and is not attached to any particular process.
14
• The operating system must then provide a mechanism that allows a process to do the following:
i) Create a new mailbox.
ii) Send and receive messages through the mailbox.
iii) Delete a mailbox.

• Symmetric addressing: Both processes have to explicitly name each other in the communication primitives.

• Asymmetric addressing: Only the sender needs to name the recipient.

15
Synchronization
• Communication between processes takes place through calls to the send and receive primitives.

• Process synchronization refers to the idea that multiple processes join up and handshake at a certain point, in order to reach an agreement or commit to a certain sequence of actions.

16
• Message passing may be blocking or non-blocking, also known as synchronous and asynchronous.
1. Blocking send: The sending process is blocked until the message is received by the receiving process or by the mailbox.
2. Non-blocking send: The sending process sends the message and resumes operation.
3. Blocking receive: The receiving process blocks until a message is available.
4. Non-blocking receive: The receiver retrieves either a valid message or null.
Different combinations of send and receive are possible (a small sketch of a non-blocking receive follows below).
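As a small sketch of the non-blocking receive case (reusing the hypothetical POSIX message queue "/A" from the previous sketch), opening the queue with O_NONBLOCK makes mq_receive return immediately instead of blocking when the mailbox is empty:

#include <errno.h>
#include <fcntl.h>
#include <mqueue.h>
#include <sys/types.h>

/* Non-blocking receive: returns message length, 0 if no message ("null"), -1 on error.
   The buffer length must be at least the queue's mq_msgsize. */
int try_receive(char *buf, size_t len)
{
    mqd_t a = mq_open("/A", O_RDONLY | O_NONBLOCK);
    if (a == (mqd_t)-1)
        return -1;

    ssize_t n = mq_receive(a, buf, len, NULL);  /* returns at once */
    if (n < 0 && errno == EAGAIN)
        n = 0;                                  /* mailbox empty   */
    mq_close(a);
    return (int)n;
}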

17
Buffering
• Whether communication is direct or indirect, messages exchanged by communicating processes reside in a temporary queue. Such a queue can be implemented in three ways:

1. Zero capacity: The queue has maximum length 0; thus the link cannot have any messages waiting in it. In this case the sender must block until the recipient receives the message.
2. Bounded capacity: The queue has finite length n; hence at most n messages can reside in it. The link has a finite capacity. If the link is full, the sender must block until space is available in the queue.

18
3. Unbounded capacity: The queue has potentially infinite length; thus any number of messages can wait in it. The sender never blocks.

• The zero-capacity case is sometimes referred to as a message system with no buffering; the other cases are referred to as automatic buffering.

19
Shared Memory
• Shared memory is memory that may be simultaneously accessed by multiple programs with the intent to provide communication among them or to avoid redundant copies.
• Shared memory is an efficient means of passing data between programs.
• Depending on the context, the programs may run on a single processor or on multiple separate processors.
• IPC using shared memory requires a region of memory shared among the communicating processes. Processes can then exchange information by reading and writing data in the shared region.

20
• A shared memory region resides in the address space of the process creating the shared memory segment. Other processes that wish to communicate using this shared memory segment must attach it to their address space.
• Normally the OS does not allow one process to access the memory region of another process. Shared memory requires that two or more processes agree to remove this restriction.
• They can then exchange information by reading and writing data in the shared areas. The form of the data and its location are determined by these processes and are not controlled by the OS (a minimal POSIX sketch follows below).
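A minimal sketch of establishing and attaching a shared memory region with the POSIX shared-memory API (the region name and size below are arbitrary choices, not from the notes); each communicating process would run similar code to map the same region. On Linux it is linked with -lrt.

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

#define SHM_NAME "/demo_region"
#define SHM_SIZE 4096

/* Create (or open) the shared region and attach it to this process's address space. */
void *attach_shared_region(void)
{
    int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0600);
    if (fd == -1)
        return NULL;
    if (ftruncate(fd, SHM_SIZE) == -1)          /* set the region's size     */
        return NULL;
    void *region = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);     /* attach to address space   */
    close(fd);                                  /* the mapping remains valid */
    return region == MAP_FAILED ? NULL : region;
}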
21
• Consider the producer-consumer problem: a producer process produces information that is consumed by a consumer process.
• To allow the producer and consumer processes to run concurrently, we must have available a buffer of items that can be filled by the producer and emptied by the consumer.
• This buffer will reside in a region of memory that is shared by the producer and consumer processes.
• The producer can produce one item while the consumer is consuming another item.
• The producer and consumer must be synchronized, so that the consumer does not try to consume an item that has not yet been produced.
22
Two types of buffers can be used:
• 1. Unbounded buffer: It places no practical limit on the size of the buffer. The consumer may have to wait for new items, but the producer can always produce new items.
• 2. Bounded buffer: It has a fixed buffer size. In this case the consumer must wait if the buffer is empty, and the producer must wait if the buffer is full (a small sketch follows below).
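A minimal sketch of the bounded-buffer scheme, assuming the structure below is placed in a shared region (for example, the one returned by the hypothetical attach_shared_region() above); the busy-wait loops merely stand in for the synchronization the notes describe informally.

#define BUFFER_SIZE 8

struct bounded_buffer {
    int items[BUFFER_SIZE];
    int in;      /* index of the next free slot, advanced by the producer */
    int out;     /* index of the next full slot, advanced by the consumer */
};

/* Producer: wait while the buffer is full, then insert one item. */
void produce(struct bounded_buffer *b, int item)
{
    while ((b->in + 1) % BUFFER_SIZE == b->out)
        ;                                   /* buffer full: wait  */
    b->items[b->in] = item;
    b->in = (b->in + 1) % BUFFER_SIZE;
}

/* Consumer: wait while the buffer is empty, then remove one item. */
int consume(struct bounded_buffer *b)
{
    while (b->in == b->out)
        ;                                   /* buffer empty: wait */
    int item = b->items[b->out];
    b->out = (b->out + 1) % BUFFER_SIZE;
    return item;
}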

23
• Shared memory allows maximum speed and convenience of communication.
• Shared memory is faster than message passing, because message-passing systems are typically implemented using system calls and thus require the more time-consuming task of kernel intervention.
In contrast, with shared memory, system calls are required only to establish the shared memory regions. Once shared memory is established, all accesses are treated as routine memory accesses and no assistance from the kernel is required.

24
Operating System
Chapter 3
Process Management
3.3 Context Switch

MRS A.S.KHANDAGALE

1
2
Define context switch
Reasons for Context switching
Steps involved in Context Switching
Advantages and disadvantages of Context switch.

3
Context Switch
• A context switch (also referred to as a process switch or a task switch) is the switching of the CPU (central processing unit) from one process or thread to another process or thread.
• Switching the CPU to another process requires saving the state of the old process and loading the saved state of the new process. This task is known as a context switch.
• In other words, it allows the operating system to schedule all processes in main memory to run on the CPU at roughly equal intervals.
• Each switch of the CPU from one process to another is called a context switch.

4
• The context-switching process involves a number of steps that need to be followed.
• You can't directly switch a process from the running state to the ready state.
• You have to save the context of that process.
• If you do not save the context of a process P, then after some time, when process P comes back to the CPU for execution, it will start executing from the beginning.
• But in reality, it should continue from the point where it left the CPU in its previous execution. So, the context of the process should be saved before putting any other process in the running state.

5
Context switching can happen due to the following reasons:
• When a process of higher priority comes into the ready state. In this case, the execution of the running process should be stopped and the higher-priority process should be given the CPU for execution.
• When an interrupt occurs, the process in the running state should be stopped and the CPU should handle the interrupt before doing anything else.
• When a transition between user mode and kernel mode is required, a context switch has to be performed.

6
• The context of a process is represented in the PCB of the process; it includes the values of the CPU registers, the process state and the memory-management information.
• When a context switch occurs, the kernel saves the context of the old process in its PCB and loads the saved context of the new process scheduled to run.
• Context-switch times are highly dependent on hardware support. The speed varies from machine to machine, depending on memory speed, the number of registers that must be copied and the existence of special instructions.
• Context-switch time is pure overhead, because the system does no useful work while switching.
7
• Context switching is an essential feature of a multitasking operating system.

• A context switch requires (n + m) × b × K time units to save the state of a processor with n general registers, assuming b store operations are required to save the n and m registers of the two process control blocks and each store instruction requires K time units.
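• For example (illustrative numbers only, not from the notes): with n = m = 32 registers, b = 1 store operation per register and K = 10 ns per store instruction, the save step alone costs (32 + 32) × 1 × 10 ns = 640 ns of pure overhead per context switch.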

8
Steps involved in Context Switching
The process of context switching involves a number of steps. The following diagram depicts the process of
context switching between the two processes P1 and P2.

9
• In the above figure, you can see that initially the process P1 is in the running state and the process P2 is in the ready state. Now, when some interrupt occurs, you have to switch process P1 from the running to the ready state after saving its context, and switch process P2 from the ready to the running state. The following steps are performed:
1. Firstly, the context of the process P1 i.e. the process present in the running state will be saved in
the Process Control Block of process P1 i.e. PCB1.
2. Now, you have to move the PCB1 to the relevant queue i.e. ready queue, I/O queue, waiting queue,
etc.
3. From the ready state, select the new process that is to be executed i.e. the process P2.
4. Now, update the Process Control Block of process P2 i.e. PCB2 by setting the process state to
running. If the process P2 was earlier executed by the CPU, then you can get the position of last
executed instruction so that you can resume the execution of P2.
5. Similarly, if you want to execute the process P1 again, then you have to follow the same steps as
mentioned above(from step 1 to 4).

10
Advantages of Context Switching
• Context switching is used to achieve multitasking, i.e. multiprogramming.
• Multitasking gives users the illusion that more than one process is being executed at the same time.
• But in reality, only one task is being executed at a particular instant of time by a processor.
• Here, the context switching is so fast that the user feels the CPU is executing more than one task at the same time.

11
Disadvantage of Context Switching
• The disadvantage of context switching is that it requires some time, i.e. the context-switch time.
• Time is required to save the context of the process that is in the running state and then to load the context of the process that is about to come into the running state.
• During that time, no useful work is done by the CPU from the user's perspective. So, context switching is pure overhead in this sense.

12
Thank You

13
Operating System
Chapter 3
Process Management
3.4 Threads: Benefits, User & Kernel Level Threads

MRS A.S.KHANDAGALE

1
2
Contents:
Thread
Single-threaded and multithreaded processes
Difference between Process and Thread
Benefits of multithreading
 Responsiveness, resource sharing, economy, multiprocessor architectures
Two types of threads
User-level threads
Kernel-level threads

3
Thread (Lightweight Process, LWP)

• A thread is an execution unit that is part of a process.
• A process can have multiple threads, all executing at the same time. A thread is a unit of execution in concurrent programming.
• A thread is lightweight and can be managed independently by a scheduler.
• Threads help you to improve application performance through parallelism.

4
Thread
• A thread is a basic unit of CPU utilization. It comprises a thread ID, a program counter, a register set and a stack.
• Multithreading is the ability of an OS to execute different parts of a program, called threads, simultaneously.
• A thread shares with the other threads belonging to the same process its code section, data section and other operating-system resources, such as open files and signals.
• A traditional process has a single thread of control.
• If a process has multiple threads, it can do more than one task at a time.
• Many software packages that run on desktop PCs are multithreaded.
• Example: A word processor may have a thread for displaying graphics, another thread for reading keystrokes from the user, and a third thread for performing spelling and grammar checking in the background (a minimal pthreads sketch follows below).
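As a minimal sketch of such a multithreaded design (the function names are hypothetical placeholders, not a real word processor), POSIX Pthreads lets one process run these activities concurrently; it is compiled with -pthread.

#include <pthread.h>
#include <stdio.h>

/* Each function stands in for one activity of the word-processor example. */
void *display_graphics(void *arg) { (void)arg; puts("drawing the document"); return NULL; }
void *read_keystrokes(void *arg)  { (void)arg; puts("reading user input");   return NULL; }
void *check_spelling(void *arg)   { (void)arg; puts("spell checking");       return NULL; }

int main(void)
{
    pthread_t t1, t2, t3;

    /* All three threads share the process's code, data and open files. */
    pthread_create(&t1, NULL, display_graphics, NULL);
    pthread_create(&t2, NULL, read_keystrokes, NULL);
    pthread_create(&t3, NULL, check_spelling, NULL);

    /* Wait for every thread to finish before the process terminates. */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    pthread_join(t3, NULL);
    return 0;
}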
5
a) Single-threaded process    b) Multithreaded process

6
Difference between Process and Thread
1. Process: A process is heavyweight, or resource intensive. Thread: A thread is lightweight, taking fewer resources than a process.
2. Process: Process switching needs interaction with the operating system. Thread: Thread switching does not need to interact with the operating system.
3. Process: In multiple processing environments, each process executes the same code but has its own memory and file resources. Thread: All threads can share the same set of open files and child processes.
4. Process: If one process is blocked, then no other process can execute until the first process is unblocked. Thread: While one thread is blocked and waiting, a second thread in the same task can run.
5. Process: Multiple processes without using threads use more resources. Thread: Multiple threaded processes use fewer resources.
6. Process: In multiple processes, each process operates independently of the others. Thread: One thread can read, write or change another thread's data.

7
Benefits of Multithreaded Programming
1. Responsiveness: If a process is divided into multiple threads, then when one thread completes its execution, its output can be returned immediately. For example, a multithreaded web browser can still allow user interaction in one thread while an image is being loaded in another thread.
2. Resource sharing: By default, threads share the memory and the resources of the process to which they belong. Resources like code, data and files can be shared among all threads within a process.
3. Economy: Allocating memory and resources for process creation is costly. Threads share the resources of the process to which they belong, and hence it is more economical to create and context-switch threads.
4. Utilization of multiprocessor architectures: All threads can run in parallel on different processors. Multithreading on a multi-CPU machine increases concurrency.
8
5. Communication: Communication between multiple threads is easier, as the threads share a common address space.
Thus:
• Threads minimize the context-switching time.
• Use of threads provides concurrency within a process.
• They allow efficient communication.
• It is more economical to create and context-switch threads.
• Threads allow utilization of multiprocessor architectures to a greater scale and with greater efficiency.

9
Types of Thread
Threads are implemented in the following two ways:
• User Level Threads − User managed threads.
• Kernel Level Threads − Operating System managed threads acting on
kernel, an operating system core.
Threads that are provided at user level are the user threads and at the
kernel level are kernel threads.

10
User Level Threads
• The threads implemented at the user level are known as user threads.
• With user-level threads, thread management is done by the application; the kernel is not aware of the existence of threads.
• User threads are supported above the kernel and are implemented by a thread library at the user level. The library provides support for thread creation, scheduling and management with no support from the kernel.
• Because the kernel is unaware of user-level threads, all thread creation and scheduling are done in user space without the need for kernel intervention.

11
• Therefore, user-level threads are generally fast to create and manage.
• User thread libraries include POSIX Pthreads, Mach C-threads and Solaris 2 UI-threads.

12
Advantages of User level Threads
• Thread switching does not require Kernel mode privileges.
• User level thread can run on any operating system.
• Scheduling can be application specific in the user level thread.
• User level threads are fast to create and manage.
• Simple representation and management.
• User level thread library easy to port.

13
Disadvantages of user level threads
• In a typical operating system, most system calls are
blocking.i.e. If a user level thread is blocked in the kernel,the
entire process(all threads of that process ) are blocked.
• Multithreaded application cannot take advantage of
multiprocessing because the kernel assigns one process to only
one processor)

14
Kernel Threads
• In this case, thread management is done by the kernel.
• Kernel threads are supported directly by the operating system.
• Kernel thread creation, scheduling and management are done by the kernel in kernel space.
• As thread management is done by the OS, kernel threads are slower to create and manage than user threads.
• Because the kernel is managing the threads, if a thread performs a blocking system call, the kernel can schedule another thread in the application for execution.
• Also, in a multiprocessor environment, the kernel can schedule threads on different processors.
• Most current operating systems, including Windows NT, Windows 2000, Solaris 2 and UNIX, support kernel threads.

15
Advantages of Kernel threads
• Kernel can simultaneously schedule multiple threads from the
same process on multiple processors.
• If one thread in a process is blocked, the Kernel can schedule
another thread of the same process.
• Kernel routines themselves can be multithreaded.

16
Disadvantages of Kernel threads
• Kernel threads are generally slower to create and manage than
the user threads.
• Transfer of control from one thread to another within the same
process requires a mode switch to the Kernel.

17
18
Thank You

19
Operating System
Chapter 3
Process Management
3.4 Multithreading Models

MRS A.S.KHANDAGALE

1
2
Contents:-
Multi Threading Models in Process Management
• Many to One Model
• One to One Model
• Many to Many Model
Process Commands
ps, wait, sleep, exit, kill

3
Multi Threading Models in Process Management
• Multithreading allows the execution of multiple parts of a program at the same time. These parts are known as threads and are lightweight processes available within the process. Therefore, multithreading leads to maximum utilization of the CPU through multitasking.
• Many operating systems support kernel threads and user threads in a combined way. An example of such a system is Solaris.
• In a combined system, multiple threads within the same application can run in parallel on multiple processors.

4
Multithreading models are of three types.

The user threads must be mapped to kernel threads by one of the following strategies:
• Many-to-One Model
• One-to-One Model
• Many-to-Many Model

5
Many to One Model
• In the many to one model, many user-level threads are all
mapped onto a single kernel thread.
• Thread management is handled by the thread library in user
space, which is efficient in nature.

6
• The entire process will block if a thread makes a blocking system call.
• Only one thread can access the kernel at a time, so multiple threads are unable to run in parallel on multiprocessors.
• Green threads, a thread library available for Solaris 2, uses this model.
Advantages of the Many-to-One Model
1. Totally portable.
2. Easy to implement, with few system dependencies.
3. Mainly used in language systems and portable libraries.
4. Efficient system in terms of performance.
5. One kernel thread controls multiple user threads.
7
• Disadvantages of the Many-to-One Model
1. Cannot take advantage of parallelism.
2. One blocking call blocks all user threads.

8
One to One Model
• The one to one model maps each of the user threads to a
kernel thread.

9
• This means that many threads can run in parallel on multiprocessors, and other threads can run when one thread makes a blocking system call.

• Thus it provides more concurrency than the many-to-one model.

• A drawback of this model is that creating a user thread requires creating the corresponding kernel thread.

10
Advantages of the One-to-One Model
Can exploit parallelism, i.e. multiple threads can run in parallel.
Provides more concurrency.
Fewer complications in processing.

Disadvantages of the One-to-One Model
A kernel thread must be created for every user thread.
This limits the total number of threads: since many kernel threads burden the system, there is a restriction on the number of threads in the system.
Kernel threads are an overhead.
This reduces the performance of the system.
11
Many to Many Model
• The many-to-many model maps many user threads to a smaller or equal number of kernel threads.
• The number of kernel threads may be specific to either a particular application or a particular machine.
• Whereas the many-to-one model allows the developer to create as many threads as desired, true concurrency is not gained, because the kernel can schedule only one thread at a time.
• The one-to-one model allows greater concurrency, but the developer has to be careful not to create too many threads within an application.

12
• The many-to-many model suffers from neither of these shortcomings: developers can create as many threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor.

• Also, when a thread performs a blocking system call, the kernel can schedule another thread for execution.

• Solaris 2, HP-UX and Tru64 UNIX support this model.

13
• Advantages of the Many-to-Many Model
Many threads can be created as per the user's requirement.
Multiple kernel threads, or an equal number of user-level threads, can be created.
• Disadvantages of the Many-to-Many Model
True concurrency cannot be achieved.
Multiple kernel threads are an overhead for the operating system.
Performance is lower.

14
Process command: ps (process status)
• This command is used to display information about the individual processes that are executing on the system.
• Syntax: ps [options] [tty]
• Example: the output of ps contains four columns of information, where:
PID – the unique process ID
TTY – the terminal type that the user is logged into
TIME – the amount of CPU time, in minutes and seconds, that the process has been running
CMD – the name of the command that launched the process
To view all the running processes, use either of the following options with ps:
[root@rhel7 ~]# ps -A
[root@rhel7 ~]# ps -e
15
wait command

• wait is a built-in command of Linux that waits for a running process to complete. The wait command is used with a particular process ID or job ID.
• Example: wait

16
sleep command

The sleep command is used to delay for a fixed amount of time during the execution of a script. When the coder needs to pause the execution of commands for a particular period, this command is used with a time value.
Example:
$ sleep 2
echo "Task Completed"
• Output: the string "Task Completed" is printed after waiting for 2 seconds.

17
exit command
• The exit command in Linux is used to exit the shell in which it is currently running.
Options for the exit command:
• exit: exit without a parameter. After pressing enter, the terminal simply closes.
• exit [n]: exit with a parameter, where n is the exit status code returned to the parent shell.

18
kill command
The kill command terminates a process.
Syntax: kill [signal number] pid
Signal numbers:
1 - hangup signal
2 - interrupt signal
3 - quit signal
9 - kill signal
15 - software termination signal
Example: $ kill 779
This terminates the process with PID 779.
19
Thank You

20
