Chapter 3


Operating System

Unit 3: Process Management


Process:
A process is basically a program in execution. The execution of a process must
progress in a sequential fashion.
When a program is loaded into memory it becomes a process, and it can be
divided into four sections: stack, heap, text and data.
A simplified layout of a process inside main memory consists of:
Stack: The process stack contains temporary data such as method/function
parameters, return addresses and local variables.
Heap: This is memory that is dynamically allocated to the process during its
run time.
Text: This section contains the compiled program code. The current activity is
represented by the value of the program counter and the contents of the
processor's registers.
Data: This section contains the global and static variables.

Process State Transition diagram:


There are a minimum of five states: New, Ready, Running, Wait (Blocked) and Terminated.

• New (Create) – In this state the process is about to be created but has not
been created yet; it is the program, present in secondary memory, that will be
picked up by the OS to create the process.
• Ready – Whenever a process is created, it directly enters the ready state,
in which it waits for the CPU to be assigned. The OS picks new processes
from secondary memory and puts them in main memory. The processes
which are ready for execution and reside in main memory are called ready
state processes. There can be many processes in the ready state at a time.

Notes Prepared By: Dhanashri Agrawal | SSVPS BSD POLYTECHNIC DHULE



• Running – One of the processes from the ready state is chosen by the
OS depending upon the scheduling algorithm. Hence, if we have only one
CPU in our system, the number of running processes at a particular time will
always be one. If we have n processors in the system, then we can have n
processes running simultaneously.
• Wait – Whenever the process requests I/O or needs input from the
user, it enters the blocked or wait state. The process continues to wait in
main memory and does not require the CPU. Once the I/O operation is
completed, the process goes back to the ready state.
• Terminate – When a process finishes its execution, it enters the
terminated state. All the context of the process (its Process Control Block)
is deleted and the process is removed by the operating system.
• Suspend Ready – A process in the ready state which is moved from main
memory to secondary memory due to a lack of resources (mainly primary
memory) is said to be in the suspend ready state. If main memory is full and
a higher-priority process arrives for execution, the OS has to make room
for it in main memory by moving a lower-priority process out to secondary
memory. Suspend ready processes remain in secondary memory until main
memory becomes available.
• Suspend Wait – Instead of removing a process from the ready queue, it is
better to remove a blocked process that is waiting for some resource in
main memory. Since it is already waiting for a resource to become available,
it can just as well wait in secondary memory and make room for a
higher-priority process. These processes resume execution once main
memory becomes available and their wait is finished.
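The states and transitions described above can be sketched as a small state machine. This is an illustration only, with hypothetical state names mirroring the diagram; a real OS represents states inside kernel data structures, not like this.

```python
# Hypothetical sketch of the process state transitions described above.
# State names and allowed transitions mirror the diagram; this is an
# illustration, not how a real OS stores process states internally.
TRANSITIONS = {
    "new": {"ready"},
    "ready": {"running", "suspend_ready"},
    "running": {"ready", "wait", "terminated"},   # preempt, I/O request, exit
    "wait": {"ready", "suspend_wait"},            # I/O done, swapped out
    "suspend_ready": {"ready"},                   # swapped back into memory
    "suspend_wait": {"suspend_ready", "wait"},    # I/O done / swapped back in
}

def move(state, target):
    """Return the new state, refusing transitions the diagram does not allow."""
    if target not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

state = "new"
for nxt in ("ready", "running", "wait", "ready", "running", "terminated"):
    state = move(state, nxt)
print(state)  # terminated
```

Walking the process through create, dispatch, I/O wait, re-dispatch and exit ends, as expected, in the terminated state.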

Process Control Block


A Process Control Block is a data structure that contains information about
the associated process. The process control block is also known as a task
control block; it is an entry of the process table.

State: It stores the current state of the process, i.e., new, ready, running,
waiting or terminated.
Process Number: Every process is assigned a unique id, known as the process
ID or PID, which is stored in this field.
Program Counter: This contains the address of the next instruction to be
executed in the process.
Registers: This holds the registers used by the process. They may include
accumulators, index registers, stack pointers, general-purpose registers etc.
Memory Limits: This field contains information about the memory-management
system used by the operating system. It may include the page tables or segment
tables, depending on the memory system used, and also holds the values of the
base and limit registers.
Open Files List: This field contains the list of files opened by the process.
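The PCB fields listed above can be sketched as a record type. The field names below are hypothetical choices for illustration; a real kernel keeps this in a C struct.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the PCB fields listed above as a Python dataclass.
# Field names are hypothetical; a real kernel stores this in a C struct.
@dataclass
class PCB:
    pid: int                                        # unique process identifier
    state: str = "new"                              # new/ready/running/waiting/terminated
    program_counter: int = 0                        # address of next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    memory_limits: tuple = (0, 0)                   # (base, limit) register values
    open_files: list = field(default_factory=list)  # open files list

pcb = PCB(pid=42)
pcb.state = "ready"       # the new process is admitted to the ready queue
print(pcb.pid, pcb.state)  # 42 ready
```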

Process scheduling:

Process scheduling is the activity of the process manager that handles the
removal of the running process from the CPU and the selection of another process
on the basis of a particular strategy.
Process scheduling is an essential part of multiprogramming operating systems.
Such operating systems allow more than one process to be loaded into
executable memory at a time, and the loaded processes share the CPU using time
multiplexing.

There are three types of process schedulers:


• Long Term or job scheduler:
It is also called a job scheduler. It selects processes from the queue and loads them
into memory for execution. The primary objective of the job scheduler is to provide
a balanced mix of jobs, such as I/O bound and processor bound. It also controls the
degree of multiprogramming. If the degree of multiprogramming is stable, then the
average rate of process creation must be equal to the average departure rate of
processes leaving the system.
On some systems, the long-term scheduler may be absent or minimal. Time-sharing
operating systems have no long-term scheduler. The long-term scheduler is used
when a process changes state from new to ready.
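The long-term scheduler's job of controlling the degree of multiprogramming can be sketched as admitting jobs from a pool into memory only up to a cap. The job names and the cap value below are invented for illustration.

```python
from collections import deque

# Toy sketch of a long-term (job) scheduler: it admits jobs from the job
# pool on secondary storage into main memory only while the degree of
# multiprogramming stays below a cap. Names and the cap are invented.
MAX_DEGREE = 3                                     # allowed degree of multiprogramming

job_pool = deque(["J1", "J2", "J3", "J4", "J5"])   # jobs waiting on disk
in_memory = []                                     # processes loaded into memory

while job_pool and len(in_memory) < MAX_DEGREE:
    in_memory.append(job_pool.popleft())           # admit: new -> ready
print(in_memory, list(job_pool))  # ['J1', 'J2', 'J3'] ['J4', 'J5']
```

Only three jobs are admitted; the rest wait in the pool until a loaded process departs, keeping the degree of multiprogramming stable.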

• Short Term or CPU scheduler:


It is also called the CPU scheduler. It handles the change of a process from
the ready state to the running state: the CPU scheduler selects one process
from among the processes that are ready to execute and allocates the CPU to it.
The short-term scheduler makes the decision of which process to execute next;
the module that then gives control of the CPU to the selected process is called
the dispatcher. Short-term schedulers are faster than long-term schedulers.

• Medium Term or swapping scheduler:

Medium-term scheduling is a part of swapping. It removes processes from memory
and thereby reduces the degree of multiprogramming. The medium-term scheduler
is in charge of handling the swapped-out processes.
A running process may become suspended if it makes an I/O request. A suspended
process cannot make any progress towards completion. In this condition, to
remove the process from memory and make space for other processes, the
suspended process is moved to secondary storage. This is called swapping, and
the process is said to be swapped out or rolled out. Swapping may be necessary
to improve the process mix.

Comparison of the three schedulers:
1. Role: The long-term scheduler is a job scheduler; the short-term scheduler
is a CPU scheduler; the medium-term scheduler is a process swapping scheduler.
2. Speed: The short-term scheduler is the fastest of the three; the long-term
scheduler is the slowest; the medium-term scheduler lies in between.
3. Degree of multiprogramming: The long-term scheduler controls it; the
short-term scheduler provides lesser control over it; the medium-term
scheduler reduces it.
4. Time-sharing systems: The long-term scheduler is almost absent or minimal;
the short-term scheduler is also minimal; the medium-term scheduler is a part
of time-sharing systems.
5. Selection: The long-term scheduler selects processes from the pool and
loads them into memory for execution; the short-term scheduler selects those
processes which are ready to execute; the medium-term scheduler can
re-introduce a process into memory so that its execution can be continued.
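The short-term scheduler's selection loop can be sketched with a FIFO ready queue and a round-robin policy (time quantum of one unit). Process names and burst times below are invented for illustration.

```python
from collections import deque

# Minimal sketch of a short-term (CPU) scheduler using a FIFO ready queue
# and round-robin with a time quantum of one unit. The (name, burst) pairs
# are invented for illustration.
ready_queue = deque([("P1", 3), ("P2", 1), ("P3", 2)])  # (process, remaining time)

order = []
while ready_queue:
    name, burst = ready_queue.popleft()    # dispatcher picks the queue head
    burst -= 1                             # the process runs for one time unit
    if burst > 0:
        ready_queue.append((name, burst))  # preempted: back to the ready queue
    else:
        order.append(name)                 # finished: process terminates
print(order)  # ['P2', 'P3', 'P1']
```

The shortest job (P2) finishes first because each process only holds the CPU for one quantum before being preempted.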

Scheduling Queue:
As processes enter the system, they are put into a job queue, which consists of
all processes in the system. A new process is first placed in the ready queue,
where it waits until it is selected for execution. A ready-queue header
contains pointers to the first and final PCBs in the list, and each PCB
includes a pointer field that points to the next PCB in the ready queue.

Once the process is assigned to the CPU and is executing, one of the following
events may occur:
• The process issues an I/O request and is then placed in an I/O queue.
• The process creates a new subprocess and waits for its termination.
• The process is removed forcibly from the CPU as a result of an interrupt,
and is put back in the ready queue.
In the first two cases, the process eventually switches from the waiting state
to the ready state and is put back in the ready queue. A process continues this
cycle until it terminates, at which time it is removed from all queues and has
its PCB and resources deallocated.
Context Switch:
Context switching is a method used by the operating system to switch the CPU
from one process to another. In this phenomenon, the execution of the process
that is in the running state is suspended by the kernel, and another process
that is in the ready state is executed by the CPU. When a switch is performed,
the system stores the status of the old running process in the form of
registers and assigns the CPU to the new process to execute its tasks. While
the new process is running, the previous process waits in the ready queue. The
execution of the old process later resumes at the point where it was stopped.
Context switching is what characterizes a multitasking operating system, in
which multiple processes share the same CPU to perform multiple tasks without
the need for additional processors in the system.
Need for Context switching:
Context switching helps to share a single CPU across all processes to complete
their execution while storing the status of the system's tasks. When a process
is reloaded into the system, its execution resumes at the same point where it
was interrupted.
Following are the reasons that describe the need for context switching in the
operating system.

• A process cannot switch itself directly to another process in the system.
Context switching helps the operating system switch between multiple
processes so that they can share the CPU to accomplish their tasks, while
each process's context is stored. The service of a process can then be
resumed at the same point later. If we did not store the currently running
process's data or context, that data would be lost while switching between
processes.
• If a high-priority process arrives in the ready queue, the currently running
process is stopped (preempted) so that the high-priority process can complete
its task in the system.
• If a running process requires an I/O resource, the current process is
switched out so that another process can use the CPU. When the I/O
requirement is met, the old process goes into the ready state to wait for its
turn on the CPU. Context switching stores the state of the process so that
its task can be resumed; otherwise, the process would need to restart its
execution from the beginning.
• If an interrupt occurs while a process is running, its status is saved in
the form of registers using context switching. After the interrupt has been
serviced, the process switches from the wait state to the ready state and
later resumes its execution at the same point where the interrupt occurred.

• Context switching allows a single CPU to handle requests from multiple
processes, by interleaving them, without the need for any additional
processors.
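The save-and-restore step at the heart of a context switch can be sketched as below. Everything here (the register names, the PCB dictionaries) is hypothetical; a real context switch runs in kernel mode, usually in assembly.

```python
# Toy illustration of a context switch: save the "registers" of the running
# process into its PCB, then load another process's saved context into the
# CPU. All names are hypothetical; real switches happen in kernel mode.
cpu = {"pc": 100, "acc": 7}                   # the CPU's current context (process A)

pcb_a = {"pid": 1, "saved": None}             # A is running, nothing saved yet
pcb_b = {"pid": 2, "saved": {"pc": 200, "acc": 0}}   # B was switched out earlier

def context_switch(old_pcb, new_pcb):
    global cpu
    old_pcb["saved"] = dict(cpu)   # store the old process's registers in its PCB
    cpu = dict(new_pcb["saved"])   # restore the new process's saved registers

context_switch(pcb_a, pcb_b)       # A is preempted, B starts running
print(cpu["pc"], pcb_a["saved"]["pc"])  # 200 100
```

Process A can later be resumed exactly where it stopped, because its program counter and registers survive in its PCB.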

Inter-process communication:
Inter-process communication (IPC) is the mechanism provided by the operating
system that allows processes to communicate with each other. Processes
executing concurrently in the operating system may be either independent
processes or cooperating processes.
A process is independent if it cannot affect or be affected by the other processes
executing in the system. Any process that does not share data with any other
process is independent.
A process is cooperating if it can affect or be affected by the other processes
executing in the system. Clearly, any process that shares data with other processes
is a cooperating process. There are several reasons for providing an environment
that allows process cooperation:
• Information sharing: Since several users may be interested in the same piece
of information (for instance, a shared file), we must provide an environment
to allow concurrent access to such information.
• Computation speedup: If we want a particular task to run faster, we must
break it into subtasks, each of which will be executing in parallel with the
others.
• Modularity: We may want to construct the system in a modular fashion,
dividing the system functions into separate processes or threads.
• Convenience: Even an individual user may work on many tasks at the same
time. For instance, a user may be editing, listening to music, and compiling
in parallel. Cooperating processes require an IPC mechanism that will allow
them to exchange data and information.
There are two fundamental models of inter process communication:
• shared memory and
• message passing

Shared memory:
In the shared-memory model, a region of memory that is shared by cooperating
processes is established. Processes can then exchange information by reading
and writing data in the shared region. Each process normally has its own
address space; if a process wants to communicate information from its own
address space to other processes, this is only possible through IPC techniques.
Shared memory is the fastest inter-process communication mechanism. The
operating system maps a memory segment into the address space of several
processes, so that they can read and write in that memory segment without
calling operating system functions.
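A minimal shared-memory sketch using Python's multiprocessing module is shown below. `multiprocessing.Value` asks the OS to map one small shared integer into both child processes' address spaces; the "fork" start method is assumed (Unix-only). This is an illustration, not the raw System V/POSIX shared-memory API.

```python
import multiprocessing

# Sketch of the shared-memory IPC model using multiprocessing.Value, which
# maps one shared integer into the address space of both child processes.
# The "fork" start method is assumed (Unix); the lock keeps updates atomic.
ctx = multiprocessing.get_context("fork")

def add_100(counter):
    for _ in range(100):
        with counter.get_lock():   # protect the shared value from races
            counter.value += 1

counter = ctx.Value("i", 0)        # one shared integer, initially 0
workers = [ctx.Process(target=add_100, args=(counter,)) for _ in range(2)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(counter.value)  # 200
```

Both processes update the same memory; without the lock, the two writers could interleave and lose updates.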

Message Passing:
Message passing provides a mechanism that allows processes to communicate and
to synchronize their actions without sharing the same address space. It is
particularly useful in a distributed environment, where the communicating
processes may reside on different computers connected by a network. A
message-passing facility provides at least two operations:
send(message) and
receive(message)

Messages sent by a process can be either fixed or variable in size. If only fixed-sized
messages can be sent, the system-level implementation is straight-forward. This
restriction, however, makes the task of programming more difficult. Conversely,
variable-sized messages require a more complex system-level implementation, but
the programming task becomes simpler.
• Direct or indirect communication
• Synchronous or asynchronous communication

Direct or indirect communication:


Processes that want to communicate must have a way to refer to each other. They
can use either direct or indirect communication. Under direct communication, each
process that wants to communicate must explicitly name the recipient or sender of
the communication. In this scheme, the send() and receive() primitives are
defined as:
• send(P, message): Send a message to process P.
• receive(Q, message): Receive a message from process Q.
This scheme exhibits symmetry in addressing; that is, both the sender process and
the receiver process must name the other to communicate.
A variant of this scheme employs asymmetry in addressing. Here, only the sender
names the recipient; the recipient is not required to name the sender. In this
scheme, the send() and receive() primitives are defined as follows:
• send(P, message): Send a message to process P.
• receive(id, message): Receive a message from any process. The variable id
is set to the name of the process with which communication has taken place.
With indirect communication, the messages are sent to and received from
mailboxes, or ports. A mailbox can be viewed abstractly as an object into which
messages can be placed by processes and from which messages can be removed.
Each mailbox has a unique identification. A process can communicate with another
process via several different mailboxes, but two processes can communicate only

if they have a shared mailbox. The send() and receive() primitives are defined
as follows:
• send(A, message): Send a message to mailbox A.
• receive(A, message): Receive a message from mailbox A.

Synchronous or asynchronous communication:


Communication between processes takes place through calls to the send() and
receive() primitives. There are different design options for implementing each
primitive. Message passing may be either blocking or non-blocking, also known
as synchronous and asynchronous.

• Blocking send: The sending process is blocked until the message is received
by the receiving process or by the mailbox.
• Non-blocking send: The sending process sends the message and resumes
operation.
• Blocking receive: The receiver blocks until a message is available.
• Non-blocking receive: The receiver retrieves either a valid message or a
null.
Different combinations of send() and receive() are possible. When both send()
and receive() are blocking, we have a rendezvous between the sender and the
receiver.
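Blocking versus non-blocking receive can be sketched within one process using a thread-safe queue as a stand-in "mailbox". This is an analogy for illustration; real message passing crosses address spaces.

```python
import queue
import threading
import time

# Sketch of blocking vs. non-blocking receive, using a thread-safe queue as
# a stand-in "mailbox". get() blocks until a message arrives; with
# block=False it returns immediately, raising Empty if no message exists.
mailbox = queue.Queue()

try:
    mailbox.get(block=False)   # non-blocking receive on an empty mailbox
    got_early = True
except queue.Empty:
    got_early = False          # received "null": nothing was waiting

def sender():
    time.sleep(0.05)
    mailbox.put("hello")       # send(message)

threading.Thread(target=sender).start()
msg = mailbox.get()            # blocking receive: waits for the sender
print(got_early, msg)          # False hello
```

The non-blocking receive returns at once with nothing; the blocking receive waits until the sender's message arrives, which is the rendezvous behaviour when the send is also blocking.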

Threads:
A thread is a single sequential flow of execution of the tasks of a process;
it is also known as a thread of execution or thread of control. There can be
more than one thread inside a process.
Each thread has its own program counter, stack and set of registers, but the
threads of a single process share the same code, data and files. Threads are
also termed lightweight processes because they share common resources.
A traditional (or heavyweight) process has a single thread of control. If a
process has multiple threads of control, it can perform more than one task at
a time.

Single-threaded and Multi-threaded Processes:


Single-threaded processes execute instructions in a single sequence; in other
words, one command is processed at a time.
The opposite of single-threaded processes are multithreaded processes, which
allow multiple parts of a program to execute at the same time. Threads are
lightweight processes available within the process.

Multithreaded processes can be implemented as user-level threads or kernel-level threads.

• User-level Threads
User-level threads are implemented by users, and the kernel is not aware
of their existence; it handles them as if they were single-threaded
processes. User-level threads are small and much faster than kernel-level
threads. Also, there is no kernel involvement in synchronization for
user-level threads.
• Kernel-level Threads
Kernel-level threads are handled by the operating system directly and
the thread management is done by the kernel. The context information
for the process as well as the process threads is all managed by the
kernel. Because of this, kernel-level threads are slower than user-level
threads.
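The way threads of one process share its data section can be sketched with Python's threading module: several threads update one global counter, and a lock keeps each update atomic. (CPython threads map onto kernel threads, but this sketch is only about the shared-memory behaviour.)

```python
import threading

# Minimal sketch of threads sharing their process's data section: four
# threads increment one global counter; the lock keeps each update atomic.
counter = 0
lock = threading.Lock()

def worker():
    global counter
    for _ in range(10_000):
        with lock:          # threads share data, so updates must be protected
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000
```

No explicit IPC is needed: unlike separate processes, the threads see the same `counter` by default because they live in one address space.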

Benefits
Responsiveness: Multithreading an interactive application may allow a program to
continue running even if part of it is blocked or is performing a lengthy operation,
thereby increasing responsiveness to the user. This quality is especially useful in
designing user interfaces. A single-threaded application would be unresponsive to
the user until the operation had completed. In contrast, if the time-consuming
operation is performed in a separate thread, the application remains responsive to
the user.
Resource sharing: Processes can only share resources through techniques such as
shared memory and message passing. Such techniques must be explicitly arranged
by the programmer. However, threads share the memory and the resources of the
process to which they belong by default. The benefit of sharing code and data is
that it allows an application to have several different threads of activity within the
same address space.
Economy: Allocating memory and resources for process creation is costly. Because
threads share the resources of the process to which they belong, it is more
economical to create and context-switch threads.
Scalability: The benefits of multithreading can be even greater in a
multiprocessor architecture, where threads may be running in parallel on
different processing cores. A single-threaded process can run on only one
processor, regardless of how many are available.
Multi-threading model:
Threads may be provided either at the user level, for user threads, or by the kernel,
for kernel threads. User threads are supported above the kernel and are managed
without kernel support, whereas kernel threads are supported and managed
directly by the operating system. Virtually all contemporary operating systems —
including Windows, Linux, MacOS X, and Solaris—support kernel threads.
Ultimately, a relationship must exist between user threads and kernel threads.

There are three established multithreading models that classify these
relationships:

o Many to one multithreading model

o One to one multithreading model


o Many to Many multithreading model

Many to one Multithreading model:


The many-to-one model maps many user-level threads to
one kernel thread. Thread management is done by the
thread library in user space, so it is efficient. However, the
entire process will block if a thread makes a blocking
system call. Also, because only one thread can access the
kernel at a time, multiple threads are unable to run in
parallel on multicore systems.

One to one Multithreading Model:


The one-to-one model maps a single user-level thread to a
single kernel-level thread. This type of relationship facilitates
running multiple threads in parallel. However, this benefit comes with a
drawback: creating a user thread requires creating the corresponding kernel
thread, and this overhead leads many systems to restrict the number of
threads.

Many to many Multithreading Model:

In this type of model, there are several user-level threads and several
kernel-level threads. The number of kernel threads created depends upon the
particular application; the developer can create as many user threads as
necessary, and the number of kernel threads may be different. The many-to-many
model is a compromise between the other two models. In this model, if a thread
makes a blocking system call, the kernel can schedule another thread for
execution, and the model avoids the drawbacks of the previous two. However, on
a single processor the kernel can schedule only one thread at a time, so true
parallelism is not achieved there.

Process commands:

ps (Process Status):
The ps command is used to list the currently running processes and their PIDs,
along with some other information depending on the options used.
Syntax:
ps [options]
Sample output of ps:
PID TTY TIME CMD
12330 pts/0 00:00:00 bash
21621 pts/0 00:00:00 ps
PID: the unique process ID
TTY: the terminal the user is logged into
TIME: the amount of CPU time, in minutes and seconds, that the process has
been running
CMD: the name of the command that launched the process
View all processes: to view all running processes, use either of the following
options with ps –
ps -A or ps -e
View all processes except session leaders and processes not associated with a
terminal:
ps -a
View all processes associated with this terminal: ps -T
View only the running processes: ps -r

Wait:
The wait command is used to wait for a previously started process to complete;
depending on that process's return status, it returns the exit status.
For example, if we want to wait for a particular process with ID 13245 to
complete, we use "wait 13245"; when process 13245 completes, the wait command
returns its exit status.
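The same wait-and-collect-exit-status idea can be sketched with Python's subprocess module: start a child, wait for it, read its status. The child here simply exits with status 7, an arbitrary value chosen for illustration.

```python
import subprocess
import sys

# The shell's `wait` idea, sketched with subprocess: start a child process,
# block until it terminates, and collect its exit status. The child exits
# with status 7, an arbitrary value for illustration.
child = subprocess.Popen([sys.executable, "-c", "import sys; sys.exit(7)"])
status = child.wait()   # blocks until the child terminates, like `wait PID`
print(status)           # 7
```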

Syntax:
wait [options] ID
Sleep:
The sleep command is used to create a dummy job that helps in delaying
execution. It takes a time in seconds by default, but a suffix (s, m, h, d)
can be added at the end to use another unit. This command pauses execution for
the amount of time defined by NUMBER.
Syntax:
sleep number[suffix]
Exit:
The exit command is used to exit from the current shell. It takes a number as
a parameter and exits the shell with that number as the return status. If we
do not provide any parameter, it returns the status of the last executed
command. The exit command closes a script and exits the shell.

If we have more than one shell tab, the exit command will close the tab where it is
executed. This is a built-in command, and we cannot find a dedicated manual page
for this.

Syntax:

exit

Kill:
kill is a very useful command in Linux that is used to terminate a process
manually. It sends a signal which ultimately terminates or kills a particular
process or group of processes. If the user does not specify a signal to send
with the kill command, the process is terminated using the default TERM signal.
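What kill does can be sketched from Python: start a long-running child, send it the default TERM signal, and observe that it did not exit normally. `Popen.terminate()` sends SIGTERM on POSIX systems, which is assumed here.

```python
import subprocess
import sys
import time

# Sketch of what `kill` does: start a long-running child, send it the
# default TERM signal, and observe it did not exit normally. POSIX is
# assumed; terminate() sends SIGTERM, like `kill <pid>`.
child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
time.sleep(0.1)          # give the child a moment to start
child.terminate()        # sends SIGTERM, like `kill <pid>`
status = child.wait()
print(status)            # negative on Linux: killed by signal 15 (SIGTERM)
```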
