Chapter 3
• New (Create) – In this state, the process is about to be created but has not yet been created; it is the program, residing in secondary memory, that will be picked up by the OS to create the process.
• Ready - Whenever a process is created, it directly enters the ready state, in which it waits for the CPU to be assigned. The OS picks new processes from secondary memory and puts them in main memory. Processes that are ready for execution and reside in main memory are called ready state processes. There can be many processes in the ready state at any time.
• Running - One of the processes from the ready state is chosen by the OS according to the scheduling algorithm. Hence, if we have only one CPU in our system, the number of running processes at any given time will always be one. If we have n processors in the system, then we can have n processes running simultaneously.
• Wait - Whenever a process requests I/O access or needs input from the user, it enters the blocked (or wait) state. The process continues to wait in main memory and does not require the CPU. Once the I/O operation is completed, the process returns to the ready state.
• Terminate - When a process finishes its execution, it enters the terminated state. All of the process's context (its Process Control Block) is deleted, and the process is terminated by the operating system.
• Suspend Ready - A process in the ready state that is moved from main memory to secondary memory due to a lack of resources (mainly primary memory) is said to be in the suspend ready state. If main memory is full and a higher-priority process arrives for execution, the OS has to make room for it in main memory by swapping a lower-priority process out into secondary memory. Suspend ready processes remain in secondary memory until main memory becomes available.
• Suspend Wait - Instead of removing a process from the ready queue, it is better to remove a blocked process that is waiting for some resource in main memory. Since it is already waiting for a resource to become available, it might as well wait in secondary memory and make room for the higher-priority process. These processes complete their execution once main memory becomes available and their wait is finished.
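On Linux, these states can be observed directly: ps reports a STAT code for each process (R = running, S = interruptible sleep/wait, T = stopped, Z = zombie/terminated). A small sketch, assuming a Linux system with the procps ps installed:

```shell
# Start a background job that mostly sleeps; it will sit in the
# wait state (STAT code "S", interruptible sleep).
sleep 30 &
pid=$!

# Ask ps for the state code of just that process.
ps -o stat= -p "$pid"

# Clean up the background job.
kill "$pid"
```

A CPU-bound loop would instead show R, and a process stopped with SIGSTOP shows T.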
Process scheduling:
Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.
Process scheduling is an essential part of a multiprogramming operating system. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.
The three schedulers differ in speed and in their effect on the degree of multiprogramming:
• Long-term scheduler – Its speed is lesser than that of the short-term scheduler. It controls the degree of multiprogramming.
• Short-term scheduler – Its speed is the fastest among the three. It provides lesser control over the degree of multiprogramming.
• Medium-term scheduler – Its speed is in between that of the short-term and long-term schedulers. It reduces the degree of multiprogramming.
Scheduling Queue:
As processes enter the system, they are put into a job queue, which consists of all processes in the system. A new process is first placed in the ready queue, where it waits until it is selected for execution. A ready-queue header contains pointers to the first and final PCBs in the list. Each PCB includes a pointer field that points to the next PCB in the ready queue.
Once the process is assigned to the CPU and is executing, one of the following events can occur −
• The process issues an I/O request and is then placed in an I/O queue.
• The process creates a new subprocess and waits for the subprocess's termination.
• The process is removed forcibly from the CPU as a result of an interrupt, and is put back in the ready queue.
In the first two cases, the process eventually switches from the waiting state to the ready state and is put back in the ready queue. A process continues this cycle until it terminates, at which time it is removed from all queues and has its PCB and resources deallocated.
Context Switch:
Context switching is a technique used by the operating system to switch a process from one state to another so that it can carry out its function using the system's CPUs. In a context switch, the kernel suspends the execution of the process in the running state, and a process in the ready state is executed by the CPU instead. When the switch is performed, the system stores the old running process's status in the form of registers and assigns the CPU to the new process to execute its tasks. While the new process runs, the previous process waits in the ready queue. The execution of the old process later resumes at the point where it was stopped. Context switching defines the characteristics of a multitasking operating system, in which multiple processes share the same CPU to perform multiple tasks without the need for additional processors in the system.
Need for Context switching:
Context switching allows a single CPU to be shared across all processes so that each can complete its execution, while the status of the system's tasks is stored. When a process is reloaded into the system, its execution resumes at the same point where it was switched out.
Following are the reasons that describe the need for context switching in the
Operating system.
• Switching from one process to another does not happen directly in the system. Context switching helps the operating system switch between multiple processes so that each can use the CPU's resources to accomplish its tasks, while storing its context; the service of the process can then be resumed at the same point later. If we do not store the currently running process's data or context, that data may be lost while switching between processes.
• If a high-priority process enters the ready queue, the currently running process is stopped so that the high-priority process can complete its tasks in the system.
• If a running process requires I/O resources, the current process is switched out so that another process can use the CPU. When the I/O requirement is met, the old process moves to the ready state to wait for its turn on the CPU. Context switching stores the state of the process so that its tasks can be resumed later. Otherwise, the process would need to restart its execution from the initial level.
• If an interrupt occurs while a process is running, the process's status is saved in registers using context switching. After the interrupt is resolved, the process switches from the wait state to the ready state and later resumes its execution at the same point where the interrupt occurred.
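On Linux, the cumulative number of context switches since boot is exposed in /proc/stat on the ctxt line, so the rate of switching can be observed from the shell. A sketch, assuming a Linux /proc filesystem:

```shell
# Read the system-wide context-switch counter, wait a second,
# and read it again; the difference is the number of context
# switches the kernel performed in that interval.
before=$(awk '/^ctxt/ {print $2}' /proc/stat)
sleep 1
after=$(awk '/^ctxt/ {print $2}' /proc/stat)
echo "context switches in ~1s: $((after - before))"
```

Even on an idle machine the count climbs steadily, since background daemons, interrupts, and the shell itself all trigger switches.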
Inter-process communication:
Inter-process communication is the mechanism provided by the operating system that allows processes to communicate with each other. Processes executing concurrently in the operating system may be either independent processes or cooperating processes.
A process is independent if it cannot affect or be affected by the other processes
executing in the system. Any process that does not share data with any other
process is independent.
A process is cooperating if it can affect or be affected by the other processes
executing in the system. Clearly, any process that shares data with other processes
is a cooperating process. There are several reasons for providing an environment
that allows process cooperation:
• Information sharing: Since several users may be interested in the same piece
of information (for instance, a shared file), we must provide an environment
to allow concurrent access to such information.
• Computation speedup: If we want a particular task to run faster, we must
break it into subtasks, each of which will be executing in parallel with the
others.
• Modularity: We may want to construct the system in a modular fashion,
dividing the system functions into separate processes or threads.
• Convenience: Even an individual user may work on many tasks at the same
time. For instance, a user may be editing, listening to music, and compiling
in parallel. Cooperating processes require an IPC mechanism that will allow
them to exchange data and information.
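The shell makes cooperation easy to observe: a pipeline starts two processes concurrently and connects them with a pipe, which is the IPC channel through which they exchange data.

```shell
# Two cooperating processes: printf produces data, sort consumes it.
# The pipe between them is the IPC channel; the shell runs both
# processes concurrently.
printf 'cherry\nbanana\napple\n' | sort
# prints: apple, banana, cherry (one per line)
```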
There are two fundamental models of inter process communication:
• shared memory and
• message passing
Shared memory:
In the shared-memory model, a region of memory that is shared by cooperating processes is established. Processes can then exchange information by reading and writing data in the shared region. Shared memory is memory shared between two or more processes. Each process normally has its own address space; if a process wants to communicate information from its own address space to other processes, it is only possible with IPC techniques.
Shared memory is the fastest inter-process communication mechanism. The operating system maps a memory segment into the address space of several processes, so that they can read and write in that memory segment without calling operating system functions.
Message Passing:
Message passing provides a mechanism to allow processes to communicate and to
synchronize their actions without sharing the same address space. It is particularly
useful in a distributed environment, where the communicating processes may
reside on different computers connected by a network.
A message-passing facility provides at least two operations:
send(message) and
receive(message)
Messages sent by a process can be either fixed or variable in size. If only fixed-sized messages can be sent, the system-level implementation is straightforward. This restriction, however, makes the task of programming more difficult. Conversely, variable-sized messages require a more complex system-level implementation, but the programming task becomes simpler.
A message-passing system can be designed along several dimensions:
• Direct or indirect communication
• Synchronous or asynchronous communication
With indirect communication, two processes can communicate only if they have a shared mailbox. The send() and receive() primitives are then defined as follows:
• send(A, message) – Send a message to mailbox A.
• receive(A, message) – Receive a message from mailbox A.
Send and receive may each be blocking or non-blocking:
• Blocking send: The sending process is blocked until the message is received by the receiving process or by the mailbox.
• Non-blocking send: The sending process sends the message and resumes operation.
• Blocking receive: The receiver blocks until a message is available.
• Non-blocking receive: The receiver retrieves either a valid message or a null.
Different combinations of send() and receive() are possible. When both send() and receive() are blocking, we have a rendezvous between the sender and the receiver.
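A named pipe (FIFO) gives a simple shell-level illustration of blocking send and receive: the writer blocks until a reader opens the pipe, and the reader blocks until a message arrives, which is exactly the rendezvous described above. A sketch; the FIFO path is arbitrary:

```shell
# Create a FIFO at a temporary path ($$ = this shell's PID).
fifo="/tmp/demo_fifo_$$"
mkfifo "$fifo"

# Sender: blocks until a receiver opens the other end of the pipe.
( echo "hello" > "$fifo" ) &

# Receiver: blocks until the sender's message is available.
read msg < "$fifo"
echo "received: $msg"    # prints: received: hello

# Clean up.
rm "$fifo"
```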
Threads:
A thread is a single sequential flow of execution of the tasks of a process, so it is also known as a thread of execution or thread of control. Every operating system provides a way to execute threads within a process, and a process can contain more than one thread.
Each thread has its own program counter, stack, and set of registers, but the threads of a single process share the same code and data/files. Threads are also termed lightweight processes because they share common resources.
A traditional (or heavy weight) process has a single thread of control. If a process
has multiple threads of control, it can perform more than one task at a time.
• User-level Threads
User-level threads are implemented in user space, and the kernel is not aware of their existence; it handles such a process as if it were single-threaded. User-level threads are small and much faster than kernel-level threads, and synchronizing them requires no kernel involvement.
• Kernel-level Threads
Kernel-level threads are handled by the operating system directly and
the thread management is done by the kernel. The context information
for the process as well as the process threads is all managed by the
kernel. Because of this, kernel-level threads are slower than user-level
threads.
Benefits
Responsiveness: Multithreading an interactive application may allow a program to
continue running even if part of it is blocked or is performing a lengthy operation,
thereby increasing responsiveness to the user. This quality is especially useful in
designing user interfaces. A single-threaded application would be unresponsive to the user until the operation had completed. In contrast, if the time-consuming
operation is performed in a separate thread, the application remains responsive to
the user.
Resource sharing: Processes can only share resources through techniques such as
shared memory and message passing. Such techniques must be explicitly arranged
by the programmer. However, threads share the memory and the resources of the
process to which they belong by default. The benefit of sharing code and data is
that it allows an application to have several different threads of activity within the
same address space.
Economy: Allocating memory and resources for process creation is costly. Because
threads share the resources of the process to which they belong, it is more
economical to create and context-switch threads.
Scalability: The benefits of multithreading can be even greater in a multiprocessor architecture, where threads may run in parallel on different processing cores. A single-threaded process can run on only one processor, regardless of how many are available.
Multi-threading model:
Threads may be provided either at the user level, for user threads, or by the kernel,
for kernel threads. User threads are supported above the kernel and are managed
without kernel support, whereas kernel threads are supported and managed
directly by the operating system. Virtually all contemporary operating systems —
including Windows, Linux, MacOS X, and Solaris—support kernel threads.
Ultimately, a relationship must exist between user threads and kernel threads.
Process commands:
ps (Process Status):
The ps command is used to list the currently running processes and their PIDs, along with other information that depends on the options given.
Syntax:
ps [options]
Running ps without options produces output like the following:
PID TTY TIME CMD
12330 pts/0 00:00:00 bash
21621 pts/0 00:00:00 ps
PID: the unique process ID
TTY: terminal type that the user is logged into
TIME: amount of CPU in minutes and seconds that the process has been running
CMD: name of the command that launched the process.
View processes: To view all the running processes, use either of the following options with ps −
ps -A or ps -e
View all processes except session leaders and processes not associated with a terminal:
ps -a
View all processes associated with this terminal: ps -T
View all the running processes: ps -r
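The output columns can also be chosen explicitly with -o; for example, listing every process with the same four fields as the default output shown above (assuming the procps version of ps):

```shell
# -e selects every process; -o picks the columns to display.
ps -e -o pid,tty,time,cmd | head -n 5
```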
Wait:
The wait command is used to monitor a previously started process; it returns that process's exit status when the process completes.
For example, to wait for a particular process with ID 13245 to complete, we use "wait 13245"; when process 13245 completes, the wait command returns process 13245's exit status.
Syntax:
wait [options] ID
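A brief sketch of wait with a background job, using $! (the shell variable holding the PID of the most recently started background process):

```shell
# Start a background job and capture its PID.
sleep 1 &
pid=$!

# wait blocks until that process finishes, then returns its exit status.
wait "$pid"
echo "exit status: $?"    # prints: exit status: 0
```

A failing job propagates its status the same way: after `false & wait $!`, the value of $? is 1.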
Sleep:
The sleep command is used to create a dummy job, which helps in delaying execution. By default it takes the time in seconds, but a suffix (s, m, h, d) can be added at the end to use a different unit. This command pauses execution for the amount of time defined by NUMBER.
Syntax:
sleep NUMBER[SUFFIX]
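For example, pausing for two seconds (the s suffix is optional for seconds):

```shell
# Record the time, sleep for two seconds, and report the delay.
start=$(date +%s)
sleep 2s
end=$(date +%s)
echo "slept for $((end - start)) seconds"
```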
Exit:
The exit command is used to exit from the current shell. It takes a number as a parameter and exits the shell with that number as the return status. If no parameter is provided, it returns the status of the last executed command. The exit command closes a script and exits the shell.
If we have more than one shell tab, the exit command closes the tab where it is executed. exit is a shell built-in command, so there is no dedicated manual page for it.
Syntax:
exit
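Because exit terminates the shell it runs in, it can be demonstrated safely in a subshell; the parent shell then reads the child's status from $?:

```shell
# The subshell (...) exits with status 3; the current shell survives.
( exit 3 )
echo "subshell exited with: $?"    # prints: subshell exited with: 3
```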
Kill:
kill is a very useful command in Linux that is used to terminate processes manually. It sends a signal that ultimately terminates or kills a particular process or group of processes. If the user does not specify a signal to send with the kill command, the process is terminated using the default TERM signal.
Syntax:
kill [signal] PID
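A short sketch: terminate a background process with the default TERM signal and observe its status. In bash (and most shells), a process killed by a signal reports 128 + the signal number, so SIGTERM (15) gives 143:

```shell
# Start a long-running background job.
sleep 100 &
pid=$!

# Send the default TERM signal; equivalent to: kill -15 "$pid"
kill "$pid"

# Collect the job's status: 128 + 15 = 143 for SIGTERM.
wait "$pid"
echo "status: $?"
```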