

Unit-2 - OS Notes

Operating Systems (Dr. A.P.J. Abdul Kalam Technical University)


Unit-2

Concurrent Processes

Process Concept:

Process

A process is basically a program in execution. The execution of a process must progress in a sequential fashion.

A process is defined as an entity which represents the basic unit of work to be implemented in the
system.

To put it in simple terms, we write our computer programs in a text file and when we execute
this program, it becomes a process which performs all the tasks mentioned in the program.

When a program is loaded into memory and becomes a process, it can be divided into four
sections − stack, heap, text, and data. These sections are described below −

Component & Description

1. Stack - The process stack contains temporary data such as method/function parameters, return addresses, and local variables.

2. Heap - This is memory that is dynamically allocated to the process during its run time.

3. Text - This includes the compiled program code. The current activity is represented by the value of the program counter and the contents of the processor's registers.

4. Data - This section contains the global and static variables.

Program:

A program is a piece of code which may be a single line or millions of lines. A computer
program is usually written by a computer programmer in a programming language. For example,
here is a simple program written in C programming language −

#include <stdio.h>

int main() {
    printf("Hello, World!\n");
    return 0;
}

A computer program is a collection of instructions that performs a specific task when executed
by a computer. When we compare a program with a process, we can conclude that a process is a
dynamic instance of a computer program.

A part of a computer program that performs a well-defined task is known as an algorithm. A
collection of computer programs, libraries, and related data is referred to as software.

Process Life Cycle

When a process executes, it passes through different states. These stages may differ in different
operating systems, and the names of these states are also not standardized.

In general, a process can have one of the following five states at a time.

1. Start - This is the initial state when a process is first started/created.

2. Ready - The process is waiting to be assigned to a processor. Ready processes are waiting for the operating system to allocate the processor to them so that they can run. A process may enter this state after the Start state, or after running, when the scheduler interrupts it to assign the CPU to some other process.

3. Running - Once the process has been assigned to a processor by the OS scheduler, the process state is set to running and the processor executes its instructions.

4. Waiting - The process moves into the waiting state if it needs to wait for a resource, such as user input, or for a file to become available.

5. Terminated or Exit - Once the process finishes its execution, or is terminated by the operating system, it is moved to the terminated state, where it waits to be removed from main memory.

Process Control Block (PCB):

A Process Control Block is a data structure maintained by the Operating System for every
process. The PCB is identified by an integer process ID (PID). A PCB keeps all the information
needed to keep track of a process as listed below in the table −

1. Process State - The current state of the process, i.e., whether it is ready, running, waiting, etc.

2. Process Privileges - Required to allow/disallow access to system resources.

3. Process ID - Unique identification for each process in the operating system.

4. Pointer - A pointer to the parent process.

5. Program Counter - A pointer to the address of the next instruction to be executed for this process.

6. CPU Registers - The various CPU registers whose contents must be saved for the process so that it can resume execution in the running state.

7. CPU Scheduling Information - Process priority and other scheduling information required to schedule the process.

8. Memory Management Information - This includes information such as the page table, memory limits, and segment table, depending on the memory system used by the operating system.

9. Accounting Information - This includes the amount of CPU time used for process execution, time limits, execution ID, etc.

10. I/O Status Information - This includes a list of I/O devices allocated to the process.

The architecture of a PCB is completely dependent on the operating system and may contain
different information in different operating systems.

The PCB is maintained for a process throughout its lifetime, and is deleted once the process
terminates.

Principle of Concurrency:

Concurrency is the execution of multiple instruction sequences at the same time. It happens in
the operating system when several process threads run in parallel. Running threads
communicate with each other through shared memory or message passing. Because concurrency
involves the sharing of resources, it can lead to problems such as deadlock and resource
starvation.
Concurrency underlies techniques such as coordinating the execution of processes, memory
allocation, and execution scheduling to maximize throughput.
There are several motivations for allowing concurrent execution:

 Physical resource sharing: multiuser environments, since hardware resources are limited
 Logical resource sharing: shared files (the same piece of information)
 Computation speedup: parallel execution
 Modularity: dividing system functions into separate processes


Relationship Between Processes of Operating System

The processes executing in the operating system are of one of the following two types:
 Independent processes
 Cooperating processes

Independent Processes
 Its state is not shared with any other process.
 The result of execution depends only on the input state.
 The result of the execution will always be the same for the same input.
 The termination of an independent process will not terminate any other process.

Cooperating Processes
 Its state is shared with other processes.
 The result of the execution depends on the relative execution sequence and cannot be predicted
in advance (it is non-deterministic).
 The result of the execution will not always be the same for the same input.
 The termination of a cooperating process may affect other processes.

Process Operation in Operating System


Most systems support at least two types of operations that can be invoked on a process: process
creation and process deletion.
Process Creation
A parent process can create child processes. When more than one process is created, several
possible implementations exist:
 Parent and child can execute concurrently.
 The parent waits until all of its children have terminated.
 The parent and children share all resources in common.
 The children share only a subset of their parent's resources.
 The parent and children share no resources in common.
Process Termination
A child process can be terminated in the following ways:
 A parent may terminate the execution of one of its children for the following reasons:
 The child has exceeded its allocated resource usage.
 The task assigned to the child is no longer required.
 If a parent has terminated, then its children must be terminated.

Principles of Concurrency

The principles of concurrency in operating systems are designed to ensure that multiple
processes or threads can execute efficiently and effectively, without interfering with each other
or causing deadlock.


 Interleaving − Interleaving refers to the interleaved execution of multiple processes or
threads. The operating system uses a scheduler to determine which process or thread to
execute at any given time. Interleaving allows for efficient use of CPU resources and ensures
that all processes or threads get a fair share of CPU time.
 Synchronization − Synchronization refers to the coordination of multiple processes or
threads to ensure that they do not interfere with each other. This is done through the use of
synchronization primitives such as locks, semaphores, and monitors. These primitives allow
processes or threads to coordinate access to shared resources such as memory and I/O
devices.
 Mutual exclusion − Mutual exclusion refers to the principle of ensuring that only one
process or thread can access a shared resource at a time. This is typically implemented using
locks or semaphores to ensure that multiple processes or threads do not access a shared
resource simultaneously.
 Deadlock avoidance − Deadlock is a situation in which two or more processes or threads are
each waiting for another to release a resource, so that none of them can proceed. Operating
systems use various techniques, such as resource-allocation graphs and deadlock prevention
algorithms, to avoid deadlock.
 Process or thread coordination − Processes or threads may need to coordinate their
activities to achieve a common goal. This is typically achieved using synchronization
primitives such as semaphores or message passing mechanisms such as pipes or sockets.
 Resource allocation − Operating systems must allocate resources such as memory, CPU
time, and I/O devices to multiple processes or threads in a fair and efficient manner. This is
typically achieved using scheduling algorithms such as round-robin, priority-based, or real-
time scheduling.

Problems in Concurrency

 Sharing global resources: Sharing of global resources safely is difficult. If two processes
both make use of a global variable and both perform read and write on that variable, then the
order in which various read and write are executed is critical.
 Optimal allocation of resources: It is difficult for the operating system to manage the
allocation of resources optimally.
 Locating programming errors: It is very difficult to locate a programming error because
reports are usually not reproducible.


 Locking the channel: It may be inefficient for the operating system to simply lock a
channel and prevent its use by other processes.

Advantages of Concurrency

 Running of multiple applications: Concurrency enables multiple applications to run at the same time.
 Better resource utilization: Resources that are unused by one application can be used by
other applications.
 Better average response time: Without concurrency, each application has to run to
completion before the next one can start.
 Better performance: It enables the better performance by the operating system. When one
application uses only the processor and another application uses only the disk drive then the
time to run both applications concurrently to completion will be shorter than the time to run
each application consecutively.

Drawbacks of Concurrency

 It is required to protect multiple applications from one another.
 It is required to coordinate multiple applications through additional mechanisms.
 Additional performance overheads and complexity in the operating system are required for
switching among applications.
 Sometimes running too many applications concurrently leads to severely degraded
performance.
Issues of Concurrency

 Non-atomic: Operations that are non-atomic and can be interrupted by other processes may
cause problems.
 Race conditions: A race condition occurs when the outcome depends on which of several
processes gets to a point first.
 Blocking: Processes can block while waiting for resources. A process could be blocked for a
long period of time waiting for input from a terminal; if the process is required to periodically
update some data, this is very undesirable.
 Starvation: Starvation occurs when a process never obtains the service it needs to progress.
 Deadlock: Deadlock occurs when two or more processes are blocked waiting on each other,
so that neither can proceed.

Producer Consumer Problem in OS

The Producer-Consumer problem is a classical synchronization problem in the operating system.
With more than one process and limited resources in the system, synchronization problems
arise. If one resource is shared between more than one process at the same time, it can lead to
data inconsistency. In the producer-consumer problem, the producer produces an item and the
consumer consumes the item produced by the producer.

What is Producer Consumer Problem?


Before defining the Producer-Consumer problem, we have to know what a producer and a
consumer are.

 In an operating system, a producer is a process that produces data/items.
 A consumer is a process that consumes the data/items produced by the producer.
 Both producer and consumer share a common memory buffer. This buffer is a space of a
certain size in the memory of the system that is used for storage. The producer puts
data into the buffer and the consumer takes data out of the buffer.

So, what are the Producer-Consumer Problems?

1. The producer process should not produce any data when the shared buffer is full.
2. The consumer process should not consume any data when the shared buffer is empty.
3. Access to the shared buffer should be mutually exclusive, i.e., at a time only one
process should be able to access the shared buffer and make changes to it.

For consistent data synchronization between Producer and Consumer, the above problem should
be resolved.

Solution For Producer Consumer Problem

To solve the Producer-Consumer problem, three semaphore variables are used.

Semaphores are variables used to indicate the number of resources available in the system at a
particular time. Semaphore variables are used to achieve process synchronization.

Full

The Full variable tracks the number of filled slots in the buffer. It is initialized to 0, since
initially no slot has been filled by the producer process.

Empty

The Empty variable tracks the number of empty slots in the buffer. It is initialized to
BUFFER_SIZE, since initially the whole buffer is empty.

Mutex

The mutex is used to achieve mutual exclusion: it ensures that at any particular time only the
producer or the consumer is accessing the buffer.

A mutex is a binary semaphore variable that takes the value 0 or 1.

We will use the wait() and signal() operations on the above-mentioned semaphores to arrive at a
solution to the Producer-Consumer problem.


wait() - The wait operation decreases the semaphore value by 1.
signal() - The signal operation increases the semaphore value by 1.

Let's look at the code of Producer-Consumer Process

The code for Producer Process is as follows :

void Producer() {
    while (true) {
        // producer produces an item/data
        wait(Empty);
        wait(mutex);
        add();
        signal(mutex);
        signal(Full);
    }
}

Let's understand the above Producer process code :

 wait(Empty) - Before producing an item, the producer process checks for empty space
in the buffer. If the buffer is full, the producer process waits for the consumer process to
consume items from the buffer. So, the producer process executes wait(Empty) before
producing any item.
 wait(mutex) - Only one process can access the buffer at a time. So, once the producer
process enters the critical section of the code, it decreases the value of mutex by
executing wait(mutex) so that no other process can access the buffer at the same time.
 add() - This method adds the item produced by the producer process to the buffer. Once
the producer process reaches the add() call, it is guaranteed that no other process can
access the shared buffer concurrently, which preserves data consistency.
 signal(mutex) - Once the producer process has added the item to the buffer, it
increases the mutex value by 1 so that other processes that were in a busy-waiting state
can access the critical section.
 signal(Full) - When the producer process adds an item to the buffer, one more slot is
filled, so it increases the Full semaphore so that it correctly indicates the number of filled
slots in the buffer.

The code for the Consumer Process is as follows :

void Consumer() {
    while (true) {
        // consumer consumes an item
        wait(Full);
        wait(mutex);
        consume();
        signal(mutex);
        signal(Empty);
    }
}

Let's understand the above Consumer process code :

 wait(Full) - Before the consumer process starts consuming any item from the buffer, it
checks whether the buffer has any item in it. Consuming an item creates one more empty
slot in the buffer, and the number of filled slots is tracked by the Full variable: its value
decreases by one when wait(Full) executes. If Full is already zero, i.e., the buffer is
empty, then the consumer process cannot consume any item from the buffer and it goes
into a busy-waiting state.
 wait(mutex) - This does the same as explained in the producer process: it decreases
mutex by 1 and prevents another process from entering the critical section until the
consumer process increases the value of mutex by 1 again.
 consume() - This function consumes an item from the buffer. While the code is inside the
consume() call, no other process can access the critical section, which maintains data
consistency.
 signal(mutex) - After consuming the item, the consumer increases the mutex value by 1
so that other processes that are in a busy-waiting state can access the critical section.
 signal(Empty) - When the consumer process consumes an item, it increases the value of
the Empty variable, indicating that the number of empty slots in the buffer has increased by 1.

Why can mutex solve the producer consumer Problem ?

A mutex is used to solve the producer-consumer problem because it provides mutual exclusion:
it prevents more than one process from entering the critical section. A mutex is binary, taking
the values 0 and 1, so whenever any process tries to enter the critical section it first checks the
mutex value by using the wait operation.

wait(mutex);

wait(mutex) decreases the value of mutex by 1. Suppose a process P1 tries to enter the critical
section when the mutex value is 1. P1 executes wait(mutex) and decreases the value of mutex:
the value of mutex becomes 0 as P1 enters the critical section of the code.

Now suppose process P2 tries to enter the critical section. It will again try to decrease the
value of mutex, but the mutex value is already 0, so wait(mutex) cannot complete and P2
keeps waiting for P1 to come out of the critical section.

Now, suppose P1 comes out of the critical section by executing signal(mutex).


signal(mutex)

signal(mutex) increases the value of mutex by 1, so the mutex value becomes 1 again. Now
process P2, which was in a busy-waiting state, can enter the critical section by executing
wait(mutex).

So, mutex helps in the mutual exclusion of the processes.

In the above section in both the Producer process code and consumer process code, we have the
wait and signal operation on mutex which helps in mutual exclusion and solves the problem of
the Producer consumer process.

Conclusion

 The producer process produces data items and the consumer process consumes data items.
 Both producer and consumer processes share a common memory buffer.
 The producer should not produce any item if the buffer is full.
 The consumer should not consume any item if the buffer is empty.
 No more than one process should access the buffer at a time, i.e., there should be
mutual exclusion.
 The Full, Empty, and mutex semaphores help solve the Producer-Consumer problem.
 The Full semaphore tracks the number of filled slots in the buffer.
 The Empty semaphore tracks the number of empty slots in the buffer.
 The mutex ensures mutual exclusion.

Mutual exclusion:

Mutual exclusion locks are a frequently used method in operating systems for synchronizing
processes or threads that want to access a shared resource. Their behavior matches their name:
if a thread is operating on a resource, another thread that wants to operate on it must wait until
the first one is done.

What is Mutual Exclusion in OS?

Mutual exclusion, also known as mutex, is a mechanism that prevents concurrent access to
shared resources. Mutual exclusion is a property of concurrency control enforced with the
objective of preventing race conditions.

In simple words, it is the condition that a thread of execution never enters a critical section at
the same time as another concurrent thread that is already using that critical section. The
critical section is the period during which the thread uses the shared resource, such as a data
object that different concurrent threads may attempt to alter (multiple concurrent reads of such
an object may be allowed, but concurrent writes, or a write concurrent with a read, are not, as
they can lead to data inconsistency).


Mutual exclusion in an OS is designed so that when a write operation is in progress, no other
thread is granted access to the same object until the first thread has finished writing in the
critical section and released the object for the remaining processes to read and write.

Why is Mutual Exclusion Required?

An easy example of the importance of mutual exclusion is deleting nodes from a linked list
shared by multiple threads. To delete a node that sits between two other nodes, the previous
node's next reference is modified to point to the succeeding node.

In other words, whenever node "i" is to be removed, node "i - 1"'s next reference is modified to
point to node "i + 1". When the linked list is shared among many threads, two threads may
remove two adjacent nodes at the same time: the first thread modifies node "i - 1"'s next
reference to point to node "i + 1", while at the same time the second thread modifies node "i"'s
next reference to point to node "i + 2". Although both removal operations complete, the list
does not reach the required state, because node "i + 1" still exists in it: node "i - 1"'s next
reference still points to node "i + 1".

Now, this situation is called a race condition. Race conditions can be prevented by mutual
exclusion, so that simultaneous updates to the same part of the list cannot happen.

Necessary Conditions for Mutual Exclusion

There are four conditions applied to mutual exclusion, which are mentioned below :

 Mutual exclusion must be ensured between different processes when accessing shared
resources: no two processes may be inside their critical sections at the same time.
 No assumptions should be made about the relative speeds of the processes.
 A process that is outside its critical section must not block another process's access to
the critical section.
 When multiple processes want to access their critical sections, they must be allowed
access within a finite time, i.e., they should never be kept waiting indefinitely.

Example of Mutual Exclusion

There are many types of mutual exclusion mechanisms; some of them are mentioned below:

 Locks :
A mechanism that restricts access to a resource when multiple threads of execution exist.
 Recursive lock :
A type of mutual exclusion (mutex) device that can be locked several times by the very
same process/thread without causing a deadlock. A "lock" operation on an ordinary mutex
fails or blocks when the mutex is already locked, whereas on a recursive mutex the
operation succeeds if the locking thread is the one that already holds the lock.
 Semaphore :
An abstract data type designed to control access to a shared resource by multiple
threads and to prevent critical section problems in a concurrent system such as a
multitasking operating system. Semaphores are a kind of synchronization primitive.
 Readers-writer (RW) lock :
A synchronization primitive for solving reader-writer problems. It grants concurrent
access to read-only processes, while writing processes require exclusive access. This
means that multiple threads can read the data in parallel, but an exclusive lock is
required for writing or modifying the data. It can be used to control access to a data
structure in memory.

Conclusion

 Mutual exclusion locks are a frequently used method in operating systems for
synchronizing processes or threads that want to access a shared resource.
 Mutual exclusion is also known as Mutex.
 The critical section can be defined as a period for which the thread of execution accesses
the shared resource.
 Mutual exclusion is designed so that if a process is performing any write operation, then
no other process or thread is allowed to access or alter the same object.
 Mutual exclusion in OS is used to prevent race conditions.
 Mutual exclusion should be ensured in the middle of different processes when accessing
shared resources.
 The process that is outside the critical section must not interfere with another for access
to the critical section.
 Examples of mutual exclusion are locks, recursive locks, RW locks, semaphores, etc.

Critical Section Problem:

If multiple processes access the critical section concurrently, then the results produced might be
inconsistent. This is called the critical section problem.

The critical section refers to a specific part of a program where shared resources are accessed,
and concurrent execution may lead to conflicts or inconsistencies. It is essential for the operating
system to provide mechanisms like locks and semaphores to ensure proper synchronization and
mutual exclusion in the critical section. These safeguards prevent concurrent processes from
interfering with each other, maintaining the integrity of shared resources.

What is the Critical Section Problem in OS?


When more than one process accesses or modifies a shared resource at the same time, the final
value of that resource is determined by whichever process writes last. This is called a race
condition.

Consider an example of two processes, P1 and P2, and let value = 3 be a variable in the shared
resource. Suppose the two processes perform the following actions:

value = value + 3   // process P1: value becomes 6
value = value - 3   // process P2: value becomes 3

P1 intends the final value to be 6, but because process P2 runs in between, the value is changed
back to 3. This is a synchronization problem.

The critical section problem is to make sure that only one process is in the critical section at a
time. When a process is in the critical section, no other process is allowed to enter it. This
prevents the race condition.

Solutions to the Critical Section Problem

To effectively address the Critical Section Problem in operating systems, any solution must meet
three key requirements:

1. Mutual Exclusion: This means that when one process is executing within its critical
section, no other process should be allowed to enter its own critical section. This ensures
that shared resources are accessed by only one process at a time, preventing conflicts and
data corruption.

2. Progress: When no process is currently executing in its critical section, and there is a
process that wishes to enter its critical section, it should not be kept waiting indefinitely.
The system should enable processes to make progress, ensuring that they eventually get a
chance to access their critical sections.

3. Bounded Waiting: There must be a bound on the number of times other processes are
allowed to enter their critical sections after a process has made a request to enter its
critical section and before that request is granted. This ensures fairness and prevents
any process from being starved of critical section access.

Various solutions have been developed to meet these requirements and manage the Critical
Section Problem. These solutions primarily use software-based locks for synchronization.

Here are some common approaches:


1. Test-and-Set: This method uses a shared boolean variable, typically called
"lock," and the test_and_set instruction, which atomically reads the old value and
sets the lock to true.

2. Compare-and-Swap: Similar to test_and_set, this approach also uses a shared
variable but employs the compare_and_swap instruction, which sets the lock to true
only if the current value matches an expected value.

3. Mutex Locks: Mutex (short for mutual exclusion) locks provide functions like
"acquire()" and "release()" that execute atomically. These locks ensure that only one
process can acquire the lock at a time.

4. Semaphores: Semaphores are more advanced synchronization tools. They use "wait()"
and "signal()" operations, executed atomically on a semaphore variable (typically an
integer). Semaphores can manage access to resources more flexibly.

5. Condition Variables: This approach maintains a queue of processes waiting to enter
their critical sections. It ensures orderly access by managing the waiting processes based
on certain conditions.

The essential principle across these solutions is to guarantee exclusive access to critical sections
while allowing processes to make progress and ensuring that no process is left waiting
indefinitely. The specific mechanisms and tools used may vary, but they all aim to maintain the
integrity of shared resources in the system.

Peterson’s Solution:

Peterson's solution was proposed to resolve the critical section problem involving only two
processes. This solution guarantees that it provides mutual exclusion, bounded waiting, and
progress of the processes.

The following code snippet shows how Peterson's algorithm works:

Algorithm for Pi process

do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j);

    // critical section

    flag[i] = false;

    // remainder section
} while (true);

Algorithm for Pj process


do {
    flag[j] = true;
    turn = i;
    while (flag[i] && turn == i);

    // critical section

    flag[j] = false;

    // remainder section
} while (true);

Explanation

The processes and variables used in the algorithm need elaboration. The process names used in
this solution are Pi and Pj. The two processes share two variables:

 turn (int): The variable turn indicates whose turn it is to enter the critical section.
If turn == i, then Pi is allowed to enter its critical section.
 flag (boolean): The flag array indicates whether a process is ready to enter the critical
section. If flag[i] == true, then process Pi is ready.

Let's consider the algorithm of process Pi. Process i first raises its flag, indicating that it
wishes to enter the critical section. It then sets turn to j, offering the other process the chance
to enter first. The while loop busy-waits until it is safe to proceed, so only one process can be
inside its critical section at a time.

Process i lowers flag[i] in the exit section, allowing process j to continue if it has been
waiting.

Although it provides concurrency, Peterson's solution is limited to only two processes and
involves busy waiting, which wastes processor cycles.

Semaphores in Operating System


Semaphores are integer variables that are used to solve the critical section problem by using two
atomic operations, wait and signal that are used for process synchronization.

The definitions of wait and signal are as follows −

 Wait
The wait operation decrements the value of its argument S when S is positive. While S is
zero or negative, the process busy-waits and no decrement is performed.
wait(S)
{
    while (S <= 0);
    S--;
}
 Signal
The signal operation increments the value of its argument S.
signal(S)
{
    S++;
}

Types of Semaphores

There are two main types of semaphores i.e. counting semaphores and binary semaphores.
Details about these are given as follows −

 Counting Semaphores
These are integer value semaphores and have an unrestricted value domain. These semaphores
are used to coordinate the resource access, where the semaphore count is the number of available
resources. If the resources are added, semaphore count automatically incremented and if the
resources are removed, the count is decremented.
 Binary Semaphores

Binary semaphores are like counting semaphores, but their value is restricted to 0 and 1. The
wait operation succeeds only when the semaphore is 1 (setting it to 0), and the signal
operation sets it back to 1. It is sometimes easier to implement binary semaphores than
counting semaphores.

Advantages of Semaphores

Some of the advantages of semaphores are as follows −

 Semaphores allow only one process into the critical section. They follow the mutual
exclusion principle strictly and are much more efficient than some other methods of
synchronization.
 There is no resource wastage because of busy waiting in semaphores as processor time is
not wasted unnecessarily to check if a condition is fulfilled to allow a process to access
the critical section.
 Semaphores are implemented in the machine independent code of the microkernel. So
they are machine independent.

Disadvantages of Semaphores

Some of the disadvantages of semaphores are as follows −

 Semaphores are complicated, so the wait and signal operations must be implemented in
the correct order to prevent deadlocks.
 Semaphores are impractical for large-scale use, as their use leads to a loss of modularity.
This happens because the wait and signal operations prevent the creation of a structured
layout for the system.
 Semaphores may lead to priority inversion, where low-priority processes access the
critical section first and high-priority processes later.

Test and Set Lock –

 Test and Set Lock (TSL) is a synchronization mechanism.
 It uses a test-and-set instruction to provide synchronization among processes
executing concurrently.

Test-and-Set Instruction

 It is an instruction that returns the old value of a memory location and sets the
memory location's value to 1, as a single atomic operation.
 If one process is currently executing a test-and-set, no other process is allowed to
begin another test-and-set until the first process's test-and-set is finished.

It is implemented as-

Initially, lock value is set to 0.


 Lock value = 0 means the critical section is currently vacant and no process is present
inside it.
 Lock value = 1 means the critical section is currently occupied and a process is present
inside it.

Working-

This synchronization mechanism works as explained in the following scenes-

Scene-01:

 Process P0 arrives.
 It executes the test-and-set(Lock) instruction.
 Since lock value is set to 0, so it returns value 0 to the while loop and sets the lock value
to 1.
 The returned value 0 breaks the while loop condition.
 Process P0 enters the critical section and executes.
 Now, even if process P0 gets preempted in the middle, no other process can enter the
critical section.
 Any other process can enter only after process P0 completes and sets the lock value to 0.

Scene-02:

 Another process P1 arrives.
 It executes the test-and-set(Lock) instruction.
 Since the lock value is now 1, the instruction returns 1 to the while loop and sets the
lock value to 1 again.
 The returned value 1 does not break the while loop condition.
 The process P1 is trapped inside an infinite while loop.
 The while loop keeps the process P1 busy until the lock value becomes 0 and its
condition breaks.

Scene-03:

 Process P0 comes out of the critical section and sets the lock value to 0.
 The while loop condition breaks.
 Now, process P1 waiting for the critical section enters the critical section.
 Now, even if process P1 gets preempted in the middle, no other process can enter the
critical section.

 Any other process can enter only after process P1 completes and sets the lock value to 0.

Dining Philosophers Problem:


The dining philosophers problem states that five philosophers share a circular table, and
they eat and think alternately. There is a bowl of rice for each philosopher and five
chopsticks. A philosopher needs both the left and the right chopstick to eat, and a hungry
philosopher may eat only if both chopsticks are available. Otherwise the philosopher puts
down any chopstick picked up and begins thinking again.

The dining philosophers problem is a classic synchronization problem, as it demonstrates a
large class of concurrency-control problems.

Solution of Dining Philosophers Problem

A solution to the Dining Philosophers Problem is to use a semaphore to represent each
chopstick. A chopstick is picked up by executing a wait operation on its semaphore and
released by executing a signal operation on it.

The structure of the chopstick is shown below −

semaphore chopstick [5];

Initially the elements of the chopstick are initialized to 1 as the chopsticks are on the table and
not picked up by a philosopher.

The structure of a random philosopher i is given as follows −

do {
    wait( chopstick[i] );
    wait( chopstick[ (i+1) % 5 ] );

    /* EATING THE RICE */

    signal( chopstick[i] );
    signal( chopstick[ (i+1) % 5 ] );

    /* THINKING */
} while(1);

In the above structure, first wait operation is performed on chopstick[i] and chopstick[ (i+1) %
5]. This means that the philosopher i has picked up the chopsticks on his sides. Then the eating
function is performed.

After that, signal operation is performed on chopstick[i] and chopstick[ (i+1) % 5]. This means
that the philosopher i has eaten and put down the chopsticks on his sides. Then the philosopher
goes back to thinking.

Difficulty with the solution

The above solution makes sure that no two neighboring philosophers can eat at the same time.
But this solution can lead to a deadlock. This may happen if all the philosophers pick their left
chopstick simultaneously. Then none of them can eat and deadlock occurs.

Some of the ways to avoid deadlock are as follows −

 There should be at most four philosophers at the table.
 An even philosopher should pick up the right chopstick first and then the left, while an
odd philosopher should pick up the left chopstick first and then the right.
 A philosopher should only be allowed to pick up chopsticks if both are available at the
same time.

Sleeping Barber Problem Solution in C

The sleeping barber dilemma was first posed by Dijkstra in 1965. This issue is based on a
fictitious situation in which there is a single barber at a barbershop. The waiting area and the
workroom are separated in the barbershop. Customers can wait in the waiting area on n seats;
however there is only one barber chair in the workroom.

Inter Process Communication

An independent process is unaffected by the execution of other processes, while a cooperating
process can be impacted by them. Although processes operating independently can run quite
efficiently, there are many circumstances in which cooperation can be used to boost computing
speed, ease, and flexibility. Processes can interact with one another and coordinate their
operations through a technique called inter-process communication (IPC). These processes'
communication with one another can be thought of as a means of cooperation.

Problem:

The problem is set in a fictitious barbershop with a single barber, one barber chair, and n
seats where waiting customers can sit. When there are no customers, the barber sleeps in his
chair; an arriving customer must wake him.

o When the barber is cutting one customer's hair and more customers arrive, they wait if
there are empty seats in the waiting area, or leave if all the seats are occupied.

Solution:

Three semaphores are used in the solution to this problem. The first, Customers, counts the
number of customers in the waiting area (the customer in the barber chair is not included,
because he is no longer waiting). The second, a mutex, provides the mutual exclusion needed
to update the seat count, and the third, Barber (0 or 1), indicates whether the barber is idle or
working. The solution also keeps a count of the customers currently waiting; when it equals
the number of chairs, the next arriving customer leaves the barbershop.

The procedure barber is carried out when the barber arrives in the morning, and he blocks on
the semaphore Customers, since it is initially 0. The barber then goes to sleep until the first
customer arrives.

Code:

Semaphore Customers = 0;
Semaphore Barber = 0;
Mutex Seats = 1;
int FreeSeats = N;

Barber {
    while(true) {
        /* waits for a customer (sleeps). */
        down(Customers);

        /* mutex to protect the number of available seats. */
        down(Seats);

        /* a chair gets free. */
        FreeSeats++;

        /* bring customer for haircut. */
        up(Barber);

        /* release the mutex on the chair. */
        up(Seats);

        /* barber is cutting hair. */
    }
}

Customer {
    while(true) {
        /* protects seats so only one customer at a time
           updates the seat count. */
        down(Seats);
        if(FreeSeats > 0) {
            /* sitting down. */
            FreeSeats--;

            /* notify the barber. */
            up(Customers);

            /* release the lock. */
            up(Seats);

            /* wait in the waiting room if barber is busy. */
            down(Barber);

            /* customer is having hair cut. */
        } else {
            /* release the lock. */
            up(Seats);

            /* customer leaves. */
        }
    }
}

Analysis:

When the barber starts his shift, the barber procedure is carried out and he checks whether
any customers are waiting. If one is, he takes the customer in for a haircut; if nobody requests
a haircut, the barber goes to sleep.

When a customer arrives and a chair is available, he takes a seat, increments the waiting
counter, and notifies the barber. The barber wakes up, enters the critical section, obtains the
mutex, and begins the haircut.

The customer departs when the haircut is finished. The barber then checks whether any other
customers are waiting for a haircut; if not, he goes back to sleep.

Inter Process Communication

In general, Inter Process Communication is a mechanism usually provided by the operating
system (OS). Its main goal is to provide communication between several processes. In short,
it allows one process to let another process know that some event has occurred.

Let us now look at the general definition of inter-process communication, which will explain the
same thing that we have discussed above.

Definition

"Inter-process communication is used for exchanging useful information between numerous


threads in one or more processes (or programs)."

To understand inter process communication, you can consider the following given diagram that
illustrates the importance of inter-process communication.

Role of Synchronization in Inter Process Communication

It is one of the essential parts of inter process communication. Typically, this is provided by
interprocess communication control mechanisms, but sometimes it can also be controlled by
communication processes.

These are the following methods that used to provide the synchronization:

1. Mutual Exclusion

2. Semaphore

3. Barrier

4. Spinlock

Mutual Exclusion:-

It is generally required that only one process thread can enter the critical section at a time. This
also helps in synchronization and creates a stable state to avoid the race condition.

Semaphore:-

Semaphore is a type of variable that usually controls the access to the shared resources by several
processes. Semaphore is further divided into two types which are as follows:

1. Binary Semaphore

2. Counting Semaphore

Barrier:-

A barrier does not allow any individual process to proceed until all of the processes reach it.
It is used by many parallel languages, and collective routines impose barriers.

Spinlock:-

A spinlock is a type of lock, as its name implies. A process trying to acquire a spinlock waits
in a loop, repeatedly checking whether the lock is available. This is known as busy waiting
because, even though the process is active, it does not perform any useful work while
spinning.

Approaches to Interprocess Communication

These are a few different approaches to inter-process communication:

1. Pipes

2. Shared Memory

3. Message Queue

4. Direct Communication

5. Indirect communication

6. Message Passing

7. FIFO

To understand them in more detail, we will discuss each of them individually.

Pipe:-

A pipe is a data channel that is unidirectional in nature: data can move through it in only a
single direction at a time. Still, two such channels can be used together so that processes can
both send and receive data. Pipes typically use the standard methods for input and output,
and they are available in all POSIX systems as well as in different versions of the Windows
operating system.

Shared Memory:-

It can be referred to as a type of memory that can be used or accessed by multiple processes
simultaneously. It is primarily used so that the processes can communicate with each other.
Therefore the shared memory is used by almost all POSIX and Windows operating systems as
well.

Message Queue:-

In general, several different processes are allowed to read and write data to the message
queue. The messages are stored in the queue until their recipients retrieve them. In short, the
message queue is very helpful for inter-process communication and is used by all operating
systems.

To understand the concept of Message queue and Shared memory in more detail, let's take a look
at its diagram given below:

Message Passing:-

It is a mechanism that allows processes to synchronize and communicate with each other. By
using message passing, processes can communicate without resorting to shared variables.

Usually, the inter-process communication mechanism provides two operations that are as
follows:

o send (message)

o receive (message)

Direct Communication:-

In this type of communication process, usually, a link is created or established between two
communicating processes. However, in every pair of communicating processes, only one link
can exist.

Indirect Communication

Indirect communication is established when processes share a common mailbox; each pair of
communicating processes may share several communication links. These shared links can be
unidirectional or bi-directional.

FIFO:-

It is a type of general communication between two unrelated processes. It can also be considered
as full-duplex, which means that one process can communicate with another process and vice
versa.

Some other different approaches

o Socket:-

It acts as an endpoint for sending or receiving data in a network. It works both for data sent
between processes on the same computer and for data sent between different computers on
the same network. Hence, it is used by several types of operating systems.

o File:-

A file is a type of data record or a document stored on the disk and can be acquired on demand
by the file server. Another most important thing is that several processes can access that file as
required or needed.

o Signal:-

As the name implies, signals are used for inter-process communication in a minimal way.
They are system messages sent by one process to another. Therefore, they are generally not
used for transferring data but for delivering remote commands between processes.

Why we need interprocess communication?

There are numerous reasons to use inter-process communication for sharing the data. Here are
some of the most important reasons that are given below:

o Modularity

o Computational speedup

o Privilege separation

o Convenience

o It helps cooperating processes to communicate with each other and synchronize their
actions.

Process scheduling/ Process Generation:

The process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process on the basis of a particular
strategy.

Process scheduling is an essential part of a Multiprogramming operating systems. Such operating


systems allow more than one process to be loaded into the executable memory at a time and the
loaded process shares the CPU using time multiplexing.

Categories of Scheduling

There are two categories of scheduling:

1. Non-preemptive: Here the resource can’t be taken from a process until the process
completes execution. The switching of resources occurs when the running process
terminates and moves to a waiting state.
2. Preemptive: Here the OS allocates the CPU to a process for a fixed amount of time.
During execution, a process may switch from the running state to the ready state or from
the waiting state to the ready state. This switching occurs because the CPU may give
priority to another process, preempting the running process in favor of one with higher
priority.

Process Scheduling Queues

The OS maintains all Process Control Blocks (PCBs) in Process Scheduling Queues. The OS
maintains a separate queue for each of the process states and PCBs of all processes in the same
execution state are placed in the same queue. When the state of a process is changed, its PCB is
unlinked from its current queue and moved to its new state queue.

The Operating System maintains the following important process scheduling queues −

 Job queue − This queue keeps all the processes in the system.
 Ready queue − This queue keeps a set of all processes residing in main memory, ready
and waiting to execute. A new process is always put in this queue.
 Device queues − The processes which are blocked due to unavailability of an I/O device
constitute this queue.

The OS can use different policies to manage each queue (FIFO, Round Robin, Priority, etc.). The
OS scheduler determines how to move processes between the ready and run queues which can
only have one entry per processor core on the system; in the above diagram, it has been merged
with the CPU.

Two-State Process Model

Two-state process model refers to running and non-running states which are described below −

S.N. State & Description

1. Running — When a new process is created, it enters the system in the running state.

2. Not Running — Processes that are not running are kept in a queue, waiting for their turn
to execute. Each entry in the queue is a pointer to a particular process, and the queue is
implemented using a linked list. The dispatcher works as follows: when a running process is
interrupted, it is transferred to the waiting queue; if it has completed or aborted, it is
discarded. In either case, the dispatcher then selects a process from the queue to execute.

Schedulers

Schedulers are special system software which handle process scheduling in various ways. Their
main task is to select the jobs to be submitted into the system and to decide which process to run.
Schedulers are of three types −

 Long-Term Scheduler

 Short-Term Scheduler
 Medium-Term Scheduler

Long Term Scheduler

It is also called a job scheduler. A long-term scheduler determines which programs are admitted
to the system for processing. It selects processes from the queue and loads them into memory for
execution. Process loads into the memory for CPU scheduling.

The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O
bound and processor bound. It also controls the degree of multiprogramming. If the degree of
multiprogramming is stable, then the average rate of process creation must be equal to the
average departure rate of processes leaving the system.

On some systems, the long-term scheduler may be absent or minimal. Time-sharing
operating systems have no long-term scheduler. The long-term scheduler comes into play
when a process changes state from new to ready.

Short Term Scheduler

It is also called as CPU scheduler. Its main objective is to increase system performance in
accordance with the chosen set of criteria. It is the change of ready state to running state of the
process. CPU scheduler selects a process among the processes that are ready to execute and
allocates CPU to one of them.

Short-term schedulers, also known as dispatchers, make the decision of which process to execute
next. Short-term schedulers are faster than long-term schedulers.

Medium Term Scheduler

Medium-term scheduling is a part of swapping. It removes the processes from the memory. It
reduces the degree of multiprogramming. The medium-term scheduler is in-charge of handling
the swapped out-processes.

A running process may become suspended if it makes an I/O request. A suspended process
cannot make any progress towards completion. In this condition, to remove the process from
memory and make space for other processes, the suspended process is moved to secondary
storage. This is called swapping, and the process is said to be swapped out or rolled out.
Swapping may be necessary to improve the process mix.

Comparison among Scheduler

S.N. Long-Term Scheduler vs. Short-Term Scheduler vs. Medium-Term Scheduler

1. The long-term scheduler is a job scheduler; the short-term scheduler is a CPU scheduler;
the medium-term scheduler is a process-swapping scheduler.

2. The long-term scheduler is slower than the short-term scheduler; the short-term scheduler
is the fastest of the three; the medium-term scheduler's speed lies in between the two.

3. The long-term scheduler controls the degree of multiprogramming; the short-term
scheduler provides lesser control over the degree of multiprogramming; the medium-term
scheduler reduces the degree of multiprogramming.

4. The long-term scheduler is almost absent or minimal in time-sharing systems; the
short-term scheduler is also minimal in time-sharing systems; the medium-term scheduler is
a part of time-sharing systems.

5. The long-term scheduler selects processes from the pool and loads them into memory for
execution; the short-term scheduler selects among the processes that are ready to execute;
the medium-term scheduler can re-introduce a process into memory so that its execution can
be continued.

Context Switching

A context switch is the mechanism of storing and restoring the state (context) of a CPU in
the Process Control Block, so that process execution can be resumed from the same point at
a later time. Using this technique, a context switcher enables multiple processes to share a
single CPU. Context switching is an essential feature of a multitasking operating system.

When the scheduler switches the CPU from executing one process to execute another, the state
from the current running process is stored into the process control block. After this, the state for
the process to run next is loaded from its own PCB and used to set the PC, registers, etc. At that
point, the second process can start executing.

Context switches are computationally intensive, since register and memory state must be
saved and restored. To reduce context-switching time, some hardware systems employ two
or more sets of processor registers. When a process is switched out, the following
information is stored for later use.

 Program Counter
 Scheduling information
 Base and limit register value
 Currently used register
 Changed State
 I/O State information
 Accounting information

…………………..Thank You……................
