
Module 2

PROCESS SYNCHRONIZATION AND DEADLOCKS


Process Synchronization:
Critical Section Problem, Peterson’s Solution,
Synchronization Hardware, Semaphores, Synchronization Problems, Monitors.
Deadlocks: System Model, Deadlock characterization, Methods for handling deadlocks,
Prevention, Detection, Avoidance, Recovery from deadlock.

Process Synchronization:
Process Synchronization is the coordination of execution of multiple processes in a multi-process system to
ensure that they access shared resources in a controlled and predictable manner. It aims to resolve the
problem of race conditions and other synchronization issues in a concurrent system.
The main objective of process synchronization is to ensure that multiple processes access shared resources
without interfering with each other, and to prevent the possibility of inconsistent data due to concurrent
access. To achieve this, various synchronization techniques such as semaphores, monitors, and critical
sections are used.
In a multi-process system, synchronization is necessary to ensure data consistency and integrity, and to avoid
the risk of deadlocks and other synchronization problems. Process synchronization is an important aspect of
modern operating systems, and it plays a crucial role in ensuring the correct and efficient functioning of
multi-process systems.
On the basis of synchronization, processes are categorized as one of the following two types:
 Independent Process: The execution of one process does not affect the execution of other processes.
 Cooperative Process: A process that can affect or be affected by other processes executing in the
system.
The process synchronization problem arises in the case of cooperative processes, because resources are
shared among cooperative processes.
Race Condition:
When more than one process executes the same code or accesses the same memory or shared variable
concurrently, the output or final value of the shared variable may be wrong: each process "races" to
complete its access first, and the outcome depends on the particular order in which the accesses take
place. This situation is known as a race condition. A race condition may occur inside a critical section,
when the result of executing multiple threads in the critical section differs according to the order in
which the threads run. Race conditions in critical sections can be avoided if the critical section is treated
as an atomic instruction; proper thread synchronization using locks or atomic variables also prevents
race conditions.

Critical Section Problem:

A critical section is a code segment that can be accessed by only one process at a time. The critical section
contains shared variables that need to be synchronized to maintain the consistency of data. The critical
section problem, then, is to design a protocol by which cooperating processes can access shared resources
without creating data inconsistencies. Each process's code is structured as an entry section (where the
process requests permission to enter its critical section), the critical section itself, an exit section, and a
remainder section.
Any solution to the critical section problem must satisfy three requirements:
 Mutual Exclusion: If a process is executing in its critical section, then no other process is allowed to
execute in the critical section.
 Progress: If no process is executing in the critical section and other processes are waiting outside the
critical section, then only those processes that are not executing in their remainder section can
participate in deciding which will enter in the critical section next, and the selection can not be
postponed indefinitely.
 Bounded Waiting: A bound must exist on the number of times that other processes are allowed to enter
their critical sections after a process has made a request to enter its critical section and before that
request is granted.

Two Process Solutions:


To motivate Peterson's solution, it helps to look first at two simpler two-process algorithms and see why
they fail.

The general structure of a two-process solution to the critical section problem, Algorithm 1, is (shown
for process P0; the code for P1 is symmetric, with the roles of 0 and 1 exchanged):

turn = 0;               // shared variable
do
{
    while (turn != 0);  // if not P0's turn, wait
    // critical section of P0
    turn = 1;           // after P0 leaves its critical section, let P1 in
    // remainder section
} while (1);
The problem with this algorithm is that it does not satisfy the Progress requirement: it forces the critical
section to be entered in strict alternation, P0 -> P1 -> P0 -> P1 -> ..., so if it is P1's turn but P1 does not
wish to enter, P0 is blocked even though the critical section is free. To get over this problem, Algorithm 2
replaces the variable turn with an array flag[]. The general structure of Algorithm 2 (again for P0) is:

do
{
    flag[0] = TRUE;     // P0 wants to enter
    while (flag[1]);    // if flag[1] is true, wait

    // critical section of P0

    flag[0] = FALSE;    // P0 no longer needs the critical section
    // remainder section
} while (1);
Here a process can execute its critical section repeatedly if it needs to. (This algorithm still does not
satisfy Progress: if P0 and P1 set their flags to true at the same time, each waits on the other's flag
indefinitely.)

A natural question: why can't we set the variable turn inside the do-while loop of Algorithm 1, in the
same way flag[] is set in Algorithm 2? For process P0:

do
{
    turn = 0;
    while (turn != 0);  // if not P0's turn, wait
    // critical section of P0
    turn = 1;           // after P0 leaves its critical section, let P1 in
    // remainder section
} while (1);

For process P1:

do
{
    turn = 1;
    while (turn != 1);  // if not P1's turn, wait
    // critical section of P1
    turn = 0;           // after P1 leaves its critical section, let P0 in
    // remainder section
} while (1);

This would indeed let a process re-enter its critical section repeatedly, but it violates Mutual Exclusion.
Suppose P0 sets turn = 0, passes its while test, and enters the critical section; P1 may now set turn = 1,
pass its own while test, and enter the critical section at the same time. Each process grants entry to
itself by writing its own value into turn, rather than requesting it, so both can be inside together. This is
why the modification is not used.

Algorithm 3: Peterson's Solution

Peterson's Solution is a classical software-based solution to the critical section problem. It combines the
ideas of Algorithms 1 and 2, using two shared variables:
 boolean flag[2]: initialized to FALSE; flag[i] = TRUE means process Pi wants to enter its critical section
 int turn: indicates whose turn it is to enter the critical section
Peterson’s Solution preserves all three conditions:
 Mutual Exclusion is assured as only one process can access the critical section at any time.
 Progress is also assured, as a process outside the critical section does not block other processes from
entering the critical section.
 Bounded Waiting is preserved as every process gets a fair chance.
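The notes above describe the shared variables but not the algorithm itself; below is a minimal runnable
sketch in Python, with two threads standing in for processes P0 and P1. (In CPython the global
interpreter lock makes the plain assignments behave sequentially consistently enough for this
demonstration; a real implementation cannot assume this and needs memory barriers.)

```python
import threading

# Shared state for Peterson's algorithm (two threads, ids 0 and 1)
flag = [False, False]   # flag[i]: thread i wants to enter its critical section
turn = 0                # whose turn it is to yield
counter = 0             # shared data protected by the critical section

def worker(i, iterations):
    global turn, counter
    j = 1 - i                          # index of the other thread
    for _ in range(iterations):
        # entry section
        flag[i] = True                 # declare interest
        turn = j                       # give the other thread priority
        while flag[j] and turn == j:   # wait while the other is interested
            pass
        # critical section
        counter += 1
        # exit section
        flag[i] = False

threads = [threading.Thread(target=worker, args=(i, 100)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter == 200: no increment was lost, so mutual exclusion held
```

Setting turn to the *other* thread's index (rather than one's own, as in the flawed modification of
Algorithm 1 above) is what makes the entry protocol a request instead of a self-grant.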
Synchronization Hardware
The following are the algorithms in the hardware approach of solving Process Synchronization problem:

1. Test and Set


2. Swap
Hardware instructions in many operating systems help in the effective solution of critical section problems.
1. Test and Set:
Here the shared variable is lock, initialized to false. TestAndSet(lock) atomically returns the current
value of lock and sets lock to true. The first process enters the critical section at once, because
TestAndSet(lock) returns false and the while loop is exited. Other processes cannot enter now: lock is
true, so for them TestAndSet(lock) keeps returning true and the while loop continues to spin. Mutual
exclusion is therefore ensured. Once the first process leaves the critical section, it sets lock back to
false, so the waiting processes can enter one by one; Progress is also ensured. However, no queue is
maintained: after the first process, any process that happens to find lock false can enter, so Bounded
Waiting is not ensured.
Test and Set Pseudocode –
// Shared variable lock initialized to false
boolean lock = false;

boolean TestAndSet(boolean &target)
{
    boolean rv = target;   // this whole function executes atomically in hardware
    target = true;
    return rv;
}

while (1)
{
    while (TestAndSet(lock));   // spin until lock is observed to be false
    // critical section
    lock = false;
    // remainder section
}
2. Swap:
The Swap algorithm is very similar to TestAndSet. Instead of setting lock to true directly, each process
sets its own variable key to true and then atomically swaps it with lock. The first process sets key = true
and, in while(key), swaps: now lock = true and key = false, so the while loop breaks and the process
enters the critical section. When another process tries to enter, it sets key = true and swaps; since lock
is already true, after the swap lock = true and key = true, so while(key) keeps spinning and the second
process cannot enter the critical section. Mutual exclusion is ensured. On leaving the critical section,
the first process sets lock = false, so some waiting process's next swap will obtain it; Progress is
ensured. However, as with TestAndSet, no ordering is imposed on the waiting processes, so Bounded
Waiting is not ensured.
Swap Pseudocode –
// Shared variable lock initialized to false;
// each process has its own key, also initialized to false
boolean lock = false;
boolean key;   // per-process variable

void swap(boolean &a, boolean &b)   // executed atomically in hardware
{
    boolean temp = a;
    a = b;
    b = temp;
}

while (1)
{
    key = true;
    while (key)
        swap(lock, key);
    // critical section
    lock = false;
    // remainder section
}

Semaphore
-> A synchronization tool that provides more sophisticated ways (than mutex locks) for processes to
synchronize their activities.

-> A semaphore is an integer variable used to solve the critical section problem.

-> It is accessed only through two indivisible (atomic) operations:

● wait() and signal()
● originally called P() and V()

Definition of the signal() operation

The signal operation updates the value of the semaphore. It is executed when a process leaves its
critical section: since wait() decremented the semaphore by one on entry, signal() counterbalances this
by incrementing the semaphore by one, allowing another waiting process to enter the critical section.
Importantly, signal() is executed only when a process comes out of the critical section; the semaphore
value cannot be incremented before the process exits.

signal(S)
{
    S++;
}
Definition of the wait() operation

The wait operation decides whether a process may enter its critical section or must wait. It works on
the value of the semaphore: if the semaphore value is zero, the process must wait for some other
process to exit the critical section; if the value is greater than zero, the process decrements it and
enters. The wait operation is executed on entry to the critical section; once the process is inside, wait()
has no further role.

wait(S)
{
    while (S <= 0)
        ;   // busy wait
    S--;
}
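A minimal sketch of these two operations in Python, assuming a condition-variable implementation
that blocks waiters rather than busy waiting (as implementations with a waiting queue do); the class
name and structure are illustrative:

```python
import threading

class Semaphore:
    """Sketch of a counting semaphore with the wait()/signal() names used
    above, but blocking on a condition variable instead of busy waiting."""

    def __init__(self, value):
        self._value = value
        self._cond = threading.Condition()

    def wait(self):                    # P(): block until S > 0, then S--
        with self._cond:
            while self._value <= 0:
                self._cond.wait()
            self._value -= 1

    def signal(self):                  # V(): S++ and wake one waiter
        with self._cond:
            self._value += 1
            self._cond.notify()

# Usage: a binary semaphore (initial value 1) enforcing mutual exclusion
mutex = Semaphore(1)
counter = 0

def worker():
    global counter
    for _ in range(1000):
        mutex.wait()
        counter += 1    # critical section
        mutex.signal()

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter == 3000: every increment happened under mutual exclusion
```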
Types of Semaphores

Counting semaphore – the integer value can range over an unrestricted domain.

If the value of a counting semaphore is greater than or equal to 1, a process can enter the critical
section (or, more generally, acquire one instance of the resource being counted). If the value is 0, the
process must wait.

Binary semaphore – the integer value can range only between 0 and 1; it behaves the same as a mutex
lock.

Advantages of a Semaphore

o Semaphores are machine independent, since their implementation is written in the machine-
independent code area of the microkernel.
o They strictly enforce mutual exclusion (in the case of binary semaphores) and let processes enter the
critical section one at a time.
o With a blocking implementation of semaphores, no processor time is wasted on busy waiting to
check whether a condition holds before a process is allowed into the critical section.
o Semaphores allow good management of resources; a counting semaphore, for example, can control
access to a pool of identical resource instances.
o They forbid multiple processes from entering the critical section at once; since mutual exclusion is
achieved this way, they are more efficient than many other synchronization approaches.

Disadvantages of a Semaphore

o The use of semaphores can allow high-priority processes to enter the critical section repeatedly
ahead of low-priority processes, and priority inversion is possible.
o Because semaphores are somewhat complex, the wait and signal operations must be designed in a
way that avoids deadlocks.
o Programming with semaphores is challenging and error-prone, and there is a danger that mutual
exclusion will not be achieved.
o The wait() and signal() operations must be carried out in the appropriate order to prevent
deadlocks.

Synchronization Problems:

1. Producer-Consumer Problem

Problem Statement – We have a buffer of fixed size, say n slots. A producer can produce an item and
place it in the buffer. A consumer can pick items from the buffer and consume them. We need to ensure
that while the producer is placing an item in the buffer, the consumer is not consuming an item at the
same time. In this problem, the buffer is the critical section.
To solve this problem, we use two counting semaphores – full and empty – together with a binary
semaphore mutex. "full" keeps track of the number of items in the buffer at any given time, and
"empty" keeps track of the number of unoccupied slots.
Initialization of semaphores –
mutex = 1 // guards access to the buffer
full = 0  // initially all slots are empty, so no slot is full
empty = n // all n slots are empty initially
Solution for Producer –
do {
    // produce an item
    wait(empty);
    wait(mutex);
    // place the item in the buffer
    signal(mutex);
    signal(full);
} while (true);
When the producer produces an item, the value of "empty" is reduced by 1 because one slot will now
be filled. The value of mutex is also reduced, to prevent the consumer from accessing the buffer. Once
the producer has placed the item, the value of "full" is increased by 1, and mutex is increased by 1
because the producer's task is complete and the consumer may access the buffer.
Solution for Consumer –
do{
wait(full);
wait(mutex);
// consume item from buffer
signal(mutex);
signal(empty);
}while(true)
As the consumer removes an item from the buffer, the value of "full" is reduced by 1, and mutex is also
reduced so that the producer cannot access the buffer at that moment. Once the consumer has
consumed the item, it increases the value of "empty" by 1 and releases mutex.
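The two loops above can be exercised directly; here is a sketch using Python's built-in
threading.Semaphore for empty, full, and mutex, where acquire() plays the role of wait() and release()
of signal(). The buffer size n = 5 and the item count are arbitrary choices for the illustration.

```python
import threading
from collections import deque

n = 5
buffer = deque()                 # the bounded buffer (critical section)
mutex = threading.Semaphore(1)   # guards the buffer
empty = threading.Semaphore(n)   # counts free slots
full = threading.Semaphore(0)    # counts filled slots
consumed = []

def producer(items):
    for item in items:
        empty.acquire()          # wait(empty)
        mutex.acquire()          # wait(mutex)
        buffer.append(item)      # place the item in the buffer
        mutex.release()          # signal(mutex)
        full.release()           # signal(full)

def consumer(count):
    for _ in range(count):
        full.acquire()           # wait(full)
        mutex.acquire()          # wait(mutex)
        consumed.append(buffer.popleft())  # consume an item
        mutex.release()          # signal(mutex)
        empty.release()          # signal(empty)

p = threading.Thread(target=producer, args=(list(range(100)),))
c = threading.Thread(target=consumer, args=(100,))
p.start(); c.start()
p.join(); c.join()
# consumed == [0, 1, ..., 99] and the buffer ends up empty
```

Swapping the order of wait(empty) and wait(mutex) in the producer would risk deadlock: the producer
could hold mutex while blocked on empty, preventing the consumer from ever freeing a slot.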
2. Dining-Philosophers Problem
The Dining-Philosophers Problem is a classic example of a large class of concurrency-control problems.
● It is a simple representation of the need to allocate several resources among several processes
in a deadlock-free and starvation-free manner.
● Consider five philosophers who spend their lives thinking and eating.
● The philosophers share a circular table surrounded by five chairs, each belonging to one
philosopher.
● In the center of the table is a bowl of rice, and the table is laid with five single chopsticks.
● A philosopher alternates between thinking and eating; when hungry, she tries to pick up the two
chopsticks closest to her: the chopsticks between her and her left and right neighbors.
● A philosopher may pick up only one chopstick at a time, and obviously cannot pick up a chopstick
that is already in the hand of a neighbor.
● When a hungry philosopher has both her chopsticks at the same time, she eats without releasing
them.
● When she is finished eating, she puts down both of her chopsticks and starts thinking again.
❖ She needs both chopsticks to eat, and releases both chopsticks when done.
In the case of 5 philosophers:
➔ Shared data
➔ Bowl of rice (data set)
➔ Semaphore chopstick[5]

● One simple solution is to represent each chopstick with a semaphore, initialized to 1.
● A philosopher tries to grab a chopstick by executing a wait() operation on that semaphore.
● She releases her chopsticks by executing the signal() operation on those semaphores.
● Thus, the shared data are: semaphore chopstick[5];
● The structure of philosopher i:

do
{
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);

    // eat

    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);

    // think

} while (TRUE);
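Note that this simple structure can deadlock: if all five philosophers pick up their left chopstick at the
same moment, each waits forever for her right one. Below is a Python sketch that breaks the symmetry
by always acquiring the lower-numbered chopstick first (resource ordering, one standard remedy); the
round count is an arbitrary choice.

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]
meals = [0] * N   # how many times each philosopher has eaten

def philosopher(i, rounds):
    left, right = i, (i + 1) % N
    # Acquire chopsticks in global index order to avoid the circular wait
    # that arises when every philosopher grabs her left chopstick first.
    first, second = (left, right) if left < right else (right, left)
    for _ in range(rounds):
        chopstick[first].acquire()    # wait(chopstick[first])
        chopstick[second].acquire()   # wait(chopstick[second])
        meals[i] += 1                 # eat
        chopstick[second].release()   # signal(chopstick[second])
        chopstick[first].release()    # signal(chopstick[first])
        # think

threads = [threading.Thread(target=philosopher, args=(i, 50)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# every philosopher completes all 50 meals: no deadlock occurred
```

Philosopher 4 is the asymmetric one here: her "first" chopstick is chopstick 0, not chopstick 4, which is
exactly what prevents the cycle of the naive solution.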

3. Readers-Writers Problem

The readers-writers problem is a classical process synchronization problem. It concerns a data set, such
as a file, that is shared among several processes at a time. Among these processes, some are readers,
which only read the data set and perform no updates, and some are writers, which can both read and
write the data set.
The readers-writers problem is about managing synchronization among the various reader and writer
processes so that no inconsistency arises in the data set.

For example, if two or more readers access the file at the same time, there is no problem. However, if
two writers, or one reader and one writer, access the file at the same time, problems may occur. The
task is therefore to design the code so that: if a reader is reading, no writer may update the file at the
same time; if a writer is writing, no reader may read the file at that time; and if one writer is updating
the file, no other writer may update it at the same time. Multiple readers, however, may access the
object simultaneously.

Let us understand the possibilities of reading and writing with the table given below:

TABLE 1

Case     Process 1    Process 2    Allowed / Not Allowed
Case 1   Writing      Writing      Not Allowed
Case 2   Reading      Writing      Not Allowed
Case 3   Writing      Reading      Not Allowed
Case 4   Reading      Reading      Allowed
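A sketch of the classical first readers-writers solution in Python: a semaphore rw_mutex is held by a
writer, or by the group of readers (the first reader acquires it, the last releases it), and a mutex protects
the reader count. The thread and iteration counts are arbitrary choices for the illustration.

```python
import threading

rw_mutex = threading.Semaphore(1)   # held by a writer, or by the readers as a group
mutex = threading.Semaphore(1)      # protects read_count
read_count = 0
shared = {"value": 0}               # the shared data set
reads = []                          # values observed by readers

def reader(times):
    global read_count
    for _ in range(times):
        mutex.acquire()             # wait(mutex)
        read_count += 1
        if read_count == 1:
            rw_mutex.acquire()      # first reader locks out writers
        mutex.release()             # signal(mutex)
        reads.append(shared["value"])   # reading happens here
        mutex.acquire()
        read_count -= 1
        if read_count == 0:
            rw_mutex.release()      # last reader lets writers back in
        mutex.release()

def writer(times):
    for _ in range(times):
        rw_mutex.acquire()          # wait(rw_mutex)
        shared["value"] += 1        # writing: exclusive access
        rw_mutex.release()          # signal(rw_mutex)

threads = ([threading.Thread(target=reader, args=(100,)) for _ in range(3)] +
           [threading.Thread(target=writer, args=(50,)) for _ in range(2)])
for t in threads:
    t.start()
for t in threads:
    t.join()
# no writer update is lost, and every read sees a consistent value
```

This "readers-preference" variant matches the table above, but a steady stream of readers can starve
the writers, which is why second (writers-preference) and fair variants also exist.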


Monitors
● A high-level abstraction that provides a convenient and effective mechanism for process
synchronization.
● An abstract data type:
➔ private variables are accessible only through its public procedures
● Only one process may be active within the monitor at a time.
● But monitors alone are not powerful enough to model some synchronization schemes.

monitor monitor-name
{
    // shared variable declarations

    procedure P1 (…) { … }
    …
    procedure Pn (…) { … }

    initialization code (…) { … }
}
Condition Variables

● condition x, y;
● Two operations are allowed on a condition variable:
➔ x.wait() – the process that invokes the operation is suspended until x.signal()
➔ x.signal() – resumes one of the processes (if any) that invoked x.wait()
➔ if no process is suspended in x.wait() on the variable, then x.signal() has no effect on the variable

Condition Variables Choices

➔ If process P invokes x.signal(), and process Q is suspended in x.wait(), what should happen next?
◆ Q and P cannot execute in parallel: if Q is resumed, then P must wait.
➔ Options include:
● Signal and wait – P waits until Q either leaves the monitor or waits for another condition.
● Signal and continue – Q waits until P either leaves the monitor or waits for another condition.
● Both have pros and cons, and the language implementer can decide between them.
● Monitors are implemented in languages including Mesa, C#, and Java.
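Python has no monitor construct, but the pattern (private state, public procedures that run one at a
time, condition variables) can be approximated with a class whose methods all hold one lock; Python's
Condition follows the signal-and-continue choice described above. Here is a sketch of a bounded-buffer
monitor, with the class and method names chosen for the illustration.

```python
import threading

class BoundedBuffer:
    """Monitor-style class: private state, and every public method runs
    while holding the same lock, so at most one thread is active inside.
    Two condition variables play the roles of x and y above."""

    def __init__(self, capacity):
        self._lock = threading.Lock()
        self._not_full = threading.Condition(self._lock)    # condition x
        self._not_empty = threading.Condition(self._lock)   # condition y
        self._items = []
        self._capacity = capacity

    def insert(self, item):
        with self._lock:                        # enter the monitor
            while len(self._items) >= self._capacity:
                self._not_full.wait()           # x.wait(): sleep until a slot frees
            self._items.append(item)
            self._not_empty.notify()            # y.signal() (signal and continue)

    def remove(self):
        with self._lock:                        # enter the monitor
            while not self._items:
                self._not_empty.wait()          # y.wait(): sleep until an item arrives
            item = self._items.pop(0)
            self._not_full.notify()             # x.signal()
            return item

# Usage: one producer and one consumer sharing a 4-slot buffer
buf = BoundedBuffer(4)
out = []

def produce():
    for i in range(50):
        buf.insert(i)

def consume():
    for _ in range(50):
        out.append(buf.remove())

p = threading.Thread(target=produce)
c = threading.Thread(target=consume)
p.start(); c.start()
p.join(); c.join()
# out == [0, 1, ..., 49]: items pass through the monitor in order
```

The `while` (rather than `if`) around each wait() is required precisely because of signal-and-continue:
the awakened thread must re-check the condition, since the signaller kept running after notifying.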
