Unit2 (A) - Final - Process Synchronization
Process Synchronization:
Process Synchronization is the coordination of execution of multiple processes in a multi-process system to
ensure that they access shared resources in a controlled and predictable manner. It aims to resolve the
problem of race conditions and other synchronization issues in a concurrent system.
The main objective of process synchronization is to ensure that multiple processes access shared resources
without interfering with each other, and to prevent the possibility of inconsistent data due to concurrent
access. To achieve this, various synchronization techniques such as semaphores, monitors, and critical
sections are used.
In a multi-process system, synchronization is necessary to ensure data consistency and integrity, and to avoid
the risk of deadlocks and other synchronization problems. Process synchronization is an important aspect of
modern operating systems, and it plays a crucial role in ensuring the correct and efficient functioning of
multi-process systems.
On the basis of synchronization, processes are categorized as one of the following two types:
Independent Process: The execution of one process does not affect the execution of other processes.
Cooperative Process: A process that can affect or be affected by other processes executing in the
system.
The process synchronization problem arises with cooperative processes, because cooperative processes share resources.
Race Condition:
A race condition occurs when more than one process executes the same code or accesses the same memory or shared variable concurrently: the final output, or the value of the shared variable, can then be wrong, because the result depends on which process "wins the race." In general, when several processes access and manipulate the same data concurrently, the outcome depends on the particular order in which the accesses take place. A race condition typically arises inside a critical section: the result of multiple threads executing in the critical section differs according to the order in which the threads run. Race conditions can be avoided if the critical section is treated as an atomic instruction, or through proper thread synchronization using locks or atomic variables.
Critical Section:
A critical section is a code segment that can be accessed by only one process at a time. The critical section contains shared variables that must be synchronized to maintain the consistency of the data. The critical section problem is therefore to design a protocol that cooperative processes can use to access shared resources without creating data inconsistencies. Each process must request permission to enter its critical section in an entry section, and releases the critical section in an exit section.
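The entry and exit sections described above frame the critical section in the general structure of a typical process, as sketched below:

```
do {
    entry section        // request permission to enter
        critical section
    exit section         // release the critical section
        remainder section
} while (TRUE);
```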
Any solution to the critical section problem must satisfy three requirements:
Mutual Exclusion: If a process is executing in its critical section, then no other process is allowed to
execute in the critical section.
Progress: If no process is executing in its critical section and some processes are waiting to enter, then only those processes that are not executing in their remainder sections can participate in deciding which will enter the critical section next, and this selection cannot be postponed indefinitely.
Bounded Waiting: A bound must exist on the number of times that other processes are allowed to enter
their critical sections after a process has made a request to enter its critical section and before that
request is granted.
The general structure of a two-process solution to the critical section problem, Algorithm 1, is:
turn = 0;
do
{
    while (turn != 0)
        ;            // if not P0's turn, wait indefinitely
    // critical section of process P0
    turn = 1;        // after P0 leaves the critical section, let P1 in
    // remainder section
} while (1);         // loop again
The problem with this Algorithm is that it doesn't support the necessary requirement of Progress. It forces the
critical section to be owned in equal turns by P0 -> P1 -> P0 -> P1 -> ... To get over this problem Algorithm 2 is
used where variable turn is replaced by an array flag[]. The general structure of algorithm 2 is:
For process P0:
do
{
    flag[0] = T;         // P0 declares its interest
    while (flag[1])
        ;                // if flag[1] is true, wait indefinitely
    // critical section of process P0
    flag[0] = F;         // P0 leaves, allowing P1 in
    // remainder section
} while (1);             // loop again
For process P1 the structure is symmetric, with flag[1] and flag[0] interchanged. Algorithm 2 guarantees Mutual Exclusion but still violates Progress: if both processes set their flags to true at the same moment, each waits on the other indefinitely and neither enters the critical section. Algorithm 3 therefore combines the turn variable of Algorithm 1 with the flag[] array of Algorithm 2.
Algorithm 3: Peterson's Solution
Peterson’s Solution is a classical software-based solution to the critical section problem. In Peterson’s
solution, we have two shared variables:
boolean flag[i]: Initialized to FALSE, initially no one is interested in entering the critical section
int turn: Indicates whose turn it is to enter the critical section.
Peterson’s Solution preserves all three conditions:
Mutual Exclusion is assured as only one process can access the critical section at any time.
Progress is also assured, as a process outside the critical section does not block other processes from
entering the critical section.
Bounded Waiting is preserved as every process gets a fair chance.
Synchronization Hardware
Hardware support for solving the process synchronization problem is based on atomic instructions such as test_and_set() and compare_and_swap(), which execute as a single uninterruptible unit.
Semaphore
A semaphore is a synchronization tool that provides more sophisticated ways (than mutex locks) for processes to synchronize their activities.
The signal() operation is used to update the value of the semaphore. Since the semaphore value is decreased by one in the wait() operation when a process enters the critical section, the signal() operation increments the value by one when the process leaves, counterbalancing that decrement and allowing waiting processes to proceed into the critical section.
Importantly, signal() is executed only when a process comes out of the critical section; the semaphore value is never incremented before the process exits.
signal(S)
{
S++;
}
Definition of the wait() operation
The wait() operation decides whether a process may enter the critical section or must wait. It works on the value of the semaphore (or mutex): if the semaphore value is zero, the process must wait for another process to exit the critical section; if the value is greater than zero, the process may enter, and wait() decrements the value as it does so. Once a process has entered the critical section, wait() has no further role until that process requests entry again.
wait(S)
{
while (S <= 0)
    ; // busy wait
S--;
}
Types of Semaphores
Counting semaphore – the integer value can range over an unrestricted non-negative domain; it is used to control access to a resource that has a finite number of instances. If the value is greater than or equal to 1, a process may enter the critical section; if the value is 0, the process must wait.
Binary semaphore – the integer value can range only between 0 and 1; it behaves the same as a mutex lock.
Advantages of a Semaphore
o Semaphores are machine independent, since their implementation and code live in the machine-independent code area of the microkernel.
o They strictly enforce mutual exclusion and let processes enter the critical section one at a time (in the case of binary semaphores).
o With blocking semaphores, no processor time is wasted on busy waiting to verify that a condition is met before allowing a process into the critical section.
o Semaphores allow flexible and effective management of resources.
o They prevent multiple processes from entering the critical section simultaneously; because mutual exclusion is achieved this way, they are significantly more efficient than some other synchronization approaches.
Disadvantages of a Semaphore
o With semaphores, a high-priority process may reach the critical section before a low-priority process that requested it earlier, disturbing the intended ordering.
o Because semaphore operations are somewhat complex, the wait() and signal() operations must be designed carefully to avoid deadlocks.
o Programming with semaphores is error-prone, and there is a danger that mutual exclusion will not be achieved.
o The wait() and signal() operations must be carried out in the correct order to prevent deadlocks.
Synchronization Problems:
The Dining-Philosophers Problem: five philosophers share five chopsticks, each represented by a semaphore initialized to 1. The structure of philosopher i is shown below. (Note that this simple solution can deadlock if all five philosophers pick up their left chopsticks at the same time.)
do
{
    wait( chopstick[i] );
    wait( chopstick[(i + 1) % 5] );
    // eat
    signal( chopstick[i] );
    signal( chopstick[(i + 1) % 5] );
    // think
} while (TRUE);
The readers-writers problem is a classical process-synchronization problem. It concerns a data set, such as a file, that is shared among several processes at a time. Some of these processes are readers, which only read the data set and never update it; others are writers, which can both read and write the data set. The problem is to manage synchronization among the various reader and writer processes so that no inconsistency arises in the data set.
Let's understand with an example. If two or more readers access the file at the same time, there is no problem. However, if two writers, or one reader and one writer, access the file at the same time, problems may occur. The task is therefore to design the code so that:
if one reader is reading, no writer is allowed to update the file at the same time;
if one writer is writing, no reader is allowed to read the file at that time; and
if one writer is updating the file, no other writer may update it at the same time.
Multiple readers, however, can access the object simultaneously.
Let us understand the possibility of reading and writing with the table given below:
Process 1    Process 2    Allowed?
Reading      Reading      Yes
Reading      Writing      No
Writing      Reading      No
Writing      Writing      No
Monitors
A monitor is a high-level abstraction that provides a convenient and effective mechanism for process synchronization: only one process may be active within the monitor at a time. Its schematic structure is:
monitor monitor-name
{
    // shared variable declarations
    procedure P1 (...) { ... }
    ...
    procedure Pn (...) { ... }
    initialization code (...) { ... }
}
Condition Variables
● condition x, y;
● Two operations are allowed on a condition variable:
➔ x.wait() – a process that invokes the operation is suspended until x.signal()
➔ x.signal() – resumes one of the processes (if any) that invoked x.wait(); if no process is suspended in x.wait() on the variable, the signal has no effect
● When process P invokes x.signal() while process Q is suspended in x.wait(), two options exist:
➔ Signal and wait – P waits until Q either leaves the monitor or waits for another condition
➔ Signal and continue – Q waits until P either leaves the monitor or waits for another condition
● Both have pros and cons – the language implementer can decide
● Monitors with condition variables are implemented in languages including Mesa, C#, and Java