
UNIT -2

Process Synchronization
Process synchronization involves the coordination and control of concurrent processes to ensure
correct and predictable outcomes.
Need for synchronization:
1. Processes can execute concurrently and may be interrupted at any time, partially completing
execution.
2. Concurrent access to shared data may result in data inconsistency
3. Maintaining data consistency requires mechanisms to ensure the orderly execution of
cooperating processes
In a multiprogramming or multithreaded system, processes may be either independent or
cooperating. An independent process does not depend on any other process; it has its own data
and resources, and its execution does not affect other processes.
A cooperating process is one that can affect or be affected by other processes executing in the
system. Cooperating processes can either directly share a logical address space or be allowed to
share data only through files or messages.

Example: Producer–Consumer Problem:


Suppose that we wanted to provide a solution to the consumer-producer problem that fills all the
buffers. We can do so by having an integer counter that keeps track of the number of full
buffers. Initially, counter is set to 0. It is incremented by the producer after it produces a new
buffer and is decremented by the consumer after it consumes a buffer.
The code for the producer process is as follows:
while (true) {
    /* produce an item */
    while (counter == BUFFER_SIZE)
        ; /* do nothing */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}
The code for the consumer process is as follows:
while (true) {
    while (counter == 0)
        ; /* do nothing */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    /* consume the item in next_consumed */
}
Although the producer and consumer routines shown above are correct separately, they may not
function correctly when executed concurrently.

Race Condition:
A race condition is an undesirable situation that occurs when a device or system attempts to
perform two or more operations at the same time, but the operations must be done in the proper
sequence to be carried out correctly.
It is a situation where several processes access and manipulate the same data concurrently and
the outcome of the execution depends on the particular order in which the access takes place.
Race conditions lead to data inconsistency and incomplete operations.
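The producer–consumer counter from above illustrates this. Although counter++ and counter-- look like single statements, each is typically implemented as a sequence of machine instructions, for example:

```
/* counter++ (producer) */          /* counter-- (consumer) */
register1 = counter                 register2 = counter
register1 = register1 + 1           register2 = register2 - 1
counter = register1                 counter = register2
```

If counter is 5 and the two sequences are interleaved, for example both processes read counter = 5 before either writes its result back, the final value may be 4 or 6 instead of the correct 5. Which value results depends entirely on the order of the interleaving.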

 Critical Section Problem

Consider a system consisting of n processes {P0, P1, ..., Pn−1}. Each process has a segment of
code, called a critical section, in which the process may be changing common variables, updating
a table, writing a file, and so on. The important feature of the system is that, when one process is
executing in its critical section, no other process is allowed to execute in its critical section. That
is, no two processes are executing in their critical sections at the same time.

“A critical section is a code segment, a crucial part of a process, in which resources shared or
common with other processes may be accessed.”
The critical-section problem is to design a protocol that the processes can use to cooperate. Each
process must ask permission to enter its critical section. The code implementing this has 3 sections:
Entry section, Exit section and Remainder section

The general structure of a typical process Pi to solve the critical section problem is given below:
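A sketch of that general structure (each process repeatedly requests entry, executes its critical section, then exits):

```
do {
    entry section        /* request permission to enter */
        critical section
    exit section         /* announce leaving */
        remainder section
} while (true);
```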

Requirements / Solution to Critical-Section Problem:

1. Mutual Exclusion - If process Pi is executing in its critical section, then no other processes
can be executing in their critical sections
2. Progress - If no process is executing in its critical section and there exist some processes that
wish to enter their critical section, then the selection of the processes that will enter the critical
section next cannot be postponed indefinitely
3. Bounded Waiting - A bound must exist on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its critical
section and before that request is granted
 Assume that each process executes at a nonzero speed.
 No assumption concerning relative speed of the n processes
The operating system uses two approaches to handle the critical-section problem, depending on
whether the kernel is preemptive or non-preemptive:
1. Preemptive – allows preemption of a process while it is running in kernel mode
2. Non-preemptive – a process runs until it exits kernel mode, blocks, or voluntarily yields the CPU
 A non-preemptive kernel is essentially free of race conditions on kernel data structures

 Peterson’s Solution
 It is a classic software-based solution to the critical-section problem. It provides a good
algorithmic description of solving the problem. It is restricted to a two-process
solution.
 It assumes that the load and store machine-language instructions are atomic; that is,
they cannot be interrupted.
 The two processes share two variables:
 int turn;
 Boolean flag[2]
 The variable turn indicates whose turn it is to enter the critical section
 The flag array is used to indicate if a process is ready to enter the critical section.
flag[i] = true implies that process Pi is ready.
 Algorithm for each Process Pi is given as:
Peterson’s solution can be shown to meet the three critical-section requirements:

1. Mutual exclusion is preserved


Pi enters its critical section only if:
either flag[j] == false or turn == i
2. Progress requirement is satisfied
3. Bounded-waiting requirement is met

Synchronization Hardware:
Software-based solutions such as Peterson’s are not guaranteed to work on modern computer
architectures. Many systems therefore provide hardware support for implementing critical-section
code. Solutions to the critical-section problem range from hardware instructions to
software-based APIs available to both kernel developers and application programmers. All of
these solutions are based on the idea of locking, that is, protecting critical regions via locks.
Modern machines provide special atomic hardware instructions.
Atomic = non-interruptible
General solution to solve critical section problem using synchronization hardware is using
locking mechanism shown below.
do {
    acquire lock
        critical section
    release lock
        remainder section
} while (TRUE);

Two types of hardware instructions are used:


1. test_and_set Instruction
2. compare_and_swap Instruction
1. test_and_set() Instruction:
It is a hardware synchronization instruction whose defining property is that it is executed
atomically. The test_and_set() instruction can be defined as:
boolean test_and_set(boolean *target)
{
    boolean rv = *target;
    *target = TRUE;
    return rv;
}
Using the test_and_set() instruction, we can implement mutual exclusion by declaring a
boolean variable lock, initialized to false. The structure of process Pi is shown below.
do {
    while (test_and_set(&lock))
        ; /* do nothing */
    /* critical section */
    lock = false;
    /* remainder section */
} while (true);

2. compare_and_swap() Instruction:


This instruction operates on three operands. The compare_and_swap() instruction is
defined as:
int compare_and_swap(int *value, int expected, int new_value) {
    int temp = *value;
    if (*value == expected)
        *value = new_value;
    return temp;
}

The operand value is set to new_value only if the expression (*value == expected) is
true. Regardless, compare_and_swap() always returns the original value of the variable
value. This instruction is also executed atomically, and mutual exclusion can be provided
among the processes by using a global variable lock, initialized to 0.
The structure of process Pi is:
do {
    while (compare_and_swap(&lock, 0, 1) != 0)
        ; /* do nothing */
    /* critical section */
    lock = 0;
    /* remainder section */
} while (true);

Mutex Locks:
The hardware-based solutions to the critical-section problem are complicated as well as
generally inaccessible to application programmers. OS designers therefore build software tools
to solve the critical-section problem; the simplest of these is the mutex (mutual exclusion)
lock. A mutex lock protects critical regions and thus prevents race conditions by using two
atomic operations, acquire() and release().
The acquire() function acquires the lock, and the release() function releases it. A mutex
lock has a boolean variable available whose value indicates whether the lock is available. If
the lock is available, a call to acquire() succeeds, and the lock is then considered unavailable.
A process that attempts to acquire an unavailable lock is blocked until the lock is released.
The definition of acquire() and release() is as follows:

In order to solve the critical-section problem, calls to acquire() and release() must be performed
atomically. A process Pi simply calls acquire() before its critical section and release() after it,
as in the locking loop shown earlier.
The main disadvantage of this implementation of the mutex lock is that it requires busy waiting.
While a process is in its critical section, any other process that tries to enter its critical section
must loop continuously in the call to acquire(). That is why a mutex lock is also called a
spinlock.

Semaphores:
It is a synchronization tool that provides more sophisticated ways (than Mutex locks) for
processes to synchronize their activities. Semaphore S – integer variable that can only be
accessed via two indivisible (atomic) operations: wait() and signal().
The wait() operation was originally termed P (from the Dutch proberen, “to test”); signal()
was originally called V (from verhogen, “to increment”). The definition of wait() and signal()
is as follows:

All modifications to the integer value of the semaphore in the wait() and signal() operations
must be executed indivisibly.
Two types of semaphores: Counting semaphore & Binary semaphore.
The value of a counting semaphore can range over an unrestricted domain. The value of a
binary semaphore can range only between 0 and 1.
Counting semaphores can be used to control access to a given resource consisting of a finite
number of instances. The semaphore is initialized to the number of resources available. Each
process that wishes to use a resource performs a wait() operation on the semaphore (thereby
decrementing the count). When a process releases a resource, it performs a signal() operation
(incrementing the count). When the count for the semaphore goes to 0, all resources are
being used.
Semaphore Implementation:
To implement semaphores under this definition, we define a semaphore as follows:
typedef struct {
int value;
struct process *list;
} semaphore;
Each semaphore has an integer value and a list of processes. When a process must wait
on a semaphore, it is added to the list of processes. A signal() operation removes one process
from the list of waiting processes and awakens that process.
Now, the wait() semaphore operation can be defined as:
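A sketch of it is below; block() suspends the calling process, and the list manipulation is an OS primitive, so this is pseudocode rather than plain C:

```
wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}
```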

and the signal() semaphore operation can be defined as
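A sketch of it is below; wakeup(P) resumes a blocked process P, again an OS primitive, so this is pseudocode rather than plain C:

```
signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}
```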

Here two operations, block() and wakeup(), are used to handle processes. The block()
operation suspends the process that invokes it. The wakeup(P) operation resumes the
execution of a blocked process P. These two operations are provided by the operating system
as basic system calls.
 Classic Problems of Synchronization
 Bounded-Buffer Problem
 Readers and Writers Problem
 Dining-Philosophers Problem

1. Bounded-Buffer Problem:
It is related to the producer – consumer processes. In this problem, the producer and consumer
processes share the following data structures:
int n;
semaphore mutex = 1;
semaphore empty = n;
semaphore full = 0;
We assume that the pool consists of n buffers, each capable of holding one item. The mutex
semaphore provides mutual exclusion for accesses to the buffer pool and is initialized to the
value 1.
The empty and full semaphores count the number of empty and full buffers. The semaphore
empty is initialized to the value n; the semaphore full is initialized to the value 0.
The code for the producer process is shown below:
The code for the consumer process is shown below:

2. The Readers–Writers Problem:


Suppose that a database is to be shared among several concurrent processes. Some of these
processes may want only to read the database (these are referred to as readers), whereas others
may want to update the database (referred to as writers).
When two readers access the shared data simultaneously, no adverse effects will occur.
However, if a writer and some other process (either a reader or a writer) access the database
simultaneously, then there will be problem of data inconsistency.
There are several variations of how readers and writers are treated, all involving some form of
priority. One of the simplest solutions is to allow multiple readers to read at the same time, while
only a single writer may access the shared data at any time.
In the solution to the first readers–writers problem, the reader processes share the following data
structures:
semaphore rw_mutex = 1;
semaphore mutex = 1;
int read_count = 0;

The semaphores mutex and rw_mutex are initialized to 1; read_count is initialized to 0. The
semaphore rw_mutex is common to both reader and writer processes. The mutex semaphore is
used to ensure mutual exclusion when the variable read_count is updated. The read_count
variable keeps track of how many processes are currently reading the object. The semaphore
rw_mutex functions as a mutual exclusion semaphore for the writers.
The code for a writer process is shown below:
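The standard writer code for this solution can be sketched (using the text's wait()/signal() notation) as:

```
do {
    wait(rw_mutex);
        /* writing is performed */
    signal(rw_mutex);
} while (true);
```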

The code for a reader process is shown below:
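The standard reader code can be sketched (using the text's wait()/signal() notation) as; note that the first reader locks out writers and the last reader lets them back in:

```
do {
    wait(mutex);
    read_count++;
    if (read_count == 1)
        wait(rw_mutex);      /* first reader blocks writers */
    signal(mutex);
        /* reading is performed */
    wait(mutex);
    read_count--;
    if (read_count == 0)
        signal(rw_mutex);    /* last reader releases writers */
    signal(mutex);
} while (true);
```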

In this approach, observe that if a writer is in the critical section and n readers are waiting, then
one reader is queued on rw_mutex, and n − 1 readers are queued on mutex. When a writer
executes signal(rw_mutex), we may resume the execution of either the waiting readers or a
single waiting writer. The selection is made by the scheduler.
3. The Dining-Philosophers Problem
It is a classical example demonstrating multiple resource sharing. Consider five philosophers
who spend their lives thinking and eating. The philosophers share a circular table surrounded by
five chairs, each belonging to one philosopher. In the center of the table is a bowl of rice, and the
table is laid with five single chopsticks.

A philosopher may pick up only one chopstick at a time. One simple solution is to represent each
chopstick with a semaphore. A philosopher tries to grab a chopstick by executing a wait()
operation on that semaphore. Philosopher releases chopsticks by executing the signal() operation
on the appropriate semaphores.
The following shared data will be used in this problem:
Bowl of rice (data set)
Semaphore chopstick [5] initialized to 1

The structure of philosopher i is shown below:
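Using the chopstick semaphores above, the structure of philosopher i can be sketched as:

```
do {
    wait(chopstick[i]);               /* pick up left chopstick */
    wait(chopstick[(i + 1) % 5]);     /* pick up right chopstick */
        /* eat for a while */
    signal(chopstick[i]);             /* put down left chopstick */
    signal(chopstick[(i + 1) % 5]);   /* put down right chopstick */
        /* think for a while */
} while (true);
```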


Although this solution guarantees that no two neighbors are eating simultaneously, it may still
lead to deadlock: if all five philosophers pick up their left chopsticks at the same time, each waits
forever for the right one. Possible remedies to the deadlock problem are:
 Allow at most four philosophers to be sitting simultaneously at the table.
 Allow a philosopher to pick up her chopsticks only if both chopsticks are available.

Monitors
It is a high-level abstraction, an abstract data type, that provides a convenient and
effective mechanism for process synchronization. Since it is an abstract data type, its internal
variables are accessible only by code within its procedures. It ensures that only one process may
be active within the monitor at a time. By itself, however, it is not powerful enough to model
some synchronization schemes.
A monitor encapsulates data together with a set of functions that operate on that data, independent
of any specific implementation of the ADT. The structure of the monitor is as shown below:
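The general structure can be sketched as below; monitor is a language construct, not plain C, and the names are placeholders:

```
monitor monitor_name {
    /* shared variable declarations */

    function P1(...) { ... }
    function P2(...) { ... }
    ...
    function Pn(...) { ... }

    initialization_code(...) { ... }
}
```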
Synchronization scheme can be defined by monitors with the use of one or more variables of
type condition:
condition x, y;

Two operations are allowed on a condition variable:


x.wait() – the process that invokes the operation is suspended until another process invokes x.signal()
x.signal() – resumes exactly one of the processes (if any) that invoked x.wait()
If no process has invoked x.wait() on the variable, x.signal() has no effect

Dining-Philosophers Solution Using Monitors:


This solution imposes the restriction that a philosopher may pick up her chopsticks only if both
of them are available. For this purpose, we introduce the following data structure:
enum {THINKING, HUNGRY, EATING} state[5];
Philosopher i can set the variable state[i] = EATING only if her two neighbors are not eating:
(state[(i+4) % 5] != EATING) and (state[(i+1) % 5] != EATING).
The code of the monitor to the dining philosopher problem is as shown below:
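That monitor can be sketched as below; monitor and condition are language constructs, not plain C. A philosopher calls pickup(i), eats, then calls putdown(i):

```
monitor DiningPhilosophers {
    enum {THINKING, HUNGRY, EATING} state[5];
    condition self[5];        /* lets a hungry philosopher wait */

    void pickup(int i) {
        state[i] = HUNGRY;
        test(i);
        if (state[i] != EATING)
            self[i].wait();   /* wait until both chopsticks are free */
    }

    void putdown(int i) {
        state[i] = THINKING;
        test((i + 4) % 5);    /* left neighbor may now be able to eat */
        test((i + 1) % 5);    /* right neighbor may now be able to eat */
    }

    void test(int i) {
        if (state[(i + 4) % 5] != EATING &&
            state[i] == HUNGRY &&
            state[(i + 1) % 5] != EATING) {
            state[i] = EATING;
            self[i].signal();
        }
    }

    initialization_code() {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;
    }
}
```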
