OS Unit-2 Notes
Process Synchronization
Process synchronization involves the coordination and control of concurrent processes to ensure
correct and predictable outcomes.
Need for synchronization:
1. Processes can execute concurrently and may be interrupted at any time, partially completing
their execution.
2. Concurrent access to shared data may result in data inconsistency.
3. Maintaining data consistency requires mechanisms to ensure the orderly execution of
cooperating processes.
In a multiprogramming or multithreaded system, processes may be either independent or
cooperating. An independent process does not depend on any other process; it has its own data
and resources, and its execution does not affect other processes.
A cooperating process is one that can affect or be affected by other processes executing in the
system. Cooperating processes can either directly share a logical address space or be allowed to
share data only through files or messages.
Race Condition:
A race condition is an undesirable situation that occurs when a device or system attempts to
perform two or more operations at the same time, but the operations must be done in the proper
sequence to be done correctly.
It is a situation where several processes access and manipulate the same data concurrently and
the outcome of the execution depends on the particular order in which the access takes place.
Race conditions lead to data inconsistency and incomplete operations.
Consider a system consisting of n processes {P0, P1, ..., Pn−1}. Each process has a segment of
code, called a critical section, in which the process may be changing common variables, updating
a table, writing a file, and so on. The important feature of the system is that, when one process is
executing in its critical section, no other process is allowed to execute in its critical section. That
is, no two processes are executing in their critical sections at the same time.
“A critical section is a code segment of a process in which the process accesses data or
resources that are shared with other processes.”
The critical-section problem is to design a protocol that the processes can use to cooperate.
Each process must request permission to enter its critical section. The code implementing this
protocol has three sections:
Entry section, Exit section and Remainder section
The general structure of a typical process Pi to solve the critical-section problem is given
below. A solution to the critical-section problem must satisfy the following three requirements:
1. Mutual Exclusion - If process Pi is executing in its critical section, then no other processes
can be executing in their critical sections
2. Progress - If no process is executing in its critical section and there exist some processes that
wish to enter their critical section, then the selection of the processes that will enter the critical
section next cannot be postponed indefinitely
3. Bounded Waiting - A bound must exist on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its critical
section and before that request is granted
Assume that each process executes at a nonzero speed, but no assumption may be made
concerning the relative speed of the n processes.
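The general structure of process Pi referred to above is missing from the notes; in outline (a standard textbook sketch):

```
do {
    [entry section]       /* request permission to enter */

        critical section

    [exit section]        /* announce leaving */

        remainder section

} while (true);
```

The entry and exit sections are the parts that a solution to the critical-section problem must design.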
The operating system uses one of two approaches to handle the critical-section problem,
depending on whether the kernel is preemptive or non-preemptive:
1. Preemptive – allows preemption of a process while it is running in kernel mode.
2. Non-preemptive – a process runs until it exits kernel mode, blocks, or voluntarily yields the
CPU; the kernel is therefore essentially free of race conditions on kernel data structures.
Peterson’s Solution
It is a classic software-based solution to the critical-section problem and provides a good
algorithmic description of solving it. It is restricted to two processes.
It assumes that the load and store machine-language instructions are atomic; that is, they
cannot be interrupted.
The two processes share two variables:
int turn;
boolean flag[2];
The variable turn indicates whose turn it is to enter the critical section
The flag array is used to indicate if a process is ready to enter the critical section.
flag[i] = true implies that process Pi is ready.
The algorithm for each process Pi (i = 0 or 1, with j = 1 − i denoting the other process) is given below:
It can be shown that Peterson’s solution satisfies all three critical-section requirements:
mutual exclusion, progress, and bounded waiting.
Synchronization Hardware:
Software-based solutions such as Peterson’s are not guaranteed to work on modern computer
architectures. Many systems therefore provide hardware support for protecting critical-section
code. There are several further solutions to the critical-section problem, using techniques
ranging from hardware instructions to software-based APIs available to both kernel developers
and application programmers. All these solutions are based on the idea of locking, that is,
protecting critical regions via locks. Modern machines provide special atomic hardware
instructions.
Atomic = non-interruptible
The general solution to the critical-section problem using synchronization hardware is a
locking mechanism, shown below:
do {
    acquire lock
        critical section
    release lock
        remainder section
} while (TRUE);
The operand value is set to new_value only if the expression (*value == expected) is true.
Regardless, compare_and_swap() always returns the original value of the variable value. The
instruction is executed atomically, so mutual exclusion can be provided among the processes
by using a global variable lock, initialized to 0.
The structure of process Pi is:
do {
    while (compare_and_swap(&lock, 0, 1) != 0)
        ; /* do nothing */

    /* critical section */

    lock = 0;

    /* remainder section */
} while (true);
Mutex Locks:
The hardware-based solutions to the critical-section problem are complicated and generally
inaccessible to application programmers. OS designers therefore build software tools to solve
the critical-section problem; the simplest of these is the mutex (mutual exclusion) lock. A
mutex lock protects critical regions, and thus prevents race conditions, by means of two atomic
operations: acquire() and release().
The acquire() function acquires the lock, and the release() function releases it. A mutex
lock has a boolean variable available whose value indicates whether the lock is available. If
the lock is available, a call to acquire() succeeds, and the lock is then considered unavailable.
A process that attempts to acquire an unavailable lock is blocked until the lock is released.
The definition of acquire() and release() is as follows:
In order to solve the critical-section problem correctly, calls to acquire() and release() must be
performed atomically. A process Pi simply calls acquire() before entering its critical section
and release() on leaving it.
The main disadvantage of this implementation of the mutex lock is that it requires busy waiting.
While a process is in its critical section, any other process that tries to enter its critical section
must loop continuously in the call to acquire(). For this reason, this type of mutex lock is also
called a spinlock.
Semaphores:
It is a synchronization tool that provides more sophisticated ways (than mutex locks) for
processes to synchronize their activities. A semaphore S is an integer variable that, apart from
initialization, can only be accessed via two indivisible (atomic) operations: wait() and signal().
The wait() operation was originally termed P (from the Dutch proberen, “to test”); signal()
was originally called V (from verhogen, “to increment”). The definition of wait() and signal()
is as follows:
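The definitions are missing from the notes. The standard textbook definitions, written as C-style pseudocode, are given below; block() and wakeup() are the operating-system primitives described in the next paragraph.

```
typedef struct {
    int value;
    struct process *list;   /* queue of processes waiting on S */
} semaphore;

wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();            /* suspend the calling process */
    }
}

signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);          /* resume the blocked process P */
    }
}
```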
Here two operations, block() and wakeup(), are used to handle processes. The block()
operation suspends the process that invokes it. The wakeup(P) operation resumes the
execution of a blocked process P. Both operations are provided by the operating system
as basic system calls.
Classic Problems of Synchronization
Bounded-Buffer Problem
Readers and Writers Problem
Dining-Philosophers Problem
1. Bounded-Buffer Problem:
It is related to the producer – consumer processes. In this problem, the producer and consumer
processes share the following data structures:
int n;
semaphore mutex = 1;
semaphore empty = n;
semaphore full = 0;
We assume that the pool consists of n buffers, each capable of holding one item. The mutex
semaphore provides mutual exclusion for accesses to the buffer pool and is initialized to the
value 1.
The empty and full semaphores count the number of empty and full buffers. The semaphore
empty is initialized to the value n; the semaphore full is initialized to the value 0.
The code for the producer process is shown below:
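The producer code is missing from the notes; the standard sketch, using the semaphores declared above, is:

```
do {
    ...
    /* produce an item in next_produced */
    ...
    wait(empty);            /* wait for a free buffer slot */
    wait(mutex);            /* enter the critical section  */
    ...
    /* add next_produced to the buffer */
    ...
    signal(mutex);
    signal(full);           /* one more full buffer        */
} while (true);
```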
The code for the consumer process is shown below:
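The consumer code is missing from the notes; the standard sketch is symmetric to the producer's:

```
do {
    wait(full);             /* wait for a full buffer      */
    wait(mutex);            /* enter the critical section  */
    ...
    /* remove an item from the buffer into next_consumed */
    ...
    signal(mutex);
    signal(empty);          /* one more empty buffer       */
    ...
    /* consume the item in next_consumed */
    ...
} while (true);
```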
2. Readers and Writers Problem:
A data object is shared among several concurrent processes, some of which only read it
(readers) while others update it (writers). The reader and writer processes share the
semaphores mutex and rw_mutex, both initialized to 1, and an integer read_count initialized
to 0. The semaphore rw_mutex is common to both reader and writer processes. The mutex
semaphore is used to ensure mutual exclusion when the variable read_count is updated. The
read_count variable keeps track of how many processes are currently reading the object. The
semaphore rw_mutex functions as a mutual exclusion semaphore for the writers.
The code for a writer process and for a reader process is shown below:
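Both code fragments are missing from the notes; the standard textbook versions are:

```
/* writer process */
do {
    wait(rw_mutex);         /* exclusive access to the object     */
    ...
    /* writing is performed */
    ...
    signal(rw_mutex);
} while (true);

/* reader process */
do {
    wait(mutex);
    read_count++;
    if (read_count == 1)    /* first reader locks out the writers */
        wait(rw_mutex);
    signal(mutex);
    ...
    /* reading is performed */
    ...
    wait(mutex);
    read_count--;
    if (read_count == 0)    /* last reader lets the writers in    */
        signal(rw_mutex);
    signal(mutex);
} while (true);
```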
In this approach, if a writer is in the critical section and n readers are waiting, then one reader
is queued on rw_mutex, and n − 1 readers are queued on mutex. When a writer executes
signal(rw_mutex), we may resume the execution of either the waiting readers or a single
waiting writer; the selection is made by the scheduler.
3. The Dining-Philosophers Problem
It is a classical example demonstrating multiple resource sharing. Consider five philosophers
who spend their lives thinking and eating. The philosophers share a circular table surrounded by
five chairs, each belonging to one philosopher. In the center of the table is a bowl of rice, and the
table is laid with five single chopsticks.
A philosopher may pick up only one chopstick at a time. One simple solution is to represent each
chopstick with a semaphore. A philosopher tries to grab a chopstick by executing a wait()
operation on that semaphore. Philosopher releases chopsticks by executing the signal() operation
on the appropriate semaphores.
The following shared data are used in this problem:
Bowl of rice (data set)
semaphore chopstick[5], each element initialized to 1
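The structure of philosopher i then looks as follows (a standard textbook sketch; note that this simple solution can deadlock if all five philosophers pick up their left chopstick simultaneously):

```
do {
    wait(chopstick[i]);             /* pick up left chopstick   */
    wait(chopstick[(i + 1) % 5]);   /* pick up right chopstick  */
    ...
    /* eat for a while */
    ...
    signal(chopstick[i]);           /* put down left chopstick  */
    signal(chopstick[(i + 1) % 5]); /* put down right chopstick */
    ...
    /* think for a while */
    ...
} while (true);
```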
Monitors
A monitor is a high-level abstraction, an abstract data type (ADT), that provides a convenient
and effective mechanism for process synchronization. Since it is an abstract data type, its
internal variables are accessible only by code within the monitor's own procedures. A monitor
ensures that only one process may be active within it at a time. On its own, however, it is not
powerful enough to model some synchronization schemes.
A monitor encapsulates data with a set of functions that operate on that data, independent of
any specific implementation of the ADT. The structure of a monitor is shown below:
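The structure itself is missing from the notes; in outline (standard textbook pseudocode):

```
monitor monitor_name
{
    /* shared variable declarations */

    function P1(...) { ... }
    function P2(...) { ... }
    ...
    function Pn(...) { ... }

    initialization_code(...) { ... }
}
```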
Tailor-made synchronization schemes can be defined within a monitor by declaring one or
more variables of type condition:
condition x, y;
The only operations that can be invoked on a condition variable are wait() and signal(), for
example x.wait() and x.signal().