
Race Condition:

A race condition is a problem that occurs in an operating system (OS) where two or more processes or
threads are executing concurrently. The outcome of their execution depends on the order in which they
are executed. In a race condition, the exact timing of events is unpredictable, and the outcome of the
execution may vary based on the timing. This can result in unexpected or incorrect behavior of the
system.
For example: If two threads are simultaneously accessing and changing the same shared resource, such
as a variable or a file, the final state of that resource depends on the order in which the threads execute. If
the threads are not correctly synchronized, they can overwrite each other's changes, causing incorrect
results or even system crashes.
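As a concrete sketch, the lost-update scenario above can be reproduced and then fixed in Python (illustrative only; the thread count and loop size are arbitrary). With the lock in place, the read-modify-write on the shared counter becomes atomic and the result is deterministic:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # The lock makes the read-modify-write sequence atomic;
        # without it, two threads could read the same old value
        # and one of the updates would be lost.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000, deterministic because of the lock
```

Removing the `with lock:` line reintroduces the race: the final count can then fall below 400000 because concurrent increments overwrite each other.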

Effects of Race Condition in OS:


In an operating system, a race condition can have several effects, which are discussed below:

1. Deadlocks
A race condition can lead to deadlocks, where two or more processes each wait for the other to finish and so are unable to move forward. This can happen if two processes try to access the same resource simultaneously and the operating system has no mechanism to ensure that only one can access the resource at a time.

2. Data corruption
A race condition can cause data corruption, where two or more processes simultaneously try to
write/update the exact memory location. This can cause the data to be overwritten or mixed up, resulting
in incorrect results or program crashes.

3. Security vulnerabilities
A race condition can also create security vulnerabilities in an operating system. For example, an attacker
may be able to exploit a race condition to gain unauthorized access to a system or to escalate their
privileges.

4. Performance degradation
In some cases, a race condition can also cause performance degradation in an operating system. This can
happen if multiple processes are competing for the same resources, such as CPU time or memory, and the
operating system is unable to allocate these resources efficiently.

Critical regions:

Sometimes a process has to access shared memory or files, or do other critical things that can lead to
races. That part of the program where the shared memory is accessed is called the critical region or
critical section.

If we could arrange matters such that no two processes were ever in their critical regions at the same time,
we could avoid races. Although this requirement avoids race conditions, it is not sufficient for having
parallel processes cooperate correctly and efficiently using shared data.

We need four conditions to hold to have a good solution:

1. No two processes may be simultaneously inside their critical regions.


2. No assumptions may be made about speeds or the number of CPUs.
3. No process running outside its critical region may block other processes.
4. No process should have to wait forever to enter its critical region.

Here process A enters its critical region at time T1. A little later, at time T2, process B attempts to enter its critical region but fails because another process is already in its critical region and we allow only one at a time.

Consequently, B is temporarily suspended until time T3, when A leaves its critical region, allowing B to enter immediately. Eventually B leaves (at T4) and we are back to the original situation with no processes in their critical regions.

Mutual exclusion with busy waiting:

In operating systems, mutual exclusion refers to the concept of ensuring that only one process or thread
can access a shared resource at a time. Busy waiting, also known as spin locking, is a technique where a
process or thread repeatedly checks for the availability of a resource in a tight loop instead of blocking or
yielding the CPU. While busy waiting can be simple to implement, it is generally considered inefficient
and wasteful of CPU resources compared to other synchronization techniques like blocking or signaling.
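A minimal Python sketch of busy waiting (a toy, not production code: real spin locks are built on atomic test-and-set instructions; here `Lock.acquire(blocking=False)` merely stands in for that atomic operation):

```python
import threading

class SpinLock:
    """A toy spin lock: acquire() busy-waits in a tight loop."""
    def __init__(self):
        # A non-blocking acquire on threading.Lock serves as the
        # atomic test-and-set for this sketch.
        self._flag = threading.Lock()

    def acquire(self):
        while not self._flag.acquire(blocking=False):
            pass  # busy wait: burn CPU until the flag is free

    def release(self):
        self._flag.release()

spin = SpinLock()
shared = []

def worker(tag):
    for i in range(3):
        spin.acquire()
        try:
            shared.append((tag, i))  # critical section
        finally:
            spin.release()

ts = [threading.Thread(target=worker, args=(t,)) for t in ("A", "B")]
for t in ts:
    t.start()
for t in ts:
    t.join()
print(len(shared))  # 6
```

Each failed acquire attempt simply loops, consuming CPU; a blocking lock would instead put the thread to sleep until the lock is released.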

Disabling interrupts:

Disabling interrupts in an operating system refers to a technique where the processor's interrupt
mechanism is temporarily turned off. When interrupts are disabled, the processor ignores incoming
interrupts and does not switch to the interrupt handler code. Disabling interrupts can be used in certain
critical sections of code to achieve mutual exclusion or prevent race conditions.

Lock variables:

In operating systems, a lock variable is a synchronization mechanism used to protect shared resources
from simultaneous access by multiple processes or threads. It ensures mutual exclusion, meaning that
only one process or thread can acquire the lock at a time. A lock variable can be implemented using
different types of locks, such as a binary semaphore or a mutex.

Strict alternation:

Strict alternation is a synchronization pattern used in operating systems and concurrent programming to ensure that
two or more processes or threads take turns executing their critical sections in a specific order. It enforces a strict
alternating order of execution between participating entities, allowing each entity to execute its critical section one
after another.
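Strict alternation between two threads can be sketched with a shared turn variable (a busy-waiting toy; real code would use atomics or higher-level primitives):

```python
import threading

turn = 0      # whose turn it is: 0 or 1
output = []

def proc(my_id, n):
    global turn
    for _ in range(n):
        while turn != my_id:    # busy-wait until it is our turn
            pass
        output.append(my_id)    # critical section
        turn = 1 - my_id        # hand the turn to the other process

t0 = threading.Thread(target=proc, args=(0, 3))
t1 = threading.Thread(target=proc, args=(1, 3))
t0.start(); t1.start()
t0.join(); t1.join()

print(output)  # [0, 1, 0, 1, 0, 1] -- strictly alternating
```

Note the drawback: if one thread is slow outside its critical section, the other must still wait for its turn, which violates condition 3 above (no process outside its critical region may block another).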

Peterson’s solution:

Peterson's solution is a classical algorithm for mutual exclusion in operating systems and concurrent
programming. It provides a simple and efficient way to achieve mutual exclusion between two processes
or threads. The solution is named after Gary L. Peterson, who first described it in 1981. Peterson's
solution requires the use of two shared variables: turn and flag.
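A Python sketch of Peterson's algorithm for two threads (illustrative: it relies on CPython executing the assignments sequentially; a C implementation on real hardware would additionally need memory barriers, and the iteration count and switch interval below are arbitrary):

```python
import sys
import threading

sys.setswitchinterval(5e-5)  # switch threads often so the spin loops stay short

flag = [False, False]   # flag[i] is True when process i wants to enter
turn = 0                # which process must wait when both are interested
shared = {"count": 0}
N = 1000

def peterson(me):
    global turn
    other = 1 - me
    for _ in range(N):
        flag[me] = True        # announce interest
        turn = other           # politely give priority to the other process
        while flag[other] and turn == other:
            pass               # busy-wait while the other is interested and has priority
        shared["count"] += 1   # critical section
        flag[me] = False       # leave the critical region

ts = [threading.Thread(target=peterson, args=(i,)) for i in (0, 1)]
for t in ts:
    t.start()
for t in ts:
    t.join()
print(shared["count"])  # 2000
```

Because both threads can never pass the while-loop at the same time, the increments in the critical section are never lost.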

Sleep and wakeup:

In operating systems, sleep and wakeup are two synchronization primitives used to coordinate the
execution of processes or threads. They are typically used in scenarios where a process/thread needs to
wait for a certain condition to be satisfied before proceeding with its execution.

Sleep:

Sleep is an operating system function or system call that allows a process/thread to voluntarily suspend its
execution for a specified period of time or until a specific event occurs. When a process/thread invokes
the sleep operation, it releases the CPU and enters a waiting state, allowing other processes/threads to
execute. The process/thread will remain asleep until it is explicitly awakened or the specified sleep
duration expires.

Wakeup:

Wakeup is an operation that signals a sleeping process/thread to wake up and resume its execution. It is
typically performed by another process/thread or an external event when a specific condition that the
sleeping process/thread is waiting for becomes true. The wakeup operation transfers control back to the
awakened process/thread, allowing it to continue its execution from where it left off.

The producer consumer problem:

The producer-consumer problem is a classic synchronization problem in operating systems and


concurrent programming. It involves coordinating the actions of two entities, namely producers and
consumers, who share a common buffer or queue. The producers are responsible for producing data items
and adding them to the buffer, while the consumers retrieve and consume the data items from the buffer.

The goal is to ensure that the producers and consumers operate correctly and concurrently without any
issues such as data corruption, overflows, or underflows. The problem arises due to the need for
synchronization between the producers and consumers to avoid race conditions and ensure that the buffer
is accessed safely.
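One common solution, sketched here in Python with counting semaphores (the buffer size and item count are arbitrary), blocks the producer when the buffer is full and the consumer when it is empty:

```python
import threading
from collections import deque

BUF_SIZE = 4
buffer = deque()
mutex = threading.Semaphore(1)         # protects the buffer itself
empty = threading.Semaphore(BUF_SIZE)  # counts free slots
full = threading.Semaphore(0)          # counts filled slots
consumed = []

def producer(items):
    for item in items:
        empty.acquire()     # wait for a free slot
        with mutex:
            buffer.append(item)
        full.release()      # signal: one more filled slot

def consumer(n):
    for _ in range(n):
        full.acquire()      # wait for a filled slot
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()     # signal: one more free slot

p = threading.Thread(target=producer, args=(list(range(10)),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start()
p.join(); c.join()
print(consumed)  # [0, 1, 2, ..., 9]
```

With a single producer and a single consumer the items come out in the order they went in, and neither overflow nor underflow can occur.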
Semaphore:
Semaphore is simply a variable that is non-negative and shared between threads. A semaphore is a
signaling mechanism, and another thread can signal a thread that is waiting on a semaphore.

A semaphore uses two atomic operations,

1. Wait: The wait operation decrements the value of its argument S if it is positive. If S is zero or negative, the process waits (loops) until S becomes positive and only then decrements it.

wait(S)
{
   while (S <= 0)
      ;        /* busy wait */
   S--;
}

2. Signal for the process synchronization: The signal operation increments the value of its argument S.

signal(S)
{
   S++;
}

A semaphore either allows or rejects access to the resource, depending on how it is set up.

Use of Semaphore

In the case of a single buffer, we can separate the 4 KB buffer into four 1 KB buffers. A semaphore can be associated with these four buffers, allowing consumers and producers to work on different buffers simultaneously.
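The idea of letting several workers use interchangeable buffers at once can be sketched with a counting semaphore in Python (illustrative; the pool size of 3, the thread count, and the sleep are made up for the example):

```python
import threading
import time

pool = threading.Semaphore(3)   # three identical resources available
in_use = []
max_seen = {"n": 0}
guard = threading.Lock()        # protects the bookkeeping lists only

def use_resource(tag):
    with pool:                  # wait(): blocks when all 3 are taken
        with guard:
            in_use.append(tag)
            max_seen["n"] = max(max_seen["n"], len(in_use))
        time.sleep(0.01)        # pretend to use the resource
        with guard:
            in_use.remove(tag)
    # leaving the with-block performs signal()

ts = [threading.Thread(target=use_resource, args=(i,)) for i in range(8)]
for t in ts:
    t.start()
for t in ts:
    t.join()
print(max_seen["n"])  # never exceeds 3
```

The semaphore admits up to three threads at a time; the rest block in wait() until a slot is signaled free.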

Types of Semaphore

The operating system distinguishes two categories of semaphore: counting semaphores and binary semaphores.

1. Counting Semaphore: The semaphore value S is initialized to the number of resources present in the system. Whenever a process wants to access a resource, it performs the wait() operation on the semaphore, which decrements the semaphore value by one. When it releases the resource, it performs the signal() operation, which increments the semaphore value by one.
When the semaphore count reaches 0, all resources are occupied. If a process then needs a resource, its wait() operation blocks it until the semaphore value becomes greater than 0.

2. Binary semaphore: The value of the semaphore ranges between 0 and 1. It is similar to a mutex lock, but a mutex is a locking mechanism, whereas a semaphore is a signaling mechanism. With a binary semaphore, if a process wants to access the resource, it performs the wait() operation on the semaphore, decrementing its value from 1 to 0. When it releases the resource, it performs a signal() operation, incrementing the value back to 1. If the value of the semaphore is 0 and a process wants to access the resource, its wait() operation blocks it until the process currently using the resource releases it.
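The signaling nature of a semaphore (one thread may perform signal() on a semaphore that another thread is waiting on) can be sketched in Python; this cross-thread release is legal for a semaphore but not for a mutex:

```python
import threading

ready = threading.Semaphore(0)   # binary semaphore used purely for signaling
result = []

def waiter():
    ready.acquire()              # wait(): blocks until another thread signals
    result.append("woken")

def signaler():
    result.append("signaling")
    ready.release()              # signal() performed by a *different* thread

t1 = threading.Thread(target=waiter)
t2 = threading.Thread(target=signaler)
t1.start(); t2.start()
t1.join(); t2.join()
print(result)  # ['signaling', 'woken']
```

The waiter can only append after the signaler has released the semaphore, so the order of the two entries is deterministic.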

Advantages of Semaphore

Here are the following advantages of semaphore, such as:

o A counting semaphore allows more than one thread to use a resource, up to the number of available instances.
o Semaphores are machine-independent, since they are implemented in the machine-independent code of the microkernel.
o They do not allow multiple processes to enter the critical section at the same time.
o Because waiting processes are blocked rather than busy-waiting, semaphores avoid wasting process time and resources.
o They allow flexible management of resources.
Disadvantage of Semaphores

Semaphores also have some disadvantages, such as:

o One of the biggest limitations of a semaphore is priority inversion.
o The operating system has to keep track of all calls to wait and signal on the semaphore.
o Their use is never enforced; it is by convention only.
o The wait and signal operations must be executed in the correct order to avoid deadlocks.
o Semaphore programming is complex, so there is a risk of failing to achieve mutual exclusion.
o It is also not a practical method for large-scale use, as its use leads to loss of modularity.
o Semaphores are prone to programmer error, which may cause deadlock or violation of mutual exclusion.

Mutex:
Mutex is a mutual exclusion object that synchronizes access to a resource. It is created with a unique name at the start of a program. The mutex locking mechanism ensures that only one thread can acquire the mutex and enter the critical section. The thread releases the mutex only when it exits the critical section.

It is a special type of binary semaphore used for controlling access to the shared resource. It includes a
priority inheritance mechanism to avoid extended priority inversion problems. It allows current higher
priority tasks to be kept in the blocked state for the shortest time possible. However, priority inheritance
does not correct priority inversion but only minimizes its effect.

Example:

This is shown with the help of the following example,

wait (mutex);
.....
Critical Section
.....
signal (mutex);

Use of Mutex
A mutex provides mutual exclusion: either the producer or the consumer can hold the key (mutex) and proceed with its work. As long as the producer is filling the buffer, the consumer needs to wait, and vice versa. With a mutex lock, only a single thread at a time can work with the entire buffer.

When a program starts, it asks the system to create a mutex object for a given resource. The system creates the mutex object with a unique name or ID. Whenever a program thread wants to use the resource, it acquires the lock on the mutex object, uses the resource, and after use releases the lock. The next process is then allowed to acquire the lock on the mutex object.

While a process holds the lock on the mutex object, no other thread or process can access that resource. If the mutex object is already locked, a process desiring the lock has to wait and is queued up by the system until the mutex object is unlocked.
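That lifecycle (create the mutex, acquire it before using the resource, release it afterwards) can be sketched with Python's threading.Lock; the thread names are made up for illustration:

```python
import threading

mutex = threading.Lock()   # the program's mutex object for the resource
log = []

def worker(name):
    mutex.acquire()        # acquire the lock; other threads queue up
    try:
        log.append(name + " in critical section")  # use the resource
    finally:
        mutex.release()    # unlock so the next waiter can proceed

ts = [threading.Thread(target=worker, args=("T" + str(i),)) for i in range(3)]
for t in ts:
    t.start()
for t in ts:
    t.join()
print(len(log))  # 3 -- every thread got its turn, one at a time
```

The try/finally ensures the mutex is released even if the critical section raises an exception, so no waiter is blocked forever.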

Advantages of Mutex

Here are the following advantages of the mutex, such as:

o Mutexes are simple locks obtained before entering a critical section and released afterwards.
o Since only one thread is in its critical section at any given time, there are no race conditions, and
data always remain consistent.

Disadvantages of Mutex

Mutex also has some disadvantages, such as:

o If a thread obtains a lock and goes to sleep or is preempted, then the other thread may not move
forward. This may lead to starvation.
o It can't be locked or unlocked from a different context than the one that acquired it.
o Only one thread at a time is allowed into the critical section, even when multiple instances of a resource are available.
o The normal implementation may lead to a busy waiting state, which wastes CPU time.

Monitors in Operating System:


Monitors are used for process synchronization. With the help of programming languages, we can use a
monitor to achieve mutual exclusion among the processes.
 Example of monitors: Java synchronized methods; Java also offers the wait() and notify() constructs.
In other words, monitors are defined as the construct of programming language, which helps in
controlling shared data access.
The Monitor is a module or package which encapsulates shared data structure, procedures, and the
synchronization between the concurrent procedure invocations.
Characteristics of Monitors:

1. Only one process can be active at a time inside the monitor.
2. Monitors are a group of procedures and condition variables that are merged together in a special type of module.
3. A process running outside the monitor cannot access the monitor's internal variables, but it can call the monitor's procedures.
4. Monitors offer a high level of synchronization.
5. Monitors were devised to simplify the complexity of synchronization problems.
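A monitor can be sketched in Python as a class whose every procedure acquires one internal lock, with a condition variable for waiting (the BoundedCounter name and its limit are invented for illustration):

```python
import threading

class BoundedCounter:
    """A tiny monitor: shared state plus procedures, all guarded by
    one internal lock, with a condition variable for waiting."""
    def __init__(self, limit):
        self._limit = limit
        self._value = 0
        self._lock = threading.Lock()
        self._not_full = threading.Condition(self._lock)

    def increment(self):
        with self._lock:                  # only one process active inside
            while self._value >= self._limit:
                self._not_full.wait()     # like Java's wait()
            self._value += 1

    def decrement(self):
        with self._lock:
            self._value -= 1
            self._not_full.notify()       # like Java's notify()

    def value(self):
        with self._lock:
            return self._value

c = BoundedCounter(2)
c.increment()
c.increment()
c.decrement()
print(c.value())  # 1
```

Callers never touch `_value` directly; all access goes through the monitor's procedures, which is exactly the encapsulation the definition above describes.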

Message Passing:
Message Passing provides a mechanism to allow processes to communicate and to synchronize their
actions without sharing the same address space.
For example − chat programs on the World Wide Web.
Now let us discuss the message passing step by step.
Step 1 − Message passing provides two operations which are as follows −
 Send message
 Receive message
Messages sent by a process can be either fixed or variable size.
Step 2 − For fixed-size messages, the system-level implementation is straightforward, but it makes the task of programming more difficult.
Step 3 − Variable-sized messages require a more complex system-level implementation, but the programming task becomes simpler.
Step 4 − If processes P1 and P2 want to communicate, they need to send messages to and receive messages from each other, which means a communication link exists between them.
Step 5 − There are several methods for logically implementing the link and the send() and receive() operations.

Characteristics
The characteristics of the message passing model are as follows −
 Message passing is mainly used for communication.
 It is used in distributed environments where the communicating processes reside on remote machines connected by a network.
 No shared address space is required, because the message passing facility itself provides the mechanism for communication and for synchronizing the actions of the communicating processes.
 Message passing is time-consuming because it is implemented through the kernel (system calls).
 It is useful for sharing small amounts of data, so conflicts need not occur.
 In message passing, communication is slower compared to the shared memory technique.
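A minimal sketch of send/receive message passing between two threads, using one mailbox (queue) per process and no shared variables (the `mailbox_p1`/`mailbox_p2` names are invented for illustration):

```python
import queue
import threading

# Each process gets a mailbox; all communication happens through
# messages, never through shared variables.
mailbox_p1 = queue.Queue()
mailbox_p2 = queue.Queue()
results = []

def p1():
    mailbox_p2.put("ping")       # send message to P2
    reply = mailbox_p1.get()     # receive message (blocks until one arrives)
    results.append(reply)

def p2():
    msg = mailbox_p2.get()       # receive message from P1
    mailbox_p1.put(msg + "/pong")  # send reply

ts = [threading.Thread(target=f) for f in (p1, p2)]
for t in ts:
    t.start()
for t in ts:
    t.join()
print(results)  # ['ping/pong']
```

The blocking `get()` doubles as synchronization: P1 cannot proceed until P2 has replied, with no explicit lock anywhere.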
Classical IPC problems:

In operating systems, there are several classical Interprocess Communication (IPC) problems that arise in concurrent
programming scenarios. These problems involve coordination, synchronization, and communication between
multiple processes or threads to achieve correct and efficient execution. Here are some of the classical IPC
problems:

1. Dining Philosophers Problem: The problem represents a group of philosophers sitting around a table,
where each philosopher alternates between thinking and eating. The philosophers share a limited number of
forks placed between them. The challenge is to devise a solution to prevent deadlock and ensure that each
philosopher can acquire the required forks to eat.

2. Producer-Consumer Problem: This problem involves a shared buffer or queue where producers add data
items, and consumers retrieve and consume those items. The challenge is to ensure that producers and
consumers operate correctly and concurrently without issues such as data corruption, overflows, or
underflows.
3. Readers-Writers Problem: In this problem, multiple processes or threads are divided into readers and
writers. Readers only access shared data for reading, while writers modify the shared data. The goal is to
allow multiple readers to access the data simultaneously, but only one writer should have exclusive access
to the data to avoid data inconsistency.
4. Sleeping Barber Problem: The problem involves a barber who has a limited number of chairs in the waiting
area and a barber chair for haircuts. Customers arrive and either wait in the waiting area if there are empty
chairs or leave if all chairs are occupied. The barber cuts the hair of one customer at a time. The challenge
is to synchronize the arrival of customers and ensure that the barber serves customers efficiently without
deadlock or starvation.
5. The Bounded-Buffer Problem: This problem focuses on a bounded buffer or queue shared between
producers and consumers. Producers attempt to add items to the buffer, while consumers retrieve and
consume items from it. The buffer has a fixed capacity, and the problem involves coordinating access and
preventing buffer overflow or underflow.
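As one illustration, the Dining Philosophers Problem above can be made deadlock-free by resource ordering: every philosopher picks up the lower-numbered fork first, so a circular wait cannot form (a Python sketch; the round count is arbitrary):

```python
import threading

N = 5
forks = [threading.Lock() for _ in range(N)]
meals = [0] * N

def philosopher(i, rounds):
    # Deadlock avoidance by resource ordering: always acquire the
    # lower-numbered fork first, so no cycle of waiting can arise.
    first, second = sorted((i, (i + 1) % N))
    for _ in range(rounds):
        with forks[first]:
            with forks[second]:
                meals[i] += 1     # eating

ts = [threading.Thread(target=philosopher, args=(i, 10)) for i in range(N)]
for t in ts:
    t.start()
for t in ts:
    t.join()
print(meals)  # every philosopher ate 10 times, and no deadlock occurred
```

If every philosopher instead grabbed the left fork first, all five could hold one fork and wait forever for the other; the ordering breaks that cycle.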

These IPC problems highlight the challenges of coordinating and synchronizing concurrent processes or threads to
achieve correct and efficient execution while avoiding issues such as race conditions, deadlocks, and resource
contention. Various synchronization primitives, such as semaphores, mutexes, condition variables, and monitors,
can be used to devise solutions for these classical IPC problems.
