Unit 2: OS


Concurrency: Principles of Concurrency, Mutual Exclusion, Hardware Support, Semaphores, Monitors, Message Passing, Reader/Writer Problem.

Deadlock and Starvation: Principles of Deadlock, Deadlock Prevention, Deadlock Avoidance, Deadlock Detection, An Integrated Deadlock Strategy, Dining Philosophers Problem.
What is concurrency?
Concurrency in operating systems refers to the ability of an operating system to handle multiple tasks or
processes at the same time.

Operating systems that support concurrency can execute multiple tasks simultaneously, leading to better
resource utilization, improved responsiveness, and enhanced user experience.

Concurrency is essential in modern operating systems due to the increasing demand for multitasking,
real-time processing, and parallel computing. It is used in a wide range of applications, including web
servers, databases, scientific simulations, and multimedia processing.

However, concurrency also introduces new challenges, such as race conditions, deadlocks, and priority inversion, which must be managed effectively to ensure the stability and reliability of the system.
Principles of Concurrency
The principles of concurrency in operating systems are designed to ensure that multiple processes or
threads can execute efficiently and effectively, without interfering with each other or causing deadlock.

● Interleaving − Interleaving refers to the interleaved execution of multiple processes or threads. The
operating system uses a scheduler to determine which process or thread to execute at any given
time. Interleaving allows for efficient use of CPU resources and ensures that all processes or
threads get a fair share of CPU time.
● Synchronization − Synchronization refers to the coordination of multiple processes or threads to
ensure that they do not interfere with each other. This is done through the use of synchronization
primitives such as locks, semaphores, and monitors. These primitives allow processes or threads to
coordinate access to shared resources such as memory and I/O devices.
● Mutual exclusion − Mutual exclusion refers to the principle of ensuring that only one process or
thread can access a shared resource at a time. This is typically implemented using locks or
semaphores to ensure that multiple processes or threads do not access a shared resource
simultaneously.
● Deadlock avoidance − Deadlock is a situation in which two or more processes or threads are
waiting for each other to release a resource, resulting in a deadlock. Operating systems use various
techniques such as resource allocation graphs and deadlock prevention algorithms to avoid
deadlock.
● Process or thread coordination − Processes or threads may need to coordinate their activities to
achieve a common goal. This is typically achieved using synchronization primitives such as
semaphores or message passing mechanisms such as pipes or sockets.
● Resource allocation − Operating systems must allocate resources such as memory, CPU time,
and I/O devices to multiple processes or threads in a fair and efficient manner. This is typically
achieved using scheduling algorithms such as round-robin, priority-based, or real-time scheduling.
Concurrency Mechanisms

● Processes vs. Threads − An operating system can support concurrency using processes or
threads. A process is an instance of a program that can execute independently, while a thread is a
lightweight process that shares the same memory space as its parent process.
● Synchronization primitives − Operating systems provide synchronization primitives to coordinate
access to shared resources between multiple processes or threads. Common synchronization
primitives include semaphores, mutexes, and condition variables.
● Scheduling algorithms − Operating systems use scheduling algorithms to determine which
process or thread should execute next. Common scheduling algorithms include round-robin,
priority-based, and real-time scheduling.
● Message passing − Message passing is a mechanism used to communicate between processes or
threads. Messages can be sent synchronously or asynchronously and can include data, signals, or
notifications.
● Memory management − Operating systems provide memory management mechanisms to allocate
and manage memory resources. These mechanisms ensure that each process or thread has its own
memory space and can access memory safely without interfering with other processes or threads.
● Interrupt handling − Interrupts are signals sent by hardware devices to the operating system,
indicating that they require attention. The operating system uses interrupt handling mechanisms to
stop the current process or thread, save its state, and execute a specific interrupt handler to handle
the device's request.
Advantages of concurrency

● Improved performance − Concurrency allows multiple tasks to be executed simultaneously, improving the overall performance of the system. By using multiple processors or threads, tasks can be executed in parallel, reducing the overall processing time.
● Resource utilization − Concurrency allows better utilization of system resources, such as CPU,
memory, and I/O devices. By allowing multiple tasks to run simultaneously, the system can make
better use of its available resources.
● Responsiveness − Concurrency can improve system responsiveness by allowing multiple tasks to
be executed concurrently. This is particularly important in real-time systems and interactive
applications, such as gaming and multimedia.
● Scalability − Concurrency can improve the scalability of the system by allowing it to handle an
increasing number of tasks and users without degrading performance.
● Fault tolerance − Concurrency can improve the fault tolerance of the system by allowing tasks to
be executed independently. If one task fails, it does not affect the execution of other tasks.
Drawbacks of Concurrency:

● Multiple applications must be protected from one another.
● Multiple applications must be coordinated through additional mechanisms.
● Additional performance overhead and complexity in the operating system are required for switching among applications.
● Running too many applications concurrently can lead to severely degraded performance.
Issues of Concurrency :
● Non-atomic – Operations that are non-atomic but interruptible by multiple processes can cause problems.
● Race conditions – A race condition occurs when the outcome depends on which of several processes reaches a point first.
● Blocking – Processes can block waiting for resources. A process could be blocked for a long period of time waiting for input from a terminal; if the process is required to periodically update some data, this is very undesirable.
● Starvation – Occurs when a process never obtains the service it needs to make progress.
● Deadlock – Occurs when two or more processes are blocked waiting on each other, so none of them can proceed.
Example:

char chin, chout;   /* shared global variables */

void echo()
{
    chin = getchar();
    chout = chin;
    putchar(chout);
}

1. P1 invokes the echo procedure and is interrupted immediately after getchar returns its value and
stores it in chin. At this point, the most recently entered character, x, is stored in variable chin.
2. Process P2 is activated and invokes the echo procedure, which runs to conclusion, inputting and
then displaying a single character, y, on the screen.
3. Process P1 is resumed. By this time, the value x has been overwritten in chin and therefore lost.
Instead, chin contains y, which is transferred to chout and displayed.
Key Terms related to concurrency
Critical Section: A section of code within a process that requires access to shared resources and that
may not be executed while another process is in a corresponding section of code.
Deadlock: A situation in which two or more processes are unable to proceed because each is waiting for
one of the others to do something.
Livelock: A situation in which two or more processes continuously change their state in response to changes in the other processes without doing any useful work.
Mutual exclusion: The requirement that when one process is in a critical section that accesses shared resources, no other process may be in a critical section that accesses any of those shared resources.
Race Condition: A situation in which multiple threads and processes read and write a shared data item
and the final result depends on the relative timing of their execution.
Starvation: A situation in which a runnable process is overlooked indefinitely by the scheduler; although it
is able to proceed, it is never chosen.
Race Condition
A race condition is a problem that occurs in an operating system (OS) where two or more processes or
threads are executing concurrently. The outcome of their execution depends on the order in which they
are executed. In a race condition, the exact timing of events is unpredictable, and the outcome of the
execution may vary based on the timing. This can result in unexpected or incorrect behavior of the
system.
For example:
If two threads are simultaneously accessing and changing the same shared resource, such as a variable
or a file, the final state of that resource depends on the order in which the threads execute. If the threads
are not correctly synchronized, they can overwrite each other's changes, causing incorrect results or even
system crashes.
OS Concerns
Design and management issues raised by the existence of concurrency. The OS must:

1. Be able to keep track of each process.
2. Allocate and deallocate resources for each active process.
3. Protect the data and physical resources of each process against interference from other processes.
4. Ensure that the functioning of processes and their output are independent of relative processing speed.
Process Interaction
Resource competition
Concurrent processes come into conflict when they compete for use of the same resource.

For example: I/O devices, memory, processor time, the clock.

In the case of competing processes, three control problems must be faced:

1. The need for mutual exclusion
2. Deadlock
3. Starvation
Illustration of Mutual Exclusion
Requirement for Mutual Exclusion
1. Mutual exclusion must be enforced: Only one process at a time is allowed into its critical section,
among all processes that have critical sections for the same resource or shared object.
2. A process that halts in its noncritical section must do so without interfering with other processes.
3. It must not be possible for a process requiring access to a critical section to be delayed indefinitely:
no deadlock or starvation.
4. When no process is in a critical section, any process that requests entry to its critical section must
be permitted to enter without delay.
5. No assumptions are made about relative process speeds or number of processors.
6. A process remains inside its critical section for a finite time only.
Interrupt Disabling
While interrupts are disabled, the processor cannot switch to another process. Thus, once a process has disabled interrupts, it can examine and update the shared memory without fear that any other process will intervene.

● Works on a uniprocessor system
● Disabling interrupts guarantees mutual exclusion

Disadvantages:

● The efficiency of execution can be noticeably degraded, because the processor is limited in its ability to interleave processes.
● This approach does not work in a multiprocessor architecture: disabling interrupts on one processor does not stop the others.
Special Machine Instruction
Machine instructions are machine code programs or commands. In other words, commands written in
the machine code of a computer that it can recognize and subsequently execute.

Machine code or machine language refers to a computer programming language consisting of a string of
ones and zeros, i.e., binary code. Computers can respond to machine code directly, i.e., without any
direction or conversion.
Special Machine Instructions

● With these instructions, access to a memory location excludes any other access to the same location.
● Extensions: designers have proposed machine instructions that perform two actions atomically (indivisibly) on the same memory location (e.g., reading and writing).
● The execution of such an instruction is mutually exclusive, even on multiprocessors.
● They can be used to provide mutual exclusion, but need to be complemented by other mechanisms to satisfy the other requirements of the critical section problem.
Two hardware techniques used for the critical section problem:

1. Test and Set instruction
2. Swap instruction
1. Test and Set Instruction:

Atomically set a memory location to 1 and return the previous value of the location. If the return value is 1, the lock is already held by someone else; if the return value is 0, the lock was free and has now been set to 1. The instruction tests and modifies the content of a word atomically:

int test_and_set(int *target)
{
    int tmp = *target;
    *target = 1;        /* true */
    return tmp;
}
Implementing mutual exclusion with test and set:

int lock = 0;           /* shared variable, 0 = free */

do {
    while (test_and_set(&lock))
        ;                       /* busy-wait for the lock */
    critical_section();
    lock = 0;                   /* release the lock */
    remainder_section();
} while (1);
2. Swap (Exchange) Instruction:

● Atomically swaps the contents of two variables.
● Another variation on test and set is an atomic swap of two Boolean variables:

void swap(bool *a, bool *b)
{
    bool temp = *a;
    *a = *b;
    *b = temp;
}
Shared data: bool lock = false;

Process Pi:

do {
    key = true;
    while (key == true)
        swap(&lock, &key);      /* spin until key comes back false */
    /* critical section */
    lock = false;
    /* remainder section */
} while (1);
Advantages of Special Machine Instruction
1. Applicable to any number of processes on either a single processor or
multiple processors sharing main memory
2. Simple and easy to verify
3. It can be used to support multiple critical sections; each critical section can be
defined by its own variable.
Disadvantages of Special Machine Instruction
1. Busy waiting is employed: while a process is waiting for access to a critical section, it continues to consume processor time.
2. Starvation is possible, when a process leaves a critical section and more than
one process is waiting, the selection of waiting process is arbitrary. Thus,
some process could indefinitely be denied access.
3. Deadlock is possible.
Semaphores
A semaphore is an integer variable used for process synchronization and to solve the critical section problem. Three atomic operations are defined on a semaphore:

1. A semaphore may be initialized to a nonnegative value

2. The semWait operation decrements the semaphore value. If the value becomes negative, then the
process executing the semWait is blocked. Otherwise, the process continues execution.

3. The semSignal operation increments the semaphore value. If the value is less than or equal to zero,
then a process blocked by a semWait operation is unblocked
Types of Semaphore
Counting Semaphores
These are semaphores with an unrestricted integer value domain. They are used to coordinate resource access, where the semaphore count is the number of available resources: when a resource is added the count is incremented, and when a resource is removed the count is decremented.

Binary Semaphores
Binary semaphores are like counting semaphores, but their value is restricted to 0 and 1. The wait operation succeeds only when the semaphore value is 1, and the signal operation succeeds only when the value is 0. Binary semaphores are sometimes easier to implement than counting semaphores.
Strong/Weak Semaphores
A queue is used to hold processes waiting on the semaphore

• Strong Semaphores - The process that has been blocked the longest is released
from the queue first (FIFO)

• Weak Semaphores - The order in which processes are removed from the queue
is not specified
Semaphore Mechanism
Example of strong Semaphore
● Here processes A, B, and C depend on a result from process D.
1. Initially, A is running; B, C, and D are ready; and the semaphore count is 1, indicating that one of D's results is available. When A issues a semWait instruction on semaphore s, the semaphore decrements to 0 and A can continue to execute; subsequently it rejoins the ready queue.
2. Then B runs, issues a semWait instruction, and is blocked, allowing D to run.
3. When D completes a new result, it issues a semSignal instruction, which allows B to move to the ready queue.
4. D rejoins the ready queue and C begins to run.
5. C is blocked when it issues a semWait instruction. Similarly, A and B are blocked on the semaphore, allowing D to resume execution.
6. When D has a result, it issues a semSignal instruction, which transfers C to the ready queue. Later cycles of D will release A and B from the blocked state.
Mutual Exclusion using Semaphore
The Producer/Consumer Problem
One of the most common problems faced in concurrent processing is the producer/consumer problem:

● There are one or more producers generating some type of data (records, characters) and placing them in a buffer.
● There is a single consumer taking items out of the buffer one at a time.
● Only one producer or consumer may access the buffer at any one time.

The problem: ensure that the producer cannot add data to a full buffer and the consumer cannot remove data from an empty buffer.
● The figure illustrates the structure of buffer b.
● The producer can generate items and store them in the buffer at its own pace.
● Each time, an index (in) into the buffer is incremented. The consumer proceeds in a similar fashion with index out,
● but must make sure that it does not attempt to read from an empty buffer.
● Hence, the consumer makes sure that the producer has advanced beyond it (in > out) before proceeding.
Producer code for Bounded buffer
/* b   = the buffer
   n   = number of slots in the buffer
   in  = next empty position
   out = first filled position
   v   = item to be added */

int count = 0;

while (true)
{
    /* produce item v */
    while (count == n)
        ;               /* do nothing: buffer full */
    b[in] = v;
    in = (in + 1) % n;
    count++;
}
Consumer code for Bounded buffer
/* b   = the buffer
   n   = number of slots in the buffer
   out = first filled position
   w   = item to be removed */

while (true)
{
    while (count == 0)
        ;               /* do nothing: buffer empty */
    w = b[out];
    out = (out + 1) % n;
    count--;
    /* consume item w */
}
Solution with empty and full semaphores
void producer(void)
{
    while (true) {
        produce_item(v);
        wait(empty);            /* wait for a free slot */
        wait(S);                /* enter critical section */
        buffer[in] = v;
        in = (in + 1) % n;
        signal(S);
        signal(full);           /* one more filled slot */
    }
}

void consumer(void)
{
    while (true) {
        wait(full);             /* wait for a filled slot */
        wait(S);                /* enter critical section */
        w = buffer[out];
        out = (out + 1) % n;
        signal(S);
        signal(empty);          /* one more free slot */
        /* consume item w */
    }
}
Monitors
A monitor is a programming-language construct that provides functionality equivalent to that of semaphores and is easier to control.

Monitors have been implemented in a number of programming languages, including Concurrent Pascal, Pascal-Plus, Modula-2, Modula-3, and Java, and have also been implemented as a program library.

A monitor is a software module consisting of one or more procedures, an initialization sequence, and local data.
Monitor Characteristics:
Local data variables are accessible only by the monitor’s procedures and not by
any external procedure

Process enters monitor by invoking one of its procedures

Only one process may be executing in the monitor at a time; any other process that has invoked the monitor is blocked, waiting for the monitor to become available.
Monitors: Mutual Exclusion
If there is a process executing in a monitor, any process that calls a monitor procedure is blocked outside of the monitor. When the monitor has no executing process, one process will be let in.
Syntax:

monitor monitor_name {
    /* shared variable declarations */
    data variables;

    procedure P1() { ... }
    procedure P2() { ... }
    .
    .
    procedure Pn() { ... }

    initialization_code() { ... }
}

● All variables are private.
● Monitor procedures are public; however, some procedures can be made private so that they can only be used within the monitor.
● The initialization procedure (constructor) is executed only once, when the monitor is created.
Condition Variables
Synchronization is achieved by the use of condition variables that are contained within the monitor and accessible only within the monitor.

Condition variables are operated on by two functions:

• cwait(c): suspend execution of the calling process on condition c. The monitor is now available for use by another process.

• csignal(c): resume execution of some process blocked after a cwait on the same condition. If there are several such processes, choose one of them; if there is no such process, do nothing.
The monitor uses two condition variables:

notfull: true when there is room to add at least one character to the buffer
notempty: true when there is at least one character in the buffer
Solution using a monitor for producer/consumer problem
● In the above solution, a producer can add characters to the buffer only by means of the procedure append inside the monitor;
● the producer does not have direct access to the buffer. The producer first checks the condition notfull to determine if there is space available in the buffer.
● If not, the process executing in the monitor is blocked on that condition.
● Some other process (producer or consumer) may now enter the monitor.
● Later, when the buffer is no longer full, the blocked process may be removed from the queue, reactivated, and resume processing.
● After placing a character in the buffer, the process signals the notempty condition.
● A similar description can be made of the consumer function.
Types of monitor:
After a csignal, both the released process and the signaling process could be eligible to execute in the monitor, but only one may.

There are two common and popular approaches to address this problem:

Hoare type (proposed by C. A. R. Hoare): the released process takes the monitor and the signaling process waits somewhere.

Mesa type (proposed by Lampson and Redell): the released process waits somewhere and the signaling process continues to use the monitor.
What do you mean by "waiting somewhere"?

The signaling process (Hoare type) or the released process (Mesa type) must wait somewhere. You can think of a waiting bench in the monitor for these processes to wait on. As a result, each process involved in a monitor call can be in one of four states:

Active: the running one
Entering: those blocked by the monitor
Waiting: those waiting on a condition variable
Inactive: those waiting on the waiting bench
Drawback of Hoare’s Monitor
1. If the process issuing the csignal has not finished with the monitor, then two
additional process switches are required:
● One to block this process
● another to resume it when the monitor becomes available.
2. Process scheduling associated with a signal must be perfectly reliable. When
csignal is issued, a process from the corresponding condition queue must be
activated immediately and the scheduler must ensure that no other process
enters the monitor before activation.
Otherwise, the condition under which the process was activated could
change.
