
UNIT–III: Process Synchronization, Interprocess Communication
A critical section is a part of the code where shared resources are accessed. To prevent conflicts
and ensure data consistency, only one process should be allowed in the critical section at a time.
This is achieved by using special techniques to control access to shared resources, making sure
that only one process can use a resource at a time. The goal is to prevent race conditions and
to ensure that processes can work together without interfering with one another.
2
• A Cooperating process is one that can affect or be affected by
other processes executing in the system.
• Cooperating processes can either directly share a logical
address space or be allowed to share data only through files or
messages.
• The former is achieved through the use of lightweight
processes or threads.
• Concurrent access to shared data may result in data
inconsistency.
• Here, we discuss various mechanisms to ensure the orderly
execution of cooperating processes that share a logical address
space, so that data consistency is maintained.

3
Background
• Concurrent access to shared data may result in data
inconsistency
• Maintaining data consistency requires mechanisms to ensure
the orderly execution of cooperating processes
• Suppose that we wanted to provide a solution to the
consumer-producer problem that fills all the buffers. We can
do so by having an integer count that keeps track of the
number of full buffers. Initially, count is set to 0. It is
incremented by the producer after it produces a new buffer
and is decremented by the consumer after it consumes a
buffer.

4
Inter-Process Communication
There are 2 types of processes –
– Independent processes – processes which
do not share data with other processes.
– Cooperating processes – processes that
share data with other processes.
• Cooperating processes require an interprocess
communication (IPC) mechanism.
• Inter-process communication is the
mechanism by which cooperating processes
share data and information.
There are 2 ways by which interprocess
communication is achieved –
Shared memory
Message passing
1. Shared Memory

• A particular region of memory is shared
between cooperating processes.
• Cooperating processes can exchange
information by reading and writing data in
this shared region.
• It is faster than message passing, as the kernel is
required only once, to set up the shared
memory region. After that, kernel assistance is not
required.
2. Message Passing

• Communication takes place by exchanging
messages directly between cooperating processes.
• Easy to implement.
• Useful for small amounts of data.
• Implemented using system calls, so it takes more
time than shared memory.
• A message-passing facility provides at least two
operations:
send(message) receive(message)
• Messages sent by a process can be either fixed or
variable in size. If only fixed-sized messages can
be sent, the system-level implementation is
straightforward.
• Variable-sized messages require a
more complex system-level implementation.
If processes P and Q want to communicate, they must send
messages to and receive messages from each other: a
communication link must exist between them. This link
can be implemented in a variety of ways.
Here are several methods for logically implementing a link
and the send()/receive() operations:
1.Direct or indirect communication (Naming)
2.Synchronous or asynchronous
communication(Synchronization)
3.Automatic or explicit buffering(Buffering)
1. Naming (message passing through a communication link)
Direct communication link:
Under direct communication, each process that wants to communicate must
explicitly name the recipient or sender of the communication.
In this scheme, the send() and receive() primitives are defined
as:
• send(P, message)—Send a message to process P.
• receive(Q, message)—Receive a message from process Q
Indirect communication link:
•With indirect communication, the messages are sent to and
received from mailboxes.
• A mailbox can be viewed abstractly as an object into which
messages can be placed by processes and from which
messages can be removed. Each mailbox has a unique
identification.
•A process can communicate with another process via a
number of different mailboxes, but two processes can
communicate only if they have a shared mailbox. The send()
and receive() primitives are defined as follows:
send(A, message)—Send a message to mailbox A.
receive(A, message)—Receive a message from mailbox A.
2. Synchronization (message passing by exchanging messages)
Communication between processes takes place
through calls to send() and receive() primitives.
There are different design options for
implementing each primitive.
Message passing may be either blocking or
nonblocking— also known as synchronous and
asynchronous.
1. Blocking send: The sending process is
blocked until the message is received by the
receiving process or by the mailbox.
2. Nonblocking send: The sending process
sends the message and resumes operation.
3. Blocking receive: The receiver blocks until a
message is available.
4. Nonblocking receive: The receiver retrieves
either a valid message or a null.
3.Buffering
Whether communication is direct or indirect, messages
exchanged by communicating processes reside in a
temporary queue. Basically, such queues can be
implemented in three ways:
1.Zero Capacity/(Message system with no buffering):
The queue has a maximum length of zero; thus, the link
cannot have any messages waiting in it. In this case, the
sender must block until the recipient receives the message.
2.Bounded Capacity/(Message system with automatic
buffering): The queue has finite length n; thus, at most n
messages can reside in it. If the queue is not full when a
new message is sent, the message is placed in the queue,
and the sender can continue execution without waiting.
If the link is full, the sender must block until space
is available in the queue.

3. Unbounded Capacity/(Message system with
automatic buffering): The queue’s length is
potentially infinite; thus, any number of messages
can wait in it. The sender never blocks.
Producer
while (true) {
    /* produce an item and put in nextProduced */
    while (count == BUFFER_SIZE)
        ;   // do nothing
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    count++;
}

21
Consumer

while (true) {
    while (count == 0)
        ;   // do nothing
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    count--;
    /* consume the item in nextConsumed */
}

22
Race Condition
• count++ could be implemented as
register1 = count
register1 = register1 + 1
count = register1

• count-- could be implemented as


register2 = count
register2 = register2 - 1
count = register2

• Consider this execution interleaving with “count = 5” initially:


S0: producer execute register1 = count {register1 = 5}
S1: producer execute register1 = register1 + 1 {register1 = 6}
S2: consumer execute register2 = count {register2 = 5}
S3: consumer execute register2 = register2 - 1 {register2 = 4}
S4: producer execute count = register1 {count = 6 }
S5: consumer execute count = register2 {count = 4}
23
• Note that we arrived at the incorrect state “count == 4”,
indicating that four buffers are full, when, in fact, five buffers
are full.
• If we reversed the order of the statements at S4 and S5, we
would arrive at the incorrect state “count == 6”.
• This happens because we allowed both processes to
manipulate the variable count concurrently.
• A situation where several processes access and manipulate the
same data concurrently and the outcome of the execution
depends on the particular order in which the access takes
place, is called a race condition.
• To guard against the race condition, we need to ensure that only
one process at a time can manipulate the variable count.
• To guarantee this, we require process synchronization and
coordination.
24
Critical Section Problem
Critical Section
A section of code that accesses some shared data in a mutually
exclusive manner.

Consider a system of n processes {P0, P1, … Pn-1}

Each process has a critical section segment of code
• In it, the process may be changing
common variables,
updating a table,
writing a file,
etc.
• When one process is in its critical section, no other may be in its critical
section

Each process must ask permission
to enter its critical section in an entry section,
may follow the critical section with an exit section,
then the remainder section
General Structure

A critical-section environment contains:
Entry Section – code requesting entry into the critical
section.
Critical Section – code in which only one process
can execute at any one time.
Exit Section – end of the critical section,
releasing or allowing others in.
Remainder Section – rest of the code AFTER the critical section.
Solution to Critical–Section Problem
1. Mutual Exclusion
If process Pi is executing in its Critical Section,
then no other process can be executing in its Critical Section.
2. Progress
If No process is executing in its Critical Section and there exist
some processes that wish to enter the Critical Section,
Then the Selection of the processes that will enter the Critical
Section Next Cannot be Postponed indefinitely.
3. Bounded Waiting
A bound must exist on the No. of times that Other processes are
allowed to enter the Critical Sections after a process has made a
request to enter its Critical Section and before that request is
granted
Peterson’s Solution
• Good algorithmic description of solving the problem
• Two-process solution (restricted to only 2 processes)
• The two processes share two variables:
– int turn;
– boolean flag[2]

• The variable turn indicates whose turn it is to enter the


critical section
• The flag array is used to indicate if a process is ready to
enter the critical section. flag[i] = true implies that
process Pi is ready!

28
Algorithm for Process Pi
do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ;   // busy wait
    /* critical section */
    flag[i] = false;
    /* remainder section */
} while (true);

29
Peterson’s Solution (Cont.)
• Provable that the three CS requirements are met:
1. Mutual exclusion is preserved
Pi enters its CS only if:
either flag[j] == false or turn == i
2. Progress requirement is satisfied
3. Bounded-waiting requirement is met

30
Synchronization Hardware
• Many systems provide hardware support for implementing the critical section code.

• All solutions below based on idea of locking


– Protecting critical regions via locks

• Uniprocessors – could disable interrupts


– Currently running code would execute without preemption
– Generally too inefficient on multiprocessor systems
• Operating systems using this not broadly scalable

• Modern machines provide special atomic hardware instructions


• Atomic = non-interruptible
1. Either test memory word and set value
2. Or swap contents of two memory words

31
Solution to Critical-section Problem Using Locks

do {
acquire lock
critical section
release lock
remainder section
} while (TRUE);

32
test_and_set Instruction
Definition:
boolean test_and_set(boolean *target)
{
    boolean rv = *target;
    *target = TRUE;
    return rv;
}
1. Executed atomically
2. Returns the original value of the passed parameter
3. Sets the new value of the passed parameter to TRUE

33
Solution using test_and_set()
 Shared boolean variable lock, initialized to FALSE

 Solution:
do {
    while (test_and_set(&lock))
        ;   /* do nothing */
    /* critical section */
    lock = false;
    /* remainder section */
} while (true);

34
35
Bounded-waiting Mutual Exclusion with TestAndSet()

Common data structures, both initialized to FALSE:
boolean waiting[n];
boolean lock;

do {
    waiting[i] = TRUE;
    key = TRUE;
    while (waiting[i] && key)
        key = TestAndSet(&lock);
    waiting[i] = FALSE;

    // critical section

    j = (i + 1) % n;
    while ((j != i) && !waiting[j])
        j = (j + 1) % n;
    if (j == i)
        lock = FALSE;
    else
        waiting[j] = FALSE;

    // remainder section
} while (TRUE);

36
Turn Variable or Strict Alternation Approach

• The turn variable or strict alternation approach is a software mechanism
implemented in user mode. It is a busy-waiting solution which can be
implemented only for two processes. In this approach, a turn variable is used,
which is actually a lock.
• In general, let the two processes be Pi and Pj. They share a variable called
the turn variable. The pseudocode of the program can be given as follows.

For Process Pi              For Process Pj

Non-CS                      Non-CS
while (turn != i);          while (turn != j);
Critical Section            Critical Section
turn = j;                   turn = i;
Non-CS                      Non-CS
37
• The actual problem of the lock-variable approach was that a process entered
the critical section only when the lock variable was 1. More than one process
could see the lock variable as 1 at the same time, hence mutual exclusion
was not guaranteed there.

• This problem is addressed in the turn-variable approach. Now, a process can
enter the critical section only when the value of the turn variable equals the
ID of that process.

• There are only two values possible for the turn variable, i or j; if its value is
not i then it will definitely be j, and vice versa.

• In the entry section, the process Pi will not enter the critical section while
the turn variable's value is j, and the process Pj will not enter while its
value is i.

• Initially, the two processes Pi and Pj are available and both want to execute
their critical sections.

38
The turn variable is equal to i, hence Pi will get the chance to enter the critical
section. The value of turn remains i until Pi finishes its critical section.

39
Pi finishes its critical section and assigns j to turn variable. Pj will get the chance
to enter into the critical section. The value of turn remains j until Pj finishes its
critical section.

Analysis of Strict Alternation approach:


Let's analyze Strict Alternation approach on the basis of four requirements.

Mutual Exclusion:
The strict alternation approach provides mutual exclusion in every case. This
procedure works only for two processes, and the pseudocode is different for each
process. A process will enter the critical section only when it sees that the turn
variable is equal to its process ID; hence no process can enter the critical
section out of turn.
40
Progress:
Progress is not guaranteed in this mechanism. If Pi does not want to enter the critical
section on its turn, then Pj is blocked indefinitely: Pj has to wait because the turn
variable will remain i until Pi assigns it to j.

Portability:
The solution provides portability. It is a pure software mechanism implemented at user
mode and doesn't need any special instruction from the Operating System.

41
Semaphores
•Synchronization tool
•Does not require busy waiting
•Semaphore S – integer Variable
•Two standard operations modify S: wait() and signal()
– Originally called P() and V()
• P (from the Dutch proberen, “to test”)
• V (verhogen, “to increment”)
•Less Complicated
• Can only be accessed via two indivisible (atomic) operations
wait(S) {
    while (S <= 0)
        ;   // no-op (busy wait)
    S--;
}
signal(S) {
    S++;
}
Semaphore Types
1. Counting Semaphore
• Integer value can range over an unrestricted domain. It is used to control
access to a resource that has multiple instances.

2. Binary Semaphore
– Integer value can range only between 0 and 1;
– Can be simpler to implement
– Also known as mutex locks, as they are locks that provide mutual exclusion.
– If s=1 it mean no process is in critical section.

Advantages :
1. Provides mutual exclusion
2. Can solve various synchronization problems

1. Provides Mutual Exclusion – General Structure of a Process

Semaphore S = 1;   // shared, initialized once, outside the loop
do {
    wait(S);
    // Critical Section
    signal(S);
    // Remainder Section
} while (true);
Semaphores - Advantages
2. Can solve various synchronization problems
Consider P1 and P2 that require S1 to happen before S2
Create a semaphore “synch” initialized to 0
P1: //synch=0
S1;
signal(synch);
P2:
wait(synch);
S2;
Can implement a counting semaphore S as a binary
semaphore

44
Semaphore Implementation
• Must guarantee that no two processes can execute wait () and
signal () on the same semaphore at the same time
• Thus, implementation becomes the critical section problem
where the wait and signal code are placed in the critical section
– Could now have busy waiting in critical section
implementation
• Note that applications may spend lots of time in critical sections
and therefore this is not a good solution.
• Busy waiting wastes CPU cycles that some other process might
be able to use productively. This type of Semaphore is also called
a Spinlock because the process “spins” while waiting for the
lock.

45
Semaphore Implementation
with no Busy waiting

• With each semaphore there is an associated waiting queue


• Each entry in a waiting queue has two data items:
– value (of type integer)
– pointer to next record in the list

• Two operations:
– block – place the process invoking the operation on the
appropriate waiting queue
– wakeup – remove one of processes in the waiting queue
and place it in the ready queue

46
Disadvantages of Semaphores
• The main disadvantage is it requires busy waiting
• While a process is in its critical section, any other
process that tries to enter its critical section must loop
continuously in the entry code.
• Busy waiting wastes CPU cycles that some other
process might be able to use productively.
• This type of semaphore is also called a spinlock
because the process “spins” while waiting for the lock.

47
To overcome the need for busy waiting, we can modify the definition of wait()
and signal() semaphore operations
• When a process executes the wait() operation and finds that
the semaphore value is not positive, it must wait
• However, rather than engaging in busy waiting, the process
can block itself.
• The block operation places a process into a waiting queue
associated with the semaphore, and the state of the process is
switched to the waiting state.
• Then control is transferred to the CPU scheduler, which
selects another process to execute.

48
Semaphore Implementation with no Busy waiting (Cont.)
typedef struct {
    int value;
    struct process *list;
} semaphore;

• Implementation of wait:
wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}
• Implementation of signal:
signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}
49
Deadlocks and Starvation
• The implementation of a semaphore with a waiting queue may result in a
situation where two or more processes are waiting indefinitely for an event
that can be caused only by one of the waiting processes. The event is the
execution of signal() operation. When such a state is reached, these
processes are said to be deadlocked.
• To illustrate this, we consider a system consisting of two processes, P0 and
P1, each accessing two semaphores, S and Q, set to the value 1:

50
• Suppose that P0 executes wait(S) and then P1 executes wait(Q). When P0
executes wait(Q), it must wait until P1 executes signal(Q). Similarly, when
P1 executes wait(S), it must wait until P0 executes signal(S).
• Since these signal() operations cannot be executed, P0 and P1 are
deadlocked.
• We say that a set of processes is in a deadlocked state when every process
in the set is waiting for an event that can be caused only by another process
in the set. The event with which we are mainly concerned here are resource
acquisition and release.
• Another problem related to deadlock is indefinite blocking, or starvation, a
situation in which processes wait indefinitely within the semaphore.
• Indefinite blocking may occur if we add and remove processes from the
list associated with a semaphore in LIFO (last-in, first-out) order.

51
Questions on Semaphore
• A counting semaphore was initialized to 10. Then 6 P
(wait) operations and 4V (signal) operations were
completed on this semaphore. The resulting value of
the semaphore is
(a) 0 (b) 8 (c) 10 (d) 12

52
• The following program consists of 3 concurrent processes and 3 binary
semaphores. The semaphores are initialized as S0=1, S1=0, S2=0.
Process P0:            Process P1:       Process P2:
while (true) {         wait(S1);         wait(S2);
    wait(S0);          release(S0);      release(S0);
    print(0);
    release(S1);
    release(S2);
}

How many times will process P0 print '0'?


(a) At least twice (b) Exactly twice
(c) Exactly thrice (d) Exactly once

53
• A shared variable x, initialized to zero, is operated on by four concurrent processes
W, X, Y, Z as follows. Each of the processes W and X reads x from memory,
increments by one, stores it to memory, and then terminates. Each of the processes
Y and Z reads x from memory, decrements by two, stores it to memory, and then
terminates. Each process before reading x invokes the P operation (i.e., wait) on a
counting semaphore S and invokes the V operation (i.e., signal) on the semaphore S
after storing x to memory. Semaphore S is initialized to two. What is the maximum
possible value of x after all processes complete execution?(Gate CSE 2013)
A) -2
B) -1
C) 1
D) 2

54
MUTEX (MUTual EXclusion)
• Used to synchronize two threads.

• When two threads attempt to access a
single resource, the first block of code attempting
access sets the mutex before entering the code.

• When the second code block attempts access, it sees
that the mutex is set and waits until the first block of
code is complete (and unsets the mutex), then
continues.
55
Difference between Lock, Mutex
and Semaphore
• A lock allows only one thread to enter the part
that's locked and the lock is not shared with any
other processes.

• A mutex is the same as a lock but it can be


system wide.

• A semaphore does the same as a mutex but


allows x number of threads to enter.
56
Classic Problems of Synchronization
• Bounded–Buffer Problem
• Readers and Writers Problem
• Dining–Philosophers Problem
Producers–Consumers with Bounded Buffers

A producer process "produces" information
that is "consumed" by a consumer process.
The solution must satisfy the following:
1. A producer must not overwrite a full buffer
2. A consumer must not consume an empty buffer
3. Producers and consumers must access buffers
in a mutually exclusive manner
4. Information must be consumed in the same order
in which it is put into the buffers (optional)
Bounded-Buffer Problem
• N buffers, each can hold one item

• Semaphore mutex initialized to the value 1

• Semaphore full initialized to the value 0

• Semaphore empty initialized to the value N

59
Bounded Buffer Problem (Cont.)
• The structure of the producer process

do {

// produce an item in nextp

wait (empty);
wait (mutex);

// add the item to the buffer

signal (mutex);
signal (full);
} while (TRUE);
60
Bounded Buffer Problem (Cont.)
• The structure of the consumer process

do {
wait (full);
wait (mutex);

// remove an item from buffer to nextc

signal (mutex);
signal (empty);

// consume the item in nextc

} while (TRUE);
61
Readers-Writers Problem
• A data set is shared among a number of concurrent processes
– Readers – only read the data set; they do not perform any
updates
– Writers – can both read and write

• Problem – allow multiple readers to read at the same time


– Only one single writer can access the shared data at the same
time
• Several variations of how readers and writers are treated – all
involve priorities
• Shared Data
– Data set
– Semaphore mutex initialized to 1
– Semaphore wrt initialized to 1
– Integer readcount initialized to 0
62
Readers-Writers Problem (Cont.)
• The structure of a writer process

do {
wait (wrt) ;

// writing is performed

signal (wrt) ;
} while (TRUE);

63

Readers-Writers Problem (Cont.)
The structure of a reader process

do {
wait (mutex) ; //Ensure that no other reader can execute the <Entry> section while you are in it
readcount ++ ; //Indicate that you are a reader trying to enter the Critical Section
Entry Section

if (readcount == 1) //Checks if you are the first reader trying to enter CS


wait (wrt) ; //If you are the first reader, lock the resource from writers. Resource stays
reserved for subsequent readers
signal (mutex) //Release <Entry> Section. Let other readers enter the <Entry> section, now that
you are done with it.
// reading is performed – Critical Section

wait (mutex) ; //Ensure that no other reader can execute the <Exit> section while you are in it
readcount - - ; //Indicate that you are no longer needing the shared resource. One less readers
if (readcount == 0) //Checks if you are the last (only) reader who is reading the shared file
Exit Section

signal (wrt) ; //If you are last reader, then you can unlock the resource. This makes it available
to writers.
signal (mutex) ; //Let other readers enter the <Exit> section, now that you are done with it.
} while (TRUE);

64
Readers/Writers problem’s
solution
1. The first reader must lock the resource(shared file) if such is available. Once the file is locked
from writers, it may be used by many subsequent readers without having them to re-lock it
again.
2. Before entering the CS, every new reader must go through the entry section. However, there
may only be a single reader in the entry section at a time. This is done to avoid race conditions
on the readers (e.g. two readers increment the readcount at the same time, so no one feels
entitled to lock the resource from writers). To accomplish this, every reader which enters the
<ENTRY Section> will lock the <ENTRY Section> for themselves until they are done with it. Note:
readers are not locking the resource. They are only locking the entry section so no other reader
can enter it while they are in it. Once the reader is done executing the entry section, it will
unlock it by signalling the mutex semaphore. Same is valid for the <EXIT Section>. There can be
no more than a single reader in the exit section at a time, therefore, every reader must claim
and lock the Exit section for themselves before using it.
3. Once the first reader is in the entry section, it will lock the resource. Doing this will prevent any
writers from accessing it. Subsequent readers can just utilize the locked (from writers)
resource. The very last reader (indicated by the readcount variable) must unlock the resource,
thus making it available to writers.
4. In this solution, every writer must claim the resource individually. This means that a stream of
readers can subsequently lock all potential writers out and starve them. This is so, because
after the first reader locks the resource, no writer can lock it, before it gets released. And it will
only be released by the very last reader. Hence, this solution does not satisfy fairness.

65
Readers-Writers Problem Variations
• First variation – no reader kept waiting unless a writer
has already obtained permission to use the shared object

• Second variation – once a writer is ready, it performs its
write as soon as possible

• Both may have starvation, leading to even more
variations

• The problem is solved on some systems by the kernel providing
reader-writer locks

66
Dining-Philosophers Problem

• Philosophers spend their lives thinking and eating

• They don’t interact with their neighbors; occasionally a philosopher tries to pick up the 2 chopsticks (one at a time) to eat from the bowl
– Needs both to eat; releases both when done
• In the case of 5 philosophers
– Shared data
• Bowl of rice (data set)
• Semaphore chopstick[5], each initialized to 1

67
Dining-Philosophers Problem
Algorithm
• The structure of Philosopher i:

do {
wait ( chopstick[i] );
wait ( chopStick[ (i + 1) % 5] );

// eat

signal ( chopstick[i] );
signal (chopstick[ (i + 1) % 5] );

// think

} while (TRUE);

• What is the problem with this algorithm?

68
Problems with Semaphores
• Incorrect use of semaphore operations:

– signal (mutex) …. wait (mutex)

– wait (mutex) … wait (mutex)

– Omitting of wait (mutex) or signal (mutex)


(or both)

• Deadlock and starvation


69
Monitors
• A high-level abstraction that provides a convenient and effective
mechanism for process synchronization
• An abstract data type that encapsulates private data with public
methods to operate on that data
• A monitor presents a set of programmer-defined operations that are
provided mutual exclusion within the monitor.
• Abstract data type, internal variables only accessible by code within
the procedure
• Only one process may be active within the monitor at a time

monitor monitor-name
{
    // shared variable declarations
    procedure P1 (…) { …. }
    …
    procedure Pn (…) { …… }

    initialization code (…) { … }
}
70
Contd..
• The monitor cannot be used directly by the various
processes. Thus, a procedure defined within a
monitor can access only those variables declared
locally within the monitor and its formal
parameters.
• Similarly, the local variables of a monitor can be
accessed by only the local procedures.
• The monitor construct ensures that only one
process at a time can be active within the monitor.
(shown diagrammatically in the next slide)

71
Schematic view of a Monitor

72
Condition Variables
• But Monitors are not powerful enough to model some
synchronization schemes.
• So the solution is provided by Condition Variables.

Declaration:
condition x, y;

• Two operations on a condition variable:


– x.wait () – a process invoking this operation is suspended
until another process invokes x.signal ()
– x.signal () – resumes exactly one suspended process (if any)
that invoked x.wait ()
• If no x.wait () on the variable, then it has no effect on the
variable

73
Synchronization Examples
• Windows XP

• Linux

74
Windows XP Synchronization
• Uses interrupt masks to protect access to global resources on
uniprocessor systems

• Uses spinlocks on multiprocessor systems

– A thread holding a spinlock will never be preempted

• Also provides dispatcher objects. With these objects, threads

synchronize using mechanisms such as mutexes, semaphores, events, and
timers

– Events
• An event acts much like a condition variable
– Timers notify one or more thread when time expired
– Dispatcher objects either signaled-state (object available) or non-
signaled state (thread will block)

75
Linux Synchronization
• Linux:
– Prior to kernel Version 2.6, disables interrupts to
implement short critical sections
– Version 2.6 and later, fully preemptive

• Linux provides:
– semaphores
– spinlocks
– reader-writer versions of both

• On single-cpu system, spinlocks replaced by


enabling and disabling kernel preemption

76
• In a slightly modified implementation, it
would be possible for a semaphore's value to
be less than zero. When a process executes
wait(), the semaphore count is automatically
decremented. The magnitude of the negative
value would then indicate how many processes
were waiting on the semaphore.

77
Exam Questions
1. Explain briefly all classic problems of synchronization.
2. What is the Critical Section?
3. What is Critical–Section problem? Explain in detail.
4. Explain the following:
a. Critical Section
b. Starvation
c. Critical resource
5. What is the significance of Checkpoints?
6. Analyse the various mechanisms of inter processor
communication in Unix.
7. What is meant by Semaphore? Explain with an example.
8. What is Semaphore? What are the types?
9. What is a Semaphore? Explain the two operations.
Exam Questions

10. What is binary semaphore and counting semaphore?


11. What are the necessary conditions for deadlock?
12. Discuss about mutual exclusion.
13. What is a Deadlock? Explain Banker's Algorithm.
14. Explain Banker’s algorithm in detail.
15. Explain the Banker's algorithm.
16. How deadlock is avoided in single–instance resource type
systems?
17. What is a safe-state and what is its use in deadlock avoidance?
18. Explain the Safety Condition.
19. Discuss various deadlock prevention strategies.
