OS (3 CHP)
PROCESS SYNCHRONISATION
COMPETING PROCESSES
Achieving Mutual Exclusion
THE CRITICAL SECTION PROBLEM
Two-Process Solutions
Multiple-Process Solutions
SYNCHRONIZATION HARDWARE
SEMAPHORES
Usage
Implementation
Deadlocks and Starvation
Binary Semaphores
CLASSICAL PROBLEMS OF SYNCHRONIZATION
The Bounded-Buffer Problem
The Readers-Writers Problem
The Dining Philosophers Problem
CRITICAL REGIONS
MONITORS
66 Process Synchronisation
3.1. INTRODUCTION
reusable since it is not feasible to send the output from several processes to the same physical printer. Some kinds of resources are not subject to this sharing problem, notably disk files which, with some restrictions, can be shared by several processes simultaneously.
A consumable resource is a transient data item or a signal which is created by one process and ceases to exist when received by another. A typical consumable resource is a message sent between one process and another.
The principal problem created by the need for mutual exclusion is the potential danger of deadlock. A process is said to be deadlocked if it is waiting for an event which will never occur. In the context of resource sharing, this can occur when two or more processes are each waiting for a resource held by the others. For example, assume that process P1 has a printer allocated to it and attempts to open a file F. Process P2, meanwhile, has already obtained exclusive access to file F and now requests the printer. Each process is waiting for a resource held by the other, while holding a resource required by the other. In principle, more than two processes could be involved in such a deadlocked circle.
Processes that work together to some common purpose or within some shared environment need to 'talk' to each other. A number of techniques are available in this respect, such as shared memory, shared files or message systems. The simplest form of such communication is a signal from one process to another indicating completion of an event on which the second process is waiting; this is known as synchronisation. Process synchronization is the task of organizing the access of several concurrent processes to shared (i.e. jointly used) resources without causing any conflicts. In this chapter, we discuss in more detail the topic of competing processes and process synchronization.
As we indicated in the introduction, the mutual exclusion requirement is the over-riding consideration in dealing with competing processes. In the case of shared physical resources, such as a printer, a memory or a shared variable, it is clearly necessary to avoid data corruption and intermingling, but it is equally valid in many other contexts.
The problem is one of mutual exclusion; in effect, we must insist that these sensitive operations can only be performed by one process at a time. An important point to note is that the potential for interprocess clashes only arises at certain
a traffic light system which will allow one process to enter its critical region and then stop all other processes from entering their critical regions. This situation is illustrated in Figure 3.1 for two such processes A and B; the shaded area represents the critical region of the process, and the arrows indicate the current point of execution. Critical regions and the principle of mutual exclusion are related to each other.
Assume that two processes are sharing some resource and, therefore, each has a critical region of code. We require to guard entry to the critical regions by ensuring that when one process has entered its region, the other process is thereby barred from entry until the first process exits. This appears to have the characteristic of a gate which can only be open for one process at a time.
shared int x = 3;

process1()
{
    printf("%d", x);
}

process2()
{
    x = x + 1;
    printf("%d", x);
}

The increment of x in process2 is typically compiled into a sequence of machine instructions such as:

LOAD  r1 <- x
ADD   r1 + 1 -> r1
STORE r1 -> x
A critical region is part of the code of a process which accesses some shared resource which could potentially be accessed by another process at the same time. To ensure mutual exclusion, the processes involved must not be in their critical regions simultaneously. There are some hardware and software solutions to achieve mutual exclusion. They are:
in use to cooperate. Each process must request permission to enter its critical section. The section of code implementing this request is the entry section. The critical section may be followed by an exit section. The remaining code is the remainder section.
do {
        entry section
            critical section
        exit section
            remainder section
} while (1);
do {
        while (turn != i);
            critical section
        turn = j;
            remainder section
} while (1);
Our first approach is to let the processes share a common integer variable turn, initialized to 0 (or 1). If turn == i, then process Pi is allowed to execute in its critical section. The structure of process Pi is shown above. This solution ensures that only one process at a time can be in its critical section. However, it does not satisfy the progress requirement, since it requires strict alternation of processes in the execution of the critical section. For example, if turn == 0 and P1 is ready to enter its critical section, P1 cannot do so, even though P0 may be in its remainder section.
Algorithm 2
The problem with algorithm 1 is that it does not retain sufficient information about the state of each process; it remembers only which process is allowed to enter its critical section. To remedy this problem, we can replace the variable turn with the following array:
boolean flag[2];
The elements of the array are initialized to false. If flag[i] is true, this value indicates that Pi is ready to enter the critical section. The structure of process Pi is shown below:
do {
        flag[i] = true;
        while (flag[j]);
            critical section
        flag[i] = false;
            remainder section
} while (1);

The structure of process Pi in algorithm 2
In this algorithm, process Pi first sets flag[i] to be true, signaling that it is ready to enter its critical section. Then, Pi checks to verify that process Pj is not also ready to enter its critical section. If Pj were ready, then Pi would wait until Pj had indicated that it no longer needed to be in the critical section (that is, until flag[j] was false). At this point, Pi would enter the critical section. On exiting the critical section, Pi would set flag[i] to be false, allowing the other process (if it is waiting) to enter its critical section. In this solution, the mutual-exclusion requirement is satisfied. Unfortunately, the progress requirement is not met. To illustrate this problem, we consider the following execution sequence.
Algorithm 3

boolean flag[2];
int turn;

Initially flag[0] = flag[1] = false, and the value of turn is immaterial (but is either 0 or 1). The structure of process Pi is shown below.
do {
        flag[i] = true;
        turn = j;
        while (flag[j] && turn == j);
            critical section
        flag[i] = false;
            remainder section
} while (1);

The structure of process Pi in algorithm 3
To prove property 1, we note that each Pi enters its critical section only if either flag[j] == false or turn == i. Also note that, if both processes could be executing in their critical sections at the same time, then flag[0] == flag[1] == true. These two observations imply that P0 and P1 could not have successfully executed their while statements at about the same time, since the value of turn can be either 0 or 1, but cannot be both. Hence, one of the processes, say Pj, must have successfully executed the while statement, whereas Pi had to execute at least one additional statement ("turn == j"). However, since, at that time, flag[j] == true and turn == j, and this condition will persist as long as Pj is in its critical section, the result follows: mutual exclusion is preserved.
do {
        choosing[i] = true;
        number[i] = max(number[0], number[1], ..., number[n-1]) + 1;
        choosing[i] = false;
        for (j = 0; j < n; j++) {
                while (choosing[j]);
                while ((number[j] != 0) && ((number[j], j) < (number[i], i)));
        }
            critical section
        number[i] = 0;
            remainder section
} while (1);

The structure of process Pi in the bakery algorithm
To prove that the bakery algorithm is correct, we need first to show that, if Pi is in its critical section and Pk (k != i) has already chosen its number[k] != 0, then (number[i], i) < (number[k], k).
do {
        while (TestAndSet(lock));
            critical section
        lock = false;
            remainder section
} while (1);

Mutual-exclusion implementation with TestAndSet
The Swap instruction, defined as shown below, operates on the contents of two words; like the TestAndSet instruction, it is executed atomically.
void Swap(boolean &a, boolean &b) {
        boolean temp = a;
        a = b;
        b = temp;
}
do {
        key = true;
        while (key == true)
                Swap(lock, key);
            critical section
        lock = false;
            remainder section
} while (1);

Mutual-exclusion implementation with the Swap instruction
boolean waiting[n];
boolean lock;

do {
        waiting[i] = true;
        key = true;
        while (waiting[i] && key)
                key = TestAndSet(lock);
        waiting[i] = false;
            critical section
        j = (i + 1) % n;
        while ((j != i) && !waiting[j])
                j = (j + 1) % n;
        if (j == i)
                lock = false;
        else
                waiting[j] = false;
            remainder section
} while (1);
3.5 SEMAPHORES
The solutions to the critical-section problem presented so far are not easy to generalize to more complex problems. To overcome this difficulty, we can use a synchronization tool called a semaphore. A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: wait and signal. These operations were originally termed P (for wait; from the Dutch proberen, to test) and V (for signal; from verhogen, to increment). The classical definition of wait in pseudocode is
wait(S) {
        while (S <= 0)
                ;   // no-op
        S--;
}
The classical definition of signal in pseudocode is
signal(S) {
        S++;
}
Modifications to the integer value of the semaphore in the wait and signal operations must be executed indivisibly. That is, when one process modifies the semaphore value, no other process can simultaneously modify that same semaphore value. In addition, in the case of wait(S), the testing of the integer value of S (S <= 0), and its possible modification (S--), must also be executed without interruption. We shall see later how these operations can be implemented. First, let us see how semaphores can be used.
3.5.1 Usage
We can use semaphores to deal with the n-process critical-section problem. The n processes share a semaphore, mutex (standing for mutual exclusion), initialized to 1. Each process Pi is organized as shown here.
do {
        wait(mutex);
            critical section
        signal(mutex);
            remainder section
} while (1);
3.5.1.1 Mutual-exclusion implementation with semaphores
We can also use semaphores to solve various synchronization problems. For example, consider two concurrently running processes: P1 with a statement S1 and P2 with a statement S2. Suppose that we require that S2 be executed only after S1 has completed. We can implement this scheme readily by letting P1 and P2 share a common semaphore synch, initialized to 0, and by inserting the statements

S1;
signal(synch);

in process P1, and the statements

wait(synch);
S2;

in process P2. Because synch is initialized to 0, P2 will execute S2 only after P1 has invoked signal(synch), which is after S1.
3.5.2 Implementation
To overcome the need for busy waiting, we can modify the definition of the wait and signal semaphore operations. When a process executes the wait operation and finds that the semaphore value is not positive, it must wait. However, rather than busy waiting, the process can block itself. The block operation places a process into a waiting queue associated with the semaphore, and the state of the process is switched to the waiting state. Then, control is transferred to the CPU scheduler, which selects another process to execute.
typedef struct {
        int value;
        struct process *L;
} semaphore;
Each semaphore has an integer value and a list of processes. When a process
must wait on a semaphore, it is added to the list of processes. A signal operation
removes one process from the list of waiting processes and awakens that process.
The wait semaphore operation can now be defined as

wait(S) {
        S.value--;
        if (S.value < 0) {
                add this process to S.L;
                block();
        }
}

The signal semaphore operation can now be defined as

signal(S) {
        S.value++;
        if (S.value <= 0) {
                remove a process P from S.L;
                wakeup(P);
        }
}
The block operation suspends the process that invokes it. The wakeup operation resumes the execution of a blocked process P. These two operations are provided by the operating system as basic system calls.
In a uniprocessor environment (that is, where only one CPU exists), we can simply inhibit interrupts during the time the wait and signal operations are executing. This scheme works in a uniprocessor environment because, once interrupts are inhibited, instructions from different processes cannot be interleaved. Only the currently running process executes, until interrupts are reenabled and the scheduler can regain control. In a multiprocessor environment, inhibiting interrupts does not work. Instructions from different processes (running on different processors) may be interleaved in some arbitrary way.
It is important to admit that we have not completely eliminated busy waiting with this definition of the wait and signal operations. Rather, we have removed busy waiting from the entry to the critical sections of application programs. Furthermore, we have limited busy waiting to only the critical sections of the wait and signal operations, and these sections are short (if properly coded, they should be no more than about 10 instructions). Thus, the critical section is almost never occupied, and busy waiting occurs rarely, and then for only a short time. An entirely different situation exists with application programs whose critical sections may be long (minutes or even hours) or may be almost always occupied. In this case, busy waiting is extremely inefficient.
The wait operation on the counting semaphore S can be implemented as follows:

wait(S1);
C--;
if (C < 0) {
        signal(S1);
        wait(S2);
}
signal(S1);
The signal operation on the counting semaphore S can be implemented as follows:

wait(S1);
C++;
if (C <= 0)
        signal(S2);
else
        signal(S1);
3.6 CLASSICAL PROBLEMS OF SYNCHRONIZATION
do {
        ...
        produce an item in nextp
        ...
        wait(empty);
        wait(mutex);
        ...
        add nextp to buffer
        ...
        signal(mutex);
        signal(full);
} while (1);

The structure of the producer process
The code for the consumer process is shown here.

do {
        wait(full);
        wait(mutex);
        ...
        remove an item from buffer to nextc
        ...
        signal(mutex);
        signal(empty);
        ...
        consume the item in nextc
        ...
} while (1);
To ensure that these difficulties do not arise, we require that the writers have exclusive access to the shared object. This synchronization problem is referred to as the readers-writers problem. Since it was originally stated, it has been used to test nearly every new synchronization primitive. The readers-writers problem has several variations, all involving priorities. The simplest one, referred to as the first readers-writers problem, requires that no reader will be kept waiting unless a writer has already obtained permission to use the shared object. In other words, no reader should wait for other readers to finish simply because a writer is waiting. The second readers-writers problem requires that, once a writer is ready, that writer performs its write as soon as possible. In other words, if a writer is waiting to access the object, no new readers may start reading. A solution to either problem may result in starvation. In the first case, writers may starve; in the second case, readers may starve. For this reason, other variants of the problem have been proposed. In this section, we present a solution to the first readers-writers problem.
wait(mutex);
readcount++;
if (readcount == 1)
        wait(wrt);
signal(mutex);
...
reading is performed
...
wait(mutex);
readcount--;
if (readcount == 0)
        signal(wrt);
signal(mutex);

The structure of a reader process
Consider five philosophers who spend their lives thinking and eating. The philosophers share a common circular table surrounded by five chairs, each belonging to one philosopher. In the center of the table is a bowl of rice, and the table is laid with five single chopsticks. When a philosopher thinks, she does not interact with her colleagues. From time to time, a philosopher gets hungry and tries to pick up the two chopsticks that are closest to her (the chopsticks that are between her and her left and right neighbors). A philosopher may pick up only one chopstick at a time. Obviously, she cannot pick up a chopstick that is already in the hand of a neighbor. When a hungry philosopher has both her chopsticks at the same time, she eats without releasing her chopsticks. When she is finished eating, she puts down both of her chopsticks and starts thinking again.
semaphore chopstick[5];

where all the elements of chopstick are initialized to 1. The structure of philosopher i is shown below.
do {
        wait(chopstick[i]);
        wait(chopstick[(i + 1) % 5]);
        ...
        eat
        ...
        signal(chopstick[i]);
        signal(chopstick[(i + 1) % 5]);
        ...
        think
        ...
} while (1);
2. Suppose that a process interchanges the order in which the wait and signal operations on the semaphore mutex are executed, resulting in the following execution:

signal(mutex);
    critical section
wait(mutex);

3. Suppose that a process omits the wait(mutex), or the signal(mutex), or both. In this case, either mutual exclusion is violated or a deadlock will occur.

These examples illustrate that various types of errors can be generated easily when semaphores are used incorrectly to solve the critical-section problem. Similar problems may arise in the other synchronization models we have discussed.
struct buffer {
        item pool[n];
        int count, in, out;
};

The producer process inserts a new item nextp into the shared buffer by executing

region buffer when (count < n) {
        pool[in] = nextp;
        in = (in + 1) % n;
        count++;
}

The consumer process removes an item from the shared buffer and puts it in nextc by executing
monitor monitor_name {
        // shared variable declarations

        procedure body P1 (...) { ... }
        procedure body P2 (...) { ... }
        ...
        procedure body Pn (...) { ... }

        {
                initialization code
        }
}

Syntax of a monitor
The monitor construct ensures that only one process at a time can be active within the monitor. Consequently, the programmer does not need to code this synchronization constraint explicitly.
Figure 3.2 Schematic view of a monitor (entry queue, shared data, operations, initialization code)
condition x, y;
The only operations that can be invoked on a condition variable are wait and signal. The operation x.wait() means that the process invoking this operation is suspended until another process invokes x.signal(). The x.signal() operation resumes exactly one suspended process. If no process is suspended, then the signal operation has no effect; that is, the state of x is as though the operation were never executed (Figure 3.3). Contrast this operation with the signal operation associated with semaphores, which always affects the state of the semaphore.
Figure 3.3 Monitor with condition variables (entry queue, queues associated with the x and y conditions, shared data, operations, initialization code)
2. Since P was already executing in the monitor, choice 2 seems more reasonable. However, if we allow process P to continue, the "logical" condition for which Q was waiting may no longer hold by the time Q is resumed.
Philosopher i can set the variable state[i] = eating only if her two neighbors are not eating: (state[(i + 4) % 5] != eating) and (state[(i + 1) % 5] != eating). We also need to declare

condition self[5];
monitor dp {
        void pickup(int i) { ... }
        void putdown(int i) { ... }
        void test(int i) {
                if ((state[(i + 4) % 5] != eating) && (state[i] == hungry) &&
                    (state[(i + 1) % 5] != eating)) {
                        state[i] = eating;
                        self[i].signal();
                }
        }
        void init() { ... }
}

dp.pickup(i);
    ...
    eat
    ...
dp.putdown(i);
It is easy to show that this solution ensures that no two neighbors are eating simultaneously, and that no deadlocks will occur. We note, however, that it is possible for a philosopher to starve to death. We shall now consider a possible implementation of the monitor mechanism using semaphores. For each monitor, a semaphore mutex (initialized to 1) is provided. A process must execute wait(mutex) before entering the monitor, and must execute signal(mutex) after leaving the monitor.
Since a signaling process must wait until the resumed process either leaves or waits, an additional semaphore, next, is introduced, initialized to 0, on which the signaling processes may suspend themselves. An integer variable next_count will also be provided to count the number of processes suspended on next. Thus, each external procedure F will be replaced by

wait(mutex);
    ...
    body of F
    ...
if (next_count > 0)
        signal(next);
else
        signal(mutex);
Mutual exclusion within a monitor is ensured.
We can now describe how condition variables are implemented. For each condition x, we introduce a semaphore x_sem and an integer variable x_count, both initialized to 0. The operation x.wait can now be implemented as

x_count++;
if (next_count > 0)
        signal(next);
else
        signal(mutex);
wait(x_sem);
x_count--;

The operation x.signal can now be implemented as

if (x_count > 0) {
        next_count++;
        signal(x_sem);
        wait(next);
        next_count--;
}
This implementation is applicable to the definitions of monitors given by both Hoare and Brinch-Hansen. The conditional-wait construct has the form

x.wait(c);

where c is an integer expression that is evaluated when the wait operation is executed. The value of c, which is called a priority number, is then stored with the name of the process that is suspended. When x.signal is executed, the process with the smallest associated priority number is resumed next.
QUESTIONS
1. What is meant by cooperating processes?
2. What is process synchronization?
3. What is a race condition?
4. Define critical section.
5. How is mutual exclusion achieved?
6. What are the requirements to solve the critical-section problem?
7. Write an algorithm to solve the critical-section problem.
8. What are the problems faced in the critical-section problem?
9. Write an algorithm for solving the producer and consumer problem.
10. Explain the bakery algorithm.
11. What is the hardware solution for process synchronization?
12. Explain the TestAndSet instruction.
13. How is mutual exclusion achieved using the TestAndSet instruction?
EXAMINATION QUESTIONS
*12. Explain the role of a semaphore and its types. Nov-Dec 2006
CHAPTER 4
DEADLOCK
EXAMPLES OF DEADLOCK
INDEFINITE POSTPONEMENT
SYSTEM MODEL
DEADLOCK CHARACTERIZATION
Necessary Conditions
Resource Allocation Graph
METHODS FOR HANDLING DEADLOCKS
DEADLOCK PREVENTION
DEADLOCK AVOIDANCE
Safe State
Resource Allocation Graph Algorithm
Banker's Algorithm
DEADLOCK DETECTION
Single Instance of Each Resource Type
Detection-Algorithm Usage
RECOVERY FROM DEADLOCK
Process Termination
Resource Preemption