Unit 3 DS R2017
Performance metrics
The performance of mutual exclusion algorithms is generally measured by the following four metrics:
• Message complexity This is the number of messages that are required per CS execution by a site.
• Synchronization delay This is the time between one site leaving the CS and the next site entering it
(see Figure 9.1). Note that normally one or more sequential message exchanges are required after a
site exits the CS and before the next site can enter the CS.
• Response time This is the time interval a request waits for its CS execution to be over after its request
messages have been sent out (see Figure 9.2). Thus, response time does not include the time a request waits
at a site before its request messages have been sent out.
• System throughput This is the rate at which the system executes requests for the CS. If SD is the
synchronization delay and E is the average critical section execution time, then the throughput is given by
the following equation:
System throughput = 1/(SD+E)
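As a quick numeric check of this formula (the SD and E values below are made up purely for illustration):

```python
# Hypothetical values: synchronization delay SD = 2 time units and
# average critical-section execution time E = 8 time units.
SD = 2
E = 8

throughput = 1 / (SD + E)  # CS executions completed per time unit
print(throughput)          # 0.1
```

Shortening the synchronization delay or the CS execution time raises throughput directly.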
LAMPORT’S ALGORITHM
Lamport developed a distributed mutual exclusion algorithm (Algorithm 9.1) as an illustration of his
clock synchronization scheme. The algorithm is fair in the sense that requests for the CS are executed in the
order of their timestamps, where time is determined by logical clocks. When a site processes a request for the
CS, it updates its local clock and assigns the request a timestamp. The algorithm executes CS requests in the
increasing order of timestamps. Every site Si keeps a queue, request_queuei, which contains mutual
exclusion requests ordered by their timestamps. This algorithm requires communication channels to deliver
messages in FIFO order.
A site Si enters the CS when the following two conditions hold:
L1: Si has received a message with timestamp larger than (tsi, i) from all other sites.
L2: Si’s own request is at the top of request_queuei.
When a site removes a request from its request queue, its own request may come to the top of the queue,
enabling it to enter the CS. Whenever a site receives a REQUEST, REPLY, or RELEASE message, it
updates its clock using the timestamp in the message.
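The queue-and-timestamp machinery described above can be sketched in Python. This is a single-process simulation, not a real distributed implementation: message delivery is modelled by direct method calls (which trivially satisfies the FIFO assumption), and all class and method names are invented for illustration.

```python
import heapq

class Site:
    """One site in a simulation sketch of Lamport's mutual exclusion."""

    def __init__(self, site_id):
        self.id = site_id
        self.clock = 0            # Lamport logical clock
        self.peers = []           # the other Site objects
        self.request_queue = []   # heap of (timestamp, site_id) pairs
        self.replies = set()      # peers that have replied to our request

    def tick(self, received=0):
        # advance the logical clock past any timestamp seen in a message
        self.clock = max(self.clock, received) + 1
        return self.clock

    def request_cs(self):
        ts = self.tick()
        heapq.heappush(self.request_queue, (ts, self.id))
        self.replies.clear()
        for p in self.peers:                    # broadcast REQUEST
            p.on_request(ts, self.id)

    def on_request(self, ts, sender_id):
        self.tick(ts)
        heapq.heappush(self.request_queue, (ts, sender_id))
        sender = next(p for p in self.peers if p.id == sender_id)
        sender.on_reply(self.tick(), self.id)   # timestamped REPLY

    def on_reply(self, ts, sender_id):
        self.tick(ts)
        self.replies.add(sender_id)

    def can_enter(self):
        # L1: a reply from every other site; L2: own request at queue head
        return (len(self.replies) == len(self.peers)
                and bool(self.request_queue)
                and self.request_queue[0][1] == self.id)

    def release_cs(self):
        heapq.heappop(self.request_queue)       # own request is at the head
        ts = self.tick()
        for p in self.peers:                    # broadcast RELEASE
            p.on_release(ts, self.id)

    def on_release(self, ts, sender_id):
        self.tick(ts)
        self.request_queue = [r for r in self.request_queue
                              if r[1] != sender_id]
        heapq.heapify(self.request_queue)

# two-site demo: A requests, enters, releases; then B can proceed
a, b = Site("A"), Site("B")
a.peers, b.peers = [b], [a]
a.request_cs()
print(a.can_enter())   # True
a.release_cs()
b.request_cs()
print(b.can_enter())   # True
```

Tuple comparison on (timestamp, site_id) pairs orders the queue by timestamp with identifiers breaking ties, which is the total order the algorithm relies on.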
Correctness
Theorem Lamport’s algorithm achieves mutual exclusion.
Proof The proof is by contradiction. Suppose two sites Si and Sj are executing the CS concurrently. For this to
happen conditions L1 and L2 must hold at both the sites concurrently. This implies that at some instant in
time, say t, both Si and Sj have their own requests at the top of their request_queues and condition L1 holds
at them. Without loss of generality, assume that Si’s request has a smaller timestamp than the request of Sj.
From condition L1 and the FIFO property of the communication channels, it is clear that at instant t the request
of Si must be present in request_queuej while Sj is executing its CS. This implies that Sj’s own request is
at the top of its own request_queue when a smaller timestamp request, Si’s request, is present in the
request_queuej – a contradiction! Hence, Lamport’s algorithm achieves mutual exclusion.
Performance
For each CS execution, Lamport’s algorithm requires (N −1) REQUEST messages, (N −1) REPLY
messages, and (N −1) RELEASE messages. Thus, Lamport’s algorithm requires 3(N −1) messages per CS
invocation. The synchronization delay in the algorithm is T, the average message delay.
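For instance, with N = 5 sites each CS invocation needs 4 REQUEST, 4 REPLY and 4 RELEASE messages. A one-line check of the count:

```python
def lamport_messages(N):
    # (N-1) REQUEST + (N-1) REPLY + (N-1) RELEASE per CS execution
    return 3 * (N - 1)

print(lamport_messages(5))   # 12
```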
RICART-AGRAWALA ALGORITHM
Essential requirements for mutual exclusion
• ME1: (safety) At most one process may execute in the critical section (CS) at a time.
• ME2: (liveness) Requests to enter and exit the critical section eventually succeed. Condition ME2 implies
freedom from both deadlock and starvation.
• ME3: (→ ordering) If one request to enter the CS happened-before another, then entry to the CS is granted in
that order.
Ricart and Agrawala developed an algorithm to implement mutual exclusion between N peer processes that
is based upon multicast. The basic idea is that processes that require entry to a critical section multicast a request
message, and can enter it only when all the other processes have replied to this message. The conditions under which a
process replies to a request are designed to ensure that conditions ME1–ME3 are met.
The processes p1, p2, …,pN bear distinct numeric identifiers. They are assumed to possess communication
channels to one another, and each process pi keeps a Lamport clock, updated according to the rules LC1 and LC2 of
Section 14.4. Messages requesting entry are of the form <T, pi >, where T is the sender’s timestamp and pi is the
sender’s identifier.
Each process records its state of being outside the critical section (RELEASED), wanting entry (WANTED)
or being in the critical section (HELD) in a variable state. The protocol is given in Figure 15.4.
• If a process requests entry and the state of all other processes is RELEASED, then all processes will reply
immediately to the request and the requester will obtain entry.
• If some process is in the state HELD, then that process will not reply to requests until it has finished with the
critical section, and so the requester cannot gain entry in the meantime.
• If two or more processes request entry at the same time, then whichever process’s request bears the lowest
timestamp will be the first to collect N – 1 replies, granting it entry next.
• If the requests bear equal Lamport timestamps, the requests are ordered according to the processes’
corresponding identifiers.
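These four cases collapse into a single decision rule. A minimal sketch (the function name is invented; requests are represented as (Lamport timestamp, process id) pairs so that Python tuple comparison gives exactly the identifier tie-break described above):

```python
def should_reply(state, my_request, incoming):
    """Should a process in `state` reply immediately to REQUEST `incoming`?
    state is 'RELEASED', 'WANTED' or 'HELD'; requests are (timestamp, id)."""
    if state == "HELD":
        return False     # defer until we exit the critical section
    if state == "WANTED" and my_request < incoming:
        return False     # our own request has priority: defer
    return True          # RELEASED, or the other request wins

# the concurrent requests of Figure 15.5: p1 stamps 41, p2 stamps 34
print(should_reply("WANTED", (34, 2), (41, 1)))  # False: p2 holds p1 off
print(should_reply("WANTED", (41, 1), (34, 2)))  # True: p1 replies at once
print(should_reply("RELEASED", None, (41, 1)))   # True: p3 replies at once
```

A deferred reply is sent later, when the holding process leaves the critical section.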
Algorithm Illustration
Consider a situation involving three processes, p1, p2 and p3, shown in Figure 15.5. Let us assume that p3 is not
interested in entering the critical section, and that p1 and p2 request entry concurrently. The timestamp of p1’s request
is 41, and that of p2 is 34. When p3 receives their requests, it replies immediately. When p2 receives p1’s request, it
finds that its own request has the lower timestamp and so does not reply, holding p1 off. However, p1 finds that p2’s
request has a lower timestamp than that of its own request and so replies immediately. On receiving this second reply,
p2 can enter the critical section. When p2 exits the critical section, it will reply to p1’s request and so grant it entry.
The advantage of this algorithm is that its synchronization delay is only one message transmission time.
Performance
For each CS execution, the Ricart–Agrawala algorithm requires (N – 1) REQUEST messages and (N
– 1) REPLY messages. Thus, it requires 2(N −1) messages per CS execution. The synchronization delay in
the algorithm is T.
Maekawa’s voting algorithm, in contrast, is deadlock-prone in its basic form: each process requests permission only
from its voting set rather than from all peers. Consider three processes, p1, p2 and p3, with voting sets V1 = {p1, p2},
V2 = {p2, p3} and V3 = {p3, p1}. If the three processes concurrently request entry to the critical section, then it is
possible for p1 to reply to itself and hold off p2, for p2 to reply to itself and hold off p3, and for p3 to reply to itself and
hold off p1. Each process has received one out of two replies, and none can proceed.
DEADLOCK DETECTION IN DISTRIBUTED SYSTEMS
Chandy–Misra–Haas algorithm for the AND model
The Chandy–Misra–Haas algorithm is a fully distributed deadlock detection algorithm for the AND model. It is
an edge-chasing, probe-based algorithm.
Algorithm
If a process makes a request for a resource which fails or times out,
• the process generates a probe message containing (probe originator id, sender id, receiver id) and sends it
to each of the processes holding one or more of its requested resources;
• a blocked process that receives the probe updates the sender and receiver fields and forwards it to every
process holding resources it is waiting for;
• if the probe ever returns to its originator, a cycle exists in the wait-for graph and a deadlock is declared.
Example
• In this case P1 initiates the probe message, so all the messages shown have P1 as the initiator.
• When the probe message is received by process P3, it modifies the sender and receiver fields and forwards
the probe to two more processes.
• Eventually, the probe message returns to process P1: deadlock!
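The probe chase in this example can be reproduced with a centralized sketch. This only simulates the message flow: it assumes the wait-for relationships are gathered into one map, whereas a real system would send each (initiator, sender, receiver) probe as a message between processes. All names are invented for illustration.

```python
def detect_deadlock(wait_for, initiator):
    """Chase (initiator, sender, receiver) probes along wait-for edges;
    deadlock is declared when a probe arrives back at its initiator."""
    probes = [(initiator, initiator, r) for r in wait_for.get(initiator, ())]
    forwarded = set()                   # edges that have carried the probe
    while probes:
        origin, sender, receiver = probes.pop()
        if receiver == origin:
            return True                 # probe returned to initiator: deadlock
        if (sender, receiver) in forwarded:
            continue                    # forward along each edge only once
        forwarded.add((sender, receiver))
        for r in wait_for.get(receiver, ()):
            probes.append((origin, receiver, r))
    return False

# P1 waits on P2, P2 on P3, and P3 on both P4 and P1 (as in the example)
cycle = {"P1": {"P2"}, "P2": {"P3"}, "P3": {"P1", "P4"}, "P4": set()}
print(detect_deadlock(cycle, "P1"))     # True: the probe comes back to P1

no_cycle = {"P1": {"P2"}, "P2": {"P3"}, "P3": set()}
print(detect_deadlock(no_cycle, "P1"))  # False: no probe returns
```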
Chandy–Misra–Haas algorithm for the OR model
Performance analysis
For every deadlock detection, the algorithm exchanges e query messages and e reply messages, where
e = n(n − 1) is the number of edges in the wait-for graph.
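As a worked instance of this bound (values chosen for illustration), with n = 4 processes the wait-for graph has at most e = 4 × 3 = 12 edges, so one detection exchanges at most 12 query and 12 reply messages:

```python
def or_model_messages(n):
    e = n * (n - 1)     # number of wait-for graph edges, e = n(n-1)
    return e, e         # (query messages, reply messages)

print(or_model_messages(4))   # (12, 12)
```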