
Aim: To study and implement Distributed Mutual Exclusion.

Theory: Mutual exclusion (often abbreviated to mutex) algorithms are used in concurrent programming to avoid the simultaneous use of a common resource, such as a global variable, by pieces of code called critical sections. A critical section is a piece of code in which a process or thread accesses a shared resource. The critical section by itself is not a mechanism or algorithm for mutual exclusion: a program, process, or thread can contain a critical section without any mechanism that enforces mutual exclusion. Examples of such resources are fine-grained flags, counters, or queues used to communicate between code that runs concurrently, such as an application and its interrupt handlers.

Synchronizing access to these resources is an acute problem because a thread can be stopped or started at any time. To illustrate: suppose a section of code is altering a piece of data over several program steps when another thread, perhaps triggered by some unpredictable event, starts executing. If the second thread reads the data while it is being overwritten, it sees an inconsistent and unpredictable state; if it writes to the same data, the resulting state will probably be unrecoverable. Shared data accessed by critical sections must therefore be protected, so that other processes that read from or write to it are excluded from running while the update is in progress. A mutex is also a common name for a program object that negotiates mutual exclusion among threads, also called a lock. There are both hardware and software solutions for enforcing mutual exclusion; both are described below.

Hardware Solutions:
On a uniprocessor system, a common way to achieve mutual exclusion inside kernels is to disable interrupts for the smallest possible number of instructions that touch the shared data structure, i.e., for the duration of the critical section.
This prevents interrupt handlers from running inside the critical section and also protects against interrupt-based process switches. On a machine in which several processors share memory, an indivisible test-and-set of a flag can be used in a tight loop to wait until the other processor clears the flag. Test-and-set performs both operations (reading the flag and setting it) without releasing the memory bus to another processor. When the code leaves the critical section, it clears the flag. This scheme is called a "spinlock" or "busy-wait". Similar atomic multiple-operation instructions, e.g., compare-and-swap, are commonly used for lock-free manipulation of linked lists and other data structures.
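The spinlock described above can be sketched in Python. This is a toy model, not a real spinlock: pure Python exposes no user-level atomic flag, so the non-blocking Lock.acquire(blocking=False) call stands in for the atomic test-and-set instruction, and the SpinLock class and worker function are illustrative names.

```python
import threading

class SpinLock:
    """Toy spinlock. A real spinlock loops on an atomic test-and-set
    instruction; here Lock.acquire(blocking=False) stands in for it,
    since pure Python has no user-level atomic flag."""

    def __init__(self):
        self._flag = threading.Lock()

    def acquire(self):
        # Busy-wait ("spin") until the test-and-set succeeds.
        while not self._flag.acquire(blocking=False):
            pass

    def release(self):
        # Leaving the critical section clears the flag.
        self._flag.release()

counter = 0
lock = SpinLock()

def worker():
    global counter
    for _ in range(10_000):
        lock.acquire()   # enter critical section
        counter += 1     # update the shared resource
        lock.release()   # exit critical section

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 with the lock; without it, updates could be lost
```

Four threads each perform 10,000 increments; because every read-modify-write of `counter` happens inside the critical section, no updates are lost.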

Software Solutions:
Besides the hardware-supported solutions, some software solutions exist that use busy-waiting to achieve the goal. Examples include Dekker's algorithm, Peterson's algorithm, Lamport's bakery algorithm, the black-white bakery algorithm, and Szymanski's algorithm.

Lamport's Bakery Algorithm:
Lamport's bakery algorithm is an algorithm devised by computer scientist Leslie Lamport, intended to improve the safety of shared-resource usage among multiple threads by means of mutual exclusion. In computer science, it is common for multiple threads to access the same resources simultaneously. Data corruption can occur if two or more threads try to write to the same memory location, or if one thread reads a memory location before another has finished writing to it. Lamport's bakery algorithm is one of many mutual exclusion algorithms designed to prevent concurrent threads from entering critical sections at the same time, eliminating the risk of data corruption. For a distributed system, Lamport extended the same timestamp-ordering idea into his distributed mutual exclusion algorithm, in which every node keeps a request queue ordered by logical timestamps.

Algorithm:
Requesting process:
1. Enters its request in its own queue (ordered by timestamps).
2. Sends a request to every node.
3. Waits for replies from all other nodes.
4. If its own request is at the head of the queue and all replies have been received, enters the critical section.
5. Upon exiting the critical section, sends a release message to every process.

Other processes:
1. After receiving a request, send a reply and enter the request in the request queue (ordered by timestamps).
2. After receiving a release message, remove the corresponding request from the request queue.
3. If the process's own request is at the head of the queue and all replies have been received, enter the critical section.

Conclusion: Hence we have studied and implemented distributed mutual exclusion successfully.
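The requesting-process and other-process steps above can be simulated inside a single Python process. This is a minimal sketch assuming reliable FIFO message delivery; the Node and Network classes and the three-node scenario are illustrative, not part of the algorithm's original statement.

```python
import heapq

class Node:
    """One process in Lamport's distributed mutual-exclusion algorithm.
    Messages are delivered by the Network helper below; in a real system
    they would travel over the network between machines."""

    def __init__(self, pid, all_pids):
        self.pid = pid
        self.peers = [p for p in all_pids if p != pid]
        self.clock = 0        # Lamport logical clock
        self.queue = []       # request queue, ordered by (timestamp, pid)
        self.replies = set()  # peers that have replied to our request
        self.stamp = None     # our own pending request, if any

    def tick(self, ts=0):
        self.clock = max(self.clock, ts) + 1

    def request_cs(self, net):
        self.tick()
        self.stamp = (self.clock, self.pid)
        heapq.heappush(self.queue, self.stamp)  # 1. enqueue own request
        self.replies.clear()
        for p in self.peers:                    # 2. REQUEST to every node
            net.send(p, ('REQUEST', self.stamp, self.pid))

    def can_enter(self):
        # 3+4. all replies received, and own request heads the queue
        return (self.stamp is not None
                and len(self.replies) == len(self.peers)
                and self.queue[0] == self.stamp)

    def release_cs(self, net):
        self.queue.remove(self.stamp)           # 5. drop own request...
        heapq.heapify(self.queue)
        for p in self.peers:                    # ...and RELEASE to everyone
            net.send(p, ('RELEASE', self.stamp, self.pid))
        self.stamp = None

    def receive(self, msg, net):
        kind, stamp, sender = msg
        self.tick(stamp[0])
        if kind == 'REQUEST':    # reply and enqueue the request
            heapq.heappush(self.queue, stamp)
            net.send(sender, ('REPLY', (self.clock, self.pid), self.pid))
        elif kind == 'REPLY':
            self.replies.add(sender)
        elif kind == 'RELEASE':  # remove the corresponding request
            self.queue.remove(stamp)
            heapq.heapify(self.queue)

class Network:
    """Delivers queued messages in FIFO order (an assumption of the model)."""
    def __init__(self, nodes):
        self.nodes = nodes
        self.inbox = []
    def send(self, to, msg):
        self.inbox.append((to, msg))
    def deliver_all(self):
        while self.inbox:
            to, msg = self.inbox.pop(0)
            self.nodes[to].receive(msg, self)

nodes = {pid: Node(pid, [0, 1, 2]) for pid in (0, 1, 2)}
net = Network(nodes)

nodes[0].request_cs(net)  # nodes 0 and 1 race for the critical section
nodes[1].request_cs(net)
net.deliver_all()

entered = [pid for pid in nodes if nodes[pid].can_enter()]
print(entered)            # [0]: the smaller (timestamp, pid) pair wins
nodes[0].release_cs(net)
net.deliver_all()
print([pid for pid in nodes if nodes[pid].can_enter()])  # [1]
```

Ties between equal timestamps are broken by process id, which is why comparing `(timestamp, pid)` tuples gives a total order on requests and guarantees that at most one node's request can head every queue at once.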
