1 Unit DC
Synchronous Execution:
1. Definition:
In synchronous execution, tasks or processes are coordinated in time, and their execution is synchronized with a common clock or
time reference.
2. Characteristics:
Tasks proceed in lock-step, waiting for one another at predefined synchronization points.
A common clock or time reference governs task execution.
Emphasizes predictability and ordered progress.
3. Advantages:
Determinism: Execution is predictable and easier to reproduce.
Simplicity: Program logic is easier to design and reason about.
Built-in Coordination: Synchronization points are implicit, so no explicit coordination mechanisms are needed.
4. Disadvantages:
Overhead: Synchronization can introduce overhead, especially if tasks have varying execution times.
Resource Utilization: Idle times may occur as tasks wait for synchronization points.
Limited Parallelism: May not fully exploit available parallelism if tasks have different execution times.
5. Use Cases:
Lock-step parallel algorithms, barrier-based scientific computations, and real-time control systems.
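The lock-step behaviour described above can be sketched with a barrier acting as the shared synchronization point. This is only an illustrative analogy (the worker count, round count, and trace list are invented for the sketch):

```python
import threading

# Sketch of synchronous execution: three workers advance in lock-step
# rounds, waiting for each other at a barrier (the common time reference).
NUM_WORKERS, NUM_ROUNDS = 3, 2
barrier = threading.Barrier(NUM_WORKERS)
trace = []
lock = threading.Lock()

def worker(wid):
    for rnd in range(NUM_ROUNDS):
        with lock:
            trace.append((rnd, wid))   # this round's "work"
        barrier.wait()                 # synchronization point: wait for all

threads = [threading.Thread(target=worker, args=(i,)) for i in range(NUM_WORKERS)]
for t in threads: t.start()
for t in threads: t.join()

# Every round-0 entry precedes every round-1 entry: no worker ran ahead.
print([rnd for rnd, _ in trace])       # [0, 0, 0, 1, 1, 1]
```

The barrier is where the disadvantages listed above show up: a fast worker sits idle at `barrier.wait()` until the slowest worker of that round arrives.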
Asynchronous Execution:
1. Definition:
In asynchronous execution, tasks or processes progress independently of each other, and there is no global notion of time or
synchronization points.
2. Characteristics:
Tasks progress at their own pace, without waiting for others to reach specific points.
No global clock or time reference governs task execution.
Emphasizes flexibility and responsiveness to varying workloads.
3. Advantages:
Parallelism: Can fully exploit available parallelism, as tasks are not constrained by synchronization.
Responsiveness: Can adapt to varying workloads and resource availability.
Resource Utilization: Minimizes idle times, leading to better resource utilization.
4. Disadvantages:
Complexity: Designing and reasoning about asynchronous systems can be more complex.
Non-determinism: Execution may be less predictable and more challenging to reproduce.
Coordination Challenges: Coordination between tasks may require explicit communication mechanisms.
5. Use Cases:
Event-driven servers, message-passing distributed systems, and workloads with highly variable task durations.
Comparison:
1. Coordination: Synchronous execution coordinates tasks implicitly through shared synchronization points; asynchronous execution requires explicit communication mechanisms.
2. Determinism: Synchronous execution is more predictable and reproducible; asynchronous execution can be non-deterministic.
3. Flexibility: Asynchronous execution adapts better to varying workloads and resource availability.
4. Complexity: Synchronous systems are conceptually simpler to design and reason about; asynchronous systems are harder to build and debug.
In summary, the choice between synchronous and asynchronous execution depends on the specific requirements of the system, including the
need for determinism, coordination, and adaptability to varying workloads.
This section discusses the communication primitives used in distributed systems, focusing on the concepts of blocking/non-blocking and synchronous/asynchronous operations. Here's a summary and explanation:
Send and Receive Primitives:
Send(): Sends data to a specified destination. It has parameters for the destination and the user buffer containing the data to be sent.
Receive(): Receives data from a specified source. It has parameters for the source and the user buffer into which the data is to be received.
Buffered versus Unbuffered Options:
Buffered Option: The standard option, in which data is first copied from the user buffer to a kernel buffer and then sent over the network.
Unbuffered Option: Data is sent directly from the user buffer onto the network, with no intermediate copy.
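The practical difference between the two options can be sketched as follows. The `kernel_queue`, `send_buffered`, and `send_unbuffered` names are hypothetical stand-ins for the kernel's buffering machinery, used only to show why the buffered option lets the sender reuse its buffer immediately:

```python
from collections import deque

# Hypothetical stand-in for the kernel's outgoing-message queue.
kernel_queue = deque()

def send_buffered(dest, user_buffer):
    # Buffered option: copy the data out of the user buffer first,
    # so the sender may overwrite its buffer right away.
    kernel_queue.append((dest, bytes(user_buffer)))

def send_unbuffered(dest, user_buffer):
    # Unbuffered option: no copy; the queued message aliases the user
    # buffer, which must not change until transmission completes.
    kernel_queue.append((dest, user_buffer))

buf = bytearray(b"hello")
send_buffered("Pj", buf)
buf[:] = b"XXXXX"                    # safe: the kernel holds its own copy
_, data = kernel_queue.popleft()
print(data)                          # b'hello'

buf2 = bytearray(b"world")
send_unbuffered("Pj", buf2)
buf2[:] = b"YYYYY"                   # unsafe: corrupts the in-flight message
_, data2 = kernel_queue.popleft()
print(bytes(data2))                  # b'YYYYY'
```

The second print shows the hazard the text alludes to: with the unbuffered option, the sender must not touch its buffer until the operation has completed.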
Synchronous Primitives:
Completion of the Send primitive occurs only after the corresponding Receive primitive is invoked and the receive operation is
completed.
For the Receive primitive, completion occurs when the data is copied into the receiver's user buffer.
Asynchronous Primitives:
The Send primitive is asynchronous if control returns to the invoking process after the data is copied out of the user buffer.
Asynchronous Receive primitives are not explicitly defined.
Blocking Primitives:
Control returns to the invoking process after the processing (whether synchronous or asynchronous) completes.
Blocking Wait() calls can be used to wait for the completion of the operation.
Non-blocking Primitives:
Control returns immediately after invocation, even if the operation has not completed.
A system-generated handle is returned, which can be used to check the status of completion.
The process can either periodically poll the handle or issue a blocking Wait() call that blocks until one of its parameter handles is posted.
A blocking Wait() call is commonly used after issuing a non-blocking primitive, to confirm completion before the buffer is reused.
Four versions of the Send primitive: synchronous blocking, synchronous non-blocking, asynchronous blocking, and asynchronous
non-blocking.
For the Receive primitive, there are blocking synchronous and non-blocking synchronous versions.
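The handle-and-Wait mechanism above can be roughly sketched in code. Here a `threading.Event` stands in for the system-generated handle and a helper thread stands in for the kernel; both are invented for illustration, not part of the text's model:

```python
import threading

network = []          # hypothetical stand-in for the communication subsystem

def nonblocking_send(data):
    # Control returns to the caller immediately; the returned handle is
    # posted by the "kernel" thread once the operation has completed.
    handle = threading.Event()
    def kernel_work():
        network.append(data)   # the actual data transfer
        handle.set()           # post the handle: operation complete
    threading.Thread(target=kernel_work, daemon=True).start()
    return handle

h = nonblocking_send("Q")      # returns at once with a handle
# ... the process could do other useful work here ...
h.wait()                       # blocking Wait(): block until the handle is posted
print(network)                 # ['Q']
```

Polling the handle instead would correspond to calling `h.is_set()` periodically rather than blocking in `h.wait()`.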
Timing Diagram:
The timing diagram illustrates three timelines for each process: process execution, user buffer data transfer, and
kernel/communication subsystem actions.
In summary, the text provides a comprehensive overview of the various communication primitives used in distributed systems, considering
different combinations of blocking/non-blocking and synchronous/asynchronous operations. The use of handles and the Wait mechanism is
highlighted for managing asynchronous and non-blocking primitives.
This section of the text describes different versions of Send and Receive primitives, illustrating their characteristics and usage in distributed
systems:
1. Blocking Synchronous Send (Figure 1.8(a)):
Data is copied from the user buffer to the kernel buffer and then sent over the network.
Control returns to the sender after the data is copied to the receiver's system buffer, and an acknowledgment is received.
The communication appears instantaneous due to the synchronous nature of the operation.
2. Non-blocking Synchronous Send:
Control returns to the sender as soon as the data copy from the user buffer to the kernel buffer is initiated.
A handle is set with the location for the user process to check for completion.
The user process can periodically check the handle or use a blocking Wait operation to wait for completion.
3. Blocking Asynchronous Send:
The user process invoking Send is blocked until the data is copied from the user's buffer to the kernel buffer.
For the unbuffered option, the user process is blocked until the data is copied from the user's buffer to the network.
4. Non-blocking Asynchronous Send:
The user process invoking Send is blocked only until the transfer of data from the user's buffer to the kernel buffer is initiated.
For the unbuffered option, the user process is blocked until the data transfer from the user's buffer to the network is initiated.
Control returns to the user process as soon as the transfer is initiated, with a handle for checking completion later using the Wait operation.
5. Blocking Receive:
The Receive call blocks until the expected data arrives and is written to the specified user buffer.
Control is returned to the user process only after the data has been received.
6. Non-blocking Receive:
The Receive call registers with the kernel and returns a handle for the user process to check for completion.
The kernel posts the handle after the expected data arrives and is copied to the user-specified buffer.
The user process can check for completion using the Wait operation.
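The blocking synchronous Send/Receive pair can be approximated with a queue whose `join()`/`task_done()` calls play the role of the receiver-side acknowledgment. This is only an analogy for the semantics, not the kernel-level implementation the text describes:

```python
import queue
import threading

channel = queue.Queue()
log = []

def sender():
    channel.put("data")     # hand the data to the communication subsystem
    channel.join()          # blocking synchronous send: wait until the
                            # receiver has actually taken the data
    log.append("send completed")

def receiver():
    item = channel.get()    # blocking receive: wait for the data to arrive
    log.append(f"received {item}")
    channel.task_done()     # the "acknowledgment" that unblocks the sender

t1 = threading.Thread(target=sender)
t2 = threading.Thread(target=receiver)
t1.start(); t2.start()
t1.join(); t2.join()
print(log)                  # ['received data', 'send completed']
```

Because the sender's `join()` can only return after the receiver's `task_done()`, the send is guaranteed to complete after the receive, which is exactly the ordering a synchronous Send promises.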
The text emphasizes that a synchronous Send simplifies program logic as it creates an illusion of instantaneous communication. However, it
also notes that this "instantaneity" is an illusion, and there might be delays. Non-blocking primitives, especially asynchronous ones, are useful
when dealing with large data items, allowing processes to perform other instructions in parallel. However, they increase the complexity for the
programmer, as they need to manage the completion of operations for meaningful buffer reuse. Overall, blocking primitives are considered
conceptually easier to use.
Designing and building distributed systems pose several challenges from a system perspective. The key functions that need to be addressed
include:
1. Communication:
Designing mechanisms for communication among processes, such as remote procedure call (RPC) and remote object invocation (ROI), and choosing between message-oriented and stream-oriented communication.
2. Processes:
Managing processes and threads at clients and servers, code migration, and the design of software and mobile agents.
3. Naming:
Devising robust schemes for names, identifiers, and addresses to locate resources and processes transparently and at scale.
Addressing challenges in naming for mobile systems without relying on static geographical topologies.
4. Synchronization:
Designing mechanisms for mutual exclusion, leader election, physical and logical clock synchronization, and global state recording.
5. Data Storage and Access:
Designing schemes for efficient data storage and access across the network.
Reconsidering traditional file system design in the context of a distributed system.
6. Consistency and Replication:
Replicating data for fast access, scalability, and the avoidance of bottlenecks, while maintaining consistency among the replicas.
7. Fault Tolerance:
Maintaining correct and efficient operation despite failures in links, nodes, and processes.
Implementing mechanisms like process resilience, reliable communication, distributed commit, checkpointing and recovery,
agreement and consensus, failure detection, and self-stabilization.
8. Security:
Incorporating various aspects of cryptography, secure channels, access control, key management (generation and distribution),
authorization, and secure group management.
Several large-scale distributed system projects, including Globe and Globus, are working on efficiently providing these functions. The Grid
infrastructure for large-scale distributed computing is another ambitious project addressing these challenges.
In synchronous communication, both the sender and receiver need to be ready and explicitly synchronize their communication. This is in
contrast to asynchronous communication, where processes can send and receive messages independently, and the ordering is not
predetermined.
```plaintext
process Pi:
    send_message(Q)
    receive_message(R)

process Pj:
    send_message(R)
    receive_message(Q)
```
In an asynchronous system, this code works correctly. Process Pi sends message Q and then waits to receive message R; meanwhile, process Pj sends message R and then waits to receive message Q. Because asynchronous sends are buffered and return immediately, both processes proceed to their receive calls and the exchange completes. The asynchrony allows processes to proceed independently.
However, if you execute this code on a system where communication is synchronous, it can lead to deadlock. Consider the following sequence:
1. Pi invokes send_message(Q) and blocks until Pj is ready to receive it.
2. Pj invokes send_message(R) and blocks until Pi is ready to receive it.
3. Neither process ever reaches its receive_message call.
Here, each process is blocked at its send, waiting for the other to execute the matching receive, resulting in a deadlock. In a synchronous system the communication is blocking, and both sender and receiver must be ready at the same time for the communication to occur. If the synchronization points are not coordinated properly, deadlock can occur.
This highlights an important point: algorithms designed for asynchronous systems may not be directly applicable or may need modification to
work correctly in synchronous systems. Asynchronous algorithms may rely on the flexibility of processes to make progress independently, and
imposing synchronous communication can introduce coordination challenges that need to be carefully addressed to avoid issues like deadlock.
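To make the contrast concrete, the sketch below (all names invented for illustration) runs a send-first exchange between two processes over buffered queues, i.e. the asynchronous case. Because `put()` returns immediately, both threads reach their receives and the exchange completes; replacing each queue with a rendezvous channel, where a send blocks until the matching receive runs, would leave both threads stuck at their first line, reproducing the deadlock described above:

```python
import queue
import threading

# Buffered (asynchronous) channels: put() returns without waiting
# for the receiver to be ready.
chan_pi_to_pj = queue.Queue()
chan_pj_to_pi = queue.Queue()
results = {}

def pi():
    chan_pi_to_pj.put("Q")                 # send_message(Q): returns at once
    results["pi"] = chan_pj_to_pi.get()    # receive_message(R)

def pj():
    chan_pj_to_pi.put("R")                 # send_message(R): returns at once
    results["pj"] = chan_pi_to_pj.get()    # receive_message(Q)

threads = [threading.Thread(target=pi), threading.Thread(target=pj)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(results.items()))             # [('pi', 'R'), ('pj', 'Q')]
```

The buffering in each `Queue` is what breaks the circular wait: each send completes without the other process's cooperation, so both processes eventually arrive at their receives.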