
Synchronous Execution:

1. Definition:

In synchronous execution, tasks or processes are coordinated in time, and their execution is synchronized with a common clock or
time reference.

2. Characteristics:

Processes progress in lockstep, executing predefined steps simultaneously.
There is a shared notion of time, and tasks are expected to complete within specific time intervals.
Synchronization points are well defined, and tasks wait for each other to reach these points before proceeding.

3. Advantages:

Simplicity: Synchronous execution is often simpler to design and reason about.
Determinism: Execution is predictable and reproducible, making debugging and analysis easier.
Coordination: Tasks naturally coordinate through synchronization points.

4. Disadvantages:

Overhead: Synchronization can introduce overhead, especially if tasks have varying execution times.
Resource Utilization: Idle times may occur as tasks wait for synchronization points.
Limited Parallelism: May not fully exploit available parallelism if tasks have different execution times.

5. Use Cases:

Real-time systems where tasks must meet strict deadlines.
Simulations where time synchronization is crucial.
Control systems with coordinated actions.

Asynchronous Execution:

1. Definition:

In asynchronous execution, tasks or processes progress independently of each other, and there is no global notion of time or
synchronization points.

2. Characteristics:

Tasks progress at their own pace, without waiting for others to reach specific points.
No global clock or time reference governs task execution.
Emphasizes flexibility and responsiveness to varying workloads.

3. Advantages:

Parallelism: Can fully exploit available parallelism, as tasks are not constrained by synchronization.
Responsiveness: Can adapt to varying workloads and resource availability.
Resource Utilization: Minimizes idle times, leading to better resource utilization.

4. Disadvantages:

Complexity: Designing and reasoning about asynchronous systems can be more complex.
Non-determinism: Execution may be less predictable and more challenging to reproduce.
Coordination Challenges: Coordination between tasks may require explicit communication mechanisms.

5. Use Cases:

Parallel computing where tasks have varying execution times.
Distributed systems where tasks operate independently across different nodes.
Systems with dynamic workloads that benefit from adaptive scheduling.

Comparison:

1. Coordination:

Synchronous: Tasks naturally coordinate through predefined synchronization points.
Asynchronous: Coordination may require explicit communication mechanisms.

2. Determinism:

Synchronous: Execution is deterministic and predictable.
Asynchronous: Execution may be non-deterministic, especially when tasks operate independently.

3. Flexibility:

Synchronous: Less flexible due to rigid synchronization points.
Asynchronous: More flexible; adapts well to varying workloads.

4. Resource Utilization:

Synchronous: May lead to idle times and underutilization of resources.
Asynchronous: Minimizes idle times and better utilizes available resources.

5. Complexity:

Synchronous: Typically simpler to design and reason about.
Asynchronous: Can be more complex due to the lack of a global time reference.

In summary, the choice between synchronous and asynchronous execution depends on the specific requirements of the system, including the
need for determinism, coordination, and adaptability to varying workloads.
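
As a minimal illustration of this difference (a sketch using Python threads; the task names and durations are made up for this example), the synchronous version forces every task to wait at a barrier, the shared synchronization point, before moving to the next step, while the asynchronous version lets each task run at its own pace:

```python
import threading
import time

DURATIONS = [0.1, 0.3, 0.2]  # hypothetical per-step work times, in seconds

def synchronous_workers():
    """Lockstep execution: every task waits at the barrier (the
    synchronization point) before starting the next step, so each round
    takes as long as the slowest task."""
    barrier = threading.Barrier(len(DURATIONS))

    def task(duration, name):
        for step in range(2):
            time.sleep(duration)          # this step's work
            barrier.wait()                # wait for all tasks to finish the step
            print(f"[sync]  {name} completed step {step}")

    threads = [threading.Thread(target=task, args=(d, f"task{i}"))
               for i, d in enumerate(DURATIONS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

def asynchronous_workers():
    """Independent execution: tasks progress at their own pace and never
    wait for each other, so fast tasks incur no idle time."""
    def task(duration, name):
        for step in range(2):
            time.sleep(duration)
            print(f"[async] {name} completed step {step}")

    threads = [threading.Thread(target=task, args=(d, f"task{i}"))
               for i, d in enumerate(DURATIONS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

if __name__ == "__main__":
    synchronous_workers()
    asynchronous_workers()
```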

This section discusses the communication primitives used in distributed systems, focusing on the distinction between blocking/non-blocking and synchronous/asynchronous operations.

1. Message Send and Receive Primitives:

Send(): Sends data to a specified destination. It has parameters for the destination and the user buffer containing the data to be
sent.
Receive(): Receives data from a specified source. It has parameters for the source and the user buffer into which the data is to be
received.

2. Buffered and Unbuffered Options:

Buffered Option: The standard option where data is copied from the user buffer to the kernel buffer before being sent over the
network.
Unbuffered Option: Data is sent directly from the user buffer to the network.
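
The sketch below illustrates these signatures and the buffered/unbuffered distinction. The parameter names, the `buffered` flag, and the `_network_transmit`/`_network_deliver` helpers are assumptions made for this example, not part of any real API:

```python
# Illustrative signatures only; all names below are assumptions for this sketch.

def Send(destination: str, user_buffer: bytes, buffered: bool = True) -> None:
    """Send the contents of user_buffer to the process at destination.

    Buffered option: data is first copied from the user buffer into a kernel
    buffer and then sent, so the caller may reuse user_buffer after the copy.
    Unbuffered option: data goes directly from the user buffer onto the
    network, so user_buffer must not be modified until the send completes.
    """
    data = bytes(user_buffer) if buffered else user_buffer  # kernel copy vs. direct use
    _network_transmit(destination, data)                    # hypothetical transport

def Receive(source: str, user_buffer: bytearray) -> int:
    """Receive data from source into user_buffer; return the byte count."""
    data = _network_deliver(source)                         # hypothetical transport
    user_buffer[:len(data)] = data
    return len(data)

def _network_transmit(destination: str, data: bytes) -> None:
    print(f"-> {destination}: {len(data)} bytes")

def _network_deliver(source: str) -> bytes:
    return b"example payload from " + source.encode()
```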

3. Synchronous and Asynchronous Primitives:

Synchronous Primitives:

Completion of the Send primitive occurs only after the corresponding Receive primitive is invoked and the receive operation is
completed.
For the Receive primitive, completion occurs when the data is copied into the receiver's user buffer.

Asynchronous Primitives:

The Send primitive is asynchronous if control returns to the invoking process after the data is copied out of the user buffer.
Asynchronous Receive primitives are not explicitly defined.

4. Blocking and Non-blocking Primitives:

Blocking Primitives:

Control returns to the invoking process only after the processing for the primitive (whether synchronous or asynchronous) completes.

Non-blocking Primitives:

Control returns immediately after invocation, even if the operation has not completed.
A system-generated handle is returned, which can later be used to check the status of completion, typically with a blocking Wait() call.

5. Handle and Wait Mechanism:

For non-blocking primitives, a handle is returned, and a process can use a Wait call to check for completion.
The Wait call can either periodically check the handle or block until one of the parameter handles is posted.
A blocking Wait() call is commonly used after issuing a non-blocking primitive to check the status of completion.
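
A minimal sketch of this pattern in Python, where a `concurrent.futures` Future plays the role of the system-generated handle and `wait()` plays the role of the blocking Wait() call; the `transmit()` function and the destination name are made-up stand-ins for the communication subsystem:

```python
from concurrent.futures import ThreadPoolExecutor, wait

executor = ThreadPoolExecutor(max_workers=4)

def transmit(destination, data):
    """Hypothetical stand-in for the kernel/communication subsystem copying
    the data out of the user buffer and pushing it onto the network."""
    return len(data)

def nonblocking_send(destination, user_buffer):
    """Returns immediately with a handle (here, a Future) rather than
    waiting for the operation to complete."""
    return executor.submit(transmit, destination, bytes(user_buffer))

handle = nonblocking_send("node-7", b"payload")  # control returns at once
# ... perform other useful work here ...
wait([handle])                                   # blocking Wait(): block until the handle is posted
print("send completed; bytes handed to the network:", handle.result())
```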

6. Versions of Send and Receive Primitives:

Four versions of the Send primitive: synchronous blocking, synchronous non-blocking, asynchronous blocking, and asynchronous
non-blocking.
For the Receive primitive, there are blocking synchronous and non-blocking synchronous versions.

7. Timing Diagram:

The timing diagram illustrates three timelines for each process: process execution, user buffer data transfer, and
kernel/communication subsystem actions.

In summary, the text provides a comprehensive overview of the various communication primitives used in distributed systems, considering
different combinations of blocking/non-blocking and synchronous/asynchronous operations. The use of handles and the Wait mechanism is
highlighted for managing asynchronous and non-blocking primitives.

This section of the text describes different versions of Send and Receive primitives, illustrating their characteristics and usage in distributed
systems:
1. Blocking Synchronous Send (Figure 1.8(a)):

Data is copied from the user buffer to the kernel buffer and then sent over the network.
Control returns to the sender after the data is copied to the receiver's system buffer, and an acknowledgment is received.
The communication appears instantaneous due to the synchronous nature of the operation.

2. Non-blocking Synchronous Send (Figure 1.8(b)):

Control returns to the sender as soon as the data copy from the user buffer to the kernel buffer is initiated.
A handle is set with the location for the user process to check for completion.
The user process can periodically check the handle or use a blocking Wait operation to wait for completion.

3. Blocking Asynchronous Send (Figure 1.8(c)):

The user process invoking Send is blocked until the data is copied from the user's buffer to the kernel buffer.
For the unbuffered option, the user process is blocked until the data is copied from the user's buffer to the network.

4. Non-blocking Asynchronous Send (Figure 1.8(d)):

The user process invoking Send is blocked until the transfer of data from the user's buffer to the kernel buffer is initiated.
For the unbuffered option, the user process is blocked until the data transfer from the user's buffer to the network is initiated.
Control returns to the user process as soon as the transfer is initiated, with a handle for checking completion later using the Wait
operation.

5. Blocking Receive (Figure 1.8(a)):

The Receive call blocks until the expected data arrives and is written to the specified user buffer.
Control is returned to the user process after the data is received.

6. Non-blocking Receive (Figure 1.8(b)):

The Receive call registers with the kernel and returns a handle for the user process to check for completion.
The kernel posts the location after the expected data arrives and is copied to the user-specified buffer.
The user process can check for completion using the Wait operation.
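
A sketch of the two Receive variants under the same assumptions (a Future as the handle; `deliver_from()` is a hypothetical stand-in for the kernel copying the arrived data into the user-specified buffer):

```python
import time
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=2)

def deliver_from(source):
    """Hypothetical stand-in for the kernel: blocks until the expected
    data has arrived and been copied into the user-specified buffer."""
    time.sleep(0.5)                       # simulate network delay
    return b"data from " + source.encode()

def blocking_receive(source):
    """Control returns only after the data has been received."""
    return deliver_from(source)

def nonblocking_receive(source):
    """Registers the receive and immediately returns a handle."""
    return executor.submit(deliver_from, source)

print(blocking_receive("node-1"))         # caller simply waits

handle = nonblocking_receive("node-2")    # caller keeps working
while not handle.done():                  # periodically check the handle ...
    time.sleep(0.05)                      # ... or do other useful work here
print(handle.result())
```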

The text emphasizes that a synchronous Send simplifies program logic because it gives the sender the illusion of instantaneous communication, although in reality the transfer may involve delays. Non-blocking primitives, especially asynchronous ones, are useful when dealing with large data items, since they allow the process to execute other instructions in parallel with the transfer. However, they add complexity for the programmer, who must track the completion of each operation before the corresponding buffer can safely be reused. Overall, blocking primitives are considered conceptually easier to use.

Designing and building distributed systems pose several challenges from a system perspective. The key functions that need to be addressed
include:

1. Communication:

Designing mechanisms for communication among processes, such as remote procedure call (RPC) and remote object invocation (ROI), and choosing between message-oriented and stream-oriented communication.

2. Processes:

Managing processes and threads at clients and servers.
Handling code migration.
Designing software and mobile agents for efficient execution.

3. Naming:

Devising robust schemes for names, identifiers, and addresses to locate resources and processes transparently and at scale.
Addressing challenges in naming for mobile systems without relying on static geographical topologies.

4. Synchronization:

Implementing mechanisms for synchronization and coordination among processes.
Addressing issues like mutual exclusion, leader election, clock synchronization (physical and logical), and global state recording.

5. Data Storage and Access:

Designing schemes for efficient data storage and access across the network.
Reconsidering traditional file system design in the context of a distributed system.

6. Consistency and Replication:

Managing replication of data objects for scalability and fast access.
Handling consistency among replicas/caches in a distributed setting.
Deciding the granularity of data access.

7. Fault Tolerance:

Maintaining correct and efficient operation despite failures in links, nodes, and processes.
Implementing mechanisms like process resilience, reliable communication, distributed commit, checkpointing and recovery,
agreement and consensus, failure detection, and self-stabilization.

8. Security:

Incorporating various aspects of cryptography, secure channels, access control, key management (generation and distribution),
authorization, and secure group management.

9. API and Transparency:

Providing a user-friendly API for communication and other specialized services.
Ensuring transparency in aspects like access, location, migration, replication, concurrency, and failure, to hide implementation details from users.

10. Scalability and Modularity:

Distributing algorithms, data objects, and services as much as possible.
Leveraging techniques like replication, caching, cache management, and asynchronous processing to achieve scalability.

Several large-scale distributed system projects, including Globe and Globus, are working on efficiently providing these functions. The Grid
infrastructure for large-scale distributed computing is another ambitious project addressing these challenges.


In synchronous communication, both the sender and receiver need to be ready and
explicitly synchronize their communication. This is in contrast to asynchronous
communication, where processes can send and receive messages independently, and
the ordering is not predetermined.

Consider the code snippet in Figure 6.4, in which each process first sends a message and then waits to receive one:

```plaintext
process Pi:
    send_message(Q)
    receive_message(R)

process Pj:
    send_message(R)
    receive_message(Q)
```

In an asynchronous system, this code works correctly. Process Pi sends message Q and then waits to receive message R; meanwhile, process Pj sends message R and then waits to receive message Q. Because an asynchronous send completes without waiting for the matching receive, both messages are buffered by the system, both receives are eventually satisfied, and the processes proceed independently.

However, if you execute the same code on a system where communication is synchronous, it can lead to deadlock. Consider the following sequence:

1. Pi invokes send_message(Q) and blocks until Pj issues the matching receive.
2. Pj invokes send_message(R) and blocks until Pi issues the matching receive.
3. Neither process ever reaches its receive_message call, so neither send can complete.

Here, each process is blocked on a send whose matching receive the other process will never post, resulting in a deadlock. In a synchronous system, communication is blocking, and a send and its matching receive must rendezvous: both processes must be ready at the same time for the communication to occur. If these synchronization points are not coordinated properly, deadlock can occur.

This highlights an important point: algorithms designed for asynchronous systems may not be directly applicable, or may need modification, to work correctly in synchronous systems. Asynchronous algorithms may rely on the flexibility of processes to make progress independently, and imposing synchronous communication can introduce coordination challenges that need to be carefully addressed to avoid issues like deadlock.
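
The following Python sketch (an illustrative simulation, not taken from the textbook) reproduces this behaviour: with buffered asynchronous channels both processes complete, while a toy rendezvous channel, which makes send block until the matching receive has taken the message, leaves both processes stuck in their send calls.

```python
import queue
import threading

class AsyncChannel:
    """Asynchronous (buffered) channel: send never blocks."""
    def __init__(self):
        self._q = queue.Queue()
    def send(self, msg):
        self._q.put(msg)
    def receive(self):
        return self._q.get()

class SyncChannel:
    """Toy one-shot synchronous (rendezvous) channel: send blocks until
    the matching receive has actually taken the message."""
    def __init__(self):
        self._q = queue.Queue(maxsize=1)
        self._taken = threading.Event()
    def send(self, msg):
        self._q.put(msg)
        self._taken.wait()                # rendezvous: wait for the receiver
    def receive(self):
        msg = self._q.get()
        self._taken.set()
        return msg

def run(channel_type, label, timeout=1.0):
    """Run the Figure 6.4 pattern: each process sends first, then receives."""
    to_pj, to_pi = channel_type(), channel_type()

    def pi():
        to_pj.send("Q")                               # Pi sends Q to Pj
        print(label, "Pi received", to_pi.receive())  # then waits for R

    def pj():
        to_pi.send("R")                               # Pj sends R to Pi
        print(label, "Pj received", to_pj.receive())  # then waits for Q

    threads = [threading.Thread(target=pi, daemon=True),
               threading.Thread(target=pj, daemon=True)]
    for t in threads:
        t.start()
    for t in threads:
        t.join(timeout)
    if any(t.is_alive() for t in threads):
        print(label, "deadlock: both processes are still blocked in send")

if __name__ == "__main__":
    run(AsyncChannel, "[asynchronous]")   # completes normally
    run(SyncChannel, "[synchronous]")     # reported as deadlocked
```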
