Distributed-Memory Parallel Programming With MPI: Supervised By: Dr. Shaima Hagras
Parallel Programming with MPI
Introduction
Topic one
What is MPI?
• MPI stands for Message Passing Interface.
• It is a standardized interface for exchanging messages between multiple computers running a
parallel program across distributed memory.
• The MPI standard defines the syntax and semantics of library routines useful to a wide
range of users writing portable message-passing programs in C, C++, and Fortran.
• MPI supports both point-to-point and collective communication between processes, as well as
derived data types, one-sided communication, dynamic process management, and parallel I/O.
• MPI is widely used for developing and running parallel applications on Windows as
well as on Linux and other operating systems.
[Figure: four computers (Com_1–Com_4) in a distributed-memory system; the processes are the actual running instances of the program.]
Why MPI?
1. MPI provides reduction mechanisms that are much simpler, and in most cases
more efficient, than looping over all ranks and collecting results by hand.
2. MPI offers collective communication in addition to point-to-point communication.
3. Collective communication implies a synchronization point among processes:
every process must reach that point in its code before any of them can
continue executing.
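As a minimal sketch of point 1 (assuming the usual mpicc/mpirun toolchain; the per-rank value is illustrative), a single MPI_Reduce call sums one integer from every rank instead of collecting results rank by rank:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int local = rank + 1;  /* each rank contributes one value */
    int total = 0;

    /* One collective call replaces a loop of point-to-point receives. */
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over all ranks = %d\n", total);

    MPI_Finalize();
    return 0;
}
```

Run with, e.g., `mpirun -np 4 ./a.out`; only the root (rank 0) holds the final sum.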
An MPI broadcast: The “root” process (rank 1 in this example) sends the same message to all
others. Every rank in the communicator must call MPI_Bcast() with the same root argument.
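The broadcast described above can be sketched in C as follows; rank 1 is used as the root to match the example, so the program needs at least two processes (the payload value 42 is illustrative):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int value = 0;
    if (rank == 1)      /* rank 1 is the root in this example */
        value = 42;     /* illustrative payload */

    /* Every rank calls MPI_Bcast with the same root argument (1). */
    MPI_Bcast(&value, 1, MPI_INT, 1, MPI_COMM_WORLD);

    printf("rank %d now has value %d\n", rank, value);

    MPI_Finalize();
    return 0;
}
```

After the call, every rank's `value` holds the root's data; before it, only the root's buffer is meaningful.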
Data distribution :
Data distribution in MPI refers to how data is distributed among processes in a parallel computing
environment that uses the Message Passing Interface (MPI).
MPI allows processes to communicate and exchange data in a distributed-memory system. Proper
data distribution is crucial for load balancing and efficient parallel execution of algorithms.
MPI_Scatter
Although the root process (process zero) contains the entire array of data,
MPI_Scatter copies the appropriate element into the receive buffer of
each process. Here is what the function prototype of MPI_Scatter looks like.
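The prototype, as defined by the MPI standard, is:

```c
int MPI_Scatter(const void *sendbuf, int sendcount, MPI_Datatype sendtype,
                void *recvbuf, int recvcount, MPI_Datatype recvtype,
                int root, MPI_Comm comm);
```

The send arguments (`sendbuf`, `sendcount`, `sendtype`) are significant only at the root; every process, including the root, receives `recvcount` elements into `recvbuf`.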
MPI_Gather:
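MPI_Gather is the inverse of MPI_Scatter: each process sends the contents of its send buffer to the root, which collects the pieces in rank order. Its standard prototype mirrors that of MPI_Scatter:

```c
int MPI_Gather(const void *sendbuf, int sendcount, MPI_Datatype sendtype,
               void *recvbuf, int recvcount, MPI_Datatype recvtype,
               int root, MPI_Comm comm);
```

Here the receive arguments (`recvbuf`, `recvcount`, `recvtype`) are significant only at the root, and `recvcount` is the number of elements received from each process, not the total.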