Lab Assessment 10 Parallel & Distributed Computing (L31+32) : Dated: 18/10/2020 Muskan Agrawal 18BCE0707
LAB ASSESSMENT 10
PARALLEL & DISTRIBUTED COMPUTING
(L31+32)
CODE:
#include <stdio.h>
#include <math.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int b[4] = {0, 0, 0, 0};   /* receive buffer; only significant at root 3 */
    int sndcnt = 1, revcnt = 1;

    /* Each process contributes its rank; root 3 gathers them in rank order,
       so after the call only rank 3's b holds {0,1,2,3}. */
    MPI_Gather(&rank, sndcnt, MPI_INT, b, revcnt, MPI_INT, 3, MPI_COMM_WORLD);

    printf("Processor: %d and b:{%d,%d,%d,%d}\n", rank, b[0], b[1], b[2], b[3]);
    MPI_Finalize();
    return 0;
}
CODE SNIPPET:
EXECUTION:
REMARKS:
1. This assignment was to gain insight into the MPI_Gather function; above is a
snapshot of the output showing the value of array b for each process rank.
2. MPI_Gather is the inverse of MPI_Scatter. Instead of spreading elements
from one process to many processes, MPI_Gather takes elements from
many processes and gathers them to one single process. This routine is
highly useful to many parallel algorithms, such as parallel sorting and
searching.
3. MPI_Gather takes elements from each process and gathers them to the
root process. The elements are ordered by the rank of the process from
which they were received. The function prototype for MPI_Gather is
identical to that of MPI_Scatter. In MPI_Gather, only the root process
needs to have a valid receive buffer.