LAB ASSESSMENT 10
PARALLEL & DISTRIBUTED COMPUTING (L31+32)
Dated: 18/10/2020
MUSKAN AGRAWAL, 18BCE0707



AIM: Assume the variable rank contains the process rank and root is 3. What will be stored in array b[] on each of four processes if each executes the following code fragment?

int b[4] = {0, 0, 0, 0};

MPI_Gather(&rank, 1, MPI_INT, b, 1, MPI_INT, root, MPI_COMM_WORLD);

Hint. The function prototype is as follows:

int MPI_Gather(void *sendbuf,         // pointer to send buffer
               int sendcount,         // number of items to send
               MPI_Datatype sendtype, // type of send buffer data
               void *recvbuf,         // pointer to receive buffer
               int recvcount,         // items to receive per process
               MPI_Datatype recvtype, // type of receive buffer data
               int root,              // rank of receiving process
               MPI_Comm comm);        // MPI communicator to use

CODE:
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int b[4] = {0, 0, 0, 0};
    int sndcnt = 1, revcnt = 1;

    /* Each process sends its own rank; process 3 (the root)
       gathers the ranks into b, ordered by sender rank. */
    MPI_Gather(&rank, sndcnt, MPI_INT, b, revcnt, MPI_INT, 3, MPI_COMM_WORLD);

    printf("Processor: %d and b:{%d,%d,%d,%d}\n", rank, b[0], b[1], b[2], b[3]);
    MPI_Finalize();
    return 0;
}

CODE SNIPPET:

EXECUTION:

REMARKS:
1. This assignment was to gain insight into the function MPI_Gather; the snapshot above shows the value of array b for each process rank.
2. MPI_Gather is the inverse of MPI_Scatter. Instead of spreading elements from one process to many processes, it takes elements from many processes and gathers them at one single process. This routine is highly useful in many parallel algorithms, such as parallel sorting and searching.
3. MPI_Gather takes elements from each process and gathers them at the root process, ordered by the rank of the process from which they were received. Its function prototype is identical to that of MPI_Scatter, and only the root process needs to have a valid receive buffer. Hence, with root = 3 here, rank 3 ends up with b = {0, 1, 2, 3}, while every other rank's b remains {0, 0, 0, 0}.
