Assignment Individual 1 - Parallel Programming

COURSE : BCS3413 Parallel Programming

STUDENT NAME : Azrul Bin Hazizan (B01150007)


ASSIGNMENT : Assignment Individual 1
SUBMISSION DATE : 10th October 2017

ASSIGNMENT TOPIC : MPI Programming

STUDENT DECLARATION

I declare that this material, which I now submit for assessment, is entirely my own work and has
not been taken from the work of others, save and to the extent that such work has been cited and
acknowledged within the text of my work.

I understand that plagiarism, collusion, and copying are grave and serious offences in the university
and accept the penalties that would be imposed should I engage in plagiarism, collusion or copying.
I have read and understood the Assignment Regulations set out in the assignment documentation.

I have identified and included the source of all facts, ideas, opinions, and viewpoints of others in
the assignment references. Direct quotations from books, journal articles, internet sources, module
text, or any other source whatsoever are acknowledged and the sources cited are identified in the
assignment references.

This assignment, or any part of it, has not been previously submitted by me or any other person for
assessment on this or any other course of study.

DATE : 10th October 2017


SIGNATURE :

MARKS :
COMMENT :
Question 1

Suppose comm_sz = 4 and suppose that x is a vector with n = 14 components.


a) How would the components of x be distributed among the processes in a program that used
a block distribution?
b) How would the components of x be distributed among the processes in a program that used
a cyclic distribution?
c) How would the components of x be distributed among the processes in a program that used
a block-cyclic distribution with block size b=2?
You should try to make your distributions general so that they could be used regardless of what
comm_sz and n are. You should also try to make your distributions “fair” so that if q and r are any
two processes, the difference between the number of components assigned to q and the number of
components assigned to r is as small as possible.

Answer:

a) Block Distribution
Let q = n / comm_sz (integer division) and r = n mod comm_sz. In general, the first r processes are assigned q + 1 consecutive components each and the remaining processes are assigned q each, so every process holds one contiguous block and no two processes differ by more than one component. With n = 14 and comm_sz = 4:

Process   Components
0         0 1 2 3
1         4 5 6 7
2         8 9 10
3         11 12 13

b) Cyclic Distribution
Component i is assigned to process i mod comm_sz, i.e. process q gets components q, q + comm_sz, q + 2*comm_sz, and so on. With n = 14 and comm_sz = 4:

Process   Components
0         0 4 8 12
1         1 5 9 13
2         2 6 10
3         3 7 11

c) Block-Cyclic Distribution (block size b = 2)
The components are grouped into blocks of b consecutive components, and the blocks are dealt out cyclically: component i is assigned to process (i / b) mod comm_sz. With n = 14, comm_sz = 4, and b = 2:

Process   Components
0         0 1 8 9
1         2 3 10 11
2         4 5 12 13
3         6 7
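
These mappings can be stated generally for any comm_sz and n. A minimal C sketch of the three owner functions (the function names are my own, chosen for illustration; p stands for comm_sz):

int block_owner(int i, int n, int p) {
    /* First r = n % p processes own q + 1 components each, the rest own q. */
    int q = n / p, r = n % p;
    int cut = r * (q + 1);          /* components held by the first r procs */
    return (i < cut) ? i / (q + 1) : r + (i - cut) / q;
}

int cyclic_owner(int i, int p) {
    return i % p;                   /* deal components out one at a time */
}

int block_cyclic_owner(int i, int p, int b) {
    return (i / b) % p;             /* deal components out b at a time */
}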
Question 2

Suppose comm_sz = 8 and n = 16.


a) Draw a diagram that shows how MPI_Scatter can be implemented using tree-structured
communication with comm_sz processes when process 0 needs to distribute an array
containing n elements.
Answer:
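
Because comm_sz = 8 is a power of two, the scatter can complete in log2(8) = 3 stages: at each stage a process forwards the half of its remaining data destined for its partner's subtree. A text sketch of the tree (assuming a block distribution in which process q ends up with elements 2q and 2q + 1):

Stage 1: 0 -> 4 (elements 8-15)
Stage 2: 0 -> 2 (elements 4-7),   4 -> 6 (elements 12-15)
Stage 3: 0 -> 1 (elements 2-3),   2 -> 3 (elements 6-7),
         4 -> 5 (elements 10-11), 6 -> 7 (elements 14-15)

After stage 3, every process holds its own n / comm_sz = 2 elements.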

b) Draw a diagram that shows how MPI_Gather can be implemented using tree-structured
communication when an n-element array that has been distributed among comm_sz
processes needs to be gathered onto process 0.
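Answer:

The gather is the mirror image of the scatter sketched in part (a): data flows up the same tree in log2(8) = 3 stages, and the amount forwarded doubles at each stage (again assuming process q starts with elements 2q and 2q + 1):

Stage 1: 1 -> 0, 3 -> 2, 5 -> 4, 7 -> 6 (2 elements each)
Stage 2: 2 -> 0, 6 -> 4 (4 elements each)
Stage 3: 4 -> 0 (8 elements)

After stage 3, process 0 holds the complete n = 16 element array.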
Question 3

The following are some collective communication functions available in MPICH. Briefly explain their
use with an example.
MPI_Alltoall ( ), MPI_Scatterv ( ), MPI_Gatherv ( ), MPI_Alltoallv ( ), MPI_Allgatherv ( )

Answer:

a. MPI_Alltoall ( )
- Sends a distinct block of data from every process to every other process; all processes are both senders and receivers, so there is no root.
- E.g. suppose there are four processes, each with an 8-element array u. When those processes go through the all-to-all operation
MPI_Alltoall (u, 2, MPI_INT, v, 2, MPI_INT, MPI_COMM_WORLD);
the data will be distributed into array v as shown below:
Rank   array u                    array v
0      10 11 12 13 14 15 16 17   10 11 20 21 30 31 40 41
1      20 21 22 23 24 25 26 27   12 13 22 23 32 33 42 43
2      30 31 32 33 34 35 36 37   14 15 24 25 34 35 44 45
3      40 41 42 43 44 45 46 47   16 17 26 27 36 37 46 47
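
A minimal runnable sketch of this example (the values in u are assumptions chosen to reproduce the table above; run with 4 processes):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, u[8], v[8];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Rank r holds (r + 1)*10 .. (r + 1)*10 + 7, as in the table above. */
    for (int i = 0; i < 8; i++)
        u[i] = (rank + 1) * 10 + i;

    /* Chunk j (2 ints) of u goes to rank j; v collects 2 ints per rank. */
    MPI_Alltoall(u, 2, MPI_INT, v, 2, MPI_INT, MPI_COMM_WORLD);

    printf("Rank %d: v = %d %d %d %d %d %d %d %d\n", rank,
           v[0], v[1], v[2], v[3], v[4], v[5], v[6], v[7]);

    MPI_Finalize();
    return 0;
}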

b. MPI_Scatterv ( )
- Scatters a buffer in parts to all tasks in a group; unlike MPI_Scatter, each part may have a different size and an arbitrary displacement in the send buffer.
- Example:
o The root process scatters sets of 100 ints to the other processes, but the sets of 100 are stride ints apart in the sending buffer. This requires MPI_Scatterv with its displs argument (assume stride >= 100); a sketch follows below.
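
A C sketch of this example, adapted from the scatter examples in the MPI standard (stride, root, and the contents of sendbuf are assumed to be supplied by the caller):

#include <mpi.h>
#include <stdlib.h>

/* Root scatters one set of 100 ints to each process; set i starts at
   offset i * stride in sendbuf. */
void scatterv_with_stride(int stride, int root, MPI_Comm comm) {
    int gsize, rbuf[100];
    MPI_Comm_size(comm, &gsize);

    int *sendbuf = malloc(gsize * stride * sizeof(int)); /* filled at root */
    int *displs  = malloc(gsize * sizeof(int));
    int *scounts = malloc(gsize * sizeof(int));
    for (int i = 0; i < gsize; i++) {
        displs[i]  = i * stride;   /* where process i's set begins */
        scounts[i] = 100;          /* every process receives 100 ints */
    }

    MPI_Scatterv(sendbuf, scounts, displs, MPI_INT,
                 rbuf, 100, MPI_INT, root, comm);

    free(sendbuf); free(displs); free(scounts);
}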
c. MPI_Gatherv ( )
- Gathers data into specified locations from all processes in a group; each process may send a different count, and the root may place each contribution at an arbitrary displacement in its receive buffer.
- Example:
o Each process sends 100 ints to root, but root places each set (of 100) stride ints apart at the receiving end. This uses MPI_Gatherv and the displs argument (assume stride >= 100); a sketch follows below.
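
A matching C sketch, adapted from the gather examples in the MPI standard (stride and root are again assumed):

#include <mpi.h>
#include <stdlib.h>

/* Every process contributes 100 ints; root stores process i's set at
   offset i * stride in rbuf. */
void gatherv_with_stride(int stride, int root, MPI_Comm comm) {
    int gsize, sendarray[100];   /* this process's data to contribute */
    MPI_Comm_size(comm, &gsize);

    int *rbuf    = malloc(gsize * stride * sizeof(int)); /* used at root */
    int *displs  = malloc(gsize * sizeof(int));
    int *rcounts = malloc(gsize * sizeof(int));
    for (int i = 0; i < gsize; i++) {
        displs[i]  = i * stride;   /* where process i's set is placed */
        rcounts[i] = 100;          /* every process sends 100 ints */
    }

    MPI_Gatherv(sendarray, 100, MPI_INT,
                rbuf, rcounts, displs, MPI_INT, root, comm);

    free(rbuf); free(displs); free(rcounts);
}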
d. MPI_Alltoallv ( )
- Sends data from all processes to all processes, with a separate count and displacement for every source-destination pair, so each pair of processes can exchange a different amount of data.
- Example: each process sends (q + 1) ints to process q, so process r receives (r + 1) ints from every process; a sketch follows below.
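
A hedged C sketch of that pattern (the counts and values are illustrative assumptions):

#include <mpi.h>
#include <stdlib.h>

/* Each process sends (dest + 1) ints to process dest, so process `rank`
   receives (rank + 1) ints from every one of the `size` processes. */
void alltoallv_demo(MPI_Comm comm) {
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    int *scounts = malloc(size * sizeof(int));
    int *sdispls = malloc(size * sizeof(int));
    int *rcounts = malloc(size * sizeof(int));
    int *rdispls = malloc(size * sizeof(int));
    int stotal = 0, rtotal = 0;
    for (int i = 0; i < size; i++) {
        scounts[i] = i + 1;    sdispls[i] = stotal;  stotal += scounts[i];
        rcounts[i] = rank + 1; rdispls[i] = rtotal;  rtotal += rcounts[i];
    }

    int *sendbuf = malloc(stotal * sizeof(int));
    int *recvbuf = malloc(rtotal * sizeof(int));
    for (int i = 0; i < stotal; i++)
        sendbuf[i] = rank;          /* tag outgoing data with sender's rank */

    MPI_Alltoallv(sendbuf, scounts, sdispls, MPI_INT,
                  recvbuf, rcounts, rdispls, MPI_INT, comm);

    /* recvbuf now holds (rank + 1) ints from each rank 0 .. size-1 */
    free(scounts); free(sdispls); free(rcounts); free(rdispls);
    free(sendbuf); free(recvbuf);
}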

e. MPI_Allgatherv ( )
- Gathers data from all processes and delivers the concatenated result to all of them; each process may contribute a different amount of data.
- Example: process q contributes (q + 1) ints, and afterwards every process holds the same concatenation of all the contributions; a sketch follows below.
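
A hedged C sketch (again, counts and values are illustrative assumptions):

#include <mpi.h>
#include <stdlib.h>

/* Process q contributes (q + 1) ints; afterwards every process holds the
   same concatenation of all contributions in recvbuf. */
void allgatherv_demo(MPI_Comm comm) {
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    int *rcounts = malloc(size * sizeof(int));
    int *displs  = malloc(size * sizeof(int));
    int total = 0;
    for (int i = 0; i < size; i++) {
        rcounts[i] = i + 1;   /* process i's contribution size */
        displs[i]  = total;   /* where it lands in recvbuf */
        total += rcounts[i];
    }

    int *sendbuf = malloc((rank + 1) * sizeof(int));
    int *recvbuf = malloc(total * sizeof(int));
    for (int i = 0; i <= rank; i++)
        sendbuf[i] = rank;    /* tag the contribution with our rank */

    MPI_Allgatherv(sendbuf, rank + 1, MPI_INT,
                   recvbuf, rcounts, displs, MPI_INT, comm);

    free(rcounts); free(displs); free(sendbuf); free(recvbuf);
}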
