PDC-Lab 21BCE10419
Submitted by:
BACHELOR OF TECHNOLOGY
in
CSE (CORE)
Submitted to:
INDEX
Lab No: 1
OPENMP – BASIC AND FUNDAMENTALS
Aim:
To understand the basic concepts and fundamentals of the OpenMP parallel programming model and its application in parallel computing.
Objective:
Introduction:
Key Points:
1. Fork-Join Model: An OpenMP program starts as a single master thread, which forks a team of threads at each parallel region and joins them back at its end.
2. Shared Memory Model: OpenMP operates on the shared memory model, where multiple
threads access shared data, simplifying communication and synchronization.
3. Directives: OpenMP directives like parallel, for, and sections are added to code to specify
areas for parallel execution, making code parallelization straightforward.
4. Synchronization: OpenMP provides mechanisms like barriers and locks for synchronizing
access to shared resources, ensuring data integrity in parallel execution.
5. Data Scope: Understanding data scope (private, shared, etc.) is crucial for managing data
access and ensuring correctness in parallel programs.
6. Performance: Considerations like load balancing and minimizing overhead are important
for optimizing performance in parallel programs.
7. Best Practices: Adhering to best practices ensures efficient utilization of OpenMP, leading
to better performance and scalability in parallel programs.
Conclusion:
In conclusion, this lab provided a foundational understanding of OpenMP and its role in parallel
programming. By grasping basic concepts, directives, synchronization mechanisms, and
performance considerations, learners are better equipped to write efficient parallel programs
using OpenMP. Understanding these fundamentals lays a solid groundwork for exploring more
advanced parallel programming techniques.
Lab No: 2
OPENMP – PROGRAM FOR VECTOR ADDITION
Aim:
To examine a scenario in which we take two one-dimensional arrays, each of size 5. We will then create 5 threads, each responsible for one addition operation.
Procedure:
PROGRAM:
Vector addition:
#include <stdlib.h>
#include <stdio.h>
#include <omp.h>

#define ARRAY_SIZE 20
#define NUM_THREADS 20

int main(void) {
    int *a;
    int *b;
    int *c;
    int n = ARRAY_SIZE;
    int total_threads = NUM_THREADS;
    int n_per_thread;
    int i;

    a = (int *) malloc(sizeof(int) * n);
    b = (int *) malloc(sizeof(int) * n);
    c = (int *) malloc(sizeof(int) * n);

    // Initialize the input vectors
    for (i = 0; i < n; i++) {
        a[i] = i;
        b[i] = i;
    }

    omp_set_num_threads(total_threads);
    n_per_thread = n / total_threads;

    // Each thread adds its share of the elements
    #pragma omp parallel for schedule(static, n_per_thread)
    for (i = 0; i < n; i++) {
        c[i] = a[i] + b[i];
    }

    printf("i\ta[i]\t+\tb[i]\t=\tc[i]\n");
    for (i = 0; i < n; i++) {
        printf("%d\t%d\t\t%d\t\t%d\n", i, a[i], b[i], c[i]);
    }

    free(a); free(b); free(c);
    return 0;
}
OUTPUT:
RESULT:
The program performs element-wise addition of the two arrays in parallel, with the loop iterations divided among the threads, and prints each sum c[i] = a[i] + b[i].
Ex. No: 3
OPENMP – PROGRAM FOR DOT PRODUCT
AIM:
To write an OpenMP program that computes the dot product of two vectors in parallel.
PROCEDURE:
PROGRAM:
Dot product:
// Dot Product
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define SIZ 5

int main(void) {
    float a[SIZ], b[SIZ], dotprod, dpp;
    int i, tid;

    // Initialize the two input vectors
    for (i = 0; i < SIZ; i++) {
        a[i] = 1.0 * (i + 1);
        b[i] = 1.0 * (i + 2);
    }
    for (i = 0; i < SIZ; i++)
        printf("a[%d]=%.1f b[%d]=%.1f\n", i, a[i], i, b[i]);
    printf("\n");

    dotprod = 0.0;
    #pragma omp parallel private(tid, dpp) shared(dotprod)
    {
        tid = omp_get_thread_num();
        dpp = 0.0;

        // Each thread accumulates a partial dot product
        #pragma omp for
        for (i = 0; i < SIZ; i++)
            dpp += a[i] * b[i];

        // Combine the partial sums one thread at a time
        #pragma omp critical
        dotprod += dpp;

        printf("thread %d\n", tid);
    }

    printf("Dot product = %f\n", dotprod);
    return 0;
}
OUTPUT:
RESULT:
For the dot product: the parallelized calculation exhibited a notable speedup, especially for large input vectors. A reduction(+:dotProduct) clause can handle the parallel reduction in place of the explicit critical section, and both approaches ensure accurate results.
Ex. No: 4
OpenMP program to demonstrate the sharing of loop iterations among a number of threads
AIM:
To illustrate the sharing of loop iterations among multiple threads using OpenMP
directives with a chunk size of 10.
PROCEDURE:
PROGRAM:
#include <stdio.h>
#include <omp.h>

#define CHUNK_SIZE 10
#define ARRAY_SIZE 100

int main() {
    int array[ARRAY_SIZE];
    int i;

    // Initialize array
    for (i = 0; i < ARRAY_SIZE; i++) {
        array[i] = i;
    }

    // Distribute iterations among the threads in chunks of CHUNK_SIZE
    #pragma omp parallel for schedule(static, CHUNK_SIZE)
    for (i = 0; i < ARRAY_SIZE; i++) {
        printf("Thread %d processes iteration %d\n", omp_get_thread_num(), i);
    }

    return 0;
}
OUTPUT:
RESULT:
The program demonstrates the sharing of loop iterations among threads using
OpenMP directives with a chunk size of 10. Each thread processes a chunk of 10
iterations sequentially, showcasing parallel execution of loop iterations.
Adjusting the chunk size may affect thread workload distribution and program
performance.
Ex. No: 5
OpenMP program to demonstrate the sharing of the work of sections among threads
AIM:
Write an OpenMP program to demonstrate the sharing of the work of sections among threads. Perform arithmetic operations on a one-dimensional array, with the load of each section shared by the threads.
PROCEDURE:
PROGRAM:
#include <stdio.h>
#include <omp.h>

#define ARRAY_SIZE 100

int main() {
    int array[ARRAY_SIZE];
    int i;
    int sum = 0;
    int max_value = 0;

    // Initialize array
    for (i = 0; i < ARRAY_SIZE; i++) {
        array[i] = i + 1; // Assign values from 1 to 100
    }

    // Share the two independent tasks among the threads
    #pragma omp parallel sections
    {
        #pragma omp section
        {   // Section 1: sum of the array elements
            int j;
            for (j = 0; j < ARRAY_SIZE; j++)
                sum += array[j];
        }
        #pragma omp section
        {   // Section 2: maximum value in the array
            int j;
            for (j = 0; j < ARRAY_SIZE; j++)
                if (array[j] > max_value)
                    max_value = array[j];
        }
    }

    printf("Sum = %d, Max = %d\n", sum, max_value);
    return 0;
}
OUTPUT:
RESULT:
The program demonstrates the sharing of work among threads using OpenMP
sections. Two sections are defined: one calculates the sum of array elements, and the
other finds the maximum value in the array. Each section's workload is distributed
among multiple threads, showcasing parallel execution and efficient utilization of
resources.
Exp. No: 6
MPI – BASICS OF MPI.
AIM:
To write an MPI program that distributes the summation of the numbers 1 to 10 among processes and combines the partial sums with MPI_Reduce.
PROCEDURE:
PROGRAM:
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size, i;
    int local_sum = 0, sum = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Each process sums its share of the numbers 1..10
    for (i = rank + 1; i <= 10; i += size)
        local_sum += i;

    // Combine the partial sums on process 0
    MPI_Reduce(&local_sum, &sum, 1, MPI_INT, MPI_SUM, 0,
               MPI_COMM_WORLD);

    if (rank == 0) {
        printf("Sum calculated by process %d: %d\n", rank, sum);
    }

    MPI_Finalize();
    return 0;
}
OUTPUT:
RESULT:
The MPI program effectively parallelizes the summation task, distributing the
workload among multiple processes. This parallel approach can significantly
enhance the computational efficiency for large-scale problems. The program
produces the correct result, as the final output matches the expected sum of
numbers from 1 to 10 (55). The use of MPI_Reduce ensures accurate aggregation
of local sums. The scalability of the program is evident as it can adapt to
different numbers of MPI processes. The parallel nature of the computation
allows for efficient utilization of resources in a distributed-memory system.
Ex. No: 7
MPI – COMMUNICATION BETWEEN MPI PROCESSES
AIM:
To demonstrate point-to-point communication between two MPI processes using MPI_Send and MPI_Recv.
PROCEDURE:
PROGRAM:
#include <stdio.h>
#include <string.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size;
    char message[100];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        fprintf(stderr, "This program requires at least two processes\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    if (rank == 0) {
        // Process 0 sends a message to Process 1
        strcpy(message, "Hello from process 0");
        MPI_Send(message, strlen(message) + 1, MPI_CHAR, 1, 0,
                 MPI_COMM_WORLD);
    } else if (rank == 1) {
        // Process 1 receives the message from Process 0
        MPI_Recv(message, 100, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("Received message on process 1: %s\n", message);
    }

    MPI_Finalize();
    return 0;
}
OUTPUT:
RESULT:
The program demonstrates point-to-point communication: process 0 sends a greeting with MPI_Send, and process 1 receives it with MPI_Recv and prints it. The program aborts with an error message if it is launched with fewer than two processes.
Thank you