
EXPERIMENT NO. 6

AIM: To learn the basics of the OpenMP API (Open Multi-Processing)

THEORY

What is OpenMP?
1. An Application Program Interface (API) that may be used to explicitly direct multi-threaded, shared-memory parallelism.
2. Comprised of three primary API components:
1) Compiler Directives
2) Runtime Library Routines
3) Environment Variables
3. An abbreviation for: Open Multi-Processing

Goals of OpenMP:
1) Standardization: Provide a standard among a variety of shared-memory architectures/platforms. Jointly defined and endorsed by a group of major computer hardware and software vendors.
2) Lean and Mean: Establish a simple and limited set of directives for programming shared-memory machines. Significant parallelism can be implemented using just 3 or 4 directives, although this goal has become less meaningful as the API has grown with each new release.
3) Ease of Use: Provide the capability to incrementally parallelize a serial program, unlike message-passing libraries, which typically require an all-or-nothing approach. Provide the capability to implement both coarse-grain and fine-grain parallelism.
4) Portability: The API is specified for C/C++ and Fortran. Implementations exist for most major platforms, including Unix/Linux and Windows.

This study source was downloaded by 100000823978223 from CourseHero.com on 01-26-2022 12:22:24 GMT -06:00

https://www.coursehero.com/file/58867299/EXP-6pdf/
OpenMP Programming Model:

1) Shared Memory Model:


OpenMP is designed for multi-processor/core, shared memory machines. The
underlying architecture can be shared memory UMA or NUMA.

2) Thread Based Parallelism:


OpenMP programs accomplish parallelism exclusively through the use of threads.
A thread of execution is the smallest unit of processing that can be scheduled by an
operating system; a thread can be thought of as a subroutine that is scheduled to run
autonomously.
Threads exist within the resources of a single process. Without the process, they cease
to exist.
Typically, the number of threads matches the number of machine processors/cores.
However, the actual use of threads is up to the application.

3) Fork - Join Model:


All OpenMP programs begin as a single process: the master thread. The master thread
executes sequentially until the first parallel region construct is encountered.

FORK: the master thread then creates a team of parallel threads.


The statements in the program that are enclosed by the parallel region construct are
then executed in parallel among the various team threads.

JOIN: When the team threads complete the statements in the parallel region construct,
they synchronize and terminate, leaving only the master thread.

The number of parallel regions and the threads that comprise them are arbitrary.

OpenMP API Overview


Three Components: The OpenMP API is comprised of three distinct components:
1) Compiler Directives (44)
2) Runtime Library Routines (35)
3) Environment Variables (13)

Compiler Directives:
In Fortran, compiler directives appear as comments in the source code; in C/C++ they are
#pragma statements. They are ignored by compilers unless you tell them otherwise, usually
by specifying the appropriate compiler flag (for example, -fopenmp for GCC).

Run-time Library Routines:
The OpenMP API includes an ever-growing number of runtime library routines.
These routines are used for a variety of purposes:

 Setting and querying the number of threads
 Querying a thread's unique identifier (thread ID), a thread's ancestor's identifier, and the thread team size
 Setting and querying the dynamic threads feature
 Querying whether in a parallel region, and at what level
 Setting and querying nested parallelism
 Setting, initializing and terminating locks and nested locks
 Querying wall clock time and resolution.

Function                                      Description
void omp_set_num_threads(int num_threads)     Sets the number of threads to use for
                                              subsequent parallel regions.
int omp_get_num_threads(void)                 Returns the number of threads in the
                                              team executing the current parallel
                                              region.
int omp_get_max_threads(void)                 Returns the maximum number of threads
                                              available to a parallel region.
int omp_get_thread_num(void)                  Returns the unique thread number
                                              (0 to team size - 1) of the thread
                                              currently executing this section of
                                              code.
int omp_get_num_procs(void)                   Returns the number of processors
                                              available on the current machine.
int omp_in_parallel(void)                     Returns non-zero if called within the
                                              dynamic extent of a parallel region
                                              executing in parallel; otherwise
                                              returns zero.
void omp_set_dynamic(int dynamic_threads)     Enables or disables dynamic adjustment
                                              of the number of threads used to
                                              execute a parallel region.
int omp_get_dynamic(void)                     Returns non-zero if dynamic thread
                                              adjustment is enabled; zero otherwise.
void omp_set_nested(int nested)               Enables or disables nested parallelism.
                                              A non-zero parameter enables; the
                                              default is disabled.
int omp_get_nested(void)                      Returns non-zero if nested parallelism
                                              is enabled; zero otherwise.

Environment Variables:
OpenMP provides several environment variables for controlling the execution of parallel code
at run-time.
These environment variables can be used to control such things as:

 Setting the number of threads
 Specifying how loop iterations are divided
 Binding threads to processors
 Enabling/disabling nested parallelism; setting the maximum levels of nested parallelism
 Enabling/disabling dynamic threads
 Setting thread stack size
 Setting thread wait policy.
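For illustration, the variables below (names as defined by the OpenMP specification) could be exported in the shell before launching an OpenMP program; the executable name in the last line is only a placeholder.

```shell
export OMP_NUM_THREADS=8        # team size for parallel regions
export OMP_SCHEDULE="static,4"  # iteration division for schedule(runtime) loops
export OMP_STACKSIZE=16M        # per-thread stack size
export OMP_DYNAMIC=false        # disable dynamic adjustment of team size
export OMP_NESTED=true          # allow nested parallel regions
# ./a.out                       # then run the OpenMP program as usual
```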

CODE:
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int nthreads, tid;

    /* Fork a team of threads, each with its own private tid variable */
    #pragma omp parallel private(tid)
    {
        /* Obtain and print the thread ID */
        tid = omp_get_thread_num();
        printf("Hello World from thread = %d\n", tid);

        /* Only the master thread does this */
        if (tid == 0)
        {
            nthreads = omp_get_num_threads();
            printf("Number of threads = %d\n", nthreads);
        }
    } /* All threads join the master thread and terminate */

    return 0;
}

OUTPUT:

CONCLUSION:
Thus,
1. The basics of OpenMP are now clear.
2. Various concepts such as thread-based parallelism, the fork-join model, runtime library
routines and environment variables were studied in detail.
