
Using OpenMP, you can parallelize loops, regions, and sections or straight-line code blocks, whenever dependences do not forbid them from being executed in parallel. In addition, because OpenMP employs the simple fork-join execution model, it allows the compiler and runtime library to compile and run OpenMP programs efficiently with lower threading overhead. However, you can improve your application performance by further reducing threading overhead. Measured costs of a set of OpenMP constructs and clauses on a 4-way Intel Xeon processor-based system running at 3.0 gigahertz with the Intel compiler and runtime library show that the cost for each construct or clause is small. Most of them are less than 7 microseconds, except the schedule(dynamic) clause. The schedule(dynamic) clause takes 50 microseconds, because its default chunk size is 1, which is too small. If you use schedule(dynamic,16), its cost is reduced to 5.0 microseconds.

Parallel pragma: thread suspend and resume overhead


The overhead can be removed by entering a parallel region once, then dividing the work within the parallel region. The parallel for pragma can be used to split the iterations of a loop across multiple threads. When the compiler-generated code is executed, the iterations of the loop are distributed among threads. At the end of the parallel region, the threads are suspended and they wait for the next parallel region, loop, or sections. A suspend or resume operation, while significantly lighter weight than a create or terminate operation, still creates overhead and may be unnecessary when two parallel regions, loops, or sections are adjacent.

Department CSE, SCAD CET
