High Performance Computing by Parallel Processing
Parallel processing, the method of having many small tasks solve one large problem, has
emerged as a key enabling technology in modern computing. In recent years the number of
transistors in microprocessors and other hardware components has increased dramatically,
but the cost of building such "high-end" machines has risen as well. Parallel processing is
therefore widely adopted both for high-performance scientific computing and for more
"general-purpose" applications that demand higher performance, lower cost, and
sustained productivity.
There are many ways to achieve parallel processing; one of the most economical and
feasible is distributed computing.
In the simplest sense, it is the simultaneous use of multiple compute resources to solve a
single computational problem.
By assigning each CPU a concurrent part of the problem, execution time can be reduced
substantially. The limit of this speedup is described by Amdahl's Law, which states that
the potential program speedup is determined by the fraction of code (P) that can be
parallelized:
              1
speedup = -------
           1 - P
If none of the code can be parallelized, P = 0 and the speedup = 1 (no speedup).
If all of the code is parallelized, P = 1 and the speedup is infinite (in theory).
If 50% of the code can be parallelized, maximum speedup = 2, meaning the code will
run twice as fast.
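The three cases above can be checked with a short Python function (the function name and
the example fractions are illustrative, not from the text):

```python
def amdahl_speedup(p):
    """Maximum theoretical speedup for a program whose
    parallelizable fraction is p, where 0 <= p < 1."""
    return 1.0 / (1.0 - p)

# No code parallelizable: no speedup.
print(amdahl_speedup(0.0))   # 1.0
# Half the code parallelizable: at most twice as fast.
print(amdahl_speedup(0.5))   # 2.0
# 95% parallelizable: at most 20x, no matter how many CPUs.
print(amdahl_speedup(0.95))  # 20x (approximately)
```

Note that as p approaches 1 the formula diverges, which matches the "infinite speedup
(in theory)" case: the fully serial portion of a program is what ultimately bounds the
benefit of adding more processors.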
Thus, by parallelizing the concurrent parts of a program, we can speed up execution
significantly on the same computer configuration.
RAHUL.R
4thSem Computer Sc
1MV07CS079