Performance Measuring Metrics For Computer System
Day-1
What is Parallelism?
Sequential computing
o The machine finishes each instruction (completely) before it starts the next one.
o It is exactly what a sequential computer does.
Concurrent computing
o A form of computing in which several computations are executed concurrently (by different processors or processing
elements)—during overlapping time periods.
o Here, execution of the next instruction can start before the earlier instructions have finished.
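The contrast above can be sketched in Python (a minimal illustration, not from the slides): the same three tasks are run one after another, and then concurrently with threads whose waiting periods overlap.

```python
import threading
import time

def task(name, results):
    # Simulate an I/O-bound step; the thread yields the CPU while sleeping.
    time.sleep(0.1)
    results.append(name)

# Sequential computing: each task finishes completely before the next starts.
start = time.time()
seq_results = []
for name in ["A", "B", "C"]:
    task(name, seq_results)
sequential_time = time.time() - start  # roughly 3 * 0.1 s

# Concurrent computing: the tasks execute during overlapping time periods.
start = time.time()
conc_results = []
threads = [threading.Thread(target=task, args=(n, conc_results))
           for n in ["A", "B", "C"]]
for t in threads:
    t.start()
for t in threads:
    t.join()
concurrent_time = time.time() - start  # roughly 0.1 s, since the waits overlap

print(sequential_time > concurrent_time)
```

Because the waiting periods overlap, the concurrent run takes about a third of the sequential time here, even on a single CPU.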
• Several activities may happen at the same time. This is the basic principle of parallelism.
• In computer science -
o In a uni-processor (or multi-processor) system with several time-shared users, the machine may appear to be computing for all of them at the same time.
In a uni-processor, we have only the illusion of instructions being executed simultaneously, i.e., the illusion of parallelism.
In fact, the processor is executing instructions for only one job at a time, so it is not true parallelism.
For actual parallelism, we need several physical processors or processing elements, each taking part in the computation separately.
• “Concurrency” and “parallelism” may seem to be synonyms, but there is a subtle difference
between the two.
o We reserve “parallelism” to refer to situations where actions truly happen at the same time by
distinct processing agents (CPUs) simultaneously.
o “Concurrency” refers to both situations -- ones which are truly parallel and ones which have the
illusion of parallelism, as execution of tasks is time-sliced on one processing agent (CPU).
Note In the context of parallel computing, a CPU is often called a PE (processing element), and a PE with its
own memory a CE (computing element).
• Parallel computing is a sub-field of computer science that draws on ideas from theoretical computer science, computer
architecture, programming languages, and algorithms.
Application areas –
• Computer graphics and image processing
• Artificial intelligence
• Healthcare
• Numerical computing
• Parallel computing
o It is usually confined to solving a single problem on multiple processing elements or processors.
o The processors are tightly coupled (highly dependent on each other; changes to one require changes to others)
for coordination.
• Distributed computing
o It is not necessarily aimed at solving a single problem (e.g., several remote requests (queries) may be resolved in
distributed computing).
o The processors are loosely coupled, usually separated by distances of many feet or miles to form a network.
Note Distributed computing is a kind of parallel computing, but the problems being solved may be different.
Examples:
o Parallel Computing: 10 People pulling a rope to lift a rock
o Distributed Computing: 10 people pulling 10 ropes to lift 10 rocks
Grid Computing
Three persons, one in India, one in the USA, and one in Norway,
working to complete a project. They are connected over the Internet.
• Speedup of a system:
o In computer science, speedup is a measure of the relative performance of two systems processing the same
problem.
o The speedup of a parallel system is defined as speedup(S) = T1(N)/TP(N), where T1(N) is the execution time of the best
sequential algorithm on a problem of size N, and TP(N) is the execution time of the parallel algorithm using
P processors.
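The definition can be applied to a small worked example in Python; the timing numbers below are hypothetical, chosen only to illustrate S = T1(N)/TP(N) and the related notion of efficiency S/P.

```python
# Hypothetical measurements for a problem of fixed size N (illustrative only).
T1 = 120.0                          # seconds: best sequential algorithm
TP = {2: 65.0, 4: 35.0, 8: 20.0}    # seconds: parallel algorithm with P processors

for P, t in sorted(TP.items()):
    speedup = T1 / t                # S = T1(N) / TP(N)
    efficiency = speedup / P        # fraction of ideal linear speedup achieved
    print(f"P={P}: speedup={speedup:.2f}, efficiency={efficiency:.2f}")
```

Note that speedup normally grows more slowly than P (efficiency below 1), since coordination among the processors adds overhead that the sequential algorithm does not pay.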
Day-1/2