
Parallel and Distributed Computing (CS-3216)

Department of CS, GCU, Lahore

Lecture 02 (Metrics of Parallel Performance)

Muhammad Umair Sadiq

Agenda

Preliminaries
Amdahl’s Law
Efficiency and Scalability of a Parallel Algorithm
Reference Books
Multi-programming

▶ Keeping multiple programs in main memory at the same time, ready for execution
▶ A single program cannot, in general, keep either the CPU or the I/O devices busy at all times
▶ Multiprogramming increases CPU utilization by organizing jobs (code and data) so that the CPU always has one to execute
▶ Multiprogrammed systems provide an environment in which the various system resources (for example, CPU, memory, and peripheral devices) are utilized effectively
Time sharing (Multi-Tasking)

▶ Time sharing (or multitasking) is a logical extension of multiprogramming
▶ In time-sharing systems, the CPU executes multiple jobs by switching among them, but the switches occur so frequently that the users can interact with each program while it is running
▶ A time-shared operating system allows many users to share the computer simultaneously
▶ As the system switches rapidly from one user to the next, each user is given the impression that the entire computer system is dedicated to their use, even though it is being shared among many users
Multi-processing vs Multi-threading

▶ Multithreading refers to the ability of a processor to execute multiple threads concurrently. All the threads share a common address space
▶ In multiprocessing, CPUs are added to increase the computing speed of the system. Many processes execute simultaneously, and every process owns a separate address space
Amdahl’s Law

▶ Amdahl’s Law was formalized in 1967
▶ It gives an upper bound on the maximum speedup that can be achieved
Speedup

▶ In parallel computing, speedup is a measure of the performance gain of a parallel algorithm compared to the sequential algorithm
▶ It is the ratio of sequential execution time to parallel execution time
▶ Suppose you have a sequential code for a problem that executes in total time T(s), and let T(p) be the parallel time for the same algorithm over p processors; then speedup is calculated as follows:
▶ Speedup = T(s) / T(p)
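The ratio above is trivial to compute; a minimal sketch follows, where the timings are hypothetical placeholders rather than measurements of any real program:

```python
def speedup(t_serial, t_parallel):
    """Speedup = T(s) / T(p): sequential time over parallel time."""
    return t_serial / t_parallel

# Hypothetical timings: a job taking 12 s sequentially and 4 s in parallel.
print(speedup(12.0, 4.0))  # 3.0
```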
Amdahl’s Law

▶ Suppose that the fraction of total time that the algorithm must spend in serial execution is f
▶ This implies the fraction of the parallel portion is 1 − f
▶ Now, if the total number of processors is p, then Amdahl’s law states that the speedup S is equal to
▶ S = 1 / (f + (1 − f) / p)
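The formula can be sketched directly in code; the values of f and p below are illustrative, not taken from any particular program:

```python
def amdahl_speedup(f, p):
    """Amdahl's Law: S = 1 / (f + (1 - f) / p), where f is the serial fraction."""
    return 1.0 / (f + (1.0 - f) / p)

# Illustrative values: serial fraction f = 0.3 on p = 4 processors.
print(round(amdahl_speedup(0.3, 4), 3))  # 2.105
```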
Derivation

▶ S = T(s) / T(p)
▶ T(p) is the sum of the serial and parallel computation times:
▶ T(p) = f · T(s) + ((1 − f)/p) · T(s)
▶ S = T(s) / (f · T(s) + ((1 − f)/p) · T(s))
▶ S = T(s) / (T(s) · (f + (1 − f)/p))
▶ S = 1 / (f + (1 − f)/p)
Amdahl’s Law

▶ S = 1 / (f + (1 − f) / p)
▶ What if you have an infinite number of processors?
▶ What do you have to do for further speedup?
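The first question can be answered numerically: as p grows, the (1 − f)/p term vanishes and S approaches 1/f. The serial fraction below is an illustrative choice:

```python
def amdahl_speedup(f, p):
    """S = 1 / (f + (1 - f) / p); f is the serial fraction."""
    return 1.0 / (f + (1.0 - f) / p)

f = 0.1  # illustrative serial fraction
for p in (10, 100, 1000, 10**6):
    print(p, round(amdahl_speedup(f, p), 3))
# The speedup approaches 1/f = 10: no processor count can exceed this bound,
# so further speedup requires shrinking the serial fraction f itself.
```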
Amdahl’s Law Examples

▶ Suppose 70% of a sequential algorithm is the parallelizable portion. The remaining part must be executed sequentially. Calculate the maximum theoretical speedup for the parallel variant of this algorithm using
  1. 4 processors
  2. infinite processors
▶ Suppose 25% of a sequential algorithm is the parallelizable portion. The remaining part must be executed sequentially. Calculate the maximum theoretical speedup for the parallel variant of this algorithm using 5 processors and infinite processors
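The exercises above can be checked by plugging the given fractions into the formula (the infinite-processor case reduces to 1/f):

```python
def amdahl_speedup(f, p):
    """S = 1 / (f + (1 - f) / p); f is the serial fraction."""
    return 1.0 / (f + (1.0 - f) / p)

# 70% parallelizable => serial fraction f = 0.3
print(round(amdahl_speedup(0.3, 4), 3))   # 4 processors: 2.105
print(round(1 / 0.3, 3))                  # infinite processors: 3.333

# 25% parallelizable => serial fraction f = 0.75
print(round(amdahl_speedup(0.75, 5), 2))  # 5 processors: 1.25
print(round(1 / 0.75, 3))                 # infinite processors: 1.333
```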
Amdahl’s Law Example

▶ (Bonus Question) From the maximum speedup obtained in the previous question, determine, according to Amdahl’s law, how many processors are needed to achieve the maximum theoretical speedup while the sequential portion remains the same
▶ The answer may be surprising
Parallel Efficiency

▶ Parallel efficiency is a measure of the effectiveness of resource utilization. It is the ratio of speedup to the number of compute nodes
▶ E = speedup / p
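A minimal sketch of the ratio, again with hypothetical timings rather than real measurements:

```python
def efficiency(t_serial, t_parallel, p):
    """Parallel efficiency E = speedup / p = (T(s) / T(p)) / p."""
    return (t_serial / t_parallel) / p

# Hypothetical run: 12 s sequentially, 4 s on 4 processors.
# Speedup is 3 on 4 processors, so only 75% of the ideal gain is realized.
print(efficiency(12.0, 4.0, 4))  # 0.75
```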
Scalability

▶ The word scalable has a wide variety of informal uses
▶ Informally, a technology is scalable if it can handle ever-increasing problem sizes
▶ However, in discussions of parallel program performance, scalability has a somewhat more formal definition
▶ Suppose we run a parallel program with a fixed number of processes/threads and a fixed input size, and we obtain an efficiency E. Suppose we now increase the number of processes/threads that are used by the program. If we can find a corresponding rate of increase in the problem size so that the program always has efficiency E, then the program is scalable
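The definition can be illustrated with a toy timing model. The model below is an assumption for demonstration only (compute work n/p plus a coordination overhead proportional to p), not the behavior of any real program:

```python
def efficiency(n, p, overhead=1.0):
    """Efficiency under a toy model: T(s) = n, T(p) = n/p + overhead*p."""
    t_serial = float(n)
    t_parallel = n / p + overhead * p
    return (t_serial / t_parallel) / p

# Fixed problem size: efficiency falls as processes are added.
print([round(efficiency(1000, p), 3) for p in (2, 8, 32)])

# Growing n in step with p**2 holds efficiency constant under this model,
# which is exactly the formal sense in which the program is "scalable".
print([round(efficiency(10 * p * p, p), 3) for p in (2, 8, 32)])
```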
Reference Books

▶ Introduction to Parallel Computing, Second Edition, by Ananth Grama
▶ Parallel Programming in C with MPI and OpenMP by Michael J. Quinn
▶ An Introduction to Parallel Programming by Peter S. Pacheco
▶ Professional CUDA C Programming by John Cheng
