Parallel Programming:
Speedups and Amdahl’s law
Mike Bailey
mjb@cs.oregonstate.edu
Computer Graphics
speedups.and.amdahls.law.pptx mjb – March 21, 2021
Definition of Speedup

Speedup_n = T1 / Tn

where T1 is the execution time on one core
and Tn is the execution time on n cores.

Note that Speedup_n should be > 1.

Efficiency_n = Speedup_n / n
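A minimal numeric sketch of these two definitions (the timings and function names below are hypothetical, not from the slides):

```python
def speedup(t1, tn):
    # Speedup_n = T1 / Tn
    return t1 / tn

def efficiency(t1, tn, n):
    # Efficiency_n = Speedup_n / n
    return speedup(t1, tn) / n

# Hypothetical timings: 100 s on one core, 30 s on four cores.
s4 = speedup(100.0, 30.0)        # ~3.33x faster
e4 = efficiency(100.0, 30.0, 4)  # ~0.83 of perfect linear scaling
print(s4, e4)
```

An efficiency of 1.0 would mean perfect linear scaling; real programs fall short of that, which is what Amdahl's law quantifies next.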
However, Multicore is not a Free Lunch:
Amdahl's Law

There is always some fraction of the total operation that is inherently
sequential and cannot be parallelized no matter what you do. This includes
reading data, setting up calculations, control logic, storing results, etc.

Speedup_n = T1 / Tn = 1 / (Fsequential + Fparallel / n) = 1 / ((1 - Fparallel) + Fparallel / n)
[Figure: bar charts for 1, 2, 4, and ~∞ cores, showing the parallel portion shrinking as cores are added while the sequential portion stays fixed.]
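The formula above can be sketched directly; `amdahl_speedup` is a hypothetical helper name, not part of the course code:

```python
def amdahl_speedup(f_parallel, n):
    # Amdahl's law: Speedup_n = 1 / ((1 - Fparallel) + Fparallel / n)
    return 1.0 / ((1.0 - f_parallel) + f_parallel / n)

# With Fparallel = 0.9, four cores give about 3.08x, not 4x,
# because the 10% sequential portion never shrinks:
print(amdahl_speedup(0.9, 4))
```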
SpeedUp as a Function of Number of Processors and Fparallel

[Figure: speedup vs. number of processors, one curve per Fparallel value: 90%, 80%, 60%, 40%, 20%.]
SpeedUp as a Function of Fparallel and Number of Processors

[Figure: speedup vs. parallel fraction, one curve per processor count: N = 1, 2, 3, 10.]
SpeedUp Efficiency (S_N / N) as a Function of Number of Processors and Fparallel

[Figure: efficiency (0.00 to 0.90) vs. number of processors (1 to 26), one curve per Fparallel value: 90%, 80%, 60%, 40%, 20%.]
SpeedUp Efficiency (S_N / N) as a Function of Fparallel and Number of Processors

[Figure: efficiency (0.00 to 0.90) vs. parallel fraction (0.2 to 0.9), one curve per processor count: N = 1, 2, 3, 10.]
You can also solve for Fparallel using Amdahl's Law if you
know your speedup and the number of processors:

Fparallel = (n / (n - 1)) * (1 - 1 / Speedup) = n (T1 - Tn) / ((n - 1) T1)

If you've got several (n, S) values, you can take the average (which is actually a least squares fit):

Fi = ni (T1 - Tni) / ((ni - 1) T1),   i = 2..N

F = ( Σ Fi, i = 2..N ) / (N - 1)

Note that when i = 1, Tni = T1, so the single-core run contributes no usable Fi.
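These two formulas can be checked with a short sketch (the function names are mine, not from the slides):

```python
def f_parallel_from_speedup(n, s):
    # Invert Amdahl's law: Fparallel = (n / (n - 1)) * (1 - 1 / Speedup)
    return (n / (n - 1.0)) * (1.0 - 1.0 / s)

def f_parallel_average(timings):
    # timings: list of (n_i, T_ni) pairs, with timings[0] == (1, T1).
    # Averages F_i = n_i (T1 - T_ni) / ((n_i - 1) T1) over i = 2..N.
    t1 = timings[0][1]
    fs = [n * (t1 - tn) / ((n - 1.0) * t1) for n, tn in timings[1:]]
    return sum(fs) / len(fs)

# Timings generated from Amdahl's law with Fparallel = 0.8
# (T1 = 100 s, T2 = 60 s, T4 = 40 s) recover exactly 0.8:
print(f_parallel_average([(1, 100.0), (2, 60.0), (4, 40.0)]))
```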
Amdahl's Law can also give us the Maximum Possible SpeedUp

Note that these fractions put an upper bound on how much benefit you will
get from adding more processors:

maxSpeedup = lim(n → ∞) Speedup = 1 / Fsequential = 1 / (1 - Fparallel)

Fparallel   maxSpeedup
  0.00         1.00
  0.10         1.11
  0.20         1.25
  0.30         1.43
  0.40         1.67
  0.50         2.00
  0.60         2.50
  0.70         3.33
  0.80         5.00
  0.90        10.00
  0.95        20.00
  0.99       100.00
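The table follows directly from the limit; a few rows reproduced as a sketch:

```python
def max_speedup(f_parallel):
    # lim n -> inf of the Amdahl speedup: 1 / (1 - Fparallel)
    return 1.0 / (1.0 - f_parallel)

# Reproduce a few rows of the table above:
for fp in (0.50, 0.90, 0.99):
    print(f"{fp:.2f}  {max_speedup(fp):.2f}")
```

Note how steep the curve is near 1: going from 95% to 99% parallel quintuples the bound.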
A More Optimistic Take on Amdahl's Law:
The Gustafson-Barsis Observation

Gustafson observed that as you increase the number of processors, you have
a tendency to attack larger and larger versions of the problem. He also
observed that when you use the same parallel program on larger datasets, the
parallel fraction, Fp, increases.

Let P be the amount of time spent on the parallel portion of an original task
and S be the time spent on the serial portion. Then

Fp = P / (P + S),   or   S = P (1 - Fp) / Fp

[Diagram: a timeline split into a Parallel Time segment followed by a Serial Time segment.]
A More Optimistic Take on Amdahl's Law:
The Gustafson-Barsis Observation

If the larger dataset scales the parallel work by a factor of N while the serial work stays the same, the new parallel fraction is:

Fp' = NP / (NP + S) = N / (N + (1 - Fp) / Fp)
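A quick sketch of the scaled fraction (the helper name is mine, not from the slides):

```python
def scaled_parallel_fraction(f_p, n):
    # Gustafson-Barsis: Fp' = N / (N + (1 - Fp) / Fp)
    return n / (n + (1.0 - f_p) / f_p)

# N = 1 just recovers the original Fp; larger N pushes Fp' toward 1:
print(scaled_parallel_fraction(0.5, 1))   # 0.5
print(scaled_parallel_fraction(0.5, 9))   # 0.9
```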
A More Optimistic Take on Amdahl's Law:
The Gustafson-Barsis Observation

Or, graphing it:

[Figure: graph of Fp' as a function of N.]
A More Optimistic Take on Amdahl's Law:
The Gustafson-Barsis Observation

We can also turn Fp' into a Maximum Speedup, exactly as before:

maxSpeedup = 1 / (1 - Fp') = 1 + N Fp / (1 - Fp)
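Combining the scaled fraction Fp' = N / (N + (1 - Fp)/Fp) with the earlier max-speedup bound 1 / (1 - F), a sketch (function name is mine):

```python
def scaled_max_speedup(f_p, n):
    # maxSpeedup = 1 / (1 - Fp'), with Fp' = N / (N + (1 - Fp) / Fp);
    # algebraically this simplifies to 1 + N * Fp / (1 - Fp).
    f_p_scaled = n / (n + (1.0 - f_p) / f_p)
    return 1.0 / (1.0 - f_p_scaled)

# With Fp = 0.5, scaling the parallel work 9x lifts the bound from 2 to 10:
print(scaled_max_speedup(0.5, 9))
```

This is the optimistic point: the ceiling Amdahl's law imposes rises as the problem grows, because the fixed serial time becomes a smaller share of the total.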