
Speedup and Efficiency in Parallel Systems

12-9-2023
The Demand for Computational Speedup

 There is a continual demand for greater computational power from computer systems.
 Great computational power is required for numerical simulations of scientific and engineering problems.
 Since traditional computers have only one processor, one way to increase speed is to use a multiprocessor system.
Speedup Factor

 While performing computations, one has to observe how much faster the multiprocessor solves the problem under consideration.
 We can compare the best sequential algorithm running on a single processor with the best parallel algorithm running on a multiprocessor.
 The speedup factor is a measure of relative performance, denoted S(p), and is calculated as the ratio of sequential to parallel execution time, written out on the next slide.
Speedup Factor
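Spelled out in the usual notation, where t_s is the execution time of the best sequential algorithm on a single processor and t_p is the execution time of the parallel algorithm on a multiprocessor with p processors, the definition is:

\[
S(p) = \frac{t_s}{t_p}
\]

This is the same Sequential Time / Parallel Time ratio used in the example that follows.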
Real World Example
 Imagine you have a big basket of apples, and your job is to count how
many apples are in it. You have two ways to do it:

 Sequentially: You pick up each apple one by one and count it. This takes
some time, let's say 100 seconds.

 In Parallel: You have four friends, and each of you grabs a part of the
apples. You all count your apples simultaneously. This way, it takes only 25
seconds for all of you to count your respective apples.

 Now, let's calculate the "speedup factor," which tells us how much faster
the parallel method is compared to the sequential method.

 Speedup Factor Formula: Speedup Factor = Sequential Time / Parallel Time

 In our example, the speedup factor would be: 100 seconds (sequential) /
25 seconds (parallel) = 4
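As a quick sketch of that arithmetic in Python (the times are the illustrative values from the example above, not measurements):

sequential_time = 100  # seconds to count every apple alone
parallel_time = 25     # seconds with four friends counting at once

speedup_factor = sequential_time / parallel_time
print(speedup_factor)  # 4.0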
Real World Example
(In Series)
import time
import multiprocessing

# Function to count apples sequentially
def count_apples_sequentially(apples):
    start_time = time.time()
    count = 0
    for apple in apples:
        count += 1
    end_time = time.time()
    return count, end_time - start_time
Real World Example
(In Parallel)
# Helper run in each worker process: count the apples in one chunk
def count_chunk(chunk):
    return len(chunk)

# Function to count apples in parallel
def count_apples_in_parallel(apples, num_processes):
    # Size of each slice handed to a worker (at least 1)
    chunk_size = max(1, len(apples) // num_processes)
    start_time = time.time()

    # Create a pool of worker processes
    with multiprocessing.Pool(processes=num_processes) as pool:
        # Divide the apples among the processes and count each chunk
        results = pool.map(count_chunk,
                           [apples[i:i + chunk_size] for i in range(0, len(apples), chunk_size)])

    # Sum up the counts from all processes
    total_count = sum(results)

    end_time = time.time()
    return total_count, end_time - start_time
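A minimal driver sketch, assuming the two functions above live in the same module; the apples list and process count are hypothetical, and the __main__ guard is needed so multiprocessing can spawn worker processes safely:

if __name__ == "__main__":
    apples = list(range(1_000_000))  # hypothetical basket of apples

    seq_count, seq_time = count_apples_sequentially(apples)
    par_count, par_time = count_apples_in_parallel(apples, num_processes=4)

    print("Sequential:", seq_count, "apples in", round(seq_time, 4), "s")
    print("Parallel:  ", par_count, "apples in", round(par_time, 4), "s")
    print("Speedup factor S(p) =", round(seq_time / par_time, 2))

For a per-item task this cheap, process start-up and data-transfer overhead can easily outweigh the parallel gain, which is exactly the kind of overhead the Maximum Speedup slides describe.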
Efficiency

• It is sometimes useful to know how fully the processors are being used during the computation, which is captured by the system efficiency. It is defined as follows.
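A standard way to write the definition, consistent with the speedup notation above (t_s sequential time, t_p parallel time on p processors):

\[
E = \frac{t_s}{t_p \times p} = \frac{S(p)}{p}
\]

Efficiency is often quoted as a percentage; in the apple example, S(p) = 4 on p = 4 processors gives E = 1, i.e. 100% efficiency.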
Maximum Speedup

• Several factors appear as overhead and limit the performance of parallel processors (one common way to quantify the resulting bound is sketched below):
• Time during which a processor performs no useful task and sits in an idle state
• Communication time between processes
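One common way to quantify such a bound is Amdahl's law, sketched here under the assumption that a fraction f of the computation is inherently serial, so only the remaining 1 − f can be spread over p processors:

\[
S(p) \le \frac{t_s}{f\,t_s + (1-f)\,t_s/p} = \frac{p}{1 + (p-1)f}
\]

As p grows, S(p) approaches 1/f, so even a small serial fraction caps the maximum speedup.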
Maximum Speedup
Conclusion

Speedup and efficiency cannot simultaneously be low, regardless of scheduling discipline or software structure.

The result bounds the efficiency cost and speedup benefit possible by altering the number of allocated processors.
