
Speedup:

1. Speedup: Speedup is the improvement in task completion time achieved through parallelism or optimization.
Equation: Speedup = Sequential Execution Time / Parallel Execution Time

2. Linear Speedup: Linear speedup is when execution time reduces directly with the number of processors (the ideal case).
Equation: Speedup = N (number of processors)

3. Superlinear Speedup: Superlinear speedup is when execution time decreases more than expected due to enhanced resource utilization.
Equation: Speedup > N (number of processors)

4. Non-linear Speedup: Non-linear speedup is when the reduction in execution time is less than predicted due to factors like communication overhead.
Equation: Speedup < N (number of processors)
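A minimal numeric sketch of these definitions (Python, with hypothetical timings): it computes the speedup from the two execution times and compares it with the processor count to label the result as linear, superlinear, or sub-linear.

    # Sketch: classify a measured speedup relative to the processor count.
    def classify_speedup(sequential_time, parallel_time, n_processors):
        speedup = sequential_time / parallel_time
        if speedup > n_processors:
            kind = "superlinear"
        elif speedup == n_processors:
            kind = "linear"
        else:
            kind = "sub-linear (non-linear)"
        return speedup, kind

    # Example: a 100 s sequential job finishing in 20 s on 8 processors
    # gives speedup 5.0, i.e. sub-linear.
    print(classify_speedup(100.0, 20.0, 8))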

5. Occasionally a speedup that appears to be superlinear may occur, but it can be explained by other reasons, such as:
- The extra memory available in the parallel system.
- A sub-optimal sequential algorithm being used as the baseline.
- Luck, in the case of an algorithm that has a random aspect in its design (e.g., random selection).
6. Superlinear speedup occurs in:
- Real-time requirements, where meeting deadlines is part of the problem requirements.
- Problems where all data is not initially available but has to be processed after it arrives.
- Real-life situations, such as a "person who can only keep a driveway open during a severe snowstorm with the help of friends".
7.
8. Elbowing out in Parallel Processing: Decreased performance as
additional parallel tasks compete for resources.

9. Cost in Parallel Processing: Resources expended for implementing and maintaining parallel computing systems.
Equation: Cost = n * tp (number of processors × parallel execution time)
10. Efficiency in Parallel Processing: Effective utilization of resources to achieve improved performance in parallel computing tasks.
Equation: Efficiency = Speedup / n (number of processors)
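A short sketch combining the three measures above (hypothetical timings; the efficiency formula is the standard Speedup / n):

    # Sketch: speedup, cost, and efficiency for an example run.
    n = 8            # number of processors
    ts = 100.0       # sequential execution time
    tp = 20.0        # parallel execution time on n processors

    speedup = ts / tp            # 5.0
    cost = n * tp                # Cost = n * tp  -> 160.0
    efficiency = speedup / n     # 0.625, i.e. 62.5% processor utilization
    print(speedup, cost, efficiency)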

11.
12.
13.
14. Limitations in applying Amdahl's Law:
i. Gustafson's Law: the proportion of the computation that is sequential normally decreases as the problem size increases.
ii. Its proof focuses on the steps in a particular algorithm and does not consider that other algorithms with more parallelism may exist.
iii. Amdahl's Law applies only to 'standard' problems where superlinearity cannot occur.
15. Amdahl's Law vs. Gustafson's Law:

Amdahl's Law | Gustafson's Law
Focuses on a fixed problem size. | Considers a variable problem size.
Emphasizes optimizing the serial portion. | Emphasizes scaling up the workload.
Limits the speedup by the non-parallelizable fraction. | Focuses on efficient utilization of parallel resources.
Underlines the need to optimize the sequential portion. | Stresses the benefits of parallelism in real-world, large-scale problems.
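A minimal sketch of the two laws behind this comparison, assuming f denotes the serial fraction of the work (0 <= f <= 1): Amdahl's Law gives Speedup = 1 / (f + (1 - f)/N) for a fixed problem size, while Gustafson's Law gives Scaled Speedup = N - f(N - 1) for a workload that grows with N.

    # Sketch: compare the two laws for a 10% serial fraction.
    def amdahl_speedup(f, n):
        # Fixed problem size: speedup is capped by the serial fraction.
        return 1.0 / (f + (1.0 - f) / n)

    def gustafson_speedup(f, n):
        # Scaled problem size: speedup grows almost linearly with n.
        return n - f * (n - 1)

    for n in (4, 16, 64):
        print(n, round(amdahl_speedup(0.1, n), 2), round(gustafson_speedup(0.1, n), 2))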

16. Math-01:

17. Math-02:
18. Math-03:

19. Amdahl Effect: Speedup is usually an increasing function of the problem size.
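A small sketch of the Amdahl effect, assuming the serial part of the program stays roughly constant while the parallelizable work grows with the problem size W (all numbers hypothetical):

    # Sketch: with a fixed serial part (sigma) and parallel work that grows with
    # the problem size W, the serial fraction shrinks and speedup rises toward p.
    def speedup(W, sigma=10.0, p=16):
        t_seq = sigma + W          # sequential time: serial part + parallelizable work
        t_par = sigma + W / p      # parallel time on p processors (overhead ignored)
        return t_seq / t_par

    for W in (100, 1_000, 10_000):
        print(W, round(speedup(W), 2))   # speedup approaches p as W grows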

20. Parallel System: Multiple processors collaborating on tasks in a computer system.

21. Scalability of a Parallel System: Capability to improve performance as the processor count increases.

22. A Scalable System: Maintains efficiency while accommodating more resources.

23. Isoefficiency in Parallel Processing: Sustains efficiency with increasing problem size and processors.
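A brief sketch of the isoefficiency idea, assuming a hypothetical overhead model To(p) = p * log2(p): holding efficiency constant forces the problem size W to grow with the processor count.

    # Sketch: efficiency E = W / (W + To); to keep E fixed, W must grow with To.
    import math

    def efficiency(W, p):
        To = p * math.log2(p)          # assumed total parallel overhead (illustrative)
        return W / (W + To)

    # To hold E at 0.8 while p grows, W must grow as K * To with K = E/(1-E) = 4:
    for p in (4, 16, 64):
        W = 4 * p * math.log2(p)
        print(p, round(W), round(efficiency(W, p), 2))
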
1. Flynn's Classification:
i. SISD (single instruction stream over a single data stream)
ii. SIMD (single instruction stream over multiple data streams)
iii. MIMD (multiple instruction streams over multiple data streams)
iv. MISD (multiple instruction streams over a single data stream)

2. Development Layer:
3. Three shared-memory multiprocessor models:
i. The uniform memory-access (UMA) model
ii. The nonuniform-memory-access (NUMA) model
iii. The cache-only memory architecture (COMA) model
4. Pipelining in Superscalar Processor:

5. Shared Variable Model: A shared variable model in parallel processing refers to a programming approach where multiple processes or tasks can access and modify shared data, allowing them to work together on a common task. This model requires careful synchronization to avoid conflicts and ensure data integrity, and it is commonly used in parallel computing to achieve efficient multiprocessing.
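A minimal sketch of the shared-variable model in Python threads (names and counts are illustrative): several workers update one shared counter, and a lock provides the synchronization mentioned above.

    # Sketch: threads cooperating through a shared variable under a lock.
    import threading

    counter = 0                     # shared variable
    lock = threading.Lock()

    def worker(iterations):
        global counter
        for _ in range(iterations):
            with lock:              # synchronize access to the shared data
                counter += 1

    threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(counter)                  # 40000 with the lock; unpredictable without it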

6. Message Passing: Message passing in parallel processing is a method of communication between different processes or computing nodes. It involves sending and receiving messages to share data and coordinate tasks. This approach is commonly used in parallel and distributed computing to enable collaboration between separate processes or systems.
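A minimal message-passing sketch using a Python multiprocessing queue (the producer/consumer names are illustrative): the two processes share no variables and interact only by sending and receiving messages.

    # Sketch: two processes communicate purely by passing messages.
    from multiprocessing import Process, Queue

    def producer(q):
        for i in range(3):
            q.put(i * i)            # send a message
        q.put(None)                 # sentinel: no more data

    def consumer(q):
        while (msg := q.get()) is not None:   # receive until the sentinel arrives
            print("received", msg)

    if __name__ == "__main__":
        q = Queue()
        p1, p2 = Process(target=producer, args=(q,)), Process(target=consumer, args=(q,))
        p1.start(); p2.start()
        p1.join(); p2.join()
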
7. Critical Section: A critical section (CS) is a code segment accessing shared variables, which must be executed by only one process at a time and which, once started, must be completed without interruption. Conditions:
a) Mutual exclusion: at most one process executes the CS at a time.
b) No deadlock in waiting: no circular wait by two or more processes trying to enter the CS; at least one will succeed.
c) No preemption: no interruption until completion, once the CS has been entered.
d) Eventual entry: a process attempting to enter its CS will eventually succeed.
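A minimal sketch of a critical section guarded by a lock (Python multiprocessing; the counter example is illustrative): the lock enforces mutual exclusion, so only one process at a time executes the guarded update.

    # Sketch: the lock guarantees that the shared update is a critical section.
    from multiprocessing import Process, Lock, Value

    def deposit(lock, balance):
        for _ in range(1000):
            with lock:                      # entry protocol: acquire the lock
                balance.value += 1          # critical section: shared variable access
            # exit protocol: the lock is released by the 'with' block

    if __name__ == "__main__":
        lock, balance = Lock(), Value("i", 0)
        procs = [Process(target=deposit, args=(lock, balance)) for _ in range(4)]
        for p in procs: p.start()
        for p in procs: p.join()
        print(balance.value)                # 4000: no updates are lost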

8. Differences between multithreading, multiprocessing, multiprogramming, and multitasking:

S.No. | Multiprogramming | Multitasking | Multithreading | Multiprocessing
1. | Multiple programs execute at the same time on a single device. | A single resource is used to process multiple tasks. | Multithreading is an extended form of multitasking. | Multiple processing units are used by a single device.
2. | The process resides in the main memory. | The process resides in the same CPU. | More than one thread is processed on a single CPU. | The process switches from one CPU to another as multiple processing units are used.
3. | It uses a batch OS; the CPU is utilized completely during execution. | It is time-sharing, as the assigned tasks switch regularly. | The tasks are always further divided into sub-tasks. | It uses multiple processors to execute the task.
4. | The processing is slower, as a single job resides in the main memory during execution. | Multitasking follows the concept of context switching. | It allows a single process to have multiple code segments. | A large amount of work can be done in a short period of time.

9. Language Features for Parallelism:
i. Optimization Feature
ii. Availability Feature
iii. Synchronization/Communication Feature
iv. Data Parallelism Feature
v. Process Management Feature
vi. Control of Parallelism

10. Parallel Code generation figure:

11. CISC vs. RISC:

CISC | RISC
1. It has a microprogramming unit. | It has a hard-wired programming unit.
2. The instruction set has many different instructions that can be used for complex operations. | The instruction set is reduced, and most of these instructions are very primitive.
3. Performance is optimized with emphasis on hardware. | Performance is optimized with emphasis on software.
4. Only a single register set. | Multiple register sets are present.
5. Mostly little or no pipelining. | These processors are highly pipelined.
6. Execution time is very high. | Execution time is very low.
7. Code expansion is not a problem. | Code expansion may create a problem.
8. Decoding of instructions is complex. | Decoding of instructions is simple.
9. It requires external memory for calculations. | It does not require external memory for calculations.
10. Examples of CISC processors are the System/360, VAX, AMD, and Intel x86 CPUs. | Common RISC microprocessors are ARC, Alpha, ARM, AVR, PA-RISC, and SPARC.
11. Instructions can take several clock cycles. | Single cycle for each instruction.
12. More efficient use of RAM than RISC. | Heavy use of RAM (can cause bottlenecks if RAM is limited).
13. Complex and variable-length instructions. | Simple, standardized instructions.
14. A large number of instructions. | A small number of fixed-length instructions.
15. Compound addressing modes. | Limited addressing modes.
16. Important applications are security systems and home automation. | Important applications are smartphones and PDAs.
17. Varying formats (16-64 bits for each instruction). | Fixed (32-bit) format.
18. Unified cache for instructions and data. | Separate data and instruction caches.
