
Flynn's Classification:

1. SISD (single instruction stream over a single data stream)

2. SIMD (single instruction stream over multiple data streams)

3. MISD (multiple instruction streams over a single data stream)

4. MIMD (multiple instruction streams over multiple data streams)

Six layers for computer system development

Shared-Memory Multiprocessor:
1. Uniform Memory Access Model (UMA)

2. Non-uniform Memory Access Model (NUMA)


3. Cache Only Memory Architecture (COMA)

Pipelining in Superscalar Processor:

Shared Variable model and message passing model figure:

Shared Variable Model: A shared variable model in parallel processing refers to a programming approach where multiple processes or tasks can access and modify shared data, allowing them to work together on a common task. This model requires careful synchronization to avoid conflicts and ensure data integrity, and it is commonly used in parallel computing to achieve efficient multiprocessing.
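As a minimal sketch of the shared variable model (illustrative names; Python's threading module is one way to realize it), several threads below contribute partial sums to a common dictionary, with a lock providing the required synchronization:

```python
import threading

shared = {"total": 0}        # shared data visible to all threads
lock = threading.Lock()      # guards every update to the shared data

def add_partial(values):
    s = sum(values)          # local work needs no synchronization
    with lock:               # synchronized update of the shared variable
        shared["total"] += s

threads = [threading.Thread(target=add_partial, args=(range(i, i + 10),))
           for i in (0, 10, 20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# shared["total"] is now sum(range(30)) == 435
```

Without the lock, two threads could read the same old value of `shared["total"]` and one update would be lost.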

Message Passing: Message passing in parallel processing is a method of communication between different processes or computing nodes. It involves sending and receiving messages to share data and coordinate tasks. This approach is commonly used in parallel and distributed computing to enable collaboration between separate processes or systems.
Critical Section: A critical section (CS) is a code segment accessing
shared variables, which must be executed by only one process at a time
and which, once started, must be completed without interruption. In other
words, a CS operation is indivisible and satisfies the following
requirements:

1. Mutual exclusion: at most one process executes the CS at a time.
2. No deadlock in waiting: no circular wait by two or more processes trying to enter the CS; at least one will succeed.
3. Nonpreemption: once a process has entered the CS, it runs without interruption until completion.
4. Eventual entry: a process attempting to enter its CS will eventually succeed.
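The four requirements above can be sketched with a lock-protected critical section in Python (a simplified illustration; the counter and function names are made up). The read-modify-write of the shared counter is the CS, and the lock enforces mutual exclusion:

```python
import threading

counter = 0
cs_lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        # critical section: indivisible read-modify-write of a shared variable
        with cs_lock:        # mutual exclusion: only one thread inside at a time
            counter += 1
        # lock released here, so waiting threads eventually enter (eventual entry)

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter == 400000 only because the CS was protected
```

With a single lock there is no circular wait, and the `with` block completes before the lock is released, matching the nonpreemption requirement at the level of the shared update.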

1. Multithreading: Multithreading is a technique in which a single process is divided into multiple threads of execution that can run concurrently. Each thread can perform a specific task within the same process. Multithreading is commonly used to take advantage of multiple CPU cores and improve program efficiency.
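As one small illustration (using Python's concurrent.futures; the task function is hypothetical), a pool of threads within a single process maps a function over a list of inputs concurrently:

```python
from concurrent.futures import ThreadPoolExecutor

def word_length(word):
    # stand-in for any per-item task a thread might perform
    return len(word)

words = ["parallel", "processing", "threads"]
# three threads in the same process, sharing the same memory space
with ThreadPoolExecutor(max_workers=3) as pool:
    lengths = list(pool.map(word_length, words))
# lengths == [8, 10, 7], in input order
```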

2. Multiprocessing: Multiprocessing involves the use of multiple processors or CPU cores to execute multiple tasks or processes simultaneously. It can be used to achieve parallelism in computing, allowing different tasks to run independently on separate processors.

3. Multiprogramming: Multiprogramming is a technique used in operating systems where multiple programs are loaded into memory at the same time and the CPU scheduler switches between them to ensure that the CPU is always executing some program. This allows for efficient use of CPU time and keeps the system responsive.
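The scheduler's switching can be sketched as a toy round-robin simulation (entirely hypothetical job names and time units, not a real OS scheduler): each resident program gets one time quantum, then returns to the back of the queue until it finishes.

```python
from collections import deque

# hypothetical resident programs: [name, remaining time units]
jobs = deque([["A", 3], ["B", 2], ["C", 1]])
quantum = 1
order = []                      # which job held the CPU in each slice

while jobs:
    job = jobs.popleft()        # scheduler picks the next program
    order.append(job[0])        # CPU runs it for one quantum
    job[1] -= quantum
    if job[1] > 0:              # not finished: back of the queue
        jobs.append(job)
# order == ["A", "B", "C", "A", "B", "A"]
```

The CPU is never idle while any job remains, which is the point of multiprogramming.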

4. Multitasking: Multitasking is a broader concept that encompasses both multiprogramming and multithreading. It refers to the ability of an operating system to run multiple tasks or processes concurrently, whether they are different programs or threads within a single program. Multitasking aims to provide the illusion of parallelism and responsiveness to the user.

In summary:

- Multithreading: Concurrent execution of multiple threads within a single process.
- Multiprocessing: Simultaneous execution of multiple processes on
multiple processors or CPU cores.
- Multiprogramming: Loading multiple programs into memory and
switching between them to maximize CPU utilization.
- Multitasking: Running multiple tasks or processes concurrently to
provide a responsive computing environment.

Key differences between multithreading, multiprocessing, multiprogramming, and multitasking:

1. Multithreading:

- Involves the concurrent execution of multiple threads within a single process.
- Threads within the same process share the same memory space and
resources.
- Primarily used to improve program efficiency by utilizing multiple CPU
cores or CPUs.
- Threads within a process can communicate and share data more
easily.
- Useful for tasks that can be divided into smaller, parallelizable
subtasks.

2. Multiprocessing:

- Involves the simultaneous execution of multiple independent processes.
- Each process runs in its own memory space and has its own set of
resources.
- Often used to achieve true parallelism by utilizing multiple CPUs or
CPU cores.
- Processes do not share memory by default, which can make
communication between them more complex.
- Suitable for tasks that can be completely parallelized and run
independently.

3. Multiprogramming:

- Refers to the practice of loading and executing multiple programs in memory simultaneously.
- The operating system's CPU scheduler switches between these
programs to maximize CPU utilization.
- Programs may belong to different users or processes and may not
necessarily run in parallel.
- Helps keep the CPU busy and improves system responsiveness.

4. Multitasking:

- A broader concept that encompasses both multiprogramming and multithreading.
- Involves the concurrent execution of multiple tasks or processes.
- Tasks can be different programs or threads within the same program.
- Aims to provide the illusion of parallelism and ensures that the
computer remains responsive to user interactions.
- Commonly used in modern operating systems to manage and
prioritize the execution of multiple tasks.

In summary, multithreading and multiprocessing are more focused on parallelism at the thread and process level, respectively, while multiprogramming and multitasking are broader concepts that involve managing and executing multiple tasks or programs concurrently to improve system efficiency and responsiveness.

Language Features for Parallelism

1. Optimization Feature
2. Availability Feature
3. Synchronization/Communication Feature
4. Data Parallelism Feature
5. Process Management Feature
6. Control of Parallelism

Parallel Code generation figure:
