Parallel and Distributed Computing


Week 1
By Uswa Adnan

Reference book: "Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers, 2nd Edition"
What is Computing?

Computing is the process of using computer technology to complete a given goal-oriented task. Computing may include the design and development of software and hardware systems for a broad range of purposes, often structuring, processing, and managing any kind of information.
Serial Computing

Before diving into parallel computing, let's first look at how computer software traditionally performed computations, and why that approach fell short in the modern era.

Computer software was conventionally written for serial computing. To solve a problem, an algorithm divides it into smaller instructions. These discrete instructions are then executed on the Central Processing Unit of a computer one by one; only after one instruction finishes does the next one start.

A real-life example would be people standing in a queue waiting for a movie ticket, with only one cashier. The cashier issues tickets to the people one by one. The situation gets worse when there are two queues but still only one cashier.

In short, serial computing works as follows:

1. A problem statement is broken into discrete instructions.
2. The instructions are executed one by one.
3. Only one instruction is executed at any moment in time.
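The three steps above can be sketched in a few lines of Python. This is an illustrative sketch only; the `serve_queue` function and the customer names are invented for the example, mirroring the cashier analogy:

```python
# Serial computing sketch: the problem (issue tickets to a queue) is broken
# into discrete instructions, and each one runs only after the previous
# instruction has finished.

def serve_queue(customers):
    """One cashier serves a ticket queue one person at a time."""
    tickets = []
    for name in customers:          # only one instruction executes at a time
        tickets.append(f"ticket for {name}")
    return tickets

print(serve_queue(["Ada", "Alan", "Grace"]))
```

No matter how many people are waiting, the loop body runs strictly one iteration at a time, which is exactly the bottleneck described in point 3.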
Serial Computing

Look at point 3: only one instruction executes at any moment in time. This was a huge problem for the computing industry, because it wasted hardware resources; only one part of the hardware was active for a particular instruction at any time. As problem statements grew heavier and bulkier, so did their execution time. Single-core processors such as the Pentium 3 and Pentium 4 are examples. Now let's come back to our real-life problem: the congestion would ease if there were two queues and two cashiers giving tickets to two people simultaneously. That is an example of parallel computing.
Processing in Parallel Computers

A parallel computer divides a task into multiple subtasks and executes them simultaneously on multiple processors to increase speed and efficiency. This speeds up the execution of the program as a whole. Memory in parallel systems can be either shared or distributed. Parallel computers provide concurrency and save time and money.
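The divide-and-combine process described above can be sketched with Python's standard `multiprocessing.Pool`. The chunking scheme, worker count, and function names here are illustrative choices, not prescribed by the text:

```python
from multiprocessing import Pool

def subtask(chunk):
    """Each worker process sums its own chunk of the data."""
    return sum(chunk)

def parallel_sum(numbers, workers=4):
    # Divide the task into one subtask (chunk) per worker.
    chunks = [numbers[i::workers] for i in range(workers)]
    with Pool(processes=workers) as pool:
        partials = pool.map(subtask, chunks)  # subtasks run simultaneously
    return sum(partials)                      # combine the partial results

if __name__ == "__main__":
    print(parallel_sum(list(range(1000))))    # same answer as a serial sum
```

The final result is identical to a serial `sum`, but the per-chunk work is spread across multiple processes, which is the source of the speedup for large inputs.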
Types of Parallel Computers

There are two types of parallel computers:

1. Shared memory multiprocessors (parallel computing)
2. Distributed memory multicomputers (distributed computing)
Parallel Computing

Parallel computing refers to the process of breaking larger problems down into smaller, independent, often similar parts that can be executed simultaneously by multiple processors communicating via shared memory; the results are combined upon completion as part of an overall algorithm.
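Communication via shared memory can be sketched with Python's `multiprocessing.Value`, which places a single integer in memory that all worker processes can see. The worker count, loop sizes, and function names are assumptions made for the example:

```python
from multiprocessing import Process, Value, Lock

def worker(counter, lock, n):
    """Each process increments a counter that lives in shared memory."""
    for _ in range(n):
        with lock:                 # synchronize access to the shared value
            counter.value += 1

def shared_memory_count(procs=4, per_proc=1000):
    counter = Value("i", 0)        # an int stored in shared memory
    lock = Lock()
    ps = [Process(target=worker, args=(counter, lock, per_proc))
          for _ in range(procs)]
    for p in ps:
        p.start()
    for p in ps:
        p.join()
    return counter.value

if __name__ == "__main__":
    print(shared_memory_count())   # 4 processes x 1000 increments = 4000
```

Without the lock, simultaneous updates could interleave and lose increments; the lock is what makes the shared-memory communication correct.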
Processor: The central processing unit (CPU), also called a processor, is located inside the computer case on the motherboard. It is sometimes called the brain of the computer. It is an electronic circuit that handles all the instructions it receives from hardware and software running on the computer. Almost all processes depend on the operations of the processor, which provides the instructions and processing power for each operation. A computer can have one or multiple processors.

Core: A core, or CPU core, is the "brain" of a processor. It receives instructions and performs calculations, or operations, to satisfy those instructions. A core is located inside the processor, and a CPU can have multiple cores.
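You can ask the operating system how many logical cores the running machine exposes using Python's standard library (note that `os.cpu_count()` reports logical cores, which may include hardware threads, not just physical cores):

```python
import os

# Number of logical CPU cores visible to the operating system.
cores = os.cpu_count()
print(f"This machine reports {cores} logical CPU cores.")
```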
Dual-core processors

As the name suggests, a dual-core processor has 2 cores per physical processor: two processing units joined together to boost productivity, with their cache and bus controllers on a single integrated circuit.

Quad-core processors

A quad-core processor is a chip with four independent cores that read and execute instructions in the central processing unit (CPU). Within this four-core chip, each core operates in combination with supporting circuits such as cache, input/output, and memory management. The individual cores can therefore run multiple programs, and even execute them at the same time, increasing the overall speed of programs suited to parallel processing.
Hexa-core processors

The hexa-core processor is another multi-core processor, built with six cores; as mentioned, the more cores built into a chip, the faster it can handle tasks compared with chips that have fewer cores. This multi-core processor was initially introduced in 2010, in the Intel Core i7 hexa-core processor.

Octa-core processors

The octa-core processor is easy to recognize from its name: octa means eight, so this multi-core processor is made up of 8 independent cores to handle tasks faster and more efficiently than the processors above. An octa-core processor can get through work faster than a quad-core processor of comparable specification, and is ultimately faster than any of the core counts listed above.
Deca-core processors

Just as dual-core processors have 2 cores, quad-core 4, hexa-core 6, and octa-core 8, a deca-core processor is composed of 10 completely independent cores, designed to manage and execute tasks more effectively than any of the processors above.
Difference between processor and core

(Figure: comparison of core and processor.)
Why We Use Parallel Computing

1. Save time: Throwing more resources at a single task shortens its completion time and speeds up processing.

2. Solve large and complex problems: Many problems are so large and complex that it is impractical or impossible to solve them with serial computing. For example, web search engines and databases perform millions of transactions every second, so parallel computing is key in such situations.

3. Save money: Compared with distributed systems, a parallel architecture lets one computer do the work of several simply by increasing its processing power (number of processors), instead of using multiple stand-alone computer systems, so a parallel system can be cost-effective.
4. Provide concurrency: A single computing resource can do one thing at a time; multiple computing resources can do many things simultaneously.

5. Make better use of underlying parallel hardware: Modern computers, even our mobiles and laptops, are parallel in architecture, with multiple processors or cores. Parallel software is specifically designed for parallel hardware with multiple cores, threads, etc.

6. The real world is massively complex: In the real world, many complex and interrelated events happen at the same time, yet in a temporal sequence. Parallel computing is well suited for modeling and simulating complex real-world phenomena.
Applications of Parallel Computing

● Databases and data mining.
● Real-time simulation of systems.
● Science and engineering.
● Advanced graphics, augmented reality, and virtual reality.
Limitations of Parallel Computing

● It requires communication and synchronization between multiple sub-tasks and processes, which is difficult to achieve.
● The algorithms must be structured so that they can be handled by a parallel mechanism.
● The algorithms or programs must have low coupling and high cohesion, but it is difficult to create such programs.
● Only technically skilled, expert programmers can code a parallelism-based program well.
Distributed Computing

Distributed computing is a computing concept that, in its most general sense, refers to multiple computer systems working on a single problem. In distributed computing, a single problem is divided into many parts, and each part is solved by a different computer. As long as the computers are networked, they can communicate with each other to solve the problem using message-passing techniques. The computers in a distributed system can be physically close together and connected by a local network, or they can be geographically distant and connected by a wide area network.
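The message-passing idea above can be sketched with Python's standard `socket` module. As a simplification for the example, both "computers" run in one process on localhost (the server on a background thread), and the problem of summing 0..99 is split between them; the port is chosen by the OS:

```python
import socket
import threading

def serve_once(listener):
    """The 'remote' node: receive a partial result, add its own, reply."""
    conn, _ = listener.accept()
    with conn:
        part = int(conn.recv(64).decode())              # message in
        conn.sendall(str(part + sum(range(50))).encode())  # message out

def distributed_sum():
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
    listener.listen(1)
    port = listener.getsockname()[1]
    t = threading.Thread(target=serve_once, args=(listener,))
    t.start()
    # The 'local' node sends its part of the problem and reads the answer.
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(str(sum(range(50, 100))).encode())
        total = int(c.recv(64).decode())
    t.join()
    listener.close()
    return total

print(distributed_sum())   # sum of 0..99, computed across two "nodes"
```

The two endpoints share no memory at all; the only way they cooperate is by exchanging messages over the network connection, which is the defining trait of distributed computing.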
Why Use Distributed Computing

1. Historical (the need to modernize): Computing resources that used to operate independently now need to work together to fulfill the requirements of an organization. For example, consider an office that in the past acquired personal computers for individuals to work on; now, with the growing need for data and resource sharing, those individual computers must be connected by some means of communication.

2. Functional: If special-function hardware or software is available over the network, that functionality does not have to be duplicated on every computer system or node that needs to access it. For example, an organization may share central software only over the network, so that only connected nodes can access it.

3. Economical: It may be more cost-effective to have many small computers working together than one large computer of equivalent power.

4. Flexible: In a distributed architecture, if more nodes are needed, another unit is added in place, rather than bringing the whole system down and replacing it with an upgraded one.

5. More reliable and available: Functionality and resources are shared across all the computers, so if one is unavailable, another can do the same task.
Difference between Parallel and Distributed Computing

● Memory: In parallel computing, processors share a common memory; in distributed computing, each computer has its own memory.
● Communication: Parallel processors communicate via shared memory; distributed computers communicate by passing messages over a network.
● Location: A parallel computer has multiple processors within one system; a distributed system spans multiple networked computers, which may be geographically distant.
