
From parallel systems to distributed systems

Prof. Carlos Andrés Méndez

Operating System Concepts – 10th Edition Silberschatz, Galvin and Gagne ©2018
Outline
▪ Review
▪ From parallel to distributed systems
• Pitfalls in distributed computing
• New requirement: scalability
• Event-driven Server Architectures
• Grid and cloud computing

Parallel vs distributed: summary

▪ Parallel system
• Shared memory, or
• Physically co-located nodes
• No concurrency
• Typical users: scientists (HPC); example: IBM's Blue Gene/P massively
parallel supercomputer

▪ Distributed system
• Physically remote nodes
• Concurrency
• Typical users: businesses; example: cloud computing

Concurrency vs. Parallelism
Concurrency is a conceptual property of a program, while parallelism is a
runtime state.[1]
In terms of scheduling, parallelism can only be achieved if the hardware
architecture supports parallel execution, as multi-core or multi-processor
systems do. A single-core machine can also execute multiple threads
concurrently; however, it can never provide true parallelism.[1]

[1] Erb, B. (2012). Concurrent Programming for Scalable Web Architectures.

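As a quick illustration of the distinction, the following minimal sketch (the workload and worker counts are made up for illustration) runs the same CPU-bound function first with threads, which interleave concurrently but, under CPython's global interpreter lock, never run in true parallel, and then with processes, which can achieve real parallelism on a multi-core machine:

    # Sketch: concurrency (threads) vs. parallelism (processes) in CPython.
    import time
    from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

    def cpu_bound(n: int) -> int:
        # Deliberately CPU-heavy loop (illustrative workload).
        return sum(i * i for i in range(n))

    def timed(executor_cls, label: str) -> None:
        start = time.perf_counter()
        with executor_cls(max_workers=4) as ex:
            list(ex.map(cpu_bound, [2_000_000] * 4))
        print(f"{label}: {time.perf_counter() - start:.2f}s")

    if __name__ == "__main__":
        timed(ThreadPoolExecutor, "threads: concurrent, not parallel")
        timed(ProcessPoolExecutor, "processes: parallel on multi-core")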
Parallel vs distributed: Concurrency perspective

In parallel computing, a computational task is typically broken down into
several, often many, very similar sub-tasks that can be processed independently
and whose results are combined afterwards, upon completion. In contrast, in
concurrent computing, the various processes often do not address related tasks;
when they do, as is typical in distributed computing, the separate tasks may
have a varied nature and often require some inter-process communication
during execution. [2]
Distributed systems are inherently concurrent and parallel, thus concurrency
control (synchronization) is also essential. [1]

[1] Erb, B. (2012). Concurrent Programming for Scalable Web Architectures.
[2] "Parallelism vs. Concurrency". Haskell Wiki.
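
To make the parallel-computing half of this contrast concrete, here is a minimal sketch (the problem size and worker count are illustrative) that breaks one task into similar, independent sub-tasks and combines their results upon completion:

    # Sketch: decompose a task into similar sub-tasks, process them
    # independently, and combine the partial results afterwards.
    from concurrent.futures import ProcessPoolExecutor

    def partial_sum(chunk: range) -> int:
        return sum(chunk)

    if __name__ == "__main__":
        n, workers = 10_000_000, 4           # illustrative values
        step = n // workers
        chunks = [range(i * step, (i + 1) * step) for i in range(workers)]
        with ProcessPoolExecutor(max_workers=workers) as ex:
            partials = ex.map(partial_sum, chunks)   # independent sub-tasks
        print(sum(partials))                 # combine upon completion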
Synchronization and Coordination as Concurrency Control
▪ Regardless of the actual programming model, there must be implicit or
explicit control over concurrency in critical regions.
▪ Synchronization and coordination are two mechanisms attempting to
tackle this.
• Synchronization, or more precisely competition synchronization as
labeled by Sebesta [Seb05].
• Coordination, sometimes also named cooperation synchronization
(Sebesta [Seb05]), aims at the orchestration of collaborating activities.
▪ Process synchronization (see the lock-based sketch below) through:
• Semaphores
• Locks
• Monitors
• Atomic variables

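A minimal lock-based sketch of competition synchronization (using Python's standard threading module; the shared counter is an illustrative workload) shows how a lock protects the critical region so that concurrent increments cannot interleave:

    # Sketch: a lock enforces mutual exclusion around a critical region.
    import threading

    counter = 0
    lock = threading.Lock()

    def increment(times: int) -> None:
        global counter
        for _ in range(times):
            with lock:           # enter critical region
                counter += 1     # read-modify-write is now safe

    threads = [threading.Thread(target=increment, args=(100_000,))
               for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)  # always 400000; without the lock, updates could be lost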
Pitfalls in distributed computing
▪ Programming distributed systems introduces a set of additional challenges
compared to programming for a single machine.
▪ The “Fallacies of Distributed Computing”:
• The network is reliable.
• Latency is zero.
• Bandwidth is infinite.
• The network is secure.
• Topology doesn't change.
• There is one administrator.
• Transport cost is zero.
• The network is homogeneous.

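Because the network is in fact unreliable and latency is never zero, distributed code has to make failures explicit. A minimal sketch using only Python's standard library (the URL, timeout, and retry policy are illustrative assumptions):

    # Sketch: guard against the first two fallacies with timeouts and retries.
    import time
    import urllib.request
    from urllib.error import URLError

    def fetch_with_retry(url: str, attempts: int = 3,
                         timeout: float = 2.0) -> bytes:
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    return resp.read()
            except (URLError, TimeoutError):
                if attempt == attempts:
                    raise                       # give up after the last attempt
                time.sleep(0.1 * 2 ** attempt)  # exponential backoff
        raise RuntimeError("unreachable")

    print(len(fetch_with_retry("http://example.com/")), "bytes")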
Programming languages on distributed systems

Ghosh et al. have considered the impact of programming languages on
distributed systems. They pointed out that mainstream languages like Java and
C++ are still the most popular choice for developing distributed systems.
Recently, there has been increasing interest in alternative programming
languages that embrace high-level concurrency and distributed computing.
These languages focus on important concepts and idioms for distributed systems,
such as component abstractions, fault tolerance and distribution mechanisms.

New requirement: scalability

Scalability is a non-functional property of a system that describes its ability
to handle increasing (and decreasing) workloads appropriately.
There are two basic strategies for scaling: vertical and horizontal.
In the case of vertical scaling, additional resources are added to a single
node. As a result, the node can handle more work and provides additional
capacity. Additional resources include more or faster CPUs, more memory, or,
in the case of virtualized instances, a larger share of the underlying physical
machine. In contrast, horizontal scaling adds more nodes to the overall system.

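As a toy illustration of horizontal scaling (the node names and request count are made up), a round-robin dispatcher spreads the same workload across however many nodes the system currently has:

    # Toy sketch: more nodes -> fewer requests per node (round-robin dispatch).
    from itertools import cycle

    def dispatch(requests: int, backends: list[str]) -> dict[str, int]:
        load = {b: 0 for b in backends}
        rr = cycle(backends)                 # simple round-robin policy
        for _ in range(requests):
            load[next(rr)] += 1
        return load

    print(dispatch(1_000, ["node-1", "node-2"]))                      # 500 each
    print(dispatch(1_000, ["node-1", "node-2", "node-3", "node-4"]))  # 250 each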
Scalability and Concurrency: Amdahl’s Law
▪ Identifies performance gains from adding additional cores to an application
that has both serial and parallel components
▪ S is the serial portion
▪ N processing cores
▪ speedup ≤ 1 / (S + (1 - S) / N)

▪ That is, if an application is 75% parallel / 25% serial, moving from 1 to 2
cores results in a speedup of 1.6 times
▪ As N approaches infinity, speedup approaches 1 / S
▪ The serial portion of an application has a disproportionate effect on the
performance gained by adding additional cores

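The law is easy to check numerically (pure arithmetic, no assumptions beyond the formula above):

    # Amdahl's law: speedup <= 1 / (S + (1 - S) / N)
    def amdahl_speedup(serial_fraction: float, cores: int) -> float:
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

    print(round(amdahl_speedup(0.25, 2), 2))          # 1.6, the slide's example
    print(round(amdahl_speedup(0.25, 1_000_000), 2))  # tends to 1 / S = 4.0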
Server Architectures
There are traditionally two competing server architectures: one is based on
threads, the other on events. Over time, more sophisticated variants emerged,
sometimes combining both approaches. There has been a long-standing
controversy over whether threads or events are generally the better foundation
for high-performance servers.

Event-driven Server Architectures

As an alternative to synchronous blocking I/O, the event-driven approach is
also common in server architectures.
Due to the asynchronous/non-blocking call semantics, models other than the
previously outlined thread-per-connection model are needed.
A common model maps a single thread to multiple connections. The thread then
handles all events arising from the I/O operations of these connections and
requests.

Event-driven Server Architectures

New events are queued and the thread executes a so-called event loop—
dequeuing events from the queue, processing the event, then taking the next
event or waiting for new events to be pushed.

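The following minimal sketch (the port and buffer size are illustrative) shows this pattern with Python's standard selectors module: a single thread registers many connections and runs the event loop, dispatching each event to a handler:

    # Sketch: one thread, many connections, driven by an event loop.
    import selectors
    import socket

    sel = selectors.DefaultSelector()

    def accept(server: socket.socket) -> None:
        conn, _ = server.accept()
        conn.setblocking(False)
        sel.register(conn, selectors.EVENT_READ, echo)   # map conn -> handler

    def echo(conn: socket.socket) -> None:
        data = conn.recv(4096)
        if data:
            conn.send(data)          # echo the request back
        else:
            sel.unregister(conn)     # client closed the connection
            conn.close()

    server = socket.socket()
    server.bind(("localhost", 9000))  # illustrative port
    server.listen()
    server.setblocking(False)
    sel.register(server, selectors.EVENT_READ, accept)

    while True:                       # the event loop
        for key, _ in sel.select():   # wait for new events
            key.data(key.fileobj)     # dispatch: accept() or echo()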
Event-driven Server Architectures

As a result, the control flow of an application following the event-driven
style is effectively inverted. Instead of sequential operations, an
event-driven program uses a cascade of asynchronous calls and callbacks that
are executed on events. This notion often makes the flow of control less
obvious and complicates debugging.
The usage of event-driven server architectures has historically depended on the
availability of asynchronous/non-blocking I/O operations.

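The inversion is visible even in a tiny example (standard asyncio API; the handler name is made up): control returns to the event loop immediately, and the callback runs later, when the event fires:

    # Sketch: inverted control flow via a callback registered with an event loop.
    import asyncio

    def on_timer_fired() -> None:
        # Called by the event loop when the timer event occurs,
        # not at a fixed point in a sequential flow.
        print("timer event handled")
        loop.stop()

    loop = asyncio.new_event_loop()
    loop.call_later(0.5, on_timer_fired)   # register a callback for a future event
    print("callback registered; control returned immediately")
    loop.run_forever()                     # the loop dispatches the callback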
Grid and cloud computing

