CHAPTER 1

OVERVIEW OF THE COMPUTING PARADIGM

Automatic computing has changed how humans solve problems and the range
of ways in which problems can be solved. Computing has changed our
perception, and even the world, more than any other innovation in the recent
past, and much more change in computing is still to come. Understanding
computing provides deep insights and sharpens the reasoning in our minds
about our universe.
Over the last couple of years, there has been increased interest in
reducing the power consumption of computing processors. This chapter aims
to explain different distributed computing technologies (peer-to-peer,
cluster, utility, grid, cloud, fog, and jungle computing) and to compare them.

1.1 RECENT TRENDS IN DISTRIBUTED COMPUTING


Distributed computing is a method of computer processing in which different
parts of a program execute simultaneously on two or more computers that
communicate with each other over a network. It requires that a program be
segmented into sections that can run simultaneously; the segmentation must
also take into account the different environments on which the different
sections of the program will execute. Three significant characteristics of
distributed systems are the concurrency of components, the lack of a global
clock, and the independent failure of components.
2 • CLOUD COMPUTING BASICS

A program that runs in a distributed system is called a distributed
program, and distributed programming is the process of writing such
programs. Distributed computing also refers to solving computational
problems using distributed systems. Distributed computing is a model in
which the resources of a system are shared among multiple computers to
improve efficiency and performance, as shown in Figure 1.1.

FIGURE 1.1 Workflow of distributed systems
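The segmentation just described can be sketched in a few lines. The example below is an illustrative sketch, not from the text: the function and variable names are my own, and the "sections" run as parallel processes on a single machine, whereas a real distributed system would place each section on a different computer.

```python
# Sketch: a program segmented into sections that run simultaneously.
# Here the sections are parallel processes on one machine; in a real
# distributed system each section would run on its own computer.
from multiprocessing import Pool

def process_section(section):
    """One independently executable section of the larger program."""
    return sum(section)  # stand-in for real work

if __name__ == "__main__":
    data = list(range(100))
    # Segment the input into pieces that can be processed concurrently.
    sections = [data[i:i + 25] for i in range(0, len(data), 25)]
    with Pool(processes=4) as pool:
        partial_results = pool.map(process_section, sections)
    print(sum(partial_results))  # same answer as the sequential program
```

Note that the segmentation also determines the result-combining step (here a final `sum`), which is the part that must be designed with the different execution environments in mind.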

A distributed computing system has the following characteristics:


- It consists of several independent computers connected via a communication
  network.
- Messages are exchanged over the network for communication.
- Each computer has its own memory and clock and runs its own operating
  system.
- Remote resources are accessed through the network.
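The message-exchange characteristic above can be made concrete with a small sketch. This example is my own illustration, not from the text: both endpoints live on one machine and talk over a local socket, where in a real system they would be separate computers on a network.

```python
# Sketch: communication by message exchange over a network connection.
# One endpoint plays the remote computer and answers a single request.
import socket
import threading

def serve_once(server_sock):
    """Accept one connection, read a message, and send a reply."""
    conn, _ = server_sock.accept()
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(f"echo:{request}".encode())

def exchange_message(message):
    server = socket.socket()
    server.bind(("127.0.0.1", 0))        # let the OS pick a free port
    server.listen(1)
    port = server.getsockname()[1]
    worker = threading.Thread(target=serve_once, args=(server,))
    worker.start()
    with socket.create_connection(("127.0.0.1", port)) as client:
        client.sendall(message.encode())
        reply = client.recv(1024).decode()
    worker.join()
    server.close()
    return reply

print(exchange_message("ping"))  # echo:ping
```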
Various classes of distributed computing are shown in Figure 1.2 and will be
discussed further in the subsequent sections.

FIGURE 1.2 Taxonomy of distributed computing



1.1.1 Peer to Peer Computing


When computers first moved into mainstream use, personal computers (PCs)
were connected through LANs (local area networks) to central servers. These
central servers were much more powerful than the PCs, so any large data
processing took place on the servers. PCs have since become much more
powerful, and capable enough to handle data processing locally rather than
on central servers. Because of this, peer-to-peer (P2P) computing can now
occur, with individual computers bypassing central servers to connect and
collaborate directly with each other.
A peer is a computer that behaves as a client in the client/server model.
It also contains an additional layer of software that allows it to perform server
functions. The peer computer can respond to requests from other peers by
communicating a message over the network.
P2P computing refers to a class of systems and applications that employ
distributed resources to perform a critical function in a decentralized
manner. The resources encompass computing power, data (storage and content),
network bandwidth, and presence (computers, humans, and other resources) [3].
P2P computing is a network-based computing model for applications where
computers share resources and services via direct exchange, as shown in
Figure 1.3.

FIGURE 1.3 Peer to peer network

Technically, P2P provides the opportunity to make use of vast untapped
resources that would otherwise go unused. These resources include processing
power for large-scale computations and enormous storage potential. The P2P
mechanism can also be used to eliminate the risk of a single point of
failure. When P2P is used within the enterprise, it may be able to replace
some costly data center functions with distributed services between clients.
Storage, for data retrieval and backup, can be placed on clients. P2P
applications build up functions such as storage, computations, messaging,
security, and file distribution through direct exchanges between peers.
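The dual role of a peer, client behavior plus an extra server layer, can be sketched as a toy class. This is my own illustration, not from the text: the names are hypothetical, and the peers call each other in-process where a real P2P system would exchange messages over the network.

```python
# Sketch: a peer is a client with an additional server layer, so it can
# both issue requests and answer them, with no central server involved.
class Peer:
    def __init__(self, name):
        self.name = name
        self.files = {}                  # storage this peer offers to others

    # server-layer function: respond to a request from another peer
    def handle_request(self, filename):
        return self.files.get(filename)

    # client-layer function: ask another peer directly
    def fetch(self, other, filename):
        return other.handle_request(filename)

alice = Peer("alice")
bob = Peer("bob")
bob.files["notes.txt"] = "P2P shares storage between peers"
print(alice.fetch(bob, "notes.txt"))
```

Because every peer carries both layers, storage and file distribution emerge from direct exchanges rather than from a data center.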

1.1.2 Cluster Computing


Cluster computing consists of a collection of interconnected standalone
computers that work cooperatively as a single integrated computing
resource, taking advantage of the parallel processing power of those
standalone computers. Computer clusters have each node set to carry out
the same tasks, controlled and scheduled by software. The components of
a cluster are connected to each other through fast local area networks as
shown in Figure 1.4. Clustered computer systems have proven to be effec-
tive in handling a heavy workload with large datasets. Deploying a cluster
increases performance and fault tolerance.

FIGURE 1.4 Cluster computing

Some major advantages of cluster computing are manageability, a single
system image, and high availability. In a cluster, software is automatically
installed and configured, and nodes can be added and managed easily, so it
is an open system that is easy to deploy and cost-effective to acquire and
manage. Cluster computing also has some disadvantages. A cluster is hard to
manage without experience; when the cluster is large, it is difficult to
locate a failed component; and the programming environment is hard to
improve when the software on some nodes differs from that on others.
The use of clusters as a computing platform is not just limited to scien-
tific and engineering applications; there are many business applications that
benefit from the use of clusters. This technology improves the performance
of applications by using parallel computing on different machines and also
enables the shared use of distributed resources.
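The "same task on every node, scheduled by software" idea can be sketched as follows. This is an illustrative sketch of my own, not from the text: the nodes are simulated with a thread pool on one machine, where a real cluster would schedule the identical task onto separate computers on a fast LAN.

```python
# Sketch: a small controller divides a workload, schedules the identical
# task on every "node," and combines the per-node results.
from concurrent.futures import ThreadPoolExecutor

def node_task(chunk):
    """The identical task each cluster node carries out."""
    return max(chunk)

def run_on_cluster(dataset, nodes=4):
    # controller: divide the workload across the nodes (strided split so
    # every element is assigned even when the sizes don't divide evenly)
    chunks = [dataset[i::nodes] for i in range(nodes)]
    with ThreadPoolExecutor(max_workers=nodes) as pool:
        node_results = list(pool.map(node_task, chunks))
    return max(node_results)  # combine the per-node results

print(run_on_cluster(list(range(1000))))  # 999
```

The controller plays the role of the cluster's scheduling software; fault tolerance in a real cluster would come from reassigning a chunk when a node fails.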

1.1.3 Utility Computing


Utility computing is a service provisioning model in which a service provider
makes computing resources and infrastructure management available to the
customer as per the need, and charges them for specific usage rather than
a fixed rate. It has an advantage of being low cost with no initial setup cost
to afford the computer resources. This repackaging of computing services is
the foundation of the shift to on-demand computing, software as a service,
and cloud computing models.
The customers need not buy all the hardware, software, and licenses to do
business. Instead, the customer relies on another party to provide these
services. Utility computing is one of the most popular IT service models,
primarily because of the flexibility and economy it provides. This model is
based on that used by conventional utilities such as telephone services,
electricity, and gas. Customers have access to a virtually unlimited supply
of computing solutions over the Internet or a virtual private network (VPN),
which can be used whenever and wherever required. The back-end
infrastructure and the management and delivery of computing resources are
governed by the provider. Utility computing solutions can include virtual
software, virtual servers, virtual storage, backup, and many more IT
solutions. Multiplexing, multitasking, and virtual multitenancy have
brought us to the utility computing business, as shown in Figure 1.5.

FIGURE 1.5 Utility computing
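The pay-per-usage idea is easy to make concrete. The sketch below is my own illustration with made-up rates, not from the text: like an electricity meter, the provider charges only for the resources actually consumed, never a fixed fee.

```python
# Sketch: utility-style metered billing, charging for specific usage
# rather than a fixed rate. The rates below are hypothetical.
RATES = {
    "cpu_hours": 0.05,         # price per CPU-hour consumed
    "storage_gb_months": 0.02, # price per GB-month of virtual storage
    "backup_gb": 0.01,         # price per GB backed up
}

def metered_bill(usage):
    """Charge only for what was consumed, resource by resource."""
    return round(sum(RATES[r] * amount for r, amount in usage.items()), 2)

# a customer who used 100 CPU-hours and 50 GB-months of virtual storage
print(metered_bill({"cpu_hours": 100, "storage_gb_months": 50}))  # 6.0
```

A customer who uses nothing in a billing period owes nothing, which is exactly the contrast with a fixed-rate model.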

1.1.4 Grid Computing


A scientist studying proteins logs into a computer using an entire network of
computers to analyze data. A businessman accesses his company’s network
through a Personal Digital Assistant in order to forecast the future of a par-
ticular stock. An army official accesses and coordinates computer resources
on three different military networks to formulate a battle strategy. All these
scenarios have one thing in common: they rely on a concept called grid com-
puting. At its most basic level, grid computing is a computer network in
which each computer’s resources are shared with every other computer in
the system. Processing power, memory and data storage are all community
resources that authorized consumers can tap into and leverage for specific
tasks. A grid computing system can be as simple as a collection of similar
computers running the same operating system or as complex as internetworked
systems comprising every computer platform you can think of.
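The two defining points of the scenarios above, community resources and authorized consumers, can be sketched in a toy form. This example is my own illustration, not from the text: the machine names, capacities, and user list are all hypothetical.

```python
# Sketch: a grid as a pool of shared, heterogeneous resources that only
# authorized consumers may tap for a specific task.
grid = {
    "lab-pc":   {"cpus": 4,  "os": "linux"},
    "hpc-node": {"cpus": 64, "os": "linux"},
    "desktop":  {"cpus": 8,  "os": "windows"},
}
authorized = {"scientist", "analyst"}

def allocate(user, cpus_needed):
    """Find a community machine with enough spare processing power."""
    if user not in authorized:
        raise PermissionError(f"{user} may not use the grid")
    for name, node in grid.items():
        if node["cpus"] >= cpus_needed:
            return name
    return None  # no machine in the grid can satisfy the request

print(allocate("scientist", 16))  # hpc-node
```

Note that the pool mixes platforms, which is the sense in which a grid can span every computer platform you can think of.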
