
IV CSE CS8791-Cloud Computing

UNIT I
INTRODUCTION
Introduction to Cloud Computing – Definition of Cloud – Evolution of Cloud Computing –
Underlying Principles of Parallel and Distributed Computing – Cloud Characteristics –
Elasticity in Cloud – On-demand Provisioning.

Introduction to Cloud Computing


1. Explain in detail about Cloud Computing with an example.
Cloud: The "cloud" refers to servers that are accessed over the Internet, and the software and
databases that run on those servers. Cloud servers are located in data centers all over the world.
By using cloud computing, users and companies don't have to manage physical servers
themselves or run software applications on their own machines.
The cloud enables users to access the same files and applications from almost any device, because
the computing and storage takes place on servers in a data center, instead of locally on the user
device.
Definition of Cloud (Define Cloud Computing.)
Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a
shared pool of configurable computing resources (e.g., networks, servers, storage, applications,
and services) that can be rapidly provisioned and released with minimal management effort or
service provider interaction.
How Cloud Computing Works?
➢ A cloud computing system keeps its critical data on internet servers rather than
distributing copies of data files to individual client devices.
➢ Video-streaming cloud services like Netflix, for example, stream data across the internet to a
player application on the viewing device rather than sending customers DVD or Blu-ray
physical discs.

Figure 1: Working of Cloud


➢ Clients must be connected to the internet in order to use cloud services. Some video games
on the Xbox Live service, for example, can only be obtained online (not on a physical disc),
while others cannot be played at all without an internet connection.

Prepared By, N.Gobinathan, AP/CSE Page 1



➢ Some industry observers expect cloud computing to keep increasing in popularity in
coming years.
➢ The Chromebook is one example of how all personal computers might evolve in the future
under this trend: devices with minimal local storage space and few local applications
besides the web browser.
Advantages of Cloud Computing
There are many advantages of cloud computing, some of which are explained as follows.
Improved accessibility : Cloud computing provides efficient access to services and
resources from anywhere, at any time, on any device.
Optimum Resource Utilization : Servers, storage and network resources are better utilized
in the cloud environment as they are shared among multiple users, which cuts down the
wastage of resources.
Scalability and Speed : Cloud computing provides high scalability, where the capacity of
hardware, software or network resources can be easily increased or decreased based on
demand. Organizations do not have to invest money and time in buying and setting up
hardware, software and other resources; instead, they can easily scale the resources or
services running on the cloud up or down on demand, with rapid speed of access.
Minimized software licensing cost : The remote delivery of software applications
saves licensing cost, since users do not need to buy or renew expensive software licenses
or programs.
Less personnel training : Cloud users do not need any special training to deploy or
access cloud services, as the self-service portals of the cloud have user-friendly GUIs
with which anyone can work easily.
Flexibility of work practices : Cloud computing gives its users freedom of access, so
that employees can work more flexibly in their work practices. The flexibility of access
to cloud data allows employees to work from home or on holiday through the internet.
Sharing of resources and costs : Cloud computing fulfills the requirement of users to
access resources through a shared pool which can be scaled easily and rapidly to any size. The
sharing of resources saves huge cost and makes efficient utilization of the infrastructure.
Minimized spending on technology infrastructure : As public cloud services are readily
available, the pay-as-you-go or subscription-based utility feature of the cloud lets you access
cloud services economically at a cheaper rate, thereby reducing spending on in-house
infrastructure.
Easier maintenance : As cloud computing services are delivered by the service provider
over the internet, maintenance of the services is easier and is handled by the cloud service
providers themselves.


Less Capital Expenditure : There is no need to spend big money on hardware, software or
licensing fees so capital expenditure is very less.
On-demand self-service : The cloud provides automated provisioning of services on demand
through self-service websites called portals.
Broad network access : The cloud services and resources are provided through a location
independent broad network using standardized methods.
Resource pooling : The cloud service provider aggregates resources into a resource
pool through which users can fulfill their requirements; the pool can easily be made available
in a multitenant environment.
Measured services : The usage of cloud services can be easily measured using different
measuring tools to generate a utility-based bill. Some tools can generate reports of usage
and support auditing and monitoring of services.
Rapid elasticity : The cloud services can be easily, elastically and rapidly provisioned
and released through a self-service portal.
Server Consolidation : Server consolidation in cloud computing is an effective
approach to maximize resource utilization while minimizing energy consumption in a
cloud computing environment. Virtualization technology provides the feature of server
consolidation in cloud computing.
Multi-tenancy : A multi-tenant cloud computing architecture allows customers to share
the same computing resources while operating in separate environments. Each tenant's data is
isolated and remains invisible to other tenants. It provides individualized space to
the users for storing their projects and data.
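The isolation described above can be sketched in a few lines of Python. The class and method names below are purely illustrative, not any real cloud API: one shared store holds every tenant's data, but each tenant can only reach its own namespace.

```python
# A minimal sketch of logical tenant isolation in a multi-tenant store.
# The class name and methods are illustrative, not a real cloud API.

class MultiTenantStore:
    """Shared storage service; each tenant sees only its own data."""

    def __init__(self):
        self._data = {}          # one shared physical store

    def put(self, tenant_id, key, value):
        # Data is namespaced per tenant, so tenants cannot see each other.
        self._data.setdefault(tenant_id, {})[key] = value

    def get(self, tenant_id, key):
        # A tenant can only read from its own namespace.
        return self._data.get(tenant_id, {}).get(key)

store = MultiTenantStore()
store.put("tenant_a", "project", "billing-app")
store.put("tenant_b", "project", "crm-app")
print(store.get("tenant_a", "project"))  # billing-app
print(store.get("tenant_b", "project"))  # crm-app
print(store.get("tenant_a", "secret"))   # None: isolated from tenant_b
```

Real providers enforce this isolation at the hypervisor, network and storage layers rather than in application code, but the principle is the same: shared physical resources, separated logical views.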
Types of Cloud Deployment:
The most common cloud deployments are:
Private cloud: A private cloud is a server, data center, or distributed network wholly dedicated
to one organization.
Public cloud: A public cloud is a service run by an external vendor that may include servers in
one or multiple data centers. Unlike a private cloud, public clouds are shared by multiple
organizations. Using virtual machines, individual servers may be shared by different companies, a
situation that is called "multitenancy" because multiple tenants are renting server space within
the same server.
Hybrid cloud: Hybrid cloud deployments combine public and private clouds, and may even
include on-premises legacy servers. An organization may use its private cloud for some services
and its public cloud for others, or it may use the public cloud as a backup for its private cloud.
Multi cloud: Multi cloud is a type of cloud deployment that involves using multiple public clouds.
The service models are categorized into three basic models:


1) Software-as-a-Service (SaaS)
2) Platform-as-a-Service (PaaS)
3) Infrastructure-as-a-Service (IaaS)

Fig. 1.1 : Cloud service models
1) Software-as-a-Service (SaaS)
• SaaS is also known as 'On-Demand Software'.
• It is a software distribution model. In this model, applications are hosted by a cloud service
provider and made available to customers over the internet.
• In SaaS, the associated data and software are hosted centrally on the cloud server.
• Users can access SaaS using a thin client, typically through a web browser.
• CRM, office suites, email, games, etc. are software applications that are provided as a
service through the Internet.
• Companies like Google and Microsoft provide their applications as a service to end users.
Advantages of SaaS
• SaaS is easy to buy because its pricing is based on a monthly or annual fee, allowing
organizations to access business functionality at a small cost, less than that of licensed
applications.
• SaaS needs less hardware: because the software is hosted remotely, organizations do
not need to invest in additional hardware.
• SaaS requires less maintenance cost and does not require special software or hardware
versions.
Disadvantages of SaaS
• SaaS applications are totally dependent on an Internet connection. They are not usable
without one.
• It is difficult to switch between SaaS vendors.


2) Platform-as-a-Service (PaaS)
• PaaS is a programming platform for developers. This platform is provided for
programmers to create, test, run and manage applications.
• A developer can easily write an application and deploy it directly onto the PaaS layer.
• PaaS provides the runtime environment for application development and deployment tools.
• Google App Engine (GAE), Windows Azure and SalesForce.com are examples of PaaS.
Advantages of PaaS
• Development is easier on PaaS. The developer can concentrate on development and innovation
without worrying about the infrastructure.
• In PaaS, a developer only requires a PC and an Internet connection to start building
applications.
Disadvantages of PaaS
• A developer writes applications for the platform provided by one PaaS vendor; hence
moving the application to another PaaS vendor is a problem (vendor lock-in).
3) Infrastructure-as-a-Service (IaaS)
• IaaS is a way to deliver cloud computing infrastructure such as servers, storage, network and
operating systems.
• Customers can access these resources over the cloud computing platform, i.e. the Internet, as
an on-demand service.
• In IaaS, you rent complete resources rather than purchasing servers, software, datacenter
space or network equipment.
• IaaS was earlier called Hardware as a Service (HaaS). It is a cloud computing platform
based model.
• HaaS differs from IaaS in that users get the bare hardware on which they can deploy
their own infrastructure using the most appropriate software.
Advantages of IaaS
• In IaaS, users can dynamically choose a CPU, memory and storage configuration according to need.
• Users can easily access the vast computing power available on IaaS Cloud platform.
Disadvantages of IaaS
• The IaaS cloud computing platform model depends on the availability of the Internet and of
virtualization services.
Evolution of Cloud Computing
2. Discuss about the evolution of Cloud Computing. (or) Formulate stage-by-stage
evolution of cloud with a neat sketch and formulate any three benefits and drawbacks
achieved by it in the banking and insurance sectors. (Nov/Dec 2021)(May-2022)(Nov/Dec 2022)
Evolution of Cloud Computing:
Cloud computing became very popular in a short span of time, delivering prominent
and unique benefits never seen before. It is therefore important to understand the
evolution of cloud computing. In this section we trace the evolution of cloud
computing with respect to hardware, internet, protocol, computing and processing technologies.
Evolution of Hardware:
a) First-Generation Computers : The first-generation computing hardware included the Harvard
Mark I and Colossus, which were used for solving binary arithmetic. Work in this era, beginning in
the 1930s, became the foundation for programming languages, computer processing and
terminology. Colossus (1943) was an electronic machine built using vacuum tubes and hardwired
circuits, while the Harvard Mark I (1944) was an electromechanical programmable computer;
punched cards and paper tape were used to store data and programs.
b) Second-Generation Computers : The second-generation computing hardware described here,
ENIAC (Electronic Numerical Integrator and Computer), was built in 1946 and was capable of
solving a range of computing problems, performing thousands of calculations per second.
It was composed of thermionic valves (vacuum tubes), relays and hand-wired circuits.
c) Third-Generation Computers : Third-generation computers were produced from 1958 onwards
using integrated circuits (ICs). IBM's landmark mainframe of this era, the IBM
360, gained greater processing and storage capabilities thanks to the integrated circuit. The
minicomputer was also developed in this era. At a later stage, Intel released the first commercial
microprocessor, the Intel 4004, which integrated multiple transistors on a single chip to
perform processing at faster speed.
d) Fourth-Generation Computers : Fourth-generation computers introduced
microprocessors built on single integrated circuits, with random-access memory for
executing millions of instructions per second. In this phase IBM developed the personal
computer in 1981, built around LSI and VLSI microchips.
Evolution of Internet and Protocols
The evolution of the internet began in the 1930s with the concept of the MEMEX for storing book
records and communication. In 1957 the Soviet Union launched the first satellite, which prompted
the creation of the Advanced Research Projects Agency (ARPA) for the US military. The internet
was first realized with the creation of ARPANET, planned in 1967, which used Interface Message
Processors (IMPs) at each site for communication. In ARPANET, host-to-host protocols were
initially used for communication; these evolved into application protocols like FTP and SMTP.


ARPANET introduced the flexible and powerful TCP/IP protocol suite, which is used over
the internet to this day.
The internet protocol's initial version, IPv4, later evolved into the new-generation
IPv6 protocol. Microsoft developed the Windows 95 operating system with an integrated browser
called Internet Explorer, along with support for dial-up TCP/IP connections.
The first web servers, running the HyperText Transfer Protocol (HTTP), appeared in the early
1990s, followed by various scripting-enabled web servers and web browsers.
Evolution of Computing Technologies
✓ A few decades ago, the popular computing technology for processing complex and large
computational problems was "cluster computing".
✓ In a cluster, a group of computers is used to solve a larger computational problem as a single
unit.
✓ It was designed in such a way that the computational load was divided into similar units
of work and allocated across multiple processors, balanced across the several
machines.
✓ Grid computing is a group of interconnected independent computers
intended to solve a common computational problem as a single unit.
✓ Grid computing further evolved into cloud computing, where a centralized entity
like a data center offers different computing services to others, in a model similar
to the grid computing model.
✓ Cloud computing became more popular with the introduction of "virtualization"
technology.
✓ Virtualization is a method of running multiple independent virtual operating
systems on a single physical computer. It saves hardware cost through consolidation of
multiple servers, along with maximum throughput and optimum resource utilization.
Evolution of Processing Technologies
o When computers were initially launched, people used to work with mechanical
devices, vacuum tubes, transistors, etc.
o Then with the advent of Small-Scale Integration (SSI), Medium Scale Integration
(MSI), Large Scale Integration (LSI), and Very Large-Scale Integration (VLSI)
technology, circuits with very small dimension became more reliable and faster.
o This development in hardware technology gave a new dimension to the design of
processors and their peripherals.
o The processing is nothing but the execution of programs, applications or tasks on
one or more computers.


o The two basic approaches to processing are serial and parallel processing. In serial
processing, the given problem or task is broken into a discrete series of instructions.
These instructions are executed on a single processor sequentially. In parallel
processing, programming instructions are executed simultaneously across multiple
processors with the objective of running the program in less time.
o Vector processing was used in certain applications where the data is generated in
the form of vectors or matrices. The next advancement over vector processing was the
development of symmetric multiprocessing systems (SMP).
o As multiprogramming and vector processing systems have the limitation of managing
resources in a master-slave model, symmetric multiprocessing systems were
designed to address that problem.
o SMP systems are intended to achieve sequential consistency, where each
processor is assigned an equal share of OS tasks. These processors are
responsible for managing the workflow of task execution as it passes through the
system.
o Lastly, massively parallel processing (MPP) was developed, with many independent
arithmetic units or microprocessors that run in parallel and are interconnected
to act as a single very large computer.
o Today, massively parallel processor arrays can be implemented on a single
chip, which becomes cost effective due to integrated circuit technology, and they are
mostly used in advanced computing applications such as artificial intelligence.

Underlying Principles of Parallel and Distributed Computing


3. Explain in detail about principles of Parallel and distributed computing with an
example.(or) Illustrate the underlying principles of parallel and distributed
computing.(Nov/Dec 2021)(Nov/Dec 2022)(May-2023)
Elements of parallel computing
What is parallel processing?
o Processing of multiple tasks simultaneously on multiple processors is called parallel
processing. The parallel program consists of multiple active processes (tasks)
simultaneously solving a given problem.
o A given task is divided into multiple subtasks using a divide-and-conquer technique,
and each subtask is processed on a different central processing unit (CPU).
Programming on a multiprocessor system using the divide-and-conquer technique
is called parallel programming.
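The divide-and-conquer scheme described above can be sketched in Python. The chunking strategy and worker count are illustrative assumptions; `ThreadPoolExecutor` is used here for portability, and swapping in `ProcessPoolExecutor` would place each subtask on a separate CPU as the text describes.

```python
# A sketch of divide-and-conquer parallel programming: a large summation
# is split into subtasks, each handled by a separate worker, and the
# partial results are combined at the end.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Divide: break the task into roughly equal subtasks.
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # Conquer: the workers process the subtasks concurrently.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(partial_sum, chunks)
    # Combine the partial results into the final answer.
    return sum(results)

print(parallel_sum(list(range(1, 101))))  # 5050
```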


The development of parallel processing is being influenced by many factors. The prominent
among them include the following:
➢ Computational requirements are ever increasing in the areas of both scientific and business
computing. The technical computing problems, which require high-speed computational
power, are related to life sciences, aerospace, geographical information systems,
mechanical design and analysis, and the like.
➢ Sequential architectures are reaching physical limitations as they are constrained by the
speed of light and thermodynamics laws. The speed at which sequential CPUs can operate
is reaching saturation point (no more vertical growth), and hence an alternative way to get
high computational speed is to connect multiple CPUs (opportunity for horizontal growth).
➢ Hardware improvements in pipelining, superscalar, and the like are non-scalable and
require sophisticated compiler technology. Developing such compiler technology is a
difficult task.
➢ Vector processing works well for certain kinds of problems. It is suitable mostly for
scientific problems (involving lots of matrix operations) and graphical processing. It is not
useful for other areas, such as databases.
➢ The technology of parallel processing is mature and can be exploited commercially; there
is already significant R&D work on development tools and environments.
➢ Significant development in networking technology is paving the way for heterogeneous
computing.
Hardware architectures for parallel processing
The core elements of parallel processing are CPUs. Based on the number of instruction and data
streams that can be processed simultaneously, computing systems are classified into the following
four categories:
• Single-instruction, single-data (SISD) systems
• Single-instruction, multiple-data (SIMD) systems
• Multiple-instruction, single-data (MISD) systems
• Multiple-instruction, multiple-data (MIMD) systems

Fig.: Flynn's Classification for parallel computers
Single-instruction, single-data (SISD) systems
An SISD computing system is a uniprocessor machine capable of executing a single instruction,
which operates on a single data stream (see Figure 2).

In SISD, machine instructions are processed sequentially; hence computers adopting this model
are popularly called sequential computers.

Figure 2: Single-instruction, single-data (SISD) architecture.


Most conventional computers are built using the SISD model. All the instructions and data to be
processed have to be stored in primary memory. The speed of the processing element in the SISD
model is limited by the rate at which the computer can transfer information internally. Dominant
representative SISD systems are IBM PC, Macintosh, and workstations.
Single-instruction, multiple-data (SIMD) systems
An SIMD computing system is a multiprocessor machine capable of executing the same instruction
on all the CPUs but operating on different data streams (see Figure 3). Machines based on an
SIMD model are well suited to scientific computing since they involve lots of vector and matrix
operations. For instance, statements such as
Ci = Ai * Bi
can be passed to all the processing elements (PEs); organized data elements of vectors A and B can
be divided into multiple sets (N-sets for N PE systems); and each PE can process one data set.
Dominant representative SIMD systems are Cray's vector processing machine and Thinking
Machines' Connection Machine (CM-1/CM-2).
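The statement Ci = Ai * Bi can be sketched in plain Python, with each element pair standing in for one processing element. On genuine SIMD hardware the N multiplications would execute in lockstep rather than as a loop; this is only a simulation of the instruction/data relationship.

```python
# A sketch of the SIMD idea: one instruction (multiply) is applied to
# every element pair of vectors A and B, i.e. Ci = Ai * Bi for all i.

A = [1, 2, 3, 4]
B = [10, 20, 30, 40]

# Same instruction, N data items; each pair simulates one PE's work.
C = [a * b for a, b in zip(A, B)]
print(C)  # [10, 40, 90, 160]
```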


Figure 3: Single-instruction, multiple-data (SIMD) architecture.


Multiple-instruction, single-data (MISD) systems
An MISD computing system is a multiprocessor machine capable of executing different
instructions on different PEs, with all of them operating on the same data set (see Figure 4). For
instance, the statements in y = sin(x) + cos(x) + tan(x)

Figure 4: Multiple-instruction, single-data (MISD) architecture.


perform different operations on the same data set. Machines built using the MISD model are not
useful in most applications; a few machines have been built, but none are available
commercially. They became more of an intellectual exercise than a practical configuration.
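The y = sin(x) + cos(x) + tan(x) example above can be simulated as follows: three different instructions run concurrently, all reading the same single data item. This is only a thread-based sketch of the instruction/data relationship, since no commercial MISD machine exists to run it on.

```python
# A sketch of the MISD idea: different instructions (sin, cos, tan)
# execute concurrently, each reading the same shared data item x.
import math
from concurrent.futures import ThreadPoolExecutor

x = 0.5  # the single shared data item

instructions = [math.sin, math.cos, math.tan]

with ThreadPoolExecutor(max_workers=3) as pool:
    # Each simulated "processing element" applies a different
    # instruction to the same x.
    partials = list(pool.map(lambda f: f(x), instructions))

y = sum(partials)  # y = sin(x) + cos(x) + tan(x)
print(round(y, 4))
```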
Multiple-instruction, multiple-data (MIMD) systems
An MIMD computing system is a multiprocessor machine capable of executing multiple
instructions on multiple data sets (see Figure 5). Each PE in the MIMD model has separate
instruction and data streams; hence machines built using this model are well suited to any kind of
application. Unlike SIMD and MISD machines, PEs in MIMD machines work asynchronously. MIMD


machines are broadly categorized into shared-memory MIMD and distributed-memory MIMD
based on the way PEs are coupled to the main memory.

Figure 5: Multiple-instructions, multiple-data (MIMD) architecture


Shared memory MIMD machines
In the shared memory MIMD model, all the PEs are connected to a single global memory and they
all have access to it (see Figure 6). Systems based on this model are also called tightly coupled
multiprocessor systems. The communication between PEs in this model takes place through the
shared memory; modification of the data stored in the global memory by one PE is visible to all
other PEs. Dominant representative shared memory MIMD systems are Silicon Graphics machines
and Sun/IBM’s SMP (Symmetric Multi-Processing).

Figure 6: Shared (left) and distributed (right) memory MIMD architecture.

Distributed memory MIMD machines


➢ In the distributed memory MIMD model, all PEs have a local memory. Systems based on
this model are also called loosely coupled multiprocessor systems.
➢ The communication between PEs in this model takes place through the interconnection
network.
➢ The network connecting PEs can be configured as a tree, mesh, cube, and so on. Each PE
operates asynchronously, and if communication/synchronization among tasks is
necessary, they can do so by exchanging messages.


➢ The shared-memory MIMD architecture is easier to program but is less tolerant of failures
and harder to extend compared with the distributed-memory MIMD model. Failures in a
shared-memory MIMD system affect the entire system, whereas this is not the case in the
distributed model, in which each of the PEs can be easily isolated.
➢ Moreover, shared memory MIMD architectures are less likely to scale because the addition
of more PEs leads to memory contention. This is a situation that does not happen in the
case of distributed memory, in which each PE has its own memory. As a result, distributed
memory MIMD architectures are most popular today.
Shared Memory Architecture for Parallel Computers
An important characteristic of shared memory architecture is that there is more than one
processor and all processors share the same memory with a global address space. The processors
operate independently while sharing the same memory resources. Changes made to a memory
location by one processor are visible to all other processors.
Based upon memory access time, the shared memory is further classified into uniform
memory access (UMA) architecture and non-uniform memory access (NUMA) architecture which
are discussed as follows :
1. Uniform memory access (UMA) : A UMA architecture comprises two or more processors with
identical characteristics. UMA architectures are also called symmetric multiprocessors. The
processors share the same memory and are interconnected by a shared-bus interconnection
scheme such that the memory access time is almost the same for each. The IBM S/390, shown in
Fig. 7(a), is an example of UMA architecture.
2. Non-uniform memory access (NUMA) : This architecture uses one or more
symmetric multiprocessors that are physically linked. A portion of memory is allocated to each
processor, so access to local memory is faster than to remote memory. In this
mechanism, processors do not all get equal access time to the memory connected via the
interconnection network; memory access across the link is always slower. The NUMA
architecture is shown in Fig. 7(b).

Fig. 7


Distributed Memory Architecture for Parallel Computers:


In the distributed memory system, the concept of global memory is not used as each processor
uses its own internal (local) memory for computing.

Fig. 8 : Distributed memory architecture.


Therefore, changes made by one processor in its local memory have no effect on the memory of
other processors, and memory addresses in one processor cannot be mapped to other
processors. Distributed memory systems require a communication network to connect inter-
processor memory, as shown in Fig. 8. The distributed memory architecture is also called
message-passing architecture. The speed and performance of this type of architecture depend
on how the processors are connected.
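The message-passing model described above can be simulated in Python. Two worker threads stand in for two processors: each keeps strictly local state, and the only way state crosses between them is a message over a queue, which stands in for the interconnection network. This is a simulation under those stated assumptions, not real distributed-memory hardware.

```python
# A sketch of message passing between two "processors" with local
# memory only; the queue simulates the interconnection network.
import threading
import queue

def processor_a(outbox):
    local_memory = {"value": 21}          # local to this processor only
    outbox.put(local_memory["value"])     # send a message, not memory

def processor_b(inbox, results):
    received = inbox.get()                # receive the message
    results.append(received * 2)          # compute in its own memory

channel = queue.Queue()                   # the interconnection "network"
results = []

t1 = threading.Thread(target=processor_a, args=(channel,))
t2 = threading.Thread(target=processor_b, args=(channel, results))
t1.start(); t2.start()
t1.join(); t2.join()

print(results)  # [42]
```

Real distributed-memory systems use libraries such as MPI for this exchange, but the principle is identical: no shared address space, only explicit send and receive.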
Cloud Characteristics
4. Explain in detail about characteristics of Cloud Computing. (or) Elaborate the various
cloud characteristics and its benefits. (May-2023)
Cloud Characteristics:
NIST has defined five essential characteristics of cloud computing, which are explained as
follows.
• On-demand self-service : Each consumer can separately provision computing capabilities
such as server time, compute resources, network and storage, as needed, automatically, without
requiring human interaction with the service provider.
• Broad network access : Cloud capabilities are available over the network and are accessed
through standardized network mechanisms by heterogeneous client platforms
such as thin or thick clients, mobile phones, tablets, laptops, and workstations.
• Resource pooling : The cloud service provider’s computing resources are pooled together to
serve multiple consumers using a multi-tenant model, with different physical and virtual
resources. These resources are dynamically assigned and reassigned as per consumer demand.
Examples of resources include storage, processing, memory, and network bandwidth. These
resources are provided in a location-independent manner, where the customer generally has no
control or knowledge over the exact location of the provided resources.


• Rapid elasticity : In the cloud, resource capabilities can be elastically provisioned
and released automatically as per demand; elasticity is required to scale rapidly outward and
inward. To the consumer, the capabilities available for provisioning appear to be unlimited
and can be obtained in any quantity at any time.
• Measured service : Cloud systems automatically control and optimize the resource use by
consumers. They are controlled by leveraging the metering capability at some level of abstraction
appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts).
The cloud system provides a mechanism for measuring the usage of resources for monitoring,
controlling, and billing purposes. Usage is reported, providing transparency for both the
provider and the consumer of the utilized service.
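The metering-to-billing step described above can be sketched as follows. The resource names and per-unit rates are illustrative assumptions, not any real provider's pricing.

```python
# A sketch of the measured-service idea: metered resource usage is
# turned into a utility-style, pay-per-use bill.

RATES = {                      # hypothetical pay-per-use rates
    "storage_gb_hours": 0.01,
    "cpu_hours": 0.05,
    "bandwidth_gb": 0.02,
}

def generate_bill(usage):
    """Turn metered usage into a per-resource bill plus a total."""
    bill = {res: round(qty * RATES[res], 2) for res, qty in usage.items()}
    bill["total"] = round(sum(bill.values()), 2)
    return bill

usage = {"storage_gb_hours": 500, "cpu_hours": 40, "bandwidth_gb": 100}
print(generate_bill(usage))
# {'storage_gb_hours': 5.0, 'cpu_hours': 2.0, 'bandwidth_gb': 2.0, 'total': 9.0}
```

The same metered figures feed the monitoring and auditing reports mentioned above; billing is just one consumer of the metering data.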
Apart from these, some other characteristics of cloud computing are given as
follows :
a) Cloud computing mostly uses open, REST-based APIs (Application Programming
Interfaces) built on web services that are universally available and allow users to access
cloud services through a web browser easily and efficiently.
b) Most of the cloud services are location independent which are provisioned at any time, from
anywhere and on any devices through internet.
c) It provides agility to improve the reuse of Cloud resources.
d) It provides end-user computing, where users have their own control over the resources used
by them, as opposed to control by a centralized IT service.
e) It provides Multi-tenancy environment for sharing a large pool of resources to the users with
additive features like reliability, scalability, elasticity, security etc.
Elasticity in Cloud
5. Explain in detail about Elasticity in cloud computing with an example. (Nov/Dec
2020) (or) Describe in detail about Elasticity in cloud and on-demand
provisioning. (May-2022)
Elasticity in Cloud
Cloud computing has one important characteristic called
"elasticity". Elasticity is very important for mission-critical or business-critical applications,
where any compromise in performance may lead to huge business loss. Elasticity comes
into play when additional resources must be provisioned for such an application to meet
its performance requirements and demands.
It works in such a way that when the number of user accesses increases, applications are
automatically provisioned with extra computing, storage and network resources such as CPU,
memory, storage or bandwidth, and when fewer users are active these are automatically
decreased as per requirement.


Elasticity in the cloud is a popular feature associated with scale-out solutions (horizontal scaling), which allow resources to be dynamically added or removed when needed. It is generally associated with public cloud resources and is commonly featured in pay-per-use or pay-as-you-go services.
Elasticity is the ability to grow or shrink infrastructure resources (such as compute, storage, or network) dynamically as needed to adapt to workload changes in applications, in an autonomic manner.
It enables maximum resource utilization, which results in overall savings in infrastructure costs. Depending on the environment, elasticity is applied to resources in the infrastructure, not limited to hardware, software, connectivity, QoS, and other policies. Elasticity depends completely on the environment; sometimes it can become a negative trait for certain applications that must have guaranteed performance.
Elasticity is widely used in IT organizations: during peak hours, when all employees are working on the cloud (say, between 9 AM and 9 PM), resources are scaled up to the highest mark, while during non-peak hours, when few employees are working (between 9 PM and 9 AM), resources are scaled down to the lowest mark. A discrete bill is generated for low usage and high usage, which saves huge costs. Another example of elasticity is the Indian Railways train booking service, IRCTC. Earlier, during the tatkal booking period, the website used to crash because the servers were unable to handle too many user requests for booking tickets at a specific time. Nowadays this does not happen, because the elasticity provided by the cloud automatically scales the infrastructure resources up during the tatkal booking period as user requests grow, so the website never stops in between, and scales them down when fewer users are present. This provides great flexibility and reliability for customers using the service.

Figure 1.5.1: Two different Approaches for Measuring Provisioning Times


Benefits/Pros of Elastic Cloud Computing
Elastic cloud computing has numerous advantages. Some of them are as follows:

• Cost Efficiency: Cloud is available at much cheaper rates than traditional approaches and can significantly lower overall IT expenses. By using cloud solutions, companies can save on licensing fees as well as eliminate overhead charges such as the cost of data storage, software updates, management, etc.
• Convenience and Continuous Availability: Cloud makes it easier to access shared documents and files, with view and modify options. Public clouds also offer services that are available wherever the end user might be located. Moreover, continuous availability of resources is guaranteed; in case of a system failure, alternative instances are automatically spawned on other machines.
• Backup and Recovery: The process of backing up and recovering data is simplified, since information resides in the cloud and not on a physical device. Various cloud providers offer reliable and flexible backup/recovery solutions.
• Cloud is Environmentally Friendly: The cloud is more efficient than typical IT infrastructure and takes fewer resources to compute, thus saving energy.
• Scalability and Performance: Scalability is a built-in feature of cloud deployments. Cloud instances are deployed automatically only when needed, and as a result enhance performance with excellent computation speed.
• Increased Storage Capacity: The cloud can accommodate and store much more data than a personal computer and, in a way, offers almost unlimited storage capacity.
Disadvantages/Cons of Elastic Cloud Computing:
• Security and Privacy in the Cloud: Security is the biggest concern in cloud computing. Companies essentially hand over their private data and information to the cloud; since remote cloud infrastructure is used, it is then up to the cloud service provider to manage, protect, and keep the data confidential.
• Limited Control: Since applications and services run remotely in third-party virtual environments, companies and users have limited control over the function and execution of the hardware and software.
• Dependency and Vendor Lock-in: One of the major drawbacks of cloud computing is the implicit dependency on the provider, also called "vendor lock-in", as it becomes difficult to migrate vast amounts of data from an old provider to a new one. So, it is advisable to select the vendor very carefully.
• Increased Vulnerability: Cloud-based solutions are exposed on the public internet and are therefore a more vulnerable target for malicious users and hackers. As we know, nothing is completely secure over the internet; even the biggest organizations suffer from serious attacks and security breaches.

Resource Provisioning
6. Explain in detail the Resource Provisioning and its methods in cloud computing?
Providers supply cloud services by signing SLAs with end users. The SLAs must commit sufficient resources, such as CPU, memory, and bandwidth, that the user can use for a preset period. Under-provisioning of resources leads to broken SLAs and penalties. Over-provisioning of resources leads to resource underutilization and, consequently, a decrease in revenue for the provider. Deploying an autonomous system to efficiently provision resources to users is a challenging problem. The difficulty comes from the unpredictability of consumer demand, software and hardware failures, heterogeneity of services, power management, and conflicts in signed SLAs between consumers and service providers.
Efficient VM provisioning depends on the cloud architecture and management of cloud
infrastructures. Resource provisioning schemes also demand fast discovery of services and data
in cloud computing infrastructures. In a virtualized cluster of servers, this demands efficient
installation of VMs, live VM migration, and fast recovery from failures. To deploy VMs, users treat
them as physical hosts with customized operating systems for specific applications. For example,
Amazon’s EC2 uses Xen as the virtual machine monitor (VMM). The same VMM is used in IBM’s Blue Cloud.

In the EC2 platform, some predefined VM templates are also provided. Users can choose
different kinds of VMs from the templates. IBM’s Blue Cloud does not provide any VM templates.
In general, any type of VM can run on top of Xen. Microsoft also applies virtualization in its Azure
cloud platform.
The provider should offer resource-economic services. Power-efficient schemes for
caching, query processing, and thermal management are mandatory due to increasing energy
waste by heat dissipation from data centers. Public or private clouds promise to streamline the
on-demand provisioning of software, hardware, and data as a service, achieving economies of
scale in IT deployment and operation.
Provisioning Methods
(a) Overprovisioning with the peak load causes heavy resource waste (shaded area).
(b) Underprovisioning (along the capacity line) of resources results in losses by both user and provider, in that paid demand by the users (the shaded area above the capacity) is not served, while wasted resources still exist below the provisioned capacity.
(c) Constant provisioning of resources with fixed capacity to a declining user demand could result in even worse resource waste. The user may give up the service by canceling the demand,

resulting in reduced revenue for the provider. Both the user and provider may be losers in
resource provisioning without elasticity.

Three resource-provisioning methods are presented in the following sections.


The demand-driven method provides static resources and has been used in grid computing for many years.
The event-driven method is based on predicted workload by time.
The popularity-driven method is based on monitored Internet traffic. These resource-provisioning methods are characterized as follows (see Figure 4.25).

Demand-Driven Resource Provisioning (On-Demand Resource Provisioning)


This method adds or removes computing instances based on the current utilization level of
the allocated resources.
The demand-driven method automatically allocates two Xeon processors for the user
application, when the user was using one Xeon processor more than 60 percent of the time for an
extended period. In general, when a resource has surpassed a threshold for a certain amount of

time, the scheme increases that resource based on demand. When a resource is below a threshold
for a certain amount of time, that resource could be decreased accordingly. Amazon implements
such an auto-scale feature in its EC2 platform. This method is easy to implement. The scheme does
not work out right if the workload changes abruptly.
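The threshold rule described above can be sketched as a small control loop. The 60 percent scale-up threshold follows the text; the scale-down threshold, the three-sample window, and the `autoscale()` helper itself are illustrative assumptions, not the actual EC2 auto-scaling implementation:

```python
# Sketch of demand-driven (threshold-based) auto-scaling. The 60%/30%
# thresholds and the 3-sample window are illustrative assumptions.

SCALE_UP_THRESHOLD = 0.60
SCALE_DOWN_THRESHOLD = 0.30
WINDOW = 3  # consecutive samples the threshold must be breached

def autoscale(utilization_samples, instances):
    """Walk through utilization samples; add or remove one instance
    whenever a threshold is breached for WINDOW samples in a row."""
    high = low = 0
    for u in utilization_samples:
        high = high + 1 if u > SCALE_UP_THRESHOLD else 0
        low = low + 1 if u < SCALE_DOWN_THRESHOLD else 0
        if high >= WINDOW:
            instances += 1      # sustained high utilization: scale up
            high = 0
        elif low >= WINDOW and instances > 1:
            instances -= 1      # sustained low utilization: scale down
            low = 0
    return instances

# Sustained high load adds capacity; sustained low load releases it.
print(autoscale([0.7, 0.8, 0.9, 0.2, 0.1, 0.1], instances=2))
```

Because the rule waits for a sustained breach rather than reacting to a single sample, it avoids oscillating on brief spikes, which is also why it copes poorly when the workload changes abruptly.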
The x-axis in Figure 4.25 is the time scale in milliseconds. In the beginning, heavy
fluctuations of CPU load are encountered. All three methods have demanded a few VM instances
initially. Gradually, the utilization rate becomes more stabilized with a maximum of 20 VMs (100
percent utilization) provided for demand-driven provisioning in Figure 4.25(a). However, the
event-driven method reaches a stable peak of 17 VMs toward the end of the event and drops
quickly in Figure 4.25(b). The popularity provisioning shown in Figure 4.25(c) leads to a similar
fluctuation with peak VM utilization in the middle of the plot.
Event-Driven Resource Provisioning
This scheme adds or removes machine instances based on a specific time event. The scheme works
better for seasonal or predicted events such as Christmastime in the West and the Lunar New Year
in the East. During these events, the number of users grows before the event period and then
decreases during the event period. This scheme anticipates peak traffic before it happens. The
method results in a minimal loss of QoS, if the event is predicted correctly. Otherwise, wasted
resources are even greater due to events that do not follow a fixed pattern.
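A minimal sketch of this scheme follows; the event dates, instance counts, and ramp-up window are made-up examples, not values from the source:

```python
# Sketch of event-driven provisioning: capacity is raised ahead of known
# calendar events. Dates and instance counts below are illustrative.
from datetime import date, timedelta

BASELINE = 4
EVENTS = {  # event date -> instances to hold during the ramp-up window
    date(2024, 12, 25): 20,   # Christmas
    date(2024, 2, 10): 16,    # Lunar New Year
}
RAMP_UP = timedelta(days=7)   # start scaling a week before the event

def planned_capacity(today):
    """Return the planned instance count for a given day."""
    for event_day, peak in EVENTS.items():
        if event_day - RAMP_UP <= today <= event_day:
            return peak
    return BASELINE

print(planned_capacity(date(2024, 12, 20)))  # inside ramp-up window -> 20
print(planned_capacity(date(2024, 6, 1)))    # ordinary day -> 4
```

If the event calendar is accurate, capacity is in place before traffic arrives; if an event does not follow the expected pattern, the pre-provisioned capacity is wasted, exactly as noted above.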
Popularity-Driven Resource Provisioning
In this method, Internet searches are monitored for the popularity of certain applications, and instances are created based on popularity demand. The scheme anticipates increased traffic with popularity. Again,
the scheme has a minimal loss of QoS, if the predicted popularity is correct. Resources may be
wasted if traffic does not occur as expected. In Figure 4.25(c), EC2 performance by CPU utilization
rate (the dark curve with the percentage scale shown on the left) is plotted against the number of
VMs provisioned (the light curves with scale shown on the right, with a maximum of 20 VMs
provisioned).
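Popularity-driven provisioning can be sketched as a simple mapping from a monitored popularity score to a VM count. The 0-100 score and the linear mapping below are illustrative assumptions:

```python
# Sketch of popularity-driven provisioning: the VM count follows a
# monitored popularity score (e.g. a 0-100 search-trend index). The
# mapping is an illustrative assumption.

MIN_VMS, MAX_VMS = 2, 20

def vms_for_popularity(score):
    """Map a 0-100 popularity score linearly onto the allowed VM range."""
    score = max(0, min(100, score))           # clamp out-of-range scores
    return MIN_VMS + round((MAX_VMS - MIN_VMS) * score / 100)

print(vms_for_popularity(0))    # no interest        -> 2
print(vms_for_popularity(50))   # moderate interest  -> 11
print(vms_for_popularity(100))  # peak popularity    -> 20
```

As with the event-driven scheme, the provisioned capacity is only as good as the prediction: resources are wasted if the anticipated traffic never materializes.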
Dynamic Resource Deployment
The cloud uses VMs as building blocks to create an execution environment across multiple
resource sites. The InterGrid-managed infrastructure was developed by a Melbourne University
group. Dynamic resource deployment can be implemented to achieve scalability in performance.
The InterGrid is a Java-implemented software system that lets users create execution cloud
environments on top of all participating grid resources. Peering arrangements established
between gateways enable the allocation of resources from multiple grids to establish the execution
environment. In Figure 4.26, a scenario is illustrated by which an intergrid gateway (IGG) allocates
resources from a local cluster to deploy applications in three steps:
(1) Requesting the VMs,

(2) Enacting the leases, and


(3) Deploying the VMs as requested. Under peak demand, this IGG interacts with another IGG that
can allocate resources from a cloud computing provider.

A grid has predefined peering arrangements with other grids, which the IGG manages. Through multiple IGGs, the system coordinates the use of InterGrid resources. An IGG is aware of the peering terms with other grids, selects suitable grids that can provide the required resources, and replies to requests from other IGGs. Request redirection policies determine which peering grid InterGrid selects to process a request and the price for which that grid will perform the task.
An IGG can also allocate resources from a cloud provider. The cloud system creates a virtual
environment to help users deploy their applications. These applications use the distributed grid
resources. The InterGrid allocates and provides a distributed virtual environment (DVE). This is a
virtual cluster of VMs that runs isolated from other virtual clusters.
A component called the DVE manager performs resource allocation and management on
behalf of specific user applications. The core component of the IGG is a scheduler for implementing
provisioning policies and peering with other gateways. The communication component provides
an asynchronous message-passing mechanism. Received messages are handled in parallel by a
thread pool.
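The communication pattern described here, asynchronous message passing with received messages handled in parallel by a thread pool, can be sketched as follows (the queue, handler, and message names are illustrative, not the actual IGG code):

```python
# Sketch of asynchronous message passing with a thread pool: a receiver
# drains an inbox queue and hands each message to a pool worker. The
# messages and handler are illustrative stand-ins for the IGG's real logic.
from concurrent.futures import ThreadPoolExecutor
from queue import Queue

inbox = Queue()

def handle(message):
    # Stand-in for the gateway's real request processing.
    return f"processed {message}"

def receiver(pool, results):
    """Drain the inbox; each message is handled in parallel by the pool."""
    while True:
        msg = inbox.get()
        if msg is None:          # sentinel: stop receiving
            break
        results.append(pool.submit(handle, msg))

with ThreadPoolExecutor(max_workers=4) as pool:
    results = []
    for m in ["vm-request-1", "vm-request-2", "lease-renewal"]:
        inbox.put(m)
    inbox.put(None)
    receiver(pool, results)
    print([f.result() for f in results])
```

The sender never waits for a message to be processed; it only enqueues, which is what makes the message passing asynchronous.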
Provisioning of Storage Resources: Data in cloud computing is stored in the clusters of the cloud provider and can be accessed from anywhere in the world, for example, email. For data storage, a distributed file system, a tree-structured file system, and others can be used, for example, GFS, HDFS, and MS Cosmos. This provides a convenient coding platform for developers. The storage methodologies and their features can be found in Table 4.8.

7. Explain in detail about Challenges in Security, Data Lock-in, and Standardization. (Nov/Dec 2020)

Cloud computing, an emergent technology, has raised many challenges in different aspects of data and information handling. Some of these are discussed below:

Security and Privacy

Security and Privacy of information is the biggest challenge to cloud computing. Security and
privacy issues can be overcome by employing encryption, security hardware and security
applications.

Portability

This is another challenge to cloud computing that applications should easily be migrated from
one cloud provider to another. There must not be vendor lock-in. However, it is not yet made
possible because each of the cloud provider uses different standard languages for their platforms.

Interoperability

It means the application on one platform should be able to incorporate services from the other
platforms. It is made possible via web services, but developing such web services is very complex.

Computing Performance

Data-intensive applications on the cloud require high network bandwidth, which results in high cost. Low bandwidth does not meet the desired computing performance of cloud applications.

Reliability and Availability

It is necessary for cloud systems to be reliable and robust because most businesses are now becoming dependent on services provided by third parties.

Challenges in Cloud Computing:

Some of the challenges in cloud computing are explained as follows:

Data Protection:

Data protection is a crucial element of security that warrants scrutiny. In the cloud, data is stored in remote data centers and managed by third-party vendors, so there is a fear of losing confidential data. Therefore, various cryptographic techniques have to be implemented to protect confidential data.

Data Recovery and Availability:

In the cloud, a user’s data is scattered across multiple data centers, so recovering such data is very difficult because the user never knows the exact location of the data or how to recover it. The availability of cloud services is closely tied to the downtime of the services, which is specified in an agreement called the Service Level Agreement (SLA). Therefore, any compromise in the SLA may lead to increased downtime with less availability and may harm business productivity.

Regulatory and Compliance Restrictions:

Many countries have compliance restrictions and regulations on the usage of cloud services. Government regulations in such countries do not allow providers to share customers' personal information and other sensitive information outside the state or country. In order to meet such requirements, cloud providers need to set up a data center or a storage site exclusively within that country to comply with regulations.

Management Capabilities:

The involvement of multiple cloud providers for in-house services may lead to difficulties in management.

Interoperability and Compatibility Issue:

Organizations hosting services should have the freedom to migrate those services into or out of the cloud, which is very difficult in public clouds. Compatibility issues arise when an organization wants to change its service provider. Most public clouds provide vendor-dependent APIs for access and may have their own proprietary solutions that are not compatible with other providers.

Part-A
1. Define Cloud Computing.(Nov/Dec 2021)
Cloud computing.

According to NIST, Cloud computing is a model for enabling ubiquitous, convenient, on-
demand network access to a shared pool of configurable computing resources (e.g., networks,
servers, storage, applications, and services) that can be rapidly provisioned and released
with minimal management effort or service provider interaction.
• Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user.
• Cloud computing allows you to set up a virtual office, giving you the flexibility of connecting to your business anywhere, any time.
• Moving to cloud computing may reduce the cost of managing and maintaining your IT systems, rather than purchasing expensive systems and equipment for your business.
2. Enlist the pros and cons of cloud computing. Dec.-19
The pros and cons of cloud computing are
Pros of Cloud computing
• Improved accessibility
• Optimum Resource Utilization
• Scalability and Speed
• Minimizes software licensing costs
• On-demand self-service
• Broad network access
• Resource pooling
• Rapid elasticity
Cons of Cloud computing
• Security
• Privacy and Trust
• Vendor lock-in
• Service Quality
• Cloud migration issues

• Data Protection
• Data Recovery and Availability
• Regulatory and Compliance Restrictions
• Management Capabilities
• Interoperability and Compatibility Issue.
3. What are the different deployment model of cloud computing? (May-2022)
Various deployment model of cloud computing are
❖ Public Cloud
❖ Private Cloud
❖ Hybrid Cloud
❖ Community Cloud
4. List the Characteristics of Cloud computing?
Cloud computing has some interesting characteristics that bring benefits to both cloud
service consumers (CSCs) and cloud service providers (CSPs). These characteristics are
• No up-front commitments
• On-demand access
• Nice pricing
• Simplified application acceleration and scalability
• Efficient resource allocation
• Energy efficiency
• Seamless creation and use of third-party services
5. Write short notes on Public cloud?
Public cloud:
➢ Services and infrastructure are hosted on the premises of the cloud provider and are provisioned for open use by the general public.
➢ End users can access the services via a public network like the internet.
6. Write short notes on Private cloud?
Private cloud:
➢ Private clouds are designed and maintained by a single enterprise to meet the specific needs of that enterprise.
➢ A private cloud is a structure built entirely for a single business, with cloud solutions hosted either on-site or in a specific service provider’s data center.
7. Write short notes on Hybrid cloud?
Hybrid cloud:
➢ Hybrid cloud computing is an environment that combines public clouds and private
clouds by allowing data and applications to be shared between them.

➢ A hybrid cloud is ideal for scalability, flexibility, and security.


➢ A perfect example of this scenario would be an organization that uses the private cloud to secure its data and interacts with its customers using the public cloud.
8. Write short notes on Community cloud?
Community cloud
➢ Community cloud is a cloud infrastructure that allows systems and services to be accessible
by a group of several organizations to share the information.
➢ It is a mutually shared model between organizations that belong to a particular community
such as banks, government organizations, or commercial enterprises.
9. What is parallel computing?
➢ Processing of multiple tasks simultaneously on multiple processors is called parallel
processing. A given task is divided into multiple subtasks using a divide-and-conquer
technique, and each subtask is processed on a different central processing unit (CPU).
10. What are the different hardware architectures for parallel processing?
• Single-instruction, single-data (SISD) systems
• Single-instruction, multiple-data (SIMD) systems
• Multiple-instruction, single-data (MISD) systems
• Multiple-instruction, multiple-data (MIMD) systems
11. What are different levels of parallelism?
Levels of parallelism are decided based on the lumps of code that are potential candidates for parallelism. All these approaches have a common goal: to boost processor efficiency by hiding latency.
• Large grain (or task level)
• Medium grain (or control level)
• Fine grain (data level)
• Very fine grain (multiple-instruction issue)
12. What is distributed computing?
A distributed system is a collection of independent computers that appears to its users as
a single coherent system. A distributed system is one in which components located at
networked computers communicate and coordinate their actions only by passing messages.
13. What are the two architectural styles for distributed computing?
The architectural styles for distributed computing divided into two major classes:
➢ Software architectural styles
➢ System architectural styles.
14. Define Component and connectors .

A component represents a unit of software that encapsulates a function or a feature of the system. Examples of components can be programs, objects, processes, pipes, and filters.
A connector is a communication mechanism that allows cooperation and coordination among components.
15. What are the types of Cloud service model?
• Infrastructure as a Service
• Platform as a Service
• Software as a Service
16. What is elasticity in cloud computing? (or) What is Cloud Elasticity? (May-2023)
In cloud computing, elasticity is defined as "the degree to which a system is able to adapt
to workload changes by provisioning and de-provisioning resources in an autonomic
manner, such that at each point in time the available resources match the current demand
as closely as possible". The dynamic adaptation of capacity, e.g., by altering the use of
computing resources, to meet a varying workload is called "elastic computing".
17. What is the working principle of Cloud Computing?
The cloud is a collection of computers and servers that are publicly accessible via the
Internet. This hardware is typically owned and operated by a third party on a consolidated
basis in one or more data center locations. The machines can run any combination of
operating systems.
18. What are the advantages of cloud services?
✓ If the user’s PC crashes, the hosted application and document both remain unaffected in the cloud.
✓ An individual user can access applications and documents from any location on any PC. Because documents are hosted in the cloud, multiple users can collaborate on the same document in real time, using any available internet connection. Documents are not machine-centric.
19. List the companies who offer cloud service development?
• Amazon
• Google App Engine
• IBM
• Salesforce.com
20.What is the working principle of Cloud Computing?
The cloud is a collection of computers and servers that are publicly accessible via the
Internet. This hardware is typically owned and operated by a third party on a consolidated

basis in one or more data center locations. The machines can run any combination of
operating systems.
21. What is Infrastructure as a Service (IaaS)?
✓ This model puts together infrastructures demanded by users—namely servers, storage,
networks, and the data center fabric.
✓ The user can deploy and run on multiple VMs running guest OSes on specific applications.
✓ The user does not manage or control the underlying cloud infrastructure, but can specify
when to request and release the needed resources.
22. Bring out the differences between private cloud and public cloud?

23. What is Platform as a Service (PaaS)?


✓ This model enables the user to deploy user-built applications onto a virtualized cloud
platform. PaaS includes middleware, databases, development tools, and some runtime
support such as Web 2.0 and Java.
✓ The platform includes both hardware and software integrated with specific programming
interfaces. The provider supplies the API and software tools (e.g., Java, Python, Web 2.0,
.NET). The user is freed from managing the cloud infrastructure.
24. What is Software as a Service (SaaS)?
✓ This refers to browser-initiated application software over thousands of paid cloud
customers. The SaaS model applies to business processes, industry applications, consumer
relationship management (CRM), enterprise resources planning (ERP), human resources
(HR), and collaborative applications.
✓ On the customer side, there is no upfront investment in servers or software licensing. On the
provider side, costs are rather low, compared with conventional hosting of user applications
25. What is cloud service management?
Cloud Service Management includes all of the service-related functions that are necessary for
the management and operation of those services required by or proposed to cloud
consumers.
26. What is on-demand cloud computing? (Nov/Dec 2022)

On-demand (OD) computing is an increasingly popular enterprise model in which computing resources are made available to the user as needed. The resources may be maintained within the user's enterprise or made available by a service provider.
27. What is Cloud Scalability?
Cloud scalability is the ability of the system’s infrastructure to handle growing workload requirements while retaining consistent performance.
28. What is the difference between Cloud Elasticity and Cloud Scalability?(Nov/Dec
2022)
Cloud Elasticity: Cloud elasticity is a tactical resource allocation operation. It provides the necessary resources required for the current task and handles varying loads for short periods, for example, running a sentiment analysis algorithm, doing database backups, or just taking on user traffic surges on a website.
Cloud Scalability: Cloud scalability is a strategic resource allocation operation. Scalability handles the increase and decrease of resources according to the system's workload demands.

29. What are the main benefits of both scalability and elasticity?
Cost-effectiveness: Cloud scalability and cloud elasticity features constitute an effective resource management strategy:
• The pay-per-use model makes cloud elasticity the proper answer for sudden surges of workload demand (vital for streaming services and marketplaces);
• The pay-as-you-expand model allows planning out gradual growth of the infrastructure in sync with growing requirements (especially handy for ad-tech systems).
Consistent performance: Scalability and elasticity features operate resources in a way that keeps the system's performance smooth, both for operators and customers.
Service availability: Scalability enables stable growth of the system, while elasticity tackles immediate resource demands.
30. What are the types of cloud scalability?
There are several types of cloud scalability:
✓ Vertical, aka Scale-Up: the ability to handle an increasing workload by adding resources to the existing infrastructure. It is a short-term solution to cover immediate needs.
✓ Horizontal, aka Scale-Out: the expansion of the existing infrastructure with new elements to tackle more significant workload requirements. It is a long-term solution aimed at covering present and future resource demands, with room for expansion.
✓ Diagonal: a more flexible solution that combines the addition and removal of resources according to current workload requirements. It is the most cost-effective scalability solution by far.
31. What are the three cases of static cloud resource provisioning policies?
❖ Over-provisioning with the peak load causes heavy resource waste (shaded area).
❖ Under-provisioning (along the capacity line) of resources results in losses by both user and provider, in that paid demand by the users (the shaded area above the capacity) is not served, while wasted resources still exist below the provisioned capacity.
❖ Constant provisioning of resources with fixed capacity to a declining user demand could result in even worse resource waste.
32. What are the three resource-provisioning methods?(Nov/Dec 2022)
Three resource-provisioning methods are presented in the following sections.
✓ Demand-driven method provides static resources and has been used in grid computing for
many years.
✓ Event-driven method is based on predicted workload by time.
✓ Popularity-driven method is based on Internet traffic monitored
33. List the main characteristics of cloud computing?
• Resource pooling
• On-demand self-service
• Easy maintenance
• Scalability and rapid elasticity
• Economical
• Measured and reporting service
• Security
• Automation

34. Illustrate the virtual appliances in cloud computing.


A virtual appliance is a pre-installed and pre-configured software solution on one or more virtual machines that is optimized for a specific function. Virtual appliances play key roles in the quick provisioning of operating systems and applications in PaaS and SaaS cloud delivery models.

35. Depict the importance of on-demand provisioning in e-commerce applications. (Nov/Dec 2021)
Online shopping
Mobile and web application
Online booking
Online purchasing
E banking
Finance
Manufacturing
Faster payment
One to one marketing
Mobility.
36. Differentiate between private and public Cloud.(May-2022)
1. Public cloud: When the computing infrastructure and resources are shared with the public via the internet, it is known as a public cloud. Private cloud: When the computing infrastructure and resources are shared within a private network, it is known as a private cloud.
2. Public cloud: A public cloud is multi-tenant, and the network is managed by the service provider. Private cloud: A private cloud is single-tenant, and the network is handled by the in-house team.
3. Public cloud: The data of several enterprises is stored. Private cloud: The data of a single enterprise is stored.
4. Public cloud: It supports activity performed over the public network or internet. Private cloud: It supports activity performed over the private network.
5. Public cloud: Scalability is high. Private cloud: Scalability is limited.
6. Public cloud: Reliability is moderate. Private cloud: Reliability is high.
7. Public cloud: Security depends on the service provider. Private cloud: It delivers a high class of security.
8. Public cloud: It is affordable compared to the private cloud. Private cloud: It is expensive compared to the public cloud.
9. Public cloud: Performance is low to medium. Private cloud: Performance is high.

10. Public cloud: It uses shared servers. Private cloud: It uses dedicated servers.

37. Difference between Distributed computing, Grid computing and Cloud computing
Feature: Distributed Computing / Grid Computing / Cloud Computing
• Computing architecture: client-server and peer-to-peer / distributed computing / client-server computing
• Scalability: low to moderate / low to moderate / high
• Flexibility: moderate / less / more
• Management: decentralized / decentralized / centralized
• Owned and managed by: organizations / organizations / cloud service providers
• Provisioning: application and service oriented / application oriented / service oriented
• Accessibility: through communication protocols like RPC, MoM, IPC, RMI / through grid middleware / through standard web protocols
• Resource allocation: pre-reserved / pre-reserved / on-demand

38. State the advantages of on-demand provisioning in cloud. (May-2023)
• Flexibility to meet fluctuating demands: users can quickly increase or decrease their computing resources as needed, either short-term or long-term.
• Removes the need to purchase, maintain, and upgrade hardware.
