CS8791-CC Unit-I
UNIT I
INTRODUCTION
Introduction to Cloud Computing – Definition of Cloud – Evolution of Cloud Computing –
Underlying Principles of Parallel and Distributed Computing – Cloud Characteristics –
Elasticity in Cloud – On-demand Provisioning.
Less Capital Expenditure : There is no need to spend heavily on hardware, software or
licensing fees, so capital expenditure is very low.
On-demand self-service : The cloud provides automated provisioning of services on demand
through self-service websites called portals.
Broad network access : The cloud services and resources are provided through a location
independent broad network using standardized methods.
Resource pooling : The cloud service provider aggregates resources into a resource
pool from which users can fulfill their requirements, and the pool can easily be made
available to a multi-tenant environment.
Measured services : The usage of cloud services can be easily measured using various
metering tools to generate a utility-based bill. These tools can also be used to generate
reports for usage, auditing and monitoring of services.
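A minimal sketch of how such a utility-based bill could be computed from metered usage; the resource names and per-unit rates below are illustrative assumptions, not any provider's real prices.

```python
# Sketch of utility-based billing from metered usage (illustrative rates only).
ILLUSTRATIVE_RATES = {           # assumed per-unit prices, not a real provider's
    "cpu_hours": 0.05,           # $ per CPU-hour
    "storage_gb_month": 0.02,    # $ per GB-month of storage
    "bandwidth_gb": 0.01,        # $ per GB transferred
}

def utility_bill(usage: dict) -> float:
    """Multiply each metered quantity by its rate and sum, like a utility meter."""
    return round(sum(ILLUSTRATIVE_RATES[k] * v for k, v in usage.items()), 2)

monthly_usage = {"cpu_hours": 720, "storage_gb_month": 100, "bandwidth_gb": 50}
print(utility_bill(monthly_usage))  # 720*0.05 + 100*0.02 + 50*0.01 = 38.5
```

The same metered quantities could equally feed a usage report or audit log; billing is just one consumer of the meter.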
Rapid elasticity : The cloud services can be easily, elastically and rapidly provisioned
and released through a self-service portal.
Server Consolidation : Server consolidation in cloud computing is an effective
approach to maximizing resource utilization while minimizing energy consumption in a
cloud computing environment. Virtualization technology provides the server consolidation
feature in cloud computing.
Multi-tenancy : A multi-tenant cloud computing architecture allows customers to share the
same computing resources while remaining in separate environments. Each tenant's data is
isolated and remains invisible to other tenants. It provides individualized space to
the users for storing their projects and data.
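The data isolation described above can be sketched as a store that partitions every record by tenant ID, so one tenant's queries can never see another tenant's records; the class and record names are hypothetical.

```python
# Minimal sketch of multi-tenant data isolation: every record is partitioned
# by tenant ID, so a query from one tenant never sees another tenant's rows.
class TenantStore:
    def __init__(self):
        self._data = {}                      # tenant_id -> list of records

    def put(self, tenant_id: str, record: dict) -> None:
        self._data.setdefault(tenant_id, []).append(record)

    def query(self, tenant_id: str) -> list:
        # A tenant only ever receives its own partition; others stay invisible.
        return list(self._data.get(tenant_id, []))

store = TenantStore()
store.put("tenant_a", {"doc": "report.pdf"})
store.put("tenant_b", {"doc": "invoice.xls"})
print(store.query("tenant_a"))   # [{'doc': 'report.pdf'}] -- tenant_b stays hidden
```

Real clouds enforce the same partitioning at the hypervisor, network and storage layers, not just in application code.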
Types of Cloud Deployment:
The most common cloud deployments are:
Private cloud: A private cloud is a server, data center, or distributed network wholly dedicated
to one organization.
Public cloud: A public cloud is a service run by an external vendor that may include servers in
one or multiple data centers. Unlike a private cloud, public clouds are shared by multiple
organizations. Using virtual machines, individual servers may be shared by different companies, a
situation that is called "multitenancy" because multiple tenants are renting server space within
the same server.
Hybrid cloud: Hybrid cloud deployments combine public and private clouds, and may even
include on-premises legacy servers. An organization may use its private cloud for some services
and the public cloud for others, or it may use the public cloud as a backup for its private cloud.
Multi-cloud: A multi-cloud deployment involves using multiple public clouds.
The service models are categorized into three basic models:
1) Software-as-a-Service (SaaS)
2) Platform-as-a-Service (PaaS)
3) Infrastructure-as-a-Service (IaaS)
Fig. 1.1
1) Software-as-a-Service (SaaS)
• SaaS is known as 'On-Demand Software'.
• It is a software distribution model. In this model, the applications are hosted by a cloud service
provider and made available to customers over the Internet.
• In SaaS, associated data and software are hosted centrally on the cloud server.
• User can access SaaS by using a thin client through a web browser.
• CRM, Office Suite, Email, games, etc. are the software applications which are provided as a
service through Internet.
• The companies like Google, Microsoft provide their applications as a service to the end users.
Advantages of SaaS
• SaaS is easy to buy because its pricing is based on a monthly or annual fee, allowing
organizations to access business functionality at a small cost, which is less than that of
licensed applications.
• SaaS needs less hardware: because the software is hosted remotely, organizations do
not need to invest in additional hardware.
• SaaS requires less maintenance cost and does not require special software or hardware
versions.
Disadvantages of SaaS
• SaaS applications are totally dependent on Internet connection. They are not usable without
Internet connection.
• It is difficult to switch amongst the SaaS vendors.
2) Platform-as-a-Service (PaaS)
• PaaS is a programming platform for developers. This platform is provided to
programmers to create, test, run and manage applications.
• A developer can easily write an application and deploy it directly into the PaaS layer.
• PaaS gives the runtime environment for application development and deployment tools.
• Google App Engine (GAE), Windows Azure and SalesForce.com are examples of PaaS.
Advantages of PaaS
• Development is easier with PaaS. Developers can concentrate on development and innovation
without worrying about the infrastructure.
• In PaaS, developer only requires a PC and an Internet connection to start building
applications.
Disadvantages of PaaS
• A developer writes applications for the platform provided by a particular PaaS vendor, so
moving an application to another PaaS vendor is a problem (vendor lock-in).
3) Infrastructure-as-a-Service (IaaS)
• IaaS is a way to deliver cloud computing infrastructure such as servers, storage, network and
operating systems.
• Customers can access these resources over the cloud computing platform, i.e., the Internet,
as an on-demand service.
• In IaaS, you rent complete resources rather than purchasing servers, software, data-center
space or network equipment.
• IaaS was earlier called Hardware as a Service (HaaS), a model based on the cloud
computing platform.
• HaaS differs from IaaS in that users get the bare hardware, on which they can deploy
their own infrastructure using the most appropriate software.
Advantages of IaaS
• In IaaS, users can dynamically choose a CPU, memory and storage configuration according to need.
• Users can easily access the vast computing power available on IaaS Cloud platform.
Disadvantages of IaaS
• IaaS cloud computing platform model is dependent on availability of Internet and
virtualization services.
Evolution of Cloud Computing
2. Discuss the evolution of cloud computing. (or) Formulate the stage-by-stage
evolution of the cloud with a neat sketch, and state any three benefits and drawbacks.
ARPANET introduced the flexible and powerful TCP/IP protocol suite, which is used across
the Internet to this day.
The Internet Protocol began with the initial version IPv4, which later evolved into the
new-generation IPv6 protocol. Microsoft developed the Windows 95 operating system with an
integrated browser, Internet Explorer, along with support for dial-up TCP/IP.
The first web server working on the Hypertext Transfer Protocol was released in 1996, followed
by web servers and web browsers supporting various scripting languages.
Evolution of Computing Technologies
✓ A few decades ago, the popular computing technology for processing complex and large
computational problems was "cluster computing".
✓ A group of computers was used to solve a larger computational problem as a single
unit.
✓ It was designed in such a way that the computational load was divided into similar units
of work and allocated across multiple processors, with the load balanced across the several
machines.
✓ Grid computing is a group of interconnected, independent computers
intended to solve a common computational problem as a single unit.
✓ Grid computing further evolved into cloud computing, where a centralized entity
such as a data center offers different computing services to others, similar
to the grid computing model.
✓ Cloud computing became more popular with the introduction of "virtualization"
technology.
✓ Virtualization is a method of running multiple independent virtual operating
systems on a single physical computer. It saves hardware cost through the consolidation of
multiple servers while delivering maximum throughput and optimum resource utilization.
Evolution of Processing Technologies
o When computers were initially launched, people used to work with mechanical
devices, vacuum tubes, transistors, etc.
o Then with the advent of Small-Scale Integration (SSI), Medium Scale Integration
(MSI), Large Scale Integration (LSI), and Very Large-Scale Integration (VLSI)
technology, circuits with very small dimension became more reliable and faster.
o This development in hardware technology gave new dimension in designing
processors and its peripherals.
o Processing is nothing but the execution of programs, applications or tasks on
one or more computers.
o The two basic approaches to processing are serial and parallel. In serial
processing, the given problem or task is broken into a discrete series of instructions,
which are executed sequentially on a single processor. In parallel
processing, the program's instructions are executed simultaneously
across multiple processors with the objective of running the program in less time.
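The contrast between the two approaches can be sketched in Python: the same independent tasks run one after another on a single processor, then concurrently across a pool of worker processes (the worker count of 4 is an assumption).

```python
# Sketch: the same independent tasks executed serially versus in parallel.
from concurrent.futures import ProcessPoolExecutor

def task(n: int) -> int:
    return sum(i * i for i in range(n))      # a small CPU-bound unit of work

inputs = [10_000] * 8

# Serial processing: one instruction stream, tasks executed one after another.
serial_results = [task(n) for n in inputs]

# Parallel processing: the same tasks distributed across multiple processors.
if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=4) as pool:
        parallel_results = list(pool.map(task, inputs))
    print(serial_results == parallel_results)   # same answers, less wall time
```

The speedup comes only because the tasks are independent; tasks with data dependencies would need coordination between the workers.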
o Vector processing was used in certain applications where the data is generated in
the form of vectors or matrices. The next advancement over vector processing was the
development of symmetric multiprocessing (SMP) systems.
o Because multiprogramming and vector processing systems were limited by managing
resources in a master-slave model, symmetric multiprocessing systems were
designed to address that problem.
o SMP systems are intended to achieve sequential consistency, where each
processor is assigned an equal share of OS tasks. These processors are
responsible for managing the workflow of task execution as it passes through the
system.
o Lastly, massively parallel processing (MPP) was developed with many independent
arithmetic units or microprocessors that run in parallel and are interconnected
to act as a single very large computer.
o Today, massively parallel processor arrays can be implemented on a single
chip, which is cost-effective thanks to integrated-circuit technology, and they are
mostly used in advanced computing applications such as artificial intelligence.
The development of parallel processing is being influenced by many factors. The prominent
among them include the following:
➢ Computational requirements are ever increasing in the areas of both scientific and business
computing. The technical computing problems, which require high-speed computational
power, are related to life sciences, aerospace, geographical information systems,
mechanical design and analysis, and the like.
➢ Sequential architectures are reaching physical limitations as they are constrained by the
speed of light and thermodynamics laws. The speed at which sequential CPUs can operate
is reaching saturation point (no more vertical growth), and hence an alternative way to get
high computational speed is to connect multiple CPUs (opportunity for horizontal growth).
➢ Hardware improvements in pipelining, superscalar, and the like are non-scalable and
require sophisticated compiler technology. Developing such compiler technology is a
difficult task.
➢ Vector processing works well for certain kinds of problems. It is suitable mostly for
scientific problems (involving lots of matrix operations) and graphical processing. It is not
useful for other areas, such as databases.
➢ The technology of parallel processing is mature and can be exploited commercially; there
is already significant R&D work on development tools and environments.
➢ Significant development in networking technology is paving the way for heterogeneous
computing.
Hardware architectures for parallel processing
The core elements of parallel processing are CPUs. Based on the number of instruction and data
streams that can be processed simultaneously, computing systems are classified into the following
four categories:
• Single-instruction, single-data (SISD) systems
• Single-instruction, multiple-data (SIMD) systems
• Multiple-instruction, single-data (MISD) systems
• Multiple-instruction, multiple-data (MIMD) systems
Fig.: Flynn's classification for parallel computers
Single-instruction, single-data (SISD) systems
An SISD computing system is a uniprocessor machine that executes a single instruction
operating on a single data stream (Figure 2).
Prepared By, N.Gobinathan, AP/CSE Page 9
In SISD, machine instructions are processed sequentially; hence computers adopting this model
are popularly called sequential computers.
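A conceptual pure-Python sketch (not real hardware behaviour) of how the SISD model differs from the data-parallel SIMD model: one instruction consuming one datum per step, versus one instruction conceptually applied to a whole data vector at once.

```python
# Conceptual illustration of two Flynn categories (not actual hardware).

def sisd(instruction, data):
    """SISD: a single instruction stream consumes a single data stream,
    one element at a time, strictly in sequence."""
    results = []
    for x in data:                        # one datum per step
        results.append(instruction(x))
    return results

def simd(instruction, data):
    """SIMD: one instruction is (conceptually) applied to all data
    elements in the same step, as a vector operation."""
    return list(map(instruction, data))   # whole vector in "one" operation

def double(x):
    return 2 * x

print(sisd(double, [1, 2, 3]))   # [2, 4, 6]
print(simd(double, [1, 2, 3]))   # [2, 4, 6] -- same result, data-parallel model
```

Real SIMD hardware (vector units, GPUs) applies the instruction to many lanes in the same clock cycle; the sketch only mirrors the programming model.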
MIMD machines are broadly categorized into shared-memory MIMD and distributed-memory MIMD,
based on the way the processing elements (PEs) are coupled to the main memory.
➢ The shared-memory MIMD architecture is easier to program but is less tolerant of failures
and harder to extend than the distributed-memory MIMD model. Failures in a
shared-memory MIMD system affect the entire system, whereas this is not the case in the
distributed model, in which each of the PEs can be easily isolated.
➢ Moreover, shared-memory MIMD architectures are less likely to scale because the addition
of more PEs leads to memory contention. This situation does not arise in the case of
distributed memory, in which each PE has its own memory. As a result,
distributed-memory MIMD architectures are the most popular today.
Shared Memory Architecture for Parallel Computers
An important characteristic of the shared memory architecture is that there is more than one
processor and all processors share the same memory with a global address space. The processors
operate independently but share the same memory resources. Changes made to a memory location
by one processor are visible to all other processors.
Based upon memory access time, the shared memory is further classified into uniform
memory access (UMA) architecture and non-uniform memory access (NUMA) architecture which
are discussed as follows :
1. Uniform memory access (UMA) : A UMA architecture comprises two or more processors with
identical characteristics. UMA architectures are also called symmetric multiprocessors. The
processors share the same memory and are interconnected by a shared-bus interconnection
scheme such that the memory access time is almost the same for each processor. The IBM S/390
is an example of a UMA architecture, shown in Fig. 7(a).
2. Non-uniform memory access (NUMA) : This architecture uses one or more
symmetric multiprocessors that are physically linked, with a portion of memory allocated to
each processor. Access to the local memory is therefore faster than to the remote memory. In
this mechanism, the processors do not all get equal access time to the memory reached through
the interconnection network, and memory access across the link is always slower. The NUMA
architecture is shown in Fig. 7(b).
Fig. 7
• Rapid elasticity : In the cloud, resource capabilities can be elastically provisioned
and released, automatically, as demand requires; this elasticity is what enables rapid
scaling outward and inward. To the consumer, the capabilities available for provisioning
often appear to be unlimited and can be appropriated in any quantity at any time.
• Measured service : Cloud systems automatically control and optimize resource use by
leveraging a metering capability at some level of abstraction appropriate to the type of
service (e.g., storage, processing, bandwidth, and active user accounts). The cloud system
provides a mechanism for measuring resource usage for monitoring, controlling, and billing
purposes, and usage is reported to provide transparency for both the provider and the
consumer of the utilized service.
Apart from these, some other characteristics of cloud computing are given as
follows :
a) Cloud computing mostly uses open, REST-based APIs (Application Programming
Interfaces) built on web services that are universally available and allow users to access
cloud services through a web browser easily and efficiently.
b) Most cloud services are location independent and can be provisioned at any time, from
anywhere, and on any device over the Internet.
c) It provides agility to improve the reuse of cloud resources.
d) It provides end-user computing, where users have control over the resources they use,
as opposed to control by a centralized IT service.
e) It provides a multi-tenant environment for sharing a large pool of resources among users,
with added features such as reliability, scalability, elasticity and security.
Elasticity in Cloud
5. Explain in detail Elasticity in cloud computing with an example. (Nov/Dec
2020) (or) Describe in detail Elasticity in the cloud and on-demand
provisioning. (May 2022)
Elasticity in Cloud
Elasticity is one of the important characteristics of cloud computing. It is especially
important for mission-critical or business-critical applications, where any compromise in
performance may lead to huge business losses. Elasticity therefore comes into the picture when
additional resources must be provisioned for such applications to meet their performance
requirements and demand.
It works in such a way that when the number of users accessing an application increases, the
application is automatically provisioned with extra computing, storage and network resources
(CPU, memory, storage or bandwidth), and when fewer users are present those resources are
automatically decreased as required.
Elasticity in the cloud is a popular feature associated with scale-out solutions (horizontal
scaling), which allow resources to be dynamically added or removed when needed. It is
generally associated with public cloud resources and is commonly featured in pay-per-use or
pay-as-you-go services.
Elasticity is the ability to grow or shrink infrastructure resources (such as compute,
storage or network) dynamically as needed, adapting to workload changes in the applications in
an autonomic manner.
It maximizes resource utilization, which results in overall savings in infrastructure
costs. Depending on the environment, elasticity is applied to resources in the infrastructure,
not limited to hardware, software, connectivity, QoS and other policies. Elasticity depends
entirely on the environment; it can sometimes become a negative trait for applications whose
performance must be guaranteed.
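The scaling decision behind elasticity can be sketched as a simple threshold rule: add an instance when utilization stays high, remove one when it stays low. The thresholds and instance limits below are illustrative assumptions, not any provider's defaults.

```python
# Sketch of threshold-based elasticity (thresholds and limits are assumed).
SCALE_OUT_AT = 0.80              # add capacity above 80% utilization
SCALE_IN_AT = 0.30               # remove capacity below 30% utilization
MIN_INSTANCES, MAX_INSTANCES = 1, 20

def autoscale(instances: int, utilization: float) -> int:
    """Return the new instance count after one scaling decision."""
    if utilization > SCALE_OUT_AT and instances < MAX_INSTANCES:
        return instances + 1     # scale out: demand is growing
    if utilization < SCALE_IN_AT and instances > MIN_INSTANCES:
        return instances - 1     # scale in: pay only for what is used
    return instances             # within the band: no change

# Peak hours push the pool up; off-peak hours let it shrink again.
count = 2
for load in [0.9, 0.95, 0.85, 0.4, 0.2, 0.1]:
    count = autoscale(count, load)
print(count)  # 3: grew to 5 during the peak, then shrank as load fell
```

Real autoscalers typically also require the threshold to be breached for a sustained period before acting, to avoid flapping on short spikes.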
Elasticity is commonly seen in IT organizations: during peak hours, when all
employees are working on the cloud (say between 9 AM and 9 PM), resources are scaled up to
the highest mark, while during non-peak hours, when few employees are working (between 9 PM
and 9 AM), resources are scaled down to the lowest mark; separate bills are generated for the
low-usage and high-usage periods, which saves huge costs. Another example of elasticity is the
Indian Railways train booking service, IRCTC. Earlier, during the Tatkal booking period, the
website used to crash because the servers could not handle so many users' requests for
booking tickets at a specific time. Nowadays this does not happen, because the cloud's
elasticity automatically scales the infrastructure resources out as user requests grow during
the Tatkal booking period, so the website never stops in between, and scales them back in
when fewer users are present. This provides great flexibility and reliability for the
customers using the service.
Advantages/Pros of Elastic Cloud Computing: -
• Cost Efficiency: - The cloud is available at much cheaper rates than traditional approaches
and can significantly lower overall IT expenses. By using a cloud solution, companies can save
on licensing fees and eliminate overhead charges such as the cost of data storage, software
updates, management, etc.
• Convenience and continuous availability: - The cloud makes it easier to access shared
documents and files, with options to view and modify. Public clouds also offer services that
are available wherever the end user is located. Moreover, continuous availability of resources
is guaranteed: in case of system failure, alternative instances are automatically spawned on
other machines.
• Backup and Recovery: - The process of backing up and recovering data is simplified because
information resides in the cloud and not on a physical device. The various cloud
providers offer reliable and flexible backup/recovery solutions.
• Cloud is environmentally friendly:-The cloud is more efficient than the typical IT
infrastructure and it takes fewer resources to compute, thus saving energy.
• Scalability and Performance: - Scalability is a built-in feature for cloud deployments.
Cloud instances are deployed automatically only when needed and as a result enhance
performance with excellent speed of computations.
• Increased Storage Capacity: -The cloud can accommodate and store much more data
compared to a personal computer and in a way offers almost unlimited storage capacity.
Disadvantages/Cons of Elastic Cloud Computing: -
• Security and Privacy in the Cloud: - Security is the biggest concern in cloud computing.
Companies essentially hand their private data and information over to the cloud; since
remote cloud infrastructure is used, it is then up to the cloud service provider to manage and
protect the data and keep it confidential.
• Limited Control: - Since the applications and services run remotely in third-party virtual
environments, companies and users have limited control over the function and execution
of the hardware and software.
• Dependency and vendor lock-in: - One of the major drawbacks of cloud computing is the
implicit dependency on the provider, also called "vendor lock-in", since it is difficult to
migrate vast amounts of data from an old provider to a new one. It is therefore advisable to
select a vendor very carefully.
• Increased Vulnerability: - Cloud-based solutions are exposed on the public Internet and are
therefore a more vulnerable target for malicious users and hackers. Nothing is completely
secure over the Internet; even the biggest organizations suffer serious attacks and security
breaches.
Resource Provisioning
6. Explain in detail the Resource Provisioning and its methods in cloud computing?
Providers supply cloud services by signing SLAs with end users. The SLAs must commit
sufficient resources such as CPU, memory, and bandwidth that the user can use for a preset
period. Under provisioning of resources will lead to broken SLAs and penalties. Over
provisioning of resources will lead to resource underutilization, and consequently, a decrease
in revenue for the provider. Deploying an autonomous system to efficiently provision
resources to users is a challenging problem.
The difficulty comes from the unpredictability of consumer demand, software and hardware
failures, heterogeneity of services, power management, and conflicts in signed SLAs between
consumers and service providers.
Efficient VM provisioning depends on the cloud architecture and management of cloud
infrastructures. Resource provisioning schemes also demand fast discovery of services and data
in cloud computing infrastructures. In a virtualized cluster of servers, this demands efficient
installation of VMs, live VM migration, and fast recovery from failures. To deploy VMs, users treat
them as physical hosts with customized operating systems for specific applications. For example,
Amazon’s EC2 uses Xen as the virtual machine monitor (VMM). The same VMM is used in IBM’s
Blue Cloud Resource.
In the EC2 platform, some predefined VM templates are also provided. Users can choose
different kinds of VMs from the templates. IBM’s Blue Cloud does not provide any VM templates.
In general, any type of VM can run on top of Xen. Microsoft also applies virtualization in its Azure
cloud platform.
The provider should offer resource-economic services. Power-efficient schemes for
caching, query processing, and thermal management are mandatory due to increasing energy
waste by heat dissipation from data centers. Public or private clouds promise to streamline the
on-demand provisioning of software, hardware, and data as a service, achieving economies of
scale in IT deployment and operation.
Provisioning Methods
In case (a), overprovisioning for the peak load causes heavy resource waste (the shaded area).
In case (b), underprovisioning (along the capacity line) results in losses for both user and
provider: demand paid for by users (the shaded area above the capacity line) is not served,
while wasted resources still exist below the provisioned capacity.
In case (c), constant provisioning of resources with fixed capacity against a declining user
demand can result in even worse resource waste. The user may give up the service by canceling
the demand, resulting in reduced revenue for the provider. Both the user and the provider may
be losers in resource provisioning without elasticity.
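The three cases can be quantified with a small sketch: fixed capacity measured against a time-varying demand curve. The hourly demand figures and capacities below are made-up illustrations.

```python
# Sketch quantifying over- vs under-provisioning against a demand curve.
demand = [4, 7, 12, 18, 11, 6, 3]            # requested units per hour (assumed)

def provisioning_outcome(capacity: int, demand: list) -> dict:
    wasted = sum(max(capacity - d, 0) for d in demand)    # paid for but idle
    unserved = sum(max(d - capacity, 0) for d in demand)  # demand turned away
    return {"wasted": wasted, "unserved": unserved}

print(provisioning_outcome(18, demand))  # provisioning for the peak: heavy waste
print(provisioning_outcome(8, demand))   # under-provisioning: users go unserved
```

An elastic system would instead track the demand curve hour by hour, driving both the wasted and unserved totals toward zero.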
Demand-Driven Resource Provisioning
This method adds or removes computing instances based on the current utilization level of the
allocated resources. When a resource has stayed above a threshold for a certain amount of
time, the scheme increases that resource based on demand. When a resource is below a threshold
for a certain amount of time, that resource could be decreased accordingly. Amazon implements
such an auto-scale feature in its EC2 platform. This method is easy to implement, but the
scheme does not work out right if the workload changes abruptly.
The x-axis in Figure 4.25 is the time scale in milliseconds. In the beginning, heavy
fluctuations of CPU load are encountered. All three methods have demanded a few VM instances
initially. Gradually, the utilization rate becomes more stabilized with a maximum of 20 VMs (100
percent utilization) provided for demand-driven provisioning in Figure 4.25(a). However, the
event-driven method reaches a stable peak of 17 VMs toward the end of the event and drops
quickly in Figure 4.25(b). The popularity provisioning shown in Figure 4.25(c) leads to a similar
fluctuation with peak VM utilization in the middle of the plot.
Event-Driven Resource Provisioning
This scheme adds or removes machine instances based on a specific time event. The scheme works
better for seasonal or predicted events such as Christmastime in the West and the Lunar New Year
in the East. During these events, the number of users grows before the event period and then
decreases during the event period. This scheme anticipates peak traffic before it happens. The
method results in a minimal loss of QoS, if the event is predicted correctly. Otherwise, wasted
resources are even greater due to events that do not follow a fixed pattern.
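A minimal sketch of this event-driven scheme: capacity is raised ahead of a predicted calendar event and released afterwards. The event windows and instance counts are assumptions for illustration.

```python
# Sketch of event-driven provisioning: pre-scale for predicted calendar events.
from datetime import date

BASELINE = 5
EVENTS = {  # hypothetical predicted events: (start, end) -> extra instances
    (date(2024, 12, 20), date(2024, 12, 27)): 15,   # Christmas traffic
    (date(2024, 2, 8), date(2024, 2, 16)): 10,      # Lunar New Year traffic
}

def planned_capacity(today: date) -> int:
    """Pre-scale during a predicted event window, baseline capacity otherwise."""
    for (start, end), extra in EVENTS.items():
        if start <= today <= end:
            return BASELINE + extra
    return BASELINE

print(planned_capacity(date(2024, 12, 24)))  # 20 inside the Christmas window
print(planned_capacity(date(2024, 7, 1)))    # 5 on an ordinary day
```

As the text notes, the scheme only works when the event prediction is right; an unexpected quiet event leaves the extra capacity wasted.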
Popularity-Driven Resource Provisioning
In this method, the Internet searches for popularity of certain applications and creates the
instances by popularity demand. The scheme anticipates increased traffic with popularity. Again,
the scheme has a minimal loss of QoS, if the predicted popularity is correct. Resources may be
wasted if traffic does not occur as expected. In Figure 4.25(c), EC2 performance by CPU utilization
rate (the dark curve with the percentage scale shown on the left) is plotted against the number of
VMs provisioned (the light curves with scale shown on the right, with a maximum of 20 VMs
provisioned).
Dynamic Resource Deployment
The cloud uses VMs as building blocks to create an execution environment across multiple
resource sites. The InterGrid-managed infrastructure was developed by a Melbourne University
group. Dynamic resource deployment can be implemented to achieve scalability in performance.
The Inter-Grid is a Java-implemented software system that lets users create execution cloud
environments on top of all participating grid resources. Peering arrangements established
between gateways enable the allocation of resources from multiple grids to establish the execution
environment. In Figure 4.26, a scenario is illustrated by which an intergrid gateway (IGG) allocates
resources from a local cluster to deploy applications in three steps:
(1) requesting the VMs, (2) enacting the leases, and (3) deploying the VMs as requested.
A grid has predefined peering arrangements with other grids, which the IGG manages.
Through multiple IGGs, the system coordinates the use of Inter Grid resources. An IGG is aware of
the peering terms with other grids, selects suitable grids that can provide the required resources,
and replies to requests from other IGGs. Request redirection policies determine which peering
grid Inter Grid selects to process a request and a price for which that grid will perform the task.
An IGG can also allocate resources from a cloud provider. The cloud system creates a virtual
environment to help users deploy their applications. These applications use the distributed grid
resources. The InterGrid allocates and provides a distributed virtual environment (DVE). This is a
virtual cluster of VMs that runs isolated from other virtual clusters.
A component called the DVE manager performs resource allocation and management on
behalf of specific user applications. The core component of the IGG is a scheduler for implementing
provisioning policies and peering with other gateways. The communication component provides
an asynchronous message-passing mechanism. Received messages are handled in parallel by a
thread pool.
Provisioning of Storage Resources : The data in cloud computing is stored in the clusters of
the cloud provider and can be accessed from anywhere in the world, e.g., email. For data
storage, a distributed file system, a tree-structure file system, and others can be used,
e.g., GFS, HDFS and Microsoft Cosmos. This provides a convenient coding platform for
developers. The storage methodologies and their features can be found in Table 4.8.
7. Explain in detail the Challenges in Security, Data Lock-in and
Standardization. (Nov/Dec 2020)
Cloud computing, an emergent technology, has posed many challenges in different aspects of
data and information handling. Some of these are shown in the following diagram:
Security and Privacy
Security and privacy of information is the biggest challenge to cloud computing. Security and
privacy issues can be mitigated by employing encryption, security hardware and security
applications.
Portability
This is another challenge to cloud computing: applications should be easily migrated from
one cloud provider to another, and there must not be vendor lock-in. However, this is not yet
possible because each cloud provider uses different standard languages for its platform.
Interoperability
It means the application on one platform should be able to incorporate services from the other
platforms. It is made possible via web services, but developing such web services is very complex.
Computing Performance
Data-intensive applications on the cloud require high network bandwidth, which results in high
cost. Low bandwidth does not meet the desired computing performance of a cloud application.
Reliability and Robustness
It is necessary for cloud systems to be reliable and robust because most businesses are now
becoming dependent on services provided by third parties.
Data Protection:
Data protection is a crucial element of security that warrants scrutiny. In the cloud, data is
stored in remote data centers and managed by third-party vendors, so there is a fear of losing
confidential data. Therefore, various cryptographic techniques have to be employed to protect
it.
Data Recovery and Availability:
In the cloud, a user's data is scattered across multiple data centers, so recovering such
data is very difficult: the user never comes to know the exact location of the data and does
not know how to recover it. The availability of cloud services is closely associated with the
downtime of the services, which is specified in the agreement called the Service Level
Agreement (SLA). Any compromise of the SLA may lead to increased downtime, reduced
availability, and harm to your business productivity.
Regulatory and Compliance Restrictions:
Many countries place compliance restrictions and regulations on the usage of cloud services.
Government regulations in such countries do not allow providers to share customers' personal
and other sensitive information outside the state or country. In order to meet such
requirements, cloud providers need to set up a data center or a storage site exclusively
within that country to comply with the regulations.
Management Capabilities:
The involvement of multiple cloud providers for in-house services may lead to difficulty in management.
Cloud Migration Issues:
Organizations hosting services should have the freedom to migrate those services into or out of the cloud, which is very difficult in public clouds. Compatibility issues arise when an organization wants to change its service provider: most public clouds provide vendor-dependent APIs for access and may use their own proprietary solutions, which may not be compatible with other providers.
Part-A
1. Define Cloud Computing. (Nov/Dec 2021)
According to NIST, Cloud computing is a model for enabling ubiquitous, convenient, on-
demand network access to a shared pool of configurable computing resources (e.g., networks,
servers, storage, applications, and services) that can be rapidly provisioned and released
with minimal management effort or service provider interaction.
• Cloud computing is the on-demand availability of computer system resources, especially
data storage and computing power, without direct active management by the user.
• Cloud computing allows you to set up a virtual office to give you the flexibility of connecting
to your business anywhere, any time.
• Moving to cloud computing may reduce the cost of managing and maintaining your IT
systems, rather than purchasing expensive systems and equipment for your business.
2. Enlist the pros and cons of cloud computing. (Dec-2019)
The pros and cons of cloud computing are
Pros of Cloud computing
• Improved accessibility
• Optimum Resource Utilization
• Scalability and Speed
• Minimizes licensing cost of the software
• On-demand self-service
• Broad network access
• Resource pooling
• Rapid elasticity
Cons of Cloud computing
• Security
• Privacy and Trust
• Vendor lock-in
• Service Quality
• Cloud migration issues
• Data Protection
• Data Recovery and Availability
• Regulatory and Compliance Restrictions
• Management Capabilities
• Interoperability and Compatibility Issue.
3. What are the different deployment model of cloud computing? (May-2022)
Various deployment model of cloud computing are
❖ Public Cloud
❖ Private Cloud
❖ Hybrid Cloud
❖ Community Cloud
4. List the Characteristics of Cloud computing?
Cloud computing has some interesting characteristics that bring benefits to both cloud
service consumers (CSCs) and cloud service providers (CSPs). These characteristics are
• No up-front commitments
• On-demand access
• Nice pricing
• Simplified application acceleration and scalability
• Efficient resource allocation
• Energy efficiency
• Seamless creation and use of third-party services
5. Write short notes on Public cloud?
Public cloud:
➢ Services and infrastructure are hosted on the premises of the cloud provider and are
provisioned for open use by the general public.
➢ The end users can access the services via public network like internet.
6. Write short notes on Private cloud?
Private cloud:
➢ Private clouds are designed and maintained by a single enterprise to meet the specific
needs of that enterprise.
➢ A private cloud requires a structure that is built entirely for a single business, hosted
either on-site or in a specific service provider's data center.
7. Write short notes on Hybrid cloud?
Hybrid cloud:
➢ Hybrid cloud computing is an environment that combines public clouds and private
clouds by allowing data and applications to be shared between them.
Prepared By, N.Gobinathan, AP/CSE Page 25
IV CSE CS8791-Cloud Computing
basis in one or more data center locations. The machines can run any combination of
operating systems.
21. What is Infrastructure as a Service (IaaS)?
✓ This model puts together infrastructures demanded by users—namely servers, storage,
networks, and the data center fabric.
✓ The user can deploy and run multiple VMs running guest OSes for specific applications.
✓ The user does not manage or control the underlying cloud infrastructure, but can specify
when to request and release the needed resources.
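The request/release pattern described above can be sketched with a toy client. Note that `IaaSClient` and all of its method names are invented here purely for illustration; real IaaS SDKs (AWS, OpenStack, etc.) have different APIs:

```python
# Hypothetical sketch of the IaaS usage pattern: the user requests VMs
# with a chosen guest OS, uses them, and releases them, without ever
# managing the underlying physical infrastructure.
class IaaSClient:
    def __init__(self):
        self._vms = {}      # vm_id -> VM description
        self._next_id = 0

    def request_vm(self, guest_os, vcpus, ram_gb):
        """Provision a new VM and return its identifier."""
        vm_id = self._next_id
        self._next_id += 1
        self._vms[vm_id] = {"os": guest_os, "vcpus": vcpus, "ram_gb": ram_gb}
        return vm_id

    def release_vm(self, vm_id):
        """Release the VM's resources back to the provider."""
        self._vms.pop(vm_id, None)

    def running(self):
        return len(self._vms)

client = IaaSClient()
web = client.request_vm("Ubuntu", vcpus=2, ram_gb=4)
db = client.request_vm("CentOS", vcpus=4, ram_gb=16)
client.release_vm(web)       # release resources when no longer needed
print(client.running())      # 1
```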
22. Bring out the differences between private cloud and public cloud?
29. What are the main benefits of both scalability and elasticity?
Cost-effectiveness: cloud scalability and cloud elasticity features constitute an effective
resource management strategy:
• The pay-per-use model makes cloud elasticity the proper answer to sudden surges in
workload demand (vital for streaming services and marketplaces).
• The pay-as-you-expand model allows planning the gradual growth of the infrastructure in
sync with growing requirements (especially handy for ad tech systems).
Consistent performance: scalability and elasticity features operate resources in a way
that keeps the system's performance smooth for both operators and customers.
Service availability: scalability enables stable growth of the system, while elasticity tackles
immediate resource demands.
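The cost advantage of the pay-per-use model can be made concrete with a small calculation. The hourly demand profile and the $0.10 per instance-hour rate below are assumed for illustration, not taken from the text:

```python
# Contrast pay-per-use (elastic) billing with provisioning fixed
# capacity for the daily peak. Demand = instances needed per hour.
demand = [2] * 8 + [10] * 4 + [4] * 12   # 24 hourly samples with a spike
rate = 0.10                              # assumed $/instance-hour

# Pay-per-use: the bill follows actual demand hour by hour.
elastic_cost = sum(demand) * rate

# Fixed provisioning: capacity sized for the peak runs all day.
fixed_cost = max(demand) * len(demand) * rate

print(f"elastic: ${elastic_cost:.2f}, fixed-for-peak: ${fixed_cost:.2f}")
```

With a short spike, elastic billing charges only for the hours of high demand, while peak-sized fixed capacity is paid for around the clock.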
30. What are the types of cloud scalability?
There are several types of cloud scalability:
✓ Vertical, aka Scale-Up - the ability to handle an increasing workload by adding
resources to the existing infrastructure. It is a short-term solution to cover immediate
needs.
✓ Horizontal, aka Scale-Out - the expansion of the existing infrastructure with new
elements to tackle more significant workload requirements. It is a long-term solution
aimed at covering present and future resource demands with room for expansion.
✓ Diagonal scalability - a more flexible solution that combines adding and removing
resources according to the current workload requirements. It is the most cost-effective
scalability solution by far.
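The difference between scaling up and scaling out can be shown with a toy model. The `Cluster` class and its CPU numbers are invented for illustration; capacity is in arbitrary CPU units:

```python
# Toy model contrasting vertical and horizontal scaling.
class Cluster:
    def __init__(self, nodes=1, cpu_per_node=4):
        self.nodes = nodes
        self.cpu_per_node = cpu_per_node

    def scale_up(self, extra_cpu):
        """Vertical scaling: make each existing node bigger."""
        self.cpu_per_node += extra_cpu

    def scale_out(self, extra_nodes):
        """Horizontal scaling: add more nodes alongside the old ones."""
        self.nodes += extra_nodes

    def capacity(self):
        return self.nodes * self.cpu_per_node

c = Cluster()        # 1 node x 4 CPUs
c.scale_up(4)        # vertical:   1 node x 8 CPUs
c.scale_out(2)       # horizontal: 3 nodes x 8 CPUs
print(c.capacity())  # 24
```

Diagonal scalability would simply apply both operations (and their inverses) as the workload rises and falls.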
31. What are the three cases of static cloud resource provisioning policies?
❖ Over-provisioning with the peak load causes heavy resource waste whenever demand
falls below the provisioned capacity.
❖ Under-provisioning of resources results in losses for both user and provider: paid
demand above the capacity line is not served, while wasted resources still exist for demand
below the provisioned capacity.
❖ Constant provisioning of resources with fixed capacity against a declining user demand
can result in even worse resource waste.
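The waste/loss trade-off of a fixed capacity can be quantified directly. The demand curve and capacity value below are assumed for illustration:

```python
# With static provisioning, any fixed capacity both wastes resources
# (when demand is low) and fails to serve demand (when demand is high).
demand = [3, 5, 9, 12, 7, 4]   # resource units demanded per period
capacity = 8                   # statically provisioned capacity

waste = sum(max(capacity - d, 0) for d in demand)     # idle, paid-for capacity
unserved = sum(max(d - capacity, 0) for d in demand)  # paid demand not served

print(f"wasted capacity: {waste}, unserved demand: {unserved}")
```

Raising the capacity shrinks the unserved demand but grows the waste, and vice versa, which is exactly why elastic, on-demand provisioning is preferred.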
32. What are the three resource-provisioning methods?(Nov/Dec 2022)
Three resource-provisioning methods are presented in the following sections.
✓ The demand-driven method provides static resources and has been used in grid computing
for many years.
✓ The event-driven method is based on workload predicted by time.
✓ The popularity-driven method is based on monitored Internet traffic.
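Demand-driven provisioning is commonly realized with utilization thresholds: add an instance when utilization stays high, remove one when it stays low. A minimal sketch (the 66%/33% thresholds are a common heuristic, assumed here rather than taken from the text):

```python
# Threshold-based demand-driven scaling: one scaling decision per call.
def demand_driven(instances, utilization, hi=0.66, lo=0.33):
    """Return the new instance count given current utilization (0..1)."""
    if utilization > hi:
        return instances + 1               # scale out under high load
    if utilization < lo and instances > 1:
        return instances - 1               # scale in when mostly idle
    return instances                       # otherwise hold steady

n = 2
for u in (0.80, 0.75, 0.20):   # two busy samples, then an idle one
    n = demand_driven(n, u)
print(n)  # scaled out twice, then in once -> 3
```

An event-driven variant would instead trigger scaling at predicted times (e.g., before a known seasonal peak), and a popularity-driven variant on observed traffic trends.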
33. List the main characteristics of cloud computing?
• Resources Pooling.
• On-Demand Self-Service.
• Easy Maintenance.
• Economical.
• Security.
• Automation.
37. Difference between Distributed computing, Grid computing and Cloud computing

Feature | Distributed Computing | Grid Computing | Cloud Computing
Computing architecture | Client-server and peer-to-peer | Distributed computing | Client-server computing
Scalability | Low to moderate | Low to moderate | High
Flexibility | Moderate | Less | More
Management | Decentralized | Decentralized | Centralized
Owned and managed by | Organizations | Organizations | Cloud service providers
Provisioning | Application and service oriented | Application oriented | Service oriented
Accessibility | Through communication protocols like RPC, MoM, IPC, RMI | Through grid middleware | Through standard web protocols
Resource allocation | Pre-reserved | Pre-reserved | On-demand