
MCS-227
Cloud Computing and IoT

Indira Gandhi National Open University
School of Computer and Information Sciences

Block 2
RESOURCE PROVISIONING,
LOAD BALANCING AND SECURITY
UNIT 4
Resource Pooling, Sharing and Provisioning
UNIT 5
Scaling
UNIT 6
Load Balancing
UNIT 7
Security Issues in Cloud Computing
BLOCK INTRODUCTION
The title of the block is Resource Provisioning, Load Balancing and Security. The objective
of this block is to help you understand the underlying concepts of Resource
Provisioning, Load Balancing and Security in Cloud Computing.
The block is organized into 4 units:

Unit 4 covers an overview of resource pooling, resource pooling architecture, resource sharing,
and resource provisioning along with its various approaches. Towards the end, VM sizing is discussed.
Unit 5 covers the overview of cloud elasticity, scaling primitives, scaling strategies
(proactive and reactive scaling), auto scaling and types of scaling.
Unit 6 covers load balancing, goals of load balancing, levels of load balancing, load
balancing algorithms and load balancing as a service.
Unit 7 covers cloud security, how it is different from traditional (legacy) IT security, cloud
computing security requirements, challenges in providing cloud security, threats, ensuring
security, Identity and Access Management and Security-as-a-Service.
PROGRAMME DESIGN COMMITTEE
Prof. (Retd.) S.K. Gupta, IIT, Delhi
Prof. T.V. Vijay Kumar, JNU, New Delhi
Prof. Ela Kumar, IGDTUW, Delhi
Prof. Gayatri Dhingra, GVMITM, Sonipat
Mr. Milind Mahajan, Impressico Business Solutions, New Delhi
Sh. Shashi Bhushan Sharma, Associate Professor, SOCIS, IGNOU
Sh. Akshay Kumar, Associate Professor, SOCIS, IGNOU
Dr. P. Venkata Suresh, Associate Professor, SOCIS, IGNOU
Dr. V.V. Subrahmanyam, Associate Professor, SOCIS, IGNOU
Sh. M.P. Mishra, Assistant Professor, SOCIS, IGNOU
Dr. Sudhansh Sharma, Assistant Professor, SOCIS, IGNOU

COURSE DESIGN COMMITTEE


Prof. T.V. Vijay Kumar, JNU, New Delhi
Prof. S. Balasundaram, JNU, New Delhi
Prof. D. P. Vidyarthi, JNU, New Delhi
Dr. Ayesha Choudhary, JNU, New Delhi
Prof. Anjana Gosain, USICT, GGSIPU, New Delhi
Sh. Shashi Bhushan Sharma, Associate Professor, SOCIS, IGNOU
Sh. Akshay Kumar, Associate Professor, SOCIS, IGNOU
Dr. P. Venkata Suresh, Associate Professor, SOCIS, IGNOU
Dr. V.V. Subrahmanyam, Associate Professor, SOCIS, IGNOU
Sh. M.P. Mishra, Assistant Professor, SOCIS, IGNOU
Dr. Sudhansh Sharma, Assistant Professor, SOCIS, IGNOU

SOCIS FACULTY
Prof. P. Venkata Suresh, Director, SOCIS, IGNOU
Prof. V.V. Subrahmanyam, SOCIS, IGNOU
Prof. Sandeep Singh Rawat, SOCIS, IGNOU
Prof. Divakar Yadav, SOCIS, IGNOU
Dr. Akshay Kumar, Associate Professor, SOCIS, IGNOU
Dr. M.P. Mishra, Associate Professor, SOCIS, IGNOU
Dr. Sudhansh Sharma, Assistant Professor, SOCIS, IGNOU

BLOCK PREPARATION TEAM


Course Editor
Prof. D. P. Vidyarthi
School of Computer and System Sciences (SC&SS),
Jawaharlal Nehru University, New Delhi

Language Editor
Prof. Parmod Kumar
School of Humanities, IGNOU, New Delhi

Course Writers
Unit 4: Miss Jyoti Bisht, Research Scholar,
School of Computer and Information Sciences, IGNOU, New Delhi
Unit 5: Dr. Manish Kumar, (Former) Assistant Professor,
School of Computer and Information Sciences, IGNOU, New Delhi
Unit 6: Dr. S. Nagaprasad, Lecturer,
Dept. of Computer Science, Tara Govt. Degree and P.G. College,
Sangareddy, Telangana
Unit 7: Dr. Swathi Kailasam, Associate Professor,
Dept. of Computer Science, Koneru Lakshmaiah Education Foundation
(KLEF), Guntur Dist., Andhra Pradesh

Course Coordinator: Prof. V.V. Subrahmanyam

Print Production
Mr. Sanjay Aggarwal, Assistant Registrar (Publication), MPDD

December, 2023
© Indira Gandhi National Open University, 2023
ISBN-
All rights reserved. No part of this work may be reproduced in any form, by mimeograph or any other means, without permission in writing from
the Indira Gandhi National Open University.
Further information on the Indira Gandhi National Open University courses may be obtained from the University’s office at Maidan Garhi, New
Delhi-110068.
Printed and published on behalf of the Indira Gandhi National Open University, New Delhi by MPDD, IGNOU.
UNIT 4 RESOURCE POOLING, SHARING
AND PROVISIONING

4.1 Introduction
4.2 Objectives
4.3 Resource Pooling
4.4 Resource Pooling Architecture
4.4.1 Server Pool
4.4.2 Storage Pool
4.4.3 Network Pool
4.5 Resource Sharing
4.5.1 Multi Tenancy
4.5.2 Types of Tenancy
4.5.3 Tenancy at Different Level of Cloud Services
4.6 Resource Provisioning and Approaches
4.6.1 Static Approach
4.6.2 Dynamic Approach
4.6.3 Hybrid Approach
4.7 VM Sizing
4.8 Summary
4.9 Solutions / Answers
4.10 Further Readings

4.1 INTRODUCTION

Resource pooling is one of the essential attributes of Cloud Computing
technology which separates the cloud computing approach from the traditional IT
approach. Resource pooling, along with virtualization and sharing of resources,
leads to the dynamic behavior of the cloud. Instead of allocating resources
permanently to users, they are dynamically provisioned on a need basis. This
leads to efficient utilization of resources as load or demand changes over a
period of time. Multi-tenancy allows a single instance of an application
software along with its supporting infrastructure to be used to serve multiple
customers. It is not only economical and efficient to the providers, but may
also reduce the charges for the consumers.

4.2 OBJECTIVES

After going through this unit, you should be able to:

• describe the concept of resource pooling


• discuss various resource pooling architectures
• understand different resource sharing techniques
• define various provisioning approaches
• know about resource pricing

4.3 RESOURCE POOLING

Resource pool is a collection of resources available for allocation to users. All
types of resources – compute, network or storage – can be pooled. It creates a
layer of abstraction for consumption and presentation of resources in a
consistent manner. A large pool of physical resources is maintained in cloud
data centers and presented to users as virtual services. Any resource from this
pool may be allocated to serve a single user or application, or can be even
shared among multiple users or applications. Also, instead of allocating
resources permanently to users, they are dynamically provisioned on a need
basis. This leads to efficient utilization of resources as load or demand changes
over a period of time.

For creating resource pools, providers need to set up strategies for categorizing
and management of resources. The consumers have no control or knowledge of
the actual locations where the physical resources are located. However, some
service providers may provide a choice of geographic location at a higher
abstraction level, like region or country, from where the customer can get resources.
This is generally possible with large service providers who have multiple data
centers across the world.

Figure 1: Pooling of Physical and Virtual Resources

Resource pooling in cloud computing refers to the practice of aggregating and
sharing computing resources to serve multiple users or clients. It is a
fundamental concept that enables efficient utilization and management of
resources in a cloud environment.

Here are some key aspects of resource pooling:

1. Shared Infrastructure: Cloud providers maintain vast pools of
computing resources such as servers, storage, and networking devices.
These resources are shared among multiple users or tenants, allowing
for better resource utilization and cost-effectiveness.
2. Multi-tenancy: Multiple users or organizations can utilize the same
physical resources simultaneously, while their data and applications
remain logically isolated and secure. This multi-tenancy model ensures

that resources are dynamically allocated based on demand, optimizing
their utilization.
3. On-Demand Self-Service: Users can access and provision resources as
needed without direct interaction with the cloud provider. This self-
service aspect allows for quick scalability and flexibility, enabling
users to increase or decrease their resource usage based on current
requirements.
4. Elasticity: Resource pooling enables elasticity, allowing users to scale
their resources up or down dynamically in response to changing
workloads. This scalability ensures that users can handle fluctuations in
demand efficiently without overprovisioning or underutilizing
resources.
5. Virtualization: Technologies like virtualization play a crucial role in
resource pooling by abstracting physical resources and creating virtual
instances that can be allocated and managed more flexibly.
Virtualization enables better resource allocation, isolation, and
management.
Overall, resource pooling in cloud computing allows for a more
efficient and flexible utilization of computing resources, providing
users with the benefits of scalability, cost-effectiveness, and on-demand
access to a wide array of resources.
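The pooled-allocation behaviour described above can be illustrated with a small sketch. This is only a toy model (the `ResourcePool` class and its numbers are illustrative, not any real cloud API): resources are granted from a shared pool on demand and returned to it on release, so the same capacity serves multiple tenants over time.

```python
class ResourcePool:
    """Toy model of a shared pool of identical resource units."""

    def __init__(self, capacity):
        self.capacity = capacity        # total units in the pool
        self.allocated = {}             # tenant -> units currently held

    def free_units(self):
        return self.capacity - sum(self.allocated.values())

    def allocate(self, tenant, units):
        """Provision units on demand; fail only if the pool is exhausted."""
        if units > self.free_units():
            return False                # demand exceeds the shared pool
        self.allocated[tenant] = self.allocated.get(tenant, 0) + units
        return True

    def release(self, tenant, units):
        """Return units to the pool so they can serve other tenants."""
        held = self.allocated.get(tenant, 0)
        self.allocated[tenant] = max(0, held - units)


pool = ResourcePool(capacity=100)
pool.allocate("tenant-a", 40)           # granted on demand
pool.allocate("tenant-b", 50)           # same pool serves another tenant
pool.release("tenant-a", 40)            # freed units return to the pool
```

Because freed units go back to the pool, capacity given up by one tenant can immediately serve another, which is exactly what raises average utilization.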

In the next section let us study the resource pooling architecture.

4.4 RESOURCE POOLING ARCHITECTURE

Each pool of resources is made by grouping multiple identical resources, for
example – storage pools, network pools, server pools, etc. A resource pooling
architecture is then built by combining these pools of resources. An automated
system needs to be established in order to ensure efficient utilization and
synchronization of pools.

Computation resources are majorly divided into three categories – Server,
Storage and Network. Sufficient quantities of physical resources of all three
types are hence maintained in a data center.

4.4.1 Server Pools

Server pools are composed of multiple physical servers along with operating
system, networking capabilities and other necessary software installed on it.
Virtual machines are then configured over these servers and then combined to
create virtual server pools. Customers can choose virtual machine
configurations from the available templates (provided by cloud service
provider) during provisioning. Also, dedicated processor and memory pools
are created from processors and memory devices and maintained separately.
These processor and memory components from their respective pools can then
be linked to virtual servers when demand for increased capacity arises. They
3
Resource Provisioning,
Load Balancing and can further be returned to the pool of free resources when load on virtual
Security
servers decreases.

4.4.2 Storage Pools

Storage resources are one of the essential components needed for improving
performance, data management and protection. It is frequently accessed by
users or applications as well as needed to meet growing requirements,
maintaining backups, migrating data, etc.

Storage pools are composed of file based, block based or object based storage
made up of storage devices like- disk or tapes and available to users in
virtualized mode.

i. File based Storage: It is needed for applications that require file
system or shared file access. It can be used to maintain repositories,
development, user home directories, etc.
ii. Block based Storage: It is a low latency storage needed for
applications requiring frequent access like databases. It uses block level
access hence needs to be partitioned and formatted before use.
iii. Object based Storage: It is needed for applications that require
scalability, unstructured data and metadata support. It can be used for
storing large amounts of data for analytics, archiving or backups.

4.4.3 Network Pools

Resources in pools can be connected to each other, or to resources from other
pools, by the network facility. They can further be used for load balancing, link
aggregation, etc.

Network pools are composed of different networking devices like gateways,
switches, routers, etc. Virtual networks are then created from these physical
networking devices and offered to customers. Customers can further build their
own networks using these virtual networks.

Generally, dedicated pools of resources of different types are maintained by
data centers. They may also be created specific to applications or consumers.
With the increasing number of resources and pools, it becomes very complex
to manage and organize pools. A hierarchical structure can be used to form
parent-child, sibling, or nested pools to facilitate diverse resource pooling
requirements.

Check Your Progress 1


1. What is a Resource Pool?

…………………………………………………………………………………………
…………………………………………………………………………………………
…………………………………………………………………………………………

2. Explain Resource Pooling Architecture.
…………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………

3. What are the various types of storage pools available? Explain.


…………………………………………………………………………………
…………………………………………………………………………………
………………………………………………………………………………..

4.5 RESOURCE SHARING

Cloud computing technology makes use of resource sharing in order to
increase resource utilization. At a time, a huge number of applications can be
running over a pool. But they may not attain peak demands at the same time.
Hence, sharing them among applications can increase average utilization of
these resources.

Although resource sharing offers multiple benefits like increased utilization and
reduced cost and expenditure, it also introduces challenges like assuring
quality of service (QoS) and performance. Different applications competing for
the same set of resources may affect run time behavior of applications. Also,
the performance parameters like- response and turnaround time are difficult to
predict. Hence, sharing of resources requires proper management strategies in
order to maintain performance standards.

4.5.1 Multi-Tenancy

Multi-tenancy is one of the important characteristics found in public clouds.
Unlike traditional single tenancy architecture which allocates dedicated
resources to users, multi-tenancy is an architecture in which a single resource
is used by multiple tenants (customers) who are isolated from each other.
Tenants in this architecture are logically separated but physically connected. In
other words, a single instance of a software can run on a single server but can
serve multiple tenants. Here, data of each tenant is kept separately and
securely from each other. Fig 2 shows single tenancy and multi-tenancy
scenarios.

Multi-tenancy leads to sharing of resources by multiple users without the user
being aware of it. It is not only economical and efficient to the providers, but
may also reduce the charges for the consumers. Multi-tenancy is a feature
enabled by various other features like virtualization, resource sharing, and
dynamic allocation from resource pools.

In this model, physical resources cannot be pre-occupied by a particular user,
nor are resources allocated to an application dedicatedly. They can be
utilized on a temporary basis by multiple users or applications as and when
needed. The resources are released and returned to a pool of free resources
when demand is fulfilled which can further be used to serve other
requirements. This increases the utilization and decreases investment.

Figure 2: Single Tenancy Vs Multi-Tenancy

4.5.2 Types of Tenancy

There are two types of tenancy namely single tenancy and multi-tenancy.

In single tenancy architecture, a single instance of application software along
with its supporting infrastructure is used to serve a single customer. Customers
have their own independent instances and databases which are dedicated to
them. Since there is no sharing with this type of tenancy, it provides better
security but costs more to the customers.

In multi-tenancy architecture, a single instance of application software along
with its supporting infrastructure can be used to serve multiple customers.
Customers share a single instance and database. Customer’s data is isolated
from each other and remains invisible to others. Since users are sharing the
resources, it costs less to them as well as is efficient for the providers.

Multi-tenancy can be implemented in three ways:

1. Single Multi-Tenant Database - It is the simplest form, where a single
application instance and a single database instance are used to host the tenants. It is a
highly scalable architecture where more tenants can be added easily. It also
reduces cost due to sharing of resources but increases operational complexity.

2. One Database per Tenant – It is another form, where a single application
instance and separate database instances are used for each tenant. Its scalability
is low and costs higher as compared to a single multi-tenant database due to
overhead included by adding each database. Due to separate database
instances, its operational complexity is less.

3. One App Instance and One Database per Tenant - It is the architecture
where the whole application is installed separately for each tenant. Each tenant
has its own separate app and database instance. This allows a high degree of
data isolation but increases the cost.
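The first of the three patterns above, a single multi-tenant database, is commonly realized by tagging every row with a tenant identifier and filtering every query by it. A minimal sketch using Python's built-in sqlite3 module (the `orders` table and tenant names are purely illustrative):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (tenant_id TEXT, item TEXT)")

# Rows of all tenants live in one shared table, tagged by tenant_id.
db.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("tenant-a", "disk"), ("tenant-b", "vm"), ("tenant-a", "backup")],
)

def orders_for(tenant):
    """Every query filters by tenant_id, keeping tenants logically isolated."""
    rows = db.execute("SELECT item FROM orders WHERE tenant_id = ?", (tenant,))
    return [item for (item,) in rows]

print(orders_for("tenant-a"))           # tenant-a never sees tenant-b's rows
```

One application instance and one database serve every tenant, which is what makes this form cheap and scalable, while the `tenant_id` filter provides the logical isolation described above.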

4.5.3 Tenancy at Different Level of Cloud Services

Multi-tenancy can be applied not only in public clouds but also in private or
community deployment models. Also, it can be applied to all three service
models – Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and
Software as a Service (SaaS). Multi-tenancy, when performed at the infrastructure
level, makes other levels also multi-tenant to a certain extent.

Multi-tenancy at IaaS level can be done by virtualization of resources and
customers sharing the same set of resources virtually without affecting others.
In this, customers can share infrastructure resources like servers, storage and
network.

Multi-tenancy at PaaS level can be done by running multiple applications
from different vendors over the same operating system. This removes the need
for separate virtual machine allocation and leads to customers sharing
operating systems. It increases utilization and eases maintenance.

Multi-tenancy at SaaS level can be done by sharing a single application
instance along with a database instance. Hence a single application serves
multiple customers. Customers may be allowed to customize some of the
functionalities, like changing the view of the interface, but they are not
allowed to edit the application since it is serving other customers also.

4.6 RESOURCE PROVISIONING AND APPROACHES

Resource provisioning is the process of allocating resources to applications or
the customers. When a customer demands resources, they must be provisioned
automatically from a shared pool of configurable resources. Virtualization
technology makes the allocation of resources faster. It allows creation of
virtual machines in minutes, where customers can choose configurations of
their own. Proper management of resources is needed for rapid provisioning.

Resource provisioning is required to be done efficiently. Physical resources are
not allocated to users directly. Instead, they are made available to virtual
machines, which in turn are allocated to users and applications. Resources can
be assigned to virtual machines using various provisioning approaches. There
are three types of resource provisioning approaches – static, dynamic and
hybrid.

4.6.1 Static Approach
In static resource provisioning, resources are allocated to virtual machines only
once, at the beginning according to user’s or application’s requirement. It is
not expected to change further. Hence, it is suitable for applications that have
predictable and static workloads. Once a virtual machine is created, it is
expected to run without any further allocations.

Although there is no runtime overhead associated with this type of
provisioning, it has several limitations. For any application, it may be very
difficult to predict future workloads. It may lead to over-provisioning or under-
provisioning of resources. Under-provisioning is the scenario when actual
demand for resources exceeds the available resources. It may lead to service
downtime or application degradation. This problem may be avoided by
reserving sufficient resources in the beginning. But reserving large amounts of
resources may lead to another problem called Over-provisioning. It is a
scenario in which the majority of the resources remain un-utilized. It may lead
to inefficiency to the service provided and incurs unnecessary cost to the
consumers. Fig 3 shows the under-provisioning and Fig 4 shows over-
provisioning scenarios.

Figure 3: Problem of Resource Under-Provisioning

Figure 4: Problem of Resource Over-Provisioning
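The two problems can also be seen numerically. The short sketch below measures a fixed (static) allocation against a fluctuating demand trace; all numbers are illustrative:

```python
demand = [30, 45, 80, 120, 60, 20]      # units needed at each time step
allocated = 70                           # static allocation, fixed up front

for t, d in enumerate(demand):
    if d > allocated:
        # Under-provisioning: demand exceeds capacity -> degraded service
        print(f"t={t}: short by {d - allocated} units")
    else:
        # Over-provisioning: paid-for units sitting idle
        print(f"t={t}: {allocated - d} units idle")
```

No single static value avoids both problems for this trace: anything below the peak of 120 under-provisions at that step, while reserving 120 leaves most units idle the rest of the time. This is precisely the trade-off that dynamic provisioning removes.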


4.6.2 Dynamic Approach

In dynamic provisioning, as per the requirement, resources can be allocated or
de-allocated during run-time. Customers in this case don’t need to predict
resource requirements. Resources are allocated from the pool when required,
then removed from the virtual machine and returned to the pool of free
resources when no longer required. This makes the system elastic. This
approach allows customers to be charged per usage basis.

Dynamic provisioning is suited for applications where demands for resources
are un-predictable or vary frequently during run-time. It is best suited for
scalable applications. It can adapt to changing needs at the cost of overheads
associated with run-time allocations. This may lead to a small amount of delay
but solves the problem of over-provisioning and under-provisioning.

4.6.3 Hybrid Approach

Although dynamic provisioning solves the problems associated with the static
approach, it may lead to overheads at run-time. The hybrid approach solves the
problem by combining the capabilities of static and dynamic provisioning.
Static provisioning can be done in the beginning when creating a virtual
machine in order to limit the complexity of provisioning. Dynamic
provisioning can be done later for re-provisioning when the workload changes
during run-time. This approach can be efficient for real-time applications.
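The hybrid approach can be sketched as follows: a VM starts from a statically chosen baseline allocation, and a periodic check then re-provisions it when measured utilization crosses thresholds. The thresholds, step size and baseline below are illustrative assumptions:

```python
def reprovision(current_units, utilization,
                low=0.30, high=0.80, step=2, baseline=4):
    """Return the new allocation given measured utilization (0.0 to 1.0).

    The VM starts from a statically chosen baseline; this function is
    then called periodically to re-provision it dynamically.
    """
    if utilization > high:
        return current_units + step              # load rising: add units
    if utilization < low and current_units - step >= baseline:
        return current_units - step              # load falling: return units
    return current_units                         # within band: no change


units = 4                                        # static initial provisioning
for load in [0.5, 0.9, 0.95, 0.4, 0.1]:          # observed utilization over time
    units = reprovision(units, load)             # dynamic re-provisioning
```

Keeping the static baseline as a floor limits run-time churn, while the dynamic adjustments handle workload changes, combining the two approaches as described above.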

4.7 VM SIZING

Virtual machine (VM) sizing is the process of estimating the amount of
resources that a VM should be allocated. Its objective is to make sure that VM
capacity is kept proportionate to the workload. This estimation is based upon
various parameters specified by the customer. VM sizing is done at the
beginning in case of static provisioning. In dynamic provisioning, VM size can
be changed depending upon the application workload.

4.7.1 Common Types of VM Sizing

In cloud computing, virtual machine (VM) sizing involves selecting the
appropriate configuration of resources (such as CPU, memory, storage, and
networking) for a virtual machine based on the workload requirements. Here
are some common types of VM sizing:

a. Fixed or Predefined Sizing: Cloud providers often offer predefined
VM instance types with fixed configurations of CPU cores, memory,
and storage. Users can choose from these predefined sizes that best

match their workload requirements. For example, an instance type
might offer configurations like "small," "medium," or "large" with
specific allocations of resources.

b. Custom Sizing: Some cloud platforms allow users to customize the
allocation of resources according to their specific needs. Users can
manually select the amount of CPU cores, memory, storage size, and
other parameters to create a VM configuration tailored to their
workload requirements. This offers greater flexibility but requires more
detailed understanding of the workload's resource needs.

c. Burstable Sizing: Certain VM types in cloud environments offer
burstable performance. These instances provide a baseline level of
resources with the ability to temporarily increase resources when the
workload demands it. For instance, a VM might have a low baseline
CPU allocation but can burst to higher CPU performance for short
periods when needed.

d. Optimized Sizing for Specialized Workloads: Cloud providers may
offer specialized VM types optimized for specific workloads such as
high-performance computing (HPC), memory-intensive applications,
storage-focused tasks, or GPU-accelerated workloads. These instances
come with configurations tailored to the requirements of these
particular workloads.

e. Auto-Scaling: Some cloud services provide auto-scaling capabilities
where VMs automatically adjust their resource allocations based on
workload demand. This dynamic resizing ensures that the VM scales up
or down in real-time to efficiently handle varying workloads without
manual intervention.

Choosing the right VM sizing involves understanding the workload
characteristics, performance requirements, budget constraints, and scalability
needs. It's essential to strike a balance between provisioning enough resources
to meet performance expectations without overspending on resources that
won't be fully utilized. Regular monitoring and optimization of VM sizes based
on changing workload patterns can help in efficient resource utilization and
cost management in the cloud.
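Fixed or predefined sizing (type (a) above) amounts to picking the smallest catalogue entry that satisfies the workload's requirements. A minimal sketch with a hypothetical instance catalogue (the names and figures are illustrative, not any provider's real offerings):

```python
# Hypothetical catalogue of predefined sizes: name -> (vCPUs, memory in GB),
# ordered from smallest to largest.
CATALOGUE = {
    "small":  (2, 4),
    "medium": (4, 16),
    "large":  (8, 32),
}

def pick_size(cpus_needed, mem_needed_gb):
    """Choose the smallest predefined size that covers the workload."""
    for name, (cpus, mem) in CATALOGUE.items():
        if cpus >= cpus_needed and mem >= mem_needed_gb:
            return name
    return None          # nothing fits: custom sizing would be needed

print(pick_size(3, 8))   # a workload needing 3 vCPUs and 8 GB gets "medium"
```

Picking the smallest size that still covers the demand is exactly the balance described above: enough resources to meet performance expectations without paying for capacity that would sit unused.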

Check Your Progress 2


1. What is Resource Provisioning?

…………………………………………………………………………………………
…………………………………………………………………………………………
…………………………………………………………………………………………

2. Explain various resource provisioning approaches.

…………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………

3. Explain the problems of over-provisioning and under-provisioning.


…………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………

4.8 SUMMARY

In this unit, an important characteristic of Cloud Computing technology called
Resource Pooling was discussed. It is a collection of resources available for
allocation to users. A large pool of physical resources - storage, network and
server pools are maintained in cloud data centres and presented to users as
virtual services. Resources may be allocated to serve a single user or
application, or can be even shared among multiple users or applications.
Resources can be assigned to virtual machines using static, dynamic and
hybrid provisioning approaches.

4.9 SOLUTIONS / ANSWERS

Check Your Progress 1

1. Resource pool is a collection of resources available for allocation to
users. All types of resources – compute, network or storage – can be
pooled. It creates a layer of abstraction for consumption and
presentation of resources in a consistent manner. A large pool of
physical resources is maintained in cloud data centers and presented to
users as virtual services. Any resource from this pool may be allocated
to serve a single user or application, or can be even shared among
multiple users or applications. Also, instead of allocating resources
permanently to users, they are dynamically provisioned on a need basis.
This leads to efficient utilization of resources as load or demand
changes over a period of time.

2. A resource pooling architecture is composed of server, storage and
network pools. An automated system needs to be established in
order to ensure efficient utilization and synchronization of pools.

a) Server pools - They are composed of multiple physical servers along
with operating system, networking capabilities and other necessary
software installed on them.

b) Storage pools – They are composed of file based, block based or
object based storage made up of storage devices like disks or tapes and
available to users in virtualized mode.
c) Network pools - They are composed of different networking devices
like gateways, switches, routers, etc. Virtual networks are then created
from these physical networking devices and offered to customers.
Customers can further build their own networks using these virtual
networks.

3. Storage pools are composed of file based, block based or object based
storage.

a) File based storage – It is needed for applications that require file
system or shared file access. It can be used to maintain repositories,
development, user home directories, etc.
b) Block based storage – It is a low latency storage needed for
applications requiring frequent access like databases. It uses block level
access hence needs to be partitioned and formatted before use.
c) Object based storage – It is needed for applications that require
scalability, unstructured data and metadata support. It can be used for
storing large amounts of data for analytics, archiving or backups.

Check Your Progress 2

1. Resource provisioning is the process of allocating resources to
applications or the customers. When a customer demands resources, they
must be provisioned automatically from a shared pool of configurable
resources.

2. There are three types of resource provisioning approaches – static,
dynamic and hybrid.

a) In static resource provisioning, resources are allocated to virtual
machines only once, at the beginning according to user’s or
application’s requirement. It is not expected to change further. It
is suitable for applications that have predictable and static
workloads.

b) In dynamic provisioning, as per the requirement, resources can
be allocated or de-allocated during run-time. Customers in this
case don’t need to predict resource requirements. It is suited for
applications where demands for resources are un-predictable or
vary frequently during run-time.

c) Hybrid provisioning combines the capabilities of static and
dynamic provisioning. Static provisioning is done in the

beginning when creating virtual machines in order to limit the
complexity of provisioning. Dynamic provisioning is done later
for re-provisioning when the workload changes during run-time.
This approach can be efficient for real-time applications.

3. Under-provisioning is the scenario when actual demand for resources
exceeds the available resources. It may lead to service downtime or
application degradation. This problem may be avoided by reserving
sufficient resources in the beginning.

Reserving large amounts of resources may lead to another problem
called Over-provisioning. It is a scenario in which the majority of the
resources remain un-utilized. It may lead to inefficiency to the service
provided and incurs unnecessary cost to the consumers.

4.10 FURTHER READINGS

1. Cloud Computing: Principles and Paradigms, Rajkumar Buyya, James
Broberg and Andrzej M. Goscinski, Wiley, 2011.
2. Mastering Cloud Computing, Rajkumar Buyya, Christian Vecchiola,
and Thamarai Selvi, Tata McGraw Hill, 2013.
3. Essentials of cloud Computing: K. Chandrasekhran, CRC press, 2014.
4. Cloud Computing, Sandeep Bhowmik, Cambridge University Press,
2017.


UNIT 5 SCALING

Structure

5.0 Introduction
5.1 Objectives
5.2 Cloud Elasticity
5.3 Scaling Primitives
5.4 Scaling Strategies
5.4.1 Proactive Scaling
5.4.2 Reactive Scaling
5.4.3 Combinational Scaling
5.5 Auto Scaling in Cloud
5.6 Types of Scaling
5.6.1 Vertical Scaling or Scaling Up
5.6.2 Horizontal Scaling or Scaling Out
5.7 Summary
5.8 Solutions/Answers
5.9 Further Readings

5.0 INTRODUCTION

In the earlier unit, we studied resource pooling, sharing and provisioning in
cloud computing. In this unit, let us study other important characteristic
features of cloud computing – Cloud Elasticity and Scaling.

Scalability in cloud computing refers to the flexibility of allocating IT
resources as per the demand. Various applications running on cloud instances
experience variable traffic loads and hence the need for scaling arises. The needs
of such applications can be of different types, such as CPU allocation, memory
expansion, storage and networking requirements, etc. To address these different
requirements, virtual machines are one of the best ways to achieve scaling.
Each of the virtual machines is equipped with a minimum set of configurations
for CPU, Memory and storage. As and when required, the machines can be
configured to meet the traffic load. This is achieved by reconfiguring the
virtual machine for better performance for the target load. Sometimes it is quite
difficult to manage such on demand configurations by the persons, hence auto
scaling techniques plays a good role.

In this unit we will focus on the various methods and algorithms used in the
process of scaling. We will discuss various types of scaling, their usage and a
few examples. We will also discuss the importance of various techniques in
saving cost and man efforts by using the concepts of cloud scaling in highly

dynamic situations. The suitability of scaling techniques in different scenarios
is also discussed in detail.

Understanding elasticity property of cloud is important to study Scaling in


cloud computing. Cloud Elasticity is discussed in the next section.

5.1 OBJECTIVES

After going through this unit you should be able to:


• understand the concept of cloud elasticity and its importance
• list the advantages of cloud elasticity and some use cases.
• describe scaling and its advantage;
• understand the different scaling techniques;
• learn about the scaling up and down approaches;
• understand the basics of auto scaling; and
• compare among proactive and reactive scaling.

5.2 CLOUD ELASTICITY

Cloud Elasticity is the property of a cloud to grow or shrink capacity for CPU,
memory, and storage resources to adapt to the changing demands of an
organization. Cloud Elasticity can be automatic, without need to perform
capacity planning in advance of the occasion, or it can be a manual process
where the organization is notified they are running low on resources and can
then decide to add or reduce capacity when needed. Monitoring tools offered
by the cloud provider dynamically adjust the resources allocated to an
organization without impacting existing cloud-based operations.

The extent of a cloud provider's elasticity is gauged by its capability to


autonomously scale resources in response to workload fluctuations, alleviating
the need for constant resource monitoring by IT administrators. This proactive
provisioning and deprovisioning of CPU, memory, and storage resources align
closely with demand, avoiding surplus capacity or resource shortages.
Cloud Elasticity, often linked with horizontal scaling architecture, is
commonly associated with pay-as-you-go models offered by public cloud
providers. This approach enables real-time adjustments in cloud expenses by
spinning up or down virtual machines based on fluctuating demand for specific
applications or services.

This flexibility empowers businesses and IT organizations to seamlessly


address unexpected surges in demand without the need for idle backup
equipment. Leveraging Cloud Elasticity allows organizations to 'cloudburst,'
shifting operations to the cloud when demand peaks and returning to on-
premises setups once demand subsides. Ultimately, Cloud Elasticity results in
substantial savings, reducing infrastructure costs, human resource allocation,
and overall IT expenses.

5.2.1 Importance of Cloud Elasticity

In the absence of Cloud Elasticity, organizations would face paying for largely
unused capacity and handling the ongoing management and upkeep of that
capacity, including tasks like OS upgrades, patching, and addressing
component failures. Cloud Elasticity serves as a defining factor in cloud
computing, setting it apart from other models like client-server setups, grid
computing, or traditional infrastructure.

Cloud Elasticity acts as a vital tool for businesses, preventing both over-
provisioning (allocating more IT resources than necessary for current
demands) and under-provisioning (failing to allocate sufficient resources to
meet existing or imminent demands).

Over-provisioning leads to unnecessary spending, wasting valuable capital that


could be better utilized elsewhere. Even within the realm of public cloud
usage, the absence of elasticity could result in thousands of dollars squandered
annually on unused virtual machines (VMs).

Conversely, under-provisioning can result in an inability to meet existing


demand, causing unacceptable delays, dissatisfaction among users, and
ultimately, loss of business as customers opt for more responsive
organizations. The lack of Cloud Elasticity thus translates to potential business
losses and significant impacts on the bottom line.

5.2.2 How does it Work?

Cloud Elasticity empowers organizations to swiftly adjust their capacity, either


automatically or manually, by scaling up or down. It encompasses the concept
of 'cloud bursting,' where on-premises infrastructure extends into the public
cloud, especially to meet sudden or seasonal surges in demand. Moreover,
Cloud Elasticity involves the ability to expand or reduce resources utilized by
cloud-based applications.

This elasticity can be activated automatically, responding to workload patterns,


or initiated manually, often within minutes. Previously, without the benefits of
Cloud Elasticity, organizations had to rely on standby capacity or go through
lengthy processes of ordering, configuring, and installing additional capacity,
which could take weeks or months.

When demand subsides, capacity can be swiftly reduced within minutes.


Consequently, organizations only pay for the resources actively used at any

given time, eliminating the necessity to acquire or retire on-premises
infrastructure to cater to fluctuating demand.

5.2.3 Use Cases of Cloud Elasticity

Common use cases where Cloud Elasticity proves beneficial include:


• Seasonal spikes in retail or e-commerce, notably during holiday periods
like Black Friday through early January.
• Peaks in demand during school district registration, especially in the
spring before the school term starts.
• Businesses experiencing sudden surges due to product launches or viral
social media attention, such as streaming services scaling up resources
for new releases or increased viewership.
• Utilizing public cloud capabilities for Disaster Recovery and Business
Continuity (DR/BC), enabling off-site backups or rapid VM
deployment during on-premises infrastructure outages.
• Scaling virtual desktop infrastructure in the cloud for temporary
workers, contractors, or remote learning applications.
• Temporary scaling of cloud infrastructure for test and development
purposes, dismantling it once testing or development is finished.
• Adapting to unplanned projects with short deadlines.
• Temporary initiatives like data analytics, batch processing, or media
rendering, requiring scalable resources.

5.2.4 Advantages of Cloud Elasticity

The advantages of cloud elasticity encompass:

• Flexibility: By eradicating the need for purchasing, configuring, and


installing new infrastructure during demand fluctuations, Cloud
Elasticity eliminates the necessity to anticipate unexpected spikes in
demand. This empowers organizations to readily address unforeseen
surges, whether triggered by seasonal peaks, mentions on platforms like
Reddit, or endorsements from influential sources like Oprah’s book
club.

• Usage-based Pricing: Unlike paying for idle infrastructure, Cloud


Elasticity enables organizations to exclusively pay for actively utilized
resources. This approach closely aligns IT expenses with real-time
demand, allowing organizations to optimize their infrastructure size
dynamically. Amazon asserts that adopting its instance scheduler with
EC2 cloud service can yield savings exceeding 60% compared to non-
adopters.
• High Availability: Cloud elasticity fosters both high availability and
fault tolerance by enabling replication of VMs or containers in case of
potential failure. This ensures uninterrupted business services and a
consistent user experience, even amidst automatic provisioning or
deprovisioning, preserving operational continuity.

• Efficiency: Automation of resource adjustments liberates IT personnel


from manual provisioning tasks, enabling them to focus on projects that
significantly benefit the organization.

• Accelerated Time-to-Market: Access to capacity within minutes, as


opposed to the weeks or months typically required in traditional
procurement processes, expedites organizations' ability to deploy
resources swiftly, thereby enhancing their speed-to-market.

Now that we have understood cloud elasticity and its underlying concepts, let
us study the concept of scaling in the next section.

5.3 SCALING PRIMITIVES

The basic purpose of Scaling is to enable one to use cloud computing


infrastructure as much as required by the application. Here, the cloud resources
are added or removed according to the current need of the applications. The
property to enhance or to reduce the resources in the cloud is referred to as
cloud elasticity. Scaling exploits the elastic property of the cloud which we
had studied in the earlier section. The scalability of cloud architecture is
achieved using virtualization (see Unit 3: Resource Virtualization).
Virtualization uses virtual machines (VM’s) for enhancing (up scaling) and
reducing (down scaling) computing power. Scaling provides opportunities for
businesses to grow using a more secure, available and need-based computing/
storage facility on the cloud. Scaling also helps in optimizing the finances
involved in highly resource-bound applications for small to medium
enterprises.

The key advantages of cloud scaling are: -

• Minimum cost: The user has to pay a minimal cost for the actual usage
of hardware after upscaling. The cost of owning hardware at the same
scale can be much greater than the cost paid by the user, and the
maintenance and other overheads are also not borne by the user. Further,
as and when the resources are not required, they may be returned to the
service provider, resulting in cost saving.

• Ease of use: The cloud upscaling and downscaling can be done in just
a few minutes (sometimes dynamically) by using the service provider's
application interface.

• Flexibility: The users have the flexibility to enable/disable certain
VMs for upscaling and downscaling by themselves, thus saving the
configuration/installation time needed for new hardware if purchased
separately.

• Recovery: The cloud environment itself reduces the chance of disaster
and speeds up the recovery of information stored in the cloud.

The scalability of the cloud aims to optimize the utilization of various
resources under varying workload conditions, such as under-provisioning and
over-provisioning of resources. In non-cloud environments, resource utilization
is a major concern, as one has no control over scaling. Various
methods exist in the literature which may be used for scaling in traditional
environments. In general, a peak is forecasted and accordingly infrastructure is set up
in advance. This kind of scaling experiences high latency and requires manual
monitoring. The drawback of this type of setup is quite crucial in
nature, as the estimate of the maximum load may err at either end, producing
either an over-configured (high-end) or a poorly configured system.

Figure 1: Manual Scaling in traditional environment (cost and workload plotted against time, with provisioning checkpoints)


Figure 2: Semi-Automatic Scaling in Cloud Environment (cost and workload plotted against time, with provisioning checkpoints)

In the case of the clouds, virtual environments are utilized for resource
allocation. These virtual machines enable clouds to be elastic in nature which
can be configured according to the workload of the applications in real time. In
such scenarios, downtime is minimized and scaling is easy to achieve.

On the other hand, scaling saves the cost of hardware setup for short-time
peaks or dips in load. In general, most cloud service providers offer scaling
as a process for free and charge only for the additional resources used. Scaling
is also a common service provided by almost all cloud platforms.

When resources are scaled down in cloud computing, users experience


substantial cost savings due to the pay-as-you-go model inherent in the cloud.
Scaling down entails reducing allocated resources such as CPU, memory, or
storage to match the current demand, ensuring that users only pay for what
they actively use. This optimization results in reduced expenditure on unused
or underutilized resources, aligning expenses more closely with actual
consumption. Additionally, by efficiently managing resource allocation and
avoiding over-provisioning, users benefit from a cost-effective approach that
minimizes unnecessary expenses, thereby optimizing their overall spending
within the cloud environment.

5.4 SCALING STRATEGIES

Let us now see what the strategies for scaling are, how one can achieve scaling
in a cloud environment and what are its types. In general, scaling is categorized

based on the decision taken for achieving scaling. The three main strategies for
scaling are discussed below.

5.4.1 Proactive Scaling

Consider a scenario when a huge surge in traffic is expected on one of the


applications in the cloud. In this situation proactive scaling is used to cater
to the load. Proactive scaling can also be pre-scheduled according to the
expected traffic and demand. This requires an understanding of the traffic
flow in advance to utilize resources maximally; however, wrong estimates
generally lead to poor resource management. Prior knowledge of the load
helps in better provisioning of the cloud, and accordingly minimum lag is
experienced by the end users when a sudden load arrives. Figure 3
shows the resource provision when load increases with time.
Figure 3: Proactive Scaling (load plotted against time of day)
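As a minimal illustration of proactive scaling, the sketch below provisions capacity from a pre-known forecast schedule rather than from live metrics. All schedule values and names here are illustrative assumptions, not a real provider API:

```python
# Sketch of proactive (scheduled) scaling: capacity is provisioned from a
# forecast known in advance, not from live metrics. The schedule below is
# purely illustrative.

FORECAST_SCHEDULE = {
    # hour of day -> number of nodes expected to be needed
    0: 2, 6: 2, 9: 8, 12: 10, 18: 14, 22: 4,
}

def planned_capacity(hour, schedule=None, minimum=2):
    """Return the node count planned for the given hour, using the most
    recent scheduled entry at or before that hour, so that capacity is in
    place ahead of the expected load rather than after it."""
    schedule = schedule or FORECAST_SCHEDULE
    applicable = [h for h in sorted(schedule) if h <= hour]
    return schedule[applicable[-1]] if applicable else minimum
```

For example, `planned_capacity(19)` already returns the evening-peak capacity of 14 nodes, before the load itself is observed.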
5.4.2 Reactive Scaling

Reactive scaling monitors the workload and enables smooth handling of
workload changes with minimum cost. It empowers users to scale computing
resources up or down rapidly. In simple words, when hardware like CPU or
RAM or any other resource touches its highest utilization, more of that resource
is added to the environment by the service provider. Such scaling works
on the policies defined by the users/resource managers for traffic and scaling.
One major concern with reactive scaling is a quick change in load, i.e. users
experience lags while the infrastructure is being scaled. Figure 4 shows
the resource provision in reactive scaling.
Figure 4: Reactive Scaling (load plotted against time of day)
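A reactive scaling decision of the kind described above can be sketched as a simple threshold policy. The thresholds and step size below are illustrative assumptions, not provider defaults:

```python
def reactive_decision(utilization, upper=0.80, lower=0.30, step=1):
    """Return the change in node count for one observation of resource
    utilization (e.g. CPU as a fraction of capacity). Scale up when the
    resource touches high utilization, scale down when it is largely
    idle, and otherwise do nothing."""
    if utilization > upper:
        return +step   # add resources to the environment
    if utilization < lower:
        return -step   # release resources
    return 0           # within the acceptable band: no action
```

In a real deployment these thresholds would come from the user-defined policies mentioned above, and the returned delta would be applied only after the monitoring service confirms the reading.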
5.4.3 Combinational Scaling

Till now we have seen need based and forecast based techniques for scaling.
However, for better performance and low cool down period we can also
combine both of the reactive and proactive scaling strategies where we have
some prior knowledge of traffic. This helps us in scheduling timely scaling
strategies for expected load. On the other hand, we also have provision of load
based scaling apart from the predicted load on the application. This way both
the problems of sudden and expected traffic surges are addressed.

Following table 1 shows the comparison between proactive and reactive


scaling strategies.

Table 1: Proactive Scaling Vs Reactive Scaling

Parameters     | Proactive Scaling                     | Reactive Scaling
---------------|---------------------------------------|---------------------------------------
Suitability    | For applications whose load increases | For applications whose load increases
               | in an expected/known manner           | in an unexpected/unknown manner
Working        | User sets the threshold, but a        | User-defined threshold values
               | downtime is required                  | optimize the resources
Cost Reduction | Medium cost reduction                 | Medium cost reduction
Implementation | A few steps required                  | A fixed number of steps required

Check Your Progress 1

1. Explain the importance of scaling in cloud computing?


…………………………………………………………………………
…………………………………………………………………………
…………………………………………………………………………

2. How is proactive scaling achieved through virtualization?


…………………………………………………………………………
…………………………………………………………………………
…………………………………………………………………………

3. Write differences between combinational and reactive scaling.

…………………………………………………………………………

…………………………………………………………………………

…………………………………………………………………………

4. List the differences between Cloud Elasticity and Scaling.

…………………………………………………………………………

…………………………………………………………………………

…………………………………………………………………………

5.5 AUTO SCALING IN CLOUD

One of the potential risks in scaling a cloud infrastructure is its magnitude of


scaling. If we scale it down to a very low level, it will adversely affect the
throughput and latency; a high latency degrades the users' experience and can
cause dissatisfaction among them. On the other hand, if we scale up the cloud
infrastructure to a large extent, resources are not optimally utilized and cost
heavily, and the whole purpose of cost optimization fails.

In a cloud, auto scaling can be achieved using user defined policies, various
machine health checks and schedules. Various parameters such as Request
counts, CPU usage and latency are the key parameters for decision making in
autoscaling. A policy here refers to the instruction sets for clouds in case of a
particular scenario (for scaling-up or scaling-down). The autoscaling in the
cloud is done on the basis of the following parameters.

1. The number of instances required to scale.
2. An absolute number or a percentage (of the current capacity).

The process of auto scaling also requires some cooldown period before resuming
the service after a scaling takes place. No two concurrent scaling actions are
triggered, so as to maintain integrity. The cooldown period allows the effect of
an autoscaling action to get reflected in the system in a specified time interval
and avoids any integrity issues in the cloud environment.
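The cooldown behaviour described above can be sketched as a small gate that accepts at most one scaling event per window. The 300-second default is an assumption for illustration, not a provider-mandated value:

```python
class CooldownGate:
    """Accept at most one scaling action per cooldown window, so that no
    two scaling actions overlap and the previous action's effect has time
    to reflect in the system. The 300-second default is an assumption."""

    def __init__(self, cooldown_seconds=300):
        self.cooldown = cooldown_seconds
        self.last_scaled_at = None   # time of the last accepted event

    def may_scale(self, now):
        """Return True (and record the event) if scaling is allowed at time
        `now` (in seconds); False while a previous action is still settling."""
        if self.last_scaled_at is not None and now - self.last_scaled_at < self.cooldown:
            return False
        self.last_scaled_at = now
        return True
```

A scaling request arriving 100 seconds after the previous one is rejected; one arriving 400 seconds later is accepted.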

Figure 5: Automatic scaling in cloud environments (cost and workload plotted against time)

Consider a more specific scenario, when the resource requirement is high for
some time duration e.g. in holidays, weekends etc., a Scheduled scaling can
also be performed. Here the time and scale/ magnitude/ threshold of scaling
can be defined earlier to meet the specific requirements based on the previous
knowledge of traffic. The threshold level is also an important parameter in auto
scaling as a low value of threshold results in under utilization of the cloud
resources and a high level of threshold results in higher latency in the cloud.

If, after adding nodes in a scale-up, the incoming requests per second per node
drops below the scale-down threshold, an alternating sequence of scale-up and
scale-down actions is triggered, known as the ping-pong effect. To avoid both
under-scaling and over-scaling issues, load testing is recommended to meet the
service level agreements (SLAs).

Service Level Agreements (SLAs) in cloud computing outline the terms,


expectations, and commitments between a service provider and users regarding
the quality, availability, and performance of the offered services. These
agreements specify uptime percentages, response times, and support
availability, establishing benchmarks against which the provider's performance
is measured. SLAs ensure reliability and transparency, guaranteeing users a
certain level of service and outlining remedies or compensations if agreed-
upon standards are not met. They serve as vital tools in fostering trust between
cloud service providers and users by delineating responsibilities and ensuring
accountability, thereby maintaining a mutually beneficial relationship based on
defined service expectations.

In addition, the scale-up process is required to satisfy the following properties.

1. The number of incoming requests per second per node > threshold of
scale down, after scale-up.
2. The number of incoming requests per second per node < threshold of
scale up, after scale-down

Here, in both the scenarios one should reduce the chances of ping-pong effect.
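The two properties above amount to a simple stability check on the post-scaling state: the per-node request rate must sit strictly between the two thresholds, otherwise the opposite action fires next (ping-pong). A sketch, with thresholds supplied by the caller:

```python
def stable_after_scaling(rps, nodes, t_up, t_down):
    """Check the two properties stated above for a post-scaling state: the
    per-node request rate must lie strictly between the scale-down and
    scale-up thresholds, otherwise the next event triggers immediately
    (the ping-pong effect)."""
    rps_n = rps / nodes          # incoming requests per second per node
    return t_down < rps_n < t_up
```

For instance, 1800 RPS on 6 nodes (300 RPS per node) is stable between thresholds of 180 and 360, while the same load spread over 12 nodes (150 RPS per node) would immediately trigger a scale-down.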

Now we know what scaling is and how it affects the applications hosted on the
cloud. Let us now discuss how auto scaling can be performed in fixed amounts
as well as in percentage of the current capacity.

Fixed Amount Autoscaling

As discussed earlier, auto scaling can be achieved by scaling the number of
instances by a fixed number. The detailed algorithm for fixed amount
autoscaling is given below. The algorithm works for both scaling-up and
scaling-down, and takes inputs U and D for each respectively.

--------------------------------------------------------------------------------------------
Algorithm : 1
--------------------------------------------------------------------------------------------
Input : SLA specific application
Parameters:
N_min - minimum number of nodes
D - scale down value
U - scale up value
T_U - scale up threshold
T_D - scale down threshold

Let T (SLA) return the maximum incoming requests per second (RPS) per node
for the specific SLA.

T_U ← 0.90 x T (SLA)
T_D ← 0.50 x T_U

Let N_c and RPS_n represent the current number of nodes and incoming
requests per second per node respectively.
Resource Provisioning,
Load Balancing and Security
L1: /* scale up (if RPS_n > T_U) */
Repeat:
N_(c_old) ← N_c
N_c ← N_c + U
RPS_n ← RPS_n x N_(c_old) / N_c
Until RPS_n ≤ T_U

L2: /* scale down (if RPS_n < T_D) */

Repeat:
N_(c_old) ← N_c
N_c ← max(N_min, N_c - D)
RPS_n ← RPS_n x N_(c_old) / N_c
Until RPS_n ≥ T_D or N_c = N_min
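A minimal Python translation of this fixed-amount policy might look as follows. The default thresholds here are illustrative stand-ins for 0.90 × T(SLA) and 0.50 × T_U, not values from any specific SLA:

```python
def fixed_amount_scale(n_current, rps, u=2, d=2, t_up=360.0, t_down=180.0, n_min=1):
    """Apply the fixed-amount policy: add U nodes while the per-node request
    rate is above the scale-up threshold, or remove D nodes (never going
    below n_min) while it is below the scale-down threshold.

    Returns the resulting (node count, requests per second per node).
    """
    n = n_current
    rps_n = rps / n
    while rps_n > t_up:                   # scale up in steps of U
        n += u
        rps_n = rps / n
    while rps_n < t_down and n > n_min:   # scale down in steps of D
        n = max(n_min, n - d)
        rps_n = rps / n
    return n, rps_n
```

With 4 nodes and the load rising to 1800 RPS, one scale-up step of U = 2 yields 6 nodes at 300 requests per second per node.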

Now, let us discuss how this algorithm works in detail. Let the values of a few
parameters be given as U = 2, D = 2, T_U = 150 and T_D = 120. Suppose in
the beginning, RPS = 450 and N_c = 4. Now RPS is increased to 1800 and
RPS_n crosses T_U; in this situation an autoscaling request is
generated, leading to adding U = 2 nodes. The table below lists all the
parameters as per the scale-up requirements.

Nodes (current) | Nodes (added) | RPS (required) | Total nodes | New RPS_n
----------------|---------------|----------------|-------------|----------
4               | 0             | 450            | 4           | 112.50
4               | 2             | 1800           | 6           | 300.00
6               | 2             | 2510           | 8           | 313.75
8               | 2             | 3300           | 10          | 330.00
10              | 2             | 4120           | 12          | 343.33
12              | 2             | 5000           | 14          | 357.14

Similarly, in case of scaling down, let initially RPS = 8000 and N_c = 19. Now
RPS is reduced to 6200 and, following it, RPS_n reaches T_D; here an
autoscaling request is initiated, deleting D = 2 nodes. The table below lists all
the parameters as per the scale-down requirements.

Nodes (current) | Nodes (reduced) | RPS (required) | Total nodes | New RPS_n
----------------|-----------------|----------------|-------------|----------
19              | 0               | 8000           | 19          | 421.05
19              | 2               | 6200           | 17          | 364.70
17              | 2               | 4850           | 15          | 323.33
15              | 2               | 3500           | 13          | 269.23
13              | 2               | 2650           | 11          | 240.90
11              | 2               | 1900           | 9           | 211.11
The given table shows the stepwise increase/decrease in the cloud capacity
with respect to the change in load on the application (requests per second per
node).

Percentage Scaling

In the previous section we discussed how scaling up or down is carried out by
a fixed number of nodes. Alternatively, we can scale up or down by a
percentage of the current capacity. This is a more natural way of scaling, as
the change is proportional to the capacity we are already running.

The below given algorithm is used to determine the scale up and down
thresholds for respective autoscaling.

-----------------------------------------------------------------------------------------------
Algorithm : 2
-----------------------------------------------------------------------------------------------
Input : SLA specific application
Parameters:
N_min - minimum number of nodes
Resource Provisioning,
D - scale down value. Load Balancing and Security
U - scale up value.
T_U - scale up threshold
T_D - scale down threshold

Let T (SLA) return the maximum requests per second (RPS) per node for the
specific SLA.

T_U ← 0.90 x T (SLA)


T_D ← 0.50 x T_U

Let N_c and RPS_n represent the current number of nodes and incoming
requests per second per node respectively.

L1: /* scale up (if RPS_n > T_U) */

Repeat:
N_(c_old) ← N_c
N_c ← N_c + max(1, N_c x U/100)
RPS_n ← RPS_n x N_(c_old) / N_c
Until RPS_n ≤ T_U

L2: /* scale down (if RPS_n < T_D) */

Repeat:
N_(c_old) ← N_c
N_c ← max(N_min, N_c - max(1, N_c x D/100))
RPS_n ← RPS_n x N_(c_old) / N_c
Until RPS_n ≥ T_D or N_c = N_min
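The per-step node arithmetic of this percentage policy can be sketched as below. The algorithm text does not fix a rounding rule, so taking the integer part (floor) of the percentage is an assumption made here for illustration:

```python
import math

def percentage_scale_up_step(n_current, u_percent):
    """Nodes to add in one scale-up step: at least one node, otherwise
    u_percent of the current capacity (integer part)."""
    return max(1, math.floor(n_current * u_percent / 100))

def percentage_scale_down_step(n_current, d_percent, n_min=1):
    """New node count after one scale-down step: remove at least one node,
    otherwise d_percent of the current capacity, never going below n_min."""
    removed = max(1, math.floor(n_current * d_percent / 100))
    return max(n_min, n_current - removed)
```

For example, with 19 nodes and D = 8, the integer part of 19 × 8/100 is 1, so a single node is removed per step.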

Let us now understand the working of this algorithm by an example, with
U = 1, D = 8, N_min = 1, T_D = 230 and T_U = 290. At the beginning,
RPS = 500 and N_c = 6. Now the demand rises and RPS reaches 1540, while
RPS_n reaches T_U. Here an upscaling is requested, adding
max(1, 6 x 1/100) = 1 node.

Similarly, in case of scaling down, initially RPS = 5000 and N_c = 19. Here
RPS reduces to 4140 and RPS_n reaches T_D, requesting a scale down and
hence deleting max(1, floor(19 x 8/100)) = 1 node. The detailed upscaling
example is given in the table below.

Nodes (current) | Nodes (added) | RPS (required) | Total nodes | New RPS_n
----------------|---------------|----------------|-------------|----------
6               | 0             | 500            | 6           | 83.33
6               | 1             | 1695           | 7           | 242.14
7               | 1             | 2190           | 8           | 273.75
8               | 1             | 2600           | 9           | 288.88
9               | 1             | 3430           | 10          | 343.00
10              | 1             | 3940           | 11          | 358.18
11              | 1             | 4420           | 12          | 368.33
12              | 1             | 4960           | 13          | 381.53
13              | 1             | 5500           | 14          | 392.85
14              | 1             | 5950           | 15          | 396.60

The scaling down with the same algorithm is detailed in the table below.

Nodes (current) | Nodes (reduced) | RPS (required) | Total nodes | New RPS_n
----------------|-----------------|----------------|-------------|----------
19              | 0               | 5000           | 19          | 263.15
19              | 1               | 3920           | 18          | 217.77
18              | 1               | 3510           | 17          | 206.47
17              | 1               | 3200           | 16          | 200.00
16              | 1               | 2850           | 15          | 190.00
15              | 1               | 2600           | 14          | 185.71
14              | 1               | 2360           | 13          | 181.53
13              | 1               | 2060           | 12          | 171.66
12              | 1               | 1810           | 11          | 164.50
11              | 1               | 1500           | 10          | 150.00

Here, if we compare Algorithms 1 and 2, it is clear that the threshold values
T_U and T_D are on the higher side in the example for Algorithm 2. In this
scenario the utilization of the hardware is greater and the cloud experiences a
lower footprint.

Check Your Progress 2


1) Explain the concept of fixed amount auto scaling.
…………………………………………………………………………
…………………………………………………………………………
…………………………………………………………………………

2) In Algorithm 1 for fixed amount auto scaling, calculate the values in the
table if U = 3.
…………………………………………………………………………
…………………………………………………………………………
…………………………………………………………………………

3) What is a cool down period?

…………………………………………………………………………………………

…………………………………………………………………………………………

…………………………………………………………………………………………


5.6 TYPES OF SCALING

Let us now discuss the types of scaling, i.e. how we view the cloud
infrastructure for capacity enhancement/reduction. In general, we scale the
cloud in a vertical or horizontal way, by either provisioning more powerful
resources or installing more of the same resources.

5.6.1 Vertical Scaling or Scaling Up

The vertical scaling in the cloud refers to either scaling up i.e. enhancing the
computing resources or scaling down i.e. reducing/ cutting down computing
resources for an application. In vertical scaling, the actual number of VMs are
constant but the quantity of the resource allocated to each of them is increased/
decreased. Here no infrastructure is added and application code is also not
changed. The vertical scaling is limited to the capacity of the physical machine
or server running in the cloud. If one has to upgrade the hardware requirements
of an existing cloud environment, this can be achieved by minimum changes.

Figure: An IT resource (a virtual server with two CPUs) is scaled up vertically by replacing it with a more powerful IT resource (a server with four CPUs).

5.6.2 Horizontal Scaling or Scaling Out

In horizontal scaling, to meet the user requirements for high availability,


excess resources are added to the cloud environment. Here, the resources are
added/removed as VMs. This includes the addition of storage disks, new
servers for increasing CPU capacity, or installation of additional RAM, all
working together as a single system. To achieve horizontal scaling, a minimum
downtime is required. This type of scaling allows one to run distributed
applications in a more efficient manner.

Figure: An IT resource (Virtual Server A) is scaled out horizontally by adding more of the same IT resources (Virtual Servers B and C) from a pool of physical servers.

Another way of maximizing resource utilization is Diagonal Scaling. This
combines the ideas of both vertical and horizontal scaling. Here, a resource is
scaled up vertically until it hits the physical resource capacity, and afterwards
new resources are added as in horizontal scaling. The newly added resources
can themselves be further scaled up vertically.
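The diagonal strategy just described can be sketched as a small capacity planner: grow one node vertically until its physical limit, then add full-size nodes horizontally. Capacity here is measured in abstract units, which is an assumption made purely for illustration:

```python
def diagonal_plan(required_capacity, max_per_node):
    """Sketch of diagonal scaling: grow a single node vertically until the
    physical per-node limit is reached, then scale out horizontally with
    full-size nodes. Capacity is in abstract units (an assumption)."""
    if required_capacity <= max_per_node:
        # still within the vertical (scale-up) range of one machine
        return {"nodes": 1, "size_per_node": required_capacity}
    # vertical limit hit: add more nodes of the same maximum size
    nodes = -(-required_capacity // max_per_node)   # ceiling division
    return {"nodes": nodes, "size_per_node": max_per_node}
```

For a demand of 20 units with an 8-unit per-node limit, the plan is three full-size nodes; a demand of 3 units fits on one vertically sized node.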

5.7 SUMMARY

In the end, we are now aware of various types of scaling, scaling strategies and
their use in real situations. Various cloud service providers like Amazon AWS,
Microsoft Azure and IT giants like Google offer scaling services on their
application based on the application requirements. These services offer good
help to the entrepreneurs who run small to medium businesses and seek IT
infrastructure support. We have also discussed various advantages of cloud
scaling for business applications.

5.8 SOLUTIONS / ANSWERS

Check Your Progress 1

1. The cloud is used extensively for serving applications and in other
scenarios where the cost and installation time of infrastructure/capacity
scaling would otherwise be high. Scaling helps in achieving an optimized
infrastructure for the current and expected load of the applications with
minimum cost and setup time. Scaling also helps in reducing the
disaster recovery time, if a disaster happens. (For details see section 5.3.)

2. Proactive scaling is a process of forecasting and then managing the load
on the cloud infrastructure in advance. Precise forecasting of the
requirement is the key to success here. The preparedness for the estimated
traffic/requirements is achieved using virtualization. In virtualization,
various resources may be assigned to the required machine in no time,
and the machine can be scaled up to its hardware limits. Virtualization
helps in achieving a low cooldown period and serving instantly. (For
details you may refer to the Resource Virtualization unit.)

3. The reactive scaling technique only works on the actual variation of
load on the application; however, combinational scaling works for both
expected and real traffic. A good estimate of load increases the
performance of combinational scaling.

4. Following are the differences between Scaling and Cloud Elasticity:

Scaling                                  | Cloud Elasticity
-----------------------------------------|------------------------------------------
Increasing the capacity to meet the      | Increasing or reducing the capacity to
increasing workload.                     | meet the increasing or decreasing
                                         | workload.
In a scaling environment, the available  | In an elasticity environment, the
resources may exceed what is needed, to  | available resources match the current
meet the future demands.                 | demands.
Scalability adapts only to the workload  | It adapts to both workload increase
increase, by provisioning the resources  | and workload decrease in an automatic
in an incremental manner.                | manner.
Scalability enables a corporate to meet  | Elasticity enables a corporate to meet
expected demands for services with       | unexpected changes in the demand for
long-term strategic needs.               | service with short-term tactical needs.

Check Your Progress 2

1. Fixed amount scaling is a simplistic approach to scaling in a cloud
environment. Here the resources are scaled up/down by a user-defined
number of nodes. In fixed amount scaling, resource utilization is not
optimized. It can happen that a single small node could solve the
resource crunch problem, but the user-defined numbers are very high,
leading to under-utilized resources. Therefore, a percentage amount of
scaling is a better technique for optimal resource usage.
2. For the given U = 3, the following calculations are made.

Nodes Nodes RPS RPS_n Total nodes New


(Curren (added) (required) RPS_n
t)

4 0 450 112.5 4

1800

3 7 257.14

2510

3 10 251

3300

3 13 253.84

4120

3 16 257.50

5000

3 19 263.15
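The arithmetic above can be checked with a short script (a sketch; U is the fixed number of nodes added per scaling event, and RPS_n is the requests-per-second load carried by each node):

```python
# Fixed-amount auto scaling: every scaling event adds U nodes, and the
# per-node load RPS_n is the required RPS divided by the total node count.
U = 3
nodes = 4                                        # nodes initially running
demands = [450, 1800, 2510, 3300, 4120, 5000]    # required RPS at each step

rows = []
for i, rps in enumerate(demands):
    if i > 0:                 # first row: no scaling event has fired yet
        nodes += U
    rows.append((rps, nodes, rps / nodes))

for rps, total, per_node in rows:
    print(f"RPS={rps:5d}  total nodes={total:2d}  RPS_n={per_node:7.2f}")
```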

4. When auto scaling takes place in the cloud, a small time interval
(pause) prevents the triggering of the next auto scale event. This helps
in maintaining integrity in the cloud environment for applications. Once
the cool down period is over, the next auto scaling event can be accepted.

5.9 FURTHER READINGS

1. Cloud Computing: Principles and Paradigms, Rajkumar Buyya, James


Broberg and Andrzej M. Goscinski, Wiley, 2011.
2. Mastering Cloud Computing, Rajkumar Buyya, Christian Vecchiola,
and Thamarai Selvi, Tata McGraw Hill, 2013.
3. Essentials of cloud Computing: K. Chandrasekhran, CRC press, 2014.
4. Cloud Computing, Sandeep Bhowmik, Cambridge University Press,
2017.

UNIT 6 LOAD BALANCING
Structure

6.0 Introduction
6.1 Objectives
6.2 Load Balancing and its Importance
6.2.1 Importance of Load Balancing
6.2.2 Goals of Load Balancing in Cloud Computing
6.2.3 How a Load Balancer Works?
6.3 Types of Load Balancers
6.3.1 Types of Load Balancers based on the Functionality
6.3.2 Types of Load Balancers based on the Configuration
6.4 Load Balancing Algorithms – Static and Dynamic
6.4.1 Static Load Balancing Algorithms
6.4.2 Dynamic Load Balancing Algorithms
6.5 Load Balancing as a Service (LBaaS)
6.5.1 Open Stack LBaaS
6.6 Summary
6.7 Solutions/Answers
6.8 Further Readings

6.0 INTRODUCTION

In the earlier unit, we have studied Cloud Elasticity and Scaling which are very
important characteristics of a cloud. In this unit, we will focus on another
important aspect of cloud computing namely load balancing.

Load balancing is the strategic distribution of incoming network traffic across


multiple servers or resources in cloud computing. It acts as the traffic cop,
managing the flow of requests among various servers to optimize performance,
prevent overloading, and ensure high availability. This critical function is the
backbone of a responsive and robust cloud infrastructure. By evenly
distributing workloads, load balancing minimizes the risk of any single server
becoming overwhelmed, reducing latency and preventing potential downtime.
In the cloud, where scalability and reliability are paramount, load balancing
allows for dynamic resource allocation, ensuring that computing resources are
used efficiently while maintaining consistent performance levels. It plays a
pivotal role in maintaining seamless operations, maximizing resource
utilization, and providing users with uninterrupted access to applications and
services hosted in the cloud.

In this unit, you will study importance of load balancing, goals of load
balancing, levels of load balancing, load balancing algorithms and load
balancing as a service.

6.1 OBJECTIVES

After going through this unit, you shall be able to:

• understand load balancing concept and its importance;


• describe how a load balancer works;
• list and explain types of load balancers based on functionality and
configuration;
• discuss various types of static load balancing algorithms;
• discuss various types of dynamic load balancing algorithms; and
• explain Load Balancing-as-a-Service

6.2 LOAD BALANCING AND ITS IMPORTANCE


Load balancing in cloud computing distributes traffic and workloads to ensure
that no single server or machine is under-loaded, overloaded, or idle. Load
balancing optimizes various constrained parameters such as execution
time, response time, and system stability to improve overall cloud
performance. Load balancing architecture in cloud computing consists of a
load balancer that sits between servers and client devices to manage traffic.

As shown in Fig 1, load balancing in cloud computing distributes


traffic, workloads and computing resources evenly throughout a cloud
environment to deliver greater efficiency and reliability for cloud applications.
Cloud load balancing enables enterprises to manage client requests and host
resource distribution among multiple computers, application servers,
or computer networks.

Figure 1: Load Balancing in Cloud Computing

6.2.1 Importance of Load Balancing

Load balancing holds immense importance in cloud computing for several


reasons:

• Optimized Performance: It ensures that resources are efficiently


utilized, preventing any single server from becoming overloaded. By
distributing workloads evenly, load balancing minimizes response
times and enhances overall system performance, providing a smooth
and consistent user experience.

• High Availability and Reliability: Load balancing improves system


reliability by spreading traffic across multiple servers or regions. If one
server fails or experiences issues, the traffic can be redirected to
healthy servers, ensuring continuous availability of applications and
services.

• Scalability and Flexibility: In a cloud environment, load balancing


facilitates dynamic resource allocation. It allows for easy scaling,
enabling the addition or removal of resources based on demand. This
scalability ensures that the infrastructure can handle varying workloads
efficiently.

• Cost Efficiency: Efficient load balancing contributes to cost savings by


optimizing resource usage. It prevents over-provisioning of resources,
reducing unnecessary expenses associated with idle or underutilized
servers.

• Fault Tolerance and Resilience: Load balancing enhances fault


tolerance by distributing traffic across redundant servers or data
centers. This redundancy minimizes the impact of failures or
disruptions, improving the system's resilience against potential issues.

• Support for Modern Architectures: Load balancing is crucial for


modern architectures like microservices and containers. It intelligently
routes traffic among various microservices or containers, ensuring that
each component receives an appropriate share of the workload.

Overall, load balancing is fundamental in cloud computing as it not only


optimizes resource utilization and performance but also ensures high
availability, scalability, and resilience, making it a cornerstone for robust and
reliable cloud-based services.

6.2.2 Goals of Load Balancing in Cloud Computing

The overall goals of load balancing in cloud computing are to


minimize response time for application users and maximize organizational
resources. Other than that, optimal resource utilization, high availability,

scalability, improved performance and enhanced security are the other goals of
load balancing in cloud computing.

6.2.3 How a Load Balancer Works?

A load balancer is a crucial component that helps distribute incoming network


traffic across multiple servers or resources to ensure efficient utilization,
maximize performance, and maintain high availability of applications or
services. Here's a breakdown of how it typically works:

• Traffic Distribution: When a user sends a request to access a website,


application, or service hosted on the cloud, it first reaches the load
balancer. The load balancer acts as a traffic cop, receiving incoming
requests.

• Load Balancing Algorithms: The load balancer employs various


algorithms to decide how to distribute the incoming requests. Common
algorithms include Round Robin (where each server is sequentially
assigned a request), Least Connections (sending requests to the server
with the fewest active connections), or Weighted Round Robin
(assigning servers based on predefined weights).

• Health Monitoring: Load balancers continuously monitor the health


and performance of the servers in the pool. They perform health
checks, probing servers to ensure they are operational and capable of
handling requests. If a server is found to be unhealthy, the load
balancer can route traffic away from it until it recovers.

• Session Persistence: In cases where maintaining session data is


essential (like in e-commerce or banking apps), the load balancer can
employ techniques like cookie-based or IP-based session persistence.
This ensures that a user's requests are consistently directed to the same
server in a session to maintain continuity.

• Scalability and Elasticity: Load balancers play a vital role in scaling


resources. In cloud environments, they facilitate horizontal scaling by
adding or removing servers dynamically based on demand. When
traffic increases, new servers can be added to the pool, and the load
balancer will distribute traffic accordingly.

• High Availability: Load balancers themselves are often designed for


high availability. They might have redundancy built-in, employing
techniques like clustering or active-passive configurations, ensuring
that if one load balancer fails, another takes over seamlessly to prevent
disruptions.

• Content Delivery and Security: Advanced load balancers can also


handle tasks like SSL termination (decrypting incoming SSL requests
before distributing them to backend servers) and content caching,
which can improve performance by serving frequently accessed content
directly from memory.

Overall, load balancers are fundamental components in cloud computing


architecture, ensuring efficient resource utilization, optimal performance, and
high availability for applications and services.
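The traffic-distribution and health-monitoring steps described above can be sketched together in a few lines of Python (illustrative only; the server names and health states are invented):

```python
from itertools import cycle

# Hypothetical backend pool and the result of the latest health checks.
servers = ["app-1", "app-2", "app-3"]
healthy = {"app-1": True, "app-2": False, "app-3": True}

rotation = cycle(servers)

def route():
    """Round-robin over the pool, skipping servers marked unhealthy."""
    for _ in range(len(servers)):        # try each server at most once
        server = next(rotation)
        if healthy[server]:
            return server
    raise RuntimeError("no healthy backend available")

# Six requests: traffic alternates between the two healthy servers,
# and the unhealthy one is routed around until it recovers.
assignments = [route() for _ in range(6)]
print(assignments)
```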

 Check Your Progress 1


1) What is load balancing in Cloud Computing?

…………………………………………………………………………………………
…………………………………………………………………………………………
…………………………………………………………………………………………
2) What are the goals of load balancing?

…………………………………………………………………………………………
…………………………………………………………………………………………
3) Why is it imperative in Cloud Computing to balance the cloud load?
…………………………………………………………………………………………
…………………………………………………………………………………………

6.3 TYPES OF LOAD BALANCERS

There are broadly two types of load balancers:

(i) based on the functionality, and
(ii) based on the configuration.

6.3.1 Types of Load Balancers based on the Functionality


Several load balancing techniques exist to address specific network
issues:
a. Network Load Balancer / Layer 4 (L4) Load Balancer:
Network load balancing is the distribution of traffic at the transport
level through routing decisions, based on network variables like the IP
address and destination ports. Such load balancing operates at the TCP
level, i.e. Layer 4, and does not consider any parameter at the
application level such as the type of content, cookie data, headers,
locations, application behavior etc. Performing network address
translation without inspecting the content of discrete packets, network
load balancing cares only about the network layer information and
directs the traffic on this basis alone.

b. Application Load Balancer / Layer 7 (L7) Load Balancer:


Ranking highest in the OSI model, Layer 7 load balancer distributes the
requests based on multiple parameters at the application level. A much wider
range of data is evaluated by the L7 load balancer including the HTTP headers
and SSL sessions and distributes the server load based on the decision arising
from a combination of several variables. This way application load balancers
control the server traffic based on the individual usage and behavior.

c. Global Server Load Balancer/Multi-site Load Balancer:


With the increasing number of applications being hosted in cloud data
centers located at varied geographies, the GSLB extends the capabilities
of the general L4 and L7 load balancers across various data centers,
facilitating efficient global load distribution without degrading the
experience for end users. In addition to efficient traffic balancing,
multi-site load balancers also help in quick recovery and seamless
business operations in case of a disaster at any data center, as other
data centers in any part of the world can be used for business
continuity.

6.3.2 Types of Load Balancers based on the Configuration


Software load balancers, hardware load balancers and Virtual load balancers
are the three types of load balancers used in cloud computing to manage and
distribute incoming network traffic among multiple servers or resources to
optimize performance and reliability.

a. Software Load Balancers

Software load balancers are load balancing solutions implemented as software


applications or services within the cloud infrastructure.

Characteristics of Software Load Balancers

• They operate as software instances that can be deployed on virtual


machines or containers.
• They offer flexibility and scalability, allowing for easy configuration
changes and adjustments to accommodate changing traffic patterns.
• These load balancers can be dynamically scaled up or down based on
demand without relying on specific hardware devices.

Examples: Nginx, HAProxy, and load balancing solutions provided by cloud


service providers are common examples of software load balancers.

b. Hardware Load Balancers:

Hardware load balancers are dedicated physical devices designed specifically


to perform load balancing tasks.

Characteristics of Hardware Load Balancers

• They are standalone appliances that sit between the incoming traffic
and the servers, managing the distribution of requests.
• Hardware load balancers are known for their high performance,
specialized hardware optimizations, and ability to handle high volumes
of traffic efficiently.
• These devices often offer robust reliability features and specialized
hardware for load balancing tasks.
Examples: F5 Networks' BIG-IP, Citrix ADC (formerly known as Netscaler),
and Barracuda Load Balancer are examples of hardware load balancers.

c. Virtual Load Balancers:

This load balancer is different from both the software and hardware
load balancers, as it is the program of a hardware load balancer running
on a virtual machine.

Through virtualization, this kind of load balancer imitates a
software-driven infrastructure. The program of the hardware equipment is
executed on a virtual machine to get the traffic redirected accordingly.
However, such load balancers face challenges similar to those of the
physical on-premise balancers, viz. lack of central management, lesser
scalability and much more limited automation.

6.4 LOAD BALANCING ALGORITHMS – STATIC


AND DYNAMIC

A load balancing algorithm is the logic, a set of predefined rules, which a load
balancer uses to route traffic among servers.

There are two primary approaches to load balancing. Static load


balancing distributes traffic without taking this state into consideration; some
static algorithms route an equal amount of traffic, either in a specified order or
at random, to each server in a group. Dynamic load balancing uses algorithms
that distribute traffic based on the current state of each server.

6.4.1 Static Load Balancing Algorithms

Algorithms in this class are also known as offline algorithms, in which
the VM information is required to be known in advance. Thus, static
algorithms generally obtain better overall performance than dynamic
algorithms. However, demands change over time in real clouds, so static
resource allocation algorithms can easily violate the requirements of
dynamic VM allocation. Some of the static load balancing algorithms are
as follows:

a. Round Robin: Round-robin network load balancing rotates user
requests across servers in a cyclical manner. As a simplified example,
let’s assume that an enterprise has a group of three servers: Server A,
Server B, and Server C. Round Robin regulates requests in order: the
first request is sent to Server A, the second request goes to Server B,
and the third request is sent to Server C. The load balancer continues
to route incoming traffic in this order. This ensures that the traffic
load is distributed evenly across servers.

b. Weighted Round Robin: Weighted Round Robin is developed upon


the Round Robin load balancing method. In weighted Round Robin,
each server in the farm is assigned a fixed numerical weighting by the

network administrator. Servers deemed as able to handle more traffic
will receive a higher weight. Weighting can be configured within DNS
records.
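A minimal sketch of both rotations in Python (the server names and weights here are made-up values, not a reference implementation):

```python
from itertools import cycle

servers = ["A", "B", "C"]

# Round Robin: requests simply rotate through the pool in order.
rr = cycle(servers)
rr_order = [next(rr) for _ in range(6)]

# Weighted Round Robin: a server appears in the rotation as many times
# as its administrator-assigned weight, so it receives a larger share.
weights = {"A": 3, "B": 2, "C": 1}
expanded = [s for s in servers for _ in range(weights[s])]
wrr = cycle(expanded)
wrr_order = [next(wrr) for _ in range(6)]

print(rr_order)   # each server once per cycle
print(wrr_order)  # A appears 3x, B 2x, C 1x per cycle
```

Production balancers usually interleave the weighted slots (e.g. the "smooth" weighted round robin used by Nginx) rather than emitting them in bursts as this naive expansion does.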

c. IP Hash: IP hash load balancing combines the source and destination


IP addresses of incoming traffic and uses a mathematical function to
convert them into hashes. Connections are assigned to specific servers
based on their corresponding hashes. This algorithm is particularly
useful when a dropped connection needs to be returned to the same
server that originally handled it.
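The mapping can be sketched as follows (a toy example; the addresses, server list, and use of MD5 are illustrative assumptions):

```python
import hashlib

servers = ["A", "B", "C"]

def pick_server(src_ip, dst_ip):
    """Hash the source/destination pair so a given flow always maps
    to the same backend (as long as the pool does not change)."""
    key = f"{src_ip}-{dst_ip}".encode()
    digest = int(hashlib.md5(key).hexdigest(), 16)
    return servers[digest % len(servers)]

# A reconnecting client with the same address pair lands on the same
# backend as before, which is the property this algorithm is chosen for.
first = pick_server("203.0.113.7", "198.51.100.1")
again = pick_server("203.0.113.7", "198.51.100.1")
print(first, again)
```

Note that a plain modulo mapping reshuffles most flows whenever a server is added or removed; consistent hashing is the usual refinement when that matters.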

d. Opportunistic Algorithm: This is a static load balancing algorithm
that does not consider the current workload of each system. It keeps
each node busy by randomly distributing all uncompleted tasks to the
available nodes. This makes the algorithm provide poor load balancing
results. It fails to account for each node's execution time, which
lowers the performance of the processing task. Also, when there are
nodes in the idle state, there will be bottlenecks in the cloud system.

e. Min-Min Algorithm: This algorithm is easy to use and works at a
faster pace. In addition, it improves performance and operates on a
series of tasks. The time taken to execute each task is computed, and
tasks are allocated to Virtual Machines (VMs) on the basis of the
smallest completion time among the existing tasks. The process continues
until every task has been allocated to a VM. When there is a greater
number of smaller tasks, this algorithm performs better than it would
with bigger tasks. However, it can lead to starvation, because priority
is given to the smaller tasks while the bigger tasks are deferred.

f. Max-Min Algorithm: This algorithm is quite similar to Min-Min load
balancing and is based on the same completion time calculation. In this
algorithm, all existing tasks are sent to the system, after which the
least time to complete each of the given tasks is determined. The task
with the maximum completion time is then selected and allocated to the
relevant machine. A comparison of the performance of this algorithm with
the Min-Min algorithm shows that Max-Min is better when there is just
one large task in the set, since the Max-Min algorithm will carry out
the shorter tasks alongside the larger task.

g. Throttled Load Balancer (TLB): In this algorithm a table is
generated that includes the virtual machines as well as their existing
state (available/busy). If a specific task is to be allocated to a
virtual machine, a request is made to the control unit within the data
center, which looks for the VM ideally suited, with respect to its
abilities, to achieve the required task. The load balancer sends -1 back
to the data center if an appropriate VM is not available. Because the
search for the ideal virtual machine always starts from the beginning of
the table each time, certain VMs are never employed. Fig 2 presents a
demonstration of the Throttled Load Balancer.


Figure 2: TLB Algorithm

h. Active Monitoring Load Balancer (AMLB): It is a type of dynamic
load balancing technology. This technique obtains information about each
VM and the number of requests presently allocated to each of them. The
Data Center Controller (DCC) scans the VM index table after receiving a
new request to determine the VM that is least loaded or idle. A
first-come-first-serve concept is employed by this algorithm to allocate
load to the VM that has the smallest index number when two or more
qualify. The VM ID is sent back by the AMLB algorithm to the DCC, which
then sends the request to the VM represented by that ID. The AMLB is
informed about the new allocation by the DCC and is sent the cloudlet.
Once the task is completed, the information is sent to the DCC and the
VM index table count is reduced. When a new request is received, the
load balancer goes over the table again and the allocation process
repeats. This is shown in Fig 3.

Figure 3: AMLB Algorithm

6.4.2 Dynamic Load Balancing Algorithms

Algorithms in this class are also known as online algorithms, in which
VMs are dynamically allocated according to the loads at each time
interval. The load information of a VM is not obtained until it comes
into the scheduling stage. These algorithms can dynamically configure
the VM placement in combination with VM migration techniques. In
comparison to static algorithms, dynamic
algorithms have higher competitive ratio. Some of the dynamic load balancing
algorithms are as follows:

a. Least Connection: The least connection algorithm identifies which


servers currently have the fewest number of requests being served and
directs traffic to those servers. This is based on an assumption that all
connections require roughly equal processing power.

b. Weighted Least Connection: This one gives administrators the option


to assign different weights to each server under the assumption that
some servers can handle more requests than others.
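Both selection rules fit in a few lines (a sketch; the connection counts and weights are made-up values):

```python
# Live connection counts per server, as tracked by the balancer.
connections = {"A": 12, "B": 5, "C": 9}

def least_connection():
    """Pick the server with the fewest active connections."""
    return min(connections, key=connections.get)

# Weighted variant: a higher-capacity server (larger weight) is allowed
# to hold proportionally more connections before losing preference.
weights = {"A": 4, "B": 1, "C": 2}

def weighted_least_connection():
    return min(connections, key=lambda s: connections[s] / weights[s])

target = least_connection()     # fewest raw connections right now
connections[target] += 1        # the new request is routed there
print(target, weighted_least_connection())
```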

c. Weighted Response Time: This algorithm averages out the response


time of each server and combines that with the number of requests each
server is serving to determine where to send traffic. This algorithm can
ensure faster service for users by sending traffic to the servers with the
quickest response time.

d. Resource-Based (Adaptive) load balancing algorithm: Resource


based (or adaptive) load balancing makes decisions based on status
indicators retrieved by LoadMaster from the back-end servers. The
status indicator is determined by a custom program (an “agent”)
running on each server. LoadMaster queries each server regularly for
this status information and then sets the dynamic weight of the real
server appropriately.

In this way, the load balancing method is essentially performing a


detailed “health check” on the real server. This method is appropriate in
any situation where detailed health check information from each server
is required to make load balancing decisions. For example: this method
would be useful for any application where the workload is varied and
detailed application performance and status is required to assess server
health. This method can also be used to provide application-aware
health checking for Layer 4 (UDP) services via the load balancing
method.
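As a sketch of the idea, the agent reports could be turned into dynamic weights like this (plain Python, not LoadMaster code; the load figures are invented):

```python
# Each backend's agent reports how busy it is (percent utilization).
agent_reports = {"A": 85, "B": 40, "C": 60}

def dynamic_weight(load_pct):
    """Spare capacity becomes the server's weight (floor of 1)."""
    return max(100 - load_pct, 1)

weights = {s: dynamic_weight(p) for s, p in agent_reports.items()}
target = max(weights, key=weights.get)   # most spare capacity wins
print(weights, target)
```

Re-querying the agents periodically and recomputing the weights is what makes the method adaptive: the preferred server changes as the reported loads change.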

6.5 LOAD BALANCING AS A SERVICE (LBaaS)


Load Balancing as a Service (LBaaS) in cloud computing refers to a
managed service offered by cloud providers to dynamically distribute
incoming network traffic across multiple servers, applications, or resources to
optimize performance, enhance availability, and improve reliability. LBaaS
simplifies the management of load balancing operations by abstracting the
complexities of configuring and maintaining load balancers, offering a scalable
and efficient solution for handling traffic distribution in the cloud. Key aspects
of Load Balancing as a Service include managing of load balancing, traffic
optimization, integration with cloud services, monitoring and analytics,
scalability and flexibility, high availability and reliability.

LBaaS is available as part of the services provided by major cloud platforms
like AWS, Azure, Google Cloud, and others. Users can configure and manage
load balancers through web-based interfaces or APIs provided by the cloud
service providers. This service abstraction allows businesses to focus on their
applications' functionality and scalability while relying on the cloud provider's
infrastructure for efficient load balancing.

In the next section, let us study how OpenStack LBaaS works.

6.5.1 Open Stack LBaaS

OpenStack LBaaS allows users to create a load balancer to balance the
traffic load between instances; it resides in front of a group of
instances and manages traffic balancing. LBaaS v2 allows you to
configure multiple listener ports on a single load balancer IP address.

The LBaaS service consists of a load balancer, pool, pool members,
listener and a health monitor. High Availability Proxy (HAProxy) is used
to implement the load balancing. Fig 4 given below will help you to
understand the various components of OpenStack LBaaS.

Figure 4: Open Stack LBaaS

Load Balancer: The load balancer collects the data from listeners and
routes the traffic to the appropriate instance. It gets assigned one IP
from the same subnet on which the instances are running. The traffic
from the outside network is redirected to the LB IP, and the LB routes
the traffic to the instances as per the load balancer policy
configuration.

Listener: Load balancers can listen for requests on multiple ports. Each one
of those ports is specified by a listener.

Pool: A pool holds a list of members that serve content through the
load balancer.

Health monitor: The health monitor keeps track of the status of the
pool members. If a member is not in a healthy state, the health monitor
redirects the traffic to another healthy instance. Health monitors are
associated with pools.

Member: Members are the servers that serve traffic behind a load balancer.
Each member is specified by the IP address and port that it uses to serve
traffic.
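The interplay of listener, pool, members and health monitor can be modelled in a toy sketch (plain Python, not OpenStack code; the addresses are invented):

```python
# A pool of members; the health monitor has marked one of them down.
members = [
    {"ip": "10.0.0.11", "port": 80, "healthy": True},
    {"ip": "10.0.0.12", "port": 80, "healthy": False},  # failed check
    {"ip": "10.0.0.13", "port": 80, "healthy": True},
]

def healthy_members(pool):
    """What the health monitor leaves in rotation."""
    return [m for m in pool if m["healthy"]]

def route(pool, request_no):
    """The listener hands each request to a healthy member, round-robin."""
    active = healthy_members(pool)
    if not active:
        raise RuntimeError("pool has no healthy members")
    m = active[request_no % len(active)]
    return f"{m['ip']}:{m['port']}"

print([route(members, n) for n in range(4)])
```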

 Check Your Progress 2


1) Briefly explain the static and dynamic approaches of load balancing.

…………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………
2) Briefly explain the Round Robin and Weighted Round Robin Algorithms.

…………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………

6.6 SUMMARY
In this unit we had studied load balancing and its associated algorithms. Load
balancing in cloud computing is a critical mechanism that optimizes the
distribution of incoming network traffic across multiple servers or resources.
Acting as a traffic manager, the load balancer ensures that no single server is
overwhelmed, thereby enhancing performance, maximizing resource
utilization, and maintaining high availability.

Employing algorithms to determine how to distribute requests, load balancers


also monitor the health of servers and dynamically adjust traffic routing based
on their operational status. This dynamic scalability and efficient resource
allocation contribute to the overall resilience and responsiveness of cloud-
based applications, supporting the seamless handling of varying workloads and
improving the user experience.


6.7 SOLUTIONS / ANSWERS

Check Your Progress 1

1. In a public cloud computing environment, a load balancer distributes
application and network traffic efficiently and methodically across
various servers. This prevents excessive traffic and requests from
collecting in one place, and enhances application responsiveness by
spreading the workload evenly between the existing servers.

Load balancers sit between backend servers and client devices, receive
server requests, and distribute them to available, capable servers. Cloud
load balancing is the process of distributing traffic such as UDP,
TCP/SSL, HTTP(s), HTTPS/2 with gRPC, and QUIC to multiple
backends to increase security, avoid congestion, and reduce costs and
latency.

2. The goals of load balancing in cloud computing are:

(i) To minimize response time for application users; and

(ii) Maximize organizational resources.

(Also refer section 6.2.2 for details)

3. In the cloud, load balancing is critical for the following reasons.
Load balancing technology is less costly and easier to use than other
options, so firms can deliver better outcomes at a lower cost by using
it. The scalability of cloud load balancing helps manage website
traffic: high-end network and server traffic can be effectively managed
using effective load balancers. E-commerce businesses rely on cloud load
balancing to manage and distribute workloads in the face of numerous
visitors every second. Load balancers can deal with any abrupt spikes in
traffic; for example, if there are too many requests for university
results, the website might otherwise be shut down. With a load balancer
it is unnecessary to be concerned about the flow of traffic. Whatever
the scale of the traffic, load balancers will evenly distribute the
website's load over several servers, resulting in the best outcomes in
the shortest amount of time.
The primary benefit of utilizing a load balancer is to ensure that the
website does not go down unexpectedly. This means that if a single
node fails, the load is automatically shifted to another node on the

network. It allows for more adaptability, scalability, and traffic
handling.

Check Your Progress 2


1. Static Algorithm Approach
This type of method is used when the load on the system is relatively
predictable and hence static. With the static method, all of the traffic
is split equally amongst the servers. Implementing this algorithm
effectively calls for extensive knowledge of server resources, which is
only known at implementation time.
However, the decision to shift loads does not take into account the
current state of the system. One of the main limitations of a static
load balancing method is that the load balancing assignments only begin
working once they have been established, and cannot adapt to balance
load across other devices.
Dynamic Algorithm Approach
The dynamic process begins by locating the lightest-loaded server on the
network and assigning priority load balancing to it. This may require
real-time communication over the network, which can add to the system's
traffic. It is all about the present status of the system in this case.
Decisions are made in the context of the present system state, which is
a key feature of dynamic algorithms. Processes can be transferred from
high-volume machines to low-volume machines in real time.
2. Round Robin Algorithm
In this algorithm, as its name implies, jobs are assigned in a
round-robin fashion. The initial node is chosen at random, and
subsequent nodes are assigned work in round-robin order. This is one of
the simplest strategies for distributing the load on a network.
Processes are assigned in order, with no regard for priority. The
algorithm responds quickly when the workload is evenly distributed
across the processes. However, the loading time of each process varies,
so some nodes may be underutilized while others are overburdened.
Weighted Round Robin Load Balancing Algorithm

The Weighted Round Robin algorithm was created to address the most
problematic aspects of Round Robin. Weights are assigned, and work is
distributed according to the weight values, in this algorithm.
Higher-capacity processors are given higher weights. Consequently, the
servers with the highest weights will be given the most work, and each
server receives a steady stream of traffic in proportion to its
capacity.

6.8 FURTHER READINGS

1. Cloud Computing: Principles and Paradigms, Rajkumar Buyya, James


Broberg and Andrzej M. Goscinski, Wiley, 2011.
2. Mastering Cloud Computing, Rajkumar Buyya, Christian Vecchiola,
and Thamarai Selvi, Tata McGraw Hill, 2013.
3. Essentials of cloud Computing: K. Chandrasekhran, CRC press, 2014.
4. Cloud Computing, Sandeep Bhowmik, Cambridge University Press,
2017.

UNIT 7 SECURITY ISSUES IN CLOUD
COMPUTING
Structure

7.0 Introduction
7.1 Objectives
7.2 Cloud Security
7.2.1 How Cloud Security is Different from Traditional IT Security?
7.2.2 Cloud Computing Security Requirements
7.3 Security Issues in Cloud Service Delivery Models
7.4 Security Issues in Cloud Deployment Models

7.4.1 Security Issues in Public Cloud


7.4.2 Security Issues in Private Cloud
7.4.3 Security Issues in Hybrid Cloud
7.5 Ensuring Security in Cloud Against Various Types of Attacks
7.6 Identity and Access Management (IAM)
7.6.1 Benefits of IAM
7.6.2 Types of Digital Authentication
7.6.3 IAM and Cloud Security
7.6.4 Challenges in IAM
7.6.5 Right Use of IAM Security
7.7 Security as a Service (SECaaS)
7.7.1 Benefits of SECaaS
7.8 Multi-Cloud Computing
7.8.1 Benefits of Multi-Cloud
7.9 Summary
7.10 Solutions/Answers
7.11 Further Readings

7.0 INTRODUCTION

The rise of cloud computing as an ever-evolving technology brings with it a
number of opportunities and challenges. Cloud is now becoming the back end
for all forms of computing, including the ubiquitous Internet of Things.

In the earlier unit, we studied Load Balancing in cloud computing; in this
unit we will focus on another important aspect, namely cloud security.

Cloud security is a discipline of cyber security dedicated to securing cloud
computing systems. This includes keeping data private and safe across
online-based infrastructure, applications, and platforms. Securing these
systems involves the efforts of both cloud providers and the clients that use
them, whether the client is an individual, a small-to-medium business, or an
enterprise.

Cloud providers host services on their servers through always-on internet
connections. Since their business relies on customer trust, cloud security
methods are used to keep client data private and safely stored. However,
cloud security also partially rests in the client's hands. Understanding both
facets is pivotal to a healthy cloud security solution.

At its core, cloud security is composed of the following components:

• Data security
• Identity and access management (IAM)
• Governance (policies on threat prevention, detection, and mitigation)
• Data retention (DR) and business continuity (BC) planning
• Legal compliance

In this unit, you will study what cloud security is, how it differs from
traditional (legacy) IT security, cloud computing security requirements,
challenges in providing cloud security, threats, ensuring security, Identity
and Access Management, and Security-as-a-Service.

7.1 OBJECTIVES

After going through this unit, you shall be able to:

• understand cloud security and how it differs from traditional IT security;
• list and describe various cloud computing security requirements;
• describe the challenges in providing cloud security;
• discuss various types of threats with respect to types of cloud services
and cloud deployment models;
• discuss different techniques to ensure cloud security against various
types of threats;
• elucidate the importance of identity and access management; and
• explain Security-as-a-Service.

7.2 CLOUD SECURITY

Cloud security is the whole bundle of technology, protocols, and best practices
that protect cloud computing environments, applications running in the cloud,
and data held in the cloud. Securing cloud services begins with understanding
what exactly is being secured, as well as the system aspects that must be
managed.

As an overview, backend development against security vulnerabilities is
largely within the hands of cloud service providers. Aside from choosing a
security-conscious provider, clients must focus mostly on proper service
configuration and safe use habits. Additionally, clients should be sure that
any end-user hardware and networks are properly secured.

The full scope of cloud security is designed to protect the following, regardless
of your responsibilities:

• Physical networks — routers, electrical power, cabling, climate
controls, etc.
• Data storage — hard drives, etc.
• Data servers — core network computing hardware and software
• Computer virtualization frameworks — virtual machine software,
host machines, and guest machines
• Operating systems (OS) — the software on which all other software runs
• Middleware — application programming interface (API) management
• Runtime environments — execution and upkeep of a running program
• Data — all the information stored, modified, and accessed
• Applications — traditional software services (email, tax software,
productivity suites, etc.)
• End-user hardware — computers, mobile devices, Internet of Things
(IoT) devices, etc.

Cloud security may appear similar to traditional (legacy) IT security, but this
framework actually demands a different approach. Before diving deeper, let's
first look at how it differs from legacy IT security in the next section.

7.2.1 How Cloud Security is Different from Traditional IT Security?

Traditional IT security has undergone an immense evolution due to the shift to
cloud-based computing. While cloud models allow for more convenience, their
always-on connectivity requires new considerations to keep them secure. Cloud
security, as a modernized cyber security solution, stands out from legacy IT
models in a few ways.

Data storage: The biggest distinction is that older models of IT relied heavily
upon onsite data storage. Organizations have long found that building all IT
frameworks in-house for detailed, custom security controls is costly and rigid.
Cloud-based frameworks have helped offload costs of system development and
upkeep, but also remove some control from users.

Scaling speed: On a similar note, cloud security demands unique attention
when scaling organization IT systems. Cloud-centric infrastructure and apps
are very modular and quick to mobilize. While this ability keeps systems
uniformly adjusted to organizational changes, it does pose concerns when an
organization's need for upgrades and convenience outpaces its ability to
keep up with security.

End-user system interfacing: For organizations and individual users alike,
cloud systems also interface with many other systems and services that must be
secured. Access permissions must be maintained from the end-user device
level to the software level and even the network level. Beyond this, providers
and users must be attentive to vulnerabilities they might cause through unsafe
setup and system access behaviors.

Proximity to other networked data and systems: Since cloud systems maintain a
persistent connection between cloud providers and all their users, this
substantial network can compromise even the provider itself. In networking
landscapes, a single weak device or component can be exploited to infect the
rest. Cloud providers expose themselves to threats from the many end-users
they interact with, whether they are providing data storage or other services.
Additional network security responsibilities fall upon providers whose
delivered products previously lived purely on end-user systems instead of
their own.

Solving most cloud security issues requires that users and cloud providers, in
both personal and business environments, remain proactive about their own
roles in cyber security. This two-pronged approach means users and providers
must mutually address:

• Secure system configuration and maintenance.
• User safety education, both behaviorally and technically.

Ultimately, cloud providers and users must have transparency and
accountability to ensure both parties stay safe.

7.2.2 Cloud Computing Security Requirements

There are four main cloud computing security requirements that help to ensure
the privacy and security of cloud services: confidentiality, integrity,
availability, and accountability.

Confidentiality

Confidentiality requires blocking unauthorized exposure of cloud service
users' information. To guarantee confidentiality, cloud providers focus on
authentication of cloud resources (e.g., requiring a username and password for
each user). Moreover, access control is an important part of confidentiality
in cloud computing. Neither access control nor authentication works on a
compromised cloud computing system, as it is much harder to block unauthorized
information disclosure on such a system. Many approaches to protecting users'
sensitive cloud data are based on encryption and data segmentation. If a
provider's server is compromised, data segmentation reduces the amount of
sensitive data that is disclosed. Data segmentation also has other advantages;
for instance, if an entire server is compromised, only a small amount of user
data is leaked, and downtime is reduced. A covert channel is another potential
confidentiality issue in a cloud computing system; covert channels can cause
information leaks through unauthorized transmission paths.
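As a toy illustration of the data segmentation idea mentioned above, a sensitive record can be split into fragments stored on different servers, so that compromising a single server discloses only a fraction of the data (the record, server names and splitting rule are purely illustrative, not a real scheme):

```python
record = "4111-1111-1111-1111"  # stand-in for sensitive data

def segment(data, n_servers):
    """Split data into one fragment per server."""
    size = -(-len(data) // n_servers)  # ceiling division
    return {f"server-{i}": data[i * size:(i + 1) * size]
            for i in range(n_servers)}

fragments = segment(record, 3)
print(fragments)

# A breach of a single server reveals at most one fragment:
leaked = fragments["server-0"]
assert leaked != record and len(leaked) < len(record)
```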

Cloud computing providers use service-level agreements (SLAs) to address
security issues for customers; providers of cloud services should therefore
work together to create standards for SLAs. Virtualization is a central aspect
of the cloud computing system, so many researchers have proposed techniques
for using virtualized systems to implement security goals.

Confidentiality is a part of the cloud service that the provider must
guarantee, along with control of the cloud infrastructure. The provider should
guarantee confidential access to the data by ensuring trusted data sharing or
through the use of authorized data access. As the cloud computing (CC) system
grows, however, significant tensions remain between the privacy of the user
and the security of the data.


Integrity

One goal of using cloud computing systems is to utilize a variety of
resources. That is why cloud computing supports all kinds of data and why many
users stick to the same clouds. Users also desire the ability to change or
update existing data or to add new data to the cloud. Therefore, data access
should be controlled to ensure data integrity. As with confidentiality,
integrity requires access control and authentication. Thus, if the cloud
system is compromised by a weak password, the integrity of the cloud data will
not be protected. To overcome this huge challenge, providers use
virtualization-based dynamic integrity to help clients use cloud services
without interrupting the providers' work with other clients. Such a method is
useful for ensuring integrity and security with satisfactory performance and
cost. Another method, value-at-risk, helps to ensure suitable security and
integrity. Cloud-based governance design principles guarantee integrity and
security by controlling the path between the provider and the enterprise
client. Another method provides a test of information integrity based on a
Service Level Agreement (SLA) between the provider and the client. The
consumer can use this SLA to verify the accuracy of the cloud information. In
a blind execution of services, the client transfers each type of information
through the cloud computing system using a separate process. In the trusted
computing method, blind processing is used to ensure the integrity of the
client's data. This method separates the execution environment from the
system, so that the system's hardware and computing base can be secured and
the credentials' accuracy can be verified.
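Independently of any provider mechanism, a client can perform a simple integrity check of its own by storing a keyed hash (HMAC) alongside the data and re-verifying it on retrieval. A minimal sketch using Python's standard library (the key and record are illustrative):

```python
import hashlib
import hmac

key = b"client-secret-key"  # kept by the client, never sent to the cloud

def tag(data: bytes) -> str:
    """Compute an HMAC-SHA256 tag for the data."""
    return hmac.new(key, data, hashlib.sha256).hexdigest()

stored = b"important cloud record"
stored_tag = tag(stored)  # computed before upload

# Later, after fetching the record back from the cloud:
fetched = b"important cloud record"
print(hmac.compare_digest(tag(fetched), stored_tag))   # True -- intact

tampered = b"imp0rtant cloud record"
print(hmac.compare_digest(tag(tampered), stored_tag))  # False -- modified
```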

Availability

Availability is the ability for the consumer to utilize the system as expected.
One of the significant advantages of a cloud computing is its data availability.
Cloud computing enhances availability through authorized entry. In addition,
availability requires timely support and robust equipment. A client’s
availability may be ensured as one of the terms of a contract; to guarantee
availability, a provider may secure huge capacity and excellent architecture.
Because availability is a main part of the cloud computing system, increased
use of the environment will increase the possibility of a lack of availability and
thus could reduce the cloud computing system’s performance. Cloud
computing affords clients two ways of paying for cloud services: on-demand
resources and (the cheaper option) resource reservation. The optimal virtual-
machine (VM) placement mechanism helps to reduce the cost of both payment
methods. By reducing the cost of running VMs for many cloud providers, it
supports expected changes in demand and price. This method involves the
client making a declaration to pay for certain resources owned by the cloud
computing providers using the Session Initiation Protocol (SIP) optimal
solution.

Accountability

Accountability involves verifying the clients' various activities in the data
clouds. Accountability is achieved by verifying the information that each
client supplies (and that is logged in various places in information clouds).
Directly connecting all activities to a client's account is not always
satisfactory. Neither the client nor the provider takes all the responsibility
for a system breakdown. Thus, both the client and the provider must maintain
accountability in case disputes occur, and each will need to log incidents for
future auditing, clearly identify each incident, and provide the necessary
equipment for logging such transactions. As an example, when a client's
account is compromised in an attack, the client can no longer perform certain
activities. Thus, cloud service providers need to have saved sufficient
information to restore the compromised account and identify the exceptional
behavior. Tracing even the smallest actions that happen in the clouds could
ensure accountability; such tracking will identify the client or entity that
is responsible for any given disaster. Evidence should be logged for each
activity once it starts processing. The transaction log can then be used
during an examination to determine the aptness of the evaluation.
Accountability is a challenge in a cloud system because misconfigured devices
can produce unreliable calculation results. In addition, when clients rent
insufficient resources for their tasks, this could reduce the performance of
the provided services. A virus can also destroy clients' data, and a provider
can fail to deliver data on time or even lose data.
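The incident logging described above can be sketched as a minimal in-memory audit trail that ties every action to a timestamp and a client identity (the client IDs and actions are illustrative):

```python
import json
import time

audit_log = []  # in a real system this would be tamper-resistant storage

def log_action(client_id: str, action: str):
    """Record who did what, and when, for future auditing."""
    entry = {"ts": time.time(), "client": client_id, "action": action}
    audit_log.append(entry)
    return entry

log_action("user-42", "login")
log_action("user-42", "download:report.pdf")

# During an investigation, the trail can be filtered by client:
trail = [e["action"] for e in audit_log if e["client"] == "user-42"]
print(json.dumps(trail))  # ["login", "download:report.pdf"]
```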

7.2.3 Challenges in Cloud Security

Following are some of the key security challenges in cloud computing:

Authentication: Data that a cloud user stores on the internet may be exposed
to unauthorized people. Hence, authorized users and the supporting cloud
service must operate through a mutual identity-administration entity.

Access Control: To admit only legitimate users, the cloud must have the right
access control policies. Such services must be flexible, well planned, and
conveniently administered. Access control provisions must be integrated on
the basis of the Service Level Agreement (SLA).

Policy Integration: End users access many cloud providers, such as Amazon and
Google. Because each provider uses its own policies and approaches, conflicts
between these policies must be kept to a minimum.

Service Management: Different cloud providers, such as Amazon and Google, may
combine to build new composite services to meet their customers' needs. At
this stage there should be a clear division of responsibilities so that
localized services can be delivered smoothly.

Trust Management: Since the cloud environment is service-oriented, a trust
management approach must be developed that includes trust negotiation between
both parties, the user and the provider. For example, to release their
services, providers must place some trust in users, and users must place
similar trust in providers.

In the following sections, let us discuss the major threats and issues in
cloud computing with respect to the cloud service delivery models and cloud
deployment models.


 Check Your Progress 1


1) Why is security important in the cloud?

…………………………………………………………………………………………
…………………………………………………………………………………………
…………………………………………………………………………………………
2) How does cloud security work?

…………………………………………………………………………………………
…………………………………………………………………………………………
3) Mention various cloud security risks and discuss briefly.
…………………………………………………………………………………………
…………………………………………………………………………………………

7.3 SECURITY ISSUES IN CLOUD SERVICE DELIVERY MODELS

The main concern in cloud environments is to provide security around
multi-tenancy and isolation, giving customers more assurance than the bare
"trust us" idea of clouds. Surveys have been reported which classify security
threats in cloud based on the nature of the service delivery models (SaaS,
PaaS, IaaS) of a cloud computing system. However, security requires a holistic
approach. Service delivery model is one of many aspects that need to be
considered for a comprehensive survey on cloud security. Security at different
levels, such as network level, host level and application level, is necessary
to keep the cloud up and running continuously. In accordance with these
different levels, various types of security breaches may occur, which have
been classified in this section.

• Data Threats, including data breaches and data loss;
• Network Threats, including account or service hijacking and denial
of service; and
• Cloud Environment Specific Threats, including insecure interfaces
and APIs, malicious insiders, abuse of cloud services, insufficient
due diligence, and shared technology vulnerabilities.

7.3.1 Data Threats

Data is considered to be one of the most valuable resources of any
organization, and the number of customers shifting their data to cloud is
increasing every day. The data life cycle in cloud comprises data creation,
transit, execution, storage and destruction. Data may be created on a client
or server in cloud, transferred through the cloud network and stored in cloud
storage. When required, data is shifted to an execution environment where it
can be processed, and it can be deleted by its owner to complete its
destruction. The biggest challenge in achieving cloud computing security is to
keep data secure. The major issues that arise with the transfer of data to
cloud are that the customers don't have visibility of their data, and neither
do they know its location. They need to depend on the service provider to
ensure that the platform is secure and that it implements the security
properties necessary to keep their data safe. The data security properties
that must be maintained in cloud are confidentiality, integrity,
authorization, availability and privacy. However, many data issues arise due
to improper handling of data by the cloud provider. The major data security
threats include data breaches, data loss, unauthorized access, and integrity
violations. All of these issues occur frequently on cloud data.

7.3.1.1 Data Breaches

A data breach is defined as the leakage of sensitive customer or organization
data to an unauthorized user. A data breach can have a huge impact on an
organization's business in terms of finance, trust and loss of customers. It
may happen accidentally due to flaws in infrastructure, application design,
operational issues, or insufficiency of authentication, authorization, and
audit controls. Moreover, it can also occur due to other reasons, such as
attacks by malicious users who have a virtual machine (VM) on the same
physical system as the one they want to access in an unauthorized way. In the
recent past, Apple's iCloud users faced a data leakage attack in which an
attempt was made to gain access to their private data. Such attacks have also
been carried out on the clouds of other companies such as Microsoft, Yahoo and
Google. An example of a data breach is the cross-VM side channel attack, which
extracts the cryptographic keys of other VMs on the same system and can access
their data.

7.3.1.2 Data Loss

Data loss is the second most important issue related to cloud security. Like
data breach, data loss is a sensitive matter for any organization and can have a
devastating effect on its business. Data loss mostly occurs due to malicious
attackers, data deletion, data corruption, loss of data encryption key, faults in
storage systems, or natural disasters. In 2013, 44% of cloud service providers
faced brute force attacks that resulted in data loss and data leakage.
Similarly, malware attacks have also targeted cloud applications, resulting in
data destruction.

7.3.1.3 SQL Injection Attacks

In SQL injection attacks, malicious code is inserted into a standard SQL
query. The attackers thereby gain unauthorized access to a database and are
able to read sensitive information. Sometimes the hacker's input is
misinterpreted by the website as user data, is passed on to the SQL server,
and lets the attacker learn the functioning of the website and make changes to
it. Various techniques are used to check SQL injection attacks, such as
avoiding dynamically generated SQL in the code and using filtering techniques
to sanitize user input. Some researchers have proposed a proxy-based
architecture for preventing SQL injection attacks, which dynamically detects
and extracts users' inputs for suspected SQL control sequences.
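The input-sanitization defense mentioned above is most commonly realized with parameterized queries, where user input is bound as data rather than spliced into the SQL string. A self-contained sketch using Python's built-in sqlite3 module (table, column and payload are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "alice' OR '1'='1"  # classic injection payload

# Vulnerable: string concatenation lets the payload rewrite the query.
vulnerable = conn.execute(
    "SELECT secret FROM users WHERE name = '" + malicious + "'"
).fetchall()
print(vulnerable)  # [('s3cret',)] -- the injected OR clause matched everything

# Safe: the ? placeholder binds the whole payload as a literal name.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (malicious,)
).fetchall()
print(safe)        # [] -- no user is literally named "alice' OR '1'='1"
```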
7.3.1.4 Cross Site Scripting (XSS) Attacks

Cross Site Scripting (XSS) attacks, which inject malicious scripts into Web
contents, have become quite popular since the inception of Web 2.0. There are
two methods for injecting the malicious code into the web-page displayed to
the user namely - Stored XSS and Reflected XSS. In a Stored XSS, the
malicious code is permanently stored into a resource managed by the web
application and the actual attack is carried out when the victim requests a
dynamic page that is constructed from the contents of this resource. However,
in case of a Reflected XSS, the attack script is not permanently stored; in fact it
is immediately reflected back to the user.
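A standard defense against both stored and reflected XSS is output encoding: untrusted input is escaped before being embedded in a page, so any injected script is rendered as inert text. A minimal sketch using Python's standard library (the comment payload is illustrative):

```python
import html

user_comment = '<script>alert("xss")</script>'  # attacker-supplied input

# Escaping turns the markup characters into HTML entities, so the
# browser displays the script as text instead of executing it.
safe_fragment = "<p>" + html.escape(user_comment) + "</p>"
print(safe_fragment)
# <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```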

7.3.2 Network Threats

Network plays an important part in deciding how efficiently cloud services
operate and communicate with users. In developing most cloud solutions,
network security is not considered an important factor by some organizations.
Insufficient network security creates attack vectors for malicious users and
outsiders, resulting in different network threats. The most critical network
threats in cloud are account or service hijacking and denial of service
attacks.

7.3.2.1 Denial of Service (DoS)

Denial of Service (DoS) attacks are carried out to prevent legitimate users
from accessing the cloud network, storage, data, and other services. DoS
attacks have been on the rise in cloud computing in the past few years, and
81% of customers consider them a significant threat in cloud. They are usually
carried out by compromising a service that can be used to consume most cloud
resources, such as computation power, memory, and network bandwidth. This
causes a delay in cloud operations, and sometimes the cloud is unable to
respond to other users and services. A Distributed Denial of Service (DDoS)
attack is a form of DoS attack in which multiple network sources are used by
the attacker to send a large number of requests to the cloud to consume its
resources. It can be launched by exploiting vulnerabilities in web servers,
databases, and applications, resulting in unavailability of resources.
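One common first line of defense against such resource-consumption floods is rate limiting. Below is a minimal token-bucket sketch; the rate, capacity and fake clock are illustrative, and real deployments combine this with upstream filtering:

```python
import time

class TokenBucket:
    """Allow at most `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float, clock=time.monotonic):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.clock = clock
        self.last = clock()

    def allow(self) -> bool:
        now = self.clock()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# A fake clock makes the demonstration deterministic.
t = [0.0]
bucket = TokenBucket(rate=5, capacity=5, clock=lambda: t[0])

burst = [bucket.allow() for _ in range(10)]  # 10 requests arrive at once
print(burst.count(True))  # 5 -- only the burst capacity is served

t[0] = 1.0             # one second later the bucket has refilled
print(bucket.allow())  # True
```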

7.3.2.2 Account or Service Hijacking

Account hijacking involves the stealing of user credentials to gain access to
a user's account, data or other computing services. These stolen credentials
can be used to access and compromise cloud services. Network attacks including
phishing, fraud, Cross Site Scripting (XSS), botnets, and software
vulnerabilities such as buffer overflows can result in account or service
hijacking. This can lead to a compromise of user privacy, as the attacker can
eavesdrop on all of the user's operations, modify data, and redirect network
traffic.

7.3.2.3 Man in the Middle Attack (MITM)

In such an attack, an entity tries to intrude into an ongoing conversation
between a sender and a client in order to inject false information and to gain
knowledge of the important data transferred between them. Tools such as
Dsniff, Cain, Ettercap, Wsniff and Airjack are commonly used to mount such
attacks, so strong encryption technologies have been developed in order to
provide a safeguard against them.

Another cause may be improper configuration of the Secure Socket Layer (SSL).
If SSL is improperly configured, a party in the middle can intercept the data.
The preventive measure for this attack is to configure SSL properly before
communicating with other parties.
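In Python, for example, the difference between a proper and an improper TLS configuration comes down to whether certificate validation and hostname checking are enabled; the standard library's default context enables both (the insecure context below is shown only to illustrate the misconfiguration):

```python
import ssl

# Properly organized: the default context validates the server's
# certificate chain and checks that it matches the hostname.
secure = ssl.create_default_context()
assert secure.verify_mode == ssl.CERT_REQUIRED
assert secure.check_hostname is True

# Improperly configured: any certificate is accepted, which is
# exactly what a man-in-the-middle needs. Never use in production.
insecure = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
insecure.check_hostname = False
insecure.verify_mode = ssl.CERT_NONE

# The secure context would then be passed to a client, e.g.:
#   http.client.HTTPSConnection("example.com", context=secure)
```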

7.3.2.4 Network Sniffing

This is an important issue in which plain-text traffic is intercepted over the
network. An intruder can sniff passwords that are improperly encrypted during
communication. If encryption techniques are not used for data security, an
attacker can join as a third party and seize the data. Encryption methods
should therefore be deployed to secure data in transit.

7.3.2.5 Port Scanning

Port scanning is an issue in which attacks may occur because port 80 (HTTP) is
always open for provisioning web services. Other ports, such as 21 (FTP), are
unlocked when needed. A firewall is a countermeasure to protect the data from
attacks through open ports.

7.3.2.6 Compromised Credentials and Broken Authentication

Authentication management is always a challenge for organizations to tackle in
order to close loopholes and prevent attackers from gaining permissions.

Brute Force Attacks: The attacker attempts to crack a password by trying all
potential passwords.

Shoulder Surfing: This threat is a form of espionage in which the attacker
watches and spies on the user's movements in an attempt to learn the password.

Replay Attacks: Also known as reflection attacks, replay attacks are a type of
attack that targets a user’s authentication process.

Key loggers: This is a program that records every key pressed by the user and
tracks their behavior.
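On the server side, salted and deliberately slow password hashing raises the cost of the brute force attacks described above, since every guess must repeat the whole computation. A minimal sketch using Python's standard library (iteration count and passwords are illustrative):

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative; tune to your hardware

def hash_password(password: str):
    """Return a (salt, digest) pair for storage."""
    salt = os.urandom(16)  # unique salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("guess123", salt, stored))                      # False
```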

7.3.2.7 Border Gateway Protocol (BGP) Prefix Hijacking

Prefix hijacking is a type of network attack in which a wrong announcement
related to the IP addresses associated with an Autonomous System (AS) is made.
Hence, malicious parties get access to untraceable IP addresses. On the
internet, IP space is allocated in blocks and remains under the control of
ASes. An autonomous system broadcasts information about the IPs in its regime
to all its neighbours, and these ASes communicate using the Border Gateway
Protocol (BGP). Sometimes, due to an error, a faulty AS may broadcast wrongly
about the IPs associated with it. In such a case, the actual traffic gets
routed to some IP other than the intended one. Hence, data is leaked or
reaches some other unintended destination.

7.3.2.8 Distributed Denial of Service Attacks (DDoS)


DDoS may be called an advanced version of DoS in terms of denying the
important services running on a server, by flooding the destination server
with a large number of packets such that the target server is not able to
handle them. Unlike a DoS attack, in DDoS the attack is relayed from different
dynamic networks which have already been compromised. The attackers have the
power to control the flow of information by allowing some information to be
available only at certain times; thus the amount and type of information
available for public usage is clearly under the control of the attacker. A
DDoS attack is run by three functional units: a Master, a Slave and a Victim.
The Master is the attack launcher behind the attack; the Slave is the network
which acts as a launch pad, providing the platform for the Master to launch
the attack on the Victim. Hence it is also called a coordinated attack.
Basically, a DDoS attack operates in two stages: the first is the intrusion
phase, where the Master tries to compromise less important machines to support
the flooding of the more important one; the next is installing DDoS tools and
attacking the victim server or machine. Hence, a DDoS attack makes the service
unavailable to the authorized user, similarly to a DoS attack, but differs in
the way it is launched. A similar case of a Distributed Denial of Service
attack was experienced by the CNN news channel website, leaving most of its
users unable to access the site for a period of three hours. In general, the
approaches used to fight DDoS attacks involve extensive modification of the
underlying network, and these modifications often become costly for users.
Swarm-based logic for guarding against DDoS attacks has been proposed; this
logic provides a transparent transport layer through which common protocols
such as HTTP, SMTP, etc. can pass easily. The use of an IDS in the virtual
machine has also been proposed to protect the cloud from DDoS attacks: a
SNORT-like intrusion detection mechanism is loaded onto the virtual machine to
sniff all traffic, either incoming or outgoing. Another method commonly used
to guard against DDoS is to have intrusion detection systems on all the
physical machines which contain the users' virtual machines.

7.3.3 Cloud Environment Specific Threats

Cloud service providers are largely responsible for controlling the cloud
environment. Some threats are specific to cloud computing such as cloud
service provider issues, providing insecure interfaces and APIs to users,
malicious cloud users, shared technology vulnerabilities, misuse of cloud
services, and insufficient due diligence by companies before moving to cloud.

7.3.3.1 Insecure Interfaces and APIs

An Application Programming Interface (API) is a set of protocols and standards
that define the communication between software applications over the Internet.
Cloud APIs are used at the infrastructure, platform and software service
levels to communicate with other services. Infrastructure as a Service (IaaS)
APIs are used to access and manage infrastructure resources including network
and VMs, Platform as a Service (PaaS) APIs provide access to cloud services
such as storage, and Software as a Service (SaaS) APIs connect software
applications with the cloud infrastructure. The security of various cloud
services depends on the security of these APIs. A weak set of APIs and
interfaces can result in many security issues in cloud. Cloud providers
generally offer their APIs to third parties to give services to customers.
However, weak APIs can lead to a third party having access to security keys
and critical information in cloud. With the security keys, the encrypted
customer data in cloud can be read, resulting in loss of data integrity,
confidentiality and availability. Moreover, authentication and access control
principles can also be violated through insecure APIs.
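One common hardening pattern for the API weaknesses above is request signing (similar in spirit to signed cloud APIs): the client signs each call with a per-client secret and the server recomputes the signature before acting, so a leaked endpoint alone is not enough to forge requests. The secret, method, path and body below are all illustrative:

```python
import hashlib
import hmac

SECRET = b"per-client-api-secret"  # shared between client and provider

def sign(method: str, path: str, body: str) -> str:
    """Sign the canonical request with HMAC-SHA256."""
    message = "\n".join([method, path, body]).encode()
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

# Client side: attach the signature to the outgoing request.
signature = sign("POST", "/v1/volumes", '{"size_gb": 20}')

# Server side: recompute and compare in constant time.
def verify(method: str, path: str, body: str, sig: str) -> bool:
    return hmac.compare_digest(sign(method, path, body), sig)

print(verify("POST", "/v1/volumes", '{"size_gb": 20}', signature))   # True
print(verify("POST", "/v1/volumes", '{"size_gb": 999}', signature))  # False
```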

7.3.3.2 Malicious Insiders

A malicious insider is someone who is an employee of the cloud organization,
or a business partner, with access to the cloud network, applications,
services, or data, who misuses this access to perform unprivileged activities.
Cloud administrators are responsible for managing, governing, and maintaining
the complete environment. They have access to most data and resources, and
might end up using that access to leak data. Other categories of malicious
insiders include hobbyist hackers, administrators who want to obtain
unauthorized sensitive information just for fun, and corporate espionage,
which involves stealing business secrets for corporate purposes and might even
be sponsored by national governments.

7.3.3.3 Abuse of Cloud Services

The term abuse of cloud services refers to the misuse of cloud services by
consumers; it mostly describes actions of cloud users that are illegal,
unethical, or in violation of their contract with the service provider. In
2010, abuse of cloud services was considered the most critical cloud threat,
and different measures were taken to prevent it. However, 84% of cloud users
still consider it a relevant threat. Research has shown that some cloud
providers are unable to detect attacks launched from their own networks, due
to which they are unable to generate alerts or block any attacks. The abuse of
cloud services is a more serious threat to the service provider than to
service users. For instance, the use of cloud network addresses for spam by
malicious users has resulted in the blacklisting of all network addresses;
thus the service provider must take all possible measures to prevent these
threats. Over the years, different attacks have been launched through cloud by
malicious users. For example, Amazon's EC2 services were used as command and
control servers to launch the Zeus botnet in 2009. Famous cloud services such
as Twitter, Google and Facebook have been used as command and control servers
for launching Trojans and botnets. Other attacks that have been launched using
cloud include brute force attacks for cracking encryption passwords, phishing,
DoS attacks against a web service at a specific host, Cross Site Scripting and
SQL injection attacks.

7.3.3.4 Insufficient Due Diligence

The term due diligence refers to individuals or customers having the complete
information needed to assess the risks associated with a business prior to using its
services. Cloud computing offers exciting opportunities of unlimited
computing resources and fast access, due to which a number of businesses shift to
the cloud without assessing the risks associated with it. Due to the complex
architecture of the cloud, some organizational security policies cannot be applied
in the cloud. Moreover, cloud customers have no idea about the internal
security procedures, auditing, logging, data storage, and data access, which results
in unknown risk profiles in the cloud. In some cases, the developers and
designers of applications may be unaware of the effects of deploying them on the
cloud, which can result in operational and architectural issues.

7.3.3.5 Shared Technology Vulnerabilities

Cloud computing offers the provisioning of services by sharing
infrastructure, platforms and software. However, different components such as
CPUs and GPUs may not meet cloud security requirements such as perfect
isolation. Moreover, some applications may be designed without using trusted
computing practices, due to which shared technology threats arise that can be
exploited in multiple ways. In recent years, shared technology vulnerabilities
have been used by attackers to launch attacks on the cloud. One such attack is
gaining access to the hypervisor to run malicious code and get unauthorized access
to cloud resources, VMs, and customers’ data. The Xen platform is an open
source solution used to offer cloud services.

Earlier versions of the Xen hypervisor’s code contained a local privilege escalation
vulnerability (in which a user can gain the rights of another user) that could be
used to launch guest-to-host VM escape attacks. Later, Xen updated the code base
of its hypervisor to fix that vulnerability. Other companies whose products are
based on Xen, such as Microsoft, Oracle and SUSE Linux, also released updates of
their software to fix the local privilege escalation vulnerability. Similarly, a report
released in 2009 showed the use of VMware to run code from guests on hosts,
demonstrating possible ways to launch attacks.

7.3.3.6 Inadequate Change Control and Misconfiguration

If an asset is set up incorrectly, it may suffer from misconfiguration, leaving it
exposed to attacks. Misconfiguration has now become a major source of data
leaks and unwarranted resource modification. The lack of adequate change
control is a prevalent cause of misconfiguration. Depending on the nature
of the misconfiguration and how soon it is recognized and remedied, a
misconfigured item can have a significant business impact. Storage objects
left unsecured, unmodified default passwords and settings, and disabled basic
security safeguards are all examples of misconfiguration.

7.3.3.7 Limited Cloud Usage Visibility

Limited cloud usage visibility means that an organization is unable to
determine whether a service running on its platform is secure or harmful.
Unsanctioned app use and sanctioned app misuse are the two most common
categories. The former occurs when users run apps and services without
permission; the latter, when authorized users misuse a sanctioned application.
Either can result in unauthorized data access and the entry of malware into the
system.

7.3.3.8 Loss of Operational and Security Logs

The lack of operational logs makes evaluating operational variables difficult.
When data is unavailable for analysis, the options for resolving difficulties are
limited. Similarly, the loss of security logs undermines the management of the
application security program.

7.3.3.9 Failure of Isolation
There is a lack of strong isolation or compartmentalization of routing,
reputation, storage, and memory among tenants. Because of this lack of
isolation, attackers attempt to take control of the operations of other cloud
users to obtain unauthorized access to their data.

7.3.3.10 Risks of Noncompliance

Organizations seeking compliance with standards and legislation may be at
risk if the Cloud Service Provider cannot ensure adherence to the
requirements, outsources cloud administration to third parties, and/or refuses to
allow client audits. This danger arises from a lack of oversight over audits and
industry standard evaluation. As a result, cloud platform users are unaware of
provider protocols and practices in the areas of identity management, access,
and separation of roles.

7.3.3.11 Attacks against Cryptography

Cloud services are vulnerable to cryptanalysis due to insecure or outdated
encryption. Data stored in the cloud may be encrypted to prevent it from being
read if criminal users take control of the cloud. Apart from fundamental errors in
the design of cryptographic algorithms, which may cause otherwise suitable
encryption algorithms to become weak, there are also other ways to break
cryptography. By evaluating accessible storage locations and tracking clients’
query access patterns, partial information can be extracted from encrypted data.

7.3.3.12 Attacks through a Backdoor Channel

Through this approach, attackers can gain access to remote system applications
on the victim’s resource systems. It is a kind of passive attack. Attackers
sometimes use zombies to carry out DDoS attacks. Backdoor channels,
however, are frequently used by attackers to gain control of the victim’s
resources. This has the potential to compromise data security and privacy.

7.4 SECURITY ISSUES IN CLOUD DEPLOYMENT MODELS

Each of the three ways (Public, Private, Hybrid) in which cloud services can
be deployed has its own advantages and limitations. And from the security
perspective, all the three have got certain areas that need to be addressed with a
specific strategy to avoid them.

7.4.1 Security Issues in a Public Cloud

In a public cloud, there exist many customers on a shared platform and
infrastructure security is provided by the service provider. A few of the key
security issues in a public cloud include:

• The three basic requirements of security (confidentiality, integrity and
availability) are needed to protect data throughout its lifecycle. Data
must be protected during the various stages of creation, sharing,
archiving, processing etc. However, the situation becomes more
complicated in the case of a public cloud, where we do not have any control
over the service provider’s security practices.
• In the case of a public cloud, the same infrastructure is shared between
multiple tenants and the chances of data leakage between these tenants
are very high. Since most service providers run a multitenant
infrastructure, proper investigation at the time of choosing the service
provider must be done in order to avoid any such risk.
• In case a Cloud Service Provider uses a third party vendor to provide its
cloud services, it should be ascertained what service level agreements
exist between them, as well as what the contingency plans are in case of a
breakdown of the third party system.
• Proper SLAs should be in place defining the security requirements, such as
what level of encryption the data should undergo when it is sent over the
internet, and what the penalties are in case the service provider fails to do so.

Although data is stored outside the confines of the client organization in a
public cloud, we cannot deny the possibility of an insider attack originating
from service provider’s end. Moving the data to a cloud computing
environment expands the circle of insiders to the service provider’s staff and
subcontractors. Policy enforcement implemented at the nodes and the data-
centres can prevent a system administrator from carrying out any malicious
action. The three major steps to achieve this are: defining a policy, propagating
the policy by means of a secure policy propagation module and enforcing it
through a policy enforcement module.

7.4.2 Security Issues in a Private Cloud

A private cloud model enables the customer to have total control over the
network and provides the flexibility to the customer to implement any
traditional network perimeter security practice. Although the security
architecture is more reliable in a private cloud, yet there are issues/risks that
need to be considered:

• Virtualization techniques are quite popular in private clouds. In such a
scenario, risks to the hypervisor should be carefully analyzed. There
have been instances where a guest operating system has been able to run
processes on other guest VMs or the host. In a virtual environment it may
happen that virtual machines are able to communicate with all the VMs,
including the ones with which they are not supposed to communicate. To
ensure that they only communicate with the ones they are supposed to,
proper authentication and encryption techniques, such as IPsec (IP level
security), should be implemented.
• The host operating system should be free from any sort of malware
threat and monitored to avoid any such risk. In addition, guest virtual
machines should not be able to communicate with the host operating
system directly. There should be dedicated physical interfaces for
communicating with the host.
• In a private cloud, users are facilitated with an option to manage
portions of the cloud, and access to the infrastructure is
provided through a web interface or an HTTP end point. There are two
ways of implementing a web interface: either by writing a whole
application stack, or by using a standard application stack to develop
the web interface using common languages such as Java, PHP, Python
etc. As part of a screening process, the Eucalyptus web interface was
found to have a bug allowing any user to perform internal port
scanning or HTTP requests through the management node, which he
should not be allowed to do. In a nutshell, interfaces need to be
properly developed and standard web application security techniques
need to be deployed to protect the diverse HTTP requests being
performed.
• While we talk of standard internet security, we also need to have a
security policy in place to safeguard the system from attacks
originating within the organization. This vital point is missed on
most occasions, the stress being mostly upon internet security.
Proper security guidelines across the various departments should exist,
and controls should be implemented as per the requirements.

Thus we see that although private clouds are considered safer in comparison to
public clouds, they still have multiple issues which, if unattended, may lead to
major security loopholes as discussed earlier.

7.4.3 Security Issues in a Hybrid Cloud

The hybrid cloud model is a combination of both public and private cloud and
hence the security issues discussed with respect to both are applicable in case
of hybrid cloud.

In the following section the security methods to avoid the exploitation of the
threats will be discussed.

7.5 ENSURING SECURITY IN CLOUD AGAINST VARIOUS TYPES OF ATTACKS

This section describes the implementation of various security techniques at
different levels to secure the cloud from the above said threats.

7.5.1 Protection from Data Breaches

Various security measures and techniques have been proposed to avoid data
breaches in the cloud. One of these is to encrypt data before storage in the cloud
and in the network. This requires an efficient key management algorithm and
protection of the key in the cloud. Some measures that must be taken to avoid data
breaches in the cloud are to implement proper isolation among VMs to prevent
information leakage, implement proper access controls to prevent unauthorized
access, and to make a risk assessment of the cloud environment to know where
sensitive data is stored and how it is transmitted between various services and
networks.

Many researchers have worked on the protection of data in cloud storage.
CloudProof is a system that can be built on top of existing cloud storages like
Amazon S3 and Azure to ensure data integrity and confidentiality using
encryption. To secure data in cloud storage, attribute-based encryption can be
used to encrypt data with a specific access control policy before storage.
Therefore, only the users with the required access attributes and keys can access
the data. Another technique to protect data in the cloud involves using scalable
and fine-grained data access control. In this scheme, access policies are defined
based on the data attributes. Moreover, to overcome the computational overhead
caused by fine-grained access control, most computation tasks can be handed
over to an untrusted commodity cloud without disclosing the data. This is
achieved by combining techniques of attribute-based encryption, proxy
re-encryption, and lazy re-encryption.
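The integrity side of schemes like CloudProof can be illustrated with a small sketch: before uploading, the data owner computes a keyed tag over the object; after downloading, recomputing the tag reveals any tampering by the storage provider. The following is a minimal illustration using Python's standard library, not the actual CloudProof protocol; the key and object names are hypothetical.

```python
import hmac
import hashlib

def make_tag(key: bytes, data: bytes) -> str:
    """Compute a keyed integrity tag (HMAC-SHA256) before upload."""
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify(key: bytes, data: bytes, tag: str) -> bool:
    """Recompute the tag after download; compare in constant time."""
    return hmac.compare_digest(make_tag(key, data), tag)

key = b"owner-secret-key"        # kept by the data owner, never by the provider
obj = b"customer record v1"

tag = make_tag(key, obj)         # stored alongside the object in the cloud
assert verify(key, obj, tag)             # unmodified object passes
assert not verify(key, obj + b"!", tag)  # any tampering is detected
```

Because the provider never holds the key, it cannot forge a valid tag for modified data.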

7.5.2 Protection from Data Loss

To prevent data loss in the cloud, different security measures can be adopted. One
of the most important measures is to maintain a backup of all data in the cloud,
which can be accessed in case of data loss. However, the data backup must also be
protected to maintain the security properties of the data, such as integrity and
confidentiality. Various data loss prevention (DLP) mechanisms have been
proposed for the prevention of data loss in networks, processing, and storage.
Many companies including Symantec, McAfee, and Cisco have also developed
solutions to implement data loss prevention across storage systems, networks
and end points. Trusted Computing can be used to provide data security. A
trusted server can monitor the functions performed on data by cloud server and
provide the complete audit report to data owner. In this way, the data owner
can be sure that the data access policies have not been violated.

In a nutshell, organizations should apply the following mitigation techniques to
protect against this type of threat:

• Provide data-storage and backup mechanisms.
• Use proper encryption techniques.
• Protect in-transit data.
• Generate strong keys and implement advanced key storage and management.
• Legally require suppliers to use backup and retention techniques.

7.5.3 Protection from Account or Service Hijacking

Account or service hijacking can be avoided by adopting different security
features on the cloud network. These include employing intrusion detection
systems (IDS) in the cloud to monitor network traffic and nodes for detecting
malicious activities. Intrusion detection and other network security systems
must be designed by considering cloud efficiency, compatibility and the
virtualization-based context. One IDS system for the cloud was designed by
combining system-level virtualization and virtual machine monitor
(responsible for managing VMs) techniques. In this architecture, the IDSs are
based on VMs and the sensors are based on Snort, which is a well-known IDS.
VM status and workload are monitored by the IDS, and the VMs can be started,
stopped and recovered at any time by the IDS management system. Identity and
access management should also be implemented properly to prevent access to
credentials. To avoid account hijacking threats, multi-factor authentication for
remote access using at least two credentials can be used. One proposed technique
uses multi-level authentication, with passwords verified at different levels, to
access the cloud services. First the user is authenticated by the cloud access
password, and at the next level the service access password of the user is verified.
Moreover, user access to cloud services and applications should be approved
by cloud management. The auditing of all the privileged activities of the user,
along with the information security events generated from them, should also be
done to avoid these threats.

In a nutshell, organizations should apply the following mitigation techniques to
protect against this type of threat:

• Appropriate understanding of security policies and SLAs.
• Strong multifactor authentication to provide an extra security check
for the identification of genuine customers and make the cloud
environment more secure and reliable.
• Strict and continuous monitoring to detect unauthorized activities.
• Prevention of credentials being shared among customers and services.
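The multi-level password scheme described above (a cloud access password followed by a service access password) can be sketched as follows. The stored verifiers are salted PBKDF2 hashes rather than plain text; the two-level flow and function names are illustrative assumptions, not any specific product's API.

```python
import hashlib
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # Salted, slow hash so stolen verifiers are hard to reverse
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

# Hypothetical user record holding both verifiers
salt = os.urandom(16)
user = {
    "cloud_pw": hash_password("cloud-secret", salt),
    "service_pw": hash_password("service-secret", salt),
    "salt": salt,
}

def login(user, cloud_pw: str, service_pw: str) -> bool:
    s = user["salt"]
    # Level 1: verify the cloud access password
    if hash_password(cloud_pw, s) != user["cloud_pw"]:
        return False
    # Level 2: verify the service access password
    return hash_password(service_pw, s) == user["service_pw"]

assert login(user, "cloud-secret", "service-secret")
assert not login(user, "cloud-secret", "wrong")
```

A real deployment would add rate limiting and auditing of failed attempts, as the section recommends.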

7.5.4 Protection from Denial of Service (DoS) Attacks

To avoid DoS attacks, it is important to identify and implement all the basic
security requirements of the cloud network, applications, databases, and other
services. Applications should be tested after design to verify that they have
no loopholes that can be exploited by attackers. DDoS attacks can be
prevented by having extra network bandwidth, using IDSs that verify network
requests before they reach the cloud server, and maintaining a backup of IP pools
for urgent cases. Industrial solutions to prevent DDoS attacks have also been
provided by different vendors. A technique named hop-count filtering can be
used to filter spoofed IP packets and helps in decreasing DoS attacks by
90%. Another technique for securing the cloud from DDoS involves using an
intrusion detection system in a virtual machine (VM). In this scheme, when the
intrusion detection system (IDS) detects an abnormal increase in inbound
traffic, the targeted applications are transferred to VMs hosted on another data
center.
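The hop-count filtering idea mentioned above exploits the fact that a spoofed packet's time-to-live (TTL) rarely matches the hop count actually expected from the claimed source address. A rough sketch, assuming a pre-built table of legitimate hop counts per source IP (the table values here are hypothetical):

```python
# Initial TTLs commonly used by operating systems
COMMON_INITIAL_TTLS = (32, 64, 128, 255)

def hops_from_ttl(observed_ttl: int) -> int:
    """Infer hop count: distance from the nearest initial TTL above the observed value."""
    initial = min(t for t in COMMON_INITIAL_TTLS if t >= observed_ttl)
    return initial - observed_ttl

def is_spoofed(src_ip, observed_ttl, hop_table, tolerance=1):
    """Flag packets whose inferred hop count disagrees with the learned one."""
    expected = hop_table.get(src_ip)
    if expected is None:
        return False  # unknown source: cannot judge, let other filters decide
    return abs(hops_from_ttl(observed_ttl) - expected) > tolerance

# Hop counts learned during normal operation (hypothetical values)
hop_table = {"203.0.113.7": 12}

assert not is_spoofed("203.0.113.7", 52, hop_table)   # TTL 52 implies 12 hops: matches
assert is_spoofed("203.0.113.7", 125, hop_table)      # TTL 125 implies 3 hops: mismatch
```

A spoofing attacker controls the TTL field but not the real path length, so most forged packets fail this check.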

7.5.5 Protection from Insecure Interfaces and APIs

To protect the cloud from insecure API threats, it is important for
developers to design these APIs by following the principles of trusted
computing. Cloud providers must also ensure that all the APIs
implemented in the cloud are designed securely, and check them before
deployment for possible flaws. Strong authentication mechanisms and access
controls must also be implemented to secure data and services from insecure
interfaces and APIs. The Open Web Application Security Project (OWASP)
provides standards and guidelines for developing secure applications that can help
in avoiding such application threats. Moreover, it is the responsibility of
customers to analyze the interfaces and APIs of the cloud provider before moving
their data to the cloud.

In a nutshell, organizations should apply the following mitigation techniques to
protect against insecure interfaces and API threats:

• Robust authentication and access control methods need to be adopted.
• Transmitted data needs to be encrypted.
• Analysis of the cloud provider’s interfaces and a proper security model
for these interfaces.
• Detailed understanding of the dependency chain related to the APIs.
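One way to make an API interface harder to abuse, in line with the robust-authentication point above, is to require every request to carry a keyed signature over its contents, as many cloud providers do. The sketch below is a simplified illustration, not any specific provider's signing scheme; the paths and secrets are made up.

```python
import hashlib
import hmac

def sign_request(secret: bytes, method: str, path: str, body: str) -> str:
    """Sign a canonical request string with the caller's secret key."""
    canonical = f"{method}\n{path}\n{body}"
    return hmac.new(secret, canonical.encode(), hashlib.sha256).hexdigest()

def verify_request(secret, method, path, body, signature) -> bool:
    """Server side: recompute and compare in constant time."""
    expected = sign_request(secret, method, path, body)
    return hmac.compare_digest(expected, signature)

secret = b"api-client-secret"
sig = sign_request(secret, "POST", "/v1/objects", '{"name": "report"}')

# The server accepts only requests whose signature checks out,
# so a stolen or replayed body cannot be silently altered in transit.
assert verify_request(secret, "POST", "/v1/objects", '{"name": "report"}', sig)
assert not verify_request(secret, "POST", "/v1/objects", '{"name": "other"}', sig)
```

Production schemes additionally sign a timestamp to limit replay, which this sketch omits.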

7.5.6 Protection from Malicious Insiders

Protection from these threats can be achieved by limiting hardware and
infrastructure access to authorized personnel only. The service provider
must implement strong access control and segregation of duties in the
management layer, restricting administrator access to only the data
and software he is authorized for. Auditing of employees should also be
implemented to check for suspicious behavior. Moreover, employee behavior
requirements should be made part of the legal contract, and action should be
taken against anyone involved in malicious activities. To protect data from
malicious insiders, encryption can also be implemented in storage and public
networks.

In a nutshell, organizations should apply the following mitigation techniques to
protect against this type of threat:

• Apply human resource management as part of a legal agreement.
• Institute a compliance reporting system to help determine security
breach notifications so that appropriate action may be taken against a
person who has committed a fraud.
• Non-disclosure of employees’ privileges and how they are monitored.
• Conduct a comprehensive supplier assessment.
• Adopt transparency of information security and management practices.

7.5.7 Protection from Abuse of Cloud Services

The implementation of strict initial registration and validation processes can
help in identifying malicious consumers. Policies for the protection of the
important assets of the organization must also be made part of the service level
agreement (SLA) between the user and the service provider. This familiarizes the
user with the possible legal actions that can be taken against him in case he
violates the agreement. The Service Level Agreement definition language
(SLAng) provides features for SLA monitoring, enforcement and
validation. Moreover, network monitoring should be comprehensive enough to
detect malicious packets, and all the updated security devices should be installed
in the network.

In a nutshell, organizations should apply the following mitigation techniques to
protect against this type of threat:

• Strong authorization and authentication mechanisms.
• Continuous examination of the network traffic.

7.5.8 Protection from Insufficient Due Diligence

It is important for organizations to fully understand the scope of the risks
associated with the cloud before shifting their business and critical assets such as
data to it. The service providers must disclose the applicable logs and
infrastructure, such as firewalls, to consumers so that they can take measures for
securing their applications and data. Moreover, the provider must set up
requirements for implementing cloud applications and services using industry
standards. The cloud provider should also perform risk assessments using
qualitative and quantitative methods at certain intervals to check the storage,
flow, and processing of data.

7.5.9 Protection from Shared Technology Vulnerabilities

In the cloud architecture, the hypervisor is responsible for mediating the
interactions between virtual machines and the physical hardware. Therefore, the
hypervisor must be secured to ensure the proper functioning of the other
virtualization components and to implement isolation between virtual machines
(VMs). Moreover, to avoid shared technology threats in the cloud, a strategy must
be developed and implemented covering all the service models, including
infrastructure, platform, software, and user security. Baseline requirements for all
cloud components must be created and employed in the design of the cloud
architecture. The service provider should also monitor vulnerabilities in the cloud
environment and regularly release patches to fix them.

In a nutshell, organizations should apply the following mitigation techniques to
protect against this type of threat:

• Apply good authentication and access control methods.
• Monitor the cloud environment for unauthorized activities.
• Use SLAs for patching, weakness remediation, vulnerability
scanning, and configuration reviews.

7.5.10 Protection from SQL Injection, XSS, Google Hacking and Forced
Hacking

In order to secure the cloud against various security threats such as SQL injection,
Cross Site Scripting (XSS), DoS and DDoS attacks, Google Hacking, and
Forced Hacking, different cloud service providers adopt different techniques.
A few standard techniques to detect the above mentioned attacks include:

• Avoiding the use of dynamically generated SQL in the code,
• Finding the meta-structures used in the code,
• Validating all user-entered parameters, and
• Disallowing and removing unwanted data and characters.
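The first and third points, avoiding dynamically generated SQL and validating user-entered parameters, can be illustrated with Python's built-in sqlite3 module: the query text stays fixed, and user input is passed separately as a bound parameter, so it can never be interpreted as SQL. The table and data below are made up for the sketch.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user(conn, name: str):
    # Parameterized query: the '?' placeholder is bound, not concatenated,
    # so input like "x' OR '1'='1" cannot alter the SQL statement.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

assert find_user(conn, "alice") == [("alice", "admin")]
assert find_user(conn, "x' OR '1'='1") == []   # injection attempt returns nothing
```

Had the query been built by string concatenation, the second call would have returned every row in the table.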

A generic security framework needs to be worked out for an optimized
cost-performance ratio. The main criteria to be fulfilled by the generic security
framework are to interface with any type of cloud environment, and to be able to
handle and detect predefined as well as customized security policies. A similar
approach is used by Symantec Message Labs Web Security cloud, which blocks
security threats originating from the internet and filters data before it reaches the
network. The web security cloud’s security architecture rests on two
components:

Multi-layer Security: In order to ensure data security and block possible
malware, the platform consists of multiple layers of security, giving it a strong
security foundation.

URL filtering: It has been observed that attacks are launched through
various web pages and internet sites, and hence filtering of web pages
ensures that no such harmful or threat-carrying web pages are accessible. Also,
content from undesirable sites can be blocked.

With its adaptable technology, it provides security even in highly conflicting
environments and ensures protection against new and converging malware
threats. The security model of Amazon Web Services, one of the biggest cloud
service providers in the market, makes use of a multi-factor authentication
technique, ensuring enhanced control over AWS account settings and the
management of the AWS services and resources for which the account is
subscribed. In case the customer opts for Multi Factor Authentication (MFA),
a six-digit code has to be provided in addition to the username and password
before access is granted to the AWS account or services. This single-use code
can be received on a mobile device every time the customer tries to log in to
his/her AWS account. Such a technique is called multi-factor authentication
because two factors are checked before access is granted.
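Six-digit single-use codes of this kind are typically time-based one-time passwords (TOTP, RFC 6238): the device and the service share a secret and each derives the same short-lived code from the current time. Below is a standard-library sketch of the algorithm, not AWS's actual implementation; the check uses the published RFC 6238 test secret.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, period=30):
    """Derive the current time-based one-time password from a shared secret."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // period)
    msg = struct.pack(">Q", counter)                 # counter as big-endian 64-bit
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, at=59))   # RFC 6238 test time: prints 287082
```

Because the code changes every 30 seconds, a stolen code is useless to an attacker almost immediately.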

A Google hacking database identifies various types of information, such as
login passwords, pages containing logon portals, and session usage information.
Various software solutions such as web vulnerability scanners can be used to
detect the possibility of a Google hack. In order to prevent Google hacking, users
need to ensure that only information that cannot harm them is shared with
Google. This prevents the sharing of any sensitive information that may result in
adverse conditions.

7.5.11 Protection from IP Spoofing

In the case of IP spoofing, an attacker impersonates authorized users, creating the
impression that packets are coming from reliable sources. Thus the attacker
takes control of the client’s data or system by posing as the trusted party.
Spoofing attacks can be checked by using encryption techniques and performing
user authentication based on key exchange. Techniques like IPsec do help in
mitigating the risks of spoofing. By enabling encryption for sessions and
filtering incoming and outgoing packets, spoofing attacks can be reduced.
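The packet-filtering countermeasure above can be sketched with Python's ipaddress module: an edge router drops outbound packets whose source address does not belong to the internal network (egress filtering), which stops hosts inside the network from spoofing outside addresses. The network ranges are hypothetical.

```python
import ipaddress

# Hypothetical internal network behind the edge router
INTERNAL_NET = ipaddress.ip_network("192.168.1.0/24")

def allow_outbound(src_ip: str) -> bool:
    """Egress filter: only packets with a legitimate internal source may leave."""
    return ipaddress.ip_address(src_ip) in INTERNAL_NET

assert allow_outbound("192.168.1.42")      # genuine internal host
assert not allow_outbound("203.0.113.9")   # spoofed external source: dropped
```

The mirror-image check on inbound traffic (dropping packets that claim an internal source but arrive from outside) defends against the spoofing direction described in the text.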

7.6 IDENTITY AND ACCESS MANAGEMENT (IAM)

Identity and access management (IAM) is a framework of business processes,
policies and technologies that facilitates the management of electronic or
digital identities. With an IAM framework in place, information technology
digital identities. With an IAM framework in place, information technology
(IT) managers can control user access to critical information within their
organizations. Systems used for IAM include single sign-on systems, two-
factor authentication, multifactor authentication and privileged access
management. These technologies also provide the ability to securely store
identity and profile data as well as data governance functions to ensure that
only data that is necessary and relevant is shared. IAM systems can be
deployed on premises, provided by a third-party vendor through a cloud-based
subscription model or deployed in a hybrid model.

On a fundamental level, Identity and Access Management encompasses the
following components:

• how individuals are identified in a system (understand the
difference between identity management and authentication);
• how roles are identified in a system and how they are assigned to
individuals;
• adding, removing and updating individuals and their roles in a system;
• assigning levels of access to individuals or groups of individuals; and
• protecting the sensitive data within the system and securing the system
itself.

7.6.1 Benefits of IAM

IAM technologies can be used to initiate, capture, record and manage user
identities and their related access permissions in an automated manner. An
organization gains the following IAM benefits:

• Access privileges are granted according to policy, and all individuals
and services are properly authenticated, authorized and audited.
• Companies that properly manage identities have greater control of user
access, which reduces the risk of internal and external data breaches.
• Automating IAM systems allows businesses to operate more efficiently
by decreasing the effort, time and money that would be required to
manually manage access to their networks.
• In terms of security, the use of an IAM framework can make it easier to
enforce policies around user authentication, validation and privileges,
and address issues regarding privilege creep.
• IAM systems help companies better comply with government
regulations by allowing them to show that corporate information is not
being misused. Companies can also demonstrate that any data needed
for auditing can be made available on demand.

7.6.2 Types of Digital Authentication

With IAM, enterprises can implement a range of digital authentication
methods to prove digital identity and authorize access to corporate resources.

Unique passwords: The most common type of digital authentication is the
unique password. To make passwords more secure, some organizations require
longer or more complex passwords that combine letters, symbols and numbers.
Unless users can automatically gather their collection of passwords behind a
single sign-on entry point, they typically find remembering unique passwords
onerous.
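The complexity requirement just mentioned (a combination of letters, symbols and numbers, plus a minimum length) can be expressed as a short check. The exact rules below are an illustrative policy, not a standard.

```python
import string

def is_strong(password: str, min_length: int = 12) -> bool:
    """Require a minimum length plus at least one letter, digit and symbol."""
    return (
        len(password) >= min_length
        and any(c in string.ascii_letters for c in password)
        and any(c in string.digits for c in password)
        and any(c in string.punctuation for c in password)
    )

assert is_strong("correct-Horse7-battery")
assert not is_strong("short1!")          # too short
assert not is_strong("lettersonlyhere")  # no digits or symbols
```

Checks like this are typically enforced at registration time, before the password is hashed and stored.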

Pre-Shared Key (PSK): PSK is another type of digital authentication where
the password is shared among users authorized to access the same resources --
think of a branch office Wi-Fi password. This type of authentication is less
secure than individual passwords. A concern with shared passwords like PSK
is that frequently changing them can be cumbersome.

Behavioral Authentication: When dealing with highly sensitive information
and systems, organizations can use behavioral authentication to get far more
granular and analyze keystroke dynamics or mouse-use characteristics. By
applying artificial intelligence, a trend in IAM systems, organizations can
quickly recognize if user or machine behavior falls outside of the norm and can
automatically lock down systems.

Biometrics: Modern IAM systems use biometrics for more precise
authentication. For instance, they collect a range of biometric characteristics,
including fingerprints, irises, faces, palms, gaits, voices and, in some cases,
DNA. Biometrics and behavior-based analytics have been found to be more
effective than passwords.

7.6.3 IAM and Cloud Security

In cloud computing, data is stored remotely and accessed over the Internet.
Because users can connect to the Internet from almost any location and any
device, most cloud services are device- and location-agnostic. Users no longer
need to be in the office or on a company-owned device to access the cloud.
And in fact, remote workforces are becoming more common.

As a result, identity, not the network perimeter, becomes the most important
point of controlling access. Identity is the component of a strong security
posture that takes on a particularly critical role in the cloud. The concept of
identity in the cloud can refer to many things, but in this unit we will focus
on two main entities: users and cloud resources.

The user's identity, not their device or location, determines what cloud data
they can access and whether they can have any access at all.

With cloud computing, sensitive files are stored in a remote cloud server.
Because employees of the company need to access the files, they do so by
logging in via browser or an app. IAM helps prevent identity-based attacks and
data breaches that come from privilege escalations (when an unauthorized user
has too much access). Thus, IAM systems are essential for cloud computing,
and for managing remote teams. It is a cloud service that controls the
permissions and access for users and cloud resources. IAM policies are sets of
permission policies that can be attached to either users or cloud resources to
authorize what they access and what they can do with it.
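For illustration, an IAM permission policy is typically expressed as a JSON document. The AWS-style sketch below grants read-only access to a single storage bucket; the bucket name is hypothetical:

```python
import json

# An AWS-style identity policy granting read-only access to one storage
# bucket. "example-reports-bucket" is a hypothetical resource name.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports-bucket",
                "arn:aws:s3:::example-reports-bucket/*",
            ],
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Attaching such a document to a user (or a group) authorizes exactly the listed actions on the listed resources and nothing more.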

The concept that “identity is the new perimeter” dates back to 2012, when AWS
first announced their IAM service. We are now witnessing a renewed focus on
IAM due to the rise of abstracted cloud services and the recent wave of
high-profile data breaches.

Services that don’t expose any underlying infrastructure rely heavily on IAM
for security. Managing a large number of privileged users with access to an

ever-expanding set of services is challenging. Managing separate IAM roles
and groups for these users and resources adds yet another layer of complexity.
Cloud providers like AWS and Google Cloud help customers solve these
problems with tools like the Google Cloud IAM Recommender (currently in
beta) and the AWS IAM Access Advisor. These tools attempt to analyze the
services last accessed by users and resources, and help you find out which
permissions might be over-privileged. These tools indicate that cloud providers
recognize these access challenges, which is definitely a step in the right
direction. However, there are a few more challenges we need to consider.

7.6.4 Challenges in IAM

Following are some of the challenges in using identity and access


management:

• IAM and Single-Sign-On (SSO): Most businesses today use some


form of single sign-on (SSO), such as Okta, to manage the way users
interact with cloud services. This is an effective way of centralizing
access across a large number of users and services. While using SSO to
log into public cloud accounts is definitely the best practice, the
mapping between SSO users and IAM roles can become challenging, as
users can have multiple roles that span several cloud accounts.

• Effective Permissions: Considering that users and services have more


than one permission-set attached to them, understanding the effective
permissions of an entity becomes difficult.
o What can s/he access?
o Which actions can s/he perform on these services?
o If s/he accesses a virtual machine, does s/he inherit the IAM
permissions of that resource?
o Is s/he part of a group that grants additional permissions?
With layers upon layers of configurations and permission
profiles, questions like these become difficult to answer.
• Multi-cloud: According to RightScale, more than 84% of organizations
use a multi-cloud strategy. Each provider has its own policies, tools and
terminology. There is no common language that helps you understand
relationships and permissions across cloud providers.
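To make the effective-permissions questions above concrete, the sketch below models one common evaluation order: an explicit deny overrides any allow, and the default answer is deny. This is a deliberate simplification; real policy engines such as AWS's also match resources, conditions and wildcards:

```python
def is_allowed(policies: list[dict], action: str) -> bool:
    """Simplified effective-permission check across every policy attached
    to an entity (user, group, role): explicit Deny always wins, then any
    Allow, otherwise default-deny.
    """
    decision = False
    for policy in policies:
        for statement in policy["Statement"]:
            if action in statement["Action"]:
                if statement["Effect"] == "Deny":
                    return False          # explicit deny overrides everything
                if statement["Effect"] == "Allow":
                    decision = True
    return decision

# Hypothetical policies: one attached to the user, one inherited from a group.
user_policy  = {"Statement": [{"Effect": "Allow", "Action": ["db:Read", "db:Write"]}]}
group_policy = {"Statement": [{"Effect": "Deny",  "Action": ["db:Write"]}]}

assert is_allowed([user_policy, group_policy], "db:Read")
assert not is_allowed([user_policy, group_policy], "db:Write")   # deny wins
assert not is_allowed([user_policy, group_policy], "db:Delete")  # default deny
```

Even in this toy model, answering "what can this user actually do?" requires collecting every attached policy first, which is exactly why layered permission profiles are hard to audit.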

7.6.5 Right Use of IAM Security

IAM is a crucial aspect of cloud security. Businesses must look at IAM as a part
of their overall security posture and add an integrated layer of security across
their application lifecycle.

Cloud providers deliver a great baseline for implementing a least-privileged


approach to permissions. As cloud adoption scales in your organization, the
challenges mentioned above and more will become apparent, and you might
need to look at multi-cloud solutions to solve them. Some important aspects
are as follows:

• Don’t use root accounts - Always create individual IAM users with
relevant permissions, and don’t give your root credentials to anyone.
• Adopt a role-per-group model - Assign policies to groups of users
based on the specific things those users need to do. Don’t “stack” IAM
roles by assigning roles to individual users and then adding them to
groups. This will make it hard for you to understand their effective
permissions.

• Grant least-privilege - Only grant the least amount of permissions
needed for a job, for example, a function that only needs to read from a
single database table. This will ensure that if a user or resource is
compromised, the blast radius is reduced to the one or few things that
entity was permitted to do. This is an ongoing task. As your application
is constantly changing, you need to make sure that your permissions
adapt accordingly.

• Leverage cloud provider tools - Managing many permission profiles


at scale is challenging. Leverage the platforms you are already using to
generate least-privilege permission sets and analyze your existing
services. Remember that the cloud provider recommendation is to
always manually review the generated profiles before implementing
them.
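The role-per-group guidance above can be sketched as a small data model: policies attach only to groups, and users inherit permissions purely through membership. All names here are hypothetical:

```python
# Role-per-group sketch: policies attach to groups, users inherit via
# membership, and no policy is ever attached directly to a user.
GROUP_POLICIES = {
    "developers": ["repo-read-write", "staging-deploy"],
    "auditors":   ["logs-read-only"],
}
USER_GROUPS = {"asha": ["developers"], "ravi": ["developers", "auditors"]}

def effective_policies(user: str) -> set[str]:
    """Union of the policies granted by every group the user belongs to."""
    return {p for g in USER_GROUPS.get(user, []) for p in GROUP_POLICIES[g]}

assert effective_policies("ravi") == {"repo-read-write", "staging-deploy",
                                      "logs-read-only"}
assert effective_policies("asha") == {"repo-read-write", "staging-deploy"}
```

Because every grant flows through a group, answering "why does this user have this permission?" reduces to listing group memberships, which is the auditability benefit the bullet points describe.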

7.7 SECURITY AS A SERVICE (SECaaS)

Security as a Service (SECaaS) can most easily be described as a cloud


delivered model for outsourcing security/cybersecurity services. Much like
Software as a Service, SECaaS provides security services on a subscription
basis hosted by cloud providers. Security as a Service solutions have become
increasingly popular for corporate infrastructures as a way to ease the in-house
security team’s responsibilities, scale security needs as the businesses grows,
and avoid the costs and maintenance of on-premise alternatives.

7.7.1 Benefits of SECaaS

Following are some of the benefits of the SECaaS:

• Cost Savings: One of the biggest benefits of a Security as a Service


model is that it saves money. A cloud delivered service is often
available in subscription tiers with several upgrade options so a
business only pays for what they need, when they need. It also
eliminates the need for expertise.
• The Latest Security Tools and Updates: When you implement
SECaaS, you get to work with the latest security tools and resources.
For anti-virus and other security tools to be effective, they must be kept
up to date with the latest patches and virus definitions. By deploying
SECaaS throughout your organization, these updates are managed for
you on every server, PC and mobile device.
• Faster Provisioning and Greater Agility: One of the best things
about as-a-service solutions is that your users can be given access to
these tools immediately. SECaaS solutions can be scaled up or down as
required and are provided on demand where and when you need them.
That means no more uncertainty when it comes to deployment or

updates as everything is managed for you by your SECaaS provider and
visible to you through a web-enabled dashboard.
• Free Up Resources: When security provisions are managed externally,
your IT teams can focus on what is important to your organization.
SECaaS frees up resources, gives you total visibility through
management dashboards and the confidence that your IT security is
being managed competently by a team of outsourced security
specialists. You can also choose for your IT teams to take control of
security processes if you prefer and manage all policy and system
changes through a web interface.

Examples of SECaaS include the security services like:

• Continuous Monitoring
• Data Loss Prevention (DLP)
• Business Continuity and Disaster Recovery (BC/DR or BCDR)
• Email Security
• Antivirus Management
• Spam Filtering
• Identity and Access Management (IAM)
• Intrusion Protection
• Security Assessment
• Network Security
• Security Information and Event Management (SIEM)
• Web Security
• Vulnerability Scanning

Combining the most significant features of two distinct cloud service providers
for your IT strategy can create countless possibilities and flexibility by using
multi-cloud computing. Let us study the multi-cloud concept in the next
section.

7.8 MULTI-CLOUD COMPUTING

The term multi-cloud refers to the utilization of virtual data storage or


computing resources from more than one public cloud service provider, with or
without using an existing private cloud and on-premises infrastructure.
Let’s say that you want to develop an app that meets your customer base’s
demands and are looking into public cloud possibilities to support some of the
features. As time goes on, your clients will expect innovations that are only
accessible through an app from a different vendor. Instead of worrying that
you’re locked in to a single vendor, consider merging those desired features with
your existing ones. Although it’s worthwhile for the reasons we list below,
keep in mind that to facilitate mutual scalability, you’ll also need to host your
app in the vendor’s public cloud and buy their app.

Additionally, some businesses pursue multi-cloud strategies due to data


sovereignty concerns. Enterprise data must be physically located in a specific
area per certain laws, regulations, and organizational policies.

Multi-cloud computing can assist the company in meeting those requirements
as they can choose from multiple IaaS providers’ data center regions or
availability zones. This flexibility in where cloud data is placed also allows
organizations to locate resources close to the end users to achieve the best
performance and minimal latency.

Some businesses are still determining if a cloud strategy is viable, and others
have acted to expand their deployments and establish multi-cloud
environments. Organizations can compete in competitive marketplaces thanks
to the range of options, cost savings, business agility, and innovation prospects.

Multi-cloud adoption decisions are based mainly on three factors:

• Sourcing
• Architecture
• Governance

Combining the most significant features of two distinct cloud service providers
for your IT strategy can create countless possibilities and flexibility. Continue
reading to discover and understand the major benefits of multi-cloud in the
following section.

7.8.1 Benefits of Multi-Cloud

Following are the benefits of adopting multi-cloud by the organizations:

Enhanced service delivery from multiple clouds

Organizations using multi-cloud can reduce downtime for critical services with
the help of a deliberate strategy and architecture. The organizations with the
lowest levels of downtime are those with explicit cloud strategies and
architectures. Additionally, they adopt several other behaviors as follows:

• Using a workload allocation method to choose the cloud where an


offering should be implemented to acquire the optimum platform-to-
workload fit.
• Deploying and orchestrating workloads across different clouds while
maximizing performance, availability, and cost through the usage of a
cloud service broker.
• Using a systematic on-boarding approach for cloud workloads.
Consistency and speed are achieved by implementing a deliberate
approach to deploying and utilizing multiple clouds. These companies
can fix issues with cloud-delivered services and resume normal
operations faster than everyone else.

Security

By using a multi-cloud strategy, a company can increase security standards. By


adding new services to the entire corporate portfolio and providing clear
instructions on how users can authenticate data, how it can flow, and where it
can live, IT can lessen the risk of data loss and leakage, shoddy authentication,
and lateral platform compromises.
Cost savings
Overall expenses can be reduced by carefully considering where and how
workloads are distributed across multiple clouds. IT teams can achieve these
savings via a workload placement process. It allows them to consider an
offering’s architecture when determining whether to transfer it to the cloud and
how.

For instance, it might have branches for constructing a platform to use PaaS
choices in an IaaS environment, executing a direct lift and shift on a workload
well suited to IaaS, or performing a rewrite for the cloud.

Creating Redundant Architectures

By diversifying the hosting regions for your infrastructure when you deploy
with multiple clouds, you can ensure high availability for your customers. As a
result, your users will still have access to the features and services deployed on
other clouds, even if one of your cloud providers experiences technical
difficulties.

Fast and Low-latency Infrastructure

A considerably faster, low-latency infrastructure is possible when your


company expands its networks to include multiple providers. Customers will
have a better user experience due to this improvement in application response
times. This highly optimized connection can only occur if there are private
links between two cloud service providers.

Avoid Lock-ins with a Single Vendor

If you build applications for just one cloud vendor, you risk becoming locked
in with them. As a result, switching providers in the future will be considerably
more difficult. Even though that specific vendor was appropriate for you at the
time, it might not be as convenient if you need to scale up or down.

Additionally, you might pass up some future discounts that are much better.
Developers can work to design apps that work across several platforms by
choosing a multi-cloud strategy from the beginning. As a result, you’ll always
have the freedom to benefit from the most excellent offers or features from
other vendors without compromising what you can provide for your clients.

Check Your Progress 2


1) How to secure the Cloud?

…………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………
2) What are the various security aspects that one needs to remember while
opting for Cloud services?

…………………………………………………………………………………

…………………………………………………………………………………
…………………………………………………………………………………
3) How to choose a SECaaS Provider?

…………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………

7.9 SUMMARY

Cloud computing is getting widely adopted in businesses around the world.


However, there are different security issues associated with it. In order to
maintain the trust of customers, security should be considered as an integral
part of cloud. In this Unit we have focused on most severe threats on cloud
computing that are considered relevant by most users and businesses. We have
divided these threats into categories of data threats, networks threats, and cloud
environment specific threats. The impact of these threats on cloud users and
providers has been illustrated in this unit. Moreover, we also discuss the
security techniques that can be adopted to avoid these threats. Also, towards
the end we had discussed the IAM and SECaaS.

7.10 SOLUTIONS / ANSWERS

Check Your Progress 1

1. In the 1990s, business and personal data was stored locally and
security was local as well. Data would be located on a PC’s internal
storage at home, and on enterprise servers if you worked for a company.

Introducing cloud technology has forced everyone to reevaluate cyber


security. Your data and applications might be floating between local
and remote systems and always Internet-accessible. For example, if you
are accessing Google Docs on your smartphone, or using Salesforce
software to look after your customers, that data could be held
anywhere. Therefore, protecting it becomes more difficult than when it
was just a question of stopping unwanted users from gaining access to
your network. Cloud security requires adjusting some previous IT
practices, but it has become more essential for two key reasons:

Convenience over security: Cloud computing is exponentially


growing as a primary method for both workplace and individual use.
Innovation has allowed new technology to be implemented quicker
than industry security standards can keep up, putting more
responsibility on users and providers to consider the risks of
accessibility.

Centralization and multi-tenant storage: Every component, from core
infrastructure to small data like emails and documents, can now be
located and accessed remotely on 24X7 web-based connections. All
this data gathering in the servers of a few major service providers can
be highly dangerous. Threat actors can now target large multi-
organizational data centers and cause immense data breaches.

Unfortunately, malicious actors realize the value of cloud-based targets


and increasingly probe them for exploits. Despite cloud providers
taking many security roles from clients, they do not manage everything.
This leaves even non-technical users with the duty to self-educate on
cloud security.

That said, users are not alone in cloud security responsibilities. Being
aware of the scope of your security duties will help the entire system
stay much safer.

2. Every cloud security measure works to accomplish one or more of the


following:

• Enable data recovery in case of data loss


• Protect storage and networks against malicious data theft
• Deter human error or negligence that causes data leaks
• Reduce the impact of any data or system compromise

Data security is an aspect of cloud security that involves the technical end
of threat prevention. Tools and technologies allow providers and clients to
insert barriers between the access and visibility of sensitive data. Among
these, encryption is one of the most powerful tools available. Encryption
scrambles your data so that it's only readable by someone who has the
encryption key. If your data is lost or stolen, it will be effectively
unreadable and meaningless. Data transit protections like virtual private
networks (VPNs) are also emphasized in cloud networks.

Identity and access management (IAM) pertains to the accessibility


privileges offered to user accounts. Managing authentication and
authorization of user accounts also apply here. Access controls are pivotal
to restrict users — both legitimate and malicious — from entering and
compromising sensitive data and systems. Password Management, multi-
factor authentication, and other methods fall in the scope of IAM.
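Multi-factor authentication commonly relies on time-based one-time passwords (TOTP, RFC 6238), the six-digit codes produced by authenticator apps. A minimal sketch of the standard algorithm using only the Python standard library:

```python
import hashlib, hmac, struct, time

def totp(secret: bytes, at_time=None, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    counter = int((time.time() if at_time is None else at_time) // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# With at_time=0 this reproduces the counter-0 HOTP test vector
# from RFC 4226 for the ASCII secret "12345678901234567890".
assert totp(b"12345678901234567890", at_time=0) == "755224"
```

Because the code depends on a shared secret and the current 30-second window, a stolen password alone is not enough to authenticate, which is exactly the layered protection IAM systems aim for.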

Governance focuses on policies for threat prevention, detection, and


mitigation. With SMB and enterprises, aspects like threat intel can help
with tracking and prioritizing threats to keep essential systems guarded
carefully. However, even individual cloud clients could benefit from
valuing safe user behavior policies and training. These apply mostly in
organizational environments, but rules for safe use and response to threats
can be helpful to any user.

Data retention (DR) and business continuity (BC) planning involve


technical disaster recovery measures in case of data loss. Central to any DR
and BC plan are methods for data redundancy such as backups.
Additionally, having technical systems for ensuring uninterrupted
operations can help. Frameworks for testing the validity of backups and
detailed employee recovery instructions are just as valuable for a thorough
business continuity plan.

Legal compliance revolves around protecting user privacy as set by


legislative bodies. Governments have taken up the importance of protecting
private user information from being exploited for profit. As such,
organizations must follow regulations to abide by these policies. One
approach is the use of data masking, which obscures identity within data
via encryption methods.
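Data masking can be as simple as replacing direct identifiers with salted pseudonyms, so records stay joinable for analytics without revealing identity. The sketch below is illustrative only; the salt value and domain are placeholders, and production systems typically use managed tokenization or format-preserving encryption:

```python
import hashlib

def mask_email(email: str, salt: str = "per-dataset-secret") -> str:
    """Replace an address with a deterministic salted pseudonym.

    Deterministic so masked records remain joinable; the salt keeps the
    mapping from being reversed by hashing guessed addresses.
    """
    digest = hashlib.sha256((salt + email).encode()).hexdigest()[:12]
    return f"user-{digest}@masked.example"
```

Masking the same address always yields the same pseudonym, while distinct addresses map to distinct ones, so joins and counts over the masked data still work.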

3. Some common cloud security risks/threats include:

• Risks of cloud-based infrastructure including incompatible legacy IT


frameworks, and third-party data storage service disruptions.
• Internal threats due to human error such as misconfiguration of user
access controls.
• External threats caused almost exclusively by malicious actors, such
as malware, phishing, and DDoS attacks.

The biggest risk with the cloud is that there is no perimeter. Traditional
cyber security focused on protecting the perimeter, but cloud environments
are highly connected which means insecure APIs (Application
Programming Interfaces) and account hijacks can pose real problems.
Faced with cloud computing security risks, cyber security professionals
need to shift to a data-centric approach.

Interconnectedness also poses problems for networks. Malicious actors


often breach networks through compromised or weak credentials. Once a
hacker manages to make a landing, they can easily expand and use poorly
protected interfaces in the cloud to locate data on different databases or
nodes. They can even use their own cloud servers as a destination where
they can export and store any stolen data.

Third-party storage of your data and access via the internet each pose their
own threats as well. If for some reason those services are interrupted, your
access to the data may be lost. For instance, a phone network outage could
mean you can't access the cloud at an essential time. Alternatively, a power
outage could affect the data center where your data is stored, possibly with
permanent data loss.

Such interruptions could have long-term repercussions. A recent power


outage at an Amazon cloud data facility resulted in data loss for some
customers when servers incurred hardware damage. This is a good example
of why you should have local backups of at least some of your data and
applications.

Check Your Progress 2

1. Fortunately, there is a lot that you can do to protect your own data in
the cloud. Let’s explore some of the popular methods.

Encryption is one of the best ways to secure your cloud computing
systems. There are several different ways of using encryption, and they
may be offered by a cloud provider or by a separate cloud security
solutions provider:
• Encryption of communications with the cloud in their entirety.
• Particularly sensitive data encryption, such as account credentials.
• End-to-end encryption of all data that is uploaded to the cloud.

Within the cloud, data is more at risk of being intercepted when it is on the
move. When it's moving between one storage location and another, or
being transmitted to your on-site application, it's vulnerable. Therefore,
end-to-end encryption is the best cloud security solution for critical data.
With end-to-end encryption, at no point is your communication made
available to outsiders without your encryption key.
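The point of end-to-end encryption is that intercepted ciphertext is useless without the key. The toy one-time-pad round trip below illustrates that property only; real deployments use vetted ciphers such as AES (via TLS or a maintained cryptography library), never hand-rolled schemes:

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of `data` with the corresponding key byte."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"quarterly financial report"
key = secrets.token_bytes(len(message))   # random key as long as the message

ciphertext = xor_bytes(message, key)      # what an eavesdropper would see
recovered = xor_bytes(ciphertext, key)    # only the key holder can do this

assert recovered == message
```

Anyone who captures `ciphertext` in transit sees only random-looking bytes; recovering the message requires the key, which is why key management (backup, rotation, keeping keys out of the cloud that holds the data) matters so much.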

You can either encrypt your data yourself before storing it on the cloud, or
you can use a cloud provider that will encrypt your data as part of the
service. However, if you are only using the cloud to store non-sensitive
data such as corporate graphics or videos, end-to-end encryption might be
overkill. On the other hand, for financial, confidential, or commercially
sensitive information, it is vital.

If you are using encryption, remember that the safe and secure
management of your encryption keys is crucial. Keep a key backup and
ideally don't keep it in the cloud. You might also want to change your
encryption keys regularly so that if someone gains access to them, they will
be locked out of the system when you make the changeover.

Configuration is another powerful practice in cloud security. Many cloud


data breaches come from basic vulnerabilities such as misconfiguration
errors. By preventing them, you are vastly decreasing your cloud security
risk. If you don’t feel confident doing this alone, you may want to consider
using a separate cloud security solutions provider.

Here are a few principles you can follow:

• Never leave the default settings unchanged: Using the default settings
gives a hacker front-door access. Avoid doing this to complicate a
hacker’s path into your system.
• Never leave a cloud storage bucket open: An open bucket could allow
hackers to see the content just by opening the storage bucket's URL.
• If the cloud vendor gives you security controls that you can switch
on, use them. Not selecting the right security options can put you at
risk.

2. Security should be one of the main points to consider when it comes to


choosing a cloud security provider. That’s because your cyber security is
no longer just your responsibility: cloud security companies must do their
part in creating a secure cloud environment and share the responsibility for
data security.

Unfortunately, cloud companies are not going to give you the blueprints to
their network security. This would be equivalent to a bank providing you
with details of their vault, complete with the combination numbers to the
safe.

However, getting the right answers to some basic questions gives you
better confidence that your cloud assets will be safe. In addition, you will
be more aware of whether your provider has properly addressed obvious
cloud security risks. We recommend asking your cloud provider some of
the following questions:

• Security audits: “Do you conduct regular external audits of your


security?”
• Data segmentation: “Is customer data logically segmented and kept
separate?”
• Encryption: “Is our data encrypted? What parts of it are encrypted?”
• Customer data retention: “What customer data retention policies are
being followed?”
• User data retention: “Is my data properly deleted if I leave your
cloud service?”
• Access management: “How are access rights controlled?”

You will also want to make sure you’ve read your provider’s terms of
service (TOS). Reading the TOS is essential to understanding if you are
receiving exactly what you want and need.

Be sure to check that you also know all the services used with your
provider. If your files are on Dropbox or backed up on iCloud (Apple's
storage cloud), that may well mean they are actually held on Amazon's
servers. So, you will need to check out AWS, as well as the service you
are using directly.

3. Hiring a third-party cloud service for the security of your most critical
and sensitive business assets is a massive undertaking. Choosing a SECaaS
provider takes careful consideration and evaluation. Here are some of the
most important considerations when selecting a provider:

• Availability: Your network must be available 24 hours a day and so


should your SECaaS provider. Vet out the vendor’s SLA to make sure
they can provide the uptime your business needs and to know how
outages are handled.

• Fast Response Times: Fast response times are just as important as


availability. Look for providers that offer guaranteed response times for
incidents, queries and system updates.

• Disaster Recovery Planning: Your provider should work closely with


you to understand the vulnerabilities of your infrastructure and the
external threats that are most likely to cause the most damage. From
vandalism to weather disasters, your provider should ensure your
business can recover quickly from these disruptive events.

• Vendor Partnerships: A SECaaS provider is only ever as good as the
vendors it has forged partnerships with. Look for providers that
work with best-in-class security solution vendors and who also have the
expertise to support these solutions.

7.11 FURTHER READINGS

1. Cloud Computing: Principles and Paradigms, Rajkumar Buyya, James


Broberg and Andrzej M. Goscinski, Wiley, 2011.
2. Mastering Cloud Computing, Rajkumar Buyya, Christian Vecchiola,
and Thamarai Selvi, Tata McGraw Hill, 2013.
3. Essentials of Cloud Computing, K. Chandrasekhran, CRC Press, 2014.
4. Cloud Computing, Sandeep Bhowmik, Cambridge University Press,
2017.
