Cloud Computing and IoT
Indira Gandhi National Open University
School of Computer and Information Sciences
Block 2
RESOURCE PROVISIONING,
LOAD BALANCING AND SECURITY
UNIT 4
Resource Pooling, Sharing and Provisioning
UNIT 5
Scaling
UNIT 6
Load Balancing
UNIT 7
Security Issues in Cloud Computing
BLOCK INTRODUCTION
The title of the block is Resource Provisioning, Load Balancing and Security. The objective
of this block is to help you understand the underlying concepts of resource
provisioning, load balancing and security in cloud computing.
The block is organized into 4 units:
Unit 4 covers an overview of resource pooling, resource pooling architecture, resource sharing,
resource provisioning and its various approaches. Towards the end, VM sizing is discussed.
Unit 5 covers the overview of cloud elasticity, scaling primitives, scaling strategies
(proactive and reactive scaling), auto scaling and types of scaling.
Unit 6 covers load balancing, goals of load balancing, levels of load balancing, load
balancing algorithms and load balancing as a service.
Unit 7 covers cloud security, how it differs from traditional (legacy) IT security, cloud
computing security requirements, challenges in providing cloud security, threats, ensuring
security, Identity and Access Management, and Security-as-a-Service.
PROGRAMME DESIGN COMMITTEE
Prof. (Retd.) S.K. Gupta, IIT, Delhi
Prof. T.V. Vijay Kumar, JNU, New Delhi
Prof. Ela Kumar, IGDTUW, Delhi
Prof. Gayatri Dhingra, GVMITM, Sonipat
Mr. Milind Mahajan, Impressico Business Solutions, New Delhi
Sh. Shashi Bhushan Sharma, Associate Professor, SOCIS, IGNOU
Sh. Akshay Kumar, Associate Professor, SOCIS, IGNOU
Dr. P. Venkata Suresh, Associate Professor, SOCIS, IGNOU
Dr. V.V. Subrahmanyam, Associate Professor, SOCIS, IGNOU
Sh. M.P. Mishra, Assistant Professor, SOCIS, IGNOU
Dr. Sudhansh Sharma, Assistant Professor, SOCIS, IGNOU
SOCIS FACULTY
Prof. P. Venkata Suresh, Director, SOCIS, IGNOU
Prof. V.V. Subrahmanyam, SOCIS, IGNOU
Prof. Sandeep Singh Rawat, SOCIS, IGNOU
Prof. Divakar Yadav, SOCIS, IGNOU
Dr. Akshay Kumar, Associate Professor, SOCIS, IGNOU
Dr. M.P. Mishra, Associate Professor, SOCIS, IGNOU
Dr. Sudhansh Sharma, Assistant Professor, SOCIS, IGNOU
Print Production
Mr. Sanjay Aggarwal, Assistant Registrar (Publication), MPDD
December, 2023
© Indira Gandhi National Open University, 2023
ISBN-
All rights reserved. No part of this work may be reproduced in any form, by mimeograph or any other means, without permission in writing from
the Indira Gandhi National Open University.
Further information on the Indira Gandhi National Open University courses may be obtained from the University’s office at Maidan Garhi, New
Delhi-110068.
Printed and published on behalf of the Indira Gandhi National Open University, New Delhi by MPDD, IGNOU.
UNIT 4 RESOURCE POOLING, SHARING
AND PROVISIONING
Structure
4.1 Introduction
4.2 Objectives
4.3 Resource Pooling
4.4 Resource Pooling Architecture
4.4.1 Server Pool
4.4.2 Storage Pool
4.4.3 Network Pool
4.5 Resource Sharing
4.5.1 Multi Tenancy
4.5.2 Types of Tenancy
4.5.3 Tenancy at Different Level of Cloud Services
4.6 Resource Provisioning and Approaches
4.6.1 Static Approach
4.6.2 Dynamic Approach
4.6.3 Hybrid Approach
4.7 VM Sizing
4.8 Summary
4.9 Solutions / Answers
4.10 Further Readings
4.1 INTRODUCTION
4.2 OBJECTIVES
4.3 RESOURCE POOLING
To create resource pools, providers need to set up strategies for categorizing
and managing resources. Consumers have no control over, or knowledge of, the
actual locations of the physical resources. However, some service providers
may offer a choice of geographic location at a higher abstraction level, such
as a region or country, from which the customer can obtain resources. This is
generally possible with large service providers who have multiple data
centers across the world.
that resources are dynamically allocated based on demand, optimizing
their utilization.
3. On-Demand Self-Service: Users can access and provision resources as
needed without direct interaction with the cloud provider. This self-
service aspect allows for quick scalability and flexibility, enabling
users to increase or decrease their resource usage based on current
requirements.
4. Elasticity: Resource pooling enables elasticity, allowing users to scale
their resources up or down dynamically in response to changing
workloads. This scalability ensures that users can handle fluctuations in
demand efficiently without overprovisioning or underutilizing
resources.
5. Virtualization: Technologies like virtualization play a crucial role in
resource pooling by abstracting physical resources and creating virtual
instances that can be allocated and managed more flexibly.
Virtualization enables better resource allocation, isolation, and
management.
Overall, resource pooling in cloud computing allows for a more
efficient and flexible utilization of computing resources, providing
users with the benefits of scalability, cost-effectiveness, and on-demand
access to a wide array of resources.
Server pools are composed of multiple physical servers, along with the
operating system, networking capabilities and other necessary software
installed on them. Virtual machines are then configured over these servers
and combined to create virtual server pools. Customers can choose virtual
machine configurations from the available templates (provided by the cloud
service provider) during provisioning. Also, dedicated processor and memory
pools are created from processors and memory devices and maintained
separately. Processor and memory components from their respective pools can
then be linked to virtual servers when demand for increased capacity arises. They
can further be returned to the pool of free resources when load on the virtual
servers decreases.
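The attach-on-demand and return-to-pool mechanics described above can be sketched in a few lines of Python. This is purely illustrative; the class, method and field names are our own invention, not any provider's API:

```python
class ResourcePool:
    """A hypothetical pool of free CPU and memory units (illustration only)."""

    def __init__(self, cpus, memory_gb):
        self.free_cpus = cpus
        self.free_memory_gb = memory_gb

    def attach(self, vm, cpus, memory_gb):
        # Link extra capacity to a virtual server when demand rises.
        if cpus > self.free_cpus or memory_gb > self.free_memory_gb:
            raise RuntimeError("pool exhausted")
        self.free_cpus -= cpus
        self.free_memory_gb -= memory_gb
        vm["cpus"] += cpus
        vm["memory_gb"] += memory_gb

    def detach(self, vm, cpus, memory_gb):
        # Return capacity to the pool of free resources when load decreases.
        vm["cpus"] -= cpus
        vm["memory_gb"] -= memory_gb
        self.free_cpus += cpus
        self.free_memory_gb += memory_gb


pool = ResourcePool(cpus=64, memory_gb=256)
vm = {"cpus": 2, "memory_gb": 4}        # a small template from the catalogue
pool.attach(vm, cpus=2, memory_gb=4)    # demand spike: grow to 4 vCPUs
pool.detach(vm, cpus=2, memory_gb=4)    # load drops: shrink back
```

Note that the pool only tracks free capacity; real providers also enforce placement, isolation and quota policies on top of this.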
Storage resources are among the essential components needed for improving
performance, data management and protection. They are frequently accessed by
users or applications, and are needed to meet growing requirements, maintain
backups, migrate data, etc.
Storage pools are composed of file-based, block-based or object-based storage
made up of storage devices like disks or tapes, and are available to users in
virtualized mode.
…………………………………………………………………………………………
…………………………………………………………………………………………
…………………………………………………………………………………………
2. Explain Resource Pooling Architecture.
…………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………
4.5.1 Multi-Tenancy
There are two types of tenancy, namely single tenancy and multi-tenancy.
3. One App Instance and One Database per Tenant - It is the architecture
where the whole application is installed separately for each tenant. Each tenant
has its own separate app and database instance. This allows a high degree of
data isolation but increases the cost.
Multi-tenancy can be applied not only in public clouds but also in private or
community deployment models. It can also be applied to all three service
models - Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and
Software as a Service (SaaS). When multi-tenancy is implemented at the
infrastructure level, it makes the higher levels multi-tenant to a certain
extent as well.
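The "one app instance and one database per tenant" model above amounts to routing each tenant to its own isolated data store. A minimal sketch in Python, with entirely hypothetical tenant names and connection URLs:

```python
# Hypothetical tenant-to-database routing table for the
# separate-database-per-tenant model (names and URLs are invented).
TENANT_DATABASES = {
    "tenant_a": "postgresql://db-a.internal/app",
    "tenant_b": "postgresql://db-b.internal/app",
}


def database_for(tenant_id):
    """Resolve a tenant to its isolated database; unknown tenants are
    rejected rather than silently sharing another tenant's data."""
    try:
        return TENANT_DATABASES[tenant_id]
    except KeyError:
        raise LookupError(f"unknown tenant: {tenant_id}")


url = database_for("tenant_a")
```

In the shared-database models, this lookup would instead return one common URL plus a tenant identifier used to filter rows.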
4.6.1 Static Approach
In static resource provisioning, resources are allocated to virtual machines
only once, at the beginning, according to the user's or application's
requirements, and the allocation is not expected to change afterwards. Hence,
it is suitable for applications that have predictable and static workloads.
Once a virtual machine is created, it is expected to run without any further
allocations.
4.7 VM SIZING
match their workload requirements. For example, an instance type
might offer configurations like "small," "medium," or "large" with
specific allocations of resources.
…………………………………………………………………………………………
…………………………………………………………………………………………
…………………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………
4.8 SUMMARY
b) Storage pools - They are composed of file-based, block-based or
object-based storage made up of storage devices like disks or tapes,
available to users in virtualized mode.
c) Network pools - They are composed of different networking devices
like- gateways, switches, routers, etc. Virtual networks are then created
from these physical networking devices and offered to customers.
Customers can further build their own networks using these virtual
networks.
3. Storage pools are composed of file-based, block-based or object-based
storage.
beginning when creating virtual machines, in order to limit the
complexity of provisioning. Dynamic provisioning is done later
for re-provisioning when the workload changes during run-time.
This approach can be efficient for real-time applications.
UNIT 5 SCALING
Structure
5.0 Introduction
5.1 Objectives
5.2 Cloud Elasticity
5.3 Scaling Primitives
5.4 Scaling Strategies
5.4.1 Proactive Scaling
5.4.2 Reactive Scaling
5.4.3 Combinational Scaling
5.5 Auto Scaling in Cloud
5.6 Types of Scaling
5.6.1 Vertical Scaling or Scaling Up
5.6.2 Horizontal Scaling or Scaling Out
5.7 Summary
5.8 Solutions/Answers
5.9 Further Readings
5.0 INTRODUCTION
In the earlier unit we had studied resource pooling, sharing and provisioning in
cloud computing. In this unit let us study other important characteristic
features of cloud computing – Cloud Elasticity and Scaling.
In this unit we will focus on the various methods and algorithms used in the
process of scaling. We will discuss various types of scaling, their usage and a
few examples. We will also discuss the importance of various techniques in
saving cost and manual effort by using the concepts of cloud scaling in highly
dynamic situations. The suitability of scaling techniques in different scenarios
is also discussed in detail.
5.1 OBJECTIVES

5.2 CLOUD ELASTICITY
Cloud Elasticity is the property of a cloud to grow or shrink capacity for CPU,
memory, and storage resources to adapt to the changing demands of an
organization. Cloud Elasticity can be automatic, without the need to perform
capacity planning in advance, or it can be a manual process where the
organization is notified that it is running low on resources and can then
decide to add or reduce capacity. Monitoring tools offered by the cloud
provider dynamically adjust the resources allocated to an organization
without impacting existing cloud-based operations.
In the absence of Cloud Elasticity, organizations would face paying for largely
unused capacity and handling the ongoing management and upkeep of that
capacity, including tasks like OS upgrades, patching, and addressing
component failures. Cloud Elasticity serves as a defining factor in cloud
computing, setting it apart from other models like client-server setups, grid
computing, or traditional infrastructure.
Cloud Elasticity acts as a vital tool for businesses, preventing both over-
provisioning (allocating more IT resources than necessary for current
demands) and under-provisioning (failing to allocate sufficient resources to
meet existing or imminent demands).
given time, eliminating the necessity to acquire or retire on-premises
infrastructure to cater to fluctuating demand.
Now let us study scaling concept in the next section after understanding the
cloud elasticity and underlying concepts.
• Minimum cost: The user has to pay only a minimal cost for the use of
hardware after upscaling. The hardware cost for the same scale can
be much greater than the cost paid by the user, and maintenance
and other overheads are not included here either. Further, when the
resources are no longer required, they may be returned to the service
provider, resulting in cost savings.
• Ease of use: Cloud upscaling and downscaling can be done in just
a few minutes (sometimes dynamically) by using the service provider's
application interface.
[Two figures: cost and workload plotted against time, with checkpoints marked]
In the case of clouds, virtual environments are utilized for resource
allocation. Virtual machines enable clouds to be elastic in nature and can be
configured according to the workload of the applications in real time. In
such scenarios, downtime is minimized and scaling is easy to achieve.
Scaling also saves the cost of hardware setup for small, short-lived peaks or
dips in load. In general, most cloud service providers provide scaling as a
process for free and charge for the additional resources used. Scaling is a
common service provided by almost all cloud platforms.
Let us now see what the strategies for scaling are, how one can achieve scaling
in a cloud environment, and what its types are. In general, scaling is categorized
based on the decision taken for achieving scaling. The three main strategies for
scaling are discussed below.
[Figure 3: Proactive Scaling - load vs. time of day]
5.4.2 Reactive Scaling
Reactive scaling monitors the workload and enables smooth responses to
workload changes with minimum cost. It empowers users to scale computing
resources up or down rapidly. In simple words, when hardware such as CPU or
RAM or any other resource reaches its highest utilization, more of that
resource is added to the environment by the service provider. Auto scaling
works on the policies defined by the users/resource managers for traffic and
scaling. One major concern with reactive scaling is a quick change in load,
i.e. users experience lags while the infrastructure is being scaled. Figure 4
shows the resource provisioning in reactive scaling.
[Figure 4: Reactive Scaling - load vs. time of day]
5.4.3 Combinational Scaling
Till now we have seen need-based and forecast-based techniques for scaling.
However, for better performance and a low cooldown period we can also
combine the reactive and proactive scaling strategies when we have some
prior knowledge of traffic. This helps in scheduling timely scaling for the
expected load, while load-based scaling remains available for load beyond
what was predicted. This way, both sudden and expected traffic surges are
addressed.
Working | User sets the threshold, but a downtime is required. | User-defined threshold values optimize the resources.
Check Your Progress 1
…………………………………………………………………………
…………………………………………………………………………
…………………………………………………………………………
…………………………………………………………………………
…………………………………………………………………………
…………………………………………………………………………
In a cloud, auto scaling can be achieved using user-defined policies, various
machine health checks and schedules. Parameters such as request counts, CPU
usage and latency are the key inputs for decision making in autoscaling. A
policy here refers to the instruction set for the cloud in a particular
scenario (for scaling up or scaling down). Autoscaling in the cloud is done
on the basis of the following parameters.
The process of auto scaling also requires a cooldown period before scaling
resumes after a scaling event takes place. No two concurrent scaling events
are triggered, so as to maintain integrity. The cooldown period allows the
effect of an autoscaling action to be reflected in the system within a
specified time interval and prevents integrity issues in the cloud
environment.
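The cooldown gate between scaling events can be sketched as follows. This is an illustrative toy (class and parameter names are ours), not any provider's autoscaling API:

```python
import time


class AutoScaler:
    """Sketch of cooldown gating: a new scaling event is rejected until
    the previous one has had time to take effect."""

    def __init__(self, cooldown_seconds):
        self.cooldown = cooldown_seconds
        self.last_scaled = float("-inf")   # no scaling event yet

    def try_scale(self, now=None):
        now = time.monotonic() if now is None else now
        if now - self.last_scaled < self.cooldown:
            return False                    # still cooling down: reject
        self.last_scaled = now
        return True                         # accept and start a new cooldown


scaler = AutoScaler(cooldown_seconds=300)
first = scaler.try_scale(now=0)      # accepted
second = scaler.try_scale(now=120)   # rejected: cooldown not yet elapsed
third = scaler.try_scale(now=400)    # accepted: cooldown elapsed
```

Passing `now` explicitly makes the gate easy to test; in production the monotonic clock default would be used.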
[Figure: cost and workload plotted against time]
Consider a more specific scenario: when the resource requirement is high for
some known duration, e.g. holidays or weekends, scheduled scaling can be
performed. Here the time and scale/magnitude/threshold of scaling can be
defined in advance to meet specific requirements, based on previous knowledge
of traffic. The threshold level is also an important parameter in auto
scaling, as a low threshold value results in under-utilization of cloud
resources and a high threshold value results in higher latency in the cloud.
If, after adding nodes in a scale-up, the incoming requests per second per
node drop below the scale-down threshold, an alternating sequence of scale-up
and scale-down events, known as the ping-pong effect, is triggered. To avoid
both under-scaling and over-scaling issues, load testing is recommended to
meet the service level agreements (SLAs). To reduce the chances of the
ping-pong effect, the configuration should ensure that:
1. the number of incoming requests per second per node > threshold of
scale-down, after a scale-up, and
2. the number of incoming requests per second per node < threshold of
scale-up, after a scale-down.
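The two anti-ping-pong conditions can be checked programmatically. The helper below is a hypothetical illustration (the function name and all numbers are ours):

```python
def avoids_ping_pong(rps, n_before, u, d, t_up, t_down):
    """Check both anti-ping-pong conditions for a candidate configuration:
    after a scale-up the per-node rate must stay above the scale-down
    threshold, and after a scale-down it must stay below the scale-up
    threshold."""
    after_up = rps / (n_before + u)            # RPS_n after adding U nodes
    after_down = rps / max(1, n_before - d)    # RPS_n after removing D nodes
    return after_up > t_down and after_down < t_up


# Wide thresholds leave room on both sides; narrow ones invite ping-pong.
ok = avoids_ping_pong(1800, 6, u=2, d=2, t_up=460, t_down=160)
bad = avoids_ping_pong(1800, 6, u=2, d=2, t_up=400, t_down=300)
```

Here the first configuration is safe (225 > 160 and 450 < 460), while the second fails the first condition (225 is below the scale-down threshold of 300).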
Now we know what scaling is and how it affects applications hosted on the
cloud. Let us now discuss how auto scaling can be performed in fixed amounts
as well as in percentages of the current capacity.
--------------------------------------------------------------------------------------------
Algorithm 1: Fixed-amount scaling
--------------------------------------------------------------------------------------------
Input: SLA-specific application
Parameters:
N_min - minimum number of nodes
D - scale-down value
U - scale-up value
T_U - scale-up threshold
T_D - scale-down threshold
Let T(SLA) return the maximum incoming requests per second (RPS) per node
for the specific SLA.
Let N_c and RPS_n represent the current number of nodes and the incoming
requests per second per node, respectively.
L1: /* scale up (if RPS_n > T_U) */
Repeat:
    N_c_old ← N_c
    N_c ← N_c + U
    RPS_n ← RPS_n x N_c_old / N_c
Until RPS_n ≤ T_U
L2: /* scale down (if RPS_n < T_D) */
Repeat:
    N_c_old ← N_c
    N_c ← max(N_min, N_c - D)
    RPS_n ← RPS_n x N_c_old / N_c
Until RPS_n ≥ T_D or N_c = N_min
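Algorithm 1 translates directly into runnable Python. The sketch below is illustrative; the threshold values in the example run are our own, chosen so the loops terminate after one step, mirroring the single-step rows of the tables that follow:

```python
def scale_up(rps, n_c, u, t_u):
    # Repeatedly add U nodes until the per-node rate falls to T_U or below.
    while rps / n_c > t_u:
        n_c += u
    return n_c


def scale_down(rps, n_c, d, t_d, n_min):
    # Repeatedly remove D nodes (never going below N_min) until the
    # per-node rate climbs back to T_D or above.
    while rps / n_c < t_d and n_c > n_min:
        n_c = max(n_min, n_c - d)
    return n_c


# Example: 4 nodes at RPS = 1800 with U = 2 and an assumed T_U = 360
# grow to 6 nodes (RPS_n drops from 450 to 300).
nodes = scale_up(1800, 4, u=2, t_u=360)
```

Note that RPS_n is recomputed implicitly as rps / n_c on every iteration, which is equivalent to the RPS_n ← RPS_n × N_c_old / N_c update in the pseudocode.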
Now, let us discuss how this algorithm works in detail. Let the values of a
few parameters be given as U = 2, D = 2, T_U = 120 and T_D = 150. Suppose in
the beginning RPS = 450 and N_c = 4, so RPS_n = 112.5, just below T_U. When
RPS increases to 1800, RPS_n exceeds T_U; in this situation an autoscaling
request is generated, adding U = 2 nodes at each step. Table 1 lists all the
parameters as per the scale-up requirements.
Table 1: Scale-up with U = 2

RPS      Nodes added   N_c   RPS_n
450      0             4     112.50
1800     2             6     300.00
2510     2             8     313.75
3300     2             10    330.00
4120     2             12    343.33
5000     2             14    357.14
Similarly, in the case of scaling down, let initially RPS = 8000 and N_c = 19.
When RPS is reduced to 6200, RPS_n reaches T_D; here an autoscaling request is
initiated, deleting D = 2 nodes at each step. Table 2 lists all the parameters
as per the scale-down requirements.
Table 2: Scale-down with D = 2

RPS      Nodes removed   N_c   RPS_n
8000     0               19    421.05
6200     2               17    364.70
4850     2               15    323.33
3500     2               13    269.23
2650     2               11    240.90
1900     2               9     211.11
The tables show the stepwise increase/decrease in cloud capacity with respect
to the change in load on the application (requests per second per node).
Percentage Scaling
The algorithm given below scales the number of nodes up or down by a
percentage of the current capacity for the respective autoscaling.
-----------------------------------------------------------------------------------------------
Algorithm 2: Percentage scaling
-----------------------------------------------------------------------------------------------
Input: SLA-specific application
Parameters:
N_min - minimum number of nodes
D - scale-down percentage
U - scale-up percentage
T_U - scale-up threshold
T_D - scale-down threshold
Let T(SLA) return the maximum requests per second (RPS) per node for the
specific SLA.
Let N_c and RPS_n represent the current number of nodes and the incoming
requests per second per node, respectively.
L1: /* scale up (if RPS_n > T_U) */
Repeat:
    N_c_old ← N_c
    N_c ← N_c + max(1, N_c x U/100)
    RPS_n ← RPS_n x N_c_old / N_c
Until RPS_n ≤ T_U
L2: /* scale down (if RPS_n < T_D) */
Repeat:
    N_c_old ← N_c
    N_c ← max(N_min, N_c - max(1, N_c x D/100))
    RPS_n ← RPS_n x N_c_old / N_c
Until RPS_n ≥ T_D or N_c = N_min
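The percentage scale-down loop can be sketched in runnable Python. This translation is illustrative; we assume the fractional node count N_c × D/100 is rounded down before taking the max with 1, which matches the running example where 8% of 19 nodes removes a single node per step:

```python
def pct_scale_down(rps, n_c, d_pct, t_d, n_min):
    # Remove max(1, floor(N_c * D/100)) nodes per step until the per-node
    # rate climbs back to T_D, or the pool reaches its minimum size.
    while rps / n_c < t_d and n_c > n_min:
        n_c = max(n_min, n_c - max(1, int(n_c * d_pct / 100)))
    return n_c


# With D = 8% and T_D = 230 (values from the running example), a pool of
# 19 nodes serving 3920 RPS sheds one node per step until RPS_n >= 230,
# settling at 17 nodes (3920 / 17 ≈ 230.6).
nodes = pct_scale_down(3920, 19, d_pct=8, t_d=230, n_min=1)
```

Unlike the fixed-amount version, the step size here grows with the pool, so large pools converge in fewer iterations.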
Similarly, in the case of scaling down, initially RPS = 5000 and N_c = 19;
when RPS reduces to 4140, RPS_n reaches T_D, a scale-down is requested, and
one node is deleted, i.e. max(1, floor(19 x 8/100)) = 1. The detailed
upscaling example, with D = 8, U = 1, N_min = 1, T_D = 230 and T_U = 290, is
given in Table 3.
Table 3: Percentage scale-up with U = 1%

RPS      Nodes added   N_c   RPS_n
500      0             6     83.33
1695     1             7     242.14
2190     1             8     273.75
2600     1             9     288.88
3430     1             10    343.00
3940     1             11    358.18
4420     1             12    368.33
4960     1             13    381.53
5500     1             14    392.85
5950     1             15    396.60
The scaling down with the same algorithm is detailed in the table below.
Table 4: Percentage scale-down with D = 8%

RPS      Nodes removed   N_c   RPS_n
5000     0               19    263.15
3920     1               18    217.77
3510     1               17    206.47
3200     1               16    200.00
2850     1               15    190.00
2600     1               14    185.71
2360     1               13    181.53
2060     1               12    171.66
1810     1               11    164.50
1500     1               10    150.00
Here, if we compare Algorithms 1 and 2, it is clear that the effective values
of U and D are on the higher side in the case of Algorithm 2. In this
scenario the utilization of hardware is higher and the cloud leaves a smaller
footprint.
2) In Algorithm 1 for fixed-amount auto scaling, recalculate the values in the
table if U = 3.
…………………………………………………………………………
…………………………………………………………………………
…………………………………………………………………………
…………………………………………………………………………………………
…………………………………………………………………………………………
…………………………………………………………………………………………
Let us now discuss the types of scaling, i.e. how cloud infrastructure
capacity is enhanced or reduced. In general, we scale the cloud in a vertical
or horizontal way, by either provisioning more powerful resources or by
adding more instances of the same resources.
Vertical scaling in the cloud refers to either scaling up, i.e. enhancing the
computing resources, or scaling down, i.e. reducing the computing resources
for an application. In vertical scaling, the actual number of VMs is constant
but the quantity of resources allocated to each of them is increased or
decreased. No infrastructure is added and the application code is not
changed. Vertical scaling is limited by the capacity of the physical machine
or server running in the cloud. If one has to upgrade the hardware of an
existing cloud environment, this can be achieved with minimal changes.
[Figure: vertical scaling - An IT resource (a virtual server with two CPUs) is scaled up by replacing it with a more powerful IT resource with increased capacity (a physical server with four CPUs).]
[Figure: horizontal scaling - An IT resource (Virtual Server A) is scaled out, over pooled physical servers, by adding more of the same IT resources (Virtual Servers B and C).]
5.7 SUMMARY
We are now aware of various types of scaling, scaling strategies and their use
in real situations. Cloud service providers like Amazon AWS, Microsoft Azure
and Google offer scaling services based on application requirements. These
services are a great help to entrepreneurs who run small to medium businesses
and seek IT infrastructure support. We have also discussed various advantages
of cloud scaling for business applications.
1. The cloud is used extensively for serving applications and in other
scenarios where the cost and installation time of infrastructure/capacity
scaling would otherwise be high. Scaling helps in achieving optimized
infrastructure for the current and expected load of the applications with
minimum cost and setup time. Scaling also helps in reducing disaster recovery
time, if a disaster happens. (for details see section 5.3)
3. The reactive scaling technique only works on the actual variation of load
on the application, whereas combinational scaling works for both expected and
real traffic. A good estimate of load increases the performance of
combinational scaling.
RPS      Nodes added   N_c   RPS_n
450      0             4     112.50
1800     3             7     257.14
2510     3             10    251.00
3300     3             13    253.84
4120     3             16    257.50
5000     3             19    263.15
3. When auto scaling takes place in the cloud, a small time interval (pause)
prevents the triggering of the next auto scale event. This helps in
maintaining integrity in the cloud environment for applications. Once the
cooldown period is over, the next auto scaling event can be accepted.
UNIT 6 LOAD BALANCING
Structure
6.0 Introduction
6.1 Objectives
6.2 Load Balancing and its Importance
6.2.1 Importance of Load Balancing
6.2.2 Goals of Load Balancing in Cloud Computing
6.2.3 How a Load Balancer Works?
6.3 Types of Load Balancers
6.3.1 Types of Load Balancers based on the Functionality
6.3.2 Types of Load Balancers based on the Configuration
6.4 Load Balancing Algorithms – Static and Dynamic
6.4.1 Static Load Balancing Algorithms
6.4.2 Dynamic Load Balancing Algorithms
6.5 Load Balancing as a Service (LBaaS)
6.5.1 Open Stack LBaaS
6.6 Summary
6.7 Solutions/Answers
6.8 Further Readings
6.0 INTRODUCTION
In the earlier unit, we have studied Cloud Elasticity and Scaling which are very
important characteristics of a cloud. In this unit, we will focus on another
important aspect of cloud computing namely load balancing.
In this unit, you will study importance of load balancing, goals of load
balancing, levels of load balancing, load balancing algorithms and load
balancing as a service.
6.1 OBJECTIVES
6.2.1 Importance of Load Balancing
scalability, improved performance and enhanced security are the other goals of
load balancing in cloud computing.
…………………………………………………………………………………………
…………………………………………………………………………………………
…………………………………………………………………………………………
2) What are the goals of load balancing?
…………………………………………………………………………………………
…………………………………………………………………………………………
3) Why is it imperative in Cloud Computing to balance the cloud load?
…………………………………………………………………………………………
…………………………………………………………………………………………
• They are standalone appliances that sit between the incoming traffic
and the servers, managing the distribution of requests.
• Hardware load balancers are known for their high performance,
specialized hardware optimizations, and ability to handle high volumes
of traffic efficiently.
• These devices often offer robust reliability features and specialized
hardware for load balancing tasks.
Examples: F5 Networks' BIG-IP, Citrix ADC (formerly known as Netscaler),
and Barracuda Load Balancer are examples of hardware load balancers.
This load balancer is different from both the software and hardware load
balancers, as it is a hardware load balancer's program running on a virtual
machine. Through virtualization, this kind of load balancer imitates a
software-driven infrastructure: the program of the hardware equipment is
executed on a virtual machine, which redirects traffic accordingly. However,
such load balancers have challenges similar to those of physical on-premise
balancers, viz. lack of central management, lower scalability and much more
limited automation.
A load balancing algorithm is the logic, a set of predefined rules, which a load
balancer uses to route traffic among servers.
Algorithms in this class are also known as off-line algorithms, in which
information about the VMs is required to be known in advance. Thus, static
algorithms generally obtain better overall performance than dynamic
algorithms. However, in real clouds demands change over time, so static
resource allocation algorithms can easily violate the requirements of dynamic
VM allocation. Some of the static load balancing algorithms are as follows:
network administrator. Servers deemed able to handle more traffic
will receive a higher weight. Weighting can be configured within DNS
records.
Algorithms in this class are also known as online algorithms, in which VMs
are dynamically allocated according to the load at each time interval. The
load information of a VM is not obtained until it enters the scheduling
stage. These algorithms can dynamically reconfigure VM placement in
combination with VM migration techniques. In comparison to static algorithms, dynamic
algorithms have a higher competitive ratio. Some of the dynamic load
balancing algorithms are as follows:
LBaaS is available as part of the services provided by major cloud platforms
like AWS, Azure, Google Cloud, and others. Users can configure and manage
load balancers through web-based interfaces or APIs provided by the cloud
service providers. This service abstraction allows businesses to focus on their
applications' functionality and scalability while relying on the cloud provider's
infrastructure for efficient load balancing.
In the next section let us study how OpenStack LBaaS works.
OpenStack LBaaS allows users to create a load balancer to balance the traffic
load between instances; it resides in front of a group of instances and
manages traffic distribution. LBaaS v2 allows you to configure multiple
listener ports on a single load balancer IP address.
The LBaaS service consists of a load balancer, a pool, pool members, a
listener and a health monitor. High Availability Proxy (HAProxy) is used to
implement the load balancing. Figure 4 below will help you understand the
various components of OpenStack LBaaS.
Load Balancer: The load balancer collects requests from listeners and routes
the traffic to the appropriate instance. It is assigned an IP from the same
subnet on which the instances are running. Traffic from the outside network
is redirected to the LB IP, and the LB routes the traffic to the instances as
per the configured load balancer policies.
Listener: Load balancers can listen for requests on multiple ports. Each one
of those ports is specified by a listener.
Pool: A pool holds a list of members that serve content through the load
balancer.
Health monitor: The health monitor keeps track of the status of the pool
members. If a member is not in a healthy state, the health monitor redirects
its traffic to another healthy instance. Health monitors are associated with
pools.
Member: Members are the servers that serve traffic behind a load balancer.
Each member is specified by the IP address and port that it uses to serve
traffic.
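The interplay of pool, members and health monitor can be modelled generically in Python. This is not the OpenStack API, only a toy model of the concepts (class and method names are ours):

```python
class Pool:
    """Generic pool of members with health-aware round-robin routing."""

    def __init__(self, members):
        self.members = list(members)              # (ip, port) pairs
        self.healthy = {m: True for m in self.members}
        self._next = 0

    def mark(self, member, is_healthy):
        # The health monitor reports the status of each member.
        self.healthy[member] = is_healthy

    def route(self):
        # Cycle round-robin over members, skipping unhealthy ones.
        for _ in range(len(self.members)):
            m = self.members[self._next % len(self.members)]
            self._next += 1
            if self.healthy[m]:
                return m
        raise RuntimeError("no healthy members in pool")


pool = Pool([("10.0.0.11", 80), ("10.0.0.12", 80)])
pool.mark(("10.0.0.11", 80), False)   # monitor flags a failed member
target = pool.route()                  # traffic goes to the healthy member
```

A real health monitor would probe members periodically (e.g. TCP connect or HTTP checks) and call something like `mark` with the result.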
…………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………
2) Briefly explain the Round Robin and Weighted Round Robin Algorithms.
…………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………
6.6 SUMMARY
In this unit we studied load balancing and its associated algorithms. Load
balancing in cloud computing is a critical mechanism that optimizes the
distribution of incoming network traffic across multiple servers or resources.
Acting as a traffic manager, the load balancer ensures that no single server is
overwhelmed, thereby enhancing performance, maximizing resource
utilization, and maintaining high availability.
Load balancers sit between backend servers and client devices, receive
server requests, and distribute them to available, capable servers. Cloud
load balancing is the process of distributing traffic such as UDP,
TCP/SSL, HTTP(S), HTTP/2 with gRPC, and QUIC to multiple
backends to increase security, avoid congestion, and reduce costs and
latency.
3. In the cloud, load balancing is critical for the following reasons. Load
balancing technology is less costly and easier to use than the alternatives,
letting firms deliver better outcomes at a lower cost. The scalability of
cloud load balancing helps manage website traffic: high-end network and
server traffic can be effectively managed using effective load balancers.
E-commerce businesses rely on cloud load balancing to manage and distribute
workloads in the face of numerous visitors every second. Load balancers can
also deal with abrupt spikes in traffic; for example, if there are too many
requests for university results, the website may otherwise be shut down. With
a load balancer it is unnecessary to be concerned about the flow of traffic:
whatever the scale of the traffic, load balancers will evenly distribute the
website's load over several servers, producing the best outcomes in the
shortest amount of time.
The primary benefit of utilizing a load balancer is to ensure that the
website does not go down unexpectedly. This means that if a single
node fails, the load is automatically shifted to another node on the
network. It allows for more adaptability, scalability, and traffic
handling.
Weighted Round Robin algorithms were created to address the most
problematic aspect of plain Round Robin: it ignores server capacity. In
this algorithm each server is assigned a weight, and work is distributed
in proportion to the weight values. Higher-capacity servers are given
higher weights and therefore receive a larger share of the requests, so
traffic is spread according to what each server can actually handle.
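The idea above can be sketched in a few lines of Python. This is an illustrative toy, not a production scheduler: the server names and weights are made up, and real load balancers typically use a smoother interleaving (for example, nginx's smooth weighted round robin) rather than a simple expanded cycle.

```python
from itertools import cycle

def weighted_round_robin(servers):
    """Yield server names in proportion to their weights.

    `servers` is a list of (name, weight) pairs; the weights here are
    illustrative relative capacities, not part of any standard API.
    """
    # Repeat each server according to its weight, then cycle forever.
    expanded = [name for name, weight in servers for _ in range(weight)]
    return cycle(expanded)

# Example: server "a" (weight 3) receives three requests for every
# one request sent to server "b" (weight 1).
rr = weighted_round_robin([("a", 3), ("b", 1)])
first_eight = [next(rr) for _ in range(8)]
print(first_eight)  # ['a', 'a', 'a', 'b', 'a', 'a', 'a', 'b']
```

Note that this naive expansion sends weight-sized bursts to the heaviest server; smooth variants interleave the selections to avoid such bursts while preserving the same long-run proportions.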
UNIT 7 SECURITY ISSUES IN CLOUD
COMPUTING
Structure
7.0 Introduction
7.1 Objectives
7.2 Cloud Security
7.2.1 How Cloud Security is Different from Traditional IT Security?
7.2.2 Cloud Computing Security Requirements
7.3 Security Issues in Cloud Service Delivery Models
7.4 Security Issues in Cloud Deployment Models
7.0 INTRODUCTION
In the earlier unit, we studied load balancing in cloud computing; in
this unit we will focus on another important aspect, namely cloud
security. Broadly, cloud security spans the following categories:
• Data security
• Identity and access management (IAM)
• Governance (policies on threat prevention, detection, and mitigation)
• Data retention (DR) and business continuity (BC) planning
• Legal compliance
In this unit, you will study what cloud security is, how it is different
from traditional (legacy) IT security, cloud computing security
requirements, challenges in providing cloud security, threats, ensuring
security, Identity and Access Management, and Security-as-a-Service.
7.1 OBJECTIVES
Cloud security is the whole bundle of technology, protocols, and best practices
that protect cloud computing environments, applications running in the cloud,
and data held in the cloud. Securing cloud services begins with understanding
what exactly is being secured, as well as the system aspects that must be
managed.
The full scope of cloud security is designed to protect the following, regardless
of your responsibilities:
• Physical networks — routers, electrical power, cabling, climate
controls, etc.
• Data storage — hard drives, etc.
• Data servers — core network computing hardware and software
• Computer virtualization frameworks — virtual machine software,
host machines, and guest machines
• Operating systems (OS) — software on which all other software runs
• Middleware — application programming interface (API) management, etc.
• Runtime environments — execution and upkeep of a running program
• Data — all the information stored, modified, and accessed
• Applications — traditional software services (email, tax software,
productivity suites, etc.)
• End-user hardware — computers, mobile devices, Internet of Things
(IoT) devices, etc.
Cloud security may appear like traditional (legacy) IT security, but this
framework actually demands a different approach. Before diving deeper, let
us first look at how it differs from legacy IT security in the next section.
7.2.1 How Cloud Security is Different from Traditional IT Security?
Traditional IT security has undergone an immense evolution due to the shift
to cloud-based computing. While cloud models allow for more convenience,
their always-on connectivity requires new considerations to keep them
secure. Cloud security, as a modernized cyber security solution, stands out
from legacy IT models in a few ways.
Data storage: The biggest distinction is that older models of IT relied heavily
upon onsite data storage. Organizations have long found that building all IT
frameworks in-house for detailed, custom security controls is costly and rigid.
Cloud-based frameworks have helped offload costs of system development and
upkeep, but also remove some control from users.
Proximity to other networked data and systems: Since cloud systems are a
persistent connection between cloud providers and all their users, this
substantial network can compromise even the provider themselves. In
networking landscapes, a single weak device or component can be exploited to
infect the rest. Cloud providers expose themselves to threats from the many
end-users that they interact with, whether they are providing data storage
or other services. Additional network security responsibilities fall upon
providers who otherwise delivered products that live purely on end-user
systems instead of their own.
Solving most cloud security issues means that users and cloud providers,
in both personal and business environments, remain proactive about their
own roles in cyber security. This two-pronged approach means users and
providers must mutually address their respective responsibilities.
There are four main cloud computing security requirements that help to ensure
the privacy and security of cloud services: confidentiality, integrity,
availability, and accountability.
Confidentiality
Integrity
Availability
Availability is the ability of the consumer to utilize the system as expected.
One of the significant advantages of cloud computing is its data availability.
Cloud computing enhances availability through authorized entry. In addition,
availability requires timely support and robust equipment. A client’s
availability may be ensured as one of the terms of a contract; to guarantee
availability, a provider may secure huge capacity and excellent architecture.
Because availability is a main part of the cloud computing system, increased
use of the environment will increase the possibility of a lack of availability and
thus could reduce the cloud computing system’s performance. Cloud
computing affords clients two ways of paying for cloud services: on-demand
resources and (the cheaper option) resource reservation. The optimal virtual-
machine (VM) placement mechanism helps to reduce the cost of both payment
methods. By reducing the cost of running VMs for many cloud providers, it
supports expected changes in demand and price. This method involves the
client making a declaration to pay for certain resources owned by the cloud
computing providers using the Session Initiation Protocol (SIP) optimal
solution.
Accountability
Access Control: To admit only legitimate users, the cloud must have the
right access control policies. Such services must be flexible, well
planned, and conveniently administered. The access control provisions
must be integrated on the basis of the Service Level Agreement (SLA).
Policy Integration: End users access services from many cloud providers,
such as Amazon and Google. Because each provider uses its own policies and
approaches, conflicts between those policies must be kept to a minimum.
In the following sections, let us discuss major threats and issues in cloud
computing with respect to the cloud service delivery models and cloud
deployment models.
…………………………………………………………………………………………
…………………………………………………………………………………………
…………………………………………………………………………………………
2) How does cloud security work?
…………………………………………………………………………………………
…………………………………………………………………………………………
3) Mention various cloud security risks and discuss briefly.
…………………………………………………………………………………………
…………………………………………………………………………………………
Data loss is the second most important issue related to cloud security. Like
data breach, data loss is a sensitive matter for any organization and can have a
devastating effect on its business. Data loss mostly occurs due to malicious
attackers, data deletion, data corruption, loss of data encryption key, faults in
storage systems, or natural disasters. In 2013, 44% of cloud service providers
faced brute force attacks that resulted in data loss and data leakage.
Similarly, malware attacks have also been targeted at cloud applications
resulting in data destruction.
SQL injection attacks are those in which malicious code is inserted into a
standard SQL query. The attackers thereby gain unauthorized access to a
database and are able to access sensitive information. Sometimes the
hacker's input is misinterpreted by the website as user data, allowing it
to reach the SQL server; this lets the attacker learn the functioning of
the website and make changes to it. Various techniques are used to check
SQL injection attacks, such as avoiding the use of dynamically generated
SQL in the code and using filtering techniques to sanitize user input.
Some researchers have proposed a proxy-based architecture for preventing
SQL injection attacks, which dynamically detects and extracts users'
inputs for suspected SQL control sequences.
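The mitigations described above (avoiding dynamically generated SQL and sanitizing user input) are easiest to see side by side. The following Python sketch uses the standard sqlite3 module with a hypothetical users table; the vulnerable function concatenates input into the query string, while the safe one passes it through a parameterized placeholder.

```python
import sqlite3

# Hypothetical users table, for demonstration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user_unsafe(name):
    # VULNERABLE: user input is concatenated into the SQL string,
    # so input can alter the query's logic.
    return conn.execute(
        "SELECT name FROM users WHERE name = '" + name + "'").fetchall()

def find_user_safe(name):
    # SAFE: the ? placeholder keeps the input as data, never as SQL code.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"           # classic injection string
print(find_user_unsafe(payload))  # matches every row: [('alice',)]
print(find_user_safe(payload))    # matches nothing: []
```

The injected `' OR '1'='1` turns the unsafe query's WHERE clause into a tautology; with the placeholder, the same string is simply a name that matches no row.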
7.3.1.4 Cross Site Scripting (XSS) Attacks
Cross Site Scripting (XSS) attacks, which inject malicious scripts into Web
contents have become quite popular since the inception of Web 2.0. There are
two methods for injecting the malicious code into the web-page displayed to
the user namely - Stored XSS and Reflected XSS. In a Stored XSS, the
malicious code is permanently stored into a resource managed by the web
application and the actual attack is carried out when the victim requests a
dynamic page that is constructed from the contents of this resource. However,
in case of a Reflected XSS, the attack script is not permanently stored; in fact it
is immediately reflected back to the user.
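A common defence against both stored and reflected XSS is to escape untrusted data at the point where it is written into the page. A minimal Python sketch using the standard html module (the render_comment helper is hypothetical):

```python
import html

def render_comment(user_input):
    # Escaping on output turns markup characters into inert entities,
    # so an injected <script> tag is displayed as text, not executed.
    return "<p>" + html.escape(user_input) + "</p>"

attack = '<script>alert("xss")</script>'
print(render_comment(attack))
# <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```

Real applications use context-aware escaping (HTML body, attributes, JavaScript, URLs each need different rules), usually via an auto-escaping template engine rather than manual calls.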
The network plays an important part in deciding how efficiently cloud
services operate and communicate with users. In developing cloud
solutions, network security is not considered an important factor by some
organizations. Not having enough network security creates attack vectors
for malicious users and outsiders, resulting in different network threats.
The most critical network threats in the cloud are account or service
hijacking and denial of service attacks.
Denial of Service (DOS) attacks are carried out to prevent legitimate
users from accessing cloud network, storage, data, and other services. DOS
attacks have been on the rise in cloud computing in the past few years,
and 81% of customers consider them a significant threat in the cloud.
They are usually done by compromising a
service that can be used to consume most cloud resources such as computation
power, memory, and network bandwidth. This causes a delay in cloud
operations, and sometimes cloud is unable to respond to other users and
services. Distributed Denial of Service (DDOS) attack is a form of DOS
attacks in which multiple network sources are used by the attacker to send a
large number of requests to the cloud for consuming its resources. It can be
launched by exploiting the vulnerabilities in web server, databases, and
applications resulting in unavailability of resources.
Sniffing attacks can be launched using tools like Dsniff, Cain, Ettercap,
Wsniff, Airjack, etc.; encryption technologies have been developed in
order to provide a safeguard against them.
Brute Force Attacks: The attacker attempts to crack the password by guessing
all potential passwords.
Replay Attacks: Also known as reflection attacks, replay attacks are a type of
attack that targets a user’s authentication process.
Key loggers: This is a program that records every key pressed by the user and
tracks their behavior.
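Brute force attacks like those above are commonly slowed down by throttling or locking accounts after repeated failures. The sketch below is an illustrative in-memory policy; the class name and thresholds are made up, and a real system would persist this state and combine it with rate limiting and multi-factor authentication.

```python
import time

class LoginThrottle:
    """Toy lockout policy: after `max_failures` bad attempts, the
    account is locked for `lockout_seconds`. All names and numbers
    here are illustrative, not a standard API."""

    def __init__(self, max_failures=3, lockout_seconds=300):
        self.max_failures = max_failures
        self.lockout_seconds = lockout_seconds
        self.failures = {}  # user -> (failure_count, last_failure_time)

    def record_failure(self, user, now=None):
        now = time.time() if now is None else now
        count, _ = self.failures.get(user, (0, 0.0))
        self.failures[user] = (count + 1, now)

    def is_locked(self, user, now=None):
        now = time.time() if now is None else now
        count, last = self.failures.get(user, (0, 0.0))
        return count >= self.max_failures and now - last < self.lockout_seconds

throttle = LoginThrottle()
for _ in range(3):
    throttle.record_failure("bob", now=100.0)
print(throttle.is_locked("bob", now=101.0))  # True: locked out
print(throttle.is_locked("bob", now=500.0))  # False: lockout expired
```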
Cloud service providers are largely responsible for controlling the cloud
environment. Some threats are specific to cloud computing such as cloud
service provider issues, providing insecure interfaces and APIs to users,
malicious cloud users, shared technology vulnerabilities, misuse of cloud
services, and insufficient due diligence by companies before moving to cloud.
The term abuse of cloud services refers to the misuse of cloud services by the
consumers. It is mostly used to describe the actions of cloud users that are
illegal, unethical, or that violate their contract with the service provider.
In 2010, abuse of cloud services was considered to be the most critical cloud
threat, and different measures were taken to prevent it. However, 84% of
cloud users still consider it a relevant threat. Research has shown that some cloud
providers are unable to detect attacks launched from their networks, due to
which they are unable to generate alerts or block any attacks. The abuse of
cloud services is a more serious threat to the service provider than service
users. For instance, the use of cloud network addresses for spam by malicious
users has resulted in blacklisting of all network addresses, thus the service
provider must ensure all possible measures for preventing these threats. Over
the years, different attacks have been launched through cloud by the malicious
users. For example, Amazon’s EC2 services were used as a command and
control servers to launch Zeus botnet in 2009. Famous cloud services such as
Twitter, Google and Facebook as a command and control servers for launching
Trojans and Botnets. Other attacks that have been launched using cloud are
brute force for password cracking of encryption, phishing, performing DOS
attack against a web service at specific host, Cross Site Scripting and SQL
injection attacks.
The term due diligence refers to individuals or customers having complete
information for assessing the risks associated with a business prior to
using its services. Cloud computing offers exciting opportunities of
unlimited computing resources and fast access, due to which a number of
businesses shift to the cloud without assessing the risks associated with
it. Due to the complex architecture of the cloud, some organizational
security policies cannot be applied in the cloud. Moreover, cloud
customers have no idea about the internal security procedures, auditing,
logging, data storage, and data access, which results in unknown risk
profiles in the cloud. In some cases, the developers and designers of
applications may be unaware of the effects of deploying them on the
cloud, which can result in operational and architectural issues.
Earlier, the Xen hypervisor code contained a local privilege escalation
vulnerability (in which a user can gain the rights of another user) that
could be used to launch a guest-to-host VM escape attack. Later, Xen
updated the code base of its hypervisor to fix that vulnerability. Other
companies whose products are based on Xen, such as Microsoft, Oracle and
SUSE Linux, also released updates of their software to fix the local
privilege escalation vulnerability. Similarly, a report released in 2009
showed the use of VMware to run code from guests on hosts, demonstrating
possible ways to launch attacks.
7.3.3.9 Failure of Isolation
There is a lack of strong isolation or compartmentalization of routing,
reputation, storage, and memory among tenants. Because of the lack of
isolation, attackers attempt to take control of the operations of other cloud
users to obtain unauthorized access to the data.
The attackers can gain access to remote system applications on the
victim's resource systems via this approach. It is a passive attack of
sorts. Attackers sometimes use zombies to carry out DDoS attacks;
back-door channels, however, are frequently used by attackers to gain
control of the victim's resources. It has the potential to compromise
data security and privacy.
Each of the three ways (Public, Private, Hybrid) in which cloud services
can be deployed has its own advantages and limitations. From the security
perspective, all three have certain areas that need to be addressed with
a specific strategy to avoid them.
A private cloud model enables the customer to have total control over the
network and provides the flexibility to the customer to implement any
traditional network perimeter security practice. Although the security
architecture is more reliable in a private cloud, yet there are issues/risks that
need to be considered:
application stack, or by using a standard applicative stack to develop
the web interface using common languages such as Java, PHP, Python,
etc. As part of a screening process, the Eucalyptus web interface was
found to have a bug allowing any user to perform internal port
scanning or HTTP requests through the management node, which he
should not be allowed to do. In a nutshell, interfaces need to be
properly developed, and standard web application security techniques
need to be deployed to protect the diverse HTTP requests being
performed.
• While we talk of standard internet security, we also need to have a
security policy in place to safeguard the system from attacks
originating within the organization. This vital point is missed on
most occasions, the stress being mostly on internet security. Proper
security guidelines should exist across the various departments, and
controls should be implemented as per the requirements.
Thus we see that although private clouds are considered safer than public
clouds, they still have multiple issues which, if unattended, may lead to
major security loopholes, as discussed earlier.
The hybrid cloud model is a combination of both public and private cloud and
hence the security issues discussed with respect to both are applicable in case
of hybrid cloud.
In the following section the security methods to avoid the exploitation of the
threats will be discussed.
Various security measures and techniques have been proposed to avoid data
breaches in the cloud. One of these is to encrypt data before storage on
the cloud and in the network. This needs an efficient key management
algorithm and protection of the key in the cloud. Some measures that must
be taken to avoid data
breaches in cloud are to implement proper isolation among VMs to prevent
information leakage, implement proper access controls to prevent unauthorized
access, and to make a risk assessment of the cloud environment to know the
storage of sensitive data and its transmission between various services and
networks.
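Encrypting data before storage, as suggested above, presumes sound key management. One standard building block is deriving an encryption key from a passphrase with PBKDF2; the sketch below uses Python's standard hashlib. The iteration count and function name are illustrative choices, and in practice a vetted cryptography library and a current best-practice iteration count should be used.

```python
import hashlib
import secrets

def derive_key(passphrase, salt=None):
    """Derive a 256-bit key from a passphrase with PBKDF2-HMAC-SHA256.

    The salt is not secret but must be stored alongside the ciphertext
    so the same key can be re-derived later. The iteration count below
    is illustrative only.
    """
    salt = secrets.token_bytes(16) if salt is None else salt
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)
    return key, salt

key, salt = derive_key("correct horse battery staple")
key_again, _ = derive_key("correct horse battery staple", salt)
print(key == key_again)  # True: same passphrase + salt -> same key
print(len(key))          # 32 bytes = 256 bits
```

The derived key would then feed an authenticated-encryption scheme (e.g., AES-GCM from a maintained library), keeping the passphrase itself out of cloud storage.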
To prevent data loss in the cloud, different security measures can be
adopted. One of the most important is to maintain a backup of all data in
the cloud, which can be accessed in case of data loss. However, the data
backup must also be
protected to maintain the security properties of data such as integrity and
confidentiality. Various data loss prevention (DLP) mechanisms have been
proposed for the prevention of data loss in network, processing, and storage.
Many companies including Symantec, McAfee, and Cisco have also developed
solutions to implement data loss prevention across storage systems, networks
and end points. Trusted Computing can be used to provide data security. A
trusted server can monitor the functions performed on data by cloud server and
provide the complete audit report to data owner. In this way, the data owner
can be sure that the data access policies have not been violated.
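One simple way to check that a backup still satisfies the integrity property mentioned above is to record a cryptographic digest when the backup is taken and verify it before restoring. A minimal sketch with Python's standard hashlib (the data and function name are illustrative):

```python
import hashlib

def sha256_digest(data):
    """Hex digest used as an integrity fingerprint for backed-up data."""
    return hashlib.sha256(data).hexdigest()

# Record a fingerprint when the backup is taken...
backup = b"customer-records-2024"
recorded = sha256_digest(backup)

# ...and verify it before restoring: any tampering changes the digest.
print(sha256_digest(b"customer-records-2024") == recorded)  # True
print(sha256_digest(b"customer-records-2O24") == recorded)  # False
```

A plain hash detects accidental corruption; to also detect deliberate tampering by someone who could rewrite the stored digest, an HMAC with a secret key, or a digital signature, is used instead.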
To avoid DOS attacks it is important to identify and implement all the
basic security requirements of the cloud network, applications, databases,
and other services. Applications should be tested after designing to
verify that they have no loopholes that can be exploited by the attackers.
DDoS attacks can be prevented by having extra network bandwidth, using
IDSs that verify network requests before they reach the cloud server, and
maintaining a backup of IP pools for urgent cases. Industrial solutions to
prevent DDoS attacks have also been provided by different vendors. A
technique named hop-count filtering can be used to filter spoofed IP
packets and helps in decreasing DOS attacks by
90%. Another technique for securing cloud from DDoS involves using
intrusion detection system in virtual machine (VM). In this scheme when an
intrusion detection system (IDS) detects an abnormal increase in inbound
traffic, the targeted applications are transferred to VMs hosted on another data
center.
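The hop-count filtering technique mentioned above exploits the fact that operating systems send packets with a few well-known initial TTL values (commonly 64, 128, or 255), so a receiver can infer how many hops a packet travelled and compare that with the distance previously learned for the claimed source address. The sketch below is illustrative only; the function names, tolerance, and baseline table are assumptions, not the published technique's actual interface.

```python
# Common initial TTL values set by major operating systems.
INITIAL_TTLS = (64, 128, 255)

def hop_count(received_ttl):
    """Infer hops travelled: distance below the next initial TTL."""
    initial = min(t for t in INITIAL_TTLS if t >= received_ttl)
    return initial - received_ttl

def looks_spoofed(src_ip, received_ttl, known_hops, tolerance=2):
    """Flag packets whose inferred hop count contradicts the learned
    distance for this source IP. `tolerance` absorbs ordinary route
    changes; all values here are illustrative."""
    expected = known_hops.get(src_ip)
    if expected is None:
        return False  # no baseline yet, so we cannot judge
    return abs(hop_count(received_ttl) - expected) > tolerance

baseline = {"203.0.113.7": 12}  # distance learned from prior traffic
print(looks_spoofed("203.0.113.7", 64 - 12, baseline))   # False: matches
print(looks_spoofed("203.0.113.7", 255 - 3, baseline))   # True: likely spoofed
```

Spoofed packets carry a forged source address but still reflect the attacker's real path, so their inferred hop count rarely matches the baseline for the claimed source.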
To protect the cloud from insecure API threats it is important for
developers to design these APIs by following the principles of trusted
computing. Cloud providers must also ensure that all the APIs
implemented in the cloud are designed securely, and check them before
deployment for possible flaws. Strong authentication mechanisms and access
controls must also be implemented to secure data and services from insecure
interfaces and APIs. The Open Web Application Security Project (OWASP)
provides standards and guidelines to develop secure applications that can help
in avoiding such application threats. Moreover, it is the responsibility of
customers to analyze the interfaces and APIs of cloud provider before moving
their data to cloud.
• Analysis of the cloud provider interfaces and a proper security model
for these interfaces.
• Detailed understanding of the dependency chain related to APIs.
The protection from these threats can be achieved by limiting the hardware and
infrastructure access only to the authorized personnel. The service provider
must implement strong access control, and segregation of duties in the
management layer to restrict administrator access to only his authorized data
and software. Auditing on the employees should also be implemented to check
for their suspicious behavior. Moreover, the employee behavior requirements
should be made part of the legal contract, and action should be taken
against anyone involved in malicious activities. To protect data from
malicious insiders, encryption can also be implemented in storage and in
public networks.
7.5.10 Protection from SQL Injection, XSS, Google Hacking and Forced
Hacking
In order to secure cloud against various security threats such as: SQL injection,
Cross Site Scripting (XSS), DoS and DDoS attacks, Google Hacking, and
Forced Hacking, different cloud service providers adopt different techniques.
A few standard techniques to detect the above mentioned attacks include:
A Google hacking database identifies various types of information, such
as login passwords, pages containing logon portals, and session usage
information. Software solutions such as a Web Vulnerability Scanner can
be used to detect the possibility of a Google hack. In order to prevent a
Google hack, users need to ensure that only information that does not
affect them is shared with Google. This prevents the sharing of any
sensitive information that may result in adverse conditions.
On a fundamental level, Identity and Access Management encompasses the
following components:
• how roles are identified in a system and how they are assigned to
individuals
• protecting the sensitive data within the system and securing the system
itself.
IAM technologies can be used to initiate, capture, record and manage user
identities and their related access permissions in an automated manner. An
organization gains the following IAM benefits:
Pre-Shared Key (PSK): PSK is another type of digital authentication where
the password is shared among users authorized to access the same resources --
think of a branch office Wi-Fi password. This type of authentication is less
secure than individual passwords. A concern with shared passwords like PSK
is that frequently changing them can be cumbersome.
In cloud computing, data is stored remotely and accessed over the Internet.
Because users can connect to the Internet from almost any location and any
device, most cloud services are device- and location-agnostic. Users no longer
need to be in the office or on a company-owned device to access the cloud.
And in fact, remote workforces are becoming more common.
The user's identity, not their device or location, determines what cloud data
they can access and whether they can have any access at all.
With cloud computing, sensitive files are stored in a remote cloud server.
Because employees of the company need to access the files, they do so by
logging in via browser or an app. IAM helps prevent identity-based attacks and
data breaches that come from privilege escalations (when an unauthorized user
has too much access). Thus, IAM systems are essential for cloud computing,
and for managing remote teams. It is a cloud service that controls the
permissions and access for users and cloud resources. IAM policies are sets of
permission policies that can be attached to either users or cloud resources to
authorize what they access and what they can do with it.
The concept that “identity is the new perimeter” dates back to when AWS
first announced their IAM service in 2012. We are now witnessing a renewed
focus on IAM due to the rise of abstracted cloud services and the recent
wave of high-profile data breaches.
Services that don’t expose any underlying infrastructure rely heavily on
IAM for security. Managing a large number of privileged users with access
to an ever-expanding set of services is challenging. Managing separate IAM
roles and groups for these users and resources adds yet another layer of
complexity.
Cloud providers like AWS and Google Cloud help customers solve these
problems with tools like the Google Cloud IAM recommender (currently in
beta) and the AWS IAM access advisor. These tools attempt to analyze the
services last accessed by users and resources, and help you find out which
permissions might be over-privileged. These tools indicate that cloud providers
recognize these access challenges, which is definitely a step in the right
direction. However, there are a few more challenges we need to consider.
IAM is a crucial aspect of cloud security. Businesses must look at IAM as a part
of their overall security posture and add an integrated layer of security across
their application lifecycle.
• Don’t use root accounts - Always create individual IAM users with
relevant permissions, and don’t give your root credentials to anyone.
• Adopt a role-per-group model - Assign policies to groups of users
based on the specific things those users need to do. Don’t “stack” IAM
roles by assigning roles to individual users and then adding them to
groups. This will make it hard for you to understand their effective
permissions.
updates, as everything is managed for you by your SECaaS provider and
visible to you through a web-enabled dashboard.
• Free Up Resources: When security provisions are managed externally,
your IT teams can focus on what is important to your organization.
SECaaS frees up resources, gives you total visibility through
management dashboards and the confidence that your IT security is
being managed competently by a team of outsourced security
specialists. You can also choose for your IT teams to take control of
security processes if you prefer and manage all policy and system
changes through a web interface.
• Continuous Monitoring
• Data Loss Prevention (DLP)
• Business Continuity and Disaster Recovery (BC/DR or BCDR)
• Email Security
• Antivirus Management
• Spam Filtering
• Identity and Access Management (IAM)
• Intrusion Protection
• Security Assessment
• Network Security
• Security Information and Event Management (SIEM)
• Web Security
• Vulnerability Scanning
Combining the most significant features of two distinct cloud service
providers in your IT strategy can create countless possibilities and
flexibility through multi-cloud computing. Let us study the multi-cloud
concept in the next section.
Multi-cloud computing can assist the company in meeting those requirements
as they can choose from multiple IaaS providers’ data center regions or
availability zones. This flexibility in where cloud data is placed also allows
organizations to locate resources close to the end users to achieve the best
performance and minimal latency.
Some businesses are still determining whether a cloud strategy is viable,
while others have acted to expand their deployments and establish
multi-cloud environments. The range of options, cost savings, business
agility, and innovation prospects enables organizations to compete in
competitive marketplaces.
• Sourcing
• Architecture
• Governance
Continue reading to discover and understand the major benefits of
multi-cloud in the following section.
Organizations using multi-cloud can reduce downtime for critical services
with the help of a sound strategy and architecture; the cloud
organizations with the lowest levels of downtime are those that have such
strategies and architectures in place. Additionally, they adopt several
other behaviors, as follows:
Security
For instance, it might have branches for constructing a platform to use PaaS
choices in an IaaS environment, executing a direct lift and shift on a workload
well suited to IaaS, or performing a rewrite for the cloud.
By diversifying the hosting regions for your infrastructure when you deploy
with multiple clouds, you can ensure high availability for your customers. As a
result, your users will still have access to the features and services deployed on
other clouds, even if one of your cloud providers experiences technical
difficulties.
If you build applications for just one cloud vendor, you risk becoming locked
in with them. As a result, switching providers in the future will be considerably
more difficult. Even though that specific vendor was appropriate for you at the
time, it might not be as convenient if you need to scale up or down.
Additionally, you might pass up some future discounts that are much better.
Developers can work to design apps that work across several platforms by
choosing a multi-cloud strategy from the beginning. As a result, you’ll always
have the freedom to benefit from the most excellent offers or features from
other vendors without compromising what you can provide for your clients.
…………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………
2) What are the various security aspects that one needs to remember while
opting for Cloud services?
…………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………
3) How to choose a SECaaS Provider?
…………………………………………………………………………………
…………………………………………………………………………………
…………………………………………………………………………………
7.9 SUMMARY
1. In the 1990s, business and personal data were stored locally, and
security was local as well. Data would be located on a PC’s internal
storage at home, and on enterprise servers if you worked for a company.
Centralization and multi-tenant storage: Every component, from core
infrastructure to small data like emails and documents, can now be
located and accessed remotely on 24X7 web-based connections. All
this data gathering in the servers of a few major service providers can
be highly dangerous. Threat actors can now target large multi-
organizational data centers and cause immense data breaches.
That said, users are not alone in cloud security responsibilities. Being
aware of the scope of your security duties will help the entire system
stay much safer.
Data security is an aspect of cloud security that involves the technical end
of threat prevention. Tools and technologies allow providers and clients to
insert barriers between the access and visibility of sensitive data. Among
these, encryption is one of the most powerful tools available. Encryption
scrambles your data so that it's only readable by someone who has the
encryption key. If your data is lost or stolen, it will be effectively
unreadable and meaningless. Data transit protections like virtual private
networks (VPNs) are also emphasized in cloud networks.
The biggest risk with the cloud is that there is no perimeter. Traditional
cyber security focused on protecting the perimeter, but cloud environments
are highly connected which means insecure APIs (Application
Programming Interfaces) and account hijacks can pose real problems.
Faced with cloud computing security risks, cyber security professionals
need to shift to a data-centric approach.
Third-party storage of your data and access via the internet each pose their
own threats as well. If for some reason those services are interrupted, your
access to the data may be lost. For instance, a phone network outage could
mean you can't access the cloud at an essential time. Alternatively, a power
outage could affect the data center where your data is stored, possibly with
permanent data loss.
2. Fortunately, there is a lot that you can do to protect your own data in
the cloud. Let’s explore some of the popular methods.
Encryption is one of the best ways to secure your cloud computing
systems. There are several different ways of using encryption, and they
may be offered by a cloud provider or by a separate cloud security
solutions provider:
• Encryption of communications with the cloud in their entirety.
• Encryption of particularly sensitive data, such as account credentials.
• End-to-end encryption of all data that is uploaded to the cloud.
Within the cloud, data is more at risk of being intercepted when it is on the
move. When it's moving between one storage location and another, or
being transmitted to your on-site application, it's vulnerable. Therefore,
end-to-end encryption is the best cloud security solution for critical data.
With end-to-end encryption, at no point is your communication made
available to outsiders without your encryption key.
You can either encrypt your data yourself before storing it on the cloud, or
you can use a cloud provider that will encrypt your data as part of the
service. However, if you are only using the cloud to store non-sensitive
data such as corporate graphics or videos, end-to-end encryption might be
overkill. On the other hand, for financial, confidential, or commercially
sensitive information, it is vital.
If you are using encryption, remember that the safe and secure
management of your encryption keys is crucial. Keep a key backup and
ideally don't keep it in the cloud. You might also want to change your
encryption keys regularly so that if someone gains access to them, they will
be locked out of the system when you make the changeover.
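The advice above — encrypt before upload, keep the key out of the cloud, and rotate keys regularly — can be sketched in code. This is a minimal illustration using the third-party Python `cryptography` package (its Fernet recipe provides authenticated symmetric encryption); the key names and plaintext are invented for the example, and a real deployment would store keys in a proper key-management system:

```python
# Client-side encryption sketch: data is encrypted BEFORE it is uploaded,
# so the cloud provider only ever sees ciphertext.
from cryptography.fernet import Fernet, MultiFernet

# Generate the key and keep it outside the cloud (e.g., a local key vault).
key_v1 = Fernet.generate_key()
f_v1 = Fernet(key_v1)

plaintext = b"account credentials: alice / s3cr3t"
ciphertext = f_v1.encrypt(plaintext)          # this is what gets uploaded
assert f_v1.decrypt(ciphertext) == plaintext  # only the key holder can read it

# Regular key changeover: re-encrypt existing ciphertext under a new key,
# so anyone who obtained the old key is locked out after rotation.
key_v2 = Fernet.generate_key()
rotator = MultiFernet([Fernet(key_v2), f_v1])  # new key first, old keys after
ciphertext_v2 = rotator.rotate(ciphertext)     # now encrypted under key_v2
assert Fernet(key_v2).decrypt(ciphertext_v2) == plaintext
```

Note the design choice: `MultiFernet` lets old ciphertexts remain readable during a changeover window while all rotated data moves to the newest key.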
• Never leave the default settings unchanged: Using the default settings
gives a hacker front-door access. Avoid doing this to complicate a
hacker’s path into your system.
• Never leave a cloud storage bucket open: An open bucket could allow
hackers to see the content just by opening the storage bucket's URL.
• If the cloud vendor gives you security controls that you can switch
on, use them. Not selecting the right security options can put you at
risk.
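The point about open storage buckets can be made concrete with a small audit check. The sketch below, using only the Python standard library, flags a bucket policy (in the AWS IAM policy JSON shape) that grants access to everyone; the helper function name and the sample policy are illustrative, not a real AWS API:

```python
# Minimal sketch: detect a publicly open bucket policy before deployment.
import json

def is_publicly_open(policy_json: str) -> bool:
    """Return True if any statement allows actions to everyone ('*')."""
    policy = json.loads(policy_json)
    for stmt in policy.get("Statement", []):
        principal = stmt.get("Principal")
        everyone = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        )
        if stmt.get("Effect") == "Allow" and everyone:
            return True
    return False

# A policy like this exposes every object to anyone with the bucket's URL.
open_policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::demo-bucket/*",
    }],
})
print(is_publicly_open(open_policy))  # True -> fix before going live
```

In practice such a check would run in a CI pipeline or be replaced by the provider's own controls (for example, S3's account-level "Block Public Access" setting), which is exactly the kind of vendor-supplied security control the bullet above says you should switch on.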
Unfortunately, cloud companies are not going to give you the blueprints to
their network security. This would be equivalent to a bank providing you
with details of their vault, complete with the combination numbers to the
safe.
However, getting the right answers to some basic questions gives you
better confidence that your cloud assets will be safe. In addition, you will
be more aware of whether your provider has properly addressed obvious
cloud security risks. We recommend asking your cloud provider some basic
security questions.
You will also want to make sure you’ve read your provider’s terms of
service (TOS). Reading the TOS is essential to understanding if you are
receiving exactly what you want and need.
Be sure to check that you also know all the services used by your
provider. If your files are on Dropbox or backed up on iCloud (Apple's
storage cloud), they may well be held on Amazon's servers. So, you will
need to check out AWS as well as the service you are using directly.
3. Hiring a third-party cloud service for the security of your most critical
and sensitive business assets is a massive undertaking. Choosing a SECaaS
provider takes careful consideration and evaluation. Here are some of the
most important considerations when selecting a provider:
• Work with best-in-class security solution vendors who also have the
expertise to support these solutions.