
Sir Padampat Singhania University

Udaipur

Cloud Computing

Laboratory Manual

Course: B.Tech CSE (CTIS) Semester: VIII Year: IV

Course Code: CS-4012

Prepared by
Dr. Kamal Kant Hiran, Assistant Professor
Department of Computer Science and Engineering

Department of Computer Science and Engineering, SPSU, Udaipur



Laboratory Manual
for
Cloud Computing



Introduction

Cloud technology is the delivery of different services through the Internet; these resources
include tools and applications such as data storage, servers, databases, networking and software.
This course will help students study, research and analyze the concepts of cloud
technology: cloud architecture, services, obstacles and vulnerabilities, cost management,
legal issues involved in the cloud, and migrating to the cloud, along with Amazon, Google and
Microsoft cloud services and related case studies.

Outcomes

Upon completion of the Cloud Technology practical course, the student will be able to:

1. Represent the cloud architecture.

2. Chalk out the major differences between SaaS, PaaS & IaaS.

3. Know the details about various companies in the cloud business and the corresponding services provided by them.

4. Study various cases with regard to migration to the cloud, cost management, legal issues, etc.

5. Understand how Amazon, Google and Microsoft cloud services work and how they differ from each other.

6. Know the obstacles and vulnerabilities in cloud computing.

7. Brief about Jolicloud.

“Without deviation, progress is not possible”

- Frank Zappa



Table of Contents

S.No. Experiments

1. Study the basic cloud architecture and represent it using a case study.
2. Enlist major differences between SaaS, PaaS & IaaS. Also submit research done on various companies in the cloud business and the corresponding services provided by them; tag them under SaaS, PaaS & IaaS.
3. Study and present a report on Jolicloud.
4. Present a report on obstacles and vulnerabilities in cloud computing at a generic level.
5. Present a report on Amazon cloud services.
6. Present a report on Microsoft cloud services.
7. Present a report on cost management in the cloud.
8. Enlist and explain legal issues involved in the cloud with the help of a case study.
9. Explain the process of migrating to the cloud with a case study.
10. Present a report on Google cloud and cloud services.


1. Study the basic cloud architecture and represent it using a case study.

Cloud computing is the use of hardware and software to deliver a service over a network (typically the
Internet). Cloud computing architecture comprises many loosely coupled cloud components.
We can broadly divide the cloud architecture into two parts:

Front End - The front end refers to the client part of the cloud computing system. It consists of the interfaces and
applications required to access cloud computing platforms; for example, a web browser.

Back End - The back end refers to the cloud itself. It consists of all the resources required to provide cloud
computing services: huge data storage, virtual machines, security mechanisms, services,
deployment models, servers, etc.
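To make this split concrete, here is a minimal sketch in Python of a front end talking to a cloud back end over HTTPS. The endpoint, token and response shape are hypothetical placeholders, not any real provider's API:

import requests

# Hypothetical back-end endpoint exposed by a cloud storage service.
API = "https://storage.example-cloud.com/v1"
TOKEN = "<credential issued by the provider>"  # placeholder

# The "front end" is just this client: it needs only an interface
# (HTTP plus a browser or SDK) to reach the back end's resources.
resp = requests.get(f"{API}/files",
                    headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
for item in resp.json():
    print(item["name"])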

Case Study 1
'Pay by the Drink' Flexibility Creates Major Efficiencies and Revenue for Coca-Cola's International
Bottling Investments Group (BIG)
The Coca-Cola Company's sophisticated distribution model includes a partner network of franchise bottlers
that manufacture, package, merchandise and distribute branded beverages to their own customers and
vending partners, who then sell the products to consumers. All of these bottling partners work closely with
their customers (grocery stores, restaurants, street vendors, convenience stores, movie theaters and
amusement parks, etc.) to execute localized strategies developed in partnership with Coca-Cola. This
network of bottlers sells Coca-Cola products to consumers at a rate of more than 1.9 billion servings a day.
Over a decade ago, Coca-Cola formed their Bottling Investments Group (BIG) to manage their
company-owned bottling assets. The mission of the group was to help bottlers operate at the same high standards that
Coca-Cola sets for all of its bottling franchisees around the world.
Today, BIG manages bottling operations in 18 markets including emerging markets such as India, Vietnam,
Sri Lanka, Nepal, Myanmar and Bangladesh and accounts for more than 25 percent of the total system
volume.
When Coca-Cola initially created BIG, each of the bottlers they brought in faced a different and distinct set
of business issues due to their unique markets. Despite these challenges, though, BIG succeeded in its vision
to become a model bottler by investing for the long-term in infrastructure and building the right culture to
ensure a sustainable healthy business.
"As we have grown through the years, our leadership stayed focused on implementing key strategic
initiatives in supply chain, sales, revenue and profit generation," said Javier Polit, former CIO, BIG.
"Additionally, we have worked to build leadership capability at all levels with a suite of world-class
development programs, from front-line supervisor to senior executive."
This successful framework helps new bottlers joining BIG increase their efficiencies and revenues in less
time than they could on their own, through world-class tool sets and proven processes. Eventually, many
bottlers transition out of BIG back into the franchise system and metrics show that these bottlers generally
continue to perform at high levels.
The Challenge
BIG's stated goal is to drive efficiencies, higher revenue, greater transparency and higher standards across
all of its bottlers. But the bottlers within BIG each faced unique challenges inherent to their businesses
and markets. Thus the challenge for the business was how to address the unique complexities and
requirements of a very diverse group of bottlers with an efficient infrastructure and standardized processes.
One key area of consideration was to reduce the complexity, rigidity and costs of running the
mission-critical applications that were common to each of the bottlers. Motivated by this, and by a desire to leave behind
its capital-intensive, highly inflexible on-premises environments located in two outsourced data centers, BIG
began its foray into cloud computing in 2012.
The original solution involved outsourcing the hosting of these mission-critical applications, which included
the company's business-critical SAP systems. While this initial effort did begin to successfully move BIG
bottlers from a CapEx model to an OpEx model and provided some savings, the solution was not without
challenges. Despite these early moves to the cloud, BIG's overall costs for running its mission-critical
applications were still quite heavy.
Reducing the cost to run these "spinal cord" applications represented a significant opportunity not only to
impact the company's bottom line, but also to add greater technological and financial flexibility to the
system.
The Solution
In the spring of 2016, BIG began the process of transitioning to the Virtustream Enterprise Cloud. This complex
multi-system SAP migration transitioned seven of BIG's international bottlers over a six-month period.
"This new model takes away the need to calculate the optimum service level for our cloud deployment by
working through complex pricing options and strong-arm negotiations, and instead automatically and
dynamically optimizes service requirements to meet the demands of an individual IT environment or
application," explained Polit.
For BIG, this means that its bottlers can literally "pay by the drink," which not only provides significant cost
savings, but also offers transparency into consumption that can drive further efficiencies.
Virtustream's use of the latest Intel® Xeon® E7 v4 processors delivers cost-effective performance and
scalability, enabling these capabilities for BIG and their customers. Virtustream protects BIG's data by
leveraging key security features of the Intel® Xeon® processors, including Intel AES-NI for data encryption
and Intel® TXT for added tamper-resistance through platform attestation.
These technologies also help to ensure that workloads are only moved to trusted servers and that all data is
protected both at rest and while travelling between the company's data centers and Virtustream's,
meaning BIG can be confident that its intellectual property, customer and employee data and other sensitive
information are protected by one of the most advanced security technologies available.



The Benefits
Migrating to the Virtustream Enterprise Cloud created cost savings for BIG. Adopting a consumption-based
model reduced the total cost of ownership for BIG's mission-critical apps, and it is estimated
that through future optimizations of the platform, BIG could realize further cost reductions.
There is also a flexibility benefit to the Virtustream cloud for individual bottlers. Consumption-based pricing
allowed each bottler to have direct control in managing its own costs as well as full transparency into its
usage data. Virtustream has provided each of the bottlers unique tools and automated processes to allow
them to reduce the up-time of non-production systems and optimize storage tiers. This move will, over time,
produce additional cost efficiencies.
Source - https://www.virtustream.com/solutions/case-studies/coca-cola



2. Enlist major differences between SaaS, PaaS & IaaS. Also submit research done on various
companies in the cloud business and the corresponding services provided by them; tag them under SaaS,
PaaS & IaaS.

IaaS
Infrastructure as a service (IaaS) is a cloud computing offering in which a vendor provides users access to
computing resources such as servers, storage and networking. Organizations run their own platforms and
applications within the service provider's infrastructure (a short provisioning sketch follows the feature list below).
Key features
- Instead of purchasing hardware outright, users pay for IaaS on demand.
- Infrastructure is scalable depending on processing and storage needs.
- Saves enterprises the costs of buying and maintaining their own hardware.
- Because data and services are distributed across the provider's infrastructure, the impact of any single point of failure is reduced.
- Enables the virtualization of administrative tasks, freeing up time for other work.
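As a concrete illustration of the IaaS model, the sketch below uses the AWS SDK for Python (boto3) to request a virtual server on demand. The AMI ID is a placeholder, and valid AWS credentials are assumed to be configured in the environment:

import boto3

# Ask the provider for raw compute capacity, billed on demand.
ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder image ID; real IDs vary by region
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print("Launched:", response["Instances"][0]["InstanceId"])

Everything above the hypervisor (OS patching, middleware, the application itself) remains the customer's responsibility, which is exactly what distinguishes IaaS from PaaS and SaaS.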
PaaS
Platform as a service (PaaS) is a cloud computing offering that provides users with a cloud environment in
which they can develop, manage and deliver applications. In addition to storage and other computing
resources, users are able to use a suite of prebuilt tools to develop, customize and test their own applications.
Key features
- PaaS provides a platform with tools to test, develop and host applications in the same environment.
- Enables organizations to focus on development without having to worry about underlying infrastructure.
- Providers manage security, operating systems, server software and backups.
- Facilitates collaborative work even if teams work remotely.
SaaS
Software as a service (SaaS) is a cloud computing offering that provides users with access to a vendor's
cloud-based software. Users do not install applications on their local devices. Instead, the applications reside
on a remote cloud network accessed through the web or an API. Through the application, users can store and
analyze data and collaborate on projects.
Key features
- SaaS vendors provide users with software and applications via a subscription model.
- Users do not have to manage, install or upgrade software; SaaS providers manage this.
- Data lives in the cloud, so a local equipment failure does not result in loss of data.
- Use of resources can be scaled depending on service needs.

Laboratory Manual for Cloud Computing Page 8


- Applications are accessible from almost any internet-connected device, from virtually anywhere in the world.

Source - https://www.ibm.com/cloud/learn/iaas-paas-saas

Source - https://www.msigeek.com/7357/cloud-computing-service-models-benefits

Source - https://blog.crozdesk.com/tapping-saas-paas-iaas/



3. Study and present a report on Jolicloud.

Jolicloud is a computing platform that makes the cloud simpler and more open. Jolicloud connects
you to all of your favorite online apps, social media, videos, photos and files from any device in the world.
Jolicloud is the creator of the Drive app, a new way to manage your storage online.

Categories - Cloud Computing, Developer Tools, Enterprise Software, Web Development


Headquarters Regions - European Union (EU)
Founded Date - 2009
Founders - Romain Huet, Tariq Krim
Operating Status - Closed
Closed Date - Apr 1, 2016
Funding Status - Early Stage Venture

Jolicloud was a pioneer in cloud computing with the Jolibook, the first personal cloud computer, and Joli OS,
the first cloud OS designed for netbooks and recycled computers.

Application Manager
Perhaps the greatest thing about Jolicloud is its application manager. Hundreds of free apps are available,
and all can be installed with a single click.

Web Apps
A lot of the websites most people use every day – including Gmail and Google Calendar – are better thought
of as applications than websites. Gmail, for example, is a complete email interface (and an
extremely powerful one at that). Such websites-as-applications are so common on today's Internet that we
even have a term for them: web apps.

Jolicloud offers hundreds of web apps in its application manager. Once installed, these web apps run in
their own windows, separate from your browser.

Jolicloud was funded by two investors, Mangrove Capital Partners and Atomico.
Source - https://www.crunchbase.com/organization/jolicloud



4. Present a report on obstacles and vulnerabilities in cloud computing at a generic level.
M. Armbrust et al. ("A view of cloud computing," Commun. ACM, vol. 53, no. 4, pp. 50–58, 2010)
identify ten obstacles to cloud computing; the first six are summarized below:

Obstacle 1: Business Continuity and Service Availability


Organizations often worry about the availability of the service provided by cloud providers. Even
popular service providers like Amazon, Google and Microsoft experience outages. Technical issues
aside, a cloud provider could also suffer outages for non-technical reasons, such as going out of
business or regulatory action.

Obstacle 2: Data Lock-In


Data lock-in refers to the tight dependency of an organization's business on the software or hardware
infrastructure of a cloud provider. Even though software stacks have improved interoperability among
platforms, the storage APIs are still essentially proprietary, or at least have not been the subject of active
standardization. As a result, customers cannot easily extract their data and programs from one site to
run on another, as in hybrid cloud computing or surge computing.

Obstacle 3: Data Confidentiality/Auditability


Security of sensitive information in the cloud is one of the most often cited objections to cloud computing.
Cloud users face security threats both from outside and inside the cloud.

The cloud user is responsible for application-level security. The cloud provider is responsible for physical
security, and likely for enforcing external firewall policies. Security for intermediate layers is shared
between the user and the operator.
Although the cloud makes external security easier, it poses new problems related to internal security. Cloud
providers must guard against theft and denial-of-service attacks by users, and users need to be protected from
one another.

Obstacle 4: Data Transfer Bottlenecks


Nowadays cloud applications are becoming data-intensive. The data storage requirements of enterprise
applications or academic scientific programs can range from a few terabytes to a few petabytes or even
more.

Transferring such volumes of data between two clouds can take days or even months, even over
networks with high data rates.
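A quick back-of-the-envelope calculation shows why: even a sustained 1 Gbps link needs almost a full day to move 10 TB, assuming perfect utilisation.

# Time to move 10 TB over a sustained 1 Gbps link (idealised).
data_bits = 10 * 10**12 * 8      # 10 TB expressed in bits
link_bps = 1 * 10**9             # 1 Gbps
hours = data_bits / link_bps / 3600
print(f"{hours:.1f} hours")      # ~22.2 hours; petabyte-scale transfers take months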

Obstacle 5: Performance Unpredictability


In the cloud, virtual machines can share CPUs and main memory effectively, but network and I/O sharing is
more problematic. As a result, different Amazon EC2 instances vary more in their I/O performance than in
main memory performance.
The obstacle to attracting HPC workloads is that HPC applications need all the threads of a program to be
running simultaneously, and today's virtual machines and operating systems do not provide a
programmer-visible way to ensure this.

Obstacle 6: Scalable Storage


The problem with storage is scaling it up and down on demand while preserving programmer expectations.
There have been many attempts to answer this, varying in the richness of the query and storage APIs, the
performance guarantees offered, and the resulting consistency semantics.
Cloud computing vulnerabilities
We have to consider the following cloud vulnerabilities:

Session Riding: Session riding happens when an attacker steals a user's cookie to use the application in the
name of the user. An attacker might also trick the user into sending authenticated requests to arbitrary
websites (this is also known as cross-site request forgery, or CSRF).
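A common defence is the synchronizer token pattern: the server embeds a random token in each form and rejects requests that do not echo it back. Below is a minimal, non-production Flask sketch; the route and field names are hypothetical.

import secrets
from flask import Flask, request, session, abort

app = Flask(__name__)
app.secret_key = "change-me"  # required for Flask's signed session cookie

@app.route("/form")
def form():
    # Issue a fresh random token and remember it server-side.
    session["csrf_token"] = secrets.token_hex(16)
    return (
        '<form method="post" action="/transfer">'
        f'<input type="hidden" name="csrf_token" value="{session["csrf_token"]}">'
        '<button>Send</button></form>'
    )

@app.route("/transfer", methods=["POST"])
def transfer():
    # A forged cross-site request cannot read the token, so it fails here.
    if request.form.get("csrf_token") != session.get("csrf_token"):
        abort(403)
    return "ok"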
Virtual Machine Escape: In virtualized environments, physical servers run multiple virtual machines
on top of hypervisors. An attacker can exploit a hypervisor remotely by using a vulnerability present in the
hypervisor itself; such vulnerabilities are rare, but they do exist. Additionally, code running in a virtual
machine can escape from the virtualized sandbox environment and gain access to the hypervisor and,
consequently, all the virtual machines running on it.
Reliability and Availability of Service: We expect our cloud services and applications to always be
available when we need them, which is one of the reasons for moving to the cloud. But this isn't always the
case, especially in bad weather with a lot of lightning, where power outages are common. CSPs have
uninterruptible power supplies, but even those can sometimes fail, so we can't rely on cloud services to be up
and running 100% of the time. We have to take a little downtime into consideration, but the same is true
when running our own private cloud.
Data Protection and Portability: When choosing to switch to a cheaper cloud service provider,
we have to address the problem of data movement and deletion. The old CSP has to delete all the data we
stored in its data center so that it is not left lying around.
Alternatively, a CSP that goes out of business needs to provide the data to its customers, so they can
move to an alternate CSP, after which the data needs to be deleted. What if the CSP goes out of business
without providing the data? In such cases, it's better to use a widely adopted CSP that has been around for a
while, but in any case a data backup is still in order.
CSP Lock-in: We have to choose a cloud provider that will allow us to easily move to another provider
when needed. We don't want a CSP that forces us to use its own services, because sometimes we would
like to use one CSP for one thing and another CSP for something else.
Internet Dependency: By using cloud services, we're dependent upon the Internet connection, so if the
Internet temporarily fails due to a lightning strike or ISP maintenance, clients won't be able to reach
the cloud services. The business will slowly lose money, because users won't be able to use
the services required for business operation. This is especially serious for services that need to be available
24/7, like applications in a hospital, where human lives are at stake.
Source - https://www.cloudcomputing-news.net/news/2014/nov/21/top-cloud-computing-threats-and-vulnerabilities-enterprise-environment/



5. Present a report on Amazon cloud services.

Amazon Web Services offers a broad set of global cloud-based products including compute, storage,
databases, analytics, networking, mobile, developer tools, management tools, IoT, security, and enterprise
applications: on-demand, available in seconds, with pay-as-you-go pricing. From data warehousing to
deployment tools, directories to content delivery, over 140 AWS services are available. New services can be
provisioned quickly, without the upfront capital expense. This allows enterprises, start-ups, small and
medium-sized businesses, and customers in the public sector to access the building blocks they need to
respond quickly to changing business requirements.

In 2006, Amazon Web Services (AWS) began offering IT infrastructure services to businesses as web
services—now commonly known as cloud computing. One of the key benefits of cloud computing is the
opportunity to replace upfront capital infrastructure expenses with low variable costs that scale with your
business. With the cloud, businesses no longer need to plan for and procure servers and other IT
infrastructure weeks or months in advance. Instead, they can instantly spin up hundreds or thousands of
servers in minutes and deliver results faster.

Today, AWS provides a highly reliable, scalable, low-cost infrastructure platform in the cloud that powers
hundreds of thousands of businesses in 190 countries around the world.

At the time of writing, the AWS Cloud spans 66 Availability Zones within 21 geographic regions around the
world, with announced plans for 12 more Availability Zones and four more regions in Bahrain, Cape Town,
Jakarta, and Milan.

Source - https://aws.amazon.com/

Amazon Web Services (AWS) is a secure cloud services platform, offering compute power, database
storage, content delivery and other functionality to help businesses scale and grow.
In simple words, AWS allows you to do the following:
1. Run web and application servers in the cloud to host dynamic websites.
2. Securely store all your files in the cloud so you can access them from anywhere.
3. Use managed databases like MySQL, PostgreSQL, Oracle or SQL Server to store information.
4. Deliver static and dynamic files quickly around the world using a Content Delivery Network (CDN).
5. Send bulk email to your customers.

Now that you know what you can do with AWS, let's have an overview of the various AWS services.

Basic Terminologies

1. Region — A region is a geographical area. Each region consists of two or more availability zones.
2. Availability Zone — It is essentially a data center; in practice, an availability zone may comprise more than one.
3. Edge Location — Edge locations are CDN (Content Delivery Network) endpoints for CloudFront.



Compute
1. EC2 (Elastic Compute Cloud) — These are virtual machines in the cloud over which you
have OS-level control. You can run whatever you want on them.

2. Lightsail — If you don't have any prior experience with AWS, this is for you. It automatically
deploys and manages the compute, storage and networking capabilities required to run your applications.

3. ECS (Elastic Container Service) — It is a highly scalable container service that allows you to run
Docker containers in the cloud.

4. EKS (Elastic Container Service for Kubernetes) — Allows you to use Kubernetes on
AWS without installing and managing your own Kubernetes control plane. It is a relatively new
service.

5. Lambda — AWS's serverless technology that allows you to run functions in the cloud. It's a huge
cost saver as you pay only when your functions execute (a minimal handler sketch follows this list).

6. Batch — It enables you to easily and efficiently run batch computing workloads of any scale on
AWS using Amazon EC2 and EC2 spot fleet.

7. Elastic Beanstalk — Allows automated deployment and provisioning of resources like a highly
scalable production website.
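As referenced in the Lambda entry above, a Python Lambda function is just a handler that receives an event and returns a response. This generic sketch assumes nothing about the trigger; the event shape is illustrative:

# A minimal AWS Lambda handler. AWS invokes it once per event and bills
# only for execution time; the event contents depend on the trigger.
def handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}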

Storage
1. S3 (Simple Storage Service) — The storage service of AWS, in which we can store objects like files,
folders, images, documents, songs, etc. It cannot be used to install software, games or operating
systems (a short usage sketch follows this list).

2. EFS (Elastic File System) — Provides file storage for use with your EC2 instances. It uses NFSv4
protocol and can be used concurrently by thousands of instances.

3. Glacier — It is an extremely low-cost archival service for storing files for long periods, from a few
years to decades.

4. Storage Gateway — It is a virtual machine that you install on your on-premises servers. Your
on-premises data can then be backed up to AWS, providing greater durability.
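The S3 sketch referenced above, using boto3; the bucket name is a placeholder (bucket names are globally unique and the bucket must already exist), and credentials are assumed to come from the environment:

import boto3

s3 = boto3.client("s3")

# Store a local file as an object, then fetch it back.
s3.upload_file("report.pdf", "my-example-bucket", "reports/report.pdf")
s3.download_file("my-example-bucket", "reports/report.pdf", "report-copy.pdf")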

Databases
1. RDS (Relational Database Service) — Allows you to run relational databases like MySQL,
MariaDB, PostgreSQL, Oracle or SQL Server. These databases are fully managed by AWS, which
handles tasks such as installing patches and antivirus software.

2. DynamoDB — It is a highly scalable, high-performance NoSQL database that provides single-digit
millisecond latency at any scale (a short usage sketch follows this list).

3. ElastiCache — It is a way of caching data in the cloud. It can be used to take load off your
database by caching the most frequent queries.

4. Neptune — It has been launched recently. It is a fast, reliable and scalable graph database service.



5. Redshift — It is AWS's data warehousing solution that can be used to run complex OLAP queries.
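The DynamoDB sketch referenced above assumes a table named "Drinks" with a string partition key "sku" already exists; both names are hypothetical:

import boto3

table = boto3.resource("dynamodb").Table("Drinks")  # hypothetical table

# Write one item, then read it back by key.
table.put_item(Item={"sku": "cola-330ml", "stock": 1200})
item = table.get_item(Key={"sku": "cola-330ml"})["Item"]
print(item["stock"])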

Migration
1. DMS (Database Migration Service) — It can be used to migrate on-site databases to AWS. It also
allows you to migrate from one type of database to another, e.g. from Oracle to MySQL.

2. SMS (Server Migration Service) — It allows you to migrate on-site servers to AWS easily and
quickly.

3. Snowball — It is a briefcase-sized appliance that can be used to move terabytes of data into and
out of AWS.

Networking & Content Delivery
1. VPC (Virtual Private Cloud) — It is your own logically isolated network in the cloud in which you
deploy all your resources. It allows you to better isolate your resources and secure them.

2. CloudFront — It is AWS's Content Delivery Network (CDN), consisting of edge locations that
cache resources.

3. Route53 — It is AWS's highly available DNS (Domain Name System) service. You can register
domain names through it.

4. Direct Connect — Using it, you can connect your on-premises data center to AWS over a
high-speed dedicated line.

5. API Gateway — Allows you to create, publish and manage APIs at scale.

Besides these, Analytics tools; Security, Identity and Compliance; Application and Mobile services;
Desktop & App Streaming, etc. are also part of AWS.
Source - https://blog.usejournal.com/what-is-aws-and-what-can-you-do-with-it-395b585b03c



6. Present a report on Microsoft cloud services.
Microsoft Azure (formerly Windows Azure) is a cloud computing service created by Microsoft for building,
testing, deploying, and managing applications and services through Microsoft-managed data centers. It
provides software as a service (SaaS), platform as a service (PaaS) and infrastructure as a service (IaaS) and
supports many different programming languages, tools and frameworks, including both Microsoft-specific
and third-party software and systems.

Azure was announced in October 2008 under the codename "Project Red Dog", released on February
1, 2010 as "Windows Azure", and renamed "Microsoft Azure" on March 25, 2014. A majority of virtual
machines on Azure run Linux; many Linux distributions are offered, and Microsoft also maintains its own
Linux-based OS, Azure Sphere.
Source - Wikipedia

Azure is an ever-expanding set of cloud computing services to help your organisation meet its business
challenges. With Azure, your business or organisation has the freedom to build, manage and deploy
applications on a massive, global network using your preferred tools and frameworks.

Source - https://azure.microsoft.com/en-in/overview/what-is-azure/



7. Present a report on cost management in the cloud.
The overall goal of cost control and long-term cloud cost management, as in every other area of
business, is to optimize available financial resources and get the most out of every corporate pound or
dollar. Most people think that technology is the key to driving success in the cloud but, in reality, it
comes down to controlling costs. Many IT teams find that their cloud spending grows less efficient as "clutter"
builds up in their accounts.
For effective cost control in cloud computing services, it is important to understand the different
factors that impact cost and to leverage cloud cost management tools to discover the causes of these
inefficiencies. Unplanned costs are frequently the result of a lack of visibility into current consumption
patterns and past trends, non-standard deployments arising from unclear or absent development
processes, poor organization, or the absence of automated deployment and configuration tools. In contrast
to on-premises infrastructure, which is financed by fixed upfront investments, cloud consumption is an
everyday operational expense. This requires a major shift in your approach to operational management,
where optimizing cost is as important as optimizing performance.

Ways to Lower the Cost of Cloud Computing

Visibility into Cloud Inventory


According to a recent survey of IT professionals, 75% report that they lack visibility into their cloud resources.
This lack of visibility into resources in the cloud can lead to poor management of those resources. Effective
management begins with an in-depth analysis of your entire infrastructure. And if some resources in the
cloud are going unused due to lack of awareness, but the organization is still paying for them, costs will
climb unnecessarily – and cut into the infrastructure savings and other financial benefits the cloud can bring.
Admins who have access to a single pane of glass and detailed resource dashboards are equipped to better
organize, manage, and optimize that ecosystem across all accounts, clouds, departments, and teams.

Cost Analytics
For complete visibility into the cloud services used, the actual usage patterns and trends are the first step. No
matter your cloud environment, in addition to tracking what you have spent, it is important to project what
you will be spending. You need consolidated and granular details in the form of interactive graphical and
tabular reports across multiple dimensions, as well as time frames in a multi-cloud environment to correlate
data for analysis and reporting against business objectives.

Role Based Access


Permit users to actively manage the infrastructure after setting an enterprise-wide mechanism that clearly
defines permissions and accessibility within the platform. Limit the data and actions visible to users by
organizations and roles and identify who launched, terminated, or changed infrastructure, and what they did
to take corrective action and control costs.

Controlled Stack Templates


A crucial characteristic of any DevOps team is enabling more autonomy for teams to provision
resources without the red tape and extensive time delays of traditional IT environments. If this is implemented
without accompanying automation and process best practices, however, decentralized teams have the potential to
produce convoluted and non-standard security rules, configurations, storage volumes, etc., and therefore
drive up costs. Using predefined stack templates, administrators can bake in security, networking, and
instance family/size configurations so that the process of deploying instances is not only faster but also aligned
with the departmental user's roles and privileges, ensuring only approved resources are provisioned.

Automated Alerts and Notifications


Stay on top of day-to-day changes in your environment, and participate in critical decisions, by sharing
standard and custom-built reports on cost, usage and performance with stakeholders. Automated
alerts and notifications about authorization failures, budget overruns, cost spikes and untagged infrastructure
result in increased visibility and accountability.

Policy Based Governance


Use cloud-based governance tools to track cloud usage and costs, and to alert administrators when total
usage for an account, or for a specific vendor product, exceeds a set value. Schedule operational hours to
automatically shut down and start virtual machines, and configure automated events that alert administrators
about volumes that have been disassociated from virtual machines (standalone volumes) for more than a set
number of days. Based on event thresholds, remove unused and underutilized resources, and avoid
unnecessary waste by sizing instances so they deliver a good balance between performance and cost. Avoid
cost overruns by using policies to terminate servers created to temporarily handle peak workloads.
In short, use integrated data sources, metadata, or custom tags to define a set of rules that lead to improved
management, reporting, and optimization.
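As a sketch of what such a policy can look like in practice, the boto3 script below stops running EC2 instances carrying a non-production tag. The tag convention (env=non-prod) and the idea of running it on an evening schedule are assumptions, not a prescribed tool:

import boto3

ec2 = boto3.client("ec2")

# Find running instances carrying the (assumed) non-production tag.
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:env", "Values": ["non-prod"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
if ids:
    ec2.stop_instances(InstanceIds=ids)  # stop them outside business hours
    print("Stopped:", ids)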

Budgets
Define and allocate budgets for departments, cost centers and projects, and put approval mechanisms in
place to avoid cost overruns by sending out alerts when thresholds are breached. Use showback reports to
charge departments back for their cloud usage and to limit the cost and use of resources. This alignment of
cost with value ensures the anticipated business benefit once the cloud resources are in production.
Source - https://dzone.com/articles/fundamentals-of-cloud-cost-management



8. Enlist and explain legal issues involved in the cloud with the help of a case study.
Enterprises are moving their assets to the cloud to capture its many business benefits, including ease of
deployment and a reduced, if not eliminated, need for IT infrastructure. However, cloud computing
offers an array of pitfalls for the unwary. The unique legal risks and considerations presented by the cloud
are especially important and often overlooked by non-lawyers. Here are the top five legal considerations on
the way to the cloud:

Service levels
It should go without saying that the starting point should be the business case and intended use of the
service, and not any legal document, such as a service level agreement. Understand what business problem
the service will be solving; the intended internal and external users; when, where and how the service will be
accessed; whether or not the service is business-critical; the practical consequences if the service is down or
degraded for any period of time; and how the use of the service may change over time. Then, ensure the
agreement reflects your needs.
Almost invariably, the agreement will address availability, planned outages, critical and noncritical outages,
service credits and termination rights. Typically, the sole remedy in case of a breach of the agreement is a
service credit, which is usually capped based on some percentage of fees paid during the previous 12-month
period. Customers should ask whether the credit is simply window dressing or actually a meaningful
economic remedy that would deter the vendor from breaching the agreement.

Termination or suspension of service


The software application and/or the data running or housed in the cloud may be critical to your business.
Continuity of access and use (of both the application and the data), especially when both are on a third-party
server, is of utmost importance. To that end, does the cloud vendor in each instance notify you when any of
the terms of the agreement may have been violated, and are you given an opportunity to remedy each
violation?
There is, of course, a delicate balance to be struck here. In a setting where there are multiple customers
(tenants), the cloud vendor will have competing obligations to the other customers, and, inasmuch as the
actions of one tenant may degrade performance for another, some level of flexibility is required. One
approach is to distinguish between the service and the data; in the case of suspension, for example, agree not
to lock down access to the data.

Representations and warranties


While seemingly arcane, these provisions may be the most important in terms of potential pitfalls. A
representation is a statement of fact, either past or present, while a warranty is a promise that a statement is,
and will remain, true. Typical
reps and warranties should confirm that there are no pending or threatened claims of intellectual property
right (IPR) infringement and address continued non-infringement, performance, data security and privacy.
Breach of a warranty will typically give rise to a limited remedy and thus will be to the exclusion of other
remedies, such as money damages. Therefore, be sure the limited remedy makes business sense and will
suffice. The cloud providers also typically request reps and warranties from the customer, including those
pertaining to the customer's data. To that end, the buyer must be careful about the sources of its data or risk
exposing itself to liability.
An indemnity is a contractual obligation to compensate a party for a loss. Thus, an indemnity would
compensate the cloud customer for any claims that its use of the service violated any third-party IP rights,
such as patent, copyright or trademark. These suits (especially patent) are costly, so care must be taken to
ensure that you are adequately covered.



Confidentiality
Cloud customers should be sure to get satisfactory promises regarding which vendor personnel will have
access to confidential information (including customer data) and what steps the vendor will undertake to
maintain the confidentiality of that information. Data is king, and this provision deserves considerable
attention.

Commercial/Other
The considerations above are a good starting point but they are just the tip of the iceberg. Here are a few
more to consider: storage fees, if and when there are automatic upgrades; whether or not there are multiple
environments (e.g., development, test, and production) available to customer; how customization works in a
cloud setting; how many data recoveries does the vendor provide free of charge (and what are the costs of
additional backups); and how easy is it to move to another cloud and how will the vendor support the
transition.
Source - https://www.forbes.com/2010/04/12/cloud-computing-enterprise-technology-cio-network-legal.html#3faeedb02ebe



9. Explain the process of migrating to the cloud with a case study.
There are several strategies for migrating applications to new environments. More and more enterprises are
moving applications to the cloud to modernize their current IT asset base or to prepare for future needs.
They are taking the plunge, moving a few mission-critical applications to the cloud first and quickly
realizing that other applications are also a good fit. To illustrate the step-by-step strategy, the source
whitepaper provides three scenarios. Each scenario discusses the motivation for the migration, describes the
before and after application architecture, details the migration process, and summarizes the technical
benefits of the migration.

Source - https://media.amazonwebservices.com/CloudMigration-main.pdf
10. Present a report on Google cloud and cloud services.
Google Cloud Platform (GCP), offered by Google, is a suite of cloud computing services that runs on the
same infrastructure that Google uses internally for its end-user products, such as Google
Search and YouTube. Alongside a set of management tools, it provides a series of modular cloud services
including computing, data storage, data analytics and machine learning. Registration requires a credit card or
bank account details.

Google Cloud Platform provides infrastructure as a service, platform as a service, and serverless
computing environments.

In April 2008, Google announced App Engine, a platform for developing and hosting web applications in
Google-managed data centers, which was the first cloud computing service from the company. The service
became generally available in November 2011. Since the announcement of App Engine, Google added
multiple cloud services to the platform.

Google Cloud Platform is a part of Google Cloud, which includes the Google Cloud Platform public cloud
infrastructure, as well as GSuite, enterprise versions of Android and Chrome OS, and application
programming interfaces (APIs) for machine learning and enterprise mapping services.

Why Google Cloud Platform?


Google Cloud Platform runs on the same infrastructure that Google uses internally for its end-user products,
such as Google Search, Gmail, Google Photos and YouTube, and offers several features that give it an edge
over other cloud vendors.

Google Cloud Platform Regions and Zones


Google Cloud Platform services are available in various locations across North America, South America,
Europe, Asia, and Australia. These locations are divided into regions and zones. You can choose where to
locate your applications to meet your latency, availability and durability requirements.
At the time of writing, GCP offered a total of 15 regions, with at least 3 zones in every region.



Google Cloud Platform (GCP) Services
Google offers a wide range of services. The following are the major Google Cloud service domains:
- Compute
- Networking
- Storage and Databases
- Big Data
- Machine Learning
- Identity & Security
- Management and Developer Tools
Compute
GCP provides a scalable range of computing options you can tailor to match your needs. It provides highly
customizable virtual machines and the option to deploy your code directly or via containers.
- Google Compute Engine
- Google App Engine
- Google Kubernetes Engine
- Google Cloud Container Registry
- Cloud Functions
Networking
This domain includes the following networking-related services:
- Google Virtual Private Cloud (VPC)
- Google Cloud Load Balancing
- Content Delivery Network
- Google Cloud Interconnect
- Google Cloud DNS
Storage and Databases
This domain includes the following data-storage services (a short Cloud Storage usage sketch follows the list):
- Google Cloud Storage
- Cloud SQL
- Cloud Bigtable
- Google Cloud Datastore
- Persistent Disk
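The Cloud Storage sketch referenced above uses the official google-cloud-storage Python client; the bucket name is a placeholder, and credentials are assumed to come from the environment:

from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-example-bucket")  # placeholder bucket name

# Upload a local file as an object in the bucket.
bucket.blob("reports/report.pdf").upload_from_filename("report.pdf")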



Big Data
This domain includes the following big data services:
- Google BigQuery
- Google Cloud Dataproc
- Google Cloud Datalab
- Google Cloud Pub/Sub
Cloud AI
This domain includes the following machine learning services:
- Cloud Machine Learning
- Vision API
- Speech API
- Natural Language API
- Translation API
- Jobs API
Identity & Security
This domain includes the following security services:
- Cloud Resource Manager
- Cloud IAM
- Cloud Security Scanner
- Cloud Platform Security
Management Tools
This domain includes the following monitoring and management services:
- Stackdriver
- Monitoring
- Logging
- Error Reporting
- Trace
- Cloud Console
 Cloud Console
Developer Tools
This domain includes the following development services:
- Cloud SDK
- Deployment Manager



- Cloud Source Repositories
- Cloud Test Lab
Source - https://www.edureka.co/blog/what-is-google-cloud-platform/

