Cloud Computing - Question Bank
"The cloud" refers to servers that are accessed over the Internet, and the software and
databases that run on those servers.
Cloud servers are located in data centers all over the world. By using cloud computing,
users and companies don't have to manage physical servers themselves or run software
applications on their own machines.
Reduction of costs – unlike on-site hosting, deploying applications in the cloud can cost
less, because more effective use of physical resources lowers hardware costs.
Universal access - cloud computing can allow remotely located employees to access
applications and work via the internet.
Up to date software - a cloud provider will also be able to upgrade software, taking into
account feedback from previous software releases.
Choice of applications – this gives cloud users the flexibility to experiment and choose
the option that best suits their needs. Cloud computing also allows a business to use,
access, and pay for only what it uses, with a fast implementation time.
Potential to be greener and more economical - the average amount of energy needed
for a computational action carried out in the cloud is far less than the average amount
for an on-site deployment. This is because different organizations can share the same
physical resources securely, leading to more efficient use of the shared resources.
Flexibility – cloud computing allows users to switch applications easily and rapidly, using
the one that suits their needs best. However, migrating data between applications can
be an issue.
Cloud Computing:
According to NIST, “Cloud computing is a model for enabling ubiquitous, convenient, on-
demand network access to a shared pool of configurable computing resources (e.g., networks,
servers, storage, applications, and services) that can be rapidly provisioned and released with
minimal management effort or service provider interaction.”
The NIST model lists five essential characteristics:
On-demand self-service
Broad network access
Resource pooling
Rapid elasticity
Measured service
Rapid elasticity:
To the consumer, the capabilities available for provisioning often appear to be unlimited
and can be appropriated in any quantity at any time.
Measured Service
Cloud systems automatically control and optimize resource use by leveraging a metering
capability at some level of abstraction appropriate to the type of service (e.g., storage,
processing, bandwidth, and active user accounts).
Resource usage can be monitored, controlled, and reported, providing transparency for
both the provider and consumer of the utilized service.
On-demand self-service
A consumer can unilaterally provision computing capabilities, such as server time and
network storage, as needed automatically without requiring human interaction with
each service provider.
Broad network access
Capabilities are available over the network and accessed through standard mechanisms
that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones,
tablets, laptops, and workstations).
Resource pooling
The provider’s computing resources are pooled to serve multiple consumers using a
multi-tenant model, with different physical and virtual resources dynamically assigned
and reassigned according to consumer demand.
There is a sense of location independence in that the customer generally has no control
or knowledge over the exact location of the provided resources but may be able to
specify location at a higher level of abstraction (e.g., country, state, or datacenter).
3. Discuss the concept of Grid Computing. Differentiate between Cloud computing and
Grid Computing
Grid Computing
Grid computing is a network-based computational model that can process large volumes
of data with the help of a group of networked computers that coordinate to solve a
problem together.
Basically, it’s a vast network of interconnected computers working towards a common problem
by dividing it into several small units called grids. It’s based on a distributed architecture which
means tasks are managed and scheduled in a distributed way with no time dependency.
The group of computers acts as a virtual supercomputer to provide scalable and seamless
access to wide-area computing resources which are geographically distributed and present
them as a single, unified resource to perform large-scale applications such as analyzing huge
sets of data.
Cloud computing, on the other hand, is a whole new class of computing based on network
technology where every user of the cloud has its own private resource that is provided by the
specific service provider.
Terminology of Grid Computing and Cloud Computing
Both are network based computing technologies that share similar characteristics such
as resource pooling, however, they are very different from each other in terms of
architecture, business model, interoperability, etc.
Cloud computing, on the other hand, is a form of computing based on virtualized resources
which are located over multiple locations in clusters
Grid computing is based on a distributed system which means computing resources are
distributed among different computing units which are located across different sites,
countries, and continents.
In cloud computing, computing resources are managed centrally which are located over
multiple servers in clusters in cloud providers’ private data centers.
The main function of grid computing is job scheduling using all kinds of computing
resources: a task is divided into several independent sub-tasks, and each machine on
the grid is assigned one. After all the sub-tasks are completed, the results are sent back
to the main machine, which assembles and processes them.
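This scatter-gather pattern can be sketched with local worker threads standing in for grid nodes; the summing task is purely illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def sub_task(chunk):
    """Work assigned to one machine on the grid (here: a local worker thread)."""
    return sum(chunk)

def grid_sum(data, n_workers=4):
    # Divide the problem into independent sub-tasks ("grids").
    size = max(1, len(data) // n_workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # Each worker handles one sub-task; the "main machine" gathers the results.
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = pool.map(sub_task, chunks)
    return sum(partials)

total = grid_sum(list(range(1_000_000)))  # same result as summing directly
```

A real grid scheduler would also handle node failures and data movement, which this sketch omits.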
Cloud computing involves resource pooling, grouping resources on an as-needed basis
from clusters of servers.
4. Discuss How Clouds are Changing the computing Environment
"The cloud" refers to servers that are accessed over the Internet, and the software and
databases that run on those servers.
Cloud servers are located in data centers all over the world. By using cloud computing,
users and companies don't have to manage physical servers themselves or run software
applications on their own machines.
Unparalleled Scalability
Substantial Savings
Unrivaled Flexibility
Matchless Security
Software upgrades have always been perceived as a tedious task especially when the
changes are taking place at the corporate level on a large scale.
But cloud service providers handle all these hassles of system updates, so you don’t
have to keep tabs on all the maintenance aspects.
All the system upgrades and security updates are taken care of by the service provider
so that companies can focus on the core competencies without being interrupted by
ongoing updates.
Unparalleled Scalability
Every corporate territory comes with a massive amount of data, which makes the
cloud all the more essential. Cloud platforms are substantially scalable,
which is highly beneficial for the ever-fluctuating storage needs of the IT environment.
Before the cloud era, companies struggled with their storage needs and wasted
time upgrading servers. But with the advent of cloud computing, expanding storage
needs are no longer an issue, as every change is managed on the spot.
Substantial Savings
Moving to the cloud allows companies to lower their operational expenses and improve
resource efficiency while cutting down on equipment and IT staffing costs.
As the service provider handles all the system and security updates, big businesses can
cut their infrastructure budget by half.
Unrivaled Flexibility
Whether employees want to share files and folders across the globe or work from home
using their own devices, cloud technology makes it all possible. With the cloud, companies
can easily access business-related documents from any location and share them with
multiple users.
This all-encompassing platform comes with unlimited storage and sharing capabilities
to keep corporations connected at all times.
This much-needed flexibility for end users can, however, open the door to performance
issues in the infrastructure.
Matchless Security
Storing a substantial amount of sensitive data on a cloud platform may sound risky, but
this offsite data storage technology is designed to be secure and reliable.
The cloud reduces the risk of on-device data theft because all the information is securely
stored offsite and away from the device.
Information is always accessible with encrypted passwords and extra care is taken by
cloud service providers to ensure that all the stored data remains protected.
5. Discuss the term Big Data by analyzing its Historical background and 3vs.
Big data is data that contains greater variety, arriving in increasing volumes and with
ever-higher velocity. These are known as the three Vs.
Put simply, big data is larger, more complex data sets, especially from new data sources. These
data sets are so voluminous that traditional data processing software just can’t manage them.
But these massive volumes of data can be used to address business problems you wouldn’t
have been able to tackle before.
Big data is a term that describes the large volume of data – both structured and
unstructured – that inundates a business on a day-to-day basis.
But it’s not the amount of data that’s important. It’s what organizations do with the
data that matters.
Big data can be analyzed for insights that lead to better decisions and strategic business moves
The act of accessing and storing large amounts of information for analytics has been
around a long time. But the concept of big data gained momentum in the early 2000s
when industry analyst Doug Laney articulated the now-mainstream definition of big
data as the three V’s:
With the advent of the Internet of Things (IoT), more objects and devices are connected
to the internet, gathering data on customer usage patterns and product performance.
The emergence of machine learning has produced still more data.
While big data has come far, its usefulness is only just beginning. Cloud computing has
expanded big data possibilities even further. The cloud offers truly elastic scalability, where
developers can simply spin up ad hoc clusters to test a subset of data.
Volume
Velocity
Variety
Common sources of big data include:
Social Data
Machine Data
Transactional Data
Volume
The amount of data matters. With big data, you’ll have to process high volumes of low-
density, unstructured data.
This can be data of unknown value, such as Twitter data feeds, clickstreams on a
webpage or a mobile app, or sensor-enabled equipment.
For some organizations, this might be tens of terabytes of data. For others, it may be
hundreds of petabytes.
Velocity
Velocity is the fast rate at which data is received and (perhaps) acted on.
Normally, the highest velocity of data streams directly into memory versus being written
to disk.
Some internet-enabled smart products operate in real time or near real time and will
require real-time evaluation and action.
Variety
Traditional data types were structured and fit neatly in a relational database.
With the rise of big data, data comes in new unstructured data types.
Unstructured and semistructured data types, such as text, audio, and video, require
additional preprocessing to derive meaning and support metadata.
IT as a service:
With the proliferation of cloud technologies, responsibilities such as provisioning and
maintaining IT infrastructure are now expected from external cloud vendors. This shift
has transformed the requirements for internal IT departments.
IT is now expected to operate as a business unit that assists the organization and its
users in leveraging commoditized IT services in line with organizational goals.
What is ITaaS?
In the same way as cloud computing services, enterprise IT expertise and services
offered by the internal IT department can be consumed “as a service”.
Consumers have a variety of choices and can employ IT solutions prepared to meet their
specific demands, which have already been assessed and prepared for by the ITaaS
service provider.
The IT department is no longer the tech folks focused on putting out fires when an
end user fails to employ a required IT service. Instead, it focuses on addressing the
unique requirements of internal consumers, giving them options to choose the best
available resources and solutions, and managing the entire experience lifecycle
associated with them.
Business Focus: The operations and service architecture of an ITaaS vendor is designed
around the business requirements of the organization instead of the technical projects
and IT infrastructure running on-premise.
IT Management Framework: The leadership should support decisions on IT services
with a focus on business and end-user requirements instead of individual projects and
technology assets
Financing: The pricing of an ITaaS offering should justify the promised cost savings and
value. As a broker and managed service provider, ITaaS providers may have a limited
range of profitability when customers can purchase a service directly from the vendor.
Agile ITSM: Since ITaaS is focused on the process of serving user requests and aligning IT
with business, it must adopt an appropriate ITSM framework that enables agile and
effective business operations.
7. How Big Data Works. Discuss its Various Big Data Use Cases in the IT industry.
Big data gives you new insights that open up new opportunities and business models.
Getting started involves three key actions:
Integrate
Manage
Analyze
Integrate
Big data brings together data from many disparate sources and applications. Traditional
data integration mechanisms, such as ETL (extract, transform, and load), generally aren’t
up to the task. Analyzing big data sets at terabyte, or even petabyte, scale requires new
strategies and technologies.
During integration, you need to bring in the data, process it, and make sure it’s
formatted and available in a form that your business analysts can get started with.
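A toy sketch of the integrate step, assuming two small in-memory "sources" with different shapes that are normalized into one schema analysts can query (all names and records are hypothetical):

```python
# Illustrative ETL-style integration: extract records from two sources,
# transform them into one common schema, and load them into a "warehouse".

raw_web_logs = [{"user": "u1", "ts": "2024-01-01", "page": "/home"}]
raw_crm_rows = [("u1", "Alice", "alice@example.com")]

def extract():
    # Pull raw records, tagged with their source.
    yield from (("web", r) for r in raw_web_logs)
    yield from (("crm", r) for r in raw_crm_rows)

def transform(source, record):
    # Normalize each source's shape into one schema.
    if source == "web":
        return {"user_id": record["user"], "kind": "pageview", "detail": record["page"]}
    return {"user_id": record[0], "kind": "profile", "detail": record[2]}

def load(rows, store):
    store.extend(rows)

warehouse = []
load((transform(s, r) for s, r in extract()), warehouse)
```

Real pipelines add validation, deduplication, and distributed execution on top of this basic extract/transform/load shape.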
Manage
Big data requires storage. Your storage solution can be in the cloud, on premises, or
both. You can store your data in any form you want and bring your desired processing
requirements and necessary process engines to those data sets on an on-demand basis.
Many people choose their storage solution according to where their data is currently residing.
The cloud is gradually gaining popularity because it supports your current compute
requirements and enables you to spin up resources as needed.
Analyze
Your investment in big data pays off when you analyze and act on your data.
Get new clarity with a visual analysis of your varied data sets. Explore the data further to
make new discoveries. Share your findings with others.
Build data models with machine learning and artificial intelligence. Put your data to work.
Product Development
Companies like Netflix and Procter & Gamble use big data to anticipate customer
demand. They build predictive models for new products and services by classifying key
attributes of past and current products or services and modeling the relationship
between those attributes and the commercial success of the offerings.
In addition, P&G uses data and analytics from focus groups, social media, test markets, and
early store rollouts to plan, produce, and launch new products.
Predictive Maintenance
Factors that can predict mechanical failures may be deeply buried in structured data,
such as the year, make, and model of equipment, as well as in unstructured data that
covers millions of log entries, sensor data, error messages, and engine temperature.
By analyzing these indications of potential issues before the problems happen, organizations
can deploy maintenance more cost effectively and maximize parts and equipment uptime.
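As a toy stand-in for a real predictive model, the following flags equipment whose recent sensor readings stay above a threshold; the machine names, readings, and threshold are all hypothetical:

```python
def flag_at_risk(readings, limit=90.0, window=3):
    """Flag machines whose last `window` temperature readings all exceed `limit`.
    A stand-in for a real model trained on logs, sensor data, and error messages."""
    at_risk = []
    for machine, temps in readings.items():
        recent = temps[-window:]
        # Require a sustained trend, not a single spike.
        if len(recent) == window and all(t > limit for t in recent):
            at_risk.append(machine)
    return sorted(at_risk)

sensors = {
    "pump-1": [82.0, 91.5, 93.2, 95.8],  # sustained overheating: schedule maintenance
    "pump-2": [80.1, 79.9, 92.0, 81.3],  # one spike only: fine
}
```

A production system would replace the fixed threshold with a model fitted to historical failure data, but the scheduling decision it feeds is the same.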
Customer Experience
A clearer view of customer experience is more possible now than ever before. Big data
enables you to gather data from social media, web visits, call logs, and other sources to
improve the interaction experience and maximize the value delivered.
Start delivering personalized offers, reduce customer churn, and handle issues proactively.
Machine Learning
Machine learning is a hot topic right now, and data—specifically big data—is one of the
reasons why. We are now able to teach machines instead of program them. The availability
of big data to train machine learning models makes that possible.
Fraud and Compliance
When it comes to security, it’s not just a few rogue hackers—you’re up against entire
expert teams.
Security landscapes and compliance requirements are constantly evolving. Big data helps you
identify patterns in data that indicate fraud and aggregate large volumes of information to
make regulatory reporting much faster.
Operational Efficiency
Operational efficiency may not always make the news, but it’s an area in which big data
is having the most impact.
With big data, you can analyze and assess production, customer feedback and returns,
and other factors to reduce outages and anticipate future demands. Big data can also be
used to improve decision-making in line with current market demand.
Drive Innovation
Big data can help you innovate by studying interdependencies among humans,
institutions, entities, and processes, and then determining new ways to use those insights.
Use data insights to improve decisions about financial and planning considerations.
Examine trends and what customers want to deliver new products and services.
Implement dynamic pricing. There are endless possibilities.
8. Discuss the Term Hypervisor with all its Types
The Virtual Machine Monitor (VMM), also known as the hypervisor, is the core part of every
system virtualization solution. Implemented in software or hardware, it allows
multiple operating systems to run on a single physical machine. Essentially, the VMM can
be seen as a small and light operating system with basic functionality, responsible for
controlling the underlying hardware resources and making them available to each guest
VM.
Hypervisors should be as minimal and light as possible in order to achieve efficiency and
optimal security. All the resources are provided uniformly to each VM, making it
possible for VMs to run on any kind of system regardless of its architecture or different
subsystems. VMMs have two main tasks to accomplish: enforce isolation between the
VMs, and manage the resources of the underlying hardware pool.
Isolation is one of the vital security capabilities that VMMs should offer. All interactions
between the VMs and the underlying hardware should go through the VMM. It is
responsible for mediating these communications and must be able to enforce
isolation and containment. A VM must be restricted from accessing parts of the
memory that belong to another VM, and similarly, a potential crash or failure in one VM
should not affect the operation of the others.
Resource Management
Normally, the task of managing and sharing the available hardware resources is the
operating system's responsibility. In the case of a virtualized system, this responsibility
becomes an integral part of the hypervisor's function.
The hypervisor should manage CPU load balancing, map physical to logical memory
addresses, trap the CPU's instructions, migrate VMs between physical systems, and
so on, while protecting the integrity of each VM and the stability of the whole system.
A substantial number of lines of the hypervisor's code are written to cope with the
managerial tasks it has to deliver. At the same time, the code should be as minimal as
possible to avoid security vulnerabilities in the virtualization layer.
TYPE 1 Hypervisor
Type I hypervisors run directly on top of the hardware. Hypervisors of this type are
bootable operating systems and, depending on their design, may incorporate the device
drivers for communication with the underlying hardware.
Type I hypervisors can be categorized further depending on their design, which can be
monolithic or microkernel.
Type I hypervisors offer optimal efficiency and are usually preferred for server virtualization.
By placing them on top of the bare hardware, they allow for direct communication with
it. The security of the whole system is based on the security capabilities of the
hypervisor.
In a monolithic design, the device drivers are included in the hypervisor's code. This
approach offers better performance since the communication between applications and
hardware takes place without any intermediation with another entity.
However, this entails that the hypervisor will have a large amount of code to
accommodate the drivers which makes the attack surface even bigger, and the security
of the system could be compromised more easily
In the microkernel design, the device drivers are installed in the operating system of the
parent guest. The parent guest is one privileged VM used for creating, destroying and
managing the non-privileged child guest VMs residing in the system.
Each one of the child guest machines that needs to communicate with the hardware will
have to mediate through the parent guest to get access to the hardware.
TYPE 2 Hypervisor
Type II hypervisors are installed on top of the host operating system and run as
applications (e.g. VMware Workstation).
These hypervisors allow for creating virtual machines to run on the operating system,
which in turn provides the device drivers to be used by the VMs.
Hypervisors of this type are less efficient compared to Type I hypervisors, since one
extra layer of software is added to the system, making communication between
applications and hardware more complex.
The security of a system of this type relies entirely on the security of the host
operating system.
Any breach of the host operating system could give an attacker complete control
over the virtualization layer.
Hardware Virtualization
It is the most common type of virtualization and it provides advantages like optimum
hardware utilization and application uptime.
The basic idea is to combine many small physical servers into one large physical server
so that the processor can be used more effectively. The operating system that is running
on a physical server gets converted into a well-defined OS that runs on the virtual
machine.
Each small server can host a virtual machine, but the entire cluster of servers is treated
as a single device by any process requesting the hardware.
The hardware resource allotment is done by the hypervisor. The main advantages
include increased processing power as a result of maximized hardware utilization and
application uptime.
Subtypes:
Full Virtualization – Guest software does not require any modifications since the
underlying hardware is fully simulated.
Emulation Virtualization – The virtual machine simulates the hardware and becomes
independent of it. The guest operating system does not require any modifications.
Paravirtualization – the hardware is not simulated; guest software runs in its own
isolated domain.
Multitenancy
Just like in an apartment building, where many tenants cost-efficiently share the
common infrastructure of the building but have walls and doors that give them privacy
from other tenants, a cloud uses multitenancy technology to share IT resources securely
among multiple applications and tenants (businesses, organizations, etc.) that use the
cloud.
Some clouds use virtualization-based architectures to isolate tenants, others use custom
software architectures to get the job done. In a multi-tenant architecture, multiple
instances of an application operate in a shared environment.
This architecture works because each tenant is physically integrated but logically
separated: a single instance of the software runs on one server and then serves
multiple tenants. In this way, a software application in a multi-tenant architecture
can share a dedicated instance of configurations, data, user management, and other
properties.
Importance of Multitenancy
Each user is given a separate and ideally secure space within those servers to store
data. The multi-tenant architecture can also aid in providing a better ROI for
organizations, as well as quickening the pace of maintenance and updates for tenants.
Multi-tenant vs. single-tenant
Because each tenant is in a separate environment, they are not bound in the same way that
users of shared infrastructure would be; meaning single-tenant architectures are much more
customizable
Multi-tenancy is the more used option of the two, as most SaaS services operate on
multi-tenancy.
With a multi-tenant architecture, the provider only has to make updates once. With a
single-tenant architecture, the provider must touch multiple instances of the software in
order to make updates.
A potential customer would likely choose a single-tenant infrastructure over multi-
tenancy for the ability to have more control and flexibility in their environment --
typically to address specific requirements.
Tenants don't have to worry about updates, since they are pushed out by the host
provider.
Tenants don't have to worry about the hardware their data is being hosted on.
Multi-tenant apps tend to be less flexible than apps in other tenant architectures, such
as single-tenancy.
Multi-tenant apps need stricter authentication and access controls for security.
Tenants have to worry about noisy neighbors, meaning someone else on the
same CPU that consumes a lot of cycles, which may slow response time.
The ability to enhance the cloud experience and have cross-cloud compatibility has
helped form the Cloud API (Application Programming Interface) environment.
Administrators can integrate applications and other workloads into the cloud using
these APIs.
Understanding the cloud API model isn’t always easy. There are many ways to integrate
into an infrastructure, and each methodology has its own underlying components.
To get a better understanding of cloud computing and how APIs fit into the process, it’s
important to break down the conversation at a high level. There are four major areas
where cloud computing will need to integrate with another platform (or even
another cloud provider).
Also known as Platform-as-a-Service, these service APIs are designed to provide access
and functionality for a cloud environment.
This means integration with databases, messaging systems, portals, and even storage
components.
These APIs are also referred to as Software-as-a-Service APIs. Their goal is to help
connect the application-layer with the cloud and underlying IT infrastructure. So, CRM
and ERP applications are examples of where application APIs can be used to create
a cloud application extension for your environment.
● Many environments today don’t use only one cloud provider or even platform. Now,
there is a need for greater cross-platform compatibility.
More providers are offering generic HTTP and HTTPS API integration to allow their
customers greater cloud versatility.
Furthermore, cross-platform APIs allow cloud tenants the ability to access resources not
just from their primary cloud provider, but from others as well. This can save a lot of
time and development energy since organizations can now access the resources and
workloads of different cloud providers and platforms.
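One way to picture cross-platform API access is a thin facade that hides each provider's own call names behind a single interface; both provider classes below are hypothetical stand-ins for real SDKs:

```python
class ProviderA:
    """Hypothetical cloud SDK with its own method names."""
    def start_vm(self, size):
        return f"a-vm-{size}"

class ProviderB:
    """A second hypothetical SDK with a different API shape."""
    def launch_instance(self, sku):
        return f"b-inst-{sku}"

class CloudClient:
    """Uniform facade so tenants can reach resources on either provider."""
    def __init__(self, backend):
        self.backend = backend

    def create_instance(self, size):
        # Translate the common call into each provider's native API.
        if isinstance(self.backend, ProviderA):
            return self.backend.start_vm(size)
        return self.backend.launch_instance(size)

vm = CloudClient(ProviderA()).create_instance("small")
```

Real cross-platform toolkits (such as the Simple Cloud API mentioned above) apply the same adapter idea across storage, queue, and compute services.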
Cloud API
● Apache (Citrix) CloudStack
● Amazon Web Services API and Eucalyptus
● Google Compute Engine
● Simple Cloud
● OpenStack API
● VMware vCloud API
Each solution and platform has its own benefits and challenges. However, many of them
have something in common: interoperability. For example, the CloudStack model
(although backed by Citrix) still integrates with any underlying hypervisor and supports
other common cloud API models, including the AWS API, OpenStack API, and even the
VMware vCloud API.
● Other solutions, such as Simple Cloud API, are developed and funded by a number of
organizations to create a true cross-platform cloud environment. Simple Cloud APIs are
able to integrate with services from Amazon and Microsoft. The solution that you chose
to work with will depend on the infrastructure that you are trying to deliver. If storage
connectivity is a concern, look for a platform that easily integrates with various storage
models across a WAN.
12. Discuss the concept of Billing and Metering of services in Context of Cloud Computing
Metered Billing
From power usage to network traffic, there are a lot of moving parts in the cloud,
necessitating tools that capture and measure this activity in order to record and report
various aspects of system performance.
Organizations using cloud services typically receive daily emails that provide alerts for
spending data, usage spikes, sudden and unexpected changes, and more. This is called
“metering.”
Metered billing is a pricing model in which you pay for a service only based on the level
of usage. For example, the cost of a service might depend on time used, volume of data
processed, or CPU cycles—depending on the type of service. You receive a monthly bill
to pay for your actual level of usage and nothing more.
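As a sketch of how such a bill is computed, the following sums units of usage against per-unit rates; the meter names and rates are hypothetical, not any provider's actual prices:

```python
# Hypothetical per-unit rates for three metered dimensions.
RATES = {"compute_hours": 0.08, "gb_processed": 0.02, "api_calls": 0.0001}

def monthly_bill(usage):
    """Pay only for actual usage: sum of (units consumed * per-unit rate)."""
    return round(sum(units * RATES[meter] for meter, units in usage.items()), 2)

bill = monthly_bill({"compute_hours": 100, "gb_processed": 500, "api_calls": 20_000})
# 100*0.08 + 500*0.02 + 20000*0.0001 = 8 + 10 + 2 = 20.0
```

The provider's metering system supplies the `usage` figures automatically; the customer sees only the resulting monthly total.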
Metered billing is an advancement made possible by the increasing number of
applications and services being delivered via the cloud.
Under a metered-billing pricing model, the cloud-based application must be able to
track your usage level and automatically calculate a price that matches your usage level.
Compared to other pricing models such as multi-year licenses, or even traditional pay-
as-you-go models, metered billing enables a much higher degree of agility and flexibility
in resource use, provisioning capacity on the fly without incurring excessive costs.
Historically, the high cost of provisioning servers and infrastructure limited the ability to
develop software as a service (SaaS) applications.
For example, it would take weeks if not months to plan, order, ship, and install new
server hardware in the data center.
Today, new billing and metering models allow procurement of hardware and operating systems
— known as infrastructure as a service (IaaS) — in less than a minute.
Purchasing Options
On-Demand Instances–On-Demand Instances let you pay for compute capacity by the
hour with no long-term commitments.
Reserved Instances–Reserved Instances provide you with a significant discount (up to
75%) compared to On-Demand Instance pricing.
Spot Instances–Spot Instances allow customers to bid on unused Amazon EC2 capacity
and run those instances for as long as their bid exceeds the current Spot Price.
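The trade-off among these options comes down to simple arithmetic; the hourly rates below are hypothetical illustrations, not actual EC2 prices:

```python
HOURS_PER_MONTH = 730  # average hours in a month

def monthly_cost(hourly_rate, hours=HOURS_PER_MONTH):
    """Cost of running one instance for the given number of hours."""
    return hourly_rate * hours

on_demand = monthly_cost(0.10)               # hypothetical $0.10/hr, no commitment
reserved = monthly_cost(0.10 * (1 - 0.75))   # up to 75% discount vs On-Demand
spot = monthly_cost(0.03, hours=400)         # runs only while the bid exceeds Spot Price
```

The sketch shows why Spot suits interruptible workloads (cheap but fewer guaranteed hours) while Reserved suits steady, always-on workloads.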
Platform as a service (PaaS) billing and metering are determined by actual usage, as
platforms differ in aggregate and instance-level usage measures.
Actual usage billing enables PaaS providers to run application code from multiple
tenants across the same set of hardware depending on the granularity of usage
monitoring.
For example, the network bandwidth, CPU utilization, and disk usage per transaction or
application can determine PaaS cost.
The traditional concept for billing and metering SaaS applications is a monthly fixed
cost; in some cases, depending on the amount of data or number of “seats,” the billing
and pricing are optimized.
The number of seats is the number of users the organization allows to access the SaaS
application; more seats increase the monthly fee, and in some cases, if certain volumes
are met, there is a discount.
For instance, sales software provided as a service would cost US$50 per month per
sales agent for a company using the application.
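The per-user model from the example (US$50 per agent per month) can be expressed directly; the volume-discount tier here is a hypothetical illustration, not a standard SaaS term:

```python
def saas_monthly_fee(seats, per_seat=50, discount_threshold=100, discount=0.10):
    """Per-user monthly fee; a hypothetical 10% discount kicks in at volume."""
    total = seats * per_seat
    if seats >= discount_threshold:
        total *= 1 - discount
    return total

small_team = saas_monthly_fee(8)    # 8 agents at $50 each: $400/month
large_org = saas_monthly_fee(200)   # volume discount applies
```

The fee scales linearly with seats until the discount threshold, which mirrors the "if certain volumes are met, there is a discount" clause above.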
The primary concepts for SaaS billing and metering include:
Monthly subscription fees
Per-user monthly fees
The monthly subscription fee is a fixed cost billed per month, often for a minimum
contracted length of agreement of one year.
The monthly billing model converts the high initial investment from a software capital
cost into a monthly operational expense.
This model is especially appealing to small and medium-sized organizations, helping them
get started with the software required for their business initiatives.
13. Discuss the process of Cloud Automation. What is the difference between Cloud
Automation and Cloud Orchestration?
Cloud Automation
Cloud automation is a broad term that refers to the processes and tools an organization
uses to reduce the manual efforts associated with provisioning and managing cloud
computing workloads. IT teams can apply cloud automation to private, public and hybrid
cloud environments.
Cloud automation enables IT teams and developers to create, modify, and tear down
resources on the cloud automatically.
Traditionally, deploying and operating enterprise workloads was a time-consuming and
manual process. It often involved repetitive tasks, such as sizing, provisioning and
configuring resources like virtual machines (VMs); establishing VM clusters and load
balancing; creating storage logical unit numbers; invoking virtual networks; making the
actual deployment; and then monitoring and managing availability and performance.
This manual process is inefficient and often fraught with errors. These errors can lead to
troubleshooting, which delays the workload's availability. They might also expose security
vulnerabilities that can put the enterprise at risk.
Automation Benefits
Improved security and resilience—when sensitive tasks are automated, you do not need
multiple IT people or developers logging into mission critical systems. The risk of human
error, malicious insiders and account compromise is vastly reduced.
In addition, you can build security best practices into automated workflows, and enforce
security principles in 100% of your deployments.
Improved backup processes—organizations need to back up their system frequently, to
guard against accidental erasure, configuration calamity, equipment failure or cyber-
attack.
Automating backups on the cloud, or backing up on-premise systems automatically to
the cloud, dramatically improves an organization’s resilience to disaster.
Improved governance—when systems are set up manually or on an ad-hoc basis,
administrators may have low visibility over what is actually running and may not have a
centralized way to control the infrastructure.
Cloud automation lets you set up resources in a standardized, controlled manner, which
also means you have much more control over infrastructure running across your
organization.
Cloud Automation vs Cloud Orchestration: What is the Difference?
Cloud automation refers to automating individual tasks, such as provisioning a server
or configuring a backup, so that they run without manual intervention. Cloud
orchestration goes a step further: it coordinates many automated tasks into a cohesive
workflow or process, managing their order, dependencies and error handling. In short,
automation provides the building blocks, while orchestration arranges and connects them.
Infrastructure as Code (IaC)
Improved consistency
Configuration drift occurs when ad-hoc configuration changes and updates result in
mismatched development, test, and deployment environments.
This can result in issues at deployment, security vulnerabilities, and risks when
developing applications and services that need to meet strict regulatory compliance
standards. IaC prevents drift by provisioning the same environment every time.
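The drift-prevention idea behind IaC can be sketched as a reconciliation loop: a declarative spec describes the desired environment, and a tool computes and applies the actions needed to reach it. This toy version (not any real IaC tool's API; resource names and specs are invented) shows why repeated runs are idempotent:

```python
# Declarative desired state: the same spec always yields the same environment.
DESIRED_STATE = {
    "web-server": {"size": "medium", "image": "ubuntu-22.04"},
    "database":   {"size": "large",  "image": "postgres-15"},
}

def reconcile(current, desired):
    """Plan create/update/delete actions so `current` matches `desired`."""
    actions = []
    for name, spec in desired.items():
        if name not in current:
            actions.append(("create", name))
        elif current[name] != spec:
            actions.append(("update", name))
    for name in current:
        if name not in desired:
            actions.append(("delete", name))
    # Applying the plan reaches the desired state exactly, so a second run
    # over the new state plans no actions at all -- no drift can accumulate.
    new_state = {name: dict(spec) for name, spec in desired.items()}
    return actions, new_state

actions, state = reconcile({}, DESIRED_STATE)   # first run: everything created
actions2, _ = reconcile(state, DESIRED_STATE)   # second run: nothing to do
```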
In addition to dramatically reducing the time, effort, and specialized skill required to
provision and scale infrastructure, IaC lets organizations take maximum advantage
of cloud computing’s consumption-based cost structure.
It also enables developers to spend less time on plumbing and more time developing
innovative, mission-critical software solutions.
Infrastructure as a Service (IaaS)
IaaS enables end users to scale and shrink resources on an as-needed basis, reducing the
need for high, up-front capital expenditures or unnecessary “owned” infrastructure,
especially in the case of “spiky” workloads.
IaaS customers access resources and services through a wide area network (WAN), such
as the internet, and can use the cloud provider's services to install the remaining
elements of an application stack. For example, the user can log in to the IaaS platform to
create virtual machines (VMs); install operating systems in each VM; deploy
middleware, such as databases; create storage buckets for workloads and backups; and
install the enterprise workload into that VM.
Customers can then use the provider's services to track costs, monitor performance,
balance network traffic, troubleshoot application issues, manage disaster recovery and
more.
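The workflow above can be sketched with a stand-in client; `FakeIaaS` and its methods are invented for illustration and do not reflect any vendor's actual SDK:

```python
# Illustrative only: a fake IaaS client showing the order of operations
# described above (provision VM, install OS/middleware, create storage,
# install the workload), not a real provider API.
class FakeIaaS:
    def __init__(self):
        self.vms, self.buckets = {}, {}

    def create_vm(self, name, os_image):
        self.vms[name] = {"os": os_image, "software": []}
        return name

    def install(self, vm, package):
        self.vms[vm]["software"].append(package)

    def create_bucket(self, name):
        self.buckets[name] = []

cloud = FakeIaaS()
vm = cloud.create_vm("app-1", os_image="ubuntu-22.04")  # provision a VM with an OS
cloud.install(vm, "postgresql")                         # deploy middleware
cloud.create_bucket("app-1-backups")                    # storage for backups
cloud.install(vm, "enterprise-app")                     # install the workload
```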
Components of IaaS:
IaaS providers will manage large data centers, typically around the world, that contain
the physical machines required to power the various layers of abstraction on top of
them and that are made available to end users over the web. In most IaaS models, end
users do not interact directly with the physical infrastructure, but it is provided as a
service to them.
Compute
IaaS is typically understood as virtualized compute resources, so for the purposes of this
article, we will define IaaS compute as a virtual machine. Providers manage
the hypervisors and end users can then programmatically provision virtual “instances”
with desired amounts of compute and memory (and sometimes storage). Most
providers offer both CPUs and GPUs for different types of workloads. Cloud compute
also typically comes paired with supporting services like auto scaling and load
balancing that provide the scale and performance characteristics that make cloud
desirable in the first place.
Network
Storage
The three primary types of cloud storage are block storage, file storage, and object
storage. Block and file storage are common in traditional data centers but can often
struggle with the scale, performance, and distributed characteristics of the cloud. Of the
three, object storage has thus become the most common mode of storage in the cloud
given that it is highly distributed (and thus resilient), it leverages commodity hardware,
data can be accessed easily over HTTP, and scale is not only essentially limitless but
performance scales linearly as the cluster grows.
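The key-based, flat-namespace access pattern that makes object storage scale can be sketched with a toy in-memory store (real object stores expose this same pattern over HTTP, S3-style; the class and bucket names here are invented):

```python
# Toy in-memory object store: each object lives under a flat key within a
# bucket and carries arbitrary metadata -- no directory hierarchy to traverse.
class ObjectStore:
    def __init__(self):
        self._data = {}

    def put(self, bucket, key, data, **metadata):
        self._data[(bucket, key)] = {"data": data, "meta": metadata}

    def get(self, bucket, key):
        return self._data[(bucket, key)]["data"]

    def list_keys(self, bucket, prefix=""):
        # Prefix listing mimics "folders" without real hierarchy.
        return sorted(k for (b, k) in self._data
                      if b == bucket and k.startswith(prefix))

store = ObjectStore()
store.put("backups", "2024/01/db.dump", b"dump-1", content_type="application/octet-stream")
store.put("backups", "2024/02/db.dump", b"dump-2")
```

Because keys are independent of one another, the store can be partitioned across many machines, which is what gives object storage its near-linear scaling.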
Q 16: Discuss Various Types of PaaS Service Model Available in the Market.
Sol. Various types of PaaS are currently available to developers. They are:
Public PaaS
Private PaaS
Hybrid PaaS
Communications PaaS
Mobile PaaS
OpenPaaS
Public PaaS
Public PaaS is best fit for use in the public cloud. A public PaaS allows the user to control
software deployment while the cloud provider manages the delivery of all other major
IT components necessary to the hosting of applications, including operating systems,
databases, servers and storage system networks. Public PaaS vendors
offer middleware that enables developers to set up, configure and control servers and
databases without needing to set up the infrastructure. As a result, public PaaS
and infrastructure as a service (IaaS) run together, with PaaS operating on top of a
vendor's IaaS infrastructure while leveraging the public cloud.
Private PaaS
Private PaaS aims to deliver the agility of public PaaS while maintaining the security and
compliance benefits, and potentially lower costs, of the private data center. A private
PaaS is usually delivered as an appliance or software within the user's firewall, which is
frequently maintained in the company's on-premises data center. A private PaaS can be
developed on any type of infrastructure and can work within the company's
specific private cloud.
Hybrid PaaS
Hybrid PaaS combines public PaaS and private PaaS to provide companies with the
flexibility of infinite capacity provided by a public PaaS and the cost efficiencies of
owning an internal infrastructure in private PaaS. Hybrid PaaS utilizes a hybrid cloud.
Communications PaaS
Communications PaaS (CPaaS) is a cloud-based platform that enables developers to add
real-time communications features, such as voice, video and messaging, to their own
applications without building backend infrastructure.
CPaaS providers also help users throughout the development process by providing
support and product documentation. Some providers also offer software development
kits as well as libraries that can help build applications on different desktop and mobile
platforms. Development teams that choose to use CPaaS can save on infrastructure,
human resources and time to market.
Mobile PaaS
Mobile PaaS (mPaaS) is the use of a paid integrated development environment for the
configuration of mobile apps. In an mPaaS, coding skills are not required. MPaaS is
delivered through a web browser and typically supports public cloud, private cloud and
on-premises storage. The service is usually leased with pricing per month, varying
according to the number of included devices and supported features. MPaaS usually
provides an object-oriented drag-and-drop interface that allows users to simplify the
development of HTML5 or native apps through direct access to features such as the
device's GPS, sensors, cameras and microphone. It often supports various mobile OSes.
Open PaaS
Q 17: Discuss Various Pros and Cons of Platform as a Service Model.
The principal benefit of PaaS is simplicity and convenience for users. The PaaS provider
will supply much of the infrastructure and other IT services, which users can access
anywhere via a web browser. The ability to pay on a per-use basis allows enterprises to
eliminate the capital expenses they traditionally have for on-premises hardware and
software.
Vendor Lock-in
Vendor lock-in is another common concern since users cannot easily migrate many of
the services and data from one PaaS product to another competing product. Users must
evaluate the business risks of service downtime and vendor lock-in when they select a
PaaS provider.
Internal changes to a PaaS product are also a potential issue. For example, if a PaaS
provider stops supporting a certain programming language or opts to use a different set
of development tools, the impact on users can be difficult and disruptive. Users must
follow the PaaS provider's service roadmap to understand how the provider's plan will
affect their environment and capabilities
Many PaaS products are geared toward software development. These platforms offer
compute and storage infrastructures, as well as text editing, version management,
compiling and testing services that help developers create new software quickly and
efficiently. A PaaS product can also enable development teams to collaborate and work
together, regardless of their physical location. PaaS architectures keep their underlying
infrastructure hidden from developers and other users. As a result, the model is similar
to serverless computing and function-as-a-service architectures, in which the cloud
service provider manages and runs the server and controls the distribution of resources.
Q 18 : Outline Various Business Applications of PaaS Model.
Uses prebuilt, ready-to-use adapters for seamless integration of on-premises and cloud
applications
Requires real-time, fault-tolerant data integration and replication services for a wide
variety of on-premises and cloud databases
Uses developer productivity and tools including issue tracking, code versioning, wikis,
agile-development tools, continuous integration, and delivery automation
Has API-first development components, services, and processes for back- and front-end
developers
Utilizes a mobile application platform with open messaging, data and service
integration, NLP chatbots, and management
Provides language and tools interoperability between on-premises and cloud platforms
Uses multiplatform interoperability for tools, workloads for rapid DevTest deployment,
disaster recovery, and production environments
Uses deep and advanced analytics tools and techniques for statistical, predictive, and
machine-learning analytics
Q 19. Describe the Term Software as a Service in Detail discussing its various Advantages and
Characteristics.
SaaS is closely related to the application service provider (ASP) and on demand
computing software delivery models. The hosted application management model of
SaaS is similar to ASP, where the provider hosts the customer’s software and delivers it
to approved end users over the internet.
With the advent of the internet in the 1990s, providers began hosting software and
making it available to customers via the internet. This forerunner of SaaS, called the
application service provider (ASP) model, had serious limitations, however. For example,
each customer required their own version of the software, which meant they had to
install some software on users’ computers. Configuration was costly and time-consuming.
And, finally, ASP solutions typically didn’t offer a way to collect and
aggregate data efficiently.
The first SaaS solutions emerged in the late 1990s, when the term SaaS was originally
coined. This new model delivered much greater efficiencies than the ASP model. A
single instance of the application could serve multiple users and even customers, thanks
to its so-called multi-tenant architecture. Local installation of software was no longer
required. And it provided a way to collect, aggregate, and centralize valuable application
data. While the delivery model has remained constant since the early 2000s, SaaS has
evolved significantly from first-generation siloed solutions to modern SaaS suites that
enable high visibility across the business and can extend the power of SaaS through
emerging technologies such as IoT, AI, chatbots, digital assistants, and blockchain.
SaaS Characteristics
Multitenant Architecture
A multitenant architecture, in which all users and applications share a single, common
infrastructure and code base that is centrally maintained. Because SaaS vendor clients
are all on the same infrastructure and code base, vendors can innovate more quickly
and save the valuable development time previously spent on maintaining numerous
versions of outdated code.
Easy Customisation
The ability for each user to easily customise applications to fit their business processes
without affecting the common infrastructure. Because of the way SaaS is architected,
these customisations are unique to each company or user and are always preserved
through upgrades. That means SaaS providers can make upgrades more often, with less
customer risk and much lower adoption cost.
With the SaaS model, you can customise with point-and-click ease, making the weeks or
months it takes to update traditional business software seem hopelessly old fashioned.
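The combination of a shared code base with per-tenant customisations that survive upgrades can be sketched as a configuration overlay (the tenant names and settings here are invented for illustration):

```python
# One shared code base and default configuration for all tenants...
SHARED_DEFAULTS = {"theme": "standard", "currency": "USD", "version": "2.0"}

# ...with each tenant's customisations stored separately.
tenant_overrides = {
    "acme":   {"theme": "dark"},
    "globex": {"currency": "EUR"},
}

def effective_config(tenant):
    """Merge shared defaults with a tenant's own customisations."""
    return {**SHARED_DEFAULTS, **tenant_overrides.get(tenant, {})}

def upgrade(new_version):
    """Upgrading the shared code base touches no tenant override."""
    SHARED_DEFAULTS["version"] = new_version
```

Because customisations live outside the shared layer, the vendor can upgrade everyone at once and each tenant's settings are preserved, which is exactly the multitenant property described above.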
Better Access
Improved access to data from any networked device while making it easier to manage
privileges, monitor data use, and ensure everyone sees the same information at the
same time.
SaaS Advantages
SaaS removes the need for organizations to install and run applications on their own
computers or in their own data centers. This eliminates the expense of hardware
acquisition, provisioning and maintenance, as well as software licensing, installation and
support.
Gain access to sophisticated applications. To provide SaaS apps to users, you don’t need
to purchase, install, update or maintain any hardware, middleware or software. SaaS
makes even sophisticated enterprise applications, such as ERP and CRM, affordable for
organisations that lack the resources to buy, deploy and manage the required
infrastructure and software themselves
Use free client software. Users can run most SaaS apps directly from their web browser
without needing to download and install any software, although some apps require
plugins. This means that you don’t need to purchase and install special software for your
users.
Mobilise your workforce easily. SaaS makes it easy to “mobilise” your workforce
because users can access SaaS apps and data from any Internet-connected computer or
mobile device. You don’t need to worry about developing apps to run on different types
of computers and devices because the service provider has already done so. In addition,
you don’t need to bring special expertise onboard to manage the security issues
inherent in mobile computing. A carefully chosen service provider will ensure the
security of your data, regardless of the type of device consuming it.
Access app data from anywhere. With data stored in the cloud, users can access their
information from any Internet-connected computer or mobile device. And when app
data is stored in the cloud, no data is lost if a user’s computer or device fails.
Scalable usage: Cloud services like SaaS offer high vertical scalability, which gives
customers the option to access more, or fewer, services or features on-demand.
Automatic updates: Rather than purchasing new software, customers can rely on a SaaS
provider to automatically perform updates and patch management. This further reduces
the burden on in-house IT staff.
Accessibility and persistence: Since SaaS applications are delivered over the Internet,
users can access them from any Internet-enabled device and location.
SaaS Applications
Salesforce.com
Microsoft Office 365
Signature Microsoft productivity applications such as Word, Excel and PowerPoint are
longtime staples of the workplace, but the cloud-based Microsoft Office
365 dramatically expands the Office suite’s parameters. Users now may create, edit and
share content from any PC, Mac, iOS, Android or Windows device in real-time, connect
with colleagues and customers across a range of tools from email to video conferencing
and leverage a range of collaborative technologies supporting secure interactions both
inside and outside of the organization.
Box
Box supports more than 120 file types, and users may preview content prior to
downloading. All content sharing, editing, discussion and approval is confined to one
centralized file, and users receive real-time notifications when edits are made. Box also
automates tasks such as employee onboarding and contract approvals, reducing
repetition and abbreviating review cycles.
Amazon Web Services
Amazon, too, has evolved beyond its core e-commerce platform to support the on-
demand delivery of cloud-based IT resources and applications, bolstered by pay-as-you-
go pricing options. Amazon Web Services currently encompasses more than 70 services
in all, including computing, storage, networking, database, analytics, deployment,
management and tools for the Internet of Things.
Q 23: What is a Public Cloud? What Makes a Cloud Public? Discuss how a Public Cloud Works.
Public Cloud
A public cloud is a type of cloud computing in which a third-party service provider makes
computing resources—which can include anything from ready-to-use software
applications, to individual virtual machines (VMs), to complete enterprise-grade
infrastructures and development platforms—available to users over the public Internet.
These resources might be accessible for free, or access might be sold according to
subscription-based or pay-per-usage pricing models. The public cloud provider owns and
administers the data centers where customers’ workloads run.
Service providers assume responsibility for all hardware and infrastructure maintenance
and provide high-bandwidth network connectivity to ensure rapid access to
applications and data. The cloud provider also manages the
underlying virtualization software. In its simplest form, the public cloud model is the
computing version of the “utility” model we all use when consuming electricity or water
in our homes. Public cloud architectures are multi-tenant environments—users share a
pool of virtual resources that are automatically provisioned for and allocated to
individual tenants through a self-service interface. This means that multiple tenants’
workloads might be running on CPU instances on the same shared physical server at the
same time. Each cloud tenant’s data is logically isolated from that of other tenants.
RESOURCE ALLOCATION
Tenants outside the provider’s firewall share cloud services and virtual resources that
come from the provider’s set of infrastructure, platforms, and software.
USE AGREEMENTS
MANAGEMENT
At a minimum, the provider maintains the hardware underneath the cloud, supports the
network, and manages the virtualization software.
Public clouds are set up the same way as private clouds. Both use a handful of
technologies to virtualize resources into shared pools, add a layer of administrative
control over everything, and create automated self-service functions. Together, those
technologies create a cloud: private if it’s sourced from systems dedicated to and
managed by the people using them, public if it is provided as a shared resource to
multiple users.
And hybrid cloud is a combination of 2 or more interconnected cloud environments—
public or private.
All that technology not only has to integrate for the cloud to just work, it also has
to integrate with any customer’s existing IT—which is what makes public clouds work
well. That connectivity relies on perhaps the most overlooked technology of all: the
operating system. The virtualization, management, and automation software that
creates clouds all sit on top of the operating system. And the consistency, reliability, and
flexibility of the operating system directly determines how strong the connections are
between the physical resources, virtual data pools, management software, automation
scripts, and customers.
Q 24: Why do we Use Private Clouds? Discuss Various Benefits and Types of Private Cloud.
Specific security or compliance needs: For organizations that are subject to regulatory
compliance requirements, a private cloud may be necessary to achieve compliance.
Similarly, an organization may choose to use a private cloud to store sensitive data in
order to retain greater control over security.
Predictable resource needs: One of the foremost benefits of public clouds is elasticity,
or the ability to scale resources up and down quickly when needs fluctuate. However,
some organizations don’t need this elasticity because their usage is relatively consistent.
For these organizations, a private cloud can be a better option.
There are different types of private clouds that deliver different services. For example,
when a company uses a private cloud for infrastructure as a service (IaaS), the cloud
might host storage, networking, or compute services. Private clouds can also support
platform as a service (PaaS) applications, which work just like regular software
applications that are hosted on a local computer.
There are also a variety of types of private cloud hosting options. These include
software-only platforms, combined software and hardware packages, and hosted or
managed private clouds. Hosted or managed means the private cloud server may live on
the customer’s premises or in a vendor’s data center, but is hosted and sometimes
managed by a vendor.
Virtual private cloud: This type is different from conventional private clouds because
the resources in a virtual private cloud exist in a walled-off area on a public cloud
instead of being hosted on-premises.
Hosted private cloud: This type of private cloud is hosted by a separate cloud service
provider on-premises or in a data center, but the server is not shared with other
organizations. The cloud service provider is responsible for configuring the network and
maintaining the hardware for the private cloud, as well as keeping the software
updated. This option provides the best of both worlds for organizations that require the
security and availability of a private cloud but prefer not to invest in an in-house data
center.
Managed private cloud: With this type of private cloud, a cloud service provider not
only hosts a private cloud for an organization, but it also manages and monitors the day-
to-day operations of the private cloud.
The cloud service provider may also deploy and update additional cloud-based services
such as storage and identity management or security audits. A managed private cloud
server can save a company considerable time and IT resources.
Total system control, resulting in stronger security: A private cloud offers total system
control and increased security through dedicated hardware and physical infrastructure
that’s used exclusively by the company that owns it.
Greater performance: Because the hardware is dedicated and not used by any other
organization, workload performance for cloud services is never affected by another
company running resource-intensive workloads on a shared server or by a public cloud
service outage.
Scalability: If an organization outgrows its existing hardware resources, it can easily add
more. If the growth is temporary or seasonal, an organization can move to a hybrid
cloud solution, incurring minimal usage fees by using the public cloud only when
necessary.
Predictable costs: The costs of using a public cloud can be very unpredictable;
with a private cloud, costs are the same each month, regardless of the
workloads an organization is running.
Better customization: Because companies have complete control over a private cloud, it
is much easier to reallocate resources and tailor the cloud to perform specifically
according to requirements that the company defines. IT managers have access to every
level of settings in their private cloud environment—they are not limited by policies set
by public cloud service providers.
Q 25: Discuss the Architecture of Hybrid Cloud. What are the Challenges Faced in the
Implementation of Hybrid Cloud?
Hybrid cloud is a cloud computing environment that uses a mix of on-premises, private
cloud and third-party, public cloud services with orchestration between the two
platforms. By allowing workloads to move between private and public clouds as
computing needs and costs change, hybrid cloud gives businesses greater flexibility and
more data deployment options.
A hybrid cloud also requires adequate wide area network (WAN) connectivity between those two environments.
Typically, an enterprise will choose a public cloud to access compute instances, storage
resources or other services, such as big data analytics clusters.
An enterprise has no direct control over the architecture of a public cloud, so, for a
hybrid cloud deployment, it must architect its private cloud to achieve compatibility
with the desired public cloud or clouds. This involves the implementation of suitable
hardware within the data center, including servers, storage, a local area network (LAN)
and load balancers.
The key to creating a successful hybrid cloud is to select hypervisor and cloud software
layers that are compatible with the desired public cloud, ensuring proper
interoperability with that public cloud's application programming interfaces (APIs) and
services. The implementation of compatible software and services also enables instances
to migrate seamlessly between private and public clouds. A developer can also create
advanced applications using a mix of services and resources across the public and
private platforms.
Despite its benefits, hybrid cloud computing can present technical, business and
management challenges. Private cloud workloads must access and interact with public
cloud providers, so, as mentioned above, hybrid cloud requires API compatibility and
solid network connectivity.
For the public cloud piece of a hybrid cloud, there are potential connectivity issues,
service-level agreements (SLAs) breaches and other possible service disruptions. To
mitigate these risks, organizations can architect hybrid cloud workloads that
interoperate with multiple public cloud providers. However, this can complicate
workload design and testing. In some cases, an enterprise needs to redesign workloads
slated for hybrid cloud to address specific public cloud providers' APIs.
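Designing workloads to interoperate with multiple public cloud providers usually means wrapping each provider's specific API behind a common interface. A minimal sketch, with invented class and method names (real multi-cloud tooling is far more involved):

```python
# A common interface hides provider-specific APIs from the workload.
class CloudProvider:
    def start_instance(self, image):
        raise NotImplementedError

class ProviderA(CloudProvider):
    def start_instance(self, image):
        # In reality this would call provider A's SDK.
        return f"providerA:{image}"

class ProviderB(CloudProvider):
    def start_instance(self, image):
        # In reality this would call provider B's SDK.
        return f"providerB:{image}"

def deploy(providers, image):
    """Deploy the same workload image across several providers."""
    return [p.start_instance(image) for p in providers]

print(deploy([ProviderA(), ProviderB()], "web"))
```

The trade-off noted above applies: the abstraction layer itself must be designed and tested against every provider's API, which adds work even as it reduces lock-in.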
Another challenge with hybrid cloud computing is the construction and maintenance of
the private cloud itself, which requires substantial expertise from local IT staff and cloud
architects.
The implementation of additional software, such as databases, helpdesk systems and
other tools can further complicate a private cloud. What's more, the enterprise is fully
responsible for the technical support of a private cloud, and must accommodate any
changes to public cloud APIs and service changes over time.
Q 26: Explain the Process of Cloud Migration. Outline its Various Benefits and Challenges.
Cloud migration is the process of moving digital business operations into the cloud.
Cloud migration is sort of like a physical move, except it involves moving data,
applications, and IT processes from some data centers to other data centers, instead of
packing up and moving physical goods.
Much like a move from a smaller office to a larger one, cloud migration requires quite a
lot of preparation and advance work, but usually it ends up being worth the effort,
resulting in cost savings and greater flexibility. Most often, "cloud migration" describes
the move from on-premises or legacy infrastructure to the cloud. However, the term
can also apply to a migration from one cloud to another cloud.
Scalability: Cloud computing can scale up to support larger workloads and greater
numbers of users far more easily than on-premises infrastructure, which requires
companies to purchase and set up additional physical servers, networking equipment, or
software licenses.
Cost: Companies that move to the cloud often vastly reduce the amount they spend on
IT operations, since the cloud providers handle maintenance and upgrades. Instead of
keeping things up and running, companies can focus more resources on their biggest
business needs – developing new products or improving existing ones.
Performance: For some businesses, moving to the cloud can enable them to
improve performance and the overall user experience for their customers. If their
application or website is hosted in cloud data centers instead of in various on-premises
servers, then data will not have to travel as far to reach the users, reducing latency.
Flexibility: Users, whether they're employees or customers, can access the cloud
services and data they need from anywhere. This makes it easier for a business to
expand into new territories, offer their services to international audiences, and let their
employees work flexibly.
Main challenges of migrating to the cloud
Migrating large databases: Often, databases will need to move to a different platform
altogether in order to function in the cloud. Moving a database is difficult, especially if
there are large amounts of data involved. Some cloud providers actually offer physical
data transfer methods, such as loading data onto a hardware appliance and then
shipping the appliance to the cloud provider, for massive databases that would take too
long to transfer via the Internet. Data can also be transferred over the Internet.
Regardless of the method, data migration often takes significant time.
Data integrity: After data is transferred, the next step is making sure data is intact and
secure, and is not leaked during the process.
Continued operation: A business needs to ensure that its current systems remain
operational and available throughout the migration. They will need to have some
overlap between on-premises and cloud to ensure continuous service; for instance, it's
necessary to make a copy of all data in the cloud before shutting down an existing
database. Businesses typically need to move a little bit at a time instead of all at once.
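The incremental, verify-as-you-go approach described above can be sketched as batched copying with an integrity check per batch (a simplification: real migrations also handle retries, and writes that arrive at the source during the copy):

```python
import hashlib

def checksum(records):
    """Fingerprint a batch of records for an integrity comparison."""
    return hashlib.sha256("".join(map(str, records)).encode()).hexdigest()

def migrate(source, destination, batch_size=2):
    """Copy `source` into `destination` batch by batch, verifying each batch,
    so the source can stay online and only small pieces move at a time."""
    for i in range(0, len(source), batch_size):
        batch = source[i:i + batch_size]
        destination.extend(batch)
        # Data-integrity check: what landed must match what was sent.
        assert checksum(destination[-len(batch):]) == checksum(batch)
    return destination

target = migrate([1, 2, 3, 4, 5], [])
```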
Cloud Security
Cloud computing security refers to the technical discipline and processes that IT
organizations use to secure their cloud-based infrastructure. Through a cloud service
provider, IT organizations can outsource management of every aspect of the technology
stack, including networking, servers, storage, virtualization, operating systems,
middleware, runtime, data and applications. Cloud computing security includes the
measures that IT organizations take to secure all of these components against cyber
attacks, data theft and other threats.
Cloud security is the set of strategies and practices for protecting data and applications
that are hosted in the cloud. Like cyber security, cloud security is a very broad area, and
it is never possible to prevent every variety of attack. However, a well-designed cloud
security strategy vastly reduces the risk of cyber attacks.
Even with these risks, cloud computing is often more secure than on-premises
computing. Most cloud providers have more resources for keeping data secure than
individual businesses do, which lets cloud providers keep infrastructure up to date and
patch vulnerabilities as soon as possible. A single business, on the other hand, may not
have enough resources to perform these tasks consistently.
By default, most cloud providers follow best security practices and take active steps to
protect the integrity of their servers. However, organizations need to make their own
considerations when protecting data, applications, and workloads running on the cloud.
Security threats have become more advanced as the digital landscape continues to
evolve. These threats often target cloud environments because many organizations lack
visibility into how their data is accessed and moved. Without taking active steps to
improve their cloud security, organizations can face significant governance and
compliance risks when managing client information, regardless of where it is stored.
Cloud security should be an important topic of discussion regardless of the size of your
enterprise. Cloud infrastructure supports nearly all aspects of modern computing in all
industries and across multiple verticals.
Today, cloud computing is accessible to small and large enterprises alike. However,
while it affords businesses near-limitless opportunities for scale and sustainability, it
also comes with risks. Establishing successful cloud security processes starts with
understanding the common threats faced by businesses operating in the cloud. These
threats originate from both inside and outside sources and vary in severity and
complexity.
Advanced persistent threats (APTs): APTs are a form of cyber attack where an intruder
or group of intruders successfully infiltrate a system and remain undetected for an
extended period. These stealthy attacks operate silently, leaving networks and systems
intact so that the intruder can spy on business activity and steal sensitive data while
avoiding the activation of defensive countermeasures.
Account hijacking: Stolen and compromised account login credentials are a common
threat to cloud computing. Hackers use sophisticated tools and phishing schemes to
hijack cloud accounts, impersonate authorized users, and gain access to sensitive
business data.
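One common defense against account hijacking is flagging logins that deviate from an account's usual pattern. Below is a minimal, illustrative Python sketch of that idea; the function name (`is_suspicious_login`) and the history format are hypothetical, not part of any real cloud provider's API.

```python
# Illustrative anomaly check: flag a login attempt whose country or device
# has never been seen before for this account. A real system would also
# weigh time of day, IP reputation, impossible-travel distance, etc.

def is_suspicious_login(history, attempt):
    """Return a list of reasons the attempt looks unusual (empty = normal)."""
    known_countries = {h["country"] for h in history}
    known_devices = {h["device"] for h in history}
    reasons = []
    if attempt["country"] not in known_countries:
        reasons.append("new country")
    if attempt["device"] not in known_devices:
        reasons.append("new device")
    return reasons

history = [
    {"country": "US", "device": "laptop-01"},
    {"country": "US", "device": "phone-07"},
]
# A familiar login produces no reasons; an unfamiliar one produces both.
print(is_suspicious_login(history, {"country": "US", "device": "laptop-01"}))
print(is_suspicious_login(history, {"country": "RU", "device": "unknown"}))
```

In practice a flagged login would trigger a step-up check such as multi-factor authentication rather than an outright block.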
Lack of visibility: It's easy to lose track of how your data is being accessed and by
whom, since many cloud services are accessed outside of corporate networks and
through third parties.
Multitenancy: Public cloud environments house multiple client infrastructures under the
same umbrella, so your hosted services can be compromised as collateral damage when
malicious attackers target other tenants.
While enterprises may be able to successfully manage and restrict access points across
on-premises systems, administering these same levels of restrictions can be challenging
in cloud environments. This can be dangerous for organizations that don't deploy
bring-your-own-device (BYOD) policies and that allow unfiltered access to cloud
services from any device or geolocation.
Compliance
The NIST (National Institute of Standards and Technology) designed a policy framework
that many companies follow when establishing their own cloud security infrastructures.
This framework has five critical pillars:
Identify: Develop an understanding of the systems, assets, data, and capabilities that
need to be protected.
Protect: Implement safeguards to ensure the delivery of critical services.
Detect: Implement activities to recognize the occurrence of a cybersecurity event.
Respond: Develop and implement actions to take once a cybersecurity incident is
detected.
Recover: Develop and activate necessary procedures to restore system capabilities and
network services in the event of a disruption.
Architecture
In connection with a cloud security framework, an architecture gives you a model with
both written and visual references on how to properly configure your secure cloud
development, deployment, and operations.
When migrating workloads to the cloud, a security architecture will clearly define how
an organization should do the following:
Protect applications and data, with appropriate security controls across network, data,
and application access.
Gain visibility and insights into security, compliance, and threat posture.
Inject security-based principles into the development and operation of cloud-based
services.
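Controls like "protect applications and data" are often enforced by automated posture checks that scan resource configurations against policy. The Python sketch below illustrates the idea with a hypothetical configuration format and two simple rules (no public buckets, encryption at rest required); it is not a real provider's schema or API.

```python
# Illustrative cloud security posture check: scan storage-bucket
# configurations (a made-up dict format) and report policy violations.

def audit_buckets(buckets):
    """Return (bucket_name, issue) pairs for every policy violation found."""
    findings = []
    for b in buckets:
        if b.get("public_access", False):
            findings.append((b["name"], "public access enabled"))
        if not b.get("encrypted", False):
            findings.append((b["name"], "encryption at rest disabled"))
    return findings

buckets = [
    {"name": "app-logs", "public_access": False, "encrypted": True},
    {"name": "backups", "public_access": True, "encrypted": False},
]
for name, issue in audit_buckets(buckets):
    print(f"{name}: {issue}")
```

Real cloud security posture management tools apply the same pattern at scale, continuously evaluating hundreds of such rules across an organization's accounts.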