
ISBM College of Engineering, Pune

Department of Computer Engineering


Cloud Computing:310254(C)
Unit IV Cloud Platforms and Cloud Applications 07 Hours
Amazon Web Services (AWS): Amazon Web Services and Components, Amazon Simple DB,
Elastic Cloud Computing (EC2), Amazon Storage System, Amazon Database services
(Dynamo DB). Microsoft Cloud Services: Azure core concepts, SQL Azure, Windows Azure
Platform Appliance.
Cloud Computing Applications: Healthcare: ECG Analysis in the Cloud, Biology: Protein
Structure Prediction, Geosciences: Satellite Image Processing, Business and Consumer
Applications: CRM and ERP, Social Networking, Google Cloud Application: Google App
Engine. Overview of OpenStack architecture.
Unit IV can be broadly divided into two parts:
Part A: Amazon Web Services (AWS)
Part B: Cloud Computing Applications

Part A: Amazon Web Services (AWS)

Amazon Web Services (AWS), launched in 2006, represents a pivotal moment in the history
of cloud computing, marking the entry of Amazon into the cloud services market. As a pioneer
in cloud infrastructure services, AWS offered scalable and cost-effective cloud computing
solutions to businesses and individuals. Among its suite of services, Amazon EC2 (Elastic
Compute Cloud) and Amazon S3 (Simple Storage Service) emerged as the most popular,
fundamentally changing how companies approach computing and data storage. EC2 provided
resizable compute capacity in the cloud, allowing users to run applications on Amazon's
computing environment. S3 offered secure, durable, and highly-scalable object storage, making
it easier for developers to store and retrieve any amount of data from anywhere on the web.
These services not only demonstrated the practicality and efficiency of cloud computing but
also set the stage for the rapid expansion of the cloud industry, with AWS leading the way as a
cornerstone of Amazon's business and a major driver of innovation in the tech world.

The Amazon Elastic Compute Cloud (EC2) lets users pay only for what they use. Server capacity is increased on demand and released when usage ceases. The instances are virtual machines built on top of the Xen hypervisor and managed by Amazon's internal resource-management infrastructure. Capacity is rated in elastic compute units (ECUs): pre-packaged units of compute that can be 'ordered' as needed, so one can select exactly what one requires and pay only for what was consumed. Pricing depends on the type of instance, the amount of data moved and the time used.

4.1 AMAZON WEB SERVICES AND COMPONENTS


Web services allow applications developed using different programming languages, platforms,
and technologies to communicate with each other seamlessly. Web services are based on
standardized protocols such as SOAP (Simple Object Access Protocol) and REST
(Representational State Transfer), as well as data formats like XML (eXtensible Markup
Language) and JSON (JavaScript Object Notation). These standards provide a common
language for communication, making it easier for developers to understand and work with web
services.
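A minimal sketch of this interoperability: one side serializes a request to JSON and the other, potentially written in a different language, parses the same bytes. The service and field names below are invented for illustration.

```python
import json

# A client written in any language serializes its request to JSON...
request_payload = json.dumps({"service": "GetQuote", "symbol": "AMZN"})

# ...and a server written in a different language parses the same bytes.
# JSON (like XML) is the shared, language-neutral data format.
parsed = json.loads(request_payload)
response_payload = json.dumps({"symbol": parsed["symbol"], "price": 178.25})

print(json.loads(response_payload)["symbol"])  # AMZN
```

Because both sides agree only on the wire format, neither needs to know what platform or language the other uses.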

PROF. KAILASH N. TRIAPATHI, DEPARTMENT OF COMPUTER ENGINEERING 1



Amazon Web Services


Amazon Web Services gives programmatic access to Amazon's ready-made computing infrastructure.
The robust computing platform that Amazon built and refined for its own operations is now available to anyone with Internet access. Amazon provides numerous web services as building blocks that meet the core needs of most systems: storage, computing, messaging and datasets.

Amazon Web Services can help us architect scalable systems by providing:

• Reliability: The services run in Amazon's proven, highly available data centres, the same ones that run Amazon's own business.
• Security: Basic security and authentication mechanisms are available out of the box, and customers can strengthen them as needed by layering application-specific security on top of the services.
• Cost benefit: No fixed charges or upfront support costs; users pay only for what they consume.
• Ease of development: Simple APIs, with libraries available in most widely used programming languages, allow us to harness the full power of this virtual infrastructure.
• Elasticity: Computing resources scale up and down based on demand.
• Cohesiveness: The four core building blocks (storage, computing, messaging and datasets) work well together and provide a complete solution across a wide variety of application domains.
• Community: A vibrant and dynamic user community is driving the wide adoption of these web services and producing novel applications built on this infrastructure.

Two levels of support are available for users of Amazon Web Services:

1. Free forum-based support
2. Paid support packages

Amazon S3 (Storage):
• Web service interface for data storage and retrieval.
• Securely stores data of any kind, accessible from anywhere over the internet.
• Offers unlimited storage with object sizes ranging from 1 byte to 5 GB.
• Guarantees high availability with a 99.9% uptime commitment.
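The bullet points above can be pictured with a toy in-memory model of S3's bucket/key/object structure. This is not the real API, which is reached over HTTP; all names here are invented for illustration.

```python
# Minimal in-memory sketch of S3's bucket/key/object model (illustrative only;
# the real service is accessed over a web service interface, e.g. REST).
class FakeS3:
    def __init__(self):
        self.buckets = {}

    def put_object(self, bucket, key, body):
        # Store an object of any kind under a bucket and key.
        self.buckets.setdefault(bucket, {})[key] = body

    def get_object(self, bucket, key):
        # Retrieve the object from anywhere the store is reachable.
        return self.buckets[bucket][key]

s3 = FakeS3()
s3.put_object("my-bucket", "notes/unit4.txt", b"cloud platforms")
print(s3.get_object("my-bucket", "notes/unit4.txt"))  # b'cloud platforms'
```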

Amazon EC2 (Elastic Compute Cloud):


• Web service providing virtual machines with rapid scalability.
• Instances are based on Linux and can run any software.
• Utilizes the open-source Xen hypervisor and supports Amazon Machine Images (AMIs).
• Offers various server types for scaling computing resources up and down.

Amazon SQS (Simple Queue Service):


• Reliable messaging infrastructure for sending and receiving messages.
• Supports REST-based HTTP requests and stores messages redundantly across servers.


• Each message can contain up to 8 KB of text data.


• Integrates well with other Amazon Web Services for building decoupled frameworks
and scalable infrastructures.
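The decoupling that SQS enables can be sketched with Python's standard-library queue: the producer and consumer never call each other directly. This is an in-process stand-in for the real distributed service; the message body is invented.

```python
from queue import Queue

# Illustrative sketch of SQS-style decoupling: a producer enqueues messages,
# a consumer polls them later; neither component calls the other directly.
work_queue = Queue()

def producer(q):
    q.put("resize-image:photo-001.jpg")  # message body (text, <= 8 KB in early SQS)

def consumer(q):
    return q.get()                       # receive and process one message

producer(work_queue)
print(consumer(work_queue))
```

In the real service the queue lives in AWS and messages are stored redundantly across servers, so producer and consumer can run on entirely different machines and scale independently.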

4.2 Amazon SimpleDB:


Amazon SimpleDB (SDB) is a web service for storing, processing and querying structured datasets. It is not a relational database in the traditional sense; rather, it is a highly available, schema-less, loosely structured data store in the cloud. SimpleDB was launched in 2007 as part of AWS's suite of cloud computing services and was designed to provide a scalable, highly available and easy-to-use database solution for developers building cloud-based applications.

Key features of Amazon SimpleDB included:

1. Schemaless Data Model: SimpleDB utilized a schemaless data model, allowing developers
to store and query structured data without the need to define a rigid schema beforehand. This
flexibility made it well-suited for rapidly evolving applications and use cases where the data
structure might change frequently.

2. Automated Data Replication and High Availability: SimpleDB automatically replicated data
across multiple availability zones within a region to ensure high availability and durability.
This built-in redundancy helped protect against data loss and ensured that applications
remained accessible even in the event of hardware failures or network issues.

3. Simple Query Language: SimpleDB offered a simple, SQL-like query language for retrieving data. Developers issued SELECT-style queries to fetch items, and added, modified or deleted data through simple put- and delete-attribute operations.
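For illustration, a SimpleDB Select expression read broadly like SQL. The domain and attribute names below are invented; note that SimpleDB stored all values as strings, so comparisons were lexicographic.

```sql
SELECT * FROM products WHERE category = 'book' LIMIT 10
```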

4. Scalability: SimpleDB was designed to scale horizontally, allowing it to handle growing workloads and datasets with ease. Amazon managed the underlying infrastructure, automatically scaling resources to accommodate changes in demand without requiring manual intervention from developers.

5. Integration with Other AWS Services: SimpleDB seamlessly integrated with other AWS
services, such as Amazon EC2 (Elastic Compute Cloud) and Amazon S3 (Simple Storage
Service), making it easy to build scalable and reliable cloud-based applications that leverage
multiple AWS resources.

Despite its early popularity and adoption by developers, Amazon SimpleDB was eventually
deprecated by AWS in 2019, and new customers were no longer accepted. Existing customers
were given time to migrate their applications to alternative database solutions offered by AWS,
such as Amazon DynamoDB, which provides similar scalability and features but with
additional capabilities and improved performance.

4.3 Elastic Cloud Computing (EC2):
Amazon Elastic Compute Cloud is a core component of Amazon.com's cloud computing platform, Amazon Web Services. EC2 permits scalable deployment of applications by supplying a web service. Clients can use an Amazon Machine Image to create a virtual machine containing any software desired. A client can create, launch and terminate server instances as required, paying only for the time the servers are active, hence the term 'elastic'. EC2 gives users control over the geographical location of instances, which permits latency optimization and high levels of redundancy.

Amazon’s features are:

These points highlight key features and developments for Amazon EC2 (Elastic Compute Cloud) within AWS (Amazon Web Services):

Service level agreement for EC2: A commitment to providing a high level of service quality and reliability for EC2 users.
Microsoft Windows in beta on EC2: Availability of the Microsoft Windows operating system on EC2 instances, initially in a beta testing phase.
Microsoft SQL Server in beta on EC2: Support for the Microsoft SQL Server database on EC2 instances, also in beta.
Plans for an AWS management console: Plans to develop a management console for easier administration of AWS resources.
Plans for load balancing, auto-scaling and cloud monitoring services: Initiatives to improve AWS with load balancing (for distributing traffic), auto-scaling (for adjusting resources based on demand) and cloud monitoring (for tracking usage and performance).

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable computing capacity used to build and host software systems.

The major components of EC2 are briefly described below.

Amazon Machine Images and Instances


An Amazon Machine Image (AMI) is a template that comprises a software configuration (e.g., operating system, application server and applications). From an AMI, a client can launch instances, which are running exact copies of the AMI. A client can also launch multiple instances of an AMI, as shown in the figure. Instances keep running until the client stops or terminates them, or until they fail. If an instance fails, a new one can be launched from the AMI. One can use a single AMI or multiple AMIs depending on need, and from a single AMI, different types of instances can be launched. An instance type is essentially a hardware template.
Amazon publishes numerous AMIs containing common software configurations for public use. In addition, members of the AWS developer community have released their own custom AMIs.
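The AMI-to-instance relationship described above can be sketched as a template from which identical running copies are launched. This is a toy model, not the EC2 API; the configuration values are invented.

```python
import copy

# Toy model: an AMI is a template; each launch stamps out an identical,
# independently running instance of a chosen instance type.
ami = {"os": "Linux", "app_server": "Tomcat", "application": "web-shop"}

def launch_instances(image, count, instance_type="m1.small"):
    # Each instance is an exact copy of the image, plus runtime state.
    return [dict(copy.deepcopy(image), state="running", type=instance_type)
            for _ in range(count)]

instances = launch_instances(ami, 3)
print(len(instances), instances[0]["state"])  # 3 running
```

If an instance fails, launching a replacement is just another stamp from the same template.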


Figure Amazon Machine Images and Instances

Storage
When utilizing EC2, data has to be stored somewhere. The two most commonly used storage options are:
1. Amazon Simple Storage Service (Amazon S3)
2. Amazon Elastic Block Store (Amazon EBS) volumes

Amazon S3
Amazon S3 is storage for the Internet. It presents a straightforward web service interface that enables us to store and retrieve any amount of data from any location on the web. Amazon EBS, by contrast, provides instances with persistent, block-level storage. Amazon EBS volumes are essentially hard disks that can be attached to a running instance. Volumes are particularly suited to applications that need a database, a file system or access to raw block-level storage.
One can attach multiple volumes to an instance. To preserve an exact copy of a volume at a point in time, a client can create a snapshot of it. The user can also detach a volume from one instance and attach it to a different one.
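EBS volume semantics, attaching, snapshotting, detaching and re-attaching, can be sketched as follows. This is a toy in-memory model, not the EBS API; identifiers are invented.

```python
# Toy sketch of EBS semantics: a volume holds block data, attaches to one
# instance at a time, and a snapshot captures its contents at a point in time.
class FakeVolume:
    def __init__(self, data=b""):
        self.data = data
        self.attached_to = None

    def attach(self, instance_id):
        self.attached_to = instance_id

    def detach(self):
        self.attached_to = None

    def snapshot(self):
        return bytes(self.data)  # point-in-time copy of the volume's contents

vol = FakeVolume(b"database files")
vol.attach("i-1234")
snap = vol.snapshot()
vol.detach()
vol.attach("i-5678")  # re-attach the same volume to a different instance
print(vol.attached_to)
```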

Databases
The application running on EC2 might require a database.

The following are two common ways of providing a database for the application:
1. Use Amazon Relational Database Service (Amazon RDS), which makes it easy to obtain a managed relational database in the cloud.
2. Launch an instance of a database AMI and use that EC2 instance as the database.
Amazon RDS has the benefit of handling database administration tasks, for example, patching the software and taking and storing backups.

Amazon CloudWatch
Amazon CloudWatch is a web service that provides real-time monitoring to Amazon's EC2 clients of their resource utilization, such as CPU, disk and network. CloudWatch does not supply memory, disk-space or load-average metrics from inside the instance. The data is aggregated and provided through the AWS management console, and it can also be accessed through command-line tools and web APIs. The metrics collected by Amazon CloudWatch enable auto-scaling features that dynamically add or remove EC2 instances. Clients are charged according to the number of instances they monitor.
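The kind of aggregation CloudWatch performs, and how the resulting statistics can drive auto-scaling, can be sketched with a few standard-library lines. The sample values and the 50% threshold are invented for illustration.

```python
from statistics import mean

# Sketch of CloudWatch-style aggregation: raw CPU samples are rolled up into
# the statistics (average, minimum, maximum) that monitoring exposes.
cpu_samples = [42.0, 55.5, 61.0, 58.5, 47.0]  # per-minute CPUUtilization (%)

metrics = {
    "Average": mean(cpu_samples),
    "Minimum": min(cpu_samples),
    "Maximum": max(cpu_samples),
}

# A toy auto-scaling rule: add an instance when average CPU exceeds 50%.
scale_out = metrics["Average"] > 50.0
print(metrics["Average"], scale_out)
```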

4.4 Amazon Storage System


Amazon Relational Database Service (Amazon RDS) is a widely known web service. It makes relational databases much simpler to set up, operate and scale in the cloud. It provides cost-efficient, resizable capacity for several industry-standard relational databases and handles common database-administration tasks.
Amazon RDS gives users access to the capabilities of a familiar MySQL or Oracle database server. Amazon RDS automatically backs up the database and maintains the database software that powers the DB instance. Amazon RDS is flexible.

Amazon RDS has these advantages:

● Accelerated deployment: Amazon RDS reduces friction when going from development to deployment. One can use straightforward API calls to access the capabilities of a production-ready relational database without being concerned about infrastructure provisioning or installing and maintaining database software.
● Managed: Amazon RDS handles generic database-administration tasks.
● Compatible: One can easily get native access to a MySQL or Oracle database server with Amazon RDS.
● Scalable: With a straightforward API call, clients can scale the compute and storage resources available to the database to meet business needs and application load.
● Reliable: Amazon RDS runs on the same highly dependable infrastructure used by other AWS services. Its automated backup service backs up the database and lets us restore it to any point in time.
● Secure: Amazon RDS provides web service interfaces to configure firewall settings that control network access to the database instances, and it permits SSL (Secure Sockets Layer) connections to the DB instances.
● Inexpensive: In terms of deployment cost.

4.5 Amazon Database Services (DynamoDB).


Amazon DynamoDB is built on the principles of Dynamo, a progenitor of NoSQL, and brings the power of the cloud to the NoSQL database world. It offers clients high availability, reliability and incremental scalability, with no limits on dataset size or request throughput for a given table. It is also very fast: it runs on the latest solid-state drive (SSD) technology and incorporates many other optimizations to deliver low latency at any scale.

Amazon DynamoDB is a NoSQL database service that boasts these benefits:


● Managed

● Scalable
● Fast
● Durable
● Highly available
● Flexible
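The key-value model behind these properties can be sketched in a few lines: items are addressed by a partition key and need no fixed schema. This is a toy in-memory model, not the DynamoDB API; table and attribute names are invented.

```python
# Toy sketch of DynamoDB's key-value model: items in a table are addressed by
# a partition key, with no fixed schema for the remaining attributes.
class FakeDynamoTable:
    def __init__(self, partition_key):
        self.partition_key = partition_key
        self.items = {}

    def put_item(self, item):
        self.items[item[self.partition_key]] = item

    def get_item(self, key_value):
        return self.items.get(key_value)

users = FakeDynamoTable(partition_key="user_id")
users.put_item({"user_id": "u1", "name": "Asha", "logins": 12})
users.put_item({"user_id": "u2", "name": "Ravi"})  # different attributes: fine
print(users.get_item("u1")["name"])  # Asha
```

Because lookups go through the partition key, the real service can spread items across many machines, which is what makes the incremental scalability above possible.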

4.6 Microsoft Cloud Services:


Microsoft offers cloud hosting services through Azure and encourages partners to transition to a managed service provider (MSP) model, providing a comprehensive range of IT outsourcing services and increasing recurring revenues. Despite the widespread internal use of applications like SharePoint and Exchange, the introduction of new, cloud-based services presents an opportunity to enhance existing setups and target niche markets. Cloud computing, enabled by Internet-accessible data centers, offers significant benefits, including a scalable platform for web application development and deployment. Microsoft's cloud platform, Windows Azure, serves as both a platform and infrastructure for building web applications, forming a critical component of Microsoft's cloud strategy alongside its software services.
The platform comprises diverse on-demand services hosted in Microsoft data centers and delivered through three product brands:

1. Windows Azure (an operating system supplying scalable compute and storage services).
2. SQL Azure (a cloud-based, scale-out version of SQL Server).
3. Windows Azure AppFabric (a collection of services supporting applications both in the cloud and on premises).

Windows Azure: It provides a Microsoft Windows Server–based computing environment for applications and persistent storage for both structured and unstructured data, as well as asynchronous messaging.

Windows Azure AppFabric: It provides a variety of services that help customers connect users and on-premises applications to cloud-hosted applications, manage authentication, and apply data management and related features such as caching.

SQL Azure: It is essentially SQL Server supplied as a service in the cloud. The platform also encompasses a variety of management services that permit users to control all these resources, either through a web-based portal or programmatically. In most situations there is a REST-based API that can be used to define how the services will work. Most administrative jobs that can be performed through the web portal can also be accomplished using the API.

Windows Azure is Microsoft's cloud platform for running Windows applications and storing
data, operating on Microsoft data center machines. It allows clients to run applications and
manage data on Microsoft-owned, Internet-accessible machines, shifting from traditional
software provision to a service-based model. This platform supports various applications,
including enterprise services and consumer-targeted solutions.
Key uses include:


• Independent Software Vendors (ISVs) developing business-oriented Software as a Service


(SaaS) applications, leveraging Azure's infrastructure for scalability and reliability.
• Creation of consumer-facing SaaS applications by ISVs, taking advantage of Azure's
capacity for handling large-scale, scalable programs.
• Enterprises utilizing Windows Azure to build and run internal applications, benefiting from
its reliability and manageability without the need for extensive scalability.

Windows Azure is the application service in the cloud that allows Microsoft data centres to host and run applications. It presents a cloud operating system, called Windows Azure, that serves as a runtime for the applications and provides a set of services permitting development, administration and hosting of applications off-premises. All Azure services and applications built with them run on top of Windows Azure.

Windows Azure has three core components:

1. Compute, which provides a computation environment with Web Role, Worker Role and VM Role.
2. Storage, which focuses on supplying scalable storage (Blobs, non-relational Tables and Queues) for large-scale needs.
3. Fabric, which uses high-speed connections and switches to interconnect nodes comprising several servers. Fabric resources, applications and running services are managed by the Windows Azure Fabric Controller service.

The Compute Service


The Windows Azure Compute Service is designed to run a wide variety of applications,
especially those requiring support for a large number of simultaneous users through scalable
solutions. It achieves this by allowing applications to scale out across multiple servers, running
identical copies of the same code. This scalability is facilitated by the provision of multiple
instances, each operating within its own virtual machine (VM) running 64-bit Windows Server
2008, managed by a modified Hyper-V hypervisor for cloud integration. Developers access
and manage these applications via the Windows Azure portal, using a Windows Live ID to
create hosting and storage accounts for their applications. The development process for
Windows Azure applications employs the same languages and tools used for any Windows
application, with the option to use ASP.NET and Visual Basic or WCF and C# for web
functions.
The Storage Service
The Windows Azure Storage service offers several options for data storage, catering to different
application needs:
Blobs: The most straightforward method for storing data in Azure, blobs are designed for
storing binary data. A storage account can contain multiple containers, each holding one or
more blobs. Blobs can be up to 50 gigabytes in size and can include metadata, such as the
capture date of a JPEG image or the artist of an MP3 file.

Tables: Aimed at storing structured, non-relational data as sets of entities with properties, and accessing it efficiently at scale.
Queues: Serve a different purpose by facilitating communication between web instances and
worker instances, ensuring messages are passed seamlessly within applications.
All data stored in Windows Azure, whether in blobs, tables, or queues, is replicated three times
to ensure fault tolerance and data integrity, providing strong consistency. This means an
application reading data immediately after it's written will retrieve the exact data written.
Access to Windows Azure storage is not limited to Azure applications but extends to applications running on-premises or on other host machines. Interaction with these storage options is achieved through REST services, using URIs for identification and HTTP operations for access. While .NET clients typically use the ADO.NET Data Services libraries for this purpose, applications can also interact directly using raw HTTP calls.
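The REST addressing scheme just described can be sketched as a small helper that builds the standard blob URI. The account, container and blob names below are invented for illustration.

```python
# Sketch of the REST addressing scheme: each blob is identified by a URI of
# the form https://<account>.blob.core.windows.net/<container>/<blob>, and is
# then read or written with ordinary HTTP operations (GET, PUT, DELETE).
def blob_uri(account, container, blob):
    return f"https://{account}.blob.core.windows.net/{container}/{blob}"

uri = blob_uri("myaccount", "photos", "holiday.jpg")
print(uri)
```

Because the address is just a URI, any client that can issue HTTP requests, on Azure or on-premises, can reach the same blob.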

4.6.1 Azure core concepts:


Azure core concepts encompass the foundational elements that underpin how services are
delivered and managed within the Azure cloud platform. Understanding these concepts is
crucial for effectively utilizing Azure.
1. Azure Regions and Geography: Azure operates in multiple geographic regions around the
world, ensuring data residency, compliance, and redundancy. Regions are clusters of data
centers interconnected through a dedicated regional low-latency network.
2. Resource Groups: A resource group is a container that holds related resources for an Azure
solution. It helps organize and manage resources by lifecycle, allowing for the collective
management of resources sharing the same lifecycle.
3. Azure Resource Manager (ARM): ARM is the deployment and management service for
Azure, providing a management layer that enables you to create, update, and delete resources
in your Azure account. It uses templates for deployment and allows for the management of
resources as a single group.
4. Virtual Machines (VMs): Azure VMs are on-demand, scalable computing resources offered
by Azure. They can be used to deploy a wide range of computing solutions, from development
and testing environments to high-performance computing applications.
5. Azure Storage: This is a highly durable, scalable, and available cloud storage solution for
large-scale data. It encompasses several data storage options including Blob Storage, Table
Storage, Queue Storage, and File Storage, each tailored for different types of data and usage
scenarios.
6. Azure SQL Database: A fully managed relational database service that supports SQL Server
database engines. It automates updates, patching, backups, and offers scalability without
downtime.

7. Azure Active Directory (AAD): Microsoft's multi-tenant, cloud-based directory, and
identity management service. AAD provides single sign-on (SSO) and multi-factor
authentication (MFA) to help protect users from cybersecurity attacks.
8. Azure Virtual Network (VNet): A logically isolated section of the Azure cloud where you
can create your own private network. VNets allow Azure resources to securely communicate
with each other, the internet, and on-premises networks.
9. Azure App Services: A fully managed platform for building, deploying, and scaling web
apps. It supports multiple languages and frameworks, including .NET, .NET Core, Java, Ruby,
Node.js, PHP, and Python.
10. Azure Kubernetes Service (AKS): A managed container orchestration service based on
Kubernetes. It simplifies deploying, managing, and operating containerized applications at
scale.
11. Azure Functions: A serverless compute service that lets you run event-triggered code
without having to explicitly provision or manage infrastructure, allowing for more focus on
application logic.
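The event-triggered model behind Azure Functions can be sketched as a tiny dispatcher that invokes registered handlers only when a matching event arrives. This is a toy sketch, not the Azure Functions programming model; all names are invented.

```python
# Toy sketch of the serverless model: handlers are registered against event
# types and run only when a matching event arrives; no server is managed.
handlers = {}

def on(event_type):
    def register(fn):
        handlers[event_type] = fn
        return fn
    return register

@on("http_request")
def greet(event):
    return f"Hello, {event['name']}!"

def dispatch(event_type, event):
    # The platform's job: route an incoming event to its handler.
    return handlers[event_type](event)

print(dispatch("http_request", {"name": "Azure"}))  # Hello, Azure!
```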
4.6.2 SQL Azure:
SQL Azure, now more commonly known as Azure SQL Database, is a cloud-based database
service that is part of Microsoft's Azure cloud platform. It offers a fully managed relational
database service that provides SQL Server database capabilities to applications running in the
cloud, on-premises, or in a hybrid environment. Azure SQL Database is built on the SQL Server
database engine architecture, adapted for the cloud environment to ensure scalability,
availability, and security.
Key Features of Azure SQL Database
Fully Managed: Azure handles most of the database administration tasks such as upgrading,
patching, backups, and monitoring without user involvement.
Scalability: Provides dynamic scalability with the ability to easily adjust resources to meet the demands of your applications, offering both a pay-as-you-go model and pre-purchased capacity options.
High Availability: Designed for 99.99% availability, it automatically handles patching,
backups, and replicas to ensure continuous operation and data protection.
Security: Includes built-in security features such as network security, threat protection, and data encryption both at rest and in transit.
Performance Tuning: Offers built-in intelligence that identifies potential performance bottlenecks and provides recommendations for optimizing database performance.
Global Distribution: Supports deploying your database in multiple regions worldwide to
achieve lower latency and a localized presence for your applications.
Service Tiers

Azure SQL Database offers several service tiers to cater to different needs, including:
Basic, Standard, and Premium: These tiers are designed to support a range of workloads from
light to heavy, with varying levels of performance and business continuity features.
General Purpose and Business Critical: Part of the vCore-based purchasing model, these tiers offer flexibility in CPU, memory, and storage configurations. The Business Critical tier provides higher throughput and lower latency with in-memory technology.
Hyperscale: Designed for very large databases, the Hyperscale tier offers autoscaling storage
up to 100 TB, rapid database backups, and fast database restores.
Use Cases
Web and Mobile Applications: Azure SQL Database provides a relational database service for
applications, supporting complex queries, search, and relational data structures.
Microservices Architecture: Ideal for microservices that require isolated databases with the
ability to scale independently.
Dev/Test Environments: Offers a cost-effective, scalable database solution for development and testing environments, with the ability to mimic production environments.
Business Continuity: With built-in high availability and data protection features, it ensures business continuity through automated backups, geo-replication, and disaster recovery capabilities.
Azure SQL Database continuously evolves, adding new features and capabilities to enhance
performance, security, and ease of use, making it a compelling choice for developers and
businesses looking to leverage the power of cloud computing for their database needs.
4.6.3 Windows Azure Platform Appliance:
The Windows Azure Platform Appliance, introduced by Microsoft in 2010, was an innovative approach to bringing cloud computing capabilities directly into the data centers of large enterprise customers and service providers. It aimed to provide a turnkey cloud solution by deploying a subset of Azure services on-premises, enabling organizations to leverage Azure's cloud platform within their own facilities. This concept was particularly appealing for organizations with strict data sovereignty or privacy requirements, or those looking to utilize the cloud's scalability and efficiency without moving data off-premises.
Key Features and Concepts
Self-contained Azure Platform: The appliance included hardware and software provided by Microsoft and its partners, designed to run a range of Azure services including compute, storage, and virtual network capabilities.
Scalability and Elasticity: It allowed organizations to scale out their services on demand, providing the same elasticity and resource management benefits as the public Azure cloud, but within their own data center.

Compliance and Data Sovereignty: By hosting the appliance within an organization's data
center, it addressed compliance and data sovereignty concerns, giving organizations control
over where their data resides.
Consistency with Azure Services: The appliance was designed to offer a consistent experience with Azure, making it easier for developers to build and migrate applications between the public Azure cloud and the on-premises appliance.
Target Audience and Use Cases
The Windows Azure Platform Appliance was targeted at large enterprises and service providers
that required cloud capabilities at scale but within their own data centers. Use cases included:
Large enterprises with significant data and privacy concerns.
Government organizations with strict regulatory compliance requirements.
Service providers looking to offer cloud services to their customers under their own brand.
Evolution and Current Status
Over time, the concept of the Windows Azure Platform Appliance has evolved. Microsoft has
shifted its strategy towards more flexible hybrid cloud offerings, such as Azure Stack and Azure
Arc, which provide comprehensive tools for extending Azure services and management to any
infrastructure.
Azure Stack: Azure Stack is an extension of Azure, bringing the agility and innovation of cloud
computing to on-premises environments. It allows organizations to run Azure services from
their own data center, offering a consistent hybrid cloud platform.
Azure Arc: Azure Arc extends Azure management and services to any infrastructure, including
other public clouds and on-premises data centers. It allows Azure services to be deployed
anywhere, providing a truly hybrid experience.
4.6.4 Cloud Computing Applications:
Cloud computing applications leverage the power of internet-based computing to deliver
services and run software that isn't limited by the physical capacity of individual computers or
servers. These applications span various categories, providing solutions for businesses,
developers, and consumers.
1. Software as a Service (SaaS):
Enterprise Applications: Tools like Microsoft Office 365, Google Workspace, Salesforce, and
Slack offer productivity, customer relationship management (CRM), and communication
services directly over the internet, removing the need for local installations and maintenance.
Educational Tools: Platforms such as Blackboard, Canvas, and Google Classroom provide
educational institutions and students with access to learning materials, assignments, and
collaboration tools.
2. Platform as a Service (PaaS):

Application Development: Azure App Services, Google App Engine, and Heroku offer
environments for developers to build, deploy, and manage applications without managing the
underlying infrastructure.
Database Management: Services like Amazon RDS and Azure SQL Database provide
managed database services, allowing developers to focus on application logic rather than
database maintenance.
3. Infrastructure as a Service (IaaS):
Virtual Machines: Amazon EC2 and Azure Virtual Machines allow businesses to rent virtual
servers on which they can run their own applications.
Storage Solutions: Amazon S3 and Google Cloud Storage offer scalable, secure, and durable
storage solutions for data of any size.
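Object stores such as Amazon S3 and Google Cloud Storage expose a flat key-to-object interface rather than a filesystem. The toy in-memory class below mimics that put/get/list-by-prefix pattern (illustrative only — the real services are accessed through SDKs such as boto3, and "folders" are emulated purely by key prefixes; all bucket and key names here are invented):

```python
class ObjectStore:
    """Toy in-memory model of an S3-style object store (illustrative only)."""

    def __init__(self):
        self.buckets = {}  # bucket name -> {key: bytes}

    def create_bucket(self, bucket):
        self.buckets.setdefault(bucket, {})

    def put_object(self, bucket, key, body):
        # Objects are addressed by (bucket, key); there are no real directories.
        self.buckets[bucket][key] = body

    def get_object(self, bucket, key):
        return self.buckets[bucket][key]

    def list_objects(self, bucket, prefix=""):
        # Prefix filtering is how S3-style stores emulate folder listings.
        return sorted(k for k in self.buckets[bucket] if k.startswith(prefix))


store = ObjectStore()
store.create_bucket("demo-bucket")
store.put_object("demo-bucket", "logs/2024/app.log", b"started")
store.put_object("demo-bucket", "images/cat.png", b"\x89PNG")
print(store.list_objects("demo-bucket", prefix="logs/"))  # ['logs/2024/app.log']
```

The flat namespace plus prefix listing is what lets these services scale to trillions of objects without a hierarchical directory tree.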
4. Desktop as a Service (DaaS):
Virtual Desktops: VMware Horizon Cloud and Amazon WorkSpaces provide virtual desktops
to users, allowing them to access their personal computing environment from any device,
anywhere.
5. Disaster Recovery as a Service (DRaaS):
Backup and Recovery: Solutions like Azure Site Recovery and AWS Elastic Disaster Recovery ensure that
business data and applications can be recovered quickly after a disaster, minimizing downtime
and data loss.
6. Big Data and Analytics:
Data Processing and Analysis: Platforms like Google BigQuery and Amazon Redshift enable
businesses to analyze large datasets, providing insights that inform business decisions.
Machine Learning and AI: Azure Machine Learning and Google AI Platform allow businesses
and developers to build and deploy machine learning models and artificial intelligence
applications.
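Services like BigQuery and Redshift are, at heart, engines for running SQL-style aggregations over very large tables. The stdlib sketch below shows the same GROUP-BY pattern at toy scale (the sales records and field names are invented):

```python
from collections import defaultdict

# Hypothetical sales records; a warehouse would scan billions of such rows.
rows = [
    {"region": "EU", "amount": 120.0},
    {"region": "US", "amount": 300.0},
    {"region": "EU", "amount": 80.0},
    {"region": "US", "amount": 50.0},
]

# Equivalent of: SELECT region, SUM(amount) FROM sales GROUP BY region
totals = defaultdict(float)
for row in rows:
    totals[row["region"]] += row["amount"]

print(dict(totals))  # {'EU': 200.0, 'US': 350.0}
```

The cloud's value here is not the aggregation logic itself but running it in parallel across thousands of machines, which the managed services handle transparently.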
7. Internet of Things (IoT):
Device Management: Azure IoT Hub and AWS IoT Core provide platforms for connecting,
monitoring, and managing IoT devices at scale.
Data Processing: These platforms also offer tools for processing data from IoT devices,
enabling real-time analytics and insights.
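Platforms such as AWS IoT Core typically deliver device telemetry as small JSON messages on named topics. The sketch below (device IDs and field names invented) parses a batch of such payloads and computes a per-device average — a minimal stand-in for the real-time analytics described above:

```python
import json

# Hypothetical JSON payloads, as an IoT message broker might deliver them.
messages = [
    '{"device": "sensor-1", "temp_c": 21.5}',
    '{"device": "sensor-2", "temp_c": 19.0}',
    '{"device": "sensor-1", "temp_c": 22.5}',
]

# Group readings by device ID.
readings = {}
for raw in messages:
    msg = json.loads(raw)
    readings.setdefault(msg["device"], []).append(msg["temp_c"])

# Average temperature per device -- a simple per-device analytic.
averages = {dev: sum(vals) / len(vals) for dev, vals in readings.items()}
print(averages)  # {'sensor-1': 22.0, 'sensor-2': 19.0}
```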
8. Content Delivery Networks (CDN):
Content Distribution: Amazon CloudFront and Akamai provide global content delivery
services, ensuring fast and reliable access to websites and web content.
9. Collaboration and Communication Tools:
Video Conferencing and Collaboration: Zoom, Microsoft Teams, and Google Meet leverage
cloud computing to offer scalable video conferencing, file sharing, and collaboration tools.

4.7 Healthcare
The application of cloud computing in healthcare has revolutionized the way healthcare
services are delivered, improving accessibility, efficiency, and scalability while ensuring data
security and compliance with regulations.
1. Electronic Health Records (EHRs)
Accessibility and Integration: Cloud-based EHR systems enable healthcare providers to access
and share patient records securely from any location, improving coordination and patient care.
Scalability: Cloud solutions can easily scale to accommodate the growing amount of patient
data, including medical histories, test results, and imaging studies.
2. Telemedicine
Remote Consultations: Cloud platforms facilitate telemedicine services, allowing patients to
consult with healthcare professionals via video conferencing, reducing the need for physical
visits and expanding access to care, especially in underserved areas.
Monitoring and Management: Continuous monitoring of patients with chronic conditions is
enabled through cloud-connected wearable devices, improving patient outcomes.
3. Medical Imaging
Storage and Sharing: Cloud computing provides scalable storage solutions for large imaging
files (e.g., X-rays, MRIs). It also simplifies the sharing of these images among healthcare
providers for consultation and diagnosis, regardless of their location.
Advanced Analysis: Cloud-based AI and machine learning tools assist in the analysis and
interpretation of medical images, leading to faster and more accurate diagnoses.
4. Data Analytics and Big Data
Predictive Analytics: The cloud supports advanced analytics on vast amounts of healthcare
data, aiding in the identification of trends, risk factors, and the development of predictive
models for outbreaks and diseases.
Research and Development: Cloud computing accelerates research by providing the
computational resources needed for complex simulations, data analysis, and the storage of large
datasets.
5. Health Information Exchange (HIE)
Secure Data Exchange: Cloud services facilitate the secure and efficient exchange of health
information among different healthcare systems, improving the completeness of patient records
and supporting better health outcomes.
6. Disaster Recovery and Backup
Data Protection: Cloud-based solutions offer robust backup and disaster recovery capabilities,
ensuring that medical data is preserved and can be quickly restored in the event of hardware
failures, natural disasters, or cyberattacks.

7. Patient Engagement and Portals
Enhanced Communication: Cloud-based patient portals provide a secure platform for patients
to access their health records, schedule appointments, communicate with their healthcare
providers, and receive personalized health advice.
8. Regulatory Compliance
Compliance Support: Cloud providers often offer infrastructure and services that are compliant
with healthcare regulations such as HIPAA in the U.S., helping healthcare organizations protect
patient data and meet regulatory requirements.
The adoption of cloud computing in healthcare continues to grow as technologies evolve and
the demand for more efficient, patient-centered healthcare services increases. Cloud computing
not only enhances the capabilities of healthcare providers but also improves patient outcomes
by making healthcare more accessible, personalized, and data-driven.

4.8 ECG Analysis in the Cloud:


ECG (Electrocardiogram) analysis in the cloud represents a significant advancement in the
monitoring and diagnostic capabilities within the healthcare sector, leveraging cloud
computing technology to enhance the accessibility, efficiency, and effectiveness of ECG data
analysis. This approach offers several benefits and applications:
Benefits and Advancements
1. Real-time Monitoring and Analysis: Cloud platforms can receive and analyze ECG data in
real time from wearable devices, enabling continuous monitoring of patients with
cardiovascular conditions. This facilitates early detection of potential issues, allowing for
prompt intervention.
2. Scalability: Cloud computing provides scalable resources, enabling the storage and analysis
of vast amounts of ECG data from numerous patients simultaneously without compromising
performance or speed.
3. Accessibility: Healthcare providers can access ECG data and analysis results from any
location, improving the coordination of care across different healthcare facilities and
specialists.
4. Data Integration: ECG data can be integrated with other patient health records stored in the
cloud, providing a comprehensive view of the patient's health status. This aids in more accurate
diagnostics and personalized treatment plans.
5. Advanced Analytical Tools: The cloud supports the use of advanced analytics, machine
learning, and artificial intelligence algorithms to interpret ECG data, identify patterns, and
predict potential health issues, sometimes with greater accuracy than traditional methods.
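On the analysis side, the core of many ECG pipelines is R-peak detection followed by heart-rate estimation. The pure-Python sketch below shows the idea on a synthetic trace with a toy sampling rate; production cloud services use far more robust detectors (e.g. the Pan–Tompkins algorithm) on streamed device data:

```python
# Synthetic ECG-like trace: baseline noise with three tall R-peaks.
signal = [0.1, 0.0, 0.2, 1.5, 0.2, 0.1, 0.0, 0.1, 1.6, 0.1, 0.0, 0.2, 1.4, 0.1]
fs = 4.0  # sampling rate in Hz (toy value for illustration)

# An R-peak here is a local maximum above an amplitude threshold.
threshold = 1.0
peaks = [i for i in range(1, len(signal) - 1)
         if signal[i] > threshold
         and signal[i] >= signal[i - 1]
         and signal[i] >= signal[i + 1]]

# Heart rate from the mean R-R interval (seconds between successive peaks).
rr = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]
bpm = 60.0 / (sum(rr) / len(rr))
print(peaks, round(bpm, 1))  # [3, 8, 12] 53.3
```

In a cloud deployment this logic would run server-side on data streamed from wearables, with alerts raised when the estimated rate leaves a safe range.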
Applications

1. Telecardiology: Cloud-based ECG analysis is a cornerstone of telecardiology services,
enabling remote diagnosis and management of cardiac patients, which is particularly beneficial
for individuals in remote or underserved regions.
2. Wearable Health Devices: Integration with wearable technologies allows for the continuous
monitoring of cardiac health in everyday settings, not just in clinical environments. This can
lead to better management of chronic conditions and lifestyle-related interventions.
3. Clinical Research: Researchers can utilize cloud-based platforms to collect and analyze ECG
data from diverse populations over time. This aids in clinical studies related to cardiac health,
the development of new treatments, and the understanding of heart disease patterns.
4. Emergency Response: In emergency situations, immediate access to a patient's ECG analysis
can be critical. Cloud platforms can provide emergency responders and medical personnel with
instant access to this data, potentially saving lives.
5. Cost Reduction: By utilizing cloud services, healthcare providers can reduce the costs
associated with data storage and IT infrastructure, as well as improve the efficiency of ECG
analysis and patient monitoring processes.
Challenges and Considerations
While cloud-based ECG analysis offers numerous benefits, there are challenges to consider,
including data security and privacy concerns, the need for robust internet connectivity
(especially in remote areas), and compliance with health-data regulations such as
HIPAA in the United States or GDPR in Europe.
As cloud computing technology continues to evolve, it is expected that these challenges will
be addressed, further enhancing the capabilities and benefits of cloud-based ECG analysis for
patients and healthcare providers alike.
4.9 Biology:
Cloud computing has significantly impacted the field of biology, offering scalable
computational resources and data storage capabilities that are essential for modern biological
research and applications. This technology enables biologists and researchers to process large
datasets, perform complex simulations, and collaborate more effectively, regardless of
geographical location. Some key applications of cloud computing in biology:
1. Genomics and Genomic Analysis
Data Intensive Genomics: Genomic sequencing generates massive amounts of data. Cloud
computing provides the necessary computational power and storage capacity to analyze this
data, facilitating breakthroughs in understanding genetic variations and their relation to
diseases.
Collaborative Research: Researchers worldwide can access and share genomic data stored in
the cloud, promoting collaboration and accelerating discoveries in genetics.
2. Proteomics

Protein Structure Prediction: Cloud computing aids in the computationally intensive task of
predicting protein structures from amino acid sequences, a process crucial for understanding
biological functions and developing new drugs.
Mass Spectrometry Data Analysis: Analyzing data from mass spectrometry, used in identifying
and quantifying proteins, benefits from the cloud's scalable compute resources.
3. Bioinformatics
Database Hosting and Access: Biological databases, containing information on genes, proteins,
diseases, and more, are hosted in the cloud, ensuring researchers have up-to-date and easy access
to critical data.
Computational Tools: A wide range of bioinformatics tools for sequence alignment,
phylogenetic analysis, and other tasks are available as cloud services, simplifying the analysis
process.
4. Drug Discovery and Development
Virtual Screening: Cloud computing facilitates the virtual screening of large libraries of
compounds against biological targets to identify potential drug candidates.
Molecular Modeling and Simulations: Simulating the interactions between molecules and
biological targets requires substantial computational resources, readily available in the cloud.
5. Ecology and Conservation Biology
Species Distribution Modeling: Cloud computing supports the analysis of environmental and
observational data to predict the distribution of species across landscapes, aiding in
conservation efforts.
Climate Change Studies: Modeling the impacts of climate change on biodiversity and
ecosystems benefits from the vast computational resources available through cloud computing.
6. Data Sharing and Collaboration
Collaborative Platforms: Cloud-based platforms enable researchers to share data and tools,
collaborate on projects, and publish results in a more streamlined and accessible manner.
Educational Resources: The cloud offers access to educational resources, datasets, and tools
for students and educators in biology, promoting learning and innovation.
7. High-Performance Computing (HPC)
Complex Computational Tasks: Cloud-based HPC services provide the necessary
computational power for tasks such as phylogenetic tree construction, metabolic network
analysis, and other complex analyses in biology.
Challenges
Despite the advantages, the application of cloud computing in biology faces challenges,
including data security and privacy concerns, the need for reliable internet connectivity, and
ensuring proper data management and compliance with regulatory standards. As cloud
computing technologies continue to evolve, they are set to further transform the biological
sciences, making research and analysis more efficient, collaborative, and innovative.
4.9.1 Protein Structure Prediction
Protein structure prediction is a critical area of bioinformatics that focuses on determining the
three-dimensional structure of proteins from their amino acid sequences. This field is vital
because a protein's function is closely tied to its structure, and understanding this structure can
have profound implications for drug discovery, disease understanding, and the development of
biotechnological applications. There are several methods and approaches used in protein
structure prediction, each with its own strengths and challenges.
Key Methods in Protein Structure Prediction
1. Homology Modeling (Comparative Modeling):
This is the most accurate method when a clear homolog (sequence similarity) of the target
protein with known structure exists. The assumption is that similar sequences adopt similar
structures. This method involves aligning the amino acid sequence of the unknown protein
(target) with one or more proteins of known structure (template), followed by modeling the
target structure based on the template.
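Homology modeling begins with aligning the target sequence against template sequences of known structure. A minimal Needleman–Wunsch global-alignment scorer is sketched below (toy scoring of match +1, mismatch −1, gap −1; real pipelines use substitution matrices such as BLOSUM62, and the short amino-acid fragments here are invented):

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-1):
    """Needleman-Wunsch global alignment score via dynamic programming."""
    rows, cols = len(a) + 1, len(b) + 1
    dp = [[0] * cols for _ in range(rows)]
    # First row/column: aligning a prefix against nothing costs only gaps.
    for i in range(1, rows):
        dp[i][0] = i * gap
    for j in range(1, cols):
        dp[0][j] = j * gap
    # Each cell: best of diagonal (match/mismatch) or a gap in either sequence.
    for i in range(1, rows):
        for j in range(1, cols):
            diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
    return dp[-1][-1]


# Four matches (M, K, T, Y) minus one gap for the unmatched A -> score 3.
print(nw_score("MKTAY", "MKTY"))  # 3
```

At genomic scale this same dynamic-programming idea is run (in optimized form, e.g. BLAST-style heuristics) across millions of sequences, which is exactly the kind of embarrassingly parallel workload cloud compute suits.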
2. Ab Initio (De Novo) Modeling:
Used when no significant homology to proteins of known structure is found. This method
predicts protein structure from scratch, based on physical and biochemical principles. It often
involves simulating the folding process of a protein, which is computationally intensive and
less accurate than homology modeling for large proteins.
3. Fold Recognition (Threading):
This method is used when the target protein may not have a clear sequence homology to any
known structure but could share a similar folding pattern (tertiary structure). It involves
"threading" the sequence of the target protein through a database of known protein structures
to find a compatible fold, then building the model based on this fold.
4. Machine Learning and AI Approaches:
Recent advances have seen the application of deep learning and artificial intelligence to protein
structure prediction. Notably, AlphaFold by DeepMind has achieved remarkable success in
predicting protein structures with high accuracy, transforming the field. These methods use
neural networks to predict protein structures based on vast datasets of known protein structures.
Applications and Implications
Drug Discovery: Knowing the structure of a protein involved in a disease can help in designing
drugs that specifically target the protein's active site.
Understanding Diseases: Misfolded proteins are implicated in many diseases. Predicting and
studying protein structures can provide insights into disease mechanisms.
Biotechnology: Protein engineering relies on understanding and manipulating protein
structures to create proteins with new or enhanced functions.
Challenges and Future Directions
Complexity and Accuracy: Despite significant advancements, accurately predicting the
structure of large and complex proteins remains challenging.
Functional Prediction: Knowing the structure does not always reveal the protein's function,
requiring further research and computational methods to predict how proteins interact with
other molecules.
Computational Resources: High computational costs have been a barrier, especially for ab initio
methods and large-scale analyses, though advances in cloud computing and specialized
hardware are mitigating this issue.
The field of protein structure prediction is rapidly evolving, with new computational methods
and technologies continuously emerging. The integration of machine learning and AI has
particularly accelerated progress, promising to unlock new frontiers in biology, medicine, and
biotechnology.

4.10 Geosciences:
Cloud computing has increasingly become a fundamental resource for geosciences, a field that
often deals with vast amounts of data and complex computational models. The application of
cloud computing in geosciences addresses several challenges by providing scalable
computational resources, storage, and advanced data analytics capabilities. This technology
supports a wide range of applications from climate modeling and seismic analysis to resource
management and environmental monitoring. Cloud computing is applied in various geoscience
domains:
1. Climate Modeling and Meteorology
Data Storage and Analysis: Cloud platforms store and process large datasets from satellites,
weather stations, and climate models, enabling researchers to analyze trends, predict weather
patterns, and study climate change effects.
High-Performance Computing (HPC): Cloud-based HPC allows for running complex climate
models that require extensive computational resources, facilitating more accurate and faster
simulations.
2. Seismic Data Processing and Analysis
Seismic Exploration: In oil and gas exploration, cloud computing is used to process and analyze
seismic data, helping to identify potential resource deposits. The scalability of cloud resources
allows for the rapid processing of large seismic datasets.
Earthquake Research: Researchers use cloud computing to simulate earthquake scenarios,
analyze seismic activity, and develop early warning systems by processing data from seismic
networks in real time.
3. Remote Sensing and Satellite Imagery
Data Management: Cloud platforms host and manage vast amounts of remote sensing data from
satellites, drones, and aircraft, making it accessible to researchers and decision-makers.

Image Processing and Analysis: Advanced cloud-based tools enable the processing and analysis
of satellite imagery for applications such as land use planning, environmental monitoring, and
disaster response.
4. Geospatial Data Analysis
Mapping and Visualization: Cloud computing facilitates the storage, analysis, and visualization
of geospatial data, supporting the creation of detailed maps and 3D models for environmental
studies, urban planning, and resource management.
Location-based Services: Geoscientists use cloud computing to develop and deploy location-
based services, enhancing navigation, tracking, and geotagging applications.
5. Hydrology and Water Resources Management
Water Cycle Modeling: Cloud-based models simulate water cycles and forecast water
availability, supporting water resource management and flood prediction efforts.
Water Quality Analysis: The cloud supports the analysis of water quality data from various
sources, aiding in the monitoring and management of water resources.
6. Environmental Monitoring and Conservation
Biodiversity Studies: Cloud computing enables the storage and analysis of data related to
species distribution, habitat conditions, and conservation efforts, supporting biodiversity
preservation.
Pollution Monitoring: The cloud facilitates the collection and analysis of environmental
pollution data, helping to assess impacts and develop mitigation strategies.
Challenges and Opportunities
While cloud computing offers remarkable capabilities for geosciences, challenges such as data
security, privacy, and the need for robust internet connectivity must be addressed. Additionally,
there's a growing need for skilled professionals who can navigate both geosciences and cloud
computing technologies.

As cloud computing technologies continue to evolve, their applications in geosciences are
expected to expand, enabling more sophisticated analyses and contributing to our
understanding of Earth's systems and processes. The integration of emerging technologies like
artificial intelligence and machine learning with cloud computing will further enhance the
capabilities and efficiencies in geosciences research and applications.
4.10.1 Satellite Image Processing
Satellite image processing is a vital technology used in various fields such as environmental
monitoring, urban planning, agriculture, disaster management, and climate research. It involves
the manipulation and analysis of satellite imagery to extract useful information and insights.
The process encompasses several steps, from acquisition to the interpretation of data,
employing a range of techniques and tools to enhance, classify, and analyze the images.

Key Steps in Satellite Image Processing
1. Acquisition: The first step involves capturing satellite images using sensors mounted on
satellites orbiting the Earth. These sensors can capture data in multiple spectral bands,
including visible light, infrared, and radar.
2. Preprocessing: This stage prepares images for analysis, which may include corrections for
geometric distortions, atmospheric interference, and sensor noise. Preprocessing ensures that
the images accurately represent the Earth's surface.
3. Enhancement: Image enhancement techniques improve the appearance of satellite imagery
for analysis. This can involve adjusting contrast and brightness, filtering noise, and highlighting
specific features within the images.
4. Classification: Classification is used to categorize pixels in an image into meaningful classes
or themes, such as vegetation, water bodies, urban areas, etc. This can be achieved through
supervised classification (where the classes are known beforehand) or unsupervised
classification (where the algorithm identifies patterns in the data).
5. Change Detection: By comparing satellite images taken at different times, changes in the
environment, such as deforestation, urban expansion, or the aftermath of natural disasters, can
be detected and quantified.
6. Analysis and Interpretation: The final step involves analyzing the processed images to extract
valuable information. This can include measuring areas affected by specific phenomena,
identifying resources, or assessing the health of vegetation.
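As a concrete instance of steps 3–5, a common workflow computes a vegetation index per pixel and then differences two acquisition dates. The pure-Python sketch below uses NDVI = (NIR − Red) / (NIR + Red) on tiny invented 2×2 reflectance rasters (real pipelines operate on full multispectral scenes with libraries like NumPy or rasterio):

```python
def ndvi(nir, red):
    """Per-pixel NDVI = (NIR - Red) / (NIR + Red); values near 1 = dense vegetation."""
    return [[(n - r) / (n + r) for n, r in zip(nrow, rrow)]
            for nrow, rrow in zip(nir, red)]


# Invented 2x2 reflectance rasters for two acquisition dates.
nir_t1 = [[0.8, 0.7], [0.6, 0.8]]
red_t1 = [[0.1, 0.2], [0.2, 0.1]]
nir_t2 = [[0.8, 0.3], [0.6, 0.8]]
red_t2 = [[0.1, 0.3], [0.2, 0.1]]

t1, t2 = ndvi(nir_t1, red_t1), ndvi(nir_t2, red_t2)

# Change detection: flag pixels whose NDVI dropped sharply (possible vegetation loss).
loss = [[(a - b) > 0.3 for a, b in zip(r1, r2)] for r1, r2 in zip(t1, t2)]
print(loss)  # [[False, True], [False, False]]
```

Only the top-right pixel is flagged, because its NDVI falls from about 0.56 to 0.0 between the two dates; the same per-pixel arithmetic, run in the cloud over whole satellite archives, drives deforestation and disaster-impact monitoring.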
Tools and Technologies
GIS Software: Geographic Information System (GIS) software is essential for satellite image
processing, providing tools for mapping, spatial analysis, and visualization. Examples include
ArcGIS and QGIS.
Remote Sensing Software: Specialized software such as ERDAS IMAGINE and ENVI offers
advanced capabilities for processing, enhancing, and analyzing satellite images.
Cloud Computing: Cloud platforms such as Google Earth Engine and the Earth on AWS open-data
program provide vast computational resources and storage, enabling the efficient processing
of large satellite-imagery datasets.
Applications
Environmental Monitoring: Assessing the health of forests, monitoring water quality, and
tracking wildlife habitats.
Agriculture: Estimating crop yields, monitoring crop health, and managing agricultural
resources.
Urban Planning: Mapping urban growth, planning infrastructure, and assessing land use
changes.
Disaster Management: Identifying areas affected by floods, wildfires, earthquakes, and other
disasters for effective response and recovery efforts.
Climate Research: Studying global environmental changes, ice-melt patterns, and sea-level rise.
Challenges and Future Directions
Satellite image processing faces challenges related to data volume, quality, and accessibility.
However, advancements in satellite technology, machine learning, and cloud computing are
continually enhancing the accuracy and efficiency of satellite image analysis. Future directions
include the integration of artificial intelligence to automate the extraction of insights from
satellite data, and the development of more sophisticated models for understanding complex
environmental and societal phenomena.
4.11 Business and Consumer Applications:
Cloud computing has transformed both the business landscape and consumer experiences by
offering scalable, flexible, and cost-effective solutions. Cloud computing serves both domains:
Business Applications
1. Software as a Service (SaaS):
Productivity Suites: Tools like Microsoft Office 365 and Google Workspace provide businesses
with email, document creation, and collaboration tools, accessible from anywhere.
Customer Relationship Management (CRM): Platforms like Salesforce enable businesses to
manage and analyze customer interactions, improving customer service and retention.
2. Infrastructure as a Service (IaaS):
Virtual Machines: Amazon EC2 and Azure Virtual Machines allow businesses to run
applications on virtual servers, scaling resources as needed.
Storage Solutions: Amazon S3 and Google Cloud Storage offer scalable, secure data storage
options.
3. Platform as a Service (PaaS):
Application Development: Azure App Services and Google App Engine provide environments
for developers to build, deploy, and manage applications without the complexity of managing
the underlying infrastructure.
4. Disaster Recovery as a Service (DRaaS):
Businesses use cloud-based backup and recovery solutions to ensure data integrity and
continuity in case of system failures or disasters.
5. Big Data Analytics:
Cloud platforms offer powerful computing capabilities for analyzing large datasets, enabling
businesses to gain insights, make informed decisions, and identify new opportunities.
Consumer Applications
1. Streaming Services:

Services like Netflix, Spotify, and YouTube use cloud computing to deliver vast amounts of
content to consumers worldwide, offering personalized experiences based on user preferences.
2. Cloud Storage and Backup:
Consumers use services like iCloud, Google Drive, and Dropbox for storing photos,
documents, and backing up their digital lives, accessible from any device.
3. Gaming:
Cloud gaming platforms like Xbox Cloud Gaming (formerly Project xCloud) and the now-
discontinued Google Stadia allow users to stream video games directly to their devices,
eliminating the need for powerful hardware.
4. Social Media and Communication:
Platforms like Facebook, Instagram, and WhatsApp rely on cloud computing to store and
manage billions of pieces of user-generated content and messages, facilitating global connectivity.
5. E-commerce:
Online shopping platforms leverage the cloud for hosting websites, managing inventories,
processing transactions, and analyzing consumer behavior to enhance the shopping experience.
6. Smart Home Devices:
Cloud computing enables the operation and integration of smart home devices, such as
thermostats, lights, and security cameras, allowing for remote control and automation.
Advantages and Considerations
Scalability: Both business and consumer applications benefit from the cloud’s ability to scale
resources up or down based on demand.
Cost Efficiency: Cloud services follow a pay-as-you-go model, reducing the need for upfront
capital investment in hardware and software.
Accessibility: Cloud applications are accessible from anywhere with an internet connection,
providing flexibility for remote work and on-the-go access for consumers.
Security: While cloud providers implement robust security measures, businesses and
consumers must be aware of security and privacy policies and compliance standards.
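The pay-as-you-go point can be made concrete with a simple break-even sketch (all prices here are invented for illustration, not actual provider rates):

```python
# Invented figures: owned-server capital cost vs hourly cloud rental.
server_capex = 4000.0  # one-off purchase price (USD)
cloud_rate = 0.10      # USD per instance-hour, billed only while running

# Total instance-hours at which renting costs as much as buying outright.
break_even_hours = server_capex / cloud_rate
print(break_even_hours)  # 40000.0

# For a bursty workload used 200 hours/month, break-even takes ~16 years,
# so pay-as-you-go wins; a server pinned at 100% utilization may not.
hours_per_month = 200
months_to_break_even = break_even_hours / hours_per_month
print(months_to_break_even)  # 200.0
```

The underlying trade-off is utilization: the cloud converts a fixed capital cost into a variable operating cost, which favors workloads that are intermittent or unpredictable.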
As cloud computing continues to evolve, its applications in both the business and consumer
sectors are likely to expand, driving innovation, efficiency, and connectivity across various
aspects of daily life and work.
4.11.1 CRM and ERP
CRM (Customer Relationship Management) and ERP (Enterprise Resource Planning) systems
have significantly evolved with the advent of cloud computing, transforming how businesses
manage customer interactions, operations, and internal processes. Cloud-based CRM and ERP
systems offer numerous advantages over traditional on-premises installations, including cost
savings, scalability, accessibility, and real-time collaboration.

Cloud-based CRM
Cloud-based CRM systems focus on managing a company's interactions with current and
potential customers. They streamline processes to increase sales, improve customer service,
and enhance customer retention by leveraging cloud technologies.
Key Features and Advantages:
Accessibility: Cloud CRM can be accessed from anywhere, at any time, enabling sales and
customer service teams to work remotely and access critical customer information on the go.
Scalability: Businesses can easily scale their CRM capabilities up or down based on growth or
changing needs without significant upfront investments.
Integration: Many cloud CRM systems offer seamless integration with other business
applications, enhancing productivity and data consistency across tools.
Cost Effectiveness: With a subscription model, businesses avoid the high upfront costs
associated with purchasing and maintaining hardware and software.
Real-time Data: Cloud CRM provides real-time customer data and analytics, helping businesses
make informed decisions and personalize customer interactions.
Examples: Salesforce, Microsoft Dynamics 365, HubSpot CRM.
Cloud-based ERP
Cloud-based ERP systems integrate all facets of an enterprise into a unified suite of
applications. These systems manage business functions such as accounting, HR, procurement,
project management, and manufacturing through a centralized cloud platform.
Key Features and Advantages:
Streamlined Processes: Cloud ERP systems offer integrated applications that streamline
business processes and improve efficiency across departments.
Real-Time Operations: They provide real-time visibility into operations, allowing for better
decision making and quicker response times to market changes.
Flexibility and Mobility: With cloud ERP, employees can access the system from any location,
facilitating remote work and business operations across multiple locations.
Reduced IT Overhead: Businesses save on IT costs related to hardware, software, and
maintenance. Cloud providers manage the infrastructure, security, and software updates.
Enhanced Collaboration: Cloud ERP enables better collaboration within and between
departments by providing a single source of truth and tools for communication and project
management.
Examples: SAP S/4HANA Cloud, Oracle Cloud ERP, NetSuite.
Considerations for CRM and ERP in the Cloud

Data Security: While cloud providers implement stringent security measures, businesses must
understand their own responsibilities in securing customer and operational data.
Compliance: Organizations need to ensure that their cloud-based systems comply with relevant
regulations and standards, especially for data privacy and financial transactions.
Customization vs. Standardization: Cloud solutions often encourage standardization to some
extent, which can be a shift for businesses used to highly customized on-premises solutions.
However, many cloud systems offer flexible customization options.
Connectivity: Dependence on internet connectivity is higher with cloud solutions, which can
be a consideration for businesses in areas with unreliable internet service.
The transition to cloud-based CRM and ERP systems represents a significant shift towards
more dynamic, flexible, and scalable business operations, enabling organizations of all sizes to
better meet their operational needs and adapt to market changes.
4.11.2 Social Networking
The application of cloud computing in social networking has fundamentally reshaped how
these platforms operate, scale, and innovate. Cloud computing provides the infrastructure and
services necessary to support the vast amounts of data generated by social networks, along with
the billions of interactions that occur daily. Cloud computing powers social networking in
several ways:
Scalability and Flexibility
Handling Traffic Spikes: Social networks experience fluctuating traffic, with sudden spikes
during events or viral trends. Cloud computing enables these platforms to dynamically scale
resources up or down to handle the increased load, ensuring smooth user experiences.
Global Reach: Cloud services, distributed across global data centers, allow social networks to
deliver content quickly and reliably to users worldwide, minimizing latency.
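The scale-up/scale-down behavior described above can be illustrated with a minimal threshold-based autoscaler sketch. The capacity figures and instance limits below are invented for illustration and do not reflect any provider's actual policy.

```python
def desired_instances(requests_per_sec, capacity_per_instance=100,
                      min_instances=2, max_instances=50):
    """Toy threshold autoscaler: provision enough instances to keep each one
    below its nominal capacity, clamped to a [min, max] range. All numbers
    here are illustrative assumptions."""
    needed = -(-requests_per_sec // capacity_per_instance)  # ceiling division
    return max(min_instances, min(max_instances, needed))

print(desired_instances(150))    # 2  (normal load)
print(desired_instances(2500))   # 25 (viral spike: scale out)
print(desired_instances(50))     # 2  (quiet period: scale back to the floor)
```

Real platforms make the same decision continuously from monitored metrics (CPU, request latency, queue depth) rather than a single request rate.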

Data Storage and Management


Massive Data Volumes: Social networks generate huge quantities of data, including posts,
images, videos, and user interactions. Cloud storage solutions offer scalable, secure, and cost-
effective ways to store this data.
Data Analysis and Personalization: Cloud computing enables the processing and analysis of
big data, allowing social networks to personalize content, target advertisements, and provide
recommendations based on user behavior and preferences.
Development and Deployment Speed
Rapid Development: Cloud platforms offer tools and services that accelerate the development
of new features and applications, enabling social networks to quickly innovate and respond to
user demands.

Continuous Integration and Deployment: Cloud environments support agile development
practices, allowing for continuous integration and deployment of updates without downtime,
enhancing the overall platform's agility and resilience.
Reliability and Availability
High Availability: Cloud computing provides mechanisms for data redundancy and failover,
ensuring that social networking services remain available even in the event of hardware failures
or other issues.
Disaster Recovery: Cloud-based backup and disaster recovery solutions protect against data
loss, ensuring that social networks can quickly recover from various types of disruptions.
Cost Efficiency
Pay-as-You-Go Model: Instead of investing heavily in physical data centers and servers, social
networks can use cloud computing resources on a pay-as-you-go basis, significantly reducing
upfront and operational costs.
Resource Optimization: Cloud services offer tools for monitoring and optimizing resource
usage, helping social networks manage costs more effectively.
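The pay-as-you-go saving can be made concrete with a toy cost comparison. The rate, hours, and instance counts below are assumptions chosen only to illustrate the arithmetic, not real prices.

```python
def on_demand_cost(instance_hours, rate_per_hour):
    """Pay-as-you-go: pay only for the instance-hours actually consumed."""
    return instance_hours * rate_per_hour

def fixed_cost(peak_instances, hours_in_period, rate_per_hour):
    """Owned capacity: provisioned for peak load for the whole period."""
    return peak_instances * hours_in_period * rate_per_hour

# A workload that peaks at 20 instances but averages only 4 over a 720-hour
# month, at a made-up rate of $0.25 per instance-hour.
pay_go = on_demand_cost(4 * 720, 0.25)   # 720.0
always_on = fixed_cost(20, 720, 0.25)    # 3600.0
print(pay_go, always_on)
```

The gap widens as traffic gets spikier: owned hardware must be sized for the peak, while the cloud bills for the average.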
Security and Compliance
Data Security: Cloud providers implement robust security measures, including encryption,
access controls, and network security, to protect user data and comply with privacy regulations.
Regulatory Compliance: Cloud platforms often comply with a range of international and
industry-specific regulations, aiding social networks in meeting legal requirements across
different regions.
Example Applications
Content Delivery Networks (CDNs): Social networks use CDNs to distribute content
efficiently worldwide, reducing latency and improving user experience.
Machine Learning and AI: Cloud computing powers advanced analytics, machine learning
models, and AI algorithms to analyze user data, moderate content, and enhance user
interactions on social platforms.

The integration of cloud computing into social networking underscores a broader trend towards
more scalable, efficient, and innovative digital services. As cloud technologies continue to
evolve, they will undoubtedly drive further advancements in how social networks operate and
how users interact online.
4.11.3 Google
Google Cloud Platform (GCP) is Google's suite of cloud computing services. Google
Cloud provides a wide range of services that enable developers and businesses to build, deploy,
and scale applications, websites, and services using the same infrastructure and data analytics

technology that Google uses for its own products, like Google Search and YouTube. Here's an
overview of some key components and features of Google Cloud Platform:
Computing Services
Google Compute Engine: Offers virtual machines (VMs) that run on Google's infrastructure.
It's highly customizable and suitable for tasks that require consistent performance.
Google App Engine: A platform for building scalable web applications and mobile backends,
allowing developers to focus on coding without worrying about the underlying infrastructure.
Google Kubernetes Engine (GKE): A managed environment for deploying, managing, and
scaling containerized applications using Google infrastructure.
Storage and Databases
Google Cloud Storage: A highly durable and available object storage service for storing and
accessing data on Google Cloud Platform.
Google Cloud SQL: Fully managed relational database service that supports SQL databases
like MySQL, PostgreSQL, and SQL Server.
Google Cloud Bigtable: A scalable, NoSQL database service designed for large analytical and
operational workloads.
Google Firestore: A scalable, fully managed NoSQL document database for mobile, web, and
server development.
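The bucket/object abstraction that Cloud Storage (and object stores generally) exposes can be mimicked with a small in-memory model. This is a teaching sketch only; it is not the google-cloud-storage client API, and the names here are invented.

```python
class Bucket:
    """Toy stand-in for an object-storage bucket: a flat namespace mapping
    object names (keys) to immutable byte blobs."""
    def __init__(self, name):
        self.name = name
        self._objects = {}

    def upload(self, key, data: bytes):
        self._objects[key] = bytes(data)

    def download(self, key) -> bytes:
        return self._objects[key]

    def list_objects(self, prefix=""):
        # Object stores have no real directories; "folders" are key prefixes.
        return sorted(k for k in self._objects if k.startswith(prefix))

b = Bucket("demo-bucket")
b.upload("logs/2024-01-01.txt", b"hello")
b.upload("logs/2024-01-02.txt", b"world")
print(b.list_objects(prefix="logs/"))  # ['logs/2024-01-01.txt', 'logs/2024-01-02.txt']
```

The real service adds durability (replication across zones), access control, and versioning on top of this same key-to-blob model.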
Big Data and Analytics
Google BigQuery: A fast, highly scalable, and cost-effective multi-cloud data warehouse
designed for business agility.
Google Cloud Dataflow: A fully managed service for stream and batch data processing.
Google Dataproc: A cloud service for running Apache Spark and Apache Hadoop clusters to
process big data.
Machine Learning and AI
Google AI Platform: Offers machine learning and deep learning services, allowing you to
build, train, and deploy models.
Google Cloud Vision API: Enables developers to understand the content of an image by
encapsulating powerful machine learning models.
Google Cloud Natural Language API: Provides natural language understanding technologies
to developers, including sentiment analysis, entity recognition, and syntax analysis.
Networking
Google Cloud Virtual Private Cloud (VPC): Provides a private network with IP allocation,
routing, and network firewall policies to manage access to resources.

Google Cloud Load Balancing: Distributes applications' compute resources in single or
multiple regions, closer to users, and to meet high availability requirements.
Security and Identity
Google Cloud Identity & Access Management (IAM): Manages access control and
permissions for Google Cloud resources.
Google Cloud Security Command Center: Provides risk and threat detection to your Google
Cloud assets.
Development Tools
Google Cloud SDK: A set of tools that can be used to manage resources and applications
hosted on Google Cloud.
Google Cloud Shell: A browser-based shell that provides command line access to Google
Cloud resources.
Google Cloud competes with other major cloud service providers like Amazon Web Services
(AWS) and Microsoft Azure. It's known for its high-performance computing, data analytics
capabilities, machine learning services, and the extensive integration with Google's vast
ecosystem. Businesses and developers turn to Google Cloud for its innovative technologies,
secure and reliable infrastructure, and commitment to sustainability.
4.12 Cloud Applications
A cloud application, or cloud app, is a software program where cloud-based and local
components work together. This model relies on remote servers for processing logic that is
accessed through a web browser with a continual internet connection. Cloud applications can
provide more efficient data processing and storage capabilities than traditional desktop
applications, enabling users to access functionality and data from virtually any device with
internet connectivity.
Key Characteristics of Cloud Applications
1. Scalability: Cloud apps can scale resources up or down based on demand, allowing for
efficient handling of user load and data processing requirements.
2. Accessibility: They are accessible from anywhere via the internet, facilitating remote work
and mobile access.
3. Multi-tenancy: Many cloud apps are designed for multi-tenancy, meaning a single instance of
the app can serve multiple users or organizations, optimizing resource sharing and cost
efficiency.
4. Performance: Cloud applications leverage the robust infrastructure of cloud service
providers, offering high performance and reliability.
5. Security: While hosted in the cloud, these applications often incorporate advanced security
measures to protect data and ensure user privacy.
Types of Cloud Applications

1. Software as a Service (SaaS): These are on-demand software offerings from third parties.
Examples include Microsoft Office 365, Google Workspace, Salesforce, and Slack. Users
access these services via the internet without worrying about installation, updates, or
maintenance.
2. Platform as a Service (PaaS): Offers a development platform and solution stack as a service.
It provides developers the framework they can use to build custom applications. Google App
Engine and Heroku are examples.
3. Infrastructure as a Service (IaaS): Provides virtualized computing resources over the internet,
like virtual machines, storage, and networks. Amazon Web Services (AWS), Microsoft Azure,
and Google Cloud Platform (GCP) are leaders in this space.
Benefits of Cloud Applications
Cost Efficiency: Reduces costs related to hardware acquisition, maintenance, and software
licensing.
Flexibility and Mobility: Users can access cloud applications and data from any device,
anywhere, promoting flexibility and mobility.
Automatic Updates: Cloud apps can be updated, tested, and deployed quickly, ensuring users
always have access to the latest features and security enhancements.
Collaboration Efficiency: Facilitates easy sharing and collaboration on documents and projects
among team members located in different geographies.
Disaster Recovery: Enhances data safety through offsite backups and disaster recovery
strategies, minimizing data loss risks.
Examples and Uses
Business Applications: CRM, ERP, and accounting software help businesses manage customer
information, resource planning, and financial transactions.
Communication Tools: Email, video conferencing, and messaging apps enable effective
communication and collaboration.
Content Creation: Tools for document creation, photo and video editing, and web design
support creative projects and content management.

As cloud computing continues to evolve, cloud applications are becoming more sophisticated,
incorporating technologies like AI and machine learning to offer more personalized and
efficient user experiences. This evolution is driving significant changes in how businesses
operate and how individuals interact with technology in their daily lives.
4.12.1 Google App Engine
Google App Engine (GAE) is a fully managed, Platform as a Service (PaaS) offering from
Google Cloud Platform that enables developers to build, deploy, and scale web applications
and APIs quickly and seamlessly. It abstracts away the infrastructure, allowing developers to

focus on code while Google manages the deployment, scaling, and maintenance aspects. App
Engine supports popular development languages such as Java, Python, Node.js, Go, Ruby, and
PHP, making it a versatile platform for a wide range of applications.
Key Features of Google App Engine
1. Fully Managed Environment: Google handles the provisioning of servers, networking,
security, and compliance, significantly reducing the operational burden on developers.
2. Automatic Scaling: App Engine automatically scales the application up and down based on
demand, ensuring that it can handle peaks in user traffic without manual intervention.
3. Integrated Services: It offers seamless integration with other Google Cloud services,
including Cloud Datastore, Cloud Storage, Cloud SQL, and Cloud Pub/Sub, providing a
comprehensive ecosystem for developing powerful applications.
4. Zero Server Management: Developers can deploy applications without managing the
underlying infrastructure, enabling faster development cycles and reducing time to market.
5. Versioning and Traffic Splitting: App Engine allows for easy versioning of applications,
enabling developers to deploy multiple versions simultaneously and split traffic for A/B testing
or gradual rollouts.
6. Application Security: With Google’s security model, applications benefit from built-in
security features, including an application-level firewall, SSL/TLS encryption, and identity and
access management (IAM).
7. Developer Tools and SDK: App Engine provides a local development environment that
mimics the live environment, allowing for easy testing and development. The gcloud command-line
tool and Cloud SDK offer additional utilities for managing and deploying applications.
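Traffic splitting (feature 5 above) routes a fixed fraction of requests to each deployed version. A deterministic, hash-based split, similar in spirit to what App Engine does for gradual rollouts and A/B tests, can be sketched as follows; the version names and weights here are made up.

```python
import hashlib

def pick_version(user_id: str, weights: dict) -> str:
    """Deterministically assign a user to a version in proportion to weights.

    The same user always lands on the same version, which is what makes a
    gradual rollout or A/B test measurable."""
    total = sum(weights.values())
    # Map the user id onto [0, total) via a stable hash.
    h = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % total
    for version, weight in sorted(weights.items()):
        if h < weight:
            return version
        h -= weight
    raise AssertionError("unreachable")

weights = {"v1": 90, "v2": 10}  # hypothetical 90/10 canary split
assert pick_version("alice", weights) == pick_version("alice", weights)  # stable
share = sum(pick_version(f"user{i}", weights) == "v2" for i in range(1000)) / 1000
print(share)  # close to 0.10 for a 10% split
```

Random (per-request) splitting is simpler but makes per-user metrics noisy, which is why sticky, hash-based assignment is the common choice.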
Use Cases for Google App Engine

Web Applications: Ideal for building and hosting web applications, from simple websites to
complex, dynamic web portals.
Mobile Backends: Developers can use App Engine to create scalable backends for mobile
applications, providing APIs and services like user authentication, push notifications, and data
storage.
Microservices Architecture: Supports the development and deployment of microservices,
allowing teams to build, deploy, and scale services independently.
IoT Applications: Can serve as a backend for IoT applications, processing data from devices
and integrating with analytics and storage solutions.
E-commerce Sites: With its scalability and integration with Google’s analytics and machine
learning services, App Engine is well-suited for e-commerce platforms requiring dynamic
scaling and personalized user experiences.
Getting Started with Google App Engine

Developers can start with Google App Engine by creating a new project in the Google Cloud
Console, selecting the App Engine service, and choosing the desired environment (Standard or
Flexible). The Standard environment offers predefined runtime environments and is optimized
for applications that need to scale to zero instances when not in use, while the Flexible
environment provides more customization options for the runtime environment and is suitable
for applications with specific resource or runtime requirements.

Google App Engine represents a powerful and efficient platform for developers looking to
leverage cloud technologies to build scalable and resilient web and mobile applications, taking
advantage of Google’s vast infrastructure and services ecosystem.
4.13 Overview of OpenStack architecture.
OpenStack is an open-standard and free platform for cloud computing. Mostly, it is deployed
as IaaS (Infrastructure-as-a-Service) in both private and public clouds where various virtual
servers and other types of resources are available to users. The platform consists of
interrelated components that control diverse pools of processing, storage, and networking
resources from multiple vendors throughout a data center. Users manage it through
command-line tools, RESTful web services, and a web-based dashboard.
OpenStack began in 2010 as a joint project of NASA and Rackspace Hosting. Since September
2012 it has been managed by the OpenStack Foundation, a non-profit entity created to promote
the OpenStack community and software. More than 50 enterprises have joined the project.
Architecture of OpenStack
OpenStack contains a modular architecture along with several code names for the components.


Nova (Compute)
Nova is the OpenStack project that provides a way to provision compute instances. Nova
supports creating virtual machines and bare-metal servers, and has limited support for
system containers. It runs as a set of daemons on top of existing Linux servers to provide
that service.
This component is written in Python. It uses several external Python libraries such as
SQLAlchemy (an SQL toolkit and object-relational mapper), Kombu (an AMQP messaging
framework), and Eventlet (a concurrent networking library). Nova is designed to scale
horizontally: rather than switching to a larger server, you procure more servers and install
identically configured services on each of them.

Because of Nova's deep integration into enterprise-level infrastructure, monitoring its
performance, and the performance of OpenStack as a whole, at scale has become an
increasingly important issue.

Managing end-to-end performance requires tracking metrics from Nova, Keystone, Neutron,
Cinder, Swift, and other services, in addition to monitoring RabbitMQ, which OpenStack
services use for message passing. Each of these services generates its own log files, which
must also be analyzed, especially in enterprise-level infrastructure.

Neutron (Networking)
Neutron is the OpenStack project that provides "network connectivity as a service" between
interface devices (such as vNICs) managed by other OpenStack services (such as Nova). It
implements the OpenStack Networking API.

It handles all networking facets of the VNI (Virtual Networking Infrastructure) and the
access-layer aspects of the PNI (Physical Networking Infrastructure) in an OpenStack
platform. OpenStack Networking allows projects to build advanced virtual network topologies,
which can include services such as a VPN (Virtual Private Network) and a firewall.

Neutron allows dedicated static IP addresses or addresses assigned via DHCP. It also
supports floating IP addresses, which let traffic be dynamically rerouted.
Users can apply SDN (Software-Defined Networking) technologies such as OpenFlow for
supporting scale and multi-tenancy. OpenStack networking could manage and deploy
additional services of a network such as VPN (Virtual Private Network), firewalls, load
balancing, and IDS (Intrusion Detection System).
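The floating-IP mechanism, a public address that can be detached from one instance and re-pointed at another so that traffic is rerouted, can be modeled in a few lines. This is a teaching sketch, not the Neutron API; the addresses and instance names are invented.

```python
class FloatingIPPool:
    """Toy pool of reroutable public addresses."""
    def __init__(self, addresses):
        self._free = list(addresses)
        self.assignments = {}  # floating IP -> instance name

    def associate(self, instance):
        ip = self._free.pop(0)
        self.assignments[ip] = instance
        return ip

    def reassociate(self, ip, new_instance):
        # Rerouting: the public address stays stable while the backend changes.
        self.assignments[ip] = new_instance

pool = FloatingIPPool(["203.0.113.10", "203.0.113.11"])
ip = pool.associate("web-1")
pool.reassociate(ip, "web-2")   # e.g. failover to a standby instance
print(pool.assignments[ip])     # web-2
```

Because clients only ever see the floating address, the swap is invisible to them, which is what makes this primitive useful for failover and blue/green cutovers.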

Cinder (Block Storage)


Cinder is a service of OpenStack block storage that is used to provide volumes to Nova VMs,
containers, ironic bare-metal hosts, and more. A few objectives of cinder are as follows:
Open-standard: A reference implementation of community-driven APIs.
Recoverable: Failures should be easy to diagnose, debug, and rectify.
Fault-Tolerant: Isolated processes avoid cascading failures.
Highly available: Scales to serious workloads.
Component-based architecture: Include new behaviors quickly.
Cinder volumes provide persistent storage for guest VMs, known as instances, that are
managed by OpenStack Compute software. Cinder can also be used independently of other
OpenStack services as stand-alone software-defined storage. The block storage system manages
the creation, attachment, detachment, replication, and snapshotting of block devices on
servers.

Keystone (Identity)
Keystone is the OpenStack service that provides API client authentication, service
discovery, and distributed multi-tenant authorization by implementing OpenStack's Identity
API. It is the common authentication system across the cloud operating system. Keystone can
integrate with directory services such as LDAP, and supports standard username-and-password
credentials, token-based systems, and AWS-style (Amazon Web Services) logins. The Keystone
service catalog allows API clients to dynamically discover and navigate the available cloud
services.
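Token-based authentication of the kind Keystone provides can be sketched minimally: authenticate once with credentials, receive an opaque token, and present that token to other services until it expires. This simplified model is illustrative only; it is not Keystone's actual protocol or API, and the class and user names are invented.

```python
import secrets
import time

class TinyIdentity:
    """Minimal token issuer/validator (illustrative only)."""
    def __init__(self, users, ttl=3600):
        self._users = users    # username -> password
        self._tokens = {}      # token -> (username, expiry timestamp)
        self._ttl = ttl

    def authenticate(self, username, password):
        if self._users.get(username) != password:
            raise PermissionError("bad credentials")
        token = secrets.token_hex(16)  # opaque, unguessable token
        self._tokens[token] = (username, time.time() + self._ttl)
        return token

    def validate(self, token):
        user, expiry = self._tokens.get(token, (None, 0))
        if time.time() >= expiry:
            raise PermissionError("invalid or expired token")
        return user

idp = TinyIdentity({"demo": "s3cret"})
tok = idp.authenticate("demo", "s3cret")
print(idp.validate(tok))  # demo
```

The point of the indirection is that downstream services only ever see short-lived tokens, never the user's long-term credentials.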

Glance (Image)
The Glance (image) project offers a service through which users can upload and discover data
assets that are meant to be used with other services. Currently this includes image and
metadata definitions.

Images
The Glance image services include discovering, registering, and retrieving VM (virtual
machine) images. Glance offers a RESTful API that allows querying of VM image metadata as
well as retrieval of the actual image. VM images made available through Glance

can be stored in a variety of locations, from simple filesystems to object-storage systems
such as the OpenStack Swift project.

Metadata Definitions
Glance hosts a metadefs catalog, which provides the OpenStack community with a way to define
valid metadata key names and values that can be applied to OpenStack resources.

Swift (Object Storage)


Swift is a distributed, eventually consistent object/blob store. The OpenStack Object
Storage project, called Swift, offers cloud storage software that lets you store and
retrieve large amounts of data with a simple API. It is built for scale and optimized for
durability, availability, and concurrency across the entire data set. Object storage is
ideal for storing unstructured data that can grow without bound.
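Swift scales out by mapping object names onto a ring of partitions owned by storage nodes. The core idea can be illustrated with a bare-bones consistent-hashing sketch; Swift's real ring adds replicas, zones, and rebalancing, and the node names here are invented.

```python
import hashlib
from bisect import bisect

def _hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    """Minimal consistent-hash ring: each virtual node owns the arc of hash
    space up to its point, so keys spread evenly and adding a node moves
    only a fraction of them."""
    def __init__(self, nodes, vnodes=64):
        self._ring = sorted(
            (_hash(f"{node}-{i}"), node) for node in nodes for i in range(vnodes)
        )
        self._points = [p for p, _ in self._ring]

    def node_for(self, key: str) -> str:
        idx = bisect(self._points, _hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["storage-1", "storage-2", "storage-3"])
# The same object name always maps to the same storage node.
assert ring.node_for("photos/cat.jpg") == ring.node_for("photos/cat.jpg")
```

This lookup is what lets a proxy route any GET or PUT to the right node without consulting a central index.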

In August 2009, Rackspace began developing the forerunner of OpenStack Object Storage as a
complete replacement for its Cloud Files product. The initial development team consisted of
nine developers. SwiftStack, an object storage company, is currently the leading developer
of OpenStack Swift, with significant contributions from IBM, HP, NTT, Red Hat, Intel, and
many others.

Horizon (Dashboard)
Horizon is the canonical implementation of OpenStack's Dashboard, which provides a web-based
UI to OpenStack services such as Keystone, Swift, and Nova. The Dashboard ships with a few
central dashboards, including a "User Dashboard", a "System Dashboard", and a "Settings
Dashboard", which cover core OpenStack support. The Horizon application also ships with a
set of API abstractions for the core OpenStack projects, providing developers with a
consistent, stable set of reusable methods. With these abstractions, developers working on
Horizon do not need to be intimately familiar with the APIs of every OpenStack project.

Heat (Orchestration)
Heat is a service for orchestrating multiple composite cloud applications using templates,
through both a CloudFormation-compatible Query API and an OpenStack-native REST API.

Mistral (Workflow)
Mistral is the OpenStack service that manages workflows. A user typically writes a workflow
in a YAML-based workflow language and uploads the workflow definition to Mistral via its
REST API. The user can then start the workflow manually through the same API, or configure a
trigger that starts the workflow on certain events.
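A minimal workflow definition in Mistral's YAML-based language might look like the following. The workflow and task names are invented for illustration; `std.echo` and `std.http` are standard Mistral actions.

```yaml
---
version: '2.0'

hello_workflow:
  description: Toy two-step workflow (illustrative names only).
  tasks:
    say_hello:
      action: std.echo output="Hello from Mistral"
      on-success:
        - notify
    notify:
      action: std.http url="https://example.com/hook" method="POST"
```

Once uploaded, the workflow can be started via the same REST API, and `on-success` chains tasks so that `notify` runs only after `say_hello` completes.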

Ceilometer (Telemetry)
OpenStack Ceilometer (Telemetry) provides a single point of contact for billing systems,
supplying all the counters they need to establish customer billing across every current and
future OpenStack component. Counter delivery is traceable and auditable, the counters are
easily extensible to support new projects, and the agents performing data collection are
independent of the overall system.

Trove (Database)

Trove is a database-as-a-service that provisions both relational and non-relational database
engines.

Sahara (Elastic map-reduce)


Sahara is a component for easily and rapidly provisioning Hadoop clusters. Users specify
several parameters such as the Hadoop version, cluster topology type, and node flavor
details (disk space, CPU, and RAM settings). Once a user provides these parameters, Sahara
deploys the cluster within minutes. Sahara also provides a means of scaling a pre-existing
Hadoop cluster by adding and removing worker nodes on demand.

Ironic (Bare metal)


Ironic is the OpenStack project that provisions bare-metal machines rather than virtual
machines. Initially forked from the Nova bare-metal driver, it has evolved into a separate
project. It is best thought of as a bare-metal hypervisor API and a set of plugins that
interact with bare-metal hypervisors. It uses PXE and IPMI in concert to provision machines
and turn them on and off, although Ironic also supports, and can be extended with,
vendor-specific plugins that implement additional functionality.

Zaqar (Messaging)
Zaqar is a multi-tenant cloud messaging service for web developers. It offers a fully
RESTful API that developers can use to send messages between the components of their SaaS
and mobile applications using a variety of communication patterns. The API is built on a
scalable messaging engine designed with security in mind. Other OpenStack components can
integrate with Zaqar to surface events to end users and to communicate with guest agents
that run in the over-cloud layer.

Designate (DNS)
Designate is a multi-tenant REST API for managing DNS, providing DNS as a Service. The
component is compatible with various backend technologies such as BIND and PowerDNS. It does
not provide a DNS service itself; rather, its goal is to interface with existing DNS servers
to manage DNS zones on a per-tenant basis.

Manila (Shared file system)


OpenStack Manila (Shared File System) provides an open API for managing shares in a
vendor-agnostic framework. Standard primitives include the ability to create and delete
shares and to grant or deny access to a share. It can be used standalone or in a variety of
network environments. Storage appliances from Hitachi, INFINIDAT, Quobyte, Oracle, IBM, HP,
NetApp, and EMC are supported, as are filesystem technologies such as Ceph and Red Hat
GlusterFS.

Searchlight (Search)
Searchlight provides advanced and consistent search capabilities across various OpenStack
cloud services. It accomplishes this by offloading user search queries from other OpenStack
API servers and indexing their data in ElasticSearch. Searchlight is being integrated into
Horizon and also offers a command-line interface.

Magnum (Container orchestration)

Magnum is an OpenStack API service developed by the OpenStack Containers Team that makes
container orchestration engines such as Docker Swarm, Kubernetes, and Apache Mesos available
as first-class resources in OpenStack. Magnum uses Heat to orchestrate an OS image that
contains Docker and Kubernetes, and runs that image on either virtual machines or bare metal
in a cluster configuration.

Barbican (Key manager)


Barbican is a REST API designed for the secure storage, provisioning, and management of
secrets. It is aimed at being useful for all environments, including large ephemeral clouds.

Vitrage (Root Cause Analysis)


Vitrage is the OpenStack Root Cause Analysis (RCA) service for organizing, analyzing, and
expanding OpenStack alarms and events, yielding insights into the root cause of problems and
deducing their existence before they are directly detected.

Aodh (Rule-based alarm actions)


This alarming service enables the triggering of actions based on defined rules against
metric or event data collected by Ceilometer or Gnocchi.
