Cloud Computing Unit 1


Departmental Elective – CS802 (B) Cloud Computing Theory:

UNIT - 1

TOPIC:- 1. Introduction to Service Oriented Architecture, Web Services, Basic Web Services Architecture, Introduction to SOAP, WSDL and UDDI; RESTful services: Definition, Characteristics, Components, Types; Software as a Service, Platform as a Service, Organizational scenarios of clouds, Administering & Monitoring cloud services, benefits and limitations, Study of a Hypervisor.

Introduction to Service Oriented Architecture


Service-Oriented Architecture (SOA) is a stage in the evolution of application development and integration. It defines a way to make software components reusable through their service interfaces.
Formally, SOA is an architectural approach in which applications make use of services available in the network. In this architecture, services are combined to form applications through network calls over the internet. SOA uses common communication standards to speed up and streamline service integration in applications. Each service in SOA is a complete business function in itself. The services are published in such a way that it is easy for developers to assemble their apps using those services. Note that SOA is different from microservice architecture.
o SOA allows users to combine a large number of facilities from existing services to form applications.
o SOA encompasses a set of design principles that structure system development and provide means for integrating components into a coherent and decentralized system.
o SOA-based computing packages functionalities into a set of interoperable services, which can be integrated into different software systems belonging to separate business domains.
The different characteristics of SOA are as follows:
o Provides interoperability between the services.
o Provides methods for service encapsulation, service discovery, service composition, service reusability and service integration.
o Facilitates QoS (Quality of Service) through service contracts based on Service Level Agreements (SLAs).
o Provides loosely coupled services.
o Provides location transparency with better scalability and availability.
o Ease of maintenance with reduced cost of application development and deployment.
There are two major roles within Service-oriented Architecture:
1. Service provider: The service provider is the maintainer of the service and the
organization that makes available one or more services for others to use. To advertise
services, the provider can publish them in a registry, together with a service contract that
specifies the nature of the service, how to use it, the requirements for the service, and the
fees charged.
2. Service consumer: The service consumer can locate the service metadata in the registry
and develop the required client components to bind and use the service.

Services might aggregate information and data retrieved from other services, or create workflows of services to satisfy the request of a given service consumer. This practice is known as service orchestration. Another important interaction pattern is service choreography, which is the coordinated interaction of services without a single point of control.
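As a minimal illustration of orchestration, the Python sketch below shows one coordinating function acting as the single point of control, composing two independent services into a workflow. The service URLs and response fields are hypothetical, not part of any real system:

    import requests

    def get_order_summary(order_id):
        # The orchestrator is the single point of control: it calls two
        # independent services and aggregates their results into one answer.
        order = requests.get(
            f"http://orders.example.com/orders/{order_id}").json()
        customer = requests.get(
            f"http://crm.example.com/customers/{order['customer_id']}").json()
        return {"order_id": order_id,
                "customer_name": customer["name"],
                "total": order["total"]}

In choreography, by contrast, each service would react to events from the others without any such central coordinator.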
Components of SOA:

Guiding Principles of SOA:


1. Standardized service contract: Specified through one or more service description
documents.
2. Loose coupling: Services are designed as self-contained components and maintain
relationships that minimize dependencies on other services.
3. Abstraction: A service is completely defined by service contracts and description
documents. They hide their logic, which is encapsulated within their implementation.
4. Reusability: Designed as components, services can be reused more effectively, thus
reducing development time and the associated costs.
5. Autonomy: Services have control over the logic they encapsulate and, from a service
consumer point of view, there is no need to know about their implementation.
6. Discoverability: Services are defined by description documents that constitute
supplemental metadata through which they can be effectively discovered. Service
discovery provides an effective means for utilizing third-party resources.
7. Composability: Using services as building blocks, sophisticated and complex operations
can be implemented. Service orchestration and choreography provide a solid support for
composing services and achieving business goals.
Advantages of SOA:
o Service reusability: In SOA, applications are made from existing services. Thus, services can be reused to make many applications.
o Easy maintenance: As services are independent of each other, they can be updated and modified easily without affecting other services.
o Platform independence: SOA allows building a complex application by combining services picked from different sources, independent of the platform.
o Availability: SOA facilities are easily available to anyone on request.
o Reliability: SOA applications are more reliable because it is easier to debug small services than huge code bases.
o Scalability: Services can run on different servers within an environment; this increases scalability.
Disadvantages of SOA:
o High overhead: A validation of input parameters is performed whenever services interact, which decreases performance because it increases load and response time.
o High investment: A huge initial investment is required for SOA.
o Complex service management: When services interact, they exchange messages to perform tasks, and the number of messages may run into millions. Handling such a large number of messages becomes a cumbersome task.
Practical applications of SOA: SOA is used in many ways around us whether it is
mentioned or not.
1. SOA infrastructure is used by many armies and air forces to deploy situational awareness
systems.
2. SOA is used to improve healthcare delivery.
3. Nowadays many apps and games use built-in device functions. For example, an app might
need GPS, so it uses the built-in GPS functions of the device. This is SOA in mobile
solutions.
4. SOA helps museums maintain a virtualized storage pool for their information and content.

Web Services in Cloud Computing


The Internet is the worldwide connectivity of hundreds of thousands of computers belonging to many
different networks.
A web service is a standardized method for propagating messages between client and server
applications on the World Wide Web. A web service is a software module that aims to accomplish a
specific set of tasks. Web services can be found and implemented over a network in cloud computing.
The web service would be able to provide the functionality to the client that invoked the web service.
A web service is a set of open protocols and standards that allow data exchange between different
applications or systems. Web services can be used by software programs written in different
programming languages and on different platforms to exchange data through computer networks such
as the Internet. In the same way, web services can be used for inter-process communication on a single computer.

Any software, application, or cloud technology that uses a standardized web protocol (HTTP or HTTPS) to connect, interoperate, and exchange data messages over the Internet, usually in XML (Extensible Markup Language), is considered a web service.
Web services allow programs developed in different languages to be connected between a client and a
server by exchanging data over a web service. A client invokes a web service by submitting an XML
request, to which the service responds with an XML response.

Web service functions:

o It is possible to access a web service via the Internet or an intranet network.

o It uses a standardized XML messaging protocol.

o It is independent of any operating system or programming language.

o It is self-describing via the XML standard.

o It is discoverable via a simple location method.

Web Service Components


XML and HTTP form the most fundamental web service platform. All typical web services use the following components:

1. SOAP (Simple Object Access Protocol)


SOAP stands for "Simple Object Access Protocol". It is a transport-independent messaging protocol. SOAP is built on sending XML data in the form of SOAP messages, and an XML document is attached to each message. Only the structure of the XML document, not the content, follows a pattern. The great thing about web services and SOAP is that everything is sent through HTTP, the standard web protocol.
Every SOAP document requires a root element, known as the Envelope element; in an XML document, the root element is the first element.
The envelope is divided into two parts. The header comes first, followed by the body. The header contains routing data, the information that directs the XML document to the client it should be sent to. The real message will be in the body.
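As an illustration, here is a minimal Python sketch that builds such an envelope and posts it over HTTP; the endpoint URL, operation name, and namespace are hypothetical examples, not a real service:

    import requests

    # A minimal SOAP envelope: the root <Envelope> holds an optional <Header>
    # (routing data) and a mandatory <Body> (the real message).
    soap_envelope = """<?xml version="1.0" encoding="UTF-8"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Header/>
      <soap:Body>
        <GetPrice xmlns="http://example.com/prices">
          <ItemName>Apples</ItemName>
        </GetPrice>
      </soap:Body>
    </soap:Envelope>"""

    # SOAP messages travel over plain HTTP POST, the standard web protocol.
    response = requests.post(
        "http://example.com/price-service",          # hypothetical endpoint
        data=soap_envelope.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8"},
    )
    print(response.text)   # the reply is a SOAP envelope as well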

2. UDDI (Universal Description, Discovery, and Integration)


UDDI is a standard for specifying, publishing and searching online service providers. It provides a
specification that helps in hosting the data through web services. UDDI provides a repository where
WSDL files can be hosted so that a client application can search the WSDL file to learn about the
various actions provided by the web service. As a result, the client application will have full access to
UDDI, which acts as the database for all WSDL files.
The UDDI registry keeps the information needed for online services, much as a telephone directory contains the name, address, and phone number of a person, so that client applications can find where a service is located.

3. WSDL (Web Services Description Language)


The client invoking the web service must know where the web service is located; if a web service cannot be found, it cannot be used. The client application must also understand what the web service does in order to invoke the correct operation. WSDL, or Web Services Description Language, is used to accomplish this. A WSDL file is another XML-based file that tells a client application what a web service does. Using the WSDL document, the client application understands where the web service is located and how to access it.
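In practice, a client can generate its calls directly from the WSDL. The sketch below uses the third-party Python library zeep (an assumption on my part, not something named in this text) against a hypothetical WSDL URL and operation:

    from zeep import Client

    # zeep downloads and parses the WSDL, then exposes each described
    # operation as an ordinary Python method on client.service.
    client = Client("http://example.com/price-service?wsdl")   # hypothetical
    result = client.service.GetPrice(ItemName="Apples")        # hypothetical op
    print(result)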

How does web service work?


The diagram shows a simplified version of how a web service functions. The client uses requests to send a sequence of web service calls to the server hosting the actual web service.

These requests are made through remote procedure calls. Remote Procedure Calls (RPC) are calls to the methods hosted by the respective web service. Example: Flipkart provides a web service that displays the prices of items offered on Flipkart.com. The front end or presentation layer can be written in .NET or Java, but either language can communicate with the web service.
The data exchanged between the client and the server is XML, the most important part of web service design. XML (Extensible Markup Language) is a simple, intermediate language understood by various programming languages; it is a counterpart of HTML.
As a result, when programs communicate with each other, they use XML. It forms a common platform for applications written in different programming languages to communicate with each other.
Web services employ SOAP (Simple Object Access Protocol) to transmit XML data between
applications. The data is sent using standard HTTP. A SOAP message is data sent from a web service
to an application. An XML document is all that is contained in a SOAP message. The client
application that calls the web service can be built in any programming language as the content is
written in XML.
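Python's standard library includes a simple XML-based RPC mechanism that illustrates this pattern end to end; the server URL and method name below are hypothetical:

    import xmlrpc.client

    # The proxy translates attribute access into XML requests sent over HTTP,
    # so calling a remote method looks like calling a local function.
    proxy = xmlrpc.client.ServerProxy("http://example.com/rpc")
    price = proxy.get_price("Apples")   # request and reply are both XML
    print(price)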

Features of Web Service


Web services have the following characteristics:

(a) XML-based: A web service's information representation and record transport layers employ XML. There is no need for networking, operating system, or platform bindings when using XML. At their core level, web service-based applications are highly interoperable.

(b) Loosely Coupled: A consumer of a web service is not tied directly to that service provider; the interface of a web service provider may change over time without compromising the client's ability to interact with the service provider. A tightly coupled system, by contrast, means that the client and server logic are inextricably linked, so that if one interface changes, the other must be updated.
A loosely coupled architecture makes software systems more manageable and easier to integrate between different systems.

(c) Ability to be synchronous or asynchronous: Synchronicity refers to the binding of the client to the execution of the function. In synchronous invocation, the client is blocked and must wait for the service to complete its operation before continuing. Asynchronous operations allow a client to initiate a task and continue with other tasks; asynchronous clients get their results later, whereas synchronous clients get their result immediately when the service completes. Asynchronous capability is required to enable loosely coupled systems.

(d) Coarse-grained: Object-oriented systems, such as Java, make their services available through individual methods, and an individual method is too fine-grained an operation to be useful at the corporate level. Building a Java application from the ground up requires developing several fine-grained methods, which are then combined into a coarse-grained service that is consumed by the client or another service.
Corporations, and the interfaces they expose, should be coarse-grained. Web services technology provides an easy way to define coarse-grained services that have access to substantial business logic.

(e) Supports remote procedure calls: Consumers can use XML-based protocols to call procedures, functions, and methods on remote objects through web services. A web service must support the input and output framework of the remote system.
Over the years, enterprise-wide component development with Enterprise JavaBeans (EJBs) and .NET components has become more prevalent in architectural and enterprise deployments, and several RPC techniques are used to both distribute and access them.
A web service can support RPC by providing services of its own, similar to a traditional component, or by translating incoming invocations into an EJB or .NET component invocation.
(f) Supports document exchange: One of the most attractive features of XML is its generic way of representing not only data but also complex documents, and web services support the transparent exchange of such documents.

Architecture of Web Services


The Web Services architecture describes how to instantiate the elements and implement the operations
in an interoperable manner.

The architecture of web service interacts among three roles: service provider, service
requester, and service registry. The interaction involves the three operations: publish, find, and bind.
These operations and roles act upon the web services artifacts. The web service artifacts are the web
service software module and its description.
The service provider hosts a network-accessible module (the web service). It defines a service description for the web service and publishes it to a service requestor or service registry. The service requestor uses a find operation to retrieve the service description, locally or from the service registry, and uses the service description to bind with the service provider and invoke the web service implementation.
The following figure illustrates the operations, roles, and their interaction.

Roles in a Web Service Architecture


There are three roles in web service architecture:

o Service Provider

o Service Requestor

o Service Registry

Service Provider
From an architectural perspective, it is the platform that hosts the services.

Service Requestor
Service requestor is the application that is looking for and invoking or initiating an interaction with a
service. The browser plays the requester role, driven by a consumer or a program without a user
interface.

Service Registry
The service registry is a searchable registry of service descriptions where service providers publish their services. Service requestors find services and obtain binding information for them during development (static binding) or at runtime (dynamic binding).
Operations in a Web Service Architecture
Three operations take place among these roles:

o Publication of service descriptions (Publish)

o Finding of service descriptions (Find)

o Invoking of service based on service descriptions (Bind)

Publish: In the publish operation, a service description must be published so that a service requester
can find the service.

Find: In the find operation, the service requestor retrieves the service description directly. The find operation can be involved in two different lifecycle phases for the service requestor:

o At design time, to retrieve the service's interface description for program development.

o At runtime, to retrieve the service's binding and location description for invocation.

Bind: In the bind operation, the service requestor invokes or initiates an interaction with the service at
runtime using the binding details in the service description to locate, contact, and invoke the service.
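A toy in-memory model of these three operations, sketched in Python, may help fix the idea; real systems use a UDDI registry and WSDL descriptions rather than a plain dictionary:

    # Toy model of the publish / find / bind operations (not a real registry).
    registry = {}

    def publish(name, description, endpoint):
        """The service provider publishes a service description."""
        registry[name] = {"description": description, "endpoint": endpoint}

    def find(name):
        """The service requestor retrieves a service description."""
        return registry.get(name)

    def bind(name):
        """The requestor uses the binding details to locate the service."""
        entry = find(name)
        if entry is None:
            raise LookupError(f"no service published under {name!r}")
        return entry["endpoint"]   # in practice: connect and invoke here

    publish("price-service", "Returns item prices",
            "http://example.com/price-service")
    print(bind("price-service"))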

Artifacts of the web service


There are two artifacts of web services:

o Service

o Service Description

Service: A service is an interface described by a service description; its implementation is the service. A service is a software module deployed on network-accessible platforms provided by the service provider. It interacts with a service requestor. Sometimes it also functions as a requestor, using other web services in its implementation.

Service Description: The service description comprises the details of the interface and implementation of the service, including its data types, operations, binding information, and network location. It may also include other metadata that enables discovery and utilization by service requestors. It can be published to a service requestor or to a service registry.

Web Service Implementation Lifecycle


A web service implementation lifecycle refers to the phases for developing web services, from requirements through to deployment. An implementation lifecycle includes the following phases:

o Requirements Phase

o Analysis Phase

o Design Phase

o Coding Phase

o Test Phase
o Deployment Phase

Requirements Phase

The objective of the requirements phase is to understand the business requirements and translate them into web service requirements. The requirements analyst should perform requirements elicitation (the practice of researching and discovering the requirements of the system from users, customers, and other stakeholders). The analyst should interpret, consolidate, and communicate these requirements to the development team. The requirements should be grouped in a centralized repository where they can be viewed, prioritized, and mined for iterative features.
Analysis Phase

The purpose of the analysis phase is to refine and translate the web service requirements into conceptual models that the technical development team can understand. This phase also defines the high-level structure and identifies the web service interface contracts.
Design Phase

In this phase, the detailed design of the web services is done. The designers define the web service interface contracts that were identified in the analysis phase. A defined web service interface contract identifies the elements and the corresponding data types, as well as the mode of interaction between the web service and the client.
Coding Phase

The coding and debugging phase is quite similar to that of other software component-based development. The main difference lies in the creation of additional web service interface wrappers, the generation of WSDL, and client stubs.
Test Phase

In this phase, the tester performs interoperability testing between the platform and the client's program. Testing is also conducted to ensure that the web service can bear the maximum load and stress. Other tasks, like profiling the web service application and inspecting SOAP messages, should also be performed in the test phase.
Deployment Phase
The purpose of the deployment phase is to ensure that the web service is properly deployed in the distributed system. It executes after the testing phase. The primary task of the deployer is to ensure that the web service has been properly configured and managed. Other optional tasks, like specifying and registering the web service with a UDDI registry, are also done in this phase.

Web Service Stack or Web Service Protocol Stack


To perform the three operations (publish, find, and bind) in an interoperable manner, there must be a web service stack. The web service stack embraces a standard at each level.

In the above figure, the topmost layers build upon the capabilities provided by the lower layers. The three vertical towers represent requirements that apply at every level of the stack, and the text on the right lists technologies that apply at each layer. A web service protocol stack typically comprises four protocols:

o Transport Protocol

o Messaging Protocol

o Description Protocol

o Discovery Protocol

(Service) Transport Protocol: The network layer is the foundation of the web service stack. It is responsible for transporting messages between network applications. HTTP is the network protocol for internet-available web services. Other network protocols, such as SMTP, FTP, and BEEP (Blocks Extensible Exchange Protocol), are also supported.

(XML) Messaging Protocol: It is responsible for encoding messages in a common XML format so that they can be understood at either end of a network connection. SOAP is the chosen XML messaging protocol because it supports all three operations: publish, find, and bind.

(Service) Description Protocol: It is used for describing the public interface to a specific web service.
WSDL is the standard for XML-based service description. WSDL describes the interface and
mechanics of service interaction. The description is necessary to specify the business context, quality
of service, and service-to-service relationship.
(Service) Discovery Protocol: It centralizes services into a common registry so that network web services can publish their location and description, making it easy to discover which services are available on the network.
The first three layers of the stack are required to provide or use any web service. The simplest stack consists of HTTP for the network layer, SOAP for the XML-based messaging layer, and WSDL for the service description layer. These three layers provide interoperability and enable web services to leverage the existing internet infrastructure, creating a low cost of entry to a global environment.
While the bottom three layers of the stack identify technologies for compliance and interoperability, the next two layers, service publication and service discovery, can be implemented with a range of solutions.

What is REST?
REpresentational State Transfer (REST) is a software architectural style that defines constraints for creating web services. Web services that follow the REST architectural style are called RESTful web services. REST enables interaction between computer systems on the web. The REST architectural style describes six constraints.

1. Uniform Interface
The uniform interface defines the interface between client and server. It simplifies and decouples the architecture, enabling each part to evolve independently. The uniform interface has four guiding principles:

o Resource-based: Individual resources are identified using URIs as resource identifiers. The resources themselves are distinct from the representations returned to the client. For example, the server does not send its database but rather a representation of some database records, expressed as HTML, XML or JSON, depending on the request and the implementation details.

o Manipulation of resources through representations: When a client holds a representation of a resource, including any associated metadata, it has enough information to modify or delete the resource on the server.

o Self-Descriptive Message: Each message contains enough information to describe how the
message is processed. For example, the parser can be specified by the Internet media type
(known as the MIME type).

o Hypermedia as the Engine of Application State (HATEOAS): Clients deliver state via query-string parameters, body content, request headers, and the requested URI. Services deliver state to clients via response codes, response headers, and body content. This mechanism is called hypermedia (hyperlinks within hypertext).

o In addition to the above, HATEOAS also means that, where necessary, links are contained in the returned body (or headers) to supply the URIs for retrieving related objects.

o The uniform interface that any REST service provides is fundamental to its design.
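The following Python sketch shows the resource-based, representation-driven style these principles describe; the API base URL and resource shape are hypothetical:

    import requests

    BASE = "https://api.example.com"   # hypothetical RESTful service

    # The URI identifies the resource; the JSON body is only a representation.
    resp = requests.get(f"{BASE}/customers/42",
                        headers={"Accept": "application/json"})
    customer = resp.json()

    # Manipulation through representations: modify the representation
    # and send it back to update the underlying resource.
    customer["email"] = "new@example.com"
    requests.put(f"{BASE}/customers/42", json=customer)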

2. Client-server
The client-server constraint separates the client from the server. For example, separating user interface concerns from data storage concerns improves the portability of client code across platforms and, because servers are not burdened with the user interface or user state, keeps servers simpler and more scalable. Servers and clients can be replaced and developed independently as long as the interface between them is unchanged.

3. Stateless
Stateless means the state of the service does not persist between subsequent requests and responses; the request itself contains all the state required to handle it. That state may be carried in a query-string parameter, entity body, or header, or as part of the URI. The URI identifies the resource, and the body contains the state (or state change) of that resource. After the server processes the request, the relevant pieces of state are sent back to the client through headers, status codes, and the response body.

Most of us in the industry are accustomed to programming within a container, which gives us the concept of a "session" that maintains state across multiple HTTP requests. In REST, the client must include all information needed for the server to fulfil the request, resending state as necessary across multiple requests. Statelessness enables greater scalability because the server does not maintain, update, or communicate any session state.
The resource state is the data that defines a resource representation, for example, the data stored in a database. The application state, by contrast, is data that may vary according to the client and the request. The resource state is constant for every client who requests it.
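A short sketch of what statelessness looks like from the client side, assuming a hypothetical token-protected API: every request carries its own credentials and paging state, so the server keeps no session between calls and any server instance can answer:

    import requests

    TOKEN = "hypothetical-api-token"
    headers = {"Authorization": f"Bearer {TOKEN}"}

    # Each request is self-contained: the token and the page number travel
    # with every call, so no server-side session has to be maintained.
    page1 = requests.get("https://api.example.com/orders?page=1",
                         headers=headers).json()
    page2 = requests.get("https://api.example.com/orders?page=2",
                         headers=headers).json()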

4. Layered system
A client cannot ordinarily tell whether it is connected directly to the end server or to an intermediary along the way. Intermediate servers improve system scalability by enabling load balancing and providing shared caches. Layers can also enforce security policies.

5. Cacheable
On the World Wide Web, clients can cache responses. Responses must therefore, implicitly or explicitly, define themselves as cacheable or non-cacheable, to prevent clients from reusing stale or inappropriate data in further requests. Well-managed caching eliminates some client-server interactions, improving scalability and performance.

6. Code on Demand (optional)


The server can temporarily extend or customize the functionality of a client by transferring executable logic to it. Examples include compiled components such as Java applets and client-side scripts.
Compliance with these constraints enables a distributed hypermedia system with desirable properties such as performance, scalability, modifiability, visibility, portability, and reliability.
Software as a Service | SaaS
SaaS is also known as "On-Demand Software". It is a software distribution model in which services are hosted by a cloud service provider and made available to end-users over the internet, so end-users do not need to install any software on their devices to access these services. SaaS providers offer the following services:

Business Services - SaaS providers offer various business services to help start up a business. SaaS business services include ERP (Enterprise Resource Planning), CRM (Customer Relationship Management), billing, and sales.

Document Management - SaaS document management is a software application offered by a third party (SaaS providers) to create, manage, and track electronic documents.

Example: Slack, Samepage, Box, and Zoho Forms.

Social Networks - As we all know, social networking sites are used by the general public, so social networking service providers use SaaS for their convenience and to handle the general public's information.

Mail Services - To handle the unpredictable number of users and the load on e-mail services, many e-mail providers offer their services using SaaS.

Advantages of SaaS cloud computing layer


1) SaaS is easy to buy
SaaS pricing is based on a monthly or annual subscription fee, so it allows organizations to access business functionality at a low cost, less than that of licensed applications.
Unlike traditional software, which is sold as a license with an up-front cost (and often an optional ongoing support fee), SaaS providers generally price their applications using a subscription fee, most commonly a monthly or annual fee.

2) One to Many
SaaS services are offered as a one-to-many model, meaning a single instance of the application is shared by multiple users.

3) Less hardware required for SaaS


The software is hosted remotely, so organizations do not need to invest in additional hardware.

4) Low maintenance required for SaaS

Software as a service removes the need for installation, set-up, and daily maintenance for organizations. The initial set-up cost for SaaS is typically lower than for enterprise software. SaaS vendors price their applications based on usage parameters, such as the number of users of the application. SaaS also makes monitoring easy and provides automatic updates.

5) No special software or hardware versions required

All users have the same version of the software and typically access it through the web browser. SaaS reduces IT support costs by outsourcing hardware and software maintenance and support to the SaaS provider.

6) Multidevice support
SaaS services can be accessed from any device such as desktops, laptops, tablets, phones, and thin
clients.

7) API Integration
SaaS services easily integrate with other software or services through standard APIs.

8) No client-side installation
SaaS services are accessed directly from the service provider over an internet connection, so no client-side software installation is required.

Disadvantages of SaaS cloud computing layer


1) Security
Data is stored in the cloud, so security may be an issue for some users. However, cloud computing is not more secure than in-house deployment.

2) Latency issue
Since data and applications are stored in the cloud at a variable distance from the end-user, there is a possibility of greater latency when interacting with the application compared to local deployment. Therefore, the SaaS model is not suitable for applications that demand response times in milliseconds.

3) Total Dependency on Internet


Without an internet connection, most SaaS applications are not usable.

4) Switching between SaaS vendors is difficult


Switching SaaS vendors involves the difficult and slow task of transferring very large data files over the internet and then converting and importing them into the new SaaS application.

Popular SaaS Providers


Platform as a Service | PaaS
Platform as a Service (PaaS) provides a runtime environment. It allows programmers to easily create, test, run, and deploy web applications. You can purchase these applications from a cloud service provider on a pay-per-use basis and access them over an Internet connection. In PaaS, back-end scalability is managed by the cloud service provider, so end-users do not need to worry about managing the infrastructure.
PaaS includes infrastructure (servers, storage, and networking) and platform (middleware,
development tools, database management systems, business intelligence, and more) to support the web
application life cycle.

Example: Google App Engine, Force.com, Joyent, Azure.


PaaS providers supply programming languages, application frameworks, databases, and other tools:

1. Programming languages
PaaS providers provide various programming languages for the developers to develop the applications.
Some popular programming languages provided by PaaS providers are Java, PHP, Ruby, Perl, and
Go.

2. Application frameworks
PaaS providers provide application frameworks to simplify application development. Some popular application frameworks provided by PaaS providers are Node.js, Drupal, Joomla, WordPress, Spring, Play, Rack, and Zend.

3. Databases
PaaS providers provide various databases such as ClearDB, PostgreSQL, MongoDB, and Redis to
communicate with the applications.

4. Other tools
PaaS providers provide various other tools that are required to develop, test, and deploy the
applications.

Advantages of PaaS
There are the following advantages of PaaS -

1) Simplified Development
PaaS allows developers to focus on development and innovation without worrying about infrastructure
management.

2) Lower risk
No need for up-front investment in hardware and software. Developers only need a PC and an internet
connection to start building applications.

3) Prebuilt business functionality


Some PaaS vendors also provide predefined business functionality, so users can avoid building everything from scratch and can start their projects directly.

4) Instant community
PaaS vendors frequently provide online communities where developers can get ideas, share experiences, and seek advice from others.

5) Scalability
Applications deployed can scale from one to thousands of users without any changes to the
applications.

Disadvantages of PaaS cloud computing layer


1) Vendor lock-in
One has to write the applications according to the platform provided by the PaaS vendor, so the
migration of an application to another PaaS vendor would be a problem.

2) Data Privacy
Corporate data, whether critical or not, is private; if it is not located within the walls of the company, there can be a risk in terms of data privacy.

3) Integration with the rest of the system's applications

It may happen that some applications are local and some are in the cloud, so there will be increased complexity when we want to use data in the cloud together with local data.

Popular PaaS Providers

Monitoring as a Service (MaaS)

Monitoring as a Service (MaaS) is the service concerned with monitoring the status and proper functioning of applications and infrastructure. It combines both cloud computing and on-premise IT infrastructure. It is mainly concerned with the online state monitoring of our applications, storage instances, network traffic, etc. This is very efficient and important because any malfunction can be found easily and the issues are reported to the user as notifications. Before Monitoring as a Service (MaaS) came into existence, companies had to rely on security engineers and penetration testers for this kind of governing activity, but after the advent of the cloud, these activities can be automated.

In this article, we are going to explore some of the standard ‘Monitoring as a Service’ tools
with their detailed specifications. So, let’s get started:

1. Amazon CloudWatch
Amazon CloudWatch allows us to monitor the complete tech stack of our applications and infrastructure. It notifies us with alarms, logs, etc., and helps us take necessary actions, thereby reducing the Mean Time to Resolution (MTTR). It also monitors EC2 instances, DynamoDB tables, etc. It is best suited for applications hosted on AWS; logs, alerts, and troubleshooting for these applications can be handled easily using Amazon CloudWatch. Amazon CloudWatch does not charge for the first 50 metrics on a single dashboard; if the metric limit is exceeded, the user is charged an additional amount. Amazon CloudWatch can be accessed using the Command Line Interface, APIs, and the AWS Console.
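For instance, CloudWatch can be driven programmatically with the official boto3 SDK; the namespace and metric below are made-up examples, and valid AWS credentials are assumed:

    import boto3

    cloudwatch = boto3.client("cloudwatch")   # uses your AWS credentials

    # Publish a custom metric data point; CloudWatch can then graph it on
    # a dashboard or attach an alarm to it.
    cloudwatch.put_metric_data(
        Namespace="MyApp",                    # hypothetical namespace
        MetricData=[{
            "MetricName": "PageLoadTime",
            "Value": 1.23,
            "Unit": "Seconds",
        }],
    )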

2. Azure Monitor
It collects, monitors, and takes necessary actions on data from devices and instances in Azure and on-premises environments. It is very efficient, identifying and resolving problems in seconds. It collects data from various sources and stores it as logs; this data can later be used for analysis, security checks, notifications, etc. Its main advantage is that it not only reports an issue to the user but also suggests a solution to resolve it.

3. AppDynamics
AppDynamics is another cloud monitoring tool used for monitoring every aspect of an application. It can monitor business transactions, transaction snapshots, tiers, nodes, etc. It also monitors the full technology stack of the application, from the database to the server. The architecture of AppDynamics is simple and is controlled by a central management server known as the controller. AppDynamics was founded in 2008 by a former Wily Technology employee and has since been acquired by Cisco. AppDynamics ranks 9th on the Forbes Cloud 100 list.

4. CA Unified Infrastructure Management


It was developed and released by CA Technologies. It was previously called CA Nimsoft Monitor, and later releases shipped with enhanced alerting and monitoring capabilities. It provides 360-degree visibility into the application by monitoring every aspect of the infrastructure. It helps us manage both modern cloud and hybrid IT infrastructure efficiently. It allows rapid setup and configuration and provides a wide range of capabilities in a single tool.

5. SolarWinds
SolarWinds was founded by Donald Yonce and David Yonce in Tulsa in 1999. It is customizable and intelligent to use. Although it is not as visually attractive as other tools, it gets the job done without any problems. It can support up to 1200 applications and systems. It allows us to monitor components through PowerShell, REST APIs, etc. It also has configurations for Windows and Linux, which leads to faster performance.

6. ManageEngine
ManageEngine is a division of Zoho Corporation. It is an infrastructure monitoring tool with real-time monitoring of networks and customizable dashboards for users. It has more than 70 metrics for VMware and more than 40 metrics for Hyper-V, along with built-in fault monitoring and alerting. One drawback is that it has no hosted version. It manages computers across various domains and allows bandwidth checks too. It is available as both a free edition and a premium edition; the premium edition starts from 495 dollars and the cloud version starts from 645 dollars.

7. Zabbix
Zabbix was created by Alexei Vladishev. It is one of the most popular open-source infrastructure monitoring tools on the market. It is available on multiple platforms like Windows, Unix, and Linux. It can send notifications over various channels like SMS, email, script alerts, webhooks, etc. Its main advantages are that it is open source and has a strong community for support. Zabbix provides APIs, access controls/permissions, activity dashboards, audit trails, data visualization, CPU monitoring, and many more features.

8. Nagios
Nagios was founded by Ethan Galstad. Nagios is yet another famous monitoring tool. It periodically runs checks on all the important aspects of the system. It is available as both an open-source and a paid enterprise solution. It is Linux-based, and its architecture can be extended through plugins. Being open-source, it gives us full access to the source code. It is widely used by companies like Uber, Twitch, Dropbox, Fiverr, 9GAG, Zalando, etc.

9. Site 24×7
It is also a monitoring tool that inspects servers, networks, containers, and virtualization platforms. It runs on both Windows and Linux servers and easily monitors more than 60 metrics for servers. It provides plugin integrations for MySQL and Apache, and also monitors website services like HTTP, DNS servers, etc. Site24x7 monitoring provides APIs, baseline managers, email monitoring, email alerts, event logs, mail server monitoring, reporting & statistics, SLAs, and much more.

10. Datadog
Datadog infrastructure monitoring was founded by Olivier Pomel and Alexis Lê-Quôc. It monitors both cloud and on-premise infrastructure and provides visibility into the state of the components we are using. It offers consolidated, customizable dashboards and a Datadog API. With more than 400 vendor-backed integrations, it can give deep insight into our IT stack. It has broad adoption, being used by more than 800 companies and 2000 developers. With the help of Datadog infrastructure monitoring, we can monitor the performance and health of the entire IT infrastructure.

Advantages of Cloud Computing


As we all know, cloud computing is a trending technology. Almost every company has switched its services to the cloud to accelerate growth.
Here, we are going to discuss some important advantages of cloud computing:

1) Back-up and restore data


Once data is stored in the cloud, it is easier to back up and restore that data using the cloud.

2) Improved collaboration
Cloud applications improve collaboration by allowing groups of people to quickly and easily share
information in the cloud via shared storage.

3) Excellent accessibility
Cloud allows us to quickly and easily access stored information anywhere, anytime, using an internet connection. An internet cloud infrastructure increases organizational productivity and efficiency by ensuring that our data is always accessible.

4) Low maintenance cost


Cloud computing reduces both hardware and software maintenance costs for organizations.

5) Mobility
Cloud computing allows us to easily access all cloud data via mobile.

6) Services in the pay-per-use model

Cloud computing offers Application Programming Interfaces (APIs) that let users access services on the cloud and pay charges as per service usage.

7) Unlimited storage capacity


Cloud offers us a huge amount of storage capacity for storing our important data, such as documents, images, audio, and video, in one place.

8) Data security
Data security is one of the biggest advantages of cloud computing. Cloud offers many advanced
features related to security and ensures that data is securely stored and handled.

Disadvantages of Cloud Computing


A list of the disadvantages of cloud computing is given below -

1) Internet Connectivity
As you know, in cloud computing all data (images, audio, video, etc.) is stored in the cloud, and we access this data over an internet connection. If you do not have good internet connectivity, you cannot access this data, and there is no other way to access data from the cloud.

2) Vendor lock-in
Vendor lock-in is the biggest disadvantage of cloud computing. Organizations may face problems when
transferring their services from one vendor to another. As different vendors provide different platforms,
that can cause difficulty moving from one cloud to another.

3) Limited Control
As we know, cloud infrastructure is completely owned, managed, and monitored by the service
provider, so the cloud users have less control over the function and execution of services within a cloud
infrastructure.

4) Security
Although cloud service providers implement the best security standards for storing important information, before adopting cloud technology you should be aware that you will be sending all your organization's sensitive information to a third party, i.e., a cloud computing service provider. While sending data to the cloud, there is a chance that your organization's information could be hacked.
What is a hypervisor?
A hypervisor, also known as a virtual machine monitor or VMM, is a piece of software that allows us to build and run virtual machines (VMs).
A hypervisor allows a single host computer to support multiple virtual machines by sharing resources, including memory and processing.

What is the use of a hypervisor?


Hypervisors allow the use of more of a system's available resources and provide greater IT versatility because the guest VMs are independent of the host hardware, which is one of the major benefits of the hypervisor. In other words, VMs can be quickly moved between servers. Because a hypervisor allows several virtual machines to operate on a single physical server, it helps us reduce:

o The space requirements of the server

o The energy use of the server

o The maintenance requirements of the server

Kinds of hypervisors
There are two types of hypervisors: "Type 1" (also known as "bare metal") and "Type 2" (also known as
"hosted"). A type 1 hypervisor functions as a light operating system that operates directly on the host's
hardware, while a type 2 hypervisor functions as a software layer on top of an operating system, similar
to other computer programs.
Since they are isolated from the attack-prone operating system, bare-metal hypervisors are extremely
stable.
Furthermore, they are usually faster and more powerful than hosted hypervisors. For these purposes,
the majority of enterprise businesses opt for bare-metal hypervisors for their data center computing
requirements.
While hosted hypervisors run inside the OS, additional (and different) operating systems can be run on top of them.
Hosted hypervisors have higher latency than bare-metal hypervisors, which is a major disadvantage. This is because communication between the hardware and the hypervisor must pass through the OS's extra layer.
The Type 1 hypervisor
The Type 1 hypervisor is also known as a native or bare-metal hypervisor.
It replaces the host operating system, and the hypervisor schedules VM services directly onto the hardware.
The Type 1 hypervisor is very commonly used in enterprise data centers and other server-based environments.
Examples include KVM, Microsoft Hyper-V, and VMware vSphere. KVM was integrated into the Linux kernel in 2007, so if you are running a modern version of the kernel, you already have KVM available.

The Type 2 hypervisor


Also known as a hosted hypervisor, the Type 2 hypervisor is a software layer or framework that runs on a traditional operating system.
It operates by separating the guest and host operating systems: the host operating system schedules VM services, which are then executed on the hardware.
Individual users who wish to run multiple operating systems on a personal computer should use a Type 2 hypervisor.
This type of hypervisor also bundles the virtual machines it manages.
Hardware acceleration technology improves the processing speed of both bare-metal and hosted
hypervisors, allowing them to build and handle virtual resources more quickly.
On a single physical computer, all types of hypervisors will operate multiple virtual servers for multiple
tenants. Different businesses rent data space on various virtual servers from public cloud service
providers. One server can host multiple virtual servers, each of which is running different workloads for
different businesses.
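Hypervisors such as KVM are typically managed through the libvirt API. The sketch below assumes the libvirt-python bindings and a local KVM/QEMU host (both assumptions, not something this text prescribes) and lists the virtual machines the hypervisor is running:

    import libvirt

    # Connect to the local KVM/QEMU hypervisor (requires libvirt-python).
    conn = libvirt.open("qemu:///system")

    # Each "domain" is a virtual machine known to the hypervisor.
    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "stopped"
        print(dom.name(), state)

    conn.close()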

What is a cloud hypervisor?


Hypervisors are a key component of the technology that enables cloud computing since they are a
software layer that allows one host device to support several virtual machines at the same time.
Hypervisors allow IT to retain control over a cloud environment's infrastructure, processes, and
sensitive data while making cloud-based applications accessible to users in a virtual environment.
Increased emphasis on creative applications is being driven by digital transformation and increasing
consumer expectations. As a result, many businesses are transferring their virtual computers to the
cloud.
Having to rewrite existing applications for the cloud, on the other hand, eats up valuable IT resources and creates infrastructure silos.
As part of a virtualization platform, a hypervisor also helps in the rapid migration of applications to the cloud.
As a result, businesses can take advantage of the cloud's many benefits, such as lower hardware costs, improved accessibility, and increased scalability, for a quicker return on investment.

Benefits of hypervisors
Using a hypervisor to host several virtual machines has many advantages:
o Speed: Hypervisors allow virtual machines to be created instantly, unlike bare-metal servers. This makes provisioning resources for complex workloads much simpler.

o Efficiency: Hypervisors that run several virtual machines on the resources of a single physical machine allow for more effective use of that physical server.

o Flexibility: Since the hypervisor separates the OS from the underlying hardware, the software no longer relies on specific hardware devices or drivers; bare-metal hypervisors enable operating systems and their related applications to run on a variety of hardware types.

o Portability: Multiple operating systems can run on the same physical server thanks to
hypervisors (host machine). The hypervisor's virtual machines are portable because they are
separate from the physical computer.
As an application requires more computing power, virtualization software allows it to access additional
machines without interruption.

Container vs hypervisor
Containers and hypervisors both help systems run faster and more efficiently, but they do so in very different ways, which is why they are distinct from each other.

Hypervisors:

o Allow an operating system to run independently from the underlying hardware through the use of virtual machines.

o Share virtual computing, storage, and memory resources.

Containers:

o Allow applications to run independently of the underlying operating system environment; the container ensures the application has everything it needs.

o Need only a container engine to run on any platform or operating system.

o Are extremely portable, since an application has everything it requires to operate within its container.
