
1.What is Cloud?

The term Cloud refers to a network or the Internet. In other words, we can say that the cloud is
something that is present at a remote location. The cloud can provide services over public and
private networks, i.e., WAN, LAN, or VPN.

Definition: “Cloud computing is a model for enabling ubiquitous, convenient, on-demand
network access to a shared pool of configurable computing resources (e.g., networks,
servers, storage, applications, and services) that can be rapidly provisioned and released
with minimal management effort or service provider interaction.”

What is Cloud Computing?


Cloud Computing refers to manipulating, configuring and accessing hardware and
software resources remotely. It offers online data storage, infrastructure, and applications.

Cloud computing offers platform independence, as the software does not need to be
installed locally on the PC. Hence, cloud computing makes business applications mobile
and collaborative.

Basic Concepts

There are certain services and models working behind the scene making the cloud
computing feasible and accessible to end users. Following are the working models for
cloud computing:

 Deployment Models
 Service Models
1.1. Essential characteristics
Cloud computing is composed of five essential characteristics, three service models, and
four deployment models.

Let’s look a bit closer at each of the characteristics, service models, and deployment
models in the next sections.
Five essential characteristics of cloud computing
The essential characteristics can be elaborated as follows:
• On-demand self-service: Users are able to provision cloud computing resources without
requiring human interaction, mostly done through a web-based self-service portal
(management console).
• Broad network access: Cloud computing resources are accessible over the network,
supporting heterogeneous client platforms such as mobile devices and workstations.
• Resource pooling: Multiple customers are served from the same physical resources by
securely separating the resources at a logical level.
• Rapid elasticity: Resources are provisioned and released on demand and/or automatically
based on triggers or parameters, so your application has exactly the capacity it needs at
any point in time (see the sketch after this list).
• Measured service: Resource usage is monitored, measured, and reported (billed)
transparently based on utilization. In short, pay for use.
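As an illustration of rapid elasticity (referenced in the list above), the short Python sketch below shows the kind of decision an autoscaling trigger makes. The thresholds, instance limits, and the function itself are illustrative assumptions, not any particular provider's API.

    # Simplified autoscaling sketch illustrating rapid elasticity.
    # The thresholds and instance limits are illustrative placeholders;
    # real providers expose this behaviour through autoscaling policies.

    def autoscale(current_instances, avg_cpu_percent,
                  scale_out_at=70, scale_in_at=30,
                  min_instances=1, max_instances=10):
        """Return the new instance count based on average CPU utilization."""
        if avg_cpu_percent > scale_out_at and current_instances < max_instances:
            return current_instances + 1   # add capacity under high load
        if avg_cpu_percent < scale_in_at and current_instances > min_instances:
            return current_instances - 1   # release capacity when idle
        return current_instances           # capacity already matches demand

    # Example: 4 instances at 85% average CPU triggers a scale-out to 5.
    print(autoscale(4, 85))

Measured service then bills only for the capacity that was actually consumed.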
2.Evolution of Cloud Computing:
Cloud computing is all about renting computing services. This idea first emerged in the
1950s. In making cloud computing what it is today, five technologies played a vital role.
These are distributed systems and their peripherals, virtualization, Web 2.0, service
orientation, and utility computing.

 Distributed Systems:
It is a composition of multiple independent systems but all of them are depicted as a
single entity to the users. The purpose of distributed systems is to share resources and
also use them effectively and efficiently. Distributed systems possess characteristics
such as scalability, concurrency, continuous availability, heterogeneity, and
independence in failures. But the main problem with this system was that all the
systems were required to be present at the same geographical location. Thus to solve
this problem, distributed computing led to three more types of computing and they
were Mainframe computing, cluster computing, and grid computing.

 Mainframe computing:
Mainframes which first came into existence in 1951 are highly powerful and reliable
computing machines. These are responsible for handling large data such as massive
input-output operations. Even today these are used for bulk processing tasks such as
online transactions etc. These systems have almost no downtime with high fault
tolerance. After distributed computing, these increased the processing capabilities of
the system. But these were very expensive. To reduce this cost, cluster computing
came as an alternative to mainframe technology.

 Cluster computing:
In the 1980s, cluster computing came as an alternative to mainframe computing. Each
machine in the cluster was connected to each other by a network with high
bandwidth. These were way cheaper than those mainframe systems. These were
equally capable of high computations. Also, new nodes could easily be added to the
cluster if required. Thus, the problem of cost was solved to some extent, but the problem
related to geographical restrictions still persisted. To solve this, the concept of grid
computing was introduced.

 Grid computing:
In the 1990s, the concept of grid computing was introduced. It means that different
systems were placed at entirely different geographical locations and these all were
connected via the internet. These systems belonged to different organizations and
thus the grid consisted of heterogeneous (different) nodes. Although it solved some
problems, new problems emerged as the distance between the nodes increased.
The main problem encountered was the low availability of high-bandwidth connectivity,
along with other network-related issues. Thus, cloud computing is often referred to as
the “successor of grid computing”.

 Virtualization (not visible to the user, but it does the work):


It was introduced nearly 40 years ago. It refers to the process of creating a virtual
layer over the hardware which allows the user to run multiple instances (provide
services) simultaneously on the hardware. It is a key technology used in cloud
computing. It is the base on which major cloud computing services such as Amazon
EC2 and VMware vCloud work. Hardware virtualization is still one of the most
common types of virtualization.

 Web 2.0:
It is the interface through which the cloud computing services interact with the
clients. It is because of Web 2.0 that we have interactive and dynamic web pages. It
also increases flexibility among web pages. Popular examples of Web 2.0 include
Google Maps, Facebook, Twitter, etc. Needless to say, social media is possible only
because of this technology. It gained major popularity in 2004.
 Service orientation:
It acts as a reference model for cloud computing. It supports low-cost, flexible, and
evolvable (updatable) applications. Two important concepts were introduced in this
computing model. These were Quality of Service (QoS) which also includes the SLA
(Service Level Agreement) and Software as a Service (SaaS).

 Utility computing:
It is a computing model that defines service provisioning techniques for compute
services along with other major services such as storage, infrastructure, etc., which
are provisioned on a pay-per-use basis.

3.Business cases
In this blog series we'll present the business case for cloud computing to empower more
organizations to see the financial advantages that augment the technological benefits of
migrating to cloud computing. In this series installment, we'll delve further into the
different types of Cloud and the associated costs.

It’s important to understand that within the three types of cloud, there are advantages and
disadvantages to each depending on their application, the type of business outcome
sought, the level of security and privacy needed and overall control. There are also
deployments of combinations of all or some of the above, but we’ll discuss this in another
series as it’s a huge topic.

In building a business case, let’s look at each of the following: the Infrastructure business
case, the Applications business case, and the Staffing (talent) business case.

Infrastructure Case
When is this the correct approach? Usually, there is some compelling event to initiate a
move to the cloud, such as a compute upgrade due to increased demands from users or
applications, end-of-life of data center facility assets or a facility move where everything
needs to be built again. The initial and most significant savings are usually found when
abandoning infrastructure in favor of the cloud, with infrastructure savings being the most
significant part of the business case in terms of cost savings. The reason is that in-house IT
is typically under-utilized: when infrastructure purchases are being considered, not all
applications that will be deployed on it are known, so a margin is added for this uncertain
capacity requirement. Additionally, over-deployment results from companies configuring
infrastructure for peak loads.

Applications case
What are the compelling events that drive a move of applications to the cloud? With the
three main types of cloud there are a number of options of what to do with applications,
but this requires a look at what the main drivers are. This includes a major fork-lift to
update an in-house custom application, a shift to a new application requiring higher
availability and performance, and scarcity of maintenance resources such as talent and
quality control.

Staffing case
In most businesses, regardless of industry, people are among the highest costs, and IT
staffing is no different. The cloud presents opportunities here in that it liberates staff
dedicated to maintenance, keeping infrastructure running, and customising and editing
application code. While all of these roles and responsibilities are important and critical,
these people can be better deployed to create unique differentiators in applications that
set their business apart from the competition and provide more value for their customers
strategically.

Ultimately this is an opportunity cost and needs to be included in the analysis. The other
costs are more easily counted: permanent staff and temporary, their offices and facilities
and the total burden of benefits, recruitment, retention, risk, training and development
and retirement. Add to this the cost of technology and other local jurisdictional costs
associated with labour and employment such as termination costs, professional advice,
holidays and competitive considerations driving perks to continually attract and retain top
talent - all of those costs should be identified in the business case.
4.Cloud Based Services / Service types:
Cloud Computing can be defined as the practice of using a network of remote servers
hosted on the Internet to store, manage, and process data, rather than a local server or a
personal computer. Companies offering such kinds of cloud computing services are
called cloud providers and typically charge for cloud computing services based on usage.
Types of Cloud Computing:
Most cloud computing services fall into four broad categories:
1. Software as a Service (SaaS)
2. Platform as a Service (PaaS)
3. Infrastructure as a Service (IaaS)
4. Anything as a Service (XaaS)
These are sometimes called the cloud computing stack because they are built on top of
one another. Knowing what they are and how they differ makes it easier to
accomplish your goals.

Software as a Service(SaaS):

Software-as-a-Service (SaaS) is a way of delivering services and applications over the
Internet. Instead of installing and maintaining software, we simply access it via the
Internet, freeing ourselves from complex software and hardware management. It
removes the need to install and run applications on our own computers or in data
centers, eliminating the expenses of hardware as well as software maintenance.
SaaS provides a complete software solution that you purchase on a pay-as-you-go basis
from a cloud service provider. Most SaaS applications can be run directly from a web
browser without any downloads or installations required. The SaaS applications are
sometimes called Web-based software, on-demand software, or hosted software.

Advantages of SaaS
1. Cost-Effective: Pay only for what you use.
2. Reduced time: Users can run most SaaS apps directly from their web browser without
needing to download and install any software. This reduces the time spent in
installation and configuration and can reduce the issues that can get in the way of the
software deployment.
3. Accessibility: We can Access app data from anywhere.
4. Automatic updates: Rather than purchasing new software, customers rely on a SaaS
provider to automatically perform the updates.
5. Scalability: It allows the users to access the services and features on-demand.
The various companies providing Software as a Service include Cloud9 Analytics,
Salesforce.com, CloudSwitch, Microsoft Office 365, Eloqua, Dropbox, and CloudTran.

Platform as a Service:

PaaS is a category of cloud computing that provides a platform and environment to allow
developers to build applications and services over the internet. PaaS services are hosted in
the cloud and accessed by users simply via their web browser.
A PaaS cloud provides a full software stack. Each customer is then able to write or load
applications into this environment, with the provider responsible for expanding or
contracting all elements to adapt to the changing requirements of the users. Where a high
degree of availability or service is required, this provides an impressive advantage for
businesses looking to react and scale rapidly.
A PaaS provider hosts the hardware and software on its own infrastructure. As a result,
PaaS frees users from having to install in-house hardware and software to develop or run a
new application. Thus, the development and deployment of the application take
place independent of the hardware.
The consumer does not manage or control the underlying cloud infrastructure including
network, servers, operating systems, or storage, but has control over the deployed
applications and possibly configuration settings for the application-hosting environment.
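To make the PaaS idea concrete, here is a minimal sketch of the kind of Python web application a developer might hand to a PaaS: only the application code is written, while the platform supplies and scales the servers, operating system, and runtime. The use of Flask and the port number are illustrative assumptions, not a specific provider's requirements.

    # Minimal web app a developer could deploy to a PaaS (illustrative sketch).
    # The platform, not the developer, manages servers, OS, and scaling.

    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def index():
        return "Hello from a PaaS-hosted application!"

    if __name__ == "__main__":
        # Local test run; on a PaaS the platform starts and scales the app itself.
        app.run(host="0.0.0.0", port=8080)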
Advantages of PaaS:
1. Simple and convenient for users: It provides much of the infrastructure and other IT
services, which users can access anywhere via a web browser.
2. Cost-Effective: It charges for the services provided on a per-use basis thus eliminating
the expenses one may have for on-premises hardware and software.
3. Efficiently managing the lifecycle: It is designed to support the complete web
application lifecycle: building, testing, deploying, managing, and updating.
4. Efficiency: It allows for higher-level programming with reduced complexity thus, the
overall development of the application can be more effective.
The various companies providing Platform as a Service include Amazon Web Services,
Salesforce, Windows Azure, Google App Engine, CloudBees, and IBM SmartCloud.

Infrastructure as a Service

Infrastructure as a Service (IaaS) is a service model that delivers computer infrastructure on
an outsourced basis to support various operations. Typically, IaaS is a service where
infrastructure such as networking equipment, devices, databases, and web servers is
provided to enterprises on an outsourced basis.
It provides the underlying operating systems, security, networking, and servers for
developing and deploying applications and services, and for deploying development tools,
databases, etc.
In case of Infrastructure as a service (IaaS) the CPU, data storage, bandwidth and an
operating system are delivered as a flexible service (ordered and provisioned
incrementally). Each customer can then load their application software stack on top, taking
advantage of this easily-resized Cloud Infrastructure (CI) on demand. This allows for scaling
up or down as needed, providing a huge advantage to businesses in terms of flexibility and
preservation of capital. It is also known as Hardware as a Service (HaaS). IaaS customers
pay on a per-use basis, typically by the hour, week, or month. Some providers also charge
customers based on the amount of virtual machine space they use.
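As a hedged sketch of how IaaS resources are provisioned programmatically, the snippet below uses the AWS boto3 SDK to request a single virtual server. The AMI ID, instance type, and region are placeholder assumptions.

    # Sketch: provisioning an IaaS virtual server with the AWS boto3 SDK.
    # The image ID, instance type, and region below are placeholders.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder machine image
        InstanceType="t3.micro",          # CPU/memory size, ordered incrementally
        MinCount=1,
        MaxCount=1,
    )

    print("Provisioned instance:", response["Instances"][0]["InstanceId"])

The same API can later terminate the instance when it is no longer needed, which is what makes the per-use billing model described above possible.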
Advantages of IaaS:
1. Cost-Effective: Eliminates capital expense and reduces ongoing cost; IaaS customers
pay on a per-use basis, typically by the hour, week, or month.
2. Website hosting: Running websites using IaaS can be less expensive than traditional
web hosting.
3. Security: The IaaS Cloud Provider may provide better security than your existing
software.
4. Maintenance: There is no need to manage the underlying data center or the
introduction of new releases of the development or underlying software. This is all
handled by the IaaS Cloud Provider.
The various companies providing Infrastructure as a Service include Amazon Web Services,
Bluestack, IBM, OpenStack, Rackspace, and VMware.

Anything as a Service

Most of the cloud service providers nowadays offer anything as a service that is a
compilation of all of the above services including some additional services.
Advantages of XaaS: As this is a combined service, it has all the advantages of every
type of cloud service.

5.Cloud Deployment Models:


In cloud computing, we have access to a shared pool of computer resources (servers, storage,
programs, and so on) in the cloud. You simply need to request additional resources when you
require them. Getting resources up and running quickly is a breeze thanks to the clouds. It is
possible to release resources that are no longer necessary. This method allows you to just pay for
what you use. Your cloud provider is in charge of all upkeep. It functions as a virtual computing
environment with a deployment architecture that varies depending on the amount of data you want
to store and who has access to the infrastructure.
Deployment Models

The cloud deployment model identifies the specific type of cloud environment based on ownership,
scale, and access, as well as the cloud’s nature and purpose. The location of the servers you’re
utilizing and who controls them are defined by a cloud deployment model. It specifies how your
cloud infrastructure will look, what you can change, and whether you will be given services or will
have to create everything yourself. Relationships between the infrastructure and your users are also
defined by cloud deployment types.
Different types of cloud computing deployment models are:
1. Public cloud
2. Private cloud
3. Hybrid cloud
4. Community cloud
5. Multi-cloud
Let us discuss them one by one:
1. Public Cloud
The public cloud makes it possible for anybody to access systems and services. The public cloud
may be less secure as it is open for everyone. The public cloud is one in which cloud infrastructure
services are provided over the internet to the general public or major industry groups. The
infrastructure in this cloud model is owned by the entity that delivers the cloud services, not by the
consumer. It is a type of cloud hosting that allows customers and users to easily access systems and
services. This form of cloud computing is an excellent example of cloud hosting, in which service
providers supply services to a variety of customers. In this arrangement, storage backup and
retrieval services are given for free, as a subscription, or on a per-use basis. Example: Google App
Engine etc.
Advantages of the public cloud model:
 Minimal Investment: Because it is a pay-per-use service, there is no substantial upfront fee,
making it excellent for enterprises that require immediate access to resources.
 No setup cost: The entire infrastructure is fully subsidized by the cloud service providers, thus
there is no need to set up any hardware.
 Infrastructure Management is not required: Using the public cloud does not necessitate
infrastructure management.
 No maintenance: The maintenance work is done by the service provider (Not users).
 Dynamic Scalability: To fulfill your company’s needs, on-demand resources are accessible.
2. Private Cloud
The private cloud deployment model is the exact opposite of the public cloud deployment model.
It’s a one-on-one environment for a single user (customer). There is no need to share your
hardware with anyone else. The distinction between private and public cloud is in how you handle
all of the hardware. It is also called the “internal cloud” & it refers to the ability to access systems
and services within a given border or organization. The cloud platform is implemented in a cloud-
based secure environment that is protected by powerful firewalls and under the supervision of an
organization’s IT department.
The private cloud gives greater flexibility and control over cloud resources.
Advantages of the private cloud model:
 Better Control: You are the sole owner of the property. You gain complete command over
service integration, IT operations, policies, and user behavior.
 Data Security and Privacy: It’s suitable for storing corporate information to which only
authorized staff have access. By segmenting resources within the same infrastructure, improved
access and security can be achieved.
 Supports Legacy Systems: This approach is designed to work with legacy systems that are
unable to access the public cloud.
 Customization: Unlike a public cloud deployment, a private cloud allows a company to tailor
its solution to meet its specific needs.
3. Hybrid cloud
By bridging the public and private worlds with a layer of proprietary software, hybrid cloud
computing gives the best of both worlds. With a hybrid solution, you may host the app in a safe
environment while taking advantage of the public cloud’s cost savings. Organizations can move
data and applications between different clouds using a combination of two or more cloud
deployment methods, depending on their needs.
Advantages of the hybrid cloud model:
 Flexibility and control: Businesses with more flexibility can design personalized solutions that
meet their particular needs.
 Cost: Because public clouds provide for scalability, you’ll only be responsible for paying for
the extra capacity if you require it.
 Security: Because data is properly separated, the chances of data theft by attackers are
considerably reduced.
4. Community cloud
It allows systems and services to be accessible by a group of organizations. It is a distributed
system that is created by integrating the services of different clouds to address the specific needs of
a community, industry, or business. The infrastructure of the community could be shared
between organizations that have shared concerns or tasks. It is generally managed by a third
party or by a combination of one or more organizations in the community.
Advantages of the community cloud model:
 Cost Effective: It is cost-effective because the cloud is shared by multiple organizations or
communities.
 Security: A community cloud provides better security than a public cloud.
 Shared resources: It allows you to share resources, infrastructure, etc. with multiple
organizations.
 Collaboration and data sharing: It is suitable for both collaboration and data sharing.
5. Multi-cloud
We’re talking about employing multiple cloud providers at the same time under this paradigm, as
the name implies. It’s similar to the hybrid cloud deployment approach, which combines public and
private cloud resources. Instead of merging private and public clouds, multi-cloud uses many
public clouds. Although public cloud providers provide numerous tools to improve the reliability of
their services, mishaps still occur. It’s quite rare that two distinct clouds would have an incident at
the same moment. As a result, multi-cloud deployment improves the high availability of your
services even more.
Advantages of a multi-cloud model:
 You can mix and match the best features of each cloud provider’s services to suit the demands
of your apps, workloads, and business by choosing different cloud providers.
 Reduced Latency: To reduce latency and improve user experience, you can choose cloud
regions and zones that are close to your clients.
 High availability of service: It’s quite rare that two distinct clouds would have an incident at
the same moment. So, the multi-cloud deployment improves the high availability of your
services.
SUMMARY OF DEPLOYMENT MODELS

Deployment models define the type of access to the cloud, i.e., how the cloud is located and
who can access it. A cloud can have any of the four types of access: Public, Private, Hybrid,
and Community.

Public Cloud

The public cloud allows systems and services to be easily accessible to the general public.
Public cloud may be less secure because of its openness.

Private Cloud

The private cloud allows systems and services to be accessible within an organization. It is
more secure because of its private nature.

Community Cloud

The community cloud allows systems and services to be accessible by a group of
organizations.

Hybrid Cloud

The hybrid cloud is a mixture of public and private cloud, in which the critical activities are
performed using private cloud while the non-critical activities are performed using public
cloud.
6.Virtual machines, Bare-metal server:

In computing, a virtual machine (VM) is the virtualization or emulation of a computer
system. Virtual machines are based on computer architectures and provide the functionality
of a physical computer. Their implementations may involve specialized hardware, software,
or a combination of both.

A Virtual Machine (VM) is a compute resource that uses software instead of a physical
computer to run programs and deploy apps. One or more virtual “guest” machines run
on a physical “host” machine. Each virtual machine runs its own operating
system and functions separately from the other VMs, even when they are
all running on the same host. This means that, for example, a macOS virtual
machine can run on a physical PC.

Virtual machine technology is used for many use cases across on-premises and cloud
environments. More recently, public cloud services are using virtual machines
to provide virtual application resources to multiple users at once, for even more
cost-efficient and flexible compute.
The two types of virtual machines

Users can choose from two different types of virtual machines—process VMs and system
VMs:

1: A process virtual machine allows a single process to run as an application on a host
machine, providing a platform-independent programming environment by masking the
information of the underlying hardware or operating system. An example of a process VM
is the Java Virtual Machine, which enables any operating system to run Java applications as
if they were native to that system.

2: A system virtual machine is fully virtualized to substitute for a physical machine. A
system platform supports the sharing of a host computer’s physical resources between
multiple virtual machines, each running its own copy of the operating system. This
virtualization process relies on a hypervisor, which can run on bare hardware.

The hypervisor, also known as a virtual machine monitor (VMM), manages these
VMs as they run alongside each other. It separates VMs from each other logically, assigning
each its own slice of the underlying computing power, memory, and storage. This prevents
the VMs from interfering with each other; so if, for example, one OS suffers a crash or a
security compromise, the others survive.
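As a small illustration of how management software talks to a hypervisor, the sketch below uses the libvirt Python bindings to list the virtual machines a local hypervisor is managing. The connection URI assumes a local QEMU/KVM hypervisor and requires libvirt to be installed.

    # Sketch: asking a hypervisor which virtual machines it is managing.
    # Assumes the libvirt Python bindings and a local QEMU/KVM hypervisor.

    import libvirt

    conn = libvirt.open("qemu:///system")   # connect to the local hypervisor

    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "stopped"
        print(f"{dom.name()}: {state}")     # each domain is one guest VM

    conn.close()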

Advantages of virtual machines


Because of their flexibility and portability, virtual machines provide many benefits, such as:

 Cost savings—running multiple virtual environments from one piece of infrastructure
means that you can drastically reduce your physical infrastructure footprint. This boosts
your bottom line—decreasing the need to maintain nearly as many servers and saving on
maintenance costs and electricity.

 Agility and speed—Spinning up a VM is relatively easy and quick and is much simpler than
provisioning an entire new environment for your developers. Virtualization makes the
process of running dev-test scenarios a lot quicker.

 Lowered downtime—VMs are so portable and easy to move from one hypervisor to
another on a different machine—this means that they are a great solution for backup, in
the event the host goes down unexpectedly.

 Scalability—VMs allow you to more easily scale your apps by adding more physical or
virtual servers to distribute the workload across multiple VMs. As a result you can increase
the availability and performance of your apps.

 Security benefits—Because virtual machines run their own guest operating systems, a VM
allows you to run apps of questionable security while protecting your host operating
system. VMs also allow for better security forensics and are often used to safely study
computer viruses, isolating the viruses to avoid risking the host computer.
Disadvantages of virtual machines

While virtual machines have several advantages over physical machines, there are also
some potential disadvantages:

 Running multiple virtual machines on one physical machine can result in unstable
performance if infrastructure requirements are not met.
 Virtual machines are less efficient and run slower than a full physical computer. Most
enterprises use a combination of physical and virtual infrastructure to balance the
corresponding advantages and disadvantages.
What are 5 types of virtualization?

All the components of a traditional data center or IT infrastructure can be virtualized
today, with various specific types of virtualization:

 Hardware virtualization: When virtualizing hardware, virtual versions of computers and
operating systems (VMs) are created and consolidated into a single, primary, physical
server. A hypervisor communicates directly with a physical server’s disk space and CPU
to manage the VMs. Hardware virtualization, which is also known as server
virtualization, allows hardware resources to be utilized more efficiently and one machine
to simultaneously run different operating systems.
 Software virtualization: Software virtualization creates a computer system
complete with hardware that allows one or more guest operating systems to
run on a physical host machine. For example, Android OS can run on a host
machine that is natively using a Microsoft Windows OS, utilizing the same
hardware as the host machine does. Additionally, applications can be
virtualized and delivered from a server to an end user’s device, such as a laptop
or smartphone. This allows employees to access centrally hosted applications
when working remotely.
 Storage virtualization: Storage can be virtualized by consolidating multiple
physical storage devices to appear as a single storage device. Benefits
include increased performance and speed, load balancing and reduced costs.
Storage virtualization also helps with disaster recovery planning, as virtual
storage data can be duplicated and quickly transferred to another location,
reducing downtime.
 Network virtualization: Multiple sub-networks can be created on the same
physical network by combining equipment into a single, software-based virtual
network resource. Network virtualization also divides available bandwidth into
multiple, independent channels, each of which can be assigned to servers and
devices in real time. Advantages include increased reliability, network
speed, security and better monitoring of data usage. Network virtualization can
be a good choice for companies with a high volume of users who need access at
all times.
 Desktop virtualization: This common type of virtualization separates the
desktop environment from the physical device and stores a desktop on a remote
server, allowing users to access their desktops from anywhere on any device. In
addition to easy accessibility, benefits of virtual desktops include better data
security, cost savings on software licenses and updates, and ease of
management.

Bare-metal server:

A bare-metal server is a physical computer server that is used by one consumer, or tenant,
only. Each server offered for rental is a distinct physical piece of hardware that is a
functional server on its own. Bare-metal servers are not virtual servers running on shared
hardware.
A bare metal server, also called a dedicated server by some, is a form of cloud services in
which the user rents a physical machine from a provider that is not shared with any other
tenants.
Because users get complete control over the physical machine with a bare metal (or
dedicated) server, they have the flexibility to choose their own operating system. A bare
metal server helps avoid the noisy neighbor challenges of shared infrastructure and allows
users to finely tune hardware and software for specific data-intensive workloads.

What are the benefits of a bare metal server?


The primary benefits of bare metal servers are based on the access that end users have to
hardware resources. The advantages of this approach include the following:

 Enhanced physical isolation which offers security and regulatory benefits


 Greater processing power
 Complete control of their software stack
 More consistent disk and network I/O performance
 Greater quality of service (QoS) by eliminating the noisy neighbor phenomenon
 Imaging capability for creating a seamless experience when moving and scaling
workloads

Taken together, bare metal servers have an important role in the infrastructure mix for
many companies due to their unique combination of performance and control.
What's the difference between a bare metal server and a virtual server?

Today, available compute options for cloud services go beyond just bare metal servers and
cloud servers. Containers are becoming a default infrastructure choice for many cloud-
native applications. Platform-as-a-service (PaaS) offerings have an important niche of the
applications market for developers who don't want to manage an OS or runtime
environment. And serverless computing is emerging as the model of choice for cloud
purists.

But, when evaluating bare metal servers, users still gravitate toward the comparison to
virtual servers. For most companies, the criteria for choice are application specific or
workload specific. It's extremely common for a company to use a mix of bare metal servers
along with virtualized resources across their cloud environment.

Virtual servers are the more common model of cloud computing because they offer
greater resource density, faster provisioning times, and the ability to scale up and down
quickly as needs dictate. But bare metal servers are the right fit for a few primary use cases
that take advantage of the combination of attributes. These attributes are dedicated
resources, greater processing power, and more consistent disk and network I/O
performance.
7.What is container technology?

Container technology, also known simply as a container, is a method to package an
application so it can be run, with its dependencies, isolated from other processes. The
major public cloud computing providers, including Amazon Web Services, Microsoft Azure,
and Google Cloud Platform, have embraced container technology, with popular container
software including Docker, Apache Mesos, rkt (pronounced “rocket”), and Kubernetes.

Container technology gets its name from the shipping industry. Rather than come up with
a unique way to ship each product, goods get placed into steel shipping containers, which
are already designed to be picked up by the crane on the dock, and fit into the ship
designed to accommodate the container’s standard size. In short, by standardizing the
process, and keeping the items together, the container can be moved as a unit, and it costs
less to do it this way.

With computer container technology, it is an analogous situation. Ever have the situation
where a program runs perfectly great on one machine, but then turns into a clunky mess
when it is moved to the next? This has the potential to occur when migrating the software
from a developer’s PC to a test server, or a physical server in a company data center, to a
cloud server. Issues arise when moving software due to differences between machine
environments, such as the installed OS, SSL libraries, storage, security, and network
topology.

Containers are a form of operating system virtualization. A single container might be used
to run anything from a small microservice or software process to a larger application. Inside
a container are all the necessary executables, binary code, libraries, and configuration files.
Compared to server or machine virtualization approaches, however, containers do not
contain operating system images. This makes them more lightweight and portable, with
significantly less overhead. In larger application deployments, multiple containers may be
deployed as one or more container clusters.

Benefits of containers

Containers are a streamlined way to build, test, deploy, and redeploy applications on
multiple environments from a developer’s local laptop to an on-premises data center and
even the cloud. Benefits of containers include:
 Less overhead
Containers require less system resources than traditional or hardware virtual machine
environments because they don’t include operating system images.
 Increased portability
Applications running in containers can be deployed easily to multiple different
operating systems and hardware platforms.
 More consistent operation
DevOps teams know applications in containers will run the same, regardless of where
they are deployed.
 Greater efficiency
Containers allow applications to be more rapidly deployed, patched, or scaled.
 Better application development
Containers support agile and DevOps efforts to accelerate development, test, and
production cycles.
How containers work
Containers hold the components necessary to run desired software. These components
include files, environment variables, dependencies and libraries. The host OS constrains
the container's access to physical resources, such as CPU, storage and memory, so a single
container cannot consume all of a host's physical resources.
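A brief sketch of this workflow using the Docker SDK for Python is shown below; the image name, command, and memory limit are illustrative, and Docker must be installed and running locally.

    # Sketch: running a process inside a container with the Docker SDK for Python.
    # The image, command, and memory limit below are illustrative only.

    import docker

    client = docker.from_env()   # talk to the local Docker daemon

    # The container bundles the runtime and libraries it needs; the host OS
    # constrains its access to CPU, memory, and storage.
    output = client.containers.run(
        "python:3.12-slim",
        ["python", "-c", "print('hello from a container')"],
        mem_limit="128m",        # host-enforced resource constraint
        remove=True,             # clean up the container when it exits
    )
    print(output.decode())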

What is containerization?
Containerization is the packaging of software code with just the operating system (OS)
libraries and dependencies required to run the code to create a single lightweight
executable—called a container—that runs consistently on any infrastructure.
Containerization allows developers to create and deploy applications faster and more
securely. With traditional methods, code is developed in a specific computing environment
which, when transferred to a new location, often results in bugs and errors. For example,
when a developer transfers code from a desktop computer to a virtual machine (VM) or
from a Linux to a Windows operating system. Containerization eliminates this problem by
bundling the application code together with the related configuration files, libraries, and
dependencies required for it to run. This single package of software or “container” is
abstracted away from the host operating system, and hence, it stands alone and becomes
portable—able to run across any platform or cloud, free of issues.

Containers vs. virtual machines (VMs):

One way to better understand a container is to understand how it differs from a
traditional virtual machine (VM). In traditional virtualization—whether it be on-premises or
in the cloud—a hypervisor is required to virtualize physical hardware. Each VM then
contains a guest OS, a virtual copy of the hardware that the OS requires to run, along with
an application and its associated libraries and dependencies.

Instead of virtualizing the underlying hardware, containers virtualize the operating system
(typically Linux) so each individual container contains only the application and its libraries
and dependencies. The absence of the guest OS is why containers are so lightweight and,
thus, fast and portable.
8.CDN (content delivery network):

What is a CDN (content delivery network)?

A CDN (content delivery network), also called a content distribution network, is a group of
geographically distributed and interconnected servers. They provide cached internet
content from a network location closest to a user to speed up its delivery.
The primary goal of a CDN is to improve web performance by reducing the time
needed to send content and rich media to users.
A content delivery network (CDN) refers to a geographically distributed group of
servers which work together to provide fast delivery of Internet content.
A CDN allows for the quick transfer of assets needed for loading Internet content,
including HTML pages, JavaScript files, stylesheets, images, and videos. The popularity of
CDN services continues to grow, and today the majority of web traffic is served through
CDNs, including traffic from major sites like Facebook, Netflix, and Amazon.

A properly configured CDN may also help protect websites against some common
malicious attacks, such as Distributed Denial of Service (DDoS) attacks.

How does a CDN work?


At its core, a CDN is a network of servers linked together with the goal of delivering
content as quickly, cheaply, reliably, and securely as possible. In order to improve speed
and connectivity, a CDN will place servers at the exchange points between different
networks.

These Internet exchange points (IXPs) are the primary locations where different Internet
providers connect in order to provide each other access to traffic originating on their
different networks. By having a connection to these high speed and highly interconnected
locations, a CDN provider is able to reduce costs and transit times in high speed data
delivery.

Beyond placement of servers in IXPs, a CDN makes a number of optimizations on
standard client/server data transfers. CDNs place data centers at strategic locations across
the globe, enhance security, and are designed to survive various types of failures and
Internet congestion.
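The simplified Python sketch below illustrates the core decision an edge server makes: serve content from its local cache when a fresh copy exists, otherwise fetch it from the origin and cache the result. The function names and TTL value are illustrative placeholders, not a real CDN's implementation.

    # Simplified sketch of a CDN edge server's caching decision.
    # fetch_from_origin() and the TTL value are illustrative placeholders.

    import time

    CACHE = {}          # url -> (content, expiry_timestamp)
    TTL_SECONDS = 300   # how long cached content is considered fresh

    def fetch_from_origin(url):
        # Placeholder for an HTTP request to the origin server.
        return f"<content of {url}>"

    def serve(url):
        """Serve a request from the edge cache, falling back to the origin."""
        entry = CACHE.get(url)
        if entry and entry[1] > time.time():
            return entry[0]                       # cache hit: nearby, fast copy
        content = fetch_from_origin(url)          # cache miss: go to the origin
        CACHE[url] = (content, time.time() + TTL_SECONDS)
        return content

    print(serve("https://example.com/index.html"))   # first request: origin fetch
    print(serve("https://example.com/index.html"))   # second request: cache hit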

What are the benefits of a CDN?


CDNs provide several advantages, including the following:

 Efficiency. CDNs improve webpage loading times and reduce bounce rates. Both
advantages keep users from abandoning a slow-loading site or e-commerce
application.
 Security. CDNs enhance security with services such as DDoS mitigation and web
application firewalls (WAFs).
 Availability. CDNs can handle more traffic and avoid network failures better than
the origin server, increasing content availability.
 Optimization. These networks provide a diverse mix of performance and web
content optimization services that complement cached site content.
 Resource and cost savings. CDNs reduce bandwidth consumption and costs.
What are examples of CDN platforms?
There are many CDNs available with a variety of features. Products include the following:

 Akamai Technologies Intelligent Edge


 Amazon CloudFront
 ArvanCloud
 CDN77
 Cloudflare
 Verizon Edgecast

Some CDN providers, like Cloudflare and Verizon, market their platforms as CDNs with
added services, like DDoS protection or web application firewalls (WAFs). Other providers,
such as ArvanCloud, offer CDN services as one of several broader cloud services, such as
cloud security and managed domain name system (DNS).
9.What Is Cloud Storage? Definition, Types:
Cloud storage allows users to store digital data files on virtual servers.

What Is Cloud Storage?
Cloud storage is a data deposit model in which digital information such as documents,
photos, videos and other forms of media are stored on virtual or cloud servers hosted by
third parties. It allows you to transfer data to an offsite storage system and access it
whenever needed.
Cloud storage is a cloud computing model that allows users to save important data or
media files on remote, third-party servers. Users can access these servers at any time over
the internet. Also known as utility storage, cloud storage is maintained and operated by a
cloud-based service provider.
From greater accessibility to data backup, cloud storage offers a host of benefits, the most
notable being large storage capacity and minimal costs. Cloud storage is delivered
on demand and eliminates the need to purchase and manage your own data storage
infrastructure. With “anytime, anywhere” data access, it gives you agility, global scale,
and durability.
How Cloud Storage Works:

Cloud storage works as a virtual data center. It offers end users and applications virtual
storage infrastructure that can be scaled to the application’s requirements. It generally
operates via a web-based API implemented remotely through its interaction with in-house
cloud storage infrastructure.

Cloud storage includes at least one data server to which a user can connect via the
internet. The user sends files to the data server, which forwards the message to multiple
servers, manually or in an automated manner, over the internet. The stored data can then
be accessed via a web-based interface. To ensure the constant availability of data, cloud
storage systems involve large numbers of data servers. Therefore, if a server requires
maintenance or fails, the user can be assured that the data has been moved elsewhere to
ensure availability.
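As a hedged example of the web-based API interaction described above, the snippet below stores and retrieves a file using the AWS S3 API via the boto3 SDK; the bucket name, object key, and file names are placeholders.

    # Sketch: storing and retrieving a file through a cloud storage API.
    # Uses the AWS boto3 SDK; bucket, key, and file names are placeholders.

    import boto3

    s3 = boto3.client("s3")

    # Send a local file to the storage service over the internet.
    s3.upload_file("report.pdf", "example-bucket", "backups/report.pdf")

    # Later, download it again from any machine that has access.
    s3.download_file("example-bucket", "backups/report.pdf", "report-copy.pdf")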

Types of Cloud Storage:


Cloud services have made it possible for anyone to store digital data and access it from
anywhere. This means that cloud storage is essentially a virtual hard drive. From saving
important data such as word documents, and video files, to accessing the cloud to process
complex data and run applications – cloud storage is a versatile system. To decide which is
the best cloud storage, the user needs to determine their use case/s first. Let’s look at the
different types of cloud storage solutions:

1. Private cloud storage

Private cloud storage is also known as enterprise or internal cloud storage. Data is stored
on the company or organization’s intranet in this case. This data is protected by the
company’s own firewall. Private cloud storage is a great option for companies that have
expensive data centers and can manage data privacy in-house. A major advantage of
saving data on a private cloud is that it offers complete control to the user. On the other
hand, one of the major drawbacks of private cloud storage is the cost and effort of
maintenance and updates. The responsibility of managing private cloud storage lies with
the host company.

2. Public cloud storage


Public cloud storage requires few administrative controls and can be accessed online by
the user and anyone else who the user authorizes. With public cloud storage, the
user/company doesn’t need to maintain the system. Public cloud storage is hosted by
different solution providers, so there’s very little opportunity for customizing the security
fields, as they are common for all users. Amazon Web Services (AWS), IBM Cloud, Google
Cloud, and Microsoft Azure are a few popular public cloud storage solution providers.
Public cloud storage is easily scalable, affordable, reliable and offers seamless monitoring
and zero maintenance.

3. Hybrid cloud storage


Hybrid cloud storage is a combination of private and public cloud storage. As the name
suggests, hybrid cloud storage offers the best of both worlds to the user – the security of a
private cloud and the personalization of a public cloud. In a hybrid cloud, data can be
stored on the private cloud, and information processing tasks can be assigned to the public
cloud as well, with the help of cloud computing services. Hybrid cloud storage is affordable
and offers easy customization and greater user control.

4. Community cloud storage


Community cloud storage is a variation of the private cloud storage model, which offers
cloud solutions for specific businesses or communities. In this model, cloud storage
providers offer their cloud architecture, software and other development tools to meet the
community’s requirements. Any data is stored on the community-owned private cloud
storage to manage the community’s security and compliance needs. Community cloud
storage is a great option for health, financial or legal companies with strict compliance
policies.

Benefits of Cloud Storage Adoption


The cloud is rapidly becoming the storage environment of choice for enterprises. 30% of all
corporate data was stored on the cloud in 2015, which increased to 50% in 2020. The cloud
storage market has also grown in tandem and is expected to be worth $137.3 billion by
2025, as per MarketsandMarkets. This is because the cloud offers several benefits over
traditional on-premise storage systems.

Benefits of cloud storage


o Flexibility and ease of access: Cloud storage means that your data is not tied
down to any one location. Various stakeholders can access assets stored on
the cloud from a location and device of their choice without any download
or installation hassles.
o Remote management support: Cloud storage also paves the way for remote
management either by internal IT teams or by managed service providers
(MSPs). They can troubleshoot without being present on-site, speeding up
issue resolution.
o Fast scalability: A major benefit of cloud storage is that you can provision new
resources with only a few clicks without the need for any additional
infrastructure. When faced with an unprecedented increase in data
volumes, this feature aids business continuity.
o Redundancy for backup: Data redundancy (i.e., replicating the same data in
multiple locations) is essential for an effective backup mechanism. The cloud
ensures your data is kept secure in a remote location in case of a natural
disaster, accident, or cyberattack.
o Long-term cost savings: In the long-term, cloud storage can save you
significantly in the costs of hardware equipment, storage facilities, power
supply, and personnel, which are sure to multiply as your organization
grows.

10.Emerging trends in cloud computing:


As this decade comes to an end, it’s time to look ahead to what’s in store for the
future. One area where business leaders are looking for continued development is the
cloud. The cloud helps businesses with reducing costs, boosting efficiency, increasing
agility, and assisting in overall growth and innovation. In the coming years, there are
likely to be a variety of developments that will impact cloud technology. From new
abilities to new members of the workforce, here are some of the trends we can expect in
the years to come.

Developing Quantum Computing


As the amount of data collected increases, there will need to be a shift in processing time.
Enter quantum computing: a high level of computing that looks to develop computer
technology, based on quantum theory. This advanced type of computing will help with
quickly organizing, categorizing, and analyzing cloud data. Using this technology,
companies will not be backlogged by an unwieldy amount of data.

Adapting Omni-Cloud Solutions


While many companies have selected just one cloud provider, they may find themselves
needing multiple cloud vendors to help with future problems. This tactic is known as omni-
cloud, where a business uses more than one cloud provider to avoid being locked into one
solution. By using more than one cloud vendor, businesses have options for meeting both
internal and customer demand. That’s why having more than one vendor can help bridge
any gaps in cloud capabilities. Currently, companies have been using multi-cloud or hybrid
solutions, but omnicloud allows them to test a wider range of options.

Transforming SaaS to Intelligent SaaS


Software-as-a-Service (SaaS) will be infused with extra abilities and become intelligent
SaaS. Artificial intelligence (AI) will lead the charge on this transformation. With many
benefits like automation, analytical insights, and chatbots, AI will help expand what
software can do. In the coming years, it will be hard for organizations not to embrace
intelligent optimization within all tools.
AI, the Internet of Things, and blockchain are all technologies that will generate more data
to add to the cloud.

Integrating Blockchain Technology


Blockchain technology, which can track and record a product’s journey, is helping with
tracking through all stages of the product life cycle to improve delivery. When the cloud is
connected to this technology, businesses will have an even better understanding of what
happened during a product’s delivery time. These insights, such as weather delays or
tracking where a contaminated product originated, can save businesses millions of dollars.
Although some companies are already executing this practice, it will continue to grow and
mature in the coming years.

Personalizing Cloud Experience


Right now cloud businesses are creating solutions and applications to meet individual
client needs. However, there will be more of this special treatment as companies demand
software adjusted to their unique requirements. If the company is using a hybrid or multi-
cloud approach then offering personalized options will be even more crucial.
An additional thought for cloud companies to consider is how the cloud will possibly
become a large storage place for data. Businesses will want to share this data with a
variety of people from their employees to clients. That’s where additional personalization
might be necessary especially for companies that want specific people to see certain
amounts of information.

Increasing Security Capabilities


Especially with new security measures in place, businesses do not want to face monetary
penalties or legal action. Because there are businesses unaware of the impact of these
regulations, cloud companies will need to be well versed in possible weak points.
Security itself also needs to improve to face off with thieves using technology like AI or
social engineering. Although incidents like this have been small, there may be an increase
of new cyberattacks due to new laws. This is why cloud companies will need to maintain
and improve security measures.

Hiring Digital Communities


The new employees joining companies today come highly knowledgeable in technology
and the wide variety of tools available. Known as digital natives, these individuals may
create a divide between employees who are technology literate and those who are not.
Businesses will have to adopt new training techniques or bring on additional cloud
solutions to assist new team members.
The cloud keeps growing and developing new abilities to assist businesses. New ways of
collecting data, along with new technology capabilities, will lead to further cloud benefits
for businesses.

11.Web Services in Cloud Computing


The Internet is the worldwide connectivity of hundreds of thousands of computers
belonging to many different networks.

A web service is a standardized method for propagating messages between client and
server applications on the World Wide Web. A web service is a software module that aims
to accomplish a specific set of tasks. Web services can be found and implemented over a
network in cloud computing.

The web service would be able to provide the functionality to the client that invoked the
web service.

A web service is a set of open protocols and standards that allow data exchange between
different applications or systems. Web services can be used by software programs written
in different programming languages and on different platforms to exchange data through
computer networks such as the Internet. In the same way, web services can be used for
inter-process communication within a single computer.

Any software, application, or cloud technology that uses standardized web protocols
(HTTP or HTTPS) to connect, interoperate, and exchange data messages (usually XML,
Extensible Markup Language) over the Internet is considered a web service.
Web services allow programs developed in different languages to be connected between a
client and a server by exchanging data over a web service. A client invokes a web service by
submitting an XML request, to which the service responds with an XML response.

Web service functions:

o It is possible to access it via the Internet or an intranet network.
o It uses a standardized XML messaging protocol.
o It is independent of the operating system or programming language.
o It is self-describing, using the XML standard.
o It can be discovered using a simple location approach.

Web Service Components


XML and HTTP form the most fundamental web service platform. All typical web services use
the following components:

1. SOAP (Simple Object Access Protocol)


SOAP stands for "Simple Object Access Protocol". It is a transport-independent messaging
protocol. SOAP is built on sending XML data in the form of SOAP messages. A document
known as an XML document is attached to each message.

Only the structure of an XML document, not the content, follows a pattern. The great thing
about web services and SOAP is that everything is sent through HTTP, the standard web
protocol.

Every SOAP document requires a root element known as the Envelope element. In an XML
document, the root element is the first element.

The "envelope" is divided into two halves. The header comes first, followed by the body.
Routing data, or information that directs the XML document to which client it should be
sent, is contained in the header. The real message will be in the body.
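To make the envelope structure concrete, the hedged sketch below posts a minimal SOAP envelope over HTTP using Python's requests library. The endpoint URL, XML namespace, and GetPrice operation are invented purely for illustration.

    # Sketch: sending a minimal SOAP envelope over standard HTTP.
    # The endpoint, namespace, and GetPrice operation are illustrative only.

    import requests

    soap_envelope = """<?xml version="1.0"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Header/>
      <soap:Body>
        <GetPrice xmlns="http://example.com/products">
          <ItemName>Notebook</ItemName>
        </GetPrice>
      </soap:Body>
    </soap:Envelope>"""

    response = requests.post(
        "http://example.com/pricing-service",              # placeholder endpoint
        data=soap_envelope,
        headers={"Content-Type": "text/xml; charset=utf-8"},
    )
    print(response.status_code)
    print(response.text)   # the service replies with an XML (SOAP) response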

2. UDDI (Universal Description, Discovery, and Integration)


UDDI is a standard for specifying, publishing and searching online service providers. It
provides a specification that helps in hosting the data through web services. UDDI provides
a repository where WSDL files can be hosted so that a client application can search the
WSDL file to learn about the various actions provided by the web service. As a result, the
client application will have full access to UDDI, which acts as the database for all WSDL
files.
The UDDI registry keeps the information needed for online services, much as a telephone
directory contains the name, address, and phone number of a certain person, so that client
applications can find the services they need.

3. WSDL (Web Services Description Language)


The client implementing the web service must be aware of the location of the web service.
If a web service cannot be found, it cannot be used. Second, the client application must
understand what the web service does to implement the correct web service. WSDL, or
Web Service Description Language, is used to accomplish this. A WSDL file is another XML-
based file that tells the client application what the web service does. Using the WSDL
document, the client application will understand where the web service is located and how
to access it.
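As an illustration of how a client application uses a WSDL document to discover and call a service's operations, the sketch below uses the zeep SOAP client library for Python. The WSDL URL and the GetPrice operation are placeholder assumptions.

    # Sketch: consuming a web service from its WSDL with the zeep library.
    # The WSDL URL and the GetPrice operation are placeholder assumptions.

    from zeep import Client

    # zeep downloads and parses the WSDL to learn the service's operations,
    # message formats, and endpoint location.
    client = Client("http://example.com/pricing-service?wsdl")

    # Call an operation described in the WSDL as if it were a local function.
    result = client.service.GetPrice(ItemName="Notebook")
    print(result)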

How does web service work?


In a simplified view of how a web service functions, the client uses requests to send a
sequence of web service calls to the server hosting the actual web service.

Remote procedure calls are used to perform these requests. The calls to the methods
hosted by the respective web service are known as Remote Procedure Calls (RPC).
Example: Flipkart provides a web service that displays the prices of items offered on
Flipkart.com. The front end or presentation layer can be written in .NET or Java, but either
programming language can communicate with the web service.

XML, the format of the data exchanged between the client and the server, is the most
important part of web service design. XML (Extensible Markup Language) is a simple,
intermediate language understood by various programming languages. It is a counterpart
of HTML.
As a result, when programs communicate with each other, they use XML. It forms a
common platform for applications written in different programming languages to
communicate with each other.

Web services employ SOAP (Simple Object Access Protocol) to transmit XML data between
applications. The data is sent using standard HTTP. A SOAP message is data sent from a
web service to an application. An XML document is all that is contained in a SOAP message.
The client application that calls the web service can be built in any programming language
as the content is written in XML.

Features of Web Service


Web services have the following characteristics:

(a) XML-based: A web service's information representation and data transport layers
employ XML. There is no need for networking, operating system, or platform bindings
when using XML. Web service-based applications are highly interoperable at their core level.

(b) Loosely Coupled: The consumer of a web service is not necessarily tied directly to that
service. The web service interface can change over time without compromising the client's
ability to interact with the service. A tightly coupled system means that the client and
server logic are inextricably linked, so that if one interface changes, the other must be
updated.

A loosely coupled architecture makes software systems more manageable and allows
simpler integration between different systems.

(c) Ability to be synchronous or asynchronous: Synchronicity refers to the binding of the
client to the execution of the function. Asynchronous operations allow the client to invoke
a task and continue with other tasks. In synchronous invocation, the client is blocked and
must wait for the service to complete its operation before continuing.

Asynchronous clients retrieve their results later, whereas synchronous clients receive their
results immediately when the service completes. Asynchronous capability is a key factor in
enabling loosely coupled systems.

(d) Coarse-Grained: Object-oriented technologies, such as Java, make their services
available through individual methods. An individual method is too fine-grained an
operation to be useful at the enterprise level. Building a Java application from the ground
up requires the development of several fine-grained methods, which are then combined
into a coarse-grained service that is consumed by the client or another service.
Businesses, and the interfaces that they expose, should be coarse-grained. Building web
services is an easy way to define coarse-grained services that have access to substantial
business logic.

(e) Supports remote procedure calls: Consumers can use XML-based protocols to call
procedures, functions, and methods on remote objects exposed as web services. A web
service must support the input and output framework of the remote system.

Over the years, enterprise-wide component development with Enterprise JavaBeans (EJBs)
and .NET components has become more prevalent in architectural and enterprise
deployments, and several RPC techniques are used to both distribute and access them.

A web service can support RPC by providing services of its own, similar to a traditional
component, or by translating incoming invocations into an EJB or .NET component
invocation.

(f) Supports document exchange: One of the most attractive features of XML is its generic
way of representing not only data but also complex documents, which web services can
exchange.
