
Cloud Computing | Virtualization

UNIT 3: VIRTUALIZATION
Introduction and Benefits
Virtualization is a technology that you can use to create a virtual representation of servers,
storage, networks, and other physical machines. It makes one physical computer act and
perform like many computers. Virtualization is a technique for separating a service from the
underlying physical delivery of that service. It is the process of creating a virtual version of
something, such as computer hardware.
Virtualization software mimics the functions of physical hardware to run multiple virtual
machines simultaneously on a single physical machine. It creates a virtual (rather than actual)
version of a resource, and it was initially developed in the mainframe era.
It involves using specialized software to create a virtual or software-created version of a
computing resource rather than the actual version of the same resource.
With the help of virtualization, multiple operating systems and applications can run on the
same machine and hardware, increasing the utilization and flexibility of that hardware. It is
one of the cost-effective, hardware-reducing, and energy-saving techniques used by cloud
providers.
Virtualization allows the sharing of a single physical instance of a resource or application
among multiple customers and organizations at a time. It does this by assigning a logical name
to physical storage and providing a pointer to that physical resource on demand. The term
virtualization is often synonymous with hardware virtualization, which plays a fundamental
role in efficiently delivering Infrastructure-as-a-Service (IaaS) solutions for cloud
computing. Moreover, virtualization technologies provide a virtual environment not only for
executing applications but also for storage, memory, and networking.

• Host Machine: The machine on which the virtual machine is going to be built is
known as Host Machine.
• Guest Machine: The virtual machine is referred to as a Guest Machine.
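
To make the "logical name plus pointer" idea described above concrete, here is a minimal,
purely illustrative Python sketch. The class and method names are invented for illustration
and are not part of any real virtualization product: the consumer only ever sees a logical
name, while a catalog resolves it to the underlying physical resource on demand.

```python
# Minimal sketch of the "logical name -> physical resource" idea described above.
# All names here (VirtualResourceCatalog, resolve) are illustrative, not a real API.

class VirtualResourceCatalog:
    """Maps logical resource names to physical resource locations."""

    def __init__(self):
        self._mapping = {}

    def register(self, logical_name, physical_location):
        # e.g. register("customer-db-volume", "san-array-3:lun-17")
        self._mapping[logical_name] = physical_location

    def resolve(self, logical_name):
        # Hand back a "pointer" to the physical resource on demand;
        # callers only ever use the logical name.
        return self._mapping[logical_name]

catalog = VirtualResourceCatalog()
catalog.register("customer-db-volume", "san-array-3:lun-17")
print(catalog.resolve("customer-db-volume"))   # -> san-array-3:lun-17
```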

Work of Virtualization in Cloud Computing


Virtualization has a prominent impact on Cloud Computing. In the case of cloud computing,
users store data in the cloud, but with the help of Virtualization, users get the extra benefit
of sharing the infrastructure. Cloud vendors take care of the required physical resources, but
they charge a significant amount for these services, which affects every user and
organization. Virtualization helps users or organizations obtain the services they require
through external (third-party) providers, which helps in reducing costs to the company. This
is the way Virtualization works in Cloud Computing.

Benefits of Virtualization

1. Protection From System Failures

While talking about the benefits of virtualization in cloud computing, we have to talk about how
it offers protection against system failures. All systems are prone to crashing at least once
in a while, no matter what high-end technology your organization might be using. Even
though your business can survive a few glitches, if the developer who is working on an
important application suddenly faces a system failure, then it can ruin all their hard work
and their progress will be lost. That is why virtualization is important in these scenarios. It
automatically backs up all the data across a few devices. This makes sure that you can access
these files from any device as they are saved in the virtualized cloud network. So even if the
system goes under for a while, you will not lose your progress.

2. Hassle-Free Data Transfers

Data transfer also becomes very smooth with virtualization. You can very easily transfer
data from a physical server to a virtual cloud and vice versa. Long-distance transfer of data
and files also becomes easier under virtualization. Instead of looking through hard drives to
find data, you can find it in the virtual cloud space very easily. Data locating
and transfer become hassle-free with virtualization.

3. Firewall and Security Support

Traditional data protection methods can secure your data, but they cost a lot as well.
With the help of virtualization, virtual firewalls can restrict access to important
data at a fraction of that cost. Cybersecurity is a central focus of IT, but
with virtualization, you can solve many cybersecurity issues and provide premium
protection to your data without extra costs.

4. Smoother IT Operations

If you want to increase the efficiency of your IT professionals, then you can do so with the
help of virtualization. These virtual networks are faster and easier to operate. They also save
all the progress instantaneously and eliminate downtime by doing so. Virtualization can also
help your team solve crucial problems within the cloud computing system.

5. Cost-Effective Strategies

This is a great advantage of virtualization in cloud computing. So, if you want to reduce the
operational costs of your organization, then virtualization is a great way to do so. All of the

data is stored on virtual clouds, which eliminates the need to have multiple physical servers,
which reduces business costs by a lot and reduces waste as well. Maintenance fees and
electrical fees also go down in this process. A lot of server space is also saved thanks to
virtualization, which can be used for other important purposes.

6. Disaster Recovery Is Efficient and Easy

As we already said, if physical servers face an issue, your data can be lost forever, or even
if it is not, it takes a lot of time and effort to recover it. But in virtualization, data is always
backed up onto the cloud system, and thus the recovery process becomes hassle-free and
duplication also becomes very easy.

7. Quick and Easy Set Up

Setting up physical servers and systems is a time-consuming and complicated process. Not
only that, but it also costs a lot of money. But setting up a virtual system in the cloud
computing space is pretty easy, and it takes much less time to set up the whole software
system efficiently.

8. Cloud Migration Becomes Easy

A lot of people tend to think that migrating to a cloud-based system is going to be pretty
difficult. But in reality, the migration from physical servers to a virtual cloud system is pretty
easy and does not take a lot of time, either. It also saves them power costs, cooling costs,
maintenance costs and the costs of a server maintenance engineer as well.

9. Reduce Downtime

If a physical server is stricken by some disaster and needs to be fixed, it could take up days
of time and a lot of money. But with a virtual system, even if one virtual machine has been
affected, you can easily clone or replicate the system and it will only take minutes. This
helps the business continue to run smoothly soon after running into a bump.

10. Virtualization Saves Energy

With a virtual system that replaces physical servers, you can also cut down on the costs of
running and maintaining those physical servers on a daily basis. The organization can cut
back on maintenance, power and energy costs while managing waste better as well.

11. Increase Efficiency and Productivity

Virtualization comes with the upside of fewer servers to take care of. And with fewer
servers, your IT team will now be free of the burden of maintaining the infrastructure and
hardware. Instead of going through the arduous process of installing the updates in the
servers one by one, they can do it on the main virtual server once, and it will be maintained
throughout all the VMs. Much less time will be consumed in maintaining the systems which
will increase productivity.


12. Streamlined Processing and Operations

Virtualization centralizes resources and management, which makes it easier for the IT team
to maintain the system in a more streamlined way. Instead of juggling individual devices,
which can be complicated, they can manage their operations from a single source. Repair,
software installation, patching, and maintenance become much easier and less time-
consuming. This frees up your IT team to focus their energy elsewhere.

13. Control Independence and DevOps

During Dev/Tests, developers can easily clone a virtual machine since the cloud
environment is always segmented into various VMs. They can run a test on this clone very
easily without interrupting the production system. You can apply the latest software patch
to a virtual clone and, after a successful run, can put it into the production application of the
company.

14. Utilization of Hardware Efficiently

With the help of Virtualization, hardware is used efficiently by users as well as Cloud
Service Providers. The user's need for a physical hardware system decreases, and this results
in lower cost. From the Service Provider's point of view, they virtualize the hardware using
hardware virtualization, which decreases the amount of hardware the vendor must provide to
the user. Before Virtualization, companies and organizations had to set up their own servers,
which required extra space, engineers to check their performance, and extra hardware costs;
with the help of Virtualization, these limitations are removed by cloud vendors, who provide
the services without the customer setting up any physical hardware system.

15. Availability increases with Virtualization


One of the main benefits of Virtualization is that it provides advanced features that allow
virtual instances to be available all the time. It also allows virtual instances to be moved
from one virtual server to another, a task that is very tedious and risky on a physical server
system. During the migration of data from one server to another, it ensures its safety. Also,
we can access information from any location, at any time, from any device.

Characteristics of Virtualization
• Increased Security: The ability to control the execution of a guest program in a
completely transparent manner opens new possibilities for delivering a secure,
controlled execution environment. All the operations of the guest programs are
generally performed against the virtual machine, which then translates and applies
them to the host programs.
• Managed Execution: In particular, sharing, aggregation, emulation, and
isolation are the most relevant features.
• Sharing: Virtualization allows the creation of a separate computing
environment within the same host.
• Aggregation: It is possible to share physical resources among several guests,
but virtualization also allows aggregation, which is the opposite process.

Types of Virtualizations


1. Application Virtualization
2. Network Virtualization
3. Desktop Virtualization
4. Storage Virtualization
5. Server Virtualization
6. Data virtualization

1. Application Virtualization: Application virtualization helps a user to have remote access
   to an application from a server. The server stores all personal information and other
   characteristics of the application, but the application can still run on a local workstation
   through the Internet. An example of this would be a user who needs to run two different
   versions of the same software. Technologies that use application virtualization are hosted
   applications and packaged applications.
2. Network Virtualization: The ability to run multiple virtual networks, each with a
   separate control and data plane, co-existing on top of one physical network. The virtual
   networks can be managed by individual parties that are kept isolated from each other.
   Network virtualization provides a facility to create and provision virtual networks, logical
   switches, routers, firewalls, load balancers, Virtual Private Networks (VPNs), and workload
   security within days or even weeks.
3. Desktop Virtualization: Desktop virtualization allows the users’ OS to be remotely
stored on a server in the data center. It allows the user to access their desktop virtually,
from any location by a different machine. Users who want specific operating systems other
than Windows Server will need to have a virtual desktop. The main benefits of desktop
virtualization are user mobility, portability, and easy management of software installation,
updates, and patches.
4. Storage Virtualization: Storage virtualization is an array of servers that are managed by
   a virtual storage system. The servers aren't aware of exactly where their data is stored and
   instead function more like worker bees in a hive. It allows storage from multiple sources
   to be managed and utilized as a single repository. Storage virtualization software
   maintains smooth operations, consistent performance, and a continuous suite of advanced
   functions despite changes, breakdowns, and differences in the underlying
   equipment. Storage virtualization is the process of grouping physical storage from
   multiple network storage devices so that it looks like a single storage device (a small
   sketch of this pooling idea appears after this list). Storage virtualization is also
   implemented by using software applications and is mainly done for backup and recovery
   purposes.
5. Server Virtualization: This is a kind of virtualization in which the masking of server
   resources takes place. Here, the central server (physical server) is divided into multiple
   different virtual servers by changing its identity number and processors, so each system
   can operate its own operating system in an isolated manner, while each sub-server knows
   the identity of the central server. It increases performance and reduces operating cost by
   dividing the main server's resources into sub-server resources. It is beneficial in virtual
   migration, reducing energy consumption, reducing infrastructural costs, etc. When the
   virtual machine software or virtual machine manager (VMM) is installed directly on the
   server system, it is known as server virtualization. Server virtualization is done because
   a single physical server can be divided into multiple servers on demand and for balancing
   the load.
6. Data Virtualization: This is the kind of virtualization in which data is collected from
   various sources and managed in a single place, without users needing to know technical
   details such as how the data is collected, stored, and formatted. The data is arranged
   logically so that its virtual view can be accessed remotely by interested people,
   stakeholders, and users through various cloud services. Many big companies provide such
   services, such as Oracle, IBM, AtScale, CData, etc.
7. Hardware Virtualization: When the virtual machine software or virtual machine
   manager (VMM) is installed directly on the hardware system, it is known as hardware
   virtualization. The main job of the hypervisor is to control and monitor the processor,
   memory, and other hardware resources. After the virtualization of the hardware system, we
   can install different operating systems on it and run different applications on those OSes.
   Hardware virtualization is mainly done for server platforms, because controlling virtual
   machines is much easier than controlling a physical server.
8. Operating System Virtualization: When the virtual machine software or virtual machine
   manager (VMM) is installed on the host operating system instead of directly on the
   hardware system, it is known as operating system virtualization. Operating system
   virtualization is mainly used for testing applications on different OS platforms.
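
The sketch below illustrates the storage-pooling idea from item 4: several physical devices
are grouped so that they appear as one large logical block device. It is a teaching sketch
with hypothetical class names, not a real storage product.

```python
# Illustrative sketch of storage virtualization: several physical devices are
# grouped so they appear as one large logical device. Names are hypothetical.

class PhysicalDisk:
    def __init__(self, name, size_blocks):
        self.name = name
        self.blocks = [None] * size_blocks

class StoragePool:
    """Presents many physical disks as a single linear block address space."""

    def __init__(self, disks):
        self.disks = disks

    def _locate(self, logical_block):
        # Translate a logical block number into (disk, local block number).
        for disk in self.disks:
            if logical_block < len(disk.blocks):
                return disk, logical_block
            logical_block -= len(disk.blocks)
        raise IndexError("logical block out of range")

    def write(self, logical_block, data):
        disk, local = self._locate(logical_block)
        disk.blocks[local] = data

    def read(self, logical_block):
        disk, local = self._locate(logical_block)
        return disk.blocks[local]

pool = StoragePool([PhysicalDisk("disk-a", 100), PhysicalDisk("disk-b", 100)])
pool.write(150, "backup-chunk-1")          # lands on disk-b, local block 50
print(pool.read(150))
```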

Uses of Virtualization
• Data-integration
• Business-integration
• Service-oriented architecture data-services
• Searching organizational data

Implementation Levels of Virtualization


It is not simple to set up virtualization. Your computer runs on an operating system that
gets configured on some particular hardware. It is not feasible or easy to run a different
operating system using the same hardware. To do this, you will need a hypervisor. Now,
what is the role of the hypervisor? It is a bridge between the hardware and the virtual
operating system, which allows smooth functioning. There are a total of five levels that are
commonly used.

1) Instruction Set Architecture Level (ISA)


ISA virtualization can work through ISA emulation. This is used to run legacy code
written for a different hardware configuration. This code runs on a virtual machine
using the emulated ISA. With this, binary code that originally needed additional layers to
run is now capable of running on x86 machines. It can also be tweaked to run on x64
machines. With ISA, it is possible to make the virtual machine hardware agnostic.

For basic emulation, an interpreter is needed, which interprets the source instructions and
converts them into a format the hardware can read, which then allows them to be processed.
This is one of the five implementation levels of virtualization in Cloud Computing.
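
To show what "an interpreter that reads source instructions and carries them out on the host"
means in practice, here is a toy sketch in Python. The instruction set is invented for
illustration; a real ISA emulator decodes actual machine instructions.

```python
# A toy interpreter in the spirit of ISA emulation: instructions written for a
# "foreign" instruction set are decoded one by one and carried out using host
# operations. This is a teaching sketch, not a real emulator.

def emulate(program):
    registers = {"r0": 0, "r1": 0}
    for instruction in program:
        op, *args = instruction.split()
        if op == "LOAD":            # LOAD r0 5  -> put constant 5 into r0
            registers[args[0]] = int(args[1])
        elif op == "ADD":           # ADD r0 r1  -> r0 = r0 + r1
            registers[args[0]] += registers[args[1]]
        elif op == "PRINT":         # PRINT r0
            print(registers[args[0]])
        else:
            raise ValueError(f"unknown instruction: {op}")

# Legacy code written for a different (imaginary) ISA runs unchanged on the host.
emulate(["LOAD r0 5", "LOAD r1 7", "ADD r0 r1", "PRINT r0"])   # prints 12
```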

2) Hardware Abstraction Level (HAL)


True to its name, HAL lets virtualization be performed at the level of the hardware. It
makes use of a hypervisor, which is used for functioning. The virtual machine is formed at
this level, which manages the hardware using the virtualization process. It allows the
virtualization of each of the hardware components, which could be the input-output devices,
the memory, the processor, etc.

Multiple users are able to use the same hardware and run multiple virtualization
instances at the very same time. This is mostly used in cloud-based infrastructure.

3) Operating System Level


At the level of the operating system, the virtualization model is capable of creating a layer
that is abstract between the operating system and the application. This is an isolated
container on the operating system and the physical server, which uses the software and
hardware. Each of these then functions in the form of a server.

When there are several users and no one wants to share the hardware, then this is where the
virtualization level is used. Every user will get his virtual environment using a dedicated
virtual hardware resource. In this way, there is no question of any conflict.

4) Library Level
The operating system is cumbersome, and this is when the applications use the API from
the libraries at a user level. These APIs are documented well, and this is why the library
virtualization level is preferred in these scenarios. API hooks make it possible as it controls
the link of communication from the application to the system.
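
The sketch below illustrates what an API hook at the library level looks like: the
application keeps calling the same library function, while the hook intercepts the call.
This is purely illustrative; real library-level virtualization hooks much larger API surfaces.

```python
# Library-level virtualization works by hooking the API calls an application
# makes. The sketch below interposes on a library function at run time so the
# application keeps calling the same API while the call is intercepted.

import math

_original_sqrt = math.sqrt

def hooked_sqrt(x):
    # The hook can log, translate, or redirect the call before (or instead of)
    # reaching the real implementation.
    print(f"[hook] sqrt called with {x}")
    return _original_sqrt(x)

math.sqrt = hooked_sqrt          # install the API hook

# The application code is unchanged -- it still calls math.sqrt().
print(math.sqrt(16))             # prints the hook message, then 4.0
```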

5) Application Level
The application-level virtualization is used when there is a desire to virtualize only one
application and is the last of the implementation levels of virtualization in Cloud
Computing. One does not need to virtualize the entire environment of the platform.

This is generally used when you run virtual machines that use high-level languages. The
application sits above the virtualization layer, which in turn runs on top of the host system.

It lets programs written in high-level languages, compiled for the application-level virtual
machine, run seamlessly.

Conclusion


There are in total of five implementation levels of virtualization in Cloud Computing.


However, every enterprise may not use each one of the different levels of virtualization
implementation in Cloud Computing. The level used is based on the working of the
company and also on its preference for the level of virtualization. The company will use
the virtual machine to develop and test across multiple platforms. Cloud-based applications
are on the rise, making virtualization a must-have thing for enterprises worldwide.

Virtualization Structure

A virtualization architecture is a conceptual model of a virtual infrastructure that is most
frequently applied in cloud computing. Virtualization itself is the process of creating and
delivering a virtual rather than a physical version of something. This could be a desktop, an
operating system (OS), a server, a storage device, or network resources.

The architecture specifies the arrangement and interrelationships among the particular
components in the virtual environment.

In cloud computing, virtualization facilitates the creation of virtual versions of hardware such
as desktops, as well as virtual ecosystems for OS, storage, memory, and networking resources.
A virtualization architecture runs multiple OSs on the same machine using the same hardware
and also ensures their smooth functioning.

In a virtualization architecture, specialized software is used to create a virtual version of a
computing resource. This eliminates the need to re-create an actual version of that resource. A
logical name is assigned to the resource and a pointer is provided to that resource on demand.
As a result, multiple OSes and applications can run on the same machine and multiple users
(or organizations) can share a single physical instance of a resource or application at the same
time.

The virtualization architecture is a visual depiction or model of virtualization. It maps out and
describes the various virtual elements in the ecosystem, including the following:
• Application virtual services
• Infrastructure virtual services
• Virtual OS
• Hypervisor


The application and infrastructure of virtual services are embedded into a virtual data center or
OS. The hypervisor separates the OS from the underlying hardware and enables a host
machine to simultaneously run multiple VMs that will share the same physical resources.

Xen Virtualization Architecture


Xen is an open-source hypervisor program developed by Cambridge University. Xen is a
micro-kernel hypervisor, which separates the policy from the mechanism.
The Xen hypervisor implements all the mechanisms, leaving the policy to be handled by
Domain 0. Xen does not include any device drivers natively. It just provides a mechanism by
which a guest OS can have direct access to the physical devices. As a result, the size of the Xen
hypervisor is kept rather small. Xen provides a virtual environment located between the
hardware and the OS. Several vendors are in the process of developing commercial Xen
hypervisors, among them are Citrix XenServer and Oracle VM.

The core components of a Xen system are the hypervisor, kernel, and applications. The
organization of the three components is important. Like other virtualization systems, many
guest OSes can run on top of the hypervisor. However, not all guest OSes are created equal,
and one in particular controls the others. The guest OS, which has control ability, is called
Domain 0, and the others are called Domain U. Domain 0 is a privileged guest OS of Xen. It
is first loaded when Xen boots without any file system drivers being available. Domain 0 is
designed to access hardware directly and manage devices. Therefore, one of the responsibilities
of Domain 0 is to allocate and map hardware resources for the guest domains (the Domain U
domains).
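
The sketch below illustrates the control relationship just described: a privileged Domain 0
tracks the physical resources and allocates them to unprivileged Domain U guests. The class
and method names are invented for illustration and do not correspond to the real Xen toolstack.

```python
# Sketch of the Xen control relationship described above: a privileged Domain 0
# tracks physical resources and maps them to unprivileged Domain U guests.
# Class and method names are illustrative only.

class Domain0:
    def __init__(self, total_memory_mb, devices):
        self.free_memory_mb = total_memory_mb
        self.free_devices = list(devices)
        self.guests = {}

    def create_domU(self, name, memory_mb, device):
        # Domain 0 is responsible for allocating and mapping hardware resources.
        if memory_mb > self.free_memory_mb or device not in self.free_devices:
            raise RuntimeError("insufficient physical resources")
        self.free_memory_mb -= memory_mb
        self.free_devices.remove(device)
        self.guests[name] = {"memory_mb": memory_mb, "device": device}
        return self.guests[name]

dom0 = Domain0(total_memory_mb=8192, devices=["eth0", "sda"])
print(dom0.create_domU("domU-1", memory_mb=2048, device="eth0"))
```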
For example, Xen is based on Linux and its security level is C2. Its management VM is named
Domain 0, which has the privilege to manage other VMs implemented on the same host. If
Domain 0 is compromised, the hacker can control the entire system. So, in the VM system,
security policies are needed to improve the security of Domain 0. Domain 0, behaving as a
VMM, allows users to create, copy, save, read, modify, share, migrate, and roll back VMs as
easily as manipulating a file, which flexibly provides tremendous benefits for users.
Unfortunately, it also brings a series of security problems during the software life cycle and
data lifetime. Traditionally, a machine’s lifetime can be envisioned as a straight line where the
current state of the machine is a point that progresses monotonically as the software executes.
During this time, configuration changes are made, the software is installed, and patches are
applied. In such an environment, the VM state is akin to a tree: At any point, execution can go
into N different branches where multiple instances of a VM can exist at any point in this tree

at any given time. VMs are allowed to roll back to previous states in their execution (e.g., to
fix configuration errors) or rerun from the same point many times (e.g., as a means of
distributing dynamic content or circulating a “live” system image).
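
The "VM state as a tree" idea above can be made concrete with a tiny snapshot-tree data
structure: from any snapshot, execution can branch into several children, and rolling back
simply means resuming from an earlier node. This is a hypothetical sketch for illustration.

```python
# Sketch of the snapshot tree described above. From any node, execution can
# branch into N children; rolling back means resuming from an earlier node.

class Snapshot:
    def __init__(self, label, parent=None):
        self.label = label
        self.parent = parent
        self.children = []
        if parent:
            parent.children.append(self)

    def lineage(self):
        # Walk back to the root to show which states this branch came from.
        node, path = self, []
        while node:
            path.append(node.label)
            node = node.parent
        return list(reversed(path))

base = Snapshot("fresh-install")
patched = Snapshot("security-patch", parent=base)
test_a = Snapshot("test-config-A", parent=patched)
test_b = Snapshot("test-config-B", parent=patched)   # a second branch from the same point

print(test_b.lineage())   # ['fresh-install', 'security-patch', 'test-config-B']
# Rolling back is simply resuming from an earlier node, e.g. `patched`.
```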
Full Virtualization

Full virtualization is the first virtualization software solution that ever existed in the
industry. It was first developed in the late 1990s and early 2000s. Full virtualization, as its
name implies, keeps the VM completely divided from the hardware, and the VM is unaware
that it is running in a virtual environment.

In the full virtualization method, the guest operating system does not need to be modified,
which makes it portable and allows it to support nearly any operating system on the market.
Full virtualization, as a technique of operation, uses binary translation and a direct approach
to execute instructions from a virtual machine on physical hardware. However, full
virtualization lacks performance and speed, since it incurs overheads such as hardware
emulation overhead and context-switching overhead.

With full virtualization, noncritical instructions run on the hardware directly while critical
instructions are discovered and replaced with traps into the VMM to be emulated by software.
Both the hypervisor and VMM approaches are considered full virtualization. Why are only
critical instructions trapped in the VMM? This is because binary translation can incur a large
performance overhead. Noncritical instructions do not control hardware or threaten the security
of the system, but critical instructions do. Therefore, running noncritical instructions on
hardware not only can promote efficiency but also can ensure system security.
In the full virtualization technique, the hypervisor completely simulates the underlying
hardware. The main advantage of this technique is that it allows the running of the
unmodified OS. In full virtualization, the guest OS is completely unaware that it’s being
virtualized.
Full virtualization uses a combination of direct execution and binary translation. This allows
direct execution of non-sensitive CPU instructions, whereas sensitive CPU instructions are
translated on the fly. To improve performance, the hypervisor maintains a cache of the recently
translated instructions.
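
The execution loop just described can be sketched as follows: noncritical instructions run
"directly", critical instructions trap into the VMM and are emulated, and recently translated
instructions are cached. The instruction names and cache are entirely illustrative.

```python
# Sketch of the full-virtualization execution loop described above: noncritical
# instructions run directly, critical ones trap into the VMM and are emulated,
# and recently translated instructions are cached. Entirely illustrative.

CRITICAL = {"OUT", "HLT", "LGDT"}        # pretend these touch hardware/privileged state
translation_cache = {}

def run_direct(instr):
    print(f"direct execution: {instr}")

def emulate_in_vmm(instr):
    if instr in translation_cache:        # reuse a previous translation
        translated = translation_cache[instr]
    else:
        translated = f"safe[{instr}]"     # stand-in for binary translation
        translation_cache[instr] = translated
    print(f"trapped to VMM, executing {translated}")

def execute(instruction_stream):
    for instr in instruction_stream:
        opcode = instr.split()[0]
        if opcode in CRITICAL:
            emulate_in_vmm(instr)         # critical: trap and emulate
        else:
            run_direct(instr)             # noncritical: run on hardware

execute(["MOV r1 r2", "ADD r1 r3", "OUT port1", "OUT port1"])
```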

Full Virtualization is a virtualization technique that simulates an entire physical computer,
including its hardware components, to create multiple virtual machines (VMs) that run
independently on a single physical host. In this approach, the guest operating system is unaware
that it's running in a virtualized environment, as it interacts with virtualized hardware that
emulates real hardware.

How Does Full Virtualization Work?

1. Hypervisor Layer:
Full virtualization relies on a hypervisor, also known as a Virtual Machine Monitor
(VMM), which sits between the physical hardware and guest operating systems. The
hypervisor manages and controls the allocation of physical resources to virtual
machines.
2. Hardware Virtualization:
Full virtualization uses hardware-assisted virtualization technologies like Intel VT-
x or AMD-V to enhance performance. These technologies allow the hypervisor to run
guest OSes directly on the physical CPU without significant performance overhead.
3. Isolation:
VMs created through full virtualization are completely isolated from each other. Each
VM runs its instance of the guest operating system, which cannot interfere with other
VMs.

Features of Full Virtualization

• Portability: Since the guest operating system does not need any sort of modification,
it is easy to move it.
• Lower Security: Full virtualization is considered less secure compared to
paravirtualization due to its architecture and the method of communication between
the guest operating system and hypervisor.
• Slower and Lacks Performance: Since full virtualization does not allow the guest
operating system to communicate directly with the hardware, it lacks performance and
speed.
• No Guest Operating System Modification: Full virtualization does not need guest
OS modifications, because the hypervisor does not communicate directly with the guest
OS; the guest simply runs on the emulated hardware.

Para Virtualization

Paravirtualization is a virtualization technique that is popular in the industry. It does not
separate the VM from the hardware as fully as full virtualization does; rather, the VM is
partially isolated from the hardware. It uses a modified guest OS that knows it is running in a
virtualized environment, with the OS kernel altered to use hypercalls. By doing this, it improves
the performance and the speed of the VM. Altering the OS decreases the portability and
support for other operating systems, meaning that not all operating systems on the market can
run on paravirtualization.

Para-virtualization needs to modify the guest operating systems. A para-virtualized VM
provides special APIs requiring substantial OS modifications in user applications. Performance
degradation is a critical issue of a virtualized system. No one wants to use a VM if it is much
slower than using a physical machine. The virtualization layer can be inserted at different
positions in a machine software stack. However, para-virtualization attempts to reduce the
virtualization overhead, and thus improve performance by modifying only the guest OS kernel.

In paravirtualization, the hypervisor doesn't simulate underlying hardware. Instead, it
provides hypercalls. Hypercalls are similar to kernel system calls. They allow the guest OS to
communicate with the hypervisor. The guest OS uses hypercalls to execute sensitive CPU
instructions. This technique is not as portable as full virtualization, as it requires modification
in the guest OS. However, it provides better performance because the guest OS is aware that
it’s being virtualized.
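
The sketch below illustrates the hypercall idea: a paravirtualized guest kernel asks the
hypervisor to perform a sensitive operation instead of issuing a privileged hardware
instruction itself. The hypervisor interface shown here is invented for illustration.

```python
# Sketch of a hypercall: the paravirtualized guest kernel calls the hypervisor
# directly instead of issuing a sensitive hardware instruction. Illustrative only.

class Hypervisor:
    def __init__(self):
        self.page_tables = {}

    def hypercall(self, guest_id, op, **args):
        # Similar in spirit to a system call, but from guest kernel to hypervisor.
        if op == "update_page_table":
            self.page_tables.setdefault(guest_id, {})[args["virtual"]] = args["machine"]
            return "ok"
        raise ValueError(f"unsupported hypercall: {op}")

class ParavirtualizedGuestKernel:
    def __init__(self, guest_id, hypervisor):
        self.guest_id = guest_id
        self.hypervisor = hypervisor

    def map_memory(self, virtual_page, machine_page):
        # A native kernel would program the MMU directly (a sensitive instruction);
        # a paravirtualized kernel asks the hypervisor to do it instead.
        return self.hypervisor.hypercall(
            self.guest_id, "update_page_table",
            virtual=virtual_page, machine=machine_page,
        )

hv = Hypervisor()
guest = ParavirtualizedGuestKernel("domU-1", hv)
print(guest.map_memory(0x1000, 0x9F000))   # -> ok
```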

Para virtualization takes a slightly different approach to virtualization. It involves modifying
the guest operating systems to be aware of the virtualized environment. Unlike full
virtualization, where the guest OS runs unmodified, paravirtualization requires guest OSes to
use a specific set of APIs to interact with the virtualization layer.

Although para-virtualization reduces the overhead, it has incurred other problems.


1. Its compatibility and portability may be in doubt because it must support the
unmodified OS as well.
2. The cost of maintaining para-virtualized OSes is high, because they may require deep
OS kernel modifications.
3. The performance advantage of para-virtualization varies greatly due to workload
variations.
Compared with full virtualization, para-virtualization is relatively easy and more practical. The
main problem with full virtualization is its low performance in binary translation. To speed up
binary translation is difficult. Therefore, many virtualization products employ the para-
virtualization architecture. The popular Xen, KVM, and VMware ESX are good examples.
How Para Virtualization Works?

1. Hypervisor Layer:
Similar to full virtualization, paravirtualization also employs a hypervisor, but here,
the guest operating systems are aware of it. The hypervisor provides a set of APIs that
guest OSes must use to communicate with the underlying hardware.
2. Guest OS Modifications:
Guest operating systems must be modified to replace certain hardware-related
instructions with hypercalls, which are calls to the hypervisor. These hypercalls allow

the guest OS to request services from the hypervisor, such as memory management or
CPU scheduling.
3. Performance Benefits:
Since para virtualization avoids the overhead of emulating complete hardware, it often
offers better performance than full virtualization. Guest OSes can communicate more
directly with the hypervisor, resulting in improved efficiency.

Features of Paravirtualization

• Modifies Guest Operating System: In paravirtualization, the guest operating system
must be modified to make it use hypercalls to communicate with the hypervisor.
• Faster and More Performance: Since the virtual machine in paravirtualization
has direct access to the hypervisor or virtualization layer, it is much faster and offers
more performance.
• Less Portable: A paravirtualized system is less portable due to it being tightly coupled
with the underlying hypervisor.
• Low Compatibility: Not all operating systems can be modified in the way the
paravirtualization method needs, thus limiting the scope of operating systems that can
be used with a paravirtualized hypervisor.

Full Virtualization VS Paravirtualization

1. Generalized

| Full Virtualization | Paravirtualization |
| --- | --- |
| Easily portable | Less portable |
| Uses binary translation and a direct approach | Uses hypercalls |
| Low security | High security |
| Slower | Faster |
| Less performance | More performance |
| High compatibility with any type of operating system | Low compatibility with operating systems |
| No OS modification | OS modification required |
| Microsoft and VMware | Xen and KVM |

2. Aspect wise

| Aspect | Full Virtualization | Para Virtualization |
| --- | --- | --- |
| Guest OS Modification | Not required; runs unmodified | Requires modifications to use hypercalls |
| Performance | Slightly lower due to emulation | Better performance due to direct interaction |
| Isolation | Strong isolation between VMs | Isolation with awareness of other VMs |
| Guest OS Flexibility | Supports various OS types | Works best with compatible OSes |
| Hardware Compatibility | Compatible with most hardware | Requires hardware support for para virtualization |
| Hypervisor Layer | Hypervisor manages virtual hardware independently | Hypervisor provides APIs for communication |
| Interaction with Hardware | Emulates complete hardware | Uses hypercalls to request hardware services |
| Examples | VMware, Hyper-V, VirtualBox | Xen, KVM, QEMU |
| Resource Overhead | Slightly higher resource overhead | Lower resource overhead for the hypervisor |
| Use Cases | Mixed OS environments, strong isolation needed | Performance-critical applications, homogeneous OS environment |

Virtualization of CPU
A single CPU can run numerous operating systems (OS) via CPU virtualization in cloud
computing. This is possible by creating virtual machines (VMs) that share the physical
resources of the CPU. The Virtual Machines can't see or interact with each other's data or
processes.

CPU virtualization is very important in cloud computing. It enables cloud providers to offer
services like –
• Virtual private servers (VPSs)
• Cloud storage (EBS)
• Cloud computing platforms (AWS, Azure and Google Cloud)

Consider an example to understand CPU virtualization. Imagine we have a physical server with
a single CPU. We want to run two different operating systems on this server, Windows &
Linux. This can easily be done by creating two Virtual Machines (VMs), one for Windows and
one for Linux. The virtualization software will create a virtual CPU for each VM. The virtual
CPUs will execute on the physical CPU, but separately. This means the Windows Virtual
Machine cannot view or communicate with the Linux VM, and vice versa.

The virtualization software will also allocate memory and other resources to each VM. This
guarantees each VM has enough resources to execute. CPU virtualization is difficult to
implement, but it is necessary for cloud computing.

How does CPU Virtualization work? In Step by Step Process

Step 1: Creating Virtual Machines (VMs)


• Let's take an example: you have a powerful computer with a CPU, memory, and other
resources.
• To start CPU virtualization, you use special software called a hypervisor. This is like
the conductor of a virtual orchestra.
• The hypervisor creates virtual machines (VMs) – these are like separate, isolated worlds
within your computer.
• The “virtual” resources of each VM include CPU, memory, and storage. It’s like having
multiple mini-computers inside your main computer.

Step 2: Allocating Resources

• The hypervisor carefully divides the real CPU’s processing power among the VMs. It’s
like giving each VM its slice of the CPU pie.
• It also makes sure that each virtual machine (VM) gets its share of memory, storage,
and other resources.

Step 3: Isolation and Independence

Each VM operates in its isolated environment. It can’t see or interfere with what’s happening
in other VMs.

Step 4: Running Operating Systems and Apps

• Within each Virtual Machine, you can install & run different operating systems (like
Windows, and Linux) and applications.
• The VM thinks it’s a real computer, even though it’s sharing the actual computer’s
resources with other VMs.

Step 5: Managing Workloads

• The hypervisor acts as a smart manager, deciding when each VM gets to use the real
CPU.
• It ensures that no VM takes up all the CPU time, making sure everyone gets their turn
to work.

Step 6: Efficient Use of Resources

• Even though there’s only one physical CPU, each VM believes it has its dedicated CPU.
• The hypervisor cleverly switches between VMs so that all the tasks appear to be
happening simultaneously.
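
A toy round-robin scheduler makes Steps 5 and 6 concrete: one physical CPU is shared by
giving each VM's virtual CPU a short time slice in turn. The VM names and time slices are
illustrative; real hypervisor schedulers are far more sophisticated.

```python
# Toy round-robin scheduler illustrating Steps 5-6: one physical CPU is shared
# by giving each VM's virtual CPU a short time slice in turn.

from collections import deque

def schedule(vms, time_slice_ms, total_time_ms):
    run_queue = deque(vms)
    clock = 0
    while clock < total_time_ms:
        vm = run_queue.popleft()
        print(f"t={clock:4d}ms  physical CPU runs vCPU of {vm}")
        clock += time_slice_ms
        run_queue.append(vm)      # back of the queue: everyone gets a turn

schedule(["windows-vm", "linux-vm", "db-vm"], time_slice_ms=10, total_time_ms=60)
```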

Advantages Of CPU Virtualization in Cloud Computing

1) Efficient Resource Utilization


CPU virtualization lets one powerful machine handle multiple tasks simultaneously. This
maximizes the use of h/w resources and reduces wastage.

2) Cost Savings
By running multiple virtual machines on a single physical server, cloud providers save on
hardware costs, energy consumption, and maintenance.

3) Scalability
CPU virtualization allows easy scaling, adding or removing virtual machines according to
demand. This flexibility helps businesses adapt to changing needs as per requirements.

4) Isolation and Security


Each Virtual Machine (VM) is isolated from others, providing a layer of security. If one VM
has a problem, it’s less likely to affect others.

5) Compatibility and Testing


Different operating systems (OS) & applications can run on the same physical hardware (h/w),
making it easier to test new software without affecting existing setups.

Disadvantages Of CPU Virtualization In Cloud Computing:

1) Overhead
The virtualization layer adds some overhead, which means a small portion of CPU power is
used to manage virtualization itself.

2) Performance Variability

Depending on the number of virtual machines and their demands, performance can vary. If one
VM needs a lot of resources, others might experience slower performance.

3) Complexity

Handling multiple virtual machines and how they work together needs expertise. Creating and
looking after virtualization systems can be complicated.

4) Compatibility Challenges

Some older software or hardware might not work well within virtualized environments.
Compatibility issues can arise.

5) Resource Sharing

While CPU virtualization optimizes resource usage, if one VM suddenly requires a lot of
resources, it might impact the performance of others.

Virtualization of Memory and I/O Devices


Memory Virtualization:
To support virtualization, processors such as the x86 employ a special running mode and
instructions, known as hardware-assisted virtualization. In this way, the VMM and guest OS
run in different modes, and all sensitive instructions of the guest OS and its applications are
trapped in the VMM. To save processor states, mode switching is completed by hardware. For
the x86 architecture, Intel and AMD have proprietary technologies for hardware-assisted
virtualization. Hardware Support for Virtualization: Modern operating systems and processors
permit multiple processes to run simultaneously. If there were no protection mechanism in a
processor, all instructions from different processes would access the hardware directly and
cause a system crash. Therefore, all processors have at least two modes, user mode and
supervisor mode, to ensure controlled access to critical hardware. Instructions running in
supervisor mode are called privileged instructions. Other instructions are unprivileged
instructions. In a virtualized environment, it is more difficult to make OSes and applications
run correctly because there are more layers in the machine stack.

Virtual memory virtualization is similar to the virtual memory support provided by modern
operating systems. In a traditional execution environment, the operating system maintains
mappings of virtual memory to machine memory using page tables, which is a one-stage
mapping from virtual memory to machine memory. All modern x86 CPUs include a memory
management unit (MMU) and a translation lookaside buffer (TLB) to optimize virtual memory
performance. However, in a virtual execution environment, virtual memory virtualization
involves sharing the physical system memory in RAM and dynamically allocating it to the
physical memory of the VMs. That means a two-stage mapping process should be maintained
by the guest OS and the VMM, respectively: virtual memory to physical memory and physical
memory to machine memory. Furthermore, MMU virtualization should be supported, which is
transparent to the guest OS. The guest OS continues to control the mapping of virtual addresses
to the physical memory addresses of VMs. However, the guest OS cannot directly access the
actual machine memory. The VMM is responsible for mapping the guest's physical memory to
the actual machine memory.
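
The two-stage mapping described above can be sketched with two lookup tables: the guest OS
maintains the guest-virtual-to-guest-physical mapping, and the VMM maintains the
guest-physical-to-machine mapping. The page numbers are invented for illustration.

```python
# Sketch of the two-stage mapping: guest virtual -> guest "physical" (maintained
# by the guest OS) and guest physical -> machine memory (maintained by the VMM).

guest_page_table = {0x1: 0x10, 0x2: 0x11}      # guest virtual page -> guest physical page
vmm_page_table   = {0x10: 0x7A, 0x11: 0x7B}    # guest physical page -> machine page

def translate(guest_virtual_page):
    guest_physical = guest_page_table[guest_virtual_page]   # stage 1: guest OS
    machine_page = vmm_page_table[guest_physical]           # stage 2: VMM
    return machine_page

print(hex(translate(0x2)))   # 0x7b -- the guest never sees the machine page directly
```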
I/O Virtualization:
I/O virtualization involves managing the routing of I/O requests between virtual devices and
the shared physical hardware.


There are three ways to implement I/O virtualization:


• Full device emulation
• Paravirtualization
• Direct I/O
Full device emulation is the first approach for I/O virtualization. Generally, this approach
emulates well-known, real-world devices. All the functions of a device or bus infrastructure,
such as device enumeration, identification, interrupts, and DMA, are replicated in software.
This software is located in the VMM and acts as a virtual device. The I/O access requests of
the guest OS are trapped in the VMM which interacts with the I/O devices. A single hardware
device can be shared by multiple VMs that run concurrently. However, software emulation runs
much slower than the hardware it emulates. The para-virtualization method of I/O virtualization
is typically used in Xen. It is also known as the split driver model consisting of a frontend
driver and a backend driver. The frontend driver is running in Domain U and the backend driver
is running in Domain 0. They interact with each other via a block of shared memory. The
frontend driver manages the I/O requests of the guest OSes and the backend driver is
responsible for managing the real I/O devices and multiplexing the I/O data of different VMs.
Although para I/O-virtualization achieves better device performance than full device
emulation, it comes with a higher CPU overhead.
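
The split driver model just described can be sketched as a frontend driver in each Domain U
placing requests into shared memory, with the backend driver in Domain 0 picking them up and
driving the real device. The queue below stands in for the shared-memory ring; all names are
illustrative.

```python
# Sketch of Xen's split driver model: a frontend driver in Domain U submits I/O
# requests into shared memory, and the backend driver in Domain 0 multiplexes
# and performs them on the real device.

from queue import Queue

shared_ring = Queue()                 # stand-in for the shared memory block

class FrontendDriver:                 # runs in Domain U
    def __init__(self, domain):
        self.domain = domain

    def submit_io(self, request):
        shared_ring.put((self.domain, request))

class BackendDriver:                  # runs in Domain 0
    def process(self):
        while not shared_ring.empty():
            domain, request = shared_ring.get()
            print(f"Domain 0 performs {request!r} on the real device for {domain}")

FrontendDriver("domU-1").submit_io("read block 42")
FrontendDriver("domU-2").submit_io("write block 7")
BackendDriver().process()
```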
Hypervisors
A hypervisor, also known as a virtual machine monitor or VMM, is software that creates and
runs virtual machines (VMs). A hypervisor allows one host computer to support multiple guest
VMs by virtually sharing its resources, such as memory and processing.
It is a small software layer that enables multiple instances of operating systems to run
alongside each other, sharing the same physical computing resources. This prevents the
VMs from interfering with each other; so if, for example, one OS suffers a crash or a
security compromise, the others survive.
A hypervisor is a form of virtualization software used in Cloud hosting to divide and allocate
the resources on various pieces of hardware. The program which provides partitioning,
isolation, or abstraction is called a virtualization hypervisor. The hypervisor is a hardware
virtualization technique that allows multiple guest operating systems (OS) to run on a single
host system at the same time. A hypervisor is sometimes also called a virtual machine manager
(VMM).
Benefits of hypervisors
• Speed: Hypervisors allow virtual machines to be created instantly, unlike bare-metal
servers. This makes it easier to provision resources as needed for dynamic workloads.
• Efficiency: Hypervisors that run several virtual machines on one physical machine’s
resources also allow for more efficient utilization of one physical server. It is more cost- and
energy-efficient to run several virtual machines on one physical machine than to run
multiple underutilized physical machines for the same task.
• Flexibility: Bare-metal hypervisors allow operating systems and their associated
applications to run on a variety of hardware types because the hypervisor separates the OS
from the underlying hardware, so the software no longer relies on specific hardware devices
or drivers.


• Portability: Hypervisors allow multiple operating systems to reside on the same physical
server (host machine). Because the virtual machines that the hypervisor runs are
independent of the physical machine, they are portable. IT teams can shift workloads and
allocate networking, memory, storage, and processing resources across multiple servers as
needed, moving from machine to machine or platform to platform. When an application
needs more processing power, the virtualization software allows it to seamlessly access
additional machines.

Why use a hypervisor?

Hypervisors make it possible to use more of a system’s available resources and provide greater
IT mobility since the guest VMs are independent of the host hardware. This means they can be
easily moved between different servers. Because multiple virtual machines can run off of one
physical server with a hypervisor, a hypervisor reduces:
• Space
• Energy
• Maintenance requirements

Characteristics of hypervisors

There are different categories of hypervisors and different brands of hypervisors within each
category. The market has matured to make hypervisors a commodity product in the enterprise
space, but there are still differentiating factors that should guide your choice. Here’s what to
look for:

• Performance. Look for benchmark data that show how well the hypervisor performs in a
production environment. Ideally, bare-metal hypervisors should support guest OS
performance close to native speeds.

• Ecosystem. You will need good documentation and technical support to implement and
manage hypervisors across multiple physical servers at scale. Also, look for a healthy
community of third-party developers that can support the hypervisor with their own agents
and plugins that offer capabilities such as backup and restore, capacity analysis, and fail-
over management.

• Management tools. Running VMs isn’t the only thing you must manage when using a
hypervisor. You must provision the VMs, maintain them, audit them, and clean up disused
ones to prevent "VM sprawl." Ensure that the vendor or third-party community supports
the hypervisor architecture with comprehensive management tools.

• Live migration. This enables you to move VMs between hypervisors on different physical
machines without stopping them, which can be useful for both fail-over and workload
balancing.

• Cost. Consider the cost and fee structure involved in licensing hypervisor technology.
Don’t just think about the cost of the hypervisor itself. The management software that
makes it scalable to support an enterprise environment can often be expensive. Lastly,
examine the vendor’s licensing structure, which may change depending on whether you
deploy it in the cloud or locally.


Types of hypervisors

There are two main hypervisor types, referred to as “Type 1” (or “bare metal”) and “Type 2”
(or “hosted”). A type 1 hypervisor acts like a lightweight operating system and runs directly
on the host’s hardware, while a type 2 hypervisor runs as a software layer on an operating
system, like other computer programs.

Both types of hypervisors can run multiple virtual servers for multiple tenants on one physical
machine. Public cloud service providers lease server space on different virtual servers to
different companies. One server might host several virtual servers that are all running
workloads for different companies. This type of resource sharing can result in a “noisy
neighbor” effect when one of the tenants runs a large workload that interferes with the server
performance for other tenants. It also poses more of a security risk than using a dedicated bare-
metal server.

A bare-metal server that a single company has full control over will always provide higher
performance than a virtual server that is sharing a physical server’s bandwidth, memory and
processing power with other virtual servers. The hardware for bare-metal servers can also be
optimized to increase performance, which is not the case with shared public servers. Businesses
that need to comply with regulations that require physical separation of resources will need to
use their bare-metal servers that do not share resources with other tenants.

TYPE-1 Hypervisor:
The hypervisor runs directly on the underlying host system. It is also known as a “Native
Hypervisor” or “Bare metal hypervisor”. It does not require any base server operating system.
It has direct access to hardware resources. The most commonly deployed type of hypervisor
is the type 1 or bare-metal hypervisor, where virtualization software is installed directly on
the hardware where the operating system is normally installed. Because bare-metal
hypervisors are isolated from the attack-prone operating system, they are extremely secure. In
addition, they generally perform better and more efficiently than hosted hypervisors. For
these reasons, most enterprise companies choose bare-metal hypervisors for data
center computing needs.

A Type 1 hypervisor runs directly on the underlying computer's physical hardware,
interacting directly with its CPU, memory, and physical storage. For this reason, Type 1
hypervisors are also referred to as bare-metal hypervisors. A Type 1 hypervisor takes the
place of the host operating system.


Type 1 hypervisors are highly efficient because they have direct access to physical hardware.
This also increases their security, because there is nothing in between them and the CPU that
an attacker could compromise. But a Type 1 hypervisor often requires a separate management
machine to administer different VMs and control the host hardware.

Pros & Cons of Type-1 Hypervisor:


Pros: Such hypervisors are very efficient because they have direct access to the physical
hardware resources (like CPU, memory, network, and physical storage). This also strengthens
security, because there is no intermediate layer that an attacker could compromise.
Cons: One problem with Type-1 hypervisors is that they usually need a dedicated separate
machine to perform their operations, administer the different VMs, and control the host
hardware resources.

TYPE-2 Hypervisor:
Here, a host operating system runs on the underlying host system. It is also known as a
"Hosted Hypervisor". Such hypervisors don't run directly over the underlying hardware;
rather, they run as an application on a host system (physical machine). The software is
installed on an operating system, and the hypervisor asks the operating system to make
hardware calls. Examples of Type 2 hypervisors include VMware Player and Parallels
Desktop. Hosted hypervisors are often found on endpoints like PCs. The type-2 hypervisor is
very useful for engineers and security analysts (for checking malware, malicious source code,
and newly developed applications).

While bare-metal hypervisors run directly on the computing hardware, hosted hypervisors run
on top of the operating system (OS) of the host machine. Although hosted hypervisors run
within the OS, additional (and different) operating systems can be installed on top of the
hypervisor. The downside of hosted hypervisors is that latency is higher than bare-metal
hypervisors. This is because communication between the hardware and the hypervisor must
pass through the extra layer of the OS. Hosted hypervisors are sometimes known as client
hypervisors because they are most often used with end users and software testing, where higher
latency is less of a concern.

A Type 2 hypervisor doesn’t run directly on the underlying hardware. Instead, it runs as an
application in an OS. Type 2 hypervisors rarely show up in server-based environments.
Instead, they’re suitable for individual PC users needing to run multiple operating systems.


Examples include engineers, security professionals analyzing malware, and business users
who need access to applications only available on other software platforms.

Type 2 hypervisors often feature additional toolkits for users to install into the guest OS.
These tools provide enhanced connections between the guest and the host OS, often enabling
the user to cut and paste between the two or access host OS files and folders from within the
guest VM.

A Type 2 hypervisor enables quick and easy access to an alternative guest OS alongside the
primary one running on the host system. This makes it great for end-user productivity. A
consumer might use it to access their favorite Linux-based development tools while using a
speech dictation system only found in Windows, for example.

However, because a Type 2 hypervisor must access computing, memory, and network
resources via the host OS, it introduces latency issues that can affect performance. It also
introduces potential security risks if an attacker compromises the host OS because they could
then manipulate any guest OS running in the Type 2 hypervisor.

Pros & Cons of Type-2 Hypervisor:


Pros: Such hypervisors allow quick and easy access to a guest operating system alongside the
one running on the host machine. These hypervisors usually come with additional useful
features for guest machines, and such tools enhance the coordination between the host machine
and the guest machine.
Cons: Here there is no direct access to the physical hardware resources, so the efficiency of
these hypervisors lags behind that of type-1 hypervisors. There are also potential security
risks: if an attacker gains access to the host operating system, they can also access the guest
operating systems.

Choosing the right hypervisor:


Type 1 hypervisors offer much better performance than Type 2 ones because there’s no
middle layer, making them the logical choice for mission-critical applications and workloads.
But that’s not to say that hosted hypervisors don’t have their place – they’re much simpler to
set up, so they’re a good bet if, say, you need to deploy a test environment quickly. One of the
best ways to determine which hypervisor meets your needs is to compare their performance
metrics. These include CPU overhead, the amount of maximum host and guest memory, and
support for virtual processors. The following factors should be examined before choosing a
suitable hypervisor:
1. Understand your needs: The company and its applications are the reason for the data center
(and your job). Besides your company’s needs, you (and your co-workers in IT) also have your
own needs. The needs for a virtualization hypervisor are:
a. Flexibility
b. Scalability
c. Usability
d. Availability
e. Reliability
f. Efficiency
g. Reliable support
2. The cost of a hypervisor: For many buyers, the toughest part of choosing a hypervisor is
striking the right balance between cost and functionality. While several entry-level solutions
are free, or practically free, the prices at the opposite end of the market can be staggering.
Licensing frameworks also vary, so it’s important to be aware of exactly what you’re getting
for your money.
3. Virtual machine performance: Virtual systems should meet or exceed the performance of
their physical counterparts, at least with regard to the applications within each server.
Everything beyond meeting this benchmark is profit.
4. Ecosystem: It’s tempting to overlook the role of a hypervisor’s ecosystem – that is, the
availability of documentation, support, training, third-party developers and consultancies, and
so on – in determining whether or not a solution is cost-effective in the long term.
5. Test for yourself: You can gain basic experience from your existing desktop or laptop. You
can run both VMware vSphere and Microsoft Hyper-V in either VMware Workstation or
VMware Fusion to create a nice virtual learning and testing environment.

KVM Hypervisors

Kernel-based Virtual Machine (KVM) is an open-source virtualization technology built into
Linux. Specifically, KVM lets you turn Linux into a hypervisor that allows a host machine to
run multiple, isolated virtual environments called guests or virtual machines (VMs).

KVM is part of Linux. If you’ve got Linux 2.6.20 or newer, you’ve got KVM. KVM was first
announced in 2006 and merged into the mainline Linux kernel version a year later. Because
KVM is part of existing Linux code, it immediately benefits from every new Linux feature, fix,
and advancement without additional engineering.

How does KVM work?

KVM converts Linux into a type-1 (bare-metal) hypervisor. All hypervisors need some
operating system-level components—such as a memory manager, process scheduler,
input/output (I/O) stack, device drivers, security manager, a network stack, and more—to run
VMs. KVM has all these components because it’s part of the Linux kernel. Every VM is
implemented as a regular Linux process, scheduled by the standard Linux scheduler, with
dedicated virtual hardware like a network card, graphics adapter, CPU(s), memory, and disks.
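
In practice, KVM guests are commonly managed through the libvirt toolkit. The short sketch
below assumes the libvirt Python bindings are installed and a local KVM/QEMU hypervisor is
reachable at the usual qemu:///system URI; it simply lists the defined VMs and whether each
is running. Output and connection URI will vary by system.

```python
# Hedged example: requires the libvirt Python bindings and a reachable local
# KVM/QEMU hypervisor (qemu:///system); connection URI and output will vary.

import libvirt

conn = libvirt.open("qemu:///system")        # connect to the local KVM hypervisor
try:
    for dom in conn.listAllDomains():        # every defined VM (running or not)
        state = "running" if dom.isActive() else "shut off"
        print(f"{dom.name():20s} {state}")
finally:
    conn.close()
```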

KVM features

KVM is part of Linux. Linux is part of KVM. Everything Linux has, KVM has too. However,
there are specific features that make KVM an enterprise’s preferred hypervisor.

1. Security

KVM uses a combination of security-enhanced Linux (SELinux) and secure virtualization
(sVirt) for enhanced VM security and isolation. SELinux establishes security boundaries
around VMs. sVirt extends SELinux’s capabilities, allowing Mandatory Access Control
(MAC) security to be applied to guest VMs and preventing manual labeling errors.

2. Storage

KVM can use any storage supported by Linux, including some local disks and network-
attached storage (NAS). Multipath I/O may be used to improve storage and provide
redundancy. KVM also supports shared file systems so VM images may be shared by multiple
hosts. Disk images support thin provisioning, allocating storage on demand rather than all up
front.

3. Hardware support

KVM can use a wide variety of certified Linux-supported hardware platforms. Because
hardware vendors regularly contribute to kernel development, the latest hardware features are
often rapidly adopted in the Linux kernel.

4. Memory management

KVM inherits the memory management features of Linux, including non-uniform memory
access and kernel same-page merging. The memory of a VM can be swapped, backed by large
volumes for better performance, and shared or backed by a disk file.

5. Live migration

KVM supports live migration, which is the ability to move a running VM between physical
hosts with no service interruption. The VM remains powered on, network connections remain
active, and applications continue to run while the VM is relocated. KVM also saves a VM's
current state so it can be stored and resumed later.

6. Performance and scalability

KVM inherits the performance of Linux, scaling to match the demand load if the number of
guest machines and requests increases. KVM allows the most demanding application
workloads to be virtualized and is the basis for many enterprise virtualization setups, such as
data centers and private clouds (via OpenStack).

7. Scheduling and resource control

In the KVM model, a VM is a Linux process, scheduled and managed by the kernel. The Linux
scheduler allows fine-grained control of the resources allocated to a Linux process and
guarantees a quality of service for a particular process. In KVM, this includes a completely fair
scheduler, control groups, network namespaces, and real-time extensions.

8. Lower latency and higher prioritization

The Linux kernel features real-time extensions that allow VM-based apps to run at lower
latency with better prioritization (compared to bare metal). The kernel also divides processes
that require long computing times into smaller components, which are then scheduled and
processed accordingly.
