UNIT 3: VIRTUALIZATION
Introduction and Benefits
Virtualization is a technology that you can use to create a virtual representation of servers,
storage, networks, and other physical machines. It makes one physical computer act and
perform like many computers. Virtualization is a technique for separating a service from the
underlying physical delivery of that service; it is the process of creating a virtual version of
something, such as computer hardware.
Virtualization software mimics the functions of physical hardware so that multiple virtual
machines can run simultaneously on a single physical machine. In other words, it creates a
virtual (rather than actual) version of something. The idea was initially developed in the
mainframe era. It involves using specialized software to create a software-based version of a
computing resource rather than the actual version of that resource.
With the help of virtualization, multiple operating systems and applications can run on the
same machine and hardware, increasing the utilization and flexibility of that hardware.
Virtualization is one of the cost-effective, hardware-reducing, and energy-saving techniques
used by cloud providers.
Virtualization allows the sharing of a single physical instance of a resource or application
among multiple customers and organizations at a time. It does this by assigning a logical name
to physical storage and providing a pointer to that physical resource on demand. The term
virtualization is often synonymous with hardware virtualization, which plays a fundamental
role in efficiently delivering Infrastructure-as-a-Service (IaaS) solutions for cloud
computing. Moreover, virtualization technologies provide a virtual environment not only for
executing applications but also for storage, memory, and networking.
• Host Machine: The physical machine on which the virtual machine runs is known as
the host machine.
• Guest Machine: The virtual machine itself is referred to as the guest machine.
PRIYANKA PATIL 1
Cloud Computing | Virtualization
Virtualization has a prominent impact on cloud computing. Users store data in the cloud,
and with the help of virtualization they gain the extra benefit of sharing the infrastructure.
Cloud vendors take care of the required physical resources, but they charge a significant
amount for these services. Virtualization helps users and organizations obtain the services
they need through external (third-party) providers, which reduces costs for the company.
This is how virtualization works in cloud computing.
Benefits of Virtualization
1. Protection Against System Failures
One key benefit of virtualization in cloud computing is protection against system failures.
All systems are prone to crashing once in a while, no matter what high-end technology your
organization uses. Your business can survive a few glitches, but if a developer working on
an important application suddenly faces a system failure, all their hard work can be ruined
and their progress lost. That is why virtualization matters in these scenarios: it automatically
backs up data across several devices, so the files remain accessible from any device because
they are saved in the virtualized cloud network. Even if a system goes down for a while, you
will not lose your progress.
2. Smooth Data Transfer
Data transfer also becomes very smooth with virtualization. You can easily transfer data
from a physical server to a virtual cloud and vice versa, and long-distance transfer of data
and files also becomes easier. Instead of searching through hard drives to find data, you can
locate it in the virtual cloud space very easily. Data location and transfer become hassle-free
with virtualization.
3. Security with Virtual Firewalls
Traditional data protection methods can secure your data, but they cost a lot as well. With
virtualization, virtual firewalls can restrict access to important data at a fraction of that cost.
Cybersecurity is a central focus of IT, and with virtualization you can address many
cybersecurity issues and provide strong protection for your data without much extra expense.
4. Smoother IT Operations
If you want to increase the efficiency of your IT professionals, then you can do so with the
help of virtualization. These virtual networks are faster and easier to operate. They also save
all the progress instantaneously and eliminate downtime by doing so. Virtualization can also
help your team solve crucial problems within the cloud computing system.
5. Cost-Effective Strategies
This is a great advantage of virtualization in cloud computing. So, if you want to reduce the
operational costs of your organization, then virtualization is a great way to do so. All of the
data is stored on virtual clouds, which eliminates the need to have multiple physical servers,
which reduces business costs by a lot and reduces waste as well. Maintenance fees and
electrical fees also go down in this process. A lot of server space is also saved thanks to
virtualization, which can be used for other important purposes.
6. Easier Data Recovery
As we already said, if physical servers face an issue, your data can be lost forever, or at best
it takes a lot of time and effort to recover. With virtualization, data is always backed up onto
the cloud system, so the recovery process becomes hassle-free and duplication also becomes
very easy.
7. Quick and Easy Setup
Setting up physical servers and systems is a time-consuming, complicated, and costly
process. Setting up a virtual system in the cloud computing space is comparatively easy, and
it takes much less time to get the whole software system running efficiently.
8. Easy Migration
A lot of people assume that migrating to a cloud-based system will be difficult. In reality,
migrating from physical servers to a virtual cloud system is fairly easy and does not take
much time. It also saves power costs, cooling costs, maintenance costs, and the cost of a
server maintenance engineer.
9. Reduce Downtime
If a physical server is stricken by some disaster and needs to be fixed, it could take up days
of time and a lot of money. But with a virtual system, even if one virtual machine has been
affected, you can easily clone or replicate the system and it will only take minutes. This
helps the business continue to run smoothly soon after running into a bump.
10. Lower Running and Maintenance Costs
With a virtual system that replaces physical servers, you can also cut down on the costs of
running and maintaining those physical servers on a daily basis. The organization can cut
back on maintenance, power, and energy costs while managing waste better as well.
11. Fewer Servers to Maintain
Virtualization comes with the upside of fewer servers to take care of. With fewer servers,
your IT team is freed from much of the burden of maintaining the infrastructure and
hardware. Instead of going through the arduous process of installing updates on the servers
one by one, they can do it once on the main virtual server, and the change is propagated to
all the VMs. Much less time is spent maintaining the systems, which increases productivity.
12. Centralized Management
Virtualization centralizes resources and management, which makes it easier for the IT team
to maintain the system in a streamlined way. Instead of juggling individual devices, which
can be complicated, they can manage operations from a single source. Repair, software
installation, patching, and maintenance become much easier and less time-consuming,
freeing your IT team to focus their energy elsewhere.
13. Easier Development and Testing
During development and testing, developers can easily clone a virtual machine, since the
cloud environment is segmented into various VMs. They can run tests on this clone without
interrupting the production system. For example, you can apply the latest software patch to
a virtual clone and, after a successful run, put it into the company's production application.
14. Efficient Hardware Utilization
With the help of virtualization, hardware is used efficiently by users as well as cloud service
providers. The user's need for a physical hardware system decreases, which results in lower
cost. From the service provider's point of view, hardware virtualization decreases the
hardware required on the vendor side to serve users. Before virtualization, companies and
organizations had to set up their own servers, which required extra space, engineers to
monitor performance, and extra hardware costs. With virtualization, these limitations are
removed by cloud vendors, who provide the services without the customer setting up any
physical hardware system.
Characteristics of Virtualization
• Increased Security: The ability to control the execution of a guest program in a
completely transparent manner opens new possibilities for delivering a secure,
controlled execution environment. All the operations of the guest programs are
generally performed against the virtual machine, which then translates and applies
them to the host programs.
• Managed Execution: In particular, sharing, aggregation, emulation, and
isolation are the most relevant features.
• Sharing: Virtualization allows the creation of separate computing
environments within the same host, so a single physical resource can serve
several guests.
• Aggregation: It is possible to share physical resources among several guests,
but virtualization also allows aggregation, the opposite process, in which a
group of separate hosts is tied together and presented to guests as a single
virtual resource.
Types of Virtualization
1. Application Virtualization
2. Network Virtualization
3. Desktop Virtualization
4. Storage Virtualization
5. Server Virtualization
6. Data Virtualization
In data virtualization, data collected from various sources is managed in a single place and
can be accessed through the various cloud services remotely. Many big companies provide
such services, like Oracle, IBM, AtScale, CData, etc.
7. Hardware Virtualization: When the virtual machine software or virtual machine
manager (VMM) is directly installed on the hardware system, it is known as hardware
virtualization. The main job of the hypervisor is to control and monitor the processor,
memory, and other hardware resources. After virtualization of the hardware system, we can
install different operating systems on it and run different applications on those OSes.
Hardware virtualization is mainly done for server platforms, because controlling virtual
machines is much easier than controlling a physical server.
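As a quick, hedged illustration of the hardware support this relies on: on Linux, the CPU flag "vmx" (Intel VT-x) or "svm" (AMD-V) in /proc/cpuinfo advertises hardware virtualization support. The sketch below parses such text; the sample string is illustrative, and on a real host you would pass the contents of /proc/cpuinfo.

```python
# Sketch: detect hardware virtualization support from /proc/cpuinfo-style
# text. "vmx" marks Intel VT-x, "svm" marks AMD-V; None means no support
# was advertised. On a real Linux host, pass open("/proc/cpuinfo").read().
def virtualization_extension(cpuinfo_text):
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            if "vmx" in flags:
                return "vmx"   # Intel VT-x
            if "svm" in flags:
                return "svm"   # AMD-V
    return None

sample = "processor : 0\nflags : fpu vme de pse vmx ssse3"
print(virtualization_extension(sample))  # vmx
```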
8. Operating System Virtualization: When the virtual machine software or virtual machine
manager (VMM) is installed on the host operating system instead of directly on the
hardware system, it is known as operating system virtualization. Operating system
virtualization is mainly used for testing applications on different OS platforms.
Uses of Virtualization
• Data integration
• Business integration
• Service-oriented architecture data services
• Searching organizational data
1) Instruction Set Architecture (ISA) Level
At this level, virtualization works by emulating a given instruction set architecture. With
this, binary code that originally needed some additional layers to run becomes capable of
running on x86 machines, and it can also be tweaked to run on x64 machines. With ISA
emulation, it is possible to make the virtual machine hardware agnostic.
For basic emulation, an interpreter is needed, which interprets the source instructions and
converts them into a format the hardware can process. This is the first of the five
implementation levels of virtualization in Cloud Computing.
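The interpreter idea can be sketched in Python: a toy "guest" instruction set is decoded in software and applied to emulated registers, so the guest program never executes directly on the host CPU. The instruction set here is invented purely for illustration.

```python
# A toy interpreter illustrating ISA-level emulation: each guest instruction
# is decoded in software and applied to emulated registers.
def emulate(program, registers=None):
    regs = dict(registers or {})
    for op, *args in program:
        if op == "LOAD":        # LOAD reg, constant
            regs[args[0]] = args[1]
        elif op == "ADD":       # ADD dst, src  -> dst += src
            regs[args[0]] += regs[args[1]]
        elif op == "HALT":
            break
        else:
            raise ValueError("unknown instruction: %s" % op)
    return regs

prog = [("LOAD", "r0", 2), ("LOAD", "r1", 40), ("ADD", "r0", "r1"), ("HALT",)]
print(emulate(prog))  # {'r0': 42, 'r1': 40}
```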
2) Hardware Abstraction Level (HAL)
At this level, multiple users are able to use the same hardware and run multiple
virtualization instances at the very same time. It is mostly used in cloud-based infrastructure.
3) Operating System Level
When there are several users and no one wants to share the hardware, this is where this
virtualization level is used. Every user gets his own virtual environment with a dedicated
virtual hardware resource. In this way, there is no question of any conflict.
4) Library Level
The operating system is cumbersome, and this is when the applications use the API from
the libraries at a user level. These APIs are documented well, and this is why the library
virtualization level is preferred in these scenarios. API hooks make it possible as it controls
the link of communication from the application to the system.
5) Application Level
The application-level virtualization is used when there is a desire to virtualize only one
application and is the last of the implementation levels of virtualization in Cloud
Computing. One does not need to virtualize the entire environment of the platform.
This is generally used when you run virtual machines that use high-level languages. The
application will sit above the virtualization layer, which in turn sits on the application
program.
It lets the high-level language programs compiled to be used at the application level of the
virtual machine run seamlessly.
Virtualization Structure
The architecture specifies the arrangement and interrelationships among the particular
components in the virtual environment.
In cloud computing, virtualization facilitates the creation of virtual versions of hardware such
as desktops, as well as virtual ecosystems for OS, storage, memory, and networking resources.
A virtualization architecture runs multiple OSs on the same machine using the same hardware
and also ensures their smooth functioning.
The virtualization architecture is a visual depiction or model of virtualization. It maps out and
describes the various virtual elements in the ecosystem, including the following:
• Application virtual services
• Infrastructure virtual services
• Virtual OS
• Hypervisor
The application and infrastructure virtual services are embedded into a virtual data center or
OS. The hypervisor separates the OS from the underlying hardware and enables a host
machine to simultaneously run multiple VMs that will share the same physical resources.
The core components of a Xen system are the hypervisor, kernel, and applications. The
organization of the three components is important. Like other virtualization systems, many
guest OSes can run on top of the hypervisor. However, not all guest OSes are created equal,
and one in particular controls the others. The guest OS, which has control ability, is called
Domain 0, and the others are called Domain U. Domain 0 is a privileged guest OS of Xen. It
is first loaded when Xen boots without any file system drivers being available. Domain 0 is
designed to access hardware directly and manage devices. Therefore, one of the responsibilities
of Domain 0 is to allocate and map hardware resources for the guest domains (the Domain U
domains).
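The Domain 0 / Domain U split can be sketched as follows. This is an illustrative model only, not Xen's actual interfaces: the privileged domain accounts for machine memory and allocates portions of it to unprivileged guest domains.

```python
# Illustrative sketch (not Xen's real API): Domain 0 is the privileged
# domain that allocates and maps hardware resources for Domain U guests.
class Domain:
    def __init__(self, name, privileged=False):
        self.name = name
        self.privileged = privileged
        self.memory_mb = 0

class Dom0:
    def __init__(self, total_memory_mb):
        self.free_mb = total_memory_mb   # machine memory Dom0 manages
        self.guests = []
    def create_domU(self, name, memory_mb):
        if memory_mb > self.free_mb:
            raise MemoryError("not enough free machine memory")
        guest = Domain(name)             # unprivileged guest domain
        guest.memory_mb = memory_mb
        self.free_mb -= memory_mb
        self.guests.append(guest)
        return guest

dom0 = Dom0(total_memory_mb=4096)
vm = dom0.create_domU("guest1", 1024)
print(vm.name, vm.memory_mb, dom0.free_mb)  # guest1 1024 3072
```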
For example, Xen is based on Linux and its security level is C2. Its management VM is named
Domain 0, which has the privilege to manage other VMs implemented on the same host. If
Domain 0 is compromised, the hacker can control the entire system. So, in the VM system,
security policies are needed to improve the security of Domain 0. Domain 0, behaving as a
VMM, allows users to create, copy, save, read, modify, share, migrate, and roll back VMs as
easily as manipulating a file, which flexibly provides tremendous benefits for users.
Unfortunately, it also brings a series of security problems during the software life cycle and
data lifetime. Traditionally, a machine’s lifetime can be envisioned as a straight line where the
current state of the machine is a point that progresses monotonically as the software executes.
During this time, configuration changes are made, the software is installed, and patches are
applied. In a virtual environment, however, the VM state is akin to a tree: At any point, execution can go
into N different branches where multiple instances of a VM can exist at any point in this tree
at any given time. VMs are allowed to roll back to previous states in their execution (e.g., to
fix configuration errors) or rerun from the same point many times (e.g., as a means of
distributing dynamic content or circulating a “live” system image).
Full Virtualization
Full virtualization is the first virtualization software solution that existed in the industry,
first developed in the late 1990s and early 2000s. As its name implies, full virtualization
keeps the VM completely divided from the hardware, and the VM is unaware that it is
running in a virtual environment.
In the full virtualization method, the guest operating system does not need to be modified,
which makes it portable and allows it to support nearly any operating system on the market.
As a technique, full virtualization uses binary translation and direct execution to run a
virtual machine's instructions on physical hardware. However, full virtualization loses some
performance and speed, since it incurs hardware emulation overhead and context switching
overhead.
With full virtualization, noncritical instructions run on the hardware directly while critical
instructions are discovered and replaced with traps into the VMM to be emulated by software.
Both the hypervisor and VMM approaches are considered full virtualization. Why are only
critical instructions trapped in the VMM? This is because binary translation can incur a large
performance overhead. Noncritical instructions do not control hardware or threaten the security
of the system, but critical instructions do. Therefore, running noncritical instructions on
hardware not only can promote efficiency but also can ensure system security.
In the full virtualization technique, the hypervisor completely simulates the underlying
hardware. The main advantage of this technique is that it allows the running of the
unmodified OS. In full virtualization, the guest OS is completely unaware that it’s being
virtualized.
Full virtualization uses a combination of direct execution and binary translation. This allows
direct execution of non-sensitive CPU instructions, whereas sensitive CPU instructions are
translated on the fly. To improve performance, the hypervisor maintains a cache of the recently
translated instructions.
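The dispatch rule described above can be sketched as a small simulation: noncritical instructions "run directly", critical instructions trap into the VMM and are emulated, and translations are cached so repeated critical instructions skip retranslation. The opcode names and string results are invented for illustration.

```python
# Sketch of full virtualization's dispatch rule with a translation cache.
CRITICAL = {"OUT", "CLI", "HLT"}          # hypothetical sensitive opcodes
translation_cache = {}

def translate(instr):
    # stand-in for binary translation (the expensive, one-time step)
    return "emulated_" + instr.lower()

def execute(instr):
    if instr not in CRITICAL:
        return "direct_" + instr.lower()   # noncritical: run on hardware
    if instr not in translation_cache:     # critical: trap into the VMM
        translation_cache[instr] = translate(instr)
    return translation_cache[instr]        # cached translation reused

print([execute(i) for i in ["ADD", "OUT", "OUT"]])
# ['direct_add', 'emulated_out', 'emulated_out']
```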
Full virtualization allows multiple operating systems to run independently on a single
physical host. In this approach, the guest operating system is unaware that it's running in a
virtualized environment, as it interacts with virtualized hardware that emulates real hardware.
1. Hypervisor Layer:
Full virtualization relies on a hypervisor, also known as a Virtual Machine Monitor
(VMM), which sits between the physical hardware and guest operating systems. The
hypervisor manages and controls the allocation of physical resources to virtual
machines.
2. Hardware Virtualization:
Full virtualization can use hardware-assisted virtualization technologies like Intel VT-x
or AMD-V to enhance performance. These technologies allow the hypervisor to run
guest OSes directly on the physical CPU without significant performance overhead.
3. Isolation:
VMs created through full virtualization are completely isolated from each other. Each
VM runs its instance of the guest operating system, which cannot interfere with other
VMs.
• Portability: Since the guest operating system does not need any sort of modification,
it is easy to move it.
• Lower Security: Full virtualization is considered less secure compared to
paravirtualization due to its architecture and the method of communication between
the guest operating system and hypervisor.
• Slower and Lacks Performance: Since full virtualization does not allow the guest
operating system to communicate directly with the hardware, it loses some performance
and speed.
• No Guest Operating System Modification: Full virtualization does not need guest
OS modifications, because the hypervisor completely emulates the underlying hardware
for the unmodified guest.
Para Virtualization
1. Hypervisor Layer:
Similar to full virtualization, paravirtualization also employs a hypervisor, but here,
the guest operating systems are aware of it. The hypervisor provides a set of APIs that
guest OSes must use to communicate with the underlying hardware.
2. Guest OS Modifications:
Guest operating systems must be modified to replace certain hardware-related
instructions with hypercalls, which are calls to the hypervisor. These hypercalls allow
the guest OS to request services from the hypervisor, such as memory management or
CPU scheduling.
3. Performance Benefits:
Since paravirtualization avoids the overhead of emulating complete hardware, it often
offers better performance than full virtualization. Guest OSes can communicate more
directly with the hypervisor, resulting in improved efficiency.
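The paravirtual contract can be sketched as follows. This is an illustrative model, not a real hypervisor API: the modified guest never issues privileged hardware operations itself; it makes named hypercalls that the hypervisor validates and services. The hypercall names here are invented.

```python
# Sketch of paravirtualization: the guest replaces privileged hardware
# instructions with hypercalls serviced by the hypervisor.
class Hypervisor:
    def __init__(self):
        self.pages_allocated = 0
    def hypercall(self, name, **args):
        if name == "alloc_pages":          # memory-management service
            self.pages_allocated += args["count"]
            return args["count"]
        if name == "yield_cpu":            # CPU-scheduling service
            return "rescheduled"
        raise ValueError("unknown hypercall: " + name)

class ParavirtGuest:
    def __init__(self, hv):
        self.hv = hv                       # the guest knows its hypervisor
    def request_memory(self, pages):
        # a privileged MMU operation, replaced by a hypercall
        return self.hv.hypercall("alloc_pages", count=pages)

hv = Hypervisor()
guest = ParavirtGuest(hv)
print(guest.request_memory(4), hv.pages_allocated)  # 4 4
```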
CPU Virtualization
CPU virtualization is very important in cloud computing. It enables cloud providers to offer
services such as:
• Virtual private servers (VPSs)
• Cloud storage (EBS)
• Cloud computing platforms (AWS, Azure and Google Cloud)
Consider an example to understand CPU virtualization. Imagine we have a physical server
with a single CPU, and we want to run two different operating systems on this server,
Windows and Linux. This can easily be done by creating two virtual machines (VMs), one
for Windows and one for Linux. The virtualization software will create a virtual CPU for
each VM. The virtual CPUs execute on the physical CPU but separately: the Windows VM
cannot view or communicate with the Linux VM, and vice versa.
The virtualization software will also allocate memory and other resources to each VM,
which guarantees each VM has enough resources to execute. CPU virtualization is complex,
but it is essential for cloud computing.
• Let’s take an example: you have a powerful computer with a CPU, memory, and other
resources.
• To start CPU virtualization, you use special software called a hypervisor. This is like
the conductor of a virtual orchestra.
• The hypervisor creates virtual machines (VMs) – these are like separate, isolated worlds
within your computer.
• The “virtual” resources of each VM include CPU, memory, and storage. It’s like having
multiple mini-computers inside your main computer.
• The hypervisor carefully divides the real CPU’s processing power among the VMs. It’s
like giving each VM its slice of the CPU pie.
• It also makes sure that each virtual machine (VM) gets its share of memory, storage,
and other resources.
• Each VM operates in its own isolated environment. It can’t see or interfere with what’s
happening in other VMs.
• Within each Virtual Machine, you can install & run different operating systems (like
Windows, and Linux) and applications.
• The VM thinks it’s a real computer, even though it’s sharing the actual computer’s
resources with other VMs.
• The hypervisor acts as a smart manager, deciding when each VM gets to use the real
CPU.
• It ensures that no VM takes up all the CPU time, making sure everyone gets their turn
to work.
• Even though there’s only one physical CPU, each VM believes it has its dedicated CPU.
• The hypervisor cleverly switches between VMs so that all the tasks appear to be
happening simultaneously.
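The time-slicing described in the last two bullets can be sketched as a simple round-robin scheduler: each VM gets a fixed slice of the one physical CPU in turn, so every VM appears to make continuous progress. The tick counts are illustrative; real hypervisor schedulers are far more sophisticated.

```python
# Sketch: round-robin time-slicing of one physical CPU among VMs.
from collections import deque

def schedule(vms, total_ticks, slice_ticks=2):
    run_queue = deque(vms)
    timeline = []                      # which VM owns the CPU at each tick
    tick = 0
    while tick < total_ticks:
        vm = run_queue.popleft()
        for _ in range(min(slice_ticks, total_ticks - tick)):
            timeline.append(vm)
            tick += 1
        run_queue.append(vm)           # back of the queue: all get a turn
    return timeline

print(schedule(["vm1", "vm2", "vm3"], total_ticks=8))
# ['vm1', 'vm1', 'vm2', 'vm2', 'vm3', 'vm3', 'vm1', 'vm1']
```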
2) Cost Savings
By running multiple virtual machines on a single physical server, cloud providers save on
hardware costs, energy consumption, and maintenance.
3) Scalability
CPU virtualization allows easy scaling, adding or removing virtual machines according to
demand. This flexibility helps businesses adapt to changing needs as per requirements.
Disadvantages of CPU Virtualization
1) Overhead
The virtualization layer adds some overhead, which means a small portion of CPU power is
used to manage virtualization itself.
2) Performance Variability
Depending on the number of virtual machines and their demands, performance can vary. If one
VM needs a lot of resources, others might experience slower performance.
3) Complexity
Handling multiple virtual machines and how they work together needs expertise. Creating and
looking after virtualization systems can be complicated.
4) Compatibility Challenges
Some older software or hardware might not work well within virtualized environments.
Compatibility issues can arise.
5) Resource Sharing
While CPU virtualization optimizes resource usage, if one VM suddenly requires a lot of
resources, it might impact the performance of others.
Executing such critical instructions incorrectly can lead to a system crash. Therefore, all
processors have at least two modes, user mode and supervisor mode, to ensure controlled
access to critical hardware. Instructions running in supervisor mode are called privileged
instructions; other instructions are unprivileged instructions. In a virtualized environment, it
is more difficult to make OSes and applications run correctly because there are more layers
in the machine stack.
Memory Virtualization:
Virtual memory virtualization is similar to the virtual memory support provided by modern
operating systems. In a traditional execution environment, the operating system maintains
mappings of virtual memory to machine memory using page tables, which is a one-stage
mapping from virtual memory to machine memory. All modern x86 CPUs include a memory
management unit (MMU) and a translation lookaside buffer (TLB) to optimize virtual memory
performance. However, in a virtual execution environment, virtual memory virtualization
involves sharing the physical system memory in RAM and dynamically allocating it to the
physical memory of the VMs. That means a two-stage mapping process should be maintained
by the guest OS and the VMM, respectively: virtual memory to physical memory and physical
memory to machine memory. Furthermore, MMU virtualization should be supported, which is
transparent to the guest OS. The guest OS continues to control the mapping of virtual addresses
to the physical memory addresses of VMs. However, the guest OS cannot directly access the
actual machine memory. The VMM is responsible for mapping the guest's physical memory to
the actual machine memory.
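The two-stage mapping above can be sketched as the composition of two page tables: the guest OS maps virtual pages to guest "physical" pages, and the VMM maps those guest-physical pages to actual machine pages. The page numbers below are purely illustrative.

```python
# Sketch of the two-stage mapping in virtual memory virtualization.
guest_page_table = {0: 7, 1: 3}   # stage 1 (guest OS): virtual -> guest physical
vmm_p2m = {7: 42, 3: 19}          # stage 2 (VMM): guest physical -> machine

def translate_address(virtual_page):
    guest_physical = guest_page_table[virtual_page]   # maintained by guest OS
    machine = vmm_p2m[guest_physical]                 # maintained by the VMM
    return machine

print(translate_address(0), translate_address(1))  # 42 19
```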
I/O Virtualization:
I/O virtualization involves managing the routing of I/O requests between virtual devices and
the shared physical hardware.
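That routing can be sketched as follows: each VM writes to its own virtual disk device, and the VMM maps each (VM, virtual block) pair onto a private region of one shared physical store. The layout and names are illustrative only.

```python
# Sketch of I/O virtualization as request routing: virtual disk blocks are
# mapped onto disjoint regions of one shared physical backing store.
physical_disk = {}                       # the shared physical resource

class VirtualDisk:
    def __init__(self, vm_name, base):
        self.vm_name = vm_name
        self.base = base                 # start of this VM's private region
    def write(self, block, data):
        # the VMM routes the guest's I/O request to its own region
        physical_disk[self.base + block] = data
    def read(self, block):
        return physical_disk.get(self.base + block)

disk_a = VirtualDisk("vm_a", base=0)
disk_b = VirtualDisk("vm_b", base=1000)
disk_a.write(5, "hello")
disk_b.write(5, "world")                 # same virtual block, no clash
print(disk_a.read(5), disk_b.read(5))    # hello world
```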
• Portability: Hypervisors allow multiple operating systems to reside on the same physical
server (host machine). Because the virtual machines that the hypervisor runs are
independent of the physical machine, they are portable. IT teams can shift workloads and
allocate networking, memory, storage, and processing resources across multiple servers as
needed, moving from machine to machine or platform to platform. When an application
needs more processing power, the virtualization software allows it to seamlessly access
additional machines.
Hypervisors make it possible to use more of a system’s available resources and provide greater
IT mobility since the guest VMs are independent of the host hardware. This means they can be
easily moved between different servers. Because multiple virtual machines can run off of one
physical server with a hypervisor, a hypervisor reduces:
• Space
• Energy
• Maintenance requirements
Characteristics of hypervisors
There are different categories of hypervisors and different brands of hypervisors within each
category. The market has matured to make hypervisors a commodity product in the enterprise
space, but there are still differentiating factors that should guide your choice. Here’s what to
look for:
• Performance. Look for benchmark data that show how well the hypervisor performs in a
production environment. Ideally, bare-metal hypervisors should support guest OS
performance close to native speeds.
• Ecosystem. You will need good documentation and technical support to implement and
manage hypervisors across multiple physical servers at scale. Also, look for a healthy
community of third-party developers that can support the hypervisor with their own agents
and plugins that offer capabilities such as backup and restore, capacity analysis, and fail-
over management.
• Management tools. Running VMs isn’t the only thing you must manage when using a
hypervisor. You must provision the VMs, maintain them, audit them, and clean up disused
ones to prevent "VM sprawl." Ensure that the vendor or third-party community supports
the hypervisor architecture with comprehensive management tools.
• Live migration. This enables you to move VMs between hypervisors on different physical
machines without stopping them, which can be useful for both fail-over and workload
balancing.
• Cost. Consider the cost and fee structure involved in licensing hypervisor technology.
Don’t just think about the cost of the hypervisor itself. The management software that
makes it scalable to support an enterprise environment can often be expensive. Lastly,
examine the vendor’s licensing structure, which may change depending on whether you
deploy it in the cloud or locally.
Types of hypervisors
There are two main hypervisor types, referred to as “Type 1” (or “bare metal”) and “Type 2”
(or “hosted”). A type 1 hypervisor acts like a lightweight operating system and runs directly
on the host’s hardware, while a type 2 hypervisor runs as a software layer on an operating
system, like other computer programs.
Both types of hypervisors can run multiple virtual servers for multiple tenants on one physical
machine. Public cloud service providers lease server space on different virtual servers to
different companies. One server might host several virtual servers that are all running
workloads for different companies. This type of resource sharing can result in a “noisy
neighbor” effect when one of the tenants runs a large workload that interferes with the server
performance for other tenants. It also poses more of a security risk than using a dedicated bare-
metal server.
A bare-metal server that a single company has full control over will always provide higher
performance than a virtual server that is sharing a physical server’s bandwidth, memory and
processing power with other virtual servers. The hardware for bare-metal servers can also be
optimized to increase performance, which is not the case with shared public servers. Businesses
that need to comply with regulations that require physical separation of resources will need to
use their own bare-metal servers that do not share resources with other tenants.
TYPE-1 Hypervisor:
The hypervisor runs directly on the underlying host system. It is also known as a “Native
Hypervisor” or “Bare metal hypervisor”. It does not require any base server operating system.
It has direct access to hardware resources. The most commonly deployed type of hypervisor
is the type 1 or bare-metal hypervisor, where virtualization software is installed directly on
the hardware where the operating system is normally installed. Because bare-metal
hypervisors are isolated from the attack-prone operating system, they are extremely secure. In
addition, they generally perform better and more efficiently than hosted hypervisors. For
these reasons, most enterprise companies choose bare-metal hypervisors for data
center computing needs.
Type 1 hypervisors are highly efficient because they have direct access to physical hardware.
This also increases their security, because there is nothing in between them and the CPU that
an attacker could compromise. But a Type 1 hypervisor often requires a separate management
machine to administer different VMs and control the host hardware.
TYPE-2 Hypervisor:
A host operating system runs on the underlying host system, so this type is also known as a
“hosted hypervisor”. Such hypervisors do not run directly on the underlying hardware;
rather, they run as an application on a host operating system. The software is
installed on an operating system, and the hypervisor asks that operating system to make
hardware calls on its behalf. Examples of Type 2 hypervisors include VMware Workstation
Player and Parallels Desktop. Hosted hypervisors are often found on endpoints like PCs. The
Type 2 hypervisor is very useful for engineers and security analysts (for example, for checking
malware or malicious source code and for testing newly developed applications).
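Software running inside a guest can often tell which hypervisor it is running under, which is handy for the testing and analysis scenarios above. The sketch below is a minimal, hedged illustration: it maps a few commonly reported DMI product-name strings (exposed on Linux at /sys/class/dmi/id/product_name) to hypervisor names. The specific strings are assumptions based on common defaults, and real deployments can override them.

```python
# Best-effort mapping from DMI product-name strings to the hypervisor
# that typically reports them. These strings are assumptions based on
# common defaults; vendors and admins can change them.
DMI_PRODUCT_TO_HYPERVISOR = {
    "VMware Virtual Platform": "VMware",
    "VirtualBox": "VirtualBox",
    "KVM": "KVM/QEMU",
    "Virtual Machine": "Microsoft Hyper-V",
}


def guess_hypervisor(product_name):
    """Return a best-effort hypervisor name for a DMI product string,
    or None when the string is unrecognized (possibly bare metal)."""
    return DMI_PRODUCT_TO_HYPERVISOR.get(product_name.strip())


if __name__ == "__main__":
    try:
        with open("/sys/class/dmi/id/product_name") as f:
            name = f.read()
        print(guess_hypervisor(name) or "no known hypervisor signature")
    except OSError:
        print("DMI information not available on this system")
```

Tools like `systemd-detect-virt` perform a more thorough version of the same check, combining DMI data with CPUID and other signals.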
While bare-metal hypervisors run directly on the computing hardware, hosted hypervisors run
on top of the operating system (OS) of the host machine. Although hosted hypervisors run
within the OS, additional (and different) operating systems can be installed on top of the
hypervisor. The downside of hosted hypervisors is that latency is higher than bare-metal
hypervisors. This is because communication between the hardware and the hypervisor must
pass through the extra layer of the OS. Hosted hypervisors are sometimes known as client
hypervisors because they are most often used by end users and for software testing, where
higher latency is less of a concern.
A Type 2 hypervisor doesn’t run directly on the underlying hardware. Instead, it runs as an
application in an OS. Type 2 hypervisors rarely show up in server-based environments.
Instead, they’re suitable for individual PC users needing to run multiple operating systems.
Examples include engineers, security professionals analyzing malware, and business users
who need access to applications only available on other software platforms.
Type 2 hypervisors often feature additional toolkits for users to install into the guest OS.
These tools provide enhanced connections between the guest and the host OS, often enabling
the user to cut and paste between the two or access host OS files and folders from within the
guest VM.
A Type 2 hypervisor enables quick and easy access to an alternative guest OS alongside the
primary one running on the host system. This makes it great for end-user productivity. A
consumer might use it to access their favorite Linux-based development tools while using a
speech dictation system only found in Windows, for example.
However, because a Type 2 hypervisor must access computing, memory, and network
resources via the host OS, it introduces latency issues that can affect performance. It also
introduces potential security risks if an attacker compromises the host OS because they could
then manipulate any guest OS running in the Type 2 hypervisor.
2. Cost: While some hypervisors are free, or practically free, the prices at the opposite end of
the market can be staggering. Licensing frameworks also vary, so it’s important to be aware of
exactly what you’re getting for your money.
3. Virtual machine performance: Virtual systems should meet or exceed the performance of
their physical counterparts, at least with respect to the applications within each server.
Everything beyond that benchmark is a bonus.
4. Ecosystem: It’s tempting to overlook the role of a hypervisor’s ecosystem – that is, the
availability of documentation, support, training, third-party developers and consultancies, and
so on – in determining whether or not a solution is cost-effective in the long term.
5. Test for yourself: You can gain basic experience from your existing desktop or laptop. You
can run both VMware vSphere and Microsoft Hyper-V in either VMware Workstation or
VMware Fusion to create a nice virtual learning and testing environment.
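Running a hypervisor inside VMware Workstation or Fusion, as suggested above, depends on nested virtualization. On a Linux host with KVM you can check whether nesting is enabled through the module parameter files; the sketch below is a minimal illustration, and the kvm_intel path is an assumption for Intel CPUs (AMD hosts use kvm_amd instead).

```python
def nested_virt_enabled(param_text):
    """Interpret the contents of a KVM 'nested' module parameter file.
    Newer kernels report 'Y'/'N'; older ones report '1'/'0'."""
    return param_text.strip() in ("Y", "1")


if __name__ == "__main__":
    # Path for Intel hosts; on AMD hosts check kvm_amd instead.
    path = "/sys/module/kvm_intel/parameters/nested"
    try:
        with open(path) as f:
            print("nested virtualization enabled:", nested_virt_enabled(f.read()))
    except OSError:
        print("kvm_intel module not loaded or parameter unavailable")
```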
KVM Hypervisors
KVM is part of Linux. If you’ve got Linux 2.6.20 or newer, you’ve got KVM. KVM was first
announced in 2006 and merged into the mainline Linux kernel (version 2.6.20) a year later. Because
KVM is part of existing Linux code, it immediately benefits from every new Linux feature, fix,
and advancement without additional engineering.
KVM converts Linux into a type-1 (bare-metal) hypervisor. All hypervisors need some
operating system-level components—such as a memory manager, process scheduler,
input/output (I/O) stack, device drivers, security manager, a network stack, and more—to run
VMs. KVM has all these components because it’s part of the Linux kernel. Every VM is
implemented as a regular Linux process, scheduled by the standard Linux scheduler, with
dedicated virtual hardware like a network card, graphics adapter, CPU(s), memory, and disks.
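Because KVM lives in the kernel, a Linux host exposes its availability through ordinary files: the vmx (Intel VT-x) or svm (AMD-V) flag in /proc/cpuinfo shows hardware support, and the character device /dev/kvm appears once the KVM modules are loaded. A minimal sketch, only inspecting those well-known interfaces:

```python
import os


def cpu_virt_flags(cpuinfo_text):
    """Return the hardware-virtualization flags found in /proc/cpuinfo
    text: 'vmx' for Intel VT-x, 'svm' for AMD-V."""
    found = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            tokens = line.split(":", 1)[1].split()
            found.update(t for t in tokens if t in ("vmx", "svm"))
    return sorted(found)


def kvm_available():
    """KVM exposes itself as /dev/kvm once the kvm and
    kvm_intel/kvm_amd modules are loaded."""
    return os.path.exists("/dev/kvm")


if __name__ == "__main__":
    try:
        with open("/proc/cpuinfo") as f:
            flags = cpu_virt_flags(f.read())
    except OSError:
        flags = []
    print("CPU virtualization flags:", flags or "none found")
    print("/dev/kvm present:", kvm_available())
```

The `kvm-ok` utility on Debian/Ubuntu systems performs essentially this check, plus a few BIOS-related ones.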
KVM features
KVM is part of Linux. Linux is part of KVM. Everything Linux has, KVM has too. However,
there are specific features that make KVM an enterprise’s preferred hypervisor.
1. Security
KVM uses a combination of Security-Enhanced Linux (SELinux) and secure virtualization
(sVirt) for enhanced VM security and isolation.
2. Storage
KVM can use any storage supported by Linux, including some local disks and network-
attached storage (NAS). Multipath I/O may be used to improve storage and provide
redundancy. KVM also supports shared file systems so VM images may be shared by multiple
hosts. Disk images support thin provisioning, allocating storage on demand rather than all up
front.
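Thin provisioning can be illustrated with an ordinary sparse file, which is essentially what a raw thin-provisioned disk image is: the guest sees the full virtual size, but disk blocks are only allocated as data is written. This is a minimal sketch, not what KVM tooling itself does internally (in practice `qemu-img` creates qcow2 or raw images for you); it assumes a Unix filesystem that supports sparse files.

```python
import os
import tempfile


def make_sparse_image(path, virtual_size):
    """Create a sparse file: the apparent size is set without
    writing (or allocating) any data blocks."""
    with open(path, "wb") as f:
        f.truncate(virtual_size)


def apparent_and_actual_size(path):
    """Return (apparent size in bytes, actually allocated bytes).
    st_blocks counts 512-byte units on Unix."""
    st = os.stat(path)
    return st.st_size, st.st_blocks * 512


if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        img = os.path.join(d, "disk.img")
        make_sparse_image(img, 1 << 30)  # 1 GiB virtual size
        apparent, actual = apparent_and_actual_size(img)
        print(f"apparent: {apparent} bytes, allocated: {actual} bytes")
```

A 1 GiB "disk" created this way typically occupies almost no real storage until the guest writes to it, which is exactly the on-demand allocation described above.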
3. Hardware support
KVM can use a wide variety of certified Linux-supported hardware platforms. Because
hardware vendors regularly contribute to kernel development, the latest hardware features are
often rapidly adopted in the Linux kernel.
4. Memory management
KVM inherits the memory management features of Linux, including non-uniform memory
access (NUMA) and kernel same-page merging (KSM). The memory of a VM can be swapped,
backed by huge pages for better performance, and shared or backed by a disk file.
5. Live migration
KVM supports live migration, which is the ability to move a running VM between physical
hosts with no service interruption. The VM remains powered on, network connections remain
active, and applications continue to run while the VM is relocated. KVM also saves a VM's
current state so it can be stored and resumed later.
KVM inherits the performance of Linux, scaling to match the demand load if the number of
guest machines and requests increases. KVM allows the most demanding application
workloads to be virtualized and is the basis for many enterprise virtualization setups, such as
data centers and private clouds (via OpenStack).
In the KVM model, a VM is a Linux process, scheduled and managed by the kernel. The Linux
scheduler allows fine-grained control of the resources allocated to a Linux process and
guarantees a quality of service for a particular process. In KVM, this includes the Completely
Fair Scheduler (CFS), control groups, network namespaces, and real-time extensions.
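Control groups are the mechanism that turns "a VM is a Linux process" into an enforceable quality-of-service guarantee: a VM's CPU share is just a cgroup quota. As a small illustration, the sketch below parses the cgroup v2 `cpu.max` file format ("quota period" in microseconds, or "max period" for unlimited); it is a reader for that interface, not a KVM-specific API.

```python
def parse_cpu_max(text):
    """Parse a cgroup-v2 cpu.max value: '<quota> <period>' in
    microseconds, or 'max <period>' meaning unlimited. Returns the
    fraction of one CPU the group may use, or None when unlimited."""
    quota, period = text.split()
    if quota == "max":
        return None
    return int(quota) / int(period)


if __name__ == "__main__":
    # "50000 100000" means 50 ms of CPU time per 100 ms period,
    # i.e. the group (for example, a VM process) gets half a CPU.
    print(parse_cpu_max("50000 100000"))
    print(parse_cpu_max("max 100000"))
```

Writing such values into a VM process's cgroup is how an administrator caps a noisy guest at, say, half a core, or grants it two full cores with "200000 100000".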
The Linux kernel features real-time extensions that allow VM-based apps to run at lower
latency with better prioritization (compared to bare metal). The kernel also divides processes
that require long computing times into smaller components, which are then scheduled and
processed accordingly.