Virtualization Assignment


DESKTOP VIRTUALIZATION

Desktop virtualization is a virtualization technology that separates an individual's PC applications from his or her desktop. Virtualized desktops are generally hosted on a remote central server rather than on the hard drive of the personal computer. Because the client-server computing model is used in virtualizing desktops, desktop virtualization is also known as client virtualization.

Desktop virtualization is the concept of isolating a logical operating system (OS) instance from
the client that is used to access it.

Desktop virtualization provides a way for users to maintain their individual desktops on a
single, central server. The users may be connected to the central server through a LAN, WAN
or over the Internet.

Desktop virtualization, often called client virtualization, is a virtualization technology used to separate a computer desktop environment from the physical computer. Desktop virtualization is considered a type of client-server computing model because the "virtualized" desktop is stored on a centralized, or remote, server and not on the physical machine being virtualized.

Desktop virtualization "virtualizes desktop computers" and these virtual desktop environments
are "served" to users on the network. You interact with a virtual desktop in the same way you
would use a physical desktop. Another benefit of desktop virtualization is that is lets you
remotely log in to access your desktop from any location.

VDI (Virtual Desktop Infrastructure) is a popular method of desktop virtualization. This type of desktop virtualization uses the server computing model, as the desktop virtualization in this scenario is enabled through hardware and software. VDI hosts the desktop environment in a virtual machine (VM) that runs on a centralized or remote server.

BENEFITS

Desktop virtualization has many benefits, including a lower total cost of ownership (TCO),
increased security, reduced energy costs, reduced downtime and centralized management.

LIMITATIONS

Limitations of desktop virtualization include difficulty in setting up and maintaining printer drivers; increased downtime in case of network failures; the complexity and cost of VDI deployment; and security risks in the event of improper network management.

There are several conceptual models of desktop virtualization, which can broadly be divided into two categories based on whether the operating system instance executes locally or remotely. It is important to note that not all forms of desktop virtualization technology involve the use of virtual machines (VMs).

Client virtualization requires processing to occur on local hardware; the use of thin clients, zero
clients and mobile devices is not possible. These types of desktop virtualization include:

OS image streaming: The operating system runs on local hardware, but it boots to a remote disk image across the network. This is useful for groups of desktops that use the same disk image. OS image streaming requires a constant network connection in order to function.

Client-based virtual machines: A VM runs on a fully functional PC, with a hypervisor in place. Client-based virtual machines can be managed by regularly syncing the disk image with a server, but a constant network connection is not necessary in order for them to function.

PRODUCT

1. Amazon WorkSpaces

Amazon WorkSpaces is a managed, secure cloud desktop service and one of the best-known virtual desktop infrastructure offerings. You can use Amazon WorkSpaces to provision either Windows or Linux desktops in just a few minutes and quickly scale to provide thousands of desktops to workers across the globe. You can pay either monthly or hourly, just for the WorkSpaces you launch, which helps you save money compared with traditional desktops and on-premises VDI solutions.

Among its most prominent features, it helps you eliminate many administrative tasks
associated with managing your desktop lifecycle including provisioning, deploying,
maintaining, and recycling desktops. There is less hardware inventory to manage and no need
for complex virtual desktop infrastructure (VDI) deployments that don’t scale.

Why Amazon WorkSpaces is a good pick for your business:

1. Fully-Managed Service. Setting up and maintaining a virtual desktop infrastructure can be challenging, costly in terms of resources, and slow for cloud desktop provisioning. However, you can overcome these roadblocks with Amazon WorkSpaces. This fully managed service allows you to focus on scaling to thousands of workers no matter where they are in the world. Thus, you can abstract away countless administrative tasks and focus on simplified cloud desktop lifecycle management instead.

2. Decreased Overhead Costs. Amazon WorkSpaces is a cost-efficient solution for cloud desktops, as it has multiple pricing tiers that meet your most pressing requirements. It has five bundles depending on your focus: performance, power, graphics, value, and standard services. Aside from that, you can opt for hourly or monthly billing so you can better control your usage and expenses. Moreover, this gives you full control over your desktop resources to further cut operational costs.

3. Active Directory Integration. Rather than forcing you to spend time creating a new directory, Amazon WorkSpaces lets you connect to your existing Active Directory. This way, you can easily manage and modify user access rights from a single interface and roll changes out across the organization with ease. (A short provisioning sketch follows.)
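
To make the provisioning workflow concrete, the sketch below uses the AWS SDK for Python (boto3) to request a new WorkSpace. It is a minimal illustration: the region, directory ID, bundle ID, and user name are hypothetical placeholders, while create_workspaces itself is the standard WorkSpaces API call.

# Minimal sketch: provisioning an Amazon WorkSpace with boto3.
# Directory ID, bundle ID, and user name are placeholders; substitute
# values from your own AWS account.
import boto3

client = boto3.client("workspaces", region_name="us-east-1")

response = client.create_workspaces(
    Workspaces=[
        {
            "DirectoryId": "d-9067xxxxxx",   # existing directory/AD Connector
            "UserName": "jdoe",              # user in that directory
            "BundleId": "wsb-xxxxxxxxx",     # e.g., a Standard Windows bundle
            "WorkspaceProperties": {
                "RunningMode": "AUTO_STOP",  # stops when idle, billed hourly
                "RunningModeAutoStopTimeoutInMinutes": 60,
            },
        }
    ]
)

# create_workspaces is asynchronous; inspect both result lists.
for failed in response.get("FailedRequests", []):
    print("Failed:", failed["ErrorCode"], failed["ErrorMessage"])
for pending in response.get("PendingRequests", []):
    print("Provisioning WorkSpace:", pending["WorkspaceId"])

The AUTO_STOP running mode corresponds to the hourly billing option described above; ALWAYS_ON corresponds to monthly billing.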
2. IBM Cloud

IBM Cloud is an accelerated virtual desktop infrastructure software that is integrated with
industry-standard graphics and storage capabilities to eliminate productivity barriers. The
platform empowers mobile workforces to gain workstation-like experience on any device for
fast and convenient access to graphics-intensive applications and files anytime and anywhere
there is an internet connection.

IBM Cloud provides robust VDI functionality backed by security safeguards to protect in-flight and at-rest content from loss and theft. The system configures and scales computing and storage options housed in different data centers worldwide, with GPU technology that speeds up access to graphics-intensive materials.

Why IBM Cloud is a good pick for your business:

1. Cost efficient. IBM Cloud allows organizations to switch from a capital expense
(CAPEX) to operating expense (OPEX) model for their infrastructure, while also
reducing total cost of ownership as users become less reliant on desktop workstations
and standalone software licenses. The software is also easy to set up, with on-demand
access to top compute services and desktop virtualization solutions.

2. Speed and collaboration. Teams can increase productivity with improved collaboration, as IBM Cloud speeds up access to graphics files. The system can be quickly configured with high-performance NVIDIA GRID GPUs, which allow multiple users to access and share the graphics processing power of a single GPU.

3. IBM's vaunted security. IBM Cloud sends only encrypted visual output and mouse or keyboard input over the network. This means users no longer need local copies of files, and the risk of content being compromised can be avoided. Ground-up security measures include physical safeguards; network security; and system, application and data security.

NETWORK VIRTUALIZATION

Network virtualization refers to the management and monitoring of an entire computer network as a single administrative entity from a single software-based administrator's console. Network virtualization may also include storage virtualization, which involves managing all storage as a single resource. Network virtualization is designed to allow network optimization of data transfer rates, flexibility, scalability, reliability and security. It automates many network administrative tasks, disguising the network's true complexity. All network servers and services are considered one pool of resources, which may be used without regard to the physical components.

Network virtualization is especially useful for networks experiencing a rapid, large and
unpredictable increase in usage.

The intended result of network virtualization is improved network productivity and efficiency,
as well as job satisfaction for the network administrator.

Network virtualization is accomplished by using a variety of hardware and software and combining network components. Software and hardware vendors combine components to offer external or internal network virtualization. The former combines local networks, or subdivides them into virtual networks, while the latter configures single systems with containers, creating a network in a box. Still other software vendors combine both types of network virtualization.

Network virtualization is intended to optimize network speed, reliability, flexibility, scalability and security. It is said to be especially useful in networks that experience sudden, large and unforeseen surges in usage.

Network virtualization works by combining the available resources in a network and splitting the available bandwidth into channels, each of which is independent from the others and can be assigned (or reassigned) to a particular server or device in real time. Each channel is independently secured. Every subscriber has shared access to all the resources on the network from a single computer.
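
One concrete way such independent channels are realized on Linux is 802.1Q VLAN tagging. The sketch below is an illustration only (not part of the original text): it drives the standard iproute2 tool from Python to carve two isolated virtual links out of one physical NIC. The interface name, VLAN IDs, and addresses are assumed example values; it needs root privileges and the 8021q kernel module.

# Illustrative sketch: splitting one NIC into independent VLAN "channels".
# Requires Linux, root privileges, and the 8021q module. Names are examples.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

parent = "eth0"
for vlan_id, address in [(100, "10.0.100.1/24"), (200, "10.0.200.1/24")]:
    link = f"{parent}.{vlan_id}"
    # Each tagged sub-interface behaves as an independent channel even
    # though all of them share the same physical wire.
    run(["ip", "link", "add", "link", parent, "name", link,
         "type", "vlan", "id", str(vlan_id)])
    run(["ip", "addr", "add", address, "dev", link])
    run(["ip", "link", "set", link, "up"])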

Network virtualization is intended to improve the productivity, efficiency and job satisfaction of the administrator by performing many of these tasks automatically, thereby disguising the true complexity of the network. Files, images, programs and folders can be centrally managed from a single physical site. Storage media such as hard drives and tape drives can be easily added or reassigned, and storage space can be shared or reallocated among the servers.

TYPES OF NETWORK VIRTUALIZATION

Virtual networks exist in two forms: internal and external. Both terms refer to whether the virtualization happens inside or outside the server. External virtualization uses tools such as switches, adapters or a network to combine one or more networks into virtual units. Internal virtualization refers to using network-like functionality in software containers on a single network server. Internal software allows VMs to exchange data on a host without using an external network.

ADVANTAGES AND DISADVANTAGES

The use of network virtualization does have its upsides and downsides, including:

Advantages:

 More productive IT environments (i.e., efficient scaling).

 Improved security and recovery times.

 Faster application delivery.

 More efficient networks.

 Reduced overall costs.

Disadvantages:

 Increased upfront costs (investing in virtualization software).

 Need to license software.

 There may be a learning curve if IT managers are not experienced.

 Not every application and server will work in a virtualized environment.

 Availability can be an issue if an organization can’t connect to its virtualized data.

PRODUCT

Cisco Network Virtualization and Automation Infrastructure. Industry leader Cisco developed a fully open, preintegrated, validated system that provides modular building blocks for creating reliable, repeatable, and high-performance network functions virtualization (NFV) deployments. Cisco Network Functions Virtualization and Automation Infrastructure is supported through a single point of contact and addresses the complexity, deployment, and operational challenges of NFV across multiple technology providers. It is based on the industry-leading partnership of Cisco, Intel, and Red Hat. To help speed the adoption of NFV services through demonstrations and proof-of-concept (PoC) tests, the Cisco and Intel NFV Quick Start partnership has deployed labs globally. Cisco network virtualization solutions address three important aspects of network virtualization:

 Access control
 Path isolation
 Services edge

Access control provides secure, customized access for individuals and groups to protect the
Enterprise LAN from external threats. Complementary features include:

 Port authentication using standards such as IEEE 802.1x for strong connections
between authorized users and VPNs.
 Cisco Network Admission Control (NAC), to minimize security risks by removing
harmful traffic.

Path isolation maps validated users or devices to the correct secure set of available resources (virtual private network, or VPN). Cisco offers three path isolation solutions (a minimal GRE sketch follows this list):

 Generic routing encapsulation (GRE) tunnels create closed user groups on the
Enterprise LAN to allow guest access to the Internet, while preventing access to internal
resources.
 Virtual routing and forwarding (VRF)-Lite allows network managers to use a single
routing device to support multiple virtual routers.
 Multiprotocol label switching (MPLS) VPNs also partition a campus network for
closed user groups.
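
As a minimal illustration of the first technique, the sketch below creates one end of a point-to-point GRE tunnel with the Linux iproute2 tools (Cisco IOS syntax differs, but the idea is the same). All addresses are example values; the mirror-image commands run on the peer.

# Sketch: a GRE tunnel as a "closed user group" path (one endpoint).
# Requires Linux and root privileges; addresses are illustrative.
import subprocess

cmds = [
    # Encapsulate the group's traffic in GRE between the two endpoints.
    ["ip", "tunnel", "add", "gre1", "mode", "gre",
     "local", "198.51.100.1", "remote", "203.0.113.1", "ttl", "255"],
    ["ip", "addr", "add", "10.10.10.1/30", "dev", "gre1"],
    ["ip", "link", "set", "gre1", "up"],
]
for cmd in cmds:
    subprocess.run(cmd, check=True)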

Services edge provides access to services for legitimate sets of users and devices by using centralized policy enforcement to:

 Minimize capital and operational expenses.
 Share service modules across all partitions of the network.
 Rapidly deploy policies and services across the whole network.

SERVER VIRTUALIZATION

Server virtualization is the masking of server resources, including the number and identity of
individual physical servers, processors, and operating systems, from server users. The server
administrator uses a software application to divide one physical server into multiple isolated
virtual environments. The virtual environments are sometimes called virtual private servers,
but they are also known as guests, instances, containers or emulations.

There are three popular approaches to server virtualization: the virtual machine model, the
paravirtual machine model, and virtualization at the operating system (OS) layer.

Virtual machines are based on the host/guest paradigm. Each guest runs on a virtual imitation
of the hardware layer. This approach allows the guest operating system to run without
modifications. It also allows the administrator to create guests that use different operating
systems. The guest has no knowledge of the host's operating system because it is not aware that
it's not running on real hardware. It does, however, require real computing resources from the
host -- so it uses a hypervisor to coordinate instructions to the CPU. The hypervisor is called a
virtual machine monitor (VMM). It validates all the guest-issued CPU instructions and
manages any executed code that requires additional privileges. VMware and Microsoft Virtual
Server both use the virtual machine model.

The paravirtual machine (PVM) model is also based on the host/guest paradigm -- and it uses
a virtual machine monitor too. In the paravirtual machine model, however, the VMM actually modifies the guest operating system's code. This modification is called porting. Porting supports the VMM so it can utilize privileged system calls sparingly. Like virtual machines,
paravirtual machines are capable of running multiple operating systems. Xen and UML both
use the paravirtual machine model.

Virtualization at the OS level works a little differently. It isn't based on the host/guest paradigm.
In the OS level model, the host runs a single OS kernel as its core and exports operating system
functionality to each of the guests. Guests must use the same operating system as the host,
although different distributions of the same system are allowed. This distributed architecture
eliminates system calls between layers, which reduces CPU usage overhead. It also requires
that each partition remain strictly isolated from its neighbours so that a failure or security
breach in one partition isn't able to affect any of the other partitions. In this model, common
binaries and libraries on the same physical machine can be shared, allowing an OS level virtual
server to host thousands of guests at the same time. Virtuozzo and Solaris Zones both use OS-
level virtualization.
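
The kernel primitives behind this model can be glimpsed from user space. The Linux-only sketch below is a toy demonstration, not how Virtuozzo or Solaris Zones are implemented: it places a child process in its own UTS namespace so the "guest" sees a private hostname while host and guest share a single kernel. It must run as root.

# Toy demo of one OS-level isolation primitive: a UTS namespace.
# Host and "guest" share one kernel, as in the OS-level model above.
# Linux only; run as root.
import ctypes
import os
import socket

CLONE_NEWUTS = 0x04000000  # constant from <sched.h>

libc = ctypes.CDLL("libc.so.6", use_errno=True)

pid = os.fork()
if pid == 0:
    # Child: detach into a private UTS namespace and rename "the machine".
    if libc.unshare(CLONE_NEWUTS) != 0:
        raise OSError(ctypes.get_errno(), "unshare failed (need root?)")
    socket.sethostname("guest-partition")
    print("guest sees hostname:", socket.gethostname())
    os._exit(0)

os.waitpid(pid, 0)
print("host still sees hostname:", socket.gethostname())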

Server virtualization can be viewed as part of an overall virtualization trend in enterprise IT that includes storage virtualization, network virtualization, and workload management. This trend is one component in the development of autonomic computing, in which the server environment will be able to manage itself based on perceived activity. Server virtualization can be used to eliminate server sprawl, to make more efficient use of server resources, to improve server availability, to assist in disaster recovery, testing and development, and to centralize server administration.

PRODUCT

Hyper-V. Microsoft Hyper-V helps in expanding or establishing a private cloud environment. It promotes effective hardware utilization, improves business continuity, and makes development and testing more efficient. (A short scripted example follows the feature list below.)
Features of Microsoft Hyper-V for Windows Server 2019:
 Persistent memory support.
 Shielded VM updates.
 Simple Two-Node clusters.
 ReFS Deduplication.
 Storage Spaces Direct improvements.
 Windows Admin Center.
 Encrypted subnets.
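
As a brief scripted example, the sketch below drives the Hyper-V PowerShell module from Python to create and start a VM. The VM name, VHD path, and sizes are placeholders, and it assumes a Windows Server host with the Hyper-V role and an elevated session; New-VM and Start-VM are standard Hyper-V cmdlets.

# Sketch: creating a Hyper-V VM via the Hyper-V PowerShell module.
# Name, path, and sizes are illustrative; requires the Hyper-V role
# and an elevated session.
import subprocess

ps_script = r"""
New-VM -Name "test-vm" `
       -MemoryStartupBytes 2GB `
       -Generation 2 `
       -NewVHDPath "C:\VMs\test-vm.vhdx" `
       -NewVHDSizeBytes 40GB
Start-VM -Name "test-vm"
"""

subprocess.run(["powershell", "-NoProfile", "-Command", ps_script], check=True)
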
STORAGE VIRTUALIZATION

Storage virtualization is the pooling of physical storage from multiple storage devices
into what appears to be a single storage device -- or pool of available storage capacity -- that is
managed from a central console. The technology relies on software to identify available storage
capacity from physical devices and to then aggregate that capacity as a pool of storage that can
be used in a virtual environment by virtual machines (VMs).

The virtual storage software intercepts I/O requests from physical or virtual machines and
sends those requests to the appropriate physical location of the storage devices that are part of
the overall pool of storage in the virtualized environment. To the user, virtual storage appears
like a standard read or write to a physical drive.

Even a RAID array can sometimes be considered a type of storage virtualization. Multiple
physical disks in the array are presented to the user as a single storage device that, in the
background, replicates data to multiple disks in case of a single disk failure.

Types of storage virtualization

There are two basic methods of virtualizing storage: file-based or block-based. File-based
storage virtualization is a specific use case, applied to network-attached storage (NAS)
systems. Using the Server Message Block (SMB) or Network File System (NFS) protocols,
file-based storage virtualization breaks the dependency in a normal NAS array between the data being accessed at the file level and the location where the files are physically stored. This enables the NAS system to
better handle file migration in the background to improve performance.

Block-based or block access virtual storage is more widely applied in virtual storage systems
than file-based storage virtualization. Block-based systems abstract the logical storage, such as
a drive partition, from the actual physical memory blocks in a storage device, such as a hard
disk drive (HDD) or solid-state memory device. This enables the virtualization management
software to collect the capacity of the available blocks of memory space and pool them into a
shared resource to be assigned to any number of VMs, bare-metal servers or containers.
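
The essence of that block-level abstraction fits in a few lines of code: a translation table maps the logical blocks a VM sees onto physical blocks pooled from several devices. The toy model below is purely illustrative, not any vendor's implementation.

# Toy model of block-based storage virtualization: logical blocks seen
# by a VM are mapped onto physical blocks pooled from several devices.
from itertools import chain

class BlockPool:
    def __init__(self, devices):
        # Pool the free capacity of every device as (device, block) pairs.
        self.free = list(chain.from_iterable(
            ((name, b) for b in range(blocks)) for name, blocks in devices))

    def allocate(self, n_blocks):
        # Build a virtual disk: logical block i -> some physical block.
        if n_blocks > len(self.free):
            raise RuntimeError("pool exhausted")
        return {i: self.free.pop() for i in range(n_blocks)}

pool = BlockPool([("hdd0", 1000), ("hdd1", 1000), ("ssd0", 500)])
vdisk = pool.allocate(8)
for logical, physical in vdisk.items():
    print(f"logical block {logical} -> {physical}")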

Virtualization methods

Storage virtualization today usually refers to capacity that is accumulated from multiple
physical devices and then made available to be reallocated in a virtualized environment.
Modern IT methodologies, such as hyper-converged infrastructure (HCI), take advantage of
virtual storage, in addition to virtual compute power and often virtual network capacity.

There are multiple ways storage can be applied to a virtualized environment:

Host-based storage virtualization is seen in HCI systems and cloud storage. In this case, the
host, or a hyper-converged system made up of multiple hosts, presents virtual drives of a set
capacity to the guest machines, whether they are VMs in an enterprise environment or PCs
accessing cloud storage. All of the virtualization and management are done at the host level via
software, and the physical storage can be almost any device or array.

Array-based storage virtualization most commonly refers to the method in which a storage
array presents different types of physical storage for use as storage tiers. How much of a storage
tier is made up of solid-state drives (SSDs) or HDDs is handled by software in the array and is
hidden at the guest machine or user level.

Network-based storage virtualization is the most common form used in enterprises today. A
network device, such as a smart switch or purpose-built server, connects to all storage devices
in a Fibre Channel (FC) storage area network (SAN) and presents the storage as a virtual pool.

Storage virtualization disguises the actual complexity of a storage system, such as a SAN,
which helps a storage administrator perform the tasks of backup, archiving and recovery more
easily and in less time.

PRODUCT

In-band storage virtualization products route both data and metadata through the virtualization device. They allow files to be migrated in real time and allow aggregation of many NAS devices or SAN arrays into one pool of storage. The in-band method of operation carries the downside of added latency and a potential single point of failure, which is why these products are typically deployed in pairs. In-band storage virtualization products include Avere OS, EMC Rainfinity, F5 ARX, IBM SAN Volume Controller and the NetApp V-Series.

Out-of-band, or split-path, storage virtualization products separate data and metadata and offer
benefits similar to in-band products. They can also be implemented nondisruptively to a
network/fabric and will not block access to files should the device fail. They do, however, use
agents, and these have to be managed. Out-of-band storage virtualization products include
AutoVirt, Avere OS, EMC Invista and LSI Storage Virtualization Manager.

Another product category that can reasonably be included in the core of true storage
virtualization products is the virtual storage appliance. These products -- available as hardware
and software -- allow users to create SAN-like pools of storage from server disks, white-box
disk arrays and multiple-vendor arrays. The product sits above disk resources, aggregates them, and provides provisioning and data protection functions. Vendors of virtual storage
appliances include HP LeftHand, Pivot3, Seanodes, FalconStor (NSS), Caringo and DataCore.

APPLICATION VIRTUALIZATION

Application virtualization, also called application service virtualization, is a term under the larger umbrella of virtualization. It refers to running an application on a thin client: a terminal or a network workstation that keeps few resident programs and accesses most programs on a connected server. The application runs in an environment that is separate from, sometimes referred to as encapsulated from, the operating system of the local machine.

Application virtualization fools the computer into working as if the application is running on
the local machine, while in fact it is running on a virtual machine (such as a server) in another
location, using its operating system (OS), and being accessed by the local machine.
Incompatibility problems with the local machine’s OS, or even bugs or poor quality code in
the application, may be overcome by running virtual applications.

Application virtualization attempts to separate application programs from an OS with which they conflict, which can even cause systems to halt or crash. Other benefits of application virtualization include:

 Requiring fewer resources compared to using a separate virtual machine.
 Allowing incompatible applications to run on a local machine simultaneously.
 Maintaining a standard, more efficient, and cost-effective OS configuration across
multiple machines in a given organization, independent of the applications being used.
 Facilitating more rapid application deployment.
 Facilitating security by isolating applications from the local OS.
 Easier tracking of license usage, which may save on license costs.
 Allowing applications to be copied to portable media and used by other client
computers, with no need for local installation.
 Increasing ability to handle high and diverse/variable work volume.

LIMITATIONS

There are limitations to application virtualization. Not all applications can be virtualized, such as applications that require device drivers and 16-bit applications that run in shared memory space. Some applications, such as anti-virus programs, must be closely integrated with the local OS and are very difficult to run with application virtualization.

Application virtualization is used in a wide variety of industries, including banking, business scenario simulations, e-commerce, stock trading, and insurance sales and marketing.
Drawbacks of application virtualization

Application virtualization does have its challenges, however. Not all applications are suited to
virtualization. Graphics-intensive applications, for example, can get bogged down in the
rendering process. In addition, users require a steady and reliable connection to the server to
use the applications.

The use of peripheral devices can get more complicated with app virtualization, especially
when it comes to printing. System monitoring products can also have trouble with virtualized
applications, making it difficult to troubleshoot and isolate performance issues.

Application Virtualization Features & Capabilities

Among the most important application virtualization features are:

 Support for a wide range of applications and application types
 Capable of delivering to a wide variety of endpoints with few restrictions such as driver
management, etc.
 Ease of deployment
 Ease of packaging applications into a single executable
 Access control through authentication, IP address etc.

The concept of virtualization generally refers to separating the logical from the physical, and that is at the heart of application virtualization too. The advantage of this approach to accessing application software is that any incompatibility between the local machine's operating system and the application becomes irrelevant: the application is not actually running on the user's own operating system.

Advantages of Application Virtualization

Application virtualization, by decoupling applications from the hardware on which they run, has many advantages. One advantage is maintaining a standard, cost-effective operating
system configuration across multiple machines by isolating applications from their local
operating systems. There are additional cost advantages like saving on license costs, and greatly
reducing the need for support services to maintain a healthy computing environment.

PRODUCT

Citrix XenApp

Citrix offers a client-side and a server-side application virtualization solution. "Server-side" here means what was formerly called Citrix Presentation Server (and before that MetaFrame, and before that WinFrame). The application is executed on the server, and its user interface is displayed on the client using the ICA or the RDP protocol. The client-side application virtualization works similarly to the other tools in this list and supports application streaming. Citrix Provisioning Server for Desktops is another virtualization solution that allows application streaming. However, it streams a complete virtual OS to physical desktops, so it is arguably not really an application virtualization solution.

HYPERVISORS: XEN, VMWARE ESXI, KVM

Xen is an open source hypervisor whose support is included in the Linux kernel and, as such, it is available in most Linux distributions. The Xen Project is one of the many open source projects managed by the Linux Foundation.

Xen components

A typical environment running Xen consists of different parts. To start with, there's Domain 0.
In Xen, this is how you refer to the host operating system (OS), as it's not really a host OS in
the sense that other virtual machines (VMs) -- domains in Xen terminology -- don't have to use
it to get access to the host server hardware. Domain 0 is only responsible for access to the
drivers, and if any coordination has to be done, it will be handled by Domain 0. Apart from
Domain 0, there are the other VMs that are referred to as Domain U.

Paravirtualization

Xen offers two types of virtualization: paravirtualization and full virtualization. In paravirtualization, the virtualized OS runs a modified version of the OS, which results in the OS knowing that it's virtualized. This enables much more efficient communication between the OS and the physical hardware, as the hardware devices can be addressed directly. The only drawback of paravirtualization is that a modified guest OS needs to be used, which isn't provided by many vendors.

The counterpart of paravirtualization is full virtualization. This is a virtualization mode where the CPU needs to provide support for virtualization extensions. In full virtualization, unmodified virtualized OSes can efficiently address the hardware because of this support.
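
Because full virtualization depends on these CPU extensions, checking for them is a common first step before deploying Xen or any other hypervisor. A minimal, Linux-specific sketch:

# Check /proc/cpuinfo for hardware virtualization extensions:
# "vmx" marks Intel VT-x, "svm" marks AMD-V. Linux-specific sketch.
def virtualization_support():
    with open("/proc/cpuinfo") as f:
        flags = {flag for line in f if line.startswith("flags")
                 for flag in line.split(":", 1)[1].split()}
    if "vmx" in flags:
        return "Intel VT-x present: full virtualization supported"
    if "svm" in flags:
        return "AMD-V present: full virtualization supported"
    return "No extensions found: paravirtualization only"

print(virtualization_support())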

 Xen is an open source hypervisor program developed at Cambridge University.
 Xen is a microkernel hypervisor, which separates the policy from the mechanism.
 The Xen hypervisor implements all the mechanisms, leaving the policy to be handled by Domain 0. It does not include any device drivers natively; it just provides a mechanism by which a guest OS can have direct access to the physical devices. As a result, the size of the Xen hypervisor is kept rather small.
 Xen provides a virtual environment located between the hardware and the OS.
 A number of vendors are in the process of developing commercial Xen hypervisors, among them Citrix XenServer and Oracle VM.
 The core components of a Xen system are the hypervisor, kernel, and applications.
 The organization of the three components is important.
 Like other virtualization systems, many guest OSes can run on top of the hypervisor.
 The guest OS, which has control ability, is called Domain 0, and the others are called
Domain U. Domain 0 is a privileged guest OS of Xen.
 It is first loaded when Xen boots without any file system drivers being available.
Domain 0 is designed to access hardware directly and manage devices. Therefore, one
of the responsibilities of Domain 0 is to allocate and map hardware resources for the
guest domains (the Domain U domains).
 Xen is based on Linux and its security level is C2. Its management VM is named Domain 0, which has the privilege to manage other VMs implemented on the same host.
 If Domain 0 is compromised, the hacker can control the entire system. So, in the VM system, security policies are needed to improve the security of Domain 0.
 Domain 0, behaving as a VMM, allows users to create, copy, save, read, modify, share,
migrate, and roll back VMs as easily as manipulating a file, which flexibly provides
tremendous benefits for users.
 It also brings a series of security problems during the software life cycle and data
lifetime.
 Traditionally, a machine’s lifetime can be envisioned as a straight line where the current
state of the machine is a point that progresses monotonically as the software executes.
 During this time, configuration changes are made, software is installed, and patches are applied. In a virtual environment, the VM state is more akin to a tree: at any point, execution can go into N different branches, and multiple instances of a VM can exist at any point in this tree at any given time. VMs are allowed to roll back to previous states in their execution (e.g., to fix configuration errors) or rerun from the same point many times (e.g., as a means of distributing dynamic content or circulating a "live" system image).

• Xen has a “thin hypervisor” model:

◦ No device drivers; keeps domains/guests isolated

◦ 2 MB executable

◦ Relies on service domains for functionality

VMware ESXi is an operating system-independent hypervisor based on the VMkernel operating system that interfaces with agents that run on top of it. ESXi stands for Elastic Sky X Integrated.
ESXi is a type-1 hypervisor, meaning it runs directly on system hardware without the need for
an operating system (OS). Type-1 hypervisors are also referred to as bare-metal hypervisors
because they run directly on hardware.

ESXi is targeted at enterprise organizations. VMware describes an ESXi system as similar to a stateless compute node. Virtualization administrators can upload state information from a saved configuration file.

ESXi's VMkernel interfaces directly with VMware agents and approved third-party modules.
Admins can configure VMware ESXi using its console or a vSphere client. They can also check
VMware's hardware compatibility list for approved, supported hardware on which to install
ESXi.
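
Beyond the console and the vSphere client, ESXi can also be scripted against its management API. The sketch below uses the community pyVmomi SDK (pip install pyvmomi) to connect to a host and list its VMs. The host name and credentials are placeholders, and certificate verification is disabled only to keep this lab example short.

# Sketch: listing the VMs on an ESXi host through the vSphere API.
# Host and credentials are placeholders; the unverified SSL context
# is for lab use only.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()
si = SmartConnect(host="esxi01.example.com", user="root",
                  pwd="password", sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        print(vm.name, vm.runtime.powerState)
finally:
    Disconnect(si)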

Different versions of ESX and ESXi

VMware released ESXi after the release of VMware ESX version 4.1 in 2010. From version 5 onward, only ESXi is supported. ESXi is currently on version 6.7, which mainly includes bug fixes for previous ESXi versions.

ESX licensees can choose to deploy ESXi instead of ESX on any given server. Before ESXi,
VMware offered the ESX hypervisor, which comprised more parts, such as the console OS
and firewall. Remote command-line interfaces and system management standards replaced the
service console functions.

The hypervisor supports Auto Deploy and custom image creation, along with other tools that
weren't included in ESX. According to VMware, ESXi's architecture occupies less than 150
MB of space -- 32 MB of on-disk space -- compared to about 2 GB with ESX.

A stripped-down, free version of ESXi -- VMware vSphere Hypervisor -- supports fewer features. Although it can't communicate with vCenter Server, it virtualizes servers with options like thin provisioning. The paid version of ESXi includes live migration of machines, automatic load balancing, and pooling of storage and compute resources across multiple hosts.

Key features of ESXi


VMware ESXi supports key features including traffic shaping, memory ballooning, role-based security access, logging and auditing, a GUI, and vSphere PowerCLI. ESXi also supports virtual machines with up to 128 virtual CPUs and 120 devices.


Admins can manage this functionality using remote tools instead of a CLI, and ESXi can use an API-based integration model instead of third-party management agents. ESXi supports the creation of VMs with VMware Server and Microsoft Virtual Server.

ESXi benefits and drawbacks

Installing ESXi in a data center is quick and simple because of its lightweight footprint of 150
MB. Also, admins need fewer patches because of ESXi's lightweight format. Due to its smaller
size, ESXi is seen as more secure. In addition, security management is built into the VMkernel.
ESXi also offers a simplified GUI.

Unfortunately, ESXi offers fewer configuration options to maintain its size. There is also a
learning curve for those who haven't used a virtualization product before.

Another drawback to ESXi is that the overhead created with additional CPU work and OS calls
might cause an application to slow down in a VM. The free version of ESXi also limits users
to the use of two physical CPUs.

FEATURES
By consolidating multiple servers onto fewer physical devices, ESXi reduces space, power and
IT administrative requirements while driving high-speed performance.

Small Footprint
With a footprint of just 150MB, ESXi lets you do more with less while minimizing security
threats to your hypervisor.

Reliable Performance
Accommodate apps of any size. Configure virtual machines with up to 128 virtual CPUs, 6 TB of
RAM and 120 devices to satisfy all your application needs. Consult individual solution limits
to ensure you do not exceed supported configurations for your environment.

Enhanced Security
Protect sensitive virtual machine data with powerful encryption capabilities. Role-based access
simplifies administration, and extensive logging and auditing ensure greater accountability and
easier forensic analysis.

Ecosystem Excellence
Get support for a broad ecosystem of hardware OEM vendors, technology service partners,
apps, and guest operating systems.

User-Friendly Experience
Manage day-to-day administrative operations with a built-in, modern UI based on HTML5
standards. For customers who need to automate their operations, VMware offers both a vSphere
Command Line Interface and developer-friendly REST-based APIs.

To understand how to optimize the performance of VMware vSphere ESXi, you should first understand the VMware vSphere architecture. VMware vSphere ESXi (a hypervisor) installs the virtualization layer on an x86-based platform. vSphere ESXi is a platform for running multiple virtual machines on a single physical machine. Each VM runs on the VMM, which can provide virtual hardware for the guest operating system in the VM. VMware vSphere ESXi uses the VMM to share the physical hardware (for example, CPU, memory, network, storage devices, and so on) with each virtual machine in the VMkernel. Each virtual machine has only one VMM for physical resource sharing.

[Figure: components of the vSphere ESXi host and the guest OS.]

Many different factors can be responsible for performance issues in a VMware vSphere environment. These factors involve different hardware and software components, for example, CPU, memory, network, and disk I/O.

VMware ESXi is the next-generation hypervisor, providing a new foundation for virtual
infrastructure. This innovative architecture operates independently from any general
purpose operating system, offering improved security, increased reliability, and
simplified management.

The commercial versions of vSphere include features like:

 Memory overcommitment

 High availability (called vSphere HA)

 vMotion

 Storage vMotion (svMotion)

 vSphere Data Protection (for backup and recovery)

 vSphere Replication

 vShield Endpoint protection (the option to use agentless anti-virus solutions)

 Hot add of memory and hot plug for CPU

 Fault tolerance (FT) for availability

 Distributed Resource Scheduler (DRS) for VM “load balancing” (effectively)

 Distributed Power Management (DPM), which consolidates VMs with vMotion and shuts down hosts to save power

KVM

The open source KVM (Kernel-based Virtual Machine) is a Linux-based type-1 hypervisor that can be added to most Linux operating systems, including Ubuntu, SUSE, and Red Hat Enterprise Linux. It supports most common Linux operating systems, Solaris, and Windows as guests. Most distributions that offer KVM also offer additional management tools on top, such as Red Hat's Virtual Machine Manager.
KVM hypervisor is the virtualization layer in Kernel-based Virtual Machine (KVM), a
free, open source virtualization architecture for Linux distributions.

A hypervisor is a program that allows multiple operating systems to share a single hardware host. In KVM, the Linux kernel itself acts as the hypervisor, streamlining management and improving performance in virtualized environments. The hypervisor creates virtual machine (VM) environments and coordinates calls for processor, memory, hard disk, network, and other resources through the host OS. KVM requires a processor with hardware virtualization extensions to run guest OSes.
KVM has been bundled along with the Linux operating system (OS) since 2007 and can be installed along with the Linux kernel. Numerous guest OSes can work with KVM, including BSD (Berkeley Software Distribution), Solaris, Windows, Haiku, ReactOS, Plan 9, and the AROS Research OS. In addition, a modified version of QEMU ("Quick Emulator") can use KVM to run macOS.

How does KVM work?

KVM converts Linux into a type-1 (bare-metal) hypervisor. All hypervisors need some
operating system-level components—such as a memory manager, process scheduler,
input/output (I/O) stack, device drivers, security manager, a network stack, and more—to run
VMs. KVM has all these components because it’s part of the Linux kernel. Every VM is
implemented as a regular Linux process, scheduled by the standard Linux scheduler, with
dedicated virtual hardware like a network card, graphics adapter, CPU(s), memory, and disks.
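
In practice, KVM guests are usually managed through libvirt. The minimal sketch below uses the libvirt Python binding (pip install libvirt-python) and assumes a local daemon managing qemu/KVM at the qemu:///system URI:

# Sketch: enumerating KVM guests via libvirt.
# Assumes a local libvirt daemon at qemu:///system.
import libvirt

conn = libvirt.open("qemu:///system")
try:
    for dom in conn.listAllDomains():
        state, maxmem, mem, vcpus, cputime = dom.info()
        print(f"{dom.name()}: state={state} vCPUs={vcpus} mem={mem // 1024} MiB")
finally:
    conn.close()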

HYPERVISOR KVM: ARCHITECTURE

KVM (an abbreviation of Kernel-based Virtual Machine) is software that implements hardware-assisted virtualization on Linux and Linux-like systems. KVM has been part of the Linux kernel for some time now, so the two are developed together. It works only on systems whose Intel or AMD CPUs provide hardware virtualization support.

To do its work, KVM uses direct access to the kernel through a CPU-specific module (kvm-intel or kvm-amd). In addition, the complex contains the main kernel module kvm.ko and UI elements, including the popular QEMU. The hypervisor makes it possible to work directly with virtual machine files and disk images from other programs. An isolated space is created for every machine, with its own RAM, disk, network access, video card and other devices.

ADVANTAGES AND DISADVANTAGES OF KVM

Like any software solution, KVM has both pros and cons, on which hosting providers and end consumers base their decisions about using this software. There are several advantages of the hypervisor, such as:
 Independently dedicated resources. Every KVM-based virtual machine receives its own volume of RAM and disk space and cannot intrude on other machines' resources, thereby increasing stability.
 Wide guest OS support. Besides full support for UNIX distributions, including *BSD, Solaris and Linux, it is possible to install Windows and even macOS.
 Interaction with the kernel makes it possible to address the workstation hardware directly, which makes the work faster.
 With the support of software market giants (Red Hat, HP, Intel, IBM), the project is growing fast, covering more hardware and OSes, including the newest ones.
 Simple administration allows remote control using VNC and a wide array of external software and add-ons.

However, some disadvantages could not be avoided:

 The hypervisor is relatively young (compared with Xen, for example), and its rapid growth leads to various issues, especially when support for new hardware and software environments is added.
 Configuration is complicated, especially for an inexperienced user. In truth, most options can stay unchanged, as they are already tuned to near-optimal values out of the box.

THE FUNCTIONAL POSSIBILITIES AND PROPERTIES OF HYPERVISOR

The KVM complex is characterized by such main properties as security, convenient RAM control, reliable data storage, dynamic migration, performance, scalability and stability.

Security
Every machine in KVM is a Linux-based process, so it follows all standard security policies and is isolated from other processes. Special add-ons (such as SELinux) add further security elements, such as access control and encryption.

RAM control
As KVM is a part of the Linux kernel, the hypervisor inherits powerful instruments of RAM control. The memory pages of every process (virtual machine) can be easily copied and changed without slowing the work. On multi-CPU systems, KVM can control huge volumes of memory. Memory page merging, a process that unifies identical pages and delivers a copy to a machine on request, is available, as are other optimization methods.

Data storage

For machine images and data storage, KVM can use any storage device supported by the host operating system, for example hard drives, NAS, or removable storage devices, with multi-threaded input/output for better performance. Moreover, the hypervisor can operate with distributed file systems such as GFS2. Disks for KVM have their own format that supports dynamic creation of layered images, encryption and compression.

Dynamic migration

An important feature of KVM is support for live migration: the relocation of virtual machines between different hosts without stopping them. Such a migration is entirely unnoticeable to the user: the machine continues to work, performance isn't interrupted, and network connections stay active. It is also possible to migrate by saving the current state of a virtual machine to an image and opening it on a new host.
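
On libvirt-managed KVM hosts, such a live migration is typically triggered with a single command; the sketch below wraps it from Python. The guest and destination host names are placeholders, and SSH access between the hosts is assumed.

# Sketch: triggering a KVM live migration through virsh. The guest keeps
# running while its memory is streamed to the destination host.
import subprocess

subprocess.run(
    ["virsh", "migrate", "--live", "web-vm",
     "qemu+ssh://host2.example.com/system"],
    check=True,
)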

Performance and scalability

Thanks to tight integration with Linux, scalability and performance are inherited directly from Linux. The hypervisor supports up to 16 CPUs (both virtual and physical) and up to 256 GB of RAM per virtual machine. This makes it possible to use the hypervisor even in the most heavily loaded systems.

Stability

The program complex is continually improved. While it originally supported only the Linux x86 platform, the number of supported platforms now runs into the dozens, including popular server operating systems. Moreover, it is easy to run virtual machines with a modified OS pack, provided it is compatible with the installed platform. Thanks to cooperation with key software development companies, the hypervisor can be called one of the most stable and reliable on the market.
