Virtualization Assig
DESKTOP VIRTUALIZATION
Desktop virtualization is the practice of isolating a logical operating system (OS) instance from
the client that is used to access it.
Desktop virtualization provides a way for users to maintain their individual desktops on a
single, central server. The users may be connected to the central server through a LAN, WAN
or over the Internet.
Desktop virtualization "virtualizes desktop computers," and these virtual desktop environments
are "served" to users over the network. You interact with a virtual desktop in the same way you
would use a physical desktop. Another benefit of desktop virtualization is that it lets you log in
remotely to access your desktop from any location.
BENEFITS
Desktop virtualization has many benefits, including a lower total cost of ownership (TCO),
increased security, reduced energy costs, reduced downtime and centralized management.
TYPES
There are several different conceptual models of desktop virtualization, which can broadly be
divided into two categories based on whether the operating system instance is executed locally
or remotely. It is important to note that not all forms of desktop virtualization technology
involve the use of virtual machines (VMs).
Client virtualization requires processing to occur on local hardware; thin clients, zero clients
and mobile devices cannot be used. These types of desktop virtualization include:
OS image streaming: The operating system runs on local hardware, but it boots to a
remote disk image across the network. This is useful for groups of desktops that use the same
disk image. OS image streaming requires a constant network connection in order to function.
PRODUCT
1. Amazon WorkSpaces
Amazon WorkSpaces, a managed, secure cloud desktop service, heads the giant names in
our top 20 best virtual desktop infrastructure software. You can use Amazon WorkSpaces to
provision either Windows or Linux desktops in just a few minutes and quickly scale to provide
thousands of desktops to workers across the globe. You can pay either monthly or hourly, just
for the WorkSpaces you launch, which helps you save money when compared to traditional
desktops and on-premises VDI solutions.
Among its most prominent features, it helps you eliminate many administrative tasks
associated with managing your desktop lifecycle including provisioning, deploying,
maintaining, and recycling desktops. There is less hardware inventory to manage and no need
for complex virtual desktop infrastructure (VDI) deployments that don’t scale.
Active Directory integration. Rather than forcing you to spend time on creating a new
directory, Amazon WorkSpaces lets you connect to your existing Active Directory.
This way, you can easily manage and modify user access rights from a single interface
and roll them out across the organization with ease.
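Provisioning against that existing directory can be scripted with the AWS CLI. This is a command sketch rather than a runnable recipe: the DirectoryId, UserName and BundleId values below are illustrative placeholders, and the commands require a configured AWS account.

```shell
# List the WorkSpaces already provisioned in this account and region
aws workspaces describe-workspaces

# Provision a new WorkSpace for a user from the connected Active Directory
# (DirectoryId, UserName and BundleId are placeholder values)
aws workspaces create-workspaces --workspaces \
    DirectoryId=d-1234567890,UserName=jdoe,BundleId=wsb-abcdef123
```

Billing then follows the chosen running mode (monthly or hourly) for each WorkSpace launched this way.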
2. IBM Cloud
IBM Cloud is an accelerated virtual desktop infrastructure software that is integrated with
industry-standard graphics and storage capabilities to eliminate productivity barriers. The
platform empowers mobile workforces to gain workstation-like experience on any device for
fast and convenient access to graphics-intensive applications and files anytime and anywhere
there is an internet connection.
Our initial IBM Cloud reviews show the platform’s robust VDI functionalities backed by
security safeguards to protect in-flight and at-rest content from loss and theft. The system
configures and scales computing and storage options housed in different data centers
worldwide, with GPU technology that speeds up access to graphics-intensive materials.
Cost efficient. IBM Cloud allows organizations to switch from a capital expense
(CAPEX) to an operating expense (OPEX) model for their infrastructure, while also
reducing total cost of ownership as users become less reliant on desktop workstations
and standalone software licenses. The software is also easy to set up, with on-demand
access to top compute services and desktop virtualization solutions.
IBM's vaunted security. IBM Cloud sends only encrypted visual output and mouse or
keyboard input over the network. This means users no longer need local copies of files,
and the risk of content being compromised is reduced. Ground-up security
solutions include physical safeguards; network security; and system, application and
data security.
NETWORK VIRTUALIZATION
Network virtualization refers to the management and monitoring of an entire computer network
as a single administrative entity from a single software-based administrator’s console. Network
virtualization also may include storage virtualization, which involves managing all storage as
a single resource. Network virtualization is designed to allow network optimization of data
transfer rates, flexibility, scalability, reliability and security. It automates many network
administrative tasks, which actually disguise a network's true complexity. All network servers
and services are considered one pool of resources, which may be used without regard to the
physical components.
Network virtualization is especially useful for networks experiencing a rapid, large and
unpredictable increase in usage.
The intended result of network virtualization is improved network productivity and efficiency,
as well as job satisfaction for the network administrator.
Virtual networks exist in two forms: internal and external. Both terms refer to inside
or outside the server. External virtualization uses tools such as switches, adapters or a
network to combine one or more networks into virtual units. Internal virtualization refers to
using network-like functionality in software containers on a single network server. Internal
software allows VMs to exchange data on a host without using an external network.
ADVANTAGES AND DISADVANTAGES
The use of network virtualization has both upsides and downsides.
PRODUCT
Cisco's network virtualization solution comprises three components:
Access control
Path isolation
Services edge
Access control provides secure, customized access for individuals and groups to protect the
Enterprise LAN from external threats. Complementary features include:
Port authentication using standards such as IEEE 802.1x for strong connections
between authorized users and VPNs.
Cisco Network Admission Control (NAC), to minimize security risks by removing
harmful traffic.
Path isolation maps validated users or devices to the correct secure set of available resources
(virtual private network, or VPN). Cisco offers three path isolation solutions:
Generic routing encapsulation (GRE) tunnels create closed user groups on the
Enterprise LAN to allow guest access to the Internet, while preventing access to internal
resources.
Virtual routing and forwarding (VRF)-lite, allows network managers to use a single
routing device to support multiple virtual routers.
Multiprotocol label switching (MPLS) VPNs also partition a campus network for
closed user groups.
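As a sketch of what VRF-lite looks like on a Cisco IOS device -- the VRF name, route distinguisher, interface and address below are illustrative assumptions, not values from this document:

```
! Define a VRF and attach an interface to it (Cisco IOS, illustrative values)
ip vrf GUEST
 rd 65000:10
!
interface GigabitEthernet0/1
 ip vrf forwarding GUEST
 ip address 192.0.2.1 255.255.255.0
```

Traffic arriving on that interface is looked up in the GUEST routing table rather than the global one, giving each user group its own virtual router on shared hardware.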
Services edge provides access to services for legitimate sets of users and devices by using
centralized policy enforcement.
SERVER VIRTUALIZATION
Server virtualization is the masking of server resources, including the number and identity of
individual physical servers, processors, and operating systems, from server users. The server
administrator uses a software application to divide one physical server into multiple isolated
virtual environments. The virtual environments are sometimes called virtual private servers,
but they are also known as guests, instances, containers or emulations.
There are three popular approaches to server virtualization: the virtual machine model, the
paravirtual machine model, and virtualization at the operating system (OS) layer.
Virtual machines are based on the host/guest paradigm. Each guest runs on a virtual imitation
of the hardware layer. This approach allows the guest operating system to run without
modifications. It also allows the administrator to create guests that use different operating
systems. The guest has no knowledge of the host's operating system because it is not aware that
it's not running on real hardware. It does, however, require real computing resources from the
host -- so it uses a hypervisor to coordinate instructions to the CPU. The hypervisor is called a
virtual machine monitor (VMM). It validates all the guest-issued CPU instructions and
manages any executed code that requires additional privileges. VMware and Microsoft Virtual
Server both use the virtual machine model.
The paravirtual machine (PVM) model is also based on the host/guest paradigm -- and it uses
a virtual machine monitor too. In the paravirtual machine model, however, the VMM actually
modifies the guest operating system's code. This modification is called porting. Porting
supports the VMM so it can utilize privileged system calls sparingly. Like virtual machines,
paravirtual machines are capable of running multiple operating systems. Xen and UML both
use the paravirtual machine model.
Virtualization at the OS level works a little differently. It isn't based on the host/guest paradigm.
In the OS level model, the host runs a single OS kernel as its core and exports operating system
functionality to each of the guests. Guests must use the same operating system as the host,
although different distributions of the same system are allowed. This distributed architecture
eliminates system calls between layers, which reduces CPU usage overhead. It also requires
that each partition remain strictly isolated from its neighbours so that a failure or security
breach in one partition isn't able to affect any of the other partitions. In this model, common
binaries and libraries on the same physical machine can be shared, allowing an OS level virtual
server to host thousands of guests at the same time. Virtuozzo and Solaris Zones both use OS-
level virtualization.
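The isolation described above can be glimpsed with Linux namespaces, the kernel primitive behind such containers. This is a minimal sketch, not any particular product's implementation; it assumes a Linux host with util-linux installed and root privileges.

```shell
# Start a shell in new PID and mount namespaces; inside it,
# processes see an isolated process table rooted at PID 1.
sudo unshare --fork --pid --mount-proc /bin/bash

# Inside the new namespace, only this shell and its children are visible:
ps aux
```

All guests still share the single host kernel, which is exactly why OS-level guests must run the same operating system as the host.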
STORAGE VIRTUALIZATION
Storage virtualization is the pooling of physical storage from multiple storage devices
into what appears to be a single storage device -- or pool of available storage capacity -- that is
managed from a central console. The technology relies on software to identify available storage
capacity from physical devices and to then aggregate that capacity as a pool of storage that can
be used in a virtual environment by virtual machines (VMs).
The virtual storage software intercepts I/O requests from physical or virtual machines and
sends those requests to the appropriate physical location of the storage devices that are part of
the overall pool of storage in the virtualized environment. To the user, virtual storage appears
like a standard read or write to a physical drive.
Even a RAID array can sometimes be considered a type of storage virtualization. Multiple
physical disks in the array are presented to the user as a single storage device that, in the
background, replicates data to multiple disks in case of a single disk failure.
There are two basic methods of virtualizing storage: file-based or block-based. File-based
storage virtualization is a specific use case, applied to network-attached storage (NAS)
systems. Using the Server Message Block (SMB) or Network File System (NFS) protocols,
file-based storage virtualization breaks the dependency in a normal NAS array between the
data being accessed and the location of physical storage. This enables the NAS system to
better handle file migration in the background to improve performance.
Block-based or block access virtual storage is more widely applied in virtual storage systems
than file-based storage virtualization. Block-based systems abstract the logical storage, such as
a drive partition, from the actual physical blocks in a storage device, such as a hard
disk drive (HDD) or solid-state drive (SSD). This enables the virtualization management
software to collect the capacity of the available blocks and pool them into a
shared resource to be assigned to any number of VMs, bare-metal servers or containers.
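Linux LVM is a familiar example of this block-based pooling. The device names below are assumptions for illustration, and the commands need root on a host with spare disks:

```shell
# Mark two physical disks as LVM physical volumes
sudo pvcreate /dev/sdb /dev/sdc

# Aggregate their capacity into one pool (a volume group)
sudo vgcreate vg_pool /dev/sdb /dev/sdc

# Carve a 100 GB logical volume out of the pool for a VM
sudo lvcreate -L 100G -n lv_vm1 vg_pool

# The guest sees /dev/vg_pool/lv_vm1 as an ordinary block device
sudo lvs vg_pool
```

The logical volume can span both disks; the consumer never sees where its blocks physically live, which is the abstraction the paragraph above describes.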
Virtualization methods
Storage virtualization today usually refers to capacity that is accumulated from multiple
physical devices and then made available to be reallocated in a virtualized environment.
Modern IT methodologies, such as hyper-converged infrastructure (HCI), take advantage of
virtual storage, in addition to virtual compute power and often virtual network capacity.
Host-based storage virtualization is seen in HCI systems and cloud storage. In this case, the
host, or a hyper-converged system made up of multiple hosts, presents virtual drives of a set
capacity to the guest machines, whether they are VMs in an enterprise environment or PCs
accessing cloud storage. All of the virtualization and management are done at the host level via
software, and the physical storage can be almost any device or array.
Array-based storage virtualization most commonly refers to the method in which a storage
array presents different types of physical storage for use as storage tiers. How much of a storage
tier is made up of solid-state drives (SSDs) or HDDs is handled by software in the array and is
hidden at the guest machine or user level.
Network-based storage virtualization is the most common form used in enterprises today. A
network device, such as a smart switch or purpose-built server, connects to all storage devices
in a Fibre Channel (FC) storage area network (SAN) and presents the storage as a virtual pool.
Storage virtualization disguises the actual complexity of a storage system, such as a SAN,
which helps a storage administrator perform the tasks of backup, archiving and recovery more
easily and in less time.
PRODUCT
In-band storage virtualization products route data and metadata through the device. They allow files to
be migrated in real time and allow aggregation of many NAS devices or SAN arrays into one
pool of storage. The in-band method of operation carries the downside of added latency and a
potential single point of failure, which would mean deployment of these products in pairs. In-
band storage virtualization products include Avere OS, EMC Rainfinity, F5 ARX, IBM SAN
Volume Controller and NetApp V-series.
Out-of-band, or split-path, storage virtualization products separate data and metadata and offer
benefits similar to in-band products. They can also be implemented nondisruptively to a
network/fabric and will not block access to files should the device fail. They do, however, use
agents, and these have to be managed. Out-of-band storage virtualization products include
AutoVirt, Avere OS, EMC Invista and LSI Storage Virtualization Manager.
Another product category that can reasonably be included in the core of true storage
virtualization products is the virtual storage appliance. These products -- available as hardware
and software -- allow users to create SAN-like pools of storage from server disks, white-box
disk arrays and multiple-vendor arrays. The product sits above disk resources and aggregates
them and allows provisioning and data protection functions. Vendors of virtual storage
appliances include HP LeftHand, Pivot3, Seanodes, FalconStor (NSS), Caringo and DataCore.
APPLICATION VIRTUALIZATION
Application virtualization, also called application service virtualization, is a term under the
larger umbrella of virtualization. It refers to running an application on a thin client -- a terminal
or a network workstation with few resident programs -- while accessing most programs residing on
a connected server. The application runs in an environment separate from, sometimes referred
to as encapsulated from, the local operating system.
Application virtualization fools the computer into working as if the application is running on
the local machine, while in fact it is running on a virtual machine (such as a server) in another
location, using its operating system (OS), and being accessed by the local machine.
Incompatibility problems with the local machine’s OS, or even bugs or poor quality code in
the application, may be overcome by running virtual applications.
LIMITATIONS
There are limitations to application virtualization. Not all applications can be virtualized, such
as applications requiring device drivers and 16-bit applications running in shared memory space.
Applications that must integrate closely with the local OS, such as anti-virus programs, are
also very difficult to run with application virtualization.
Not all applications are suited to virtualization. Graphics-intensive applications, for example,
can get bogged down in the rendering process. In addition, users require a steady and reliable
connection to the server to use the applications.
The use of peripheral devices can get more complicated with app virtualization, especially
when it comes to printing. System monitoring products can also have trouble with virtualized
applications, making it difficult to troubleshoot and isolate performance issues.
The concept of virtualization generally refers to separating the logical from the physical, and
that is at the heart of application virtualization too. The advantage of this approach to accessing
application software is that any incompatibility problems between the local machine's
operating system and the application are irrelevant; the user's machine is not actually using
its own operating system to run the application.
Application virtualization, by decoupling applications from the hardware on which they
run, has many advantages. One advantage is maintaining a standard, cost-effective operating
system configuration across multiple machines by isolating applications from their local
operating systems. There are additional cost advantages, such as saving on license costs and greatly
reducing the need for support services to maintain a healthy computing environment.
PRODUCT
Citrix XenApp
Citrix offers both a client-side and a server-side application virtualization solution. By "server-
side" they mean what was formerly called Citrix Presentation Server (and before that
MetaFrame, and before that WinFrame). The application is executed on the server, and its user
interface is displayed on the client using the ICA or the RDP protocol. The client-
side application virtualization works similarly to the other tools in this list and supports
application streaming. Citrix Provisioning Server for Desktops is another virtualization
solution that allows application streaming. However, it streams a complete virtual OS to
physical desktops, so I think that this product is not really an application virtualization solution.
HYPERVISORS: XEN, VMWARE ESXI, KVM
Xen is an open source hypervisor; support for it is included in the Linux kernel and, as such,
it is available in most Linux distributions. The Xen Project is one of the many open source
projects managed by the Linux Foundation.
Xen components
A typical environment running Xen consists of different parts. To start with, there's Domain 0.
In Xen, this is how you refer to the host operating system (OS), as it's not really a host OS in
the sense that other virtual machines (VMs) -- domains in Xen terminology -- don't have to use
it to get access to the host server hardware. Domain 0 is only responsible for access to the
drivers, and if any coordination has to be done, it will be handled by Domain 0. Apart from
Domain 0, there are the other VMs that are referred to as Domain U.
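Day-to-day work with these domains uses Xen's xl toolstack. This is a command sketch to be run as root inside Domain 0 (the guest configuration path is an illustrative assumption):

```shell
# Show running domains; Domain-0 is always listed
xl list

# Create a guest domain (a Domain U) from a configuration file
# (/etc/xen/guest1.cfg is a placeholder path)
xl create /etc/xen/guest1.cfg

# Inspect hypervisor and host details
xl info
```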
VMWARE ESXI
ESXi's VMkernel interfaces directly with VMware agents and approved third-party modules.
Admins can configure VMware ESXi using its console or a vSphere client. They can also check
VMware's hardware compatibility list for approved, supported hardware on which to install
ESXi.
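From the ESXi Shell (or over SSH, when it has been enabled), a few standard commands confirm the host's version and inventory. This is a command sketch that needs an actual ESXi host:

```shell
# Report the ESXi version and build number
esxcli system version get

# List the virtual machines registered on this host
vim-cmd vmsvc/getallvms

# Show the host's hardware platform (useful when checking the HCL)
esxcli hardware platform get
```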
VMware ESX 4.1, released in 2010, was the final version of ESX; from vSphere 5 onward,
only ESXi is supported. ESXi is currently on version 6.7, which mainly
includes bug fixes for previous ESXi versions.
ESX licensees can choose to deploy ESXi instead of ESX on any given server. Before ESXi,
VMware offered the ESX hypervisor, which comprised more parts, such as the console OS
and firewall. Remote command-line interfaces and system management standards replaced the
service console functions.
The hypervisor supports Auto Deploy and custom image creation, along with other tools that
weren't included in ESX. According to VMware, ESXi's architecture occupies less than 150
MB of on-disk space, compared to about 2 GB with ESX.
Admins can manage this functionality using remote tools instead of a CLI, and ESXi can use
an API-based integration model instead of third-party management agents. ESXi supports the
creation of VMs with VMware Server and Microsoft Virtual Server.
Installing ESXi in a data center is quick and simple because of its lightweight footprint of 150
MB. Also, admins need fewer patches because of ESXi's lightweight format. Due to its smaller
size, ESXi is seen as more secure. In addition, security management is built into the VMkernel.
ESXi also offers a simplified GUI.
Unfortunately, ESXi offers fewer configuration options to maintain its size. There is also a
learning curve for those who haven't used a virtualization product before.
Another drawback to ESXi is that the overhead created with additional CPU work and OS calls
might cause an application to slow down in a VM. The free version of ESXi also limits users
to the use of two physical CPUs.
FEATURES
By consolidating multiple servers onto fewer physical devices, ESXi reduces space, power and
IT administrative requirements while driving high-speed performance.
Small Footprint
With a footprint of just 150MB, ESXi lets you do more with less while minimizing security
threats to your hypervisor.
Reliable Performance
Accommodate apps of any size. Configure virtual machines with up to 128 virtual CPUs, 6 TB of
RAM and 120 devices to satisfy all your application needs. Consult individual solution limits
to ensure you do not exceed supported configurations for your environment.
Enhanced Security
Protect sensitive virtual machine data with powerful encryption capabilities. Role-based access
simplifies administration, and extensive logging and auditing ensure greater accountability and
easier forensic analysis.
Ecosystem Excellence
Get support for a broad ecosystem of hardware OEM vendors, technology service partners,
apps, and guest operating systems.
User-Friendly Experience
Manage day-to-day administrative operations with built-in modern UI based on HTML5
standards. For customers who need to automate their operations, VMware offers both a vSphere
Command Line Interface and developer-friendly REST-based APIs.
Firstly, you should understand the VMware vSphere architecture before considering how to
optimize the performance of VMware vSphere ESXi. VMware vSphere ESXi (a hypervisor)
installs the virtualization layer on an x86-based platform. vSphere ESXi is a platform for
running multiple virtual machines on a single physical machine. Each VM runs on a VMM,
which provides virtual hardware for the guest operating system in the VM. VMware
vSphere ESXi uses the VMM to share the physical hardware (for example, CPU, memory,
network, storage devices, and so on) with each virtual machine in the VMkernel. Each virtual
machine has exactly one VMM for physical resource sharing.
The following diagram shows the components of the vSphere ESXi host and the guest OS:
There are many different factors responsible for performance issues in a VMware vSphere
environment. These factors depend on different hardware and software components, for
example, CPU, memory, network and disk I/O.
VMware ESXi is the next-generation hypervisor, providing a new foundation for virtual
infrastructure. This innovative architecture operates independently from any general
purpose operating system, offering improved security, increased reliability, and
simplified management.
vMotion
Storage vMotion (svMotion)
vSphere Replication
Distributed power management (DPM), consolidates VMs with vMotion and shuts down
hosts to save power
KVM
The open-source KVM (Kernel-Based Virtual Machine) is a Linux-based type-1 hypervisor
that can be added to most Linux operating systems, including Ubuntu, SUSE, and Red Hat
Enterprise Linux. It supports most common Linux operating systems, Solaris, and Windows
as guests. Most platforms that offer KVM also provide additional management tools on top,
such as Red Hat's Virtual Machine Manager.
KVM hypervisor is the virtualization layer in Kernel-based Virtual Machine (KVM), a
free, open source virtualization architecture for Linux distributions.
KVM converts Linux into a type-1 (bare-metal) hypervisor. All hypervisors need some
operating system-level components—such as a memory manager, process scheduler,
input/output (I/O) stack, device drivers, security manager, a network stack, and more—to run
VMs. KVM has all these components because it’s part of the Linux kernel. Every VM is
implemented as a regular Linux process, scheduled by the standard Linux scheduler, with
dedicated virtual hardware like a network card, graphics adapter, CPU(s), memory, and disks.
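On a Linux host you can check whether these pieces are in place before creating VMs. A sketch assuming an x86 machine with an Intel or AMD CPU; the output depends on the host:

```shell
# Count CPU flags indicating hardware virtualization support
# (vmx = Intel VT-x, svm = AMD-V); 0 means no support is exposed
grep -cE 'vmx|svm' /proc/cpuinfo

# Confirm the kernel modules are loaded (kvm plus kvm_intel or kvm_amd)
lsmod | grep kvm

# The device node user space (e.g. QEMU) opens to create VMs
ls -l /dev/kvm
```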
KVM (an abbreviation of Kernel-based Virtual Machine) is software that implements
virtualization on Linux and Linux-like systems. KVM has been part of the Linux kernel for
some time now, which is why the two are developed together. It works only on systems whose
Intel or AMD CPUs have hardware virtualization support.
KVM works through direct access to the kernel via a CPU-specific module (kvm-intel
or kvm-amd). In addition, the complex contains the main kernel module kvm.ko and user-space
components, including the popular QEMU. The hypervisor makes it possible to work directly
with virtual machine files and disk images from other programs. An isolated space is created
for every machine, with its own RAM, disk, network access, video card and other devices.
ADVANTAGES AND DISADVANTAGES OF KVM
As with any software solution, KVM has both pros and cons, on which hosters and end
users base their decision about using this software. Its advantages include:
Independently dedicated resources. Every KVM-based virtual machine receives its own
volume of memory and disk space and cannot intrude on other machines' resources, which
increases the stability of operation.
Wide guest OS support. In addition to full support for UNIX-like distributions, including *BSD,
Solaris and Linux, it is possible to install Windows and even macOS.
Interaction with the kernel makes it possible to address the workstation hardware directly,
which makes guests faster.
With the support of software market giants (Red Hat, HP, Intel, IBM), the project is
growing fast, covering more hardware and operating systems, including the newest ones.
Simple administration offers the possibility of remote control using VNC and a wide array of
external software and add-ons.
The KVM complex is characterized by such properties as security, convenient RAM control,
reliable data storage, live migration, performance, scalability and stability.
Security
Every machine in KVM is a Linux-based process; therefore it follows all standard security
policies and is isolated from other processes. Special add-ons (such as SELinux) also provide
further security elements such as access control, encryption, etc.
RAM control
As KVM is part of the Linux kernel, the hypervisor inherits powerful memory-control
instruments. The memory pages of every process (virtual machine) can be easily copied and
changed without slowing the work. On multi-CPU systems, KVM can control huge volumes of
memory. Memory merging -- a process of unifying identical pages and delivering a copy to a
machine on request -- is available, as are other methods of optimisation.
Data storing
For machine images and data storage, KVM can use any storage device that is supported
by the host operating system, for example hard drives, NAS, or removable storage devices,
including multi-threaded input/output for better performance. Moreover, the hypervisor can
operate with distributed file systems such as GFS2. Disks for KVM have their own native
format that supports dynamic creation of layered images, encryption and compression.
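KVM's native disk format is qcow2, handled with the qemu-img tool. A sketch assuming qemu-img is installed; the file names are placeholders:

```shell
# Create a 20 GB qcow2 disk image (space is allocated on demand)
qemu-img create -f qcow2 base.qcow2 20G

# Create a layered image: writes go to the overlay, while reads
# fall back to the base image underneath
qemu-img create -f qcow2 -b base.qcow2 -F qcow2 overlay.qcow2

# Convert a copy of the image with compression enabled
qemu-img convert -c -O qcow2 base.qcow2 compressed.qcow2
```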
Dynamic migration
An important feature of KVM is its support for live migration: the relocation of
virtual machines between different hosts without stopping them. Such a migration is entirely
unnoticeable to the user: the machine continues to work, performance is not interrupted, and
network connections stay active. It is also possible to migrate by saving the current state of a
virtual machine to an image and resuming it on a new host.
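With the common libvirt toolchain, both migration styles look like this. The host and guest names are illustrative assumptions, and the commands need two reachable KVM hosts:

```shell
# Live migration: move a running guest to another host over SSH
virsh migrate --live guest1 qemu+ssh://host2.example.com/system

# Alternative: save the guest's state to a file, then resume it later
virsh save guest1 /var/tmp/guest1.state
virsh restore /var/tmp/guest1.state
```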
Scalability and performance
Thanks to tight integration with Linux, scalability and performance are inherited directly from
the kernel. The hypervisor supports up to 16 CPUs (both virtual and physical) and up to 256
GB of RAM in every virtual machine, which makes it usable even in the most heavily loaded
systems.
Stability
The program complex is continually improved. While it originally supported only the Linux
x86 platform, the number of supported platforms now runs into the dozens, including popular
server operating systems. Moreover, it is easy to run virtual machines with a modified OS,
provided it is compatible with the underlying platform. Thanks to cooperation with key
software development companies, the hypervisor might be called the most stable and reliable
on the market.